Recent studies have shown that machine-learned tree-based models, combined with ensemble techniques, are highly effective for building web ranking algorithms within the "learning to rank" framework. Beyond document retrieval, tree-based models have also proven effective for tackling problems in diverse domains such as online advertising, medical diagnosis, genomic analysis, and computer vision. This paper focuses on runtime optimizations of tree-based models that take advantage of modern processor architectures: we assume that a model has already been trained, and now we wish to make predictions on new data as fast as possible. Although exceedingly simple, tree-based models do not efficiently utilize modern processor architectures, due to the prodigious number of branches and non-local memory references in standard implementations. By laying out data structures in memory in a more cache-conscious fashion, removing branches from the execution flow using a technique called predication, and micro-batching predictions using a technique called vectorization, we are able to better exploit modern processor architectures and significantly improve the speed of tree-based models over hard-coded if-else blocks. Our experimental results are measured in nanoseconds for individual trees and microseconds for complete ensembles. A natural starting question is: do such low-level optimizations actually matter? Does shaving microseconds off an algorithm have a substantive impact on a real-world task? We argue that the answer is _yes_, with two different motivating examples. First, in our primary application of learning to rank for web search, prediction by tree-based models forms the inner loop of a search engine. Since commercial search engines receive billions of queries per day, improving this tight inner loop (executed, perhaps, many billions of times) can have a noticeable effect on the bottom line: faster prediction translates into fewer servers for the same query load, reducing datacenter footprint, electricity and cooling costs, etc. Second, in the domain of financial engineering, every nanosecond counts in high-frequency trading. Orders on NASDAQ are fulfilled in less than 40 microseconds, and firms fight over the length of cables due to speed-of-light propagation delays, both within an individual datacenter and across oceans. Thus, for machine learning in financial engineering, models that shave even a few microseconds off prediction times present an edge. We view our work as having the following contributions. First, we introduce the problem of _architecture-conscious_ implementations of machine learning algorithms to the information retrieval and data mining communities: although similar work has long existed in the database community, there is little research on the application of architecture-conscious optimizations to information retrieval and machine learning problems. Second, we propose novel implementations of tree-based models that are highly tuned to modern processor architectures, taking advantage of cache hierarchies and superscalar processors.
finally , we illustrate our techniques in a standard , widely - accepted , learning - to - rank task and show significant performance improvements over standard implementations and hard - coded if - else blocks .we begin with an overview of modern processor architectures and recap advances over the past few decades .the broadest trend is perhaps the multi - core revolution : the relentless march of moore s law continues to increase the number of transistors on a chip exponentially , but experts widely agree that we are long past the point of diminishing returns in extracting instruction - level parallelism in hardware . instead , adding more cores appears to be a better use of increased transistor density .since prediction is an embarrassingly parallel problem , our techniques can ride the wave of increasing core counts .a less - discussed , but just as important trend over the past two decades is the so - called `` memory wall '' , where increases in processor speed have far outpaced improvements in memory latency .this means that ram is becoming slower relative to the cpu . in the 1980s ,memory latencies were on the order of a few clock cycles ; today , it could be several hundred clock cycles . to hide this latency, computer architects have introduced hierarchical cache memories : a typical server today will have l1 , l2 , and l3 caches between the processor and main memory .cache architectures are built on the assumption of reference locality that at any given time , the processor repeatedly accesses only a ( relatively ) small amount of data , and these fit into cache .the fraction of memory accesses that can be fulfilled directly from the cache is called the _ cache hit rate _ , and data not found in cache is said to cause a _cache miss_. cache misses cascade down the hierarchy if a datum is not found in l1 , the processor tries to look for it in l2 , then in l3 , and finally in main memory ( paying an increasing latency cost each level down ) .managing cache content is a complex challenge , but there are two main principles that are relevant to a software developer .first , caches are organized into cache lines ( typically 64 bytes ) , which is the smallest unit of transfer between cache levels . that is ,when a program accesses a particular memory location , the entire cache line is brought into ( l1 ) cache .this means that subsequent references to nearby memory locations are very fast , i.e. , a cache hit .therefore , in software it is worthwhile to organize data structures to take advantage of this fact .second , if a program accesses memory in a predictable sequential pattern ( called striding ) , the processor will prefetch memory blocks and move them into cache , before the program has explicitly requested the memory locations ( and in certain architectures , it is possible to explicitly control prefetch in software ). 
there is , of course , much more complexity beyond this short description ; see for an overview .the database community has explored in depth the consequences of modern processor architectures for relational query processing .in contrast , these issues are underexplored for information retrieval and data mining applications .this is one of the first attempts at developing architectural - conscious runtime implementations of machine learning algorithms .researchers have explored scaling the _ training _ of tree - based models to massive datasets , which is of course an important problem , but orthogonal to the issue we tackle here : given a trained model , how do we make predictions quickly ?another salient property of modern cpus is pipelining , where instruction execution is split between several stages ( modern processors have between one to two dozen stages ) . at each clock cycle , all instructions `` in flight '' advance one stage in the pipeline ; new instructions enter the pipeline and instructions that leave the pipeline are `` retired '' .pipeline stages allow faster clock rates since there is less to do per stage .superscalar _ cpus add the ability to dispatch multiple instructions per clock cycle ( and out of order ) provided that they are independent .pipelining suffers from two dangers , known as `` hazards '' in vlsi design terminology ._ data hazards _ occur when one instruction requires the result of another ( that is , a data dependency ) .this happens frequently when dereferencing pointers , where we must first compute the memory location to access .subsequent instructions can not proceed until we actually know what memory location we are accessing the processor simply stalls waiting for the result ( unless there are other independent instructions that can be executed ) . _control hazards _ are instruction dependencies introduced by if - then clauses ( which compile to conditional jumps in assembly ) . to cope with this, modern processors use _ branch prediction techniques_in short , trying to predict which code path will be taken .however , if the guess is not correct , the processor must `` undo '' the instructions that occurred after the branch point ( `` flushing '' the pipeline ). the impact of data and control hazards can be substantial : an influential paper in 1999 concluded that in commercial rdbmses at the time , almost half of the execution time is spent on stalls . which is `` worse '' , data or control hazards ?not surprisingly , the answer is , it depends . however , with a technique called predication , which we explore in our work , it is possible to convert control dependencies into data dependencies ( see section [ section : approach ] ) .whether predication is worthwhile , and under what circumstances , remains an empirical question .another optimization that we adopt , called vectorization , was pioneered by database researchers : the basic idea is that instead of processing a tuple at a time , a relational query engine should process a `` vector '' ( i.e. , batch ) of tuples at a time to take advantage of pipelining .our work represents the first application of vectorization to optimizing machine learning algorithms that we are aware of . 
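To make the distinction between control and data dependencies concrete, and to show how predication converts the former into the latter, here is a small illustrative C fragment. It is our own sketch, not code from the paper; the feature array, threshold, and child-index arithmetic are placeholders for the tree-traversal step discussed later.

    #include <stdio.h>

    /* The same decision written with a branch (a control dependency) and in
       predicated form (a data dependency). */
    int next_branching(int i, const float *v, int fid, float theta) {
        if (v[fid] >= theta)       /* conditional jump: subject to branch prediction */
            return 2 * i + 2;      /* right child */
        else
            return 2 * i + 1;      /* left child */
    }

    int next_predicated(int i, const float *v, int fid, float theta) {
        /* the comparison result (0 or 1) is folded into the arithmetic,
           so there is no jump to mispredict */
        return 2 * i + 1 + (v[fid] >= theta);
    }

    int main(void) {
        float v[4] = {0.1f, 0.7f, 0.3f, 0.9f};
        printf("%d %d\n", next_branching(0, v, 1, 0.5f),
                          next_predicated(0, v, 1, 0.5f));
        return 0;
    }

Both functions compute the same child index; the predicated version simply trades a possible pipeline flush for a data dependency on the feature load.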
Beyond processor architectures, the other area of relevant work is the vast literature on learning to rank, the application of machine learning techniques to document ranking in search. Our work uses gradient-boosted regression trees (GBRTs), a state-of-the-art ensemble method. The focus of most learning-to-rank research is on learning effective models, without considering efficiency, although there is an emerging thread of work that attempts to better balance both factors. In contrast, we focus exclusively on runtime ranking performance, assuming a model that has already been trained (by other means). In this section we describe various implementations of tree-based models, starting from two baselines and progressively introducing architecture-conscious optimizations. We focus on an individual tree, the runtime execution of which involves checking a predicate at an interior node, following the left or right branch depending on the result of the predicate, and repeating until a leaf node is reached. We assume that the predicate at each node involves a feature and a threshold: if the feature value is less than the threshold, the left branch is taken; otherwise, the right branch is taken. Of course, trees with greater branching factors and more complex predicate checks can be converted into an equivalent binary tree, so our formulation is general. Note that our discussion is agnostic with respect to the predictor at the leaf node, be it a boolean (in the classification case), a real value (in the regression case), or even an embedded sub-model. We assume that the input feature vector is densely packed in a floating-point array (as opposed to a sparse, map-based representation). This means that checking the predicate at each tree node is simply an array access, based on a unique, consecutively numbered ID associated with each feature. Object: as a high-flexibility baseline, we consider an implementation of trees with node objects and associated left and right pointers in C++. Each tree node is represented by an object, and contains the feature ID to be examined as well as the decision threshold. For convenience, we refer to this as the Object implementation. In our mind, this represents the most obvious implementation of tree-based models that a software engineer would come up with, and thus serves as a good point of comparison. This implementation has two advantages: simplicity and flexibility. However, we have no control over the physical layout of the tree nodes in memory, and hence no guarantee that the data structures exhibit good reference locality. Prediction with this implementation essentially boils down to pointer chasing across the heap: when following either the left or the right pointer to the next tree node, the processor is likely to be stalled by a cache miss. CodeGen: as a high-performance baseline, we consider statically generated if-else blocks. That is, a code generator takes a tree model and directly generates C code, which is then compiled and used to make predictions.
For convenience, this is referred to as the CodeGen implementation. This represents the most obvious performance optimization that a software engineer would come up with, and thus serves as another good point for performance comparison. We expect this approach to be fast: the entire model is statically specified, and the machine instructions are expected to be relatively compact and to fit into the instruction cache, thus exhibiting good reference locality. Furthermore, we leverage decades of compiler optimizations that have been built into gcc. Note that this approach eliminates data dependencies completely by converting them all into control dependencies. The downside, however, is that it is inflexible. The development cycle now requires more steps: after training the model, we need to run the code generator, compile the resulting code, and then link against the rest of the system. This may be a worthwhile tradeoff for a production system, but from the point of view of rapid experimentation and iteration, the approach is a bit awkward. Struct: the Object approach has two downsides: poor memory layout (i.e., no reference locality and hence cache misses) and inefficient memory utilization (due to object overhead). To address the second point, the solution is fairly obvious: get rid of C++ and drop down to C to avoid the object overhead. We can implement each node as a struct in C (comprising feature ID, threshold, and left and right pointers). We construct a tree by allocating memory for each node (malloc) and assigning the pointers appropriately. Prediction with this implementation remains an exercise in pointer chasing, but now across more memory-efficient data structures. We refer to this as the Struct implementation. Struct+: an improvement over the Struct implementation is to physically manage the memory layout ourselves. Instead of allocating memory for each node individually, we allocate memory for all the nodes at once (i.e., an array of structs) and linearize the tree in the following way: the root lies at index 0, and, assuming a perfectly balanced tree, a node at index i has its left child at index 2i+1 and its right child at index 2i+2. This is equivalent to laying out the tree using a breadth-first traversal of the nodes. The hope is that by manually controlling the memory layout, we can achieve better reference locality, thereby speeding up memory references. This is similar to the idea behind CSS-trees used in the database community. For convenience we call this the Struct+ implementation. One nice property of retaining the left and right pointers in this implementation is that for unbalanced trees (i.e., trees with missing nodes), we can more tightly pack the nodes to remove "empty space" (still following the layout approach based on breadth-first node traversal). Thus, the Struct+ implementation occupies the same amount of memory as Struct, except that the memory is contiguous. Pred: the Struct+ implementation tackles the reference locality problem, but there remains one more issue: the presence of branches (resulting from the conditionals), which can be quite expensive to execute. Branch mispredicts may cause pipeline stalls and wasted cycles (and of course, we would expect many mispredicts with trees). Although it is true that speculative execution renders the situation far more complex, removing branches may yield performance increases.
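Before turning to predication in detail, here is a minimal sketch of the CodeGen, Struct, and Struct+ variants just described. This is our own illustrative code, not the authors' implementation; the feature IDs, thresholds, and regression values are made up.

    #include <stdlib.h>

    /* CodeGen: the kind of hard-coded block a generator might emit for a
       depth-2 tree. */
    float predict_codegen(const float *v) {
        if (v[3] < 0.25f) {
            if (v[7] < 0.80f) return 0.1f; else return 0.3f;
        } else {
            if (v[1] < 0.50f) return 0.6f; else return 0.9f;
        }
    }

    /* Struct: one malloc'ed node per tree node; prediction is pointer chasing. */
    typedef struct node {
        int fid;             /* feature ID to test            */
        float theta;         /* decision threshold            */
        struct node *left;   /* taken when v[fid] <  theta    */
        struct node *right;  /* taken when v[fid] >= theta    */
        float value;         /* regression value (leaf nodes) */
    } node_t;

    float predict_struct(const node_t *n, const float *v) {
        while (n->left != NULL)   /* internal nodes have both children */
            n = (v[n->fid] < n->theta) ? n->left : n->right;
        return n->value;
    }

    /* Struct+: the same node type, but all nodes are allocated in one
       contiguous block and filled in breadth-first order (root at index 0,
       children of index i at 2i+1 and 2i+2 for a balanced tree); the left and
       right pointers point into this array, so unbalanced trees can still be
       packed tightly.  Traversal is identical to predict_struct(). */
    node_t *alloc_struct_plus(size_t n_nodes) {
        return (node_t *)malloc(n_nodes * sizeof(node_t));
    }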
A well-known trick in the compiler community for overcoming these branch-related issues is known as predication. The underlying idea is to convert control dependencies (hazards) into data dependencies (hazards), thus avoiding jumps in the underlying assembly code altogether. Here is how predication is adapted to our case: we encode the tree as a struct array nd in C, where nd[i].fid is the feature ID to examine and nd[i].theta is the threshold. We assume a fully branching binary tree, with nodes laid out via breadth-first traversal (i.e., for a node at index i, its left child is at index 2i+1 and its right child is at index 2i+2). To make the prediction, we probe the array in the following manner:

    i = (i << 1) + 1 + (v[nd[i].fid] >= nd[i].theta);
    i = (i << 1) + 1 + (v[nd[i].fid] >= nd[i].theta);
    ...

We completely unroll the tree traversal loop, so the above statement is repeated d times for a tree of depth d. At the end, i contains the index of the leaf node corresponding to the prediction (which we look up in another array). One final implementation detail: we hard-code a prediction function for each tree depth, and then dispatch dynamically using function pointers. Note that this approach assumes a fully balanced binary tree; to cope with unbalanced trees, we expand the tree by inserting dummy nodes. VPred: predication eliminates branches, but at the cost of introducing data hazards. Each statement in Pred requires an indirect memory reference, and subsequent instructions cannot execute until the contents of the memory locations are fetched; in other words, the processor will simply stall waiting for memory references to resolve. Therefore, predication is entirely bottlenecked on memory access latencies. A common technique adopted in the database literature to mask these memory latencies is called _vectorization_. Applied to our task, this translates into operating on multiple instances (feature vectors) at once, in an interleaved way. This takes advantage of multiple dispatch and pipelining in modern processors (provided that there are no dependencies between dispatched instructions, which is true in our case). So, while the processor is waiting for the memory access from the predication step on the first instance, it can start working on the second instance; in fact, we can work on several instances in parallel. For a batch of four, this looks like the following, working on instances i0, i1, i2, i3 (with feature vectors v0, v1, v2, v3) in parallel:

    i0 = (i0 << 1) + 1 + (v0[nd[i0].fid] >= nd[i0].theta);
    i1 = (i1 << 1) + 1 + (v1[nd[i1].fid] >= nd[i1].theta);
    i2 = (i2 << 1) + 1 + (v2[nd[i2].fid] >= nd[i2].theta);
    i3 = (i3 << 1) + 1 + (v3[nd[i3].fid] >= nd[i3].theta);
    i0 = (i0 << 1) + 1 + (v0[nd[i0].fid] >= nd[i0].theta);
    i1 = (i1 << 1) + 1 + (v1[nd[i1].fid] >= nd[i1].theta);
    i2 = (i2 << 1) + 1 + (v2[nd[i2].fid] >= nd[i2].theta);
    i3 = (i3 << 1) + 1 + (v3[nd[i3].fid] >= nd[i3].theta);
    ...

In other words, we traverse one layer of the tree for four instances at once.
While we are waiting for v0[nd[i0].fid] to resolve, we dispatch the instructions for accessing v1[nd[i1].fid], and so on. Hopefully, by the time the final memory access has been dispatched, the contents of the first memory access are available, and we can continue without processor stalls. Again, we completely unroll the tree traversal loop, so each block of statements is repeated d times for a tree of depth d. At the end, i0, i1, ... contain the indexes of the leaf nodes corresponding to the predictions for the batch of instances. Setting the batch size to 1 reduces this model to pure predication (i.e., no vectorization). Note that the optimal batch size depends on the relationship between the amount of computation performed and memory latencies; we will determine this relationship empirically. For convenience, we refer to the vectorized version of the predication technique as VPred. Given that the focus of our work is efficiency, our primary evaluation metric is prediction speed. We define this as the elapsed time from the moment a feature vector (i.e., a test instance) is presented to the tree-based model to the moment that a prediction (in our case, a regression value) is made for the instance. To increase the reliability of our results, we conducted multiple trials and report the mean and variance. We conducted two sets of experiments: first, using synthetically generated data to quantify the performance of individual trees in isolation, and second, on standard learning-to-rank datasets to verify the performance of full ensembles. All experiments were run on a Red Hat Linux server with Intel Xeon Westmere quad-core processors (E5620, 2.4 GHz). This architecture has a 64 KB L1 cache per core, split between data and instructions; a 256 KB L2 cache per core; and a 12 MB L3 cache shared by all cores. Code was compiled with gcc (version 4.1.2) using the optimization flags -O3 -fomit-frame-pointer -pipe. All code ran single-threaded. The synthetic data consisted of randomly generated trees and randomly generated feature vectors. Each intermediate node in a tree has two fields: a feature ID and a threshold on which the decision is made; each leaf is associated with a regression value. Construction of a random tree begins with the root node: we pick a feature ID at random and generate a random threshold to split the tree into left and right subtrees. This process is performed recursively to build each subtree until we reach the desired tree depth, and when we reach a leaf node, we generate a regression value at random. Note that our randomly generated trees are fully balanced, i.e., a tree of depth d has 2^d leaf nodes. Once a tree has been constructed, the next step is to generate random feature vectors. Each random feature vector is simply a floating-point array of length f (the number of features), where each index position corresponds to a feature value. We assume that all paths in the decision tree are equally likely; the feature vectors are generated in a way that guarantees an equal likelihood of visiting each leaf. To accomplish this, we take one leaf at a time and follow its parents back to the root.
At each node, we take the node's feature ID and produce a feature value based on the position of the child node. That is, if the child node we have just visited is on the left subtree, we generate a feature value that is smaller than the threshold stored at the current parent node; otherwise, we generate a feature value larger than the threshold. We randomize the order of instances once we have generated all the feature vectors. To avoid any cache effects, our experiments are conducted on a large number of instances (512k). Given a random tree and a set of random feature vectors, we ran experiments to assess the various implementations of tree-based models described in section [section:approach]. To get a better sense of the variance, we performed 5 trials; in each trial we constructed a new random binary tree and a different randomly generated set of feature vectors. To explore the design space, we conducted experiments with varying tree depths and varying feature sizes. In addition to randomly generated trees, we conducted experiments using standard learning-to-rank datasets, where training, validation, and test data are provided. Using the training and validation sets we learned a complete tree-ensemble ranking model, and evaluation is then carried out on test instances to determine the speed of the various implementations. These experiments assess performance in a real-world application. We used gradient-boosted regression trees (GBRTs) to train a learning-to-rank model. GBRTs are ensembles of regression trees that yield state-of-the-art effectiveness on learning-to-rank tasks; the learning algorithm sequentially adds new trees to the ensemble that best account for the remaining regression error (i.e., the residuals). We used the open-source jforests implementation of LambdaMART to optimize NDCG. Although there is no way to precisely control the depth of each tree, we can adjust the size distribution of the trees by setting a cap on the number of leaves (which is an input parameter to the learner). We used two standard learning-to-rank datasets: LETOR-MQ2007 and MSLR-WEB10K. Both are pre-folded, providing training, validation, and test instances. Table [table:datasets] shows the dataset sizes and the numbers of features.
To measure variance, we repeated experiments on all five folds. Note that MQ2007 is much smaller and is considered by many in the community to be outdated. Table [table:datasets]: average number of training, validation, and test instances in our learning-to-rank datasets, along with the number of features. The values of f (number of features) in our synthetic experiments are guided by these learning-to-rank datasets. We selected feature sizes that are multiples of 16 (4-byte floats) so that the feature vectors are integer multiples of the cache line size (64 bytes): f = 32 roughly corresponds to the number of LETOR features and is representative of a small feature space; a second, larger value corresponds to MSLR and is representative of a medium-sized feature space; and we introduced a third, still larger value to capture a large feature space condition. In this section we present experimental results, beginning with evaluation on synthetic data and then on learning-to-rank datasets. We begin by focusing on the first five implementations described in section [section:approach] (leaving aside VPred for now), using the procedure described in section [section:experimental_setup:synthetic]. The prediction time per randomly generated test instance is shown in figure [figure:results:synthetic], measured in nanoseconds. The balanced randomly generated trees vary in terms of tree depth _d_, and each bar chart shows a separate value of _f_ (number of features). Time is averaged across five trials, and error bars denote 95% confidence intervals. It is clear that as trees become deeper, prediction speed decreases overall. This is expected, since deeper trees require more feature accesses and predicate checks, more pointer chasing, and more branching (depending on the implementation). First, consider the high-flexibility and high-performance baselines. As expected, the Object implementation is the slowest (except for Pred at the largest feature size, discussed below). It is no surprise that the C++ implementation is slow due to the overhead of classes and objects (recall that the other implementations are in C). The gap between Object and Struct, which is the comparable C implementation, grows with larger trees. Also as expected, the CodeGen implementation is very fast: with the exception of Pred on small feature vectors (discussed below), hard-coded if-else statements are faster than or as fast as all other implementations, regardless of tree depth. Comparing Struct+ with Struct, we observe no significant improvement for shallow trees, but a significant speedup for deep trees. Recall that in Struct+ we allocate memory for the entire tree so that it resides in a contiguous memory block, whereas in Struct we let malloc allocate memory however it chooses. This shows that reference locality is important for deeper trees. Finally, turning to the Pred condition, we observe a very interesting behaviour. For small feature vectors, the technique is actually faster than CodeGen. This shows that for small feature sizes, predication helps to overcome branch mispredicts, i.e., converting control dependencies into data dependencies increases performance. For the medium feature size, results are mixed compared to CodeGen, Struct, and Struct+: sometimes faster, sometimes slower.
However, for large feature vectors, the performance of Pred is terrible, even worse than the Object implementation. We explain this result as follows: Pred performance is entirely dependent on memory latency. When traversing the tree, it needs to wait for the contents of memory before proceeding; until the memory references are resolved, the processor simply stalls. With small feature vectors, we get excellent locality: 32 features take up two 64-byte cache lines, which means that evaluation incurs at most two cache misses. Since memory is fetched by cache lines, once a feature is accessed, accesses to all other features on the same cache line are essentially "free". Locality decreases as the feature vector size increases: the probability that the predicate at a tree node accesses a feature close to one that has already been accessed goes down. Thus, as the feature vector size grows, Pred prediction time becomes increasingly dominated by stalls waiting for memory fetches. The effect of this "memory wall" is evident in the other implementations as well: we observe that the performance differences between CodeGen, Struct, and Struct+ shrink as the feature size increases (whereas they are more pronounced for smaller feature vectors). This is because, as the feature vector size increases, more and more of the prediction time is dominated by memory latencies. How can we overcome these memory latencies? Instead of simply stalling while we wait for memory references to resolve, we can try to do other useful computation; this is exactly what vectorization is designed to accomplish. In section [section:approach], we proposed _vectorization_ of the predication technique in order to mask memory latencies. The idea is to work on multiple instances (feature vectors) at the same time, so that while the processor is waiting for a memory access for one instance, useful computation can happen on another. This takes advantage of pipelining and multiple dispatch in modern superscalar processors. The effectiveness of vectorization depends on the relationship between time spent in actual computation and memory latencies. For example, if memory fetches took only one clock cycle, then vectorization could not possibly help. The longer the memory latencies, the more we would expect vectorization (larger batch sizes) to help. However, beyond a certain point, once memory latencies are effectively masked by vectorization, we would expect larger batch sizes to have little impact; in fact, batch sizes that are too large start to bottleneck on memory bandwidth and cache size. In figure [figure:results:vectorization], we show the impact of various batch sizes for the different feature sizes. Note that when the batch size is set to 1, we evaluate one instance at a time, which reduces to the Pred implementation. Prediction speed is measured in nanoseconds and normalized by the batch size, so we report _per-instance_ prediction time. A different batch size is optimal for each feature size (the exact values are shown in the figure), with larger feature sizes favouring larger batches; for the largest feature size, a range of batch sizes provides approximately the same level of performance. These results are exactly what we would expect: since memory latencies increase with larger feature sizes, a larger batch size is needed to mask the latencies.
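To make the interplay between predication and batching explicit, here is a loop-based sketch of the vectorized traversal. This is our own code: the paper hard-codes and fully unrolls each combination of tree depth and batch size, whereas the loops and names below are just a readable stand-in.

    /* Predicated traversal interleaved over a batch of B feature vectors.
       nd[] is the breadth-first node array (fid, theta), v[b] points to the
       b-th feature vector, and idx[b] tracks the current node of instance b. */
    typedef struct { int fid; float theta; } tnode_t;

    void vpred_batch(const tnode_t *nd, int depth,
                     const float *const *v, int *idx, int B) {
        for (int b = 0; b < B; b++)
            idx[b] = 0;                      /* everyone starts at the root */
        for (int d = 0; d < depth; d++)      /* one tree level at a time... */
            for (int b = 0; b < B; b++)      /* ...across all B instances   */
                idx[b] = (idx[b] << 1) + 1
                       + (v[b][nd[idx[b]].fid] >= nd[idx[b]].theta);
        /* on return, idx[b] is the leaf index reached by instance b */
    }

The inner loop issues B independent loads per tree level, which is what gives the processor other work to do while any one load is outstanding.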
with the combination of vectorization and predication , vpred becomes the fastest of all our implementations on the synthetic data .comparing figures [ figure : results : synthetic ] and [ figure : results : vectorization ] , we see that vpred ( with optimal vectorization parameter ) is actually faster than codegen . table [ table : relative_speed ] summarizes this comparison .vectorization is up to 70% faster than the non - vectorized implementation ; vpred can be twice as fast as codegen .in other words , we retain the best of both worlds : speed and flexibility , since the vpred implementation does not require code recompilation .having evaluated different implementations on synthetic data , we move on to learning - to - rank datasets using tree ensembles .as previously described , we used the implementation of lambdamart by ganjisaffar et al .once a model has been trained and validated , we evaluate on the test set to measure prediction speed .since the datasets come pre - folded five ways , we repeated our experiments five times and report mean and variance across the runs . to handle ensembles in our implementations , we simply add an outer loop to the algorithm that iterates over individual trees in the ensemble .note that ganjisaffar et al .actually construct multiple ensembles , each built using a random bootstrap of the training data ( i.e. , _ bagging _ multiple boosted ensembles ) . in this work, we do not adopt this procedure because bagging is embarrassingly parallel from the runtime execution perspective and hence not particularly interesting .for learning parameters , we used values recommended by ganjisaffar et al . , with the exception of max leaves ( see below ) .feature and data sub - sampling parameters were set to 0.3 , minimum percentage of observations per leaf was set to 0.5 , and the learning rate was set to 0.05 . 
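As a concrete illustration of the ensemble outer loop mentioned above, prediction over a boosted ensemble might look like the following sketch (ours, not the authors' code): each tree is traversed with the predicated scheme and the per-tree regression values are summed, since GBRT predictions are additive.

    /* Sketch of ensemble prediction over n_trees boosted regression trees.
       The struct layout and field names are ours. */
    typedef struct { int fid; float theta; } enode_t;

    typedef struct {
        const enode_t *nodes;  /* breadth-first node array of this tree            */
        const float *leaf;     /* regression value for each of the 2^depth leaves  */
        int depth;
    } tree_t;

    float predict_ensemble(const tree_t *trees, int n_trees, const float *v) {
        float score = 0.0f;
        for (int t = 0; t < n_trees; t++) {
            int i = 0;
            for (int d = 0; d < trees[t].depth; d++)
                i = (i << 1) + 1
                  + (v[trees[t].nodes[i].fid] >= trees[t].nodes[i].theta);
            score += trees[t].leaf[i - ((1 << trees[t].depth) - 1)];
        }
        return score;
    }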
In terms of performance, shallower trees are naturally preferred. But what is the relationship between tree depth and ranking effectiveness? Tree depth cannot be precisely controlled with our particular training algorithm, but it can be indirectly influenced by the maximum number of leaves of an individual tree (an input to the learner). Table [table:effectiveness] shows the average NDCG values (at different ranks) measured across five folds on the LETOR and MSLR datasets for different values of this parameter, similar to the range of values explored in previous work. Statistical significance was tested using the Wilcoxon test (at a p-value of 0.05); none of the differences on the LETOR dataset were significant. For each condition, we also report the average depth of the trees that were actually learned; the average tree depth is computed for every ensemble and then averaged across the five folds, with the variance reported in parentheses. Results show that for LETOR, tree depth makes no significant difference in NDCG, whereas larger trees yield better results on MSLR; however, there appears to be little difference between 50 and 70 max leaves. These results make sense: to exploit larger feature spaces we need trees with more nodes. Since many in the community consider the LETOR dataset to be out of date, with an impoverished feature set, more credence should be given to the MSLR results. Turning to performance results, figure [figure:results:actual] illustrates per-instance prediction speed for the various implementations on the learning-to-rank datasets. Note that this is for the entire ensemble, with latencies now measured in microseconds instead of nanoseconds. As described above, the trees were trained with different settings of max leaves; the _x_-axis plots the tree depths from table [table:effectiveness]. In this set of experiments, we used the VPred approach with the vectorization parameter (batch size) set separately for LETOR and MSLR, guided by the synthetic results. Results from the synthetic datasets mostly carry over to these learning-to-rank datasets. Object is the slowest implementation and Struct is slightly faster. On the LETOR dataset, Struct is only slightly slower than Struct+, but on MSLR, Struct+ is faster than Struct by a larger margin in most cases. VPred outperforms all other techniques, including CodeGen, on MSLR, but is slower than CodeGen on LETOR (except for the shallowest trees). However, note that in terms of NDCG, table [table:effectiveness](a) shows no difference in effectiveness, so there is no advantage to building deeper trees for LETOR. The conclusion appears clear: for tree-based ensembles on real-world learning-to-rank datasets, we can achieve the best of both worlds. With a combination of predication and vectorization, we can make predictions faster than statically generated if-else blocks, yet retain the flexibility of being able to specify the model dynamically, which enables rapid experimentation. Our experiments show that predication and vectorization are effective techniques for substantially increasing the performance of tree-based models, but one potential objection might be: are we measuring the right thing? In our experiments, prediction time is measured from when the feature vector is presented to the model to when the prediction is made. Critically, we assume that features have already been computed. What about an alternative architecture where features are computed lazily, i.e.,
only when the predicate at a tree node needs to access a particular feature? This alternative architecture, where features are computed on demand, is difficult to study, since results would be highly dependent on the implementation of feature extraction, which in turn depends on the underlying data structures (layout of the inverted indexes), compression techniques, and how computation-intensive the features are. However, there is a much easier way to study this issue: we can trace the execution of the full tree ensemble and keep track of the fraction of features that are accessed. If, during the course of making a prediction, most of the features are accessed, then there is little waste in computing all the features first and then presenting the complete feature vector to the model. Table [table:features] shows the average fraction of features accessed in the final learned models for both learning-to-rank datasets, with different max leaves configurations. It is clear that, for both datasets, most of the features are accessed during the course of making a prediction, and in the case of the MSLR dataset, nearly all the features are accessed all the time (especially with deeper trees, which yield higher effectiveness). Therefore, it makes sense to separate feature extraction from prediction. In fact, there are independent compelling reasons to do so: a dedicated feature extraction stage can benefit from better reference locality (when it comes to document vectors, postings, or whatever underlying data structures are necessary for computing features), whereas interleaving feature extraction with tree traversal may lead to "cache churn", where a particular data structure is repeatedly loaded and then displaced by other data. Returning to a point made in the introduction: do these optimizations actually matter in the broader context of real-world search engines? This is of course a difficult question to answer, and highly dependent on the actual search architecture, which is a complex distributed system spanning hundreds of machines or more. Here, we venture some rough estimates. From figure [figure:results:actual](b), the MSLR dataset, we see that compared to CodeGen, VPred reduces per-instance prediction time from around 40 microseconds to around 25 microseconds (for a max leaves setting of 50); this translates into a 38% reduction in latency per instance. In a web search engine, the learning-to-rank algorithm is applied to a candidate list of documents that is usually generated by other means (e.g., scoring with BM25 and a static prior). The exact details are proprietary, but the published literature does provide some clues. For example, Cambazoglu et al. (authors from Yahoo!) experimented with reranking 200 candidate documents to produce the final ranked list of 20 results (the first two pages of search results).
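As a quick sanity check on the estimate that follows, the arithmetic is simply the per-instance latency quoted above multiplied by the number of candidate documents, inverted to obtain single-thread throughput:

\[
200 \times 40\ \mu\mathrm{s} = 8\ \mathrm{ms} \;\Rightarrow\; \frac{1}{8\ \mathrm{ms}} = 125\ \mathrm{queries/s},
\qquad
200 \times 25\ \mu\mathrm{s} = 5\ \mathrm{ms} \;\Rightarrow\; \frac{1}{5\ \mathrm{ms}} = 200\ \mathrm{queries/s}.
\]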
from these numbers , we can compute the per - query reranking time to be 8ms using the codegen approach and 5ms with vpred .this translates into an increase from 125 queries per second to 200 queries per second on a single thread for this phase of the search pipeline .alternatively , gains from faster prediction can be leveraged to rerank more results or take advantage of more features .this simple estimate suggests that our optimizations can make a noticeable difference in web search , and given that our techniques are relatively simple the predication and vectorization optimizations definitely seem worthwhile .during the course of our experiments , we noticed that two assumptions of our implementations did not appear to be fully valid .first , the pred and vpred implementations assume fully - balanced binary trees ( i.e. , every node has a left and a right child ) . in contrast , recall that struct makes no such assumption because with the left and right pointers we can tightly pack the tree nodes .the fully - balanced tree assumption does not turn out to be valid for gbrts the learner does not have a preference for any particular tree topology , and so the trees are unbalanced most of the time . to compensate for this ,the pred and vpred implementations require insertion of dummy nodes to create a fully - balanced tree .second , we assume that all paths are equally likely in a tree , i.e. , that at each node , the left and right branches are taken with roughly - equal frequency. we noticed , however , that this is often not the case .to the extent that one branch is favored over another , branch prediction provides non - predicated implementations ( i.e. , if - else blocks ) an advantage , since branch prediction will guess correctly more often , thus avoiding pipeline flushes .one promising future direction to address the above two issues is to adapt the model learning process to prefer balanced trees and predicates that divide up the feature space evenly .we believe this can be incorporated into the learning algorithm as a penalty , much in the same way that regularization is performed on the objective in standard machine learning .thus , it is perhaps possible to jointly learn models that are both fast and good , as in the recently - proposed `` learning to _ efficiently _ rank '' framework .modern processor architectures are incredibly complex because technological improvements have been uneven .this paper focuses on one particular issue : not all memory references are equally fast , and in fact , latency can differ by an order of magnitude .there are a number of mechanisms to mask these latencies , although it largely depends on developers knowing how to exploit these mechanisms .the database community has been exploring these issues for quite some time now , and in this respect the information retrieval , machine learning , and data mining communities are behind . 
In this paper, we demonstrate that two relatively simple techniques, predication and vectorization, along with more efficient memory layouts, can significantly accelerate prediction performance for tree-based models, both on synthetic data and on real-world learning-to-rank datasets. Our work explores architecture-conscious implementations of a particular machine learning model, but we believe there are plenty of similar opportunities in other areas of machine learning as well. This work has been supported by NSF under awards IIS-0916043, IIS-1144034, and IIS-1218043. Any opinions, findings, conclusions, or recommendations expressed are the authors' and do not necessarily reflect those of the sponsor. The first author's deepest gratitude goes to Katherine, for her invaluable encouragement and wholehearted support. The second author is grateful to Esther and Kiri for their loving support and dedicates this work to Joshua and Jacob.
tree - based models have proven to be an effective solution for web ranking as well as other problems in diverse domains . this paper focuses on optimizing the runtime performance of applying such models to make predictions , given an already - trained model . although exceedingly simple conceptually , most implementations of tree - based models do not efficiently utilize modern superscalar processor architectures . by laying out data structures in memory in a more cache - conscious fashion , removing branches from the execution flow using a technique called predication , and micro - batching predictions using a technique called vectorization , we are able to better exploit modern processor architectures and significantly improve the speed of tree - based models over hard - coded if - else blocks . our work contributes to the exploration of _ architecture - conscious _ runtime implementations of machine learning algorithms .
Prions are self-propagating, usually amyloid-like protein aggregates that are responsible for transmissible diseases. Examples of prion diseases are scrapie in sheep, bovine spongiform encephalopathy (BSE) in cattle, and new variant Creutzfeldt-Jakob disease in man. In contrast to these disease-related cases, several proteins displaying prion properties are well known in yeast and fungi. The HET-s prion protein produced by the filamentous fungus _Podospora anserina_ is thought to be involved in a specific function: the switching on of its prion form triggers the programmed-cell-death phenomenon named "heterokaryon incompatibility", which can prevent different forms of parasitism by inducing the death of the heterokaryon formed by cell fusion of different fungal strains. The proteinase K-resistant core of the prion fibrils, formed by the C-terminal residues 218-289 (PFD: prion forming domain), is unstructured in solution and forms infectious fibrils in vitro. Earlier work showed that HET-s PFD fibrils consist of four β-strands forming two windings of a β-solenoid, however without giving further information about the details of the intra- and inter-molecular β-sheet architecture. Recently, the structure of HET-s PFD has been determined on the basis of NMR-derived intra- and intermolecular distance restraints; this is the only atomic-resolution structural model of an infectious fibrillar state reported to date. On the basis of 134 intramolecular and intermolecular experimental distance restraints, HET-s PFD forms a left-handed β-solenoid, with each molecule forming two helical windings, a compact hydrophobic core, at least 23 hydrogen bonds, three salt bridges, and two asparagine ladders (see figs. 1a and 1b). The model is supported by electron diffraction and scanning transmission electron microscopy. Despite the tremendous economic and social relevance of diseases related to prion infections and protein aggregation, and the enormous quantity of research devoted to their study, the mechanisms that rule and trigger fibril formation and elongation remain mostly elusive: the nature of protein aggregation, its limited structural order, its insolubility in water, and its involvement with the cell membrane render its experimental study extremely difficult. For these reasons, relevant breakthroughs in understanding the principles of amyloid formation and fibril growth might come from numerical simulations. Recent advances in hardware and methodology have allowed for realistic atomic-resolution molecular dynamics (MD) simulations with physics-based potentials of small fibrils consisting of a few monomer units, making it possible to span time scales of hundreds of nanoseconds, and even longer for smaller systems. These results were mainly obtained by studying the amyloid β-peptide (Aβ), related to Alzheimer's disease, or its mutants and fragments. In this context, experimental observations first originated the proposal that incoming Aβ monomers associate to the elongating fibril through a two-stage "dock and lock" kinetic mechanism. In the first stage, an unstructured monomer docks onto the fibril while maintaining some degree of conformational freedom.
In the second stage, the monomer locks into the final fibril state. Such a mechanism was later confirmed and refined, also in the context of Aβ-peptide oligomerization, by other experimental and simulation studies. The dock and lock mechanism was further employed, for the Aβ-peptide, to describe fibril elongation within a thermodynamic context, by means of all-atom MD simulations: a continuous docking stage was observed to occur over a wide temperature range without free energy barriers or intermediates, whereas the locking stage at lower temperatures was characterized by many competing free energy minima in a rugged landscape. In the case of the fibrillar HET-s PFD protein, it was possible with MD atomistic simulations to probe the stability of the NMR structure on a 10 ns timescale and to predict the behaviour of the salt bridge network. On the other hand, typical elongation times for amyloid fibrils formed by HET-s PFD are of the order of hours, so that coarse-grained approaches, in which protein chains and amino-acid interactions are modeled in a simplified way, are mandatory to investigate such longer time scales. Indeed, despite the difficulty of finding reliable energy functions, these approaches have been successfully used to outline general aspects of the full phase diagram of a generic aggregating polypeptide system, to emphasize the importance of the contribution of hydrophobic interactions and hydrogen bonding in the aggregation process of the Aβ peptide, and even to study the mechanisms of monomer addition for the Aβ peptide and some of its mutants. In this paper, in order to describe the fibril elongation mechanism of a relatively long protein domain such as HET-s PFD, we prefer to employ a still different coarse-grained approach, since in this case we have the unique advantage of knowing the fibril structure from experiments. At a general level, our strategy falls in the class of approaches used in protein folding that build on the importance of native-state topology in steering the folding process. In the simplest example, the formation of contacts is favoured only for pairs of amino acids that are found interacting in the native state, but non-native, sequence-dependent interactions could be introduced as well. Despite the simplicity of this scheme, in the past few years an increasing number of experimental and theoretical studies have confirmed the utility of the Go-like approach in the characterization of various aspects of protein folding and binding processes. The study of protein aggregation has already been tackled using Go-like models, but due to the absence of experimental information on the fibrillar structure, hypothetical aggregate conformations had to be introduced to build the Go-energy function driving aggregation. Moreover, topologically based models, with a reduced effect of non-local interactions, correspond to funneled energy landscapes, and therefore their application should be limited to situations in which proteins are evolutionarily designed to follow the principle of minimal frustration, which results in a faster search through the many possible alternatives.
In general, a funneled energy landscape is not expected in the case of non-functional fibril formation. Therefore, the accurate knowledge of its structure, its complex intra-chain topology, and its plausible involvement in a functional process make HET-s PFD a suitable and, at the moment, unique candidate for successfully exploiting Go-like models to study amyloid formation and fibril elongation mechanisms. Within this approach, implemented through a Monte Carlo procedure combined with replica exchange methods, we analyze the full thermodynamic properties of the fibril elongation process, i.e., of the association of a free chain to the already formed fibrillar structure of HET-s PFD, under different concentration conditions. The behaviour of both energies and heat capacities shows that the association process becomes more cooperative for concentrations in the range of standard in-vitro experiments. A careful study of the association process shows that fibril elongation is triggered by the docking of the free chain onto the fibril in a concentration-dependent mechanism that involves the formation of both inter- and intra-chain hydrogen bonds stabilizing the longest β-strands, rapidly followed by the assembly of the full domain. This behaviour is similar to the "dock and lock" mechanism proposed for amyloid Aβ fibril formation. Another interesting aspect emerges clearly: elongation proceeds differently according to which side of the fibril (see figs. 1a and 1b) is used as the growing end. Elongation from one side is clearly favoured with respect to the other side, implying a strong polarity in the growth of HET-s PFD fibrils, which may even lead to unidirectional elongation. Polarity in fibril growth is a feature already discussed in the literature for other amyloid-forming proteins, in both experiments and numerical simulations. Our data suggest that growth polarity can be explained for HET-s PFD on the basis of the complex topological properties of its fibrillar structure. A key role is played by the behaviour of one long loop connecting two β-strands in consecutive rungs of the fibrillar structure: depending on the elongation side, this loop may or may not assist the winding of the C-terminal part of the attaching chain into the fibrillar form. Since it is known that the prion loses its infectivity upon partial deletion of that loop, we argue that this topological mechanism may be important for functional fibril growth. Chains A and B were selected from the NMR structure of HET-s PFD (PDB code: 2RNM), where chain B sits on top of chain A along the fibril. We keep one chain frozen whereas the other one is free: in the top elongation mode, chain B is free and chain A is frozen, whereas in the bottom elongation mode, chain A is free and chain B is frozen. The free chain is allowed to move within a hemisphere, lying on one side of the basal plane for top elongation and on the other side for bottom elongation. The frozen chain is placed with its center-of-mass (i.e., average Cα) coordinates on the hemisphere axis, at a position along the axis that depends on whether elongation is from the top or the bottom, and rotated in such a way that the fibril axis is parallel to the hemisphere axis.
The center-of-mass position along the hemisphere axis is chosen in order to expose only one 'sticky' end of the frozen chain to the free chain, by placing the other end roughly on the hemisphere base. This allows a smaller computational effort, at the expense of prohibiting conformations that we do not expect to affect in a relevant way the binding of the free chain to the exposed end of the full fibril, here represented by the frozen chain. The portion of the HET-s chain that we simulate goes from position 217 to position 289, which includes 73 amino acids (i.e., in the simulation we also include the Met residue at position 217 used to obtain the NMR structure). We thus simulate a system with 146 amino acids. In order to perform extensive simulations, we use a coarse-grained representation of the protein chain coupled with an energy function based on the knowledge of the PDB fibril structure. In the spirit of Go-like approaches widely used for globular proteins, the energy function is built in such a way as to have its minimum for the PDB structure. Inspired by ref. , each amino acid is represented by an effective spherical atom located at the position of the corresponding Cα atom. The virtual bond angle associated with three consecutive Cα atoms is constrained within a fixed range, and virtual bond lengths are kept constant and equal to the native values from the PDB file. To implement steric constraints, we require that no two non-adjacent Cα atoms be closer than a hard-core distance. We assign a favourable energy to each hydrogen bond that can be formed between two residues that form it in the PDB fibril structure, and we disregard any other attractive interaction (i.e., hydrogen bonds cannot be formed by two residues that do not form them in the PDB fibril structure, and we do not consider any other type of pairwise interaction except for the excluded-volume constraints). Only β-sheet-stabilizing hydrogen bonds can therefore be formed in our simulation, and in order to identify them within a Cα representation, we use the geometrical rules for non-local hydrogen bonds introduced in ref. . In order to recognize the hydrogen bonds present in the PDB fibril structure, the lower bound on the scalar product between binormal and connecting vectors was decreased with respect to the original formulation (see table 1 and fig. 1 in ref. for the precise listing of all hydrogen bond rules and the definition of the binormal and connecting vectors). In this way, we find in the PDB fibril structure (see figs. 2 and 3) two long parallel β-strands connected by 9 hydrogen bonds, and four shorter strands forming two further parallel pairings, one linked by 4 hydrogen bonds and the other by 2. These strands alternate within the fibrillar structure in pairs which form hydrogen bonds within the same chain (intra-chain bonds) and pairs which form hydrogen bonds between neighbouring chains (inter-chain bonds); each strand in the core of the fibrillar structure forms intra-chain bonds on one side and inter-chain bonds on the other side. On the "top" side of the fibril (see figs. 2 and 3), one set of strands is exposed at the 'sticky' end, whereas on the "bottom" side the complementary set is exposed.
since we keep one chain fixed , the ground state has energy -30 ( 15 intra - chain hydrogen bonds plus 15 inter - chain hydrogen bonds ; we are not counting the intra - chain bonds of the frozen chain ) . hence , the energy can range from ( unbound chains ) to ( fully bound chains in the fibrillar state ) . in order to fix a realistic temperature scale , the effective value of the hydrogen bond energy in our go - like energy function was given the value , so that the unit temperature of our simulations corresponds to k. we have simulated the elongation of the fibril from both sides . for each side we simulated chains confined in a hemisphere centred in the origin and of radius , , and , corresponding to concentrations , and . the latter value is close to typical concentrations used in in - vitro aggregation experiments . fibril elongation is simulated by means of a monte carlo procedure . multiple markov processes , each replicating the same system described above of a fixed chain and a free chain attaching to it , are run in parallel at different temperatures , with possible swaps of replicas , in order to sample the conformational space efficiently from high ( ) to low temperatures ( ) . within the same replica , conformations are evolved using trial moves , which either pivot a part of the chain from a randomly chosen residue to its end , or rotate a chain portion between two residues along the direction joining them . in the latter case the two residues are either chosen randomly or chosen to be next - nearest neighbours along the chain . trial moves are accepted or rejected according to the metropolis test . we use 20 different replicas , choosing their temperatures to sample the transition region ( ) more accurately and to provide reasonable overlap of the energy histograms between neighbouring pairs . roughly monte carlo steps ( is the number of residues of the free chain ) are performed independently for each replica before one replica swap is attempted among a randomly chosen neighbouring pair . overall , roughly replica swaps are attempted for each simulation and the acceptance rate of replica swaps is in all cases above % . the convergence to the equilibrium regime is assessed by looking at the evolution of the system energy as a function of monte carlo steps . in order to compute equilibrium thermodynamic averages , the simulation portion corresponding to the initial swaps is discarded from the collected data , with ranging from to , depending on concentration . data from all temperatures are combined with the multiple - histogram method .
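the monte - carlo bookkeeping behind the replica - exchange procedure reduces to the two standard acceptance tests sketched below ( energies in units of the hydrogen - bond strength , k_b = 1 ; the pivot / rotation moves and the multiple - histogram reweighting are omitted , and the data layout of `replicas` is an assumption ) .

```python
import math
import random

def metropolis_accept(delta_E, T):
    """Standard Metropolis test for a trial move at temperature T (k_B = 1)."""
    return delta_E <= 0.0 or random.random() < math.exp(-delta_E / T)

def try_replica_swap(replicas, temps, i):
    """Attempt to swap the configurations of neighbouring replicas i and i+1;
    accepted with probability min(1, exp[(1/T_i - 1/T_{i+1})(E_i - E_{i+1})])."""
    E_i, E_j = replicas[i]["energy"], replicas[i + 1]["energy"]
    log_p = (1.0 / temps[i] - 1.0 / temps[i + 1]) * (E_i - E_j)
    if log_p >= 0.0 or random.random() < math.exp(log_p):
        replicas[i], replicas[i + 1] = replicas[i + 1], replicas[i]
        return True
    return False
```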
in fig .4 some snapshots of the simulations are represented .to characterize the thermodynamic properties of the het - s fibril growth process we study the behaviour of the energy and of the heat capacity of the system as a function of the temperature for three different protein concentrations ( see fig .5 ) and by considering elongation both from the top and from the bottom side ( see methods ) .the peaks in the heat capacity curve signal the occurrence of large conformational rearrangements ( that strictly speaking would become transitions only in the thermodynamic limit of the number of system component going to infinite ) related to the process of the deposition of the free het - s pfd chain to the one that is kept fixed to represent the sticky end of the already formed fibril .as expected , the first association temperature , related to the high temperature heat capacity peak , decreases when lowering the concentration .it reaches values close to room temperature for the concentration .interestingly , the cooperativity of the transition increases for lower concentrations : this is signalled by a more sigmoidal behaviour of the energy profiles , by the sharpening of heat capacity peaks and by the almost complete merging in one single narrow peak of the otherwise complex peak structure . despite at a first glancethe growth processes from the two different sides look similar , one can notice that at low concentration a residual peak remains at low temperature for the top elongation case . to elucidate further these behaviours and to understand the nature of the conformational rearrangements related to heat capacity peaks we computed the formation probability of hydrogen bonds for the different strands as a function of temperature ( shown in fig .we define the stabilization temperature of a group of hydrogen bonds to correspond to their average formation probability at thermodynamic equilibrium being equal to 0.5 .we compute stabilization temperatures for 6 possible groups of hydrogen bonds , corresponding to the different strand pairings shown as separate white / black bands in fig .the resulting stabilization temperatures are summarized in table [ t1 ] .we will use the term ` fibrillar ' for inter - chain hydrogen bonds . as an example , strand of the mobile chain couples with strand of the fixed chain , in the case of elongation from the top side ; whereas strand of the mobile chain couples with strand of the fixed chain in the case of elongation from the bottom side , and similarly for other strand pairs ( see fig . 6 caption for a detailed explanation of how intra - chain and fibrillar hydrogen bonds are represented in the figure ) . for all concentrations and for both elongation sides ,the first hydrogen bonds which become stable are the inter - chain ones formed between the long strands and , at a temperature varying from for top elongation at concentration , to for bottom elongation at ( see table [ t1 ] ) . 
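the stabilization temperatures listed in table [ t1 ] follow from this definition by locating the 0.5 crossing of each reweighted formation - probability curve ; a minimal sketch of that step , assuming a probability that decreases monotonically with temperature :

```python
import numpy as np

def stabilization_temperature(temps, p_formed):
    """Temperature at which the average formation probability of a group
    of hydrogen bonds crosses 0.5, by linear interpolation between the
    two bracketing temperatures; `temps` must be sorted in increasing
    order and `p_formed` is assumed to decrease with temperature."""
    temps = np.asarray(temps, dtype=float)
    p_formed = np.asarray(p_formed, dtype=float)
    stable = np.where(p_formed >= 0.5)[0]
    if len(stable) == 0 or len(stable) == len(temps):
        return None   # the bonds never (or always) reach 50% formation
    i = stable[-1]    # last temperature with p >= 0.5
    t_lo, t_hi = temps[i], temps[i + 1]
    p_lo, p_hi = p_formed[i], p_formed[i + 1]
    return t_lo + (0.5 - p_lo) * (t_hi - t_lo) / (p_hi - p_lo)
```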
the stabilization temperature of fibrillar hydrogen bonds between the long strands is higher for higher concentrations and , at the same concentration , for the case of top side elongation . this first stabilization process is followed at a lower temperature ( varying from for top elongation at concentration , to for bottom elongation at , see table [ t1 ] ) by the stabilization of intra - chain hydrogen bonds formed between the two long strands , and , of the mobile chain . this second stabilization temperature is again higher for the top elongation case , at the same concentration , and increases with concentration . with a further decrease of the temperature , the fibrillar hydrogen bonds formed between the short strands ( with and with ) become stable . the stabilization temperature for the inter - chain strand pair and does not depend on concentration but only on the elongation side , being higher , , for top elongation than for bottom elongation . the stabilization temperature for the inter - chain strand pair and is roughly the same , , in all cases . the last step involves the stabilization of the intra - chain hydrogen bonds between the short strands and , which likewise do not depend on concentration . at this stage the most significant difference between the two elongation sides emerges . for top side elongation , the two short strand pairs are stabilized roughly together at a temperature much lower than in the previous step ( corresponding to the small peak in the heat capacity curve and ) . for bottom side elongation , the stabilization of the two short intra - chain strand pairs takes place at quite different temperatures : hydrogen bonds between strands and are stabilized at an even slightly higher temperature than their inter - chain counterpart ( ) , whereas the intra - chain strand pair and is stabilized at a much lower temperature ( ) .
upon assuming that the order in hydrogen bond stabilization mirrors a similar order in the kinetics of the elongation process, we can extract a general message from these results by stating that the attaching of a mobile chain onto the elongating fibril is triggered by the formation of the inter - chain hydrogen bonds of the longest strand ( or ) followed by a first partial folding of the chain through the formation of the long intra - chain strand pair ( between and ) and by the successive formation of all other inter- and intra - chain hydrogen bonds .the mechanism of addition of a soluble unstructured monomer to a preformed ordered amyloid fibril is a complex process : the deposition involves an association of the unstructured monomer to the fibril surface ( docking ) followed by a conformational rearrangement leading to the incorporation onto the underlying fibril lattice ( locking ) .we identify the docking stage with the formation of both inter- and intra - chain hydrogen bonds between the long strands and , as in both cases the stabilization temperatures and depends on concentration ( see table [ t1 ] ) .one indeed expects that in a denser regime it is easier for the mobile chain to dock onto the fibril end , while the locking into the -helix structure necessary for fibril propagation should not be affected by concentration changes .therefore , the locking stage involves the formation of both inter- and intra - chain hydrogen bonds between the remaining shorter strands , since we observe that their stabilization temperatures do not depend on concentration .the dependence of the intra - chain hydrogen bond stabilization temperature upon concentration is non - trivial and is triggered by the strong concentration dependence of the formation probability of the fibrillar hydrogen bonds between the long strands .the higher the concentration , the more probable the fibrillar bonds to be formed .consequently , the more easily the intra - chain bonds are stabilized . from the data shown in fig .6 and in table [ t1 ] , another general feature can be picked out : the temperature range in which the docking stage and thus the full elongation process take place decreases at lower concentrations .this is consistent with the increment of cooperativity as it appears from thermodynamic quantities ( e.g. sharpness of heat capacity peaks in fig .5 ) when concentration diminishes .the most cooperative behavior , as shown by the presence of a single sharp peak in the heat capacity curve , is obtained at concentration in the case of bottom side elongation , whereas a second peak is clearly visible at low temperature for top side elongation at the same concentration .the above analysis of fig .6 data reveals the crucial role of the formation of intra - chain hydrogen bonds between strands and in this respect .the stabilization temperature of these hydrogen bonds constitutes the most relevant difference between the two elongation modes in the first place .moreover , it does correspond closely in both cases to significant features in the heat capacity curve , namely the small peak at for top elongation and the small shoulder in the main peak at for bottom elongation . in order to gain a further understanding of the role played by intra - chain - hydrogen bonds we computed for different temperatures the equilibrium occupation probabilities of macroscopic conformational states , which are defined according to the number of formed intra- or inter - chain hydrogen bonds . in fig . 
7 and fig .8 the results are shown for concentration and for the two different growth modes .occupation probabilities are shown in logarithmic scale , so that the resulting data could be interpreted as ( the opposite of ) effective free energies or mean force potentials . at high temperature , the bottom left corner is mostly populated , corresponding to conformations with very few or none intra- and inter - chain hydrogen bonds formed , typical of a mobile chain not yet attached to the fibril end . on the other hand , at very low temperature , the opposite top right corner is populated , describing structures with almost all the hydrogen bonds formed that correspond to mobile chains found already completely associated to the fibril end with a significant probability . consistently with the previous analysis , based on fig6 data , the elongation process is complex , taking place in several stages .macrostates with only intra - chain hydrogen bonds are found to be populated to some extent at high temperatures , hinting to the possibility of a pre - structuring of the mobile chain before docking to the fibril tip , yet the pathway more significantly visited involves the population of first the 9 fibrillar hydrogen bonds between strand and and then the analogous intra - chain bonds ( see in fig . 7 and fig . 8) .after this first stage , that we already identified with docking , two different scenarios emerge again clearly , depending on the growth mode .the overall process is visibly more cooperative for bottom side elongation ( fig . 8) with respect to top side elongation ( fig .7 ) , since the spreading of significantly visited macrostates is restricted to a narrower temperature range in the former case .moreover , around room temperature ( ) , the most populated state for bottom side elongation has 15 fibrillar and 13 intra - chain hydrogen bonds formed corresponding to a chain almost completely attached to the fibril end ( only strand is left wiggling a bit ) .instead , the most populated state for top side elongation has all 15 inter- but only 9 intra - chain hydrogen bonds locked into the fibrillar conformation , signalling again that the stability of intra - chain - hydrogen bonds is strongly weakened with respect to bottom side elongation . in order to appreciatemore easily the variations with temperature in the population of the different macrostates , we computed unidimensional free energy profiles as a function of the number of either intra- or inter - chain hydrogen bonds .the results are shown in fig . 9 for concentration and for the two different growth modes . 
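the effective free energies ( in units of k_b t ) plotted in figs . 7 , 8 and 9 follow directly from the sampled macrostate populations ; a minimal sketch , where each sample is labelled by its pair of intra- and inter - chain hydrogen - bond counts :

```python
import numpy as np
from collections import Counter

def free_energy_landscape(samples):
    """Effective free energy F = -ln P (in units of k_B T) of macrostates
    labelled by (n_intra, n_inter) hydrogen-bond counts, shifted so that
    the most populated macrostate sits at F = 0."""
    counts = Counter(samples)          # samples: iterable of (n_intra, n_inter)
    total = sum(counts.values())
    F = {state: -np.log(c / total) for state, c in counts.items()}
    f_min = min(F.values())
    return {state: f - f_min for state, f in F.items()}

def profile_vs_bond_count(samples, which=0):
    """One-dimensional free energy profile as a function of either the
    intra-chain (which=0) or the inter-chain (which=1) bond count."""
    counts = Counter(state[which] for state in samples)
    total = sum(counts.values())
    return {n: -np.log(c / total) for n, c in sorted(counts.items())}
```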
fig .9 pictures confirm the multistage nature of the association process with different macrostates that become the global free energy minimum at different temperatures .the free energy profiles as a function of the number of fibrillar hydrogen bonds are similar in both elongation modes .the main difference is the value of the temperature below which the free state ( none fibrillar hydrogen bond is formed ) is not the free energy minimum anymore : for top elongation and for bottom elongation , consistently with table [ t1 ] .the free energy of the free state and the free energy barrier that separates it from the competing minimum with fibrillar bonds ( the ones formed between the -strands and that are the first to be stabilized in the association process ) do not basically depend on temperature .this is a signature of their entropic origin ( free energies in fig .9 are plotted in units ) , as they are both dominated by the roto - translational entropy of the free chain . on the other hand , the free energy profiles as a function of the number of intra - chain hydrogen bonds display a relevant difference between the two elongation modes , consistently with previous observations . for top elongation ,the free energy minimum at is the macrostate with only intra - chain hydrogen bonds ( i.e. the -strands and are not yet paired ) , whereas for bottom elongation the free energy minimum at is the macrostate with intra - chain hydrogen bonds ( i.e. the -strands and are already paired ) .interestingly , the above difference is due to the macrostate with intra - chain hydrogen bonds being entropically stabilized for top elongation with respect to bottom elongation .indeed , the free energies for intra - chain hydrogen bonds do not differ in the two elongation modes .moreover , the free energy difference between the two elongation modes for intra - chain hydrogen bonds has to be entropic , since the energy of that macrostate is the same in both cases , being given by the intra - chain plus the inter - chain hydrogen bonds ( the latter is the global free energy minimum at for both elongation modes ) .our analysis clearly established that fibril growth exhibits a deeply different thermodynamic behaviour depending on the side from which elongation proceeds : at room temperature only bottom side elongation is thermodynamically stable .what is the physical reason for the existence of such a strong growth polarity ?since our simulation study is based only on the knowledge of the fibril structure and not on the specificity of the amino - acid sequence , we can expect that growth polarity is a consequence of the topological properties of the structure .there is indeed a deep topological difference between the deposition mechanisms of het - s pfd on the two different sides of the fibril . 
in the first docking stage , common to both elongation modes , the formation of both inter- and intra - chain hydrogen bonds between and implies the positioning of the latter strands into a conformation already compatible with the final fibrillar structure . the remaining strands , then , yet to be positioned , acquire distinct topological roles , since and are in a loop between the two chain portions already pinned down in the fibrillar structure , whereas and are in the c - terminal tail of the chain . one can then predict that , for entropic reasons , the former pair can be accommodated into the final fibrillar structure more easily than the latter pair . nevertheless , different elongation modes may change this hierarchy . when elongation proceeds from the top side , the attaching chain wraps up onto the fibril tip starting with its n - terminal part ( see fig . ) . the ` loop ' strands and ( fibrillar ) need to be positioned before the ` tail ' ones and ( intra - chain ) , consistently with the topological order suggested above . indeed we do observe this hierarchy for top side elongation ; even at low temperature the intra - chain hydrogen bonds between the short strand pairs are not yet stable . when elongation proceeds from the bottom side , the attaching chain wraps up onto the fibril tip starting with the c - terminal part ( see fig . ) . the ` tail ' strands and ( fibrillar ) now need to be positioned before the ` loop ' ones and ( intra - chain ) . the elongation order takes over from the topological order , so that the difficult positioning of the ` tail ' strands is assisted by the easier positioning of the ` loop ' strands , and both are stabilized at room temperature ( with the partial exception of the shortest strand ) . in this work we have used monte - carlo simulations of a coarse - grained representation of the het - s pfd domain in order to obtain information about the elongation of the corresponding amyloid fibril by the attachment of a mobile chain to a pre - fixed fibrillar structure . our approach , based on the knowledge of the fibrillar structure , relies on the currently well accepted assumption that protein topology plays a pivotal role in determining unimolecular folding and protein assembly . at variance with other go - like studies , based on a hypothetical structure , the reliability of our study is supported by the knowledge of a high - resolution nmr structure of a plausibly functional amyloid fibril . there are two main results of our thermodynamic study . first , we observe that fibril elongation is driven by the formation of inter - chain and intra - chain hydrogen bonds between the long strands and , followed by the positioning of the rest of the attaching chain onto the growing fibril . this mechanism is known as the _ dock and lock mechanism _ . we identify the docking stage as that part of the association process whose onset temperature displays a concentration dependence . a similar feature , i.e. a docking temperature range that varies with concentration , was previously found in a thermodynamic study of a-peptide fibril growth . within our go - like approach , we find a complex multistage association process , where both the docking and the locking stages proceed in several steps in a free energy landscape characterized by several different minima separated by barriers .
in the case of a-peptide fibril elongation , it was found , on the contrary , that docking is continuous and proceeds without the presence of intermediates or free energy barriers . our finding of a multistage docking behaviour might be due to the non - trivial intra - chain topology of the het - s pfd monomer , which is lacking in the a-peptide case . alternatively , it might be an artifact caused by our neglect of non - native interactions . secondly , we predict that one side of the structure is more suitable to sustain the growth of the fibril . the predicted fibril growth polarity can be rationalized by analyzing the topological properties of elongation , which turn out to be intrinsically different for the two fibril sides . the argument is based on the complex tertiary structure of the monomeric unit within the het - s fibril , resulting in alternating intra- and inter - chain pairs of -strands . after the first hydrogen bonds have been formed in the initial docking onto the growing fibril , the portion of the attaching chain which is going to acquire -structure in the following locking stage is partitioned into a ` loop ' and a ` tail ' part ( see fig . 10 ) . as a consequence , the entropy loss of the two parts upon locking is different , implying a ` topological ' hierarchy . the latter may or may not be affected by the ` winding ' hierarchy dictated by the choice of the growth side , thus resulting in a strong growth polarity . bottom side elongation is more sustainable because the ` loop ' part may assist the ` tail ' part to lock . the entropic origin of the growth polarity observed in our results is further confirmed by the free energy profiles shown in fig . 9 . as discussed in section [ entr ] , the macrostate populated after docking and before locking ( with inter - chain and intra - chain hydrogen bonds ) is entropically stabilized in the top elongation mode with respect to bottom elongation . being based on a topological argument , we believe our prediction to be robust against both variations of the details in the implementation of our go - like approach ( changing the coarse - graining of the protein chain representation , employing in the energy function general pairwise contacts and not only hydrogen bonds , using different hydrogen bond rules ) and the relaxation of other simplifying assumptions that we made , namely that the fibril tip is represented by just one chain and kept completely frozen . this latter point is further motivated by the experimental observation that the het - s fibril accommodates incoming prion monomers without a substantial disorganization of its structure . this behaviour is quite different from that of a fibrils , where an entropy gain in the elongation reaction was experimentally observed and related to an unfolding of the organized fibril end to accommodate the addition of the incoming monomers . we thus suggest that our prediction may be accessible to experimental validation , for instance using the single fiber growth assay employed in for the sup35 yeast prion , based on two variants of the prion domain that can be differentially labelled and distinguished by atomic force microscopy . the same topological argument can not be used for simpler amyloid structures such as the solid state nmr model suggested for the alzheimer s-peptide , in the absence of intra - chain hydrogen bonds . in fact , experimental evidence shows bidirectional fibril growth with no clear signs of growth polarity . interestingly , the presence of asymmetrical topological properties of fibril ends was indeed suggested
for the -peptide , depending on the different possible quaternary arrangements of the fibril . on the other hand ,a clear evidence of growth polarity was demonstrated for sup35 fibrils , a yeast prion protein believed to have a functional role , similarly to het - s prion .there is no high - resolution information on the structure of sup35 fibrils and different conflicting structural models have been recently proposed , yet some of the available data suggests the existence of a complex intra - chain structure , which could justify the observed growth polarity within a framework similar to the one we propose here for het - s . a peculiar feature of het - s pfd is the presence of a long loop in the fibrillar structure that connects strands and ( aa 246 - 260 ) , which then contributes to increasing the length of the chain portion partitioned as ` loop ' in the topological argument discussed above ( in blue in fig .an interesting question is whether increasing such length favours or disfavours growth polarity according to the mechanism suggested here .one could argue that a longer loop may further decrease the fluctuations available to the ` tail ' part , thus enhancing its assisted locking for bottom side elongation .indeed , it is known that a large deletion in such loop ( 248 - 256 ) led to loss of het - s function and infectivity . even though no evidence is known about the possible biological relevance of fibril growth polarity in fungal and yeast prions , it is tempting to speculate that the role of the loop might be to enhance growth polarity as a way to control the elongation process more thoroughly in the context of the propagation of a functional prion .numerical simulations performed within the same go - like approach , based on a model structure to be built on the basis of het - s pfd but with a shorter loop , will shed further light on this hypothesis .our coarse - grained approach turned out to be effective in studying structural rearrangements which could not be tackled using more detailed protein chain representations .on the other hand , we are neglecting factors which may play an important role in the elongation process , such as the possible presence of oligomeric conformers of het - s pfd in the non - fibrillar soluble state . in order to capture similar effects, one needs to increase the accuracy of the energy function by introducing amino - acid specificity .this can be done by using coarse - grained potentials which take into account the different ability of each pair of amino - acids in forming hydrogen bonds in -strands , coupled with similar potentials describing residue pairwise interactions or local conformational biases .such potentials might be used to modulate the go - like energy function presented in this work .the use of a more realistic description of the protein chain involving side chain atoms may also cause the amino - acid sequence to affect the elongation process in a side - depending manner by imposing chiral stereochemical constraints .the latter are not present in our c-based representation , thus reinforcing the topological origin of growth polarity in our results .finally , the approach presented here could be also used to study the nucleation process of het - s pfd , and further extended to study the aggregation of the full het - s protein . the het - s n - terminal domain in the non - fibrillar soluble formis structured into a globular protein fold whose high - resolution structure has been very recently released ( pdb code 2wvn.pdb ) . 
such structure is known to partially lose its order upon the ordering and aggregation of het - s pfd into the insoluble fibrillar form . a go - like approach would be especially suited to study the resulting competition between the ordered structures of the two domains in the two different forms .

we thank f. chiti and r. riek for enlightening discussions . we acknowledge financial support from university of padua through progetto di ateneo n. cpda083702 .

table [ t1 ] . stabilization temperatures for the six groups of hydrogen bonds , for top and bottom elongation ; within each block , the rows correspond to the three simulated concentrations ( from highest to lowest ) and the columns to the six strand pairings defined in the text .

top elongation :
  115   85   64   43   12   -1
   88   71   63   44   11   -2
   75   70   62   43    8   -3

bottom elongation :
  108   80   50   42   45    6
   85   68   47   40   43    7
   65   63   50   41   44    6
the prion - forming c - terminal domain of the fungal prion het - s forms infectious amyloid fibrils at physiological ph . the conformational switch from the non - prion soluble form to the prion fibrillar form is believed to have a functional role , since het - s in its prion form participates in a recognition process of different fungal strains . based on the knowledge of the high - resolution structure of het - s(218 - 289 ) ( the prion - forming domain ) in its fibrillar form , we here present a numerical simulation of the fibril growth process which emphasizes the role of the topological properties of the fibrillar structure . an accurate thermodynamic analysis of the way an incoming het - s chain is recruited to the tip of the growing fibril suggests that elongation proceeds through a dock and lock mechanism . first , the chain docks onto the fibril by forming the longest -strands . then , the rearrangement of the rest of the molecule into the fibrillar form takes place . interestingly , we also predict that one side of the het - s fibril is more suitable for sustaining its growth than the other . the resulting strong polarity of fibril growth is a consequence of the complex topology of the het - s fibrillar structure , since the central loop of the incoming chain plays a crucially different role in favouring or hindering the attachment of the c - terminal tail to the fibril , depending on the growth side .
surface fitting using two - variable orthogonal polynomials has reasonable advantages . the main features of our method are as follows . firstly , in recursively generating orthogonal polynomials with classical or modified gram - schmidt schemes , the orthogonality of the polynomials will progressively deteriorate , so an iterating scheme is selected in order to preserve the orthogonality . ( generally speaking , as has been pointed out before , one more orthogonalization process is sufficient to significantly improve the orthogonality . ) secondly , a regularization method is introduced . in previous fittings to magnetization data , the overfitting problem has rarely been taken into account ; the reason is probably that the polynomials included in the final fitting expression are not of very high order . however , our numerical results suggest that fluctuations indeed appear between experimentally recorded data points , and such fluctuations become even more severe near the boundaries . we are therefore convinced that overfitting occurs and has to be addressed , and this is why the regularization scheme is introduced to relieve the probable overfitting . although the specific formulations are different , the present idea of regularization is similar to that used in reference , as ^{-1 } , \label{e3}\ ] ] after the orthogonal polynomials are obtained , we can expand the fitting expression with as , where denotes the maximum index of the orthogonal polynomials used in the fitting expression . by minimizing the fitting error ^ 2 \label{sigma1}\ ] ] the coefficients of the normalized orthogonal polynomials in the fitting expression are determined as . using the above orthogonalization processes , it is found that the orthogonality becomes poorer and poorer . although it is better than the classical gram - schmidt ( cgs ) scheme , the performance of the modified gram - schmidt ( mgs ) scheme unavoidably degrades as the largest index of the orthogonal polynomials , , increases . since the orthogonality is closely related to the fitting precision , we use the following iterating scheme ( igs ) to improve the orthogonality .

* step 1 * recursively compute the values of the linearly independent functions ( ; ) . ( assume that the first polynomials with have been orthonormalized and assigned to with ; estimate the value of the -th orthonormalized polynomial and assign it to , and save the coefficients with and . )
* step 2 * re - orthogonalize and update . ( 2-a ) compute the modification coefficient , ; ( 2-b ) re - orthogonalize ; ( 2-c ) update the coefficients , ; ( 2-d ) check whether the orthogonality criterion is satisfied . if true , continue ; otherwise go back to ( 2-a ) .
* step 3 * normalize and update . ( 3-a ) compute the 2-norm ; ( 3-b ) normalize as ; ( 3-c ) update the coefficients ; ( 3-d ) update the coefficients , .
* step 4 * compute the coefficients of in the fitting expression .
* step 5 * check whether the fitting precision meets the criterion . if true , set and break out of the loop ; otherwise continue .
* step 6 * subtract the projection of from the subsequent functions and update . ( 6-a ) compute the coefficients of the subsequent orthogonal polynomials , ; ( 6-b ) subtract the projection of from , .
* step 7 * update and go back to * step 2 * .

if the largest index is not large enough , the changing tendency of the experimental data cannot be properly captured ; the fitting error is then large and underfitting occurs .
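the re - orthogonalization loop at the heart of the igs scheme ( steps 2 - 3 above ) can be sketched as follows for a generic design matrix whose columns hold the values of the linearly independent functions at the data points ; this is a simplified illustration in python , not the implementation used in this work , and the tolerance and pass limit are placeholders .

```python
import numpy as np

def iterated_gram_schmidt(Phi, tol=1e-14, max_passes=5):
    """Orthonormalize the columns of Phi with iterated Gram-Schmidt:
    each new column is re-orthogonalized against the already accepted
    ones until the residual projections are negligible, which preserves
    orthogonality far better than a single classical or modified pass."""
    Phi = np.asarray(Phi, dtype=float)
    m, n = Phi.shape
    Q = np.zeros((m, n))
    R = np.zeros((n, n))          # coefficients of the Phi columns in the Q basis
    for k in range(n):
        q = Phi[:, k].copy()
        for _ in range(max_passes):
            proj = Q[:, :k].T @ q          # residual projections on previous vectors
            q -= Q[:, :k] @ proj
            R[:k, k] += proj
            if proj.size == 0 or np.max(np.abs(proj)) < tol * np.linalg.norm(q):
                break                      # orthogonality criterion satisfied
        R[k, k] = np.linalg.norm(q)
        Q[:, k] = q / R[k, k]
    return Q, R
```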
to decrease the fitting error , more orthogonal polynomials are successively generated and added to the fitting expression until the desired precision is achieved . ( the more orthogonal polynomials , the higher the fitting precision . ) however , too many polynomials will lead to strong local fluctuations in the fitted surface , and overfitting happens . the reason for this is that higher - order polynomials generally imply more inflection points . the degree of overfitting can be controlled by regularization , for instance by adding so - called penalty functions to the error expression in order to strengthen the stiffness of the fitted surface . in contrast to the method of using penalty functions , we implement the regularization by adding a laplace term to the error expression . in essence , the laplace method aims at suppressing the rate of change of the surface slope . after adding a laplace term with regularization parameter , the error expression ( [ sigma1 ] ) is rewritten as ^ 2 + \lambda \left [ \nabla^{2 } f(x_{i},y_{i } ) \right]^2.\label{sigma2}\ ] ] it is easy to see that the laplace term in ( [ sigma2 ] ) affects only the coefficients . by minimizing ( [ sigma2 ] ) , it is obtained that where , with . we next examine whether the laplace method above really leads to regularization . firstly , when , equation ( [ bt2 ] ) reduces to the non - regularized case ( [ bt1 ] ) . secondly , if , the laplace term has no contribution to since ( corresponding to linear fitting ) . thirdly , the laplace term starts to play its role when . if is large enough so that equations ( [ a1 ] ) and ( [ a2 ] ) are satisfied , then reduces to , namely . if we assume that when , then , which implies that since . hence , the laplace term introduced above makes rapidly decay with increasing , so that overfitting is avoided and regularization is realized . now , we invoke a cross - validation process to select a proper regularization parameter . the whole data set is divided into three groups , one of which includes many more data points and is labelled the `` training group '' , while the other two have fewer data points and are labelled the `` cross - validation group '' and the `` test group '' , respectively . for each fixed value of , the training group is used to determine the coefficients by minimizing in ( [ sigma2 ] ) , and the corresponding training error is computed as ^ 2 .\label{err_tr}\ ] ] subsequently , the cross - validation group selects the value of that minimizes the cross - validation error ^ 2 .\label{err_cv}\ ] ] finally , the test group assesses the applicability of the determined fitting expression by calculating the test error ^ 2.\ ] ] in contrast to the ordinary scheme used in fields such as machine learning , the model used here has two parameters that require determination , namely the number of orthogonal polynomials and the regularization parameter . by setting a suitable routine - terminating criterion , the optimal choice of can be determined by minimizing the training error . a useful criterion can be defined by assessing the changing tendency of the fitting error : for example , on increasing from , if the fitting error does not decrease appreciably , it is reasonable to consider to be the optimal value of . the other task is to determine the parameter .
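the two nested choices described above ( fit the coefficients for a given regularization parameter , then pick the parameter by cross - validation ) can be sketched as follows ; `Q_tr` and `Q_cv` hold the basis polynomials evaluated on the training and cross - validation points and `lap_tr` their laplacians , all of which are placeholders , and the generic matrix form below stands in for the coefficient - wise closed form derived in the text .

```python
import numpy as np

def fit_regularized(Q, f, lap, lam):
    """Penalized least squares: minimize ||Q b - f||^2 + lam * ||lap b||^2,
    where the columns of Q are the basis polynomials on the training points
    and the columns of lap are their Laplacians on the same points."""
    A = Q.T @ Q + lam * (lap.T @ lap)
    return np.linalg.solve(A, Q.T @ f)

def select_lambda(Q_tr, f_tr, lap_tr, Q_cv, f_cv, lambdas):
    """Return the regularization parameter (with its cross-validation error
    and coefficients) that minimizes the mean squared error on the
    cross-validation group."""
    best = None
    for lam in lambdas:
        b = fit_regularized(Q_tr, f_tr, lap_tr, lam)
        err_cv = float(np.mean((Q_cv @ b - f_cv) ** 2))
        if best is None or err_cv < best[1]:
            best = (lam, err_cv, b)
    return best
```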
practically , we find that both and decrease with enlarging at fixed or with increasing at fixed . thus , the optimal value of cannot be identified as the one that minimizes . this motivates us to construct a new quantity to characterize the degree of overfitting ( and also of underfitting ) . typically , when is approximately equal to , underfitting happens ; if is much larger than , overfitting occurs . we can define the overfitting degree as . it is identified as underfitting if , and as overfitting when . it is noticed that the calculated value of the fitting error depends on the measurement unit used for the experimentally recorded data , where ] , and ] . hence , the fitting error is calculated from the transformed data ; with an inverse transformation , the physical quantities are recovered in the original measurement units . in dividing the experimental data into three groups , a uniform sampling algorithm is executed in order to optimize the fitting performance . the original experimental data are first sorted in terms of a sampling parameter , or . then the uniform sampling is executed on a pro - rata basis , which is regulated by the sampling factor , and the sorted data are put into the training , cross - validation , and test groups according to the sampling factor . numerical results show that differences appear between different sampling methods , which is attributed to the data density along the axis differing from that along the axis . after the coefficients and have been determined , the functional value at an arbitrary location within the measured range can be estimated in a similar way to that used in the fitting routine . another method is to estimate it from the linearly independent functions as , where is the coefficient of the -th linearly independent function in the fitting expression and can be readily calculated from and . the latter scheme is recommended for its lower computational cost . practically , in implementing a double - precision version of the algorithm , it is found that the fitting value calculated from the linearly independent functions significantly differs from that computed from the orthogonal polynomials when the power exponent of the fitting expression is very high . after an extended - precision algorithm is applied , the difference decreases ; if the extended - precision operation is also used to recursively generate the linearly independent functions , the difference is no longer noticeable . these facts suggest that round - off error accumulates rapidly while recursively computing the linearly independent functions . hence , the influence of error accumulation must be considered carefully in fitting with two - variable orthogonal polynomials . since the fitting expression is essentially a linear combination of linearly independent functions , it is quite convenient to compute the physical quantities of interest by utilizing the recursive properties of the corresponding partial derivatives and integrals . the following are typical recursive algorithms at given and used in this work . the formulas given above are quite general . for application to magnetization data , one only needs to replace , , and their corresponding size normalizations with , , and , , . here , we fit the magnetization data of polycrystalline samples la obtained with a physical property measurement system ( ppms ) of quantum design company . more details can be found in reference . the total number of data points used is . with one half of the data ( ) used as the training group , the fitting results are shown in fig . 1 .
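the pro - rata splitting can be illustrated with a short routine ; the convention below ( after sorting , k - 1 of every k consecutive points go to the training group and the remaining one alternates between the two hold - out groups ) is an assumption chosen to reproduce the two - thirds / one - sixth / one - sixth proportions used later in the text .

```python
import numpy as np

def uniform_split(records, sort_key, k=3):
    """Sort the records by `sort_key` (e.g. temperature or field) and split
    them uniformly: of every k consecutive points, k-1 go to the training
    group and the remaining one alternates between cross-validation and test."""
    order = np.argsort(np.asarray(sort_key))
    train, cross_val, test = [], [], []
    for rank, idx in enumerate(order):
        if rank % k != k - 1:
            train.append(records[idx])
        elif (rank // k) % 2 == 0:
            cross_val.append(records[idx])
        else:
            test.append(records[idx])
    return train, cross_val, test
```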
note that a satisfactory fitting precision can be reached using two - variable orthogonal polynomials . however , one cannot assess the degree of overfitting from the fitting precision alone , which makes the introduction of cross - validation necessary . the whole data set is uniformly sampled according to temperature and the sampling factor is set to , namely , two thirds of the data are put into the training group , half of the remaining one third into the cross - validation group and half into the test group . by setting the regularization parameter , we discuss the effect of the number of orthogonal polynomials ( ) on the fitting performance . the calculated results are shown in table 1 . for reference , the overfitting degree corresponding to the test error is defined in a similar way to that in ( [ overfit1 ] ) . it is noted that both and reflect the fitting performance , since the data in the cross - validation and test groups can be interchanged . with increasing , the fitting error decreases and the overfitting degree increases , which suggests that the overfitting degree can be used to monitor the fitting performance , although it cannot always select the optimal regularization parameter .

table 1 . fitting errors and overfitting degrees without regularization ( ) . for comparison , the training error with ( i.e. , only is used ) is .

table 3 summarizes the optimal fitting precision and the corresponding overfitting degrees for different regularization parameters . the sampling factor is the same as in the preceding tables . it is noted that overfitting begins to occur at . since the cross - validation and test groups can interchange data , one needs to compare with in order to select the proper . here we discuss aspects that have not been mentioned above . first of all , the algorithm in this paper can be further optimized so that the computing efficiency is increased and the memory consumption decreased . for example , in the re - orthogonalization step of the iterated orthogonalization , the projection of the just - orthogonalized polynomial is subtracted from all those subsequent polynomials which are not yet orthogonalized ; this definitely increases the computing cost , since redundant polynomials are generated to ensure the fitting precision . secondly , if only physical quantities at the experimentally recorded magnetic fields and temperatures are of concern , the computed values of the linearly independent functions and orthogonal polynomials used in the fit can be saved and reused to compute these physical quantities . thirdly , after the fitting expression is obtained , uniform gridding and interpolation of the data can be readily achieved . thus , besides the iterative method used in this work , operations like numerical derivatives and integrals can be easily executed . fourthly , when the coefficients of the linearly independent functions in the orthogonal polynomials and those of the orthogonal polynomials in the fitting expression are computed , we have not considered the effect of measurement errors . however , the measurement error is always there ; what is more , the measurement error of experimental data at extremely weak fields is usually bigger . because we have taken all experimental data into account with equal weight , the measurement error under weak fields will harm the global fitting performance .
to solve this problem , one method is to abandon the data taken under weak fields ; another method is to introduce a local weight factor that depends on magnetic field and temperature , so that the influence of each experimental data point is confined to a nearby area . fifthly , it is noted that in the above fitting the rotational symmetry of magnetic systems has not been considered . because of this symmetry , terms involving even powers of the magnetic field should not appear in the analytic expression of the magnetization . when properties closely associated with this symmetry are discussed , such terms need to be eliminated from the set of linearly independent functions . finally , since the algorithm considers only the dependence on magnetic field and temperature , the effects of other processes such as the rotation of crystal grains and the change of sample volume are not adequately disclosed , which may be the reason for the larger fitting error at weak fields . another issue not considered here is the influence of the demagnetization factor , whose value in polycrystalline materials is hard to compute . to conclude , by adding the laplace term to the fitting error expression , a regularization method and a corresponding cross - validation scheme are introduced into two - variable orthogonal - polynomial fitting . after application to magnetization data , it is found that the regularization scheme does play its role by rapidly suppressing the coefficients of higher - order terms in the fitting expression and therefore effectively relieving the overfitting problem . with the aid of the concept of overfitting degree , it is also shown that the cross - validation scheme can be used to select the proper regularization parameter . the influences of the sampling parameter and the sampling factor are also analysed . this offers a quite reliable basis for the following investigations of the magnetic - entropy - change and phase - transition properties of magnetic functional materials . i would like to thank wu hong - ye for much help with software usage and for constructive discussions in developing the method in this work . my thanks also go to wu ke - han and zhou min for providing experimental data before the publication of their paper .

leon s j , björck and gander w 2012 _ numer . linear algebra appl . _ * 20 * 492 - 532
li z t , wu p f , tao y q and mao d k 1999 _ acta phys . sin . _ * 48(s ) * s126 ( in chinese )
derrico j r 2006 _ understanding gridfit _ . available : http://www.mathworks.com/matlabcentral/fileexchange/8998-surface-fitting-using-gridfit
wu k h , wan s l , xu b , liu s b , zhao j j and lu y , to be published .
an obstacle encountered in applying orthogonal - polynomial fitting is how to select the proper fitting expression . by adding a laplace term to the error expression and introducing the concept of overfitting degree , a regularization scheme and a corresponding cross - validation scheme are proposed for two - variable polynomial fitting . when the fortran implementation of the above scheme is applied to magnetization data , a satisfactory fitting precision is reached and the overfitting problem can be quantitatively assessed , which therefore offers a reliable basis for future comprehensive investigations of the magnetocaloric and phase - transition properties of magnetic functional materials .
energy harvesting from environment resources ( e.g. , through solar panels , wind power , or geo - thermal power ) is a potential technique to reduce the energy cost of operating the base stations ( bss ) in emerging multi - tier cellular networks .while this solution may not be practically feasible for macrocell base stations ( mbss ) due to their high power consumption and stochastic nature of energy harvesting sources , it is appealing for small cell bss ( sbss ) that typically consume less power . providing grid power to all sbss may not always be feasible due to their possible outdoor / remote / hard - to - reach locations .wireless energy harvesting thus enables dense deployment of sbss irrespective of the availability of grid power connections . in general ,wireless energy harvesting can be classified into the following two categories : ambient energy harvesting and dedicated energy harvesting . in the former case , energy harvested from renewable energy sources ( such as thermal , solar , wind ) as well as the energy harvested from the radio signals in the environmentcan be sensed by energy - harvesting receivers . in the latter case ,energy from dedicated sources is transmitted to energy - harvesting devices to charge them wirelessly . designing efficient power control policies with different objectives ( e.g. ,maximizing system throughput ) is among one of the major challenges in energy - harvesting networks . in ,the authors proposed an offline power control policy for two - hop transmission systems assuming energy arrival information at the nodes .the optimal transmission policy was given by the directional water filling method .in , the authors generalized this idea to the case where many sources supply energy to the destinations using a single relay .a water filling algorithm was proposed to minimize the probability of outage .although the offline power control policies provide an upper bound and heuristic for online algorithms , the knowledge of energy / data arrivals is required which may not be feasible in practice . in ,the authors proposed a two - state markov decision process ( mdp ) model for a single energy - harvesting device considering random rate of energy arrival and different priority levels for the data packets .the authors proposed a low - cost balance policy to maximize the system throughput by adapting the energy harvesting state , such that , on average , the harvested and consumed energy remain balanced .recently , in , the outage performance analysis was conducted for a multi - tier cellular network in which all bss are powered by the harvested energy . 
a detailed survey on energy harvesting systems can be found in , where the authors summarized the current research trends and potential challenges . compared to the existing literature on energy - harvesting systems , this paper considers the power control problem for downlink transmission in two - tier macrocell - small cell networks considering the stochastic nature of the energy arrival process at the sbss . in particular , we assume that ambient energy harvesting is exploited at a central energy storage ( ces ) from where energy can be transferred to the sbss , for example , by using dedicated power beacons ( pbs ) . pbs are low - cost devices that can potentially charge wireless terminals by transferring energy in a directional manner . note that the power control policies at the mbs and the sbss and their resulting interference levels directly affect the overall system performance . the design of efficient power control policies is thus of paramount importance . in the above context , we formulate a discounted stochastic game model in which all sbss form a coalition to compete with the mbs in order to achieve the target signal - to - interference - plus - noise ratio ( sinr ) of their users through transmit power control . that is , the mbs and the ces ( which actually represents the set of sbss in the game ) compete to achieve the desired targets of macrocell users and small cell users , respectively . note that both the mbs and the sbss transmit in the same channel ( i.e. , a co - channel deployment scenario is considered ) . therefore , the competition ( or conflict ) arises due to the resulting cross - tier interference , i.e. , as the mbs uses more power / energy to increase the utility of macrocell users , it causes higher cross - tier interference to small cell users . similarly , the more energy / power the ces assigns to the sbss , the higher the cross - tier interference to macrocell users . note that the energy harvesting component is an important factor that indirectly contributes to the conflict . if the energy arrival rate is large , the sbss will have a larger energy pool to spend and thus cause more interference . clearly , we need to take into account the probability of energy arrival when determining the optimal transmit power policy for each sbs . the amount of available energy at the transmitter will vary according to the amount of power transmitted and the energy arrivals during each transmission interval . naturally , the competition above can be modeled and analyzed by game - theoretic tools . however , unlike in traditional power control games , the actions and the payoffs of the transmitters at successive transmission intervals are correlated . this correlation is taken into account in the proposed stochastic game model . for this game model , the nash equilibrium power control policy is obtained as the solution of a quadratic programming problem . for the case when the number of sbss is very large ( i.e. , an ultra - dense small cell network ( scn ) ) , the stochastic game is approximated by a mean field game ( mfg ) . in general , mfgs are designed to study the strategic decision making in very large populations of interacting individuals . recently , in , the authors proposed an mfg model to determine the optimal power control policy for a finite - battery - powered scn with no energy replacements . however , the stochastic nature of energy arrival for small cells was not considered .
in this paper, we will consider the case where the battery can be recharged using random energy arrivals . by solving a set of forward and backward partial differential equations ,we derive a distributed power control policy for each sbs using a stochastic mfg model .the contributions of the paper can be summarized as follows . 1 . for a two - tier macrocell - small cell network ,we consider a centralized energy harvesting mechanism for the sbss in which energy is harvested and then distributed to the sbss through a ces . unlike in , the ces can have any finite number of energy levels in its storage , not only 0 and 1 .note that the concept of ces is somewhat similar to the concept of dedicated power beacons for wireless energy transfer to users in cellular networks .moreover , in a cloud - ran architecture , where along with data processing resources , a centralized cloud can also act as an energy farm that distributes energy to the remote radio heads each of which acts as an sbs .note that the sbss are not restricted to indoor deployments .subsequently , we formulate the power control problem for the mbs and sbss as a discrete single - controller stochastic game with two players . also ,in this paper , we use the signal - to - interference - plus - noise - ratio ( sinr ) model instead of an snr model which has been commonly used in other research works on energy harvesting communication .consideration of random energy arrivals along with both co - tier and cross - tier interferences in the downlink power control problem is the major novelty of the paper .2 . the existence of the nash equilibrium and pure stationary strategies for this single - controller stochastic game is proven .the power control policy is derived as the solution of a quadratic - constrained quadratic programming problem .when the network becomes very dense , a stochastic mfg model is used to obtain the power control policy as a solution of the forward and backward differential equations . in this case , each sbs can harvest , store energy and transmit data by itself .an algorithm using finite difference method is proposed to solve these forward - backward differential equations for the mfg model .numerical results demonstrate that the proposed power control policies offer reduced outage probability for the users served by the sbss when compared to the power control policies using a simple stackelberg game wherein each sbs tries to obtain the target of its users without considering the distribution of energy arrivals .the rest of the paper is organized as follows .section ii describes the system model and assumptions .the formulation of the single - controller stochastic game model for multiple sbss is presented in section iii . in sectioniv , we derive the distributed power control policy using a mfg model when the number of sbss increases asymptotically .performance evaluation results are presented in section v before the paper is concluded in section vi .we consider a single macrocell overlaid with small cells .the downlink co - channel time - slotted transmission of the mbs and sbss is considered and it is assumed that each bs serves only a single user on a given transmission channel during a transmission interval ( e.g. , time slot ) .the mbs uses a conventional power source and its transmit power level is quantized into a discrete set of power levels , where the subscript denotes the mbs . 
on the other hand ,the sbss receive energy from a centralized energy storage ( ces ) , which harvests renewable energies from the environment .we assume that only the ces can store energy for future use and each sbs must consume all the energy it receives from the ces at every time slot .the energy arrives at the ces in the form of packets ( one energy packet corresponds to one energy level in ces ) .the quantization of energy arrival was assumed in other research studies such as in .the number of energy packet arrivals during any time interval is discrete and follows an arbitrary distribution , i.e. , .we assume that the battery at the ces has a finite storage .therefore , the number of energy packet arrivals is constrained by this limit and all the exceeding energy packets will be lost , i.e. , .the statistics of energy arrival is known _ a priori _ at both the mbs and the ces . at time , given the battery level , the number of energy packet arrivals , and the energy packets that the ces distributes to the sbss , the battery level at the next time slot can be calculated as follows : given energy packets to distribute , the ces will choose the best allocation method for the sbss according to their desired objectives . denoting the slot duration as and the volume of one energy packet as , we have the energies distributed to the sbss at time as where is the transmit power of sbs at time .clearly we must have : from the causality constraint , , i.e. , the ces can not send more energy than that it currently possesses .note that is the current battery level which is an integer and has its maximum size limited by .since the battery level of the ces and the number of packet arrivals are integer values , it follows from ( [ energy ] ) that is also an integer .similar to and , in our system model , the conflict between the ces and the mbs arises due to the interferences between the mbs and the sbss .clearly , if the mbs transmits with large power to achieve the targets for macrocell users , it will cause high interference to the small cell users .this means , the sbss will need to transmit with larger power to combat this cross - tier interference .the ces and the mbs have different objective functions and are free to choose any actions that maximize their own objectives ( i.e. , non - cooperative game ) . since the sbss can only use renewable energy , it is crucial for them to use their harvested energy economically .also , unlike a traditional one - shot transmit power control game , the ces needs to take into the account the future payoff given the current battery size and the probability of energy arrivals . in summary , for our ces model , at each time slot , we will have a random battery size at the ces and our objective is to maintain the long - term average close to the target value as much as possible and thus improve the outage probability .without a centralized ces - based architecture , each sbs can have different amount of harvested energy and in turn battery levels at each time slot , which will make this problem a multi - agent stochastic game .although this kind of game can be heuristically solved by using q - learning , the conditions for convergence to a nash equilibrium are often very strict and in many cases impractical . 
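the battery recursion and the causality constraint introduced earlier in this section amount to a few lines of bookkeeping ; in the sketch below , the arrival distribution , the energy quantum and the weights used to share the quanta among the sbss are placeholders , since the ces allocation rule is determined by the game analysed later .

```python
import numpy as np

def step_battery(b, e_spent, b_max, arrival_pmf, rng):
    """One slot of the CES battery: spend `e_spent` quanta (causality:
    no more than the current level b), then add the random arrivals and
    clip at the storage limit b_max."""
    assert 0 <= e_spent <= b, "the CES cannot distribute more energy than it stores"
    arrivals = rng.choice(len(arrival_pmf), p=arrival_pmf)
    return min(b - e_spent + arrivals, b_max)

def quanta_to_powers(e_spent, weights, quantum_joule, slot_seconds):
    """Share `e_spent` quanta among the SBSs in proportion to `weights`
    (an illustrative allocation rule) and convert each share into an
    average transmit power in watts."""
    weights = np.asarray(weights, dtype=float)
    shares = e_spent * weights / weights.sum()
    return shares * quantum_joule / slot_seconds
```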
by introducing the ces ,the number of the possible states of the game is simplified into the battery size of the ces , and the multi - player game is converted into a two - player game .another benefit of the centralized ces - based architecture is that the energy can be distributed based on the channel conditions of the users served by the sbss so that the total payoff will be higher than the case where each sbs individually stores and consumes the energy .for ease of exposition , in this paper , we consider an ideal energy transfer from the ces to the sbss .however , to model a simple energy loss , we can add a fixed percentage of loss into the energy consumption of the ces at each time slot .all the symbols that are used in the system model and section iii are listed in * table i*. and its associated user & & average channel gain between bs and user of bs + ( ) & target sinr for mbs ( sbs ) & & ( discrete ) battery level of ces at time + & duration of one time slot in seconds & & number of quanta distributed by the ces at time + & average interference at the user served by the mbs at time & & average interference at the user served by sbs at time + & maximum battery level of the ces & & finite set of transmit power of the mbs + , & & , & + & probability that the mbs chooses power when & & probability that the ces sends quanta when + & energy harvested at time & & probability that the ces starts with battery level + & discount factor of the stochastic game & & utility function of the mbs and the ces , respectively + & payoff matrix for the mbs and the ces , respectively & , & + the received at the user served by sbs in the downlink at time slot is defined as follows : where is the interference caused by other bss . is the channel gain between mbs and the user served by sbs , represents the channel gain between sbs and the user it serves , and is the channel gain between sbs and the user served by sbs .finally , represents the transmit power of sbs at time .the transmit power of mbs belongs to a discrete set .we ignore the thermal noise assuming that it is very small compared to the cross - tier interference .similarly , the sinr at a macrocell user can be calculated as follows : where is the cross - tier interference from sbss to the macrocell user , denotes the channel gain between the mbs and its user , represents the channel gain between sbs and macrocell user , and is the thermal noise .the channel gain is calculated based on path - loss and fading gain as follows : where is the distance from bs to user served by bs , follows a rayleigh distribution , and is the path - loss exponent .we assume that the sbss are randomly located around the mbs and the users are uniformly distributed within their coverage radii . during a transmission interval ( i.e. , a time slot ), only one user is served by each sbs in the downlink direction .a stochastic game is a multiple stage game where it can have different states at each stage .each player chooses an action from a finite set of possible actions ( which can be different at each stage ) .the players actions and the current state jointly determine the payoff to each player and the transition probabilities to the succeeding state .the total payoff to a player is defined as the discounted sum of the stage payoffs or the limit inferior of the averages of the stage payoffs .the transition of the game at each time instant follows markovian property , i.e. , the current stage only depends on the previous one . 
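before moving to the game formulation, the sinr model defined earlier in this section can be illustrated with the sketch below, which computes the downlink sinr of a small cell user (neglecting thermal noise against the cross-tier interference, as assumed in the text) and of the macrocell user, with channel gains built from a distance-based path loss and a rayleigh fading power gain. the geometry, the path-loss exponent and the power values are illustrative assumptions only.

```python
import numpy as np

def channel_gain(d, alpha, rng):
    """Path loss d**(-alpha) times a Rayleigh fading power gain;
    |h|^2 for a Rayleigh amplitude is exponentially distributed with unit mean."""
    return d ** (-alpha) * rng.exponential(1.0)

def sinr_small_cell_user(i, p_sbs, p_mbs, g_sbs_to_user, g_mbs_to_user):
    """SINR of the user served by SBS i; g_sbs_to_user[j, i] is the gain from
    SBS j to the user of SBS i, and thermal noise is neglected."""
    signal = p_sbs[i] * g_sbs_to_user[i, i]
    co_tier = sum(p_sbs[j] * g_sbs_to_user[j, i]
                  for j in range(len(p_sbs)) if j != i)
    cross_tier = p_mbs * g_mbs_to_user[i]
    return signal / (co_tier + cross_tier)

def sinr_macro_user(p_mbs, g_mbs_direct, p_sbs, g_sbs_to_macro_user, noise):
    """SINR of the macrocell user, including cross-tier interference and noise."""
    return p_mbs * g_mbs_direct / (np.dot(p_sbs, g_sbs_to_macro_user) + noise)

# usage with a small random layout (assumed geometry, distances in metres)
if __name__ == "__main__":
    rng = np.random.default_rng(1)
    n, alpha = 4, 3.0
    d_sbs_user = rng.uniform(20, 200, size=(n, n))   # SBS j -> user of SBS i
    d_mbs_user = rng.uniform(100, 500, size=n)
    g_sbs = np.array([[channel_gain(d_sbs_user[j, i], alpha, rng)
                       for i in range(n)] for j in range(n)])
    g_mbs = np.array([channel_gain(d, alpha, rng) for d in d_mbs_user])
    p_sbs = np.full(n, 0.02)                          # 20 mW per SBS (illustrative)
    print(sinr_small_cell_user(0, p_sbs, p_mbs=10.0,
                               g_sbs_to_user=g_sbs, g_mbs_to_user=g_mbs))
```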
in our model ,the mbs and the sbss try to maintain the average of their users to be close to some targets .note that a large target means that a high transmit power will be required for the sbs which could be limited by the energy arrival rate and the battery size of the ces .also , a higher transmit power means a higher level of interference to other users .similar to and , the utility function of the mbs at time is defined as : where is the average interference at the macrocell user at time , and is the target for the macrocell user .clearly , this utility function is maximized when the at the mbs is .if the is larger than the target , this implies that mbs transmits with a larger power than necessary and thus wastes energy . on the other hand ,if the is smaller than the target , it implies that energy is not utilized effectively provided it has sufficient energy .similarly , the utility function of the ces is defined as follows : where is the average interference at the user served by sbs at time .the arguments of both the utility functions demonstrate that the action at time for the mbs is its _ transmission power _ the action of the ces is the _ number of energy packets that is used to transmit data from the sbss_. later , in * remark 2 * , we will show that the interference and the transmit power of each sbs can be derived from and .the conflict in the payoffs of both the players arises from their transmit powers that directly impact the cross - tier interference .note that the proposed single - controller approach can be extended to consider a variety of utility functions ( average throughput , total network throughput , energy efficiency , etc . ) unlike a traditional power control problem , the action space of the ces changes at each time slot and is limited by its battery size . given the distribution of energy arrival and the discount factor , the power control problem can be modeled by a single - controller discounted stochastic game as follows : * there are two players : one mbs and one ces . * the state of the game is the battery level of the ces , which belongs to . * at time and state , the action of the mbs is its transmission power and belongs to the finite set . on the other hand ,the action of the ces is , which is the number of energy packets distributed to sbss . belongs to the set .* let and denote the concatenated mixed - stationary - strategy vectors of the mbs and the ces , respectively .the vector is constructed by concatenating sub - vectors into one big vector as , ] is the average utility of macrocell user at time if the mbs and the ces are using strategy and , respectively .similarly , we define the discounted sum of payoffs at the ces . in (* chapter 2 ) , it was proven that the limit of and always exist when .* * objective : * to find a pair of strategies such that and become a nash equilibrium , i.e. 
, , where and are the sets of strategies of mbs and ces , respectively .given the distribution of energy arrival at the ces , the transition probability of the system from state to state under action ( ) of the ces is given as follows : also , we assume that information about the average channel gains are available to all players .this implies that the single - controller stochastic game we present here will be a perfect information non - cooperative game .the states of the game can be described by a markov chain for which the transition probabilities are defined by ( [ markovstate ] ) .clearly , the ces controls the state of the game while the mbs has no direct influence .therefore , the single - controller stochastic game can be applied to derive the nash equilibrium strategies for both the mbs and the ces .the two main steps to find the nash equilibrium strategies are : * first , we build the payoff matrices for the mbs and the ces for every state , where . denote them by and , respectively . *second , using these matrices , we solve a quadratic programming problem to obtain the nash equilibrium strategies for both the mbs and the ces . to build and , we calculate and for every possible pair , where and . in this regard ,we first derive the average channel gain .second , from the energy consumed and transmission power of the ces and the mbs , , and the user located within the disk centred at .,width=192 ] respectively , we decide how the ces distributes this energy among the sbss .then , we calculate the transmit power at each sbs and obtain and .the next two remarks provide us with the methods to calculate and .[ rem1 ] given two bss and , assume that a user , who is associated with , is uniformly located within the circle centred at with radius ( fig .[ locations ] ) .assume that does not lie on the circumference of the circle centred at and .denote and , then the expected value of , i.e. , ] given that .for other values of , ] , where = \lambda ] of its user to the ces .then the ces and the mbs will exchange information so that the mbs can have complete knowledge about the average channel gains at each sbs . since we only use average value , the ces and the mbs only need to re - calculate the nash equilibrium strategies when either the locations of sbss change , e.g. , some sbss go off and some are turned on , or when the average of channel fading gain changes , or when distribution of energy arrival at the ces changes .note that the sbss and the mbs only need to send the channel gain information of the corresponding users ( to be served ) to the ces .the main problem of the two - player single - controller stochastic game is the curse of dimensionality " .the time complexity of * algorithm 1 * increases exponentially with the number of states or the maximum battery size .note that and have dimensions of , so the complexity increases proportionally to .moreover , unlike other optimization problems , we are unable to relax the qcqp in ( [ qcqp ] ) , because * theorem 1 * states that the nash equilibrium must be the global solution of the quadratic program in ( [ quadr ] ) . to tackle these problems , we extend the stochastic game model to an mfg model for a very large number of players .the main idea of an mfg is the assumption of similarity , i.e. 
, all players are identical and follow the same strategy .they can only be differentiated by their state " vectors .if the number of players is very large , we can assume that the effect of a specific player on other players is nearly negligible .therefore , in an mfg , a player does not care about others states but only act according to a mean field " , which usually is the probability distribution of state at time instant . in our energy harvesting game ,the state is the battery and the mean field is the probability distribution of energy in the area we are considering .when the number of players is very large , we can assume that is a smooth continuous distribution function .we will express the average interference at an sbs as a function of the mean field .all the symbols used in this section are listed in * table ii*. [ cols="^,<,^ , < " , ] thanks to similarity , all the sbss have the same set of equations and constraints , so the optimal control problem for the sbss reduces to finding the optimal policy for only one generic sbs .mathematically , if an sbs has infinite available energy , i.e. , , it will act as an mbs . however , for simplicity , we will assume that only the sbss are involved in the game and the interference from the mbs is constant , which is included in the noise as in . except that , the system model and the optimization problem here are similar to those in the discrete stochastic game model . assuming that the optimal control above starts at time with , we obtain the bellman function as .\ ] ] from this function , at time , we obtain the following hamilton - jacobi - bellman ( hjb ) equation : where is the average transmit power at a generic sbs .the hamiltonian is given by the bellman s principle of optimality . by applying the first order necessary condition, we obtain the optimal power control as follows : ^+ .\label{powermfg}\ ] ] the bellman , if exists , is a non - increasing function of time and energy .therefore , we have and . from equation ( ) , given the current interference at a user , the corresponding sbs will transmit less power based on the future prospect . if the future prospect is too small , i.e. , , it stops transmission to save energy .replacing back to the hjb equation , we have ^+\right)^2 = 0 , & \ ] ] which has a simpler form as follows : also , from ( [ mfg:2 ] ) , at time , we have the fokker - planck equation as : where is the probability density function of at time . combining all these information , we have the following proposition . [ mfgpower ]the average transmit power of a generic sbs is a derivative of the average energy available with respect to time and can be calculated as see * appendix c*. since is always non - negative , the average energy in an sbs s battery is a decreasing function of time .that is , the distribution should shift toward left when increases .this is because , we use the wiener process in ( [ mfg:1 ] ) .since has a normal distribution with mean zero , the energy harvested will be equal to the energy leakage .therefore , for the entire system , the total energy reduces when time increases .if and are two solutions of * proposition [ pdes ] * and , then we have .first , from ( [ mfg:2 ] ) we derive fokker - planck equation : since , we subtract the first equation from the second one to obtain this means is a function of .let us denote , then we have . from * lemma [ mfgpower ]* , if then . since , we have now we substitute that results in this means or . notethat is a function of and . 
since and , it follows that .this lemma confirms that an sbs will act only against the mean field .thus only determines the evolution of the system .two systems with the same mean field will behave similarly . to obtain and ,we use the finite difference method ( fdm ) as in and .we discretize time and energy coefficient into large intervals as ] with and as the step sizes , respectively . then become matrices with size .to keep the notations simple , we use and as the index for time and energy coefficient in these matrices with and .for example , is the probability distribution of energy at time . using the fdm , we replace , , and with the corresponding discrete formulas as follows : by using them in ( [ valuef ] ) and after some simple algebraic steps , we have where similarly , discretizing ( [ discreteimfg ] ) , we have where to obtain , and using * proposition [ pdes ] * , we need to have some boundary conditions . first , to find , we assume that there is no sbs that has the battery level equal to or larger than so that .this is true if we assume that is the largest battery size of an sbs .also , when , from the basic property of probability distribution next , to find , again we need to set some boundary conditions . notice that for all .we further assume the following : * intutitively , if the battery level of a sbs is full , i.e. , when , this sbs should transmit something , or equivalently , .that means therefore , if we know and , we can calculate . *similarly , it must be true that when the available energy is , i.e. , , an sbs will stop transmission .therefore , we can assume again , if we know , we can calculate .* during simulations , in some cases when the density is very high , we obtain very large ( unrealistic ) values of transmit power .therefore , we must put an extra constraint for the upper limit . in this paper , we use , or , where is the duration of one time slot .this means , we have to limit the transmit power during one time step to be smaller than the maximum power that can be transmitted during one time interval . based on the above considerations , we develop an iterative algorithm ( * algorithm [ fdm ] * ) detailed as follows. set up matrices , , , and vector .guess arbitrarily initial values for power , i.e. , .initialize , , , , and .initialize and as the step size of energy and time with .set as the number of iteration .: solve the fokker - planck equation to obtain using ( [ discretem ] ) and ( [ mbound ] ) with given , .update for using discrete form of equation ( [ discreteimfg ] ) .calculate for all by using ( [ discreteu ] ) , ( [ urmax ] ) and ( [ -urmax ] ) with , .calculate new transmission power using ( [ pow ] ) .regressively update with . * for * * if * * then * . * end * . *end * at time slot , sbs with energy battery transmits with power . 
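the iterative algorithm above can be sketched in python as follows: the fokker-planck equation for the energy density is swept forward in time, the hjb equation for the value function backward in time, the power is re-derived from the first-order condition and relaxed, and the loop repeats. this is a schematic only, in normalised units; the drift is taken as -p (energy spent at the transmit-power rate), and the diffusion term and the running cost are placeholders, since the exact discretized equations are not recoverable from the extracted text.

```python
import numpy as np

def solve_mfg_fdm(n_t=200, n_e=80, n_iter=50, relax=0.5, p_max=1.0, p_target=0.2):
    """Schematic FDM solver for the forward-backward MFG system in normalised
    units (time horizon and maximum battery level rescaled to 1)."""
    dt, de = 1.0 / n_t, 1.0 / n_e
    e = np.linspace(0.0, 1.0, n_e + 1)
    p = np.full((n_t + 1, n_e + 1), p_target)        # initial guess for the control
    for _ in range(n_iter):
        # forward sweep: transport of the energy density towards lower levels
        m = np.zeros((n_t + 1, n_e + 1))
        m[0] = np.exp(-((e - 0.5) ** 2) / 0.05)
        m[0] /= m[0].sum() * de
        for n in range(n_t):
            flux = p[n] * m[n]
            m[n + 1, :-1] = m[n, :-1] + dt / de * (flux[1:] - flux[:-1])
            m[n + 1, -1] = m[n, -1] - dt / de * flux[-1]
            m[n + 1] = np.clip(m[n + 1], 0.0, None)
            m[n + 1] /= max(m[n + 1].sum() * de, 1e-12)

        # backward sweep: value function from the terminal condition u(T, e) = 0
        u = np.zeros((n_t + 1, n_e + 1))
        for n in range(n_t - 1, -1, -1):
            du_de = np.gradient(u[n + 1], de)
            mean_p = (p[n] * m[n]).sum() * de        # mean-field interference proxy
            cost = (p[n] - p_target) ** 2 + 0.5 * mean_p * p[n]   # placeholder cost
            u[n] = u[n + 1] + dt * (cost - p[n] * du_de)

        # first-order condition: power from the energy-gradient of u, relaxed update
        p_new = np.clip(-np.gradient(u, de, axis=1), 0.0, p_max)
        p_new = np.minimum(p_new, e / dt)            # cannot spend more than is stored
        p = (1.0 - relax) * p + relax * p_new
    return e, p, m

if __name__ == "__main__":
    e, p, m = solve_mfg_fdm()
    print("power at t = 0 for the fullest battery:", p[0, -1])
```

the relaxation step mirrors the regressive update in the algorithm above and is what allows the forward and backward sweeps to be iterated to a fixed point.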
for the mfg , we do not need the location information for each sbs .however , we need information about the average channel gain , the number of sbss in one macrocell , and the initial distribution of the energy of the sbss .therefore , some central system should measure these information , solve the differential equations , and then broadcast the power policy to all the sbss .it is more efficient than broadcasting all the information to all sbss and let them solve the differential equations by themselves .again , the central system only needs to re - calculate and broadcast to all sbss a new power policy if there are changes in or .in this section , we quantify the efficacy of the developed stochastic policy in comparison to the simple stackelberg game - based power control policy .the stochastic policy is obtained from the qcqp problem . on the other hand , for the stackelberg policy, we follow a hierarchical method .let us assume that at time sbs has joules of energy in its battery .if the mbs transmits with power , then each sbs tries to transmit with a power such that where is the interference from both the sbss and the mbs .since this is a convex problem , although it is solved independently by each sbs , the results are the same .notice that the objective function for each sbs is similar to the utility function in ( [ u1 ] ) ; therefore , it can provide a fair comparison against our method .the constraint means that each sbs can not transmit more than the energy it has in its battery .next , knowing that sbs will solve the above optimization problem to find its , the mbs calculates its for different values of its transmit power and picks the optimal .knowing , each sbs solves the convex optimization above to obtain its transmit power .the tuple will be a nash equilibrium because no one can choose a better option given the others actions . to solve the qcqp in ( [ qcqp ] ), we use the _ fmincon _ function from matlab .note that the _ fmincon _ function may return a local optimal instead .therefore , in order to obtain a good approximation for the optimal solution , in our simulations , we use the incremental method as described below .first , our qcqp problem is stated as follows : where is a group of linear functions of . by solving this using _ fmincon _, we obtain a local optimal result .then we solve the updated qcqp problem as follows : where is a small positive constant . by solving this new qcqp, we obtain a local optimal solution that satisfies the constraints of the original qcqp and returns a better result .we keep repeating this step as long as _fmincon _ is able to return a solution with that satisfies the constraints .then we say that is the optimal solution ( with the error of ) .note that the optimal solution for this qcqp can be found by using brute force search through all possible integer values of .therefore , the result can be double checked when the number of sbss is small . in the simulations ,the ces has a maximum battery size of and the volume of one energy packet is j. the duration of one time interval is ms and the thermal noise is w. 
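as a note on implementation, the incremental procedure described above for solving the qcqp can be sketched as follows, with scipy's slsqp solver standing in for matlab's fmincon as an analogous local solver. the objective and constraints in the usage example are placeholders for the actual qcqp built from the payoff matrices; only the structure of the repeated tightening step is illustrated.

```python
import numpy as np
from scipy.optimize import minimize

def incremental_qcqp(objective, x0, base_constraints, bounds=None, eps=1e-3, max_rounds=50):
    """Incremental local-search heuristic: solve the problem locally, then add
    the constraint objective(x) <= best - eps and re-solve, stopping once the
    solver no longer returns a feasible improving point. The final incumbent is
    taken as the solution (within an error of eps)."""
    best_x, best_val = None, np.inf
    x_start = np.asarray(x0, dtype=float)
    for _ in range(max_rounds):
        cons = list(base_constraints)
        if best_x is not None:
            # SLSQP inequality constraints are of the form fun(x) >= 0
            cons.append({"type": "ineq",
                         "fun": lambda x, b=best_val: b - eps - objective(x)})
        res = minimize(objective, x_start, method="SLSQP",
                       bounds=bounds, constraints=cons)
        if not res.success or res.fun >= best_val - eps:
            break
        best_x, best_val, x_start = res.x, float(res.fun), res.x
    return best_x, best_val

# usage with a placeholder indefinite quadratic objective over a probability
# simplex, standing in for the qcqp built from the game payoff matrices
if __name__ == "__main__":
    rng = np.random.default_rng(0)
    n = 6
    Q = rng.standard_normal((n, n)); Q = 0.5 * (Q + Q.T)   # indefinite -> non-convex
    c = rng.standard_normal(n)
    obj = lambda x: float(x @ Q @ x + c @ x)
    simplex = [{"type": "eq", "fun": lambda x: x.sum() - 1.0}]
    x_opt, f_opt = incremental_qcqp(obj, np.full(n, 1.0 / n), simplex,
                                    bounds=[(0.0, 1.0)] * n)
    print("heuristic optimum:", f_opt)
```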
the small cell users are considered to be in outage if the falls below 0.02 .the mbs has two levels of transmission power [ 10 ; 20 ] watts and the sinr outage threshold is set to 5 .the energy arrival at each sbs follows a poisson distribution with unit rate .the volume of each energy packet arriving at the ces is times larger than the energy packet collected by each sbs .thus , the amount of energy in each packet at the ces will be .therefore , the ces should have a more efficient method to harvest energy than each sbs ( in the case of stackelberg method ) . however , as the maximum battery size of the ces is limited by , its total available energy is always limited by the product regardless of . for both the cases , each sbs can receive up to 1.5 mj of energy from either the ces or the environment . at the beginning , the ces is assumed to have its battery full . when states , , sbss , .,width=288 ] when states , , sbss , .,width=288 ] from fig .[ density ] , it can be seen that when the number of sbss is smaller than some value , the stochastic method gives better results .this is because , the ces can redistribute the harvested energy among the sbss based on their average channel gains , and also the qcqp in ( [ qcqp ] ) gives a nash equilibrium that favors the ces .however , at some point , its outage probability will be higher than that for the stackelberg approach .this is not surprising since the ces can only store at most of energy .therefore , when the number of sbss increases , the average allocated power per sbs by the ces reduces while the stackelberg method allows each sbs to harvest up to 1.5mj no matter how large is .this means that the stackelberg method can provide a better performance compared to using the ces when is large .following fig .[ threshold ] , by increasing the threshold sinr target while keeping the number of sbss fixed , we can reduce the outage probability of a user served by an sbs .this is understandable since the average will approach the higher target and thus reduce the outage probability .however , for the stochastic methods , the outage will start to increase when the target is larger than some value . to increase the average , the sbss need to transmit with higher power to at least mitigate the cross - tier interferencehowever , a higher transmit power means a higher consumption of harvested energy , which can create shortages later . also , higher transmit powers from the sbss wil make the mbs to increase its own transmit power and thus create high cross - interference .when the target is higher than some value , the stochastic method will behave greedily by transmitting as much as possible and the outage will begin to increase . when the target is large enough, the ces distributes all of the energy it currently has and thus the outage probability will become flat .a similar observation can be made for the stackelberg approach .however , for this method , the energy can not be redistributed to the sbss with good channel gains to its users and the distribution of energy arrivals is ignored ; therefore , the results are worse than those for our stochastic method . fig .[ volume ] shows the outage probability when increasing the quanta volume by choosing a higher multiplier for the ces .it is easy to see that , with a higher , i.e. 
, choosing a more effective method to harvest energy at the ces , we can achieve a better performance .the stackelberg method does not use the ces , so the outage probability remains unchanged .note that , since the battery size of each sbs is limited to 1.5 mj , at some point , a higher does not improve the outage probability .states , sbss , , .,width=288 ] states , sbss , , .,width=288 ] fig .[ mbs_fbs ] shows the outage probability of the macrocell user when and .the stochastic method gives better results in this case since the sbss are more rational " in choosing their transmit powers in long term .also , unlike the stackelberg method , the ces has a fixed - energy battery , so when is large , the average amount of energy distributed to an sbs will be small , which in turn limits the cross - interference to the macrocell user . with the stackelberg method ,the sbss only try to maximize their payoffs in the current time slot and ignore the distribution of energy ; therefore , it uses a higher transmit power to compete against the mbs when the target is increased ; therefore , it creates a larger cross - interference and in turn increases the outage probability of the macrocell user . in summary , we see that the centralized method using a ces can provide a better performance in terms of outage probability for both the mbs and sbss .the advantages of using ces are two folds : first , it allows the harvested energy to be distributed to the sbss which have good channel gains for the scheduled users , and second , it considers the probability distribution of energy arrivals when calculating the transmit power policy for both the mbs and sbss . however , since the ces has a fixed battery size , this centralized model performs poorer when it needs to support a large number of sbss . to improve this inflexibility, we can adjust other parameters as follows : change the target , increase the multiplier , or increase the battery size of each sbs .we assume that the transmit power at the mbs is fixed at 10w and it results in a constant noise at the user served by a generic sbs .the radius of the macrocell is meter , so we have constant cross - interference w. the target is and assume that .we discretize the energy coefficient into 80 intervals , i.e. , and intervals .similar to the discrete stochastic case , each sbs can hold up to 150 in the battery , so the maximum transmit power is 30 mw .we impose the threshold such that an sbs will not transmit at or .the intensity of energy loss / energy harvesting , is 1 .sbss / macrocell.,width=288 ] sbss / macrocell.,width=288 ] sbss / macrocell.,width=288 ] sbss / macrocell.,width=288 ] sbss / macrocell.,width=288 ] sbss / macrocell.,width=288 ] for sbss / cell , we have , so a generic sbs does not need to use a large amount of power in order to obtain the target .notice that is the average transmit power of a generic sbs .therefore , if a generic sbs reduces , the cost term also reduces . thus the difference between the cost and the received power be smaller , which is desirable .it makes sense that a generic sbs will try to reduce its power as much as possible in this case .the power can not be zero though , because .moreover , from fig . [ powmfg401 ] and fig .[ powmfg402 ] , we see that , at the beginning , the sbs with higher energy ( i.e. , 100 ) will transmit with a high power and will gradually reduce to some value .the sbss with smaller battery will increase their transmit powers gradually . 
since the transmit power is small, we see that in fig .[ dismfg401 ] and fig .[ dismfg402 ] , the energy distribution shifts to the left slowly . on the other hand , when sbss / macrocell , we have . in this case , the effect is more complicated because reducing the transmit power may not reduce the gap between the received power and the cost term . again , as can be seen from fig .[ powmfg502 ] , the sbss with larger available energy will transmit with large power first and after sometime when there is less energy available in the system , all of them start to use less power . therefore , as can be seen in fig .[ dismfg502 ] , the energy distribution shifts toward the left with a faster speed than the previous case . for sbss / macrocell, we have .this means each sbs needs to transmit with a power larger than the average to achieve the target . in fig .[ powmfg602 ] , we see that the behavior of each sbs is the same as in the previous case .that is , the sbss with higher energy transmit with larger power first and then reduce it , while the poorer " sbss increase their transmit power over time . we compare the mfg model against the stochastic discrete model for different values of . for simplicity , we assume that each sbs has the same link gain to its user as . also , we assume that the channel gain from each sbs to the user of another sbs is . using * remark [ rem2 ]* , it can easily be proven that in this case , each sbs will transmit with the same power , i.e. , if the ces sends of energy to sbss , then each sbs receives .then , the interference at each sbs will be calculated as with multiplier and the maximum battery size of the ces as . because the mbs is not a player of the game ,the simulation step becomes simpler , and we only need to solve a linear program for the mdp problem instead of a qcqp .therefore , we can call it as mdp method to accurately reflect the difference . for the discrete stochastic case , we discretize the gaussian distribution to model the energy arrivals at the ces .the battery size of each sbs is still 150 .the average sinr of a generic small cell user using both the mfg and mdp models with different density is plotted in fig .[ mfgvsdiscrete ] .we see that using the mfg model , the average increases at the beginning and then it starts falling at some point .this is because , when the density is low , the interference from the mbs is noticeable ( i.e. , w in our simulation ) . from the previous figures, it can be seen that an sbs will increase its power when the density is higher .therefore , after some point the co - tier interference becomes dominant and the average will begin to drop .it means at some value of the density , e.g. , sbss / macrocell in fig .[ mfgvsdiscrete ] , we obtain the optimal average .we notice that the mfg model performs better than the mdp model with the ces .this is due to the limited battery size of ces which makes it difficult to support a large number of sbss . in summary , we have two important remarks for the mfg model .first , if the density of small cells is high , the sbss will transmit with higher power .second , from fig .[ mfgvsdiscrete ] , we see that by choosing a suitable density of the sbss , we can obtain the highest average . 
from ( [ pow ] ), it can be easily proven that the average at a user served by an sbs will always be smaller than the target ( because ) .therefore , the highest average is also the closest to the target , which is our objective in the first place .we have proposed a discrete single - controller discounted two - player stochastic game to address the problem of downlink power control in a two - tier macrocell - small cell network under co - channel deployment where the sbss use stochastic renewable energy source . for the discrete case ,the strategies for both the macrocell and sbss have been derived by solving a quadratic optimization problem .the numerical results have shown that these strategies can perform well in terms of outage probability experienced by the users .we have also applied a mean field game model to obtain the optimal power for the case when the number of sbss is very large .we have also discussed the implementation aspects of these models in a practical network . in this paper , we have not explicitly considered the correlation in the energy arrival process . however , this correlation can be modeled by assuming that the energy arrival has markovian property . in this case , to calculate the transition probability , we will need to extend the definition of the state to a two - element vector , one is the current energy in the storage and the second is the energy arrival at this time slot .moreover , we have not considered the details of cost and latency analysis related to the information exchange to and from the ces and also the charge and discharge loss of the battery storage .these issues can be addressed in future .[ [ section ] ] denote the distance between the bs to its user as .if is uniformly located inside the disk centred at , the pdf of is . denote by the value of the angle , is uniformly distributed between . using the cosine law , we obtain = \int_0^{2\pi}\int_0^r ( r^2 + a^2 - 2ar \cos\theta)^{-2 } \frac{1}{2\pi } \frac{2a}{r^2 } \ , da \ , d\theta .\end{aligned}\ ] ] first , we solve the indefinite integral over as where is a constant and since , after integrating over ] , then there exists ] ) . by dividing both sides by and letting to be very small ( or ), we have and using , , and changing the variable to , we complete the proof .99 k. t. tran , h. tabassum , and e. hossain , a stochastic power control game for two - tier cellular networks with energy harvesting small cells , " _ proc .ieee globecom14 _ , austin , tx , usa , 8 - 12 december , 2014 .n. michelusi , k. stamatiou , and m. zorzi , transmission policies for energy harvesting sensors with time - correlated energy supply , " _ ieee transactions on communications _ , vol .61 , no . 7 , jul .2013 , pp . 29883001 .p. semasinghe and e. hossain , downlink power control in self - organizing dense small cells underlaying macrocells : a mean field game , " _ ieee transactions on mobile computing , _ doi : 10.1109/tmc.2015.2417880 .k. huang and v. k. lau , enabling wireless power transfer in cellular networks : architecture , modeling and deployment , " _ ieee transactions on wireless communications _2014 , pp . 902912 .p. blasco , d. gunduz , and m. dohler , `` a learning theoretic approach to energy harvesting communication system optimization , '' _ ieee transactions on wireless communications , _ vol .4 , pp . 18721882 , apr . 2013 .d. ngo , l. b. le , t. l .- ngoc , e. 
hossain, and d. i. kim, distributed interference management in two-tier cdma femtocell networks," _ieee transactions on wireless communications_, 2012, pp. 979-989. a. y. al-zahrani, r. yu, and m. huang, a joint cross-layer and co-layer interference management scheme in hyper-dense heterogeneous networks using mean-field game theory," _ieee transactions on vehicular technology_, doi: 10.1109/tvt.2015.2413394. s. guruacharya, d. niyato, d. i. kim, and e. hossain, hierarchical competition for downlink power allocation in ofdma femtocell networks," _ieee transactions on wireless communications_, 2013, pp. 1543-1553. m. huang, p. e. caines, and r. p. malhamé, uplink power adjustment in wireless communication systems: a stochastic control analysis," _ieee transactions on automatic control_, vol. 49, no. 10, oct. 2004, pp. 1693-1708.
energy harvesting in cellular networks is an emerging technique to enhance the sustainability of power - constrained wireless devices . this paper considers the co - channel deployment of a macrocell overlaid with small cells . the small cell base stations ( sbss ) harvest energy from environmental sources whereas the macrocell base station ( mbs ) uses conventional power supply . given a stochastic energy arrival process for the sbss , we derive a power control policy for the downlink transmission of both mbs and sbss such that they can achieve their objectives ( e.g. , maintain the signal - to - interference - plus - noise ratio ( sinr ) at an acceptable level ) on a given transmission channel . we consider a centralized energy harvesting mechanism for sbss , i.e. , there is a central energy storage ( ces ) where energy is harvested and then distributed to the sbss . when the number of sbss is small , the game between the ces and the mbs is modeled as a single - controller stochastic game and the equilibrium policies are obtained as a solution of a quadratic programming problem . however , when the number of sbss tends to infinity ( i.e. , a highly dense network ) , the centralized scheme becomes infeasible , and therefore , we use a mean field stochastic game to obtain a distributed power control policy for each sbs . by solving a system of partial differential equations , we derive the power control policy of sbss given the knowledge of mean field distribution and the available harvested energy levels in the batteries of the sbss . small cell networks , power control , energy harvesting , stochastic game , mean field game .
separation of the cosmic microwave background ( cmb ) signal from extragalactic and galactic foregrounds ( gf ) is one of the most challenging problems for all the cmb experiments , including the ongoing nasa and the upcoming esa mission .the gf produces the major ( in amplitude ) signal in the raw maps , which is localized at a rather small latitude band . to avoid any contribution of the gf to the derived cmb map , starting from to experiments , a set of masks and disjoint regions of the map are in use for extraction of the cmb anisotropy power spectrum .the question is , what kind of assumption about the properties of the foregrounds should we apply for the data processing and what criteria determines the shape and area of the mask and the model of the foregrounds ? to answer these questions we need to know the statistical properties of the gf to determine the strategy of the cmb signal extraction from the observational data sets .these questions are even more pressing for the cmb polarization .unlike temperature anisotropies , our knowledge about the polarized foregrounds is still considerably poor .additionally , we have yet to obtain a reasonable truly _ whole - sky _ cmb anisotropy maps for statistical analysis , while obtaining a whole - sky polarization map seems to be a more ambitious task .modeling the properties of the foregrounds thus needs to be done for achieving the main goals of the mission : to the cmb anisotropy and polarization signals for the whole sky with unprecedented angular resolution and sensitivity .apart from modeling the foregrounds , ( hereafter toh ) propose the `` blind '' method for separation of the cmb anisotropy from the foreground signal .their method ( see also ) is based on minimizing the variance of the cmb plus foreground signal with multipole - dependent weighting coefficients on k to w bands , using 12 disjoint regions of the sky .it leads to their foreground cleaned map ( fcm ) , which seems to be clean from most foreground contamination , and the wiener - filtered map ( wfm ) , in which the instrumental noise is reduced by wiener filtration .it also provides an opportunity to derive the maps for combined foregrounds ( synchrotron , free - free and dust emissions etc . ) .both fcm and wfm show certain levels of non - gaussianity , which can be related to the residuals of the gf .therefore , we believe that it is imperative to develop and refine the `` blind '' methods for the mission , not only for better foreground separation in the anisotropy maps , but also to pave the way for separating cmb polarization from the foregrounds . the development of `` blind '' methods for foreground cleaning can be performed in two ways : one is to clarify the multipole and frequency dependency of various foreground components , including possible spinning dust , for high multipole range and at the high frequency instrument ( hfi ) frequency range .the other requires additional information about morphology of the angular distribution of the foregrounds , including the knowledge about their statistical properties in order to construct realistic high - resolution model of the observable foregrounds . since the morphology of the cmb and foregrounds is closely related to the phases of coefficients from spherical harmonic expansion , this problem can be re - formulated in terms of analysis of phases of the cmb and foregrounds , including their statistical properties . 
in , it is reported that a major part of the gf produces a specific correlation in spherical harmonic multipole domain at : between modes and .the series of -correlation from the gf requires more investigation .this paper is thus devoted to further analysis of the statistical properties of the phases of the foregrounds for such correlation .we concentrate on the question as to what the reason is for the correlation in the data , and can such correlation help us to determine the properties of the foregrounds , in order to separate them from the cmb anisotropies . in this paperwe develop the idea proposed by and demonstrate the pronounced symmetry of the gf ( in galactic system of coordinates ) is the main cause of the correlation .the estimator designed in to illustrate and tackle such correlation can help us understand gf manifestation in the harmonic domain , leading to the development of `` blind '' method for foreground cleaning . in combination with multi - frequency technique proposed in ,the removal of correlation of phases can be easily used as an effective method of determination of the cmb power spectrum without galactic mask and disjoint regions for the data .it can serve as a complementary method to the internal linear combination method and to the toh method as well , in order to decrease the contamination of the gf in the derived maps .such kind of correlation should be observed by the mission and will help us to understand the properties of the gf in details , as it can play a role as an additional test for the foreground models for the mission .this paper is organized as follows . in section 2we describe the estimator for -correlation in the coefficients and its manifestation in the observed signals . in section 3we apply the estimator on 3 toy models which mimics galactic foregrounds to investigate the cause of such correlation . in section 4we discuss the connection between the correlation and the foreground symmetry .we also examine the power spectrum of the estimator and the correlations of estimator in section 5 .the conclusion is in section 6 .as is shown in to illustrate the -correlation , we recap the estimator taken from the combination of the spherical harmonic coefficients , where , and the coefficients are defined by the standard way : is the whole - sky anisotropies at each frequency band , are the polar and azimuthal angles of the polar coordinate system , are the spherical harmonics , and are the amplitudes ( moduli ) and phases of harmonics .the superscript in characterizes the shift of the -mode in and , following , we concentrate on the series of correlation for , .note that the singal of galaxy mostly lies close to -plane .the estimator in form eq.([eq1 ] ) is closely related with phases of the multipoles of the signal .taking eq.([eq2 ] ) into account , we get . \label{eq3}\ ] ] from eq.([eq3 ] ) one can see that , if the phase difference , then if , the map synthesized from the estimator is simply a map from the with phases rotated by an angle and the amplitudes lessened by a factor , while for non - correlated phases we have specific ( but known ) modulation of the coefficients ( see the appendix ) .a non - trivial aspect of estimator is that it significantly decrease the brightest part of the galaxy image in the k - w maps . 
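as a rough illustration of how such an estimator and the associated phase correlation can be computed, the sketch below assumes a difference-type combination of a_{l,m} with a_{l+dl,m}; the exact shift and sign convention of the paper are elided in the extracted formulas, so this particular form is an assumption. it reproduces the behaviour described above: for equal moduli the estimator vanishes when the two phases coincide and is doubled when they differ by pi.

```python
import numpy as np

def phase(z):
    """Phase of a complex harmonic coefficient, wrapped to [0, 2*pi)."""
    return np.angle(z) % (2.0 * np.pi)

def shift_estimator(alm, dl):
    """Difference-type estimator combining a_{l,m} with a_{l+dl,m}.
    alm is a (lmax+1, lmax+1) complex array with alm[l, m] = a_{l,m} for m <= l."""
    lmax = alm.shape[0] - 1
    D = np.zeros_like(alm)
    for l in range(lmax + 1 - dl):
        for m in range(l + 1):
            D[l, m] = alm[l, m] - alm[l + dl, m]
    return D

def mean_phase_alignment(alm, dl):
    """Circular mean of the phase differences phi_{l+dl,m} - phi_{l,m};
    values near 1 indicate strong l -> l+dl phase correlation."""
    lmax = alm.shape[0] - 1
    diffs = []
    for l in range(2, lmax + 1 - dl):
        for m in range(l + 1):
            if alm[l, m] != 0 and alm[l + dl, m] != 0:
                diffs.append(phase(alm[l + dl, m]) - phase(alm[l, m]))
    diffs = np.asarray(diffs)
    return np.hypot(np.cos(diffs).mean(), np.sin(diffs).mean())

if __name__ == "__main__":
    rng = np.random.default_rng(2)
    lmax, dl = 64, 4
    # random Gaussian alm: phases are uniform, so the alignment should be small
    alm = np.zeros((lmax + 1, lmax + 1), dtype=complex)
    for l in range(2, lmax + 1):
        alm[l, :l + 1] = rng.standard_normal(l + 1) + 1j * rng.standard_normal(l + 1)
    print("alignment (random phases):    ", mean_phase_alignment(alm, dl))
    # copy phases from l to l+dl: a foreground-like, strongly correlated case
    for l in range(2, lmax + 1 - dl):
        alm[l + dl, :l + 1] = np.abs(alm[l + dl, :l + 1]) * np.exp(1j * phase(alm[l, :l + 1]))
    print("alignment (correlated phases):", mean_phase_alignment(alm, dl))
```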
in the following analysis we use a particular case so that , although it can be demonstrated that for the results of analysis do not change significantly as long as , where is the multipole number in the spectrum where the instrumental noise starts dominating over the gf signal .in this section we show how the estimator transforms the gf image in the k - w maps , taking from the nasa lambda archive . in fig.[fig1] we plot the maps synthesized from the estimator for k - w band ( for and ) note that the amplitudes are significantly reduced in each map .it should be emphasized that the map is not temperature anisotropy map , as the phases are altered .let us discuss some of the properties of the estimator , which determine the morphology of the maps .first of all , from eq.([eq1 ] ) one can find that for all modes , the estimator is equivalent to zero if and it is non - zero ( and doubled ) if . in terms of phase difference in eq.([eq4 ] ) this means that for modes estimator removes those which have the same phases , while doubles the amplitudes of others whose phases differing by in the maps .however , such specific case of estimator for modes is not unique for only .it seems typical for any values of parameter .what is unique in the data is that for the order of sign for modes leads to the image without strong signal from the galactic plane .we present in fig.[fig2 ] the images synthesized of the even and odd modes from the w band signal .the even and odd modes reflect different symmetry of the signal , related to the properties of the spherical harmonics and the corresponding symmetries of the foregrounds . for even the brightest part of the signal is mainly localized in the galactic plane area ( the top panel ) , while for odd modes the signal has less dominated central part from the gf , but it has well presented periodic structure in direction ( horizontal stripes ) , from the north to south pole caps crossing the galactic plane . in fig.[fig3 ] we present the symmetry of the gf for the w band signal for even and odd harmonics , including all corresponding modes . as one can see from fig.[fig2]-[fig3 ] , the even and the even maps ( the top of fig.[fig2 ] and fig.[fig3 ] ) have a common symmetrical central part , which looks like a thin belt covered in area and all range . for odd modesthe brightest gf mainly concentrate locally in and rectangular areas .additionally , for the maps of even and odd harmonics in fig.[fig3 ] we have periodic structure of the signal in direction , which is determined by the properties of the spherical harmonics , and , more importantly , by the properties of coefficients of decomposition , which reflect directly corresponding symmetry of the gf .in this section we want to examine why the correlation appears . in order to answer this questionwe introduce 3 toy models for the galaxy emissivity , which reflect directly different symmetries of the galactic signal . in appendixwe analyze more general situation .these 3 toy models are the belt , the rectangular and the spots models .all 3 models are the simple geometrical shapes added with the ilc map . 1 . the belt model : + we add on top of the ilc map with if ,and ] and does not show any -correlation at all . 
moreover , according to the properties of the sine mode the shift of the argument by the factor just transforms it to the combination &=&\sin\left[(\l+\frac{1}{2})\delta\right]\cos(4k\delta)\nonumber\\ & + & \cos\left[(\l+\frac{1}{2})\delta\right]\sin(4k\delta ) .\label{sin}\end{aligned}\ ] ] thus , one can see that -correlation requires some restriction on the -parameter thus , for and the halfwidth of the rectangular area must be close to . if , for example , then we will have correlation for , but not for . practically speaking , for we can have some particular symmetry , but not general symmetry .conclusions concerning this rectangular model of gf are clearly seen in fig.[fign ] . to understand how each sort of defectsis related with corresponding -correlation , we introduce the model of defects , which can be describe as a sum of peaks with amplitudes and coordinates . for analytical description of the modelwe neglect the beam convolution of the image of point sources ( ps ) , but we include it in the numerical simulation .for the model of defects for the coefficients of the spherical harmonics expansion from eq.([ps ] ) we get as for the rectangular model , we will assume that all and simply we will have for , as well as . as it was shown in previous section , clearly demonstrate correlation .we would like to point out that for the spots model this correlation now is strong , unlike the belt and rectangular models .- correlation .] moreover , implementation of the gaussian shape of the ps which come from beam convolution does not change that symmetry at all . to show that , in fig.[ps ] we plot the model of two ps with amplitudes in order to 10 mk , combined with the ilc map .the reason for such effect is quite obvious .the beam convolution does not change the symmetry of the model , but rescale the amplitudes of the ps by a factor /2\sigma^2 ] , while =2a\cos(\pi m/2 ) ] and the contribution of the strong signal to the map ilc + two ps vanishes .this means that for amplitudes , where and correspond to the ilc and ps signals respectively ) and phases of coefficients , we have where are the ilc phases . as one can see , this is a particular example , when strong , but symmetric in direction signal do not contribute to the set of coefficients at least for the defined range of multipoles .let us discuss the other opposite model , in which the number of spots in the galactic plane is no fewer than 2 , and their coordinates are random in some range .no specific assumptions about the amplitudes are needed . in this modelthe sum in eq.([aa ] ) mostly is represented by modes , , while all modes because of randomness of the phases .this model , actually is close to the rectangular model , in which the width of rectangular side in the direction now is . at the end of this sectionwe would like to demonstrate , how the symmetry of the galaxy image in direction can determine the properties of the map .for that we rotate the w - band map by along the pole axis and produce the same estimation of , as is done for the galactic reference system .the result of estimation is shown in fig.[rot ] .for comparison , in this figure we plot the difference and sum between maps before and after rotation . from these figures ,these new symmetry of the w band map after rotation simply increase the amplitude of signal in galactic plane zone , especially in the the central part of it . at the end of this section , we summarize the main results of investigation of the given models of the gf signal . 
* for highly symmetrical signal , like the belt model ,all coefficients vanish for the multipole numbers , but .modification of the estimator in form of eq.([eq1b ] ) is crucial to prevent any contribution to the map from the gf . *less symmetric model , like the rectangular model , requires transition for the multipole numbers for estimator , which appears for the range of multipoles .if the resolution of the map we are dealing with is low , that correlation appears for all and the corresponding coefficients for gf are in order of magnitude . for the correlation of phases does not exist at all . *the amplitude of the gf signal , , and its dependency on ( like , is the minimal multipole number for which achieve the maxima , and is the power index ) are crucial for establishing of the correlation of phases . taking asymptotic into account , and defining the critical multipole number , we can estimate the corresponding amplitudes .if at that range of we get , where is the cmb power spectrum , the correlation would be established for all range of multipoles , even if it vanishes for the gf signal for .starting from and for the corresponding for gf play a small role in amplitude noise , in comparison to the amplitudes of the cmb signal . *the estimator effectively decreases the amplitudes of the point - like sources located in the galactic plane , if they have nearly the same amplitudes and are symmetrically distributed in direction around galactic center .non - symmetrical and different in amplitudes point - like sources after implementation of estimator produce significant residues .in this section we apply the proposed estimator to the maps for q , v and w band foregrounds ( which are sum of synchrotron , free - free and dust emission ) .we then transform them by the estimator .these foreground maps do not contain the cmb signal and instrumental noise , therefore they allow us to estimate the properties of the gf in details . in fig.[fgd ]we plot the maps for q , v and w band foregrounds ( ) for the multipole range .this range is determined by the resolution of the foregrounds maps ( ) .as one can see from these maps , the gf perfectly follows to multipole correlation , which remove the brightest part of the signal down to the level 50 mk for the q band , mk for the v band , mk for the w band and mk for the map , the difference between v and w foregrounds .note that these limits are related with the brightest positive and negative spots ( point sources ) in the maps , while diffuse components have significantly smaller amplitudes . to show the high resolution map which characterizes the properties of the foregrounds in v and w band , in fig.[pow ] we plot the map of difference bands , and the corresponding map for .note that map does not contain the cmb signal , but for high the properties of the signal are determined by the instrumental noise .to characterize the power spectrum of the maps we introduce the definition if the derived signal is gaussian , that power represents all the statistical properties of the signal . for non - gaussian signal , power characterizes the diagonal elements of the correlation matrix . 
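a minimal sketch of the power spectrum definition used here, assuming the standard normalisation in which the squared moduli are summed over m and divided by 2l+1 (the explicit formula is garbled in the extracted text):

```python
import numpy as np

def estimator_power(D):
    """Angular power of the estimator coefficients D[l, m] (m <= l), assuming
    C_l^D = (|D_{l,0}|^2 + 2 * sum_{m>0} |D_{l,m}|^2) / (2l + 1); the factor 2
    accounts for the negative-m modes of a real-valued map."""
    lmax = D.shape[0] - 1
    power = np.zeros(lmax + 1)
    for l in range(lmax + 1):
        mods2 = np.abs(D[l, :l + 1]) ** 2
        power[l] = (mods2[0] + 2.0 * mods2[1:].sum()) / (2 * l + 1)
    return power

if __name__ == "__main__":
    rng = np.random.default_rng(3)
    lmax = 32
    D = np.zeros((lmax + 1, lmax + 1), dtype=complex)
    for l in range(2, lmax + 1):
        D[l, :l + 1] = rng.standard_normal(l + 1) + 1j * rng.standard_normal(l + 1)
    print(estimator_power(D)[2:8])
```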
from fig.[psim ] it can be clearly seen that for foregrounds , especially for v and w bands , the power spectra of are significantly smaller than the power of the cmb , for estimation of which we simply use the power of toh fcm map , transformed by estimator as assuming that fcm map is fairly clean from the foreground signal .an important point of analysis of the foregrounds is that for v and w bands estimator decreases significantly the amplitude of gf , practically by 1 to 2 order of magnitude below the cmb level .the most intriguing question related to -correlation of the derived map from the v and w band signals is what is reproduced by the estimator ?the next question , which we would like to discuss is why the power spectrum of estimation of the v and w bands shown in fig.[fig22 ] are practically the same at the range of multipoles , when we can neglect the contribution from instrumental noise to both channels and differences of the antenna beams .the equivalence of the powers for these two signals , shown in fig.[fig22 ] , clearly tell us that these derived maps are related with pure cmb signal ( which we assume to be frequency independent ) .in this section we present some analytical calculations which clearly demonstrate what kind of combinations between amplitudes and phases of the cmb signal in the v , w bands and phases of foregrounds are represented in the estimator . as was mentioned in section 1 , this estimator is designed as a linear estimator of the phase difference , if the phase difference is small .let us introduce the model of the signal at each band , where is frequency independent cmb signal and is the sum over all kinds of foregrounds for each band ( synchrotron , free - free , dust emission etc . ) .according to the investigation above on the foreground models , it is realized that without the ilc signal the estimation of the foregrounds , especially for v and w bands , corresponds to the signal to simplify the formulas ] the power of which is significantly smaller then that of the cmb in terms of moduli and phases of the foregrounds at each frequency band where and are the phases of foreground and the cmb , respectively . and from eq.([dd0 ] ) we get and practically speaking , we have .thus , taking the correlation into account , we can conclude that it reflects directly the high correlation of the phases of the foregrounds , determined by the gf .moreover , if any foreground cleaned cmb maps derived from different methods display the correlation of phases , it would be evident that foreground residuals still determine the statistical properties of the derived signal . one of the basic ideas for comparison of phases of two signals is to define the following trigonometric moments for the phases and as : where .we apply these trigonometric moments to investigate the phase correlations for toh fcm and wfm . forthat we simply substitute in eq.([def2 ] ) , and define as the phase of fcm and as that of wfm .the result of the calculations is presented in fig.[comp ] . 
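the trigonometric moments used to compare two sets of phases can be sketched as the circular cross moments below; the explicit definition is garbled in the extracted text, so the standard form (mean cosine and mean sine of the phase differences) is assumed here.

```python
import numpy as np

def trig_moments(phi1, phi2):
    """Trigonometric (circular) cross moments of two sets of phases:
    C = mean cos(phi1 - phi2) and S = mean sin(phi1 - phi2). Both are close to
    zero for independent uniform phases, while |C| approaches 1 when the two
    phase sets are strongly aligned."""
    d = np.asarray(phi1) - np.asarray(phi2)
    return np.cos(d).mean(), np.sin(d).mean()

# usage: independent versus strongly correlated phase sets
if __name__ == "__main__":
    rng = np.random.default_rng(4)
    phi_fcm = rng.uniform(0.0, 2.0 * np.pi, size=1000)
    phi_wfm_indep = rng.uniform(0.0, 2.0 * np.pi, size=1000)
    phi_wfm_corr = (phi_fcm + rng.normal(0.0, 0.2, size=1000)) % (2.0 * np.pi)
    print("independent:", trig_moments(phi_fcm, phi_wfm_indep))
    print("correlated: ", trig_moments(phi_fcm, phi_wfm_corr))
```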
from fig.[comp ]it can be clearly seen that the fcm has strong correlations starting from which rapidly increase for , while for wfm these correlations are significantly damped , especially at low multipole range .however , the estimator allow us to clarify the properties of phase correlations for low multipole range .the idea is to apply estimator to fcm and wfm , and to compare the power spectra of the signals obtained before and after that .according to the definition of estimator , the power spectrum of the signal is given by eq.([pp1 ] ) , which now has the form .\label{dd}\ ] ] the last term in eq.([dd ] ) corresponds to the cross - correlation between and modes , which should vanish for gaussian random signals after averaging over the realization . for a single realization of the random gaussian processthis term is non - zero because of the same reason ,as well known `` cosmic variance '' , implemented for estimation of the errors of the power spectrum estimation ( see naselsky et al .thus and error of is in order to to evaluate qualitatively the range of possible non - gaussianity of the fcm and wfm , in fig.[fpow ] we plot the function /[d(\l)+c(\l)]$ ] for fcm and wfm , in which we mark the limits . as one can see , potentially dangerous range of low multipoles is , , for the wfm .non - randomness on some of the multipole modes is mentioned in . at the end of this sectionwe would like to demonstrate that application of estimator to maps with foregrounds residuals , such as the fcm , provides additional `` cleaning '' . in fig.[clean] we present the and trigonometric moments for the fcm with shift of the multipoles .one can see that the correlation of phases is strong ( practically , they are at the same level as correlations ) .however , after filtration these correlations are significantly decreased .the implementation of the estimator to the non - gaussian signal significantly decreases these correlations .the properties of the estimator described can manifest themselves more clearly in terms of images of the cmb signal . in fig.[clean1 ] we plot the results of the maps with implemented on fcm and wfm , in order to demonstrate how the estimator works on the non - gaussian tails of the derived cmb maps . in fig.[clean1 ] we can clearly see that the morphology of the maps are the same and difference between and is related to point sources residuals localizes outside the galactic plane ( see the 3rd panel ) .a direct substraction of the wfm from the fcm reveals significant contamination of the gf residuals and non -galactic point sources ( the third from the bottom and bottom maps ) .the second from the bottom map corresponds to difference between and for which the amplitudes of the signal represented in colorbar limit mk .one can see that the gf is removed down to the noise level . in combination of the phase analysiswe can conclude that the implementation of the estimator looks promising as an additional cleaning of the gf residuals and can help investigate the statistical properties of derived cmb signals in more detailed .in this paper we examine a specific group of correlations between , which is used as an estimation of the statistical properties of the foregrounds in the maps .these correlations , in particular , among phases are closely related to symmetry of the gf ( in galactic coordinate system ) . 
An important point of the analysis is that the correlations of the phases of the total foregrounds in the V and W bands have a specific shape. These correlations can be clearly seen in the W band of the data sets and must be taken into account when modelling the foreground properties for the upcoming mission. We apply the estimator to the TOH FCM, which contains strong residuals from the GF, and show that these residuals are removed from the map. Moreover, in that map the statistics of the phases are closer to Gaussian than in the original FCM (no correlation of phases between different modes except between adjacent modes, which is chosen as the basic one, defined by the form of the estimator). In this paper we do not describe in detail the properties of the signal derived by the estimator from the V and W bands. Further developments of the method, including multi-frequency combination of the maps and CMB extraction by the estimator, will appear in a separate paper. To avoid misunderstanding and confusion, we stress again that any maps synthesized from the estimator output are by no means the CMB signal (since the phases of these signals are not the phases of the true CMB), and the true CMB can be obtained only after a multi-frequency analysis, which is the subject of our forthcoming paper.

We thank H. K. Eriksen, F. K. Hansen, A. J. Banday, C. Lawrence, K. M. Gorski and P. B. Lilje for their comments and critical remarks. We acknowledge the use of the NASA Legacy Archive for Microwave Background Data Analysis (LAMBDA) and the maps. We also acknowledge the use of the software packages employed to produce the maps from the data sets.

In this appendix we would like to describe general properties of the periodicity of the Galactic signal, taking into account its symmetry. We adopt the following model of the signal, which seems to be general. Let us define some area around the Galactic plane, where each pixel has a given area and an index marks its location. We assume for simplicity that all the pixels in the map have the same area. In the polar coordinate system the corresponding angles mark the position of the $j$-th pixel in the map. Let us define the amplitude of the signal in each pixel accordingly; the map which corresponds to the Galactic signal then follows. Now let us assume that the Galaxy image is localized in the $\theta$-direction, while it may or may not be localized in the $\phi$-direction. Additionally, we will assume that the signal in each pixel is the sum of the Galactic foreground signal and the CMB plus instrumental noise signal. It is important to note that the statistical properties of these two components are different, both in terms of amplitudes and in terms of pixel-pixel correlations. In particular, inside the area the foreground dominates, while outside it we assume that it is negligible. Using the proposed model of the signal in the map we can obtain the corresponding coefficients of the spherical harmonic expansion, which can be represented as a sum of foreground coefficients and CMB-plus-noise coefficients. In order to understand the nature of the periodicity of the Galactic foreground, let us discuss the model in which the CMB-plus-noise component is neglected. Then, from Eq.([eq1]), the subject of interest is the phases of the foreground related to the coefficients, where the sum runs over the pixels in the area. Let us define the difference of phases using their tangents, as in Eq.([a4]). As one can see from Eq.([a4]), if the function defined there tends to zero, then the phase difference vanishes as well, which determines the properties of the estimator for correlated phases (see Eq.([eq1])).
Below, the object of our investigation is the function from Eq.([a5]). In particular, we are interested in its asymptotic behaviour, which should directly reflect the symmetry of the foreground signal. Simple algebra allows us to represent this function in the form of Eq.([a6]). Taking into account that the area is located close to the Galactic plane, let us discuss the properties of the N-function in this limit, using the asymptotics of the Legendre polynomials. After simple algebra we obtain Eq.([a7]), where
$$
\begin{aligned}
&\bigl\{\cos[(\ell+\tfrac{1}{2})(\theta_j+\theta_k)+m\pi-\tfrac{\pi}{2}] + \cos[(\ell+\tfrac{1}{2})(\theta_j-\theta_k)]\bigr\}(\cos\theta_k\delta-\cos\theta_j\delta) \\
&\quad - \bigl\{\sin[(\ell+\tfrac{1}{2})(\theta_j+\theta_k)+m\pi-\tfrac{\pi}{2}] - \sin[(\ell+\tfrac{1}{2})(\theta_j-\theta_k)]\bigr\}(\sin\theta_k\delta-\sin\theta_j\delta).
\end{aligned}
\qquad (\mathrm{a8})
$$
From Eq.([a8]) one can see the symmetry of the Legendre polynomials, which manifests itself through the parity of $\ell+m$. If the pixels lie exactly in the plane, then depending on this parity we obtain the behaviour of Eq.([a9]); thus, by choosing the appropriate mode we take the corresponding properties of the Legendre polynomials into consideration. However, as one can see from Eq.([a9]), the periodicity of the Galactic image is not exact. In reality we have pixels which contain the Galactic signal and have coordinates close to, but not exactly equal to, the plane. Let us introduce a new variable characterizing the deviation of the $j$-th pixel location from the plane. From Eq.([a8]) one can find the corresponding correction. Thus, if a pixel contains the signal from the Galactic foreground, the deviation from the centre of the Galactic plane should be small enough. It is clear that this condition does not necessarily correspond to the properties of the Galactic image, as is clearly seen from the K, Ka and Q band signals. Taking the above-mentioned properties of the function into account, we write its asymptotics in the limit applicable to the analysis of the Galactic signal in the V and W bands:
$$
\begin{aligned}
&\bigl\{\cos[(\ell+\tfrac{1}{2})(\delta_k-\delta_j)] + (-1)^{\ell+m}\cos[(\ell+\tfrac{1}{2})(\delta_k+\delta_j)]\bigr\} \\
&\quad + \delta(\delta_j-\delta_k)\bigl\{\sin[(\ell+\tfrac{1}{2})(\delta_k-\delta_j)] + (-1)^{\ell+m}\sin[(\ell+\tfrac{1}{2})(\delta_k+\delta_j)]\bigr\}.
\end{aligned}
\qquad (\mathrm{a10})
$$
Thus, combining Eq.([a7]) and Eq.([a10]), we obtain Eq.([a11]). One may think that the choice of the mode of the Legendre polynomials described above automatically guarantees cancellation of the brightest part of the signal from the map without any restriction on the symmetry and amplitude of the foreground. To show that the symmetry of the Galactic signal is important, let us discuss a few particular cases which illuminate this problem more clearly. Firstly, let us take a look at the Galactic centre (GC), which is one of the brightest sources of the signal. For the GC the corresponding amplitudes are localized in pixels lying essentially at the origin of the Galactic coordinate system. From Eq.([a11]) one can see that for the GC the function is equivalent to zero. More accurately, taking into account that the image of the GC has a characteristic size of order the FWHM (the full width at half maximum of the beam), in Eq.([a11]) we obtain, in addition to the parameter $\delta$, a second small parameter. Secondly, let us discuss the model of two bright point-like sources located symmetrically relative to the GC. Let us assume that the two sources lie at equal angular distances from the Galactic plane, on opposite sides of the GC.
Once again, from Eq.([a11]) we find that the contribution vanishes for all $\ell$ and $m$, and these point sources will be automatically removed by the estimator even if they are bright. Another possibility is related to the symmetry of the Galactic image in the latitude direction. We would like to recall that Eq.([a11]) was obtained under the approximation of small deviations from the plane. This means that the relevant combination of multipole and deviation can be either large or small. For small values, from Eq.([a10]) we obtain a correction of the form
$$
\ldots + \delta\,(\ell+\tfrac{1}{2})\bigl[-(\delta_j-\delta_k)^2 + (-1)^{\ell+m}(\delta_j^2-\delta_k^2)\bigr].
\qquad (\mathrm{a12})
$$
As one can see from Eq.([a12]), bright sources located at the same coordinates ($\delta_j=\delta_k$) do not contribute to the estimator.

Górski, K. M., Hivon, E., Wandelt, B. D., 1999, in A. J. Banday, R. S. Sheth and L. da Costa, eds, Proceedings of the MPA/ESO Cosmology Conference "Evolution of Large-Scale Structure", PrintPartners Ipskamp, NL
We study a specific correlation in the spherical harmonic multipole domain for cosmic microwave background (CMB) analysis. This group of correlations between harmonic modes is caused by the symmetric signal in the Galactic coordinate system. An estimator targeting such correlations therefore helps remove localized bright point-like sources in the Galactic plane and the strong diffuse component down to the CMB level. We use three toy models to illustrate the significance of these correlations and apply this estimator to some derived CMB maps with foreground residuals. In addition, we show that our proposed estimator significantly damps the phase correlations caused by Galactic foregrounds. This investigation provides an understanding of the mode correlations caused by Galactic foregrounds, which is useful for paving the way for foreground-cleaning methods for the CMB.
inspection of bridges , tunnels , wind turbines , and other large civil engineering structures for defects is a time - consuming , costly , and potentially dangerous task . in the future , _ smart coating _ technology , or _smart paint _ , could do the job more efficiently and without putting people in danger .the idea behind smart coating is to form a thin layer of a specific substance on an object which then makes it possible to measure a condition of the surface ( such as temperature or cracks ) at any location , without direct access to the location .the concept of smart coating already occurs in nature , such as proteins closing wounds , antibodies surrounding bacteria , or ants surrounding food to transport it to their nest .these diverse examples suggest a broad range of applications of smart coating technology in the future , including repairing cracks or monitoring tension on bridges , repairing space craft , fixing leaks in a nuclear reactor , or stopping internal bleeding .we continue the study of coating problems in the context of self - organizing programmable matter consisting of simple computational elements , called particles , that can establish and release bonds and can actively move in a self - organized way using the geometric version of the amoebot model presented in . in doing so , we proceed to investigate the runtime analysis of our universal coating algorithm , introduced in .we first show that coating problems do not only have a ( trivial ) linear lower bound on the runtime , but that there is also a linear lower bound on the competitive gap between the runtime of fully local coating algorithms and coating algorithms that rely on global information .we then investigate the worst - case time complexity of our universal coating algorithm and show that it terminates within a linear number of rounds with high probability ( w.h.p . ) , where is the number of particles in the system and is a constant . ] , which implies that our algorithm is optimal in terms of worst - case runtime and also in a competitive sense .moreover , our simulation results show that in practice the competitive ratio of our algorithm is often better than linear . in the _ geometric amoebot model _, we consider the graph , where is the infinite regular triangular grid graph .each vertex in is a position that can be occupied by at most one particle ( see figure [ fig : graph_handover](a ) ) .each particle occupies either a single node or a pair of adjacent nodes in .any structure a particle system can form can be represented as a subgraph of .two particles occupying adjacent nodes are _ connected _ by a _bond _ , and we refer to such particles as _ neighbors_. the bondsdo not only ensure that the particles form a connected structure but they are also used for exchanging information as explained below .particles move by executing a series of _ expansions _ and _ contractions_. a particle which occupies one node is _ contracted _ and can expand to an unoccupied adjacent node to occupy two nodes .if it occupies two nodes it is _ expanded _ and can contract to occupy a single node . in figure[ fig : graph_handover](b ) , we illustrate a set of expanded and contracted particles on the underlying graph . for an expanded particle, we denote the node the particle last expanded into as the _ head _ of the particle and the other occupied node as its _tail_. for a contracted particle , the single node occupied by the particle is both its head and its tail . 
, where nodes of are shown as black circles .( b ) shows five particles on ; the underlying graph is depicted as a gray mesh ; a contracted particle is depicted as a single black circle and an expanded particle is depicted as two black circles connected by an edge .( c ) depicts the resulting configuration after a handover was performed by particles and in ( b ) . ][ fig : graph_handover ] to stay connected as they move , neighboring particles coordinate their motion in a _ handover _ , which can occur in two ways .a contracted particle can initiate a handover by expanding into a node occupied by an expanded neighbor , `` pushing '' and forcing it to contract .alternatively , an expanded particle can initiate a handover by contracting `` pulling '' a contracted neighbor to the node it is vacating , thereby forcing to expand .figures [ fig : graph_handover](b ) and [ fig : graph_handover](c ) illustrate two particles labeled and performing a handover .particles are _ anonymous _ but each has a collection of uniquely labeled _ ports _ corresponding to the edges incident to the nodes the particle occupies. bonds between adjacent particles are formed through ports that face each other .the particles are assumed to have a common _ chirality _ , meaning they share the same notion of _ clockwise ( cw ) direction_. this allows each particle to label its ports counting in the clockwise direction ; without loss of generality , we assume each particle labels its head and tail ports from to .however , particles may have different offsets of the labelings , so they do not share a common sense of orientation . each particle hosts a local memory of constant size for which any neighboring particle has read and write access. with this mechanism particles can communicate by writing into each other s memory .the _ configuration _ of the system at the beginning of time consists of the nodes in occupied by the object and the set of particles , and for each particle , contains the current state of , including whether it is expanded or contracted , its port labeling , and the contents of its local memory . following the standard asynchronous model of computation , we assume that the system progresses through atomic _activations _ of individual particles . at each ( atomic )activation a particle can perform at most one movement and an arbitrary bounded amount of computation involving its local memory and the shared memories of its neighbors .a classical result under this model is that for any asynchronous execution of atomic particle activations , we can organize these activations sequentially still producing the same end configuration .we count ( asynchronous ) time in terms of the number of activations .a _ round _ is over once each particle has been activated at least once .we assume the activation sequence to be _ fair _ , i.e. , for each particle and any point in time , will eventually be activated at some time . 
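As a concrete illustration of the expansion, contraction, and handover operations just described, here is a minimal Python sketch; the class layout and the `occupied` map are our own illustrative choices, not part of the amoebot model's specification.

```python
class Particle:
    """A particle occupies one node (contracted) or two adjacent nodes
    (expanded).  Nodes are opaque hashable identifiers of the triangular
    grid; adjacency checks are omitted for brevity."""
    def __init__(self, node):
        self.head = node
        self.tail = node      # head == tail while contracted

    def is_contracted(self):
        return self.head == self.tail

def push_handover(p, q, occupied):
    """Contracted particle p expands into the node that its expanded
    neighbor q vacates, forcing q to contract (a 'push' handover)."""
    assert p.is_contracted() and not q.is_contracted()
    vacated = q.tail          # q contracts onto its head, vacating its tail
    q.tail = q.head
    p.head = vacated          # p expands into the vacated node
    occupied[vacated] = p
    return p, q

# Example on a line of integer nodes: q occupies {1, 2}, p occupies {0}.
q, p = Particle(2), Particle(0)
q.tail = 1                    # q is expanded with head 2 and tail 1
occupied = {0: p, 1: q, 2: q}
push_handover(p, q, occupied)
print(p.tail, p.head, q.tail, q.head)   # p now occupies {0, 1}, q occupies {2}
```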
in the _ universal coating problem_ we consider an instance where represents the particle system and represents the fixed object to be coated .let be the number of particles in the system , be the set of nodes occupied by , and be the set of nodes occupied by ( when clear from the context , we may omit the notation ) .for any two nodes , the _ distance _ between and is the length of the shortest path in from to .the distance between a and is defined as .define _ layer _ to be the set of nodes that have a distance to the object , and let be the number of nodes in layer .an instance is _ valid _ if the following properties hold : 1 .the particles are all contracted and are initially in the _ idle _ state .the subgraphs of induced by and , respectively , are connected , i.e. , there is a single object and the particle system is connected to the object .the subgraph of induced by is connected , i.e. , the object has no holes .does contain holes , we consider the subset of particles in each connected region of separately . ] is -connected , i.e. , can not form _ tunnels _ of width less than .note that a width of at least is needed to guarantee that the object can be evenly coated .the coating of narrow tunnels requires specific technical mechanisms that complicate the protocol without contributing to the basic idea of coating , so we ignore such cases in favor of simplicity .a configuration is _ legal _ if and only if all particles are contracted and meaning that all particles are as close to the object as possible or_ coat as evenly as possible_. a configuration is said to be _ stable _ if no particle in ever performs a state change or movement .an algorithm _ solves _ the universal coating problem if , starting from any valid instance , it reaches a _ stable legal configuration _ in a finite number of rounds .many approaches have been proposed with potential applications in smart coating ; these can be categorized as active and passive systems . in passive systems ,particles move based only on their structural properties and interactions with their environment , or have only limited computational ability and lack control of their motion .examples include dna self - assembly systems ( see , e.g. , the surveys in ) , population protocols , and slime molds .our focus is on active systems , in which computational particles control their actions and motions to complete specific tasks .coating has been extensively studied in the area of _ swarm robotics _ , but not commonly treated as a stand - alone problem ; it is instead examined as part of _ collective transport _ ( e.g. , ) or _ collective perception _ ( e.g. ,see respective section of ) .some research focuses on coating objects as an independent task under the name of _ target surrounding _ or _boundary coverage_. 
the techniques used in this context include stochastic robot behaviors , rule - based control mechanisms and potential field - based approaches .while the analytic techniques developed in swarm robotics are somewhat relevant to this work , those systems have more computational power and movement capabilities than those studied in this work .michail and spirakis recently proposed a model for network construction inspired by population protocols .the population protocol model is related to self - organizing particle systems but is different in that agents ( corresponding to our particles ) can move freely in space and establish connections at any time .it would , however , be possible to adapt their approach to study coating problems under the population protocol model . in we presented our universal coating algorithm and proved its correctness .we also showed it to be worst - case work - optimal , where work is measured in terms of number of particle movements .in this paper we continue the analysis of the _ universal coating algorithm _ introduced in . as our main contribution in this paper, we investigate the runtime of our algorithm and prove that our algorithm terminates within a _ linear number of rounds _ with high probability .we also present a matching linear lower bound for local - control coating algorithms that holds with high probability .we use this lower bound to show a _ linear lower bound on the competitive gap _ between fully local coating algorithms and coating algorithms that rely on global information , which implies that our algorithm is also optimal in a competitive sense .we then present some simulation results demonstrating that in practice the competitive ratio of our algorithm is often much better than linear . in section [ sec : algo ] , we again present the algorithm introduced in .we present a comprehensive formal runtime analysis of our algorithm , by first presenting some lower bounds on the competitive ratio of any local - control algorithm in section [ sec : performance ] , and then proving that our algorithm has a runtime of rounds w.h.p . in section [ sec : wcruntime ] , which matches our lower bounds .in this section , we summarize the universal coating algorithm introduced in ( see for a detailed description ) .this algorithm is constructed by combining a number of asynchronous primitives , which are integrated seamlessly without any underlying synchronization .the _ spanning forest _ primitive organizes the particles into a spanning forest , which determines the movement of particles while preserving system connectivity ; the _ complaint - based coating _ primitive coats the first layer by bringing any particles not yet touching the object into the first layer while there is still room ; the _ general layering _ primitive allows each layer to form only after layer has been completed , for ; and the _ node - based leader election _ primitive elects a node in layer 1 whose occupant becomes the leader particle , which is used to trigger the general layering process for higher layers .we define the set of _ states _ that a particle can be in as _ idle _ , _ follower _ , _ root _ , and _ retired_. in addition to its state , a particle maintains a constant number of other flags , which in our context are constant size pieces of information visible to neighboring particles .a flag owned by some particle is denoted by .recall that a _ layer _ is the set of nodes that are equidistant to the object .a particle keeps track of its current layer number in . 
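The constant-size local state sketched above can be summarized as follows; the field names and the specific modulus are illustrative assumptions rather than the paper's exact definitions.

```python
from dataclasses import dataclass
from typing import Optional

LAYER_MOD = 4   # assumed small constant; layer numbers are stored modulo a
                # constant, as discussed next

@dataclass
class ParticleMemory:
    """Constant-size state, readable and writable by neighboring particles."""
    state: str = 'idle'                 # 'idle' | 'follower' | 'root' | 'retired'
    parent_port: Optional[int] = None   # port label toward the parent (followers)
    move_dir: Optional[int] = None      # port toward the next position (roots)
    layer: Optional[int] = None         # current layer number modulo LAYER_MOD
    complaint_flags: int = 0            # at most two buffered complaint flags
```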
in order to respect the constant - size memory constraint of particles ,we take all layer numbers modulo . each root particle has a flag which stores a port label pointing to a node of the object if , and to an occupied node adjacent to its head in layer if .we now describe the coating primitives in more detail .the * _ spanning forest primitive _ * ( algorithm [ alg : spanningforestalgorithm ] ) organizes the particles into a spanning forest , which yields a straightforward mechanism for particles to move while preserving connectivity ( see for details ) .initially , all particles are _idle_. a particle touching the object changes its state to _root_. for any other idle particle , if has a root or a follower in its neighborhood , it stores the direction to one of them in , changes its state to _ follower _ , and generates a complaint flag ; otherwise , it remains idle .a follower particle uses handovers to follow its parent and updates the direction as it moves in order to maintain the same parent in the tree ( note that the particular particle at may change due to s parent performing a handover with another of its children ) . in this way, the trees formed by the parent relations stay connected , occupy only the nodes they covered before , and do not mix with other trees .a root particle uses the flag to determine its movement direction . as moves , it updates so that it always points to the next position of a clockwise movement around the object . for any particle , we call the particle occupying the position that resp . points to the _ predecessor _ of .if a root particle does not have a predecessor , we call it a _ super - root_. a particle acts depending on its state as described below : + lx * idle * : & if is adjacent to the object , it becomes a _ root _ particle , makes the current node it occupies a _ leader candidate position _ , and starts running the leader election algorithm .if is adjacent to a _ retired _ particle , also becomes a _root _ particle .if a neighbor is a root or a follower , sets the flag to the label of the port to , puts a _ complaint flag _ in its local memory , and becomes a _follower_. if none of the above applies , remains idle .+ * follower * : & if is contracted and adjacent to a retired particle or to , then becomes a _ root_ particle .if is contracted and has an expanded parent , then initiates handover ( algorithm [ alg : handover ] ) ; otherwise , if is expanded , it considers the following two cases : if has a contracted child particle , then initiates handover ; if has no children and no idle neighbor , then contracts .finally , if is contracted , it runs the function forwardcomplaint ( algorithm [ alg : complaint ] ) .+ * root * : & if particle is in layer 1 , participates in the leader election process .if is contracted , it first executes markerretiredconditions ( algorithm [ alg : retiredcondition ] ) and becomes _ retired _ , and possibly also a _ marker _ , accordingly .if does not become retired , then if it has an expanded root in , it initiates handover ; otherwise , calls layerextension ( algorithm [ alg : boundarydirectionalgorithm ] ) .if is expanded , it considers the following two cases : if has a contracted child , then initiates handover ; if has no children and no idle neighbor , then contracts .finally , if is contracted , it runs forwardcomplaint . 
* Retired * : clears a potential complaint flag from its memory and performs no further action.

The complaint-based coating primitive is used for the coating of the first layer. Each time a particle holding at least one complaint flag is activated, it forwards one to its predecessor as long as that predecessor holds fewer than two complaint flags. We allow each particle to hold up to two complaint flags to ensure that a constant-size memory is sufficient for storing the complaint flags and that the flags quickly move forward to the super-roots (a minimal sketch of this forwarding rule is given below). A contracted super-root only expands if it holds at least one complaint flag, and when it expands, it consumes one of these complaint flags. All other roots move forward whenever possible (i.e., no complaint flags are required) by performing a handover with their predecessor (which must be another root) or a successor (which is a root or follower of its tree), with preference given to a follower so that additional particles enter layer 1. As we will see, these rules ensure that whenever there are particles in the system that are not yet at layer 1, eventually one of these particles will move to layer 1, unless layer 1 is already completely filled with contracted particles.

The leader election primitive runs during the complaint-based coating primitive to elect a node in layer 1 as the leader position. This primitive is similar to the leader election algorithm from earlier work, with the difference that leader candidates are nodes instead of static particles (which is important because in our case particles may still move during the leader election primitive). The primitive only terminates once all positions in layer 1 are occupied. Once the leader position is determined, all positions in layer 1 are filled by contracted particles, and whatever particle currently occupies that position becomes the leader. This leader becomes a marker particle, marking a neighboring position in the next layer as a marked position, which determines a starting point for layer 2, and becomes retired. Once a contracted root has a retired particle in its movement direction, it retires as well, which causes the particles in layer 1 to become retired in counter-clockwise order. At this point, the general layering primitive becomes active, which builds subsequent layers until there are no longer followers in the system. If the leader election primitive does not terminate (which only happens if layer 1 is never completely filled), then the complaint flags ensure that the super-roots eventually stop, which eventually results in a stable legal coating.
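The forwarding rule referenced above can be sketched as follows; the list ordering and the `flags` counter are assumptions made for illustration, with particles ordered so that `chain[i+1]` is the predecessor of `chain[i]` (i.e., flags flow toward the super-root at the end of the list).

```python
from types import SimpleNamespace

def forward_complaints(chain):
    """One pass of the complaint-forwarding rule: a particle holding a flag
    passes one flag to its predecessor whenever that predecessor currently
    buffers fewer than two flags.  Note that in this sequential pass a flag
    may advance more than one position; in the asynchronous algorithm each
    activation forwards at most one flag."""
    for i in range(len(chain) - 1):
        p, pred = chain[i], chain[i + 1]
        if p.flags > 0 and pred.flags < 2:
            p.flags -= 1
            pred.flags += 1

# Example: four particles, the last one being closest to a super-root.
chain = [SimpleNamespace(flags=f) for f in (1, 1, 0, 2)]
forward_complaints(chain)
print([p.flags for p in chain])   # flags have moved toward the super-root
```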
In the layer-extension step, the layer number of any node occupied by the object is taken to be 0, and a root determines its own layer from a neighbor with the smallest layer number (stored modulo a small constant); it then expands in the corresponding direction and consumes a complaint flag if it holds one.

In the general layering primitive, whenever a follower is adjacent to a retired particle, it becomes a root. Root particles continue to move along positions of their layer in a clockwise (if the layer number is odd) or counter-clockwise (if the layer number is even) direction until they reach either the marked position of that layer, a retired particle in that layer, or an empty position of the previous layer (which causes them to change direction). Complaint flags are no longer needed to expand into empty positions. Followers follow their parents as before. A contracted root particle may retire if: (i) it occupies the marked position and the marker particle in the lower layer tells it that all particles in that layer are retired (which it can determine locally), or (ii) it has a retired particle in its movement direction. Once a particle at a marked position retires, it becomes a marker particle and marks a neighboring position in the next layer as a marked position.

Recall that a round is over once every particle in the system has been activated at least once. The runtime of a coating algorithm is defined as the worst-case number of rounds (over all sequences of particle activations) required to solve the coating problem. Certainly, there are instances where every coating algorithm has a runtime of $\Omega(n)$ rounds (see Lemma [lem:lowerbound]), though there are also many other instances where the coating problem can be solved much faster. Since a worst-case runtime of $\Omega(n)$ is fairly large and therefore not very helpful for distinguishing between different coating algorithms, we intend to study the runtime of coating algorithms relative to the best possible runtime.

[lem:lowerbound] The worst-case runtime required by any local-control algorithm to solve the universal coating problem is $\Omega(n)$.

(Figure [fig:worstcaseretiredstructure]: particles (black dots) in a line connected to the surface via a single particle.)

Assume the particles form a single line connected to the surface via a single particle (Figure [fig:worstcaseretiredstructure]), and suppose the object surface is large enough that all particles must eventually touch it. Since the last particle of the line is at distance $\Omega(n)$ from the object, it will take $\Omega(n)$ rounds in the worst case (requiring $\Omega(n)$ movements) until it touches the object's surface. This worst case can happen, for example, if that particle performs no more than one movement (either an expansion or a contraction) per round. Unfortunately, a large lower bound also holds for the competitiveness of any local-control algorithm.
A coating algorithm $\mathcal{A}$ is called $c$-competitive if for any valid instance $(P,O)$,
$$T_{\mathcal{A}}(P,O) \leq c \cdot \mathrm{OPT}(P,O) + k,$$
where $T_{\mathcal{A}}(P,O)$ is the runtime of $\mathcal{A}$, $\mathrm{OPT}(P,O)$ is the minimum runtime needed to solve the coating problem, and $k$ is a value independent of $(P,O)$.

[thm:competitiveness] Any local-control algorithm that solves the universal coating problem has a competitive ratio of $\Omega(n)$.

We construct an instance of the coating problem which can be solved by an optimal algorithm in $O(1)$ rounds, but which requires any local-control algorithm $\Omega(n)$ times longer. Let the object be a straight line of arbitrary (finite) length, and let the particles entirely occupy layer 1, with the exception of one missing particle below the line, equidistant from its sides, and one additional particle above the line in layer 2, also equidistant from its sides (see Figure [fig:opt1borders]). An optimal algorithm could move the particles to solve the coating problem for the given example in $O(1)$ rounds, as in Figure [fig:opt2]. Note that the optimal algorithm always maintains the connectivity of the particle system, so its runtime is valid even under the constraint that any connected component of particles must stay connected. However, for our local-control algorithms we allow particles to disconnect from the rest of the system.

(Figure [fig:opt1borders]: the particles are all contracted and occupy the positions around the object, with the exception that there is one unoccupied node below the object and one extra particle above the object; the borders $L$ and $R$ are shown as red lines.)

Now consider an arbitrary local-control algorithm for the coating problem. Given a round $r$, we define the imbalance $\phi_r(L)$ at border $L$ as the net number of particles that have crossed from the top of $L$ to the bottom until round $r$; similarly, the imbalance $\phi_r(R)$ at border $R$ is defined to be the net number of particles that have crossed from the bottom of $R$ to the top until round $r$. Certainly, there is an activation sequence in which information and particles can only travel a distance of up to $r$ nodes within the first $r$ rounds. Hence, for sufficiently small $r$, the probability distributions of $\phi_r(L)$ and $\phi_r(R)$ are independent of each other. Additionally, particles up to a distance of $r$ from $L$ and $R$ cannot distinguish which border they are closer to, since the position of the gap is equidistant from the borders. This symmetry also implies that $\Pr[\phi_r(L)=k]=\Pr[\phi_r(R)=k]$ for every $k$. Consequently, for $r = o(n)$ the net flows across the two borders cannot reliably route the surplus particle to the gap, so the algorithm still needs $\Omega(n)$ further rounds. Since, on the other hand, $\mathrm{OPT}(P,O)=O(1)$, we have established a linear competitive ratio. Therefore, even the competitive ratio can be very high in the worst case. We will revisit the notion of competitiveness in Section [sec:experimental].

In this section, we show that our algorithm solves the coating problem within a linear number of rounds w.h.p. We start with some basic notation in Section [sec:preliminaryanalysis]. Section [sec:parallel] presents a simpler synchronous parallel model for particle activations that we can use to analyze the worst-case number of rounds. Section [sec:firstlayer] presents the analysis of the number of rounds required to coat the first layer. Finally, in Section [sec:higherlayers], we analyze the number of rounds required to complete all other coating layers, once layer 1 has been completed.

We start with some notation. Recall that $B_i$ denotes the number of nodes at distance $i$ from the object (i.e., the number of nodes in layer $i$). Let the layer number of the final layer for the $n$ particles be denoted accordingly (i.e.
, satisfies ) .layer is said to be _ complete _ if every node in layer is occupied by a contracted retired particle ( for ) , or if all particles have reached their final position , are contracted , and never move again ( for ) . given a configuration , we define a directed graph over all nodes in occupied by _ active _ ( follower or root ) particles in . for every expanded active particle in , contains a directed edge from the tail to the head of . for every follower , has a directed edge from the head of to . for the purposes of constructing , we also define parents for root particles : a root particle sets to be the active particle occupying the node in direction once has performed its first handover expansion with . for every root particle , has a directed edge from the head of to , if it exists . certainly , since every node has at most one outgoing edge in , the nodes of form either a collection of disjoint trees or a ring of trees .a ring of trees may occur in any layer , but only temporarily ; the leader election primitive ensures that a leader emerges and retires in layer 1 and marker particles emerge and retire in higher layers , causing the ring in to break .the super - roots defined in section [ subsec : coatingprimitives ] correspond to the roots of the trees in .a _ movement _ executed by a particle can be either a _ sole contraction _ in which contracts and leaves a node unoccupied , a _ sole expansion _ in which expands into an adjacent unoccupied node , a _ handover contraction with _ in which contracts and forces its contracted neighbor to expand into the node it vacates , or a _ handover expansion with _ in which expands into a node currently occupied by its expanded neighbor , forcing to contract . in this section ,we show that instead of analyzing our algorithm for asynchronous activations of particles , it suffices to consider a much simpler model of parallel activations of particles .movement schedule _ to be a sequence of particle system configurations .[ defn : parallelschedule ] a movement schedule is called a _ parallel schedule _if each is a valid configuration of a connected particle system ( i.e. , each particle is either expanded or contracted , and every node of is occupied by at most one particle ) and for every is reached from such that for every particle one of the following properties holds : 1 . occupies the same node(s ) in and , 2 . expands into an adjacent node that was empty in , 3 . contracts , leaving the node occupied by its tail empty in , or 4 . is part of a handover with a neighboring particle .while these properties allow at most one contraction or expansion per particle in moving from to , multiple particles may move in this time . 
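To illustrate how one configuration of a parallel schedule is obtained from the previous one, the following sketch performs a single greedy round for particles on a line, with the super-root in front and movement toward higher node indices; the representation of particles as [tail, head] pairs, and the omission of complaint flags, are illustrative assumptions on our part.

```python
def greedy_parallel_round(parts, path_len):
    """One round of a greedy schedule on a line (sketch).  parts[i] is the
    [tail, head] of particle i, with parts[0] the super-root and parts[i+1]
    the child of parts[i]; each particle performs at most one expansion or
    contraction per round, matching the parallel schedule definition above."""
    occupied = {x for t, h in parts for x in (t, h)}
    moved = set()
    for i, (t, h) in enumerate(parts):
        if i in moved:
            continue
        if t == h:                                   # contracted
            if i == 0 and h + 1 < path_len and h + 1 not in occupied:
                parts[i][1] = h + 1                  # super-root expands forward
                occupied.add(h + 1)
                moved.add(i)
        else:                                        # expanded
            child = i + 1 if i + 1 < len(parts) else None
            if child is not None and parts[child][0] == parts[child][1]:
                parts[i][0] = h                      # handover: parent contracts
                parts[child][1] = t                  # child expands into the tail
                moved.update((i, child))
            elif child is None:
                parts[i][0] = h                      # childless particle contracts
                occupied.discard(t)
                moved.add(i)
    return parts

# Example: super-root contracted at node 5, one contracted and one expanded follower.
print(greedy_parallel_round([[5, 5], [4, 4], [2, 3]], path_len=10))
```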
consider an arbitrary fair asynchronous activation sequence for a particle system , and let , for , be the particle system configuration at the end of asynchronous round in if each particle moves according to algorithm [ alg : spanningforestalgorithm ] .a _ forest schedule _ is a parallel schedule with the property that is a forest of one or more trees , and each particle follows the unique path which it would have followed according to , starting from its position in .this implies that remains a forest of trees for every .a forest schedule is said to be _ greedy _ if all particles perform movements according to definition [ defn : parallelschedule ] in the direction of their unique paths whenever possible .we begin our analysis with a result that is critical to both describing configurations of particles in greedy forest schedules and quantifying the amount of progress greedy forest schedules make over time .specifically , we show that if a forest s configuration is `` well - behaved '' at the start , then it remains so throughout its greedy forest schedule , guaranteeing that progress is made once every two configurations .[ lem : expparentchild ] given any fair asynchronous activation sequence , consider any greedy forest schedule .if every expanded parent in has at least one contracted child , then every expanded parent in also has at least one contracted child , for .suppose to the contrary that is the first configuration that contains an expanded parent which has all expanded children .we consider all possible expanded and contracted states of and its children in and show that none of them can result in and its children all being expanded in .first suppose is expanded in ; then by supposition , has a contracted child . by definition [ defn : parallelschedule ], can not perform any movements with its children ( if they exist ) , so performs a handover contraction with , yielding contracted in , a contradiction .so suppose is contracted in .we know will perform either a handover with its parent or a sole expansion in direction since it is expanded in by supposition .thus , any child of in say does not execute a movement with in moving from to .instead , if is contracted in then it remains contracted in since it is only permitted to perform a handover with its unique parent ; otherwise , if is expanded , it performs either a sole contraction if it has no children or a handover with one of its contracted children , which it must have by supposition . in either case, has a contracted child in , a contradiction . as a final observation , two trees of the forestmay `` merge '' when the super - root of one tree performs a sole expansion into an unoccupied node adjacent to a particle of another tree .however , since is a root and thus only defines as its parent after performing a handover expansion with it , the lemma holds in this case as well .for any particle in a configuration of a forest schedule , we define its _ head distance _ ( resp . , _ tail distance _ ) to be the number of edges along from the head ( resp . , tail ) of to the end of . depending onwhether is contracted or expanded , we have . for any two configurations and and any particle , we say that _ dominates w.r.t . 
_ , denoted , if and only if and .we say that _ dominates _ , denoted , if and only if dominates with respect to every particle .then it holds : [ lem : forestdom ] given any fair asynchronous activation sequence which begins at an initial configuration in which every expanded parent has at least one contracted child , there is a greedy forest schedule with such that for all .we first introduce some supporting notation .let be the sequence of movements executes according to .let denote the remaining sequence of movements in after the forest schedule reaches , and let denote the first movement in .a greedy forest schedule can be constructed from configuration such that , for every , configuration is obtained from by executing only the movements of a greedily selected , mutually compatible subset of .argue by induction on , the current configuration number . is trivially obtained , as it is the initial configuration .assume by induction that the claim holds up to .w.l.o.g .let , for , be the greedily selected , mutually compatible subset of movements that performs in moving from to .suppose to the contrary that a movement is executed by a particle .it is easily seen that can not be ; since was excluded when was greedily selected , it must be incompatible with one or more of the selected movements and thus can not also be executed at this time .so , and we consider the following cases : is a sole contraction . then is expanded and has no children in , so we must have , since there are no other movements could execute , a contradiction . is a sole expansion .then is contracted and has no parent in , so we must have , since there are no other movements could execute , a contradiction . is a handover contraction with , one of its children . then at some time in before reaching , became a descendant of ; thus , must also be a descendant of in .if is not a child of in , there exists a particle such that is a descendant of , which is in turn a descendant of .so in order for to be a handover contraction with , must include actions which allow to `` bypass '' its ancestor , which is impossible .so must be a child of in , and must be contracted at the time is performed . if is also contracted in , then once again we must have .otherwise , is expanded in , and must have become so before was reached . butthis yields a contradiction : since is greedy , would have contracted prior to this point by executing either a sole contraction if it has no children , or a handover contraction with a contracted child whose existence is guaranteed by lemma [ lem : expparentchild ] , since every expanded parent in has a contracted child . is a handover expansion with , its unique parent .then we must have that is a handover contraction with , and an argument analogous to that of case 3 follows .we conclude by showing that each configuration of the greedy forest schedule constructed according to the claim is dominated by its asynchronous counterpart .argue by induction on , the configuration number . since , we have that .assume by induction that for all rounds , we have .consider any particle .since is constructed using the exact set of movements executes according to and each time moves it decreases either its head distance or tail distance by , it suffices to show that has performed at most as many movements in up to as it has according to up to .if does not perform a movement between and , we trivially have . 
otherwise , performs movement to obtain from .if has already performed according to before reaching , then clearly .otherwise , must be the next movement is to perform according to , since has performed the same sequence of movements in the asynchronous execution as it has in up to the respective rounds , and thus has equal head and tail distances in and . it remains to show that can indeed perform between and .if is a sole expansion , then is the super - root of its tree ( in both and ) and must also be able to expand in . similarly ,if is a sole contraction , then has no children ( in both and ) and must be able to contract in . if is a handover expansion with its parent , then must be expanded in .parent must also be expanded in ; otherwise , contradicting the induction hypothesis .an analogous argument holds if is a handover contraction with one of its contracted children .therefore , in any case we have , and since the choice of was arbitrary , .we can show a similar dominance result when considering complaint flags .[ defn : complaintparallelschedule ] a movement schedule is called a _complaint - based parallel schedule _if each is a valid configuration of a particle system in which every particle holds at most _ one _ complaint flag ( rather than two , as described in algorithm [ alg : complaint ] ) and for every , is reached from such that for every particle one of the following properties holds : 1 . does not hold a complaint flag and property 1 , 3 , or 4 of definition [ defn : parallelschedule ] holds , 2 . holds a complaint flag and expands into an adjacent node that was empty in , consuming , 3 . forwards a complaint flag to a neighboring particle which either does not hold a complaint flag in or is also forwarding its complaint flag . a _ complaint - based forest schedule_ has the same properties as a forest schedule , with the exception that is a complaint - based parallel schedule as opposed to a parallel schedule .a complaint - based forest schedule is said to be _ greedy _ if all particles perform movements according to definition [ defn : complaintparallelschedule ] in the direction of their unique paths whenever possible .we can now extend the dominance argument to hold with respect to _ complaint distance _ in addition to head and tail distances .for any particle holding a complaint flag in configuration , we define its complaint distance to be the number of edges along from the node occupies to the end of . for any two configurations and and any complaint flag , we say that _ dominates w.r.t . _ , denoted , if and only if . 
extending the previous notion of dominance, we say that _ dominates _ , denoted , if and only if dominates with respect to every particle and with respect to every complaint flag .it is also possible to construct a greedy complaint - based forest schedule whose configurations are dominated by their asynchronous counterparts , as we did for greedy forest schedules in lemma [ lem : forestdom ] .many of the details are the same , so as to avoid redundancy we highlight the differences here .the most obvious difference is the inclusion of complaint flags .definition [ defn : complaintparallelschedule ] restricts particles to holding at most one complaint flag at a time , where algorithm [ alg : complaint ] allows a capacity of two .this allows the asynchronous execution to not `` fall behind '' the parallel schedule in terms of forwarding complaint flags .basically , definition [ defn : complaintparallelschedule ] allows a particle holding a complaint flag in the parallel schedule to forward to its parent even if currently holds its own complaint flag , so long as is also forwarding its flag at this time .the asynchronous execution does not have this luxury of synchronized actions , so the mechanism of buffering up to two complaint flags at a time allows it to `` mimic '' the pipelining of forwarding complaint flags that is possible within one round of a complaint - based parallel schedule .another slight difference is that a contracted particle can not expand into an empty adjacent node unless it holds a complaint flag to consume .however , this restriction reflects algorithm [ alg : boundarydirectionalgorithm ] , so once again the greedy complaint - based forest schedule can be constructed directly from the movements taken in the asynchronous execution .moreover , since this restriction can only cause a contracted particle to remain contracted , the conditions of lemma [ lem : expparentchild ] are still upheld .thus , we obtain the following lemma : [ lem : flagforestdom ] given any fair asynchronous activation sequence which begins at an initial configuration in which every expanded parent has at least one contracted child , there is a greedy complaint - based forest schedule with such that for all .by lemmas [ lem : forestdom ] and [ lem : flagforestdom ] , once we have an upper bound for the time it takes a greedy forest schedule to reach a final configuration , we also have an upper bound for the number of rounds required by the asynchronous execution .hence , the remainder of our proofs will serve to upper bound the number of parallel rounds any greedy forest schedule would require to solve the coating problem for a given valid instance , where .let be such a greedy forest schedule , where is the initial configuration of the particle system ( of all contracted particles ) and is the final coating configuration . in sections [ sec : firstlayer ] and [ sec: higherlayers ] , we will upper bound the number of parallel rounds required by in the worst case to coat the first layer and higher layers , respectively . more specifically, we will bound the worst - case time it takes to complete a layer once layers have been completed . for convenience, we will not differentiate between complaint - based and regular forest schedules in the following sections , since the same dominance result holds whether or not complaint flags are considered . 
to prove these bounds , we need one last definition : a _ forest path schedule _ is a forest schedule with the property that all the trees of are rooted at a path , and each particle must traverse in the same direction .our algorithm must first organize the particles using the spanning forest primitive , whose runtime is easily bounded : following the spanning forest primitive , the particles form a spanning forest within rounds . initially all particles are idle . in each round any idle particle adjacent to the object , an active ( follower or root ) particle , or a retired particle becomes active .it then sets its parent flag if it is a follower , or becomes the root of a tree if it is adjacent to the object or a retired particle . in each roundat least one particle becomes active , so given particles in the system it will take rounds in the worst case until all particles join the spanning forest . for ease of presentation ,we assume that the particle system is of sufficient size to fill the first layer ( i.e. , ; the proofs can easily be extended to handle the case when ) ; we also assume that the root of a tree also generates a complaint flag upon its activation ( this assumption does not hurt our argument since it only increases the number of the flags generated in the system ) .let be the greedy forest path schedule where is a truncated version of , for is the configuration in in which layer becomes complete , and is the path of nodes in layer .the following lemma shows that the algorithm makes steady progress towards completing layer .[ lem : progresslayer1 ] consider a round of the greedy forest path schedule , where . then within the next two parallel rounds of , at least one complaint flag is consumed , at least one more complaint flag reaches a particle in layer , all remaining complaint flags move one position closer to a super - root along , or layer is completely filled ( possibly with some expanded particles ) .if layer 1 is filled , is satisfied ; otherwise , there exists at least one super - root in .we consider several cases : there exists a super - root in which holds a complaint flag .if is contracted , then it can expand and consume its flag by the next round . otherwise , consider the case when is expanded .if it has no children , then within the next two rounds it can contract and expand again , consuming its complaint flag ; otherwise , by lemma [ lem : expparentchild ] , must have a contracted child with which it can perform a handover to become contracted in and then expand and consume its complaint flag by . in any case , is satisfied .no super - root in holds a complaint flag and not all complaint flags have been moved from follower particles to particles in layer 1 .let be a sequence of particles in layer 1 such that each particle holds a complaint flag , no follower child of any particle except holds a complaint flag , and no particles between the next super - root and hold complaint flags . then , as each forwards its flag to according to definition [ defn : complaintparallelschedule ] , the follower child of holding a flag is able to forward its flag to , satisfying .no super - root in holds a complaint flag and all remaining complaint flags are held by particles in layer 1 . 
by definition [ defn : complaintparallelschedule ] , since no preference needs to be given to flags entering layer 1 , all remaining flags will move one position closer to a super - root in each round , satisfying .we use lemma [ lem : progresslayer1 ] to show first that layer will be filled with particles ( some possibly still expanded ) in rounds . from that point on , in another rounds , one can guarantee that expanded particles in layer will each contract in a handover with a follower particle , and hence all particles in layer will be contracted , as we see in the following lemma : [ lemma : filled ] after rounds , layer 1 must be filled with contracted particles .we first prove the following claim : after rounds of , layer must be filled with particles .suppose to the contrary that after rounds , layer 1 is not completely filled with particles .then none of these rounds could have satisfied of lemma [ lem : progresslayer1 ] , so one of , or must be satisfied every two rounds .case can be satisfied at most times ( accounting for at most rounds ) , since a super - root expands into an unoccupied position of layer 1 each time a complaint flag is consumed .case can also be satisfied at most times ( accounting for at most rounds ) , since once all remaining complaint flags are in layer 1 , every flag must reach a super - root in moves .thus , the remaining rounds must satisfy times , implying that flags reached particles in layer 1 from follower children .but each particle can hold at most one complaint flag , so at least flags must have been consumed and the super - roots have collectively expanded into at least unoccupied positions , a contradiction . by the claim , it will take at most rounds until layer is completely filled with particles ( some possibly expanded ) . in atmost another rounds , every expanded particle in layer will contract in a handover with a follower particle ( since ) , and hence all particles in layer will be contracted after rounds . once layer is filled , the leader election primitive can proceed .the full description of the universal coating algorithm in uses a node - based version of the leader election algorithm in for this primitive . for consistency, we kept this description of the primitive in this paper as well .however , in order to formally prove with high probability guarantees on the runtime of our universal coating algorithm , we use a monte carlo variant of the leader election algorithm in .a description of this variant and its corresponding proofs appear in .this updated algorithm elects a leader with high probability and gives the following runtime bound .[ lem : leaderelection ] within further rounds , a position of layer 1 has been elected as the leader position , w.h.p . once a leader position has been elected and either no more followers exist ( if ) or all positions are completely filled by contracted particles ( which can be checked in an additional rounds ) , the particle currently occupying the leader position becomes the leader particle . 
Once a leader has emerged, the particles in layer 1 retire, which takes further rounds. Together, we get:

[cor:layer1] The worst-case number of rounds to complete layer 1 is linear in $n$, w.h.p.

We again use the dominance results we proved in Section [sec:parallel] to focus on parallel schedules when proving an upper bound on the worst-case number of rounds for building layer $i$ once layer $i-1$ is complete, for $i \geq 2$. The following lemma provides a more general result which we can use for this purpose.

[lem:tree-time] Consider any greedy forest path schedule and any prefix of path nodes such that enough particles are available. If every expanded parent in the initial configuration has at least one contracted child, then within a number of configurations that is linear in the number of nodes and the initial distance of the super-root, these nodes will be occupied by contracted particles.

Let the super-root closest to the path be the particle under consideration, and suppose it initially occupies some node of the path. Additionally, suppose there are at least as many active particles as nodes to be occupied (otherwise, we do not have sufficient particles to occupy the nodes of the path). We argue by induction on the number of nodes of the path, starting from the first, which must be occupied by contracted particles. First suppose this number is one. By Lemma [lem:expparentchild], every expanded parent has at least one contracted child in any configuration, so the super-root is always able either to expand forward into an unoccupied node of the path if it is contracted, or to contract as part of a handover with one of its children if it is expanded. Thus, after a number of configurations proportional to its initial distance, it has moved forward, is contracted, and occupies its final position. Now suppose the number of nodes is larger, and that each earlier node becomes occupied by a contracted particle within the inductively assumed number of configurations. It suffices to show that the next node also becomes occupied by a contracted particle in at most two additional configurations. Let $q$ be the particle currently occupying that node (such a particle must exist since we supposed we had sufficient particles to occupy the nodes, and the schedule ensures the particles follow this unique path). If $q$ is contracted, then it remains contracted and occupying the node, so we are done. Otherwise, if $q$ is expanded, it has a contracted child by Lemma [lem:expparentchild]. The particle $q$ and this child thus perform a handover in which $q$ contracts to occupy only that node, proving the claim.

For convenience, we introduce some additional notation. Let the number of particles of the system that will not belong to layers 1 through $i$ be given, and let the round (resp., configuration) in which layer $i$ becomes complete be denoted accordingly. When coating some layer, each root particle moves either through the nodes in the layer in the set direction (cw or ccw) for that layer, or through the nodes in the layer in the opposite direction, over the already retired particles, until it finds an empty position in the layer. We bound the worst-case scenario for these two movements independently in order to get an upper bound. Let the path of nodes in the layer be listed in the order in which they appear from the marker position, following the layer's direction, and consider the greedy forest path schedule whose underlying path is a section of this path.
by lemma [ lem : tree - time ] , it would take rounds for all movements to complete ; an analogous argument shows that all movements complete in rounds .this implies the following lemma : [ lem : layeri2 ] starting from configuration , the worst - case additional number of rounds for layer to become complete is .putting it all together , for layers through : [ cor : higherlayers ] the worst - case number of rounds for to coat layers 2 through is .starting from configuration , it follows from lemma [ lem : layeri2 ] that the worst - case number of rounds for to reach a legal coating of the object is upper bounded by where is a constant .combining corollaries [ cor : layer1 ] and [ cor : higherlayers ] , we get that requires rounds w.h.p . to coat any given valid object starting from any valid initial configuration of the set of particles . by lemmas [ lem : forestdom ] and [ lem : flagforestdom ], the worst - case behavior of is an upper bound for the runtime of our universal coating algorithm , so we conclude : [ thm : chain ] the total number of asynchronous rounds required for the universal coating algorithm to reach a legal coating configuration , starting from an arbitrary valid instance , is w.h.p . , where is the number of particles in the system .in this section we present a brief simulation - based analysis of our algorithm which shows that in practice our algorithm exhibits a better than linear average competitive ratio .since ( as defined in section [ sec : performance ] ) is difficult to compute in general , we investigate the competitiveness with the help of an appropriate lower bound for .recall the definitions of the distances and for and .consider any valid instance .let be the set of all legal particle positions of ; that is , contains all sets such that the positions in constitute a coating of the object by the particles in the system .we compute a lower bound on as follows .consider any , and let denote the complete bipartite graph on partitions and .for each edge , set the cost of the edge to .every perfect matching in corresponds to an assignment of the particles to positions in the coating .the maximum edge weight in a matching corresponds to the maximum distance a particle has to travel in order to take its place in the coating .let be the set of all perfect matchings in .we define the _ matching dilation _ of as since each particle has to move to some position in for some to solve the coating problem , we have .the search for the matching that minimizes the maximum edge cost for a given can be realized efficiently by reducing it to a flow problem using edges up to a maximum cost of and performing binary search on to find the minimal such that a perfect matching exists .we note that our lower bound is not tight .this is due to the fact that it only respects the distances that particles have to move but ignores the congestion that may arise , i.e. , in certain instances the distances to the object might be very small , but all particles may have to traverse one `` chokepoint '' and thus block each other .[ fig : simresults ] we implemented the universal coating algorithm in the amoebot simulator ( see for videos ) . for simplicity, each simulation is initialized with the object as a regular hexagon of object particles ; this is reasonable since the particles need only know where their immediate neighbors in the object s border are relative to themselves , which can be determined independently of the shape of the border . 
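to make the lower bound used in these simulations concrete , the following is a minimal sketch ( ours , not the simulator s actual code ) of how the matching dilation described above could be computed : a binary search over the candidate bottleneck values , where each step tests for a perfect matching in the bipartite graph that keeps only the edges whose cost does not exceed the current threshold . the helper ` dist ` ( the grid distance between two positions ) and the two position lists are assumed inputs for illustration .

```python
import numpy as np
from scipy.sparse import csr_matrix
from scipy.sparse.csgraph import maximum_bipartite_matching

def matching_dilation(particles, coating_positions, dist):
    """Smallest D such that every particle can be assigned to a distinct
    coating position while travelling a distance of at most D (a bottleneck
    bipartite matching, found by binary search over the edge costs)."""
    cost = np.array([[dist(p, q) for q in coating_positions] for p in particles])
    thresholds = np.unique(cost)                  # candidate bottleneck values
    lo, hi = 0, len(thresholds) - 1
    while lo < hi:
        mid = (lo + hi) // 2
        # keep only edges whose cost does not exceed the current threshold
        allowed = csr_matrix((cost <= thresholds[mid]).astype(np.uint8))
        match = maximum_bipartite_matching(allowed, perm_type='column')
        if np.all(match >= 0):                    # a perfect matching exists
            hi = mid
        else:
            lo = mid + 1
    return thresholds[lo]
```

in the experiments one would evaluate this quantity for each legal set of coating positions under consideration and take the minimum , matching the definition of the matching dilation given above .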
the particle system is initialized as idle particles attached randomly around the hexagon s perimeter .the parameters that were varied between instances are the radius of the hexagon and the number of ( initially idle ) particles in .each experimental trial randomly generates a new initial configuration of the system .figure [ fig : simresults](a ) shows the number of rounds needed to complete the coating with respect to the hexagon object radius and the number of particles in the system .the number of rounds plotted are averages over 20 instances of a given with 95% confidence intervals .these results show that , in practice , the number of rounds required increases linearly with particle system size .this agrees with our expectations , since leader election depends only on the length of the object s surface while layering depends on the total number of particles . figure [ fig : simresults](b ) shows the ratio of the number of rounds to the matching dilation of the system .these results indicate that , in experiment , the average competitive ratio of our algorithm may exhibit closer to logarithmic behaviors .figure [ fig : simresults](c ) shows the number of rounds needed to complete the coating as the radius of the hexagon object is varied .the runtime of the algorithm appears to increase linearly with both the number of active particles and the size of the object being coated , and there is visibly increased runtime variability for systems with larger radii .this paper continued the study of universal coating in self - organizing particle systems .the runtime analysis shows that our universal coating algorithm , presented in , terminates in a linear number of rounds , so it is worst - case optimal .this , along with the linear lower bound on the competitive gap between local and global algorithms , further shows our algorithm to be competitively optimal .furthermore , the simulation results show the competitive ratio of our algorithm is better than linear in practice . in the future, we would like to apply the algorithm and analysis to the case of bridging , in which particles create structures across gaps between disconnected objects .we would also like to extend the algorithm to have self - stabilization capabilities , so that it could successfully complete coating without human intervention after occasional particle failure or outside interference .
imagine coating buildings and bridges with smart particles ( also coined smart paint ) that monitor structural integrity and sense and report on traffic and wind loads , leading to technology that could do such inspection jobs faster and cheaper and increase safety at the same time . in this paper , we study the problem of uniformly coating objects of arbitrary shape in the context of _ self - organizing programmable matter _ , i.e. , the programmable matter consists of simple computational elements called particles that can establish and release bonds and can actively move in a self - organized way . particles are anonymous , have constant - size memory and utilize only local interactions in order to coat an object . we continue the study of our universal coating algorithm by focusing on its runtime analysis , showing that our algorithm terminates within a _ linear number of rounds _ with high probability . we also present a matching linear lower bound that holds with high probability . we use this lower bound to show a _ linear lower bound on the competitive gap _ between fully local coating algorithms and coating algorithms that rely on global information , which implies that our algorithm is also optimal in a competitive sense . simulation results show that the competitive ratio of our algorithm may be better than linear in practice .
the snowmass young physicists ( syp ) was first organized at the community planning meeting held at fermilab in october of 2012 . this group was formed to provide a conduit for young ( nontenured ) particle physicists to participate in the forthcoming community summer study ( known as snowmass on the mississippi ) which took place in the summer of 2013 in minneapolis , mn . the primary charge taken up by the syp was to facilitate and encourage young people to get involved with physics studies and meetings in preparation for snowmass on the mississippi . the syp generated an online platform ( http://snowmassyoung.hep.net ) as well as an offline network for advertising tasks that needed to be done and connecting interested syp members with the relevant frontiers . this was accomplished by having our members attend many of the `` pre - snowmass '' meetings across the frontiers . while at these meetings , dedicated parallel sessions were arranged with remote participation to provide a platform for discussion and input . invited speakers from the fermilab directorate , department of energy ( doe ) , national science foundation ( nsf ) , as well as senior scientists and faculty came to speak with a broad section of the young community . in addition to these meetings , over a dozen syp town hall meetings were held prior to snowmass during which plans and concerns were raised . the second charge is related to a `` deliverable '' to the community summer study . the syp gathered information about demographics , career outlook , physics outlook , planned experiments , and general concerns of young physicists in the form of an online survey . this information is intended to provide a voice to the next generation of leaders in high energy physics ( hep ) and serve as a basis for discussion with senior physicists , politicians , and funding agencies about the current and future state of the field . additionally , the opinions expressed here represent many people who were unable to attend snowmass on the mississippi personally . the results of this survey constitute the majority of this paper . the next charge of the syp is to become a long term asset to young physicists . this is done through multiple channels such as providing information and resources to people in high energy physics when they are making career decisions . this includes , but is not limited to , information about current and planned scientific experiments and collaborations , having cross - frontier talks and seminars , and providing information and resources for those who decide to take the many skills learned in physics out into the general work force . furthermore , the syp will make efforts to engage members of congress from the many districts in which the syp members live and help lobby for the interests of young people in hep . moreover , the syp aims to provide a chance for young physicists to network and meet each other as well as become known outside of each of our particular subfields . beyond the above listed tasks , the syp has made contact with individuals who have recently left hep for non - academic career paths . through gathering information about their experiences we can illustrate the broader positive impact members of hep have on the wider u.s .
after leaving the field . moreover , this information can serve as an example of career opportunities for current would - be physicists in the future . in addition to the online survey that was taken leading up to snowmass , a follow - up document is available for people who know the contact information for those who have left hep ( http://snowmassyoung.hep.net/oa_contacts.pdf ) . the mechanism of young scientists organizing to provide input into the snowmass process is not a new phenomenon , as a group organized during snowmass 2001 and became known as the young particle physicists ( ypp ) . the ypp also administered a survey in order to collect the opinions and concerns of many in hep . this work became the basis for the syp in 2013 , who were in fact helped a great deal by these original scientists , many of whom have moved into leadership roles across hep . this paper is dedicated to showing a summary of the results of the syp 2013 survey . the data on which these results are based is now publicly available ( http://snowmassyoung.hep.net/syp2013_publicdata.tar.gz ) . section [ sec : method ] will outline the methodology of the survey . section [ sec : demo ] will present the results from the demographic information collected in the survey . in section [ sec : careeroutlook ] we present results pertaining to people s opinions on their career outlook in hep . more generally , subsection [ sec : spires ] helps frame this discussion in terms of available and filled jobs as reported by spires data . section [ sec : physicsoutlook ] presents results pertaining to opinions on the physics prospects in hep , including information about which of the planned experiments appear most exciting to the community . in section [ sec : nonacademic ] we present some of the opinions gathered from those who received their training within hep but have chosen to pursue careers outside of academia . finally , in section [ sec : surveyquestions ] we present all the survey questions and include some of the direct responses we received from the many participants . the goal of the survey administered by the snowmass young physicists was to collect a broad range of opinions that reflect the physics interests , the career outlook , and the general mood of the field leading up to snowmass 2013 . the survey was administered using google forms and available from april 1st to july 15th 2013 ( http://tinyurl.com/snowmassyoung ) . this survey contains four distinct sections : 1 . * demographic information : * this includes general information such as the gender , marital status , and citizenship , but also information that is specific to the demographics of hep . this information includes current frontier of work , current position within academia ( e.g. graduate student , post - doctoral researcher , tenured faculty , etc ... ) , and number of years in your current position . similar information for those on a non - academic career path was also gathered . 2 . * career outlook : * this section asked the respondent questions pertaining to their feelings and outlook on their current career . questions included what type of jobs are most attractive to young scientists , what factors impact their decisions to pursue an academic career , as well as what various external factors ( family , availability of jobs , job location , etc .
) impact their career decisions . 3 . * physics outlook : * these questions focused on the science that is being planned during snowmass 2013 . for example , respondents were asked : what their outlook for funding of hep in the future is , which frontier will have the greatest impact on hep in the next 10 years , and which of the planned experiments ( given from a non - exhaustive list ) they gave the highest priority to . 4 . * non - academic career paths : * finally , this section of the survey was dedicated to gathering information from those who had received their training within hep but have since left to pursue a career outside of academia . in this section we asked questions about how their time in hep prepared them for their current career . additionally , we gathered information to see how their professional lives compare to those working within hep . the survey was structured such that respondents would only see certain questions based on information about their current employment . figure [ fig : surveyflowchart ] shows a summary of the flow chart for the survey . this allows us to look for similarities across the different career paths without exposing respondents to questions that had little relevance to them . in hindsight , it would have been helpful to do the same for tenured faculty and young people , since many questions asked about preparing for careers outside of hep and had little relevance for those more senior members of our field . for the survey that is to be administered post - snowmass , http://tinyurl.com/post-snowmass , this issue has been resolved . the parsing of responses to the survey allowed us to identify correlations across different frontiers and different career stages . while many of the questions from the career outlook section focused on concerns most relevant to young scientists , most of the questions and answers included options that could be answered by all who were taking the survey . a small amount of data clean up was necessary before the sample could be analyzed . this was mostly to format answers to questions in such a way that they could be analyzed using c++ scripts . an example of one such clean up came from a question asking users to input the average number of hours worked in a week . many users inputted `` ~45 '' . this tilde was simply removed from the data before being outputted to c++ code . finally , results of the survey were outputted to a series of root files which can be read by a simple analysis script . a complete survey response is seen as an `` event '' , and this allows the end user to write a simple analysis script to apply selection , make histograms , and output the results . a complete package including the raw data , translated root files , and an example analysis script can be found here : http://snowmassyoung.hep.net/syp2013_publicdata.tar.gz . the plots shown in this paper have negligible uncertainties , and a full set of plots with uncertainties can be found here : http://snowmassyoung.hep.net/plots.html . demographic information composed an important part of the survey and included 10 - 12 questions ( depending on whether you were on the academic or non - academic path of the survey ) , including information about gender , marital status , number of children , and current employment . a total of 1112 responses were collected for the syp 2013 survey , of which 956 fit the `` young '' definition , e.g. non - tenured inside academia , and 74 came from non - academic career paths .
comparing this to the 2001 survey , which collected 1508 responses , of which 857 fit the `` young '' definition , we can see that we reached fewer total respondents , but a larger fraction of respondents were young people . below is a summary of some general demographic information collected and , where relevant , comparisons to u.s . 2010 census data are made . 1 . * what is your gender ? * + male : * 79% * ( u.s . 2010 census data : 49% ) + female : * 21% * ( u.s . 2010 census data : 51% ) + 2 . * what is your current marital status ? * + married : * 38% * ( u.s . 2010 census data for population 18 years and older : 51% ) + un - married : * 62% * ( u.s . 2010 census data for population 18 years and older : 49% ) + 3 . * how many children do you have ? * + no children : * 79% * ( u.s . 2010 census data for population 18 years and older : 52% ) + one or more children : * 21% * ( u.s . 2010 census data for population 18 years and older : 48% ) + 4 . * what is your household salary ( usd / year ) ? * + * making less than 75,000 usd / year : * 69% ( u.s . 2010 census data for population 18 years and older : 80% ) + * making 75,000 usd / year or more : * 31% ( u.s . 2010 census data for population 18 years and older : 20% ) + exploring the demographic information further , figure [ fig : currentpostion2013 ] shows the breakdown of the respondents who are currently working in academia by their current position . figure [ fig : currentpostion2001 ] shows this same breakdown of the hep respondents by their academic positions in the 2001 survey . the snowmass process has identified 7 `` frontiers '' in hep : energy , intensity , theory , cosmic , education / outreach , instrumentation , and computing . figure [ fig : frontier ] shows the number of respondents who primarily work in each of those frontiers . a large number of participants come from the energy frontier , with the intensity , theory , and cosmic frontiers making up the majority of the rest . figure [ fig : citizen ] shows the breakdown of survey participants by their current citizenship , with over half coming from the united states . figure [ fig : whereresearch ] shows where participants do the majority of their research . the largest number of respondents are based at universities , followed by cern and fnal . this is a shift from the responses found in the 2001 survey , which had the majority of its responses coming from users at fermilab . finally , figure [ fig : attend ] shows the breakdown of the survey participants that plan on attending the snowmass meeting . almost 60% of those responding did not plan on attending the snowmass meeting , with another 27% undecided . figure [ fig : contribute ] shows that fewer than one - fourth of survey respondents were contributing to snowmass prior to taking the survey . the demographic information tells us that the survey reached mostly young scientists in hep , many of whom do their research at universities and who had not been participating in the snowmass process prior to taking this survey .
moreover , many of those taking the survey will not be attending the snowmass meeting . this makes the data presented here particularly relevant , as it expresses the opinions and concerns of many who otherwise have not been involved in the snowmass process . career outlook questions were generally focused on the opinions and outlook of those pursuing a career within hep . information about the outlook for future funding and its impact on career decisions , as well as questions about the dominant factors that impact decisions to pursue an academic career , were covered . some general trends we observed about the career outlook of those in hep responding to our survey : 1 . * on a scale of 1 to 10 ( 1 = funding will stop , 10 = funding will thrive ) how do you feel about the funding within your frontier within the next decade ? * + nearly 60% of the respondents believed that funding was more likely to decline ( giving an answer less than or equal to 5 ) in the future . + figure [ fig : fundingyoungold ] shows the breakdown of this question by the classification of young and senior . this demonstrates that the young respondents are slightly more pessimistic about the future funding than their senior counterparts . + 2 . * do you intend to pursue a permanent career in academia ? * + despite this bleak funding outlook being well known amongst the young scientists , many of the respondents still intend to pursue a career in academia . + 3 . * which of the following career related concerns do you find the most important to you and your future in high energy physics ? * + the two most important career related concerns were the availability of university based jobs followed by the availability of laboratory based jobs . + in addition to the general trends from the questions listed above , figure [ fig : jobsearchall ] shows responses to the question of where young scientists who intend to seek an academic position will search for a permanent position . this shows that the majority intend to search for a permanent position within the united states , with the next largest group willing to search wherever they can find a job . figure [ fig : jobsearchusnonus ] shows the breakdown of this question based on whether the person is a u.s . or non - u.s . citizen . clearly , u.s . citizens prefer to search for a job within the u.s . while non - u.s . citizens are more apt to take a job wherever they can find one . one possible interpretation of this breakdown could be that , in order to attract the best of the non - u.s . members of the hep community , job availability is already a driving force . figure [ fig : jobsearchgradpostdoc ] suggests that the younger generation of u.s . scientists are more willing to take a job wherever they can find one compared to the post - doctoral respondents . figure [ fig : jobsearchfrontier ] shows the breakdown of this question by frontier , indicating that people working in the intensity frontier are more apt to search for a job in the u.s .
while those from the theory , cosmic , and energy frontiers are more open to accepting a job wherever they can find one . these trends again suggest that a major driving force for where the young and bright physicists will go is where the jobs are available . a further confirmation of this general trend can be seen in figure [ fig : jobsearchacademia ] . this figure shows where our respondents intend to search for a job , broken down by whether they intend to pursue an academic job . the results indicate that those who wish to pursue a career in academia are more willing to go wherever the jobs are , regardless of location . to follow up on this observed trend , we asked respondents who intend to seek an academic career whether they would be more inclined to search for a job outside the u.s . if the next major experiment in their frontier is built outside the u.s . figure [ fig : leavingusall ] indicates that nearly 50% would be more likely to search for a job outside the u.s . if the next major experiment is built outside of the u.s . this trend is greater amongst the non - u.s . members of hep , as shown in figure [ fig : leavinguscitizen ] , and much more heavily present amongst those seeking a job in academia , as shown in figure [ fig : leavingusacademia ] . finally , figure [ fig : leavinggradpostdoc ] indicates that graduate students ( the youngest in our field ) are more inclined than post - docs to search elsewhere if the next major experiment is built outside the u.s . , furthering the trends we observe above . figure [ fig : leavingusfrontier ] reinforces this and shows that those in the intensity and energy frontiers are more apt to search for a job outside the u.s . if the next major experiment from their frontier is built outside the u.s . taking these two questions together leads to the observation that while most of the respondents to the survey would prefer to seek a job in the u.s . , there could be `` brain drain '' depending on whether or not major experiments take place in the u.s . this effect is particularly apparent among graduate students and in the ability of the u.s . to attract non - u.s . scientists , but uniformly impacts all of the frontiers . to expand on these job related concerns further , we gathered some data from the inspire high energy physics information system . this data can be found at http://hoc.web.cern.ch/hoc/jobs_stats2.txt . figure [ fig : spires ] shows the jobs listed on inspire that were filled from 2007 - 2012 . these jobs fall into the categories of post - doc , junior , and senior positions . this job data is broken down by location , showing jobs listed in north america and jobs everywhere else ( global ) .
a clear downward trend in the last few years can be observed broadly in all markets , with no sustained growth whatsoever since 2007 . a continuation of these trends will almost certainly lead to fewer young scientists remaining in the field of particle physics and exacerbate the trend of failing to attract non - u.s . scientists , as indicated in the previous section . questions in this section highlighted the science that is being planned during snowmass 2013 . we asked survey participants such questions as : which frontier will have the greatest impact on hep in the next 10 years , which of the planned experiments ( given from a non - exhaustive list ) they gave the highest priority to , and whether or not they would encourage other people to pursue the science within their frontier . some general trends were observed about the physics outlook of those in hep responding to our survey : 1 . * would you encourage other young physicists to pursue a career in your frontier ? * + more than 75% of the respondents would recommend other talented young physicists to pursue a career in their frontier . this is a trend that is shown to be true across all the frontiers and for both young and senior members of hep . this particular fact is remarkable given the rather pessimistic outlook for funding and jobs , and demonstrates that the science is found to be very compelling . + 2 . * which of the following frontiers as defined by the snowmass process will have the greatest impact on the landscape of high energy physics in the next 10 years ? * + the results for this question are shown in figures [ fig : impactonhepall ] and [ fig : impactonhepyounold ] . the energy frontier is seen as likely to have the greatest impact in the coming decade . however , this result is less meaningful before it is broken down by the frontier of the respondent : our sample contained more energy frontier people , so we received more energy responses . cosmic and intensity frontiers appear in second and third position , respectively . although the trend is the same for both groups , we notice that young scientists mentioned the energy frontier and instrumentation less often than seniors , while preferring cosmic , computing and theory slightly more often . + from these responses , sub - samples were created based on which frontier people reported to be currently working in .
except for theorists , most respondents regarded their own current frontier as the one likely to have the greatest impact in the next 10 years ( 57% , 60% , 70% for the intensity , energy and cosmic frontiers , respectively ) . theorists responded with the energy frontier ( 33% ) and cosmic frontier ( 30% ) as likely to have the greatest impact . we also asked respondents to indicate which of the planned experiments ( given from a non - exhaustive list ) from each of the three major frontiers they gave the highest priority to . section [ sec : intensitysub ] shows the responses for intensity frontier experiments , section [ sec : energysub ] shows responses for the energy frontier , and section [ sec : cosmicsub ] shows the responses for the cosmic frontier . each section is broken down by the current frontier the survey participant lists . figure [ fig : intensityall ] shows the non - exhaustive list of planned intensity frontier experiments from which the survey respondent was asked to check the experiments they are excited about . the respondent could select more than one experiment . the three intensity frontier experiments receiving the most overall votes are lbne ( 312 votes ) , project - x ( 295 votes ) , and majorana ( 274 votes ) . we looked at which experiments respondents from different frontiers were the most excited about . figure [ fig : intensityfrontierexp ] shows the excitement for the various intensity frontier experiments broken down by the survey respondent s current frontier . the top six intensity frontier experiments respondents were excited about , broken down by their current frontier , are shown in table [ tab : intensityexpfrontier ] . figure [ fig : energyall ] shows the list of planned energy frontier experiments from which the survey respondent was asked to check the experiments they are excited about . the respondent could select more than one experiment . the three energy frontier experiments receiving the most overall votes are vlhc ( 452 votes ) , muon collider ( 399 votes ) , and linear collider collaboration ( 376 votes ) . figure [ fig : energyfrontierexp ] shows the excitement for the various energy frontier experiments broken down by the survey respondent s current frontier . the top six energy frontier experiments respondents were excited about , broken down by their current frontier , are shown in table [ tab : energyexpfrontier ] . figure [ fig : cosmicall ] lists the planned cosmic frontier experiments from which the survey respondent was asked to check the experiments they are excited about .
the respondent could select more than one experiment . the three cosmic frontier experiments receiving the most overall votes are icecube ( 454 votes ) , fermi telescope ( 400 votes ) , and dark energy survey ( 381 votes ) . figure [ fig : cosmicfrontierexp ] shows the excitement for the various cosmic frontier experiments broken down by the survey respondent s current frontier . the top six cosmic frontier experiments respondents were excited about , broken down by their current frontier , are shown in table [ tab : cosmicexpfrontier ] . clearly there is excitement about many of the upcoming experiments proposed through snowmass . it is worthwhile remarking that across the frontiers , the experiment seen from within a frontier as having the highest priority never has the highest priority across the other three frontiers . taking the example of the energy frontier , the vlhc is clearly seen as exciting and a high priority within the energy frontier as well as within the cosmic and theory frontiers . however , this experiment receives only a third rank within the intensity frontier . similar trends hold for the intensity frontier and the long baseline experiments , as well as the cosmic frontier and the lsst . seeing the priorities vary so much across frontiers indicates that much more work is required to build consensus in the coming months . in addition to our survey reaching those currently working in hep , we endeavoured to reach people who had received their training within hep but have gone on to pursue careers outside of academia . in particular we asked how the work lives of people on non - academic career paths compare with those of people pursuing an academic career . in total we received 74 responses from people now working outside of hep . in an effort to expand this sample , a handout was provided at snowmass collecting information from senior scientists about former colleagues who have since left the field . this form can be found at http://snowmassyoung.hep.net/oa_contacts.pdf . we urge any reader who may have contact information to please visit this form and email snowmass2013young.com with the information . some general trends we observed from this group responding to our survey : 1 . * before going into your current field of work , did you attempt to find a job in academics ? * + 57% of the survey respondents did not attempt to find a job inside academia before pursuing their career . + 2 . * how many hours per week ( weekend ) do you work on average ? * + respondents from non - academic career paths report spending , on average , 50 hours per week at work , of which 7 hours are spent working on the weekend . this mirrors the respondents from academic career paths , who report , on average , 49 hours per week at work , of which 7 hours are spent working on the weekend as well . this result ran counter to many of our intuitions about who works more both inside and outside of an academic career path . + 3 . * in the last year , how many weeks have you had to travel for work related to your current position ? * the overwhelming majority of respondents from non - academic career paths report typically less than 4 weeks of work related travel in the last year ,
while the majority of respondents from academic career paths report spending greater than 4 weeks on work related travel . + figure [ fig : oaskills ] shows that the main skills people found most valuable to their current career include programming , data analysis , and statistical analysis . it is also worth noting that skills such as oral communication and technical writing are seen as almost as important in their current job . finally , figure [ fig : oahappy ] shows that the overwhelming majority of the non - academic career path respondents are very happy with their current field of employment . the survey administered by the syp was intended to collect a range of opinions that reflect the physics interests , the career outlook , and the general mood of the field leading up to snowmass 2013 . we collected 1112 total responses , 74 of which came from people who had received their training in hep and have since chosen to pursue a non - academic career path . some broad conclusions we would like to draw from a look at the survey data : 1 . * demographic conclusion : * + the survey contained a large fraction of young people in hep , many of whom had not been participating in snowmass prior to taking this survey and many of whom did not plan on attending the snowmass meeting . this makes the viewpoints and opinions expressed even more important , as it is likely their voices would go unheard otherwise . 2 . * career outlook conclusion : * despite the widespread perspective that the funding situation in the next ten years will likely be bleak , most young people are excited about the prospect of pursuing a job within hep . moreover , many of the respondents , both u.s . citizens and non - u.s . citizens , intend to pursue a job within the u.s . however , all this can shift if the u.s . misses an opportunity to build the next major physics experiment most relevant to the various frontiers in hep , helping support the idea that the most compelling science will attract the best and brightest minds . 3 . * physics outlook conclusion : * there is a general sense of excitement about the science and the upcoming experiments being proposed at snowmass . this excitement is seen across the frontiers and is shared by both the more senior members as well as the young scientists . however , the breakdown of which experiments people from other frontiers find exciting shows there is still a lot of work left to do to build consensus . 4 . * non - academic career path conclusion : * the self reported work habits seem mostly the same for both academic and non - academic careers . furthermore , many of the skills learned in hep are seen as valuable skills for those on a non - academic career path . finally , those who have received their training in hep and are now pursuing careers outside of academia are generally very happy with their current careers .
zimmerman , * results of the survey on the future of hep * , _ arxiv : hep - ex/0108040 _ august 2001 * inspires * http://inspirehep.net/?ln=en * inspires raw data * http://hoc.web.cern.ch/hoc/jobs_stats2.txt * google forms * http://www.google.com/drive/apps.html * root * http://root.cern.ch/drupal/ * united states census bureau : people and households * http://www.census.gov/people/* what is your gender ?* what is your current marital status ? * are you a us citizen ? *what is the most advanced degree you currently hold ? *how many children do you have ? * what is your household salary ( usd / year ) ? * in what country did you earn your most recent degree ? * at what type of institution are you currently employed ? * what is your current field of employment ? * how long have you been in your current position ? * in the last year , how many weeks have you had to travel for work related to your current position * before going into your current field of work , did you attempt to find a job in academics ? * what resources did you find useful in finding your first non - academic job ? *have you ever not pursued a job offer because it would require relocating ? * do you maintain a residence in a different location than your spouse / partner because of you work ? *have you found your career options limited due to personal relationships ? * on a scale of 1 to 10 , how well did your hep physics experience prepare you for your current job ? * what skills learned during your hep physics experience were the most valuable to you in your current job ?* would you encourage other young physicists to pursue a career in your field ? * on a scale of 1 to 10 how happy are you with your decision to work in your current field ? * which of the following would you rather be doing now in your career ?* please choose which category best describes your current position * how long have you been in your current category ? * according to the snowmass definitions , which froniter best describes the research you primarily work on ? * according to the snowmass definitions , which frontier do you see yourself working on 5 years from now ? *which of the following institutions do you do the majority of your research at ?* have you been contributing to the pre - snowmass meetings and/or the snowmass process prior to taking this survey ? * do you plan on attending the snowmass on the mississippi in minneapolis , mn in july 2013 ? *do you maintain a residence in a different location than your spouse / partner or children in order to work or study ?* during your current position , which of the following best describes your primary responsibility ?* in the last year , how many weeks have you had to travel for work related to your current position ? * which of the following best describes your primary reason for your work related travel ? * in the last year , how many conference talks did you give ? * select two institutions which you most desire to be employed with after completing your training in high energy physics * do you intend to pursue a permanent career in academia ? * what do you think are the odds that you will obtain a permanent academic position ?* what percentage of postdocs in hep do you think go on to permanent positions in hep ? * how much consideration have you given to pursuing a job outside of academia ? * in the future , will you be searching for a permanent position in the us or abroad ? 
* if the next major expreiment in your frontier was built outside the us , would you be more inclined to search for a permanent position outside the us ?* would you encourage other young physicist to pursue a career in your frontier ? *have you found your career options limited due to personal relationships ? ** please grade each of the following career related concerns in terms of which you find the most important to you and your future in high energy physics . ** availability of university based jobs * availability of laboratory based jobs * funding for large scale long lead time experiments in the future * funding for small scale short lead time experiments in the future * proximity to your home institution of future planned experiments * bureaucracy and administrative difficulties to conduction research * on a scale of 1 to 10 ( 1 = funding will stop , 10 = funding will thrive ) how do you feel about the funding within your frontier within the next decade * * which of the following statements do you most agree with * * i believe we should invest the majority of our resources in one big project next ( e.g. linear collider , lsst , long baseline neutrino experiment , muon collider , vlhc , etc ... ) * i believe we should invest the majority of our resources in a larger variety of smaller experiments ( e.g. mu2e , project 8 , lar1 , ms - desi etc ... ) * * which of the following frontiers as defined by the snowmass process will have the greatest impact on the landscape of high energy physics in the next 10 years ? * * pick from a list of future planned experiments which you are most excited about : * * intensity frontier experiments * energy frontier experiments * cosmic frontier experiments * how many hours per week do you work ( on average ) ? * how many hours per weekend do you work ( on average ) ?* how many hours per week do you spend in meetings ( on average ) ?* do you think your training in academia is adequate to prepare you for your career path ?
from april to july 2013 the snowmass young physicists ( syp ) administered an online survey collecting the opinions and concerns of the high energy physics ( hep ) community . the aim of this survey is to provide input into the long term planning meeting known as the community summer study ( css ) , or snowmass on the mississippi . in total , 1112 respondents took part in the survey including 74 people who had received their training within hep and have since left for non - academic jobs . this paper presents a summary of the survey results including demographic , career outlook , planned experiments , and non - academic career path information collected .
carbon stars ( css ) are evolved , cool giant stars that present circumstellar material in the form of , amorphous granularity , discs or clouds . one of their characteristic phenomena is variability , and its analysis makes it possible to explain the physical properties and the processes that take place in their atmospheres , as well as to determine their evolutionary state ( alksne et al . 1991 ) . the vvv survey provides an excellent opportunity to carry out precise variability studies , since it gives us deep , multi - epoch infrared photometry from which high - quality light curves can be built ( merlo 2015 , 2016a , b ) . because of the large amount of data the survey provides , we developed a code that obtains , for this purpose , the yzjhks magnitudes of each object of interest . figure [ fig : ab1 ] shows the general procedure of the code . it consists of three modules : _ download identification tabulation _ ( merlo 2016a ) . the procedure starts by entering into a generic the numerical code that allows the catalogues to be downloaded ( previously arranged on the casu platform ) , the working , and the catalogue coordinates of the source of interest . it is then executed from the console and automatically carries out each of the modules , for which it requires a permanent internet connection . the code runs were performed in groups of 8 to 10 sources , due to unpredictable interruptions of the internet connection . ( see details in merlo 2016a . ) the photometric catalogues used were obtained from the already processed vvv images made available to the collaborating group by casu ( _ cambridge astronomical survey unit _ ) . they contain the positions , the fluxes and some shape measurements obtained with different apertures , including the most probable morphological classification . each observation unit delivered by the four - metre vista telescope is called a `` '' , and consists of 6 individual , slightly offset pointings called `` '' , each of these in turn made up of the 16 images obtained with each of the instrument s ccds , suitably overlapped to produce the final image . all the photometry for this work is based on the casu 1.3 catalogues . these were chosen instead of the deeper catalogues because , in the latter , photometric corrections have been applied through the _ casu vista data flow system _ with the aim of flattening spatial differences that could appear in each of the detectors . these corrections have been found to randomly degrade the photometric precision of some point sources ( irwin 2011 ) . the advantage of using the catalogues lies in having a larger amount ( 6x ) of photometric data ( epochs ) for the source of interest , which results in better quality light curves built from them . a family of versions has been developed , which differ mainly in the source search algorithm as well as in the type of catalogue files used . versions 1.x use a search procedure based on successive approximations , in which the field around the catalogue coordinate is progressively restricted while the enclosed sources are counted , the process stopping when no source is found ( see fig . [ fig : ab2]a ) . versions 2.x , in contrast , make use of a procedure that minimizes the distance to the catalogue coordinates . in this procedure , the stellar sources found in a field centred on those coordinates are identified , the respective angular distances to them are computed , and the source with the smallest value is selected ( see fig . [ fig : ab2]b ) .
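as a rough illustration of this version 2.x selection rule ( a sketch under our own assumptions , not the actual sacaman code ) , the snippet below picks the catalogue source with the smallest angular distance to the target coordinates inside a search field ; the use of the astropy library , the field radius , and the way the catalogue columns are passed in are all assumptions made for illustration .

```python
import numpy as np
import astropy.units as u
from astropy.coordinates import SkyCoord

def identify_source(target_ra_deg, target_dec_deg, cat_ra_deg, cat_dec_deg,
                    field_radius_arcsec=2.0):
    """Return the index of the catalogue source nearest to the target
    coordinates, or None if no source lies inside the search field."""
    target = SkyCoord(ra=target_ra_deg * u.deg, dec=target_dec_deg * u.deg)
    catalogue = SkyCoord(ra=np.asarray(cat_ra_deg) * u.deg,
                         dec=np.asarray(cat_dec_deg) * u.deg)
    sep = target.separation(catalogue)             # angular distance to every source
    inside = sep < field_radius_arcsec * u.arcsec  # sources inside the search field
    if not np.any(inside):
        return None
    candidates = np.flatnonzero(inside)
    return int(candidates[np.argmin(sep[candidates].arcsec)])
```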
in turn , versions x.0 use the catalogues coming from the tiles or ( merlo 2016a ) , while versions x.5 use the catalogues ( merlo 2016b ) . regarding processing times , and as an example , in a field with 396 catalogues the new identification process ( version 2.5 ) took 7 ( sec / cat ) , while in another field , with 77 catalogues , the previous identification process ( version 1.0 ) required 1.5 ( sec / cat ) . the plots in fig . [ fig : ab3 ] show , as a solid line , the new values ( * s2 * ) of the ks magnitudes of the carbon stars found , compared with the previous versions 1.0 and 1.5 ( * s * ) . the stars belong to the of the galactic bulge and . the processing times obtained decreased significantly , and the detection and identification process was further optimized , eliminating many false positives . this code is in a stage of debugging and extension ; once these are implemented , it will be made available to the whole community interested in using it . the procedure of downloading and obtaining the yzjhks - vvv magnitudes of all the carbon stars belonging to the bulge of the milky way has recently been completed , so the variability analysis process has begun . once it is finished , a similar procedure will be carried out with the css belonging to the region of the galactic disc covered by the vvv survey . finally , it is worth noting that this survey will soon have an extension called vvv - x , which will further enlarge the area covered by vvv . for this reason we plan to continue our study using the new and valuable catalogues that will come out of it . alksne , z. et al . , 1991 , `` properties of galactic carbon stars '' , orbit book co. alksnis , a. et al . , 2001 , balt . a . , 10 , 1 . irwin , m.j . , 2011 , private communication . merlo , d. , 2015 , baaa , 57 , 111 . merlo , d. , 2016a , 7th vvv workshop , 29 - 1/03 , antofagasta , chile . [ link ] merlo , d. , 2016b , fof2016 , 29 - 1/04 , córdoba , argentina . [ link ] saito , r. et al . , 2012 , a&a , 544 , a147 .
an improved version of the stellar source identification module in the quasi - automatic sacaman code is presented , which makes it possible to obtain yzjhks - vvv magnitudes for a set of predefined objects . the procedure uses a proximity algorithm that is 70 times less time - consuming than the previous method of successive approximations and counting . this code is being applied to the study of variability of carbon stars belonging to the galactic bulge .
modulation classification is the process of choosing the most likely scheme from a set of predefined candidate schemes that a received signal could belong to . various approaches have been proposed to address this problem . there has recently been growing interest in modulation classification for applications such as software defined radio , cognitive radio and interference identification . existing classification methods can generally be categorized into two main groups : feature based classifiers and likelihood based ( ml ) classifiers . the ml classifiers give the minimum possible classification error of all possible discriminant functions given perfect knowledge of the signal s probability distribution . however , this approach is very sensitive to modeling errors such as imperfect knowledge of the signal to noise ratio ( snr ) or phase offset . further , such approaches have very high computational complexity and are thus impractical in actual hardware implementations . to address these issues , various feature based techniques such as cumulant - based classifiers and cyclostationary - based classifiers have been proposed . recently , goodness - of - fit ( gof ) tests such as the kolmogorov - smirnov ( ks ) distribution distance have been proposed to identify the constellation used in qam modulation . based on the ks classifier , we proposed a new reduced complexity kuiper ( rck ) classifier in . the rck classifier only finds the empirical cumulative distribution function ( ecdf ) at a small set of predetermined testpoints that have the highest probability of giving the maximum distribution distance , effectively sampling the distribution function . the algorithm offered reduced computational complexity by removing the need to estimate the full ecdf while still providing better performance than the ks classifier . it also increased the robustness of the classifier to imperfect parameter estimates . the idea of improving the classification accuracy of the rck classifier by using more testpoints was proposed in . the method is referred to as the variational distance ( vd ) classifier , where testpoints are selected to be the pdf - crossings of the two classes being recognized . the sum of the absolute distances is then used as the final discriminating statistic . we refer to methods such as rck and vd , which utilize the value of the ecdf at a small number of testpoints , as sampled distribution distance classifiers . in this work we derive the optimal discriminant functions for classification with the sampled distribution distance given a set of testpoint locations . we also provide a systematic way of finding testpoint locations that provide near optimal performance by maximizing the bhattacharyya distance between classes . finally , we present results that compare the performance of this approach with existing techniques . following , we assume a sequence of discrete , complex , i.i.d . and sampled baseband symbols , ] , where , . we further define the snr as /\sigma^2 ] where is the chosen mapping from received symbols to the extracted feature vector , where is the length of the feature vector . possible feature maps include ( magnitude , ) , the concatenation of and ( quadrature , ) , the phase information ( angle , ) , among others . the theoretical cdf of given and , denoted as , is assumed to be known _ a priori _ ( methods of obtaining these distributions , both empirically and theoretically , are presented in , sec . iii - a ) .
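as a small illustration of the feature maps just listed , the sketch below turns a block of received complex symbols into the real - valued samples whose distribution is used in what follows ; the function name and the default choice of map are ours , not notation from the paper .

```python
import numpy as np

def feature_map(symbols, kind="quadrature"):
    """Map complex baseband symbols to the real-valued feature samples used
    by the distribution-distance classifiers discussed in the text."""
    r = np.asarray(symbols, dtype=complex)
    if kind == "magnitude":
        return np.abs(r)
    if kind == "quadrature":                 # concatenation of I and Q components
        return np.concatenate([r.real, r.imag])
    if kind == "angle":
        return np.angle(r)
    raise ValueError(f"unknown feature map: {kind}")
```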
in this paperwe focus on algorithms that use the ecdf defined as as the discriminating feature for classification . here, is the indicator function whose value is 1 if the function argument is true , and 0 otherwise . if the complete ecdf resulting from the entire feature vector , , is used for classification , we get the conventional distribution distance measures such as kuiper , kolmogorov - smirnov , anderson - darling and others . details of these measures are discussed in .once the ecdf is found and the appropriate distribution distance is calculated , the candidate constellation with minimum distance is chosen . however , prior work in have shown that improved classification accuracy can be achieved at much lower computational complexity and with increased model robustness by finding the value of the ecdf at a small number of specific testpoints .we describe these methods formally by defining a set of testpoints : ] , .we refer to any classifier that utilizes the feature vector as a _ sampled distribution distance - based classifier_. as an example , the variational distance ( vd ) classifier from proposed forming from ecdf points that give either a local maxima or minima of the difference between two theoretical cdfs of the candidate classes . instead of using the sampled ecdf directly , vd classifier finds the number of samples that fall between two consecutive testpoints , which is equivalent to taking the difference of the ecdf at consecutive testpoints , . in this paperour goal is to optimize the classification accuracy of the sampled distribution distance classification approach defined as intuitively , there are two ways to improve . first , since different testpoints have varying distribution distance , it is expected that different weights should be assigned to each testpoint .second , the choice of the number and location of the points along the ecdf should also be investigated to find the proper balance between complexity and classification accuracy .both of these improvements are addressed in the following subsection .we first assume that has been selected _ a priori _ and our goal is to find the optimal classifier for the resulting feature vector .we want to find a discriminant function for each ] , where corresponds to region , , is jointly distributed according to a multinomial probability mass function ( pmf ) given as where $ ] , and is the probability of an individual sample being in region . given that is drawn from , , for . given a particular ,the number of samples in each of the regions could be found as where and .this gives a mapping from any given to and therefore to the pmf as defined in ( [ eq : multinomial ] ) .therefore we have the complete class - conditional pdf , with in ( [ eq : multinomial ] ) determined by , the cdf of class .thus we have the optimal classifier. we will refer to and conditioned on class as and .although the multinomial pmf in ( [ eq : multinomial ] ) can be used for minimum error rate classification , its calculation is very computationally intensive . to address this issue we note that asymptotically the multinomial pmf , in ( [ eq : multinomial ] ) , approaches a multivariate gaussian distribution , as .where , since is simply the cumulative sum of ( i.e. 
) , which is a linear operation , it follows that where , having shown that the feature vector is asymptotically gaussian distributed , we can proceed to apply the _bayes decision procedure _ in ( [ eq : bdp ] ) .however , the full multivariate pdfs are not required to perform classification because the optimal discriminant functions for gaussian feature vectors are known to be quadratic with the following form : where and in the following sections we will simply refer to this classifier as the bayesian approach .similar to rck and vd the bayesian approach only needs to store the testpoint locations for a fixed set of snrs since the theoretical cdf is dependent on snr . given a of size , vd and rck require both and for each class .in contrast , the bayesian approach requires the same vector , an matrix , a vector of size , and a scalar for each class . however , there are typically no more than 12 testpoints ( total number of pdf - crossings ) , so this additional storage requirements are negligible .the bayesian approach also requires the calculation of a quadratic form expression ( [ eq : discriminant ] ) .again , due to the fact that only a relatively small number of testpoints is used , the additional complexity is minimal . in this subsectionwe present a method for choosing testpoint locations , , that provide good classification performance .the method of using the pdf - crossings make intuitive sense , since it tries to find the testpoints that provide the maximum difference in the theoretical cdf while providing some heuristic rule that the testpoints will be spaced apart .tespoints that are too close to each other are not as effective because the ecdf tends to be highly correlated and thus provide minimal additional information .another issue with using the pdf - crossing is that it does not factor in knowledge of the correlation between testpoints . as we have shown in section [ subsec : optimal ] , the distribution follows an approximate multivariate gaussian with statistics given in ( [ eq : mu ] ) and ( [ eq : sigma ] ) . therefore , the class - conditional means and covariance matrices are sufficient to completely describe the distribution of the feature vectors conditioned on .thus , these statistics are also sufficient to find the optimal testpoint locations , .however , since are clearly not equal for all , a closed form expression for the classification accuracy for this problem does not exist . instead , a -dimensional integration is required and the limits , determined by the decision boundaries defined by ( [ eq : discriminant ] ) , are non - trivial .as is typically done in this scenario , we replace exact with a sub - optimum distance metric that is easier to evaluate and does not require a -dimensional integral .in particular we use the bhattacharyya distance first studied for signal selection in shown to be a very effective as a `` goodness '' criterion in the process of of selecting effective features to be used in classification .the metric is shown here for reference : note that the bhattacharyya distance is calculated between 2 classes . as a result, the search for testpoints can only be performed for the case . however , this could be done sequentially through all the possible pairs of . 
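The two ingredients introduced above can be sketched compactly in Python; the notation and function names are illustrative choices, not the paper's code. The first helper evaluates the quadratic discriminant for a Gaussian class model (the asymptotic law of the sampled-ECDF feature vector), the second computes the Bhattacharyya distance between two multivariate Gaussians, which is the testpoint-selection criterion used below.

import numpy as np

def gaussian_discriminant(z, mu, sigma, log_prior=0.0):
    # Quadratic discriminant g(z) for a Gaussian class model N(mu, sigma);
    # the candidate class with the largest value of g is selected.
    diff = z - mu
    _, logdet = np.linalg.slogdet(sigma)
    return -0.5 * diff @ np.linalg.solve(sigma, diff) - 0.5 * logdet + log_prior

def bhattacharyya(mu1, sigma1, mu2, sigma2):
    # Bhattacharyya distance between N(mu1, sigma1) and N(mu2, sigma2).
    sigma = 0.5 * (sigma1 + sigma2)
    diff = mu1 - mu2
    term1 = 0.125 * diff @ np.linalg.solve(sigma, diff)
    term2 = 0.5 * (np.linalg.slogdet(sigma)[1]
                   - 0.5 * (np.linalg.slogdet(sigma1)[1] + np.linalg.slogdet(sigma2)[1]))
    return term1 + term2

def classify(z, class_params):
    # class_params maps each candidate constellation to its (mu, sigma) pair,
    # computed from the theoretical CDF of the feature at the chosen testpoints.
    return max(class_params, key=lambda c: gaussian_discriminant(z, *class_params[c]))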
as is a function of and which are functions of our testpoint selection , , then we can express it as .we thus find the good candidate testpoint by under the constraint .as this is an -dimensional optimization problem , a closed - form solution is beyond the scope of this letter paper .instead , we turn to numerical optimization methods ( gradient descent methods ) to find local maxima . the intial point of these procedures could be chosen to coincide with the pdf - crossings or equally spaced over some interval .for the results section we focus on the quadrature feature which is a concatenation of the i and q component of each symbol . in fig . [fig : testpoints ] , we show the results of the testpoint selection procedure with , under 0 db snr , for varying number of testpoints with the two class being 4-qam and 16-qam . .the solid line shows the cdf difference between the two classes ( 4-qam and 16-qam , under snr=0 db , ) ] the solid line plot corresponds to the difference of the two theoretical cdfs .we note that in the vd classifier the local maxima and minima of this plot are used as the testpoints .however , we find that the numerical optimization finds `` good '' testpoints to be close , but not exactly at the local maxima and minima .this is due to the additional information provided by the covariance matrices .in contrast to vd classifier that has a fixed number of testpoints ( 4 for this particular problem ) corresponding to the number of local maxima and minima , the optimization procedure allows more flexibility in choosing the number of testpoints . in fig .[ fig : testpoints ] , we show the result of the optimization procedure for a range of 1 to 8 testpoints .it confirms our intuition that `` good '' testpoints tend to be 1 ) spaced apart to avoid high correlation , 2 ) concentrated around locations that have high cdf difference , and 3 ) are not necessarily the same for different values of .this result further confirms the need to jointly optimize the testpoint locations . as mentioned in the previous section , the proposed approach has the flexibility of varying the number of testpoints .this effectively gives more flexibility to trade - off classification accuracy with computational complexity .this idea is illustrated in fig .[ fig : vary_tp ] . for and snr=0db , we show the classification accuracy of the proposed method as the number of testpoints is increased from 1 to 8 , for all possible pairs of .the dotted lines correspond to the accuracy of the ml classifier which serves as an upperbound to classification accuracy , while the dashed lines correspond to that of the vd classifier .note that both are plotted as horizontal lines because ml does not utilize testpoints , while vd has a fixed number of testpoints corresponding to the pdf - crossings . for all possible pairs of constellations of interest.the classification accuracy of both ml and vd classifiersare also shown for comparison .( snr=0 db , =200 ) ] [ fig : vary_tp ] we see that the proposed method is able to exceed the accuracy of the vd classifier with as low as 3 testpoints .further , the method s accuracy could be improved by adding more testpoints but at the cost of higher complexity .we also note that with additional testpoints , the bayesian classifier reaches classification accuracy close to the ml classifier . finally , in fig .[ fig : vary_snr ] , we compare the performance of the proposed method with the existing techniques under varying snr with symbols used for classification . 
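The testpoint search described above can be sketched as follows (Python, reusing the bhattacharyya helper from the previous sketch). The asymptotic mean and covariance of the sampled ECDF follow from the theoretical CDF of each class; a derivative-free optimiser is used here in place of the gradient methods mentioned in the text, and the starting point t0 may be taken at the pdf-crossings or equally spaced, as suggested above. All names are illustrative.

import numpy as np
from scipy.optimize import minimize

def ecdf_stats(testpoints, cdf, n_samples):
    # Asymptotic mean and covariance of the sampled ECDF for one class:
    # mu_i = F(t_i) and Cov_ij = (F(min(t_i, t_j)) - F(t_i) F(t_j)) / n_samples.
    t = np.sort(np.asarray(testpoints, dtype=float))
    F = cdf(t)
    return F, (np.minimum.outer(F, F) - np.outer(F, F)) / n_samples

def optimize_testpoints(t0, cdf_a, cdf_b, n_samples):
    # Search for testpoint locations maximising the Bhattacharyya distance
    # between the two (asymptotically Gaussian) candidate classes.
    def neg_distance(t):
        mu1, s1 = ecdf_stats(t, cdf_a, n_samples)
        mu2, s2 = ecdf_stats(t, cdf_b, n_samples)
        return -bhattacharyya(mu1, s1, mu2, s2)
    result = minimize(neg_distance, np.asarray(t0, dtype=float), method="Nelder-Mead")
    return np.sort(result.x)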
to have a fair comparison ,the same number of testpoints are used for both vd and bayesian . for the entire range of snr the proposed bayesian approachis shown to provide substantial gains over the vd classifier .we emphasize again that asymptotically , the proposed approach is the optimal classifier when using the sampled distribution distance as the discriminating feature . also shown in the plot are the classification accuracy of the ml classifier which acts as the upperbound , and the conventional kuiper classifier .= 200 symbols used for classification .the same number of testpoints are used for both vd and bayesian . ][ fig : vary_snr ]in this letter we presented the optimal discriminant functions for classifying using the sampled distribution distance .this method was shown to provide substantial gains compared to other existing approaches .the performance of this method is also shown to be close to the ml classifier but at significantly lower computational complexity .although modulation classification is presented in this paper to illustrate the basic concept , the approach is not limited to this application .the same classifier can be generalized to any classification problem where the cdf of each class is available .p. urriza , e. rebeiz , p. pawelczak , and d. cabric , `` computationally efficient modulation level classification based on probability distribution distance functions , '' _ ieee commun ._ , vol . 15 , no . 5 , pp .476478 , may 2011 .
in this letter , we derive the optimal discriminant functions for modulation classification based on the sampled distribution distance . the proposed method classifies various candidate constellations using a low complexity approach based on the distribution distance at specific testpoints along the cumulative distribution function . this method , based on the bayesian decision criteria , asymptotically provides the minimum classification error possible given a set of testpoints . testpoint locations are also optimized to improve classification performance . the method provides significant gains over existing approaches that also use the distribution of the signal features .
in this paper we study the many server n - system shown in figure [ fig.n ] , with poisson arrivals and exponential service times , under first come first served and assign to longest idle server policy ( fcfs - alis ) , as the number of servers becomes large . before describing the model in detail, we will first discuss our motivation for studying this system .the n - system is one of the simplest special cases of the so called parallel server systems , as defined in and further studied in .the general model has customers of types , servers of types , and a bipartite compatibility graph where if customer type can be served by server .arrivals are renewal with rate , where successive customer types are i.i.d . with probabilities ,there is a total of servers , of which are of type , and service times are generally distributed with rates .assume the system is operated under the fcfs - alis policy , that is servers take on the longest waiting compatible customer , and arriving customers are assigned to the longest idle compatible server .for this general system necessary and sufficient conditions for stability ( positive harris recurrence for given ) , or for complete resource pooling ( there exists critical such that the system is stable for , and the queues of all customer types diverge for ) can not be determined by 1st moment information alone ( as shown by an example of foss and chernova ) .in particular , under fcfs - alis calculation of the matching rates , which are long term average fractions of services performed by servers of type on customers of type , is intractable . in the special case that service rates depend only on the server type , and not on customer type , with poisson arrivals and exponential service times ,the system has a product form stationary distribution , as given in . in that case matching ratescan be computed from the stationary distribution .the following conjecture was made in : if the system is stable and has complete resource pooling for given , and we let both become large together , the behavior of the system simplifies : there will exist such that servers of type perform a fraction of the services , and the matching rates will converge to the rates for the fcfs infinite matching model with , as calculated in ( see also ) .the conjecture is based on the following heuristic argument : in steady state the times that each server becomes available form a stationary process which is only mildly correlated with the other servers and so servers become available approximately as a superposition of almost independent stationary processes which in the many server limit becomes a poisson process , and server types are then i.i.d . with probabilities , while customer types arrive as an i.i.d .sequence with probabilities , which corresponds exactly to the model of fcfs infinite matching . in our current study of the many server n - systemwe shall verify the conjectured many server behavior for this simple parallel server system .to do so we start from the known stationary distribution of the n - system with many servers , as derived from , and study its behavior as . 
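Because the quantities of interest (matching rates, idle-server counts) are long-run averages, the conjectured behaviour is easy to probe by simulation. The Python sketch below is an illustration only: the pool orientation, the rates and the pool sizes are chosen by us and are not values from the paper. It runs an event-driven simulation of a parallel-server system under FCFS-ALIS and reports the fraction of completed services performed by each server pool.

import heapq, itertools, random
from collections import deque

def simulate_fcfs_alis(lam, mu, n_servers, compat, horizon, seed=0):
    # lam[c]       : Poisson arrival rate of customer class c
    # mu[p]        : exponential service rate of a pool-p server
    # n_servers[p] : number of servers in pool p
    # compat[p]    : set of customer classes a pool-p server may serve
    rng = random.Random(seed)
    pool_of = [p for p in range(len(n_servers)) for _ in range(n_servers[p])]
    idle = deque(range(len(pool_of)))   # front = longest-idle server
    queue = deque()                     # waiting customer classes, in FCFS order
    events, seq = [], itertools.count()
    served = [0] * len(n_servers)

    def schedule(t, kind, data):
        heapq.heappush(events, (t, next(seq), kind, data))

    def start_service(server, now):
        schedule(now + rng.expovariate(mu[pool_of[server]]), "done", server)

    for c, rate in enumerate(lam):
        schedule(rng.expovariate(rate), "arrive", c)
    while events:
        now, _, kind, data = heapq.heappop(events)
        if now > horizon:
            break
        if kind == "arrive":
            c = data
            schedule(now + rng.expovariate(lam[c]), "arrive", c)
            server = next((s for s in idle if c in compat[pool_of[s]]), None)
            if server is None:
                queue.append(c)
            else:                       # assign to the longest idle compatible server
                idle.remove(server)
                start_service(server, now)
        else:                           # server 'data' completed a service
            server = data
            served[pool_of[server]] += 1
            cust = next((c for c in queue if c in compat[pool_of[server]]), None)
            if cust is None:
                idle.append(server)     # becomes the most recently idle server
            else:                       # take the longest-waiting compatible customer
                queue.remove(cust)
                start_service(server, now)
    total = sum(served)
    return [s / total for s in served] if total else served

# Illustrative N-system: pool 0 is flexible, pool 1 serves class 1 only.
print(simulate_fcfs_alis(lam=[40.0, 60.0], mu=[1.0, 1.2], n_servers=[50, 50],
                         compat=[{0, 1}, {1}], horizon=1000.0))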
as it turns out , the product form stationary distribution even for this simple case is far from simple , and the derivations of limits , which use summations over server permutations and asymptotic expansions of various expressions are quite laborious .we feel that this emphasizes the difficulty of verifying the conjectured behavior of the general system , which remains intractable at this time .we mention that the n - system with just two servers , has been the subject of several papers , . in this paper , our focus is on the n - system with many servers under fcfs - alis policy , and its limit property .the rest of the paper is structured as follows : in section [ sec.model ] we describe the model , in section [ sec.fluid ] we use some heuristic arguments to obtain a guess at the limiting behavior . in section [ sec.stationary ] we obtain the stationary behavior under many server scaling . in section [ sec.numerics ]we illustrate our results with some numerical examples . to improve the readability of the paper we have put all the proofs for section [ sec.stationary ] in the appendix .in our n - system customers of types and arrive as independent poisson streams , with rates .there are skill based parallel servers , servers of type which are flexible and can serve both types , and servers of type which can only serve type customers .we assume service times are all independent exponential , with server dependent rates .the service rate of an server is , the service rate of an server is , see fig [ fig.n ] .we let .service policy is fcfs - alis .the system is obviously markovian . in the following state description forthe server dependent poisson exponential system with server types and customer types was used : imagine the customers arranged in a single queue by order of arrivals , and servers are attached to customers which they serve , and the remaining idle servers are arranged by increasing idle time , see figure [ fig : ivostate ] the state is then , where is a permutation of the servers , the first servers are the ordered busy servers , and the last servers are the ordered idle servers , and where are the queue lengths of the customers waiting for one of the servers , and skipped ( can not be served ) by servers .for the special case of the n - system , the following three random quantities are important : the number of idle servers of type , the number of idle servers of type , and the number of servers of type which follow the last server of type in the sequence .we let be the total number of idle servers . because ofthe structure of the n - system , and the fcfs - alis policy the following properties hold for and : ( i ) : : there are no customers waiting for any server which precedes the last server in the permutation . in other words , for all we have . 
in particular ,if there is an idle type server , in other words if , then there are no waiting customers at all .( ii ) : : if there are any idle servers , then there are no type customers waiting for service , in other words , if then all the waiting customers are of type .( iii ) : : if there are no idle servers , then only the last queue can contain type customers , in other words , if then the last queue may contain customers of both types , but all the other waiting customers are of type .denote then a necessary and sufficient condition for stability is we shall require a stronger condition of complete resource pooling , defined by where equals the long run fraction of services performed by servers .the value of will be calculated in the next section .using the results of we can then write the exact stationary distribution of this system .we wish to show that as the arrival rates and the number of servers increase the system simplifies , and we get very precise many server scaling limits .we will investigate the behavior of the system when are fixed , and . to be precise, we shall then have , , all of which go to .we perform the following heuristic calculation : as long as the system is underloaded ( ) , each server of type will have a cycle of service of mean length , followed by an idle period , and similarly each server of type will have service of mean length followed by an idle period .the key idea now is that when , the idle periods should have the same length for both types , because of alis .let be the average length of the idle time .the average cycle times will be : and .denote by the long run fraction of services performed by servers , and for type .the flow rate out of one type server is , the flow rate out of all type servers should equal . similarly the flow rate out of all type servers should equal .that is , now we solve for and : we rewrite and eliminate : to get a quadratic equation for : here by , so the equation has one positive and one negative root . solving for positive get : note : for the case of we get . from and little s lawwe can obtain the average number of idle servers in pool 1 and pool 2 , denoted by and respectively . the values of and are then : note : both are positive , so . also , when we get .the value of does not come into the equation for , or the calculation of .hence , once we solve and obtain , the property of complete resource pooling will consist of checking that .we will show that the following holds for the stationary queue , as : * is distributed as a geometric random variable , taking values with probability of success .it is independent of .* is close to a bivariate normal , with means , variances and correlation * successive idle servers except for the last are i.i.d . 
of type with probability and of type with probability first obtain the stationary distribution for each state .we note that the stationary probabilities depend mainly on the values of .let denote the service rate of the server at position .[ thm.markov ] the stationary distribution of the state of the fcfs - alis many server n - system is given by : where is a normalizing constant .this follows for all three parts of ( [ eqn.pistate ] ) by utilizing properties ( i),(ii),(iii ) in section [ sec.model ] and substituting into equation ( 2.1 ) , theorem 2.1 , in .before we manipulate equation ( [ eqn.pistate ] ) , we introduce a lemma to facilitate the calculation .[ thm.sum ] let denote a permutation of given positive real numbers , we have where denotes the set of all the permutations of .now we can get the joint stationary distribution of .we denote by the stationary probability of , and .[ thm.pii1i2k ] the steady state joint distribution of is given by : where is a normalizing constant .in this section we obtain the asymptotic distribution of conditional on , as .we first show that as , the probability of no idle servers of type goes to zero , and so the probability that customers need not wait goes to 1 .next we condition on and show , where where is given in ( [ eqn.tsolution ] ) . finally , we condition on and show that the scaled and centered values of converge in distribution to a bivariate normal distribution . [ prop.zeroqueue ] when , as long as , , from this theorem we see that when , .therefore , for any . from equation ( [ eqn.piki ] ) , given , the limiting stationary distribution as is [ thm.fluidlimit ] conditional on , converge to in probability for any .that is , for any , when , we have after showing the fluid limit result , we are now ready to show the central limit result . [ thm.idledistribution ] for any , when , we have \right)\ ] ] where [ thm.kdist ] for any , as , theorem [ thm.kdist ] shows that converges in distribution to a geometric distribution , so .therefore , we can extend theorem [ thm.fluidlimit ] and theorem [ thm.idledistribution ] into unconditional versions .[ thm.uncondition ] when , becomes independent of and . converges in distribution to the bivariate normal distribution described in [ eqn.clt ] .consider a special case when , we have . and can be easily solved : when , converges in distribution to a bivariate normal distribution with mean , variance and correlation the total idleness has mean of and variance of in the infinite matching model corresponding to the n - system there is an infinite sequence of customers , of types , where the customer types are i.i.d . , type is with probability and with probability , and an independent sequence of servers , of types , where the server types are i.i.d ., type is with probability and with probability , and compatibility graph with arcs . successive customers and serversare matched according to fcfs : each server is matched to the first compatible customer that was not matched to a previous server , and each customer is matched to the first compatible server that was not matched to a previous customer .after of the customers have been matched , consider the sequence of remaining servers .let be the number of servers of type that are first in this sequence , preceding the first server of type .the is a markov chain .the steady state distribution for this markov chain is that , which is exactly the limiting distribution of in ( [ thm.kdist ] ) .we test our results by investigating an n - system . , , , . 
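For the numerical checks it is convenient to have the fluid quantities of the heuristic calculation in closed form. The Python sketch below reconstructs the balance equations in notation of our own choosing, since the symbols are lost in this extract: assuming each pool-1 server alternates a service of mean length 1/mu1 with an idle period of common mean x, and matching the total completion rate of both pools to the arrival rate lam, one obtains a quadratic in x with a single positive root; the mean idle-server counts then follow from Little's law. The numerical values in the example are illustrative only.

import numpy as np

def fluid_quantities(lam, n1, n2, mu1, mu2):
    # Assumed balance: n1/(1/mu1 + x) + n2/(1/mu2 + x) = lam, i.e.
    # lam*x^2 + (lam*(1/mu1 + 1/mu2) - n1 - n2)*x + lam/(mu1*mu2) - n1/mu2 - n2/mu1 = 0.
    a = lam
    b = lam * (1.0 / mu1 + 1.0 / mu2) - (n1 + n2)
    c = lam / (mu1 * mu2) - n1 / mu2 - n2 / mu1
    x = (-b + np.sqrt(b * b - 4.0 * a * c)) / (2.0 * a)   # the positive root
    q1 = n1 / (lam * (1.0 / mu1 + x))       # fraction of services done by pool 1
    idle1 = n1 * x / (1.0 / mu1 + x)        # mean number of idle pool-1 servers
    idle2 = n2 * x / (1.0 / mu2 + x)
    return x, q1, idle1, idle2

# Illustrative check: equal rates, 100 servers per pool, load below capacity.
print(fluid_quantities(lam=180.0, n1=100, n2=100, mu1=1.0, mu2=1.0))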
from our approximation results ,as long as , or , both pools should have similar utilization . so the average number of idle servers in each pool is close to 50 , with variance of .we use exact stationary distribution to verify this .now we can calculate the expectation and variance of idle number in each pool exactly , listed in the following table ..the exact calculation [ cols="^,^,^,^,^",options="header " , ] we can see that even when , the improved approximation is not bad .we are grateful to ivo adan for helpful discussion of this paper .summation over the geometric terms in ( [ eqn.pistate ] ) gives next we see that in this expression , permutations of with the same have a similar structure .we now sum over all the permutations of the appropriate . by lemma [ thm.sum ]we obtain each permutation of the remaining servers , has the same stationary probability .it remains to count the number of permutations .when we have . for each permutationwe choose 1 type server and out of type servers to form the last servers .the number of permutations is when , we have . for each permutation, we choose out of type servers and out of type servers .we then choose 1 from the idle servers of type , and from the idle servers of type to obtain the last servers .the number of permutations is multiplying the terms in ( [ eqn.pipermutation ] ) by the appropriate number of permutations and defining gives ( [ eqn.piki ] ) .[ [ proofs-of-theorems-prop.zeroqueue-thm.fluidlimit-and-thm.idledistribution ] ] proofs of theorems [ prop.zeroqueue ] , [ thm.fluidlimit ] and [ thm.idledistribution ] ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ ( i ) : : we show that where . note that by we have .note also that .( ii ) : : we show that \\ & & \qquad \exp\left[-n_2 \left(\log\left(1-\frac{m_2}{n_2}\right)+\frac{m_2}{n_2 } \right)\right]\end{aligned}\ ] ] where and are defined in ( [ eqn.meanidle ] ) .( iii ) : : we show that as which proves the proposition .first we calculate we use induction to calculate from to .when , suppose then therefore , the induction is valid and we have next we calculate similar to the induction calculating above , we can obtain therefore , where is a poisson random variable with parameter . using stirling s approximation , recall that and note that .note also that when , can be approximated by a normal distribution with mean and variance .next we analyze in 3 cases depending on . * when , from the normal distribution approximation , when , .therefore , * when , .when , the normal distribution approximation gives . * when , when , the normal distribution approximation gives .we need more care to treat this case . for any , therefore , in fact , for any fixed , when , for any , let .we have .there exists an such that when , for any , therefore , therefore , when , we have since when , we have when , note that is of the order of . therefore , increases exponentially .when , converges to a constant ; when , increases in the order of .therefore , when and , when , we have that therefore , when , this completes the proof that when , first we show that the weak convergence is valid given .then we show that the same holds when , for any fixed . when , we prove the convergence in probability in 2 steps . *we show that for all states , the conditional probability is dominated by a bounded constant multiple of the conditional probability of some point on the boundary of the rectangle . 
*when , we approximate the conditional probability of the points in the rectangle . we then show that the probability of points on the boundary is negligible compared with the conditional probability at .* when and , we have . therefore , ; * when and , we have . therefore , ; * when , and , we have . therefore , ; * when , and , we have . therefore , ; * when , and . as long as , we have .when is large , this requires as long as , we have .when is large , this requires eventually the movement stops at the boundary which are away from .therefore , the probability of any state satisfying or would be dominated by the probability of some point at the boundary . when ] , and grows large , we can use stirling s approximation . where .note that and .define and , we have ] . we define the first order derivatives on and : noting , we can verify that solve the first order conditions .look at the second order derivatives : the hessian matrix is negative definite .therefore , is strictly concave on and reaches its unique global maximum at . since is strictly concave and reaches its unique global maximum at .the maximum of on \times [ \delta , 1-\theta-\delta]\backslash ( f_1-\epsilon , f_1+\epsilon)\times(f_2-\epsilon , f_2+\epsilon)$ ] is on the boundary .since the boundary is a compact set , the maximum is attainable , denoted by , where . to obtain the asymptotic distribution of as we need to consider , by theorem [ thm.fluidlimit ] , only values for which and .we write , with , .note that are of the same order of magnitude as , and we only consider of the same order of magnitude . where the use of stirling s approximation is justified for large . here and .we clearly have : so we can treat that part as a constant .consider then from the taylor expansion of the logarithm function , we have therefore , similar expansions are valid for and .we now use the calculations in section [ sec.fluid ] to evaluate all the coefficients . by ( [ eqn.meanidle ] ) we have therefore , we are left with define we have therefore , given converges in distribution as to the bivariate normal distribution as stated in ( [ eqn.clt ] ) . when , and , similarly , we again write , with , .we then have we can now use the same approximation as for to show that converge to the same bivariate normal distribution . take a fixed arbitrary .fix , for any satisfying , and , from ( [ eqn.piki ] ) , noting for any and , we have therefore , for fixed , note the above inequality is valid for any satisfying , , we have from theorem [ thm.fluidlimit ] , there exists an such that when , then we have , this upper bound can be arbitrarily close to 0 when choosing , and .therefore , we have shown the tightness of , that is using for fixed , when , the ratio is lower bounded by and upper bounded by for any satisfying , and , in addition to ( [ eq : upperbound ] ) , we have the lower bound now we have .\ ] ] therefore , ,\ ] ] that is , .\ ] ] for fixed , as , the lower bound and the upper bound in ( [ eq : bounds ] ) both converge to .noting can be arbitrarily close to 0 , we have this together with the tightness ( [ eq : tightness ] ) proves ( [ eqn.limk ] ) .bell , s. l. , williams , r. j. ( 2001 ) .dynamic scheduling of a system with two parallel servers in heavy traffic with resource pooling : asymptotic optimality of a threshold policy . _the annals of applied probability _ , 11(3 ) , 608 - 649 .ghamami , s. , ward , a. r. 
( 2013 ) .dynamic scheduling of a two - server parallel server system with complete resource pooling and reneging in heavy traffic : asymptotic optimality of a two - threshold policy ._ mathematics of operations research _ , 38(4 ) , 761 - 824 .pinsky , mark .the normal approximation to the hypergeometric distribution .unpublished manusript , https://www.dartmouth.edu/ chance / teaching_aids / books_articles / probability_book / pinsky - hypergeometric.pdf
the n - system with independent poisson arrivals and exponential server - dependent service times under first come first served and assign to longest idle server policy has explicit steady state distribution . we scale the arrival and the number of servers simultaneously , and obtain the fluid and central limit approximation for the steady state . this is the first step towards exploring the many server scaling limit behavior of general parallel service systems .
a locally self - interacting random walk on the set of integers is a sequence of integer - valued random variables with for all , that can be defined inductively as follows : one initializes a `` local time profile '' by first choosing some function that associates to each edge of the lattice a real number ( the set of unoriented edges can be identified to the set or to the set of couples where ) and one chooses the starting point .the simplest choice is of course to set this initial function to be equal to zero on all edges , and to start at the origin ( i.e. , ) with probability one .the law of the walker is then described via some function from to ] ) , and the size of this jump is growing with time .hence , one could a priori naively expect that the walk would not slow down , but on the contrary speed up when this gap widens .this can be explained by the fact that the self - interacting random walk therefore captures some feature of the discrete local time profile that is not visible in the continuous limit .the coming three subsections are devoted to the proof of proposition [ t2 ] .let us first describe in plain words a possible behaviour for our walk ( we shall then prove that indeed , this behaviour occurs with a positive probability ) : the walk starts at the origin , jumps to its right i.e. , and it forever remains positive i.e. for all positive times .in fact , when , goes off to . for each , denote the past maximum of the walk . in our scenario , for all . in other words , for each , either is equal to its past maximum , or it is equal to or .we see that we can therefore decompose the set of times as follows : define for each the hitting time of by , and let .then , in our scenario , during each of the intervals , the walk jumps back and forth on the edges between and .when it stops doing so , it first jumps to , and then jumps back and forth on the edges between and and so on .imagine for a moment that this has happened for a while and that just before ( for a large ) i.e. when , the following loosely defined event is true : the walker has visited each of the edges , and many times , and the number of times it has visited these three edges remain however comparable ( in the sense that the difference between these three numbers of visits is small ) . in particular , immediately before it chooses to jump to for the first time , the value of is rather close to zero ( mind that at that moment , and that because the walk has not visited yet ) so that the probability to indeed jump from to is therefore not too small. then , the walk arrives at for the first time . at that moment , the two edges to its right have not yet been visited , the edge between and ( to its left ) has been visited only once ( because the walker arrives at for the first time ) . on the other hand ,the edge between and has been visited a lot of times .hence , the walk will ( very likely ) jump back to .once it is back at , the probability to jump to or to at that moment is neither very small nor very close to ( because the situation can not drastically change because of the two previous jumps ) . 
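The general definition recalled at the beginning of this section is easy to put in executable form, which may help when picturing the scenario that follows. The Python sketch below simulates a nearest-neighbour walk on the integers whose jump probabilities depend on the local times of the edges adjacent to the current position; the particular rule used here (a logistic function of the local-time gradient) is only an illustrative stand-in and is not the exact function w studied in this paper, whose precise form is lost in this extract.

import math, random
from collections import defaultdict

def simulate_walk(n_steps, prob_right, seed=0):
    # Nearest-neighbour walk on Z; local_time[x] counts the jumps made so far
    # along the edge (x, x+1).  prob_right(local_time, x) gives the probability
    # of jumping from x to x+1 given the current local-time profile.
    rng = random.Random(seed)
    local_time = defaultdict(int)
    x, path = 0, [0]
    for _ in range(n_steps):
        if rng.random() < prob_right(local_time, x):
            local_time[x] += 1
            x += 1
        else:
            local_time[x - 1] += 1
            x -= 1
        path.append(x)
    return path, local_time

def gradient_rule(lt, x, beta=1.0):
    # Illustrative rule only: jump right with a logistic function of the
    # local-time gradient at the current site, so the walk favours the
    # neighbouring edge it has crossed less often (clamped to avoid overflow).
    drift = max(-50.0, min(50.0, beta * (lt[x] - lt[x - 1])))
    return 1.0 / (1.0 + math.exp(drift))

path, lt = simulate_walk(10_000, gradient_rule)
print(path[-1], min(path), max(path))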
note that if the walk then jumps to , the value of will then be very small , so that the walk will want to jump immediately back to .hence , the walk is ( with high probability ) trapped between and for a while .how long does this happen ?well , one should note that each time the walk jumps on or on , it will increase the chance to jump to the right next time it is at .hence , after a short while , the walk will in fact only jump back and forth between and .this will be the case until the number of times at which it has jumped on starts to be comparable with the number of times at which it has jumped on , because the walk will then have a significant chance to move to .hence , we end up in a situation where holds .one important point in this scenario is that in the end , the walk will tend to jump a little more often on than on .so , by a law of large numbers type argument , the total number of visits of the edge will turn out to grow `` linearly '' in as .this will a posteriori justify our assumption ( in the definition of ) that at the time , the walk has visited and many times .we now start to turn the previous ideas into a rigorous proof .let us define a family of independent random variables , where , , .for each , and , the law of the random variable is the following : we can then define ( deterministically ) our random walk using this family of random variables as follows .let and for each , choose inductively where is the -th time at which and .note that when , then is necessarily even ( because and are odd , while and are even ) .we will for the time being forget about this coupling between and the s , and just list a few simple observations concerning this family of random variables .a particular role will be played by the random variables when , and is an integer .we denote them by ( and drop the subscript ) .let us insist on the fact that are not defined for semi - integer s ( as opposed to the s ) .let us also keep in mind that for a given and , when is very large , the probability that and is very close to .we now define certain events : * let denote the event that for all , all and all , * note that almost surely , for each , the number of positive ( respectively , negative ) s for which is positive ( respectively , negative ) is finite ( it follows immediately from the definition that it has a finite expectation ) . is the event that for all , * for all , let us define there is no problem with the definition of for the same reason as above .note that because of symmetry , .the law of large numbers therefore ensures that almost surely we define the event , that for all , ] , the walk did only visit the three sites and .3 . the last jumps of the walk before were all on the edge between and .4 . at the time , the two quantities and differ by not more than , and they are both larger than and smaller than . , width=288 ]assume that , , and that do hold ( and that hold as well ) . for simplicity ,let us first assume that are all equal to when and to when ( we will then later see what difference it makes if for finitely many values of this is not the case ) .because holds , it implies that at the ca . first times at which the walk has been at before time , the value of at those times was greater than ( because was small , was large and ) .furthermore , the value of at the time is very negative ( recall that , that at this time and are comparable , while is greater than and ) . 
hence , our assumption on the s ensures that , and ( note that the used in those steps can not exceed the number of times one visited the sites ) .after these two steps , the value of ( when the walk is at ) has decreased by i.e. , whereas if . hence , we see that the walk will have to jump back and forth a number of times on the edge between and ; this will stop being the case because at some point , the walk will be at and will meet a that is equal to one .because before that , the s ( at the visiting times of ) have been decreasing , we see that this can only happen when one uses a i.e. when one uses ( this is the that is equal to one , for which is the largest ) . from that moment ( call it ) onwards , the walk will start jumping back and forth on the edge between and ( this is because when it is at , it will use s for negative s , and when it is at , it will use for very large values of ) .this will happen until the time . at that time , i.e. the number of times at which the walk jumped on is equal to the number of times the walk jumped on , which ensures that in this very particular situation , indeed holds . what is the difference due to those s that are equal to if or to if ?the first remark is that all these particular s will indeed be used in our process ( note that is decreasing two by two at each visit of , and that it starts from a very positive value and become very negative , and therefore has to use all s for $ ] .each time it meets a positive such that , it jumps to the right instead of jumping to the left . at the end of the day ( i.e. at ) this will mean that will be diminished by two .conversely , if it meets a non - negative such that , this will add to the number of jumps on .hence , we see that in our `` real '' case , and that will still hold because of our assumptions on the sum of the s . as a consequence, we see that on the event of positive probability , all s hold .this implies in particular that for each , the walk will not come back to the site after i.e. that is in fact the total number of times the walk does jump on the edge .recall that and that almost surely , as , so that on our event of positive probability , .it follows that on our event of positive probability , so that ( recall that between and we know that is equal to , or ) and as . this concludes the proof of proposition [ t2 ] .we now focus on an example that exhibits yet another possible asymptotic behaviour for the walk : where and we then choose as before .a naive first guess would be that this walk is self - attractive ( it is `` driven '' by the positive gradient of its local time ) and that it should get stuck .however : [ p4 ] with positive probability , as .again , the proof shows that this behaviour is valid for a wider class of self - interactions .in fact , it is quite similar to the case ( with ) studied in the previous subsections .the main difference is that ( in the `` good '' scenario that we will describe ) the walk will visit approximatively twice more the edge than the edge for large .since the proof is otherwise almost identical to the previous case , we will only describe in plain words this `` scenario '' , and leave the details of the proof to the interested reader .suppose that for some particular large time : * the walk is at its past maximum : .we call this site , and for all , we define , and to be the respective values of at , and . * the values of , and satisfy : \hbox { and } w_0 \in [ v_0 /2 , v_0 ] .\ ] ] we also suppose that the value of is very large . 
note that under these assumptions , it very likely that and , i.e. that the walk will jump back and forth along the edge between and , because is very large , while is negative and has a large absolute value .a quick analysis shows that ( with high probability ) the walk will jump back and forth on this edge until the negative drift at stops being huge , and the walk will then jump to for the first time .when this happens , this means that at this time , and are comparable. then , for some time , it will jump on the three sites , and , but while doing so , the drift at ( i.e. , that it feels when it is at ) grows fast , so that it will quickly be forced to jump along the edge between and only .we then let the first time at which is greater than and note that satisfies the same conditions as thos we required for .note also that once is found , the scenario is very likely to hold until when is large ( its conditional probability goes to very quickly with as gets larger ) . from this , it follows easily that with positive probability , the `` good scenario '' will indefinitely repeated .then , clearly the total number of visits to the edge grows like as , and the proposition follows . to conclude ,let us finally mention without proof a much less surprising possible asymptotic behaviour , namely the ballistic one .the following pictures correspond to the trajectory and local time for the case where where that could seem at first glance to be again one of the walks driven ( like tsrw ) by the negative gradient of .this example , like many others in the present paper illustrates how sensitive the discrete model is to little shifts in the definition of the driving dynamics .let us conclude with a list of ( possibly accessible ) open problems related to the questions that we have just discussed : 1 .prove the scaling behaviour of the walk in the `` tsrm regime '' ( i.e. where and and the stationary distribution makes sense ) , or even the convergence to tsrm . or construct anote related markovian model ( for instance with another function ) where one can prove this ? 2 .describe in some detail the `` actual '' ( i.e. , the one that actually dominates ) trapping strategy in the `` trapped case '' and .get some ( even partial ) description of the dynamics in the `` slow phase . ''does the qualitative behaviour depend only on ? 4 . in the case where the walk goes deterministically to infinity ,are other scaling behaviours than , and possible ? 5 . improve some of the results to almost sure statements ( instead of `` with positive probability '' ) .is it possible that for some choice of the parameters in such self - interacting random walks with finite range , the qualitative asymptotic behaviour is actually not almost sure ( i.e. can two different behaviours that exist with positive probability exist ? ) .do qualitatively really new asymptotic behaviours arise when one considers larger ( but finite ) self - interaction ranges? * acknowledgements . *bt thanks the kind hospitality of ecole normale suprieure , paris , where part of this work was done .the research of bt is partially supported by the hungarian national research fund , grant no .ww s research was supported in part by anr-06-blan-00058 .the cooperation of the authors is facilitated by the french - hungarian bilateral mobility grant balaton/02/2008 .

we study certain self - interacting walks on the set of integers , that choose to jump to the right or to the left randomly but influenced by the number of times they have previously jumped along the edges in the finite neighbourhood of their current position ( in the present paper , typically , we will discuss the case where one considers the neighbouring edges and the next - to - neighbouring edges ) . we survey a variety of possible behaviours , including some where the walk is eventually confined to an interval of large length . we also focus on certain `` asymmetric '' drifts , where we prove that with positive probability , the walks behave deterministically on large scale and move like or like . _ dedicated to erwin bolthausen on the occasion of his 65th birthday _
solar and stellar irradiance records are often plagued by data gaps .the proper interpolation of these missing data is a longstanding and notoriously delicate problem that requires a good understanding of the data .considerable attention has been given to this problem in fields such as climate science but much less so in solar physics and in astrophysics .often , the limited attention that is paid to data gaps contrasts with the sophistication of the analysis that is subsequently performed on these data .while short gaps can easily be filled by linear or by nonlinear interpolation , data gaps whose duration exceeds the characteristic time scales are much more difficult to handle .a notable exception is when multichannel synoptic observations of the same process are available , with gaps in some or in all of them .spectral irradiance observations , which we shall concentrate on , precisely belong to that category .our examples will be taken from the sun , but the results can be easily extended to other types of multichannel observations .our method applies to any set of observations that are recorded simultaneously ( i.e. the time stamps are the same for all records ) , are correlated with each other , and whose time intervals fully or partly overlap .our main assumption is their linear correlation , in the sense that each record can be approximated by a linear combination of the other ones .a strong linear correlation is typically observed between spectral irradiance observations made at different wavelengths or between simultaneous measurements of different proxies .these synoptic records are frequently used to assess subtle changes in the variability of the sun ; they are often remarkably coherent in time and in wavelength . as a consequence, their variability can be explained in terms of a few contributions only .this property is well known for the extreme ultraviolet ( euv ) but also for the visible range , when measured from space .the same coherency is observed among different proxies for solar activity .this property is rooted in the structuring effect of the solar magnetic field .the coherency partly breaks down during the impulsive phase of solar flares because the spectrum then considerably depends on the local conditions of the solar atmosphere . here, however , as in many applications , we consider daily or hourly averages , so that the effect of short transients can be discarded .this coherency in both time and wavelength is the key to the reconstruction technique we shall introduce below . by interpolating along two dimensions ( in time and along different records ) ,we not only improve the quality of the reconstruction , but we also can fill arbitrarily large data gaps without having to rely on the tedious bookkeeping that is required by most interpolation schemes . the nonparametric and data - adaptive method we advocate is based on the svd or singular value decomposition , which is to linear algebra what the fourier transform is to spectral analysis .the svd allows the extraction of the coherent part of the solar spectral irradiance , which is then used to fill the data gaps iteratively .the method is described in section 2 , and two applications are detailed .the first one ( sec .3 ) deals with solar spectral irradiance data in the euv . 
in the second application ( sec .4 ) we consider a set of solar proxies with numerous gaps .let be a multichannel record that represents either the solar spectral irradiance at different wavelengths ( or in different spectral bands ) or a set of solar proxies , or a combination thereof .all these quantities must be sampled simultaneously ; the sampling rate , however , does not need to be constant .these data are conveniently stored in a matrix ] .the computation of the svd at each iteration typically takes several seconds .for that reason , it may be desirable to process separately those spectral bands that evolve differently , such as the soft x - ray , the euv and the muv bands .the routine in matlab is available from the author .the solar euv monitor ( sem ) is a solar extreme ultraviolet ( euv ) spectrometer that has been operating continuously on the soho satellite since january 1996 . in its first - order mode , sem measures the irradiance within an 8 nm bandpass centred about the bright 30.38 nm he ii line . on june 25 , 1998 , soho suffered a mission interruption , leading to the loss of several months of data .this long data gap considerably complicates the use of sem data for upper atmosphere model validation .the sem , however , mostly captures chromospheric emissions , which are highly correlated with other gauges of solar activity .foremost among these are : * the or decimetric index , which is the solar radio flux at 10.7 cm .this index , which is measured from the ground , captures a mix of thermal and electron gyro - resonance emissions , and has been shown to be highly correlated with the euv flux . * the mg ii index , which is the core - to - wing ratio of the mg ii line at 280 nm .this index is widely used as a proxy for chromospheric activity . *the intensity of the h i lyman line at 121.57 nm , which is the brightest spectral line below 200 nm . together with the flux from the sem , we have four quantities that have different physical origins and yet are highly correlated , thereby opening the prospect of filling the large gaps in the sem data .we consider daily averages made from january 1 , 1996 until april 29 , 2011 .the linear correlation between the index and the other proxies improves when taking its square root , which we shall systematically do from now on .the correlation between these four proxies on both long and short time - scales is illustrated in fig .[ fig_sem_excerpt ] .our working hypothesis is that each of the missing samples from the sem can be reconstructed from a linear combination of ( possibly non - simultaneous ) observations of the other proxies . as we shall see shortly ,the best value of the embedding dimension is 4 ; let us therefore select and first determine the optimum number of modes . with four variables and an embedding dimension of 4 , the total number of svd modes is 16 ; their weights are displayed in fig .[ fig_sem_k ] .the first weight surpasses all the others because the first mode is an average of all four proxies , which is by far the most conspicuous coherent feature .the inflexion point between the few heaviest weights and the flat tail provides a convenient but visual criterion for determining the number of significant modes . according to this criterion , the best interpolation skill is for modes out of 16 . 
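The gap-filling routine itself fits in a few lines. The Python sketch below is one standard way of implementing iterative SVD imputation and is an illustration of ours, not the author's MATLAB routine: missing entries (NaNs) are initialised with the mean of each record, the data matrix is rebuilt from its leading modes, and the missing entries are overwritten with the low-rank reconstruction until the values stop changing. A small helper builds the time-embedded matrix with lagged copies of each record; records are assumed to be on comparable scales (in practice each column would first be standardised).

import numpy as np

def embed(X, d):
    # Augment the (time x records) matrix with d lagged copies of each record,
    # so the SVD can also exploit correlations in time (d = 1 means no embedding).
    # After filling, the lag-0 columns out[:, ::d] give one estimate of each record.
    X = np.asarray(X, dtype=float)
    n, m = X.shape
    out = np.full((n, m * d), np.nan)
    for lag in range(d):
        out[lag:, lag::d] = X[: n - lag]
    return out

def svd_fill(X, n_modes, n_iter=100, tol=1e-8):
    # Iteratively fill NaNs in X using its leading n_modes SVD modes.
    X = np.asarray(X, dtype=float)
    missing = np.isnan(X)
    filled = np.where(missing, np.nanmean(X, axis=0), X)
    for _ in range(n_iter):
        U, s, Vt = np.linalg.svd(filled, full_matrices=False)
        low_rank = (U[:, :n_modes] * s[:n_modes]) @ Vt[:n_modes]
        new = np.where(missing, low_rank, X)
        if np.max(np.abs(new - filled)) < tol:
            return new
        filled = new
    return filled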
a better validation test consists in generating a small number of synthetic gaps , reconstructing them , and then checking how the residual error varies with the model parameters .to do so , we remove 5 - 10 % of the samples from each record and then compute the normalised error where the average is computed for synthetic gaps only .this procedure is repeated ten times to obtain an estimate of the average value of the normalised error .a value of 100 % can be interpreted as an error whose standard deviation equals the solar cycle variability of the original data .this value truly reflects the error made by filling short data gaps .note that it tends to underestimate the error for larger gaps , unless the length distribution of the synthetic gap matches that of the original data .the evolution of the normalised error with is illustrated in fig .[ fig_sem_k ] , which shows a broad minimum around , in agreement with the estimate obtained by visualisation .note that the four minima occur at different values of .the normalised error is on average larger for the index , which suggests that this quantity is relatively more difficult to reconstruct than the others .this is not so surprising , because it is the only emission from the radio band .the smallest normalised error is obtained for the sem , with .this value is about half that of the estimated normalised uncertainty , which shows the excellent quality of the reconstruction . in practice , the optimum value of is frequently found to be one or two units higher than the value obtained by visual inspection . as fig .[ fig_sem_k ] suggests , an overestimation of is preferable to an underestimation .the choice of the embedding dimension is mostly based on physical insight . with ( i.e. no embedding ) we assume that the missing samples are reconstructed from simultaneous observations only , whereas implies that the information contained in past and future observations is also used . setting involves a weighted averaging over time , which is appropriate for records whose samples are highly correlated in time . in fig .[ fig_sem_d ] we estimate the normalised error for different embedding dimensions , using the optimum number of modes for each of them .the smallest error is obtained for an embedding dimension of .larger dimensions hardly reduce the error but do increase the computational load substantially. 
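The validation procedure just described translates directly into code; the Python sketch below reuses the svd_fill helper from the previous sketch and is again only an illustration. A random fraction of the observed samples of each record is masked, reconstructed, and compared with the withheld values; the error of each record is normalised here by its standard deviation, which stands in for the solar-cycle variability used in the text.

import numpy as np

def validation_error(X, n_modes, frac=0.05, n_trials=10, seed=0):
    # Average normalised reconstruction error over synthetic gaps.
    rng = np.random.default_rng(seed)
    X = np.asarray(X, dtype=float)
    errors = np.zeros((n_trials, X.shape[1]))
    for trial in range(n_trials):
        mask = np.isfinite(X) & (rng.random(X.shape) < frac)
        reconstructed = svd_fill(np.where(mask, np.nan, X), n_modes)
        for j in range(X.shape[1]):
            m = mask[:, j]
            rmse = np.sqrt(np.mean((reconstructed[m, j] - X[m, j]) ** 2))
            errors[trial, j] = rmse / np.nanstd(X[:, j])
    return errors.mean(axis=0)

# Typical use: scan the number of modes and keep the minimiser, e.g.
# best_k = min(range(1, 17), key=lambda k: validation_error(X, k).mean())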
as expected , the higher the value of , the smoother the reconstruction and the more likely that fine features may be missed .this is particularly evident in august 1998 , when a group of rapidly evolving active regions were moving across the solar disc .an embedding dimension of 4 properly captures their evolution , whereas a dimension of 15 smears out all but the most pronounced peaks .this example illustrates a relatively simple case because only one record has gaps in it .let us now , however , consider a more frequent case in which several of the records have large gaps .filling these gaps by standard interpolation schemes can become very time - consuming because of the amount of bookkeeping that is required to test whether gaps occur simultaneously in several records , etc .the svd - based interpolation does not require any of these tests .the ca k index is the normalised intensity of the ca ii k - line at 393.37 nm and has been advocated as a proxy for magnetic activity , including plages , faculae , and the network .this line is measured from the ground , so it can not be observed continuously .here we consider a record of daily observations made at the national solar observatory at sacramento peak , in which about 66% of the samples are missing .this index is known to be highly correlated with other solar indices , in particular with the mg ii index , so that the svd method is ideally suited for filling its gaps . to reconstruct the missing values, we consider the following set of proxies that are highly correlated with the ca k index : the square root of the index , the intensity of the h i lyman line , the mg ii index and the magnetic plage strength index ( mpsi ) .the time interval ranges from nov . 1, 1980 to july 1 , 2010 ; all proxies have data gaps except for the first two .these gaps occur erratically and 6% of them exceed 10 days . in this particular example, the coherency between proxies is crucial and indeed the choice of the embedding dimension does not significantly affect the results .let us take , which is the value that is recommended by the reconstruction error .the maximum number of svd modes is 10 because we have five records . out of these ,three only are found to be significant .the result of the reconstruction is illustrated in fig .[ fig_cak1 ] for periods of high and low solar activity .note that the results obtained with different number of modes lead to similar temporal evolutions .the reconstruction at solar maximum looks reasonable because it passes through the observations while staying highly correlated with the mg ii index . during solar minimum , however , the observed values of the ca k index continue to fluctuate whereas the reconstructed values and the other proxies stay almost constant .the difference between the observed ca k index and the smoothly varying reconstruction varies randomly in time , which questions its solar origin . to further investigate the origin of this difference between the observed and reconstructed index , we filtered the reconstructed data with the _ trous _ wavelet transform , which allows the separation of the sharp peaks from the more regular reconstruction .the residuals , i.e. 
the difference between the filtered reconstruction and the original observations , are shown in fig .[ fig_cak2 ] : they are found to be independent and their gaussian distribution only weakly varies with the solar cycle .this is a strong indication that the residuals are measurement errors rather than solar fluctuations .their standard deviation is 0.0008 , which represents 20 % of the solar cycle variability of the ca k index .our reconstruction thereby provides a means for fitting the numerous data gaps in the ca k index while also evaluating the confidence interval of the observations .this study shows that svd - interpolation is a powerful technique for filling arbitrarily large gaps in multi - wavelength , multichannel or in synoptic records .we focused here on solar spectral irradiance observations , which are frequently plagued by missing data .these gaps may be distributed at random in time or in wavelength .the main tuneable parameter is the number of svd modes that is needed to reconstruct the data ; this value may be estimated either by visualisation or by cross - validation .the method works best when each record can be approximated by a linear combination of the others .since it relies on linear combinations only , it may be desirable to apply a nonlinear static transform beforehand to increase the linear correlation between the records . for the method to work ,the observations must be sampled simultaneously but not necessarily evenly .non - simultaneous observations can be handled by resampling all variables to a common grid , for example by fourier decomposition ( e.g. * ? ? ?* ) , and then filling the gaps by svd . by alternating between the two ,both the gaps and the interpolated values can be progressively refined .this method has several applications in addition to mere interpolation .the first one is the cross - calibration of measurements of the same quantity by different instruments .the mg ii index , for example , is at present measured by different instruments that give different amplitudes .these data sets are incomplete and only partly overlap , which considerably impairs their inter - comparison . the iterative svd method is ideally suited for filling these gaps because the records are by definition strongly correlated .a second potential application is the stitching together of total solar irradiance ( tsi ) observations . 
merging tsi records from several instruments is a delicate and controversial task because instruments disagree on the absolute value of the tsi and often do not operate simultaneously . the iterative svd provides a means for estimating the different offsets in a self - consistent way because it allows us to extrapolate each tsi record by assuming that its statistical properties with respect to the other records do not change in time . this property is particularly useful for checking composites that are built from different records , such as the tsi , the h i lyman intensity , the mg ii index and the sunspot index . this will be detailed in a forthcoming publication . i thank the following institutes for providing the data : the laboratory for atmospheric and space physics ( university of colorado ) for the mg ii and h i lyman composites , the national solar observatory at sacramento peak ( data produced cooperatively by nsf / noao , nasa / gsfc and noaa / sec ) for the ca k index , the mount wilson observatory ( operated by ucla , with funding from nasa , onr and nsf , under agreement with the mt . wilson institute ) for the mpsi index and the space sciences center ( university of southern california ) for the sem data . this study received funding from the european community 's seventh framework programme ( fp7/2007 - 2013 ) under grant agreement nr . 218816 ( soteria project , http://www.soteria-space.eu ) .
data gaps are ubiquitous in spectral irradiance data , and yet , little effort has been put into finding robust methods for filling them . we introduce a data - adaptive and nonparametric method that allows us to fill data gaps in multi - wavelength or in multichannel records . this method , which is based on the iterative singular value decomposition , uses the coherency between simultaneous measurements at different wavelengths ( or between different proxies ) to fill the missing data in a self - consistent way . the interpolation is improved by handling different time scales separately . two major assets of this method are its simplicity , with few tuneable parameters , and its robustness . two examples of missing data are given : one from solar euv observations , and one from solar proxy data . the method is also appropriate for building a composite out of partly overlapping records .
mean - variance hedging is one of the classical problems from mathematical finance . in financial terms , its goal is to minimize the mean squared error between a given payoff and the final wealth of a self - financing strategy trading in the underlying assets .mathematically , one wants to project the random variable in on the space of all stochastic integrals , perhaps after subtracting an initial capital .the contribution of our paper is to solve this problem via stochastic control methods and stochastic calculus techniques for the case where the asset prices are given by a general ( locally -square - integrable ) semimartingale , under a natural no - arbitrage assumption .the literature on mean - variance hedging is vast , and we do not try to survey it here ; see for an attempt in that direction .there are two main approaches ; one of them uses martingale theory and projection arguments , while the other views the task as a linear - quadratic stochastic control problem and uses backward stochastic differential equations ( bsdes ) to describe the solution . by combining tools from both areas ,we improve earlier work in two directions we describe the solution more explicitly than by the martingale and projection method , and we work in a general semimartingale model without restricting ourselves to particular setups ( like it processes or lvy settings ) .we show that the value process of the stochastic control problem associated to mean - variance hedging possesses a quadratic structure , describe its three coefficient processes by semimartingale bsdes and show how to obtain the optimal strategy from there .in contrast to the majority of earlier contributions from the control strand of the literature , we also give a rigorous derivation of these bsdes . for comparison ,the usual results ( especially in settings with it processes or jump - diffusions ) start from a bsde system and only prove a verification theorem that shows how a solution to the bsde system induces an optimal strategy . apart from being more precise , we think that our approach is also more informative since it shows clearly and explicitly how the bsdes arise , and hence provides a systematic way to tackle mean - variance hedging via stochastic control in general semimartingale models .more detailed comparisons to the literature are given in the respective sections .the paper is structured as follows .we start in section [ sec1 ] with a precise problem formulation and state the martingale optimality principle for the value process of the associated stochastic control problem . assuming that each ( time ) conditional problem admits an optimal strategy ,we then show that is a quadratic polynomial in whose coefficients are stochastic processes that do not depend on .this is a kind of folklore result , and our only claim to originality is that we give a very simple proof in a very general setting .we also show that the coefficient equals the value process for the control problem with initial value and . 
motivated by the last result , we study in section [ sec2 ] the particular problem for and .we impose the no - arbitrage condition that there exists an equivalent -martingale measure for with -square - integrable density and are then able to characterize the process as the solution of a semimartingale bsde .more precisely , theorem [ theorem2.4 ] shows that all conditional problems for admit optimal strategies if and only if that bsde ( [ equ2.18 ] ) has a solution in a specific class , and in that case , the unique solution is and the conditionally optimal strategies can be given in terms of the solution to ( [ equ2.18 ] ) . in comparison to earlier work, we eliminate all technical assumptions ( like continuity or quasi - left - continuity ) on , and we also do not need reverse hlder inequalities for our main results . section [ sec3 ] considers the general case of the mean - variance hedging problem with and .the analog of theorem [ theorem2.4 ] is given in theorem [ theorem3.1 ] , where we describe the three coefficient processes by a coupled system ( [ equ3.1])([equ3.3 ] ) of semimartingale bsdes .existence of optimal strategies for all conditional problems for is shown to be equivalent to solvability of the system ( [ equ3.1])([equ3.3 ] ) , with solution , and we again express the conditionally optimal strategies in terms of the solution to ( [ equ3.1])([equ3.3 ] ) . as mentioned above , this is stronger than only a verification result . in section [ sec4 ] ,we provide equivalent alternative versions for our bsdes which are more convenient to work with in some examples with jumps .this also allows us to discuss in more detail the connections to the existing literature .finally , section [ sec5 ] illustrates the use of our results and gives further links to the literature by a number of simple examples .we start with a finite time horizon and a filtered probability space with the filtration satisfying the usual conditions of right - continuity and .let be an -valued rcll semimartingale , and denote by the space of all predictable -integrable processes , for short , such that the stochastic integral process is in the space of semimartingales .our basic references for terminology and results from stochastic calculus are and . for and , the problem of _ mean - variance hedging _( _ mvh _ ) is to \qquad \mbox{over all .}\ ] ] the interpretation is that models the ( discounted ) prices of risky assets in a financial market containing also a riskless bank account with ( discounted ) price 1 .an integrand together with then describes a self - financing dynamic trading strategy with initial wealth , and stands for the ( discounted ) payoff at time of some financial instrument . by using , we generate up to time via trading a wealth of , and we want to choose in such a way that we are close , in the -sense , to the payoff . we embed this into a _ stochastic control problem _ and define for and ] .our goal is to study the _ dynamic value family _\\[-8pt ] & = & { \mathop{\operatorname{ess}\operatorname{inf}}}_{\vartheta\in\theta } e\biggl [ \biggl ( h - x - \int_t^t \vartheta_r{\,d}s_r \biggr)^2 \bigg| { { \mathcal f}}_t \biggr],\qquad t\in[0,t ] , \nonumber\end{aligned}\ ] ] in order to describe the optimal strategy for the mvh problem ( [ equ1.1 ] ). observe that with this notation , we have the identity \!]}t , t{]\ ! ] } } \bigr ) = { v^h}_u\bigl ( x , \psi i_{{]\!]}t , u{]\ ! ] } } \bigr)\ ] ] for . 
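For reference, the optimisation problem and the dynamic value family just introduced can be written compactly as follows. This is only a restatement in the notation of the surrounding text (the admissible set of integrands and the filtration are as defined there), together with the quadratic structure announced in the introduction and established in theorem [theorem1.4]; the sign convention in the last display is the one suggested by the quadratic expansion used in its proof.

```latex
% mean-variance hedging problem and dynamic value family (restated)
\[
  \text{minimize}\quad
  E\!\left[\Bigl(H - x - \int_0^T \vartheta_r\,\mathrm dS_r\Bigr)^{2}\right]
  \quad\text{over all }\vartheta\in\Theta ,
\]
\[
  V^{H}_t(x)
  = \operatorname*{ess\,inf}_{\vartheta\in\Theta}
    E\!\left[\Bigl(H - x - \int_t^T \vartheta_r\,\mathrm dS_r\Bigr)^{2}
    \,\Big|\,\mathcal F_t\right],
  \qquad 0\le t\le T .
\]
% quadratic structure of the value process (theorem 1.4)
\[
  V^{H}_t(x) = V^{(0)}_t - 2\,V^{(1)}_t\,x + V^{(2)}_t\,x^{2},
  \qquad V^{(2)}_t = V^{0}_t(1).
\]
```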
because the family of random variables \ ] ] for is closed under taking maxima and minima ,we have the classical _ martingale optimality principle _ in the following form ; see , for instance , for the general theory , or for a formulation closer to the present one .[ prop1.1 ] fix . for every and ] is optimal for .for the special case , the fact that is a cone immediately gives = x^2 { v^0}_t(1).\ ] ] this holds for any random variable .so proposition [ prop1.1 ] almost directly gives : [ cor1.2 ] for every ] , set and define \!]}t , t{]\!]}} ] is optimal for so that \\ & & \qquad \le i_{d_t } e\biggl [ \biggl ( 1+\vartheta^*{\cdot}s_t + \int_t^t \varphi _r{\,d}s_r \biggr)^2 \bigg| { { \mathcal f}}_t \biggr ] = 0\end{aligned}\ ] ] by the definitions of and .this yields again by the definition of , and so we get ( [ equ1.4 ] ) . as in propositiona.2 of or theorem 2.28 of , we also obtain : [ prop1.3 ] fix . for every , ] of random variables can be aggregated into an rcll process , which we again call in the sequel , we always choose and work with the rcll versions from proposition [ prop1.3 ] . for easier discussion of the next result, we introduce some more terminology .we denote by the ( a priori possibly empty ) set of all probability measures equivalent to on such that is a --martingale and . assuming that is nonempty is one way of imposing _ absence of arbitrage _ for our financial market and also fits naturally with the fact that our basic problem is cast in quadratic terms .the density process of with respect to is denoted by , and we say that satisfies the reverse hlder inequality if there is a constant with \le c ( { z^q}_\tau)^2 ] , ( [ equ1.2 ] ) has a solution for every .suppose also that for any , a.s .implies that in . then each is of the affine form and each has the quadratic form for rcll processes not depending on .moreover , is the solution of ( [ equ1.3 ] ) , and the quadratic coefficient equals from ( [ equ1.3 ] ) and does not depend on .fix ] of and by its closure in .since the problems ( [ equ1.2 ] ) with payoff for and have solutions ( which are given by projections ) , so does problem ( [ equ1.2 ] ) for and payoff by taking differences , and the latter problem is identical to ( [ equ1.2 ] ) for so that .both here and in the next argument , we exploit our assumption that a self - financing strategy is uniquely determined by its wealth process . if is the projection in on , then clearly and so ( [ equ1.5 ] ) follows with and gives \\ & = & e\biggl [ \biggl ( h - \int_t^t\vartheta^{0,t}_r{\,d}s_r - x\biggl ( 1+\int_t^t \vartheta^{1,t}_r{\,d}s_r \biggr ) \biggr)^2 \bigg| { { \mathcal f}}_t \biggr],\end{aligned}\ ] ] and hence we directly obtain the expression ( [ equ1.6 ] ) with , \\{ v^{(1)}}_t & = & e\biggl [ \biggl ( h-\int_t^t { \vartheta^{*,t}}_r(0 , h ) { \,d}s_r \biggr ) \biggl ( 1+\int_t^t { \vartheta^{*,t}}_r(1,0 ) { \,d}s_r\biggr ) \bigg| { { \mathcal f}}_t \biggr]\end{aligned}\ ] ] and = { v^0}_t(1).\ ] ] since the families \} ] -a.s .for each ] for all .\(3 ) by part ( 1 ) of proposition [ prop1.1 ] , is a -submartingale , hence a semimartingale , and so are and by ( 1 ) and ( 2 ) . 
because by theorem [ theorem1.4 ] , also is then a -special semimartingale .moreover , for all due to ( [ equ1.2 ] ) implies that , \qquad 0\le t\le t,\ ] ] by ( 1 ) and ( 2 ) so that is of class ( d ) .the rest of part ( 3 ) is then clear .in this section , we give a description of ( the rcll version of ) the value process , \qquad0\le t\le t,\ ] ] of the problem ( [ equ1.3 ] ) . since this is by ( [ equ1.7 ] ) and theorem [ theorem1.4 ] the _ quadratic _ coefficient in the representation ( [ equ1.6 ] ) , we use in this section the shorter notation we also remark that coincides with the _ opportunity process _ from , although the latter is defined there with a different space of integrands for .let us first prove strict positivity of , as well as of . [ lem2.1 ] suppose .then and are both strictly positive , in the sense that and for = 1 ] .then the bayes rule gives \over e[z_t^2 ] } , \\\label{equ2.3 } { z^{r;q}}_t & : = & { dr\over dq}\bigg|_{{{\mathcal f}}_t } = { e_q[z_t | { { \mathcal f}}_t ] \over e[z_t^2 ] } = { 1\over z_t } { z^{r;p}}_t.\end{aligned}\ ] ] using the bayes rule and ( [ equ2.2 ] ) , jensen s inequality , again the bayes rule and ( [ equ2.3 ] ) yields \\ & & \qquad = { z^{r;p}}_t e[z_t^2 ] e_r\biggl [ ( z_t^2)^{-1 } \biggl ( 1+\int_t^t \vartheta _r{\,d}s_r \biggr)^2 \bigg| { { \mathcal f}}_t \biggr ] \\ & & \qquad \ge { z^{r;p}}_t e[z_t^2 ] \biggl ( e_r\biggl [ ( z_t)^{-1 } \biggl ( 1+\int_t^t \vartheta_r{\,d}s_r \biggr ) \bigg| { { \mathcal f}}_t \biggr ] \biggr)^2 \\ & & \qquad = { z^{r;p}}_t e[z_t^2 ] \biggl ( ( { z^{r;q}}_t)^{-1 } e_q\biggl [ ( e[z_t^2])^{-1 } \biggl ( 1+\int_t^t \vartheta_r{\,d}s_r \biggr ) \bigg| { { \mathcal f}}_t \biggr ] \biggr)^2.\end{aligned}\ ] ] but as already noted before theorem [ theorem1.4 ] , is a -martingale whenever and .so we get by using ( [ equ2.3 ] ) and ( [ equ2.2 ] ) that \ge { { z^{r;p}}_t e[z_t^2 ] \over ( { z^{r;q}}_t e[z_t^2])^2 } = { z_t^2 \over e[z_t^2 | { { \mathcal f}}_t]},\ ] ] and the first assertion follows since -a.s . by the minimum principle for supermartingales and < \infty ] and such that the product of and the density process of with respect to is a --martingale .we call _ variance - optimal _ if for all , and we say that the _ variance - optimal martingale measure _ ( _ vomm _ ) exists if is variance - optimal .( in particular , is then by definition equivalent to . )if is continuous , theorem 1.3 of shows that is sufficient for the vomm to exist ; but if can have jumps , the situation is more complicated. the dynamic problem of finding the vomm has the value process then we have the following direct connection to and ( [ equ2.1 ] ) .[ prop2.2 ] suppose and that the vomm exists. then . we know from ( [ equ2.4 ] ) in the proof of lemma [ lem2.1 ] that for and , \ge 1 / e [ ( { z^q}_t / { z^q}_t ) ^2 | { { \mathcal f}}_t ] , \qquad 0\le t\le t.\ ] ] taking the ess inf over and the ess sup over implies that .conversely , since , the martingale optimality principle in corollary [ cor1.2 ] gives ,\qquad 0\le t\le t,\ ] ] for every .but if we define , as in , then is by corollary 2.9 of the closure of in , and this allows us to extend ( [ equ2.5 ] ) to every . 
indeed , for a sequence in with in , the right - hand side of ( [ equ2.5 ] ) for converges in to the right - hand side of ( [ equ2.5 ] ) for , and because we have ] .applying ( [ equ2.5 ] ) with and using the bayes rule therefore gives \ge ( z^{{\widetilde q}}_t)^2 ( e_{{\widetilde q } } [ z^{{\widetilde q}}_t | { { \mathcal f}}_t ] ) ^2 { v^0}_t(1 ) = ( e [ ( z^{{\widetilde q}}_t ) ^2 | { { \mathcal f}}_t ] ) ^2 { v^0}_t(1)\ ] ] and hence this completes the proof . for experts on mean - variance hedging ,proposition [ prop2.2 ] is also a kind of folklore result . for the case where the filtration is continuous ,it can , for instance , be found in proposition 4.2 of ( with the remark that it extends to general if is continuous ) .but we do not know a reference for the level of generality given here .henceforth , we often use the following simple fact : in ( [ equ2.6 ] ) , the ( right ) superscript denotes the compensator or dual predictable projection .this should not be confused with the predictable projection of a process which is denoted by , with a left superscript .the most frequent application of ( [ equ2.6 ] ) will be for ] when is a locally square - integrable local martingale ._ in the sequel _ , _ we focus on the case so that is one - dimensional ._ one can obtain analogous results for ( and we shall comment on this later ) , but the arguments and formulations look more technical without providing extra insight . when so that is in particular a -special semimartingale , we write for its -canonical decomposition and note that and is predictable and of locally square - integrable ( or even locally bounded ) variation .if we also have , then it is well known that satisfies the so - called structure condition , that is , that has the form with and ; see theorem 1 of .this implies that = \biggl [ \int\lambda{\,d}{\langle m\rangle}\biggr ] = \sum(\lambda_s \delta{\langle m\rangle}_s)^2 = ( \lambda^2 \delta{\langle m\rangle}){\cdot}{\langle m\rangle}\ll { \langle m\rangle}.\ ] ] because is predictable , ] , and if ] ] is in ( and hence has a compensator ) is , for instance , satisfied if is bounded , hence in particular for . in the context of the equations we study , the operation in ( [ equ2.12 ] )can sometimes be simplified .if is continuous , then so are .}\ ] ] in general , we still have \!]} ] is in and hence also .but ] by ( [ equ2.6 ] ) and ( [ equ2.8 ] ) , and so ) ^{{{\bolds{\mathsf p } } } } } - \delta(1+\lambda^2\delta{\langle m\rangle } ) { \cdot}{\langle m\rangle}\\ & = & \int\biggl ( { d { ( { q}{\cdot}[s ] ) ^{{{\bolds{\mathsf p } } } } } \over d{\langle m\rangle } } - \delta(1+\lambda ^2\delta{\langle m\rangle } ) \biggr ) { \,d}{\langle m\rangle}\in{{\mathcal a}}^+_{{\mathrm{loc}}}(p).\end{aligned}\ ] ] writing and = [ { q } , [ s ] ] ] . 
for the final assertion ,note that the preceding proof shows that so that the nonnegative process is prelocally bounded .since is like predictable , it is therefore by , remark viii.11 also locally bounded , and this means that is locally bounded away from 0 .if , both ] and .we then take a predictable with and define the matrix - valued predictable process by ] ^{{{\bolds{\mathsf p}}}}}_t \over db_t } , \qquad 0\le t\le t.\ ] ] analogously to lemma [ lem2.3 ] , one can then prove that recalling the notation ( [ equ2.12 ] ) , we now consider the _ backward equation _ _ solution _ of ( [ equ2.18 ] ) is a triple , where is a local -martingale which is strongly -orthogonal to , is in and is a semimartingale with ] \in{{\mathcal a}}_{{\mathrm{loc}}}(p) ] , there exists an optimal strategy for ( [ equ1.2 ] ) with .b. there exists a solution to the bsde ( [ equ2.18 ] ) having , , bounded and strictly positive and such that for every ] and .( we sometimes omit writing the dependence of on . )this gives .\ ] ] next we apply the product rule with ( [ equ2.22 ] ) , ( [ equ2.20 ] ) , ( [ equ2.7 ] ) , ( [ equ2.21 ] ) and then use and + [ { q } , [ s ] ] = ( { q}_-+\delta{q}){\cdot}[s ] = { q}{\cdot}[s ] ] that for each , and for the optimal , the processes in ( [ equ2.28 ] ) and ( [ equ2.29 ] ) vanish identically .indeed , ( [ equ2.29 ] ) follows from ( [ equ2.28 ] ) since the process in ( [ equ2.29 ] ) is simply the continuous part of the process in ( [ equ2.28 ] ) . for any , we thus have with probability 1 that because is predictable , there are stopping times taking only rational values and such that and on ; see theorem iv.77 in .thus we obtain for -almost all that these integrals tend to as because , and so we get which means that -a.e ., for each . for the optimal , we get the null process in ( [ equ2.28 ] ) , hence equality in ( [ equ2.30 ] ) , and so we have for the continuous part , ( [ equ2.29 ] ) gives with that p ] , -a.s . setting ,we then get as in appendix b of via fubini s theorem that ( dropping arguments from ) this uses that by continuity of .the second term on the right - hand side of ( [ equ2.34 ] ) tends to 0 as by corollary b.1 in .writing we have for by continuity of and therefore as .moreover , we have ( uniformly in and ) which is in , hence -a.s .finite , for .the first term on the right - hand side of ( [ equ2.34 ] ) can now be estimated above by since by continuity of .now we use the definition of in ( [ equ2.24 ] ) to obtain this shows that -a.s . , for all as .moreover , can be bounded uniformly in and , -a.s ., and using shows that we can apply dominated convergence to get as , -a.s . with a similar argument, we can prove ( [ equ2.33 ] ) .indeed , for , we have and the first term on the right - hand side tends to 0 -a.s . as by continuity of .writing , we have as by the right - continuity of the stochastic integral and since from ( [ equ2.24 ] ) is continuous with respect to the second argument .so ( [ equ2.33 ] ) will follow by dominated convergence as soon as we show that but the definition of in ( [ equ2.24 ] ) yields that and so ( [ equ2.37 ] ) follows again by ( [ equ2.35 ] ) and ( [ equ2.36 ] ) because -a.s .this establishes ( [ equ2.33 ] ) .putting together all the results so far , ( [ equ2.34 ] ) therefore yields that with probability 1 , we have in as .together with ( [ equ2.32 ] ) , this gives -a.e ., for each . 
for the optimal , we again get equality so that finally and combining this with ( [ equ2.31 ] ) yields ( [ equ2.25 ] ) .\(3 ) we next show that for fixed is given by ( [ equ2.19 ] ) .since satisfies ( [ equ2.18 ] ) , it s formula gives via ( [ equ2.22 ] ) and ( [ equ2.8])([equ2.11 ] ) like in ( [ equ2.23 ] ) for any that \\[-8pt ] & & \hspace*{33pt}\qquad\quad { } + { q}_{r- } \vartheta_r^2 ( 1+\lambda_r^2 \delta{\langle m\rangle}_r ) + 2 x^\vartheta_{r- } \vartheta_r ( \varphi_r + \lambda_r \delta{b^{{q}}}_r ) \nonumber\\ & & \hspace*{95.4pt}\qquad\quad { } + \vartheta_r^2 \bigl ( \delta{b^{{q}}}_r ( 1+\lambda_r^2 \delta{\langle m\rangle}_r ) + { g}_r({q } ) \bigr ) \biggr){\,d}{\langlem\rangle}_r \nonumber\\ & & \qquad= m_u - m_t + \int_t^u \biggl ( \vartheta_r \sqrt{{{\mathcal n}_{r}({q } ) } } + x^\vartheta_{r- } { \varphi_r + \lambda_r { { } ^{{{\bolds{\mathsf p}}}}{q}}_r \over\sqrt{{{\mathcal n}_{r}({q } ) } } } \biggr)^2 { \,d}{\langle m\rangle}_r .\nonumber\end{aligned}\ ] ] by corollary [ cor1.2 ] , the process in ( [ equ2.38 ] ) is a martingale on \!]}.}\ ] ] integrating with respect to thus shows for that satisfies the linear sde for , and this implies that because is in , we have so that the stochastic exponential is indeed in ; and plugging ( [ equ2.40 ] ) into ( [ equ2.39 ] ) yields the expression ( [ equ2.19 ] ) for . since was arbitrary , we have now shown that ( a ) implies ( b ) and that we then have ( [ equ2.19 ] ) .\(4 ) conversely , let us start from ( b ) .again fix .using the fact that solves the bsde ( [ equ2.18 ] ) , we obtain completely analogously as for ( [ equ2.38 ] ) for any that \\[-8pt ] & & \qquad = m_u - m_t + \int_t^u \biggl ( \vartheta_r \sqrt{{{\mathcal n}_{r}(y ) } } + x^\vartheta_{r- } { \psi_r + \lambda_r { { } ^{{{\bolds{\mathsf p}}}}y}_r \over\sqrt{{{\mathcal n}_{r}(y ) } } } \biggr)^2 { \,d}{\langle m\rangle}_r \nonumber\end{aligned}\ ] ] for .so is a local -submartingale on \!]} ] so that gives ] . to prove the converse inequality , define the predictable process by the right - hand side of ( [ equ2.19 ] ) .integrating then shows as for ( [ equ2.40 ] ) that and because this stochastic exponential is in by the assumption in b ) , we see that coming from ( [ equ2.19 ] ) is actually in .plugging into ( [ equ2.41 ] ) shows by ( [ equ2.19 ] ) that the -integral vanishes ; so is a -martingale on \!]} ] by ( [ equ2.1 ] ) .so we obtain , hence also , , and this shows that any solution of ( [ equ2.18 ] ) with the properties in ( b ) coincides with , giving uniqueness .finally , shows that is a -submartingale on \!]} ] , we then have , \qquad\mbox{-a.s.}\ ] ] and the process is a -martingale on \!]}p 0\le t\le t ] . note for ( [ equ2.44 ] ) that .another straightforward but slightly lengthier computation shows that is a local -martingale so that .finally , the representation ( [ equ2.44 ] ) of as a constant plus a `` good '' stochastic integral of implies that is variance - optimal ; see , for instance , lemma 2.1 in .note here that the same argument as in step ( 4 ) of the proof of theorem [ theorem2.4 ] implies that the integrand is in so that is a -martingale for every .if ( [ equ2.45 ] ) holds , then clearly ; so is then equivalent to , hence in , and is the vomm . 
from ( [ equ2.43 ] ) , the proof of proposition [ prop2.6 ] and , we can see that under the assumptions of theorem [ theorem2.4 ] and ( [ equ2.45 ] ) , the process is a -martingale with final value .this implies that , \qquad 0\le t\le t.\ ] ]recall from theorem [ theorem1.4 ] that the dynamic value process of the mean - variance hedging problem has the quadratic form our goals in this section are to describe the coefficient processes via backward stochastic differential equations ( bsdes ) and to give explicit expressions for the optimal strategies .this will be done under the same assumptions as in section [ sec2 ] .a general solution for the mvh problem has been given by in their theorem 4.10 and corollary 4.11 .however , that solution involves either a process which is very hard to find [ see , definition 3.12 ] or the variance - optimal martingale measure [ called in ; see their proposition 3.13 ] which is also notoriously difficult to determine . with our approach, we can be more explicit . to formulate our main result, we introduce the system of bsdes , \\[-8pt ] & & { } + { \psi^{(1)}}_s{\,d}m_s + d{l^{(1)}}_s,\qquad { y^{(1)}}_t = h,\nonumber \\ \label{equ3.3 } d{y^{(0)}}_s & = & { ( { \psi^{(1)}}_s+\lambda_s { } ^{{{\bolds{\mathsf p}}}}{y^{(1)}}_s)^2 \over{{\mathcal n}}_s({y^{(2 ) } } ) } { \,d}{\langle m\rangle}_s + d{n^{(0)}}_s,\qquad { y^{(0)}}_t = h^2.\end{aligned}\ ] ] a _ solution _ of this system consists of tuples , , where are in ; are in and strongly -orthogonal to ; is a local -martingale ; and are -special semimartingales with ] \in{{\mathcal a}}_{{\mathrm{loc}}}(p) ] , there exists an optimal for ( [ equ1.2 ] ) for every .b. for each , there is a solution to the bsde system ( [ equ3.1])([equ3.3 ] ) with : a. , , bounded and strictly positive , and with the property that for every ] , the solution of the linear sde on \!]} ] , the optimal wealth process , , satisfies the sde ( [ equ3.4 ] ) and is given by the feedback formula 2 .suppose , in addition , that there is some satisfying the reverse hlder inequality .then the value process from ( [ equ1.2 ] ) has the form ( [ equ3.5 ] ) , where the processes are those unique solutions of the bsde system ( [ equ3.1])([equ3.3 ] ) for which and are of class ( d ) and for constants .moreover , for every ] , and so the predictable finite variation term on the right - hand side of ( [ equ3.9 ] ) must be identically zero . with predictable and such that , , we thus obtain that the process vanishes identically .since , we can argue analogously to steps ( 1 ) and ( 2 ) in the proof of theorem [ theorem2.4 ] to get integrating with respect to gives and plugging this into ( [ equ3.7 ] ) shows that satisfies the bsde ( [ equ3.2 ] ) .moreover , as already used , we know from lemma [ lem1.5 ] that is of class ( d ) , and it only remains for ( ii ) to check the last integrability property .\(2 ) we next argue that the bsde ( [ equ3.3 ] ) has a solution , starting with a calculation that is used again later .fix , take any in and consider as in the proof of theorem [ theorem2.4 ] the process , .( again , we usually do not explicitly indicate the dependence of on the starting time , nor on . 
)lemma [ lem1.5 ] yields , and as satisfies the bsde ( [ equ3.1 ] ) , the same computation as for ( [ equ2.38 ] ) gives , with ( [ equ2.42 ] ) , that finally , using the product rule , ( [ equ2.7 ] ) , the bsde ( [ equ3.2 ] ) for , ( [ equ3.8 ] ) and ( [ equ2.10 ] ) leads to \\ & = & dm + { v^{(1)}}_- \vartheta\lambda{\,d}{\langle m\rangle}- x^\vartheta_- \gamma \bigl({\psi^{(1)}}+ \lambda { } ^{{{\bolds{\mathsf p}}}}{v^{(1)}}\bigr ) { \,d}{\langle m\rangle}\\ & & { } + \vartheta\bigl({\psi^{(1)}}+ \lambda\delta{a^{(1)}}\bigr ) { \,d}{\langle m\rangle}\\ & = & dm + \bigl({\psi^{(1)}}+ \lambda{}^{{{\bolds{\mathsf p}}}}{v^{(1)}}\bigr ) ( \vartheta - \gamma x^\vartheta_- ) { \,d}{\langle m\rangle}.\end{aligned}\ ] ] using ( [ equ3.5 ] ) and adding up therefore gives \\[-8pt ] & & { } - \int_t^u 2 \bigl ( { \psi^{(1)}}_r + \lambda_r { } ^{{{\bolds{\mathsf p}}}}{v^{(1)}}_r\bigr ) ( \vartheta_r - \gamma_r x^\vartheta_{r- } ) { \,d}{\langle m\rangle}_r \nonumber\\ & & { } + \int_t^u ( \vartheta_r - \gamma_r x^\vartheta_{r-})^2 { { \mathcal n}}_r\bigl({v^{(2)}}\bigr ) { \,d}{\langle m\rangle}_r + m_u - m_t . \nonumber\end{aligned}\ ] ] now choose and of the form \!]}t , \varrho _ t{]\!]}} ] .so if we take a predictable with and , we obtain that the process for is , for all ] if we plug in for the optimal . because both integral terms on the right - hand side are increasing due to ( [ equ3.11 ] ) , they must then both vanish identically , on \!]} ] , and since is in , the unique solution of ( [ equ3.4 ] ) is in .so we have now proved that ( a ) implies ( b ) , and also that we then have ( [ equ3.5 ] ) and ( [ equ3.6 ] ) .\(4 ) conversely , let us start with ( b ) ; then we have to prove the existence of an optimal .fix , set for and use ( [ equ2.22 ] ) and the bsdes ( [ equ3.1])([equ3.3 ] ) for to compute as for ( [ equ3.10 ] ) and ( [ equ3.13 ] ) that for any , \\[-8pt ] & & { } + \int_t^u \biggl ( ( \vartheta_r - \gamma_r x^\vartheta_{r- } ) \sqrt { { { \mathcal n}}_r\bigl({y^{(2)}}\bigr ) } - { { \psi^{(1)}}_r + \lambda_r { } ^{{{\bolds{\mathsf p}}}}{y^{(1)}}_r \over \sqrt { { { \mathcal n}}_r({y^{(2 ) } } ) } } \biggr)^2 { \,d}{\langle m\rangle}_r \nonumber\end{aligned}\ ] ] for .so is a local -submartingale on \!]} ] .this implies that \ge { v^h}_t(x),\ ] ] and so we conclude that and that is optimal for ( [ equ1.2 ] ) , giving existence of .this proves that ( b ) implies ( a ) and that we then also have for all , hence for .this ends the proof of ( 1 ) .\(5 ) finally , the assertion of part ( 2 ) follows , similarly to theorem [ theorem2.4 ] , from the proof of part ( 1 ) ; we only need to notice again that is closed in for every .in this section , we give equivalent alternative versions for the bsdes obtained in sections [ sec2 ] and [ sec3 ] .one reason is that in some models , these versions are more convenient to work with ; a second is that it allows us to discuss how our results relate to existing literature . for reasons of space, we only look at ( [ equ2.18 ] ) or ( [ equ3.1 ] ) in detail ; this is the most complicated equation ._ throughout this section , we assume as in theorem [ theorem2.4 ] that and . _ for convenience, we recall that ( [ equ2.18 ] ) reads where and ] ^{{{\bolds{\mathsf p } } } } } \over d{\langle m\rangle } } , ] . in view of theorem [ theorem2.4 ] ( where is bounded ) , we restrict ourselves to solutions with and . for better comparison with ( [ equ3.1 ] ) ,we really ought to write a superscript for , but we omit this to alleviate the notation . 
the bsde ( [ equ4.1 ] ) is written with the local -martingale from the canonical decomposition of . in simple models with jumps , it is useful to split into its continuous and purely discontinuous local martingale parts and , respectively. then , and we define the predictable processes we now consider the backward equation a _ solution _ of ( [ equ4.2 ] ) is a priori a tuple with strongly -orthogonal to both and , , and a semimartingale with ] \in{{\mathcal a}}_{{\mathrm{loc}}}(p) ] gives ] ^{{{\bolds{\mathsf p}}}}}_t & = & { \bigl [ { \psi^c}{\cdot}{m^c}+ { \psi^d}{\cdot}{m^d}+ { l^\prime } , [ { m^d } ] \bigr]^{{{\bolds{\mathsf p}}}}}_t \\[-2pt ] & = & { \bigl [ { \psi^d}{\cdot}{m^d}+ { l^\prime } , ( s_- \eta)^2{\cdot}[n ] \bigr]^{{{\bolds{\mathsf p}}}}}_t \\[-2pt ] & = & ( s_-^3 { \psi^d}\eta^3){\cdot}{n^{{{\bolds{\mathsf p}}}}}_t \\[-2pt ] & = & ( s_-^3 { \psi^d}\eta^3 \alpha){\cdot}t\end{aligned}\ ] ] so that .using the notation , and plugging in then allows us to rewrite the bsde ( [ equ4.2 ] ) after simple calculations as y_t & = & 1.\end{aligned}\ ] ] it depends on the choice of the filtration whether we can have a nontrivial strongly -orthogonal to both and , or and . if is generated by and , then automatically by the martingale representation theorem in . for models with more general jumps , the version ( [ equ4.2 ] ) of the basic bsde ( [ equ4.1 ] ) is less useful because one can not easily express in terms of integrands like in the preceding example .we therefore use semimartingale characteristics and , in particular , work with the jump measure of . for the required notation and results, we refer to chapter ii of .we take there so that \times{{\mathbb r}} ] .denote by the random measure associated with the jumps of and by its -compensator . using proposition ii.2.9 of , we have for a predictable increasing null at 0 .moreover , ( [ equ2.7 ] ) gives and +{\langle m\rangle} ] implies that so that can be reformulated as with the notation , we now consider the backward equation \\[-8pt ] & & { } + \int_0^t \varphi_s { \,d}{m^c}_s + w*(\mu^s-\nu)_t + { l^\prime}_t,\qquad y_t = 1 .\nonumber\end{aligned}\ ] ] a _ solution _ of ( [ equ4.5 ] ) is a priori a tuple such that , [ see ( 3.62 ) in ] , strongly -orthogonal to and to the space of stochastic integrals , and a -special semimartingale with ] \in{{\mathcal a}}_{{\mathrm{loc}}}(p) ] .this is the so - called _ jacod decomposition _ ; see , theorem 3.75 , or theorem 2.4 in for a more detailed exposition .we next express in terms of and .using ( [ equ4.1 ] ) and ( [ equ4.6 ] ) yields moreover , \equiv0 ] is in , this implies that is in so that is a local -martingale by , ( 3.73 ) .hence we obtain ] ^{{{\bolds{\mathsf p } } } } } & = & { \bigl ( \bigl ( x^2 \bigl(w(x ) - { \widehat w}\bigr ) \bigr ) * \mu^s \bigr)^{{{\bolds{\mathsf p } } } } } = \bigl ( x^2 \bigl(w(x ) - { \widehat w}\bigr ) \bigr)*\nu\\ & = & \biggl ( \int x^2 \bigl(w(x ) - { \widehat w}\bigr ) f(dx ) \biggr){\cdot}b,\end{aligned}\ ] ] and so .moreover , = [ s]^c + \sum(\delta s)^2 = { \langle m^c\rangle}+ x^2 * \mu^s\ ] ] gives ^{{{\bolds{\mathsf p } } } } } = { \langle m^c\rangle}+ x^2 * \nu= ( { \delta^c}+ \int x^2 f(dx ) b ) { \cdot}{\langle m\rangle} ] is a local -martingale by yoeurp s lemma , and a similar argument as just above , using now that ; so and is strongly -orthogonal to .moreover , we have for all that = 0 ] , and so for all by , exercice 3.23 . 
finally , ( [ equ2.7 ] ) and yoeurp s lemma yield ^{{{\bolds{\mathsf p } } } } } \nonumber\\ & = & { [ w*(\mu^s-\nu ) , s ] ^{{{\bolds{\mathsf p } } } } } \nonumber\\[-8pt]\\[-8pt ] & = & { \bigl ( \bigl ( x \bigl(w(x ) -{ \widehat w}\bigr ) \bigr ) * \mu^s \bigr)^{{{\bolds{\mathsf p } } } } } \nonumber\\ & = & \bigl( x \bigl(w(x ) - { \widehat w}\bigr ) \bigr ) * \nu . \nonumber\end{aligned}\ ] ] taking in ( [ equ4.9 ] ) the covariation with and using also yields so that we get plugging ( [ equ4.11 ] ) and ( [ equ4.8 ] ) into ( [ equ4.1 ] ) and using ( [ equ4.9 ] ) , we see that solves ( [ equ4.5 ] ) .conversely , if solves ( [ equ4.5 ] ) , then we define by ( [ equ4.11 ] ) and then , due to ( [ equ4.4 ] ) and because , and so .moreover , equation ( [ equ4.10 ] ) , the definitions of and via ( [ equ4.11 ] ) and the definitions of and yield by the orthogonality properties of , so that is strongly -orthogonal to . finally , the jacod decomposition applied to implies that the latter must have the form due to its orthogonality properties .but then we obtain from ( [ equ4.5 ] ) again ( [ equ4.7 ] ) , hence also ( [ equ4.8 ] ) , and then plugging in shows that solves ( [ equ4.1 ] ) .this completes the proof . just for completeness , but without any details , we give here the equivalent versions of the bsdes ( [ equ3.2 ] ) and ( [ equ3.3 ] ) for and .they are and finally , the recursive representation for the optimal strategy in ( [ equ3.6 ] ) takes the form of course , this can equivalently be rewritten as a linear sde for as in ( [ equ3.4 ] ) , simply by integrating with respect to . at this point , it seems appropriate to comment on related work in the literature , where we restrict ourselves to papers that have used bsde techniques in the context of mean - variance hedging . while extending work by many authors done for an it process setting in a brownian filtration , the results in mania and tevzadze ( ) and still all assume that is continuous . at the other end of the scale , have a general , with ; but their methods do not exploit stochastic control ideas and results at all , and bsdes appear only very tangentially in their equations ( 3.32 ) and ( 3.37 ) . as a matter of fact , their opportunity process equals our coefficient , and so their equation ( 3.37 ) , which gives a bsde for , should coincide with our equation ( [ equ4.5 ] ). however , give no proof for ( 3.37 ) and even remark that `` it is not obvious whether this representation is of any use . ''moreover , a closer examination shows that ( 3.37 ) is not entirely correct ; it seems that they dropped the jumps of the fv part of somewhere , which explains why their equation has instead of ( the correct term ) .the paper closest to our work is probably .they first study the variance - optimal martingale measure as in via the problem dual to mean - variance hedging and obtain a bsde that describes ; see our proposition [ prop2.2 ] . 
for mean - variance hedging itself, they subsequently describe the optimal strategy in feedback form with the help of a process ( called ) for which they give a bsde .their assumptions are considerably more restrictive than ours because , in addition to and , they also suppose that is quasi - left - continuous ; and for the results on mean - variance hedging , they additionally even assume that is generated by integrals of [ and also that the vomm exists and satisfies the reverse hlder inequality and a certain jump condition ] .we found it hard to see exactly why this restrictive condition on is needed ; the proof in for their verification result is rather computational and does not explain where the rather technical bsdes come from .finally , a similar ( subjective ) comment as the last one also applies to .the problem studied there is mean - variance hedging ( not the vomm ) , and the process is a multivariate version of the simple jump - diffusion model in example [ exa4.2 ] , with a -dimensional brownian motion and an -variate poisson process .the filtration used for strategies and payoffs is generated by and ; but all model coefficients ( including the intensity of ) are assumed to be -predictable .technically speaking , this condition serves to simplify lim s equation ( [ equ3.1 ] ) , which corresponds to our equation from example [ exa4.2 ] for without the jump term. it would be interesting to see also at the conceptual level why the assumption is needed . as already pointed out before theorem [ theorem3.1 ] ,the bsde system ( [ equ3.1])([equ3.3 ] ) is less complicated than it looks .it is only weakly coupled , meaning that one can solve ( [ equ3.3 ] ) ( even directly ) once one has the solutions of ( [ equ3.1 ] ) and ( [ equ3.2 ] ) , and that ( [ equ3.2 ] ) is linear and hence also readily solved once one has the solution of ( [ equ3.1 ] ) . in general , however , ( [ equ3.1 ] ) has a very complicated driver , and it seems a genuine challenge for abstract bsde theory to prove existence of a solution directly via bsde techniques .we do not do that ( and do not need to ) since we only use the bsdes to describe optimal strategies ; existence of the latter ( and hence existence of solutions to the bsdes ) is proved directly via other arguments . in the special case where the filtration is continuous , the complicated equation ( [ equ3.1 ] ) or ( [ equ2.18 ] )can be reduced to a classical quadratic bsde , as follows .first of all , as already pointed out before lemma [ lem2.3 ] , the operation in ( [ equ2.12 ] ) reduces to , at least in the context of ( [ equ2.18 ] ) .so ( [ equ2.18 ] ) becomes and we know from lemma [ lem2.1 ] that the solution is strictly positive .if we introduce , apply it s formula and define , , then it is straightforward to verify that ( [ equ4.12 ] ) can be rewritten as this can then be tackled by standard bsde methods , if desired .in this section , we present some simple examples and special cases to illustrate our results .we keep this deliberately short in view of the total length of the paper ._ throughout this section _, _ we assume that and ._ recall the -canonical decomposition of our price process . 
because , the process is in with .moreover , it is easy to check that is a local -martingale so that is a so - called signed local martingale density for .if is a true -martingale and in , then with is in and called the _ minimal signed _ ( _ local _ ) _ martingale measure _ for ; if even so that is in , then is the _ minimal martingale measure _ ( _ mmm _ ) for .the mmm is very convenient because its density process can be read off explicitly from . on the other hand ,the important quantity for mean - variance hedging is the variance - optimal martingale measure ( vomm ) . by proposition [ prop2.6 ], we could construct a solution to the bsde ( [ equ2.18 ] ) from by } , \qquad 0\le t\le t,\ ] ] but the density process is usually difficult to find . an exception is the case when , since then and the above formula allows us to find an explicit expression for . to make this approach work , we need conditions when and coincide .this has been studied before , and we could give some new results , but do not do so here for reasons of space .we only mention the mmm since it comes up later in another example . in terms of complexity, the bsde ( [ equ2.18 ] ) or one of its equivalent forms ( [ equ3.1 ] ) , ( [ equ4.2 ] ) , ( [ equ4.5 ] ) is the most difficult one .so we focus on that equation , in the form ( [ equ4.5 ] ) , and we try to have a solution tuple with and .then ( [ equ4.5 ] ) simplifies to which gives .but by ( [ equ2.10 ] ) , and plugging this in above and solving for allows us to get so that ( [ equ4.5 ] ) becomes this is the equation for a generalized stochastic exponential , and so it is not surprising that we can find an explicit solution .[ cor5.1 ] set and suppose that with a constant and a -martingale which is strongly -orthogonal both to and to the space of stochastic integrals .then the solution of ( [ equ4.5 ] ) is given by , and = { { \mathcal e}}(k)_t ( c+m_t ) , \nonumber\\[-8pt]\\[-8pt ] { l^\prime}_t & = & \int_0^t { { \mathcal e}}(k)_{s- } { \,d}m_s + [ { { \mathcal e}}(k ) , m ] _ t .\nonumber\end{aligned}\ ] ] since ( [ equ5.0 ] ) can be written as , defining and by ( [ equ5.1 ] ) gives by the product rule that satisfy ( [ equ5.0 ] ) with , and is a local -martingale like by yoeurp s lemma . finally , for every , we have that \bigr ] & = & \sum\delta\bigl ( { \bar w}*(\mu^s-\nu ) \bigr ) \delta{{\mathcal e}}(k ) \delta m\\ & = & \delta{{\mathcal e}}(k ) { \cdot } [ { \bar w}*(\mu^s-\nu ) , m]\end{aligned}\ ] ] is a local -martingale because is strongly -orthogonal to .hence is also strongly -orthogonal to , and so is a solution to ( [ equ4.5 ] ) .[ exa5.2 ] a special case of corollary [ cor5.1 ] occurs if the ( final ) _ mean - variance tradeoff _ and all the jumps are _deterministic_. then , the solution for is [ which is adapted because is deterministic ] , and all other quantities in the bsdes ( [ equ2.18 ] ) or ( [ equ4.2 ] ) or ( [ equ4.5 ] ) are identically 0 . 
if or or even only is continuous , the above expression simplifies to now we briefly look at the special case of a model in finite discrete time .our price process is given by , and we assume as in ( [ equ2.7 ] ) that with a martingale null at 0 .we assume that is square - integrable to avoid technical complications , and we write for the increments of a process .the doob decomposition is then explicitly given by ] and the density of ] ^{{{\bolds{\mathsf p}}}}} ] so that .\ ] ] moreover , we have + ( e[\delta_j s| { { \mathcal f}}_{j-1 } ] ) ^2\nonumber\\[-8pt]\\[-8pt ] & = & e [ ( \delta_j s)^2 | { { \mathcal f}}_{j-1 } ] , \nonumber\end{aligned}\ ] ] and the galtchouk kunita watanabe decomposition yields \\[-8pt ] & = & { \operatorname{cov } } ( y_j , \delta_j s | { { \mathcal f}}_{j-1 } ) .\nonumber\end{aligned}\ ] ]hence we get e[y_j | { { \mathcal f}}_{j-1 } ] \bigr)^2 \\ & = & ( e[y_j \delta_j s | { { \mathcal f}}_{j-1}])^2.\end{aligned}\ ] ] writing out the discrete - time analog of ( [ equ2.18 ] ) , expanding the ratios in the first appearing sum with and using ( [ equ5.3])([equ5.6 ] ) then yields )^2 \over e[y_j ( \delta_j s)^2 | { { \mathcal f}}_{j-1 } ] } \\ & & { } + \sum_{j=1}^k \psi_j \delta_j m + l_k,\qquad y_t=1 . \nonumber\end{aligned}\ ] ] but gives = y_{j-1 } + \delta_j b^y = { n^y}_{j-1 } + b^y_j,\ ] ] and the denominator in the third sum in ( [ equ5.7 ] ) therefore equals = e [ y_j ( \delta_j s)^2 | { { \mathcal f}}_{j-1 } ] .\ ] ] passing to increments and taking conditional expectations to make the martingale increments vanish , equation ( [ equ5.7 ] ) thus can be written as = e [ y_k | { { \mathcal f}}_{k-1 } ] - { ( e[y_k \delta_k s | { { \mathcal f}}_{k-1 } ] ) ^2 \over e[y_k ( \delta_k s)^2 y_t=1.\ ] ] this is exactly the recursive relation derived in equation ( 3.1 ) in theorem 1 of ; see also equation ( 3.36 ) in . under more restrictive assumptions ,analogous equations have also been obtained in equation ( 5 ) in theorem 2 of or in equation ( 2.19 ) in theorem 1 of .our final example serves to illustrate the relations between our work and that of , whose assumptions are rather similar to ours .more precisely , assumes that ( which he calls ) is locally bounded , and that the vomm exists in and satisfies the reverse hlder inequality and a condition on the jumps of .this implies of course and . does not use bsdes , but works with a change of numeraire as in .his numeraire is $ ] , and to ensure that this is positive , the existence of the vomm in is needed .the example below illustrates that our assumptions are strictly weaker than those of .[ exa5.3 ] we start with two independent simple poisson processes with the same intensity and define , .we then set so that is clearly locally bounded , hence in , and even quasi - left - continuous .we claim that we can choose the parameters such that : the variance - optimal signed martingale measure coincides with the minimal signed martingale measure , but is not in , which means in our terminology and that of that the vomm does not exist .let us first argue ( 2 ) . because implies that and we have , we obtain so as soon as we have we get at jumps of so that also takes negative values . because the mean - variance tradeoff process , , is deterministic ,the signed mmm is variance - optimal by theorem 8 of .moreover , is clearly in and so is in , but not in .this gives ( 2 ) . to construct an element of ,start with , which is clearly in . 
to ensure that , we need and .next , the product is by it s formula seen to be a local -martingale if and only if , which translates into the condition .this allows us to rewrite ( [ equ5.8 ] ) as and if we choose , this boils down to and . by the bayes rule, is then a local -martingale under with . if we now choose and , , , , one readily verifies that all conditions above are satisfied ;hence since it contains .if we take , we even keep since . by its construction , the minimal martingale density always based on . with our above choice of model parameters , this is symmetric in and and therefore risks getting negative jumps rather easily .in contrast , writing with shows that it can be very beneficial to have some extra freedom when choosing an elmm or a martingale density .this is quite analogous to the well - known counterexample in .
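Returning to the finite discrete-time case treated above: once the conditional moments of the price increments are known, the backward recursion for the process Y (with terminal value Y_T = 1) can be evaluated directly. The sketch below is a minimal Python version for the special case of i.i.d. one-period returns, in which Y becomes deterministic (as in the deterministic mean-variance tradeoff example); the function name and interface are illustrative and not taken from the text.

```python
import numpy as np

def opportunity_process_iid(returns_pmf, T):
    """Backward recursion  Y_T = 1,
       Y_{k-1} = E[Y_k | F_{k-1}]
                 - (E[Y_k * dS_k | F_{k-1}])**2 / E[Y_k * dS_k**2 | F_{k-1}],
    specialised to dS_k = S_{k-1} * R_k with i.i.d. returns R_k, so that Y_k is
    deterministic and the S_{k-1} factors cancel.

    returns_pmf : list of (r, p) pairs giving the one-period return distribution
    T           : number of trading dates
    """
    r = np.array([ri for ri, _ in returns_pmf])
    p = np.array([pi for _, pi in returns_pmf])
    assert np.isclose(p.sum(), 1.0)

    shrink = 1.0 - (p @ r) ** 2 / (p @ r ** 2)   # one-step factor Y_{k-1} / Y_k
    Y = np.ones(T + 1)
    for k in range(T, 0, -1):
        Y[k - 1] = Y[k] * shrink
    return Y

# example: opportunity_process_iid([(0.02, 0.5), (-0.01, 0.5)], T=10)
```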
we solve the problem of mean - variance hedging for general semimartingale models via stochastic control methods . after proving that the value process of the associated stochastic control problem has a quadratic structure , we characterize its three coefficient processes as solutions of semimartingale backward stochastic differential equations and show how they can be used to describe the optimal trading strategy for each conditional mean - variance hedging problem . for comparison with the existing literature , we provide alternative equivalent versions of the bsdes and present a number of simple examples .
cellular membranes are two - dimensional fluid interfaces that consist of a large variety of components .they form the boundary between the cell and the outside world , and , for eukaryotic cells , separate the inside of the cell into numerous compartments known as organelles . in order for biological processes like cell division , vesicular trafficking and endo / exocytosis to occur, cellular membranes have to reshape constantly .consequently , membranes exhibit a variety of morphologies , from a simple spherical liposome to bewildering complex structures like interconnected tubular networks as found in mitochondria and the endoplasmic reticulum ( er ) , or connected stacks of perforated membrane sheets in the golgi apparatus . there are different mechanisms by which membranes achieve these structures , the most important of which is through the interplay between membrane lipids and various proteins .a biological membrane is home to different types of proteins that are adhered to or embedded in it .these proteins deform the membrane and , consequently , they can either repel or attract each other . spatial organization of such proteins in biological membranes is essential for stabilizing the membrane and for the dynamic behaviour of cellular organelles .recently , it has been experimentally and theoretically revealed that membrane - curving particles , like colloids or identical proteins , adhered to a membrane self - assemble into striking patterns .for instance it has been shown that colloids adhered to a spherical membrane form linear aggregations . in all of the studies to date , the global shape of the membrane is selected from one of three options : planar , spherical , or tubular .these global membrane shapes impose a homogeneous background curvature , which is considered to be conserved throughout the process under investigation .outside factors changing the membrane have not yet been included in the study of membrane - mediated interactions .membranes in cellular compartments such as in er and the golgi complex are however dynamic entities and possess peculiar shapes forming regions with high local curvature and regions with less curvature .forming and stabilizing such shape inhomogeneities is necessary for cellular functions like sensing and trafficking .it is therefore warranted to investigate how the interactions between membrane inclusions are affected by anisotropies in the membrane curvature . in this study , through a numerical experiment, we investigate the interactions between colloids adhered to a quasi - ellipsoidal membrane with a varying curvature .we also include all other factors from earlier studies such as surface tension , adhesion energy ( required for colloids to adhere to the membrane ) and constant volume effects .we use a dynamic triangulation network to model the membrane , and computationally minimize the total energy of the membrane via a monte carlo algorithm .firstly , we show that the interaction between two colloids adhered to spherical vesicles is significantly affected by the vesicle curvature .secondly , we demonstrate that linear aggregates of colloids exploit the curvature anisotropy and adjust their orientation to minimize the total energy on a quasi - ellipsoidal membrane . 
using umbrella sampling ,we further show that the total energy of the membrane favors two colloids to attract each other at the mid - plane of a prolate ellipsoid that is perpendicular to its major axis .finally , we investigate how the various terms in the total energy of the membrane affect the strength of the interactions .our results show that the variation in the membrane shape can play a crucial role in a variety of cellular functions that require macromolecular assembly or membrane remodeling .the conformation of a fluid membrane can be described as the shape minimizing the classical helfrich energy functional : + where is the mean curvature at any point on the surface of the membrane and geometrically is defined as the divergence of the normal vector to the surface , . in our computational scheme , we discretize the membrane by a triangulated network , whose triangles represent course - grained patches of the membrane . using a discretized form of the helfrich energy ,we define the curvature energy as : where and are the normal vectors to any pair of adjacent triangles and , respectively .the summation runs over all pairs of such triangles . in order to guarantee the fluidity of the membrane, we cut and reattach the connection between the four vertices ( which we label with and refer to as beads ) of any two neighboring triangles .the membrane in our system does not undergo any topological changes and we can thus ignore the gaussian curvature contribution in the bending energy .we impose the conservation of membrane surface area ( ) and enclosed volume ( ) by adding the terms and to the energy during the minimization process , with and the target values of the membrane s area and enclosed volume .the corresponding constants are chosen such that both the area and volume deviate less than from their target values . to enable colloids to adhere to the membrane, we introduce an adhesion potential , , between colloids and the membrane , where is the strength of the adhesion energy and , and are , respectively , the center to center distance and the minimum allowed separation between colloids and membrane beads .finally , we need to give the membrane an anisotropic shape for which we deform our spherical membrane into a prolate ellipsoid . in order to do so ,we introduce two weak ( compared to the strength of the adhesion energy ) spring - like potentials between two small areas of the vesicle ( the two poles of the ellipsoid ) and the center of the vesicle , ; , and are the potential strength , the major axis of the ellipsoid and the length of any line connecting the beads situated at the poles of the ellipsoid to the center , respectively .since the adhesion energy is stronger than the applied harmonic potential , colloids effectively do not feel any difference between the energy cost for bending the membrane at these two areas and at the regions belonging to the rest of the ellipsoid .we verified this claim by considering a spherical membrane , and find that there is no significant difference between the case of including with being the radius of the vesicle , and the case we do not include such a potential . 
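To make the discretized curvature term concrete, here is a minimal Python/numpy sketch that evaluates the sum of (1 − n_i · n_j) over all pairs of adjacent triangles of the mesh, assuming consistently oriented faces. The prefactor convention (a single bending rigidity multiplying the sum) and the mesh data structures (`faces`, `edge_pairs`) are assumptions made for illustration and are not prescribed by the text.

```python
import numpy as np

def triangle_normals(vertices, faces):
    """Unit normals of the triangles of a closed triangulated surface.

    vertices : (N, 3) array of bead positions
    faces    : (M, 3) integer array, each row listing the three vertices of a triangle
    """
    v0, v1, v2 = (vertices[faces[:, k]] for k in range(3))
    n = np.cross(v1 - v0, v2 - v0)
    return n / np.linalg.norm(n, axis=1, keepdims=True)

def bending_energy(vertices, faces, edge_pairs, kappa):
    """Discretized curvature energy  kappa * sum_{<i,j>} (1 - n_i . n_j),
    where the sum runs over all pairs of triangles sharing an edge.

    edge_pairs : (K, 2) integer array of indices of adjacent triangles
    kappa      : bending rigidity (the exact prefactor convention is an assumption)
    """
    n = triangle_normals(vertices, faces)
    ni, nj = n[edge_pairs[:, 0]], n[edge_pairs[:, 1]]
    return kappa * np.sum(1.0 - np.einsum("ij,ij->i", ni, nj))
```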
+ having defined all the contributions to the total energy of the membrane ( ) , we perform monte carlo simulations to reach the equilibrium shape of an ellipsoid containing an arbitrary number of colloids .to do so , we implement the metropolis algorithm , in which we have three types of moves : we can modify the position of a random bead of the membrane , impose a rearrangement in the connections of beads , or move the colloids around .the first two moves are energetically evaluated based on the total energy , while any changes in the position of colloids are only based on the adhesion energy .+ during the simulations , we keep the number of particles constant and set all the relevant parameters as : , , , and , where is the thermal energy and is the diameter of the beads constructing the membrane . the diameter of the colloids is set to .first , we analyze the interaction between two colloids adhered to the surface of two vesicles of different sizes .we keep the size of colloids and beads the same in both cases .we use umbrella sampling to calculate the excess energy of the membrane as a function of the distance between the colloids . in effect, we apply a harmonic potential , as our biased potential , between the two colloids directed along the coordinate of interest in order to restrain the system to sample around each distance .having performed the sampling process , we use the weighted histogram analysis method ( wham ) for obtaining the optimal estimate of the unbiased probability distribution , from which we can calculate the free energy of the system . the free energy is calculated with respect to the initial position of the colloids .the excess area of the membrane available for colloids to adhere to is equal in both vesicles . as illustrated in fig .[ fig : twovesicles ] , the depth of the excess energy of the membrane with a smaller radius is significantly larger . in contrast , for the larger vesicle after a short distance colloids do not feel each other and the energy becomes flat . as the only difference between two test cases is the curvature , we conclude that this effect is due to vesicles being of different radii .next , we examine the interaction between two colloids on the surface of a quasi - ellipsoidal membrane .we position the colloids symmetrically along the major axis of the ellipsoid ( see fig . [fig : fig2]d(_i _ ) ) .we repeat the sampling procedure for different aspect ratios , , of the ellipsoid .since the volume is conserved during the shape evolution , one can easily calculate the semi - minor axis , , as : . as depicted in fig .[ fig : fig2]a , along the major axis colloids attract each other in order to minimize both the adhesion and curvature energies . decreasing the asphericity of the ellipsoid ( ) in this directionenhances the deformable area ( i.e. the number of accessible beads ) , hence the strength of the attraction energy increases .similarly , particles that are situated along the semi - minor axis ( as depicted in fig .[ fig : fig2]d(_ii _ ) ) attract each other .there is , however , an important difference between the two directions .in contrast to the previous case , decreasing the asphericity of the ellipsoid makes the attraction force between colloids weaker . since the number of membrane beads adhered to each colloidremains the same , this behavior can not be explained by the adhesion energy of the membrane . 
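As a concrete illustration of the Metropolis moves described above, the following sketch shows a single bead-displacement trial. The total energy is passed in as a callable combining the bending, area, volume and adhesion terms; recomputing it in full for every move is done here only for clarity, since an efficient implementation would update just the local contribution. The step size and the interface are illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)

def metropolis_vertex_move(vertices, total_energy, kT=1.0, step=0.05):
    """One Metropolis trial: displace a randomly chosen membrane bead and accept
    or reject the move according to the change in the total energy.

    vertices     : (N, 3) array of bead positions, modified in place
    total_energy : callable mapping the vertex array to a scalar energy
    """
    i = rng.integers(len(vertices))
    old_pos = vertices[i].copy()
    old_E = total_energy(vertices)

    vertices[i] = old_pos + step * (rng.random(3) - 0.5)   # trial displacement
    dE = total_energy(vertices) - old_E

    if dE > 0.0 and rng.random() >= np.exp(-dE / kT):
        vertices[i] = old_pos                              # reject: restore position
        return False
    return True                                            # accept
```

Bond-flip moves (rewiring the shared edge of two adjacent triangles to preserve fluidity) and colloid moves would be handled analogously, the latter evaluated with the adhesion energy only, as stated above.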
to illuminate the reason that colloids select the direction along the minor axis to attract each other , we investigate the energy of a pair of colloids along a different coordinate . as shown in fig .[ fig : fig2]d(_iii _ ) , we rotate a pair of colloids , that are constrained at a fixed distance to their center , along the angle spanning the space between the semi - major and -minor axes . as fig .[ fig : fig2]c depicts , the most energetically favorable configuration is when the colloids are aligned with the direction perpendicular to the major axis .this itself introduces a mechanism by which , without involving any other factors , two colloids find the mid - plane perpendicular to the symmetry axes of the ellipsoid , as it minimizes the total energy of the membrane .in contrast , in the case of having a perfect spherical membrane , it is not possible to predict the localization of colloidal aggregates as it will be randomly chosen .increasing the major axis of the membrane ( making larger ) , drives the colloid reorientation stronger .one should be careful about the values for the bending moduli and adhesion coefficients during the simulations , as it can cause an effect where colloids are arrested and prevented from diffusing on the surface of the membrane .in addition , a very high value of , in addition to influencing the adhesion energy between the colloids and the membrane , would also pull two tubes out of the vesicle .+ although the above results quantitatively show different behavior in two directions , the dominant contribution in the total energy of the membrane causing this effect is not yet clarified . in order to approximately determine it ,we proceed as follows : we pick a vesicle with and constrain the position of the colloids with a strong potential . here , in contrast to earlier , we do not use the sampling method .instead we let the system explore possible configurations of the membrane after reaching equilibrium , and then take the average of the energies for all those configurations . as depicted in fig .[ fig : fig3 ] , both the adhesion energy and the curvature of the membrane decrease when the angle between the line connecting two colloids and the semi - major axis of the ellipsoid ( fig .[ fig : fig3]d ) approaches .the bending energy , as quantified in fig .[ fig : fig3]b , has a larger contribution to the total energy than the adhesion energy ( fig .[ fig : fig3]c ) .+ putting all the results together , we expect that when we have more than two colloids they will initially attract each other to form linear aggregates ( to minimize the adhesion energy ) , and afterwards these aggregations change their orientation to align with the minor axes of the ellipsoid .this is indeed what we observe in our simulations .[ fig : fig4 ] depicts the equilibrium shape of the membrane for different numbers of colloids . in all the test cases colloids tend to form a ring - like structure in the mid - plane of the ellipsoid . with a sufficiently large number of colloids ( figs .[ fig : fig4]c and [ fig : fig4]d ) , they form a full ring in this plane ( see also the supplemental movie sm1 ) .it is important to mention that these patterns are quite stable during the whole simulation .in contrast , in spherical vesicles there is no preferred direction for the aggregation of particles . 
although colloids attract each other on a spherical membrane ( fig .[ fig : twovesicles ] ) , there is no preference for the direction of the attraction .this means that even in case of forming a perfect ring on a vesicle , particles self - assemble in an arbitrary direction on the membrane . + as the final experiment , we look at the movement of particles on a vesicle having negatively curved regions .to do so , we first overstretch the springs ( by which we give quasi - ellipsoidal shape to vesicles ) and form negatively curved regions in a big vesicle ( ) .having inserted a dimer in the system , we then look at the migration of the dimer . as shown in fig .[ fig : fig5 ] in this case the dimer does not stay at the mid - plane of the vesicle .it instead spends much of its time during mc simulation at the regions that are negatively curved .since the springs are overstretched , in the regions close to poles there is no excess area for the dimer to adhere to and therefore the dimer can not explore that area ( see also supplemental movies sm2 - 4 ) .this type of dimer migration toward the areas having higher deviatoric curvature is also experimentally observed in ref .+ the type of pattern formation we observe in our simulations is reminiscent of recruiting proteins by the membrane during different biological processes . it has been shown that , for example , dynamin proteins form a ring like structure during exocytosis to facilitate membrane scission and that ftsz proteins self - assemble into rings during the last step of bacterial cell division , namely cytokinesis . because most of the proteins in biological cells are either anchored to or embedded in the membrane , their interaction is a response to the deformation of the membrane they themselves impose . 
as in our simulation the varying curvature is a determining factor that drives the pattern formation , we can relate our results to the functions of such membrane - trafficking machinery . although in this study we adjusted the included harmonic potential strength such that it would not affect the interaction of the colloids with the membrane , it has been shown that a similar situation occurs during cell division . cytoplasmic dynein , as a multi - subunit molecular motor , generates the force that is exploited by the cell to direct the orientation of the division axis by mitotic spindles . our results show that curvature inhomogeneity and anisotropy can at least facilitate the process of protein self - assembly in the mid - plane of the cell . + although we have only investigated the interaction between identical isotropic inclusions , our results can explain the behavior of a system containing anisotropically shaped inclusions as well . based on the local deformation of an ellipsoid , we expect that anisotropic inclusions adhered to a spherical membrane attract each other in the direction of negative curvature ( with respect to the curvature of the membrane ) . this situation corresponds to having an isotropic inclusion embedded in a membrane with an anisotropic shape , which is the case we have studied here . we studied the role of curvature heterogeneity and anisotropy on the interaction between colloids adhered to a membrane . first , we showed that the strength of the interaction between two colloids on the surface of a spherical vesicle is altered by changing the size of the vesicle . next , we focused on such interactions on a membrane with an ellipsoidal shape . we revealed that the interaction on such an inhomogeneously shaped membrane depends on direction . for example , decreasing the asphericity of an ellipsoidal membrane makes the attraction between the colloids stronger along the semi - major axis and weaker in the semi - minor direction . similarly , it has been previously shown in simulations that , on an elastic cylindrical membrane , colloids assemble perpendicularly to its major axis in the regime dominated by the bending energy . in the case of fluid membranes , it has also been shown through an analytical framework that inclusions `` embedded '' in a tubular membrane can attract each other in a transversal direction . simulating a vesicle containing many colloids , we showed how they form a ring - like structure around the mid - plane of the ellipsoid . while the cluster of colloids freely explores the entire surface of a spherical membrane , less curved areas are energetically more favorable for colloids on an ellipsoid . our results suggest that forming regions of different curvatures on membrane vesicles can control the pattern formation of inclusions , which can be important from both a nanotechnological and a biological point of view . this work was supported by the netherlands organisation for scientific research ( nwo / ocw ) , as part of the frontiers of nanoscience program .
cellular membranes exhibit a large variety of shapes , strongly coupled to their function . many biological processes involve dynamic reshaping of membranes , usually mediated by proteins . this interaction works both ways : while proteins influence the membrane shape , the membrane shape affects the interactions between the proteins . to study these membrane - mediated interactions on closed and anisotropically curved membranes , we use colloids adhered to ellipsoidal membrane vesicles as a model system . we find that two particles on a closed system always attract each other , and tend to align with the direction of largest curvature . multiple particles form arcs , or , at large enough numbers , a complete ring surrounding the vesicle in its equatorial plane . the resulting vesicle shape resembles a snowman . our results indicate that these physical interactions on membranes with anisotropic shapes can be exploited by cells to drive macromolecules to preferred regions of cellular or intracellular membranes , and utilized to initiate dynamic processes such as cell division . the same principle could be used to find the midplane of an artificial vesicle , as a first step towards dividing it into two equal parts .
thanks to recent technological advances in producing high - quality polarized monochromatic photon beams , and in developing polarized nucleon targets , it becomes possible to measure a sufficiently large amount of single- and double - polarization observables in pion and kaon photoproduction from the nucleon . as a result ,a status of complete quantum mechanical information of meson photoproduction comes within reach .measurements are complete whenever they enable one to determine unambiguously all amplitudes of the underlying reaction process at some specific kinematics .we consider the reaction as a prototypical example of pseudoscalar - meson photoproduction from the proton .the transversity amplitudes express the transition matrix elements in terms of the and spinors ( with quantization axis perpendicular to the reaction plane ) and of linear photon polarizations .we propose to use normalized transversity amplitudes ( nta ) to perform an amplitude analysis of the single- and double- polarization observables .the nta provide complete information after determining the differential cross section .the corresponding polarization observables can be expressed in terms of linear and nonlinear equations of bilinear products of the . for a given kinematical setting determined by the meson angle and the invariant mass , the fully determined by six real numbers conveniently expressed as three real moduli and three real relative phases .all observables are invariant under a transformation of the type , with an arbitrarily chosen overall phase . in fig .[ f1 ] we show predictions for the nta at mev and various .the adopted model for is the regge - plus - resonance ( rpr ) approach in its most recent version rpr-2011 .the model has a reggeized -channel background and the -channel resonances , , , , , , and .the rpr approach provides a low - parameter framework with predictive power for and photoproduction on the proton and the neutron .an obvious advantage of using the transversity amplitudes is that linear equations connect the moduli of the nta to the single - polarization observables accordingly , a measurement of at given allows one to infer the moduli of the nta .the graal collaboration provides data for at 66 combinations in the ranges gev ( mev ) and ( ) .figure [ f2 ] shows the extracted at three intervals along with the rpr-2011 predictions .for a few kinematic points the could not be retrieved from the data .this occurs whenever one or more arguments of the square roots in eq .( [ eq:1 ] ) become negative due to finite experimental error bars .the rpr-2011 model offers a fair description of the dependence of the extracted except for the most forward angles at gev .furthermore , the data confirm the predicted dominance of the .inferring the nta phases from data requires measured double asymmetries .complete sets of the first kind , which involve seven observables ( e.g. ) , lead to the following set of nonlinear equations for the phases where and .solutions to the above set of nonlinear equations gives the phases for given moduli .we stress that single - polarization observables are part of any complete set as they provide the information about the moduli .double polarization observables are required to get access to the phases . 
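as an illustration of how the moduli follow from the single - spin observables , the sketch below evaluates four linear combinations of ( sigma , t , p ) and takes their square roots ; the sign pattern is one common convention and should be treated as an assumption , since conventions differ between analyses , but the structural point that every combination must be non - negative is general . the second part anticipates the pseudo - data discussion that follows by counting how often finite gaussian errors push a square - root argument negative , with purely illustrative `` true '' values rather than model predictions .

import numpy as np

def nta_moduli(sigma, t, p):
    # one common sign convention (an assumption); each combination must be >= 0
    combos = np.array([1 + sigma + t + p,
                       1 + sigma - t - p,
                       1 - sigma - t + p,
                       1 - sigma + t - p]) / 4.0
    if np.any(combos < 0):
        return None          # finite error bars can push an argument negative
    return np.sqrt(combos)   # moduli, normalized so that the squares sum to one

print(nta_moduli(0.0, 0.0, 0.0))          # -> [0.5 0.5 0.5 0.5]

# effect of finite errors: draw gaussian pseudo-observables around assumed "true"
# values and count how often the extraction fails ("imaginary" moduli)
rng = np.random.default_rng(1)
true_obs = np.array([0.3, -0.2, 0.1])     # illustrative (Sigma, T, P), not model values

def failure_rate(sigma_err, n_sets=200):
    fails = sum(nta_moduli(*rng.normal(true_obs, sigma_err)) is None
                for _ in range(n_sets))
    return fails / n_sets

for err in (0.4, 0.2, 0.1):
    print(f"error bar {err}: failure rate {failure_rate(err):.2f}")

shrinking the assumed error bar sharply reduces the failure rate , in line with the statement below that better experimental resolution suppresses imaginary solutions .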
in all practical situations one has .finite error bars introduce a bias for the choices made with regard to the reference phase ( here , ) for the above equations .a consistent set of estimators for the independent phases ( insensitive to choices made with regard to the reference phase ) has been proposed in ref . .to date , the published double polarization observables for do not allow one to extract the phases of the nta .we have conducted studies with pseudo - data generated by the rpr-2011 model for .we have considered ensembles of 200 pseudo - data sets each containing samples of 50 events for the asymmetries .the pseudo - data are drawn from gaussians with the rpr-2011 prediction as mean and a given as standard deviation .the retrieved do not necessarily comply with the input amplitudes .there are various sources of error : ( i ) imaginary solutions for the moduli ; ( ii ) imaginary solutions for the phases ; ( iii ) incorrect solutions which stem from the fact that can not be exactly obeyed for data with finite errors .we find that the amount of incorrect and imaginary solutions is much larger for the phases than for the moduli .the frequency of finding imaginary solutions can be dramatically reduced by improving on the experimental resolution .we have sketched a possible roadmap for reaching a status of complete information in pseudoscalar - meson photoproduction .we suggest that the use of transversity amplitudes is tailored to the situation that experimental information about is more abundant ( and most often more precise ) than for the double polarization observables .linear equations connect to the moduli of the nta .an analysis of data for from graal allowed us to extract the in the majority of considered combinations .extracting the nta independent phases is far more challenging as they are connected to the double asymmetries by means of nonlinear equations .it has been suggested that over - complete sets which involve more than seven polarization observables may provide a solution to tackle the problem of extracting the relative phases of the amplitudes from the data .this work is supported by the research council of ghent university and the flemish research foundation ( fwo vlaanderen ) .0 t. vrancx , j. ryckebusch , t. van cuyck , p. vancraeyveld , _ phys .c _ * 87 * , 055205 ( 2013 ) . l. de cruz , j. ryckebusch , t. vrancx , p. vancraeyveld , _ phys .c _ * 86 * , 015212 ( 2012 ) . l. de cruz , t. vrancx , p. vancraeyveld , j. ryckebusch , _ phys .lett . _ * 108 * , 182002 ( 2012 ) .p. vancraeyveld , l. de cruz , j. ryckebusch , t. vrancx , _ nucl ._ * a897 * , 42 ( 2013 ) .a. lleres _ et al . _( graal collaboration ) , _ eur .phys . j. _* a31 * , 79 ( 2007 ) and * a 39 * , 149 ( 2009 ) .wen - tai chiang and f. tabakin , _ phys .c _ * 55 * , 2054 ( 1997 ) .
a possible roadmap for reaching a status of complete information in is outlined .
most scientists publish their findings to disseminate their research results widely and hope that their research has some impact in the scientists community and society in general . for many years, citation counts have been used for this goal .this research evaluation approach has produced interesting results identifying the complex nature of physics impact in the research community ( for instance see refs . , , ) .this identification of diverse research impacts is important to research managers/ sponsors/ evaluators , and of course performers .they are interested in the types of people and organizations citing the research outputs , and whether the citing audience is the target audience .also , they are interested in whether the development categories and technical disciplines impacted by the research outputs are the desired targets .since fundamental research can evolve along myriad paths , tracking diverse impacts becomes very complex .recently , scientists have addressed the problem of citation in scientific research from different perspectives : looking for topological description of citation , or for power laws in citation networks , or obtaining power laws in number of cites received by journals according to their number of published papers , or though two - step competition model relating the number of publications and number of cites .these different approaches use power laws trying to obtain simple results from complex interactions , assuming that the precise details of the interactions among the parts of the system play no role in determining the overall behavior of the system .other approaches try to find some kind of universality in the behavior of research institutions , .however , in order to obtain a detailed representation of the system it is important to know the details of the interactions .the creation of roadmaps for science and technology illuminates these interactions , and allows the progression of research to be portrayed from both retrospective and prospective perspectives . and with these , .the analysis of the cites and authorship from a social network perspective gives other useful information to characterize scientific disciplines , .however , these approaches : scaling , networks and roadmaps , have limitations due to the fact that they explore only partially the data available from the citation system .the detailed analysis of all the available data of the citing community is required to obtain more information and knowledge .until now there has been no comprehensive systematic methodology to deal with the information available through cites of the scientific article . 
to overcome the above - mentioned limitations of these techniques , we have developed a phenomenological approach that deals with all the citation information available and obtains a more detailed description of this complex system . the aim of this paper is to show how we can obtain a more complete profile of the citing papers , and thereby a more complete representation of the impact of science . the application of this kind of detailed phenomenological description is useful for obtaining a different and illustrative view of complex systems . the enhanced coverage of the research literature by the web version of the science citation index ( sci , 5300 leading research journals ) allows a broad variety of bibliometric analyses of r&d units ( papers , researchers , journals , institutions , countries , technical areas ) to be performed . aggregation of citation number counts is characteristic of almost all published citation studies ; this approach identifies r&d units that have had ( and have not had ) gross impact on the user community . however , as we have already mentioned , this absence of fine structure represents a limited perspective ; therefore , we require an approach that utilizes all the available citing data and could help answer questions such as : what types of people and organizations are citing the research outputs , and is this the desired target audience ? what development categories are citing the research outputs ? what technical disciplines are citing the research outputs ? what are the relationships between the citing technical disciplines and the cited technical disciplines ? the aim of the present study is to show the power and capability of this new phenomenological approach to citer profiling . it is necessary to stress that the aim is not to assess the productivity and magnitude of impact of any individual researcher , research group , laboratory , institution , or country . to perform such an assessment , the authors would need a charter and statistically representative data based on the unit of assessment , i.e. , a portrait of the research impact made in accordance with the currently available scientific databases . the organization of this paper is the following : in the second section , we summarize the methodology and describe citation mining . in the third section , we present the results of analyzing four sets of papers . in the following we will refer to these papers as cited papers , in order to distinguish them from the citing papers , i.e. , the papers in which the cited papers are referenced . we profile the four selected paper sets through the analysis of the characteristics of the citing authors , the journals where the citing papers appeared , the references in the citing papers , and also through the analysis of linguistic correlations in the abstracts of the cited and citing papers as well as the titles , the keywords , and other registers available in the sci database . finally , we conclude with some remarks on the present study . in this section we describe a simple procedure to incorporate much of the sci information in the analysis of the impact of scientific papers . first , we identify the types of data contained in the sci ( circa early 2000 ) , and the types of analyses that will be performed on this information ( see table 1 ) . table 1 shows a record from the sci , without the field tags . the actual paper that it represents is referred to in the following description as the full paper .
starting from the top , the individual fields are those listed in table 1 . the citing papers to the usf paper that represent different categories of development and different disciplines from those of the cited paper are portrayed graphically in figure 12 ; the axes are category , alignment and papers . the category represents the level of development characterized by the citing paper ( 1=basic research ; 2=applied research ; 3=advanced development/ applications ) , and the alignment represents the degree of similarity between the main themes of the citing and cited papers ( 1=strong alignment ; 2=partial alignment ; 3=little alignment ) . there are three interesting features in figure 12 . first , the tail of total annual citation counts is very long and shows little sign of abating ; this is one characteristic feature of a seminal paper . second , the fraction of extra - discipline basic research citing papers to total citing papers ranges from about 20 - 40% annually , with no latency period evident . this instant extra - disciplinary diffusion may have been due to the combination of the intrinsic broad - based applicability of the subject matter and the publication of the paper in a high - circulation science journal with very broad - based readership . third , there was a four - year latency period before the higher development category citing papers began to emerge . one can see that the black dots ( earlier cites ) lie completely in the basic research category . this correlates with the results from the bibliometrics component . the latency could have been due to the information remaining in the basic research journals and not reaching the applications community , or to the time needed to develop an application being of the order of four years . thus , the basic science publication feature that may have contributed heavily to extra - discipline citations may also have limited higher development category citations during the latency period . the present phenomenological approach of identifying impact themes through text mining allows a much more detailed and informative picture of the impact of research to be obtained compared to semi - automated journal classification comparison approaches . it represents the difference between stating that a physics paper impacted geology research and stating that a paper focused on sand - pile avalanches for surface smoothing impacted analyses of steep hill - slope landslides . in the final data analysis , a taxonomy of the usf citing papers was generated using phrase clustering . the abstracts of all the usf citing papers were converted to phrases and their frequencies of occurrence with the use of a natural language processor contained in the techoasis software package . the 153 highest frequency technical content phrases ( expert - selected ) were exported to a statistical clustering software package ( winstat ) . based on the relations among phrases generated by this package , a taxonomy was generated by the authors . a particularly helpful output for each clustering run was the dendrogram , a tree - like diagram showing the structural branches that define the clusters . figure 13 is one dendrogram based on the 48 highest frequency phrases ( for illustration purposes only ) . the abscissa contains the phrases that are clustered . the ordinate is a distance metric .
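a rough open - source analogue of the phrase clustering and dendrogram output just described is sketched below ; it uses simple word bigrams as a stand - in for the natural - language - processor phrases and average - linkage clustering from scipy , so it only illustrates the workflow and is not the techoasis / winstat pipeline used by the authors .

import numpy as np
from collections import Counter
from scipy.cluster.hierarchy import linkage, dendrogram

abstracts = [
    "avalanches on a sand pile show stratification of large grains and small grains",
    "vibration amplitude controls segregation of large grains in granular media",
    "surface flow in thin layers follows the angle of repose of the pile",
]

def bigrams(text):
    # crude stand-in for noun-phrase extraction: adjacent word pairs
    words = text.lower().split()
    return [" ".join(pair) for pair in zip(words, words[1:])]

counts = Counter(b for a in abstracts for b in bigrams(a))
phrases = [p for p, _ in counts.most_common(20)]      # "highest-frequency phrases"

# phrase-by-abstract occurrence matrix, then average-linkage clustering of the phrases
X = np.array([[phrase in bigrams(a) for a in abstracts] for phrase in phrases], float)
Z = linkage(X, method="average", metric="euclidean")
tree = dendrogram(Z, labels=phrases, no_plot=True)    # dendrogram structure
print(tree["ivl"])                                    # leaf (phrase) order after clustering

phrases that co - occur in the same abstracts end up on nearby leaves , which is the kind of grouping that the taxonomy in the text is built from .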
the smaller the distance at which phrases , or phrase groups , are clustered , the closer is the connection between the phrases .thus , samples of the phrases combined , near the right hand end of the graph include dissipation , collisions and energy , and mixtures/ alternating layers/ small grains/ large grains/ stratification . in the middle part of the graph vibration and amplitude can be found . at some later time , the vibration - amplitude combination is grouped with the gravity - granular media combination to form the next hierarchical level grouping , and so on .many statistical agglomeration techniques for clustering were tested ; the average neighbor method appeared to provide reasonably consistent good results .analyses were performed of the numerous cluster options that were produced .the following is one of the top - level cluster descriptions that represented the results of the phrase and word lists clustering best , as well as the factor matrix clustering from the techoasis results ( more clusters can be found in kostoff et al . ) . the highest level categorization based on the highest frequency 153 phrases produced three distinct clusters : structure/ properties , flow - based experiments , modeling and simulation . in the description of the structure and properties cluster ( right part of the dendogram figure 13 ) that follows, phrases that appeared within the clusters will be capitalized .this cluster contained mixtures of large grains and small grains , with stratification along alternating layers based on size segregation and grain shape and geometrical profile .the mixture forms a pile with an angle of repose . when the angle of repose is larger than a critical angle , dynamical processes produce avalanches , resulting in surface flow within thin layers .in this paper , we have presented a phenomenological technique to analyze some aspects of a complex system like the multi - path non - monotonic impact of scientific research .the result of using citation mining ( bibliometrics and text mining ) to analyze the impact of science , through the use of the available information from the web of science isi , allows the profiling of the citing papers of a given paper , research group , scientific organization , etc .we illustrated citation mining through the analysis of four research groups .this analysis provided multiple facets and perspectives of the myriad impacts of research .citation mining offers insights that would not emerge if only separate citing paper counts were used independently , as is the prevalent use of citation analysis today .moreover , by removing the need to actually read thousands of abstracts through the use of text mining , comprehensive assessments of research impact become feasible .one important result from the basic research citation mining was that impacts are possible in myriad fields and applications not envisioned by the researchers .this reference also questioned whether fundamental sand - pile research would receive funding from tokamak , air traffic control , or materials programs , even though sand - pile research could impact these or many other types of applications , as shown in the paper .the reference concluded that sponsorship of some unfettered research must be protected , for the strategic long - term benefits on global technology and applications !99 amaral , lan , gopikrishnan , p. , matia , k. , plerou , v. 
and stanley , e.h . , application of statistical physics methods and concepts to the study of science & technology systems , scientometrics 51 , 3 ( 2001 ) . katz , j.s . , the self - similar science system , research policy 28 , 501 - 517 ( 1999 ) ; katz , j.s . , scale - independent indicators and research evaluation , electronic working paper series spru , university of sussex ( 2000 ) , http://www.sussex.ac.uk/spru/ . kostoff , r.n . , del río , j.a . , humenik , j.a . , garcía , e.o . and ramírez , a.m. , citation mining : integrating text mining and bibliometrics for research user profiling , j. am . soc . inf . sci . technol . 52 , 1148 - 1156 ( 2001 ) . newman , m.e.j . , scientific collaboration networks i , phys . rev . e 64 , 016131 ( 2001 ) ; newman , m.e.j . , scientific collaboration networks ii , phys . rev . e 64 , 016132 ( 2001 ) ; newman , m.e.j . , the structure of scientific collaboration networks , p. natl . acad . sci . usa 98 , 404 - 409 ( 2001 ) . plerou , v. , amaral , l.a.n . , gopikrishnan , p. , meyer , m. and stanley , h.e . , similarities between the growth dynamics of university research and of competitive economic activities , nature 400 , 433 - 437 ( 1999 ) . stanley , h.e . , amaral , l.a.n . , gopikrishnan , p. , ivanov , p.ch . , keitt , t.h . and plerou , v. , scale invariance and universality : organizing principles in complex systems , physica a 281 , 60 - 68 ( 2000 ) . time profile of citing papers for usf . here it is important to stress a four - year delay in the appearance of applied citing papers , but no delay in the appearance of extra - discipline fundamental papers . figure 6 . citing country development phase . in this graph we plot the relative number of citing countries according to their development phase . each axis indicates the ratio between developed or developing citing countries and the total number of citing countries for each group . lines are drawn in order to guide the eye and group the different origins of the citing papers . in the fundamental groups , the difference between the number of papers produced in developed and developing countries is clear . usf , usa and brif receive more cites from developed countries than from developing ones . however , note that again the topic of low - cost technology is more interesting for developing countries .
in this paper we present a phenomenological approach to describe a complex system : scientific research impact through citation mining . the novel concept of citation mining , a combination of citation bibliometrics and text mining , is used for the phenomenological description . citation mining starts with a group of core papers whose impact is to be examined , retrieves the papers that cite these core papers , and then analyzes the technical infrastructure ( authors , journals , institutions ) of the citing papers as well as their thematic characteristics . the science citation index is used as the source database for the core and citing papers , since its citation - based structure makes citation studies easy to perform . this paper presents illustrative examples in photovoltaics ( applied research ) and sandpile dynamics ( basic research ) to show the types of output products possible . bibliometric profiling is used to generate the technical infrastructure , and is performed over a number of the citing papers ' record fields to offer different perspectives on the citing ( user ) community . text mining is performed on the aggregate citing papers , to identify aggregate citing community themes , and to identify extra - discipline and applications themes . the photovoltaics applied research papers had on the order of hundreds of citations in aggregate . all of the citing papers ranged from applied research to applications , and their main themes were fully aligned with those of the aggregate cited papers . this seems to be the typical case with applied research . the sandpile dynamics basic research papers had hundreds of citations in aggregate . most of the citing papers were also basic research whose main themes were aligned with those of the cited paper . this seems to be the typical case with basic research . however , about twenty percent of the citing papers were research or development in other disciplines , or development within the same discipline . there was no time lag between publication and citation by the extra - discipline research papers , but there was a four - year lag between publication and citation by the development papers .
chromospheric wave propagation has been an extensively studied , but poorly understood , subject in solar physics for some time . one puzzle has been the presence of propagating waves with periods on the order of 5 minutes or more . such waves were observed by several authors ; this was considered surprising since the acoustic cutoff period in the chromosphere , above which waves should not be able to propagate , is on the order of 200 s. such long - period propagation was later found to be widespread , generally occurring wherever the local magnetic field is strong . observations of neighboring internetwork and network regions , for example , show a marked difference in their fourier spectra . it was realized that , since magnetoacoustic waves in a strongly magnetized medium are restricted to propagating along field lines , the effective gravity ( i.e. , the component of gravity along the magnetic field ) would be reduced in magnetic regions . since the cutoff frequency ( in an isothermal atmosphere ) is given , in standard notation , by $\nu_c = \gamma g / ( 4 \pi c_s )$ , where $c_s$ is the sound speed , $\gamma$ is the ratio of specific heats , and $g$ is the ( effective ) gravitational acceleration , the cutoff frequency would also be lower in regions of strong inclined field , potentially allowing 5-minute ( 3.3 mhz ) waves to propagate . this hypothesis has since been tested in a number of increasingly advanced numerical simulations . all have found that it is an effective mechanism for transmitting 5-minute power through the chromosphere ; some models have suggested that the leakage of 5-minute waves into the chromosphere can also explain the presence of 5-minute oscillatory signal in the corona . one criticism of this explanation has been that not all observations of 5-minute propagation are in regions of obviously inclined field . an alternative explanation has therefore been suggested and subsequently developed and tested , in which changes in the radiative relaxation time associated with small scale magnetic structures are responsible for increasing the cutoff period . this mechanism has also been demonstrated to enable propagation of 5-minute oscillations , even in vertical magnetic structures . however , the underlying basis of this theory and the related simulations is a highly simplified energy equation in which radiative losses are approximated by newton ' s law of cooling . a related subject that has been studied extensively is the generation and propagation of chromospheric jets such as spicules ( type i and ii ) , macrospicules , surges , fibrils , and mottles . a variety of models have been proposed over the years , and although it remains unclear whether the many types of jets are different manifestations of the same underlying physical phenomenon or not , there are at least strong indications that the jets known as dynamic fibrils are driven by shock waves traveling through the chromosphere . many of these jets have lifetimes around 5 minutes , and so the waves driving them need to have been channeled into the upper chromosphere via one of the processes outlined above .
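a quick numerical check of the inclined - field mechanism , using representative numbers rather than values from any particular model , is sketched below : it evaluates the isothermal cutoff with the effective gravity g cos ( theta ) and solves for the inclination at which 5-minute waves start to propagate .

import numpy as np

gamma, g, c_s = 5.0 / 3.0, 274.0, 7.0e3      # ratio of specific heats, m/s^2, m/s (assumed)
nu_5min = 1.0 / 300.0                         # 3.33 mHz

def cutoff(theta_deg):
    # isothermal cutoff with the effective gravity g * cos(theta)
    return gamma * g * np.cos(np.radians(theta_deg)) / (4.0 * np.pi * c_s)

print(f"vertical field : nu_c = {cutoff(0.0) * 1e3:.2f} mHz")
theta_min = np.degrees(np.arccos(nu_5min / cutoff(0.0)))
print(f"5-minute waves can propagate for inclinations above ~{theta_min:.0f} degrees")

with these representative numbers the vertical - field cutoff comes out near 5.2 mhz , and inclinations above roughly 50 degrees are needed to bring it below 3.3 mhz .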
in this paper , we present the results of two - dimensional simulations of wave propagation from the convection zone to the transition region and corona . the simulations include a sophisticated treatment of radiative losses and heat conduction , and study the effects of different magnetic field geometries and strengths on the propagation of waves through the chromosphere . we also look at the jets that these waves produce once they reach the transition region , and perform a statistical comparison of the jets produced in a model with vertical field and in one with inclined field . in section 2 we describe the different simulations and the code . section 3 contains an analysis of wave propagation and periodicities in the various models , and the results are discussed in section 4 . section 5 looks at jet formation and properties , and a summary and conclusions follow in section 6 . the simulations have been run using the _ bifrost _ code ; a detailed code description has recently been published . it is a 3d mhd code that is designed to model the solar atmosphere to a high degree of realism by including as many of the relevant physical processes as practical and possible . the code is staggered such that scalars are defined on the grid interior while fluxes are defined on the edges between computational cells . sixth order polynomials are used to calculate derivatives . high order interpolation is also used when a given variable needs to be evaluated half a grid cell away . in order to maintain stability , artificial viscosity and resistivity terms with strengths that can be set by the user are included following a hyperdiffusive scheme , such that regions of large gradients are also those with the largest viscosity and magnetic diffusivity . this allows the diffusivity to be low throughout most of the simulation box , only becoming significant where the conditions require it . table [ simtab ] summarizes the five simulations : case a , extremely weak field ; case b , moderate and vertical ; case c , a strong expanding tube ; case d , moderate and inclined ; case e , strong and inclined . the code includes thermal conduction along magnetic field lines , computed implicitly using a multigrid algorithm . a realistic equation of state is pre - computed based on lte , and stored in tabular form to compute temperatures , pressures , and radiation quantities , given the mass density and internal energy . the code solves the equations of radiative transfer for the photosphere and lower chromosphere , including scattering , using multigroup opacity methods in short characteristic form . in the upper chromosphere , transition region , and corona , we utilize an advanced radiative loss function consisting of several parts . in the corona and transition region , the optically thin approximation is used , taking into account radiation from hydrogen , helium , carbon , oxygen , neon , and iron using atomic collisional excitation rates from the hao - diaper atom data package . in the upper chromosphere , most of the important lines are not in local equilibrium , and excitation rates are precomputed and tabulated from 1d chromospheric models with radiative transfer calculated in detail . more details on these radiative loss functions as well as on the code itself can be found in the published code description .
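to give an idea of the kind of high - order derivative operator mentioned above , the sketch below implements a generic sixth - order centered first - derivative stencil on a uniform grid ; the actual code uses staggered operators and a non - uniform vertical grid , so this indicates only the order of accuracy and is not the operator used in _ bifrost _ .

import numpy as np

def ddx_6th(f, dx):
    # generic sixth-order centered first derivative on a uniform grid;
    # the three points nearest each boundary are left at zero in this sketch
    d = np.zeros_like(f)
    d[3:-3] = ((f[4:-2] - f[2:-4]) * (3.0 / 4.0)
               - (f[5:-1] - f[1:-5]) * (3.0 / 20.0)
               + (f[6:] - f[:-6]) * (1.0 / 60.0)) / dx
    return d

x = np.linspace(0.0, 2.0 * np.pi, 200)
approx = ddx_6th(np.sin(x), x[1] - x[0])
err = np.max(np.abs(approx[3:-3] - np.cos(x)[3:-3]))
print(f"max interior error: {err:.1e}")   # very small, confirming the high order of accuracy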
in sum , the code thus includes most physics that are important in the solar atmosphere ; the main omission in the present work is time - dependent ionization , which mainly has an effect in the upper chromosphere and transition region , above a height of mm . we refer the reader to the literature for a discussion of these effects . the horizontal boundary conditions are periodic , while both the vertical boundaries are open . the upper one is based on characteristic extrapolation of the magnetohydrodynamic variables ; the lower one allows outflowing material to leave the box , while the entropy of the inflowing material is set . there is no piston or other imposed external driver producing wave motions ; all oscillations are produced self - consistently by the turbulent motion in the convection zone . [ figure [ initfig ] caption : the gray lines are magnetic field lines . case a is high - beta throughout and contains only a very weak magnetic field . the dotted vertical black line in case c marks the location of the horizontal boundary ; as this model has a smaller horizontal extent than the others , the rightmost 5.5 mm have been repeated on the left . ] we will be analyzing five different simulations , using model atmospheres that are about 16 mm high . they extend from the upper layers of the convection zone at the lower boundary ( mm ) , through the photosphere ( mm , corresponding to the average height where the optical depth is unity ) , chromosphere , transition region and corona , with the upper boundary at mm . the simulation boxes are two - dimensional and contain grid cells , using a uniform horizontal spacing of 32.5 km . in the vertical direction , we use a non - uniform spacing which is 28 km between each cell from the lower boundary to mm , increasing gradually until mm ; from there to the upper boundary it stays at a constant value of 150 km . the properties of our five different simulations are summarized in table [ simtab ] , and figure [ initfig ] shows their initial temperature and magnetic field structures . case c has slightly smaller dimensions and higher resolution than the others ; the other models differ mainly in the strengths and orientations of their magnetic fields . all the simulations have been run for 4500 s , or 75 minutes , of solar time ( after an initial relaxation period ) . the turbulent motion in the convection zone is able to move magnetic flux tubes around , which means that the field configurations in the high - beta sections ( where beta is the ratio between the gas and magnetic pressures ) of all five simulations are rather dynamic . in contrast , the field in the upper chromosphere and corona in all models except case a is rather stable and changes little with time . the motion in the convection zone also generates waves that propagate upwards ( see figure [ propagation ] ) and create jets of various lengths and lifetimes once they reach the transition region . many such jets can be seen in figure [ initfig ] . they tend to follow the magnetic field , taking on relatively thin , elongated shapes , but in the non - magnetic case a , they look more like extended spherical waves moving the transition region up and down . since we are primarily interested in wave propagation in the chromosphere , not the corona , we omit from our analysis the regions above mm , the height of the peaks of the tallest jets .
in figure [ tprc ]we show the temperature , pressure , and density structures of case d in greater detail as functions of height , as well as the cutoff frequency ( given by equation [ cutoffeq ] ) .the other cases look similar .the temperature plot clearly shows the significant movement the transition region undergoes as jets are continually formed and push it upwards .the three prominent horizontal lines in the temperature structure correspond to the ionization temperatures of hydrogen and helium ( the latter twice ) , and so are an effect of the tabulated equation of state .the plot of the cutoff frequency shows the region , on average between about mm and mm but with large variation in time and space , where the cutoff is above mhz , corresponding to a period of 5 minutes .this is the region where such long - period waves are evanescent , and which they will have to `` tunnel '' through if they are to survive into the upper layers of the atmosphere . in figure [ cutoffval ]we have plotted the average cutoff frequency in our case d and the cutoff calculated from the semi - empirical val3c model .the val model has a somewhat flatter cutoff in the high chromosphere than our average model , but agrees that the cutoff frequency remains high until one reaches the transition region at about mm .equation [ cutoffeq ] , which we used in calculating the cutoff , is strictly speaking only valid in an isothermal atmosphere .there are many possible definitions of the cutoff frequency in a stratified atmosphere , and the values in figures [ tprc ] and [ cutoffval ] should not be considered exact .the main point is to illustrate that the solar atmosphere contains a height range where the cutoff frequency is higher than the frequencies associated with photospheric oscillations such as -modes .we should point out that such long - period waves have very long wavelengths , so `` local '' conditions are not a very well defined term when analyzing them. a wave with a 5-minute period will have a wavelength of 2100 - 3000 km , given typical sound speeds in the lower atmosphere of 7 - 10 km s .the entire height of the photosphere and chromosphere therefore amounts to less than one wavelength . from this, we may expect that even in the absence of any channeling , through field inclination or other processes , a portion of the 5-minute power could make it through to the upper layers .another expectation is that such channeling does not necessarily have to be active throughout the entire 2 mm high region , though the effect will likely be greater the more prevalent conditions conducive to channeling are ( in both space and time ) .this paper aims to bring greater clarity to issues such as the effectiveness of channeling mechanisms and the effects of propagating long - period waves on higher atmospheric layers . to that end ,our analysis focuses on two separate but related phenomena : first , the propagation of waves of different periodicities through the atmosphere and the influence upon them by the magnetic field , and second , the jets produced by these waves once they reach the transition region . to study the former , we perform an extensive fourier and wavelet analysis of the five simulations . for the latter , we individually measure properties such as the lifetimes , lengths and maximum velocities of the jets , and make a statistical comparison between case b ( vertical field ) and case d ( inclined field )mm in case a , with superimposed contours showing the regions where at mm is greater than g. 
the white dashed line shows the location of the periodic boundary ; the leftmost mm are repeated on the right . ]we start by analyzing case a , a reference model that contains only a very weak magnetic field .it is everywhere weaker than 1 g , and the model s behavior is expected to be essentially non - magnetic . for consistency with the other models ,however , the magnetic field is included in the analysis below .the initial temperature state is shown in the top panel of figure [ initfig ] .this model is meant to be representative of conditions in the internetwork .the turbulent motion of the convection zone generates waves and flows at many different periods and in many locations .figure [ weakbuz ] shows a plot of the vertical velocity ( grayscale ) as a function of horizontal position ( ) and time , at a constant height ( ) of 1 mm .the overplotted black contours , for consistency with the other cases , mark the locations where the magnetic field is strongest ; specifically , where the vertical component of the magnetic field at mm ( 1 mm below the height at which the velocity is plotted ) is g or greater .obviously , these `` flux concentrations '' are still very weak .since one of the flux concentrations lies right on the ( periodic ) boundary , the plot shows the leftmost 3.5 mm of the box again on the right ; the location of the boundary is marked with a vertical white dashed line . in this case , the wavefronts appear to be rather uniformly distributed , with no strong clustering into specific regions either in time or space .they are typically horizontally coherent on length scales of 3 - 5 mm .figure [ weakbuz2 ] shows the vertical velocity at the horizontal position mm , allowing us to see the wave shapes in more detail .the waves are notably asymmetric , showing the characteristic `` n''-shape of compressive waves that have steepened into shocks as a result of the density stratification .the amplitude varies quite a bit with time due to the randomness of the photospheric driver .the strongest shock trains appear around s , s , and towards the end of the simulation . at these times , the shocks reach peak - to - peak amplitudes of up to 30 km s , generally with stronger downflows than upflows .in fact , the time - averaged velocity at this location is a downflow of about 3 km s , although this does not imply a net downward mass flux , since the shock waves propagating upward have greater densities .the time between velocity peaks is typically around 200 s. 
we have created power spectra of the simulations by performing fourier transforms of the velocity at each grid cell .an example of the resulting spectra is shown in figure [ cuzpow ] .the lower panel shows the power spectrum itself as a function of and frequency at our analysis height of mm , the same height where we plotted the velocity field in figures [ weakbuz ] and [ weakbuz2 ] .this height is also marked by the light gray horizontal line in the upper panels .the bell - shaped black curves on either side delineate the two dominant frequency bands used in the subsequent analysis , the `` 5-minute '' band centered at mhz and the `` 3-minute '' band centered at mhz .both are gaussian bands including contributions at frequencies up to 1 mhz above and below the central frequency .the central frequencies correspond to actual periods of 333 s and 200 s , respectively .the upper panel shows the ratio between the power in the 3-minute band and that in the 5-minute band , showing 3-minute dominance as pink and white , 5-minute dominance as green and black , and approximately equal power as blue .superimposed on this plot are the height where the power spectrum in the lower panel is plotted ( light gray horizontal line ) , and magnetic field lines calculated from the time - averaged field ( dark gray ) .the small panel to the right shows the height profile of the total velocity fourier power across all frequencies , multiplied by the -averaged density and sound speed to create a measure of the energy flux density rather than the straight velocity power .the middle panel of the figure shows the total time - averaged magnetic field at mm ( black ) and at mm ( gray ) . in this model , the 3-minute band is dominant throughout the chromosphere ( mm ) . only in a small area near mmdoes the 5-minute band achieve rough parity , but a look at the spectrum ( lower panel ) shows that the dominant frequencies at that point are at or above 6 mhz , and begin to fall outside the 3-minute band .in fact , there is significant power at frequencies of mhz across most of the simulation box .this picture , with 3-minute domination throughout , is typical of the classical results from observations of weakly magnetic regions in the internetwork ( e.g. , and references therein ) .it is a natural consequence of the influence of the acoustic cutoff frequency and the radiative damping of higher - frequency waves . under chromospheric conditions ,the acoustic cutoff frequency is typically around - 5 mhz , corresponding to periods of 200 - 220 s ( see figure [ cutoffval ] ) , and waves at lower frequencies will have difficulty propagating upwards .it is , however , possible to see small amounts of 5-minute power across much of the box , particularly around mm , mm , and mm .this shows that although the long - period waves are damped , their long wavelength still ensures that part of their power reaches mm .mm in case b , with superimposed contours showing the regions where at mm is greater than 300 g. the white dashed line shows the location of the periodic boundary ; the leftmost mm are repeated on the right .( an mpeg animation of this figure is available in the online journal . ) ] having studied an essentially non - magnetic model , we now turn our attention to models with magnetic fields of varying strengths and orientations .case b is a model with several flux concentrations in the photosphere .these flux concentrations are relatively narrow ( a few hundred km ) and have strengths of g. 
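the band decomposition described above can be reproduced schematically as follows ; the cadence , array shapes and the random stand - in for the velocity field are assumptions , and only the band - weighting logic reflects the analysis in the text .

import numpy as np

dt = 10.0                                    # s, assumed output cadence
nt, nx = 450, 512                            # 4500 s of solar time, assumed 512 columns
rng = np.random.default_rng(2)
v = rng.normal(size=(nt, nx))                # stand-in for the field-aligned velocity u(t, x)

freq = np.fft.rfftfreq(nt, dt)               # Hz
power = np.abs(np.fft.rfft(v, axis=0)) ** 2  # velocity power spectrum per column

def band_power(power, freq, f0, width=1.0e-3):
    # Gaussian band of ~1 mHz width centred on f0, mirroring the bands in the text
    w = np.exp(-0.5 * ((freq - f0) / width) ** 2)
    return np.sum(power * w[:, None], axis=0)

p3 = band_power(power, freq, 5.00e-3)        # "3-minute" band
p5 = band_power(power, freq, 3.33e-3)        # "5-minute" band
ratio = p3 / p5                              # > 1 where 3-minute power dominates
print(ratio.shape, float(ratio.mean()))

the resulting ratio as a function of horizontal position and height is the quantity shown in the upper panels of the power - spectrum figures .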
such concentrations are a result of the convective motions below the photosphere , which tend to concentrate the magnetic field in relatively narrow regions , often connected with an average downflow , such as intergranular lanes . as we move upwards , the field expands , creating inclined field lines on the edges of the flux concentrations in the chromosphere , while once we reach coronal heights , the field fills all space and the orientation is close to vertical .these effects can be seen in the second panel from the top of figure [ initfig ] , which shows the initial state of this case .this model is intended to represent solar conditions in and around network or plage regions . , but showing the field - aligned ( rather than vertical ) velocity at mm .this is the velocity that is used in the subsequent fourier and wavelet analysis . ]the velocity field of this case is shown in figures [ vertbuz ] and [ vertbud ] . as previously , the grayscale image shows the velocity , the black contours outline the field concentrations at mm , now using a threshold of 300 g , while the white dashed line shows the location of the periodic boundary .figure [ vertbuz ] shows the vertical velocity at mm . at photospheric heights ,the velocity field is dominated by `` global '' oscillations with large horizontal wavelengths and periods around 5 minutes . at mm , there are still some remnants of these , for example at mm after 2500 s , but the shocks generated in and around the flux concentrations are stronger and have begun to fan out .this figure is available as an animation in the online journal , showing how the velocity field develops from mm to mm .the animation shows quite clearly how the waves propagate upwards and fan outwards .figure [ vertbud ] shows the velocity _ along the magnetic field _ at mm , and it is this velocity component we will be performing our analysis on in this and all subsequent cases . since waves are forced to propagate along the magnetic field in regions where the field is dominant , this should give us a cleaner velocity signal than the vertical velocity ; the reason the field - aligned velocity was not used in case a is that it is not a very meaningful quantity in weak - field ( high- ) regions . unlike in casea , the velocity field at mm is now far from uniformly distributed . as we see , all strong flux concentrations are associated with a region of high velocity amplitude higher up , and nearly all regions of high amplitude are connected with flux concentrations .one main exception is the region of relatively high - frequency signal that starts out at mm , eventually moving to mm .this area is situated between two flux concentrations and is a region of interference between waves coming from either side one can see the interference pattern developing in the animated version of figure [ vertbuz ] , which shows the evolution of the wave pattern with height . 
some other regions between flux concentrations , for example around mm and for the last 1000s around mm , show quite weak signals .the many curved wavefronts are a consequence of the outward propagation of waves from the flux concentrations ; particularly notable examples are found at mm , mm , and near the boundary at mm .it is also interesting to note how much the flux concentrations are moved around by the convective motion one flux tube starts out at mm and ends up at mm , and several flux concentrations merge during the simulation .this horizontal movement is on the order of 1 km s , which is within the ranges reported from observations of the movement of bright points in intergranular lanes . as in case a, we have performed a fourier analysis to investigate the periodicity of the velocity signal .the results are shown in figure [ auzpow ] , which , like figure [ cuzpow ] for case a , shows the power spectrum in the bottom panel , the time - averaged magnetic field in the middle panel , and the ratio of power in the 3-minute and 5-minute bands in the top panel .as before , the horizontal line in the top panels shows the analysis height ( mm ) , and the dark gray lines are time - averaged field lines ; the new thick white line indicates the time - averaged height where , the dividing line between the regions dominated by the magnetic field ( above the line ) and by gas pressure forces ( below it ) .although the central flux tube appears very dominant when looking at the time - averaged field strength , it is in fact not that much stronger than the others ; however , it moves much less horizontally , making it stronger when averaging over the whole simulation .the top panel shows that the 3-minute band remains dominant in the chromosphere .this is as expected from classical theory , since the acoustic cutoff period is shortest there , and 5-minute disturbances are expected to be evanescent .it is notable that the peaks in the 3-minute power seem to correspond very well with the peaks in the magnetic field strength .if the effects of the reduced radiative relaxation time in strongly magnetized regions were important , these are locations where 5-minute propagation would be expected . there are , however , several windows in the chromosphere where there is also significant power in the 5-minute band , notably around mm , mm , and mm .the latter two are regions of inclined field on either side of the central strong flux concentration .this fits well with the results of many previous simulations that have found that otherwise evanescent long - period disturbances can still propagate upwards along strong , inclined field , since the reduced effective gravity increases the acoustic cutoff period .we should point out that the previous simulations have generally assumed strong , inclined flux tubes all the way down to the photosphere , while we find that long - period propagation occurs even when only a part of the propagation is along strong , inclined field at the edges of a mostly vertical flux concentration .on the other hand , mm is a region where the average field inclination is not particularly high , at least not compared to the regions surrounding it . 
however, as we have seen from figure [ vertbud ], the field is quite dynamic, and the time-averaged field inclination shown in the top panel of figure [ auzpow ] does not tell the whole story. in fact, a look at figure [ vertbud ] shows that mm connects to two different photospheric flux concentrations over the course of the simulation, and it seems to be particularly during the time when the region is on the outer edge of the central flux tube ( from 1000 to 2500 s ) that it is dominated by longer-period waves. so we need to take the variations with time into account, not just the averages over the whole simulation. the fourier analysis deals with averages over the whole analyzed time period, and in order to get sufficient spectral resolution, one typically needs time series on the order of one hour. the solar atmosphere, however, is dynamic on timescales much shorter than that, and when conditions at one location change significantly during the analysis time, the fourier analysis averages this out and can be misleading. a wavelet analysis, which takes variations in time into account, is a more precise tool; we have performed such an analysis at some selected locations. in figure [ vertpwrudang185 ], the top panel shows the power in the 3- and 5-minute bands taken from the fourier analysis, with a vertical line marking the horizontal location, mm, where we are plotting the data in the three lower panels. the upper middle panel shows the time series of the field-aligned velocity, while the lower middle panel shows the variation of the ( absolute ) field inclination ( an inclination of 0 means the field is vertical ), both at mm. since the appearance of the wave pattern does not just depend on conditions at this height, but is a function of the conditions throughout the lower atmosphere, we have also traced the field lines downwards at each timestep and plotted, in the same panel, the inclination 250 km ( dashed line ) and 500 km ( dotted line ) lower along the field. the bottom panel shows the wavelet power spectrum of the velocity signal at mm, calculated using the morlet wavelet, with a black line showing the cone of influence; values below this line are subject to edge effects. darker color corresponds to higher power. in these plots we see how the period of the velocity signal increases significantly between about 1000 and 2600 s, which is the time when this region is located towards the outer edge of the central flux concentration and the inclination is much greater than earlier and later. at this time we get a large peak in the power spectrum at periods around 5 minutes, while at other times, most of the power is in 3-minute waves. this again strongly supports the results in the earlier literature, showing that field inclination is important for long-period wave propagation. so, do inclination increases always result in increased 5-minute power? yes and no. the peaks in the 5-minute power tend to be associated with periods of increased inclination, but the area around mm is unique in having such a strong, relatively pure signal coherent over five wave periods; hence the large peak in the power spectrum there. but even in other locations where the 5-minute power does not increase markedly, the wave dynamics are still quite different when the inclination is large. an example can be seen in figure [ vertpwrudang360 ], showing the velocity signal and inclination at mm.
here, the inclination is large for the first 2000 s, and the velocity signal at that time is weaker and more irregular than later in the simulation, when the field is more vertical. one explanation for this can be deduced from a comparison with figure [ vertbud ] : the times when the inclination is large are, in general, the times when the region is located at the interface between two flux concentrations. since the flux concentrations are the main sources of wave power in the system, such interface regions are also regions of interference between the waves coming from the two flux concentrations. in many cases, this interference is destructive and reduces the amplitude of the disturbances. the periodicity is also perturbed since the signal is now a superposition of the waves coming from the two sources. examples of regions with such destructive interference are at mm after 3000 s ( a case of triple interference, as there is a weaker, g flux concentration acting as a wave source below this location ), and the plotted mm for the first 2000 s. in contrast, the regions of most dominant 5-minute power, at mm and mm, are located towards the sides of the region dominated by the strongest central flux tube, with the interference regions located on the outside of these. it is also worth noting that the regions of strongest 3-minute dominance in the chromosphere are at mm and mm, both of which are directly above flux concentrations that show relatively little horizontal movement ( cf. figure [ vertbud ] ).
[ figure caption ( figure [ poreabuz ] ) : field-aligned velocity at mm ( top panel ) and vertical velocity at mm ( bottom panel ) in case c, with superimposed contours showing the regions where at mm is greater than 150 g. the white dashed line shows the location of the periodic boundary; the rightmost mm are repeated on the left. ]
while case b had several magnetic flux concentrations of comparable strength, case c contains only one dominant flux tube. unhindered by the influence of neighboring flux tubes, it can expand freely and fill the entire simulation box at coronal heights. this expansion creates a large region of inclined field at the edges of the flux concentration, which was where we preferentially saw 5-minute propagation in case b. the initial state of this case is shown in the middle panel of figure [ initfig ]. although the main flux concentration is strong, with a height reaching down to mm ( far below the photosphere ) at any given time, the magnetic field in the rest of this model is rather weak. as a result, the average height where is higher in this case than in case b ( see figure [ poreafourier ] ); between mm and mm, it is above our analysis height of 1 mm, and the field-aligned velocity is not a meaningful quantity there. because of this, we will be performing the analysis of this case at mm. figure [ poreabuz ] ( top ) shows the field-aligned velocity at that height ( grayscale ), with superimposed black contours showing where at mm is greater than 150 g. this value was chosen to point out the secondary flux concentration that merges with the primary at s; the primary has a field strength on the order of 2000 g.
since the primary flux concentration is located towards the left of the simulation box , we have plotted the rightmost 3.5 mm again on the left , with the white dashed line marking the location of the boundary . although there are some powerful shocks connected with the secondary flux concentration in the first 1200 s , the velocity field is dominated by the waves propagating outwards from the primary .new oscillations are constantly being generated in the center of the flux concentration .( figure [ propagation ] is an example of such wave propagation , taken from this case ; see also for a study of wave generation in a similar model . )these propagate upwards and outwards , but there is large variability in how fast they move , and how far to the sides they reach . in general , the waves travel faster and longer in the direction that the flux concentration itself is moving at the time ; this is not unreasonable , as the waves from the lower layers will tend to align with the axis of the flux concentration , and this is itself inclined in the direction the flux concentration is moving .we do occasionally see some wavefront merging as a result of these varying propagation patterns , and this could potentially affect the calculated periodicities ; however , most of the wavefronts `` absorbed '' this way are rather weak .two things are obvious from the picture : although the flux concentration is only a few hundred km wide at photospheric heights , it dominates a very large area ( up to 4 mm to each side ) in the upper chromosphere , and the period of oscillations at its center is significantly shorter than that of the oscillations to the side . since the analysis in the other cases is performed at mm , we also show the velocity field at that height in this case ( figure [ poreabuz ] , bottom panel ) . as is partially below the average height , we here plot the vertical velocity rather than the velocity along the magnetic field . in this case , we see wavefronts that are coherent over several mm horizontal distance , especially in the first 2500 s. these sometimes become very strong shocks with amplitudes of more than 30 km s , and some of the strongest are connected with the secondary flux concentration .the typical period of these oscillations is 200 - 250 s , though there is some variation .meanwhile , in part superimposed on the larger `` global '' pattern , we see the wavefronts clearly propagating outwards from the primary flux concentration .the area farthest from the flux concentration , around mm , shows very weak signal in the later part of the simulation . at this time , the magnetic field in the region is close to horizontal , and temporarily lower temperature makes the medium magnetically dominated .this creates a `` lid '' which inhibits vertical wave propagation . as in the other cases ,we have performed a fourier analysis , the results of which are shown in figure [ poreafourier ] . a pattern is clear : we see 3-minute propagation mainly above the center of the flux concentration , and to some extent in the very weak - field area around mm , and 5-minute propagation in the inclined field regions on the sides of the flux concentration .although the ratio plot shows this even more clearly at greater heights , it is also plainly visible in the spectrum itself ; the central flux tube region between mm and mm has its whole spectrum shifted towards higher frequencies than in the regions surrounding it .this is in very good agreement with what we found in case b. 
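the role of inclination can be illustrated with the textbook cutoff period of an isothermal atmosphere, in which the effective gravity along an inclined field line is reduced by a factor cos( theta ). the sound speed, gravity and adiabatic index below are rough photospheric numbers chosen only for illustration, not quantities taken from the simulations; the estimate nevertheless shows why inclinations of roughly 50 degrees or more let 5-minute waves propagate:

import numpy as np

# rough illustrative values, not taken from the simulations
c_s   = 7.0e3      # sound speed [m/s]
gamma = 5.0 / 3.0  # adiabatic index
g     = 274.0      # solar surface gravity [m/s^2]

def cutoff_period(theta_deg):
    """Isothermal acoustic cutoff period with effective gravity g*cos(theta)."""
    g_eff = g * np.cos(np.radians(theta_deg))
    return 4.0 * np.pi * c_s / (gamma * g_eff)   # P_c = 2*pi/omega_c with omega_c = gamma*g_eff/(2*c_s)

for theta in (0, 30, 50, 60):
    print(f"inclination {theta:2d} deg -> cutoff period {cutoff_period(theta):5.0f} s")
# a vertical field gives a cutoff near 3 minutes; around 50 degrees the cutoff
# period reaches 5 minutes, so otherwise evanescent 5-minute waves can propagate.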
[ figure caption ( figure [ 45degbud ] ) : field-aligned velocity at mm in case d, with superimposed contours showing the regions where at mm is greater than 300 g. the white dashed line shows the location of the periodic boundary; the leftmost mm are repeated on the right. ]
we now turn our attention to case d, which has flux concentrations similar to those of case b at the photospheric level, but also includes an additional imposed uniform field inclined 45 degrees to the vertical ( slanting towards lower ). this uniform field is dominant in the corona and upper chromosphere, but is not generally strong enough to dominate dynamics at the photospheric or lower chromospheric levels. the initial state of this model is shown in the second panel from the bottom of figure [ initfig ]. this case is meant to represent a plage region with nearby field of opposite polarity. figure [ 45degbud ], like figure [ vertbud ] for case b, shows the field-aligned velocity at mm in case d as a function of and time, with overplotted black flux concentrations ( g ) at mm and the leftmost mm plotted again on the right due to the flux concentration on the boundary. as in case b, we see that the regions with a powerful velocity signal are correlated with the regions of strong flux concentrations below. in fact, the association seems even stronger in case d, with large areas between flux concentrations showing quite weak velocity signals ( e.g. between and mm in the latter half of the simulation time ). other regions between flux concentrations, e.g. at mm and mm, become sites of less regular flows, especially early in the simulation. the field inclination in these regions at those times is generally very large, even near-horizontal in the region around mm. the sudden outburst at mm between 3200 s and 3500 s is due to a reconnection event at mm. in general, due to the left-leaning field, each flux concentration dominates the velocity in a large area to its left, but only a short distance to its right. a fourier analysis of this case is shown in figure [ budpow ]. as in case b, the 3-minute band is dominant in large parts of the chromosphere ( notably mm and mm ), but there are some windows where the 5-minute band dominates. the most prominent of these are around mm, mm, and mm. the latter two also exhibit large amounts of low-frequency power, due to the non-periodic flows present there. compared to case b ( figure [ auzpow ] ), there is less of a connection between the strongest average photospheric fields and the peaks in the 3-minute power, even allowing for the slight leftward drift that would be expected due to the inclination of the additional homogeneous field. this is likely primarily because the flux concentrations in this model display more horizontal movement than in case b ( cf. figure [ budpow ] ); for example, the strong flux tube that starts out at mm moves first left to mm, and later right to mm. meanwhile, there is a separate flux concentration at mm that eventually merges with the other one after 3000 s. the area connected to these is in general the area of greatest velocity power, in particular in the 3-minute band. it is also the region of lowest average field inclination. the flux concentration at the boundary, mm, moves comparatively little and this region does correspond to a peak in the 3-minute power. overall, though, comparison with the average fields and inclinations is of less value in case d due to the horizontal movement.
in order to figure out what is going on , it is necessary to take time variations into account .the wavelet analysis is a tool better suited for this analysis , and we will again look at the time variation of the velocity and inclination at some specific horizontal positions .figure [ bampang024 ] is a four - panel figure showing ( top ) the 3-minute ( solid ) and 5-minute ( dashed ) power , with a vertical line showing the current horizontal location ( mm ) ; ( upper middle ) the field - aligned velocity as a function of time at that location ; ( lower middle ) the field inclination as a function of time at that location ; and ( bottom ) the wavelet power spectrum .all panels are plotted at the height mm ; as previously , the field inclination is also shown at two lower heights along the field .the region around mm has the highest 5-minute power in the simulation , and it appears to be mainly due to the behavior in the first 1700 s. at that time , there is a quite regular train of long - period shock waves coming up . between 1700s and 3000 s , the velocity signal is much more irregular , correlated with large and rapid variations in the inclination at lower heights , although the dominant periodicity remains 5 minutes .a comparison with figure [ 45degbud ] shows that the change in behavior happens when the region becomes an interference region between the flux concentration at the boundary and the one starting out at mm after roughly 1700 s ; before that , it is dominated by waves coming from the latter flux concentration .after about 3300 s , the flux concentration at the boundary has moved to a position just below our observation point ; the inclination becomes smaller and we get a wave train with shorter periodicity .the general behavior is similar to what we saw in case b ; much of the difference seems to be that with an imposed , left - leaning field , each flux concentration tends to dominate the wave field in a large region to its left , but only in a small region to its right .figure [ bampang338 ] shows the situation at mm , one of the largest peaks of 3-minute power . here , the inclination remains mostly below at mm and at lower heights , and we get a fairly regular short - period velocity signal , though with varying amplitude . in the last 1000 s , the inclination increases , and this does lead to a more irregular signal , with longer time between the peaks , though the effect seems noticeable only during the relatively short time when the inclination is above . finally , figure [ bampang456 ] shows the situation at mm , at the center of a wide region where the 3-minute power is low and the 5-minute power is dominant .this is a region where the field inclination is generally very large during most of the simulation , it is more than . a few initial shocks , which actually dominate the power spectrum , are followed by a long period with near horizontal field and a steadily increasing downflow , culminating at 1800 s. then , after 2200 s , a rather irregular , mostly long - period signal sets in .a comparison with figure [ budpow ] shows that this signal comes from the flux tube on the boundary to the right .a new flux concentration appears ( at mm ) after around 2000 s , but it connects only to the left and does not dominate the velocity field directly above it .the most notable effect at this location is the near complete absence of 3-minute signal due to the large field inclination . 
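as a rough illustration of the wavelet diagnostics used in these figures, the sketch below computes a morlet wavelet power spectrum of a synthetic velocity signal whose period switches from 3 to 5 minutes halfway through the series. the normalization, the approximate scale-to-period conversion and the signal itself are simplifications for illustration and do not reproduce the implementation behind the figures; as there, values near the ends of the series are subject to edge effects ( the cone of influence ):

import numpy as np

def morlet_power(v, dt, periods, omega0=6.0):
    """Crude Morlet wavelet power |W(period, t)|^2 of a uniformly sampled signal v."""
    v = v - v.mean()
    n = len(v)
    t = (np.arange(n) - n // 2) * dt               # kernel time axis centred on zero
    power = np.empty((len(periods), n))
    for i, p in enumerate(periods):
        s = omega0 * p / (2.0 * np.pi)             # approximate scale for Fourier period p
        psi = np.exp(1j * omega0 * t / s) * np.exp(-0.5 * (t / s) ** 2)
        psi *= np.pi ** -0.25 / np.sqrt(s)         # rough normalization
        w = np.convolve(v, np.conj(psi[::-1]), mode="same") * dt
        power[i] = np.abs(w) ** 2                  # samples near both ends suffer edge effects
    return power

# illustrative signal: 3-minute waves for the first 1800 s, 5-minute waves afterwards
dt = 10.0
t = np.arange(0.0, 3600.0, dt)
period = np.where(t < 1800.0, 180.0, 300.0)
v = np.sin(2 * np.pi * t / period)

pw = morlet_power(v, dt, periods=np.array([180.0, 300.0]))
print("early 3-min vs 5-min power:", pw[0, :90].mean(), pw[1, :90].mean())
print("late  3-min vs 5-min power:", pw[0, 270:].mean(), pw[1, 270:].mean())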
in sum, this case shows a behavior that is rather similar to case b, which has a more vertical field. we see 3-minute signal above the strongest flux concentrations, where the vertical component of the field is usually strong enough to keep the inclination low, and 5-minute signal towards the edges of the regions dominated by each flux concentration. among the differences between case b and case d, we find that case d has less power at high frequencies, connected with interference regions in case b, and that the 3-minute signal is very weak in regions of heavily inclined field, which were absent in case b. the following case looks into what happens when the strength of the constant inclined field is increased, allowing it to dominate at lower heights.
[ figure caption ( figure [ caseebud ] ) : field-aligned velocity at mm in case e, with superimposed contours outlining the regions where at mm is greater than 1000 g. the white dashed line shows the location of the periodic boundary; the leftmost mm are repeated on the right. ]
in case e, the imposed field is still at a 45 degree angle with respect to the vertical, but the field is significantly stronger than in case d. in the mid-chromosphere, around mm, it is about 8 times stronger; lower down it is more variable. the field-aligned velocity at mm is plotted in figure [ caseebud ], with overplotted contours outlining the regions where at mm is greater than 1000 g ( as compared to the 300 g used in the similar plot for case d, figure [ 45degbud ] ). the initial state of the model is shown in the bottom panel of figure [ initfig ]. this model represents conditions in very strong plage with nearby field of opposite polarity. as in the other cases, the regions of highest velocity amplitude seem to be located directly above the strongest flux concentrations. this may seem counterintuitive at first: with such a strong field acting as a wave guide, we might expect the velocity signal to appear to the side of the locations of the flux concentrations. but we should keep in mind that the flux concentrations themselves are fairly wide. many have widths on the order of 1 mm, or the same as the height we are observing at, so even though the sideways propagation starts around mm ( see figure [ caseefourier ] ), the waves coming from a flux concentration will still tend to be located above it. as we move higher in the atmosphere, the velocity signal does move sideways along the magnetic field, but this effect is not yet very pronounced at mm. between the flux concentrations, there are channels where almost no wave signal can be seen. these neatly split the computational box into zones of influence for each flux concentration. the channels are actually downflow regions, with slowly varying velocities of km s. a single larger downflow event can be seen between and mm after 4000 s. figure [ caseefourier ] shows the fourier analysis of this case. at this field strength and inclination, the 5-minute band is dominant across most of the chromosphere ( top panel ). only in two locations, around mm and mm, does the 3-minute band dominate. the 3-minute power is highest above high- regions, i.e. where the magnetic field is weak. this is the opposite of what was found in case b ( figure [ auzpow ] ), where the strongest flux concentrations were the sites of highest 3-minute power.
in that case, however, the strongest fields were also the most vertical, whereas here, the weaker fields are more vertical than the stronger fields. the results therefore support the idea that the field inclination is central to wave propagation. the full fourier spectrum at mm ( bottom panel ) shows the same situation: significant power in the 5-minute band across most locations, though the 3-minute peak at mm is the strongest in the simulation. the 5-minute power is more evenly distributed, with one broad peak at mm and some smaller, also quite broad peaks. we also see the very low-frequency power connected with the downflow regions. although the strong-field case e has a more steady velocity field than the cases with weaker fields, there is still some drift and variation with time. we once again perform a wavelet analysis in order to look more closely at the wave propagation. figure [ ewvl073 ] shows the results at mm, the largest peak of 3-minute power. the velocity amplitude ( upper middle panel ) is large in the initial stages, when the inclination is relatively low ( less than or around 30 ). there is power at a rather wide range of frequencies at that time, though by far the strongest signal is found in the 3-minute band between 4 and 6 mhz. after 2000 s, most of the signal dies out, as this location becomes part of a rather quiescent downflow region with velocities of 4 - 5 km s. after 4000 s, a single larger downflow event occurs, which shows up as power at very low frequencies ( below the line marking the cone of influence ). the largest peak of the 5-minute power is centered at mm, shown in figure [ ewvl334 ]. this is directly above a flux concentration ( figure [ caseebud ] ) and there is some periodic velocity signal throughout the simulation, though the amplitude varies with time. again, it is strongest early on, though the peak is mostly within the region subject to edge effects. although the inclination is below 40 in the beginning of the simulation, little coherent signal is visible in the 3-minute band. the 5-minute band shows power throughout, with a second peak appearing towards the end ( again partially within the cone of influence ). the field inclination slowly increases with time and ends up at 53. the main result from our simulations is that 5-minute waves are able to propagate through the chromosphere in regions where the magnetic field is inclined and sufficiently strong. in regions where the field is vertical or weak, the velocity field is dominated by waves with periods of around 3 minutes. as mentioned in the introduction, a model which uses the radiative relaxation time, rather than the inclination of the magnetic field, as the mechanism for increasing the cutoff period, has been proposed, and has been demonstrated to work in numerical simulations.
however, it relies on a simple newtonian cooling model for approximating radiative losses. our simulations use an advanced and realistic method for computing the radiative losses, one that includes all the important mechanisms at work in the photosphere and chromosphere, and as such it is considerably more realistic than a simple newtonian cooling model. while the radiative relaxation time model expects 5-minute propagation above all strong small-scale magnetic structures, regardless of the field inclination, our simulations show 3-minute propagation above the central region of flux concentrations, and 5-minute propagation in inclined-field regions to the sides. this is exactly the result predicted by the field inclination model, but is in conflict with the predictions of the radiative relaxation model. our conclusion, based on the simulations presented here, is therefore that the radiative relaxation model is not effective when a more realistic energy equation is considered. while large-scale regions of homogeneous inclined field, like our cases d and e, represent idealizations of the conditions at the edge of plages, our case c represents realistic conditions for an isolated magnetic element or pore. in this model, the magnetic field is largely vertical in the photosphere and above the transition region, while the field expansion leads to significant inclination where it is needed the most, in the chromosphere. this can then account for the 5-minute propagation, but we expect 3-minute propagation in the center of the flux concentration. is this supported by observations? the answer is in fact not very clear. one set of published observations shows an example of a network region which has very little power at frequencies above 4 mhz over most of its area. there is, however, a pronounced spike of higher-frequency power located in the center of the network ( mm in their figure 4 ). the authors attribute this to noise from seeing, which it may very well be; it is visible at frequencies up to 20 mhz, and a study of the coherence spectra also lends credence to this explanation. however, the signal at frequencies of 5 - 6 mhz in this spike is much stronger than that at higher frequencies, and indeed is of the same magnitude as the ( real ) signal at those frequencies in the neighboring quiet regions. it is notable that such a spike, with a spatial extent of 1 - 2 mm, is in fact exactly what we find in our case c. the central spike observed by could thus be real. in order to make more direct comparisons to their results, which were based on doppler shifts in observations of the ca ii h line at and the fe i line at, we have computed synthetic spectra in these lines from our simulations using the non-lte radiative transfer code multi. the ca line is treated in full non-lte, while the fe line, which is formed in the wing of the ca line, is included in the same computation using additional localized opacity and source function terms based on lte. the calculations have been performed column by column, i.e., neglecting any radiative interaction in the -direction. the computed spectra have been smoothed over, which appears to be close to the effective resolution of the observations, and we have then calculated doppler shifts from these smoothed spectra. it should be noted that ca ii h has a very complex line profile, usually with several emission peaks within the deeper absorption line.
in a dynamic atmosphere, particularly in the presence of strong shocks, these emission peaks can become very large and are often asymmetric ( so-called bright grains ), and the line can undergo central reversal as well. defining and identifying the line center of such a line is a non-trivial task. we have used a method that finds the center of the region where the intensity is below a certain threshold above the minimum intensity. this method generally gives acceptable results, but can not be expected to correspond directly to the velocity at any one given height in the simulations. the power spectra of the calculated doppler velocities are shown in the two right-hand panels of figure [ litesrad ]; the two left-hand panels show the power spectra of the vertical velocity taken directly from our simulation data, at heights corresponding to the approximate formation heights of the lines. since the observations covered a larger area than our simulation boxes, we show a combination of the spectra from two of our simulations in the figure. in the center of all panels, between mm and mm, we show the spectrum of our case b, representing network conditions. on both sides, from mm to mm and from mm to mm, we show the spectrum of our case a, representing conditions in the weakly magnetized internetwork. this figure should be compared with figure 4 of the observations. while our synthesized ca power spectrum ( upper right ) is not a perfect match to the observed one, there are many similarities. of particular note is the difference in the dominant frequencies between the non-magnetic regions on both sides and the network region in the center. the network ( case b ) is dominated by lower frequencies, particularly in the area between mm and mm, where the dominant frequencies are 3 - mhz. the internetwork ( case a ) has a more scattered spectrum, but the dominant frequencies are mostly between 5 and 7 mhz. furthermore, the network has significant power at very low frequencies, below 3 mhz. these effects are also found in the observations. this power at low frequencies is not found in the simulation velocity at mm ( upper left ), and although the velocity spectrum and the ca spectrum are similar in many ways, there are also several differences. these differences are partly due to the smoothing applied to the ca data, partly due to the difficulty of defining a meaningful doppler shift of the highly complex ca line profile, and partly due to the fact that the line is formed over a range of heights rather than at one given height, and this height range can also vary with both horizontal position and time. using the ca doppler velocity as a proxy for atmospheric velocity on the real sun can therefore be misleading. the velocity spectrum of the fe line ( lower right ) shows a very good correspondence with the velocity at mm in the models ( lower left ). this line has a much simpler profile and is formed in a region with few strong shocks affecting local conditions. the dominant frequency is 3 mhz in both the network and the internetwork, though there is also some power at 5 mhz in most locations. they also find that the power in this line is in the 3 - 5 mhz band, although most of it is between 3 and 4 mhz. like us, they find no significant difference between the spectra in the internetwork and in the network in this line. more recent observations related to the question of wave periodicity have been performed by several groups.
the first of these studies used the tenerife infrared polarimeter of the german vacuum tower telescope at the observatorio del teide, with a seeing-limited spatial resolution of. they then found propagating 5-minute waves in the chromosphere above a facular region. the photospheric magnetic field as determined from stokes inversions was within 20 degrees of the vertical. a second study, using the solar optical telescope on _ hinode _ with a resolution of, found 3-minute signal in the center of a plage region, but more 5-minute propagation towards the sides in the direction of the expanding field. a third used a combination of data from _ hinode _ and the ibis instrument at the dunn solar telescope, with an estimated average resolution of. they found propagating 5-minute waves along the inclined field on the edges of a pore, and some power in 3-minute oscillations at the center. they also found both 5-minute and 3-minute propagation, though with more power in the 5-minute band, in a nearby region with small magnetic elements where they estimate that the chromospheric magnetic field is close to vertical, based on a force-free field extrapolation. the results of our simulations are in agreement with several of these findings, and in particular with the propagation patterns observed around the pore. in the more vertical magnetic structures reported in some of these observations, we would expect more power in the 3-minute band than in the 5-minute band based on the results of our simulations, if indeed the field is mainly vertical and the flux tubes do not move around very much. there are, however, several possible mechanisms that could explain the 5-minute dominance and resolve this apparent difference. for one, all flux tubes naturally expand with height, and this expansion creates a region of inclined field ( as illustrated, on a large scale, by our case c ). thus, even if the field is close to vertical in the photosphere, there will be regions between the photosphere and the chromosphere where the field at the edges of the flux tubes is inclined, and the long-period waves can propagate there. the observers do study coherence spectra to look for signs of a possible horizontal shift in the signal as a result of propagation along inclined fields, and find good coherence between the photospheric and chromospheric signal, but our results show that the field does not need to be inclined throughout the photosphere and chromosphere in order to enable 5-minute wave propagation. a few hundred km along the edges of an expanding flux tube may be enough, and any horizontal shift could then be less than one resolution element ( in their data ). in the pore observations, the photospheric field is not uniformly vertical, and a force-free extrapolation is not a very good approximation in the chromosphere. furthermore, although there is more power in the 5-minute signal, they also find significant signal at periods around 3 minutes in the region with smaller magnetic elements. we believe that our model, where field inclination is the dominant mechanism for allowing long-period wave propagation, is compatible with these findings. flux tube movement and limited resolution may also be partially responsible for the relative dominance of 5-minute power in these observations.
in case b, we found that the strongest 3-minute power appeared above flux tubes that undergo little horizontal motion , while 5-minute power was found in inclined field regions at the edges of flux tubes .if the flux tubes move around , both the 5-minute and 3-minute power will be spread out and one would not see a clear distinction between the ( average ) flux tube center and the sides in a fourier analysis .the flux tube center also covers a rather small area at any given time , and this makes the related 3-minute waves difficult to observe in low - resolution data . in the higher - resolution data of , regions of 3-minute propagation are found , and this could possibly be because the flux tubes at these locations move around less. we would encourage observers to look for differences in the periodicity of oscillations at the center and edges of flux tubes in future high - resolution datasets .a third possibility is that heating may play a role .the temperature structure of the chromosphere is in general not well known . in particular , the real sun may have more magnetic heating of the upper chromosphere than our 2d models. higher temperature would reduce the cutoff frequency and allow 5-minute waves to propagate more easily .yet another suggestion , as mentioned by , is that the field may be twisted .the waves could then travel along field lines that are everywhere inclined with respect to the local vertical , but without significant horizontal displacement .such field twist is a 3d effect and can not be tested in our 2d simulations , but should be considered in later work . a different point , that we have already mentioned in our analysis , is also worth making : although a fourier analysis can be a powerful tool , it has some important limitations . in order to achieve sufficient spectral resolution ,one usually needs time series on the order of one hour . the solar atmosphere , however , is dynamic on timescales of minutes .atmospheric conditions can and do change , and the fourier transform is not well suited for picking up such changes . at least in areas where the general signal is weak , non - recurring events can end up dominating the power spectrum ( see figure [ bampang456 ] for an example ; there are several other examples in the dataset ) .in such cases , the fourier analysis does not say all that much about the general conditions at that location .this can be particularly dangerous because the eye is naturally drawn to peaks in the power spectrum .also , there can be times when local conditions are notably different from the time average , and these can be correlated with changes in the signal ( e.g. , figure [ vertpwrudang185 ] ) . 
a wavelet analysis, which takes time variations into account, or at least a careful examination of the time series, is essential for identifying the actual processes going on in the atmosphere. so far we have been looking at the propagation of waves through the chromosphere. but one of the reasons why we are interested in this propagation is the effect these waves have once they reach the transition region. jets of chromospheric material go by many names, such as spicules, mottles, fibrils, straws, macrospicules, or surges. there is no widespread agreement on whether all or some of these represent different aspects of the same underlying phenomenon, or on the driving mechanism or mechanisms behind them. however, there is mounting evidence that at least dynamic fibrils and some mottles appear to be driven by waves coming from below. such wave-driven jets also appear frequently in our simulations here, and we will look into their properties and make a statistical comparison between jets in case b ( mostly vertical field ) and case d ( field inclined 45 degrees ). a large number of jets are formed in the simulations, and we have used semi-automatic routines to identify and measure them. we identify the jets as local maxima ( in both space and time ) of the height where the temperature is 40 000 k, corresponding to the lower transition region. we have included snapshots of the tallest jets in the two simulations in figures [ vertspic9 ] ( case b ) and [ 45degspic9 ] ( case d ). we see that they can be quite tall, reaching heights of up to 6 mm, and that their axes follow the magnetic field, as we would expect. in order to analyze the properties of the jets, we want to look at their time evolution. a rising and falling jet will generally have a parabolic shape in a plot of distance vs. time, but for the most accurate results, the measurements should be taken close to the central axis of the jet. the jet peaks found previously are used as starting points. in case d, where the magnetic field is inclined, we track the magnetic field line that passes through the jet peak, and measure the variables along this ( moving ) line through the jet's lifetime. in case b, the magnetic field and the jet axes are generally close to vertical, and we use a simplified procedure taking vertical cuts through the locations of the jet peaks. experiments using the more involved field-tracking procedure in case b show that the inaccuracies resulting from the simplified treatment are small.
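a minimal sketch of the kind of two-dimensional field-line tracing used here, both for reading off the inclination some distance down along the field and for following a jet axis. the bilinear interpolation, the fixed euler step and the synthetic test field are illustrative simplifications; bx, bz and the grid spacings stand in for the corresponding simulation arrays:

import numpy as np

def trace_field_line(bx, bz, x0, z0, dx, dz, step, n_steps):
    """Trace a 2-d field line downward from (x0, z0) with fixed-step Euler integration.
    bx, bz are arrays on a regular grid with spacings dx, dz, indexed as [ix, iz].
    No boundary checks: the sketch assumes the path stays inside the grid."""
    def interp(f, x, z):                            # bilinear interpolation
        ix, iz = x / dx, z / dz
        i, k = int(ix), int(iz)
        fx, fz = ix - i, iz - k
        return ((1 - fx) * (1 - fz) * f[i, k] + fx * (1 - fz) * f[i + 1, k]
                + (1 - fx) * fz * f[i, k + 1] + fx * fz * f[i + 1, k + 1])

    x, z, path = x0, z0, [(x0, z0)]
    for _ in range(n_steps):
        bxl, bzl = interp(bx, x, z), interp(bz, x, z)
        b = np.hypot(bxl, bzl)
        if b == 0.0:
            break
        x -= step * bxl / b                         # step along -B, i.e. downward when bz > 0
        z -= step * bzl / b
        path.append((x, z))
    return np.array(path)

def inclination_deg(bx_local, bz_local):
    """Absolute field inclination from the vertical, in degrees (0 = vertical)."""
    return np.degrees(np.arctan2(abs(bx_local), abs(bz_local)))

# illustrative placeholder field: vertical field plus a weak sinusoidal horizontal component
nx = nz = 100
X, _ = np.meshgrid(np.arange(nx), np.arange(nz), indexing="ij")
bz = np.ones((nx, nz))
bx = 0.5 * np.sin(2 * np.pi * X / nx)
path = trace_field_line(bx, bz, x0=30.0, z0=80.0, dx=1.0, dz=1.0, step=1.0, n_steps=30)
print("start", path[0], "-> 30 steps down along the field", path[-1])
print("inclination at the start point:", inclination_deg(bx[30, 80], bz[30, 80]), "degrees")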
using a method similar to those applied previously to observations, to 1d simulations, and to 3d simulations, we produce distance-time diagrams of the temperature and fit the position of the transition region with parabolas. these fits are then used to calculate properties such as the maximum velocity, deceleration, maximum length, and duration of each jet. jets that do not have a parabolic shape or that represent only minor disturbances are discarded from the analysis. note that, as jets sometimes occur in quick succession in the same locations, occasionally they can overlap. this can lead to situations where the descending phase of one jet is cut short by the next jet appearing. this will mainly affect the calculated lifetime of the jet, though it can to a lesser extent also impact the maximum measured length and velocity. the calculated deceleration depends only on the overall shape of the parabola and is largely unaffected. overall, this does not happen frequently enough to significantly change the statistics, though it can lead to some outliers, particularly with respect to the durations. in total, we have identified 192 parabolic jets in case b, and 129 in case d. their properties and various correlations are shown in figure [ spiccorr ], which is in the same format as figure 1 in for ease of comparison. they should also be compared with figures 12 and 13 in. overall, the results of the different simulations and the observations are in good agreement. as we see, there are clear linear correlations between several of the jet properties, in particular between maximum velocity and maximum length ( bottom right ), deceleration and maximum velocity ( bottom left ), and maximum length and duration ( center right ). the same correlations are found in the observations and in the earlier simulations ( one of which only plots deceleration vs. maximum velocity, while another finds a weaker correlation between length and duration ). in addition, we find a correlation between the maximum velocity and the duration ( top right ) in our simulations, whereas the earlier studies find little evidence of a correlation here. conversely, they both find a weak anticorrelation between deceleration and duration ( top left ). in our simulations, there is a quite weak anticorrelation in the distribution for case d ( blue diamonds ), whereas case b ( red asterisks ) shows little evidence of a correlation. the two distributions do however cluster in different regions, with the longer-duration jets in case d having lower decelerations than the shorter-lived jets in case b. this regional difference between the distributions was also found in the observations. neither we nor the earlier studies find any strong correlation between the deceleration and the maximum length of the jets ( center left ). as noted above, the properties of the jets formed in case b and case d can be somewhat different; for example, at a given maximum velocity, the jets in case d ( inclined field ) tend to be longer ( bottom right panel ) and have a longer duration ( top right panel ).
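the parabolic fitting and the derived jet properties can be summarized in a few lines. the sketch below assumes a time series h( t ) of the transition-region height along the jet axis ( here a synthetic parabola with added noise ); the cadence, amplitudes and variable names are placeholders, not values from the simulations:

import numpy as np

def jet_properties(t, h):
    """Fit h(t) = a*t^2 + b*t + c and derive deceleration, maximum velocity,
    maximum length and duration of a parabolic jet."""
    a, b, c = np.polyfit(t, h, 2)
    deceleration = -2.0 * a                     # constant deceleration of the parabola (a < 0)
    v_max = abs(b + 2.0 * a * t[0])             # |dh/dt| at the start of the rise
    h_top = c - b ** 2 / (4.0 * a)              # height of the vertex of the parabola
    max_length = h_top - h[0]
    duration = t[-1] - t[0]
    return deceleration, v_max, max_length, duration

# illustrative jet: rises and falls over ~300 s, sampled every 10 s, with noise added
t = np.arange(0.0, 300.0, 10.0)
h_true = 2.0e6 + 20.0e3 * t - 65.0 * t ** 2     # heights in m, velocities in m/s
h = h_true + 5.0e4 * np.random.randn(t.size)
dec, v_max, length, dur = jet_properties(t, h)
print(f"deceleration {dec:.0f} m/s^2, max velocity {v_max / 1e3:.1f} km/s, "
      f"max length {length / 1e6:.2f} Mm, duration {dur:.0f} s")

the fitted deceleration, maximum velocity, maximum length and duration are the quantities whose distributions are compared between case b and case d below.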
in order to further investigate these differences ,we have made histograms showing the distributions of the jet properties in the two cases .the results are plotted in figure [ spichist ] , which shows the jets in case b as solid red columns , case d as hatched blue columns , and the sum as black outlines .the distributions are notably different with respect to deceleration ( upper left panel ) and duration ( lower right ) , with the inclined jets lasting longer and having lower decelerations .the distributions of maximum lengths ( upper right ) are broadly similar , but the inclined jets have somewhat lower maximum velocities ( lower left ) . found similar regional differences in their observations , with jets in dense plage regions ( referred to as region 2 in their paper ) having higher decelerations and shorter lifetimes than jets in adjacent regions with weaker and more inclined field ( referred to as region 1 ) .they suggested that the differences were due to the field inclination , which is supported by our results .the results are also consistent with what we have found out about wave propagation in the different models : long - period waves are more prevalent in inclined field regions , and these waves then produce longer - lived jets with lower decelerations .in addition to differences between the cases , there are also some regional differences within each case .we see in figure [ vertspic9 ] that a number of the tallest spicules in case b are located in regions where field lines from different flux concentrations meet ( ref .figure [ vertbud ] ) .this is particularly noticeable in the cases of jets and , where the central axis is right along the interface region .jets , , , and are also located close to the interface , while jets and are above the center of their connected flux concentrations . in figure[ vertjetzvsx ] , we have plotted the maximum heights reached by the jets as a function of their horizontal location .notably , the jets that form above the central , wide flux concentration ( mm ) reach lower heights than the ones forming in other locations , while the tallest jets form in interface regions between flux concentrations at mm and mm .there are at least two possible explanations why this should be so .one is that you can get constructive interference between the waves coming from different flux concentrations , leading to stronger shocks and longer jets .for example , jet in figure [ vertspic9 ] can be traced to such an event happening at mm , s ( cf .figure [ vertbud ] , which shows the velocity field ) .another possible explanation is that when several flux concentrations ( of the same polarity ) are located near each other , the field lines connected with each can not spread out over such a large area , and the waves coming from below will be more concentrated and propagate more vertically .conversely , above the large central flux concentration , the wave energy is spread over a much larger area as much of it follows the widening field .this then leads to wider but less powerful jets .we have presented several simulations investigating the periodicity of waves propagating through the chromosphere .we find that waves with periods of around 5 minutes can propagate in regions where the magnetic field is strong and inclined , including at the edges of flux tubes . 
in regions where the magnetic field is weak or vertical ,we find primarily 3-minute waves ; this also applies above vertical flux tubes and in the center of strong expanding flux tubes .these results indicate that field inclination is critical to the propagation of long - period waves . where the flux tubes undergo significant horizontal motion ,both the 5-minute and the 3-minute power is spread out and the distinction is not as clearly visible .since we have included an advanced treatment of radiative losses in our simulations and find 3-minute propagation above vertical field regions , we conclude that variation in the radiative relaxation time is not an effective mechanism for increasing the cutoff period .our simulations are in agreement with the results of recent high - resolution _ hinode _ observations .we have also studied the jets produced by these waves once they reach the transition region .we find systematic differences between the jets produced in a model with mostly vertical field , and in a model with mostly inclined field .the results are in agreement with observations of dynamic fibrils .it is also important to point out , for the purpose of future analyses of wave propagation and periodicity , whether from simulations or observations , that the fourier analysis can be misleading and hide important information about the state of the medium .this is because the solar atmosphere is dynamic and changes on timescales much shorter than one hour , which is often the minimum timescale needed to achieve sufficient spectral resolution in a fourier analysis .a wavelet analysis takes variations in time into account and can be an invaluable tool for figuring out what processes are important for the dynamics .
we present the results of numerical simulations of wave propagation and jet formation in solar atmosphere models with different magnetic field configurations . the presence in the chromosphere of waves with periods longer than the acoustic cutoff period has been ascribed to either strong inclined magnetic fields , or changes in the radiative relaxation time . our simulations include a sophisticated treatment of radiative losses , as well as fields with different strengths and inclinations . using fourier and wavelet analysis techniques , we investigate the periodicity of the waves that travel through the chromosphere . we find that the velocity signal is dominated by waves with periods around 5 minutes in regions of strong , inclined field , including at the edges of strong flux tubes where the field expands , whereas 3-minute waves dominate in regions of weak or vertically oriented fields . our results show that the field inclination is very important for long - period wave propagation , whereas variations in the radiative relaxation time have little effect . furthermore , we find that atmospheric conditions can vary significantly on timescales of a few minutes , meaning that a fourier analysis of wave propagation can be misleading . wavelet techniques take variations with time into account and are more suitable analysis tools . finally , we investigate the properties of jets formed by the propagating waves once they reach the transition region , and find systematic differences between the jets in inclined field regions and those in vertical field regions , in agreement with observations of dynamic fibrils .
this paper is devoted to the understanding of the set of celestial ir ( infrared ) emission bands in the range 3 - 15 m which have been loosely , but wittingly , dubbed unidentified infrared bands ( uibs ) well after their discovery in the 70s . since then they have been thoroughly documented , using higher sensitivity and resolution. particularly large amounts of high quality data were collected by the ir satellites iso [ 1 ] and irts [ 2 ] , which were launched and operated in the 90s .knowledgeable reviews of these and other observations may be found , for instance , in [ 3,4,5 ] .while electronic transitions in atomic ions , such as argon and sulphur , are also detected in the relevant range , sometimes on top of a uib , they are easily distinguished by their much narrower width .the only present consensus regarding the uibs is that they are due to fluctuations of the electric dipole moment associated with the vibrations of the atoms of carbon - rich carriers .different research groups do not agree as to whether the carriers are _ free - flying molecules _ or _ solid - state grains ._ very closely related to this dichotomy is the issue of the excitation mechanism which gives rise to the ir emission . roughly speaking, the free - flying molecule model has been associated from inception to _ stochastic heating _, i.e. random absorption of a single uv photon , followed by _ immediate thermalization _ of the deposited energy , and subsequent cooling due to ir radiation ; this of course requires very small carriers ( atoms ) and strong uv irradiation .moreover , these carriers are considered to be pahs ( polycyclic aromatic hydrocarbons ) , which are planar clusters of hexagonal c rings with peripheral attached h atoms .no assignment to any known particular pah has yet been proposed , but it is suggested that the problem will ultimately be solved by a size - distribution of neutral / ionized , more or less hydrogenated pahs . in this model , separate particles , the very small grains ( vsg ) , have to be invoked in order to account for the observed underlying continuum , but no composition or structure has yet been specified for this family as well .donn et al . [ 6 ] have discussed this model long ago , but sellgren , tokunaga , boulanger and others [ 5,3,4 ] have recently spelled out some of the `` puzzles '' [ 5 ] presented by the new observations . in this paper ,i explore the extent to which these puzzles can be solved by a solid - state model [ 7 ] , and more specifically the _ coal / kerogen _ model .this differs from the pah model on nearly every count .the composition , here , is not restricted to c and h , but also includes small amounts of o , n and s , as suggested by cosmic abundances . _the structure is not planar or regular , but 3-d and highly disordered ; it is partly aromatic , partly aliphatic and olefinic _ depending on dust environment and history . 
to accommodate this diversity and disorder, the carrier size should be larger than a few hundred atoms. most importantly, _ well defined natural terrestrial analog materials are proposed _ : these are all members of the kerogen / coal `` saga '', which have been thoroughly documented during the past half century, and whose composition and structure have been abundantly described in the literature. the diversity of the celestial ir spectra, in relative feature intensities and in position and width, is matched by that of the terrestrial spectra, which itself is not erratic but known to be due to progressive release of s, n, o and h atoms upon ageing and/or heating of the material. no ionization or size distribution need be invoked. the structure accounts for the feature bandwidths and ensures the presence of a continuum where required. finally, the excitation mechanisms invoked here are 1 ) steady state `` thermal '' heating for strongly illuminated dust, 2 ) chemiluminescence for weakly illuminated dust. the coal model was first presented in 1989 [ 8 ]. at that time, modelling the available celestial spectra required the use of relatively highly aromatic coal analogs [ 9 ]. the data now available warrant the complementary use of the lesser aromatic analogs, kerogens [ 10 ]. these data also prompted the consideration and elaboration of a new excitation process [ 11 ]. i therefore believe that the coal / kerogen model is now in a position to cure the main riddles of the pah model. these developments motivated the writing of the present report. section 2 is a short reminder of the main properties and behaviours of kerogens and coals, relevant to the present discussion. in sec. 3, i delineate some of the main riddles signalled by authors who applied the pah model to the interpretation of astronomical observations. in each case, i then discuss how the solid-state model can remedy the problem or provide an alternative interpretation. section 4 discusses in more detail chemiluminescence as an alternative to stochastic heating. operational conclusions are drawn in the last section. the following is a cursory summary of the properties of natural amorphous carbons of interest to the present issue. details and abundant references to specialized literature can be found in [ 7,10 ]. 2.1 _ composition and structure _ the term coal refers here to the residual part of the raw terrestrial material after minerals and free molecules preexisting in its pores have been disposed of by suitable, standard, physical and chemical procedures. _ this residue is almost exclusively composed of c, h and o, with traces of n and s. _ while coal is extracted from deep in the earth, kerogen is obtained from the superficial sedimentary rocks after treatment by aqueous alkaline and other organic solvents. the solid residue can be considered as coal in `` dispersed '' form ( very small particles ). _ it is in this form that most of the organic matter in meteoritic carbonaceous chondrites is found _, a useful hint for our present purposes. the properties of coal do not depend heavily on the geographical origin of the material. as a result, each sample is concisely characterized by one point in the van krevelen diagram ( fig. 1 ) with coordinates o / c and h / c ( atomic ratios ). the representative points of coals from different origins form an inverted-l-shaped strip.
for coals extracted from various mining depths, but the same geographical location, the strip nearly reduces to a line of the same shape; as the mining depth increases, the representative point follows the line down to the origin. the composition and properties of kerogens are more dispersed at high heteroatom concentrations, but ultimately merge with those of coals at low heteroatom concentrations. this graphical orderliness stems from the chemical properties of carbon. like in pahs, the atoms in coals and kerogens are held together by sigma and pi covalent _ bonds _. but, while the former include only aromatic c _ sites _, the latter comprise all three types of sites: sp ( acetylenic ), sp2 ( aromatic, olefinic ) and sp3 ( aliphatic ). this is a result of the presence of `` impurities '' ( o, n, ... ) which impedes the tendency towards the more thermodynamically stable aromatic structures and leads, instead, to _ disordered materials _. as a consequence, pahs know of only one type of c - h bond, while coals / kerogens admit of several others: c - h ( non-aromatic ), ch ( olefinic, aliphatic ) and ch. other important consequences are: a diversity of c - c bonds ( single, double ), the presence of coh and c = o groups ( which all have strong incidence on ir spectra ), and -o- bridges which provide the structure with some flexibility and allow it to grow in 3-d. together with disorder comes the potentiality for _ evolution _. figure 2 illustrates the displacements brought about, in the van krevelen diagram, by spontaneous or induced losses of co, h and ch. in the earth, these losses are stimulated by heat and pressure, which explains the trajectory of the representative point in fig. 1 towards the origin as mining depth increases. the structural rearrangements which necessarily accompany these compositional changes are schematically outlined, in fig. 2, along the left vertical axis, showing the aromatization trend along the evolutionary ( maturation ) track. in the laboratory, mild, prolonged heating ( annealing ) of a low aromaticity sample prompts its representative point to quickly proceed along the same natural track it would have followed in the earth, given time, had it not been extracted. this simple and coherent behaviour of coal / kerogen again emphasizes the tight link between composition and structure and provides a guiding line to find one's way in the _ diversity _ of these materials. thousands of samples have been studied thoroughly all over the world, using all available analytical techniques: thermal, thermogravimetric, ir, raman and nmr spectroscopies, etc. this made it possible to find correlations between spectral bands and assign nearly every band to a _ functional group _ ( a handful of associated atoms ). these efforts culminated in the building up of representations of the structures of coals and kerogens as a function of their evolutionary stage, i.e. their location in the van krevelen diagram ( fig. 1 ). the work of behar and vandenbroucke [ 12 ] is a good example of this development. they found that _ it is not possible to adequately describe the observed spontaneous and continuous changes in properties in terms of a limited number of small, specific molecules ( a conclusion to be extended below to the interpretation of uibs ) _.
in order to fit the measurements on different samples , one has rather to build a _ random array _ made of a dominant carbon skeleton with functional groups of heteroatoms ( h , o , n , s ) attached randomly to it , then statistically tailor the number of different bondings ( c - c , c = c , etc ) , functional groups ( c - h , c = o , etc ) and aromatic or polyaromatic rings , etc . the variety of environments of the functional groups ensures that their characteristic vibrations will blend into bands of the right width . the relative intensities of the latter will change according to the concentrations of the corresponding functional groups . _ this scheme accounts for the continuous set of ir spectra observed along the evolutionary track _ . behar and vandenbroucke give numbers for the structural parameters enumerated above and for 8 representative evolutionary stages . they also give sketches of the corresponding structures , examples of which are shown in fig . 3 . a remarkable feature in these sketches is that , even though the size and number of aromatic clusters are quite limited , all the main uibs are present in the corresponding ir spectra , albeit with different intensities . the drawing of the clusters also highlights the 3-d nature of the structures . note the large number of short aliphatic chains in the lesser aromatic samples ; the aromatic clusters become large and dominant in the late stages of maturation . 2.2 _ the ir spectrum _ as a typical example , fig . 4 ( adapted from [ 13 ] ) displays the ir absorbance spectrum of a young kerogen of type ii ( h / c = 1.32 , o / c = 0.104 ) . the peak wavenumber ( cm^-1 ) and wavelength ( μm ) , integrated intensity ( cm / mg ) and assignment of the bands are , respectively ( see [ 14 ] ) :
1 . ( .95 ) ; 46.2 ; oh stretch ; due partly to chemically bonded oh groups and partly to adsorbed or trapped , not chemically bonded , h2o molecules ; the latter are responsible for the long redward tail , through h - bondings with other parts of the skeleton ; peak position and band profile change considerably with h content ( depending on evolutionary stage or heat treatment ) . this feature is clearly distinct from the water ice band , which peaks near 3.1 μm and has a much steeper red wing ( associated with that observed in the sky towards young stellar objects ) . adsorbed h2o is probably absent from is dust .
2 . 3060 ( 3.27 ) ; c - h aromatic or olefinic stretch , barely visible here , and only measurable for very small h / c and o / c ratios .
3 . 2920 ( 3.42 ) ; 48.2 ; blend of anti - symmetric and symmetric ch3 stretch at 2962 and 2872 , asymmetric and symmetric ch2 stretch at 2926 and 2853 , and ch stretch at 2890 , respectively . here , _ the non - aromatic c - h bonds reside mainly on non - aromatic structures , not at the periphery of pahs like in the work of wagner et al . [ 15 ] . _
4 . 1710 ( 5.85 ) ; 14 ; c = o ( ketone ) stretch .
5 . 1630 ( 6.15 ) ; 11.4 ; disputed ( but perhaps concurrent ) assignments to h2o deformation , quinonic c = o with h bond and c = c olefinic and aromatic stretch .
6 . 1455 ( 6.87 ) ; 6.1 ; asymmetric ch3 and ch2 deformation .
7 . 1375 ( 7.28 ) ; 1.2 ; symmetric ch3 deformation .
8 . 1800 to 900 ( 5.5 to 11 ) ; 67.3 ; massif ( underlying broad band ) peaking near 8 μm , due to c ... c and c - o stretch , c - h in - plane bend and oh deformation ; in highly aromatic samples , there may be a contribution by pi electrons , in the form of a plasma resonance [ 16 ] .
9 . 930 to 700 ( 11 to 14 ) ; aromatic out - of - plane bending , depending on the number of adjacent protons ; weakly contrasted 3 or 4 peaks , barely visible here ; intensity increases with mining depth but never exceeds an intensity of 5 .
an underlying continuum , roughly decreasing with frequency , is mostly visible in the near - ir , but also present farther to the red in all coals / kerogens ; part of it is due to light scattering ( in transmission measurements ) and part of it to real absorption ; it increases strongly with aromaticity . the intensity , k , given here ( third number ) is related to the absorbance through the material density ( in g cm^-3 ) and the bandwidth ( in cm^-1 ) . _ k ( cm / mg ) differs by a numerical factor from the intensity usually defined in the astrophysical literature _ ; it is also directly related to the integrated cross - section per c atom . not unexpectedly , a strong relationship holds between the bands at 2920 and 1455 cm^-1 for all kerogen types and evolutionary stages : the ratio of their integrated intensities is roughly constant . also , k ( 3.4 μm ) increases linearly with h / c , from 0 at h / c of about 0.3 at least up to 80 cm / mg at h / c = 1.3 . it should be clear from the above that a single spectrum like fig . 4 can not , by far , convey the full diversity of coal and kerogen spectra . series of spectra in restricted wavelength ranges and corresponding to successive degrees of maturation may help for this purpose . an example is given in fig . 6 for the 3 μm range ; others can be found in [ 10 , 14 ] . here i discuss some of the problems encountered in applying the pah model to the most recent astronomical observations . in enunciating these , i shall try to remain faithful to the wording of the referenced papers ; this task is made easier by the authors' candour . the recent review by sellgren [ 5 ] was particularly useful in this respect . 3.1 _ carrier composition _ by definition , pahs are pure hydrocarbons . however , the carriers of the uibs are most likely formed in the winds of c - rich post - agb stars . in general , oxygen is not much less abundant than carbon in these winds . nitrogen and sulphur are also present in notable quantities . it is therefore difficult to understand why these three elements should be excluded from the dust formed in such environments . all the more so since the less abundant silicon shows up in the sic feature at 11.3 μm , and , most importantly , since the ism ( interstellar medium ) harbors many families of molecules containing these so - called heteroatoms , especially o , mostly associated with c ( see [ 17,18 ] ) . on the other hand , minority atoms , mostly o , are present in coals and kerogens in known concentrations , and these inclusions have important consequences : 1 ) they considerably enhance the dipole moment of some vibrational transitions involving c atoms , such as the 6.2 μm stretch of the c = c bond ; 2 ) they give rise to features observed in the sky in the mid - ir , such as the band at 1710 cm^-1 ( 5.85 μm ) and the massif at 7 - 9 μm [ 10 ] , as well as in the far ir , such as the 30 μm blend associated with coh wagging ( see [ 19 ] ) ; 3 ) minority atoms efficiently interfere with the tendency of c atoms to settle into an ordered aromatic structure ; _ hence the amorphous 3-d structure of fig . 3 , which determines all the favourable properties of this model _ , as shown below . 3.2 _ feature bandwidths _ the uibs are strikingly wider than the bands of any observed and identified molecule , the narrowest width being about 15 cm^-1 .
even at the highest resolution , no rotation - vibration decomposition is discernible , although such decomposition was clearly demonstrated , with the same spectrometers , for many identified molecules , such as h and h . neither is it possible to assign these widths to lifetime broadening , since the typical width of 30 cm^-1 would correspond to a lifetime of about 0.1 ps , way shorter than the radiative lifetime ( of order 0.1 s ) and the non - radiative lifetimes , ranging between 1 and 1000 ps . thermal doppler effects are also excluded for the masses and temperatures of particles of interest . thus , although uibs can be fit by one or more lorentzians or gaussians , these are too wide to carry physical meaning : the uibs are definitely broader than laboratory pah features _ in absorption _ ( see [ 20 ] , [ 5 ] ) . in _ emission _ , highly excited molecules may have much broader features than in absorption because of _ anharmonicity and spectral congestion _ . this is illustrated by williams and leone [ 21 ] , who excited gas - phase naphthalene molecules ( planar clusters of 2 benzenic rings , 18 atoms ) with short laser pulses of photons of energy 5.3 and 6.6 ev . monitoring the outgoing radiation at 3.3 μm ( c - h stretch ) , they found its width to decrease , while the peak shifted to the blue by 45 cm^-1 , as the energy content of the molecule was reduced by on - going radiative emission . no such behaviour is observed in the sky . being associated by birth with single - photon excitation ( stochastic heating ) , the pah model is therefore faced with a dilemma : either the molecule is so small , and the photon so energetic , that the `` temperature '' can reach about 1000 k and the emission is strong enough to be detected , but then with unacceptably large widths and red shifts ; or the molecule is large and the photon energy low , so that no undesirable broadening and/or red shift occurs , but then with unacceptably weak intensities and narrow widths . recent attempts to solve this dilemma relied on the strong but unsupported assumption that the band positions and widths of the features are the same functions of temperature for all interstellar pah sizes , or on very specific mixtures of selected pah cations supposed to be independent of history and environment ( see discussions by verstraete et al . [ 22 , 23 ] and boulanger et al . [ 24 ] ) . by contrast , the _ disordered solid - state _ model assigns the features to the synchronous vibrations of specific functional groups , and most of their width to the dispersion of similar functional groups among various environments , a situation corresponding to small - scale inhomogeneity [ 20 ] . the widths of uibs are indeed typical of solid , disordered materials [ 25 ] and , in particular , coals and kerogens [ 10 ] . the disorder , here , is mainly induced by heteroatoms and is , therefore , chemical and structural in nature ( sec . 2 ) . within limits , and especially below 500 k , feature widths are not sensitive to temperature or grain size , provided it is large enough ( a few hundred atoms ) to accommodate the required diversity of environments for a given average chemical composition .
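the lifetime quoted above follows from the usual relation between a lorentzian linewidth and the decay time ; as a quick check of the arithmetic ( assuming , as elsewhere in this section , that the width is expressed in cm^-1 ) :

\[ \tau \;\simeq\; \frac{1}{2\pi c\,\Delta\tilde{\nu}} \;=\; \frac{1}{2\pi\,(3\times 10^{10}\ \mathrm{cm\,s^{-1}})\,(30\ \mathrm{cm^{-1}})} \;\approx\; 1.8\times 10^{-13}\ \mathrm{s} \;\approx\; 0.2\ \mathrm{ps} , \]

of the order of the 0.1 ps quoted above and indeed far shorter than both the radiative ( of order 0.1 s ) and non - radiative ( 1 - 1000 ps ) lifetimes , so lifetime broadening can not produce the observed widths .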
under such circumstances , small degrees of ionization and dehydrogenation are not likely to alter the spectrum notably , in stark contrast with pah behaviour . while this explains the remarkable stability of the uib widths [ 3,22 ] , it also leaves room for the observed diversity [ 20 ] , which may result from different compositions at inception , as well as different thermal and radiative processing in the interstellar medium . because of this disordered , solid - state structure , the coal / kerogen model also adequately mimics the celestial massifs at 7 - 9 μm ( see fig . 4 ) and 11 - 13 μm [ 9 ] , which are major puzzles for the pah model [ 5 ] . 3.3 _ carrier aromaticity _ the use of the term pah implies full aromaticity . this radical assumption was initially suggested by the strength and ubiquity of the 3.3 , 6.2 and 11.3 μm features . however , it was not supported by later observations , illustrated by fig . 5 for the stretch bands and in [ 7 ] for other ones : distinct emission features between 3.3 and 3.6 μm are seen towards agb stars such as iras 05341 + 0852 , 04296 + 3429 and crl 2688 , in planetary nebulae ( e.g. hb 5 [ 26 ] ) and towards at least one nova : nova cen 1986 [ 27 ] . in some cases , they are even stronger than the 3.3 μm feature , as in absorption towards the galactic center , where the aromatic feature is dwarfed by a massif with peaks near 3.4 and 3.5 μm . the initial assignment of these `` red '' features to hot bands of the c - h stretch transition has proven unconvincing because of their strength and the non - detection of the correlative first harmonic to be expected near 1.67 μm [ 15 , 26 ] . it seems therefore more reasonable to assign them to the well documented aliphatic and olefinic , symmetric and asymmetric , c - h stretches at 3.4 , 3.46 , 3.515 and 3.56 μm , in ch3 , ch2 , ch and aldehyde ( -cho ) groups [ 33,34 ] . figure 6 illustrates this statement by setting , side by side with the spectra of fig . 5 , the spectra of coals at increasingly higher stages of aromatization ( see sec . 2 ) : not only is the fit of corresponding spectra remarkable , but the regular decrease of the red blend together with the rise of the 3.3 feature confirms their assignment to non - aromatic vibrations . neither is it obvious that the 6.2 and 11.3 μm features are characteristic of aromatics . indeed , while the 11 - 13 μm massif with 3 or 4 peaks , observed towards post - agb stars , may reasonably be ascribed to the out - of - plane vibrations of aromatic c - h bonds [ 9 ] , it has proven very difficult to devise a pah cluster that could display a highly contrasted , strong and lonely 11.3 μm feature as observed in most is ( interstellar ) spectra . according to [ 35 ] , this requires `` rectangular '' pahs with 100 to 150 c atoms , a very special requirement indeed ! ( see next subsection ) . moreover , the expected correlation between the 3.3 and 11.3 features is not confirmed by observations [ 47 , 52 , 63 ] . even the exact spatial coincidence of the intensity peak of the 11.3 feature with that of the 3.3 feature appears to be in doubt ( see [ 63 , 64 ] ) .
on the other hand , a ( more familiar ) alternative or concurrent assignment for the 11.3 μm band could be the wagging of the two h atoms of a vinylidene group ( r - c = c - h ) , which also occurs at the desired frequency [ 33 ] . clearly , the constraint of full aromaticity is not warranted : an understanding of the `` 3.4 '' feature , as well as of the diversity of uib spectra , is better served by thinking in terms of _ evolution of is dust composition and structure _ , both intimately connected and induced by prolonged heating ( sec . 2 ) and irradiation [ 36 ] . while this is natural , straightforward and quantitative in the coal / kerogen model , it seems to be hardly possible with compact pahs , whatever their size and degree of ionization or hydrogenation . 3.4 _ spectral continuum _ known pahs , even when ionized , can not provide enough visible _ continuum _ absorption to account for the energy in the emission features observed around uv - poor reflection nebulae [ 5 ] . neither can they mimic the unpolarized near - ir continuum detected towards the same objects [ 5,3 ] . this , of course , does not come as a surprise since free - flying molecules are not usually known for their continuum , although they do exhibit lines or bands in the vis / uv . herein lies yet another , perhaps the main , reason for invoking pahs much larger than initially envisioned [ 5,35 ] . unfortunately , none has yet been produced in the laboratory , if this is possible at all . indeed it is well known that , in the absence of externally applied pressure , a planar sheet of compact benzene rings ( graphene ) of increasing size tends to bend , warp and finally fold upon itself to form a fullerene or a carbon nanotube [ 37 ] . now , these particular carbon structures indeed have features in the uv / vis but they also have features elsewhere , such as the four conspicuous ir bands of fullerene at 526 , 576 , 1183 and 1428 cm^-1 [ 38 ] . none of these has been detected in the sky despite intensive search . moreover , even these structures do not display mid- and far - ir continua , which points to still another puzzle of the pah model : there is also a _ mid - ir _ continuum underneath the uibs ; for want of a better assignment , this has been attributed to a separate dust component , the very small grains ( vsgs ) , but its composition and structure could not be specified further [ 5 , 39 ] . note that they were thought to be small solely on the grounds of the self - imposed constraint that their emission should be excited upon absorption of a single uv / vis photon . recent surveys of the spatial variations of this continuum have added to the confusion and seem to require again very special behaviour of the assumed separate component in order to match the systematic changes in the ratio of features to continuum [ 24 , 39 , 40 , 41 , 42 ] . of course , theorists never lack an explanation , but all tentative applications of the pah model seem to converge towards particles much larger than invoked in the first place [ 5 , 24 , 35 , 43 , 44 ] .
at that point , it must be recalled that _ bulk _ carbonaceous materials indeed display a continuum , but the profile and intensity of the latter depend heavily on the composition and structure of the material [ 7 ] .thus , the continuum of glassy carbon is very strong and varies approximately as at least as far as 100 m , but coals and a - c : h ( amorphous hydrogenated carbon ) are semi - conductors whose absorption falls down sharply towards the red at a cut - off wavelength ( m ) which increases with aromaticity .the lower aromaticity kerogen has a cut - off at shorter wavelengths and , hence , a weaker mid - ir continuum , better suited to the high contrast of the uibs ; it has also a near - ir continuum rising towards short wavelengths ; it has the further advantage of carrying both continuum and bands [ 10 ] , thus eschewing the need for two different carriers .3.5 _ near- and far - uv bands _simple aromatic molecules are known for their near- and far - uv transitions between ground and excited electronic states[45 ] .none of these was detected in the sky , either in absorption or in emission ( fluorescence ) [ 6 ] , although several attempts were made to identify the dibs ( diffuse interstellar bands ) with such bands . in this field , too , hopes now hinge upon large 3-d systems [ 46 ] . indeed , it was shown that high - temperature treatment ( htt ) of coals favored aromatization and the growth of a _ single _ near - uv feature which irreversibly becomes narrower and bluer [ 7 ] . in some favourable cases ,the end point of this evolution is _ polycrystalline ( disordered ) graphite _ , with a single , conspicuous , feature near 217 nm , on top of a continuum which increases sharply to the blue and decreases slowly to the red [ 7 ] .then , of course , the material can hardly be designated as a pah .3.6 _ infrared bands _regarding wavelength and bandwidth , the uibs are remarkably stable against large variations in the environment and irradiation .this stability is incompatible with an assignment of the bands to a soup of free - flying independent pahs , neutral or ionized and more or less hydrogenated , as suggested by a number of authors in the past , since later laboratory studies showed that individual pahs are highly sensitive to irradiation wavelength and intensity ( see discussions in [ 5 , 24 , 48 , 49 , 50 ] . in trying to use these parameters to fit observations , one is often led to conflicting or paradoxical conclusions ( see discussions in [ 22 , 23,35 , 49 , 50 , 51 ] ) .one is therefore led again to prefer a solid - state ( bulk ) model : in that case , ionization and dehydrogenation can only affect the surface of the grain , a minor fraction of the whole .also , the dust is then much more robust against destruction by shocks .moreover , the structure and composition of a bulk , amorphous , material as coal / kerogen can _ evolve slowly , spontaneously or upon heat treatment _ as shown in sec .this is accompanied by subtle changes in band wavelengths and widths ( small fractions of a micrometer ) , also observed in the sky ( see [ 10 , 20 ] ) .very large changes in relative intensities are also observed , especially in the ratios [ 3.3]/[3.4 ] , [ 3.3]/[11.3 ] and [ 11.3]/[12.7 ] ( see[47 , 52 ] ) . 
on terrestrial analogs , most of these can be explained by coherent and systematic changes of composition and structure brought about by heat or slow spontaneous evolution . the most spectacular and best documented case is that of the c - h stretch ( see fig . 6 ) : dischler analyzed in detail the evolution of each component of this group of vibrations [ 34 ] . thus _ the spectral diversity of coals / kerogens of different degrees of maturation ( ranks ) and origins matches the diversity of astronomical spectra . the main astronomical features associated with carbonaceous is dust , including the continuum , are provided , to various extents , by coals / kerogens of various ranks . different coals / kerogens or mixtures thereof can account , roughly at least , for astronomical spectra from different is environments . these models introduce no significant spurious feature anywhere in the explored spectrum . _ examples of quantitative fits between uibs and models can be found in [ 7 ] for the uv / vis range , here in fig . 5 and 6 for the near - ir , in [ 10 , 11 ] for the 5 - 10 μm range , in [ 9 ] for the 11 - 13 μm range and in [ 19 ] for the far - ir range ; see also [ 32 ] . that is not to say that a perfect match between celestial and laboratory spectra is easily obtained . for one thing , the high quality of iso spectra has revealed many new weak features , or details of the profiles of previously known features , which have yet to be elucidated ( see [ 35 , 52 ] ) . more importantly , no satisfactory interpretation is yet available for the 12.7 feature , nor for the 11.3 feature being usually so much stronger than the underlying blend ( 11 - 13 μm ) which is commonly assigned to c - h mono , duo , trio and quarto out - of - plane bends ( see above ) . _ the 11.3 and 12.7 bands are not present with a high contrast in the solid - state model spectra . _ but the coal / kerogen model provides at least a framework with definite guidelines and valuable assets :
- representative natural samples are available with known composition , structure , physical properties and band assignments ;
- well - defined heat treatments can alter these properties controllably and coherently ;
- the range of possible model spectra can be extended by using samples of different depths and geographical origins , thus scanning the whole van krevelen diagram ( fig . 1 ) , i.e. a large range of _ natural _ structures ;
- the existence , in space , of carbon chains and molecules as large as acetone is easier to understand in terms of breakup of large , inhomogeneous , kerogen - like grains ( formed in `` dense '' circumstellar shells and destroyed in strong is shocks ) , rather than in terms of synthesis in the tenuous is medium .
3.7 _ excitation _ the relatively high ratio of short to long wavelength uib intensities initially led to the conclusion that the emitter temperature should be very high .
under the assumption of stochastic heating by absorption of a photon , and for the relatively small pahs initially considered , calculations showed that such a temperature already requires the absorption of uv to far - uv photons [ 53 ] . the steady increase of the absolute intensities of the uibs with ambient uv radiation field intensity ( g value ) over 4 to 5 orders of magnitude has been considered as the strongest argument in favour of stochastic heating by uv photons [ 4 ] . these conclusions are in conflict with a number of astronomical and laboratory facts . 1 ) the relative uib intensities from reflection nebulae are independent of the temperature of the illuminating star , i.e. of the hardness of the radiation field , from 22000 down to about 5000 k ( see [ 5 ] ) ; this is at variance with the sharp decrease of dust temperature with photon energy predicted by calculations [ 53 ] . 2 ) shortward of 10 μm , the shape of the uib spectra of the several galaxies that have been studied is strikingly similar for nuclear regions , active regions in the disks , as well as quiet regions [ 49 ] . a similar stability is observed among spectra of regions of hugely different g values , which are accompanied by large variations of star temperature and , hence , of the spectrum of the exciting radiation . again , one should therefore expect notable non - linear variations of relative band intensities , which is not the case [ 4 ] . the authors of [ 49 ] conclude that the global relation between uib intensity and uv illumination might be only indirect . this is corroborated by the observation of uib emission from uv - poor regions , such as the galaxy m31 [ 61 ] . 3 ) the `` small pah - stochastic heating '' paradigm is also confronted with the stability of the uib wavelengths and widths against environmental variations , especially in view of attendant dehydrogenation , ionization or downright destruction of the molecule [ 4 , 5 , 24 , 39 , 48 , 50 ] ( see above ) . 4 ) it appears from the previous subsections that all available spectral evidence weighs in favour of bulk , solid - state models of is dust , as opposed to free - flying individual molecules . recent discussions involving pahs also invoke `` large molecules '' , up to 1000 atoms or more [ 5 , 24 , 35 , 43 ] . now , according to the former calculations , this leads to prohibitively low dust temperatures and undetectable uib emission . one way out of this paradox is to hypothesize very special , but as yet not measured , electronic properties : high photon absorption cross - sections , absorption cut - off sliding to the red as the particle size increases , etc . this becomes still more acrobatic if one is to account for uibs detected in uv - poor environments ( see [ 22 , 54 ] ) . the coal model is confronted with a symmetrical puzzle as long as it holds to the simplest excitation process : thermal equilibrium . this is only effective in strong radiation fields , in the immediate vicinity of hot stars , as is the case with post - agb stars and ppne ( pre - planetary nebulae ; see [ 9 ] ) ; but it is out of the question in rne ( reflection nebulae ) and questionable in other instances . all models are therefore confronted with the task of conceiving a third excitation mechanism . a significant hint towards this goal is provided by the spatial distribution of the intensity of the 3.3 feature across the orion bar observed edge - on [ 55 ] : it forms a narrow peak between the ionization front ( delineated by the br and p lines ) and the h2 excitation peak ( delineated by its ro -
vibration transitions in the ground state ) . while the sharp drop towards the ionization front may perhaps be attributed to partial or total destruction of the carrier in the ( hot ) hii region , the drop on the opposite side is more problematic because it can not be due to a decrease in the column density . sellgren et al . [ 55 ] finally assigned it to the extinction of the stellar radiation by molecular hydrogen , assuming a density of 10 for the latter . however , if this was the case ( with a small pah , as required by single - photon heating ) , one would also expect a change in the spectrum of the exciting radiation and , consequently , a change in the 3.3 emission feature across the bar . what is observed , in fact , is a remarkable stability of wavelength , width , [ 3.4]/[3.3 ] and [ continuum]/[3.3 ] ratios . now , the steep , large and opposite intensity variations of the ionic and molecular hydrogen spectral features observed to occur across the bar suggest that , in between the ionic and molecular regions , there must be a narrow transition layer of atomic hydrogen , roughly coincident with the 3.3 feature peak . a similar situation has been described in ngc 7027 [ 56 ] and seems to have been observed at the edge of high - latitude cirruses [ 41 , 65 ] . this led us to surmise that the excitation of the uibs is linked with the presence of these atoms , rather than directly with the ionizing radiation [ 57 ] . such a scenario ( developed in the next section ) naturally and straightforwardly explains the small transversal extent of the 3.3 layer , without having recourse to uncertain or qualitative assumptions as to the destruction of dust carriers towards the exciting star [ 41 ] , or steep extinction of the uv towards the molecular cloud . hydrogen _ radicals _ ( free atoms ) are known to be highly reactive , especially with hydrogen and carbon . when such a radical hits a hydrogenated surface , it is likely to form , with an h atom of the surface , an h2 molecule , which will promptly escape , leaving a _ dangling bond _ . this catalytic reaction between two h atoms is the main route envisioned by astrochemists to explain the high rate of formation of molecular hydrogen in space : here , the solid surface provides the third body which is necessary to ensure conservation of momentum .
since carbon is known for its high catalytic efficiency , our is carbon dust is a good candidate for the job , provided it can present a large fraction of dangling bonds per c atom .it must therefore be disordered and porous , as are coals and kerogens .note that the nascent h molecule carries away a large part of the energy made available by this exothermic reaction ; it must therefore be ro - vibrationally excited and this may be the source of part of the h lines observed in pdr s and at the edge of molecular clouds , between ionized h and molecular gas ( co ) .now , when another h atom approaches the dangling bond left on the dust , it is very likely to be captured and form a new strong c - h bond .the chemical ( potential ) energy of this bond , .5 ev , is insufficient to dissociate other bonds or excite higher electronic states ; it is then available as ground - state vibrational energy for the atoms in the grain .initially , this energy is essentially in the form of c - h stretch .however , it quickly spreads around , exciting the other characteristic ( normal ) modes of the given particle .the bond anharmonicity ( non - linearity of restoring forces ) opens channels for the available energy to be shared among the modes , which otherwise would be isolated .this process is called ivr ( internal vibrational redistribution ) , and is usually very short - lived , of order 1 ps , after which the energy deposited in an isolated grain can only escape through ir radiation , with a time constant of order 0.1 s. such _ infrared chemiluminescence _ , following molecular reaction in the gas phase was observed long ago [ 58 ] and often thereafter .similar processes occur in an isolated gaseous molecule upon absorption of a vis / uv photon , as in the experiment of williams and leone [ 21 ] , referred to above .the problem of theoretically predicting the chemiluminescence spectrum is extremely difficult to solve because of lack of detailed knowledge of the various couplings ( force fields ) between atoms in a given molecule : electrostatic dipole and higher order moments , van der waals and other forces , hydrogen bonding , etc.this is compounded by the length of dynamical calculations in the case of large systems , such as those that are now envisioned as dust models .fortunately , commercial software is now available which is dedicated to numerical simulation of molecular structure , dynamics and reactions .i have used such gear to collect quantitative , semi - quantitative and qualitative data on the structure of coal / kerogen model particles as large as 500 atoms , the interaction of h atoms with such particles and the resulting vibrational spectra [ 59 , 60 ] .the main result of these computations , of interest for present purposes , is that , to a large extent , _ the predicted emission spectrum is very similar to the absorption spectrum , independent of particle size and colour temperature of the illuminating star .no immediate thermalization occurs_. moreover , since the only relaxation route is through ir radiation , _ the luminescence efficiency in the uibs should be very high _ ( of order 0.1 if it is defined as the ratio of ir energy emitted over deposited energy ) .chemiluminescence thus solves the puzzles encountered above with the two other excitation mechanisms , _ provided enough atomic h is available_. 
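a rough order - of - magnitude sketch of this energy budget may help fix ideas ; the bond energy used below is an assumption ( a typical c - h bond energy of about 4.5 ev , the exact figure being garbled above ) , and the band list is simply the usual set of uibs :

```python
# rough order-of-magnitude sketch of the chemiluminescence energy budget per h capture.
# the bond energy is an assumed, typical c-h value; the band list is the usual set of uibs.
HC_EV_UM = 1.2398          # h*c in ev * micrometre
E_DEPOSITED_EV = 4.5       # assumed chemical energy released per h-atom capture

for lam_um in (3.3, 6.2, 7.7, 8.6, 11.3):
    e_photon = HC_EV_UM / lam_um
    print(f"{lam_um:5.1f} um photon = {e_photon:.2f} ev -> "
          f"at most ~{E_DEPOSITED_EV / e_photon:.0f} such photons per capture")

# with ivr spreading the energy in ~1 ps and ir radiative decay taking ~0.1 s,
# essentially all of the deposited energy must eventually leave as a handful of
# ir photons, which is why a high luminescence efficiency is plausible.
```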
if there is a luminous star near a molecular cloud , it will give rise , at the edge of the cloud , to a pdr ( photodissociation region ) such as the orion bar ( referred to in sec . 3.7 ) , with contiguous ionized , atomic and molecular regions , successively in the radial direction from the star . the _ quantity _ of chemically excited dust in the atomic layer , and hence the uib intensities , will obviously depend on g , but the ir emission spectrum will not ! this is all compatible with the roughly linear increase of the individual uib intensities versus g , as well as with the stability of their spectrum over 5 orders of magnitude of g [ 24 ] . moreover , because the rates of dissociation of h2 molecules and ionization of h atoms depend on the flux of uv photons , the uib intensities will depend , in absolute but not relative value , on the stellar colour temperature , which is not the same for all values of g . this , together with the random variations of molecular density at the cloud edge , accounts for the observed excursions from strict linearity of intensity with g [ 24 ] . of course , this does not exclude the coexistence of thermal equilibrium heating of grains if the ambient radiation field is strong enough . in the opposite case of no neighbouring luminous star , atomic h is still present , but in smaller quantities , if only for the diffuse galactic is radiation field ; this accounts for the uib luminosity of nebulae at high galactic latitudes , such as the chamaeleon cloud or the ursa major cirrus [ 41 ] . a further observational support for this excitation model is the spatial coincidence of the uib emission with the `` filament '' of excited h2 emission at the edges of ngc 7027 [ 56 ] and of the main cloud [ 65 ] : indeed , as indicated at the beginning of this section , chemiluminescence induced by atomic hydrogen is genetically associated with the production of hydrogen molecules which carry away part of the chemical energy of the incoming h atom and are , therefore , ro - vibrationally excited . in application of these ideas , a chemical kinetic model was built [ 11 ] for the galaxy m31 , whose diffuse ism emits the uibs although it is nearly devoid of far - uv radiation [ 61 ] . a good numerical fit to the observed ir intensities was obtained assuming an atomic h density between 50 and 200 cm^-3 ( in agreement with the range determined for the edge of a typical molecular cloud in our galaxy ) and a chemiluminescence efficiency of 0.3 , in agreement with the numerical chemical simulation referred to above . the essential difference between this excitation mechanism and stochastic heating does not lie in the energy deposition process ( atomic h capture or photon absorption ) , but in the unwarranted assumption at the core of the stochastic heating theory that _ thermalization of the absorbed energy occurs before radiative relaxation _ , only because the latter may require up to 1 s. common wisdom mixes up ivr with thermalization , and the last decades of the past century were necessary to discriminate between various types of _ chaos _ and between various degrees of _ randomization _ ( see bibliography in [ 60 ] ) .
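the kind of estimate underlying such a kinetic model can be sketched as follows ; all numerical values are assumptions chosen to be consistent with the ranges quoted above ( n_h between 50 and 200 cm^-3 , efficiency about 0.3 ) , while the gas temperature and sticking probability are purely illustrative :

```python
# hedged order-of-magnitude sketch of the ir power expected from chemiluminescence,
# per unit grain surface area. parameter values are assumptions chosen to match the
# ranges quoted in the text (n_h = 50-200 cm^-3, efficiency ~0.3); the gas temperature
# and sticking probability are purely illustrative.
import math

K_B, M_H, EV = 1.380649e-16, 1.6735e-24, 1.602e-12   # cgs units

def chemiluminescent_flux(n_h_cm3, t_gas_k, e_chem_ev=4.5, efficiency=0.3, sticking=0.3):
    """ir power emitted per cm^2 of grain surface, in erg s^-1 cm^-2."""
    v_mean = math.sqrt(8.0 * K_B * t_gas_k / (math.pi * M_H))   # mean h-atom speed
    capture_rate = 0.25 * n_h_cm3 * v_mean * sticking           # captures per cm^2 per s
    return capture_rate * e_chem_ev * EV * efficiency

for n_h in (50, 100, 200):
    print(n_h, "cm^-3 ->", f"{chemiluminescent_flux(n_h, t_gas_k=100.0):.2e} erg s^-1 cm^-2")
```

multiplying by the grain surface area per unit volume along the line of sight would turn this into an emissivity , which is the quantity a kinetic model of this kind actually compares with the observed ir intensities .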
in momentum - configuration phase space , thermal ( statistical ) equilibrium is the extreme case where all points of the energetically available phase space are equally probable and ultimately visited by the representative point of the molecular system . this stage is only ideal and , in fact , very difficult to reach on a microscopic scale and in the finite time of experiments , but it is easier to approach when dealing with macroscopic assemblies of identical particles interacting with each other and with a `` thermal bath '' ( itself an ideal concept ! ) . by contrast , experiments resolved in time , space and wavelength have shown that the vibrations of atoms in an isolated system may retain a high degree of coherence for a long time . _ that means that only parts of phase space are visited , that randomization is not complete and that a temperature can not be defined _ . i have shown numerically that this is the case for a disordered system , even as large as 500 atoms [ 60 ] . for this purpose , i used a structure inspired by chemical models of coal [ 62 ] . a small fragment of this is shown in fig . 7 , and fig . 8 displays the frequency spectrum of its electric dipole fluctuations after deposition of .5 ev in one c - h bond . if the same energy were thermalized , the peak temperature reached by the particle would be 300 k and the ratio [ 3.3]/[11.3 ] would fall below 1 ! experiments such as that of williams and leone [ 21 ] and wagner et al . [ 15 ] seem to show that randomization in pahs is also slow after absorption of a uv photon , so that ir photoluminescence is observed . this process could therefore coexist with chemiluminescence . however , it probably suffers from the undesirable loss of part of the deposited energy to vis / uv fluorescence and electronic continuum emission ; neither of these affects chemiluminescence , since the chemical excitation considered here leaves the system in the electronic ground state . one is overwhelmed by the immense diversity of both uib and coal / kerogen ir spectra . some relief is provided by 1 ) the sparseness of the prominent bands , 2 ) their relative stability in wavelength , 3 ) the encouraging fits attempted between observations and models over most of the uibs and extended to the uv / vis and far - ir ranges , 4 ) the vast leeway for model tailoring offered by the disordered structure of coals and kerogens , as opposed to the very restrictive aromaticity of pahs , 5 ) the credibility of the model material , because it is so easily found in nature and essentially made of only the four most abundant elements in space ( after he ) , and 6 ) the fact that this material has already been so thoroughly studied in the laboratory that even its diversity is mostly understood . the issue of the excitation mechanism is intimately linked to the issue of modelling the observed spectra .
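the thermalized case can be checked with a two - line blackbody estimate ; this is only a sketch ( band strengths are ignored , and the planck function alone is compared at the two wavelengths ) :

```python
# numerical check of the thermalization argument: if the deposited energy were
# thermalized, a ~300 k particle radiating like a blackbody (band strengths ignored)
# could not sustain the 3.3 um band against the 11.3 um band.
import math

H, C, KB = 6.626e-34, 2.998e8, 1.381e-23

def planck_lambda(lam_m, t_k):
    """spectral radiance per unit wavelength, w m^-3 sr^-1."""
    return (2.0 * H * C**2 / lam_m**5) / math.expm1(H * C / (lam_m * KB * t_k))

t = 300.0
ratio = planck_lambda(3.3e-6, t) / planck_lambda(11.3e-6, t)
print(f"b(3.3 um) / b(11.3 um) at {t:.0f} k = {ratio:.3f}")   # ~0.02, well below 1
```

the ratio comes out near 0.02 , i.e. well below 1 , whereas the observed spectra keep a prominent 3.3 μm feature ; this is the sense in which thermalization at such a low temperature is incompatible with the observations .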
by invoking chemiluminescence on large particles , one avoids high dust temperatures and the attendent puzzles of varying band wavelengths and widths which are not observed in the sky .-the need for a library of complete celestial ir spectra ( to m ) , each associated with a known environment , differing from the others by the intensity of the radiation field or the nature of the object : rn , ppn , pn , pdr , gc , etc .; this would help understand , by analogy with coals and kerogens , the evolution of is dust from inception in the envelopes of agb stars to its consumption in protostars ; the coal / kerogen model can set a frame for the discussion of these and other issues .however , keeping in mind that these terrestrial materials were forged under special physical and chemical conditions , the model will no doubt have to be amended according to observations , as the debate goes on .i am indebted to j. conard , j .-n . rouzaud ( cnrs / orleans ) , and m. cauchetier , o. guillois , i. nenner , c. reynaud ( cea / drecam , saclay ) for a fruitful collaboration , and to many others for enlightening discussions .
the recent influx of high - quality infrared spectroscopic data has prompted an extensive reassessment of various laboratory models in comparison with the observed uibs ( unidentified infrared bands ) . as a result , significant modifications were brought to the original paradigms . the focus here is on the evolution of the coal model , characterized by 1 ) a shift towards less aromatic materials of the same family ( kerogens ) , and 2 ) the introduction of a new excitation mechanism ( chemiluminescence ) , based on the capture of hydrogen atoms by carbonaceous dust . both developments are intended to accommodate observations from a larger range of dust environments and evolutionary stages . this leads to a more quantitative description of dust composition and structure , and a better understanding of its history . in short , according to the present model , the evolution of dust from inception in the circumstellar shells of agb stars , through strong interstellar radiation fields , to consumption in protostars , is approximately mimicked by progressively more aromatic materials , starting from the young and mostly aliphatic kerogens , through more and more mature coals , to the final stage of polycrystalline graphite . a similar family of materials is obtained in the laboratory by annealing in vacuum up to or beyond 3000 k. the composition , structure and ir spectrum of these materials are extensively documented in the geology literature . the fundamental characteristics of this model , viz . chemical and structural disorder and diversity of chemical bondings , naturally point to ways of further tailoring , in order to fit particular observations more closely .
the puzzle of glass - forming systems has remained sufficiently elusive over the years such that even the puzzle pieces themselves have changed shape .for example , the puzzle piece of the lack of a growing lengthscale may now have to be modified since a growing lengthscale can perhaps be extracted from a higher - order correlation function .a less recent change in puzzle pieces is the distinction by angell between fragile and strong glasses , where certain effects are more dramatic in fragile glasses than in strong ones .one piece of the puzzle that has remained constant over the years , however , is the dramatic slowing down of the dynamics of the particles near the glass transition .more precisely , a supercooled liquid s viscosity can increase by fourteen orders of magnitude as the temperature is decreased near a `` working '' definition of the glass temperature .two main phenomenological models for this dynamical slowing down have emerged over the years : mode - coupling theory and kinetically constrained models .we will not focus on mode - coupling theory here and simply refer the reader to several excellent reviews . as for the second phenomenological approach ,kinetically constrained models , the goal is to understand whether glassy dynamics can be understood as arising from steric constraints on the particles alone .one of the simplest such examples is the kob - andersen model .it is motivated by the caging of particles , ultimately observed in larger scale systems such as colloidal glasses .the kob - andersen model is a hard - core lattice gas model , but with the constraint that a particle can only hop to an adjacent site if and only if it has less than a certain number of neighbors , , both before and after the move .early simulations of the kob - andersen model on the hypercubic lattice for relevant values of appeared to find a dynamical phase transition at a nontrivial critical density .however , subsequent mathematically rigorous results found that the phase transition does not occur until the fully packed state .this corresponds to a zero - temperature glass transition .tbf proved this by showing that at any monomer density , there were mobile cores that could diffuse at sufficiently long time scales .while the hypercubic version of the kob - andersen model does not exhibit a finite - temperature glass transition , the mean field version does . since it is still up for debate whether or not mean field is relevant for physical systems, one can ask whether or not there exists another finite - dimensional , kinetically constrained model that exhibits a finite temperature glass transition . 
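as an illustration of the kinetic constraint just described , here is a minimal python sketch of the kob - andersen rule on a square lattice ; the lattice size , density and threshold m used below are arbitrary illustration values , not those of the studies cited here :

```python
# minimal sketch of the kob-andersen kinetic constraint on a square lattice.
# lattice size, density and the threshold m are arbitrary illustration values.
import random

L, m = 20, 3
occ = {(x, y): random.random() < 0.7 for x in range(L) for y in range(L)}

def nbrs(x, y):
    """periodic nearest neighbours of site (x, y)."""
    return [((x + dx) % L, (y + dy) % L) for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1))]

def allowed_move(src, dst):
    """hop allowed iff dst (assumed adjacent to src) is empty and the particle has
    fewer than m occupied neighbours both before (at src) and after (at dst) the move."""
    if not occ[src] or occ[dst]:
        return False
    before = sum(occ[n] for n in nbrs(*src))
    after = sum(occ[n] for n in nbrs(*dst) if n != src)   # src is empty after the hop
    return before < m and after < m
```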
while the kob - andersen ( and the fredrickson - andersen ) models , are elegant in their simplicity , there are indeed two more involved kinetically constrained models in two dimensions that can be proven to exhibit a finite - temperature glass transition .these two models have been dubbed the spiral model and the sandwich model .both models exhibit an unusual phase transition in that the fraction of frozen particles jumps discontinuously at the transition , typical of a first - order phase transition .however , as the transition is approached from below , there exists a crossover lengthscale that diverges faster than a power law in .the crossover length , , distinguishes between squares of size , which are likely to contain a frozen cluster , and squares of size , for which the probability of containing a frozen cluster is exponentially unlikely .given this combination of a discontinuity in the fraction of frozen particles , and a faster than power law diverging length scale , the transition has unique characteristics .models such as the spiral and sandwich models are proof in principle that further exploration of kinetically constrained models in finite - dimensions may be fruitful , in particular , because should an ideal glass transition exist , it may indeed be of unusual character . more specifically , the edwards - anderson order parameter should be discontinuous at the transition , yet accompanied by rapidly diverging time scales .kinetically constrained models can be related to models of correlated percolation , which are models in which each site is initially occupied with an independent probability , as in normal percolation , but then correlations are induced through some culling rule that is , sites can only be occupied if certain conditions on their neighboring sites are met .the two types of models are related , with the immobile particles of the kinetically constrained models corresponding to the stable occupied particles of the correlated percolation models .the simplest model of correlated percolation is -core percolation , in which an occupied site must have at least occupied neighbors .occupied sites that do not satisfy this stability requirement are removed , and this condition is applied repeatedly , until all remaining sites are stable .this model can be mapped to the fredrickson - andersen model .one might think that kinetically constrained models and correlated percolation systems would be easy to numerically simulate , and that construction of rigorous proofs would be nothing more than an interesting problem for mathematicians .however , while simulating these systems is easy , extracting their properties in the infinite system size limit is much harder . just as with the kob - andersen model , initial numerical simulations of -core percolation for certain values of found evidence of first - order phase transitions at nontrivial critical densities , and of second - order phase transitions in a different universality class than normal percolation .however , subsequent mathematically rigorous analyses found that the critical point is for those .because the critical point only approaches unity very slowly in the limit of infinite system size , the simulations on finite - size systems were misled as to the location of the true critical point , thereby highlighting the importance of rigorous results for these models as well . for a review of -core percolation ,see ref . 
.there is another finite - dimensional system exhibiting an unusual transition , at least numerically .it is the jamming transition in repulsive soft spheres .numerical simulations of soft spheres show a critical point at which the average coordination number jumps discontinuously to a universal , isostatic value . butquantities such as the shear modulus and the deviation of the average coordination number from its isostatic value show a nontrivial power law behavior in the vicinity of the critical point .recent experiments on two - dimensional photoelastic beads support this notion of a mixed transition .interestingly , it has been conjectured that the physics of granular systems , colloidal systems , and glassy systems are of a similar character .the mean field results of not one but several correlation percolation models corresponding to glassy physics support this notion quantitatively .furthermore , experimental evidence of caging in another two - dimensional granular system also supports this notion .the question of finite - dimensional glassy models being quantitatively similar to the repulsive soft sphere system is still being investigated .certainly the spiral and sandwich models show that , qualitatively , one can have a glassy system exhibiting an unusual finite - dimensional transition . however , they do not appear to be in the same class as the jamming system , since the order parameter exponent is unity just above the transition in the jamming percolation models , but is one - half in the jamming system . to explore the possible link between jamming and glassy systems in terms of a finite - temperature glass transition ,tbf initially introduced the knights model , a model of correlated percolation similar to the spiral model , and called it a model of jamming percolation .in fact , the spiral , knights , and sandwich models are all models of jamming percolation . in this paper, we expand on our earlier work ( ref . ) , in which we introduced the sandwich model , by presenting the details of the proof of an unusual transition in this model .this proof is based on modifying the proof developed by tbf in refs . for the spiral model ( although originally misapplied to the knights model ) .we also introduce further generalizations of models of jamming percolation to demonstrate that the phenomenon of a finite - dimensional transition is indeed somewhat generic . in doing so, we show that the tbf proof only gives a rigorous derivation of these novel properties if a `` no parallel crossing '' rule holds .this rule says that two similarly - oriented directed percolation chains can not cross without having sites in common .the effect of this rule is that one directed percolation - like process can not be used to locally stabilize the other . 
for models such as the sandwich model , which satisfy this `` no parallel crossing '' rule , but fail a `` no perpendicular crossing '' rule, the tbf proof works only with some modification .the methods described in this paper can be used to understand for which correlated percolation models a proof along the lines of the tbf proof can be used to show a glassy transition , and for which models they can not .given that there are two very detailed papers on the tbf proof , we will refer to them quite often , as opposed to making this paper self - contained .finally , we will discuss connections between jamming percolation and force - balance percolation , another correlated percolation model inspired by granular systems , where numerical evidence points toward an unusual transition in finite dimensions .we consider a class of models that generalizes the knights model , the earliest of the jamming percolation models .the class of models is defined on the two - dimensional square lattice .initially , each site is occupied with an independent probability .each site has four neighboring sets , and each set contains two sites .the four sets are labelled as the northeast , northwest , southeast , and southwest neighboring sets .( the sites in those sets only lie in precisely those compass directions for the knights model , but we continue to label the four sets in this manner for all our correlated percolation models until section [ sec : the implications of property b ] . ) to be stable , an occupied site must either have ( 1 ) at least one northeast neighbor and at least one southwest neighbor , or ( 2 ) at least one northwest neighbor and at least one southeast neighbor .all other occupied sites are unstable , and are vacated .this culling process is then repeatedly applied sites that were previously stable may become unstable by earlier cullings until all remaining sites are stable .the neighboring sets of the original knights model introduced by tbf are shown in fig .[ fig : knights ] . fig .[ fig : sandwichmodel ] shows the neighboring sets in the sandwich model , which we introduced in , and fig .[ fig : spiralmodel ] shows the neighboring sets in the spiral model , which tbf introduced in . in these models , stable sitesmust either be part of a chain running from the northeast to the southwest , or part of a chain running from the northwest to the southeast . in the final configuration , all sites must be stable , so any chain must either continue forever ( to the boundary ) , or terminate in a chain of the other type ( this latter caseis called a `` t - junction '' ) .any sites left after the culling procedure must thus be connected by an infinite series of chains ( or , in a finite system , connected by a series of chains to the distant boundary ) , so asking for the critical probability at which an infinite cluster first appears is the same as asking for the minimum probability at which some sites remain unculled in the infinite size limit .if sites could only be stable by having northeast and southwest neighbors , then every stable site would be part of a northeast - southwest chain on a particular sublattice the sublattice extending to the northeast of the site for the sandwich model is shown in figure [ fig : sublattice.sandwich ] .chains on this sublattice are isomorphic to infinite chains in directed percolation , which has a well - studied second - order phase transition . 
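the culling procedure common to these models is easy to state in code ; the sketch below uses placeholder neighbour offsets ( the actual knights , sandwich and spiral models each have their own specific two - site sets ) , so it illustrates the mechanics of the rule rather than any particular model :

```python
# sketch of the directional culling rule shared by the knights / sandwich / spiral models.
# the neighbour offsets below are placeholders, not those of any specific model.
import random

NE = [(1, 1), (2, 1)]                       # placeholder northeast set (two sites)
SW = [(-dx, -dy) for dx, dy in NE]          # southwest set is its mirror image
NW = [(-1, 1), (-2, 1)]                     # placeholder northwest set
SE = [(-dx, -dy) for dx, dy in NW]

def cull(occupied):
    """repeatedly vacate occupied sites that lack (ne and sw) or (nw and se) support."""
    def has(site, offsets):
        x, y = site
        return any((x + dx, y + dy) in occupied for dx, dy in offsets)

    while True:
        unstable = {s for s in occupied
                    if not ((has(s, NE) and has(s, SW)) or (has(s, NW) and has(s, SE)))}
        if not unstable:
            return occupied
        occupied -= unstable

L, p = 64, 0.7
config = {(x, y) for x in range(L) for y in range(L) if random.random() < p}
print(len(cull(config)), "sites remain frozen after culling")
```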
however , in these models there is an additional mechanism by which sites can be made stable that is , by having northwest and southeast neighbors . adding an extra way for a sites to be stable can only possibly depress the critical probability ,so we immediately see that for these models .it will turn out to be useful to divide these models into classes based on two properties .we define a model as having `` no parallel crossing '' property if whenever two northeast - southwest chains ( or two northwest - southeast chains ) intersect , they must have sites in common we abbreviate this a `` property a. '' and we define a model as having a `` no perpendicular crossing '' property , or property b , if whenever a northeast - southwest and northwest - southeast chain intersect , they must have sites in common .table [ table : properties ] shows which properties each of the three models possesses : examples of where the properties fail for each model are shown in figure [ fig : crossings ] . our analysis here is based to a large extent on the claimed tbf proof of a glassy transition for the knights model .their proof consisted of two parts .first , they claimed to show that the critical point of the knights model is exactly the same as that for directed percolation .second , once this was done , they were able to use well - known results on directed percolation ( assuming a well - tested conjecture about anisotropic rescaling in directed percolation ) to show that this model has a glassy transition specifically , they were able to find structures with a finite density at the critical point of directed percolation , and to show that just below this critical point , the crossover length and culling times diverged . a short version of their proof appeared in refs . , with more detailed explanations in ref . and .our analysis of this more general class of models shows that the tbf proof that is only valid for models satisfying property a ; so it works for the sandwich and spiral models , but fails for the knights model .the second part of their proof , showing a glassy transition , implicitly assumes property b. the spiral model exhibits property b and hence the tbf method of proof carries through .however , we show that the proof can be modified to work for models that fail to have property b. the spiral _ and _ the sandwich models thus have provably glassy transitions , while the knights model does not .we sketch the tbf proof that , which will let us understand why property a is sufficient , and most likely necessary to the result .the key to the tbf proof is to show that voids ( clusters of empty sites ) of particular shapes have a finite probability of growing forever .for example , for the diamond - shaped void in the sandwich model , shown in figure [ fig : sandwich.void ] , if the key site labelled is vacant , it will trigger the removal of all the sites marked with stars , increasing the void size by one .( the corresponding void for the knights model appears as figure 1c of . )if this process repeats forever , with such key sites repeatedly removed , this void will grow to infinity .the tbf proof for the knights model is based on the claim that the key sites of the knights model , located at the corners of octagonal voids of size , can only be stable if part of a directed percolation chain of . once this claim is granted , the rest of the proof is straightforward . 
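the remaining step , spelled out in the next paragraph , amounts schematically to the following estimate ( the constant and the exact dependence on the void size are only indicative ) :

\[ \Pr[\text{a given void grows forever}] \;\gtrsim\; \prod_{\ell \ge \ell_0} \bigl( 1 - C\, e^{-\ell/\xi(p)} \bigr) \;>\; 0 \qquad \text{for } p < p_c^{\,dp} , \]

since \( \sum_\ell C e^{-\ell/\xi} \) converges ; here \( C e^{-\ell/\xi} \) stands for the probability that the corner site of a void of linear size \( \ell \) is held up by a directed - percolation chain of length of order \( \ell \) . below the directed - percolation threshold a given void therefore has a strictly positive probability of never being stopped , and in an infinite system some void almost surely grows forever .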
for ,such long chains are exponentially suppressed , and thus for a large void , the vertices at the corners of the void are exponentially likely to be culled . summing up the relevant probabilities , thisresults in a finite probability that the void will grow to infinity .for an infinite lattice , it is thus certain that there will be at least one void that grows to infinity , showing that all sites are culled below . since , this is supposed to show that . the claim that sites at the corners of voids can only be stable if part of a long chain of is only valid for models with property a. a counterexample to this claim for the knights model can be seen in fig .[ fig : counterexample ] .this counterexample also shows why property a is necessary and sufficient for this claim to be valid . to be stable, the site must be part of a northeast - directed chain ; pick the lowest northeast - directed chain coming out of the corner site .if that chain stops before reaching length , it must terminate in a northwest - southeast chain .since the northeast chain is not as long as a wall of the void , the new southeast - directed chain will eventually hit the void ( thus resulting in the culling of all chains , and the corner site ) , unless it hits a t - junction .that new t - junction results a second northeast - southwest chain , which will eventually reach the first northeast chain , as in figure [ fig : counterexample ] . for models with propertya , the two chains will intersect , contradicting the original assumption that we chose the lowest northeast - directed chain out of . thus , by contradiction , for models with property a , the first northeast chain must be for to be stable , and it indeed follows that .what about for models such as the knights model , that lack property a ?is it possible that despite this counterexample to the claim , is actually equal to , for some other reason ? while we do not have a mathematically rigorous proof that , we present an argument that the two are almost certainly unequal .we present our arguments in the context of the knights model , but they generalize to other models that lack property a. consider the substructure in fig .[ fig : sub1to2 ] .all sites in it are stable under the knights model culling rules , except for the two sites at its ends , and those sites will become stable if the substructure is attached between two northeast - southwest chains . furthermore , there is no northeast - southwest chain internal to the substucture connecting the two ends . the substructure is internally stabilized by northwest - southeast links .this means that the substructure can act as a `` rest stop . ''northeast - southwest chains can have breaks in their paths , connected by this substructure .just below , long northeast - southwest chains almost form an infinite structure .they are almost linked , so a few extra connections , through these substructures , should create an infinite cluster even below .so we expect that .we can make the argument more formal by considering the following modification of the directed percolation problem , which we call jumping directed percolation .as with normal directed percolation , we occupy sites on the square lattice with probability , and connect each site with directed bonds to its neighbors to the north and east .however , now we define an additional way for sites to be connected .we divide the lattice into blocks of size 9x9 , and for each block , if the two hollow squares of sites shown in fig . 
[ fig : jumpperc ] are occupied , we with probability connect the two hollow squares with a directed bond from the southwest square to the northeast square .the critical point of this model is a function of : . by repeating fig .[ fig : sub1to2 ] three times , to create diamonds connecting sublattice # 1 to # 2 to # 3 and then back to # 1 , we obtain a structure that links two separated diamonds on sublattice # 1 , through sites in the other two sublattices .so if we restrict ourselves to looking at sites in sublattice # 1 , and the directed percolation structures on that sublattice , sites that appear disconnected may be connected by these sites in sublattices # 2 and # 3 .the structure obtained by repeating fig . [ fig : sub1to2 ] three times has 24 sites in sublattices # 2 and # 3 , each of which is occupied with probability , and has 24 sites in sublattice # 1 .the sites in sublattice # 1 in this repeated structure map onto the occupied sites in fig .[ fig : jumpperc ] .so with , gives the probability of having appropriate `` hidden '' occupied sites in sublattices # 2 and # 3 that connect and make stable the two hollow squares .infinite chains in the jumping directed percolation model are infinite stable clusters in the knights model , and thus . however , since jumping directed percolation is just directed percolation with an extra connection process , it is reasonable to expect that for all , implying . while this argument is not mathematically rigorous , it is strongly suggestive , particularly when we recall previous results on enhancements in percolation by aizenman and grimmett .their work showed that if percolation on a lattice was `` enhanced '' by adding , for specified subconfigurations , extra connections or occupations with probability , this would _ strictly _ decrease the critical probability , for any , so long as the enhancement was essential .essential enhancements were defined as those such that a single enhancement could create a doubly - infinite path where none existed before .see ref . for a more rigorous and precise statement of the results on enhancements , and ref . for a general discussion of enhancements .the results of ref . were obtained for undirected percolation , so are not directly relevant for the jumping directed percolation model considered here , but they are analogous enough to strongly suggest that for all .it is difficult to see how such adding such a new route for paths to infinity could leave the critical probability completely unchanged .for models satisfying property a , we have , but we still need to check that the tbf proof that the transition is glassy ( discontinuous with a diverging crossover length ) is valid .the tbf proof of a glassy transition implicitly assumes that the knights model has property b. for example , to show discontinuity , they construct a configuration that has a finite density at fig .2b of .this figure , and others like it , implicitly assume property b , because they are based on drawing overlapping rectangles in independent directions , and assuming that if paths in these intersecting rectangles cross , they must stabilize each other ( form a t - junction ) .the resulting frozen structure is shown on the left side of figure [ fig : twofrozenstructures ] . 
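as a rough numerical caricature of the enhancement intuition for the jumping directed percolation model defined earlier in this section, the following python sketch compares the crossing probability of ordinary site directed percolation with a version in which a wet site can additionally feed a site a fixed offset away with probability eps. the offset, lattice size, probabilities and the whole-row source are arbitrary illustrative choices and do not reproduce the 9x9 block construction in the text; the only point is that adding a connection route can only raise the crossing probability, consistent with a depressed critical point.

import numpy as np

rng = np.random.default_rng(1)

def crosses(p, eps, L=48, jump=(3, 3)):
    # site directed percolation on an L x L grid with bonds from (i, j) to
    # (i+1, j) and (i, j+1); as a caricature of the "jumping" bond, a wet
    # site also feeds the site jump=(di, dj) away with probability eps.
    open_sites = rng.random((L, L)) < p
    has_jump = rng.random((L, L)) < eps
    wet = np.zeros((L, L), dtype=bool)
    di, dj = jump
    for i in range(L):
        for j in range(L):
            if not open_sites[i, j]:
                continue
            if i == 0:                      # whole bottom row acts as the source
                wet[i, j] = True
                continue
            fed = wet[i - 1, j] or (j > 0 and wet[i, j - 1])
            if not fed and i >= di and j >= dj:
                fed = wet[i - di, j - dj] and has_jump[i - di, j - dj]
            wet[i, j] = fed
    return bool(wet[-1, :].any())           # did anything reach the top row?

def crossing_prob(p, eps, trials=100):
    return sum(crosses(p, eps) for _ in range(trials)) / trials

for p in (0.66, 0.70, 0.74):                # values straddling the site-dp threshold (roughly 0.7)
    print(p, crossing_prob(p, 0.0), crossing_prob(p, 0.3))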
however ,if a model lacks property b , the paths can cross without stabilizing each other .the knights model does not satisfy property a , so whether or not it satisfies property b is a moot point .but what about the sandwich model , which satisfies property a , but not property b ?the tbf proof as it stands is not immediately valid in these cases .nevertheless , it turns out that for such models , the tbf proof can be made to work by a modification of their structures .the basic idea of the modification is as follows .the tbf proof of a glassy transition is based on drawing structures consisting of sets of overlapping rectangles , showing that there is a sufficiently high probability ( finite for the proof of discontinuity , and approaching 1 for the proof of diverging crossover length ) that each rectangle has a spanning path in the desired direction , and then using property b to conclude that the intersecting paths form a frozen cluster . for models that lack property b, we use the same figures as in the tbf proof ( e.g. figs 2a and 2b of ref . ) , but pick the rectangle sizes large enough that there is a high probability that each rectangle has _ multiple _ spanning paths ( for a rectangle of ) , each occurring in a disjoint parallel subrectangle .then in each place where the tbf proof assumes a t - junction based on property b , we will have a northeast ( northwest ) path crossing many northwest ( northeast ) paths .the probability that no t - junction occurs turns out to decay exponentially with the number of northwest ( northeast ) paths . in figure[ fig : twofrozenstructures ] we show how the discontinuity structure of the tbf proof ( from fig .2a of ref . ) is modified by this procedure . to implement these ideas , we need to modify proposition 5.1 of ref . , which says that sufficiently large rectangles of size are very likely at the critical point to have chains connecting the sides of length : * proposition 5.1 * ( from ref . . ) : + for there exists s.t .+ here is the the dynamical exponent , which has been numerically found to be approximately 0.63 in two dimensions .the tbf proof of this proposition assumes a conjecture that there is a well - defined ( conjecture 3.1 of ref .we replace their proposition 5.1 with the following proposition , which instead states that we are likely to have connecting chains in disjoint parallel subrectangles : * proposition 100 * : + for there exists and s.t .+ we divide the box of size by into parallel disjoint subrectangles , each of size . assuming the conjecture of anisotropic scaling in directed percolation ( conjecture 3.1 of ref . ) , each subrectangle has a probability of having a path connecting the two sizes of length , contained within that subrectangle .the expected number of crossings is , and for any the probability of having less than crossings decays exponentially in . the tbf proof of discontinuity shows that for certain structures of rectangles , there is a nonzero probability that each rectangle has a suitable `` event , '' and that , assuming property b , the existence of each event ( a rectangle crossing ) results in a stable structure at the critical point .now , with our modified proposition , each `` event '' is the presence of multiple crossings ( in disjoint parallel subrectangles ) in each rectangle , rather than single crossings . 
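the exponential-decay statement in this modified proposition has the familiar large-deviation shape. if the N disjoint parallel subrectangles were exactly independent, each containing a spanning path with probability ρ, then for the number X of subrectangles that do contain one, a standard chernoff bound gives

\[
  \Pr\bigl[X < (1-\delta)\,N\rho\bigr] \;\le\; \exp\!\Bigl(-\tfrac{1}{2}\,\delta^{2} N\rho\Bigr),
  \qquad X \sim \mathrm{Binomial}(N,\rho),\quad 0<\delta<1 .
\]

this is only meant to illustrate the form of the bound; in the actual argument ρ is supplied by the anisotropic-scaling conjecture, and the near-independence of the subrectangle events is handled as in appendix [ sec : independence ] rather than assumed.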
without property b, this does not guarantee a frozen cluster .however , we see in appendix [ sec : independence ] that when a northeast ( northwest ) path crosses northwest ( northeast ) paths , in disjoint subrectangles , the probability of not forming a single t - junction decays exponentially in .this is physically obvious , since for large subrectangles , the probability of each t - junction in each subrectangle is essentially independent ; however , since the probabilities are not truly independent , more work is needed to make this rigorous .the details of the proof are relegated to appendix [ sec : independence ] .more generally , the arguments in appendix [ sec : independence ] show that we can treat the probabilities of t - junctions in different subrectangles as independent , when establishing an upper bound on the probability .we will use this throughout this section to multiply such probabilities as if they were independent . assuming property b, the tbf proof shows that the probability of having suitable events in each rectangle is nonzero at the critical point .we now need to show that , even without property b , this results in a finite probability of an appropriate set of t - junctions . in the tbf discontinuity structure , shown in fig .6 of ref . , the rectangles are labelled , .each rectangle is twice as large as the rectangles . so the `` suitable events '' of proposition 100 give at least crossings parallel to its long direction , where is some positive constant .the arguments in appendix [ sec : independence ] show that this results in a t - junction with probability , for some positive and .starting at the origin , the probability of forming appropriate t - junctions off to infinity can then be seen to be for some positive , , and .this product converges to a positive number , so the transition is proven to be discontinuous ( subject to assumption of the well - tested conjecture of an anisotropic critical exponent in directed percolation ) .the proof of the diverging crossover length can be made to avoid the assumption of property b by a similar modification of the tbf structures .again , we begin by repeating the tbf structures , with the set of parallel rectangles in figure 2a of ref . . in that picture ,if every rectangle has a spanning path , _ and _ the paths all intersect , there will be a spanning frozen cluster .tbf consider the case where each rectangle has sides of order the directed percolation parallel correlation length , .they then show that and can be chosen such that if the system size is $ ] , the probability that each rectangle is occupied by a spanning cluster approaches as , .if property b were to hold , this would result in t - junctions that would create a frozen structure , and show that the crossover length diverges as .we no longer have property b ; but instead , by replacing proposition 5.1 with proposition 100 , we can choose the rectangle sizes such that each rectangle is occupied by `` many '' spanning clusters ( with `` many '' defined by proposition 100 ) .given this , we can start at an arbitrary rectangle , and then work our way out , looking for t - junctions to create a spanning frozen structure .we will only fail to create a frozen structure if at some point we reach intersecting rectangles where one spanning path in one rectangle crosses many spanning paths in the other rectangle , but without creating a t - junction . 
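the convergence claim for the product over successive rectangles in the discontinuity argument above is easy to check numerically. the constants below are arbitrary stand-ins for the elided ones (a failure probability a·exp(-b·λ^k) for the k-th rectangle, with the number of available crossings growing geometrically); the only point is that such a product approaches a strictly positive limit.

import math

def tjunction_product(a=0.9, b=0.05, lam=2.0, kmax=60):
    # partial product of prod_k (1 - a * exp(-b * lam**k)), a stand-in for the
    # probability of building t-junctions out to infinity; a, b, lam are
    # illustrative constants, not values from the paper.
    prod = 1.0
    for k in range(kmax + 1):
        factor = 1.0 - a * math.exp(-b * lam ** k)
        if factor <= 0.0:
            return 0.0
        prod *= factor
    return prod

print(tjunction_product(kmax=60))    # already converged to a positive value
print(tjunction_product(kmax=200))   # adding more rectangles barely changes it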
by the arguments in appendixa , the probability of this occurring decays exponentially in for some .so even with intersections , the probability that we ever fail in this process goes to 0 as and , and we are essentially guaranteed a frozen structure .this shows that the crossover length diverges as we approach the critical point , with the same lower bound that tbf found .the models we have discussed so far have two possible ways in which a site can be stable , and varying neighboring relations for the culling relations .however , we can also define generalizations in which there are three or more ways in which a site can be stabilized .for example , for the pinwheel model , shown in figure [ fig : pinwheel ] , the condition for a site to be stable is that it ( 1 ) have neighbors in both the sets a and b , or ( 2 ) have neighbors in both the sets c and d , or ( 3 ) have neighbors in both the sets e and f. this gives a site three possible directions for stabilizing chains .similarly , figure [ fig : pinwheelplus ] shows a model in which there are four possible directions for a site to be stable , such that there is an extra `` or '' : for the sets g and h. we denote this model the 8-spiral model . despite the extra ways in which sites can be made stable, the tbf proof of a glassy transition is still valid , because property a holds for both the pinwheel model and the 8-spiral model .that is , in both of these models if two a - b chains ( or two c - d chains , or two e - f chains , or two g - h chains ) cross , they must have sites in common .this turns out to be sufficient to show that there will be a void that grows forever in the infinite system limit . for the sandwich and spiral models, we needed to show that a stable site at the corner of a diamond - shaped void has to be part of a directed percolation - like chain ( dp - like chain ) of order the size of the void . for the pinwheel model, we consider hexagonal voids , and show that stable sites at the corners of these voids must be `` associated '' with a long dp - like chain , where `` associated '' will be defined by the construction below .then , just as for the sandwich and spiral models , for , long dp - like chains are exponentially suppressed , giving voids a finite probability to grow forever , showing that the infinite system is empty for .consider the site at the corner of the hexagonal void in figure [ fig : pinwheel.void ] .look at all a - b chains coming out of , and pick the lowest possible chain ; in other words , look for successive a neighbors , and if a site has two possible a neighbors , pick the lower one .if that chain reaches the dashed line in figure [ fig : pinwheel.void ] , we have a dp - like chain of order the size of the void , and are done .otherwise , this a - b chain must terminate either in a c - d chain , or an e - f chain .if it terminates in an e - f chain , that chain must terminate in a c - d chain , which must then cross the original a - b chain .( note that the e - f chain can not terminate in an a - b chain , since by the `` no parallel crossing '' rule , the new a - b chain would intersect the first a - b chain , and contradict our assumption that we chose the lowest a - b chain coming out of the site . 
)so if the a - b chain coming out of does not reach the dashed line , it must either terminate in a c - d chain or cross a c - d chain(the latter case is shown in figure [ fig : pinwheel.void ] ) .pick the lowest of all the c - d chains that cross or intersect our a - b chain .this c - d chain must reach the dashed line , using the same logic as before ( if it terminated in an a - b chain , that would intersect the first a - b chain , and contradict the assumption that we chose the lowest a - b chain coming out of ; while if it terminated in an e - f chain , that e - f chain would have to turn into either an a - b or c - d chain before reaching the void , again resulting in a contradiction . )since the a - b and c - d chains that we have constructed cross , and together include both and the dashed line , at least one of the chains must be of order the size of the void .a similar construction can also be used for the 8-spiral model using a diamond - shaped void .having established where the critical point is , we follow tbf , and as before consider an infinite sequence of two types of rectangular regions , ( see fig . [fig : twofrozenstructures ] ) , one of which contains ab paths , and the other of which contains ef paths .( any pair of types of paths ab / cd , ab / ef , or cd / ef are permissible , as are any of the 6 possible pairs for the 8-spiral model . ) again , the sequence is constructed such that the ab paths and the ef paths are mutually intersecting with a frozen backbone that contains the origin , and the tbf proof simply carries through with the additional modification we have introduced for models that do not obey property b. all in all , since having more neighboring relations gives more ways for an occupied site to be stable ( without depressing the dp critical point ) , the tbf constructions of discontinuous percolation structures at the critical point simply carry through . just as for the sandwich model, the tbf proof needs to be modified to deal with the lack of property b. one needs at least two intersecting rectangular regions in which the probability for two `` transverse '' blocking directions each undergoing a directed percolation transition independently is nonzero .consideration of the `` no parallel crossing '' rule shows that for higher - dimensional generalizations of the knights model , the tbf proof can not be generalized in a straightforward manner to show provably glassy transitions .we will show that the``no parallel crossing '' rule never holds , so the critical point is always depressed below that of directed percolation . to be specific ,suppose that in three dimensions we have disjoint neighboring sets , , , and , generalizing the northeast , southwest , northwest , and southeast sets of the knights model ( see figure [ fig : knights ] ) .each set should consist of three linearly independent vectors , and the sets and should be opposite of each other , as should the sets and .then , just as in the knights model , the culling rule is that a stable site should have occupied neighbors in both the sets and , or in both the sets and .then , just as for the sandwich and spiral models , there are two directed percolation processes by which a site can be made stable ( chains and chains ) , and one might think that for an appropriate set of neighboring relations the tbf proof could be used to show a glassy transition . 
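written out, the culling rule just described (using the labels A, B, C, D that the "a-b" and "c-d" conditions below refer to, with B = -A and D = -C the opposite sets) keeps an occupied site x if and only if

\[
  x \ \text{stable} \iff
  \bigl(\exists\,u\in A,\ \exists\,v\in B :\ x+u,\ x+v \ \text{occupied}\bigr)
  \ \lor\
  \bigl(\exists\,u\in C,\ \exists\,v\in D :\ x+u,\ x+v \ \text{occupied}\bigr),
\]

and the culling is iterated to a fixed point, exactly as in the two-dimensional models.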
however , it turns out that because these models never satisfy three - dimensional generalizations of the `` no parallel crossing '' rule , the critical point is always depressed below that of three - dimensional directed percolation , and the tbf proof can not be directly generalized for these models .property a says that two chains running in similar directions can not cross without having sites in common . for certain two - dimensional models , such as the sandwich and spiral models, these property can be required by the topology and neighboring relations . however , in three dimensions , the topology always makes it easy for two directed chains to miss each other , and so no three - dimensional generalization of property a can be satisfied , regardless of the neighboring relations .furthermore , if the two chains miss each other , then the buttressing of each type of chain does not occur and the resulting transition may be continuous .this rough argument can be formalized by showing that three - dimensional generalizations of the knights model always have substructures such as the one shown in figure [ fig : sub1to2 ] .that is , it is always possible to find substructures that have no long chains connecting their ends , but which can join up two chains and stabilize their ends .then , by the arguments in section [ sec : critical.point ] , we should expect these to depress the critical point below that of three - dimensional directed percolation .specializing to three dimensions for convenience , let consist of three linearly independent 3-vectors , and consist of three linearly independent 3-vectors with . also , define and . for an occupied site to be stable, it must have either ( 1 ) occupied neighbors from both and or ( 2 ) occupied neighbors from both and .the first condition we denote the a - b condition , the second , the c - d condition .if we only enforced the a - b condition , then we would just have three dimensional directed percolation ( modulo finite clusters ) . however , for the model defined above , there is an extra way to be stable , resulting in .consider a _finite _ structure with the following properties : ( 1 ) all occupied sites are stable under the culling rules except occupied sites and , ( 2 ) has an occupied neighbor in and has an occupied neighbor in , and ( 3 ) there is no ab path connecting and , but the structure is stable because there exists a path where at least one occupied site is stable under the c - d condition .we relegate to appendix b the proof of the existence of such a finite structure .with the finite structures defined above , some occupied sites that were unstable under the a - b condition now become stable . 
slightly below ,the system is about to percolate using the a - b condition alone .the substructures act as extra , local bonds , joining up long a - b paths and pushing the system above the critical point , just as in the knight model .again , arguments similar to these have been made rigorous by aizenmann and lebowitz in the case of undirected percolation , and perhaps can be made rigorous in the case of directed ( oriented ) percolation .the discovery of a two - dimensional percolation transition , where the sudden emergence of a discontinuous backbone coincides with a crossover length diverging faster than a power law , is recent , and of great interest for glassy systems , jamming systems , and phase transitions in general .unusual transitions have been found previously in mean field systems of a slightly different nature , but not in finite dimensions . and while the finite - dimensional transition is discontinuous , it is not driven by nucleation , as with ordinary discontinuous transitions , but instead by a scaffolding of many tenuous directed percolation paths occurring simultaneously to form a bulky structure .we have shown that property a is required for the proof that , but property b is not .all that is needed to prove that the transition is discontinuous ( once is established ) is a finite probability for two transverse percolating structures to intersect , to prevent each other from being culled .therefore , one can construct other models , such as the pinwheel and 8-spiral models , that exhibit a similar transition in two dimensions .the phenomena is not as specific as might seem at first glance .however , such a buttressing mechanism in dimensions higher than two is more difficult because it is more difficult for percolating _paths _ to intersect and form a buttressing , bulky structure .models like the knights model , where is most likely less than , provide physicists , mathematicians , and computer scientists with a motivation to study new models of correlated percolation models that are not isomorphic to directed percolation , but quite possibly in the same universality class . once this avenue is pursued further , one can then easily extend the class of models for which a finite - temperature transition can be rigorously shown . to begin, it would be interesting to consider a directed percolation model in two dimensions where the number of nearest neighbors is greater than two .for example , if the number of nearest neighbors was increased to four , would the percolation transition still be in the same universality class as directed percolation ?if so , as is presumably the case , then one could construct a jamming percolation model with sets larger than two sites .these jamming percolation models would then be isomorphic to the next - neighbor directed percolation models and then one could use results from directed percolation to prove a percolation transition. there exists another class of correlated percolation models called force - balance percolation . the first model in this classwas defined in ref .other force - balance percolation models are currently being constructed and studied .the force - balance percolation models differ from the jamming percolation models in that ( 1 ) the sets , such as a , b , etc . , are overlapping and ( 2 ) the `` or '' between pairs of sets is changed to `` and '' . 
given these differences ,the methods of proof used here can not be easily applied .numerical results indicate that the transition is discontinuous with a nontrivial `` correlation length '' exponent , indicating that the transition may not the garden - variety discontinuous transition .the force - balance models are perhaps less artificial in that they mimic force - balance by requiring that an occupied site ( i.e. a particle ) have occupied neighbors to its left and right , as well as its top and bottom , in order to be stable. however , little has been rigorously proven about them . to make progress along these lines would be useful . what about the lack of finite stable clusters in models of jamming percolation ?recent numerical work on another correlated percolation model , -core percolation , with on the four - dimensional hypercubic lattice , appears to exhibit an _, discontinuous percolation transition driven by nucleation .finite clusters exist in this model , unlike in the jamming percolation models .therefore , in the jamming percolation models , there can be no surface tension between the percolating and nonpercolating phases , which is typical of an ordinary discontinuous transition .if finite clusters are allowed in a correlated percolation model , one might guess that the unusual nature of the transition would be destroyed .however , finite clusters , other than individually floating particles , do not appear in the jamming transition of granular particles .otherwise , the packing would not be static .so it is unclear whether the existence of finite clusters pertains to the jamming of granular particles .this is also the case for the glass transition .if more of an analogy between jamming and models of jamming percolation is to be made in finite dimensions ( setting aside the matter of the critical dimension of jamming ) , a model where the fraction of sites participating in the infinite cluster increases smaller than linearly just above the transition must be found .furthermore , the existence of a jamming percolation model with a universal jump in the number of occupied sites at the transition that `` naturally '' emerges as opposed to being externally imposed , is yet another necessary quest if a jamming percolation model of jamming is to be found .finally , models of correlated percolation , such as the sandwich model , tell us that there do indeed exist kinetically constrained models of glassy dynamics that exhibit unusual phase transitions in finite dimensions .therefore , this avenue of exploration for understanding possible finite - temperature glass transitions in finite - dimensions remains open . since our work helps to clarify which jamming / correlated percolation models can be rigorously shown to have an unusual finite - temperature glass transition with a particular set of properties , other models exhibiting possibly other unusual behaviours can hopefully be more easily developed in the near future .in this appendix we justify the claim made in section [ sec : the implications of property b ] that given crossings , the probability that no crossing results in a t - junction decays at least exponentially in . 
this would be immediately true if each crossing resulted in an independent probability of a t - junction .so what we show is that these crossings , by occurring in disjoint subrectangles , can be effectively treated as independent ( in establishing an upper bound on the probability ) .the relevant picture is shown for in fig .[ fig : multiplecrossings ] .there are disjoint rectangles , labeled by , , each of which has at least one northwest spanning path .we call this event , and conditionalize upon the occurrence of . each northwest path must cross the northeast spanning path , but without property b , these crossings do not necessarily result in t - junctions , where by a t - junction we mean specifically a site in common between the paths that stabilizes the northeast part of the northeast path .let be the event that at least one crossing in rectangle forms a t - junction .now for a configuration with , look at the sites in the vicinity of the crossing ( if there is more than one crossing , we choose one by an arbitrary ordering of possible crossing locations ) . restricting ourselves first to the sandwich model , fig .[ fig : missedcrossing ] shows the only way that can fail to happen . the site labeled by a star must be vacant , and if that site is made occupied , the new configuration is in .so a local change in the vicinity of the crossing can always create a t - junction .this local change induces a mapping from the set of states with to a subset of the set of states with .the mapping is one - to - one onto this subset , and for any state with , the probability of the configuration is times the probability of the configuration , where is the site occupation probability .thus more generally , if we want to consider other variations of the knights model that lack property b , we need only that for any crossing without a t - junction , some local configuration of changes in a bounded region around the crossing can create a t - junction .the induced mapping can be many - to - one , so long as the `` many '' is bounded ( which follows automatically from the restriction that the configuration changes occur in a bounded region around the crossing ) .this will more generally give for some .since the configuration changes only take place within a rectangle , they do not affect whether or not we have for , and we can write the above inequality for a state where we specify whether these other occur .for example , for we might write with the same as above .this can be intuitively thought of as treating the different probabilities of forming t - junctions as independent .repeatedly using equation [ eq : sampleg ] in different subrectangles , we find that we are thus exponentially unlikely to have no t - junctions .in this appendix we prove the existence of the finite structures discussed in the section on jamming percolation in three - dimensions . before giving the formal proof ,we sketch the qualitative idea behind the construction of these structures . in the structure for the knights model , shown in figure [ fig : sub1to2 ] , there are two parallelograms consisting of ( northeast - southwest ) chains . the two parallelogragrams are parallel to each other , and are connected and stabilized by ( northwest - southeast ) chains . in two dimensions ,such a figure can only be constructed by having some chains cross each other , and this results in a long chain connecting the two ends , unless the model violates property a. 
however , in three dimensions , the chains can always be run in a direction independent of the plane of the parallelograms , so such a substructure can always be formed , regardless of the neighboring relations .we now formalize this argument . recalling that , , , and are four three - dimensional vectors , with the first three being linearly independent and not equal to any of the first three , there must exist , , , and such that if all of the are positive , then it is easy to make the desired structure .let . then make the structure in figure [ fig : simpleline ] , where each of the vectors represents an a - b chain of length , and represents a c - d chain of length .this structure has the desired properties .if all of the are negative , we simply replace with and use the same argument .we are now left with the most complicated case , where some are positive , and some are negative . in this case, we redefine the , and rewrite equation [ eq : lineardependence ] as where , all , and all , are positive , and for any , either or is zero .then define and . and are both nonzero and linearly independent .we can now make the structure shown in figure [ fig : substructure ] the vectors and represent a - b chains of length and . in these chainsevery site is stable by the a - b condition except for and .the vectors are chains of length in which every site is stable by the c - d condition .next , since form a complete basis for three - dimensional space , and are only two vectors , there exists a vector , with all , such that is linearly independent of .( note : this is where the three - dimensional case differs from the two - dimensional case . for models such as the spiral and sandwich models ,the two vectors and already span the space . )we can then make a second copy of figure [ fig : substructure ] , displaced from the original by , as shown in figure [ fig : fullstructure ] .we can now check that in figure [ fig : fullstructure ] all sites except the start and end sites , and are stable under the culling rules , and that and have neighbors from and , respectively .it remains to check that there is no a - b chain connecting to in this structure .there is no obvious such chain , but depending on the vectors and , it is possible that there are some sites in the , , and chains by chance are separated by a vector , inadvertently forming an a - b chain between and .however , if this is the case , we can create new larger structure , simply by multiplying , all , all , and all by the same multiplicative constant .the structure thus grows larger , while the vectors stay the same , so for a sufficiently large multiplicative constant , it is impossible for the different chains in the structure to be adjacent by connections .we thus have a structure with the desired properties .l. berthier , g. biroli , j.p .bouchaud , l. cipelletti , _ et al ._ , science * 310 * , 1797 ( 2005 ) . for earlier hints of a diverging length scale , see n. menon and s. r. nagel , phys .74 * , 1230 ( 1995 ). e. r. weeks and d. a. weitz , phys .. lett . * 89 * , 095704 ( 2002 ) .e. r. weeks and d. a. weitz , chem .phys . * 284 * , 361 ( 2002 ) . c. toninelli , g. biroli , and d. s. fisher , phys .lett . * 92 * , 185504 ( 2004 ) . c. toninelli , g. biroli and d. s. fisher , j. stat .phys . * 120 * , 167 ( 2005 ) .g. h. fredrickson and c. h. andersen , phys .lett . * 53 * , 1244 ( 1984 ) .see ref . , and references therein . c. toninelli and g. biroli , `` jamming percolation and glassy dynamics , '' j. stat. phys . 
* 126 * , 731 ( 2007 ) . c. toninelli , g. biroli , and d. s. fisher , phys .lett . * 98 * , 129602 ( 2007 ) .m. jeng , j. m. schwarz , phys . rev. lett . * 98 * , 129601 ( 2007 ) .the kosterlitz - thouless transition also exhibits unusual characteristics .there is a discontinity in the spin wave stiffness , or the free energy cost to applying a gradient , accompanied with an exponentially diverging correlation length .however , there is no magnetization , or local order parameter , that goes to zero at the transition .the jamming percolation models have a simple order parameter , the fraction of sites in the infinite cluster .j. chalupa , p. l. leath , and g. r. reich , j. phys .c * 12 * , l31 ( 1979 ) .b. pittel , j. spencer , and n. wormald , j. comb .series b * 67 * , 111 ( 1996 ) .p. m. kogut , j. phys .c : solid state phys .* 14 * , 3187 ( 1981 ) .n. s. branco , s. l. a. de queiroz , and r. r. dos santos , j. phys .c : solid state phys . * 19 * , 1909 ( 1986 ) .j. adler and d. stauffer , j. phys .a. : math . gen .* 23 * , l1119 ( 1990 ) .a. c. d. van enter , j. stat. phys . * 48 * , 943 ( 1987 ) .r. h. schonmann , ann .prob . * 20 * , 174 ( 1992 ) .m. aizenman and j. l. lebowitz , j. phys .a. * 21 * , 3801 ( 1988 ) .j. adler , physica a * 171 * , 453 ( 1991 ) .s. leonard et ., `` non - equilibrium dynamics of spin facilitated glass models , '' cond - mat/0703164 , to appear in j. stat . mech . : theory and expt . c. s. ohern , s. a. langer , a. j. liu , and s. r. nagel , phys88 * , 075507 ( 2002 ) .h. hinrichsen , adv . phys .* 49 * , 815 ( 2000 ) .this observation was kindly pointed by c. toninelli and g. biroli in arxiv:0709.0583 after an initial draft of our paper was made available making the opposite claim .
we investigate kinetically constrained models of glassy transitions , and determine which model characteristics are crucial in allowing a rigorous proof that such models have discontinuous transitions with faster than power law diverging length and time scales . the models we investigate have constraints similar to that of the knights model , introduced by toninelli , biroli , and fisher ( tbf ) , but differing neighbor relations . we find that such knights - like models , otherwise known as models of jamming percolation , need a `` no parallel crossing '' rule for the tbf proof of a glassy transition to be valid . furthermore , most knight - like models fail a `` no perpendicular crossing '' requirement , and thus need modification to be made rigorous . we also show how the `` no parallel crossing '' requirement can be used to evaluate the provable glassiness of other correlated percolation models , by looking at models with more stable directions than the knights model . finally , we show that the tbf proof does not generalize in any straightforward fashion for three - dimensional versions of the knights - like models .
the circadian clock is one of the most remarkable cyclic behaviors ubiquitous to the known forms of life , ranging from the unicellular to the multicellular level including prokaryotes . because of its importance , the underlying chemical reactions have been the subject of academic interest for a long time and have recently been elucidated experimentally . circadian clocks have three important features : + 1 . they persist in the absence of external cues with an approximately 24-h period , which is rather long compared with most chemical reactions . 2 . they can be reset by exposure to external stimuli such as changes in illumination ( dark / light ) or temperature . 3 . the period of the circadian clock is robustly maintained across a range of physiological temperatures ( temperature compensation ) . the emergence of cyclic behavior with a capacity for entrainment is theoretically understood as being the result of the existence of a limit - cycle attractor in a class of dynamical systems described by chemical rate equations . however , the phenomenon of temperature compensation is not yet fully understood . generally , the rate of chemical reactions depends strongly on the temperature . most biochemical reactions , in particular , have an energy barrier that must be overcome with the aid of enzymes , and thus the rate can be expected to follow the arrhenius form . thus , the period of chemical or biochemical oscillators can be expected to depend strongly on the temperature . the ubiquitous temperature - compensation ability of biological circadian clocks therefore suggests that there may be some common mechanism(s ) behind it . overall , there are two possibilities : one is that compensation exists at each elementary step , and the other is that compensation occurs at the system level for the total set of enzymatic reactions . recently , it was found that the rates of some elementary reaction steps in circadian clocks depend only slightly on temperature , suggesting that the activation energy barrier of some reactions is rather low . although some element - level compensation is important , it is difficult to imagine that every reaction step is fully temperature - compensated at the single - molecule level . indeed , if that were the case , temperature changes would not influence the oscillation of chemical concentrations at all . however , even though the period of oscillation is generally insensitive to temperature changes , it is known that the amplitude does depend on temperature . furthermore , circadian clocks are known to entrain to external temperature cycles , and they can be reset by temperature cues . thus , although temperature changes can influence the oscillation , the period is still robust . hence , it is necessary to search for a general logic that underlies the temperature compensation phenomenon at the system level . indeed , several models have been proposed . in most of these studies , several processes that are responsible for the period are considered , and their temperature dependences cancel each other out . with such a balance mechanism , temperature compensation is achieved . these mechanisms , however , need a fine - tuned set of parameters , or a rather ad - hoc combination of processes , for the cancellation .
considering the ubiquity of temperature compensation , a generic and robust mechanism that does not require tuning parameters is desirable . here we propose such a mechanism that has general validity for any biochemical oscillator consisting of several reaction processes catalyzed by enzymes . the mechanism can be briefly outlined as follows : biochemical clocks comprise multiple processes , such as phosphorylation and dephosphorylation , with each generally having a different activation energy barrier . the rate of such reactions , then , is proportional to the product of an arrhenius factor and the concentration of `` free '' enzyme available . if the enzyme concentration were insensitive to temperature , the rate would just agree with the simple arrhenius form , thus implying high temperature dependence . now , consider a situation where several substrates share the same enzyme . at lower temperature , the reactions with higher activation energy will be slower and the substrates involved in these reactions will accumulate . then , because they share the same enzyme , competition for the enzyme will also increase . accordingly , the concentration of available ( free ) enzyme decreases , and when it reaches a sufficiently low level , these enzymatic reactions will be highly suppressed . the system spends most of its time under such conditions , which limits the rate . thus , the arrhenius - type temperature dependence of the rate constant is compensated for by the temperature dependence of the concentration of available enzyme . although this is a rather basic description , its validity may suggest that temperature compensation emerges in any general chemical oscillation consisting of steps catalyzed by a common enzyme . here , we first study the validity of this enzyme - limited temperature compensation mechanism for the specific case of the kai protein clock model introduced by van zon et al . to explain the circadian clock of kaiabc proteins in cyanobacteria , which was discovered by kondo and his colleagues . indeed , in this system , the period of circadian oscillation of kai proteins is temperature compensated . further , some of the elementary components in this system , specifically kaic 's atpase activity and kaic 's phosphatase activity , were suggested to depend only slightly on temperature , but the origin of system - level temperature compensation has not yet been explained . here , we show numerically that system - level temperature compensation emerges from the differences in activation energy between phosphorylation and dephosphorylation and from competition for kaia as the enzyme that catalyzes kaic phosphorylation . furthermore , we elucidate the conditions necessary for this temperature compensation to work .
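in symbols, the outline above amounts to the following schematic relation (generic notation, not the paper's elided expressions). an enzymatic step with activation energy E and free enzyme concentration [E]_free proceeds, at inverse temperature β, at a rate

\[
  v(\beta)\;\approx\;k_{0}\,e^{-\beta E}\,[E]_{\mathrm{free}},
  \qquad
  [E]_{\mathrm{free}}\;=\;\frac{[E]_{\mathrm{tot}}}{1+\sum_{i}[S_{i}]/K_{i}},
\]

where the second expression is the standard rapid-equilibrium form for an enzyme shared by substrates S_i with dissociation constants K_i, used here as an assumed stand-in for the eliminated binding step. temperature compensation of the period then requires that, over the rate-limiting stretch of the cycle, the self-adjusted [E]_free acquires a β-dependence that offsets the arrhenius factor e^{-βE}; the quantitative version of this cancellation for the kai model is worked out later in the paper.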
then , based on this analysis , we introduce a simpler model consisting of few catalytic reactions , to illustrate the above mechanism .possible relations between our results and reported experimental findings for circadian clocks are also discussed .the kai - protein - based circadian clock , discovered by kondo s group consists of kaia , kaib , kaic proteins with atp as an energy source .kaic has a hexameric structure with six monomers , each with two phosphorylation sites .it has both self - kinase activity and self - phosphatase activity , but the self - phosphatase activity is usually stronger , and so it is spontaneously dephosphorylated .kaia , in a dimer form , attaches to kaic and thus increases its kinase activity , leading to phosphorylation of kaic , while kaib inhibits the activity of kaia .this phosphorylation / dephosphorylation process of the kaiabc proteins constitutes a circadian rhythm . here, we simplify the process , to focus on the temperature compensation of the period .we reduce the two phosphorylation residues to just one , because abundance of singly phosphorylated kaic is strongly correlated with that of doubly phosphorylated kaic so that the phosphorylation of the two residues are equilibrated on a rather short time scale .next , we do not include kaib explicitly in our model , because changes in the concentration of kaib affect the period only slightly .note that although kaib is necessary to generate circadian oscillations , the effect can be accounted for by introducing a parameter value for kaia activity . here , we adopt a slightly simplified version of the model introduced by van zon , et al .(see also ) first , each kaic monomer has two states active and inactive .second , allosterically regulated kaic hexamers in the active state can be phosphorylated , whereas those in the inactive state can be dephosphorylated .a phosphorylated kaic monomer energetically prefers the inactive state , whereas a dephosphorylated kaic has the opposite tendency . herethe flip - flop transition between active and inactive states occurs only from the fully phosphorylated or fully dephosphorylated states , as assumed in the concerted mwc model .no intermediate states are assumed .hence , the reaction process exhibits a cyclic structure as in fig .1 . , while in the inactive state , dephosphorylation progresses without any enzymes at the rate .affinity between active kaic and kaia reduces as the number of phosphorylated sites of kaic increases successively ., title="fig : " ] next , kaia facilitates phosphorylation of active kaic with an affinity that depends on the number of phosphorylated residues of each kaic hexamer . kaics with a smaller phosphorylation number have stronger affinity to kaia and are phosphorylated faster .this assumption is necessary for generating stable oscillations .then , the reactions are given by { } } } ac_i \xrightarrow{k_p } c_{i+1 } + a\ ] ] here and denote the concentrations of active kaic and inactive kaic , respectively , with phosphorylated sites ; denotes the concentration of free kaia dimer . to study the temperature compensation of the period , we must also account for the temperature dependence of the reaction rate . here , the rates of phosphorylation and dephosphorylation are governed by the arrhenius equation. 
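the rate constants and the adiabatic elimination of the kaia-kaic complexes are specified in the next two paragraphs; as a preview, a minimal numerical sketch of the resulting mass-action equations could look as follows. all parameter values here are illustrative placeholders rather than the values used in the paper (whether a given parameter set actually oscillates has to be checked); the sketch is meant to show the structure of the equations, in particular the single line in which free kaia is fixed by competitive binding.

import numpy as np
from scipy.integrate import solve_ivp

# illustrative constants only -- not the parameter values used in the paper
N = 6                     # phosphorylation sites per (simplified) kaic hexamer
A_TOT = 0.3               # total kaia, scarce relative to kaic
E_P, E_DP = 2.0, 1.0      # activation energies, with E_P > E_DP
K_P0, K_DP0 = 1.0, 1.0    # arrhenius prefactors
K_FLIP = 10.0             # active <-> inactive flips, fast and taken T-independent
K_DISS = 0.1 * 2.0 ** np.arange(N)   # affinity for kaia weakens with phosphorylation

def rhs(t, y, beta):
    """mass-action equations for active kaic C[0..N] and inactive kaic D[0..N];
    the kaia-kaic complexes are eliminated adiabatically (rapid equilibrium)."""
    C, D = y[:N + 1], y[N + 1:]
    k_p = K_P0 * np.exp(-E_P * beta)          # phosphorylation, high barrier
    k_dp = K_DP0 * np.exp(-E_DP * beta)       # dephosphorylation, low barrier
    a_free = A_TOT / (1.0 + np.sum(C[:N] / K_DISS))   # competition for kaia
    dC, dD = np.zeros(N + 1), np.zeros(N + 1)
    phos = k_p * a_free * C[:N] / K_DISS      # flux C_i -> C_{i+1}
    dC[:N] -= phos
    dC[1:] += phos
    deph = k_dp * D[1:]                       # flux D_i -> D_{i-1}, no enzyme
    dD[1:] -= deph
    dD[:N] += deph
    dC[N] -= K_FLIP * C[N]; dD[N] += K_FLIP * C[N]   # fully phosphorylated flips
    dD[0] -= K_FLIP * D[0]; dC[0] += K_FLIP * D[0]   # fully dephosphorylated flips
    return np.concatenate([dC, dD])

y0 = np.zeros(2 * (N + 1))
y0[0] = 1.0                                   # start fully dephosphorylated, active
sol = solve_ivp(rhs, (0.0, 500.0), y0, args=(1.2,), max_step=0.1)
phos_level = (np.arange(N + 1) @ (sol.y[:N + 1] + sol.y[N + 1:])) / N

integrating rhs at several values of beta and reading off the period from phos_level would probe, qualitatively at best, the kind of temperature dependence discussed below; with these placeholder parameters no quantitative agreement is implied.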
then the rate constants and are as follows : + with inverse temperature ( ) by taking the unit of boltzmann constant as unity .we could include the temperature dependence of the rates and in a similar manner , but as the reaction between active and inactive states progresses faster and does not influence the period , this dependence is neglected .van zon et al . demonstrated that temperature compensation occurs when the speed of phosphorylation and dephosphorylation is completely temperature - compensated at a level of elementary reaction process .however , it can not explain why the amplitude of oscillation depends on temperature , or the entrainment to temperature cycle occurs .the compensation mechanism at a system - level is wanted . here ,the formation and dissociation of kaiac complexes occur at much faster rates than other reactions and so are eliminated adiabatically .thus , the change in the concentration is given by + where are the dissociation constants . considering the increase in affinity for kaia with the number of phosphorylated sites , we set . we adopted the deterministic rate equation given by the mass - action kinetics , and it is simulated by using the fourth - order runge - kutta method .as mentioned , a kaic allosteric model was analyzed .specifically , we study the case in which the activation energy for phosphorylation , , is larger than that of dephosphorylation , .see the supplementary information for other cases . ) .( a ) red line indicates the time series of the mean phosphorylation level defined by , whereas the green line indicates that of the fraction of free kaia , .a decrease in temperature causes a decrease in the amplitude of the phosphorylation level .( b ) time course of the abundance of each form of kaic , . at low temperature , the basal amount of ( magenta line ) is remarkably high ., title="fig : " ] for a certain range of parameter values , we found periodic oscillation in the kaic phosphorylation level and free kaia abundance , as shown in the time series in fig .the oscillation is described by a limit - cycle attractor , as represented in the orbit in a two - dimensional plane of kaic phosphorylation level and free kaia abundance . as the temperature increases , the amplitude of the limit - cycle increases ( fig .lowering the temperature causes a decrease in the maximum amount of free kaia and increase in the minimum level of kaic phosphorylation . further lowering it , however , results in the limit - cycle changing into a stable fixed point via hopf bifurcation .the timeseries of and the temperature dependence are shown in fig .2b for = 1.0 , 1.5 , 2.0 . with a decrease in temperature, we see an increase in , that is , the abundance of kaic with five phosphorylated residues ( 5p - kaic ) , which leads to an increase in the minimum kaic phosophorylation level , as shown in fig . 2b . notethat there is a remarkable change in the time course of ( the abundance of 5p - kaic ) at ( fig .4a ) . below this inverse temperature ,the minimum is close to zero . at higher ( i.e. , lower temperature ) , however , never comes close to zero , and the minimum value increases with lowering of temperature . .( a ) maximum and minimum values of mean phosphorylation level over a cycle .the maximum value is nearly constant against temperature changes , whereas the minimum value increases with above , i.e. , at temperatures below the characteristic temperature .the oscillation disappears via hopf bifurcation at .( b ) period of the oscillation . 
at high temperature ( low ) ,the period changes exponentially with , whereas below , the period is nearly constant against changes in temperature ., title="fig : " ] ( a ) , ( b ) , ( c ) plotted against the inverse temperature .( a)(b ) maximum ( red ) , minimum ( green ) , and average ( blue ) over a cycle .( a ) note that the maximum value of is nearly constant , wheras its minimum and average increase with beyond ( i.e. , at lower temperature ) .( b ) the average value of is fitted well by the value of the unstable fixed point of the equation ( magenta line ) , which is proportional to .( c ) the concentration of free kaia in the plateau region of is plotted , as estimated from the time where reaches a peak .this free kaia concentration closely follows ( magenta line ) , for , i.e. , below the characteristic temperature ., title="fig : " ] the transition at is also reflected in the temperature dependence of the period .we plotted the period of oscillation as a function of ( inverse ) temperature , together with the maximum and minimum kaic phosphorylation levels ( see fig .3 ) . above ,the temperature dependence of the period follows , as can be naturally expected from a reaction process with a jump beyond the energy barrier .however , at lower temperature , the period is no longer prolonged exponentially and is nearly constant .thus , the temperature compensation of the circadian period appears at lower temperature .there is also a clear difference in the amplitude of oscillation below and above . at higher temperature ( without temperature compensation ) , the amplitude of oscillation is almost constant over a large interval of temperatures .however , at lower temperature ( in the temperature - compensated phase ) , the amplitude decreases with lowering of temperature ; eventually , the oscillation disappears via hopf bifurcation .this decrease in amplitude is due to the increase in the minimum value of the kaic phosphorylation level , caused by the increase in the minimum abundances of .the temperature dependences of the abundance of each kaic and free kaia also show distinct behaviors below and above ( see fig . 4 , supplementary fig .the average and minimum abundances of increase remarkably with a decrease in temperature below , whereas the maximum hardly changes with temperature ( fig .4a ) . however , the amplitude of the oscillation of has a peak at around , and the minimum value increases as the temperature decreases .the minimum as well as the plateau value of free kaia increase when the temperature is decreased .the minimum inactive kaic abundance ( independent of the residue number ) and showed behaviors similar to those of , whereas the average followed dependence throughout the temperature range . considering that the total amount of all kaics is conserved and the difference in activation energy , this dependence itself is rather natural .thus , the transition in the oscillation and temperature compensation behavior at low temperature is the salient feature of the present system .next , we analyzed the conditions for and , the activation energies for phosphorylation and dephosphorylation , respectively , to elucidate the temperature compensation .supplementary fig .3 shows a plot for the region where temperature compensation appears in the parameter space of and . from supplementary fig .3 , we observe that the temperature compensation appears in the regime . 
the overall periodic behavior is determined mainly by , rather than the individual magnitudes .when , there still exists a transition to a phase with weaker temperature dependence of period on lowering the temperature , but this effect is not sufficient to produce the temperature compensation ( see also fig .+ ( see supplementary informations for the case with . ) ) and the abundance of total kaia ( ) .red : temperature - compensated oscillation with ( : the period ) below the characteristic temperature .green : oscillation showing transition at the characteristic temperature .blue : oscillation without transition , with a simple increase with temperature of the arrhenius form .black : no oscillations at all .as increases from 0 to 1 , we successively see temperature compensation , transition , and a disappearance of oscillation .( for the case with , see supplementary information ) . as increases, the width of the temperature compensation region decreases .( b ) effect of kaia increase on the period .an increase in kaia led to narrowing of the range of the periodic solution , and also the range of temperatures at which temperature compensation occurs ., title="fig : " ] as already mentioned by von zon et al ., oscillation of kaic abundance requires that the amount of kaia is less than that of kaic .an increase in kaia abundance leads to decrease in the period , finally leading to no oscillations .the range of temperatures where the oscillations exist narrows as kaia abundance increases , and the oscillation disappears at higher temperatures . here, the transition temperature shifts to lower temperatures ( see fig .5b ) . furthermore, the temperature compensation at is lost kaia abundance is increased .although the temperature dependence of the period is weaker at , the dependence still exists with smaller than .we plotted the range where temperature compensation occurs at in the two - dimensional plane with kaia abundances and .we see that low level of kaia abundance is necessary for temperature compensation .since kaic is a hexamer , we adopted six phosphorylation sites in our model .however , to understand the biological significance of this number of sites , we examine models with a reduced number of phosphorylation sites ( supplementary fig .we find that reducing the number of phosphorylation sites from hexamer to pentamer , and then to tetramer , narrows the temperature range where oscillations exist and temperature compensation occurs .for the tetramer , temperature compensation is not observed even at . )are entrained by the temperature cycle between and 1.7 within about 10 cycles ., title="fig : " ] it has been experimentally it is suggested that kaic s phosphorylation cycle is entrained to external temperature cycle . to examine such entrainment , we cycle the temperature between and ( i.e. , within the region in the temperature compensation ) periodically in time with a period close to that of the kai system ( 27 h ) . within about 10 cycles ,the phase of oscillation of the kaic system is entrained with that of temperature , independently of the initial phase of oscillation .thus , entrainment to temperature cycle is achieved ( see fig .here we discuss how temperature compensation is achieved .as presented in the results section , two stages are necessary : transition in the temperature dependence of the period , and complete temperature compensation of the period at lower temperature . 
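the temperature-cycle entrainment test described above can be probed with the same sketch by making the inverse temperature a function of time. this snippet reuses rhs and y0 from the sketch given after the model definition; the two beta values are placeholders (the text cycles between 1.7 and a second, elided value inside the compensated region), and the 27 h period is taken from the text.

from scipy.integrate import solve_ivp   # rhs and y0 as defined in the earlier sketch

def beta_cycle(t, period=27.0, beta_lo=1.5, beta_hi=1.7):
    # square-wave inverse temperature; the endpoints are illustrative placeholders
    return beta_lo if (t % period) < period / 2.0 else beta_hi

sol_cycled = solve_ivp(lambda t, y: rhs(t, y, beta_cycle(t)),
                       (0.0, 30 * 27.0), y0, max_step=0.1)
# in the paper's simulation the phase locks to the imposed cycle within about
# ten periods; with these placeholder parameters this is not guaranteed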
as seen in the phase diagram ( fig .5a ) , the former requires that is sufficiently larger than , which , as will be shown , means that the phosphorylation process is rate limited . for the latter , the abundance of kaia should be sufficiently small so that there is limited free kaia that can be used for phosphorylation ( i.e. , competition for free kaia ) .as will be shown below , the abundance of free kaia decreases as the temperature increases , which compensates for the increase in the rate constant of the reaction .phase transition when there is a difference between the energy barriers for phosphorylation and dephosphorylation , the temperature dependence of the rate of each processes is different . roughly speaking , the time scale for the phosphorylation process changes in proportion to , whereas the dephosphorylation has , where is the concentration of free kaia .thus , there exists a characteristic temperature at which the two rates are comparable ; the rate - limited reaction switches at this temperature , where the phase transition to temperature dependence occurs .thus , where , the concentration of free kaia , is estimated from the steady - state solution of our model ( see eq.([a ] ) ) to afford if and if .thus , a sufficient difference in activation energy is necessary for the phase transition - like behavior because the critical temperature will diverge as .if the difference is small , the critical temperature goes beyond the temperature for the onset of oscillation and the transition never occurs .moreover , if is too large or too small , the critical temperature is lower or higher than the range where the oscillation exists , and thus for both the cases , the phase transition disappears .these estimates agree with the phase diagram in fig . 5a .temperature compensation by self - adjustment of the kaia concentration when the temperature is lower than the transition temperature , the phosphorylation process takes more time , and is accumulated before dephosphorylation from progresses , as already discussed ( see fig.2b ) .the increase in the abundance of active kaic leads to competition for kaia , and thus a decrease in free kaia .if the total kaia abundance is limited , the system reaches a stage where phosphorylation almost stops .this leads to the plateau in the time course of , as observed in fig .this drastic slow down of the phosphorylation process occurs when is decreased to the level at .thus , during the plateau in , this approximate estimate gives , , so that the temperature dependence of the phosphorylation rate is compensated for by the decrease in .this plateau region is rate - limited in the circadian cycle , making the whole period independent of temperature . in more detail, this compensation is also estimated as follows .the abundances of the inactive forms of kaic decrease with an increase in . considering the differences in speed between phosphorylation and dephosphorylation , the total inactive kaic abundance ( at the fixed point )is estimated as + which is consistent with fig .the flow from inactive kaic is thus estimated by this flow starts the phosphorylation processes from , but is slowed down at some residue number . in the present model ,this slow down starts at due to the paucity of free kaia . 
following the above estimate of the flow , the maximum is estimated to be proportional to . now , from eq . ( [ a ] ) and , at this time the abundance of free kaia can be estimated approximately as . when is small enough , the minimum free kaia abundance is smaller than , and thus . therefore , when is small enough and is sufficiently greater than , the time scale of the phosphorylation process is , and the period of the cycle is temperature - compensated . indeed , as shown in supplementary fig . 2 , the maximum ( and ) values show dependence , whereas free kaia shows approximately dependence when the phosphorylation is slowed down . in essence , the temperature compensation mechanism requires two properties : a difference in activation energy between the phosphorylation and dephosphorylation processes , and a limited abundance of the enzyme kaia . the former is essential to the phase transition , and the latter to the compensation of the arrhenius - type temperature dependence of the individual reactions . as long as these conditions are met , the period is temperature - compensated at low temperature , without the need for fine tuning of the parameters of the system . these two properties generally appear if there are two types of processes with different activation energies , with one type catalyzed by a common enzyme and the other proceeding without catalysis . if the enzyme abundance is limited , competition for the enzyme will lead to temperature compensation in cyclic reaction systems . in particular , if the main component of the chemical reactions has an allosteric structure like kaic , the competition for enzymes will occur naturally .
[ figure 7 caption : ( a ) the reactions are catalyzed by the same enzyme , but the other reactions are not ; the affinity between and weakens as increases . ( b ) maximum and minimum values of over a cycle and ( c ) the period , plotted against the inverse temperature . the oscillation is temperature - compensated over a specific range of temperatures , although the transition is not as sharp as in the kai model . ]
as an example , we consider the cyclic process shown in fig . 7 , where a cyclic change in five substrates occurs . three processes have a high activation energy barrier and are catalyzed by the common enzyme . in this case , the period of the oscillation in the concentration of each substrate is temperature - compensated at low temperature . ( if the number of reactions with the common enzyme is decreased to two , compensation still appears , but it is weaker , as shown in supplementary fig . 5 . ) this example demonstrates the generality of the present mechanism , and opens the possibility of applications to temperature compensation in other biochemical oscillations as well . the regulation of a reaction process by autonomous changes in enzyme concentration , as adopted here , was previously pointed out by awazu and kaneko , who reported that relaxation to equilibrium slows down when the concentrations of substrate and enzymes are negatively correlated : excess substrate hinders the enzymatic reaction , leading to a plateau in the relaxation dynamics .
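the second ingredient , the limited shared enzyme , can be illustrated with a generic rapid - equilibrium estimate of the free - enzyme abundance under competition , of the kind invoked through eq . ( [ a ] ) above . the matlab sketch below is not the model s actual expression ; it simply solves the standard conservation / binding quadratic for a single enzyme a and a competing binder c , with placeholder numbers , to show how strongly a small total enzyme pool is depleted once the competing pool grows .

```matlab
% rapid-equilibrium estimate of free enzyme under competition (a generic
% sketch, not the model's actual expression).  for A + C <-> AC with
% dissociation constant Kd, enzyme conservation gives a quadratic whose
% positive root is the free-enzyme abundance A_free.
AT = 0.1;                         % total enzyme (kaia-like), deliberately small
Kd = 0.05;                        % dissociation constant of the A.C complex (placeholder)
CT = [0.1 0.5 1 2 5];             % growing pool of competing substrate
b  = CT - AT + Kd;
Afree = (-b + sqrt(b.^2 + 4*Kd*AT)) / 2;   % root of Afree^2 + b*Afree - Kd*AT = 0
disp([CT(:) Afree(:)]);           % free enzyme collapses as the competing pool grows
```

in the model it is the active kaic that plays the role of the competing pool , and its abundance changes with temperature ; it is this coupling between the arrhenius rate constant and the shrinking free - kaia abundance that produces the cancellation described above .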
in the present model , the total concentration of kaia as an enzyme is a conserved quantity , but the fraction of free kaia available is reduced when there is abundant active kaic , and so the speed of phosphorylation slows down dramatically . such regulation of reaction speed by limitation of catalysts should be important , not only for the temperature compensation of circadian clocks in general , but also for other biological processes . now we briefly discuss the relevance of the present results to experiments on the kai protein circadian system . in our model , it is assumed that kaic s affinity to kaia depends on the phosphorylation level . this assumption is necessary not only for the emergence of temperature compensation but also for the existence of the oscillation itself . recent reports that kaic phosphorylation induces conformational changes may account for such changes in the binding affinity . in the actual circadian clock comprising kai proteins , kaib is also involved besides kaia and kaic . however , as mentioned , an increase in kaib abundance has a minor influence on the clock . an earlier theoretical study suggested that kaib binds strongly to kaiac complexes and restricts the concentration of free kaia . thus , the inclusion of kaib is expected not to alter the present temperature compensation mechanism , but to facilitate it by strengthening the limitation of enzyme availability . note that the previously reported element - level temperature compensation of kaic s atpase activity and auto - dephosphorylation activity is also relevant to the system - level compensation in our model : the former is relevant for achieving the fast equilibration between kaia and the kaia complex used for the adiabatic elimination in eq . ( [ a ] ) , and the latter for providing . it is known that the amplitude of the kaic oscillation decreases as the temperature is lowered , which agrees well with our results ( fig . 2a ) . indeed , as described already , this decrease in amplitude is tightly coupled with the temperature compensation mechanism . it is also interesting to note that in many circadian clocks the period does not depend on the temperature , although the amplitude decreases with temperature . if every elementary step of the circadian clock were temperature compensated at the single - molecule level , this temperature dependence of the amplitude would not be possible . furthermore , the entrainment of the kaic oscillation to imposed temperature cycles , as observed in a recent experiment , would also not be possible . in our study , the compensation is based on the different temperature dependences of the phosphorylation and dephosphorylation processes ; the entrainment is a result of this difference . our mechanism for temperature compensation depends on the paucity of kaia : increasing the kaia concentration leads to loss of the compensation . we expect that this prediction will be directly confirmed in a future experiment . the authors would like to thank a. awazu , h. iwasaki , y. murayama , h. r. ueda , and t. yomo for useful discussions .
[ figure caption : ( , ) . ( a ) maximum and minimum mean phosphorylation levels over a cycle . the maximum value is nearly constant against temperature changes ; the minimum value increases with for , and is nearly constant below . the oscillation disappears at . ( b ) period of oscillation . the period changes with the arrhenius form , i.e. , in an exponential manner over the whole range of temperature , while the slope changes below and above . below ( beyond ) , the temperature dependence of the period is about , and is with larger than at ( ) . ]
[ figure caption : plotted against and , while is fixed at . red : temperature - compensated oscillation satisfying ( : the period ) below the characteristic temperature . gray : oscillation without temperature compensation . when , the oscillation is temperature - compensated at temperatures below the characteristic temperature . ]
[ figure caption : the original model is a hexamer , while a pentamer and tetramer are the reduced models containing and , respectively . the same parameter values are used as in the original model . models with fewer phosphorylation sites ( ) can not generate any oscillation . in the reduced models , the temperature ranges where oscillations occur and where temperature - compensated oscillations occur are both narrower , and the temperature compensation in the tetramer model is not perfect . ]
[ figure caption : ( a ) the reactions are catalyzed by the same enzyme , and the other reactions are not ; the affinity between and is higher than that between and . ( b ) the period , plotted as a function of the inverse temperature . the range with temperature compensation is narrower , and the compensation is not perfect . ( inset : maximum and minimum values of over a cycle . ) ]
circadian clocks , ubiquitous in life forms ranging from bacteria to multi - cellular organisms , often exhibit intrinsic temperature compensation : the period of circadian oscillators is maintained constant over a range of physiological temperatures , despite the expected arrhenius form for the reaction coefficient . observations have shown that the amplitude of the oscillation depends on the temperature but the period does not ; this suggests that although not every reaction step is temperature independent , the total system comprising several reactions still exhibits compensation . we present a general mechanism for such temperature compensation . consider a system with multiple activation energy barriers for its reactions , with a common enzyme shared across several reaction steps with a higher activation energy . these reaction steps rate - limit the cycle if the temperature is not high . if the total abundance of the enzyme is limited , the amount of free enzyme available to catalyze a specific reaction decreases as more substrates bind to the common enzyme . we show that this change in free enzyme abundance compensates for the arrhenius - type temperature dependence of the reaction coefficient . taking the example of circadian clocks based on the cyanobacterial proteins kaiabc , with several phosphorylation sites on kaic , we show that this temperature compensation mechanism is indeed valid . specifically , if the activation energy for phosphorylation is larger than that for dephosphorylation , competition for kaia shared among the phosphorylation reactions leads to temperature compensation . moreover , taking a simpler model , we demonstrate the generality of the proposed compensation mechanism , suggesting relevance not only to circadian clocks but to other ( bio)chemical oscillators as well .
feature finding algorithms are designed to automatically localize the intensity centroid of a specific feature in an image . in biological systems , the features of interest are often fluorescently labelled proteins and organelles with dimensions below the diffraction limit ; the intensity distributions to be localized are therefore diffraction - limited spots . feature finding algorithms consist of two main steps to automatically detect and then localize all features of interest in a given image . the first step is an initial detection of local intensity maxima over the entire image , identifying candidate feature locations to pixel resolution . in the second step , the full intensity distribution of the candidate features , which spans many pixels , is used to localize the centroid of each intensity distribution to sub - pixel resolution . to extract the real - space dynamics of a feature from a time series of images , tracking algorithms establish the correspondence between features in successive image frames , forming a set of single - particle trajectories . in general , feature correspondence is determined by minimizing the displacement of localized features between frames . the ability of tracking algorithms to reconstruct feature trajectories with high fidelity is critically dependent on the ability of the localization algorithm to determine the particle positions accurately . among articles that assess the limits of accuracy and precision of particle localization algorithms , cheesum _ et al . _ quantitatively compare algorithms for localizing fluorescently labelled objects in two dimensions , from among centre - of - mass algorithms , direct gaussian fitting ( which exploits the shape of the intensity distribution for a diffraction - limited spot ) , image cross - correlation algorithms , and sum - absolute - difference methods . typically , centre - of - mass and direct gaussian fitting methods obtain sub - pixel resolution directly . in contrast , both the cross - correlation and sum - absolute - difference methods determine the object s position to only pixel resolution ; to obtain sub - pixel resolution , the cross - correlation or sum - absolute - difference matrices must be interpolated . they measured accuracy as the mean difference between the localized position and the actual position of a simulated object over a large number of trials , and determined , unsurprisingly , that direct gaussian fitting is the superior algorithm for localizing point sources . a separate study examined the factors that limit the precision of centroid localization in two limiting cases : when a large number of signal photons is collected , so that the data are photon shot - noise limited , and when is small , so that the image is background - noise limited . the theoretical limiting precision for one spatial dimension was obtained by treating each of these limits independently and interpolating linearly between them .
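as a concrete ( and deliberately minimal ) illustration of the two localization strategies that obtain sub - pixel resolution directly , the matlab sketch below builds a noisy synthetic 2d spot and localizes it both by centre of mass and by least - squares gaussian fitting . the spot parameters , noise level , and fit settings are placeholders and are not taken from any study cited here .

```matlab
% minimal illustration: an ideal 2-D gaussian spot plus additive noise,
% localized by centre of mass and by a least-squares gaussian fit.
sz    = 15;  s = 1.8;                     % image size (pixels), psf width
x0    = 8.3; y0 = 7.6;                    % true sub-pixel centre
[x,y] = meshgrid(1:sz, 1:sz);
img   = 100*exp(-((x-x0).^2 + (y-y0).^2)/(2*s^2)) + 5*randn(sz);

% (1) centre-of-mass estimate (sensitive to background)
w     = max(img - min(img(:)), 0);        % crude background subtraction
xc_cm = sum(sum(w.*x)) / sum(w(:));
yc_cm = sum(sum(w.*y)) / sum(w(:));

% (2) least-squares fit of a 2-D gaussian, p = [amp x0 y0 sigma offset]
model = @(p) p(1)*exp(-((x-p(2)).^2 + (y-p(3)).^2)/(2*p(4)^2)) + p(5);
cost  = @(p) sum(sum((model(p) - img).^2));
p     = fminsearch(cost, [max(img(:)), xc_cm, yc_cm, 2, min(img(:))]);

fprintf('true (%.2f,%.2f)  CoM (%.2f,%.2f)  gaussian (%.2f,%.2f)\n', ...
        x0, y0, xc_cm, yc_cm, p(2), p(3));
```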
in a more general treatment ,ober _ et al ._ have used a fisher information matrix to determine the limit of localization precision for single - molecule microscopy .the inverse of the fisher information matrix is independent of the particular localization procedure .however , importantly , ober _ et al ._ assumed the localization procedure would provide an _ unbiased _ estimate of the object location .this assumption is not justified for all localization routines , as was shown in and .the ultimate limit of localization precision , as a function of the emission wavelength , photon emission rate , objective lens numerical aperture , and acquisition time , is , where is the optical system efficiency , defined as the fraction of photons leaving the object that reach the detector .this is a fundamental limit of localization and does not include the effects of pixelation or noise .when these effects are included , the relation is much more complicated , and the limiting precision depends on the particular imaging conditions and detector specifications .knowledge of the expected intensity distribution for a diffraction - limited spot is used in feature localization routines to obtain the spatial coordinates of an object with high precision and accuracy .this knowledge can also be used to segregate two spots that are closely spaced . developed a method for automatic detection of diffraction - limited spots separated by less than the rayleigh limit in three - dimensional image stacks under the key assumption that each spot detected in an image is comprised of a finite number of superimposed psf s . using simulated data , thomann __ found that their algorithm can resolve points at sub - rayleigh separation .the localization accuracy approached the nanometer range for signal - to - noise greater than db , and was sub-20 nm for lower signal - to - noise ratios .high accuracy was only maintained for points separated by at least the rayleigh limit , and depended on the relative brightness of the two points .spots of equal brightness could be distinguished at a distance of half the rayleigh limit . 
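( for orientation , the sub - rayleigh separations quoted above are measured against the rayleigh criterion ; the short sketch below evaluates that length scale , and half of it , for generic fluorescence - microscopy values of emission wavelength and numerical aperture , which are illustrative choices rather than parameters of the cited work . )

```matlab
% rayleigh criterion r = 0.61*lambda/NA for generic, illustrative values.
lambda = 520e-9;                        % emission wavelength (m), placeholder
NA     = 1.4;                           % objective numerical aperture, placeholder
r_rayleigh = 0.61 * lambda / NA;
fprintf('rayleigh limit : %.0f nm\n', r_rayleigh * 1e9);
fprintf('half the limit : %.0f nm\n', 0.5 * r_rayleigh * 1e9);
```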
for cases where the spots differ in brightness ,the resolution limit decreased by up to 50 % .several works address high fidelity tracking in systems where particle trajectories may overlap , a common problem encountered in using two - dimensional detection to acquire images of three - dimensional objects .imaging using three dimensional detection of diffraction - limited spots separated by greater than the rayleigh limit lifts the degeneracy of particle overlaps and ( dis)appearances .just as the dynamical resolution of point - like objects is limited by the resolution in the feature localization , so also the resolution for volumetric changes of cells or subnuclear regions is limited by the faithfulness of any surface reconstruction being performed .many works address the problem of reconstructing the topography of a surface from sample points in three dimensional spaces .watertight surface reconstructions , which return a well - defined interior and exterior , are needed for many applications , including computer graphics , computer aided design , medical imaging and solid modeling , but are not guaranteed by many reconstruction methods .basic surface reconstruction techniques can be classified into explicit representations , including parametric surfaces and triangulated surfaces , in which all or most of the points are directly interpolated based on structures from computational geometry , such as delaunay triangulations ; and implicit representations , which solve the problem implicitly in 3d space by fitting the point cloud to a basis function , and then extracting the reconstructed surface as an iso - surface of the implicit function .methods in the second category differ mainly in the different implicit functions used .implicit representations are a natural representation for volumetric data : the marching - cube ( mc)-like algorithm is currently the most popular algorithm for isosurface extraction from medical scans ( ct , mri ) or other volumetric data .mc has been applied to subcellular and cellular volumetric data using commercially - available software applied to three - dimensional image data obtained by confocal microscopy .mc produces the desired watertight , closed surface and is easy to parallelize , but requires uniform ( over-)sampling , produces degenerate triangles ( requiring post - processing re - meshing ) , and does not preserve features .while the output mesh that mc generates is adequate for visualization purposes , it is far from being suitable for surface or volume quantification in situations of local undersampling due to high curvature . 
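matlab s isosurface function provides a readily available marching - cubes - style extraction of the kind described above ; the fragment below applies it to a synthetic distance field ( a sphere ) purely to illustrate the implicit - representation route , and is unrelated to the commercial software mentioned in the text .

```matlab
% implicit-surface route: extract an isosurface (marching-cubes-style)
% from a synthetic distance field; illustrative only.
[x, y, z] = meshgrid(-8:0.5:8, -8:0.5:8, -8:0.5:8);
v  = sqrt(x.^2 + y.^2 + z.^2);          % distance from the origin
fv = isosurface(x, y, z, v, 5);         % triangle mesh of the r = 5 level set
fprintf('%d vertices, %d faces\n', size(fv.vertices, 1), size(fv.faces, 1));
% patch(fv, 'FaceColor', 'red', 'EdgeColor', 'none'); view(3); axis equal
```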
in practice , the data may sample only part of a surface densely , and may suffer from inadequate sampling in other , local regions , due either to an imperfectly homogeneous distribution of a fluorescent reporter or to high local curvature of those regions relative to the imaging ( pixel ) sampling rate . for example , in budding yeast , due to the small cell size and the even smaller incipient daughter cell ( bud ) size , there are local regions in which the curvature may be high relative to the sampling , even when sampled at the highest microscope magnification . existing surface reconstruction algorithms face difficulty if the data contain local undersampling , and will yield holes or artifacts under these conditions . we envision many scenarios where local undersampling is present . the tight cocone method generates a surface representation given a set of sample points , suppressing undesirable triangles and repairing and filling holes in the surface . it provides a water - tight reconstruction even when the sampling bounds required by other algorithms are not met . for surfaces that are complex , i.e. that are not single compartments , the tight cocone algorithm will reconstruct the full surface . by decomposing the space and re - running the algorithm , we are able to build a more complete volumetric representation of the dividing yeast cells , producing watertight reconstructions of each compartment . in this work , feature finding and tracking in living cells were developed together with surface reconstruction and volume determination . the algorithms are semi - automated to maximize the number of cells that can be analyzed with a minimal amount of subjective manual input . these new methods were applied to investigate budding yeast mitotic spindle dynamics at high spatial resolution over the entire cell cycle , and the compartment volumes of dividing cells . to analyze the spatial location of mitotic spindle poles in a reference frame intrinsic to the dividing _ s. cerevisiae _ cell , we determined the 3d position of the poles and , in parallel , the cell bounding surface , the plane of cell division ( neck plane ) , and the neck plane center , as landmarks . to fluorescently tag spindle poles , we used cells in which the chromosomal copy of spc42 was fused to the red fluorescent tandem - dimer tomato . we additionally used cells in which the chromosomal copy of the g - protein gpa1 was fused to gfp , to visualize the cell surface in the same cells . we imaged asynchronous live - cell populations in three dimensions , yielding images containing up to cells each .
[ figure caption : the and axes lie in the plane separating the mother and bud cavities during cell division , called the _ bud neck _ in budding yeast ; the coordinate in this definition is positive in the mother cavity and negative in the bud . ]
experiments were performed at a temperature of 25 .
at this temperaturethe budding yeast cell cycle is hours .photobleaching restricts the window of time that a single cell can be observed at several nanometer resolution .thus , to capture spindle dynamics over the entire cell division , populations of unsynchronized cells were observed , with each cell in a field - of - view acting as a sample of a specific temporal window of the cell cycle .the spindle poles were labelled with one fluorescent protein and cell periphery was labelled with a different fluorescent protein .the spindle pole trajectories and mother and bud cavity surfaces were reconstructed from fluorescence data as described below .the final result of pole and surface finding is an ensemble of spindle pole trajectories , with the capability for each trajectory to be described in a coordinate system relevant to cell division for that cell .the geometry resulting from such an analysis for a single cell is plotted in fig .[ fig : img_sim : cell_coord ] . in the next section ,the algorithms used for extracting spindle pole dynamics and cell surface topography are described .all implementations were written in matlab ( mathworks , natick , ma ) . as described above , knowledge of the expected intensity distribution for a diffraction - limited spot can be used to automatically localize fluorescently - labelled objects that are smaller than the diffraction - limit .the general principle of the localization routine is depicted in fig .[ fig : ff:3d_ff_principle ] . by fitting the full intensity distribution of the object to a three - dimensional gaussian function, the centroid of the spot can be determined to sub - pixel resolution in all three dimensions . to construct spindle pole trajectories from a time series of confocal fluorescence image stacks ,an automated three - dimensional feature - finding and tracking algorithm was used .the algorithm locates the centroids of the spindle pole intensity distributions , termed features , to sub - pixel resolution in 3d in a series of successive confocal image stacks acquired at controlled time intervals .it was originally developed as center - of - mass fitting for applications in colloidal systems at 3d sub pixel resolution , and adapted for application in living cells using gaussian fitting by two of the authors .the tracking algorithm links localized features in sequential image stacks and outputs a time series of coordinates for each feature .the resolution of the three - dimensional localization depends on the signal - to - noise ratio of the images , as will be discussed in detail below . for this study, the localization accuracy is between and nm in 3d . the automated feature finding consists of three main steps : filtering , initial estimation of feature positions , and refinement of position estimates . to suppress image artifacts and noise , which exhibit high spatial frequencies ( small length scale ), the images are initially filtered using a three - dimensional band - pass gaussian kernel . by choosing a spatial cut - off for the filter kernel close to the characteristic size of a diffraction - limited spot, high frequency noise is attenuated to a level far below that of true features .once the image has been filtered , pixels that are local intensity maxima can be used as initial estimates for feature positions . 
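a minimal sketch of these first two steps , filtering and candidate detection , is given below in matlab ( the implementation language used in this work ) . it assumes the image processing toolbox functions imgaussfilt3 and imregionalmax , uses a difference - of - gaussians band - pass in place of whatever kernel the original code uses , and uses random data as a stand - in for a confocal stack ; the filter scales and the intensity cut - off are placeholders to be matched to the real spot size .

```matlab
% sketch of band-pass filtering followed by local-maximum candidate detection.
stack = rand(64, 64, 21);                     % stand-in for a confocal stack
sigma_small = 1;  sigma_large = 4;            % ~spot size vs background scale (placeholders)
bp = imgaussfilt3(stack, sigma_small) - imgaussfilt3(stack, sigma_large);

cand = imregionalmax(bp);                     % logical mask of local maxima
cand = cand & (bp > 3*std(bp(:)));            % crude intensity cut-off (placeholder)
[iy, ix, iz] = ind2sub(size(cand), find(cand));
fprintf('%d candidate features at pixel resolution\n', numel(ix));
```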
to further distinguish real features from noise, the integrated intensity is calculated for a sub - volume surrounding each of the local maxima .features for which the integrated intensity is below a cut - off value are excluded from further processing . at this point , the feature position estimates have a spatial resolution equal to the pixel size , typically 180 nm in and and 300 nm in . to refine the feature position ,a three - dimensional gaussian function is fit to the intensity distribution within the sub - volume surrounding the initial estimate , using non - linear least squares .the final sub - pixel estimation of the feature location is obtained by a second iteration of gaussian fitting .the second iteration involves first shifting the pixel - spaced sub - volume so that it is distributed around the centroid of the initial gaussian fit , followed by a second least squares gaussian function fit , using the outputs of the initial fit as parameter estimates for the second fit .the band - pass length scale , the dimensions of the sub - volume used for fitting , the integrated intensity cut - off value , and a characteristic size of the object are used to define a region around the candidate feature in which a second feature can not be located , and are defined by the user .the last parameter is adjusted to allow or reject any overlap between two or more features of interest .these four parameters are set to optimize the fidelity of feature localization across a large number of cells contained in a single field - of - view .since photobleaching degrades the signal of a given feature as it is observed over time , the integrated intensity cut - off can be automatically reduced by a user - defined constant factor for each timepoint in the series of confocal stacks .the tracking algorithm links localized features in sequential image stacks and outputs a time series of coordinates for each feature .the algorithm used here is based on crocker _et al . _ and implemented in matlab .feature coordinates are linked in time by minimizing the total displacement of each of the identified features between successive frames .the search for corresponding features in consecutive frames is constrained to a maximum likely displacement over the timescale between frames .accurate and robust tracking is obtained by tailoring the timescale of image acquisition and the cut - off distance to the particular dynamics of the features being tracked .high fidelity tracking is obtained by applying successive iterations of the tracking algorithm to each data set .this method ensures that the majority of trajectories over a cell population are captured , and results in data sets biased for trajectories that persist to long times .the cell periphery is visualized using fluorescent reporters that localize to the cell cortex . 
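before describing the surface reconstruction in detail , the refinement and linking steps above can be sketched as follows ( matlab ; a single - pass simplification of the two - iteration gaussian refinement , with a greedy nearest - neighbour linker standing in for the crocker - style tracking ; all names and numbers are illustrative , not the authors' code ) .

```matlab
% save as demo_refine_and_link.m -- a single-pass simplification of the
% sub-volume gaussian refinement, plus a greedy nearest-neighbour linker.
function demo_refine_and_link()
    stack = rand(32, 32, 16);                 % stand-in for a confocal stack
    c0    = [16 16 8];                        % pixel-level candidate (row, col, plane)
    fprintf('refined centre: %.2f %.2f %.2f\n', refine_spot(stack, c0, 3));

    ptsA = [10 10 5; 20 25 9];                % feature coordinates in frame t
    ptsB = [10.4 9.7 5.2; 20.6 25.1 9.3];     % feature coordinates in frame t+1
    disp(link_frames(ptsA, ptsB, 2));         % row of ptsB matched to each row of ptsA
end

function c = refine_spot(stack, c0, r)
    % fit A*exp(-|x - c|^2 / (2 s^2)) + b to a (2r+1)^3 sub-volume.
    [ny, nx, nz] = size(stack);
    yr = max(1, c0(1)-r):min(ny, c0(1)+r);
    xr = max(1, c0(2)-r):min(nx, c0(2)+r);
    zr = max(1, c0(3)-r):min(nz, c0(3)+r);
    sub = double(stack(yr, xr, zr));
    [X, Y, Z] = meshgrid(xr, yr, zr);
    model = @(p) p(1)*exp(-((X-p(2)).^2 + (Y-p(3)).^2 + (Z-p(4)).^2)/(2*p(5)^2)) + p(6);
    cost  = @(p) sum(reshape(model(p) - sub, [], 1).^2);
    p0    = [max(sub(:)), c0(2), c0(1), c0(3), 1.5, min(sub(:))];
    p     = fminsearch(cost, p0);
    c     = p([3 2 4]);                       % back to (row, col, plane) order
end

function idx = link_frames(ptsA, ptsB, maxdisp)
    % greedy nearest-neighbour linking; idx(i) is the matched row of ptsB or NaN.
    idx   = nan(size(ptsA, 1), 1);
    taken = false(size(ptsB, 1), 1);
    for i = 1:size(ptsA, 1)
        d = sqrt(sum((ptsB - ptsA(i, :)).^2, 2));
        d(taken) = inf;
        [dmin, j] = min(d);
        if dmin <= maxdisp
            idx(i) = j;
            taken(j) = true;
        end
    end
end
```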
using the procedure we developed ,if the reporter is distributed uniformly over the entire surface , the fluorescence intensity distribution in a confocal image stack can be used to reconstruct the cell surface .automated surface reconstruction is carried out on stacks of images obtained using reporters that label the periphery of living cells , provided the resolution in is sufficient and the label ( dye or fluorescent reporter protein ) distributes uniformly around the cell periphery .key features of the surface shape that are important to cell division , such as the bud neck in budding yeast , may then be extracted .the plane of cell division in budding yeast is called the bud neck : in any dividing cell it is characterized by local deformation of the bounding surface to create two cavities out of one .in addition to the final topology change , local geometry is also modified smoothly during the process of cell division . using the mother and bud cavity fits for each cell, the neck plane position and orientation can be calculated using the methods described here .the automated surface reconstruction procedure we developed consists of five main steps : filtering and thinning , initial estimation of closed bounding surface , refinement of surface estimates by decomposition of the space , calculation of volume of each convex cavity ( here , mother and bud cavity of mitotic cells ) , and , in mitotic cells , determination of position , orientation , and size of the neck separating the mother and bud cavities , which provides a basis coordinate system for cell division unique to each cell .the first step of surface feature finding begins with spatial filtering of the image stack , with the characteristic size parameter of the 3d band pass algorithm corresponding to the surface thickness . using a first estimate of the thickness, one can iterate through different values until the surface and a reasonable cavity size are reproduced with optimum fidelity .the thresholding step must use a threshold value that maximizes the amount of surface included , while minimizing artefacts such as protrusions , small clusters away from the surface , and non - faithful connection between opposite sides of a cavity .typically , the true surface data is comprised of thousands of connected voxels , whereas small noise clusters may span only a few hundred voxels or less , providing an effective means for the identification and removal of small noise clusters . the thresholded image stack is then sliced in -plane images , each of which is `` thinned '' to a minimal collection of points that preserves holes and general shape , to allow for a full surface reconstruction .thinning was used instead of skeletonization which is less suited for minimizing the surface data as it sporadically produces branches .the thinning algorithm transforms an entity within the image into a single pixel thickness entity .the algorithm yields good results on slices near the equator of the surface to be analyzed , where it reduces a thick shell to a 1 pixel - wide shell .an example on one focal plane of data acquired from the surface reporter imaging in budding yeast , before and after the thinning step , are shown in fig . 
[fig : meth_surf3 ] ( a - b ) .on nearly - flat curved surfaces , which are manifest in slices through the top or bottom of a cell , the thinning algorithm has the undesired effect of reducing a uniform disk to a single or branching lines .by additionally slicing the image stack into sets of and planes , and processing these sets of planes with the thinning algorithm , we eliminate these artifacts specific to the slicing direction . in these new slices , `` top '' and`` bottom '' caps are instead part of equatorial profiles .the result produces good performance , defining the top and bottom of the cell as a collection of points in a contour .the estimated minimal set of points describing the surface is then constructed by merging the results from the , and slice directions .we assemble the final cloud of points by requiring that for a point in space to be retained , it must appear in at least two of the slicing directions .the resulting 3d thinned version of the thresholded image stack is composed of a cloud of points centered approximately in the middle of the original thick surface of the image stack .again small disconnected clusters of points can be removed from the thinned surface by requiring a certain minimum size of connected points .the output of this discrete cloud of points estimation step is used as input to the voronoi - based algorithm to obtain useful surface representation , which can then be used to quantify volume . to define the volume , the scattered point samples of the surface need to be turned into a proper watertight surface mesh . in cases where the signal is strong and uniform , an accurate, arbitrarily - shaped tessellated surface can be obtained via a full surface reconstruction .we achieve this using the tight cocone software applied to the minimized discrete surface points .the tight cocone algorithm takes a point distribution in 3d and attempts to reconstruct a water - tight surface described by those points .the algorithm is robust to local undersampling as intended , but as it is sensitive to noise , we supply the filtered data .the output is a list of triangles that comprise the reconstructed surface . as a proof - of - concept example , the result on one budding yeast cell for which the surface is thus tessellated is shown in fig .[ fig : meth_surf3 ] .the determined surface is remarkably faithful to the actual cell s topography , reproducing bumps , curves , and the typical smooth curvature transitions at the bud neck . from a reconstructed surface , quantities such as volume may then be computed .if the shape is a single convex hull , the volume can be computed simply , by using a well - known algorithm .if the complete surface is not a single convex hull , this approach will not robustly provide the volume .for example , in budding yeast , the bud cavity can protrude significantly out of the mother , with a concave neck region linking the two cavities . however , individually , the mother and bud cavities are convex .we effectively decompose the complex multiple - hull surface into individual convex hulls , for each of which compartment the volume can be determined . in principle, the negative curvature of the neck region could be used to obtain the orientation and location of the plane separating the mother and bud cavities . in practice , even though the periphery is well - labeled so that the whole surface can be reconstructed , undersampling prevents determination of the local curvature everywhere . 
moreover , in small budded cells the neck curvature may only just change sign at the transition region from mother to bud . instead, we exploit other information to uniquely identify the tessellated surface data in this region between the two cavities . in our dividing cells ,we subdivide the surface into sections by performing a fit of two ellipsoids to the minimized surface points . for each vertex of the closed bounding surface, we then compute the shortest distance to the fitted ellipsoids .the computed proximity is used to segregate the vertices into two subsets : those assigned to the mother cavity , and those assigned to the bud cavity . after this assignment step, we perform the water - tight surface reconstruction on each segregated point set .this provides individual convex surfaces which are used for robust calculation of volumes for each of the two cavities . following this step ,those triangles in the original full surface reconstruction having at least one vertex belonging to each of the mother and bud sets are assigned to the neck region , and are used to reconstruct the surface of the neck region .all of these steps are carried out for each dividing cell .the volume of a convex hull can be computed simply , as follows .considering any point on the surface of or inside the volume , the volume is calculated as the sum of all pyramids whose base is one of the triangles forming the surface and whose summit is that point .these pyramids constitute a complete set , containing the volume enclosed by the surface , and the sum of their volumes is the total volume of the convex hull . from the minimized points forming the neck surface ,we extract geometric characteristics of the neck by fitting a circle ( or ellipse ) to these points .this yields the neck plane location ( including the center ) ; circumference ; and , via its normal , orientation with respect to the imaging frame of reference .knowledge of this plane provides a reference frame for cell division unique to each cell . as can be seen in fig .[ fig : meth_surf3 ] ( d ) , this automated procedure results in faithful description of the orientation and position of the neck plane between the mother and bud cavities .the results obtained show the suitability of the method for a correct representation of the target object .the trajectories of other point - like features obtained simultaneously with the surface in each cell can now be transformed into their cell - unique coordinate system , and therefore examined relative to the neck origin .together , the spindle position and surface mapping could additionally be used in tandem to define a wealth of physical observables .measuring the dynamics of the mitotic spindle in a coordinate space defined by the plane of cell division , over a population of cells , would reveal dynamical and shape cues related to stages of the cell cycle not quantifiable precisely by other methods . 
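the pyramid ( signed - tetrahedron ) sum described a few sentences above can be written in a few lines of matlab . the sketch below assumes a closed , consistently oriented triangle mesh given as an n - by - 3 vertex array and an m - by - 3 face index array , which is the list - of - triangles form of output produced by the surface reconstruction .

```matlab
% save as mesh_volume.m -- volume enclosed by a closed, consistently
% oriented triangle mesh: V is n-by-3 vertices, F is m-by-3 face indices.
function vol = mesh_volume(V, F)
    p0  = mean(V, 1);                          % any reference point; the centroid is convenient
    vol = 0;
    for k = 1:size(F, 1)
        a = V(F(k, 1), :) - p0;
        b = V(F(k, 2), :) - p0;
        c = V(F(k, 3), :) - p0;
        vol = vol + dot(a, cross(b, c)) / 6;   % signed volume of one pyramid/tetrahedron
    end
    vol = abs(vol);                            % orientation-independent result
end
% for a convex compartment this can be cross-checked against
%   [~, vref] = convhulln(V);
```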
specifically , spindle orientation , and translocation with respect to the neck plane ,could be obtained automatically .the orientation of the normal vector to the neck plane , is determined from the fitting as described above , and its sign can be assigned as desired : we here define as extending into the mother cavity .a spindle vector may be defined as the vector spanning from the pole proximal to the bud to the pole distal to the bud .spindle orientation is then parametrized by the angle between and , given by , where .for each cell , the spindle orientation and its angular excursions may be calculated over the observation window .the distance of the spindle poles from the neck plane may be computed by constructing the vectors , representing the displacement of each spb located at point from the neck centre at point .the distance , given by , is positive for all in the mother cavity , and negative for all in the bud cavity , using our definition of the direction .accurate segregation of aberrant spindle positioning behavior from that of normal cells would require knowledge of the spindle size and position relative to the mother and bud cavities .such studies , achievable at high resolution in cells in sc medium with both poles and surfaces labelled , will be enabled by these tools . the ability to calculate compartment volume might have other uses .not all volumes one wants to quantify are conveniently regular .chromosome territories , nucleoli , and golgi bodies are all examples of compartments with complex distributions or irregular outlines , occupying in higher eukaryotes approximately the same volume as the yeast cells under study here , a few tens of femtoliters . methods to study such compartments in a statistically meaningful manner have been limited and are generally complicated by the fact that estimating the volume of such exclusion zones requires calculating the volume of irregular domains .the accuracy of feature finding and tracking in cells is influenced by many factors .signal - to - noise must be maximized with well defined intensity distributions for each feature , spanning multiple voxels so that the nyquist criterion is satisfied . the hardware , signal - to - noise , and sampling requirements limit the achievable temporal resolution .investigation of fast dynamics in the cell requires some trade - off between temporal and spatial resolution . a detailed assessment of the factors influencing the performance of localization algorithms to determine the limits of the spatial resolution greatly aids in the design and interpretation of feature finding experiments .a simulation of diffraction - limited spots that is relevant to the experimental conditions requires knowledge of the true psf of the optical system used for the experiment .the theoretical description of a confocal psf is only relevant in the ideal aberration - free scenario , and it is necessary to measure the real psf of the microscope .measurement of the psf will reveal any aberrations that may be present in the optical system . the image of an object that is smaller than the diffraction - limit closely approximates the true psf of the microscope . in an optical system , a diffraction - limited object acts as a low - pass spatial filter with a high cut - off frequency relative to that of the otf .the cut - off frequency increases as the inverse of the object size .the true psf is the image of an infinitesimal object , i.e. 
one with an infinite bandwidth .however , in practical imaging conditions , background and noise signals often mask any contribution of high frequency components to the psf , so that the effect of a small but non - zero object size on the psf is negligible. therefore subresolution fluorescent beads may be used to measure the psf .measurement of the psf for our optical system is described in the supplementary materials . to simulate all aspects of image acquisition realistically ,the background or bias signal and the readout noise from the ccd camera were also measured , as also described in . in order to determine the accuracy and precision associated with the feature localization algorithms used in this hybrid method , the localization algorithmswere applied to computer generated confocal images of spindle pole bodies .our procedure for generating these simulated images , using parameters drawn directly from our hardware , signal - to - noise , and 3d sampling characteristics , is described in . for each simulation run ,an image was generated at a given signal level and with a given displacement between the two diffraction - limited spots in a prescribed direction in three - dimensional space .noise characteristic of the yeast medium and camera ( as measured ; ) was added to the image , and the feature localization algorithm was applied . at each signal level andspot separation tested , 500 iterations of feature finding were performed with a new noise profile generated for each iteration .all simulations were performed in matlab .feature localization parameters were set to the values used for live cell trials .the feature coordinates , as determined by the localization algorithm , were then compared to the real coordinates to obtain an estimate of the localization error over a range of signal - to - noise values and spot separations .knowledge of the true spot locations allows for an estimate of both precision and accuracy of a given measurement .the deviation between the actual position and the localized position was derived in this way .the result for a point - to - point displacement typical of a metaphase spindle , over a range of signal - to - noise levels , is shown in fig . [fig : img_sim : error_vs_rsn ] .anaphase spindles have greater separation and even less vector component along the direction , so suffer less from localization error than than do metaphase spindles .budding yeast cells are small , rendering subcellular quantitation challenging .nevertheless , the spindle poles in these cells are separated by in the early phase of cell division ( metaphase ) , and by larger distance later in the cell cycle .thus the metaphase spindle is the more stringent test of true subpixel and aberration - free resolving of the spindle pole features from one another . in our applications studied here , metaphase and anaphase , the spindle pole psfs in our optical system do not overlap . 
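a skeleton of this monte carlo error estimate is sketched below ; make_spot and localize_spot are hypothetical stand - ins for the simulated - image generator and the gaussian - fit localizer ( they are not functions of any toolbox ) , and the signal levels are placeholders .

```matlab
% skeleton of the monte carlo estimate: place a spot at a known sub-pixel
% position, add noise, localize, and repeat; the spread of (estimate - truth)
% gives precision and its mean gives bias.
ntrial = 500;  snr_list = [3 5 10 20];        % trial count as in the text; SNRs are placeholders
err = zeros(ntrial, numel(snr_list));
for s = 1:numel(snr_list)
    for t = 1:ntrial
        truth = 8 + rand(1, 2);               % random sub-pixel centre
        img   = make_spot(truth, snr_list(s));     % hypothetical image generator
        est   = localize_spot(img);                % hypothetical gaussian-fit localizer
        err(t, s) = norm(est - truth);
    end
end
fprintf('SNR  mean error  std\n');
disp([snr_list(:) mean(err)' std(err)']);
```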
using the simulations we had developed , we tested the effect of overlap at distances of the same order as , and smaller than , our metaphase spindle length , to validate that effects of overlapping features need not be taken into account at these distances , and to determine at what separations such effects play a role . interestingly , we found that when filtering is used , the filtering itself is a source of a small overestimation of the separations at distances . for such separations , spatial band - pass filtering attenuates the summed intensity distribution in the region between the two points , and forces the least - squares fitting routine to localize the centroid of each intensity distribution away from the overlap region , resulting in overestimation of the point - to - point separation . as overlap increases , the centroids of each feature are forced back towards the overlap region , initially decreasing the estimated separation , until significant overlap has occurred at , and the two spots are difficult to distinguish ( see figs . [ fig : img_sim:2poinintdist ] , [ fig : img_sim : msep_vs_asep ] ) . for automated feature detection with live cell image data , spatial band - pass filtering is often necessary : real data are subject to many sources of noise , and accurate automated segregation of the real features of interest from noise is often not possible without spatial filtering . therefore , although our feature separations here well exceed this limit , the systematic bias demonstrated here will always contribute to the localization error for closely spaced features . although small , this effect must be considered in the interpretation of the results of a feature finding experiment under such conditions . propagation of localization error in the separation of two points was studied in order to determine its role , if any , in the observable , the spindle pole - pole separation . comparison of measured error with propagated error for point - to - point separation showed that for large separations the covariant terms are negligible ( see fig . [ fig : img_sim : errorpropcompare ] ) . as the two features approach one another to within less than , the covariant terms become significant and they must be included for an accurate estimate of the error . we determined that these two effects must be taken into account when the distance between two spots is such that their point spread functions overlap . in our observable , the budding yeast spindle length in metaphase and anaphase , the spindle pole - pole distances we are measuring always exceed these limits . we used budding yeast as a model system in which to study mitotic spindle dynamics . live unsynchronized cell populations expressing tdtomato - tagged spc42 , a protein in the spindle pole body ( spb ) protein complex , and the surface reporter gpa1-egfp , were imaged by confocal fluorescence microscopy . spb s labelled with a red fluorescent protein variant ( tdtomato ) at the protein spc42 are point - like features which were localized and tracked using the methodology described above . surface data were analyzed using the surface fitting described above . all yeast strains used were derived from strain by4741 . media used for yeast culture are described in . yeast genetic manipulations were carried out as described in . for microscopy , yeast strains were incubated in sc at 25 until early logarithmic phase ( normalized optical density at nm ) .
to obtain the desired cell number density for imaging, 1 ml of suspended cells were pelleted for 15 s at , washed twice with of lactate medium , then re - suspended by gentle mixing 100 times with a pipet , and of culture was pipetted onto a sc - infused agar pad on a slide and covered with a glass coverslip .cells were imaged at 30 in stacks of 21 focal planes spaced apart , spanning the entirety of the cells in the axial direction , at 10 s per stack , using a custom multi - beam scanner ( pinholes ) confocal microscope system ( visitech ) mounted on a leica dmirb equipped with a nanodrive piezo stage ( asi ) and and solid state lasers controlled by an aotf .images were acquired with a em - ccd camera ( hamamatsu ) using a 63x 1.4 na objective . from the images , we computed the positions of the spindle pole bodies at three - dimensional sub - pixel resolution by fitting the intensity distribution to a three - dimensional gaussian function ( fig .[ fig : ff:3d_ff_principle ] ( b ) ) , obtaining a spatial resolution of in 3d for each pole .all analysis was carried out using custom scripts in matlab ( mathworks ) . for imaging surfaces , yeast strain of exactly the same genotype with a g - protein labeled as a gfp clone was obtained ( life technologies , ) .the g - protein , gpa1-egfp , localizes to the cell cortex and labels the whole cell periphery . for all analysis ,data was collected from cell populations representing four independently - derived yeast strains . for combining of position data from the two channels ,chromatic shift between the and excitation channels was measured and corrected for prior to combining results ( see ) .we discovered that for yeast cells imaged in lactate medium , exposure to laser irradiation for image stack collection at 5 second intervals results in a phototoxic effect that is not alleviated by increasing the quantum efficiency of the fluorescent reporters .the effect causes cells to arrest at the transition from metaphase to anaphase .this effect was confirmed by observing cells in different media types , and varying exposure times for prolonged periods under sealed coverslips .literature - reported values for the rapid phase of anaphase b in budding yeast demonstrate that this rapid phase is a slowly - varying process compared to the image stack acquisition rate we were using .therefore , our acquisition scheme should capture the transition from pre - anaphase to anaphase .specifically , in observation of an asynchronous budding yeast cell population over a finite 20-minute observation window , representing /5th of the cell cycle , the transition should be observed for cells in an analyzed population of 160 total cells . initially imaging in lactate medium, we observed a major discrepancy from the expected biological phenotype : not a single cell traversed to anaphase while under observation . 
to investigate this and determine carefully in what regime the cells can be observed without any photon - absorption - induced perturbation , wild - type yeast strains expressing the spindle pole reporter spc42-tdtomato were prepared for microscopy as discussed above using lactate medium .a small population of asynchronous cells were imaged under a sealed coverslip for 30 minutes at 20 second exposure intervals .six different fields of view were examined with approximately ten cells per field of view .no qualitative evidence of anaphase spindle elongation was observable in any of the cells .another population of cells was cultured , washed and imaged in a minimal medium , synthetic complete ( sc ) , which consists only of the 22 amino acids for yeast together with sources of carbon ( dextrose ) and nitrogen ( peptone ) .four different fields of view were examined with approximately ten cells per field of view .the cells were imaged for 30 minutes at 5 , 10 , and 20 second exposure intervals . in this minimal medium , a proportionate (expected ) number of cells displayed anaphase spindle elongation , for all the exposure intervals studied .furthermore , the labelled spindle poles were observed to maintain signal intensity longer when imaged in sc medium than in the lactate medium , further indicative of fewer oxygen free radicals being created from photon absorption . following these initial studies , the spindle dynamics of cells incubated and imaged in sc mediumwas investigated in detail , using the protocol for quantitative image acquisition we derived from our studies .to minimize phototoxic effects , the acquisition interval was selected to be 10 seconds , and the total acquisition length to be 20 minutes .a population of unsynchronized wild - type cells expressing the spc42-tdtomato reporter was incubated , washed , and imaged in sc medium .a confocal stack was collected with the 491 nm excitation line for each time point ( sampling rate ) , with the exposure time set to . within a given field - of - view , only cells for which two spindle poles could be detected within our resolution limits were analyzed .signal - to - noise levels were typically between 7 and 11 db .the spindle pole dynamics were analyzed in all cells in three - dimensions . in these data ,those cells with two poles were segregated from the data . for each time - point in these cells , the displacement between the two poles was computed , to construct the time evolution of spindle length . for cells in metaphase and early to mid - anaphase, the separation between spb s corresponds to the spindle length .an additional level of segregation based on spindle length can thus be performed to yield separate metaphase and anaphase subpopulations . for cells in the latest stage of anaphase ,the elongated spindles begin to buckle and then disassemble as a mechanical entity , so that in this latest phase of anaphase , the spb separation does not reflect the true , curvilinear , intact spindle length .the measured spindle pole separation versus time obtained following analysis of the resulting images is plotted in fig .[ fig : results : sc_wttraj ] . for representative cells from the total imaged population , individual spindle - length trajectories across the entire imaging time are displayed . 
[ figure caption : spindle pole separation as a function of time , in a cell traversing anaphase . ( b ) results are displayed for 3 metaphase cells showing representative bipolar spindle behaviors , from a total population of analyzed cells each of wild type ( blue triangles ) , kip1 ( orange stars ) , and cin8 ( green circles ) . ( c - e ) representative anaphase behaviors of ( c ) wild - type cells , ( d ) kip1 cells , and ( e ) cin8 cells . ]
these data , obtained from cells imaged in sc medium , demonstrate the full range of stereotypical dynamical phases that are characteristic of normal progression through the cell cycle . represented are the approximately steady - state pole - pole separation at , characteristic of yeast metaphase ( sometimes called pre - anaphase ) ( fig . [ fig : results : sc_wttraj ] ( b ) ) ; the directed , apparently irreversible , rapid spindle elongation characteristic of yeast early anaphase , in which the pole - pole separation increases rapidly from to over a duration of several minutes , followed by slower , less robust spindle elongation ( fig . [ fig : results : sc_wttraj ] ( c ) ) ; and , additionally , cells with initially long spindles , in late anaphase , that exhibit uncorrelated pole - pole motion characteristic of total spindle disassembly once anaphase is complete . using these tools and in sc medium , assignment of these cell cycle phases in an unsynchronized population may be done via automated segregation of individual traces based on their dynamical behavior . since in budding yeast chromosome condensation and a metaphase plate are not visible by microscopy , this segregation provides a valuable intrinsic metric for cell cycle state . in performing useful work in the cell , ensembles of kinesin-5 motors are expected to exhibit collective dynamics under nonequilibrium conditions . this should manifest itself in the observable . for cells in metaphase , the spb separation exhibited fluctuations superimposed upon a nearly constant , slowly increasing , function of time . these fluctuations were well above the resolution limits of our feature finding of nm , as the signal - to - noise level over the imaging duration remained . for cells in anaphase , the motor activity at the spindle mid - zone is increased . due to their intrinsic directionality on the antiparallel mts at the spindle mid - zone , the extensile , coherent dipole character of the motors becomes manifest as directed , biased motion ; at the scale of the whole spindle , this results in elongation . the collective dynamics of the motors gives rise to coherent motion over longer timescales than the individual motors typically exhibit . with the precision we obtain for the pole - pole distance in 3d , we can determine whether fluctuations also appear superimposed on the directed , biased anaphase motion , as they do on the metaphase motion . upon observation of the motions with the detail enabled here , the anaphase motions were seen to be coherent , absent the tens - of - nanometer - scale fluctuations we observed in metaphase . to investigate the specific contributions of mitotic motors to spindle dynamics , genetic perturbations of the mitotic motors were induced in budding yeast . imaging and analysis of spindle dynamics using these tools was carried out for cells in which the gene for one or the other of the two kinesin-5 mitotic motors , and , was deleted .
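( as an aside , one simple way to implement the automated segregation of traces mentioned above is sketched below ; the classification thresholds are arbitrary placeholders , not the criteria actually used , and the two synthetic traces merely mimic a steady metaphase - like and an elongating anaphase - like spindle . )

```matlab
% sketch: classify spindle-length traces by net elongation and mean length.
% L is a cell array of pole-pole distance time series (micrometres).
L = {2 + 0.05*randn(1, 120), ...                      % metaphase-like trace
     linspace(2, 7, 120) + 0.1*randn(1, 120)};        % anaphase-like trace
for k = 1:numel(L)
    elong = L{k}(end) - L{k}(1);                      % net change over the window
    if elong > 1.0
        label = 'anaphase (sustained elongation)';
    elseif mean(L{k}) < 3.0
        label = 'metaphase / pre-anaphase (short, steady)';
    else
        label = 'late anaphase or ambiguous';
    end
    fprintf('trace %d: net elongation %.2f um -> %s\n', k, elong, label);
end
```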
aside from these single - gene deletions , the genotypes were identical to that of the unperturbed cells studied above .populations of cells in which one kinesin-5 motor was deleted were prepared with the fluorescent label .these or cell populations were cultured and imaged in the sc medium as described above .data for the measured spindle pole separation versus time , obtained following analysis of the resulting images , for representative cells from the total imaged populations , are plotted in fig .[ fig : results : sc_wttraj ] .the metaphase spindle length fluctuations displayed in fig .[ fig : results : sc_wttraj ] ( b ) do not differ appreciably between wild - type and mitotic motor mutant populations .many of the cells are observed to pass through anaphase spindle elongation in both the kip1 and cin8 deletion strains .the nature of the anaphase motion in kip1-deleted cells , i.e. those carrying only the cin8 kinesin-5 , is very similar to that in wild type cells , aside from slower spindle pole separation in late anaphase .by contrast , the cin8-deleted cells , i.e. those carrying only the kip1 kinesin-5 , move more slowly through anaphase , consistent with what has been observed previously . additionally , this population also shows much greater variability in overall rate of spb - spb separation from cell to cell ( data not shown ) .the nanometer - scale - resolution enabled by the tools we developed here permit more detailed examination .upon further scrutiny , we see that the data for anaphase in cells show some stop - start stuttering , where the extensile motion is apparently persistent for several tens of seconds , punctuated by periods of motion in the opposite direction .this oppositely - directed , contractile - like motion is not apparent in the anaphase motions of wild - type and cells . it could arise from either brief , passive partial collapse of the spindle , or coherent ( persistent ) motion in the opposite direction ; that is , active contractile motion after directional switching .directional switching has recently been observed for the kip1 motor _ in vitro _ under certain conditions .we have recently used these context - rich tracking tools to examine the statistical mechanics of the fluctuations during metaphase for a large population of cells , in the context of models for the non - equilibrium activity of motors / microtubule ( de)polymerization .it will be interesting to examine the statistical mechanics of the coherent motions during anaphase , for populations of non - genetically perturbed cells and populations of cells with mitotic kinesin-5 molecular motors deleted , enabled by these methods , and to compare these observation to _ in vitro _experiments once _ in vitro _systems are developed in which controllable _ ensembles _ of kinesin-5 motors can be studied .mother and bud compartment volumes of dividing cells were calculated using the method described above applied to the thinned data . in fig .[ fig : results : mutant_volumes - dists ] , the bud cavity volume is plotted versus the mother cavity volume .each data point corresponds to a separate wild - type cell .as expected , the bud cavity was always smaller than the mother cavity . 
in fig . [ fig : results : mutant_volumes - dists ] , the mother and bud volume distributions over the cell population are plotted . each of the mother and bud volume distributions was fit to a gaussian function . there is a degree of variability in the volume of the mother cell and in the volume of the daughter cell within a cell population , as was observed in early quantification using area as a proxy for volume . despite the variability , the distributions are bounded from above and below , due to known mechanisms of the cell cycle . the distribution of mother cells is bounded on the lower side by a minimum cell size threshold for replicative division ; and on the upper side by the fact that the time from one division to the next is compressed in cells that have already divided , and by the fact that during mitosis , growth of the mother cell is minimal as most of the increase in mass goes into bud growth .

[ figure caption ( fig . [ fig : results : mutant_volumes - dists ] ) : ... ( red ) or ( blue ) . a line is drawn for reference . bottom : probability distributions of bud and mother cavity volumes for populations of pre - anaphase wild - type cells ( dark green , bud ; light green , mother ) and for populations of cells with mitotic kinesin deletions ( bud , light red ; mother , dark red ; bud , dark blue ; mother , light blue ) . ]

cell volumes quantitated for the motor deletion strains are displayed in fig . [ fig : results : mutant_volumes - dists ] , superimposed on the results for wild - type cells . the data show that cells have mother and bud volume distributions practically unchanged from those of wild - type cells . by contrast , the volumes in the population are markedly larger , for both the mother and bud cavities , than in wild - type and populations . moreover , the distribution of volumes across the population is much broader for both mother and bud , and the mother and bud distributions are no longer distinguishable . the mean and standard deviations of mother and bud cavity volumes obtained from gaussian fits to all the distributions are collected in table [ tab : volumes ] .

[ table [ tab : volumes ] : mean and standard deviation of the distribution of mother and bud volumes over populations of wild - type cells or cells with one or the other of the mitotic kinesin-5 motor proteins deleted . ]

the differences observed between wild - type and motor - deleted populations in mother and bud cavity volumes suggest a potential relationship between the control mechanism for dynamics of chromosome segregation , in which the kinesin-5 motors are involved , and that for cell size . most striking is the effect of deletions on cell volume . it is possible that the large volumes observed in the population were a result of a delay in the cell cycle for this population during a stage of cell growth : if chromosome segregation in the mitotic spindle is delayed , the cell may continue to produce and partition membrane and mass to the bud , resulting in a bud much larger than normal ; eventually , the distribution stabilizes in homeostatic equilibrium after multiple generations . nevertheless , there is a remarkable correlation , indeed a synchrony , between these processes . synchrony has already been observed between sister chromatid separation and the onset of rapid spindle pole separation , the latter marked at the molecular level by increased coherent kinesin-5 motion in the spindle midzone . this synchrony is now known to be due to sharing of the _ same _ signaling elements across the two processes . it would not be surprising if cell wall growth were also shown
to be in synchrony through some shared feedback mechanism with mitotic motors on the spindle . since cell growth is coordinated with the cell cycle , the bud - to - mother volume ratio serves as a complementary metric for cell cycle progression at a larger spatial scale than the spindle . to explore this further , we demonstrate here the use of the tools we developed in tandem . in fig . [ fig : results : volratio_vs_meanl_overlay ] , the bud - to - mother volume ratio is plotted against the mean spb separation . for wild - type pre - anaphase cells , the bud - to - mother volume ratio is approximately proportional to the mean spb separation , indicating a correlation between cell growth and mitotic progression . the data suggest that spindle length and bud - to - mother volume ratio are coupled : there is a constant ratio between nuclear size and cell size . there may be global mechanical signals that tell a cell or organelle about its overall shape . the coupling between mitotic progression and cell growth appears to be a fundamental property of wild - type pre - anaphase cell populations . although a link between nuclear size and cytoplasmic volume has been suspected for many years , recent studies in yeast have introduced a tractable genetic system in which this question could be answered . fig . [ fig : results : volratio_vs_meanl_overlay ] also plots the bud - to - mother volume ratio versus the mean spb separation for the motor - deleted populations . in the cell population , the bud - to - mother volume ratio follows the same trend as for wild - type cells . for the pre - anaphase population , the bud - to - mother volume ratio is dramatically less coupled to the mean spb separation as compared to the wild - type and the other deletion populations . in some cells , the bud has grown larger than the mother , as exhibited by values of the bud - to - mother volume ratio greater than one . removal of the cin8 motor may perturb components of the control system , altering the coupling between chromosome segregation and cell size control . this can help to explain how the bud cavity may become even larger than the mother cavity . it suggests that if there are internal mechanical signals that tell a cell about its overall state at a larger length scale , these signals are disrupted when the composition of certain force - generating elements of the mitotic spindle is perturbed . although cell size control in yeast has been explained for decades in terms of a minimum cell size before entry into mitosis , many questions remain . for example , it is unclear how yeast , which lack lamins and lamin - associated proteins , adjust nuclear volume in response to changes in cytoplasmic volume . in any organism , the mechanism by which the upper limit to nuclear growth is established is unknown .

[ figure caption ( fig . [ fig : results : volratio_vs_meanl_overlay ] ) : bud - to - mother volume ratio versus mean spb separation , for wild - type cells and for cells with one of the kinesin-5 motors ( red ) or ( blue ) deleted . ]

the results presented here serve as a proof of principle of the methods developed to track spindle pole bodies with high accuracy in full 3-d space , and to faithfully reconstruct budding yeast cell surfaces and volumes . we have presented a novel application of high - resolution position finding in conjunction with computational geometry techniques to reconstruct surfaces of cells as small as a few micrometers , to locate and track dynamic entities at their address in a living cell , and demonstrated the use of this method in capturing subcellular dynamics .
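as a side note on the population - level analysis above , the coupling between the bud - to - mother volume ratio and the mean spb separation , and the gaussian fits to the volume distributions , could be quantified along the following lines . this is a minimal illustrative sketch in python ( using scipy ) ; the function names and statistical choices ( an ordinary least - squares line and a pearson correlation ) are ours and are not taken from the paper 's analysis code .

```python
import numpy as np
from scipy import stats

def volume_ratio_coupling(mean_spb_separation, bud_volume, mother_volume):
    """Correlate the bud-to-mother volume ratio with the mean spb separation
    across a population of pre-anaphase cells (illustrative analysis only)."""
    ratio = np.asarray(bud_volume, dtype=float) / np.asarray(mother_volume, dtype=float)
    slope, intercept, r, p, stderr = stats.linregress(mean_spb_separation, ratio)
    return {"slope": slope, "intercept": intercept, "pearson_r": r, "p_value": p}

def fit_volume_distribution(volumes):
    """Gaussian fit of a mother or bud cavity volume distribution;
    returns the (mean, standard deviation) used in table-like summaries."""
    return stats.norm.fit(np.asarray(volumes, dtype=float))
```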
besides imaging features at high resolution in context - rich fashion , our technique also makes possible measurement of the volume and mapping of the surface of small individual cells or subnuclear regions for which the signal from the bounding surface may suffer from local reporter heterogeneity or high underlying surface curvature . the ability to calculate volumes and shapes in situations of low sampling of surface points is likely to become more important as we develop more sophisticated models of how the interior of the cell and the interior of the nucleus are organized , and how they deform and transduce forces . the authors thank professor tamal k. dey , department of computer science and engineering , ohio state university , for providing us surface mesh generation software , and dr . jeffrey n. strathern , nih / nci , for providing the yeast strain carrying the spc42-tdtomato fusion protein .

* supplemental materials : automated three - dimensional single cell phenotyping of spindle dynamics , cell shape , and volume *

to measure the true psf of the microscope , sub - diffraction beads were imaged in 3d with a confocal point scanner ( vt - eye , visitech international ) attached to an inverted microscope ( leica dm4000 ) with an oil - immersion objective ( leica microsystems hcx pl apo , numerical aperture of 1.4 ) at the two excitation wavelengths . sub - resolution diameter beads ( tetraspec , molecular probes , eugene , or ) were imaged at a fixed exposure time . coverslips were coated with poly - d - lysine to adhere beads to the coverslip . the beads remained immobile throughout the experiment . the concentration of beads was sufficiently low that they were spaced far apart , and only 15 to 20 beads were visible in a given field - of - view . for five different fields - of - view , a set of confocal image stacks was acquired at a spacing of 0.2 between successive confocal planes . typically , 40 stacks were acquired per set . to estimate the noise - free psf , the intensity distribution of each bead in each of the image stacks was localized to sub - pixel resolution using the feature finding algorithm described below . all intensity distributions for a single bead in a set were then averaged with their centroids aligned . since noise is not temporally or spatially coherent , this averaging dramatically reduces the noise signal .
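a minimal sketch of the bead - averaging step just described might look as follows in python . the crop size , the rounding of centroids to whole pixels , and the function names are our own simplifications ( the actual analysis aligns the distributions at sub - pixel resolution ) , so this is illustrative only .

```python
import numpy as np

def average_bead_psf(stacks, centroids, half_size=(6, 8, 8)):
    """Estimate a low-noise psf by averaging sub-volumes cut out around the
    localized bead positions.  `stacks` is a list of 3-d arrays (z, y, x)
    containing images of the same bead; `centroids` the matching (z, y, x)
    positions.  Beads are assumed to lie away from the stack edges so that
    every crop has the same shape."""
    hz, hy, hx = half_size
    crops = []
    for stack, (cz, cy, cx) in zip(stacks, centroids):
        cz, cy, cx = int(round(cz)), int(round(cy)), int(round(cx))
        crop = stack[cz - hz:cz + hz + 1,
                     cy - hy:cy + hy + 1,
                     cx - hx:cx + hx + 1].astype(float)
        crops.append(crop)
    # incoherent noise averages away while the common bead signal remains
    return np.mean(crops, axis=0)
```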
profiles of the psf obtained in this manner for one excitation channel are shown in fig . [ fig : psf ] .

[ figure caption ( fig . [ fig : psf ] ) : ( a ) one plane , and ( b ) the centre plane , of the three - dimensional point spread function measured by acquiring , localizing , and averaging intensity distributions of 100 nm diameter fluorescently labelled beads . the psf for one channel is shown here ; the psf for the other channel is similar ( see text ) . ]

the measured point spread function was symmetric about the x , y , and z axes , and showed no significant optical aberrations . the measured psfs were fit to a three - dimensional gaussian distribution to obtain a model psf for simulations . reported values are the mean and standard deviation of the gaussian fit parameters , over all five different fields - of - view . the standard deviations of the gaussian psf obtained for the gfp channel were determined , in pixels , in the x , y , and z directions . for the tdtomato channel , the measured standard deviations of the gaussian psf were likewise determined in the x , y , and z directions . the readout noise can not properly be determined by calculating a histogram for all pixel intensities across a single image because fixed - pattern noise , whose sources are pixel - to - pixel variations in dark current and non - uniformities in photoresponse , adds additional spread to the intensity distribution . the background signal in the absence of fixed - pattern noise was obtained by using the intensity recorded for a single pixel over many measurements . to accurately reproduce the background present when imaging live cells , a sample of imaging medium ( sc ) was placed in the microscope during the measurements . the objective lens was focused to a point in the medium just beyond the coverslip , and 3000 images of a single focal plane were acquired using the shortest possible exposure time . this was performed under three different conditions . in the dark field condition , all shutters were open but no illumination source was used . for the gfp and tdtomato illumination conditions , the 491 nm and 532 nm excitation lasers were respectively turned on . since no fluorescent reporter is present in the sample , the measured intensity distributions represent an estimate of the detector background signal including scattered - light effects , as well as ccd readout and dark current noise . because the relevant units for describing noise in the photon detector are electrons , the signal is converted from adus to electrons in the ccd by multiplying by the ccd gain factor of 5.8 electrons / adu and dividing by the em gain factor used in the acquisition . detector noise from dark current is expected to vary as the square root of the product of the dark current and the exposure time . for the camera used , the dark current ( in electrons per pixel per second ) is small , so the noise from dark current is expected to be negligible . readout noise should be gaussian distributed with a small variance . histograms were calculated for a single pixel at different regions over the field - of - view for each illumination condition .
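the adu - to - electron conversion and the dark - current noise estimate described in this paragraph can be written compactly ; the snippet below is an illustrative python sketch , with the em gain and dark current given as placeholder arguments rather than the camera settings used in the experiments .

```python
import numpy as np

def adu_to_electrons(adu_values, ccd_gain=5.8, em_gain=100.0):
    """Convert pixel intensities from adu to electrons: multiply by the ccd
    gain (5.8 electrons/adu, as quoted in the text) and divide by the em gain
    used during acquisition (100 here is a placeholder value)."""
    return np.asarray(adu_values, dtype=float) * ccd_gain / em_gain

def dark_current_noise(dark_current_e_per_s, exposure_s):
    """Expected dark-current shot noise in electrons, sqrt(i_dark * t)."""
    return np.sqrt(dark_current_e_per_s * exposure_s)

def background_noise_from_timeseries(adu_timeseries, ccd_gain=5.8, em_gain=100.0):
    """Mean and standard deviation (in electrons) of a single-pixel intensity
    time series, as used to characterize the background under each
    illumination condition."""
    e = adu_to_electrons(adu_timeseries, ccd_gain, em_gain)
    return e.mean(), e.std()
```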
for the dark field , 491 nm illumination , and 532 nm illumination conditions , no appreciable change was observed between the histograms from different regions across the field of view . the intensity distribution for each of these conditions follows a gaussian profile , but with a small exponential tail . this exponential tail may be explained by multiplicative noise in the gain register of the em - ccd , which will produce an exponential distribution of pulse heights with many small pulses and few large ones . the image of a diffraction - limited spot was constructed by first generating a sphere of sub - diffraction diameter in a high resolution object space . the object space containing the sphere is a three - dimensional grid of voxels with dimensions of 29 nm in x and y , and 50 nm in z . the choice of object space pixel size is a compromise between approximation of a continuous space , which requires small pixel sizes , and available computer memory , which limits the size of arrays that can be convolved in a practical time frame . a gaussian approximation to the microscope psf was also generated in the object space using the parameters found earlier . the gaussian function used to generate the psf was not normalized and the amplitude was set to 1 unit . convolution of the object and psf was performed by two - dimensional fast fourier transform of each of the arrays , multiplication in the frequency domain , and finally , inverse fourier transformation ( fig . [ fig : img_sim ] ( c ) ) . to avoid circular convolution , the image and psf arrays were appropriately zero padded prior to fourier transformation . convolution of the object with the psf results in the intensity distribution of a diffraction - limited spot in the continuous image space , the magnitude of which is interpreted in terms of a number of photons . to generate a realistic image that is comparable to experimentally acquired images , it is necessary to sample the continuous image space intensity distribution into a discrete distribution collected by the ccd detector . to do this , the image space intensity distribution is resampled to a pixel grid , with each pixel representing an individual ccd element ( fig . [ fig : img_sim ] ( d ) ) . to simulate the spindle poles , two diffraction - limited spots are produced in the image . the ccd grid has voxel dimensions of 174 nm in x and y , and 300 nm in z , to mimic the experimental conditions . to construct the ccd voxel distribution , the continuous image space distribution was sampled at a number of focal planes in the z - stack . finite depth of field was explicitly accounted for at each z - slice by weighting the image space matrix by a gaussian function with an amplitude of 1 and a fixed standard deviation in pixels , centered on the particular z - slice . the intensity distribution in the focal plane was then constructed by integrating the depth - of - field - weighted image space matrix along z . each ccd pixel in the focal plane was formed by integrating the focal plane over square regions corresponding to the ccd pixels . the resulting matrix represents the distribution of photons impinging upon each of the ccd elements for each focal plane collected . this matrix is adjusted by a scaling factor to control the signal - to - noise . sub - pixel displacements of the object were produced by shifting the image space grid by one or more of its elements , with respect to the overlying ccd grid . detector noise and background offset were added to each image stack at intensity levels corresponding to experimental values .
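the convolution - and - resampling pipeline described above can be sketched as follows . this is a simplified 2-d python illustration of the same ideas ( fft convolution with zero padding , then integration over ccd - sized pixel blocks ) ; the padding scheme , the binning factor of 6 ( 29 nm object pixels versus 174 nm ccd pixels ) , and the function names are our own choices , not the paper 's code .

```python
import numpy as np

def convolve_with_gaussian_psf(obj_plane, psf_sigma_px):
    """Convolve one object-space plane with an unnormalized gaussian psf of
    amplitude 1 via fft, zero padding to twice the array size to avoid
    circular convolution (simplified 2-d version of the 3-d procedure)."""
    ny, nx = obj_plane.shape
    py, px = 2 * ny, 2 * nx
    yy, xx = np.indices((py, px), dtype=float)
    psf = np.exp(-((yy - py // 2) ** 2 + (xx - px // 2) ** 2) / (2.0 * psf_sigma_px ** 2))
    padded = np.zeros((py, px))
    padded[:ny, :nx] = obj_plane
    img = np.fft.ifft2(np.fft.fft2(padded) * np.fft.fft2(np.fft.ifftshift(psf))).real
    return img[:ny, :nx]

def resample_to_ccd(image_plane, bin_factor=6):
    """Integrate the continuous-space image over square regions corresponding
    to ccd pixels (e.g. 29 nm object pixels -> 174 nm ccd pixels)."""
    ny, nx = image_plane.shape
    ny_c, nx_c = ny // bin_factor, nx // bin_factor
    trimmed = image_plane[:ny_c * bin_factor, :nx_c * bin_factor]
    return trimmed.reshape(ny_c, bin_factor, nx_c, bin_factor).sum(axis=(1, 3))
```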
to represent the photon detection process accurately , poisson noise was simulated for each pixel by random sampling from a poisson distribution whose mean is the number of photons impinging on that pixel . the intensity units , which represent numbers of photons , were converted to grey scale units by multiplying by the quantum efficiency and by the gain factors of the ccd , and dividing by the photo - electron / adu conversion factor provided in the camera manual . these values could then be directly converted to integers that represent grey scale levels in the camera 's dynamic range ( 16 bit ) . the signal - to - noise ratio was calculated as the mean pixel value above background of the diffraction - limited spot divided by the standard deviation of the background pixel levels . the mean signal level , mean background level , and variance were calculated in terms of numbers of photons , and the signal - to - noise ratio is expressed as a decibel quantity . the localization error is not a monotonic function of the point - to - point separation . as the displacement decreases below a certain separation , there is a large increase in the localization error . at some smaller separation , the error decreases , and then upon further reduction in separation it rapidly diverges as the two features become difficult to distinguish . in order to examine this effect more closely , the measured point - to - point separation was compared to the true point - to - point separation in figure [ fig : img_sim : msep_vs_asep ] .

[ figure caption ( fig . [ fig : img_sim : msep_vs_asep ] ) : measured point - to - point separation versus actual separation ; the signal - to - noise ratio for this simulation is given in db . ]

we determined that the initial increase in error , at separations just below about 9 pixels , is due to a systematic over - estimation of the point - to - point separation . the overlap between the intensity distributions forces the least - squares fitting routine to localize the centroid of each intensity distribution away from the overlap region , resulting in overestimation of the point - to - point separation . as overlap increases , the centroids of each feature are forced back towards the overlap region , initially decreasing the estimated separation , until significant overlap has occurred at small separations and the two spots are difficult to distinguish . to investigate this behaviour of the localization error at close separations in greater detail , a simulation was performed wherein the features were localized in only one dimension . three - dimensional images of point features were created as described above . the features were displaced from each other along the z direction by a known amount and the z coordinate of each point was localized by fitting each intensity distribution independently to a one - dimensional gaussian function . the centroids were displaced at fractional pixel values along the z - axis and the integer pixel locations of the centroids were used as initial parameter estimates for the gaussian fits . the z direction was chosen because localization of the z coordinate has the largest error for closely spaced points . to localize features with limited spatial filtering , i.e. with extreme values of the filter parameters , the noise had to be reduced to unrealistically small values , mimicking unattainably perfect imaging conditions . for suitable comparison , the simulations in figure [ fig : img_sim:2poinintdist ] were performed with the same reduced noise levels .
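the photon - counting and signal - to - noise conventions used in these simulations can be sketched as follows in python . the quantum efficiency , gain , background and read - noise values are placeholders , and the decibel definition shown ( 10 log10 of the ratio ) is our assumption , since the exact expression is not reproduced in the text above .

```python
import numpy as np

def add_detection_noise(photon_image, quantum_efficiency=0.9, gain=1.0,
                        background=100.0, read_noise=1.0, seed=None):
    """Apply poisson (shot) noise to an image of expected photon counts and
    convert to a noisy detector signal; all parameter values are illustrative."""
    rng = np.random.default_rng(seed)
    photons = rng.poisson(photon_image)                    # shot noise
    signal = photons * quantum_efficiency * gain + background
    return signal + rng.normal(0.0, read_noise, photon_image.shape)

def snr_db(spot_pixels, background_pixels):
    """Mean signal above background divided by the background standard
    deviation, expressed in decibels (10*log10 assumed here)."""
    s = np.mean(spot_pixels) - np.mean(background_pixels)
    return 10.0 * np.log10(s / np.std(background_pixels))
```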
when filtering is used , it is a source of small overestimation of the separations . filtering removes high spatial frequency components from the image , predominantly noise ; however , the operation also smoothes components of the real intensity distribution that vary rapidly in space . when the distance between points is sufficiently small , the filtering operation modifies the summed intensity distribution of the two features such that the distance between the two is overestimated by fitting each feature independently to a gaussian function . the origin of the behavior of the localization error for features that are closely spaced is verified upon examining the effect of the band - pass filter . at some smaller separation , the effect of filtering in the overlap region becomes negligible compared to the effect of the overlap itself . at this separation the measured separation agrees with the actual separation again ( fig . [ fig : img_sim : msep_vs_asep ] ) . at this and smaller separations , the measured separation begins to systematically diverge from the actual separation , due to overlap of the feature psfs . plots of the intensity distributions before and after filtering for two different spacings are shown in figure [ fig : img_sim:2poinintdist ] . propagation of localization error in the separation of two points was studied in order to determine its role , if any , in the observable , the spindle pole - pole separation . the error for any arbitrary quantity that has been calculated from the set of coordinates for both features can be estimated by propagating the errors associated with each of the coordinates . for notational simplicity , the coordinates representing the positions of the two features are written as the set $\{ c_i \} = \{ x_1 , y_1 , z_1 , x_2 , y_2 , z_2 \}$ , where $i = 1 , \dots , 6$ . the errors for the coordinates are written generally as $\sigma_{c_i}$ , and the covariance terms are given by $\sigma_{c_i c_j}$ . the set of values $\{ c_i \}$ is the set of most probable estimates of the true coordinates , which are exactly known for the simulations . frequently when uncertainties are estimated , the covariance terms are assumed negligible . however , for parameters obtained from fitting a curve to data , the covariance terms can contribute significantly to uncertainties . the appropriate relation for estimating the uncertainty in a derived quantity $q = q(c_1 , \dots , c_6)$ is $$\sigma_q^2 \;=\; \sum_{i=1}^{6}\sum_{j=1}^{6} \frac{\partial q}{\partial c_i}\,\frac{\partial q}{\partial c_j}\,\sigma_{c_i c_j} \qquad \text{( eq . [ eq : errorprop ] )} ,$$ where the diagonal terms $\sigma_{c_i c_i} = \sigma_{c_i}^2$ are the variances . the error in the point - to - point distance can be determined directly during a simulation by comparing the real separation with the measured separation . this error can also be determined from the uncertainties of each coordinate via equation [ eq : errorprop ] , since the separation is $d = \sqrt{(x_1 - x_2)^2 + (y_1 - y_2)^2 + (z_1 - z_2)^2}$ . to test this , a simulation was performed with the displacement vector between the points oriented in an arbitrary direction in x , y , and z , and the error in $d$ determined over a range of separations . the results of this simulation are displayed in figure [ fig : img_sim : errorpropcompare ] .

[ figure caption ( fig . [ fig : img_sim : errorpropcompare ] ) : comparison of the directly measured error in the separation with the error obtained by propagating the coordinate uncertainties ; the signal - to - noise ratio for this simulation is 10.5 db . ]

it can be seen that for large separations the covariance terms are negligible . as the two features approach one another closely , the covariance terms become significant and must be included for an accurate estimate of the error . by simulating point - to - point separation trajectories along each of the coordinate directions , the form of all the variance and covariance terms can be obtained as a function of separation . using this relation , the errors for all quantities determined from feature finding can be estimated . chromatic shift was measured and corrected for prior to combining results obtained using two different wavelengths .
the chromatic shift in x , y , and z between the gfp and red acquisition channels was determined by imaging inspeck beads ( invitrogen ) immobilized on a poly - k - coated cover glass . the intensity distribution of each of the beads in a field - of - view was fit to a three - dimensional gaussian function to find the centroid of every bead in each channel . gfp- and red - channel centroid positions were then compared for each of the immobilized beads to determine the chromatic shift between the excitation channels , averaged across all the beads . the z position in the 532 nm channel was systematically higher than it was in the 491 nm channel for the 63x objective lens . the in - plane chromatic shift between the 532 nm and 491 nm channels was also measured in the x and y dimensions . the measured chromatic shift is corrected for in all succeeding analysis where data from the two excitation channels are combined .
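a minimal sketch of the shift estimation and correction , assuming positions are stored as ( x , y , z ) arrays ; the function names are ours .

```python
import numpy as np

def estimate_chromatic_shift(centroids_491, centroids_532):
    """Per-bead centroid difference between the two channels, averaged over
    all immobilized beads; returns the (dx, dy, dz) shift of the 532 nm
    channel relative to the 491 nm channel."""
    return np.mean(np.asarray(centroids_532) - np.asarray(centroids_491), axis=0)

def correct_chromatic_shift(positions_532, shift_xyz):
    """Subtract the measured shift so that 532 nm positions can be combined
    with 491 nm positions in a common coordinate frame."""
    return np.asarray(positions_532, dtype=float) - np.asarray(shift_xyz, dtype=float)
```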
we present feature finding and tracking algorithms in 3d in living cells , and demonstrate their utility to measure metrics important in cell biological processes . we developed a computational imaging hybrid approach that combines automated three - dimensional tracking of point - like features , with surface reconstruction from which cell ( or nuclear ) volume , shape , and planes of interest can be extracted . after validation , we applied the technique to real space context - rich dynamics of the mitotic spindle , and cell volume and its relationship to spindle length , in dividing living cells . these methods are additionally useful for automated segregation of pre - anaphase and anaphase spindle populations . we found that genetic deletion of the yeast kinesin-5 mitotic motor cin8 leads to abnormally large mother and daughter cells that were indistinguishable based on size , and that in those cells the spindle length becomes uncorrelated with cell size . the technique can be used to visualize and quantify tracked feature coordinates relative to cell bounding surface landmarks , such as the plane of cell division . by virtue of enriching the information content of subcellular dynamics , these tools will help advance studies of non - equilibrium processes in living cells . locating submicrometer entities at their address in a living cell is an important task in capturing subcellular dynamics . current imaging studies of subcellular , or subnuclear , dynamics of localized features of interest in small bounding volumes , for example in the important model organism budding yeast , the eukaryotic nucleus , or er / golgi bodies , suffer from important limitations . first , dynamics single features such as a chromosome locus position are often described only by position , without contextual information , subjecting the extracted features to translational , rotational , or vibrational noise , and to lack of knowledge of position in the cell s cycle . features that are free to move in three - dimensional ( 3d ) space are often either detected in two dimensions ( 2d ) , or in 3d without subpixel resolution in 3d , allowing out - of - plane motion to be ascribed to in - plane motion . in many other studies , contextual information is used , for example relative positions of two spots such as in the spindle pole bodies ( spbs ) , but without subpixel resolution in 3d , so that the uncertainty in the measurement is insufficiently small to enable new and valuable dynamical information at tens of nanometers scale to be uncovered . there are however important exceptions to these limitations . finally , in many reports cell volume is extracted by measuring an area and extrapolating to volume by assuming the cell has some regular azimuthal symmetry , which it may not have . in various micropipette aspiration studies to determine mechanical behavior , the cell has been either considered as a fully incompressible material ( with true 3d volume unmonitored during deformation ) , or has been assumed compressible without any emphasis on its importance . yet , a number of cell types have already been shown to be compressible . cell compressibility may prove to be more general . on account of these limitations , changes in cell volume due to deformation and other perturbations may be strongly underappreciated . commercial software solutions exist for extracting volume using isosurface methods on a continuous mass distribution and have been applied to cells . 
isosurface methods are notoriously sensitive to the threshold chosen . there is a need for methods of reconstructing cell or small subcellular surfaces and volumes that are relatively insensitive to thresholding , and that are robust to challenging conditions such as local regions of high curvature . clearly , additional , and more flexible and accessible , high - accuracy computational imaging tools for point and bounded - surface features are needed . such methods would not only enable direct volume measurements in combination with feature tracking of localized entities , but would facilitate accurate monitoring of cell volume under nonlinear deformation of cells , and open up new avenues for distinguishing between compressibility and viscoelastic bulk relaxation / fluid flow in the nonlinear deformation behavior of cells . here we describe a hybrid computational imaging technique that overcomes these obstacles and generates high resolution tracking of nanoscale to organelle - sized spots in living cells at identifiable locations within the cell , together with bounding surfaces , from hundreds of cells . the watertight bounding surface is obtained using computational geometry techniques and enables monitoring of the volumes of dividing cells and the orientation of the virtual plane separating them . the features can then be transformed to a new co - ordinate system unique to each cell if desired . these methods allow for accurate dissection of yeast mitosis or chromosome locus motions in their precise cellular context _ in vivo_. besides automated high - resolution , high - throughput image and data analysis , our technique also makes possible the extraction of new phenotypic metrics by which cell - to - cell heterogeneity across populations can be quantified . much cellular motion is stochastic and must be assessed statistically from large populations . the ability to calculate volumes and shapes in situations of low sampling of surface points is likely to become more important as more sophisticated models are developed for how the interior of the cell and nucleus are organized . here , we use these methods to demonstrate nanometer - scale measurements of directed motions or fluctuations of the eukaryotic mitotic spindle , using relative motion to unambiguously identify cells in different stages of cell division ; additionally , to demonstrate measurements of cell volume of individual small biological cells . we show that the resolution of the point feature finding in our optical system , for which we perform simulations of model spindle poles under similar image detection conditions to what is expected in the living cell , is determined in 3d and depends on the specific signal - to - noise in the acquired images . the volume methods were applied to a large population of non - genetically perturbed cells and populations of cells with mitotic kinesin-5 molecular motors deleted .
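to make the volume - from - surface - points idea concrete , here is a deliberately crude python sketch using a convex hull . the paper builds a watertight surface mesh with dedicated computational - geometry software , which can represent concavities such as the mother - bud neck ; a convex hull cannot , so this snippet only illustrates how sparse localized boundary points can be turned into a threshold - free volume estimate .

```python
import numpy as np
from scipy.spatial import ConvexHull

def volume_from_surface_points(points_xyz):
    """Volume enclosed by the convex hull of a sparse set of 3-d boundary
    points (x, y, z).  A real reconstruction of a budding cell would use a
    surface-meshing method that preserves concave regions; the hull is only
    a simple stand-in for illustration."""
    points = np.asarray(points_xyz, dtype=float)
    return ConvexHull(points).volume
```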
a symbolic framework for boundary problems was built up in for linear ordinary differential equations ( lodes ) ; see also for more recent developments .one of our long - term goals is to extend this to boundary problems for linear partial differential equations ( lpdes ) .since this is a daunting task in full generality , we want to tackle it in stages of increasing generality . in the first instance , we restrict ourselves to _ constant coefficients _ , where the theory is quite well - developed . within this classwe distinguish the following three stages : 1 .the simplest is the cauchy problem for _ completely reducible _ operators .2 . the next stage will be the _ cauchy problem _ for general hyperbolic lpdes .3 . after that we plan to study _ boundary problems _ for elliptic / parabolic lpdes . in this paperwe treat the first case ( section [ sec : cauchy - analytic ] ) .but before that we build up a _ general algebraic framework _ ( sections [ sec : boundary - data ] and [ sec : greens - operators ] ) that allows a symbolic description for all boundary problems ( lpdes / lodes , scalar / system , homogeneous / inhomogeneous , elliptic / hyperbolic / parabolic ) . using these concepts and toolswe develop a general solution strategy for the cauchy problem in the case ( 1 ) .see the conclusion for some thoughts about the next two steps .the passage from lodes to lpdes was addressed at two earlier occasions : * an _ abstract theory of boundary problems _ was developed in , including lodes and lpdes as well as linear systems of these .the concepts and results of sections [ sec : boundary - data ] and [ sec : greens - operators ] are built on this foundation , adding crucial concepts whose full scope is only appreciated in the lpde setting : boundary data , semi - homogeneous problem , state operator .* an algebraic language for _ multivariate differential and integral operators _ was introduced in section 4 of , with a prototype implementation described in section 5 of the same paper .this language is generalized in the pidos algebra of section [ sec : cauchy - analytic ] , and it is also implemented in a mathematica package . in this paperwe will not describe the current state of the _ implementation _ ( mainly because of space limitations ) .let us thus say a few words about this here .a complete reimplementation of the pidos package described in is under way . the new package is called opido ( ordinary and partial integro - differential operators ) , and it is implemented as a standalone mathematica package unlike its predecessor , which was incorporated into the theorema system .in fact , our reimplementation reflects several important design principles of theorema , emphasizing the use of functors and a strong support for modern two - dimensional ( user - controllable ) parsing rules .we have called this programming paradigm funpro , first presented at the mathematica symposium .the last current stable version of the ( prototype ) package can be found at http://www.kent.ac.uk/smsas/personal/mgr/index.html . 
at the time of writing , the ring of _ ordinary integro - differential operators _is completed and the ring of _ partial integro - differential operators _ is close to completion ( for two independent variables ) .compared to , the new pidos ring contains several crucial new rewrite rules ( instances of the substitution rule for resolving multiple integrals ) .our conjecture is that the new rewrite system is noetherian and confluent but this issue will be analyzed at another occasion ._ notation_. the algebra of matrices over a field is written as , where or is omitted .thus we identify with the space of column vectors and with the space of row vectors .more generally , we have .as mentioned in the introduction , we follow the _ abstract setting _ developed in . we will motivate and recapitulate some key concepts here , but for a fuller treatment of these issues we must refer the reader to and its references . let us recall the notion of boundary problem .fix vector spaces and over a common ground field of characteristic zero ( for avoiding trivialities one may assume and to be infinite - dimensional ) .then a _ boundary problem _ consists of an epimorphism and a subspace that is orthogonally closed in the sense defined below .we call the differential operator and the _ boundary space_. similar to the correspondence of ideals / varieties in algebraic geometry , we make use of the following galois connection ( * ? ? ?if is any subspace of the space , its _ orthogonal _ is defined as .dually , for a subspace of the dual space , the orthogonal is defined by .if we think of as `` functions '' and of as `` boundary conditions '' , then is the space of valid conditions ( the boundary conditions satisfied by the given functions ) while is the space of admissible functions ( the functions satisfying the given conditions ) .naturally , a subspace of either of or is called _ orthogonally closed _ if .but while any subspace of itself is always orthogonally closed , this is far from being the case of the subspaces of the dual .hence the condition on boundary spaces to be orthogonally closed is in general not trivial .however , if is finite - dimensional as in boundary problems for lodes ( as in example [ ex : heat - conduction ] below ) , then it is automatically orthogonally closed . for lpdes ,the condition of orthogonal closure is important ; see example [ ex : wave - inhom - medium ] for an intuitive explanation . in andalso in the abstract setting of we have only considered what is sometimes called the _ semi - inhomgeneous boundary problem _ , more precisely the semi - inhomogeneous incarnation of ; see definition [ def : transfer - operator ] for the full picture .this means we are given a _ forcing function _ and we search for a solution with in other words , satisfies the inhomogeneous `` differential equation '' and the homogeneous `` boundary conditions '' given in . a boundary problem which admits a unique solution for every forcing function called _ regular_. in terms of the spaces , this condition can be expressed equivalently by requiring that ; see for further details . in this paperwe shall deal exclusively with regular boundary problems . for singular boundary problems we refer the reader to and . for a regular boundary problem ,one has a linear operator sending to is known as the _ green s operator _ of the boundary problem . from the abovewe see that is characterized by and .[ ex : heat - conduction ] a classical example of this notion is the two - point boundary problem . 
as a typical case ,consider the simplified model of _ stationary heat conduction _ described by here we can choose for the function space such that the differential operator is given by and the boundary space by the two - dimensional subspace of spanned by the linear functionals and for evaluation on the left and right endpoint .in the sequel we shall write ] for denoting the orthogonal closure of the linear span , we can thus write ] , meaning is the orthogonal closure of the span of the . note that a boundary basis is typically smaller than a -linear basis of .all traditional boundary problems are given in terms of such a boundary basis .in example [ ex : cauchy - wave - eq ] , the boundary basis could be spelled out by using with and .relative to a boundary basis , we call the _ boundary values _ of .as we can see from the next proposition , we may think of the trace as a basis - free description of boundary values .conversely , one can always extract from any given boundary data the boundary values as its coordinates relative to the boundary basis .[ lem : trc - coord ] let be a boundary space with boundary basis . if for any one has then also . in particular , for any , the trace depends only on the boundary values . since is the image under the trace map , we have and for some .so assume for all and thus for all by the definition of orthogonal closure .then we have and thus .the _ analytic interpretation _ of this proposition is clear in concrete cases like example [ ex : wave - inhom - medium ] : once the values and are fixed , all differential and integral consequences , as in the above examples or , are likewise fixed .it is therefore natural that an interpolator need only consider the boundary values rather than the full trace information .this is the contents of the next lemma .[ lem : int - coord ] let be a boundary space with boundary basis and write for the boundary values of any and for the -subspace of generated by all boundary values . then any linear map with induces a unique interpolator defined by .we must show that is a right inverse of .so for arbitrary we must show . by the definition of we can write for some . since , we are left to prove .using lemma [ lem : trc - coord ] , it suffices to prove that , which is true by hypothesis . as noted above, we can always extract the boundary values of some boundary data relative to fixed basis of .however , since one normally has got _ only _ the boundary values ( coming from some function ) , where does the corresponding come from ? by definition , it has to assign values to all , not only to the making up the boundary basis . as suggested by the above lemmata , for actual computations those additional values will be irrelevant .nevertheless , it gives a feeling of confidence to provide these values : if is any _ interpolator _ , we have .this follows immediately from the fact that is a right inverse of the trace map and that it depends only on the boundary values by lemma [ lem : int - coord ] . in the analysis setting thismeans we interpolate the given boundary value and then do with the resulting function whatever is desired ( like derivatives and integrals in example [ ex : wave - inhom - medium ] ) .using the notion of boundary data developed in the previous section , we can now give the formal definition of the _ semi - homogeneous boundary problem_. 
in fact , we can distinguish three different incarnations of a `` boundary problem '' ( as we assume regularity , the _ fully homogeneous problem _ is of course trivial ) .[ def : transfer - operator ] let be a regular boundary problem with and boundary space . then we distinguish the following problems : [ cols="^,^,^ " , ] they are , respectively , called the _ fully inhomogeneous _ , the _ semi - inhomogeneous _ and the _ semi - homogeneous _ boundary problem for . the corresponding linear operators will be written as , and , and , .each of the three problems in definition [ def : transfer - operator ] has a unique solution for the respective input data , so the operators are well - defined .employing the usual superposition principle , we can restrict ourselves to the semi - inhomogeneous and the semi - homogeneous boundary problem .for the former , the existence and uniqueness is verified in . for the semi - homogeneous problem ,existence is seen as follows : since the given boundary data can be written as by the definition of , we have to find such that and for all .using the decomposition corresponding to the direct sum , we set .then is clear , and we check further that . for uniqueness , it suffices to prove that has only the trivial solution , which is clear since the sum is direct .the terminology for the operators is not uniform in the literature . in the past, we have only considered and called it the `` green s operator '' acting on a `` forcing function '' .while this is in good keeping with the engineering tradition and large parts of the standard mathematical culture , it is difficult to combine with suitable terminology for and . in this paper , we shall follow the systems theory jargon and refer to as the ( full ) _ transfer operator _, to as the ( zero - state ) signal transfer operator or briefly _ signal operator _ , and to as the ( zero - signal ) state transfer operator or briefly _ state operator_.this terminology reflects the common view of forcing functions as `` signals '' and boundary data as ( initial ) `` states '' .one of the advantages of the abstract formulation is that it allows us to describe the _ product of boundary problems _ in a succinct , basis - free manner ( and it includes lodes and lpdes as well as systems of these ) .the composite boundary problem can then be solved , both in its semi - inhomogneous and its semi - homogeneous incarnation ( the latter is presented here for the first time ) .[ prop : bp - product ] define the product of two boundary problems and with and , by then is regular if both factors are . in that case , if , have , respectively , the signal operators , and the state operators , , then has the signal operator and the state operator acting by .the preservation of regularity and the relation for the signal operators is proved in proposition3.2 of . forthe statement about the composite state operator , note first that the sum is direct by ( * ? ? ?* ( 3.2 ) ) ; hence the definition is consistent .now let be arbitrary boundary data and set .then since and both and vanish by the definition of state operator .hence the differential equation of the composite semi - homogeneous boundary problem is satisfied .it remains to check the boundary conditions for and for .for the first , we use and again to compute .since and is the state operator for , we obtain as required . 
for the second set of boundary conditions , we use that since is the signal operator with homogeneous boundary conditions . hence , and now the claim follows because is the state operator for the inhomogeneous boundary conditions . as detailed in , the computation of the _ signal operator _ can be decomposed into two parts : ( 1 ) finding a _ right inverse _ of the differential operator , which involves only the differential equation without boundary conditions ( so we may replace the boundary by initial conditions , thus having again a unique solution : this is the so - called fundamental right inverse ) . ( 2 ) determining the _ projector _ onto the homogeneous solution space along the space of functions admissible for the given boundary conditions ; the projector `` twists '' the solutions coming from the right inverse into solutions satisfying the boundary conditions . an analogous result holds for the computation of the _ state operator _ if we replace the right inverse of the differential operator by the interpolator for the boundary space . [ prop : proj - transfer ] let be regular with operators as in definition [ def : transfer - operator ] . then we have and , hence for the transfer operator . here is any right inverse of the differential operator and any interpolator for while is the projector determined by and . the formula for the signal operator is given in eq . ( 2.3 ) of . for proving the statement about the state operator , let be arbitrary and set . then follows since . furthermore , for every we have by the definition of and . this means that satisfies the boundary conditions , so solves the semi - homogeneous boundary problem for . if is a completely reducible differential operator with constant coefficients , the determination of a right inverse reduces to solving an inhomogeneous first - order equation with constant coefficients , which is of course straightforward ( lemma [ lem : transport - equation ] ) . also the determination of the interpolator turns out to be easy for a cauchy problem since it is essentially given by the corresponding taylor polynomial . hence it remains to find some means for computing the _ kernel projector _ for a boundary problem . in the case of a lode of order , the method for computing the projector given in the proof of theorem 26 of and in section 6 of is essentially a gaussian elimination on the so - called _ evaluation matrix _ in $k^{n \times n}$ , obtained by applying the boundary conditions to a fundamental system of the homogeneous equation . the _ pidos algebra _ is generated by the basic partial differentiation and integration operators , the induced substitutions , and the exponential basis polynomials , the latter being specified by their exponents and frequencies . obviously , this algebra can be described by a _ rewrite system _ ( pidos = partial integro - differential operator system ) , analogous to the one given in . we will present this system in more detail , in particular proofs of termination and confluence , at another occasion . since in this paper we restrict ourselves to the analytic setting , we can appeal to the well - known _ cauchy - kovalevskaya theorem _ ( thm . 2.22 ) for ensuring the existence and uniqueness of the solution of the cauchy problem . while the theorem in its usual form yields only local results , there is also a global version ( 7.4 ) that provides a good foundation for our current purposes . since this form of the theorem is not widely known , we repeat the statement here .
as usual , we designate one _ lead variable _ , writing the other ones as before .note that in applications is not necessarily time .the apparently special form of the differential equation implies no loss of generality : whenever ] be a differential operator in caucy - kovalevskaya form with respect to , meaning with and .then the cauchy problem has a unique solution for given . in the _ abstract language _ of sections [ sec : boundary - data ] and[ sec : greens - operators ] this is the semi - homogeneous boundary problem with boundary space ,\ ] ] where the evaluation is written as the substitution denotes. hence the solution of is given by the state operator if we identify the boundary data with its coordinate representation relative to the above boundary basis .in detail , is the unique linear map sending to ; confer lemma [ lem : trc - coord ] for the uniqueness statement . in the sequelthese identifications will be implicit . for future reference, we mention also that the usual taylor polynomial allows one to provide a natural _ interpolator _ for the initial data , namely which we will not need here because compute the kernel projector directly from its first - order factors . in this paper, we will study the cauchy problem for a _ completely reducible operator _ , meaning one whose characteristic polynomial = { \mathbb{c}}[\lambda_1 , \dots , \lambda_n] ] . by a well - known consequence of the ehrenpreis - palamodov theorem ,the general solution of is the sum of the general solutions of the factor equations ; see the corollary on .hence it remains to consider differential operators that are powers of first - order ones ( we may assume all nonconstant coefficients are nonzero since otherwise we reduce after renaming variables ) . [lem : power - first - order ] let ] be a first - order operator with all .then the cauchy problem , has the state operator .moreover , the differential operator has the right inverse here is the transformation with , and is its inverse .this follows immediately from lemma [ lem : power - first - order ] .the right inverse is computed using lemma 3 of after transforming the lpde to a lode . by proposition[prop : proj ] , we can determine the _ kernel projector _ for the cauchy problem of lemma [ lem : transport - equation ] as , where in this simple case . having the kernel projector and the right inverse in lemma [ lem : transport - equation ] , the _ signal operator _is computed by as usual .now we can tackle the general cauchy problem by a simple special case of proposition [ prop : bp - product ] .[ prop : ivp - product ] let ] .since each of these is defined as a biorthogonal , it suffices to prove that the system has the same solutions as the system .but the latter is given by where the first term and the -derivatives vanish since .using , this implies that the two systems are indeed equivalent .this settles the completely reducible case : using proposition [ prop : ivp - product ] we can break down the _ general cauchy problem _ into first - order factors with single initial conditions . for each of thesewe compute the state and signal operator via lemma [ lem : transport - equation ] , hence the state and signal operator of by proposition [ prop : bp - product ] .as explained in the introduction , we see the framework developed in this paper as the first stage of a more ambitious endeavor aimed at boundary problems for general constant - coefficient ( and other ) lpdes . 
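to make the first - order building block concrete , the following display sketches what lemma [ lem : transport - equation ] amounts to in a simple special case ( two independent variables , one constant coefficient ) . the specific operator and symbols are our own illustrative choices , obtained by the standard method of characteristics , and are not quoted from the lemma itself .

```latex
% illustrative special case of lemma [lem:transport-equation]:
%   T = \partial_z + c\,\partial_x  with constant c,   Tu = f,   u(0,x) = g(x).
\[
  u(z,x) \;=\;
  \underbrace{g\bigl(x - c\,z\bigr)}_{\text{state operator applied to } g}
  \;+\;
  \underbrace{\int_0^{z} f\bigl(\zeta,\, x - c\,(z-\zeta)\bigr)\, d\zeta}_{\text{signal operator applied to } f} .
\]
% the first term solves the homogeneous equation with the prescribed initial data,
% while the integral term is a right inverse of T with homogeneous initial
% conditions, matching the decomposition of proposition [prop:proj-transfer].
```

higher powers of such factors , and products of several factors , are then handled by composing the corresponding operators as in proposition [ prop : bp - product ] .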
following the enumeration of the introduction , the next steps are as follows : 1. stage ( 1 ) was presented in this paper , but the _ detailed implementation _ for some of the methods explained here is still ongoing .the crucial feature of this stage is that it allows us to stay within the ( rather narrow ) confines of the pidos algebra . in particular , no fourier transformations are needed in this case , so the analytic setting is entirely sufficient .2 . as we enter stage ( 2 ) , it appears to be necessary to employ stronger tools .the most popular choice is certainly the _ framework of fourier transforms _ ( and the related laplace transforms ) .while this can be algebraized in a manner completely analogous to the pidos algebra , the issue of choosing the right function space becomes more pressing : clearly one has to leave the holomorphic setting for more analysis - flavoured spaces like the schwartz class or functions with compact support .( as of now we stop short of using distributions since that would necessitate a more radical departure , forcing us to give up rings in favor of modules . )3 . for the treatment of genuine boundary problems in stage ( 3 )our plan is to use a powerful generalization of the fourier transformation the _ ehrenpreis - palamodov integral representation _ , also applicable to systems of lpdes .much of this is still far away .but the _ general algebraic framework _ for boundary problems from sections [ sec : boundary - data ] and [ sec : greens - operators ] is applicable , so the main work ahead of us is to identify reasonable classes of lpdes and boundary problems that admit a symbolic treatment of one sort or another .anja korporal , georg regensburger , and markus rosenkranz .regular and singular boundary problems in maple . in _ proceedings of the 13th international workshop on computer algebra in scientific computing , casc2011 ( kassel , germany , september 5 - 9 , 2011 ) _ , volume 6885 of _ lecture notes in computer science_. springer , 2011 .anja korporal , georg regensburger , and markus rosenkranz .symbolic computation for ordinary boundary problems in maple . in _ proceedings of the 37th international symposium on symbolic and algebraic computation ( issac12 )software presentation .ulrich oberst and franz pauer .the constructive solution of linear systems of partial difference and differential equations with constant coefficients ., 12(3 - 4):253308 , 2001 .special issue : applications of grbner bases to multidimensional systems and signal processing .markus rosenkranz , georg regensburger , loredana tec , and bruno buchberger . a symbolic framework for operations on linear boundary problems . in vladimirp. gerdt , ernst w. mayr , and evgenii h. vorozhtsov , editors , _ computer algebra in scientific computing .proceedings of the 11th international workshop ( casc 2009 ) _ , volume 5743 of _ lncs _ , pages 269283 , berlin , 2009 .springer .markus rosenkranz , georg regensburger , loredana tec , and bruno buchberger .symbolic analysis of boundary problems : from rewriting to parametrized grbner bases . in ulrich langer andpeter paule , editors , _ numerical and symbolic scientific computing : progress and prospects _ , pages 273331 .springer , 2012 .
we introduce a general algebraic setting for describing linear boundary problems in a symbolic computation context , with emphasis on the case of partial differential equations . the general setting is then applied to the cauchy problem for completely reducible partial differential equations with constant coefficients . while we concentrate on the theoretical features in this paper , the underlying operator ring is implemented and provides a sufficient basis for all methods presented here .
the omnipresence of power - laws in natural , socio - economic , technical , and living systems has triggered immense research activity to understand their origins . it has become clear in the past decades that there exist several distinct ways to generate power - laws ( or asymptotic power - laws ) ; for an overview see for example . in short , power - laws arise in critical phenomena , in systems displaying self - organized criticality , in preferential attachment type processes , in multiplicative processes with constraints , in systems described by generalized entropies , and in sample space reducing processes , i.e. processes that reduce the number of possible outcomes ( sample space ) as they unfold . literally thousands of physical , natural , man - made , social , and cultural processes exhibit power - laws , the most famous being earthquake magnitudes , city sizes , foraging and distribution patterns of various animal species , evolutionary extinction events , or the frequency of word occurrences in languages , known as zipf 's law . it is obvious that estimating power - law exponents from data is a task that sometimes should be done with high precision , for example if one wants to determine the universality class a given process belongs to , or when one estimates probabilities of extreme events . in such situations small errors in the estimation of exponents may lead to dramatically wrong predictions with potentially serious consequences . estimating power - law exponents from data is not an entirely trivial task . many reported power - laws are simply not exact power - laws , but follow other distribution functions . despite the importance of developing adequate methods for distinguishing real power - laws from alternative hypotheses , we will not address this issue here , since good standard literature on the topic of bayesian _ alternative hypotheses testing _ exists ; see for example . for power - laws some of these matters have been discussed also in . here we simply focus on estimating power - law exponents from data on a sound probabilistic basis , using a classic bayesian parameter estimation approach that provides us with _ maximum likelihood _ ( ml ) estimators for estimating power - law exponents over the full range of reasonably accessible values . having such estimators is of particular interest for a large class of situations where exponents close to one appear ( zipf 's law ) . we will argue here that whenever dealing with data we can assume discrete and bounded sample spaces ( domains ) , which guarantees that power - laws are normalizable for arbitrary powers . we then show that the corresponding ml estimator can also be used to estimate exponents from data that is sampled from continuous sample spaces , or from sample spaces that are not bounded from above . in physics , the theoretical understanding of a process sometimes provides us with the luxury of knowing the exact form of the distribution function that one has to fit to the data . for instance , think of critical phenomena such as ising magnets in 2 dimensions at the critical temperature , where it is understood that the susceptibility follows a power - law with a critical exponent that occasionally can even be predicted mathematically . however , often and especially when dealing with complex systems we do not enjoy this luxury and usually do not know the exact functions to fit to the data .
in such a case ,let us imagine that you have a data set and from first inspection you think that a power - law fit could be a reasonable thing to do .it is then essential , before starting with the fitting procedures , to clarify what one knows about the process that generated this data .the following questions may help to do so .plfit and r can be used .[ fig : diagram ] ] * do you have information about the dynamics of the process that is generating what appears to be a power - law ? * is the data generated by a bernoulli process ( e.g. tossing dice ) , or not ( e.g. preferential attachment ) ?* is the data available as a collection of samples ( a list of measurements ) , or only coarse - grained in form of a histogram ( binned or aggregated data ) .* is the data sampled from a discrete ( e.g. text ) or continuous sample space ( e.g. earthquakes ) ? * does the data have a natural ordering ( e.g. magnitudes of earthquakes ) , or not ( e.g. word frequencies in texts ) ?the decisions one has to take before starting to estimate power - law exponents are shown as a decision - tree in fig .( [ fig : diagram ] ) . if it is known that the process generating the data is not a bernoulli process ( for example if the process belongs to the family of history dependent processes such as e.g. preferential attachment ) , then one has the chance to use this information for deriving parameter estimators that are tailored exactly for the particular family of processes .if no such detailed information is available one can only treat the process as if it were a bernoulli process , i.e. information about correlations between samples is ignored .if we know ( or assume ) that the data generation process is a bernoulli process , the next thing to determine is whether the data is available as a collection of data points , or merely as coarse grained information in form of a histogram that collects distinct events into bins ( e.g. histograms of logarithmically binned data ) . if data is available in form of a data set of samples ( not binned ) , a surprisingly general maximum likelihood ( ml ) estimator can be used to predict the exponent of an underlying power - law .this estimator that we refer to as , will be derived in the main section .its estimates for the underlying exponent , are denoted by .the code for the corresponding algorithm we refer to as ` r_plfit ` .if information is available in form of a histogram of binned data , a different estimator becomes necessary .the corresponding algorithm ( ` r_plhistfit ` ) is discussed in appendix a and in the section below on discrete and continuous sample spaces .both algorithms are available as matlab code . for how to use these algorithms ,see appendix b. if we have a dataset of samples ( not binned ) , so that the ` r_plfit ` algorithm can be used , it still has to be clarified whether the data has a natural order or not ?numerical observables such as earthquake magnitudes are _ naturally _ ordered .one earthquake is always stronger or smaller than the other .if observables are non - numeric , such as word types in a text , then a natural order can not be known _ a priori_. 
the natural ordercan only be inferred approximately by using so - called _ rank - ordering _ ; or alternatively by using the so - called _ frequency distribution _ of the data .details are discussed below in the section on rank - order , frequency distributions , and natural order .other issues to clarify are to see if a given sample space is continuous or discrete , and if the sample space is bounded or unbounded .these questions however , turn out to be not critical .one might immediately argue that for unbounded power - law distribution functions normalization becomes an issue for exponents .however , this is only true for bernoulli processes on _ unbounded _ sample spaces . since all real - world data sets are collections of finite discrete values one never has to actually deal with normalization problems .moreover , since most experiments are performed with apparati with finite resolution , most data can be treated as being sampled from a bounded , discrete sample space , or as binned data . for truly continuous processesthe probability of two sampled values being identical is zero .therefore , data sampled from continuous distributions can be recognized by sample values that are unique in a data set .see appendix a for more details .statistically sound ways to fit power - laws were advocated and discussed in .they overcome intrinsic limitations of the _ least square _( ls ) fits to logarithmically scaled data , which were and are widely ( and often naively ) used for estimating exponents .the ml estimator that was presented in we refer to as the ( for clauset - shalizi - newman ) estimator ; its estimates for the exponent we denote by .the approach that leads to focuses on continuous data that follows a power - law distribution from eq .( [ pow ] ) , and that is bounded from below but is not bounded from above ( i.e. with ) . in emphasisis put on how ml estimators can be used to infer whether an observed distribution function is likely to be a power - law or not .also the pros and cons of using cumulative distribution functions for ml estimates are discussed , together with ways of treating discrete data as continuous data . for the continuous and unbounded case ,simple explicit equations for the estimator can be derived .the continuous approach however , even though it seemingly simplifies computations , introduces unnecessary self - imposed limitations with respect to the range of exponents that can be reliably estimated . works brilliantly for a range of exponents between and .here we show how to overcome these limitations and by doing so extend the accessible range of exponents by presenting the exact methodology for estimating for discrete bounded data with the estimator . while this approach appears to be more constrained than the continuous one we can show also theoretically that data from continuous and potentially unbounded sample spaces can be handled within essentially the same general ml framework as well .the key to the estimator is that it is not necessary to derive explicit equations for finding .implicit equations in exist for power - law probability distributions over discrete or continuous sample spaces that are both bounded from below _ and _ above .solutions can be easily obtained numerically .an implementation of the respective algorithms can be found in , for a tutorial see appendix b. 
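to make the implicit - equation idea concrete , here is a hedged python sketch of a maximum likelihood fit for a power - law on a bounded discrete sample space ; it is not the authors ` r_plfit ` code , and the sample - space size and search bounds are illustrative assumptions . instead of solving a closed - form equation , the log - likelihood is maximized numerically , which is all the bounded discrete case requires .

```python
import numpy as np
from scipy.optimize import minimize_scalar

def ml_exponent_discrete_bounded(samples, W):
    """maximum-likelihood exponent for p(k) ~ k**(-gamma) on {1, ..., W}.
    there is no closed-form solution here, so the negative log-likelihood
    is minimized numerically over an (assumed) search interval."""
    x = np.asarray(samples, dtype=float)
    support = np.arange(1, W + 1, dtype=float)
    sum_log_x = np.sum(np.log(x))

    def neg_log_likelihood(gamma):
        z = np.sum(support ** (-gamma))   # finite for any gamma on a bounded support
        return gamma * sum_log_x + len(x) * np.log(z)

    return minimize_scalar(neg_log_likelihood, bounds=(-5.0, 10.0), method="bounded").x

# toy check: sample from a known exponent and recover it
rng = np.random.default_rng(0)
W, gamma_true = 1000, 0.8                 # exponents below one are fine on bounded supports
k = np.arange(1, W + 1)
p = k ** (-float(gamma_true)) / np.sum(k ** (-float(gamma_true)))
data = rng.choice(k, size=10000, p=p)
print(ml_exponent_discrete_bounded(data, W))
```

in this toy check the recovered exponent should come out close to 0.8 , an exponent region that is not accessible to estimators built for unbounded continuous sample spaces .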
there exist three distinct types of distribution functions that are of interest in the context of estimating power - law exponents : + i : : the _ probability distribution _ assigns a probability to every observable state - value .discrete and bounded sample spaces are characterized by state - types , with each type being associated with a distinct value .ii : : the _ relative frequencies _ , , where is the number of times that state - type is observed in experiments . is the _ histogram _ of the data . as explained below in detail, the relative frequencies can be ordered in two ways .+ if is ordered according to their descending magnitude this is called the _rank ordered _ distribution .+ if is ordered according to the descending magnitude of the probability distribution , then they are _ naturally ordered _relative frequencies .iii : : the _ frequency distribution _ counts how many state - types fulfill the condition . in fig .( [ fig : rankvsfreq ] ) we show these distribution functions .there data points are sampled from , with probabilities .the probability distribution is shown ( red ) .the relative frequency distribution is plotted in natural order ( blue ) , the rank - ordered distribution is shown with the yellow line , which clearly exhibits an exponential decay towards the the tail .the inset shows the frequency distribution of the same data .we next discuss how different sampling processes can be characterized in terms of natural order , rank - order , or frequency distributions . for some sampling processes the ordering of the observed statesfor example think of representing the numerical values of earthquake magnitudes . hereany two observations and can be ordered with respect to their numerical value , or their _natural order_. since power - law distributions are monotonic this is equivalent to ranking observations according to the probability distribution they are sampled from : the most likely event has _natural rank _ , the second most likely rank , etc . in other words, we can order state - types in a way that over the sample space , is a monotonic and decreasing function . with an exponent ( red line ) .the relative frequencies are shown for sampled data points according to their natural ( prior ) ordering that is associated with ( blue ) . the rank - ordered distribution ( posterior )is shown in yellow , where states are ordered according to their observed relative frequencies .the rank - ordered distribution follows a power - law , except for the exponential decay that starts at rank .a low frequency cut - off should be used to remove this part for estimating exponents .the inset shows the frequency distribution that describes how many states appear times ( green ) .the frequency distribution has a maximum and a power - law tail with exponent . to estimate , one should only consider the tail of the frequency distribution function .[ fig : rankvsfreq ] ] if is not known _ a priori _ because the state - types have no numerical values attached , as happens for example with words in a text , we can only count relative frequencies ( a normalized histogram ) of states of type , _ a posteriori _ ,i.e. after sampling . to be clear ,let be the histogram of recorded states . is the number of times we observed type , then is the relative frequency of observing states of type .after all samples are taken , one can now order states with respect to , such that the rank is assigned to state with the largest , rank to with the second largest , etc . 
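for concreteness , the objects listed above can be computed from raw samples in a few lines ; the following python sketch ( illustrative only , not from the paper ) builds the histogram of state - types , the rank - ordered relative frequencies , and the frequency distribution .

```python
from collections import Counter
import numpy as np

def rank_ordered_and_frequency(samples):
    """return (i) the rank-ordered relative frequencies, sorted by descending
    magnitude, and (ii) the frequency distribution: how many state-types were
    observed exactly m times."""
    counts = Counter(samples)                       # histogram k_i of observed state-types
    n = sum(counts.values())
    rank_ordered = np.sort(np.fromiter(counts.values(), dtype=float))[::-1] / n
    freq_dist = dict(sorted(Counter(counts.values()).items()))
    return rank_ordered, freq_dist

tokens = "the quick brown fox jumps over the lazy dog the fox".split()
r, phi = rank_ordered_and_frequency(tokens)
print(r)     # descending relative frequencies (rank-ordered)
print(phi)   # {1: 6, 2: 1, 3: 1} -- six types seen once, one twice, one three times
```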
is called the _ rank - ordered _ distribution of the data . the natural order imposed by andthe rank - order imposed by are not identical for finite .however , if data points have been sampled independently , then converges toward ( for ) and the rank - order induced by will asymptotically approach the natural order induced by . the highest uncertainty on estimating the order induced by using associated with the least frequent observations .therefore , when estimating exponents from rank - ordered distributions , one might consider to use a low - frequency cut - off to exclude infrequent data .exponents of power - laws can also be estimated from _ frequency _distributions .these counts how many distinct state - types occur exactly times in the data .it does not depend on the natural ( prior ) order of states and therefore is sometimes preferred to the ( posterior ) rank - ordered distribution .however , complications may appear also when using .the frequency distribution that is associated with a power - like probability distribution ( and asymptotically to ) is not an exact power - law but a non - monotonic distribution ( with a maximum ) .only its tail decays as a power - law , .the exponents and are related through the well known equation if the probability distribution has exponent , the tail of the associated frequency distribution has exponent . since the frequency distribution behaves like a power - law only in its tail , estimating makes it necessary to constrain the observed data to large values of .note that this is equivalent to using a low - frequency cut - off .one option to do that is to derive a maximum entropy functional for and fit the resulting ( approximate ) max - ent solution to the data .we do not follow this route here .if the natural order of the data is known , one can directly use the natural ordered data in the ml estimates for the exponents . if it is not known , either the rank - ordered distribution can be used to estimate , or the frequency distribution to estimate , see fig .( [ fig : diagram ] ) .one might also estimate both , in the rank ordered distribution , and in the frequency distribution of the data .( [ alphalambda ] ) to compare the two estimates may be used as a rough quality - check .if estimates do not reasonably coincide one should check whether the used data ranges have been appropriately chosen .if large discrepancies remain between and this might indicate that the observed distribution function in question is only an approximate power - law , for which eq .( [ alphalambda ] ) need not hold . for a tutorial on how to use ` r_plfit ` to perform estimates see appendix b. data can originate from continuous sample spaces ] , where is a finite fixed number , say .those values may be chosen to be given by the expression for .the parameters and are defined in the following way : first define , and , where and are parameters of the algorithm .then define with . if is the optimal solution of eq .( [ bayes4 ] ) for some , then we can choose , and and .one then continues by iterating times until , where is the desired accuracy of the estimate of . as a consequence ,the value , for which holds , optimally estimates in the iteration with an error smaller than .note that is the error of the -estimator with respect to the exact value of the predictor , and is not the error of with respect to the ( typically unknown ) value of the exponent of the sampling distribution . 
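the rough quality check mentioned above , comparing an exponent fitted to the rank - ordered distribution with one fitted to the tail of the frequency distribution , can be coded in a few lines . the relation used below , lambda = 1 + 1/alpha , is the standard zipf relation ; it is assumed here because the equation itself did not survive this extraction , and the tolerance is an arbitrary choice .

```python
def zipf_consistency_check(alpha_hat, lambda_hat, tol=0.2):
    """rough quality check: an exponent alpha_hat fitted to the rank-ordered
    distribution and an exponent lambda_hat fitted to the tail of the
    frequency distribution should roughly satisfy lambda = 1 + 1/alpha
    (standard Zipf relation, assumed here). `tol` is an arbitrary tolerance."""
    predicted = 1.0 + 1.0 / alpha_hat
    return abs(predicted - lambda_hat) < tol, predicted

# classic Zipf: alpha ~ 1 should correspond to lambda ~ 2
print(zipf_consistency_check(alpha_hat=1.0, lambda_hat=2.05))   # (True, 2.0)
```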
, and .for values of in the range between 0 and 4 , we sample events from , from a power - law probability distribution .the estimated exponents for the estimators ( red ) , the ( green , ) , and the new ( black , ) , are plotted against the true value of the exponent of the probability distribution samples are drawn from .clearly , below the estimator no longer works reliably . and work equally well in a range of . outside this range performs consistently better than the other methods .the inset shows the mean - square error of the estimated exponents .the ls - estimator has a much higher over the entire region , than the -estimator .the blue dot represents the estimate for the zipf exponent of c. dickens `` a tale of two cities '' .clearly , this exponent could never reliably be obtained from the rank ordered distribution using , whereas works fine even for values of .[ fig : comparison ] , title="fig : " ] + controlling the fit region over which the power - law should be obtained therefore becomes a matter of restricting the sample space to a convenient .this can be used for dynamically controlling low - frequency cut - offs .these cut - offs are set to exclude states for which , where is the minimal number of times that any state - type is represented in the data set .this means that we re - estimate on with we see in eq .( [ bayes4 ] ) that iteratively adapting to subsets , and then re - evaluating , requires to solve , where is the restricted sample - size and are the relative frequencies re - normalized for . is the index - set of . iterating this procedure either leads to a fixed point or to a limit cycle between two low - frequency cut - offs with two slightly different estimates for .these two possibilities need to be considered in order to implement an efficient stopping criterion for the iterative search of the desired low - frequency cut - off in the data .the algorithm therefore consists of two nested iterations .the `` outer iteration '' searches for the low - frequency cut - off , the `` inner iteration '' solves the implicit equation for the power - law exponent . the matlab code for the algorithm is found in , see appendix b for a tutorial .to test the proposed algorithm implementing the estimator , we first perform numerical experiments and then test its performance on a number of well known data sets . for 400 different values of , ranging from to , we sample data points , with states , with probabilities .we fit the data in three ways , using ( i ) least square fits ( ls ) , ( ii ) the csn algorithm providing estimates , and ( iii ) the implicit method providing estimates . in fig .[ fig : comparison ] we show these estimates for the power exponents , as a function of the true _ values _ of .the , , estimators are shown as the red , green , and black curves respectively . obviously and work equally well for power - law exponents with values . in this rangethe three approaches coincide .however , note that in the same region the mean square error , where is the number of repetitions , i.e. the number of data - sets we sampled from the , . is the value estimated for from the data set .depending on the estimator corresponds to ( ) , , ( ) , or the ls estimator .we used and for any given . 
] for the ls method is much larger than for and .outside this range the assumptions and approximations used for start to lose their validity and both and estimates outperform the estimates .the inset also shows that consistently estimates much better than the estimator ( two orders of magnitude better in terms of ) for the entire range of .the blue dot in fig .[ fig : comparison ] represents the estimate for the zipf exponent of c. dickens ` a tale of two cities ' .clearly , this small exponent could never be obtained by , see also tab .[ table1 ] ..comparison of the estimators and on empirical data sets that were used in .these include the frequency of surnames , intensity of wars , populations of cities , earthquake intensity , numbers of religious followers , citations of scientific papers , counts of words , wealth of the forbes 500 firms , numbers of papers authored , solar flare intensity , terrorist attack severity , numbers of links to websites , and forest fire sizes .we added the word frequencies in the novel a tale of two cities " ( c. dickens ) .the second column states if or were estimated .the exponents reported in are found in column , those reproduced by us applying their algorithm to data is shown in column .the latter correspond well with the new algorithm . for values , can not be used .we list the corresponding values for kolmogorov - smirnov test for the two estimators , and .[ table1 ] [ cols="<,^,^,^,^,^,^",options="header " , ] we finally compare the new estimator on several empirical data sets that were used for demonstration in . in tab .[ table1 ] we collect the results .the second column states if or were estimated .column presents the value of the estimator as presented in .column contains the values of the same estimator using the data from and using the algorithm provided by .the results for the estimator agrees well with those of in the range where the latter works well . to demonstrate how works perfectly outside of the comfort zone of ( for ), we add the result of the rank distribution of word counts in the novel a tale of two cities " ( charles dickens , 1859 ) , which shows an exponent of .this exponent can be fitted directly from the data using the proposed algorithm , while can not access this range , at least not without the detour of first producing a histogram from the data and then fitting the tail of the frequency distribution .the values for the corresponding kolmogorov - smirnov tests ( see e.g. ) for the two estimates , and , are similar for most cases .we discuss the generic problem of estimating power - law exponents from data sets .we list a series of questions that must be clarified before estimates can be performed .we present these questions in form of a decision tree that shows how the answers to those questions lead to different strategies for estimating power - law exponents . 
to follow this decision treecan be seen as a recipe for fitting power exponents from empirical data .the corresponding algorithms were presented and can be downloaded as matlab code .the two algorithms we provide are based on a very general ml estimator that maximizes an appropriately defined cross entropy .the method can be seen as a straight forward generalization of the idea developed in .the two estimators ( one for binned histograms and for raw data sets ) allow us to estimate power - law exponents in a much wider range than was previously possible .in particular , exponents lower than can now be reliably obtained .this work was supported in part by the austrian science foundation fwf under grant p29252 .b.l . is grateful for the support by the china scholarship council , file - number 201306230096 .10 url # 1`#1`urlprefix[2]#2 [ 2][]#2 m.e.j .newman , _ power - laws , pareto distributions and zipf s law _ , contemporary physics 2005 ; * 46 * 32351 . m. mitzenmacher , _ a brief history of generative models for power - law and lognormal distributions _ , internet mathematics 2004 ; * 1 * 22651kadanoff , et al . , _static phenomena near critical points : theory and experiment _ , rev .1967 ; * 39 * 395413 .d. sornette , _ critical phenomena in natural sciences _ , springer , berlin , 2006 .p. bak , c. tang , and k. wiesenfeld , _ self - organized criticality : an explanation of 1/f noise _ , phys .lett . 1987 ; * 59 * 38184 .simon , _ on a class of skew distribution functions _ , biometrika 1955 ; * 42 * 42540. a.rka , and a.l .barabsi , _ statistical mechanics of complex networks _ , rev . mod . phys .2002 ; * 74 * 4797 .barabsi , and a. rka , _ emergence of scaling in random networks _ ,science 1999 ; * 286 * 509 - 12 .yule , _ a mathematical theory of evolution , based on the conclusions of dr .j. c. willis , f.r.s _ , phil .royal soc .b 1925 ; * 213 * 2187 .h. takayasu , a .- h .sato , and m. takayasu , _ stable infinite variance fluctuations in randomly amplified langevin systems _ , phys .1997 ; * 79 * 96667 . c. tsallis , _ introduction to nonextensive statistical mechanics _ , springer , new york , 2009 .r. hanel , s. thurner , s , and m. gell - mann , _ how multiplicity of random processes determines entropy : derivation of the maximum entropy principle for complex systems _ , proc .usa 2014 ; * 111 * 690510. b. corominas - murtra , r. hanel , and s. thurner , _ understanding scaling through history - dependent processes with collapsing sample space _ ,usa 2015 ; * 112 * , 5348 - 53. b. gutenberg , and c.f .richter , _ frequency of earthquakes in california _ , bull .1944 ; * 34 * 18588 .k. christensen , l. danon , t. scanlon , and p. bak , _ unified scaling law for earthquakes _ proc .usa 2002 ; * 99 * 2509 - 13. f. auerbach , _ das gesetz der bevlkerungskonzentration _, petermanns geographische mitteilungen 1913 ; * 59 * 74 - 76 .x. gabaix , _zipf s law for cities : an explanation _ ,1999 ; * 114 * 73967 .shaffer , _ spatial foraging in free ranging bearded sakis : traveling salesmen or lvy walkers ? _ , amer .j. 
primatology 2014 ; * 76 * 47284 .newman , and r.g .palmer , _ modeling extinction _ , oxford university press , 2003 .zipf , _ human behavior and the principle of least effort _ , addison - wesley , cambridge , massachusetts , 1949 .press , _ subjective and objective bayesian statistics : principles , models , and applications _ , wiley series in probability and statistics , 2010 .berger , _ statistical decision theory and bayesian analysis _ , springer , new york , 1985 .fisher , _ on an absolute criterion for fitting frequency curves _ , messenger of mathematics 1912 ; * 41 * 15560 .a. clauset , c.r .shalizi , and m.e.j .newman , _ power - law distributions in empirical data _ , siam review 2009 ; * 51 * 661703. y. virkar , and a. clauset , _ power - law distributions in binned empirical data _ , annals of applied statistics 2014 ; * 8 * 89119 .a. deluca , and a. corral , _ fitting and goodness - of - fit test of non - truncated and truncated power - law distributions _acta geophysica 2013 ; * 61 * 135194 a. broder , r. kumar , f. maghoul , p. raghavan , s. rajagopalan , r. stata , a. tomkins , and j. wiener , _ graph structure in the web _ , computer networks 2000 ; * 33 * 30920 . d.c .roberts , and d.l .turcotte , _ fractality and self - organized criticality of wars _ , fractals 1998 ; * 6 * 35157. s. redner , _ how popular is your paper ?an empirical study of the citation distribution _ , epj b 1998 ; * 4 * 13134 .a. clauset , m. young , and k.s .gleditsch , _ on the frequency of severe terrorist events _ , journal of conflict resolution 2007 ; * 51 * 5887 .http://tuvalu.santafe.edu//powerlaws/ http://www.complex-systems.meduniwien.ac.at/ + si2016/r_plfit.m + http://www.complex-systems.meduniwien.ac.at/ + si2016/r_plhistfit.m h.s .heaps , _ information retrieval : computational and theoretical aspects _ , academic press , 1978 .g. herdan , _ type - token mathematics _ ,gravenhage , mouton & co , 1960 .if events are drawn from a continuous sample space ] ( compare eq .( [ norm ] ) first line ) . to work with well defined probabilities we have to bin the data first .probabilities to observe events within a particular bin depend on the margins of the bins , with and .the histogram counts the number of events falling into the bin , and the probability of observing in the bin is given by binning events sampled from a continuous distribution may have practical reasons .for instance data may be collected from measurements with different physical resolution levels , so that binning should be performed at the lowest resolution of data points included in the collection of samples .we will not discuss the ml estimator for binned data in detail but only remark that for given bin margins it is sufficient to insert of eq ( [ appa1 ] ) into eq .( [ bayes3 ] ) with , to derive the appropriate ml condition for binned data .an algorithm for binned data ` r_plhistfit ` , where we assume the bin margins to be given , is found in .we point out that if margins for binning have not been specified prior to the experiments , then specifying the optimal margins for binning the data becomes a parameter estimation problem in itself , i.e. the optimal margins have to be estimated from the data as well .one major source of uncertainty in the estimates of from binned data is related to the uncertainty in choosing the upper and lower bounds and of the data , i.e. 
specifying the bounds of the underlying continuous sample space .binning becomes irrelevant for clean continuous data for the following reason .suppose we fix the sample space ] into segments of length and then taking to zero explains why typically tha _ primitive _ estimates , and , provides fairly good results .alternatively , strategies such as suggested in could be used to optimize the choices for and . however, this procedure can not be directly derived from bayesian arguments .neither will we discuss this approach in this paper nor implement such an option in ` r_plfit ` .however , bayesian estimates of and exist .although we will not discuss those estimators in detail here we will eventually implement them in ` r_plfit ` to replace the primitive estimates .the idea of constructing such estimators is the following .for instance , one asks how likely can the maximal value of the sampled data be found to be larger than some value . by deriving ) ] , as a consequence , it becomes possible to derive bayesian estimators for and .the matlab function ` function out = r_plfit(data , varargin ) ` implements the algorithm discussed in the main paper .the function returns a struct ` out ` that contains information about the data , the data range , but most and for all ` out.exponent ` returns the estimated exponent of the power - law .whether the exponent ` out.exponent ` is the exponent of the sample distribution or the exponent of the frequency distribution of the data depends on how ` function out = r_plfit(data , varargin ) ` gets used as explained below . in the code the sample space is equivalent to a vector ] ( default ) * a histogram ` data ` ] occurring in the data by using the option ` out = r_plfit(x,'urange ' ) ` . in order to define a fit range maximal and minimal data values taken into accountcan be set by ` out = r_plfit(x,'urange','rangemin',minval , ... ` ` ... ' rangemax',maxval ) ` such that ` r_plfit ` only takes into account data in the range ` minval ` ` maxval ` . to control the data range individually use ` out = r_plfit(x,'range',z ) ` .if the data has been sampled from a continuous sample space , and the histogram over the unique data is flat , i.e. each value in the data only appears once ( more or less ) , then one can tell ` r_plfit ` that the data is sampled from a continuous sample space by setting the option ` ' cdat ' ` , i.e. by running ` out = r_plfit(x,'cdat ' , ... ) ` .this option tells the algorithm to use the normalization constant for continuous sample spaces and estimates and . moreover , ` ' cdat ' ` implicitly sets the ` ' urange ' ` and the ` ' nolf ' ` option . ` ' nolf ' ` ( see below ) switches off the search of the algorithm for an optimal low frequency cut - off . + * fitting with histograms * : using histograms as input works in exactly the same way as for fitting if we want to estimate the exponent of the frequency distribution and use ` r_plfit ` in the ` out = r_plfit(k ) ` mode .if we use ` r_plfit ` in the ` out = r_plfit(k,'hist ' ) ` mode , the algorithm assumes by default that the sample space is given by ] .bins can be specified by giving bin margins $ ] such thatevents counted in had a magnitude such that .usage , ` r_plhistfit(k,'margins',b ) ` . by default ` r_plhistfit ` assumes that .other options work similar to the ones available for ` r_plfit ` and can be reviewed by typing ` r_plhistfit('help ' ) ` in the matlab command line .
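a hedged sketch of the histogram route ( in the spirit of ` r_plhistfit ` , but not the authors code ) : the bin probabilities are obtained by integrating a power - law between the given bin margins and normalizing over the bounded range , after which the multinomial log - likelihood is maximized numerically . the toy bin counts are made up for illustration .

```python
import numpy as np
from scipy.optimize import minimize_scalar

def ml_exponent_binned(bin_edges, bin_counts):
    """ML exponent for binned data: bin probabilities come from integrating a
    continuous power-law p(x) ~ x**(-gamma) between the given bin margins and
    normalizing over the bounded range [b_0, b_M]; the multinomial
    log-likelihood is then maximized numerically."""
    b = np.asarray(bin_edges, dtype=float)
    n = np.asarray(bin_counts, dtype=float)

    def bin_probs(gamma):
        cdf = np.log(b) if abs(gamma - 1.0) < 1e-9 else b ** (1.0 - gamma)
        p = np.abs(np.diff(cdf))
        return p / p.sum()

    def neg_log_likelihood(gamma):
        return -np.sum(n * np.log(np.clip(bin_probs(gamma), 1e-300, None)))

    return minimize_scalar(neg_log_likelihood, bounds=(-5.0, 10.0), method="bounded").x

# toy usage with logarithmic bins; the counts below are made up for illustration
edges = np.logspace(0, 3, 11)
counts = [500, 260, 130, 70, 35, 18, 9, 5, 2, 1]
print(ml_exponent_binned(edges, counts))
```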
it has been repeatedly stated that maximum likelihood ( ml ) estimates of exponents of power - law distributions can only be reliably obtained for exponents smaller than minus one . the main argument , namely that power - laws are otherwise not normalizable , depends on the underlying sample space the data is drawn from , and is true only for sample spaces that are unbounded from above . here we show that power - laws obtained from bounded sample spaces ( as is the case for practically all data - related problems ) are always free of such limitations , and that maximum likelihood estimates can be obtained for arbitrary powers without restriction . we first derive the appropriate ml estimator for arbitrary exponents of power - law distributions on bounded discrete sample spaces . we then show that an almost identical estimator also works perfectly for continuous data . we implemented this ml estimator and compare its performance with previous approaches . finally , we present a general recipe for how to use these estimators and provide the associated computer codes .
tensor computations play a crucial role in the studies of general relativity ( gr ) .however , in typical cases those calculations are tediously complicated . fortunately , with the development of symbolic calculation in computer technology, a number of computer packages have been built to calculate the gr tensors , including grtensor , xact , ricci , and so on .general relativitists would most likely find whatever functionality they need in those packages and work with them .however , in case the researchers do have a need to modify some internals of the packages , or hope to have a complete understanding of the package they use , things become complicated .although many packages are kindly provided open source , they have in general a heavy code base . for example, the latest xact package has over 20,000 lines of code ( loc ) , where the xtensor.m file alone has more than 8,000 loc .as another example , the ricci package has over 7,000 loc .there is definitely nothing wrong with heavy weight packages which provide as many functionalities as possible .but on the other hand , it would also be helpful to have lightweight packages which are easy for the users to understand the underlying mechanisms and modify them when needed .the _ mathgr _package is built for this purpose .currently , _mathgr _ has about 500 loc , and those numbers will not grow significantly in the future . the package is also built modularly , separating the tensor manipulation , gr definitions , integration by parts , specific model definition , general utilities , display parser into different files .while keeping to be lightweight , _ mathgr _ still provides competing functionalities for gr computations , with fast speed .the functionalities of _ mathgr _ include * tensor simplifications with symmetries .symmetries , anti - symmetries for any subset of indices , and any permutational symmetries defined by the _ mathematica _cycles can be brought into unique forms , such that if there can be cancellation between terms in an expression , they indeed cancel each other after simplification . * tensor calculations with either abstract or explicit ( or mixed ) indices , and decomposition from abstract indices to explicit ones .for example , tensors with abstract tensor indices such as the ones inside the calculation of the ricci scalar ( where the dummy indices denotes summation throughout this paper and assumed by the package ) can be calculated and simplified , and later decomposed if needed , with builtin command into where dot denotes derivative with respect to the 0 index , and run from to and run from to .alternatively , one can also decompose the indices into completely explicit indices , or decompose indices into explicit indices . * simplification with total derivatives .when expanding an action of a physical system , total derivatives can either be dropped or reduced into boundary terms ._ mathgr _ can try to reduce a given expression into total derivatives plus the rest of the terms which are minimal under various conditions . , or the remaining terms has minimal leaf count . however ,in the leaf count case , _ mathgr _ can not guarantee to give the simplest result because there is no general algorithm for such reduction as far as we know . 
] for example , \end{aligned}\ ] ] * cosmic perturbations .there is a builtin model calculating gr tensors for the frw metric with adm type perturbations .this model can on the one hand be used directly for the research of inflationary cosmology , and on the other hand provide an example for writing models . *make use of _ mathematica _ s builtin tensor engine . since _mathematica _ 9.0 ,a symbolic tensor engine is introduced .this engine is powerful and fast in reducing tensors with symmetries into unique forms .however , the interface of the tensor engine is in the coordinate independent form .the input looks like : arrays[{dim , dim , dim } , complexes , antisymmetric[{1 , 2 } ] ] ) ; tensorreduce [ tensorcontract[t\[tensorproduct]t , { { 1 , 5 } , { 2 , 6 } , { 3 , 4 } } ] - tensorcontract[t\[tensorproduct]t , { { 1 , 4 } , { 2 , 6 } , { 3 , 5 } } ] ] } } & & \verb! path ` in _mathematica_. depending on operating systems , the typical directories are + [ cols="^,^",options="header " , ] * * use without installation : one can put _ mathgr _ anywhere and use ` setdirectory ` to specify the load directory .the structure of the _ mathgr _ package.,scaledwidth=90.0% ] the _ mathgr _ package is built in 3 tiers . the structure is illustrated in fig . [ fig : structure ] , where tier 2 depends on tier 1 only , and tier 3 can depend on anything . in this sectionwe introduce the functions provided by each module . to make the notation less cluttered , we have used upper and lower indices in the tensor notations . for example , the inputform of should be input as ] where is a variable .however , once is a variable and assigned to a value like , the indices will have the same replacement and will not work as desired ( unless one does it by purpose ) .thus it is in general recommended to use strings as indices ( or clear and protect the indices before usage if variables are preferred ) . ] , dn["a " ] , dn["b " ] , dn["c"]]!\end{aligned}\ ] ] one can load the typeset.m package ( to be introduced later ) using !\end{aligned}\ ] ] after which the inputform is displayed as .one may either copy this to future input to increase readability , or keep the style of inputform .alternatively , one may load most of mathgr packages with one command = ... is the desired output , instead of part of input . ] !\\ \nonumber & & \verb!out [ ] = mathgr , by yi wang ( 2013 , 2014 ) , https://github.com/tririver/mathgr . ! \\\nonumber & & \verb!bugs can be reported to https://github.com / tririver / mathgr / issues!\\ \nonumber & & \verb!loaded components tensor , decomp , gr , ibp , typeset , util.!\end{aligned}\ ] ] the module tensor.m provides functions for general tensor calculation and simplification . to make use of tensor.m, we first load the package using !\end{aligned}\ ] ] the publicly defined functions and symbols are * _ _ define indices _ _ ] '' is not allowed ( and never necessary in gr ) . ] : this step is optional . as we have encountered previously , we have used up and dn as type identifiers of indices .if this is satisfying , nothing in addition needs to be done . on the other hand, one can define type identifier of indices oneself , for example !\end{aligned}\ ] ] here `` \{myu , myd } '' are the new identifiers , `` mydim '' is the dimensions of the manifold on which those indices are defined .`` greekidx '' is a list , which shows the indices to use for dummy indices .other choices are latinidx and latincapitalidx , myu[``a '' ] ] is not allowed . 
] .`` red '' is the color in which the indices are typeset .any other color that builtin in mathematica can be chosen .+ actually , the identifiers `` \{up , dn } '' are similarly defined in the package : !\end{aligned}\ ] ] + as we shall cover in the next sessions , metrics can be defined associated to types of indices . thus definingthe type of indices essentially defines manifold or sub - manifold .this provides a mechanism to deal with tensor decomposition from high dimension to lower dimensions . *_ declare tensor symmetry ( declaresym ) _ :this step is also optional .if a tensor has no symmetry , one can use the tensor directly without declaration . on the other hand , for tensors with symmetry , one had better to declare the symmetry such that _ mathgr _ can make use of this symmetry to simplify the tensor .+ the symmetry of a tensor can be declared using !\end{aligned}\ ] ] here the list of indices are a sequence of up and dn ( or user defined index identifiers ) , representing upper and lower indices respectively .the symmetry could be either symmetric or antisymmetric of some slots , or some generic cyclic symmetries .the grammar for `` symmetry '' here is identical to the one defined in the _ mathematica _ symbolic tensor system thus one can check _ mathematica _ documentation for more details .+ for example , !\end{aligned}\ ] ] defines a tensor } } t_{abc } ~ , \endaccsupp{}\end{aligned}\ ] ] + where the indices can be any other indices , but upper and lower are distinguished here , as usual in gr . herethe tensor is anti - symmetric with permutation of indices . in case of totally symmetric tensor ( declared by symmetric[all ] ) , declaresym also sets the attribute of to be orderless , thus the simplifications are made automatically before sending to the simplification engine . *_ delete tensor symmetry ( deletesym ) _ : the operation declaresym can be undone by deletesym .for example , !\end{aligned}\ ] ] removes the symmetry defined above . *_ simplify tensors ( simp ) _ : an expression ( optionally with pre - defined symmetries ) is brought into a unique form by the command simp .this ensures cancellation between different terms in the expression , which can be transformed into each other by using symmetry , or with redefinition of dummy indices .for example , for the tensor defined above , one can run ! \\\nonumber & & \beginaccsupp{actualtext={t[dn@"a " , dn@"b " , dn@"c"]t[dn@"m " , dn@"n " , dn@"b"]-t[dn@"b " , dn@"a " , dn@"c"]t[dn@"n " , dn@"m " , dn@"b"]}}t_{abc}t_{mnb } - t_{bac}t_{nmb } \endaccsupp { } \verb ! //simp!\\ \nonumber & & \verb!out [ ] = 0 ! \end{aligned}\ ] ] less trivial examples are considered when introducing gr.m , with simplification of curvature tensors .+ there is also a list named simphook .this is a list of rules that the user wants to apply before and after simplification .the rules are in the format of rule or ruledelayed , e.g. : > t f[dn@"a " , dn@"b " , dn@"c " ] } } t_{abc } \verb ! :> !t*f_{abc } \endaccsupp { } \verb ! } ! \end{aligned}\ ] ] + note that the simp function does a lot to simplify the expressions , including making assumptions and communicating with the mathematica command tensorreduce .if a tensor expression is expected to contain lots of terms ( say , more than ) after simp , and each term is very simple , simp becomes then overkilling and slow . 
in this case, one may try to speed up by !\end{aligned}\ ] ] in this case dummy indices are re - arranged .however , different forms of a tensor are not guaranteed to be brought to the same form . especially , the defined symmetries are not considered ( except for the totally symmetric ones ) .on the other hand , the operation can be order - of - magnitude faster .+ note that for mathematica versions lower than 9 , simp is default to fast method because of lack of builtin function tensorreduce . * _ partial derivative ( pd ) _ : here we define the partial derivative pd separately , instead of modifying the systemwide partial derivative d. the partial derivative can be called as , dn[index ] ] ! \end{aligned}\ ] ] which displayed as , where is actually \[capitalsampi ] in _ mathematica _ , which looks like ( we try to avoid modifying system symbols such as the real ) .+ the derivative pd has linearity and leibniz rules builtin .+ internally , the ` pd ` derivative is evaluated into an object named ` pdt ` , with = pd[ ... pd[pd[f , a1 ] , a2 ] , ... ] ! ~,\end{aligned}\ ] ] where ` pdvars ` is a pre - defined orderless function . the reason to convert ` pd ` into ` pdt ` is that , by using ` pdvars ` , the orderlessness of partial derivative is transparent .then it is possible to define general rules on partial derivatives .+ by default , the partial derivative acting on all tensors or scalars are nonzero . to define constants with vanishing partial derivative, one has to declare explicitly .for example , _ ] : = 0!\end{aligned}\ ] ] defines both f1 and f2[ as constants .+ note that this `` define constant '' approach is different from the builtin derivative in _ mathematica _ , where without explicit function dependence , the variable is by default considered as constant .we consider the former to be safer for our purpose , otherwise one may forget to define non - constants . * _( anti-)symmetrize tensors ( sym , antisym ) _ : the command ! \end{aligned}\ ] ] symmetrizes the tensor .when indices are not given , all free indices are symmetrized .for example , sym[ gives .note that we do not add factors as here .the function antisym does similar things , only that a sign is added in front of each terms , determined by if the permutation is even or odd .* _ the kronecker symbol ( dta ) _ : the simplification of quantities like is automatic . without the need of calling simp , this quantity is directly evaluated into . on the other hand ,if there are standalone which can not be simplified , is replaced by }_{ab}6 ] , up[6 ] , dn[ 6 '' and `` 8 ] , up[8 ] , dn[$9 ] ] ! \end{aligned}\ ] ] + however , the above expression is still long , and the length will grow quickly with lots of contractions . to save writing ,we provide a function metriccontract , which is nothing fundamental , but a shorthand of writing , to allow contraction with the default metric .the above example can be rewritten as = f[dg[1 ] , dg[1 ] ] // metriccontract //simp ! \\\nonumber & & \verb!out [ ] = ! f_{ab}g^{ab}\end{aligned}\ ] ] where dg and ug are a new type of indices , which is parsed by metriccontract .the same labels are contracted using the default metric . heresimp is used to rewrite the dummy indices into familiar ones .+ multiple contractions can be calculated similarly , for example = f[dg[1 ] , dg[2 ] ] f[dg[2 ] , dg[3 ] ] f[dg[3 ] , dg[1 ] ] //simp ! 
\\\nonumber & & \verb!out [ ] = !f_{ab } f_{cd } f_{ef } g^{af } g^{bc } g^{de}\end{aligned}\ ] ] the above example calculates , in case only with lower indices are defined .the decomposition module can be loaded by !\end{aligned}\ ] ] _ decomposition of tensors into lower dimensional ones ( decomp ) _ : this function converts the dummy indices of dimensional tensors into and dimensional indices respectively .explicit indices can also be used .the explicit indices are marked as ue[n ] and de[n ] for upper and lower indices respectively , where n is a number .the usage is !\end{aligned}\ ] ] if indices are not given , all dummy indices are decomposed .the rules are of the form , for example , where the lower or upper index is converted into one explicit index de[0 ] or ue[0 ] , and another abstract index .there are a number of predefined decomposition schemes , namely decomp0i[expression , indices ] : convert indices into 0 and i components .the rule is as illustrated above .decomp01i[expression , indices ] : convert indices into 0 , 1 , i components .decomp0123[expression , indices ] : convert indices into all explicit indices , 0 , 1 , 2 , 3 .decomp1i[expression , indices ] : convert indices into 1 , i components .decomp123[expression , indices ] : convert indices into all explicit indices , 1 , 2 , 3 .decompse[expression , indices ] : convert indices into two parts , where both parts has general dimensions .again if the indices are not explicitly specified , all dummy indices are converted .the free indices , if exist , are not touched . in those predefined schemes ,the original class identifier of indices are \{utot , dtot}. for example , b[dtot["a " ] ] ] ! \\\nonumber & & \verb!out [ ] = a[u1["!\alpha\verb!"]]*b[d1["!\alpha\verb ! " ] ] + a[u2["a"]]*b[d2["a"]]!\end{aligned}\ ] ] here after decomposition , the indices are identified by \{u1 , d1 } , and \{u2 , d2 } ( those index identifiers are defined in decomp.m , with dimensions dim1 and dim2 ) . as another example , b[dtot["a " ] ] ] ! \\\nonumber & & \verb!out [ ] = a[ue[0]]*b[de[0 ] ] + a[up["a"]]*b[dn["a"]]!\end{aligned}\ ] ] here the indices are converted into one explicit index and another abstract index . to further ease the calculation, there is a list decomphook , which contains a set of replacement rules to be applied after the decomposition .a frequent use case is to specify the explicit form of the higher dimensional metric using those set of rules , as illustrated in the model file frwadm.m .one may define some `` homogeneous and isotropic background '' quantities with :=0 ( * but pd[b , de[0 ] ] is not zero * ) !\end{aligned}\ ] ] one typical use case for a gr package is to expand an action where if does not have boundary , total derivatives in can be dropped ( e.g. ) .if has a boundary , those total derivatives can be reduced into boundary terms on the boundary . for this purposes, we developed a module to factor into total derivatives and the rest part .as usual , integration - like operations needs more intelligence than derivative - like operations .this module does not guarantee to find the total derivative for complicated cases .nevertheless it works for simple cases and provide convenience for research . in case that is a pure total derivative ,the final target result is unique .however , typically , is a total derivative plus some rest part . 
here depending on use cases ,we designed different criteria to try minimizing the rest part : * _ eliminate derivatives on a variable _ : this can be used in a variation principle .for example , !\\ & & \verb!ibp[y pd[x , dn["i " ] ] , ibpvar[x ] ] ! \end{aligned}\ ] ] tries to eliminate derivatives acting on , and gives a result = -x pd[y , dn["i " ] ] + pdhold[x y , dn["i"]]!\end{aligned}\ ] ] here pdhold is a function defined to hold total derivatives .one can release this held total derivative by replacing it to pd , or alternatively set it to zero by pdhold[__]=0 if the manifold does not have a boundary . * _ bring the rest part into standard form of a second order action _ : in this case ibp will try to eliminate terms in the rest part with more than two time derivatives on it . and the terms like is transformed into [ , de[0 ] ] .the usage is , ibpstd2 ] ! \\\nonumber & & \verb!out [ ] = -(x^2*pd[f , de[0]])/2 + pdhold[(f*x^2)/2 , de[0 ] ] ! \end{aligned}\ ] ] * _ leaf count _ :if no criteria is given to ibp , i.e. ibp is called as ibp[expression ] , the leafcount function is applied to compare the rest part . internally , the ibp function works as follows : first , a set of rules with patterns are defined for integration by parts . then every possible rule is applied to the expression and the result is sorted .the one with simplest rest part is chosen and the same set of rules are tried repeatedly on the new result until a fixed point is reached .it is easy to extend the above algorithm such that the rules are applied multiple times before the result is sorted and selected .however in this case the time complexity increase quickly . in typical use cases ,ibp may deal with expressions with of order 1000 or more terms ( considering the complexity of the gravitational action ) . in this casemultiple - step rules are not realistic . herewe have introduced the ibp function motivated by expanding an action . on the other hand , this function can be certainly used for other purposes , as long as total derivatives are wanted . as we mentioned at the beginning of this section , the output of _ mathgr _ can be parsed and brought into a better looking form . for this purpose , the package typeset.m should be loaded .no functions are provided in typeset.m . instead , this module use makeboxes and makeexpression to define the appearance for the tensors .there are also predefined typeset styles for partial derivatives .partial derivative acting on an abstract index is displayed as capital sampi , and time derivative ( w.r.t de[0 ] ) is denoted by dot . to ease some typical calculations ,some utilities are provided .those utilities are not directly about tensor calculation but can save some writing for those calculations . the utilities can be loaded using !\end{aligned}\ ] ] and provide * _ solveexpr _ : by default , the variable to be inputted to the _ mathematica _ command solve should be atomic or a simple function .expressions with head plus , times and power are not allowed .solveexpr solves this problem .for example , one can use ! \\ \nonumber & & \verb!out [ ] = { { x^2 - > -y } } !\end{aligned}\ ] ] to find a solution of .to realize this , solveexpr first replace by a unique temporary variable , solve the equation and replace the temporary variable back with .* _ series expansion and coefficients _: + in _ mathgr _ , the default variable to control orders of perturbations is named eps . 
in case of a perturbation theory calculation ,one multiplies every perturbation variable by eps and expand them together ( an example can be found in frwadm.m ) . the series expansion and extracting the coefficients simply makes use of _ mathematica _ functions series and coefficient .to save some writing , one can use !\end{aligned}\ ] ] where n is an explicit integer , to expand expression up to nth order in eps , or !\end{aligned}\ ] ] to extract the order eps terms in the expression and disregard all other terms .simplification function simp is called automatically after expansion or extraction of coefficients .to illustrate an explicit use case of the package , here we calculate the cosmic perturbations of inflationary cosmology up to second order .the result is well known for decades ( for a review with the same notation used here , see ) . nevertheless to present a standard and familiar calculation for illustration purpose may be more useful for a manual compared with presenting a new and unfamiliar calculation . herewe present the input , the explanations and the final results .the intermediate outputs are long and is available in the file resources / mathgr_manual.nb the model specification of a frw universe with adm type perturbations can be loaded by !\end{aligned}\ ] ] here the metric is defined by where where .note that for simplicity only the scalar sector is considered in .we have yet one gauge degree of freedom in the scalar sector .we can fix the gauge by either set , or set the inflaton field to be homogeneous and isotropic .we shall consider the latter case as an example .the action up to second order can be calculated with := 0 ( * total derivatives can be neglected here . * ) ! \\ & & \verb!pd[!\phi\verb!|pd[!\phi\verb ! , de[0 ] ] , _dn]:= 0 ( * the inflaton perturbation is gauged away . * ) ! \\ & & \verb!s012 = sqrtg ( radm[]/2 + decompg2h[x[!\phi\verb ! ] ] - v[!\phi\verb ! ] ) //ss[2]!\end{aligned}\ ] ] here decompg2h is a function provided in frwadm.m ( may move to more general places in case other models also need this function ) .this function calls decomp0i to decompose a 4-dimensional quantity into 3 + 1 dimensions , where in the 4-dimensional quantities the metric is used and in the 3-dimensional quantities the metric is used .the background equation of motion can be derived by the first order action . for this purpose, we extract the first order action and consider the variation principle : ! \\ & & \verb!solbg = solveexpr[{d[s1 , !\alpha\verb!]==0 , d[ibp[s1 , ibpvar[!\zeta\verb ! ] ] , !\zeta\verb!]==0 } , { v[!\phi\verb ! ] , pd[!\phi\verb ! , de]^2 } ] ! \\ & & \verb!simphook = union [ simphook , solbg[[1 ] ] ] ! \end{aligned}\ ] ] where on the second line the background equation of motion is solved .note that solveexpr is used because we want to eliminate a composed expression .also note that there are derivatives on in s1 .thus we should first do integration by parts before applying the variation principle d [ ... , ]=0 . 
on the third linethe background solution is added to simphook .thus it will be automatically applied when simplifying the second order action .here we presented the usage of a new tensor package , _ mathgr _ , which is simple and lightweight , such that people can understand and modify the internal more easily .we shall keep the simplicity of the package .while new functionalities are expected to be added , we shall not add functionalities which significantly increase the complexity of the package , especially for the core parts tensor.m and gr.m .we shall in the future add more comments to the existing code , and add broader coverage for unit tests and integrated tests .those efforts will help for the users who want to hack and fork the package .finally but most importantly , as the package is being tested and used in realistic research , we expect to encounter bugs and provide bug fixes .as always , the result from _ mathgr _ should be checked by independent calculations before being trusted and used in research .we thank j. m. martin - garcia for comments on an earlier version of this paper .this work was supported by the world premier international research center initiative ( wpi initiative ) , mext , japan , a starting grant of the european research council ( erc stg grant 279617 ) , and the stephen hawking advanced fellowship .peter musgrave , denis pollney and kayll lake , http://grtensor.phy.queensu.ca/ ; peter musgrave , denis pollney and kayll lake 1996 `` grtensorii : a package for general relativity '' fields institute communications 15 313 - 318 v. f. mukhanov and g. v. chibisov , `` quantum fluctuation and nonsingular universe .( in russian ) , '' jetp lett . *33 * , 532 ( 1981 ) [ pisma zh .fiz . * 33 * , 549 ( 1981 ) ] .y. wang , arxiv:1303.1523 [ hep - th ] .
we introduce the _ mathgr _ package , written in _ mathematica _ . the package can perform tensor and gr calculations with either abstract or explicit indices , simplify tensors with permutational symmetries , convert tensors from abstract indices to partially or completely explicit indices , and convert partial derivatives into total derivatives . frequently used gr tensors and a model of the frw universe with adm type perturbations are predefined . the package is built around the philosophy of `` keep it simple '' , and makes use of the latest tensor technologies of _ mathematica _ .
the international linear collider ( ilc ) is a future high energy lepton collider . it can make precision measurements of the higgs boson , the top quark , and electroweak physics . the ilc also has the capability to explore new physics . to realize these measurements , a detector with excellent precision has to be prepared . in the case of the ilc , many events will include jets in their final state , and for precise measurements these jets should be reconstructed as well as possible . we plan to use a particle flow algorithm ( pfa ) to achieve a jet energy resolution ( jer ) of 3 - 4% . to make full use of pfa , we should separate the individual showers made by each particle in a jet . thus , we need a highly granular calorimeter , especially for the ecal . in this paper , we show basic properties of silicon sensors which should be measured for quality control , and we also report the response to an infrared laser . the silicon sensor chips are produced by hamamatsu ( hpk ) . the sensors are a kind of silicon pin diode , and they have 256 pixels of 5.5 x 5.5 mm in an area of 9 x 9 cm ( figure 1 ) . the thickness of the sensors is 320 micrometers . the guard ring can collect the surface current of the chip , but it also limits the sensitive region of the chip . furthermore , a former test beam result showed that the 1 guard ring structure produces fake signals along the guard ring when particles hit close to it . the guard ring therefore has both advantages and disadvantages , and we have to decide which design should be used for the ild ecal . [ figure caption fragment : guard ring width ( value lost in extraction ) for the 1 guard ring design , or 20 micrometers for the 2 and 4 guard ring designs . ] there are four types of guard ring structure . currently , we have only the 1 guard ring structure in this size . the other structures are provided on smaller chips with 3 x 3 pixels ( 4 guard rings ) or 4 x 4 pixels ( no guard ring , 2 guard rings ) . we measured only the 1 guard ring type chip ; in the near future we will measure the other chips and compare the properties of each structure . we made two setups for the measurements of basic properties . one of them is for the measurement of the leakage current , the other is for the capacitance measurement ( figure 3 ) . we can control the temperature and humidity of the dark box , thus we can assume that these two dependences are negligible in both measurements . in our measurements , we set the temperature to 27.5 ( or 20.0 for the capacitance measurement ) and the humidity to 50% . during both measurements , we used a copper plate to combine the signals from all sensor cells into one readout channel ( see figure 4 ) . the sensor and the readout circuit are connected by springs . the operation and data taking were controlled from a pc ( see figure 5 ) . for the leakage current measurement , we took data from 0 v to 800 v in 10 v steps . each step lasts 1 second , and if the chip reaches its breakdown voltage we stop the measurement to prevent the chip from getting damaged . as shown in figure 6 , the leakage current is about 100 na with an applied bias voltage of 100 - 150 v , and its voltage dependence is very small in this region . we repeated this procedure 10 - 20 times on each chip , and found that the breakdown voltage becomes higher than in the previous scan during the first 3 - 4 scans . this phenomenon needs to be understood . however , we can conclude that the leakage current is quite small and stable at the operation voltage of 100 - 150 v. if a chip has a bad channel with high leakage current , we expect that the chip reaches breakdown at a much smaller voltage . therefore , this measurement is also important to test the chips before assembly , especially in the mass production phase .
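as a hedged illustration of how such an i - v scan could be analyzed automatically , which may be useful in the mass production phase mentioned above , the following python sketch reports the mean leakage current in the 100 - 150 v operating window and flags a candidate breakdown voltage . the jump criterion and the toy numbers are assumptions for illustration , not the criteria used in the actual setup .

```python
import numpy as np

def analyze_iv_scan(voltages, currents_nA, operating_range=(100.0, 150.0), jump_factor=5.0):
    """report the mean leakage current inside the operating window and flag a
    candidate breakdown voltage as the first step where the current jumps by
    more than `jump_factor` relative to the previous step. the threshold is an
    assumption for illustration only."""
    v = np.asarray(voltages, dtype=float)
    i = np.asarray(currents_nA, dtype=float)
    in_window = (v >= operating_range[0]) & (v <= operating_range[1])
    leakage = i[in_window].mean() if in_window.any() else float("nan")
    jumps = np.where(i[1:] > jump_factor * np.clip(i[:-1], 1e-12, None))[0]
    breakdown = v[jumps[0] + 1] if len(jumps) else None
    return leakage, breakdown

# toy usage with made-up numbers (roughly 100 nA plateau, breakdown near 600 V)
v = np.arange(0, 810, 10)
i = 100.0 + 0.05 * v
i[v >= 600] *= 50
print(analyze_iv_scan(v, i))
```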
for the capacitance measurement, we applied a bias voltage from 0.1 v to 130 v in steps of 0.1 v. figure 7 shows good linearity in the 5 v to 45 v region. from 0 v to 5 v, the capacitance curve is slightly winding because of the metal-oxide semiconductor structure. as shown in figure 7, the capacitance saturates at around 60 v, which is consistent with the full depletion voltage of 65 v in the specification. we also estimated that the chip thickness is 318 μm. this value is also consistent with the specification.

to understand the difference between the guard ring structures, we prepared a laser system. we use an infrared laser (class 3b) whose wavelength is 1064 nm. photons of this wavelength can pass through the surface of silicon with high transmission and produce electron-hole pairs in the active area. since infrared photons cannot go through the electrode, we shoot the laser into the gap (between two pixels, or between a pixel and the guard ring). at first, we measured the response from a specific pixel to check our system. we used an oscilloscope to record the waveform, and we also checked the fluctuation of the peak value using the waveform. the waveform taken at 100 v bias voltage is shown in figure 8. the decay tail is caused by the time constant of the pre-amplifier. we obtained a peak fluctuation of 3%, shown in figure 9, which corresponds to the stability of our measurement. we also investigated the response for various bias voltages. the result is shown in figure 10. in this figure, the response saturates at 80 v, which indicates that the chip reached full depletion. since this saturation is not consistent with the full depletion voltage of 65 v (obtained from the previous measurements), we are now investigating the reason for the difference.

we established a silicon sensor test system. we can measure basic properties of silicon sensors such as the leakage current and the capacitance. our first results from these measurements meet the specification. we also investigated the response to an infrared laser. we found a small disagreement for the full depletion voltage between the capacitance measurement and the laser measurement, and we are now trying to understand this issue. our final goal is to make a decision on the chip design, including the guard ring structure, thus we will compare these properties on prototypes of various structures in the near future.

t. behnke, james e. brau, b. foster, j. fuster, m. harrison, et al., the international linear collider technical design report - volume 1: executive summary, 2013.
t. behnke, james e. brau, philip n. burrows, j. fuster, m. peskin, et al., the international linear collider technical design report - volume 4: detectors, 2013.
m. a. thomson, particle flow calorimetry and the pandorapfa algorithm, nucl. instrum. and meth. a611 (2009) 25-40.
e. fretwurst, h. herdan, g. lindstroem, u. pein, m. rollwagen, h. schatz, p. thomsen, r. wunstorf, nucl. instr. and meth. a288 (1990) 1-5.
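as a rough cross-check of the thickness estimate quoted above, the parallel-plate relation between the fully depleted capacitance and the bulk thickness can be evaluated as below. the sensor area is taken as 16 x 16 pixels of 5.5 mm, and the capacitance value is a hypothetical input chosen only for illustration; neither quantity is quoted in the text.

```python
# minimal sketch: infer the bulk thickness from the saturated (full-depletion)
# capacitance with d = eps_Si * eps_0 * A / C. the area and the capacitance value
# below are assumptions for illustration, not measured values from the text.
EPS0 = 8.854e-12            # F/m, vacuum permittivity
EPS_SI = 11.9               # relative permittivity of silicon
AREA = (16 * 5.5e-3) ** 2   # m^2, assuming 16 x 16 pixels of 5.5 mm

def thickness_from_capacitance(c_saturated):
    """return the inferred thickness in metres for a saturated capacitance in farads."""
    return EPS_SI * EPS0 * AREA / c_saturated

print(f"{thickness_from_capacitance(2.6e-9) * 1e6:.0f} um")  # hypothetical 2.6 nF -> ~314 um
```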
the international large detector (ild) is a proposed detector for the international linear collider (ilc). it has been designed to achieve an excellent jet energy resolution by using particle flow algorithms (pfa), which rely on the ability to separate nearby particles within jets. pfa requires calorimeters with high granularity. the ild electromagnetic calorimeter (ecal) is a sampling calorimeter with thirty tungsten absorber layers. the total thickness of this ecal is about 24 radiation lengths (x0), and it has between 10 and 100 million channels to provide high granularity. silicon sensors are a candidate technology for the sensitive layers of this ecal. present prototypes of these sensors have 256 pixels of 5.5 mm x 5.5 mm in an area of about 9 cm x 9 cm. we have measured various properties of these prototype sensors: the leakage current, capacitance, and full depletion voltage. we have also examined the response to an infrared laser to understand the sensor's response at its edge and between pixel readout pads, as well as the effect of different guard ring designs. in this paper, we show results from these measurements and discuss future work.
over the past century, researchers have devoted considerable effort to studying animal grouping behavior due to its important implications for social intelligence, collective cognition, and potential applications in engineering, artificial intelligence, and robotics. indeed, grouping behaviors are pervasive across all forms of life. for example, european starlings (_ sturnus vulgaris _) are known to form murmurations of millions of birds which perform awe-inspiring displays of coordinated movement. western honeybees (_ apis mellifera _) communicate the location of food and nest sites to other bees in their group via a complex dance language. even relatively simple bacteria exhibit grouping behavior, such as _ escherichia coli _ forming biofilms which allow their group to survive in hostile environments. _ swarming _ is one example of grouping behavior, where animals coordinate their movement with conspecifics to maintain a cohesive group. although swarm-like groups could arise by chance, e.g., little egrets (_ egretta garzetta _) pursuing a common resource in water pools, swarms are typically maintained via behavioral mechanisms that ensure group cohesion. as with many traits, swarming behavior entails a variety of fitness costs, such as an increased risk of predation and the requisite sharing of resources with the group. with this fact in mind, significant effort has been dedicated to understanding the compensating benefits that grouping behavior provides. many such benefits have been proposed: for example, swarming may improve mating success, increase foraging efficiency, or enable the group to solve problems that would be impossible to solve individually. furthermore, swarming behaviors are hypothesized to protect group members from predators in several ways. for example, swarming can improve group vigilance, reduce the chance of being encountered by predators, dilute an individual's risk of being attacked, enable an active defense against predators, or reduce predator attack efficiency by confusing the predator. unfortunately, many swarming animals take months or even years to produce offspring. these long generation times make it extremely difficult to experimentally determine which of the aforementioned benefits are sufficient to select for swarming behavior as an evolutionary response, let alone to study the behaviors as they evolve. in this paper, we use a digital model of predator-prey coevolution to explore hamilton's selfish herd hypothesis. briefly, the selfish herd hypothesis states that prey in groups under attack from a predator will seek to place other prey in between themselves and the predator, thus maximizing their chance of survival. as a consequence of this selfish behavior, individuals continually move toward a central point in the group, which gives rise to the appearance of a cohesive swarm. in our model, both predators and prey have the ability to detect and interact with other agents in the environment. we evolve the agents with a genetic algorithm by preferentially selecting predators and prey based on how effective they are at consuming prey and surviving, respectively. forming a selfish herd is a possible solution for the prey to survive longer, but it is not selected for directly.
in this study, we first test whether a selfish herd evolves within a two-dimensional virtual environment with different forms of simulated predation. doing so enables us to experimentally control the specific modes of predation. we find that if predators are able to consistently attack the center of the group of prey, the selfish herd will not evolve. in subsequent experiments, we discover that density-dependent predation can provide a generalization of hamilton's original formulation of `` domains of danger ''. following these findings, we coevolve groups of predators and prey in a similar virtual environment to determine whether coevolving predators affect the likelihood that the selfish herd will evolve. consequently, this study demonstrates that density-dependent predation provides a sufficient selective advantage for prey to evolve selfish herd behavior in response to predation by coevolving predators. finally, we analyze the evolved control algorithms of the swarming prey and identify simple, biologically-plausible agent-based algorithms that produce emergent swarming behavior. a preliminary investigation of the selfish herd hypothesis was presented at the genetic and evolutionary computation conference in 2013. this paper expands on that work by studying the long-term evolutionary effects of differing attack modes, exploring a new attack mode that directly selects against selfish herd behavior, and providing an analysis of the control algorithms that evolved in the swarming prey.

hamilton's original formulation of the selfish herd hypothesis introduced the concept of `` domains of danger '' (dods, figure [fig:domains-of-danger]), which served as a method to visualize the likelihood that a prey inside a group will be attacked by a predator. prey on the edges of the group would have larger dods than prey on the inside of the group; thus, prey on the edges of the group would be attacked more frequently. moreover, hamilton proposed that prey on the edges of the group would seek to reduce their dod by moving inside the group, thus placing other group members between themselves and the predator. further work has expanded on this hypothesis by adding a limited predator attack range, investigating the effects of prey vigilance, considering the initial spatial positioning of prey when the group is attacked, and even confirming hamilton's predictions in biological systems. additional studies have focused on the movement rules that prey in a selfish herd follow to minimize their dod. this line of work began by demonstrating that the simple movement rules proposed by hamilton reduce predation for prey inside the group, then opened some parameters of the movement rules to evolution in an attempt to discover a more biologically plausible set of movement rules. finally, some studies have investigated the evolution of predator behavior in response to prey density, the coevolution of predator and prey behavior in the presence of the predator confusion effect, and the interaction between ecology and the evolution of grouping behavior. this paper builds on this work by studying the effects of coevolving predators and of the predator attack mode (i.e., how predators select a prey in a group to attack) on the evolution of the selfish herd.
more broadly, in the past decade researchers have focused on the application of locally-interacting swarming agents to optimization problems, called particle swarm optimization (pso). pso applications range from feature selection for classifiers, to video processing, to open vehicle routing. a related technique within pso seeks to combine pso with coevolving `` predator '' and `` prey '' solutions to avoid local minima. researchers have even sought to harness the collective problem solving power of swarming agents to design robust autonomous robotic swarms. thus, elaborations on the foundations of animal grouping behavior have the potential to improve our ability to solve engineering problems.

to study the evolution of the selfish herd, we developed an agent-based model in which agents interact in a continuous, toroidal virtual environment, shown in figure [fig:sim-env]. at the beginning of each simulation, we place 250 agents in the environment at uniformly random locations. these agents are treated as `` virtual prey ''. each agent is controlled by a _ markov network _ (mn), which is a probabilistic controller that makes movement decisions based on a combination of sensory input (i.e., vision) and internal states (i.e., memory). we evolve the agent mns with a genetic algorithm (ga) under varying selection regimes, which will be described in more detail below. during each simulation time step, all agents read information from their sensors and take action (i.e., move) based on their effectors. in our first set of treatments, we simulate an ideal, disembodied predator by periodically removing prey agents from the environment and marking them as consumed, e.g., when they are on the outermost edges of the group. subsequent treatments introduce an embodied, coevolving predator agent which is controlled by its own mn. the source code and data for these experiments are available online. in the remainder of this section, we describe the sensory-motor architecture of individual agents and present details related to the function and encoding of mns.

[figure [fig:agent-illustration] caption: agents use a limited-distance retina (200 virtual meters) to observe their surroundings and detect the presence of other agents. the current heading of the agent is indicated by a bold arrow. each agent has its own markov network, which decides where to move next based on a combination of sensory input and memory. the left and right actuators (labeled `` l '' and `` r '') enable the agents to move forward, left, and right in discrete steps.]

figure [fig:agent-illustration] depicts the sensory-motor architecture of the agents used for this study. a prey agent can sense predators and conspecifics with a limited-distance (200 virtual meters), pixelated retina covering its entire 360-degree visual field. its retina is split into 24 even slices, each covering an arc of 15 degrees, which is an abstraction of the broad, coarse visual systems often observed in grouping prey. regardless of the number of agents present in a single retina slice, the prey agent only knows whether a conspecific or predator resides within that slice, but not how many.
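a minimal sketch of the retina encoding described above (24 slices of 15 degrees, binary per-slice detection of conspecifics and predators within 200 virtual meters) is given below; the geometry helpers are simplified and, in particular, the toroidal wrap-around of the environment is ignored.

```python
# minimal sketch of the prey retina: 24 slices of 15 degrees, one binary prey bit and
# one binary predator bit per slice, 200-virtual-meter viewing distance. the toroidal
# wrap-around of the environment is ignored here for brevity.
import math

N_SLICES, SLICE_DEG, VIEW_DIST = 24, 15.0, 200.0

def retina(agent_xy, heading_deg, others):
    """others: iterable of (x, y, kind) with kind in {'prey', 'predator'}.
    returns (prey_bits, predator_bits), each a list of N_SLICES binary values."""
    prey_bits, pred_bits = [0] * N_SLICES, [0] * N_SLICES
    ax, ay = agent_xy
    for x, y, kind in others:
        dx, dy = x - ax, y - ay
        if math.hypot(dx, dy) > VIEW_DIST:
            continue
        rel = (math.degrees(math.atan2(dy, dx)) - heading_deg) % 360.0
        idx = int(rel // SLICE_DEG) % N_SLICES
        if kind == 'prey':
            prey_bits[idx] = 1
        else:
            pred_bits[idx] = 1
    return prey_bits, pred_bits
```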
for example, in figure [fig:agent-illustration], the fourth retina slice to the right of the agent's heading (labeled `` a '') has both the predator and prey sensors activated, because there are two predator agents and a prey agent inside that slice. once provided with its sensory information, the prey agent chooses one of four discrete actions, as shown in table [table:agent-action-encoding]. prey agents turn in 8-degree increments and move 1 virtual meter each time step. in our coevolution experiments, the predator agents detect nearby prey agents and conspecifics using a limited-distance (200 virtual meters), pixelated retina covering their frontal 180 degrees that works just like the prey agent's retina (figure [fig:agent-illustration]). similar to the prey agents, predators make decisions about how to move next using their mn, as shown in table [table:agent-action-encoding], but they move 3 times faster than the prey agents and turn correspondingly slower (6 degrees per simulation time step) due to their higher speed. finally, if a predator agent moves within 5 virtual meters of a prey agent that is anywhere within its retina, the predator agent makes an attack attempt on the prey agent. if the attack attempt is successful, we remove the prey agent from the simulation and mark it as consumed.

each agent is controlled by its own markov network (mn), which is a probabilistic controller that makes decisions about how the agent interacts with the environment and other agents within that environment. since an mn is responsible for the control decisions of its agent, it can be thought of as an _ artificial brain _ for the agent it controls. every simulation time step, the mns receive input via sensors (e.g., the visual retina), perform a computation on the inputs and any hidden states (i.e., memory), then place the result of the computation into hidden or output states (e.g., actuators). we note that mn states are binary and only assume a value of 0 or 1. when we evolve mns with a ga, mutations affect (1) which states the mn pays attention to as input, (2) which states the mn outputs the result of its computation to, and (3) the internal logic that converts the input into the corresponding output.

[table [table:agent-action-encoding] caption: possible actions encoded by the agent's output. each output pair encodes a discrete action taken by the agent. the agent's mn changes the values stored in output states l and r to indicate the action it has decided to take in the next simulation time step. (table contents not reproduced.)]

[table [table:hdaa-treatments]: relative handling times of the two attack modes (contents not reproduced).]

thus far, we have explored attack modes that select for the evolution of swarming behavior. it is not surprising that there are also attack modes exhibited by natural predators that must select against swarming behavior in their prey. for example, blue whales (_ balaenoptera musculus _) are known to dive into the densest areas of krill swarms, consuming hundreds of thousands of krill in the middle of the swarm in a single attack. we call this kind of attack mode a _ high-density area attack _.
such an attack clearly selects against swarming behavior because it targets the prey that swarm the most. if krill swarms consistently experience these high-density area attacks, then why do they still evolve swarming behavior? it is important to note that krill swarms are also fed on by smaller species, such as crabeater seals (_ lobodon carcinophagus _), that consistently attack the krill on the outside of the swarm. thus, krill swarms experience two attack modes simultaneously: high-density area attacks from whales and outside attacks from crabeater seals. it is therefore possible that the selection pressure to swarm from outside attacks (figure [fig:sdc-artificial-selection]) could outweigh the selection pressure to disperse from high-density area attacks.

as shown in figure [fig:attack-modes]d, we model high-density area attacks as an artificial attack that always targets the prey in the densest area of the swarm (i.e., with the highest local prey density). we note that this attack mode is the opposite of the density-dependent mechanism explored in the previous section, which favors predators that target prey in the _ least _ dense area of the swarm. once the target is selected, we execute the attack by removing the target prey and all other prey within 30 virtual meters of the target prey. outside attacks are modeled as described above. to study the effect of high-density area attacks on the evolution of swarming behavior, we allow the prey to evolve while experiencing both attack modes simultaneously. we vary the relative handling times of both attacks (table [table:hdaa-treatments]) to explore whether the relative attack frequency could explain why some swarming animals evolved swarming behavior despite the fact that they experience high-density area attacks.

as shown in figure [fig:sdc-artificial-selection-hdaa], prey experiencing only outside attacks quickly evolve cohesive swarming behavior (light grey triangles with a full line). however, when we introduce infrequent high-density area attacks (dark grey circles with a dashed line), the selection pressure for prey to swarm is reduced. finally, when we introduce frequent high-density area attacks (black squares with a dotted line), the prey do not evolve swarming behavior at all. thus, one possible explanation for animals evolving swarming behavior despite experiencing high-density area attacks is that the high-density area attacks are too infrequent relative to other attack types to exert a strong enough selection pressure for prey to disperse.

in summary, the artificial selection experiments provided us with two important pieces of information regarding the evolution of the selfish herd: (1) attacks on prey on the periphery of the herd exert a strong selection pressure for prey to swarm, and (2) prey in less dense areas, such as those on the outside of the herd, must experience a higher predation rate than prey in dense areas, such as the center of the herd. building upon the artificial selection experiments, we implemented density-dependent predation in a predator-prey coevolution experiment. adding predators into the simulation environment enables us to observe how embodied coevolving predators affect the evolution of the selfish herd.
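the two artificial attack modes contrasted above can be sketched as follows. the outside attack is assumed here to target the prey farthest from the group centroid, and the neighbourhood radius used to find the densest prey is an assumption; the 30-virtual-meter removal radius is taken from the text. distances ignore the toroidal wrap for brevity.

```python
# minimal sketch of the two artificial attack modes discussed above. the choice of
# "farthest from the centroid" for the outside attack and the density neighbourhood
# radius are assumptions; the 30-virtual-meter removal radius is stated in the text.
import math

def _dist(a, b):
    return math.hypot(a[0] - b[0], a[1] - b[1])

def outside_attack(prey):
    """remove one prey on the periphery of the group (farthest from the centroid)."""
    cx = sum(p[0] for p in prey) / len(prey)
    cy = sum(p[1] for p in prey) / len(prey)
    target = max(prey, key=lambda p: _dist(p, (cx, cy)))
    return [p for p in prey if p is not target]

def high_density_area_attack(prey, kill_radius=30.0, density_radius=30.0):
    """remove the prey in the densest area plus every prey within kill_radius of it."""
    target = max(prey, key=lambda p: sum(_dist(p, q) <= density_radius for q in prey))
    return [p for p in prey if _dist(p, target) > kill_radius]
```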
for this experiment, we coevolve a population of 100 predator genomes with a population of 100 prey genomes using a ga with the settings described in table [table:ga-settings]. specifically, we evaluate each predator genome against the entire prey genome population for 2,000 simulation time steps each generation. during evaluation, we place 4 clonal predator agents inside a reduced-size simulation environment with all 100 prey agents and allow the predator agents to make attack attempts on the prey agents. the prey genome population size, simulation environment area, and total number of ga generations were decreased in this experiment due to computational limitations imposed by predator-prey coevolution. we assigned the prey individual fitness values as in the previous experiments, and evaluated predator fitness according to the following equation:

w_{\mathrm{predator}} = \sum_{t=1}^{T} \left( A - S_t \right),

where t is the current simulation time step, T is the total number of simulation time steps (here, T = 2,000), A is the starting group size (here, A = 100), and S_t is the number of prey alive at update t. thus, predators are selected to consume more prey faster, and prey are selected to survive longer than other prey in the group. once all of the predator and prey genomes are assigned fitness values, we perform fitness proportionate selection on the populations via a moran process, increment the generation counter, and repeat the evaluation process on the new populations until the final generation (1,200) is reached.

to evaluate the coevolved predators and prey quantitatively, we obtained the line of descent (lod) for every replicate by tracing the ancestors of the most-fit prey mn in the final population until we reached the randomly-generated ancestral mn with which the starting population was seeded (see prior work for an introduction to the concept of a lod in the context of digital evolution). we again characterized the prey grouping behavior by measuring the swarm density of the entire prey population every generation. figure [fig:sdc-pred-prey-coevolution] depicts the prey behavior measurements for the coevolution experiments with density-dependent predation (black circles with a dashed line; mean swarm density at generation 1,200 ± two standard errors: 26.2 ± 2.3) and without density-dependent predation (light grey triangles with a full line; 3.9 ± 0.8). without density-dependent predation, the prey evolved purely dispersive behavior as a mechanism to escape the predators. however, with density-dependent predation, the prey evolved cohesive swarming behavior in response to attacks from the predators.
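the predator fitness measure written above can be computed directly from the time series of surviving prey; any normalisation used in the original work is not reproduced here.

```python
# minimal sketch of the predator fitness as reconstructed above:
# W = sum over time steps of (A - S_t), with A the starting prey count and S_t the
# number of prey alive at step t. earlier consumption therefore contributes to more
# summed terms, rewarding predators that consume more prey faster.
def predator_fitness(prey_alive_per_step, start_count=100):
    return sum(start_count - s_t for s_t in prey_alive_per_step)

# toy example over 2,000 steps in which the prey count drops steadily
series = [max(0, 100 - t // 40) for t in range(2000)]
print(predator_fitness(series))
```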
here we see that density-dependent predation provides a sufficient selective advantage for prey to evolve the selfish herd in response to predation by coevolving predators. accordingly, these results uphold hamilton's hypothesis that grouping behavior could evolve in animals purely for selfish reasons, without the need for an explanation that involves benefits to the whole group. moreover, the discoveries in this work refine the selfish herd hypothesis by clarifying the effect that different attack modes have on the evolution of the selfish herd.

now that we have evolved emergent swarming behavior in an agent-based model under several different treatments, we can analyze the resulting markov networks (mns) to gain a deeper understanding of the individual-based mechanisms underlying swarming behavior. for this analysis, we chose the most-abundant prey mn from each of the outside attack artificial selection experiment replicates, resulting in 100 mns that exhibit swarming behavior. first, we analyze the structure of the 100 mns by looking at the specific retina sensors that the mns evolved to connect to. as shown in figure [fig:prey-brain-connectivity], the prey mns show a strong bias for connecting to the prey-specific retina sensors in front of the prey, but not to the sides. additionally, some of the prey mns show a preference for connecting to the prey-specific retina sensors behind the prey. from this analysis alone, we can deduce that the retina sensors that are most conducive to swarming behavior are in front of the prey agent.

to understand how prey make movement decisions based on their sensory input, we map every possible input combination in the prey's retina to the corresponding movement decision that the prey made. due to the stochastic nature of markov networks, the prey agents do not always make the same movement decision when given the same input. thus, we take the most-likely output out of 1,000 trials as the representative decision for a given sensory input combination. effectively, this process produces a truth table that maps every possible sensory input to its corresponding movement decision. we input this truth table into the logic minimization software _ espresso _, which outputs the minimal representative logic of the truth table. this process results in a truth table that is reduced enough to make the evolved prey behavior comprehensible to humans.

surprisingly, the individual-based mechanisms underlying the emergent swarming behavior are remarkably simple. most of the prey mns evolved to make their movement decisions based on only one prey sensor in front of the prey agent. if the prey sensor does not detect another prey agent, the agent repeatedly turns in one direction until it detects another prey agent in that sensor. once the agent detects another prey agent in the sensor, it moves forward until that agent is no longer visible. this mechanism alone proved sufficient to produce cohesive swarming behavior in the majority of our experiments. interestingly, this discovery corroborates the findings of earlier studies suggesting that complex swarming behavior can emerge from simple movement rules when applied over a population of locally-interacting agents.
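the evolved rule summarised above is simple enough to write down explicitly; the sketch below uses illustrative movement conventions (8-degree turns, 1-virtual-meter steps, as in the model description) rather than the authors' actual markov network encoding.

```python
# minimal sketch of the dominant evolved prey rule: turn in place while the single
# frontal prey sensor is empty, move forward while it detects a conspecific.
# the agent representation and update helpers are illustrative only.
import math

def step(agent, front_prey_bit):
    """agent: dict with keys 'x', 'y', 'heading_deg'; mutated in place."""
    if front_prey_bit:
        rad = math.radians(agent["heading_deg"])
        agent["x"] += math.cos(rad)   # move 1 virtual meter forward
        agent["y"] += math.sin(rad)
    else:
        agent["heading_deg"] = (agent["heading_deg"] + 8.0) % 360.0  # keep turning
```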
in a small subset of the evolved prey mns, we observe mns that occasionally connect to one of the prey sensors behind them. these mns watch for a prey agent to appear in a single prey sensor behind the agent and turn repeatedly in one direction until a prey agent is no longer visible in that sensor. once a prey agent is no longer visible in the back sensor, the mn moves forward or turns depending on the state of the frontal sensor. we note that this mechanism _ only _ evolved in prey mns that already exhibited swarming behavior using one of the frontal sensors, which suggests that this mechanism does not play a major role in swarming behavior. instead, this mechanism seems to cause the prey agent to turn toward the center of the swarm instead of swarming in a circle with the rest of the prey agents. this mechanism can be thought of as a `` selfish herd '' mechanism that attempts to selfishly move the agent toward the center of the swarm to avoid predation.

the contributions of this work are as follows. first, we demonstrate hamilton's selfish herd hypothesis in a digital evolutionary model and highlight that it is the attack mode of the predator that critically determines the evolvability of selfish herd behavior. second, we show that density-dependent predation is sufficient for the selfish herd to evolve as long as the predators cannot consistently attack prey in the center of the group. finally, we show that density-dependent predation is sufficient to evolve grouping behavior in prey as a response to predation by coevolving predators. consequently, future work exploring the evolution of the selfish herd in animals should consider not only the behavior of the prey in the group, but the attack mode of the predators as well. following these experiments, we analyze the evolved control algorithms of the swarming prey and identify simple, biologically-plausible agent-based algorithms that produce emergent swarming behavior. while this work shows one method by which the evolution of grouping behavior can be studied, there remain many different hypotheses explaining the evolution of grouping behavior. our future work in this area will focus on directly testing these hypotheses in similar digital evolutionary models.

this research has been supported in part by the national science foundation (nsf) beacon center under cooperative agreement dbi-0939454, and nsf grant oci-1122617. any opinions, findings, and conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of the nsf. we gratefully acknowledge the support of the michigan state university high performance computing center and the institute for cyber enabled research (icer).

r.s. olson, d.b. knoester, and c. adami. critical interplay between density-dependent predation and evolution of the selfish herd. in _ proceedings of the genetic and evolutionary computation conference (gecco) _, pages 247-254, 2013.
h. kunz, t. züblin, and c.k. hemelrijk. on prey grouping and predator confusion in artificial fish schools. in _ proceedings of the international conference on the simulation and synthesis of living systems (alife) _, pages 365-371, 2006.
l. spector, j. klein, c. perry, and m. feinstein. emergence of collective behavior in evolving populations of flying agents. in _ proceedings of the genetic and evolutionary computation conference (gecco) _, pages 61-73, 2003.
b. xue, m. zhang, and w.n. browne. multi-objective particle swarm optimisation (pso) for feature selection. in _ proceedings of the genetic and evolutionary computation conference (gecco) _, pages 81-88, 2012.
e. vellasques, r. sabourin, and e. granger. gaussian mixture modeling for dynamic particle swarm optimization of recurrent problems. in _ proceedings of the genetic and evolutionary computation conference (gecco) _, pages 73-80, 2012.
y. marinakis and m. marinaki. a honey bees mating optimization algorithm for the open vehicle routing problem. in _ proceedings of the genetic and evolutionary computation conference (gecco) _, pages 101-108, 2011.
a. silva, a. neves, and e. costa. an empirical comparison of particle swarm and predator prey optimisation. in _ proceedings of the irish conference on artificial intelligence and cognitive science (aics) _, pages 103-110, 2002.
j.a. goldbogen, j. calambokidis, e. oleson, j. potvin, n.d. pyenson, g. schorr, and r.e. shadwick. mechanics, hydrodynamics and energetics of blue whale lunge feeding: efficiency dependence on krill density. 214:131-146, 2011.
animal grouping behaviors have been widely studied due to their implications for understanding social intelligence , collective cognition , and potential applications in engineering , artificial intelligence , and robotics . an important biological aspect of these studies is discerning which selection pressures favor the evolution of grouping behavior . in the past decade , researchers have begun using evolutionary computation to study the evolutionary effects of these selection pressures in predator - prey models . the selfish herd hypothesis states that concentrated groups arise because prey selfishly attempt to place their conspecifics between themselves and the predator , thus causing an endless cycle of movement toward the center of the group . using an evolutionary model of a predator - prey system , we show that how predators attack is critical to the evolution of the selfish herd . following this discovery , we show that density - dependent predation provides an abstraction of hamilton s original formulation of `` domains of danger . '' finally , we verify that density - dependent predation provides a sufficient selective advantage for prey to evolve the selfish herd in response to predation by coevolving predators . thus , our work corroborates hamilton s selfish herd hypothesis in a digital evolutionary model , refines the assumptions of the selfish herd hypothesis , and generalizes the domain of danger concept to density - dependent predation . group behavior , selfish herd theory , predator attack mode , density - dependent predation , predator - prey coevolution , evolutionary algorithm , digital evolutionary model .
local damage of buildings can either be due to accidental events like gas explosions, gross design-construction errors and malicious terrorist attacks, or can be thoroughly planned as part of controlled demolition processes with blast. subsequent cascades of failures can cause large economic and human losses when triggered by accidental events, and they make the difference between effective and dangerously ineffective demolitions. research on the progressive collapse of buildings has proceeded discontinuously since the 1970s, mostly prompted by outstanding and shocking catastrophes. interest in the subject rose after the ronan point partial collapse in 1968, due to a gas explosion. during the seventies, the fundamental approaches to structural robustness as well as many indicators, like the reserve strength ratio (rsr), were formulated, also with regard to off-shore structures that suffered brittle collapses in the north sea. renewed attention was given to the problem after the terrorist attacks against the alfred p. murrah federal building (oklahoma city, 1995) and the world trade center (wtc, new york, 2001). nowadays many codes prescribe alternate paths for the load (alternate load path method, alpm) and high toughness of structural members and their interconnections. nonetheless, these measures are not always sufficient to prevent progressive collapse. moreover, even though the serious damage amplification due to dynamics has been exhaustively pointed out in the literature, static analyses are still used in the context of the alpm. developing efficient tools to evaluate structural robustness and to prove the effectiveness of measures aimed at preventing progressive collapse is therefore an important issue, and today several algorithms and models are available in the literature.

a simplified approach to take dynamics and impacts between falling elements into account was proposed in earlier work and shown to be analogous to the variational approach to fracture mechanics. the scheme is based on energy balance, requires only static analyses, and was effectively applied to strain hardening and softening structural elements. analytical 1d models were developed after the wtc collapse. in these models, progressive buckling of the columns is due to the impact of the upper floors, considered as an increasing falling mass. differently, computer simulations permit the study of more complex 2d and 3d structures. several key factors that influence the robustness of frames have already been identified. for instance, we know that the loss of external columns from the facades or the corners of a building is the most serious scenario where, according to the alpm, one column is instantaneously removed. moreover, it was shown that beam-column connections are critical points of failure initiation and that catenary effects in the floor slabs can remarkably improve robustness. even though the final outcome of progressive collapse depends on the collisions between structural elements, most of the literature focuses on collapse initiation. collisions are rarely taken into account, either in detail with finite elements or in an approximate way within the framework of finite macro-elements. detailed finite elements are too demanding in terms of computational time for extensive parametric studies on large structures. differently, finite macro-elements are efficient and can be applied to large structures, but they require strong approximations to take collisions and catenary effects into account, especially in 3d.
the lack of experimental results on progressive collapses suggests an approach based on simulations whose reliability arises from the basic physics they incorporate. the results obtained with such algorithms can be used to construct, test and calibrate simpler models. in this work, we use spherical discrete elements (de) to simulate the progressive collapse of typical 3d framed structures made of reinforced concrete with a fixed, regular overall geometry (see sec. [simul]). the aim is to study the collapse initiation mechanisms due to dynamic stress redistribution, and the subsequent damage propagation mechanisms due to collisions between the structural elements. understanding the activated mechanisms, depending on the strength, the stiffness and the plastic properties of the structural elements, can help to choose optimal robustness-oriented design solutions as well as the most appropriate structural reinforcement of existing buildings. we perform parametric studies scaling the cross sectional size and reinforcement by the _ cross sectional scale factor _ and varying the plastic capacity of the structural elements (see sec. [sims]). in this way, we show the expected collapse mechanisms and the final consequences of progressive collapse, in terms of final collapse extent and fragment size distribution, for various combinations of the cross sectional scale factor and the plastic capacity. finally, in sec. [rcgen] we compare the damage response of structures with symmetric and asymmetric reinforcement in the beams.

the choice of spherical de as a simulation tool is motivated by several factors: first of all, de are naturally suitable for dynamic problems since they are based on the direct integration of newton's equations of motion, which makes the algorithm simple and fast. moreover, geometric and material nonlinearities, as well as local ruptures, can be easily modelled without remeshing. momentum transmission due to collisions can be included straightforwardly (see sec. [mdmodel]). a rather fine mesh is required to represent the actual volume of the structure and to reduce the error originating from the fact that, instead of considering sectional ruptures, we instantly remove beam elements that are responsible for the cohesion of the system (see sec. [mdmodel]). considering sectional ruptures would require remeshing, while a more precise representation of the volumes could be obtained with polyhedral de. for both strategies, the computational demand would grow remarkably. our model has previously been employed to study the fragmentation of materials, and its applicability to the progressive collapse of structures has been demonstrated in earlier work.

in the simulations, the intact structures are first equilibrated under the external service load and gravity. if some elements fail during this initial phase, the structure is incapable of carrying the service load and the simulation is stopped. differently, if no elements fail, the local damage induced by an accidental event is considered through the sudden removal of a central column of a facade at the first floor (fig. [figmdstruct]), according to the alpm. the subsequent dynamic stress redistribution can break other elements and trigger widespread progressive collapse. the dynamics of the system is followed by means of explicit time integration, using a 5th-order gear predictor-corrector scheme that, for the explored set of parameters, is stable provided the time increment is kept below a critical value.
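the overall simulation flow described above can be sketched as follows. the actual model integrates newton's equations with a 5th-order gear predictor-corrector; the placeholder below uses a plain explicit euler step and schematic force, failure and equilibration routines, so it only illustrates the structure of the procedure.

```python
# schematic sketch of the simulation flow: static equilibration, sudden removal of one
# column, then explicit time stepping with element-failure checks. the callbacks are
# placeholders; the real model uses a 5th-order gear predictor-corrector rather than
# the euler update shown here.
def simulate(nodes, elements, removed_column, dt, n_steps,
             equilibrate, compute_forces, element_failed):
    if not equilibrate(nodes, elements):        # structure cannot carry the service load
        return "static collapse"
    for el in removed_column:                   # sudden column removal (alpm scenario)
        elements.remove(el)
    for _ in range(n_steps):
        forces = compute_forces(nodes, elements)    # eb elements + contacts + gravity
        for node, f in zip(nodes, forces):          # explicit integration step
            node.velocity += dt * f / node.mass
            node.position += dt * node.velocity
        for el in [e for e in elements if element_failed(e)]:
            elements.remove(el)                     # instantly remove broken elements
    return nodes, elements
```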
in the following subsections [struct] and [mdmodel] we give a detailed description of the studied structures and of the employed model. [figure [figstruct] caption fragment: c) cross sections of the structural elements and arrangement of the reinforcement.]

for comparative reasons we limit ourselves to studying typical regular 3d frames formed by 4x4x4 identical square cuboid cells with a span of 4 m and a storey height of 3 m (see fig. [figstruct].a). the structures are made of columns along the vertical direction, clamped to the ground and connected at each storey by principal beams in the two horizontal directions. thin slabs spanning between the principal beams form the floors, while the presence of walls is not considered. the geometry of the cross sections of the structural elements is displayed in fig. [figstruct].c, where the subscripts denote columns, beams, and slabs. we set the height of the cross sections proportional to the length of the structural element, namely one tenth of the storey height for the columns, one tenth of the span for the beams and one fiftieth of the span for the slabs, and we scale each by a dimensionless _ cross sectional scale factor _. the scale factor is identical for all elements and enlarges their cross sections, making the structure stiffer and stronger. the base edges of the cross sections are proportional to the heights, with the aspect ratio coefficients 2/3 and 1. consequently, the area, the sectional inertia with respect to the principal directions and the torsional inertia of the cross sections of beams, columns and portions of floor slabs can be easily computed. we represent a structure made of reinforced concrete (rc) with the young's modulus and shear modulus given in the appendix, table [tabmecpar]. the reinforcement is symmetrically distributed (see fig. [figstruct].c) and its area is proportional to that of the cross section by the factors 1.78%, 0.58%, and 1.26%. the structure carries its own weight, the service external dead load of 285 kg/m^2 given by non-structural elements like pavement, plaster and internal walls, and the service live load of 667 N/m^2. the dead and live loads are considered uniformly applied to the upper faces of the floors.

we represent the columns and the beams by meshes of euler-bernoulli (eb) beam elements. the floor slabs are represented by a grid of eb elements (see fig. [figmdstruct].b) that define slab portions (see fig. [figstruct].c). to represent the volume of the structure, the length of the generic eb element is chosen comparable to the height of the corresponding cross section. the cross sections of the eb beam elements are set according to sec. [struct]. the error introduced by this simplifying hypothesis is acceptable because the bending of the beams and of the slabs in the horizontal plane is not relevant. [figure [figmdstruct] caption fragment: the light grey area marks the initially removed column; generic euler-bernoulli element in c) undeformed and d) deformed state.]
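the cross-sectional bookkeeping described above can be sketched as below; the 4 m / 3 m cell dimensions follow the reconstruction above, and the assignment of the 2/3 and 1 aspect ratios to beams and columns, respectively, as well as the slab-strip aspect, are assumptions, since the corresponding symbols are not reproduced in the text.

```python
# minimal sketch of the cross-section definitions: section heights of H/10 (columns),
# L/10 (beams) and L/50 (slab strips), all multiplied by the dimensionless cross
# sectional scale factor alpha. the mapping of the aspect ratios (2/3 to beams, 1 to
# columns) and the unit slab-strip aspect are assumptions.
L, H = 4.0, 3.0  # m, cell span and storey height (as reconstructed in the text above)

def rect_section(height, aspect):
    base = aspect * height
    return {"b": base, "h": height,
            "area": base * height,
            "I": base * height ** 3 / 12.0}   # bending inertia about the strong axis

def sections(alpha):
    return {
        "column": rect_section(alpha * H / 10.0, aspect=1.0),
        "beam":   rect_section(alpha * L / 10.0, aspect=2.0 / 3.0),
        "slab":   rect_section(alpha * L / 50.0, aspect=1.0),   # assumed strip aspect
    }

print(sections(1.0)["beam"])   # e.g. beam section for alpha = 1
```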
we represent the structural volume by spheres surrounding each node (see fig. [figmdstruct].a). the diameter of the sphere is equal to 90% of the length of the shortest eb element connected to the node. the mass of each sphere is obtained by summing the contributions from the mass of the eb elements connected to the node and an extra mass given by the external dead load. note that in the de algorithm the eb beam elements do not have a mass, since the mass is concentrated in the spheres. the rotational inertia of a sphere is computed considering its mass to be uniformly distributed. the eb elements determine the interactions between pairs of nodes, associating their relative rotations and displacements to forces and moments acting on them. in the following, we describe the linear elastic - perfectly plastic constitutive behavior and the failure rules of the eb elements.

elastic regime. in the linear elastic regime, we use force-displacement relations described in detail in previous work, thus taking into account the geometric nonlinearities due to large displacements and neglecting shear deformability. the rotations are defined starting from the deformed state of the generic eb element (see fig. [figmdstruct].d). namely, the bending rotations around the two principal axes align the local axes with the line connecting nodes 0 and 1, while a rotation around the element axis makes them parallel to the axes of the other node. at a given time step, we compute the axial strain and the rotations for every eb element starting from the positions of the spheres and from the orientations of the local axes frozen to nodes 0 and 1. the forces and moments at nodes 0 and 1 follow from eb beam theory and are expressed in terms of the young's modulus of concrete (see table [tabmecpar]), the bending moments and effective rotations around the principal axes at each node, the shear forces, the normal force and the torsion. damping inside the beams is considered by adding forces and moments at nodes 0 and 1 that are proportional to the elastic part of the nodal velocities and rotation rates, with the coefficients in table [taby-th-damp], but opposing in direction.
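for reference, a textbook form of the linear force-displacement relations for a two-node euler-bernoulli element with chord-referenced rotations (the quantities the elastic-regime paragraph above refers to) is given below; the symbols and sign conventions are standard ones and may differ from those of the original.

```latex
% standard euler-bernoulli two-node element relations (textbook form, chord rotations)
N = \frac{EA}{l}\,\Delta l, \qquad
T = \frac{GJ}{l}\,\Delta\varphi, \qquad
M_0 = \frac{2EI}{l}\left(2\theta_0 + \theta_1\right), \qquad
M_1 = \frac{2EI}{l}\left(\theta_0 + 2\theta_1\right), \qquad
V = \frac{M_0 + M_1}{l}
```

here Δl is the elongation, Δφ the relative torsional rotation, θ_0 and θ_1 the effective bending rotations at the two nodes measured from the chord connecting them, E and G the elastic moduli, A, I and J the sectional area and inertias, and l the element length.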
plastic regime. progressive collapse of structures involves many irreversible processes in the elements, and plastic energy dissipation can determine robust or vulnerable responses to damage. to consider plasticity in the eb elements, we make the simplifying assumption that axial and bending plasticization are uncoupled. this choice is justified by our intention to keep the model as simple as possible, leaving refinements to further work. furthermore, we do not consider plasticization due to shear or torsion in the rc, because the plastic dissipation associated with them is generally small. under these hypotheses, the eb elements enter the perfectly plastic regime in the axial direction or in bending at either node and around either principal axis independently, if one or more of the corresponding yield conditions (eqs. [ny_cond]-[by-cond]) is satisfied. we set the tension and compression yield thresholds neglecting the contribution of concrete in tension and of steel in compression. the bending yield threshold is evaluated referring to the principal axis and neglecting the contribution of concrete. denoting the fraction of reinforcement in tension, which differs between the columns and the beams and slabs (see fig. [figstruct].c), we obtain the bending yield threshold from the reinforcement in tension alone. we also add a further contribution to the bending yield threshold to consider that compressive normal forces increase it by reducing the area of concrete in tension during bending. we set this contribution assuming that the normal force is carried by the reinforcement alone and that it compensates the strain produced in the reinforcement in tension (eq. [dmy]). within the employed direct time integration scheme, we check yielding in terms of strains and rotations instead of forces and moments. thus, we adopt elongation, shortening and bending yield thresholds that satisfy eqs. [ny_cond]-[by-cond] in the equality form when inserted into eqs. [m-fi]-[mz-teta]. the expressions for these thresholds are summarized in table [taby-th-damp], where the additional term from eq. [dmy] is also shown. in the perfectly plastic regime, we consider the axial strain and the rotations to result from the sum of an elastic and a plastic contribution. fig. [figfluxey] shows how axial plasticization is implemented at the generic time step. history dependence is considered by accumulating the plastic strain in time. the linear distribution of the bending moment along the eb element makes the description of the rotational plastic regime more complicated. first, we compare the rotations obtained from the integration of newton's equations with the yield thresholds, to define whether plasticization occurs at only one node or at both. if, for instance, only node 0 enters the plastic regime, then the rotation at node 0 is split into an elastic and a plastic part, where the elastic part, put into eq. [m-fi], satisfies eq. [by-cond] in the equality form. differently, if both nodes 0 and 1 plasticize, both rotations must be separated into elastic and plastic parts, so that eq. [m-fi] and eq. [by-cond] in the equality form return a linear system of two equations in the two unknown plastic rotations. as for axial plasticization, the plastic parts must be subtracted from the total rotations at the next time step, and the plastic rotations must be cumulated in time.

element failure. if the strain in a cross section of an eb element is too high, the element fails and is instantly removed from the system. in the following, we scale the plastic capacity of the eb elements by a dimensionless parameter. if the plastic capacity is finite, the elements break when a combination of the elongation and of the effective rotations at the nodes is large with respect to the yield thresholds evaluated in uncoupled conditions (eq. [tbreak]). differently, if the plastic capacity is zero, the breaking rules (eq. [cbreak]) compare the elongation and the rotations with ultimate threshold values of elongation, shortening and rotation estimated in uncoupled conditions. failure due to shear and torsion is neglected because the shear reinforcement is assumed to be designed according to the capacity design approach, which avoids the occurrence of these brittle mechanisms before bending or axial strain failure. we assume the ultimate elongation and shortening to be equal to the ultimate tensile strain of steel and to the ultimate compressive strain of concrete, respectively (see table [tabmecpar]). the ultimate rotation is estimated considering a state of uniform bending, and thus uniform curvature, in the eb element. under this hypothesis, the rotation between the edges of an eb element is proportional to the curvature and determines the strain in the steel bars; the ultimate rotation is obtained by setting this strain equal to the ultimate strain of the steel.
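a minimal sketch of the elastic-perfectly-plastic update with a failure check, reduced for clarity to the axial component only, is given below; the yield and ultimate strains are placeholders, and the coupled elongation-rotation failure criterion of eqs. [tbreak]-[cbreak] is not reproduced.

```python
# minimal sketch of the axial elastic-perfectly-plastic update with a failure check.
# the coupled elongation/rotation criterion of the model is reduced here to a pure
# axial check, and the numerical thresholds are placeholders, not the model's values.
EPS_Y_T, EPS_Y_C = 2.0e-3, -2.0e-3   # yield strains in tension / compression (placeholders)
EPS_U_T, EPS_U_C = 5.0e-2, -3.5e-3   # ultimate strains (placeholders)

def axial_update(eps_total, eps_plastic):
    """return (eps_elastic, eps_plastic, failed) for one element at one time step."""
    eps_el = eps_total - eps_plastic
    if eps_el > EPS_Y_T:                    # yielding in tension
        eps_plastic += eps_el - EPS_Y_T     # accumulate plastic strain (history dependence)
        eps_el = EPS_Y_T
    elif eps_el < EPS_Y_C:                  # yielding in compression
        eps_plastic += eps_el - EPS_Y_C
        eps_el = EPS_Y_C
    failed = eps_total > EPS_U_T or eps_total < EPS_U_C
    return eps_el, eps_plastic, failed
```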
the expressions in table [taby-th-damp] also account for the constraint assured by the adopted meshing rule. progressive failure of eb elements can lead to the free fall of structural elements. we model inter-sphere collisions by a hertzian potential that generates a conservative repulsive force between partially overlapping spheres. this force is directed along the line connecting the centres of mass of the colliding spheres and is proportional to the overlapping volume through a stiffness parameter. a similar force is generated when a sphere crosses the plane that represents the ground. impacts dissipate energy due to local fragmentation and to sliding and rolling friction. thus we introduce forces that are proportional and opposed to the normal, tangential and rotational relative velocities of the overlapping spheres, with the damping coefficients summarized in table [tabmdimpacts]. in the tangential direction, either coulomb or dynamic sliding friction is considered.

the plastic capacity of the structural elements is a key factor of progressive collapse. plasticity determines the subsequent mechanisms of damage propagation based on collisions and the final extent of collapse. we employ the presented model to perform parametric studies on the structural _ cross sectional scale factor _ (see sec. [struct]) and on the _ plastic capacity _ (see eqs. [tbreak] and [cbreak]). we choose the cross sectional scale factor as a parameter because a structure with a given plastic capacity can be robust or extremely vulnerable to a column removal, depending on the cross sectional size of the elements. in this way, we can see the effect of plasticity in structures that exhibit different responses to the initial damage, ranging from no collapse to catastrophic collapse. in sec. [colmec] we describe the observed global and local primary mechanisms that can trigger progressive collapse and the subsequent collision-driven mechanisms. in sec. [collapse] we show the results of the parametric studies. we especially point out which collapse mechanisms occur depending on the cross sectional scale factor and the plastic capacity, and what the consequences are in terms of the final extent of the collapse. finally, in sec. [frag] we show how these two parameters affect the fragment size distribution of the rubble.

the initial column removal can trigger three different primary collapse mechanisms that start progressive collapse, two of which are global and lead to total collapse:

1. the first one is caused by elastic waves inside the floors that can separate even distant slabs from the beams (see fig. [figglobcoll].a).

[figure [figglobcoll] caption fragment: a) ... and b) progressive punching; local starting collapse: a) detail of the first failure area, where the light gray area marks the initially removed column and the arrows show the direction of crack propagation; b) first stages of the local progressive collapse; e) approximate static schemes of the elements where the starting damage propagation occurs; the springs in b represent the stiffness of the perpendicular beams.]

2. the other global mechanism separates fewer slabs from the beams, but the floors are progressively punched by the columns (see fig. [figglobcoll].b). most probably, this mechanism would turn into progressive buckling of the columns if they were less reinforced.

3. the local primary collapse mechanism is characterized by a crack propagating from point a (fig. [figglobcoll].c) and disconnecting the neighboring floor slabs from the beams (fig. [figglobcoll].d).
to explain why rupture occurs at point a instead of point b or c, we consider the schematic representation in fig. [figglobcoll].e. after the column removal, the cross sections a and c, which lie at the two sides of the same node b, show the same vertical displacement. in fig. [figglobcoll].e, the load on the beam containing section a is greater than the load on the beam containing section c, because the area that can transfer load to the former is larger. therefore, considering that the torsional stiffness of the perpendicular beams can be represented by torsional springs that reduce the bending moment in one of the two sections and increase it in the other, the maximum static bending moment is located in section a.

when a local primary collapse mechanism is triggered, the portion of structure above the removed column undergoes free fall and collides with the floor slabs below. this effect (fig. [fighamm]) generates elastic waves that can damage the neighbouring floor slabs. rarely does it have catastrophic consequences by itself.

[figure [fighamm] caption fragment: free fall of the floor slabs, c) almost horizontal for brittle structures (0.59, 0.2) and d) tilted for plastic structures; in brittle structures e) the slabs stacking on the ground push less against the columns than in f) plastic structures.]

other sources of damage transmission are the lateral impacts between falling rubble and still intact portions of the structure. this effect can destroy the perimeter beams of neighbouring floor slabs, which can eventually collapse, but it is usually not able to cause a widespread propagation of damage by itself. finally, a severe secondary mechanism is due to the forces exerted by the rubble stacking on the ground, which can cut the neighbouring columns at the base (base cutting, see fig. [fighamm].c-f).

the collapse mechanisms described in sec. [colmec] can occur depending on the cross sectional size and reinforcement of the structural elements, determined by the cross sectional scale factor, and on the available plastic capacity. fig. [figphases].a shows the response of the studied frames to the applied local damage. [figure [figphases] caption fragment: the plastic capacity is the maximum plastic rotation in uncoupled conditions (cf. eqs. [tbreak], [cbreak]).] for parameter pairs below the lower curve, the intact structures experience static collapse before the damage. such weak frames are not supposed to exist, since they cannot carry the service load. structures with parameters within the dashed area just above that curve completely collapse after the column removal triggers a global primary mechanism. this region is narrow and vanishes for small plastic capacity, since brittle failures induce compartmentalisation, abruptly interrupting the dynamic stress flow. in this way, both the propagation of waves and the global failure of the storeys due to progressive punching are avoided. structures with parameters above the dashed region and below the upper curve exhibit local primary collapse, but can still experience total collapse prompted by the base-cutting mechanism (see fig. [figphases].b). if the plastic capacity is small, the probability that a sequence of base cuttings provokes total collapse is low, due to the almost null tilt of the floor slabs during the free fall, the high degree of fragmentation after collisions, and the larger cross sectional sizes involved.
on the contrary, when the plastic capacity is large, sequences of base cuttings are frequent because the slabs tilt while falling and stack on the ground without considerably fragmenting. consequently, massive slabs lean against the thin base columns (small cross sectional scale factor) at a remarkable height, and cut them (fig. [fighamm].c-e). partial collapse can occur when the plastic capacity is non-zero and the cross sectional scale factor is sufficiently large (see fig. [figphases].b). in this case all the collision mechanisms of sec. [colmec] determine the final extent of the collapse. the overlap between the partial and total collapse regions in fig. [figphases] is due to the fact that the consequences of the collision-driven mechanisms are considerably variable. structures with parameter pairs above the upper curve are perfectly robust, i.e. they do not suffer any further failure after the column removal. finally, above a certain cross sectional scale factor the intact structure does not plasticize in the pre-damage stage, i.e. during the static application of the service load. since this is a common requirement for buildings in service conditions, structures with realistic sizes of the elements are located in this region.

[figure [figcol-alp] caption fragment: a) demolition ratio for parameter pairs between the transition curves; note that for each plastic capacity, values of the cross sectional scale factor larger or smaller than those of the plotted points give no collapse and total collapse, respectively. b) fragment mass distribution for different plastic capacities; the ordinate is the fraction of fragments with a given mass, normalised to the total mass of the structure.]

the beneficial effect of plasticity is evident from the fact that the transition curves decrease with increasing plastic capacity (see fig. [figphases].a). these curves decrease appreciably already at small plastic capacities, which means that the provision of a minimal plastic capacity is sufficient to remarkably improve the structural response, assuring complete safety to structures with sufficiently large cross sections. this result is a consequence of the symmetric distribution of reinforcement inside the beams and the floor slabs, which makes their bending behavior qualitatively similar to that of steel elements. steel structures are actually likely to sustain one column removal without damage propagation. in sec. [rcgen] we show that rc structures with realistic cross sectional size of the elements and asymmetric reinforcement distribution would experience partial collapse. differently, at larger plastic capacities the curves do not decrease much anymore (see fig. [figphases].a). this means that the initiation of progressive collapse is a local phenomenon associated with relatively small plastic stress redistributions, as already observed for 2d steel frames in the literature. fig. [figcol-alp].a shows a quantitative measure of the final collapse extent for structures with different plastic capacities. in particular, we compute the _ demolition ratio _, i.e. the fraction of lost living space at the end of the collapse. from the figure it can be immediately seen that structures with high plastic capacity undergo progressive collapse only if they have thin elements, i.e. small cross sectional scale factors, but if the collapse is triggered it will affect the whole system. the large variability of the demolition ratio is due to the collision-driven mechanisms; in particular, the largest values of the demolition ratio associated with partial collapse indicate the occurrence of base cutting.
when and , indicate the occurrence of . the size of the fragments produced after a structural collapse is interesting for controlled demolitions , where large fragments require further effort to relocate them . the probability density distribution of the mass of the fragments normalised by the total mass of the structure is shown in fig .[ figcol - alp].b . we observed that does not influence . in contrast , as the plastic capacity of the elements grows the power law regression lines in the figure shift towards larger sizes of the fragments , while their exponent , between -1.2 and -1.4 , does not show a clear trend . note that this exponent is close to the value of 1.35 reported for shell fragmentation . when is small , has large dispersion and most of the fragments are represented by single spheres completely disconnected from the others . calling the sum of the masses of the fragments made of one sphere and the total mass of the fragments , is larger than 0.7 when . this denotes a finite size effect , i.e. the single sphere is larger than the characteristic size of the fragments . on the contrary , when is high , is less dispersed and implies that the fragment size distribution is better captured . [ figure caption : c , d ) symmetric and asymmetric reinforcement arrangement . static deformed condition of the beams , bending moment in and regions in tension b ) before and c ) after the column removal . ] the results in sec . [ collapse ] refer to rc structures with symmetrically reinforced elements . nevertheless , the reinforcement inside the beams and the floor slabs of real structures is mostly concentrated in the regions under tension in service conditions . fig .[ figasym].a shows the symmetric and the more realistic asymmetric reinforcement arrangement inside the beams that are involved in the local primary collapse mechanism of fig .[ figglobcoll].c , d . the column loss in fig .[ figasym].c produces the inversion of the bending moment in and thus , in case of asymmetric reinforcement , the reinforcement under tension in passes from before the damage ( see fig . [ figasym].b ) to after the damage ( see fig . [figasym].c ) . therefore , if is small , section is likely to fail before section , in contrast to the observations in the case of symmetric reinforcement ( see sec .[ colmec ] ) . in the following , depending on , we estimate the curves that separate the collapse and robustness regions in fig .[ figphases].a for frames with asymmetric reinforcement . [ figure caption : beam ( cf .[ q(alfa ) ] ) . b ) transition to no collapse after the column removal for different ratios of asymmetry of the reinforcement in the beams . ] failure in occurs if : where is the static bending moment in after the damage ( see fig .[ figasym].c ) and is an amplifying factor that considers dynamics and plastic capacity . it ranges from for linear elastic - perfectly brittle structures ( ) to for perfectly plastic structures with . is the load per unit length on the beam ( see fig .[ figload].a ) : \varpi(\beta ) l \;\;,\ ] ] is the yield bending moment in , with because it is proportional to the area of the cross section times its height ( see eq . ) . thus , eq . [ bba = byba ] can be rewritten as .\varpi(\beta ) \frac{l^3}{6 } = dyn\cdot\psi t_s\rho_{s ,b}\lambda_b^3\delta_b f_y \alpha^3 \;\;.\ ] ] for a given , solving eq . [ alfac ] in for different allows one to trace the curves . nevertheless , in eq . [ alfac ] the parameter is still unknown .
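as a minimal numerical sketch of how the curves from eq . [ alfac ] can be traced once the amplifying factor is available ( it is calibrated from the symmetric - reinforcement case in what follows ) , one can balance a load - side demand against a capacity that scales with the cube of the cross sectional scale factor and solve for the critical factor with a standard root finder . the function names and coefficient values below are hypothetical placeholders , not the actual terms of eq . [ alfac ] :

from scipy.optimize import brentq

def demand(beta, dyn, span=5.0, q0=10.0e3):
    """hypothetical bending-moment demand after the column loss: load per unit
    length (growing with the tributary width, here ~ beta) times span**3 / 6,
    amplified by the dynamic factor dyn. placeholder values, not the paper's."""
    return dyn * q0 * (1.0 + beta) * span**3 / 6.0

def capacity(alpha, m_yield_ref=7.0e5):
    """hypothetical yield bending moment, scaling as alpha**3 (cross-sectional
    area times height, as stated in the text)."""
    return m_yield_ref * alpha**3

def alpha_critical(beta, dyn):
    """smallest cross-sectional scale factor for which capacity >= demand."""
    return brentq(lambda a: capacity(a) - demand(beta, dyn), 1e-3, 1e3)

if __name__ == "__main__":
    dyn = 2.0   # placeholder; table [ dyn_omega ] supplies the calibrated values
    for beta in (0.0, 0.25, 0.5, 1.0):
        print(f"beta = {beta:4.2f}   alpha_c = {alpha_critical(beta, dyn):.3f}")

a bracketing root finder is adequate here because the capacity grows monotonically with the scale factor , so the balance has a single crossing for each value of the asymmetry ratio .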
in order to assign values to , we repeat the previous argument for structures with symmetric reinforcement , for which we know the values at some obtained from our simulations ( see fig . [ figphases].a ) . in the case of symmetric reinforcement , the beam in fig .[ figasym].c fails in , where the bending moment in . since in this case , eq . [ alfac ] turns into : \varpi(\beta ) \frac{l^3}{3 } = dyn\cdot t_s\rho_{s ,b}\lambda_b^3\delta_b f_y \alpha^3 \;\;.\ ] ] solving eq .[ alfac_sym ] starting from the values in fig .[ figphases].a , we obtain the values in table [ dyn_omega ] . [ table [ dyn_omega ] : factors of eqs .[ alfac ] and [ alfac_sym ] ] inserting the values from table [ dyn_omega ] into eq . [ alfac ] , we obtain the curves in fig .[ figload].b . these curves show that collapse can also initiate in well - designed rc structures , i.e. with . moreover , we argue that the final extent of the collapse should be partial . note that the curve in fig .[ figphases ] does not change with since it only depends on the reinforcement in tension in the intact structure . also , the bold dotted line in fig .[ figphases].b regarding the base - cutting phenomenon does not depend on but only on the reinforcement inside the columns , which is usually symmetric . therefore , when an expansion of the partial collapse region towards higher values of is expected . progressive collapse of framed structures after local damage consists of an initial triggering and a subsequent damage propagation . if the initial damage is small , like the studied column removal , collapse initiation is generally a local phenomenon affecting the surroundings of the initially damaged area . global primary mechanisms can occur only in thin structures with enough plastic capacity to avoid the compartmentalisation effect produced by brittle ruptures , i.e. . nevertheless , if the starting damage is more serious than a single column removal , global primary mechanisms can also be expected for larger and more brittle structures . in the case of multiple column removal , progressive crushing or buckling of the columns is a possible global primary collapse mechanism that was not observed in this context . we showed that structures with minimal plastic capacity and symmetrically reinforced beams are robust against a single column removal . on the contrary , frames with asymmetric reinforcement would experience partial collapse even if made of elements with large plastic capacity . if a local primary collapse mechanism is triggered , the final extent of the collapse depends on secondary mechanisms driven by collisions between the structural elements . we showed that damage cannot propagate widely in structures with small plastic capacity since brittle failures compartmentalise the system . in contrast , structures with large plastic capacity tend to collapse entirely after a sequence of base cutting . this result is a consequence of the fact that , in the presence of large plastic capacity , we studied frames with thin structural elements . in fact , columns with larger and thus more realistic cross sectional size and reinforcement would not fail because of base cutting . collision - driven mechanisms also determine the outcome of the fragmentation process . we showed that the fragment mass distribution does not depend on the strength and stiffness of the structural elements . it is mainly influenced by the plastic capacity of the elements .
in structures with large plastic capacitythe fragments are more massive and represent an extra cost in controlled demolitions processes . in the present paper , for clearer interpretability of the results , we limited ourselves to simple geometry , collisions , and constitutive models . implementing more sophisticated collision models , e.g. using polyhedral discrete elements , or enabling discrete elements to fragment ( see e.g. ) , as well as rate effects are future challenges .we also neglected shear failures , since the structural elements were sufficiently small and slender , but this hypothesis should be removed to deal with structures made of large elements .these models should also be refined if the aim is to simulate in detail the collapse of specific real buildings . nevertheless , already at the present state, several interesting studies can be conducted to analyze the response to earthquakes and to investigate the influence of material disorder , geometric uncertainties , overall geometry , and structural connections .experimental validation remains problematic in the field of progressive collapse , because of difficulty in monitoring collapse of complex buildings , and due to problems concerning repeatability of experiments .further discussion about monitoring collapse of buildings for model validation , especially concerning demolitions , can be found in .23 [ 1]#1 [ 1]`#1 ` urlstyle [ 1]doi : # 1 z.p . baant and m. verdure .mechanics of progressive collapse : learning from the world trade center and building demolition .mech .- asce _ , 1330 ( 3):0 308319 , 2007 .carmona , f.k .wittel , f. kun , and herrmann h.j .fragmentation processes in impact of spheres .e _ , 770 ( 5):0 243253 , 2008 .cherepanov and i.e. esparragoza . progressive collapse of towers : the resistance effect ._ , 143:0 203206 , 2007 .chiaia and e. masoero .analogies between progressive collapse of structures and fracture of materials ._ , 1540 ( 1 - 2):0 177193 , 2008 .grierson , l. xu , and y. liu .progressive - failure analysis of buildings subjected to abnormal loading ._ comput.aided civ ._ , 20:0 155171 , 2005 .d. hartmann , m. breidt , v. van nguyen , f. stangenberg , s. hhler , k. schweizerhof , s. mattern , g. blankenhorn , b. mller , and m. liebscher .structural collapse simulation under consideration of uncertainty - fundamental concept and results ._ , 860 ( 21 - 22):0 20642078 , 2008 .d. isobe and m. tsuda .seismic collapse analysis of reinforced concrete framed structures using the finite element method .d. _ , 320 ( 13):0 20272046 , 2004 .g. kaewkulchai and e.b .dynamic behavior of planar frames during progressive collapse . in _ proceedings of the 16th asce engineering mechanics conference _ , 2003 .university of washington , seattle , july 16 - 18 , 2003 .g. kaewkulchai and e.b .williamson . modelling the impact of failed members for progressive collapse analysis of frame structures ._ j. perform .asce _ , 200 ( 4):0 375383 , 2006 .k. khandelwal , s. el - tawil , s.k .kunnath , and h.s .macromodel - based simulation of progressive collapse : steel frame structures . _ j. struct .asce _ , 1340 ( 7):0 10701078 , 2008 .m. levy and m. salvadori ._ why buildings fall down ?_ w. w. norton , new york , 1992 .luccioni , r.d .ambrosini , and danesi r.f .analysis of building collapse under blast load ._ , 26:0 6371 , 2004 . m.a .maes , k.e .fritzsons , and glowienka s. structural robustness in the light of risk and consequences analysis . __ , 160 ( 2):0 101107 , 2006 .s. marjanishvili and e. 
agnew .comparison of various procedure for progressive collapse analysis ._ j. perform ._ , 200 ( 4):0 365374 , 2006 . e. masoero , p. vallini , a.p .fantilli , and b.m .energy - based study of structures under accidental damage ._ key engineering materials _ , 417 - 418:0 557560 , 2010 .progressive collapse basics ._ modern steel constr ._ , 440 ( 3):0 3744 , 2004 .t. pschel and t. schwager ._ computational granular dynamics_. springer - verlag gmbh , berlin , 2005 .pretlove , m. ramsden , and a.g .dynamic effects in progressive failure of structures .. j. impact eng ._ , 110 ( 4):0 539546 , 1991 .. progressive collapse of the world trade center .1340 ( 2):0 125132 , 2008 .val and e.g. val .robustness of framed structures ._ , 160 ( 2):0 108112 , 2006 ._ progressive collapse assessment of tall buildings_. phd thesis , london imperial college , uk , 2007 .vlassis , b.a izzudin , a.y .elghazouli , and d.a .nethercot . design oriented approach for progressive collapse assessment of steel framed buildings ._ , 160 ( 2):0 129136 , 2006 .wittel , f. kun , h.j .herrmann , and b.h .fragmentation of shells ._ , 930 ( 3):0 035504 , 2004 . & & + + specific weight & & kg / m & 2500 + young modulus & & n / m & 30 + shear modulus & & n / m & + compressive yield stress & & n / m & 20 + ultimate shortening & & - & 0.0035 + + young s modulus & & n / m & 200 + yield stress & & n / m & 440 + yield strain & & - & 0.0022 + ultimate strain & & - & 0.05 + & & + + elongation & & ns / m & + torsion & & nms & + bending & & nms & + + elongation & & - & + compression & & - & + + columns & & rad & + & & & + beams & slabs & & rad & + & & & + + elongation & & & + compression & & & + rotation & & & + & & & + & & + + sphere - sphere & & n / m & + sphere - ground & & n / m & + + sphere - sphere & & ns / m & + sphere - ground & & ns / m & + + coulomb friction & & ns / m & + dynamic friction & & ns / m & + rolling friction & & nms & + + coulomb friction & & ns / m & + dynamic friction & & ns / m & + rolling friction & & nms & +
in this paper , we study the progressive collapse of 3d framed structures made of reinforced concrete after the sudden loss of a column . the structures are represented by elasto - plastic euler - bernoulli beams with an elongation - rotation failure threshold . we perform simulations using the discrete element method , considering inelastic collisions between the structural elements . the results show which collapse initiation and impact - driven propagation mechanisms are activated in structures with different geometric and mechanical features . specifically , we investigate the influence of the cross sectional size and reinforcement and of the plastic capacity of the structural elements . we also study the final collapse extent and the fragment size distribution and their relation to , and to the observed collapse mechanisms . finally , we compare the damage response of structures with symmetric and asymmetric reinforcement in the beams . * keywords : * frames , progressive collapse , robustness , discrete elements
synchronization processes on complex networks has received a lot of attention during the last decades .the interplay between the dynamical evolution of oscillators and their local interactions ( as given by the complex topology of a network ) usually results in non - trivial phenomena of relevance to physical , biological , technological and social systems .first introduced by pecora and carroll , the master stability function ( msf ) is nowadays one of the main theoretical methods for the study of network synchronization .msf is indeed a powerful tool to analyze the stability of the synchronization manifold when identical systems of oscillators are diffusively coupled .originally applied to undirected networks , the msf approach has been later extended to investigate enhancements and optimization of complete synchronization in weighted and asymmetric topologies ( see , and references therein ) .in the authors stated the so - called heterogeneity paradox , i.e. the fact that heterogeneous networks , wherein distances between nodes are relatively short , are less stable , in terms of synchronization , than their homogeneous counterparts .soon after , a proper and adequate weighting of the link strengths was shown to overcome this paradox , based again on concepts sparkling from the msf formalism .following works , have shown how different network s topological features influence the stability of the synchronous state , such as : heterogeneity of the node degree , degree - degree correlations , average shortest - path , betweenness centrality or clustering .these latter studies indicate that altering the structure of a network may result in maximizing the stability of the synchronous state , thus achieving a _ maximally stable synchronization structure _ .enhancement of the networks synchronizability can also be achieved by the application of genetic algorithms increasing the stability of the synchronized state . in this case , the networks self - organize by disconnecting the hubs and connecting peripheral nodes , thus increasing the homogeneity and leading to what is known as _ entangled networks _ . in our study, we report the enhancement of the stability of complete synchronization of an ensemble of dynamical units , when coupled simultaneously in different dimensions .we are concerned with a multivariable coupling , where the dynamical systems are coupled through different dimensions according to a certain structure of connections ( see fig.[fig01 ] for a schematic illustration ) .in particular , we consider a generic dynamical system whose associated vector state ( with ) evolves according to .each one of the state variables of the dynamical system at a given node can be coupled to the corresponding variable of any of the other systems ( i.e. , nodes ) of the network .equivalently , we can think of our system as a network with layers , each one accounting for the structure of couplings at each variable of the system .this multilayer point of view illustrated in fig.[fig01 ] is , in fact , just accounting for a multivariable coupling between the nodes of a network , nevertheless it will help us to provide a more concrete representation of the structure of the system , and possible connections to applications .so we will make use of it at certain points . if the coupling between oscillators does not include some of the state variables , _i.e. 
_ , the topology of the corresponding layers to those variables would be trivially given by a zero adjacency matrix , so we would not consider them to be proper layers ( as is the case of the layer corresponding to variable in fig.[fig01 ] ) .for simplicity , we consider a bidirectional coupling between the same variables of each system ( i.e. each layer is an undirected network ) .this is illustrated in fig .[ fig01 ] with an example of the case and .interestingly , our framework connects with the so - called _ hypernetwork _ formalism introduced by sorrentino . in this latter work, the author shows that a msf approach to hypernetwork synchronization is possible when the laplacian matrices of different layers ( accounting for the coupling through each variable ) have the same basis of eigenvectors , _i.e. _ , when they are simultaneously diagonalizable .this is a condition that has been shown to be fulfilled for two layers in three cases : ( i ) the laplacian matrices of the different layers are commuting ( a condition that automatically allows for a msf approach whatever the number of layers if the laplacian matrices form a pairwise commuting set ) , ( ii ) one of the two layers is unweighted and fully connected , or ( iii ) one of the two layers has an adjacency matrix of the form with . additionally , ref . contains an extension of the approach in to more general topologies by making use of a simultaneous block diagonalization of laplacian matrices corresponding to different layers , thereby decreasing the dimensionality of the linear stability problem . in our work, we consider the topology to be the same in each layer , trivially falling , from the of view of hypernetworks , into category ( i ) of ref . . this way, we present a study on how the stability of the synchronous state is enhanced by finding an optimal balance for the coupling between the different variables in a network of identical oscillators with multivariable coupling . on the one hand , we provide results based on extensive numerical simulations of networks of rssler - like oscillators to show the applicability of the proposed ideas and how the msf can help us to find the adequate balance between the couplings that optimizes the stability of the synchronous state of a network . on the other hand , by constructing an electronic version of the model , we show that these predictions are in good agreement with the experimental evidences in spite of the idealizations used in the theoretical treatment .in this section we explain how stability of the synchronous state can be enhanced by engineering a multivariable coupling function between nodes in a network and what balance between coupling variables is the most adequate . for the sake of concreteness we focus on a set of rssler - like oscillators coupled to their neighbors through both the and the variables , whose dynamics evolve according to the following equations : ) \\ \nonumber \label{system1 } \dot{y}_i&=&-\alpha_{2 } ( -\gamma x_i+\left ( 1-\delta \right ) y_i -d_r \sigma_y\phi \sum_{j=1}^na^y_{ij } \left [ y_j - y_i\right ] ) \\ \label{system2 } \dot{z}_i&=&-\alpha_{3}\left ( g_{x_i}+z_i\right)\;,\end{aligned}\ ] ] where , , , , , , , , , and and account for the coupling strengths of variables and . 
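for concreteness , the following sketch integrates a small network of this kind numerically and evaluates a synchronization error ; since several coefficients of the circuit equations above are not fully specified here , the standard rössler system is used as an illustrative stand - in , and the weight w that splits the total coupling strength between the x and y variables , the ring topology , and all numerical values are assumptions rather than the parameters used in the paper :

import numpy as np
from scipy.integrate import solve_ivp

def rossler_network(t, state, A, d, w, a=0.2, b=0.2, c=7.0):
    """standard rossler nodes (stand-in) diffusively coupled through x and y;
    the same adjacency matrix A is used for both layers, as in the text."""
    N = A.shape[0]
    x, y, z = state.reshape(3, N)
    L = np.diag(A.sum(axis=1)) - A            # graph laplacian of the common topology
    dx = -y - z - d * (1.0 - w) * (L @ x)     # coupling through the x variable
    dy = x + a * y - d * w * (L @ y)          # coupling through the y variable
    dz = b + z * (x - c)
    return np.concatenate([dx, dy, dz])

def sync_error(A, d, w, t_end=500.0, seed=0):
    """time-averaged mean distance of the x variables from their network average."""
    rng = np.random.default_rng(seed)
    N = A.shape[0]
    s0 = rng.normal(0.0, 1.0, 3 * N)
    sol = solve_ivp(rossler_network, (0.0, t_end), s0, args=(A, d, w),
                    rtol=1e-7, atol=1e-9)
    x = sol.y[:N, sol.t > 0.5 * t_end]        # discard the transient half
    return float(np.mean(np.abs(x - x.mean(axis=0))))

if __name__ == "__main__":
    N = 6                                     # six oscillators, as in the experiment
    A = np.zeros((N, N))
    for i in range(N):
        A[i, (i + 1) % N] = A[(i + 1) % N, i] = 1.0
    for w in (0.0, 0.5, 1.0):
        print(f"w = {w:3.1f}   <e> = {sync_error(A, d=0.5, w=w):.4f}")

sweeping w between 0 and 1 while keeping the total coupling fixed reproduces , for this toy stand - in , the kind of balance study between the two coupling variables that is described in the text .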
as we explain below , this chaotic oscillator has the highly non - trivial characteristic of being quite robust when implemented in electronic circuits .the adjacency matrices and contain the topology of each of two layers , each one accounting for the coupling through the and variables .elements and are one when nodes and are connected and zero otherwise . with these parametersthe oscillators display chaotic dynamics due to the nonlinearity introduced in , which consists on a piecewise function defined as : the coupling between oscillators is here controlled by two parameters : being the coupling strength and controlling how the coupling strength is distributed between variables and .this way , ( ) leads to a coupling restricted to variable ( ) , while a sweep of in the interval ] .these signals are digitally filtered by a third - order lowpass butterworth filter with a cutoff frequency of khz .all the experimental process is controlled from a virtual interface developed in labview 8.5 .the experimental procedure is the following : first , and are set to zero and then we introduce the six noisy signals and apply the factor gain ( ) . after a waiting time of 500 ms ( roughly corresponding to cycles of the autonomous systems ) , the signals corresponding to the variables of the 6 circuits are acquired by the analog ports ( ai 0 ; ai 1 ; ... ; ai 5 ) and the synchronization error is calculated and stored in the pc .noisy signals are injected by the digital converters ( ao 0 ; ao 1 ; ... ; ao 5 ) and this part of the process is repeated 100 times ( until the maximum value of is reached ) .finally , is increased to the next value and is swept again .the whole process is repeated 100 times until the maximum value of is reached .figure [ fig05 ] shows the experimental results for a configuration identical to that of the numerical simulations shown in fig .[ fig03 ] .we observe that the qualitative agreement between numerics and experiment is excellent , in spite of unavoidable parameter mismatches in the experimental realization due to the tolerance of the electronic components ( between 5 and 10 ) .the parameter mismatch , together with the experimental noise , make the oscillators in the network not only slightly different from their mathematical definition , but also non - identical to one another .this way , we confirm experimentally the feasibility of using the msf for evaluating how the coupling through multiple variables enhances the stability of the synchronous state of a network under realistic conditions .we have seen how an adequate distribution of the coupling strength between the variables of a dynamical system leads to an enhancement of the stability of the synchronized manifold . in particular , we have shown that it is possible obtain a msf that depends on the parameter accounting for the distribution of strength , while maintaining the global coupling constant .interestingly , we report the existence of an optimal value of indicating what is the most adequate amount of coupling to be considered at each coupling variable .the optimal value of is independent of the topology of the network , as long as we use the same coupling structure among all variables . 
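the linear - stability counterpart of the sweep just described is the msf itself . as a rough , assumption - laden sketch ( again with the standard rössler equations as a stand - in and a diagonal coupling matrix mixing the x and y variables with weight w ) , the largest lyapunov exponent of the variational equation can be estimated by periodic renormalization of a small perturbation propagated along a reference trajectory :

import numpy as np
from scipy.integrate import solve_ivp

A_, B_, C_ = 0.2, 0.2, 7.0                    # standard rossler parameters (stand-in)

def rhs(t, u, nu, w):
    """reference rossler trajectory plus linearized perturbation delta."""
    x, y, z = u[:3]
    delta = u[3:]
    ds = [-y - z, x + A_ * y, B_ + z * (x - C_)]
    J = np.array([[0.0, -1.0, -1.0],
                  [1.0,  A_,   0.0],
                  [z,    0.0,  x - C_]])      # jacobian of the rossler flow
    E = np.diag([1.0 - w, w, 0.0])            # coupling acts on x and/or y
    return np.concatenate([ds, (J - nu * E) @ delta])

def msf(nu, w, t_trans=200.0, t_tot=600.0, dt=1.0, seed=1):
    """largest lyapunov exponent of the variational equation (benettin-style)."""
    rng = np.random.default_rng(seed)
    u = np.concatenate([rng.normal(0.0, 1.0, 3), rng.normal(0.0, 1e-6, 3)])
    u = solve_ivp(rhs, (0.0, t_trans), u, args=(nu, w), rtol=1e-8, atol=1e-12).y[:, -1]
    u[3:] /= np.linalg.norm(u[3:])            # start from a unit perturbation
    log_sum, t = 0.0, t_trans
    while t < t_trans + t_tot:
        u = solve_ivp(rhs, (t, t + dt), u, args=(nu, w), rtol=1e-8, atol=1e-12).y[:, -1]
        norm = np.linalg.norm(u[3:])
        log_sum += np.log(norm)
        u[3:] /= norm                         # renormalize to avoid over/underflow
        t += dt
    return log_sum / t_tot

if __name__ == "__main__":
    for w in (0.0, 0.5, 1.0):
        print(f"w = {w:3.1f}   msf(nu = 1.0) = {msf(1.0, w):+.3f}")

a negative exponent at a given value of the normalized coupling ( coupling strength times laplacian eigenvalue ) indicates that the corresponding transverse mode is damped , which is the stability criterion used throughout this kind of analysis .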
using electronic circuits ,we have also checked the robustness of the results when noise and parameter mismatch are considered , which confirms the theoretical predictions given by the parametric msf and , in its turn , reveal that the requirement of the oscillators to be identical can be relaxed .the proposed framework of decomposing the different dimensions of the system ( variables ) in interconnected layers paves the way to the use of the multilayer networks tools to further analyze synchronization phenomena in multivariable coupled systems .indeed , the current theoretical efforts in network theory to define and study complex structures resulting from the interaction of networks , _ e.g. _ interdependent networks and multiplex networks among others , have made great progress in recent times , in showing new emergent phenomena with no counterpart in single ( monolayer ) complex networks .new developments in that direction could be further extend the results we here present , either using , e.g. , the insight developed within the hypernetwork formalism , or using new approaches for analyzing multilayer networks .indeed , our methodology has some limitations that must be further explored in the future .first of all , the fact that the coupling must have the same topological structure at all variables is a strict constraint , since real systems may have different configurations depending on the coupling variable .more general , fully multilayer topologies could be considered by resorting to the hypernetwork formalism introduced in , and for greater generality one can use the method in ref . . with this methodology ,the case of could be addressed at the cost of introducing some more complexity to the problem .nevertheless , it would be of great interest , since it would raise new questions such as what the adequate combination of topologies would be given a specific distribution of weights .second , since the parametric msf depends on the dynamical system implemented in the network , we can not guarantee the existence of an optimal balance of the distribution of coupling between layers in other dynamical systems , at least until their corresponding msf have been analyzed .authors acknowledge d. papo and p.l .del barrio for fruitful conversations .support from mineco through projects fis2011 - 25167 , fis2012 - 38266 and fis2013 - 41057-p is also acknowledged .aa and jgg acknowledge support from the ec fet - proactive project plexmath ( grant 317614 ) and multiplex ( grant 317532 ) . jgg acknowledges support from mineco through the ramn y cajal program , the comunidad de aragn ( grupo fenol ) and the brazilian cnpq through the pve project of the ciencia sem fronteiras program .aa acknowledges icrea academia and the james s. mcdonnell foundation . r.s.e . acknowledges universidad de guadalajara , culagos ( mexico ) for financial support ( op / pifi-2013 - 14msu0010z-17 - 04 , proinpep - rg/005/2014 , udg - conacyt / i010/163/2014 ) and conacyt ( becas mixtas mzo2015/290842 ) . a network with a connectivity matrix ( if nodes and are connected , and 0 otherwise ) can be represented by the laplacian matrix , where ( denoting the kronecker delta )
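assuming the standard combinatorial laplacian ( diagonal degree matrix minus adjacency matrix ; treated here as an assumption , since the explicit expression is not reproduced above ) , a minimal construction and its spectrum , whose nonzero eigenvalues rescale the coupling in the msf analysis , looks as follows :

import numpy as np

def laplacian(A):
    """graph laplacian of a symmetric 0/1 adjacency matrix (assumed convention:
    L = diag(degrees) - A)."""
    return np.diag(A.sum(axis=1)) - A

if __name__ == "__main__":
    # same 6-node ring used in the simulation sketches above
    N = 6
    A = np.zeros((N, N))
    for i in range(N):
        A[i, (i + 1) % N] = A[(i + 1) % N, i] = 1.0
    L = laplacian(A)
    eigvals = np.sort(np.linalg.eigvalsh(L))
    print("laplacian eigenvalues:", np.round(eigvals, 3))
    # the zero mode corresponds to the synchronization manifold; the remaining
    # eigenvalues enter the master stability function as rescaled couplings.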
synchronization processes in populations of identical networked oscillators are in the focus of intense studies in physical , biological , technological and social systems . here we analyze the stability of the synchronization of a network of oscillators coupled through different variables . under the assumption of an equal topology of connections for all variables , the master stability function formalism allows assessing and quantifying the stability properties of the synchronization manifold when the coupling is transferred from one variable to another . we report on the existence of an optimal coupling transference that maximizes the stability of the synchronous state in a network of rssler - like oscillators . finally , we design an experimental implementation ( using nonlinear electronic circuits ) which grounds the robustness of the theoretical predictions against parameter mismatches , as well as against intrinsic noise of the system .
quantum information science ( qis) is a rapidly emerging discipline with the potential to revolutionize measurement , computation and communication . sitting at the intersection of quantum electronics , quantum optics , and information theory , qis offers a new paradigm for the collection , transmission , reception , storage and processing of information , based on the laws of quantum , rather than classical , physics .applications of qis certainly have the potential to generate totally new quantum information technology ( qit ) , but ultimately any future qit industry will be justified by commercial interest in the products and services that it supports .practical results in qis are currently right at the cutting edge of experimental quantum research , so the route to qit is very hard going . from the commercial perspective, a major challenge is the creation of relatively simple qit which is nevertheless economically viable .this would generate revenue and expand current industrial participation and interest in the field , effectively seeding a new qit industry , in the same way hearing aids were the first commercial application of the transistor and the beginning of the classical it industry .a long term goal for qit is the realization of many - qubit scalable quantum processors .it is known that these machines would outperform their classical counterparts at certain tasks such as factoring and searching, and the search continues for new applications .a shorter term goal is the realization of ( say ) 50 - 100 qubit processors .these would certainly be better at quantum simulation than any foreseeable conventional it and , as a research tool , would expose qit to a whole new class of curious and creative people , with all their potential for new ideas and applications .however , perhaps the most immediate qit , that which will stimulate a new industry , is based on or related to quantum communication and metrology. it has become clear over the last two decades that computer and network security that relies primarily on software protocols is potentially porous , being based on unproven mathematical assumptions . in qis , quantum computation and communication protocols can be devised in which unconditional privacy is guaranteed by fundamental laws of physics . although it is not clear yet that quantum key distribution ( qkd ) will be the first profitable application of qit , it is possible that extensions of qkd ( e.g. , controlled entanglement swapping ) , photonic state comparison ( for quantum signature verification ) , and full quantum communication at high data rates will become compelling to financial , medical , and other institutions and their customers .furthermore , it is already clear that distributed quantum algorithms can efficiently enable solutions to economics problems ( e.g. , public goods economics ) that are difficult to treat with conventional mechanisms , but it is not yet known whether other economic procedures such as auctions have superior quantum solutions .similarly , quantum metrology and imaging have interest for the nanoscale manufacturing and physical security industries , as these techniques allow tiny phase shifts , displacements , and forces to be accurately measured remotely even when the target is enclosed within an inaccessible or hostile environment . 
although there are still open questions , one promising route for starting a qit industry is to found it on communication and metrology applications , based on the generation , transmission , processing , and detection of a few photonic qubits . it is certainly clear that photons ( or other quantum states of light ) are _ the _ qubits of choice for communication , and so for few - qubit processing there is a case for keeping everything optical . it is also the case that potentially useful processing tasks could be performed with moderate ( 10 - 20% ) gate error rates , rather larger than the stricter error bounds demanded for fault - tolerant many - qubit processing . this `` all optical '' scenario is the motivation for our work . clearly , taking this approach , quantum information processing primitives based on nonlinear quantum optics ( such as a universal set of optical gates and single - photon detectors ) must be developed and fabricated . these primitives would potentially allow the construction of few - qubit nanoscale quantum optical processors that could be incorporated into existing pcs and communication networks . in this paper we discuss the possibility of realizing these primitives through use of electromagnetically induced transparency ( eit ) . [ caption of fig . [ fourlevel ] : atom and a nearly resonant three - frequency electromagnetic field . ( a ) in the semiclassical view , two atomic energy levels are separated by the energy , and coupled by a field oscillating at the frequency . the strength and phase of the corresponding dipole interaction are represented by the rabi frequency . ( b ) in the quantum view , the states of the atom + photons system separate into manifolds coupled internally by resonant transitions . ( c ) a model mach - zehnder interferometer illustrating an architecture for a `` dual rail '' quantum phase - shifter using four - level atoms . the upper arm is denoted by `` 1 '' and the lower arm by `` 0 . '' ] in previous work , we considered a model of the nonlinear electric dipole interaction between three quantum electromagnetic radiation fields with angular frequencies , , and a corresponding four - level atomic system , as shown in fig .[ fourlevel](a ) . we considered atoms , fixed and stationary in a volume that is small compared to the optical wavelengths , and we assumed that the three frequency channels of the resonant four - level manifold of the resulting quantum system shown in fig .[ fourlevel](b ) are driven by fock states containing , , and photons , corresponding to the rabi frequencies , , and , respectively . as an example of the use of an eit system as a phase - shifter , we incorporate the atomic system into the dual - rail mach - zehnder interferometer shown in fig .[ fourlevel](c ) . we wish to apply a phase shift to the photon in mode on the upper rail , conditioned on the presence of one or more photons in mode . in one arm of the interferometer , the four - level atoms are prepared using to provide a phase shift at the probe frequency while remaining largely transparent and dispersive .
in the second arm , , andthe system is tuned to match the absorption and dispersion provided by the atoms in the first arm , allowing the interferometer to remain time - synchronous .we must be careful to demonstrate that the interaction of either arm with a photon at the probe frequency that has entered the interferometer at the input port will entangle the optical modes with each other but _ not _ with either collection of atoms .therefore , we solve the density matrix equation of motion in the presence of a completely general lindblad damping model , and monitor the element that corresponds to the initial ( ground ) state of the entire collection of atoms in both ensembles .if the field in mode is indeed described by a fock state , then using the quasi - steady - state approximation to obtain eqs .[ eq : rho10 ] and [ eq : w_10_4 ] .however , our subsequent numerical work has shown us that those results hold over a broad range of experimental parameters even when . ]( ) we obtain where the rabi frequencies anddetunings are defined in fig .[ fourlevel](a ) and ( b ) , and \ , \left|\omega_a\right|^2 } { ( \nu_a + i \gamma_{20})\left[(\nu_a - \nu_b + i \gamma_{30})(\nu_a - \nu_b + \nu_c + i \gamma_{40 } ) - \left|\omega_c\right|^2\right ] - ( \nu_a - \nu_b + \nu_c + i \gamma_{40 } ) \left|\omega_b\right|^2 } .\ ] ] the constant represents the net decoherence rate ( depopulation + dephasing ) of level of the quantum manifold shown in fig .[ fourlevel](b ) for the atom + field system on the first rail relative to the evolution of a system ( absent mode ) on the second rail , while represents the decoherence rate of the collective ground state divided by the number of atoms .since the atomic levels and are metastable by assumption , the decoherence rates and represent pure dephasing mechanisms .suppose that we consider the concrete case of a phase gate that couples single photons in modes and and is driven by a coherent state in mode .we wish to optimize the experimentally controllable parameters so that we introduce a phase shift between the two arms of the system with minimum error .we proceed by choosing _ a priori _ an error which occurs over the entire gate operation , and determining the parameters needed for the gate to perform with this level of error. there will be three sources of error : the dephasing described by and ; the additional depopulation described by and ; and the error arising from the finite value of for the coherent state in mode , which prevents the system from evolving to a perfect phase - shifted state even in the absence of decoherence .suppose that a phase shift of is applied by the phase gate after a time . then the density matrix element given by eq .( [ eq : rho10 ] ) can be rewritten as , where the effective decoherence time is defined as for a given gate fidelity ( a measure of the distance between the ideal output state of the gate and the state that is actually generated ) , the net error due to dephasing and depopulation is . 
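the trade - off contained in this error budget can be illustrated with a toy calculation : if the depopulation contribution falls with the detuning while the dephasing contribution grows with the gate time ( itself increasing with detuning for a fixed phase shift ) , the total error has a minimum at a finite detuning . the scalings and numbers below are assumptions chosen only to exhibit this behaviour , not the expressions derived in the text :

import numpy as np

def gate_error(nu_a, gamma_dephase, gamma_depop=1.0, omega=50.0, phi=np.pi):
    """toy error model (assumed scalings, placeholder units):
    depopulation error suppressed by detuning, dephasing error growing with
    the gate time, which is taken proportional to the detuning."""
    t_gate = phi * nu_a / omega**2            # assumed: larger detuning -> slower gate
    err_depop = gamma_depop * phi / nu_a      # assumed: suppressed by detuning
    err_dephase = gamma_dephase * t_gate      # assumed: grows with gate duration
    return err_depop + err_dephase

if __name__ == "__main__":
    nu = np.logspace(0, 4, 2000)
    for g in (1e-4, 1e-3, 1e-2):
        err = gate_error(nu, g)
        i = int(np.argmin(err))
        print(f"dephasing {g:.0e}: optimal detuning ~ {nu[i]:.0f}, "
              f"minimum error ~ {err[i]:.3f}")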
in the absence of significant dephasing ,it is clear that a large value of the detuning reduces the value of the ratio and therefore ; in fact , the maximum practical value of would be set by the duration of the pulses in the three electromagnetic modes and the experimental convenience of accurately measuring the detuning .however , when the pure dephasing terms and are finite , reaches a minimum for a finite value of .we assume that the dephasing rates are small enough that ) , we obtain ^\frac{1}{2}\ , \frac{q_c^2}{q_b^2 } \qquad \mathrm{and } \qquad \tau_\mathit{eff } = 2 \left[\tilde{\gamma}_{10}\tilde{\gamma}_{20}\left(\gamma_{10 } \tilde{\gamma}_{20 } + \left|\omega_a\right|^2\right)\right]^{\frac{1}{2 } } \frac{q_b^2}{\left|\omega_b\right|^2 } \frac{q_c^2}{\left|\omega_c\right|^2 } \frac{\phi}{\left|\omega_a\right|^2 } , \ ] ] where consider the case of a phase gate , and assume that the pure dephasing rates are small enough that .we see immediately that we can have only if , giving and .now , if , then we must have , this requirement is eased considerably to is replaced by the coherent state with ) shows that , and it is clear that the gate performs a phase shift when there is a photon present in _ both _modes and , and no phase shift when either is absent .however , mode is driven by a coherent state , and an input coherent state will evolve according to where , make a significant contribution to the sum , and the phase can be approximately factored out as . however ,if the coherent state is too weak , then the phase shift will be imperfectly implemented with a significant error , even in the absence of decoherence .furthermore , we see from eqs .( [ eq : gt ] ) and ( [ eq : g20_t ] ) that the effect of the decoherence increases with the intensity of the coherent state , and therefore for a given dephasing there is some ideal value of that minimizes the error. we are now ready to estimate the magnitude of the dephasing that must be achieved to obtain a given error . we assume that and . we obtain this equation gives an estimate of the error arising from a _ fock _ input state with an optimized value of for a particular level of decoherence . in fig .[ fig : gplot](a ) we plot the error induced on a _ coherent _ input state due to both decoherence and the finite intensity of the coherent state as a function of the dephasing , where we have chosen both and to minimize the error .figures [ fig : gplot](b ) and [ fig : gplot](c ) shows these values of and chosen to give the largest possible dephasing for a given error .is not suppressed , i.e. . the dashed lines ( - - - ) illustrates the case when has been suppressed relative to by a factor of 1000 .the dotted lines ( ) give the error in the case of the one - qubit phase gate when modes and are occupied by coherent states . in ( c ) , the solid and dashed lines correspond to an ideal value of .however , the dotted line ( ) , shows a _ minimum _ value that must take for the error in the one - qubit gate to remain below the value shown ; can take on larger values without affecting either the decoherence or the value of required to minimize the error .for the one qubit gate , . 
]as noted above , if can be suppressed , then the appropriate constraint becomes relatively easy to satisfy .for the requirement , the dephasing can be larger than in the unsuppressed case by a factor of and still produce the same error .by contrast , for , the dephasing can be larger by a factor of .therefore , suppressing means that the detuning required is also much smaller .in order to perform a two - qubit conditional phase gate with an error of 20% ( and assuming we can not suppress the depopulation from level ) , from fig . [fig : gplot ] we find that the required parameters are , , and , and the gate operation time is determined by . however , if can be suppressed by a factor of 1000 relative to , the requirements are much reduced : the dephasing needed is , the detuning is , and the coherent state must have . the corresponding gate operation time is determined by .this system can also be used as a one - qubit phase shift gate ( or an eit - based qnd detector, where we can assume that ) , _ without _ the extra effort described above for suppression of spontaneous emission from the atomic level .if mode remains in the single - photon fock state , the system will act as a phase shifter on mode , and the above analysis applies .alternatively , we can replace the quantum state in mode with an intense coherent - state driving field .now moderate ( i.e. , non - classical ) values of and introduce errors . however , examining eqs .( [ eq : g20_t ] ) , we see that unlike for the two - qubit gate the effects of decoherence depend much less sensitively on the intensities of the coherent states . as shown in fig .[ fig : gplot ] , this means that we can eliminate the error due to the finite size of the coherent state , and we are only left with an error due to decoherence .we have studied gates for quantum information processing based on a quantum treatment of eit systems . we have analyzed in detail the performance of a two - qubit phase gate ( and , by extension , that of a one - qubit phase gate ) as functions of both atom and field properties , and we have described a general optimization method that selects a detuning that minimizes the gate error for a given phase shift .the resulting constraints on the allowable dephasing rates in these systems are quite stringent for high - fidelity operation . however ,if the spontaneous emission rate from the atomic level can be suppressed significantly , then demonstration of a moderate - fidelity phase gate becomes experimentally achievable .99 m. a. nielsen and i. l. chuang , * quantum computation and quantum information * ( cambridge university press , 2000 ) .p. w. shor , `` polynomial - time algorithm for prime factorization and discrete logarithms on a quantum computer , '' proc .35th annual symposium on the foundations of computer science , ed .s. goldwasser , 124 ( ieee computer society press , los alamitos , ca , 1994 ) ; siam j. computing * 26 * , 1484 ( 1997 ) ; quant - ph/9508027 . l. k. grover , `` a fast quantum mechanical algorithm for database search , '' proc .28th annual acm symposium on the theory of computing ( stoc ) , 212 ( may 1996 ) ; quant - ph/9605043 ; phys . rev. lett . * 79 * , 325 ( 1997 ) ; quant - ph/9706033 .s. lloyd , `` universal quantum simulators , '' science * 273 * , 1073 ( 1996 ) .n. gisin et al . , `` quantum cryptography , '' rev .phys . * 74 * , 145 ( 2002 ) .h. lee , p. kok and j. p. 
dowling , `` quantum imaging and metrology , '' proceedings of the sixth international conference on quantum communication , measurement and computing ( qcmc02 ) , edited by j.h .shapiro and o. hirota ( rinton press , paramus , new jersey , usa , 2003 ) , quant - ph/0306113 .. k .- y .chen , t. hogg and r. g. beausoleil , `` a quantum treatment of public goods economics , '' quant .* 1 * , 449 ( 2003 ) ; arxiv : quant - ph/0301013 .r. g. beausoleil , w. j. munro , and t. p.spiller ( 2003 ) , `` applications of coherent population transfer to quantum information processing , '' j. mod .opt * 51 * , 1159 ( 2004 ) ; arxiv : quant - ph/0302109 . w. j. munro , k. nemoto , r. g. beausoleil , and t. p.spiller , `` a high - efficiency quantum non - demolition single photon number resolving detector , '' arxiv : quant - ph/0310066 .
we provide a broad outline of the requirements that should be met by components produced for a quantum information technology ( qit ) industry , and we identify electromagnetically induced transparency ( eit ) as potentially key enabling science toward the goal of providing widely available few - qubit quantum information processing within the next decade . as a concrete example , we build on earlier work and discuss the implementation of a two - photon controlled phase gate ( and , briefly , a one - photon phase gate ) using the approximate kerr nonlinearity provided by eit . in this paper , we rigorously analyze the dependence of the performance of these gates on atomic dephasing and field detuning and intensity , and we calculate the optimum parameters needed to apply a phase shift in a gate of a given fidelity . although high - fidelity gate operation will be difficult to achieve with realistic system dephasing rates , the moderate fidelities that we believe will be needed for few - qubit qit seem much more obtainable .
the rotation induced oblateness of astronomical bodies is a classical problem in newtonian and celestial mechanics ( for the early history , see todhunter ) . it has twice played an important role in the history of science . in the early eighteenth century measurements indicated a prolate shape of the earth , in strong conflict with the newtonian prediction . this was later shown to be wrong by more careful measurements by maupertuis , clairaut , and celsius in northern sweden in 1736 . then , in 1967 , measurements of the solar oblateness were published and according to these it was much larger than the sun s surface angular velocity would explain . the confirmation of general relativity by mercury s perihelion precession would then be lost . this problem , too , is now gone , and the modern consensus is that the solar oblateness is too small to affect this classic test of general relativity . the subject of the flattening of rotating astronomical bodies is thus quite mature . the classical theory is due mainly to clairaut , laplace , and lyapunov . also radau , darwin , de sitter and many others have made important contributions . more recent accounts of the theory can be found in , for example , jeffreys , zharkov et al. , cook , moritz and , partly , in chandrasekhar . some pedagogical efforts can be found in murray and dermott , or in kaula . as is plain from these references the theory is quite involved . only the unrealistic assumption that the body is homogeneous gives compact analytical results . otherwise a specified radial density distribution is needed and one resorts to cumbersome series ( multipole ) expansions , or purely numerical methods , for quantitative results . here we will present analytical results based on the assumption that the body consists of a central point mass surrounded by a homogeneous fluid , the so called point core model . by varying the relative mass of the fluid and the central point particle one can interpolate between the extreme limits of newton s homogeneous body and the roche model with a dominating small heavy center . the point core model goes back to the work of g. h. darwin . more recently it has been used to study the shape of outer planet moons , see hubbard and anderson , dermott and thomas . apart from the basic point core approximation ( i ) , several further approximations are assumed here . these are : ( ii ) that the shape is determined by hydrostatic equilibrium and ( iii ) that the shape is ellipsoidal . these are not consistent : according to hamy s theorem , see moritz , the exact shape is not ellipsoidal , so we regard an ellipsoidal shape as a constraint and find the equilibrium shape , among these , by minimizing the energy . a further approximation ( iv ) is the neglect of differential rotation . on the other hand fixed volume ( `` bulk incompressibility '' ) need not be assumed ; the equilibrium volume problem separates from the shape problem . in spite of these approximations , which are standard in the literature , the mathematics can be quite involved . in this article i hope to clarify and simplify it as much as possible .
mathematically our model then becomes a three degree of freedom mechanical system for which we can calculate the kinetic and potential ( gravitational plus centrifugal ) energies exactly .multipole expansions in terms of spherical harmonics are not needed .the three degrees of freedom correspond to the three semi - axes of the ellipsoid ( ) but these are transformed to three generalized coordinates that describe size ( or volume ) , , spheroidal , , and triaxial , , shape changes , see eq .( [ eq.def.xi.eta ] ) .the statics problem of equilibrium shape is solved by minimizing the potential energy , for a fixed , eq.([eq.energy.xi.tau ] ) . for slow rotationthe shape will not be triaxial so .finding the shape , or flattening , is then only a matter of minimizing a dimensionless potential energy , given by the last two terms of eq .( [ eq.energy.xi ] ) . here is defined by eq .( [ eq.basic.phi.xi ] ) and plotted in fig .[ fig1 ] while is a constant that depends on rotational parameter and dimensionless moment of inertia , eq.([eq.kappa2.nu ] ) .the root of the equation thus gives the flattening .this root is to very high accuracy given by ^{-1/6 } , \ ] ] see eqs .( [ eq.maclaurin.approx.inv ] ) and ( [ eq.eccentricity.relation.xi ] ) , for the rotational parameters that can be found in the solar system . from this result , simple formulas , eqs.([eq.maclaurin.approx.relation.q ] ) ( [ eq.maclaurin.approx.relation.epsilon ] ) , relating the observables , rotation parameter , , gravitational quadrupole , , and excentricity squared , , are obtained .these appear to be , partly , new , and their usefulness is demonstrated by comparing with empirical data for the sun and the rotating planets of the solar system .variational methods have been used before to study similar problems , see for example abad et al .denis et al. have pointed out that variational methods generally fail to provide estimates of their accuracy .therefore the agreement of our formulas with empirical data , as demonstrated in table /cite(table ) , is important and demonstrates that our model catches the essential physics of rotational flattening .finally small amplitude oscillations near the equilibrium are investigated , starting from the lagrangian for our three degree of freedom model system .this gives useful insight into the physics of free stellar or planetary oscillations and their coupling to rotation .our approximations are , however , too severe for these results to be of quantitative interest .assume that are rectangular cartesian coordinates in three - dimensional space of a point with position vector an ellipsoid , with semi - axes , is the solid defined by ^{1/2 } \le 1 .\ ] ] if we put and define ^{1/2}\ ] ] the ellipsoid is the set of points that fulfill where is the geometric mean ( or volumetric ) radius {abc}eee$}}_z ] .\end{aligned}\ ] ] an elementary computation shows that the kinetic energy , , becomes \end{aligned}\ ] ] here , and .the potential energy is given in eq .( [ eq.energy.xi.tau ] ) . to get an explicit potential energy we must find some expression for the volume dependent ( pressure ) energy . 
as a model for this part of the energy we take the simple expression where we must take if the corresponding force is to balance gravity and prevent collapse . putting these together we thus find the total potential energy . if we use the definition of in eq .( [ eq.def.k ] ) we find the expression for the potential energy of the system . combining eq .( [ eq.kin.energy.2 ] ) with to form the lagrangian , we can get the dynamics of this three degree of freedom system exactly by finding and solving the euler - lagrange equations of the system . here we will instead assume small oscillations and make a normal mode analysis . we first taylor expand to quadratic terms around and then solve for the zeros of the first derivatives of this quadratic to get the position of the minimum . the result is we now introduce new variables , through then , of course , . in the kinetic energy we replace in the coefficients with the equilibrium positions and expand to first order in . in the potential energy we make the same replacement in the quadratic taylor expansion , expand to first order in and throw away constant terms . the result is that \ ] ] and that .\ ] ] only the degrees of freedom and are coupled , and only when . for the spheroidal -mode and the triaxial -mode are degenerate ( have the same frequency ) . the value is special since for that value all three modes have the same frequency . for the -mode , pressure is the restoring force in the contracting phase of the motion and gravity in the expanding phase . for the other two modes , , gravity alone acts to restore the spherical minimum . if we put , eq .( [ eq.def.k ] ) shows that , and eq.([eq.maclaurin.approx.eq ] ) , with , that . for we then get the approximate eigenfrequencies to first order in . from this one easily finds the following first order results ,\\ t_{\tau } = \frac{2\pi}{\omega_{0 } } \left[1+\frac{1}{2}\left(1+\frac{16}{15n}\right)\epsilon\right],\end{aligned}\ ] ] for the corresponding periods . the free radial oscillation mode for the earth is known to have = . this means that one can calculate and find it to be for the earth . since the equilibrium radius of the rotating earth is given by eq.([min.pos , lambda ] ) , a small calculation now shows that earth s mean radius is m larger due to rotation compared to the non - rotating case . the fairly large also shows that the mode periods are essentially independent of . for the earth one finds that the -mode has period min .
due to the rotational splitting , which is given by , the -mode is seconds shorter . the longest observed free oscillation period of the earth has period ( see udas ) . the main explanation for the discrepancy is probably that the model neglects elastic forces . though not quantitatively reliable from this point of view , the model has the advantage of showing in a simple way how rotational splitting of the modes arises and the order of magnitude of such splittings . some apparently new results relating to the classic theory of the figure of rotating bodies have been presented . the basic model , a point mass at the center of a homogeneous fluid , is characterized by the mass ratio of these two components , and interpolates between the limits of an ellipsoidal homogeneous fluid and a body dominated by a small central mass concentration . it allows simple analytic treatment but is still flexible enough to correctly describe the essential hydrostatics of real rotating planets as well as stars . such models are always useful , especially when one wishes to analyze , compare , and understand large numbers of observational data . in spite of the simplicity there is no perturbation order to which the results are valid ; the nonlinearity of the basic equations can be retained . to be more precise , eq .( [ eq.maclaurin.approx ] ) shows that the formulas are valid to seventh order in the eccentricity ( within the basic model with its simplified mass distribution ) . as demonstrated by the numerical experiments on jupiter and saturn data , this is essential . in fact table [ table ] indicates that the geometric oblateness of the equipotential surface of mars and uranus determined from observed and values using ( [ eq.maclaurin.approx.relation.epsilon ] ) are probably more reliable than current observational data . finally the dynamics of the model reveals the essentials of the coupling and rotational splitting of the most basic free oscillations of a planet without recourse to expansion in spherical harmonics . + + * acknowledgement * constructive comments from dr . john d. anderson on a previous version of this manuscript are gratefully acknowledged . + 1 . the first row for each planet gives literature data . these are observational except , which are based on theoretical calculations . + 2 . the second row for each planet gives the corresponding calculated values as given by formulas ( [ eq.maclaurin.approx.relation.q ] ) - ( [ eq.eps.e2 ] ) in such a way that for each pair of observational , and , the missing third is calculated . the moment of inertia is calculated from observational and using formula ( [ eq.j2.expression ] ) , , with from formula ( [ eq.maclaurin.approx.relation.epsilon ] ) . + 3 . the third row for jupiter and saturn gives data calculated in a similar way to that in the second row , except that the first order formula ( [ eq.j2.alpha.epsilon ] ) has been used to find the third value from two observational values . + 4 . for the sun , and have been calculated from an observation - based -value discussed in the text and a theoretical , using formulas ( [ eq.ellipt.kappa2 ] ) , ( [ eq.j2.kappa2 ] ) and respectively .
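as a minimal sketch of how a missing third observable is filled in from the other two , one can use the standard first order hydrostatic relation f = ( 3 j2 + q ) / 2 , with q the dimensionless rotation parameter ; this is only a stand - in for the higher order relations ( [ eq.maclaurin.approx.relation.q ] ) ( [ eq.maclaurin.approx.relation.epsilon ] ) referred to above , and the constants below are approximate literature values used as a check :

def rotation_parameter(omega, a_eq, gm):
    """dimensionless rotation parameter q = omega^2 a^3 / (G M)."""
    return omega**2 * a_eq**3 / gm

def flattening_first_order(j2, q):
    """standard first-order hydrostatic flattening f = (3*J2 + q) / 2
    (a stand-in, not the paper's higher-order formula)."""
    return (3.0 * j2 + q) / 2.0

if __name__ == "__main__":
    # earth: sidereal rotation rate, equatorial radius, GM, J2 (approximate values)
    omega = 7.292e-5          # rad/s
    a_eq = 6.378137e6         # m
    gm = 3.986004e14          # m^3/s^2
    j2 = 1.08263e-3
    q = rotation_parameter(omega, a_eq, gm)
    f = flattening_first_order(j2, q)
    print(f"q = {q:.4e}")
    print(f"f = {f:.4e}  (1/f = {1.0/f:.1f}; observed 1/f is about 298.3)")

the recovered 1/f of roughly 298 is close to the observed value , which illustrates the point made in the table notes that two of the three observables essentially fix the third .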
a point mass at the center of an ellipsoidal homogeneous fluid is used as a simple model to study the effect of rotation on the shape and external gravitational field of planets and stars. maclaurin's analytical result for a homogeneous body is generalized to this model. the absence of a third order term in the taylor expansion of the maclaurin function leads to further simple but very accurate analytical results connecting the three observables: oblateness ( ), gravitational quadrupole ( ), and angular velocity parameter ( ). these are compared to observational data for the planets. the moments of inertia of the planets are calculated and compared to published values. the oblateness of the sun is estimated. oscillations near equilibrium are studied within the model.
quantum physics is a technically difficult and abstract subject . the subject matter makes instruction quite challenging and students perpetually struggle to master basic concepts . here i discuss the development and evaluation of quantum interactive learning tutorials ( quilts ) that help advanced undergraduate students learn quantum mechanics .quilts are designed to create an active learning environment where students have an opportunity to confront their misconceptions , interpret the material learned , draw qualitative inferences from quantitative tools learned in quantum mechanics and build links between new material and prior knowledge .they are designed to be easy to implement regardless of the lecturer s teaching style .a unique aspect of quilts is that they are research - based , targeting specific difficulties and misconceptions students have in learning various concepts in quantum physics . they often employ computer - based visualization tools to help students build physical intuition about quantum processes and keep students consistently engaged in the learning process by asking them to predict what should happen in a particular situation , and then providing appropriate feedback .they attempt to bridge the gap between the abstract quantitative formalism of quantum mechanics and the qualitative understanding necessary to explain and predict diverse physical phenomena. they can be used in class by the instructors once or twice a week as supplements to lectures or outside of the class as homework or as self - study tool by students .the quilts use a learning cycle approach in which students engage in the topic via examples that focus their attention , explore the topic through facilitated questioning and observation , explain what they have learned with instructor facilitating further discussion to help refine students understanding and extend what they have learned by applying the same concepts in different contexts . the guidance provided by the tutorials is decreased gradually and students assume more responsibility in order to develop self - reliance .in addition to the main tutorial , quilts often have a warm - up " component and a tutorial homework " .students work on the warm - up " component of a quilt at home before working on the main tutorials in class .these warm - ups typically review the prior knowledge necessary for optimizing the benefits of the main tutorial related to that topic .the tutorial homework " associated with a quilt can be given as part of their homework to reinforce concepts after students have worked on the main tutorial .the tutorial homework helps students apply the topic of a particular quilt to many specific situations to learn about its applicability in diverse cases and learn to generalize the concept appropriately .we design a pre - test and post - test to accompany each quilt .the pre - test assesses students initial knowledge before they have worked on the corresponding quilt , but typically after lecture on relevant concepts .the quilt , together with the preceding pre - test , often make students difficulties related to relevant concepts clear not only to the instructors but also to students themselves .the pre - test can also have motivational benefits and can help students better focus on the concepts covered in the quilt that follows it .pre-/post - test performances are also useful for refining and modifying the quilt. 
an integral component of the quilts is the adaptation of visualization tools for helping students develop physical intuition about quantum processes .a visualization tool can be made much more pedagogically effective if it is embedded in a learning environment such as quilt .a simulation , preceded by a prediction and followed by questions , can help students reflect upon what they visualized .such reflection can be useful for understanding and remembering concepts .they can also be invaluable in helping students better understand the differences between classical and quantum concepts .we have adapted simulations from a number of sources as appropriate including the open source java simulations developed by belloni and christian . some of the quilts , e.g. , the quilt on double - slit experiment which uses simulations developed by klaus muthsam , are also appropriate for modern physics courses .the double - slit quilt uses simulations to teach students about the wave nature of particles manifested via the double slit experiment with single particles , the importance of the phase of the probability amplitude for the occurrence of interference pattern and the connection between having information about which slit a particle " went through ( which - path " information ) and the loss of interference pattern . for the quilts based on simulations, students must first make predictions about what and why they expect a certain thing to happen in a particular situation before exploring the relevant concepts with the simulations .for example , students can learn about the stationary states of a single particle in various types of potential wells .students can change the model parameters and learn how those parameters affect stationary states and the probability of finding the electron at a particular position .they can also take various linear combinations of stationary states to learn how the probability of finding the electron at a particular position is affected .they can calculate and compare the expectation values of various operators in different states for a given potential .they can also better appreciate why classical physics may be a good approximation to quantum physics under certain conditions .students can also develop intuition about the differences between bound states and scattering states by using visual simulations .guided visualization tools can also help students understand the changes that take place when a system containing one particle is extended to many particles . 
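as a concrete stand-in for the kind of exploration described above (how the well parameters affect the stationary states, the energies, and the position probability density), the sketch below works through the textbook infinite square well; it is only an illustration and is not the physlet/open source physics simulation actually used in the quilts.

```python
import numpy as np

# stationary states of a particle in a one-dimensional infinite square well of
# width L (hbar = m = 1); a stand-in for the wells explored in the simulations
def psi_n(n, x, L):
    return np.sqrt(2.0 / L) * np.sin(n * np.pi * x / L)

def E_n(n, L):
    return (n * np.pi)**2 / (2.0 * L**2)

L = 1.0
x = np.linspace(0.0, L, 2001)
dx = x[1] - x[0]

for n in (1, 2, 3):
    prob = psi_n(n, x, L)**2                    # time-independent density
    mean_x = np.sum(x * prob) * dx              # expectation value <x>
    print(f"n={n}:  E={E_n(n, L):7.3f}   <x>={mean_x:.3f}")

# doubling the width of the well lowers every energy level by a factor of four
print(E_n(1, L) / E_n(1, 2 * L))                # -> 4.0
```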
similar to the development of tutorials for introductory and modern physics , the development of each quilt goes through a cyclical iterative process .preliminary tutorials are developed based upon common difficulties in learning a particular topic , and how that topic fits within the overall structure of quantum mechanics .the preliminary tutorials are implemented in one - on - one interviews with student volunteers , and modifications are made .these modifications are essential for making the quilts effective .after such one - on - one implementation with at least half a dozen students , tutorials are tested and evaluated in classroom settings and refined further .working through quilts in groups is an effective way of learning because formulating and articulating thoughts can provide students with an opportunity to solidify concepts and benefit from one another s strengths .it can also provide an opportunity to monitor their own learning because mutual discussions can help students rectify their knowledge deficiencies .students typically finish a quilt at home if they can not finish it in class and take the post - test associated with it individually in the following class for which no help is provided .below , we briefly discuss case studies related to the development and evaluation of three quilts on time development of wave function , uncertainty principle , and mach - zehnder interferometer .the development of each quilt starts with a careful analysis of the difficulties students have in learning related concepts .after the preliminary development of the tutorials and the pre-/post - tests associated with them , we conduct one - on - one 1.5 hour interviews with 6 - 7 student volunteers for each tutorial using a think - aloud protocol . in this protocol , students are asked to work on a tutorial while talking aloud so that we could follow their thought processes .hints are provided as appropriate .these individual interviews provide an opportunity to probe students thinking as they work through a tutorial and gauge the extent to which students are able to benefit from them . after each of these interviews ,the tutorials are modified based upon the feedback obtained . then , they are administered in the classroom and are modified further based upon the feedback .table 1 shows the performance on the pre-/post - test of advanced undergraduate students in a quantum mechanics course on the last version .the pre - test was given after traditional instruction on relevant concepts but before the tutorial .below we summarize each tutorial and discuss student performance on the case - study .we note that the pre - test and post - test for a quilt were not identical but often had some identical questions .one difficulty with the time development of wave functions stems from the fact that many students believe that the only possible wave functions for a system are stationary states . since the hamiltonian of a system governs its time development , we may expand a non - stationary state wave function at the initial time in terms of the stationary states and then multiply appropriate time dependent phase factors with each term ( they are in general different for different stationary states because the energies are different ) to find the wave function at time .students often append an overall time - dependent phase factor even if the wave function is in a linear superposition of the stationary states . 
to elicit this difficulty , the pretest of this quilt begins by asking students about the time dependence of a non - stationary state wave function for an electron in a one - dimensional infinite square well .if the students choose an overall phase factor similar to that for a stationary state , they are asked for the probability density , i.e. , the absolute square of the wave function . as noted above , when a non - stationary state is expanded in terms of stationary states , the probability density at time , ,is generally non - stationary due to a different time - dependent phase factor in each term .if students incorrectly choose that the wave function is time - independent even for a non - stationary state , arguing that overall phase factors cancel out , the tutorial asks them to watch the simulations for the time evolution of the probability densities .simulations for this quilt are adapted from the open source physics simulations developed by belloni and christian . these simulations are highly effective in challenging students beliefs .students are often taken aback when they find that the probability density oscillates back and forth for the non - stationary state .figure 1 shows snapshots adapted in quilt from an open source physics simulation by belloni and christian for the probability density for a non - stationary state wave function for a one - dimensional harmonic oscillator well . in the actual simulation ,students watch the probability density evolve in time .when students observe that the probability density does not depend on time for the stationary - state wave function but depends on time for the non - stationary - state wave function , they are challenged to resolve the discrepancy between their initial prediction and observation . in our model , this is a good time to provide students guidance and feedback to help them build a robust knowledge structure .students then work through the rest of the quilt which provides appropriate support and helps solidify basic concepts related to time development .students respond to time development questions with stationary and non - stationary state wave functions in problems involving different potential energies ( e.g. , harmonic oscillator , free - particle etc . ) to reinforce concepts , and they receive timely feedback to build their knowledge structure . for each case, they check their calculations and predictions for the time - dependence of the probability density in each case with the simulations . within an interactive environment , they learn that the hamiltonian governs the time development of the system , and that the eigenstates of the hamiltonian are special with regards to the time evolution of the system .they learn that not all possible wave functions are stationary - state wave functions , and they learn about the difference between the time - independent and time - dependent schroedinger equation .table 1 shows that in the case study in which nine students took both the pre-/post - tests , the average student performance improved from to after working on the quilt . 
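the point the simulation makes can also be checked with a few lines of code. the sketch below (in units with hbar = m = L = 1, a choice made only for this example) compares the probability density of a stationary state with that of an equal superposition of the two lowest infinite-well states; only the latter has a time-dependent density, tracked here through the expectation value of position.

```python
import numpy as np

# infinite square well with hbar = m = L = 1: a stationary state has a static
# probability density, an equal superposition of the two lowest states does not
L = 1.0
x = np.linspace(0.0, L, 1001)
dx = x[1] - x[0]
psi = lambda n: np.sqrt(2.0 / L) * np.sin(n * np.pi * x / L)
E   = lambda n: (n * np.pi)**2 / 2.0

def density(t, coeffs):
    """|Psi(x,t)|^2 for Psi = sum_n c_n psi_n(x) exp(-i E_n t)."""
    wave = sum(c * psi(n) * np.exp(-1j * E(n) * t) for n, c in coeffs.items())
    return np.abs(wave)**2

stationary    = {1: 1.0}
superposition = {1: 1 / np.sqrt(2), 2: 1 / np.sqrt(2)}

for t in (0.0, 0.5, 1.0):
    xs = np.sum(x * density(t, stationary)) * dx
    xn = np.sum(x * density(t, superposition)) * dx
    print(f"t={t:3.1f}   stationary <x>={xs:.3f}   superposition <x>={xn:.3f}")
# the stationary-state value stays at 0.500, the superposition value oscillates
```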
as discussed earlier , the most common difficulty on the pre - test was treating the time evolution of non - stationary states as though those states were stationary states .moreover , two students who were absent on the day the pre - test and tutorial were administered in the class but were present for the post - test in the following class obtained and on the post - test respectively .the quilt on the uncertainty principle contains three parts with increasing levels of sophistication . depending upon the level of students , the instructors may choose to use only one or all parts .the first part of this quilt helps students understand that this fundamental principle is due to the wave nature of particles . with the help of the de broglie relation ,the quilt helps students understand that a sinusoidal extended wave has a well - defined wavelength and momentum but does not have a well - defined position . on the other hand , a wave pulse with a well defined positiondoes not have a well defined wavelength or momentum .students gain further insight into the uncertainty principle in the second part of the quilt by fourier transforming the position - space wave function and noticing how the spread of the position - space wave function affects its spread in momentum space .computer simulations involving fourier transforms are exploited in this part of the quilt and students fourier transform various position - space wave function with different spreads and check the corresponding changes in the momentum - space wave function .the third part of the quilt helps students generalize the uncertainty principle for position and momentum operators to any two incompatible observables whose corresponding operators do not commute .this part of the quilt also helps students bridge this new treatment with students earlier encounter with uncertainty principle for position and momentum in the context of the spread of a wave function in position and momentum space .the quilt also helps students understand why a measurement of one observable immediately followed by the measurement of another incompatible observable does not guarantee a definite value for the second observable .table 1 shows that the average performance of 12 students who took the last version of the quilt improved from to from pre - test to post - test . in a question that was common for both the pre - test and post - test ,students were asked to make a qualitative sketch of the absolute value of the fourier transform of a delta function .they were asked to explain their reasoning and label the axes appropriately .only one student in the pre - test drew a correct diagram . 
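the pre-/post-test question about the fourier transform of a delta function, and the gaussian wave functions explored in the quilt, can be mimicked numerically. the sketch below is a minimal stand-in (a discrete fft on a finite grid, not the simulation used in the tutorial): the magnitude of the transform of a narrow spike is flat, and widening a gaussian in position space narrows its spread in k-space, with the product of the two spreads staying at 1/2.

```python
import numpy as np

# discrete stand-in for the fourier-transform part of the quilt
N, box = 4096, 200.0
x  = np.linspace(-box / 2, box / 2, N, endpoint=False)
dx = x[1] - x[0]
k  = 2.0 * np.pi * np.fft.fftfreq(N, d=dx)

def phi_mag(psi_x):
    # |phi(k)| ~ |(1/sqrt(2 pi)) * integral psi(x) exp(-i k x) dx|
    return np.abs(np.fft.fft(psi_x)) * dx / np.sqrt(2.0 * np.pi)

def spread(weight, grid):
    p = weight**2 / np.sum(weight**2)
    mean = np.sum(grid * p)
    return np.sqrt(np.sum((grid - mean)**2 * p))

spike = np.zeros(N); spike[N // 2] = 1.0 / dx          # numerical "delta"
mag = phi_mag(spike)
print("delta: relative variation of |phi(k)| =", (mag.max() - mag.min()) / mag.mean())

for sigma in (0.5, 1.0, 2.0):                          # gaussian wave packets
    psi = np.exp(-x**2 / (4.0 * sigma**2))
    print(f"sigma_x = {spread(np.abs(psi), x):.3f}   "
          f"sigma_k = {spread(phi_mag(psi), k):.3f}")  # product stays at 1/2
```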
in the post-test, 10 out of 12 students were able to draw correct diagrams with labeled axes and explain why the fourier transform should be a constant extended over all space. also, in the post-test, 10 out of 12 students were able to draw the fourier transform of a gaussian position space wave function and were able to discuss the relative changes in the spread of the position and the corresponding momentum space wave functions. these were concepts they had explored using computer simulations while working on the quilt. similar results were found in individual interviews conducted earlier with other students during the development of the quilt. one of the questions on both the pre-/post-test of this tutorial was the following: consider the following statement: "uncertainty principle makes sense. when the particle is moving fast, the position measurement has uncertainty because you cannot determine the particle's position precisely.. it is a blur.... that's exactly what we learn in quantum mechanics.. if the particle has a large speed, the position measurement cannot be very precise." explain why you agree or disagree with the statement. out of the 12 students who took both pre-/post-tests, 7 students provided incorrect responses on the pre-test. the following are examples of incorrect student responses on the pre-test: 1. _i agree ... when p is high it is easy to determine, while x is difficult to determine. the opposite is also true, when p is small it is difficult to determine, while x is easy to determine._ 2. _i agree because when a particle has a high velocity it is difficult to measure the position accurately._ 3. _i agree because i know the uncertainty principle to be true._ 4. _when a particle is moving fast, we cannot determine its position exactly - it resembles a wave - at fast speed, its momentum can be better determined._ in comparison, one student provided an incorrect response and one did not provide a clear reasoning on the post-test. the goals of this quilt are to review the interference at a detector due to the superposition of light from the two paths of the interferometer. the tutorial adapts a simulation developed by albert huber (http://www.physik.uni-muenchen.de/didaktik/computer/interfer/interfere.html) to help students learn the following important quantum mechanical concepts: * interference of a single photon with itself after it passes through the two paths of the mz. * the effect of placing detectors and polarizers in the path of the photon in the mz. * how the information about the path along which a photon went ("which-path" information) destroys the interference pattern. a screen shot from the simulation is shown in figure 2. students were given the following information about the setup: the basic schematic setup for the mach-zehnder interferometer (mz) used in this quilt is as follows (see figure 3), with changes made later in the tutorial, e.g.
changes in the position of the beam splitters and incorporation of polarizers, detectors or a glass piece, to illustrate various concepts. all angles of incidence are with respect to the normal to the surface. for simplicity, we will assume that light can only reflect from one of the two surfaces of the identical half-silvered mirrors (beam splitters) and because of anti-reflection coatings. the detectors and are located symmetrically with respect to the other components of the mz as shown. the photons originate from a monochromatic coherent point source. assume that the light through both the and paths travels the same distance to reach each detector. in this quilt, students first learn about the basics of the phase changes that take place as light reflects from or passes through the different beam splitters and mirrors in the mz setup, by drawing an analogy with a reflected or transmitted wave on a string with a fixed or free boundary condition at one end. then, students use the simulation to learn that a single photon can interfere with itself and produce an interference pattern after it passes through both paths of the mz. students explore and learn using simulations how "which-path" information is obtained by removing or by placing detectors or polarizers in certain locations. later in the tutorial, the point detector is replaced with a screen. table 1 shows that the average performance of 12 students who took the last version of the mz quilt improved from to from pre-test to post-test. moreover, all but one of the 12 students in the post-test obtained perfect scores on the following three questions (correct options (c), (b), and (b) respectively) that were similar (but not necessarily identical) to the kinds of setups they had explored using the simulation within the guided quilt approach: 1. if you insert polarizers 1 and 2 (one with a horizontal and the other with a transmission axis) as in figure 4, how does the interference pattern compare with the case when the two polarizers have orthogonal transmission axes? (a) the interference pattern is identical to the case when polarizers 1 and 2 have orthogonal axes. (b) the interference pattern vanishes when the transmission axes of polarizers 1 and 2 are horizontal and . (c) an interference pattern is observed, in contrast to the case when polarizers 1 and 2 were orthogonal to each other. (d) no photons reach the screen when the transmission axes of polarizers 1 and 2 are horizontal and . 2. if you insert polarizer 1 with a horizontal transmission axis and polarizer 2 (between the second beam splitter and the screen) with a transmission axis (figure 5), how does the interference pattern compare with the case when only polarizer 1 was present? (a) the interference pattern is identical to the case when only polarizer 1 was present. (b) the intensity of the interference pattern changes but the interference pattern is maintained in the presence of polarizer 2. (c) the interference pattern vanishes when polarizer 2 is inserted but some photons reach the screen. (d) an interference pattern reappears that was absent when only polarizer 1 was present. 3. if you insert polarizer 2 with a transmission axis between the second beam splitter and the screen (figure 6), how does the interference pattern compare with the case when polarizer 2 was not present?
+ ( a ) the interference pattern is unchanged regardless of the presence of polarizer 2 because all interference effects occur before beam splitter 2 .+ ( b ) the intensity of the interference pattern decreases but the interference pattern is maintained even in the presence of polarizer 2 .+ ( c ) the intensity of the interference pattern increases in the presence of polarizer 2 .+ ( d ) the interference pattern vanishes when polarizer 2 is inserted but some photons reach the screen .a survey of 12 students whose pre-/post - test data is presented in table 1 was given to assess the effectiveness of quilts from students perspective .below we provide the questions and student responses : + * please rate the tutorials for their overall effectiveness where 1 means totally ineffective and 5 means very effective .+ in response to this question , no student chose 1 or 2 , one student chose 3 , one chose 3.5 , three chose 4 , one 4.5 and six chose 5 . * how often did you complete the tutorial at home that you could not complete during the class ? ( 1 ) never , ( 2 ) less than half the time , ( 3 ) often , ( 4 ) most of the time , ( 5 ) always .+ in response to this question , no student chose ( 1 ) , one student chose ( 2 ) , two students chose ( 3 ) , 6 chose ( 4 ) , and 3 chose ( 5 ) . * how often were the hints / solutions provided for the tutorials useful ? ( 1 ) never , ( 2 ) less than half the time , ( 3 ) often , ( 4 ) most of the time , ( 5 ) always .+ in response to this question , no student chose ( 1 ) or ( 2 ) , 2 students chose ( 3 ) , 5 chose ( 4 ) and 5 chose ( 5 ) . * is it more helpful to do the tutorials in class or would you prefer to do them as homework ?please explain the advantages and disadvantages as you see it .+ in response to this question , 10 students felt that doing them in class was more useful .the students who preferred doing them in class often noted that the tutorials focused on improving their conceptual understanding which was best done via group discussion and hence in class .they appreciated the fact that any questions they had could be discussed and they benefited from the reasoning provided by their peers and instructor .the few students who preferred doing them at home felt that more time and effort will go into them if they did them at home . *how frequently should the tutorials be administered in the class ( e.g. , every other class , once a week , once every other week ) ?explain your reasoning . + a majority of students liked having the tutorials once a week .this frequency was considered to be the best by some students who felt that the concepts learned in the tutorials made it easier for them to understand the textbook and homework problems later in the week and integrate the material learned .others felt that once a week was the best because tutorials helped them focus on concepts that were missed in lectures , book , and student / teacher conversation . 
*do you prefer a multiple - choice or open - ended question format for the tutorial questions ?explain your reasoning .+ students in general seemed to like the questions that were in the multiple - choice format but most of them also appreciated the open - ended questions .some students noted that the multiple - choice questions helped focus their attention on important issues and common difficulties and misconceptions while the open - ended questions stimulated creative thought .some students felt that multiple - choice format may be better for the warm - up " tutorial done at home and the open - ended questions may be better for the main tutorial done in the class .some students felt that a mix of the two types of questions was best because the multiple - choice format was a good way to get the fundamental concepts across and the open - ended questions gave them an opportunity to apply these concepts and deepen their understanding of the concepts .we have given an overview of the development of quilts and discuss the preliminary evaluation of three quilts using pre-/post - tests in the natural classroom setting .quilts adapt visualization tools to help students build physical intuition about quantum processes .they are designed to help undergraduates sort through challenging quantum mechanics concepts .they target misconceptions and common difficulties explicitly , focus on helping students integrate qualitative and quantitative understanding , and learn to discriminate between concepts that are often confused .they strive to help students develop the ability to apply quantum principles in different situations , explore differences between classical and quantum ideas , and organize knowledge hierarchically .their development is an iterative process . during the development of the existing quilts ,we have conducted more than 100 hours of interviews with individual students to assess the aspects of the quilts that work well and those that require refinement .quilts naturally lend themselves to dissemination via the web .they provide appropriate feedback to students based upon their need and are suited as an on - line learning tool for both undergraduates ( and beginning graduate students ) in addition to being suitable as supplements to lectures for a one or two - semester undergraduate quantum mechanics courses .we are very grateful to mario belloni and wolfgang christian for their help in developing and adapting their open source physics simulations for quilts .we also thank albert huber for mach zehnder interferometer simulation and to klaus muthsam for the double slit simulation .we thank all the faculty who have administered different versions of quilts in their classrooms .+ e. j. galvez , c. h. holbrow , m. j. pysher , j. w. martin , n. courtemanche , l. heilig and j. spencer , _ interference with correlated photons : five quantum mechanics experiments for undergraduates _ , am .j. phys . * 73 * , 127 - 140 ( 2005 ) . c. singh , _ improving student understanding of quantum mechanics _ ,proceedings of the phys .conference , salt lake city , ut , edited by p. heron , l. mccullough , j. marx , 69 - 72 , ( aip , melville ny , 2005 ) . c. singh , helping students learn quantum mechanics for quantum computing , " proc , phys .conference , syracuse , ny , edited by l. mccullough , l. hsu and p. heron , 42 - 45 ( aip , melville ny , 2006 ) .m. belloni and w. christian , _ physlets for quantum mechanics _ , comp .eng . 5 , 90 ( 2003 ) ; m. belloni , w. christian and a. 
cox, _physlet quantum physics_, pearson prentice hall, upper saddle river, nj (2006). the mach-zehnder simulation was adapted from http://www.physik.uni-muenchen.de/didaktik/computer/interfer/interfere.html and the double-slit simulation was developed by klaus muthsam (muthsam.de). the original spins program was written by daniel schroeder and thomas moore for the macintosh and was ported to java by david mcintyre of oregon state university and used as part of the paradigms project. both of these versions were, and remain, open source.
we discuss the development and evaluation of quantum interactive learning tutorials (quilts), suitable for one or two-semester undergraduate quantum mechanics courses. quilts are based upon investigation of student difficulties in learning quantum physics. they exploit computer-based visualization tools and help students build links between the formal and conceptual aspects of quantum physics without compromising the technical content. they can be used both as supplements to lectures in the classroom and as a self-study tool. department of physics, university of pittsburgh, pittsburgh, pa 15213
the term ``master equation'' (me) used in this work means an equation of motion for the reduced density operator of a subsystem which interacts with another (usually much larger) subsystem. experiences gathered during quite a few years of lecturing indicate that students find it difficult to understand the derivation and consequences of me. the literature sources (at least those known to us) are usually quite brief and not easy to follow. the aim of this paper is to give a full presentation of the so-called microscopic derivation of me together with a detailed discussion of the underlying assumptions and approximations. hence, this work does not bring any original material. it is just a tutorial which, hopefully, will help the students to obtain a better grasp of the ideas and concepts leading to one of the important theoretical methods used in a variety of physical problems. the literature of the subject is quite rich. entering the term ``master equation'' into _google scholar_ returns about 30 000 links. it seems virtually impossible to give an extensive bibliography. therefore, we concentrate only on several works which were essential in the preparation of this paper. in order to present the me technique, we first give the basic ideas which help, so to speak, to set the scene for further developments. we assume that the reader is familiar with fundamental concepts of quantum mechanics (given, for example, in the main chapters of the first volume of the excellent book by cohen-tannoudji _et al_). to start with, we recall that the state of a quantum system is, in the majority of practical applications, given by the density operator. this concept is introduced and discussed in virtually all handbooks on quantum mechanics, hence we mention only the basic facts which will be needed here. namely, the density operator of any physical system must have three essential properties. the last inequality means that for any state from the hilbert space of states of the given physical system. these properties may be phrased in terms of the eigenvalues of the density operator. namely, , , and . for a closed system with hamiltonian , the evolution of the corresponding density operator is given by the von neumann equation , which has a well-known solution , where is a suitably chosen initial condition. such an evolution is unitary and obviously preserves all the necessary properties of the density operator. in such a case everything is conceptually clear, although the necessary calculations may be quite involved or may even require approximate computational methods. the problem arises when we deal with a bipartite system, consisting of two interacting subsystems and . let us briefly outline the physical situation. we assume that the whole system is closed and the total hamiltonian is written as , where and are the free, independent hamiltonians of the two subsystems and , and is the hamiltonian describing the interaction between the two parts. let denote the density operator of the total system. then evolves according to the von neumann equation (identical to ) . now, one may ask, what is the problem?
the point is that we are, in fact, interested only in the subsystem . subsystem , for this reason or another, is considered irrelevant, although the - interaction certainly affects the evolution of . moreover, in many practical cases, subsystem is much larger (with many more degrees of freedom) and virtually inaccessible to direct measurements. frequently, is a reservoir and plays the role of an environment upon which we have neither control nor influence. this may be very important in the context of quantum information theory, when the relevant subsystem is disturbed by the surroundings. moreover, problems of decoherence and irreversibility are intrinsically connected with the effects occurring in a subsystem influenced by an external reservoir (see ). we shall not discuss these problems but focus on the master equation technique. in view of these brief remarks the question is how to extract useful information on from the general von neumann equation . we stress that we are interested only in system , so we need to find the reduced density operator . the aim is, therefore, twofold: * extract the evolution of from eq. for the entire system. * do it in a way which guarantees that the properties of (as of any density operator) are preserved for any moment of time. the solution to the stated problems is found in the so-called master equation technique. there are two, conceptually different but complementary, approaches to me. the first one uses mathematically rigorous methods. such an approach is presented, for example, in refs. (see also the references given in these books). rigorous mathematics is obviously very important from the fundamental point of view. mathematical theorems prove that the me for the reduced density operator of subsystem follows from the general von neumann equation and indeed preserves its properties. in refs. it is shown that this is so when the me attains the so-called standard (lindblad-gorini-kossakowski-sudarshan) form (ref. , p.8, eq. (38); , eq. (3.63) and , eq. (78)) $$\frac{d\rho_{a}(t)}{dt} ~=~ \frac{1}{i\hbar}\bigl[\, h_{a},\ \rho_{a}(t) \,\bigr] ~+~ \sum_{ij} a_{ij} \Bigl(\, f_{i}\,\rho_{a}(t)\,f_{j}^{\dagger} ~-~ {\textstyle\frac{1}{2}}\bigl[\, f_{j}^{\dagger} f_{i},\ \rho_{a}(t) \,\bigr]_{+} \Bigr).$$ the first term gives the unitary (hamiltonian) evolution, while the second one is sometimes called a dissipator. the operators $f_{i}$ constitute a basis in the space of operators for the subsystem (see ). finally, the matrix of coefficients $a_{ij}$ is positive definite and hermitian.
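as a quick numerical illustration of why the standard form matters, the sketch below integrates a master equation of exactly this structure for a two-level system with a single decay channel (the hamiltonian, the decay rate and the initial state are arbitrary choices made for the example, not quantities from this tutorial) and verifies that hermiticity, unit trace and positivity of the reduced density operator survive the evolution.

```python
import numpy as np

# standard-form (lindblad) evolution of a decaying two-level system, hbar = 1
sm = np.array([[0, 1], [0, 0]], dtype=complex)     # sigma_minus (jump operator)
sp = sm.conj().T
H  = 0.5 * np.array([[1, 0.3], [0.3, -1]], dtype=complex)
gamma = 0.4                                        # decay rate

def drho(rho):
    unitary = -1j * (H @ rho - rho @ H)
    dissip  = gamma * (sm @ rho @ sp - 0.5 * (sp @ sm @ rho + rho @ sp @ sm))
    return unitary + dissip

rho = np.array([[0.2, 0.1 + 0.05j], [0.1 - 0.05j, 0.8]], dtype=complex)
dt = 1e-3
for step in range(10000):                          # fourth-order runge-kutta
    k1 = drho(rho); k2 = drho(rho + 0.5 * dt * k1)
    k3 = drho(rho + 0.5 * dt * k2); k4 = drho(rho + dt * k3)
    rho = rho + dt / 6 * (k1 + 2 * k2 + 2 * k3 + k4)

print("trace       ", rho.trace().real)                  # stays equal to 1
print("hermitian   ", np.allclose(rho, rho.conj().T))
print("eigenvalues ", np.linalg.eigvalsh(rho).round(6))  # both remain >= 0
```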
here, we do not go into the details of the derivation of the standard form of the master equation , we refer the reader to the mentioned references ( see also ) .quite interesting derivation of standard me is given by preskill .his presentation is somewhat heuristic but certainly worth reading , especially if one is interested in connection with quantum information theory , quantum channels etc .it is not , however , our aim to pursue formal mathematical issues .our intentions are quite practical , so the reader may ask , why do we speak about the standard form of me .the reason is as follows .the second approach to the stated problems is via `` microscopic derivations '' .this occurs when we need to consider a specific physical situation when interacting systems and are known and well - defined .then , we want to construct the corresponding me equation of motion for the reduced density operator .this is less formal and may be mathematically uncertain .if the microscopic derivation yields an equation in the standard form , we can say that the aim is achieved , because standard form ensures the preservation of the necessary properties of reduced density operator . hence , both approaches are complementary . formal but rigorous mathematical methods lead to standard form of me which must be matched by equations obtained through microscopic derivation .all of the already mentioned references give ( usually brief ) account on the microscopic derivation of me .these presentations seem to be difficult for students who are not acquainted with the subject and who seek the necessary introduction . perhaps the most extensive microscopic derivation is given by cohen - tannoudji , dupont - roc and grynberg .this latter presentation is somewhat heuristic and , as it seems to us , leaves some nuances unexplained .there is , however , one more drawback .namely , cohen - tannoudji _ et al _ do not compare their me with the standard form .therefore , essential question of positivity preservation remains untouched .the derivation given here uses the concepts which can be found in the all references . nevertheless , we are most indebted to cohen - tannoudji _ et al_. 
their approach and especially their discussion of the time scales and employed approximations strongly influenced our tutorial .we apologize if , at some places , we do not give proper references .too many ones can distract the reader , which does not lessen our debt to all authors of cited literature .the scheme of this paper is summarized in the _ contents _ , hence we feel no need to repeat it .we try to be as clear and as precise as possible .we focus our attention on the microscopic derivation of me , leading to the standard form .some issues are postponed to _ auxiliary _ section , so that the main flow of derivation is not broken by additional comments , which can safely be given later .we hope , that the students who need a primer in the subject , will find our work useful and informative .the readers are invited to comment , so that the next version ( if such a need arises ) will be improved and , more readable .we consider a physical system which consists of two parts and .we are interested only in what happens in part which is usually much smaller than part .the latter one we will call a reservoir ( environment ) .we assume that the entire system is closed .then , its hamiltonian is specified as in eq .some additional assumptions concerning both subsystems will be introduced when necessary .a previously , let denote the density operator of the whole system .the evolution of this operator is governed by von neumann equation .our main aim is to find the corresponding equation of motion for the reduced density operator for the subsystem .our starting point is provided by von neumann equation which , after the transformation to the interaction picture , reads , \label{me05}\ ] ] where , we obviously denoted with given in eq . .reduction of the density operator ( as in ) is preserved in the interaction picture ( see _ auxiliary _ sections ) formal integration of eq . yields the following expression , \label{me07}\ ] ] which gives the density operator at a later moment , while the initial one at a moment is assumed to be known . iterating further and denoting we obtain , ( similarly as in ref. ) \nonumber \\ & \hspace*{8 mm } ~+~ \left ( \frac{1}{i\hbar } \right)^{2 } \int\limits_{t}^{t+\delta t } dt_{1 } \int_{t}^{t_{1 } } dt_{2 } ~\bigl [ \ : \widetilde{v}_{ab}(t_{1 } ) , \ ; \bigl [ \ : \widetilde{v}_{ab}(t_{2 } ) , ~{\widetilde{\varrho}}_{ab}(t ) \ : \bigr ] \nonumber \\[2 mm ] & \hspace*{-8 mm } ~+~ \left ( \frac{1}{i\hbar } \right)^{3 } \int\limits_{t}^{t+\delta t } dt_{1 } \int_{t}^{t_{1 } } dt_{2 } \int_{t}^{t_{2 } } dt_{3 } ~\bigl [ \ : \widetilde{v}_{ab}(t_{1 } ) , \ ; \bigl [ \ : \widetilde{v}_{ab}(t_{2 } ) , \bigl [ \ : \widetilde{v}_{ab}(t_{3})~{\widetilde{\varrho}}_{ab}(t_{3 } ) \ : \bigr ] . \label{me12 } \end{aligned}\ ] ] higher order iterations will contain fourfold , etc . , integrals and commutators .let us note that in the last term we have time ordering .the above equation is rigorous , no approximations have been made. weak - coupling approximation ( discussed in all refs. 
) , consists in retaining the terms up to the second order in interaction hamiltonian .higher order terms are then neglected .thus , we remain with \nonumber \\ & \hspace*{14 mm } ~+ \left ( \frac{1}{i\hbar } \right)^{2 } \int\limits_{t}^{t+\delta t } dt_{1 } \int_{t}^{t_{1 } } dt_{2 } ~\bigl [ \ : \widetilde{v}_{ab}(t_{1 } ) , \ ; \bigl [ \ : \widetilde{v}_{ab}(t_{2 } ) , ~{\widetilde{\varrho}}_{ab}(t ) \ : \bigr ] .\label{me16 } \end{aligned}\ ] ] alternatively , we can say that the obtained equation is valid in the second - order perturbation theory .such an approximation requires a justification .this will be presented in the _ auxiliary _ sections , now we focus on further steps of the derivation .reduction of the operator in the left hand side of poses no difficulties . tracing over the reservoir variables ( subsystem ) we obtain \nonumber \\ & \hspace*{14 mm } ~+~ \left ( \frac{1}{i\hbar } \right)^{2 } \int\limits_{t}^{t+\delta t } dt_{1 } \int_{t}^{t_{1 } } dt_{2 } ~{\mathrm{tr}}_{b } \bigl [ \ : \widetilde{v}_{ab}(t_{1 } ) , \ ; \bigl [ \ : \widetilde{v}_{ab}(t_{2 } ) , ~{\widetilde{\varrho}}_{ab}(t ) \ : \bigr ] .\label{me16x } \end{aligned}\ ] ] this expression has certain drawback .the the commutators in the right hand side contain full density operator , and not the interesting ( relevant ) reduced one .to proceed , we need some more assumptions and approximations .one more remark seems to be in place .subsequent iterations leading to eq .are rigorous . in equation which is approximate there occurs the operator , taken at the initial moment .the last term in the exact equation contains for moments earlier than the current moment , but later than the initial instant .this means that we neglect the influence of the `` history '' on the present moment .we shall return to the discussion of this point . 
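the content of the weak-coupling (second-order) truncation can be made concrete with a small numerical experiment. in the sketch below the interaction operator is replaced, purely for illustration, by a time-independent random hermitian matrix of adjustable strength (units with hbar = 1); the exactly evolved density operator is compared with the expansion kept to second order, and the error falls off as the third power of the coupling, as expected for the neglected higher-order terms.

```python
import numpy as np

rng = np.random.default_rng(1)

def rand_herm(n):
    a = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
    return (a + a.conj().T) / 2.0

def U_of(V, dt):
    w, P = np.linalg.eigh(V)                    # V is hermitian
    return P @ np.diag(np.exp(-1j * w * dt)) @ P.conj().T

def comm(a, b):
    return a @ b - b @ a

dim = 4                                         # toy joint "system + reservoir"
V0  = rand_herm(dim)
rho = np.diag([0.4, 0.3, 0.2, 0.1]).astype(complex)   # some density operator
dt  = 1.0

for lam in (0.1, 0.05, 0.025):                  # interaction strength
    V = lam * V0
    U = U_of(V, dt)
    exact  = U @ rho @ U.conj().T
    second = rho - 1j * dt * comm(V, rho) - 0.5 * dt**2 * comm(V, comm(V, rho))
    print(f"lambda = {lam:5.3f}   truncation error = "
          f"{np.linalg.norm(exact - second):.2e}")
# halving the coupling reduces the error by roughly a factor of eight
```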
the key role in our consideration is played by the assumption that there are two distinct time scales. the first one is specified by the typical time during which the internal correlations in the reservoir exist. this will be discussed in more detail later. here we will only say that is such a time that, when it elapses, the state of the reservoir is practically independent of its initial state. the second scale is provided by the time . it characterizes the evolution (changes) of the operator which is (are) due to the interaction with the reservoir, and which may be specified by the relation . the time may be called the characteristic relaxation time of subsystem . let us note that we are speaking here about the interaction; the interaction picture we employ is thus particularly useful. we make no statements about the rate of the free evolution of (in the schrödinger picture), which is governed by the hamiltonian . usually, the characteristic times of free evolution (the times of the order of ) are much shorter than the time describing the influence of the interaction between the subsystems. now, we assume that the introduced time scales satisfy the requirement . we have a fast scale (small ) determining the decay of correlations within the reservoir, and a second, much slower scale defined by the relatively long relaxation time , characterizing the interaction between the two parts of the entire physical system. this may be phrased differently. we have assumed that the interaction is weak. let denote the average `strength' of this interaction. the uncertainty principle states that . the condition then implies that . in other words, we can say that spectral widths are the reciprocals of characteristic times, so the condition means that the spectral width of the reservoir energies must be much larger than the spectral width of the interaction between the subsystem and the reservoir. further discussion and justification of our approximations is postponed to the _auxiliary_ sections. here we focus on the derivation of the master equation. the adopted assumption allows us to make the approximation , which is sometimes called the born one. the initial density operator for the whole system can always be written as , where and are the reduced density operators for the two subsystems. the state of the whole system consists of a factorizable part and the entangled part , describing the interaction-induced correlations between the subsystems. equation gives us the change , hence it informs us about changes occurring in the time interval . the assumption that allows us to neglect the mentioned correlations. as previously, we postpone the discussion to the _auxiliary_ sections. at present, we assume that . by assumption, the reservoir (environment) is large and its correlation time is very short, so the reservoir's relaxation is fast. we may say that before any significant changes occur in subsystem , the reservoir will have had enough time to reach thermodynamic equilibrium. as is known from statistical physics, such a state is given as , where the quantity is a partition sum. the states and energies are the eigenstates and eigenvalues of the reservoir hamiltonian , which can be written as .
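the thermal equilibrium state just written down is, by construction, a function of the reservoir hamiltonian alone, and therefore commutes with it. the sketch below illustrates this with a randomly chosen toy reservoir hamiltonian; the dimension and the temperature are arbitrary choices made only for the example.

```python
import numpy as np

# sigma_B = exp(-H_B / k_B T) / Z for a small toy reservoir, and a check that
# such a state commutes with H_B (so it is stationary)
rng = np.random.default_rng(0)
a = rng.normal(size=(6, 6)) + 1j * rng.normal(size=(6, 6))
H_B = (a + a.conj().T) / 2.0             # toy reservoir hamiltonian
kT = 0.7                                 # k_B times temperature

w, P = np.linalg.eigh(H_B)               # eigenvalues / eigenvectors of H_B
boltz = np.exp(-w / kT)
sigma_B = P @ np.diag(boltz / boltz.sum()) @ P.conj().T

print("trace        ", np.trace(sigma_B).real)                        # 1.0
print("[H_B, sigma] ", np.linalg.norm(H_B @ sigma_B - sigma_B @ H_B)) # ~ 0
print("populations  ", np.sort(boltz / boltz.sum())[::-1].round(3))
```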
as a consequencewe conclude that ~=~ 0 , \label{me23d}\ ] ] so we can say that operator is stationary does not change in time .it is worth noting that we could have reversed the argument .first require stationarity , as expressed by which would entail relations and .moreover , commutation relation implies that the reduced density operator is of the same form both in schrdinger and interaction pictures . due to these remarksoperator appearing in eq .is simply replaced by .therefore , in eq .we make the replacement .so , we now have \nonumber \\ & \hspace*{1 mm } + \left ( \frac{1}{i\hbar } \right)^{2 } \int\limits_{t}^{t+\delta t } dt_{1 } \int_{t}^{t_{1 } } dt_{2 } ~{\mathrm{tr}}_{b } \bigl [ \ : \widetilde{v}_{ab}(t_{1 } ) , \ ; \bigl [ \ : \widetilde{v}_{ab}(t_{2 } ) , \ ; { \widetilde{\varrho}}_{a}(t ) \otimes { \bar{\sigma}_{b}}\ : \bigr ] .\label{me27 } \end{aligned}\ ] ] the employed simplification facilitates computation of the remaining traces .however , to proceed effectively , we need to specify the interaction hamiltonian .our next assumption concerns the shape of the interaction hamiltonian . it is taken as ( similarly as in the given references ) where are operators which act in the space of the states of subsystem , while operators correspond to space of the reservoir s states .operators appearing in the definition need not be hermitian ( each one separately ) .only the hamiltonian must be hermitian .that is why we have written the second equality .we can say that to each nonhermitian term corresponds the term , and the latter appears in the sum , but with another number . in _ auxiliary _ sections we will argue that it is not any limitation .it is only important that the whole hamiltonian must be hermitian .operators and act in different spaces so they are independent and commute . in the interaction picturewe immediately have with rules of hermitian conjugation imply that the conjugate operators transform to interaction picture in the exactly the same manner as the initial ones .we now make one more assumption about reservoir . we have already taken . here, we assume that in the schrdinger picture this assumption easily transforms to interaction picture which follows from commutation relation , cyclic property of trace and .this is rather a simplification , not a restrictive assumption .this will be clarified and explained in _ auxiliary _ sections .( leading to ) allows us to see that the first term in the me is , in fact , zero . indeed = ~ { \mathrm{tr}}_{b } \biggl [ \ : \sum_{\alpha } \widetilde{a}_{\alpha}(t_{1 } ) \otimes \widetilde{x}_{\alpha}(t_{1 } ) , ~{\widetilde{\varrho}}_{a}(t ) \otimes { \bar{\sigma}_{b}}\ : \biggr ] \nonumber \\ & \hspace*{5 mm } = ~\sum_{\alpha } \bigl\ { \ : \widetilde{a}_{\alpha}(t_{1 } ) { \widetilde{\varrho}}_{a}(t ) \ : { \mathrm{tr}}_{b } \bigl [ \widetilde{x}_{\alpha}(t_{1 } ) \ : { \bar{\sigma}_{b}}\bigl ] \nonumber \\ & \hspace*{35 mm } ~-~ { \widetilde{\varrho}}_{a } \widetilde{a}_{\alpha}(t_{1 } ) \ : { \mathrm{tr}}_{b } \bigl [ { \bar{\sigma}_{b}}\ : \widetilde{x}_{\alpha}(t_{1 } ) \bigl ] \bigl\ } ~=~ 0 .\label{me40 } \end{aligned}\ ] ] both traces are equal ( cyclic property ) , nevertheless this expression need not be zero , because operators of the system need not commute . if requirement is not fulfilled then the above average may not vanish .assumption and its consequence fortunately give zero , thus the first term of eq .vanishes and we remain with the master equation \right ] . 
\label{me41a}\ ] ] expanding the commutators is simple .moreover , one easily notices that there are two pairs of hermitian conjugates .hence we have we can now use hamiltonian and perform further transformations .it can be , however , shown that this equation does not guarantee that the positivity of the density operator is preserved .it appears that the so - called secular approximation is necessary . to perform it effectively , it is convenient to present the interaction hamiltonian in a somewhat different form. let us write the hamiltonian of the subsystem as states constitute the complete and orthonormal basis in the space of states of the subsystem .the eigenfrequencies may or may not be degenerate .we allow for . at presentit suffices that we distinguish different kets solely by their `` _ quantum number _ '' .similarly as in refs. , we now define the operators via the following relation this representation may be called the decomposition of operator into eigenprojectors of hamiltonian .delta is of the kronecker type , that is in our considerations we allow for nonhermitian operators .hence , definition is augmented by the following one because it is always allowed to interchange the summation indices .we stress that contains bohr frequency , while in we have .the following relation seems to be quite obvious since out of all s one will exactly match . as a consequence we obtain indeed , from definition , relation and due to completeness of states we get relation implies that the interaction hamiltonian ( in schrdinger picture ) can be written as similarly as above we can also show that and using definition we find the operator in the interaction picture because . linking expressionsand we write the interaction hamiltonian in the interaction picture before starting to analyze me , let us notice that operators possess some interesting properties .discussion of these properties is moved to _ auxiliary _ sections .we return to master equation .interaction hamiltonian is taken as in the first part of eq . , while is represented by its hermitian conjugate according to second part of .this gives e^{i\omega\ , ' t_{1 } } a^{\dagger}_{\alpha}(\omega\ , ' ) \otimes \widetilde{x}^{\dagger}_{\alpha}(t_{1 } ) \nonumber \\[2 mm ] & \hspace*{-4 mm } -~e^{i\omega\ , ' t_{1 } } a^{\dagger}_{\alpha}(\omega\ , ' ) \otimes \widetilde{x}^{\dagger}_{\alpha}(t_{1 } ) \bigl [ e^{-i\omega t_{2 } } a_{\beta}(\omega ) \otimes \widetilde{x}_{\beta}(t_{2 } ) \bigr ] { \widetilde{\varrho}}_{a}(t ) \otimes { \bar{\sigma}_{b}}\bigr\ } ~+~ \mathbb{h.c}. \label{me60 } \end{aligned}\ ] ] computing tensor products we remember that partial trace is taken only with respect to reservoir variables .moreover , we note that these traces are the same ( cyclic property ) .therefore we denote finally we rewrite the arguments of the exponentials as .. becomes ~+~ \mathbb{h.c}. \label{me64 } \end{aligned}\ ] ] the quantity is called the correlation function of the reservoir .we will briefly discuss its properties .let us focus on the functions defined by the right hand side of eq ., that is these are the functions of two variables and it is not , _ a priori _ , clear that they depend only of the difference . before discussing this fact , let us note that . 
to prove this relation, we recall that , so that the definition gives . now, we will show that the function is indeed a function of the difference of its arguments. the key role is played by the fact that the state of the reservoir (density operator ) is stationary (does not change in time). explicitly using the interaction picture we get . the trace is cyclic and commutes with the hamiltonian , so we conclude that for two moments of time . the reservoir's correlation function effectively depends only on one variable. this fact is denoted by a bar over the symbol of the correlation function. thus we write . such correlation functions are called stationary. sometimes the concept of stationarity means invariance with respect to time translation. indeed, for arbitrary time we have . this property of the correlation functions is a straightforward consequence of the stationarity of the reservoir's density operator. preparing this section of our tutorial we greatly benefited from the analogous discussion by cohen-tannoudji _et al_. to a large extent we follow their reasoning, trying to elucidate some less obvious points. admitting this, we will refrain from giving multiple references to their work. in the master equation one integrates over the triangle which is shown in fig. [xmerys01]. firstly, one computes the integral over in the range from to . this is indicated by thin vertical lines (at left). next, one sums such contributions by integrating over from to . the integrand in contains correlation functions of the reservoir which depend on the difference . we stress that we always have , so that . the integration over the triangle can be performed in another manner. let us consider the geometry (right graph in fig. [xmerys01]). along the diagonal ac we have , so . the straight line has (in and variables) the equation , where is fixed, since is the coordinate of the point where the discussed line intersects the axis. then, for the line (passing through point b) is also fixed (by the same argument as in the case of the line ). on , at the point b, we have and . thus, we have . the parameter specifies the skew straight lines (parallel to the diagonal ac) passing through the triangle abc. integration over the triangle abc is now done as follows. we fix and we move along the segment ac (see fig. [xmerys02]). the variable runs in the interval from to . so, we first integrate over from to (along the segment ac). next, we integrate over from zero to . in this manner we sum the contributions from all skew segments covering the triangle abc. therefore, we can write , while we remember that (or ). performing the discussed changes of integration variables in eq. , we get (plus the hermitian conjugate terms). we recall that the considered time intervals satisfy the requirement (which will be discussed in detail later). if it is true, then the main contribution to the integral over in eq. will come from the region in the neighborhood of . geometrically, this corresponds to a narrow belt which is parallel to the diagonal ac and lies just below it. it follows that outside this region the reservoir's correlation functions practically vanish (decay to zero). therefore, we will not make any serious error by moving the upper limit of the integration over to infinity. moreover, since only small 's contribute significantly, the lower limit of the integral over may be approximated simply by , so only a small `initial' error might be introduced.
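the step of pushing the upper limit of the correlation-time integral to infinity is easy to check numerically once a model correlation function with a short memory time is assumed. in the sketch below the correlation function is taken, purely for illustration, to be an exponentially damped oscillation with memory time tau_B; for intervals much longer than tau_B the finite integral is indistinguishable from the infinite one.

```python
import numpy as np

tau_B, omega0 = 0.05, 3.0          # assumed memory time and bath frequency
g = lambda tau: np.exp(-tau / tau_B) * np.exp(1j * omega0 * tau)  # model g(tau)

def integral_up_to(T, dtau=1e-5):
    tau = np.arange(0.0, T + dtau, dtau)
    vals = g(tau)
    return dtau * (vals.sum() - 0.5 * (vals[0] + vals[-1]))   # trapezoid rule

infinite = 1.0 / (1.0 / tau_B - 1j * omega0)   # closed form for this model
for dt in (0.2, 1.0, 5.0):                     # intervals Delta t >> tau_B
    err = abs(integral_up_to(dt) - infinite)
    print(f"Delta t = {dt:4.1f}   |finite - infinite integral| = {err:.1e}")
# once Delta t exceeds a few tau_B the difference becomes negligible
```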
with these approximations equationyields ~+~ \mathbb{h.c}. \label{me72 } \end{aligned}\ ] ] introducing the quantities [ me73 ] , \label{me73a } \\w_{\alpha\beta}(\omega ) & = \int_{0}^{\infty } d\tau ~e^{i \omega \tau } ~\bar{g}_{\alpha \beta}(\tau ) = \int_{0}^{\infty } d\tau ~e^{i \omega \tau } \ : { \mathrm{tr}}_{b } \bigl\ { \widetilde{x}^{\dagger}_{\alpha}(\tau ) \ , x_{\beta } { \bar{\sigma}_{b}}\bigr\ } , \label{me73b } \ ] ] we rewrite eq . as follows ~+~ \mathbb{h.c}. \label{me74 } \end{aligned}\ ] ] this equation specifies the rate of change of the reduced density operator within the time interval . the quotient can be treated as an averaging this averaging results in smoothing all very rapid changes of which may occur during the interval . in principlewe should account for such rapid changes .we do not do that because right hand side of eq .contains , while the left hand side represents the smoothed rate of change .this rate depends on the density operator in the past , that is at the moment when the smoothed evolution was started .so our next approximation consists in replacing the smoothed rate by a usual derivative .in other words , the variation at an instant ( that is the derivative ) is connected with the value of at the very same instant .this approximation allows us to use a usual derivative at the left hand side of .this approximation sometimes is called a markovian one since it connects the variations of some physical quantity with its value at the same instant , independently from the values which this quantity had at earlier moments .we can say that markovian approximation consists in neglecting the influence of the history of the physical system on its current state which fully determines the presently occurring changes . in some literature sources this approximationis also called the coarse - graining one , because small and rapid fluctuations are neglected when the evolution is investigated on a much longer time scale specified by . with all the discussed approximation our master equation becomes ~+~ \mathbb{h.c}. \label{me76 } \end{aligned}\ ] ] at this stage , we return to the schrdinger picture and we insert into eq . . when computing the derivative at the left hand side we reproduce the free evolution termthus , from we get e^{-ih_{a}t/\hbar } \nonumber \\ & \hspace*{1 mm } + ~\biggr\ { \ ; \frac{1}{\hbar^{2 } } \sum_{\omega,\omega\ , ' } \sum_{\alpha,\beta } j(\omega\,'- \omega ) w_{\alpha\beta}(\omega ) \bigl [ \ : a_{\beta}(\omega ) \ : e^{ih_{a}t/\hbar}\:\rho_{a}(t)\ : e^{-ih_{a}t/\hbar } a^{\dagger}_{\alpha}(\omega\ , ' ) \nonumber \\ & \hspace*{29 mm } -~ a^{\dagger}_{\alpha}(\omega\ , ' ) a_{\beta}(\omega ) e^{ih_{a}t/\hbar } \ :\rho_{a}(t ) \ : e^{-ih_{a}t/\hbar } \bigr ] ~+~ \mathbb{h.c } \ ; \biggr\}. \label{me78 } \end{aligned}\ ] ] multiplying on the left by and on the right by , we use relation and its hermitian conjugate ( for negative times ) .this yields + ~\biggr\ { \ ; \frac{1}{\hbar^{2 } } \sum_{\omega,\omega\ , ' } \sum_{\alpha,\beta } j(\omega\,'-\omega ) w_{\alpha\beta}(\omega ) ~e^{i(\omega - \omega\,')t } \nonumber \\ & \hspace*{12 mm } \times \bigl [ \ : a_{\beta}(\omega ) \rho_{a}(t ) a^{\dagger}_{\alpha}(\omega\ , ' ) ~-~a^{\dagger}_{\alpha}(\omega\ , ' ) a_{\beta}(\omega ) \rho_{a}(t ) \ : \bigr ] ~+~ \mathbb{h.c } \ ; \biggr\}. \label{me81 } \end{aligned}\ ] ] our master equation contains the integral defined in .its computation is straightforward . 
denoting temporarily ,we get where we have introduced ( as in ) a function specified as due to the obtained results we can write inserting the computed integral into we note that the exponential factor cancels out . hence + \biggr\ { \ : \frac{1}{\hbar^{2 } } \sum_{\omega\,',\omega } \sum_{\alpha,\beta } f(\omega\,'-\omega ) w_{\alpha\beta}(\omega ) \nonumber \\ & \hspace*{12 mm } \times \bigl [ \ : a_{\beta}(\omega ) \ : \rho_{a}(t ) \ :a^{\dagger}_{\alpha}(\omega ) ~-~a^{\dagger}_{\alpha}(\omega ) \ : a_{\beta}(\omega ) \ : \rho_{a}(t ) \ : \bigr ] ~+~ \mathbb{h.c } \ : \biggr\}. \label{me88}\end{aligned}\ ] ] the sense of function must be carefully considered .we proceed along the lines similar to those in .it is easy to see that function has a sharp maximum for , where it is equal to unity .[ 0.6 ] zeroes of this function correspond to if the time is sufficiently long then the central maximum is very narrow .the question is what does it mean `` sufficiently long time '' .let us consider two possibilities . 1 .if , the argument of function is very close to zero , its value being practically one .2 . if ( bohr frequencies are significantly different ) then is close to zero , as it is seen in fig [ xmerys03 ] .we conclude that the terms at the right hand side of master equation containing the operator products , for which practically do not contribute to the evolution of the density operator . according to the first possibility above, significant contributions come only from such couplings that operators and have practically equal bohr frequencies . as we know , time is a characteristic relaxation time in subsystem due to interaction with reservoir and it satisfies the estimate ( we discuss it later ) .it can be argued ( see also ) that the terms in master equation , in which also give very small contributions , so that they can be neglected . as a result of all these approximations, we may say that only those terms in right hand side of master equation contribute significantly for which .such an approximation is called the secular one .it allows us to replace the function by the kronecker delta defined as in .it reminds us that only the terms satisfying the requirement give nonzero contribution .all these arguments lead to master equation of the form ~+~\biggr\ { \ : \frac{1}{\hbar^{2 } } \sum_{\alpha,\beta } \sum_{\omega\,',\omega } \delta(\omega\,'-\omega ) \ : w_{\alpha\beta}(\omega ) \nonumber \\ & \hspace*{12 mm } \times \bigl [ \ : a_{\beta}(\omega ) \ : \rho_{a}(t ) \ : a^{\dagger}_{\alpha}(\omega\ , ' ) ~-~a^{\dagger}_{\alpha}(\omega\ , ' ) \ : a_{\beta}(\omega ) \ :\rho_{a}(t ) \ : \bigr ] ~+~ \mathbb{h.c } \ : \biggr\}. \label{me90 } \end{aligned}\ ] ] the presence of the discussed kronecker delta simplifies one of the summations , which gives + ~\biggr\ { \ : \frac{1}{\hbar^{2 } } \sum_{\alpha,\beta } \sum_{\omega } w_{\alpha\beta}(\omega ) \bigl [ a_{\beta}(\omega ) \rho_{a}(t ) a^{\dagger}_{\alpha}(\omega ) \nonumber \\ & \hspace*{55 mm } ~-~ a^{\dagger}_{\alpha}(\omega ) a_{\beta}(\omega ) \rho_{a}(t ) \bigr ] + \mathbb{h.c } \ ; \biggr\}. 
\label{me91 } \end{aligned}\ ] ] the fundamental part of the microscopic derivation of the master equation is finished .we shall perform some transformations which have important , but rather cosmetic character .we want to transform master equation into the so - called standard form .all other comments are , as mentioned many times , left to _ auxiliary _ sections .standard form is important , because it can be shown ( see ) that this form guarantees preservation of hermiticity , normalization and , first of all , the positivity of the reduced density operator . if our master equation can be brought into the standard form , then we can be sure that all the necessary properties of the density operator of subsystem are indeed preserved .obviously , the first term in the right hand side of equation describes the unitary evolution , hence we shall concentrate only on the second term . writing explicitly the hermitian conjugates, we have \nonumber \\ & + \frac{1}{\hbar^{2 } } \sum_{\omega } \sum_{\alpha,\beta } w^{\ast}_{\alpha\beta}(\omega ) \bigl [ \ : a_{\alpha}(\omega ) \rho_{a}(t ) a^{\dagger}_{\beta}(\omega ) ~-~\rho_{a}(t ) a^{\dagger}_{\beta}(\omega ) a_{\alpha}(\omega ) \ : \bigr ] , \label{me96a } \end{aligned}\ ] ] because operator is hermitian ( the proof that hermiticity is preserved will be presented in _ auxiliary _ sections ) . in the second term we interchange the summation indices which gives \nonumber \\ & \hspace*{4 mm } + \frac{1}{\hbar^{2 } } \sum_{\omega } \sum_{\alpha,\beta } w^{\ast}_{\beta \alpha}(\omega ) \bigl [ \ : a_{\beta}(\omega ) \rho_{a}(t ) a^{\dagger}_{\alpha}(\omega ) ~-~\rho_{a}(t ) a^{\dagger}_{\alpha}(\omega ) a_{\beta}(\omega ) \ : \bigr ] .\label{me96b } \end{aligned}\ ] ] for further convenience we introduce the following notation [ me97 ] .\label{me97b } \ ] ] the matrix is hermitian and positively defined .the latter property is difficult to prove .it requires some advanced mathematics and we take this fact for granted .the readers are referred to literature .hermiticity of follows directly from the definition .indeed , we have the second matrix is also hermitian . from itfollows that = \frac{1}{2i } \bigl [ w_{\beta\alpha}(\omega ) - w^{\ast}_{\alpha \beta}(\omega ) \bigr ] = \delta_{\beta\alpha}(\omega ) .\label{me100}\ ] ] let us focus on the method of computation of elements .as it will be shown , elements are less important . to find we need quantities .conjugating definition we find that where we used relations , and cyclic property of trace . changing the integration variable , we have the integrand is identical as in the definition , only the integration limits are different . combining this with , we get the elements are the fourier transforms of the corresponding correlation function of the reservoir. matrix does not have such a simple representation . from the definition and the second relation in .\label{me101h } \end{aligned}\ ] ] inverting relations we express elements via and .after simple regrouping of the terms in eq .we find {+ } - i \delta_{\alpha \beta}(\omega ) \bigl [ a^{\dagger}_{\alpha}(\omega ) a_{\beta}(\omega ) , \ :\rho_{a}(t ) \bigr ] \ : \bigr\}. \label{me102d } \end{aligned}\ ] ] let us note that the last term is a commutator , so we define taking into account hermiticity of matrix ( changing the names of the summation indices when necessary ) we can easily show that the operator is also hermitian . returning to full master equation , that is to eq . 
, we conclude that the term containing in can be connected with the free hamiltonian one . in this manner , we finally have \nonumber \\ & + ~\frac{1}{\hbar^{2 } } \sum_{\omega } \sum_{\alpha,\beta } \ ; \gamma_{\alpha \beta}(\omega ) \ : \bigl\ { \ : a_{\beta}(\omega ) \ : \rho_{a}(t ) \ : a^{\dagger}_{\alpha}(\omega ) \nonumber \\ & \hspace*{40 mm } - { { \textstyle \frac{1}{2}}}\ : \bigl [ a^{\dagger}_{\alpha}(\omega ) \ : a_{\beta}(\omega ) , \ : \rho_{a}(t ) \bigr]_{+ } \bigr\ } , \label{me106 } \end{aligned}\ ] ] which coincides exactly with the standard form of the evolution equation for the reduced density operator which describes the state of the subsystem interacting with reservoir .this allows us to be sure that hermiticity , normalization and positivity of the operator are indeed ensured .finally , let us remark that operator which gives a contribution to the hamiltonian ( unitary ) evolution , usually produces small shifts of the eigenenergies of the subsystem .that is why , in many practical applications , this term is simply omitted .this explains our previous remark that matrix is less important than .obviously one can construct operator and investigate its influence on the unperturbed energy levels of the subsystem .small energy shifts of eigenenergies of subsystem are qualitatively similar to the well - known lamb shifts , which clarifies the employed notation .the obtained master equation is an operator one . in practice, we frequently need an equation of motion for the matrix elements of the reduced density operator .it seems to be natural to use the energy representation , that is to consider matrix elements of calculated in the basis of the eigenstates of the free hamiltonian ( see eq . ) .this will be done in the next section . when analyzing master equation in the basis of the eigenstates of free hamiltonian we need to be careful .the reason is that the commutator in contains an additional term , namely the lamb - shift hamiltonian .one may argue that this changes the hamiltonian and a new basis should be found ( a basis in which is diagonal ) .we will , however , proceed in the spirit of the perturbative approach .we will treat as a small perturbation which , at most , will yield small energy shifts .therefore , the set of eigenstates of the unperturbed hamiltonian can be used as complete and orthonormal basis .working within this scheme , we can easily construct master equation ( equation of motion ) for matrix elements of the density operator for subsystem .we will suppress the index since it should lead to no misunderstanding .taking matrix elements and expanding the anticommutator term we obtain { | \ , b \, \rangle } \nonumber \\ & + ~ \frac{1}{\hbar^{2 } } \sum_{\omega } \sum_{\alpha,\beta } \ : \gamma_{\alpha \beta}(\omega ) \ : \bigl\ { \ : { \langle \ , a \ , | } a_{\beta}(\omega ) \ : \rho(t ) \ :a^{\dagger}_{\alpha}(\omega ) { | \ , b \ ,\rangle } \nonumber \\ & \hspace*{-5 mm } - { { \textstyle \frac{1}{2}}}\ : { \langle \ , a \ , | } a^{\dagger}_{\alpha}(\omega ) \ : a_{\beta}(\omega ) \ : \rho(t ) { | \ , b \ ,\rangle } - { { \textstyle \frac{1}{2}}}\ : { \langle \ , a \ , | } \rho(t ) \ : a^{\dagger}_{\alpha}(\omega ) \ : a_{\beta}(\omega ) { | \ , b \ , \rangle } \bigr\}. 
\label{me112 } \end{aligned}\ ] ] the last three terms constitute a so - called dissipative term ( or dissipator ) and we will concentrate on its form .first , we use expressions , for operators and .then we consider three matrix elements .necessary computations in the basis of eigenstates of free hamiltonian are simple though a bit tedious , in some cases a suitable changes of summation indices is necessary .the results of these calculations are as follows [ me117 ] & \hspace*{9 mm } = ~\sum_{m , n } \delta(\omega_{ma } - \omega ) \ :\delta(\omega_{nb } - \omega ) { { \langle \ , a \ , |}\,a_{\beta}\,{| \ , m \ , \rangle } } { { \langle \ , n \ , |}\,a^{\dagger}_{\alpha}\,{| \ , b \ , \rangle } } \ : \rho_{mn}(t ) , \label{me117a } \end{aligned}\ ] ] & \hspace*{9 mm } = \sum_{m , n } \delta(\omega_{an } - \omega ) \ : \delta(\omega_{mn } - \omega ) { { \langle \ , a \ , |}\,a^{\dagger}_{\alpha}\,{| \ ,n \ , \rangle } } { { \langle \ , n \ , |}\,a_{\beta}\,{| \ , m \ , \rangle } } \ : \rho_{mb}(t ) , \label{me117b } \end{aligned}\ ] ] & \hspace*{9 mm } = \sum_{m , n } \delta(\omega_{mn } - \omega ) \ : \delta(\omega_{bn } - \omega ) { { \langle \ , m \ , |}\,a^{\dagger}_{\alpha}\,{| \ , n \ , \rangle } } { { \langle \ , n \ , |}\,a_{\beta}\,{| \ , b \ , \rangle } } \ : \rho_{am}(t ) .\label{me117c } \ ] ] the computed matrix elements are plugged into equation and summation over frequency is performed .after some regrouping we find that & \hspace*{13 mm } -~{{\textstyle \frac{1}{2}}}\ : \gamma_{\alpha \beta}(\omega_{mn } ) \ : \delta(\omega_{bn } - \omega_{mn } ) { { \langle \ , n \ , |}\,a_{\beta}\,{| \ , b \ , \rangle } } { { \langle \ , n \ , |}\,a_{\alpha}\,{| \ , m \ , \rangle}}^{\ast } \ :\rho_{am}(t ) \bigr\}. \label{me118b } \end{aligned}\ ] ] going further , we use the evenness of kronecker delta in the first term , while the presence of the deltas in the second and third terms allows us to change arguments in the elements of matrix .next , we denote due to these , we rewrite formula as & \hspace*{40 mm } -~{{\textstyle \frac{1}{2}}}\ : \sum_{m , n } \ : \delta(\omega_{bn } - \omega_{mn } ) \ ; k(nb , nm ) \ ;\rho_{am}(t ) . \label{me120 } \end{aligned}\ ] ] let us note the specific symmetry of this expression .further analysis depends on whether the eigenfrequencies of the hamiltonian are degenerate or not .we also note that kronecker deltas in the second and third terms are correspondingly given as and , which allows one to perform summation over .however , one has to be careful because eigenfrequencies can be degenerate . to account for the possible degeneracies ,let us write the hamiltonian of the considered system in the following form where is the main quantum number which distinguishes energy levels ( energy multiplets ) , while , are subsidiary quantum numbers .is is obvious that for .certainly , the nondegenerate case follows immediately and it corresponds to , then subsidiary quantum numbers are unnecessary and can be simply suppressed . 
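before turning to the degenerate case , it may help to see the frequency - resolved operators used in the matrix elements above in a fully explicit , numerical form . the sketch below ( a three - level system with an invented coupling operator ; none of the numbers come from the text ) builds the components of an operator in the eigenbasis of the free hamiltonian and checks the commutation relation with the free hamiltonian , which is derived in one of the auxiliary sections later on .

```python
import numpy as np

hbar = 1.0
E = np.array([0.0, 1.0, 2.5])                  # assumed nondegenerate eigenenergies
H_A = np.diag(hbar * E)
A = np.array([[0.0, 1.0, 0.3],                 # assumed coupling operator of the subsystem
              [0.2, 0.0, 1.0],
              [0.5, 0.7, 0.0]])

def eigencomponents(A, E, tol=1e-9):
    """split A into A(omega): the part of A connecting levels with E_b - E_a = omega."""
    comps = {}
    for a in range(len(E)):
        for b in range(len(E)):
            omega = round((E[b] - E[a]) / tol) * tol     # bohr frequency omega_ba
            comps.setdefault(omega, np.zeros_like(A, dtype=complex))
            comps[omega][a, b] = A[a, b]
    return comps

comps = eigencomponents(A, E)
for omega, A_om in sorted(comps.items()):
    lhs = H_A @ A_om - A_om @ H_A                        # [H_A, A(omega)]
    assert np.allclose(lhs, -hbar * omega * A_om)        # equals -hbar*omega*A(omega)

print("number of bohr frequencies:", len(comps))
print("components sum back to A  :", np.allclose(sum(comps.values()), A))
```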
in the degenerate case single indices appearing in equation must be replaced by corresponding pairs , for example .equation is now rewritten as & \hspace*{10 mm } -~{{\textstyle \frac{1}{2}}}\ : \sum_{mm } \sum_{nn } \ : \delta(\omega_{bn } - \omega_{mn } ) \ ; k(nnbb , nnmm ) \ ; \rho_{aamm}(t ) .\label{me131 } \end{aligned}\ ] ] as already noted , one immediately sees that and similarly , where the last deltas are the simple kronecker ones .the sum over in the second term is trivial .we put and we `` land within multiplet '' , hence we change .analogously , in the second term and .therefore , we have & \hspace*{40 mm } -~{{\textstyle \frac{1}{2}}}\ : \sum_{nn } \sum_{b '' } k(nnbb , nnbb '' ) \ ; \rho_{aabb''}(t ) .\label{me132 } \end{aligned}\ ] ] in two last terms matrix elements do not depend on quantum numbers , hence we can denote this allows us to write equation in the form let us consider this equation in some more detail .first , we take ( and correspondingly ) .this yields the equation of motion for `` quasi - population '' matrix elements taken within just one energy multiplet .then , the first term in right - hand side contains .the sum over is trivial ( ) and we have this equation connects `` quasi - populations '' with other ones .the first sum contains the term with and this term represents elastic ( energy conserving ) processes .the remaining terms ( with ) corresponding to nonelastic transitions . in this case, the environment serves as a reservoir which gives or absorbs the energy .the terms in the second line describe the `` escape '' from multiplet to other ones . to discuss coherences , we assume , which implies . the kronecker delta in can be rewritten as . since , we also get .if we assume that all energy distances are different ( that is for different pairs ) the considered delta can give unity only when and ( which entails and then , eq .reduces to so the coherences between two multiplets couple only with coherences from just these multiplets .obviously for the nondegenerate case `` small '' indices play no role they can be suppressed .then , instead of equation for `` quasi - populations '' we get an equation for genuine populations similarly for coherences , eq . yields \;\rho_{ab}(t ). \label{me148}\ ] ] these examples indicate that me for matrix elements of the reduced density operator possess quite a specific symmetry which probably can be further investigated .this , however , goes beyond the scope of the present work .any density operator , so also the reduced one for subsystem must be normalized , that is , we require that .this has a simple consequence clearly the hamiltonian part ( the commutator ) preserves the trace , which follows from cyclic property .hence we must check the second dissipative part of our me .one may ask at which stage of our derivation such a check should be made . in principle , this can be done at any stage . in this section we shall do so twice . once for standard form , and for me in the energy basis . taking me in its standard formwe need to compute the following trace \bigr\ } , \label{mea03 } \end{aligned}\ ] ] and show that it vanishes , ie . , .the trace is a linear operation , so then & \hspace*{42 mm } ~-~ { { \textstyle \frac{1}{2}}}\ : \gamma_{\alpha \beta}(\omega ) \ : { \mathrm{tr}}_{a } \bigl\ { \rho_{a}(t ) \ :a^{\dagger}_{\alpha}(\omega ) \ : a_{\beta}(\omega ) \bigr\ } \bigr ] . 
\label{mea04 } \end{aligned}\ ] ] cyclic property allows one to see that all three traces are equal .therefore , and we conclude that preservation of the normalization for me in the standard form is proved .in this case we check the trace preservation for eq ., with .we need to compute in the first term we use definition of the parameter ( see ) . in the two next ones we notice that indices and concern the same multiplet , so the summation range is also the same .we can interchange and obtain the second and third terms are identical and cancel out with the first one ( names of summation indices are irrelevant ) .we have shown that in the energetic basis the trace of the reduced density operator for subsystem is preserved .in other words , the derived me preserves normalization .the next necessary property of any density operator is its hermiticity .if the equation of motion for is identical with the similar equation for , then the same equations must yield the same solutions , this means that .free evolution is given by the hamiltonian term ] . directly from the definitions we obtain = \bigr[\sum_{n } \hbar\omega_{n } { | \ , n \ , \rangle}{\langle \ , n \ , | } , ~\sum_{a , b } \delta(\omega_{ba } - \omega ) \ : { | \ , a \ , \rangle}{{\langle \ , a \ , |}\,a_{\alpha}\,{| \ , b \ , \rangle}}{\langle \ , b \ , | } \bigl ] \nonumber \\ & \hspace*{10 mm } = \sum_{a , b } \hbar ( \omega_{a } - \omega_{b } ) \ :\delta(\omega_{ba } - \omega ) { | \ , a \ ,\rangle } { { \langle \ , a \ , |}\,a_{\alpha}\,{| \ , b \ , \rangle}}{\langle \ , b \ ,| } \nonumber \\ & \hspace*{10 mm } = - \ ; \hbar \omega \sum_{a , b } \delta(\omega_{ba } - \omega ) { | \ , a \ , \rangle } { { \langle \ , a \ , |}\,a_{\alpha}\,{| \ , b \ , \rangle}}{\langle \ , b \ , | } ~=~ - \hbar \omega a_{\alpha}(\omega ) , \label{meg03 } \end{aligned}\ ] ] which ends the calculation .conjugation changes sign , so that ~=~ \hbar \omega a_{\alpha}^{\dagger}(\omega ) .\label{meg04}\ ] ] heisenberg equation of motion follows from formula , and it is ~=~ \hbar \omega a_{\alpha}^{(h)}(\omega ) .\label{meg05}\ ] ] after integration we obtain which agrees with .finally , we present one more relation = a^{\dagger}_{\alpha}(\omega ) \bigl [ \,h_{a } , \ ; a_{\beta}(\omega ) \ : \bigr ] \nonumber \\ & \hspace*{50 mm } ~+~ \bigl [ \,h_{a } , \ ; a_{\alpha}^{\dagger}(\omega ) \ : \bigr ] a_{\beta}(\omega ) = 0 , \label{meg09 } \end{aligned}\ ] ] which follows immediately from the derived results .correlation function of the reservoir was defined in or . by assumption , reservoir hamiltonian and the corresponding density operator commute, so they have a common set of complete and orthonormal eigenstates .let us calculate the trace in in the chosen basis in eq .we denoted the eigenvalues of by , hence with , and .expression shows that the correlation function is a complicated superposition of functions which oscillate with bohr frequencies .reservoir is assumed to be large , the discussed frequencies are densely space ( quasi - continuous ) . if time is sufficiently large the oscillations interfere destructively ( average out to zero ) .we can expect that reservoir correlation function decay quickly when time increases .characteristic decay time is denoted by and assumed to be , by far , the shortest time characterizing the system . 
when , the correlation may be neglected . in this summary we describe the practical steps needed in the construction of the me for specified physical systems . the first step consists in a precise definition of the system and of the reservoir . we need to specify their free hamiltonians and and ( at least sometimes ) their eigenenergies and eigenstates . then we define the interaction hamiltonian in the form , where are ( correspondingly ) operators of the system and of the reservoir . we stress that these operators need not be ( separately ) hermitian ; it suffices that the full interaction hamiltonian is hermitian . we also need to specify the density operator describing the state of the reservoir . it is worth remembering that the operators and commute . this implies that the reservoir is in a stationary state . in the second step of the me construction we build ( identify ) the following operators . the following matrix elements are computed in the third step ; they are seen to be partial fourier transforms of the reservoir correlation functions . the reservoir operators are taken in the interaction picture . the coefficients are then employed to construct two hermitian matrices . \label{mep05 } \end{aligned}\ ] ] we note that is a positive - definite matrix and can be computed directly as the fourier transform . the parameters , in practical applications , are more important than ; an explanation will be given later . the separate expression for the elements is . \label{mep6b}\ ] ] hence , the calculation of the coefficients can usually be omitted . the final construction of the proper me is the fourth and last step . the quantities given above allow us to write the me as \nonumber \\ & \hspace*{8 mm } + ~\frac{1}{\hbar^{2 } } \sum_{\omega } \sum_{\alpha,\beta } \gamma_{\alpha \beta}(\omega ) \ : \big\ { \ : a_{\beta}(\omega ) \ : \rho_{a}(t ) \ : a^{\dagger}_{\alpha}(\omega ) \nonumber \\ & \hspace*{45 mm } -~ { { \textstyle \frac{1}{2}}}\ : \bigl [ a^{\dagger}_{\alpha}(\omega ) \ : a_{\beta}(\omega ) , \ : \rho_{a}(t ) \bigr]_{+ } \bigr\ } , \label{mep07 } \end{aligned}\ ] ] where the so - called lamb - shift hamiltonian is given as . the energy shifts of the system which are due to the presence of in the hamiltonian part are usually quite small and frequently negligible . this explains why the role of the matrix is usually less important than that of the matrix . c. cohen - tannoudji , b. diu , f. laloë , _ quantum mechanics _ , wiley - interscience , new york 1991 . r. alicki and k. lendi , _ quantum dynamical semigroups and applications _ , lect . notes phys . 717 ( springer , berlin heidelberg 2007 ) . h.-p. breuer , f. petruccione , _ the theory of open quantum systems _ , oxford university press , 2002 . k. hornberger , _ introduction to decoherence theory _ , arxiv : quant - ph/0612118v2 . j. preskill , _ lecture notes on quantum computation _ , http://www.theory.caltech.edu//ph229 . c. cohen - tannoudji , j. dupont - roc , g. grynberg , _ atom photon interactions _ , wiley , new york 1992 .
we do not present any original or new material . this is a tutorial addressed to students who need to study the microscopic derivation of the quantum - mechanical master equation encountered in many practical physical situations .
cross - correlations often provide us very useful information about financial markets to figure out various non - trivial and complicated structures behind the stocks as multivariate time series .actually , the use of the cross - correlation can visualize collective behavior of stocks during the crisis .as such examples , we visualized the collective movement of the stocks by means of the so - called multi - dimensional scaling ( mds ) during the earthquake in japan on march 2011 .we have also constructed a prediction procedure for several stocks simultaneously by means of multi - layer ising model having mutual correlations through the mean - fields in each layer .usually , we need information about the trend of each stock to predict the price for , you might say , ` single trading ' . however , it sometimes requires us a lot of unlearnable ` craftsperson s techniques ' to make a profit .hence , it is reasonable for us to use the procedure without any trend - forecasting - type way in order to manage the asset with a small risk .from the view point of time - series prediction , elliot _ et.al ._ made a model for the spread and tried to estimate the state variables ( spread ) as hidden variables from observations by means of kalman filter .they also estimated the hyper - parameters appearing in the model by using em algorithm ( expectation and maximization algorithm ) which has been used in the field of computer science .as an example of constructing optimal pairs , mudchanatongsuk regarded pair prices as ornstein - uhlenbeck process , and they proposed a portfolio optimization for the pair by means of stochastic control . for the managing of assets , the so - called _ pairs trading _ has attracted trader s attention .the pairs trading is based on the assumption that the spread between highly - correlated two stocks might shrink eventually even if the two prices of the stocks temporally exhibit ` mis - pricing ' leading up to a large spread .it has been believed that the pairs trading is almost ` risk - free ' procedure , however , there are only a few extensive studies so far to examine the conjecture in terms of big - data scientific approach . of course , several purely theoretical approaches based on probabilistic theory have been reported .for instance , the so - called _ arbitrage pricing theory ( apt ) _ in the research field of econometrics has suggested that the pairs trading works effectively if the linear combination of two stocks , each of which is non - stationary time series , becomes stationary .namely , the pair of two stocks showing the properties of the so - called _ co - integration _ might be a suitable pair .however , it might cost us a large computational time to check the stationarity of the co - integration for all possible pairs in a market , whereas it might be quite relevant issue to clarify whether the pairs trading is actually safer than the conventional ` single trading ' ( see for instance ) to manage the asset , or to what extent the return from the pairs trading would be expected _etc_. 
with these central issues in mind , here we construct a platform to carry out and to investigate the pairs trading which has been recognized an effective procedure for some kind of ` risk - hedge ' in asset management .we propose an effective algorithm ( procedure ) to check the amount of profit from the pair trading easily and automatically .we apply our algorithm to daily data of stocks in the first section of the tokyo stock exchange , which is now available at the yahoo!finance web site . in the algorithm , three distinct conditions , namely , starting ( ) , profit - taking ( ) and stop - loss ( ) conditions of transaction are automatically built - into the system by evaluating the spread ( gap ) between the prices of two stocks for a given pair .namely , we shall introduce three essential conditions to inform us when we should start the trading , when the spread between the stock prices satisfies the profit - taking conditions , _etc_. by making use of a very simple way .numerical evaluations of the algorithm for the empirical data set are carried out for all possible pairs by changing the starting , profit - taking and stop - loss conditions in order to look for the best possible combination of the conditions .this paper is organized as follows . in the next section [ sec : sec2 ] ,we introduce several descriptions for the mathematical modeling of pairs trading and set - up for the empirical data analysis by defining various variables and quantities . herewe also mention that the pairs trading is described by a first - passage process , and explain the difference between our study and arbitrage pricing theory ( apt ) which have highly developed in the research field of econometrics . in section[ sec : sec3 ] , we introduce several rules of the game for the trading .we define two relevant measurements to quantify the usefulness of pairs trading , namely , winning probability and profit rate .the concrete algorithm to carry out pairs trading automatically is also given in this section explicitly .the results of empirical data analysis are reported and argued in section [ sec : sec4 ] .the last section is devoted to summary .in pairs trading , we first pick up two stocks having a large correlation in the past .a well - known historical example is the pair of _ coca cola _ and _ pepsi cola _then , we start the action when the spread ( gap ) between the two stocks prices increases up to some amount of the level ( say , ) , namely , we sell one increasing stock ( say , the stock ) and buy another decreasing one ( say , the stock ) at the time . we might obtain the arbitrage as a profit ( gain ) : when the spread decreases to some amount of the level ( say , ) again due to the strong correlation between the stocks , and we buy the stock and sell the stock at time . we should keep in mind that we used here the stock price normalized by the value itself at -times before ( we may say ` rate ' ) as where we defined as a price of the stock at time .it is convenient for us to use the ( or ) instead of the price because we should treat the pairs changing in quite different ranges of price .hence , we evaluate the spread between two stocks by means of the rate which denotes how much percentage of the price increases ( or decreases ) from the value itself at -times before . 
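to make the bookkeeping of the rates and spreads concrete , a minimal sketch is given below ; the synthetic price series , the look - back window and the variable names are illustrative assumptions , the only point being that each price is normalized by its own value a fixed number of days earlier before the spread ( gap ) between the two stocks is evaluated .

```python
import numpy as np

rng = np.random.default_rng(0)
T, n_days = 10, 250                      # look-back window in days (assumed value)

# two synthetic, strongly correlated price series standing in for a stock pair
common = np.cumsum(rng.normal(0.0, 1.0, n_days))
p_i = 100.0 * np.exp(0.01 * (common + rng.normal(0.0, 0.3, n_days)))
p_j = 250.0 * np.exp(0.01 * (common + rng.normal(0.0, 0.3, n_days)))

def rate(p, T):
    """relative change of the price with respect to its own value T days earlier."""
    return (p[T:] - p[:-T]) / p[:-T]

X_i, X_j = rate(p_i, T), rate(p_j, T)
spread = np.abs(X_i - X_j)               # gap between the two normalized prices

print("mean spread:", round(spread.mean(), 4))
print("max  spread:", round(spread.max(), 4))
# working with rates rather than raw prices lets us compare stocks whose prices
# lie in very different ranges (here around 100 and 250).
```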
in our simulation , we choose the quantity appearing in ( [ eq : def_gamma ] ) to be quite a bit longer than . therefore , the number of active pairs depends on the thresholds , and we see the details of this dependence in table [ tab : tb1 ] and table [ tab : tb2 ] under the condition ( see in the tables ) . from fig . [ fig : cor ] , we also confirm that the number of pairs satisfying and is much smaller ( ) than the number of combinations of all possible stocks , namely , . it might be helpful to notice that pairs trading can be regarded as a ` minimal portfolio ' , and that it can yield a profit even when the stock average decreases . actually , it is possible to construct such a ` market neutral portfolio ' as follows . let us consider the returns of the two stocks and , which are described as , where the parameters denote the so - called ` market betas ' of the stocks , and stands for the return of the stock average , namely , ; here we select them ( ) as positive values for simplicity . on the other hand , appearing in ( [ eq : betai])([eq : betaj ] ) are the residual parts ( without any correlation with the stock average ) of the returns of the stocks . then , let us assume that we take a short position ( ` selling ' in the future ) of the stock by volume and a long position ( ` buying ' in the future ) of the stock . for this action , we have the return of the portfolio as . hence , obviously , the choice of the volume as leads to , which is independent of the market ( the average stock ) . we should notice that is rewritten in terms of the profit as follows : . therefore , in this sense , the profit is also independent of the market . this fact might tell us the usefulness of pairs trading ( a toy numerical check of this cancellation is given in the short code sketch below ) . in the arbitrage pricing theory ( apt ) , the condition for selecting suitable pairs is that the linear combination of the two ` non - stationary ' time series and is a co - integration , namely , that it becomes ` stationary ' . then , the quantity possesses a long - time equilibrium value and we write with a small deviation from the mean . therefore , we easily find ; namely , we obtain the profit with a very small risk . hence , a numerical check of the stationarity of the linear combination , by means of , for instance , the exponentially fast decay of the auto - correlation function or various types of statistical tests , might be useful for selecting the possible pairs . however , it is computationally very costly for a large - scale empirical data analysis . this is the reason why here we use the correlation coefficients and volatilities to select the active pairs , instead of the co - integration based analysis given in the references . in this section , we explain the rules of our game ( trading ) using the data set for the past three years 2010 - 2012 , including 2009 , which is needed to evaluate quantities like the correlation coefficient and the volatility in 2010 , by choosing [ days ] . in the following , we explain how one evaluates the performance of pairs trading according to these rules . obviously , the ability of the asset management by pairs trading depends on the choice of the thresholds . hence , we should investigate what percentage of the total active pairs can obtain a profit for a given set of . to carry out the empirical analysis , we define the ratio between the profit and the loss for the marginal spread , namely , as , where is a control parameter .
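as an aside , the ` market neutral portfolio ' argument of the preceding section is easy to check numerically . in the toy computation below the betas , the market returns and the residuals are all invented numbers ; it merely shows that shorting the stock and buying the other stock with the volume fixed by the ratio of the betas ( one standard choice of the hedge volume , assumed here ) removes the market component from the portfolio return .

```python
import numpy as np

rng = np.random.default_rng(1)
beta_i, beta_j = 1.4, 0.7                # assumed market betas of the two stocks
n = 1000
r_M = rng.normal(0.0, 0.02, n)           # returns of the stock average (the 'market')
eps_i = rng.normal(0.0, 0.005, n)        # residual returns, uncorrelated with r_M
eps_j = rng.normal(0.0, 0.005, n)

r_i = beta_i * r_M + eps_i               # single-factor description used in the text
r_j = beta_j * r_M + eps_j

# short one unit of stock i and go long (beta_i / beta_j) units of stock j
volume_j = beta_i / beta_j
r_portfolio = -r_i + volume_j * r_j      # = -eps_i + (beta_i/beta_j) * eps_j

print("corr(portfolio , market):", round(np.corrcoef(r_portfolio, r_M)[0, 1], 4))
print("corr(stock i   , market):", round(np.corrcoef(r_i, r_M)[0, 1], 4))
```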
it should be noted that , for positive constants , the gap of the spreads ( the profit ) is written as ; the difference appearing in the denominator of equation ( [ eq : def_alpha ] ) gives a lower bound of the profit . although the numerator in ( [ eq : def_alpha ] ) has no such explicit meaning , implicitly it might be regarded as a ` typical loss ' , because the actually realized loss fluctuates around the typical value and is more likely to take a value close to it . hence , for , the loss for the marginal spread is larger than the lowest possible profit once a transaction has taken place , and vice versa for . if we set , it is more likely that the money we lose is less than the lowest bound of the profit ; however , at the same time , it means that we lose easily due to the small gap between and . in other words , we might lose frequently , but each loss is small . on the other hand , if we set , we might rarely lose , but once we do lose , the total amount of the losses is quite large . basically , it lies with the traders to decide which to choose , or ; however , here we set the marginal as a ` neutral strategy ' , that is . thus , we now have only two thresholds for our pairs trading , and the should be determined as a ` slave variable ' from equation ( [ eq : balance2 ] ) . actually , this constraint ( [ eq : balance2 ] ) reduces our computational time to a numerically tractable level . under the condition ( [ eq : balance2 ] ) , we sweep the thresholds as , ( ) and , ( ) in our numerical calculations ( see table [ tab : tb1 ] and table [ tab : tb2 ] ) . in order to investigate the performance of pairs trading quantitatively , we should observe several relevant performance measurements . as such observables , here we define the following winning probability as a function of the thresholds : , where we defined , where are the numbers of wins and losses , respectively , and the conservation of the number of total active pairs should hold ( see the definition of in ( [ eq : def_n ] ) under the condition ) . the bracket appearing in ( [ eq : pw ] ) is defined by . we also define the profit rate : , which is a slightly different measurement from the winning probability . we should notice that we now consider the case with the constraint ( [ eq : balance2 ] ) , and in this sense the explicit dependences of and on are omitted in the above expressions . we also keep in mind that takes a positive value if we settle the accounts by taking the arbitrage at . on the other hand , becomes negative if we terminate the trading due to loss - cutting . therefore , the above denotes the total profit for a given set of the thresholds . we shall list the concrete algorithm for our empirical study of pairs trading as follows . 1 . we collect a pair of stocks from the daily data for the past one year . 2 . do the following procedures from to . ( a ) calculate and to determine whether the pair satisfies the start condition . * start condition : * * if and and , go to ( c ) . * if not , go to ( b ) . ( b ) and go back to ( a ) . ( c ) and go to the termination condition . * termination condition : * * if ( we ` win ' ) , go to the next pair . * if not , go back to ( c ) . if , we ` lose ' .
if , go to 1 . ( a compact code sketch of this game is given below , after the empirical distributions . ) we then repeat the above procedure for all possible pairs of stocks listed on the first section of the tokyo stock exchange , leading up to pairs in total . we play our game according to the above algorithm for each pair , and if a pair passes its decision time , resulting in the profit or the loss , we discard the pair and never ` recycle ' it again for pairs trading . of course , such a treatment would hardly be accepted in realistic pairs trading , because traders tend to reuse the pairs which gave them a profit in past markets . nevertheless , here we shall utilize this somewhat ` artificial ' treatment in order to quantify the performance of pairs trading systematically through the measurements and . we also simplify the game by restricting ourselves to the case in which each trader always makes a trade of unit volume . in the next section , we show several results of the empirical data analysis . here we show several empirical data analyses done for all possible pairs of stocks listed on the first section of the tokyo stock exchange , leading up to pairs . the daily data sets are collected for the past four years 2009 - 2012 from the web site . in our empirical analysis , we set [ days ] , . before we show our main result , we provide the two empirical distributions of the correlation coefficients and the volatilities , which might possess very useful information about selecting the active pairs . we also discuss the distribution of the first - passage time in order to quantify the processing time roughly . in fig . [ fig : cor ] , we plot the distributions of ( left ) and ( right ) for the past four years ( 2009 - 2012 ) . ( figure [ fig : cor ] : distributions of the correlation coefficient ( left ) and the volatility ( right ) for the data sets of the four years 2009 , 2010 , 2011 and 2012 . ) from the left panel , we find that the distribution of the correlation coefficients is apparently skewed for all years , and the degree of skewness in 2011 is the highest among the four due to the great east japan earthquake , as reported in . actually , we might observe that most of the stocks in the multidimensional scaling plane shrink to a finite restricted region due to the strong correlations . on the other hand , the distribution of the volatility is almost independent of the year and possesses a peak around . these empirical distributions confirm that the choice of the system parameters can be justified properly , in the sense that the number of pairs satisfying the criteria and is not a vanishingly small fraction , but a reasonable number of pairs ( ) remains in the system . we next show the distributions of the first - passage times for the data set in 2010 . it should be noted that we observe the duration as a first - passage time from the point on the time axis ; hence the distributions of the duration are given for , respectively . we plot the results in fig . [ fig : fg0 ] .
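before turning to the first - passage - time distributions of fig . [ fig : fg0 ] , here is the compact code sketch of the game promised above . the synthetic price data , the filter values and the three thresholds are placeholders invented for illustration ( the thresholds respect the neutral - strategy constraint of equation ( [ eq : balance2 ] ) ) ; the structure , however , follows the listed algorithm : wait for the starting condition , then close the position either by profit - taking or by stop - loss ( or , as an assumption , settle at the decision time ) , and accumulate the winning probability and the total profit over all active pairs .

```python
import numpy as np

rng = np.random.default_rng(2)
T, n_days = 10, 250
theta, eps, Theta = 0.015, 0.005, 0.025   # start / profit-taking / stop-loss (invented,
                                          # chosen so that Theta - theta = theta - eps)
rho_c, sigma_c = 0.8, 0.10                # correlation / volatility filters (assumed)

def rate(p, T):
    return (p[T:] - p[:-T]) / p[:-T]

def make_pair():
    common = np.cumsum(rng.normal(0.0, 1.0, n_days))
    p1 = 100.0 * np.exp(0.01 * (common + rng.normal(0.0, 0.4, n_days)))
    p2 = 150.0 * np.exp(0.01 * (common + rng.normal(0.0, 0.4, n_days)))
    return p1, p2

def play(p1, p2):
    """one round: returns the realized profit (negative for a loss) or None."""
    x1, x2 = rate(p1, T), rate(p2, T)
    if np.corrcoef(x1, x2)[0, 1] < rho_c or max(x1.std(), x2.std()) > sigma_c:
        return None                           # pair rejected by the selection filters
    spread = np.abs(x1 - x2)
    start = int(np.argmax(spread >= theta))   # starting condition
    if spread[start] < theta:
        return None                           # the spread never opened wide enough
    for s in spread[start + 1:]:
        if s <= eps or s >= Theta:            # profit-taking ('win') or stop-loss ('lose')
            return spread[start] - s
    return spread[start] - spread[-1]         # forced settlement at the decision time

results = [r for r in (play(*make_pair()) for _ in range(2000)) if r is not None]
if results:
    wins = sum(r > 0 for r in results)
    print("active pairs       :", len(results))
    print("winning probability:", wins / len(results))
    print("total profit       :", sum(results))
```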
( figure [ fig : fg0 ] : distributions of the first - passage times for the losing and the winning pairs , shown in two panels . ) from the left panel , we find that in most cases the loss by loss - cutting is confirmed within 50 days after the starting point , and the decision is disclosed at the latest by 250 days after the starting point . on the other hand , we win within several days after the start , and a single peak is actually located in the short - time region . these empirical findings tell us that in most cases the spread between two highly correlated stocks actually shrinks within a short time , even if the two stock prices temporarily exhibit ` mis - pricing ' leading to a large spread . taking this fact into account , our findings also imply that the selection by correlation coefficients and volatilities works effectively to make pairs trading useful . as our main results , we first show the winning probability as a function of the thresholds , defined by ( [ eq : pw ] ) , in fig . [ fig : fg1 ] . to show it effectively , we display the results as three - dimensional plots with contours . from these panels , we find that the winning probability is unfortunately less than that of the ` draw case ' for most choices of the thresholds . we also find that , for a given , the probability is almost a monotonically increasing function of in all three years . this result is naturally accepted , because the trader might take more careful actions on starting the pairs trading for a relatively large . ( figure [ fig : fg1 ] : the winning probability as a function of the thresholds ; from the top to the bottom , the results in 2012 , 2011 and 2010 are plotted , and the right panels show a relatively small range of the thresholds . see also table [ tab : tb1 ] ( 2012 ) and table [ tab : tb2 ] ( 2011 ) for the details . )
to see the result more carefully , we list the raw data produced by our analysis in table [ tab : tb1 ] ( 2012 ) and table [ tab : tb2 ] ( 2011 ) . from these two tables , we find that relatively high winning probabilities are observed ; however , in those cases the number of wins ( or losses ) is small , and we should be more careful when evaluating the winning possibility of pairs trading from such limited data sets . in order to consider the result obtained by our algorithm for pairs trading from a slightly different aspect , we plot the profit rate given by ( [ eq : profit ] ) as a function of the thresholds in fig . [ fig : fg_gain2 ] . we clearly find that for almost all of the combinations one can obtain a positive profit rate , which means that our algorithm actually achieves almost risk - free asset management , and it might be a justification of the usefulness of pairs trading . at a glance , it seems that the result of the small winning probability is inconsistent with that of the positive profit rate ; however , the two results are compatible . to see it explicitly , let us assume that the pairs and lose and the pair wins for a specific choice of thresholds . then , the winning probability is . however , the profits for these three pairs could satisfy the following inequality : . from the definition of the profit rate ( [ eq : profit ] ) , we are immediately led to . hence , an active pair producing a relatively large arbitrage can compensate the losses of the wrong active pairs if the thresholds are chosen appropriately ; for instance , two losses of one unit each and a single win of three units give a winning probability of only one third but a total profit of plus one unit . it might be an ideal scenario for pairs trading . ( figure [ fig : fg_gain2 ] : the profit rate as a function of the thresholds ; from the upper left to the bottom , the results in 2012 , 2011 and 2010 are plotted . )
finally , we should stress that the fact that in most cases of the thresholds implies that an automatic pairs trading system could be constructed by applying our algorithm to all possible pairs in parallel . however , it does not mean that we can always obtain a positive profit in actual trading . our original motivation in this paper is just to examine ( from the stochastic properties of the spreads between two stocks ) what percentage of highly correlated pairs is suitable as candidates for pairs trading in a specific market , namely , the tokyo stock exchange . in this sense , our result could not be used directly for practical trading . nevertheless , as one can easily point out , we may pare down the candidates by introducing an additional transaction cost , and even for such a case the game of calculating the winning probability etc . , by regarding the trading as a mixture of first - passage processes , might be useful . in fig . [ fig : fg_scat ] , we plot the profit rate against the volatility as a scattergram , only for the winner pairs . ( figure [ fig : fg_scat ] : the profit rate against the volatility for the winner pairs ; the vertical axis is shown as a percentage . ) in this plot , we set the profit - taking threshold as and vary the starting threshold in the range of . for each active winner pair , we observe the profit rate and the average volatility of the two stocks in the pair , and plot the resulting point in the two - dimensional scattergram . from this figure , we find that there exist two distinct clusters ( components ) in the winner pairs , namely , the winner pairs giving us a profit rate typically as much as for the range of , which is almost independent of the volatility , and the winner pairs whose profit rate depends linearly on the volatility . the former is a low - risk group , whereas the latter is a high - risk group . the density of the points for the low - risk group in fig . [ fig : fg_scat ] is much higher than that of the high - risk group . hence , we confirm that our selection procedure for the active pairs works effectively to manage the asset as safely as possible , by reducing the risk which usually increases as the volatility grows . finally , it should be noted that , as we discussed in subsection [ sec : sub_constraint ] , the value is a lower bound of the profit rate ( see equations ( [ eq : lower_bound ] ) and ( [ eq : eta02 ] ) ) . therefore , in the above case , the lower bound for the profit rate should be estimated as ; the lowest value for the profit rate ( [ eq : lower_bound2 ] ) is consistent with the lowest value actually observed in the scattergram shown in fig . [ fig : fg_scat ] .
finally , we shall list several examples of active pairs to win the game .of course , we can not list all of the winner pairs in this paper , hence , we here list only three pairs as examples , each of which includes _ sanyo special steel co. ltd . _( i d : 5481 ) and the corresponding partners are _hitachi metals . ltd . _( i d : 5486 ) , _ mitsui mining smelting co. ltd . _( i d : 5706 ) and _ pacific metals co. ltd . _( i d : 5541 ) .namely , the following three pairs actually won in our empirical analysis of the game . note that each i d in the above expression corresponds to each identifier used in yahoo!finance . as we expected as an example of _ coca cola _ and _ pepsi cola _ , these are all the same type of industry ( the steel industry ) .we would like to stress that we should act with caution to trade using the above pairs because the pairs just won the game in which the pairs once got a profit in the past are never recycled in future .therefore , we need much more extensive analysis for the above pairs to use them in practice .in this paper , we proposed a very simple and effective algorithm to make the pairs trading easily and automatically .we applied our algorithm to daily data of stocks in the first section of the tokyo stock exchange .numerical evaluations of the algorithm for the empirical data set were carried out for all possible pairs by changing the starting ( ) , profit - taking ( ) and stop - loss ( ) conditions in order to look for the best possible combination of the conditions .we found that for almost all of the combinations under the constraint , one can obtain the positive profit rate , which means that our algorithm actually achieves almost risk - free asset management at least for the past three years ( 2010 - 2012 ) and it might be a justification of the usefulness of pairs trading .finally , we showed several examples of active pairs to win the game . as we expected before , the pairs are all the same type of industry ( for these examples, it is the steel industry ) .we should conclude that the fact in most cases of thresholds implies that automatic pairs trading system could be constructed by applying our algorithm for all possible in parallel way .of course , the result does not mean directly that we can always obtain positive profit in a practical pairs trading .our aim in this paper was to examine how much percentage of highly correlated pairs is suitable for the candidate in pairs trading in a specific market , namely , tokyo stock exchange . in this sense, our result could not be used directly for practical pairs trading .nevertheless , we may pare down the candidates by introducing the additional transaction cost , and even for such a case , the game to calculate the winning probability etc . by regarding the trading as a mixture of first - passage processes might be useful .we are planning to consider pairs listed in different stock markets , for instance , one is in tokyo and the other is in ny .then , of course , we should also consider the effect of the exchange rate .those analyses might be addressed as our future study . .details of the result in 2012 .see also fig .[ fig : fg1 ] ( the top most ) .we find that relatively higher winning probabilities are observed , however , the number of wins ( and lose ) is small . 
it should be noted that holds . this work was financially supported by grant - in - aid for scientific research ( c ) of the japan society for the promotion of science no . 2533027803 and grant - in - aid for scientific research ( b ) of the japan society for the promotion of science no . . we were also supported by grant - in - aid for scientific research on innovative area no . . one of the authors ( ji ) thanks anirban chakraborti for his useful comments on this study at the early stage . t. ibuki , s. suzuki and j. inoue , _ cluster analysis and gaussian mixture estimation of correlated time - series by means of multi - dimensional scaling _ , _ econophysics of systemic risk and network dynamics , new economic windows _ vol . 2013 , pp . 239 - 259 , springer - verlag ( italy - milan ) ( 2012 ) . t. ibuki , s. higano , s. suzuki and j. inoue , _ hierarchical information cascade : visualization and prediction of human collective behaviour at financial crisis by using stock - correlation _ , _ ase human journal _ * 1 * , issue 2 , pp . 74 - 87 ( 2012 ) . t. ibuki , s. higano , s. suzuki , j. inoue and a. chakraborti , _ statistical inference of co - movements of stocks during a financial crisis _ , _ journal of physics : conference series _ * 473 * , 012008 ( 16 pages ) ( 2013 ) . m. murota and j. inoue , _ characterizing financial crisis by means of the three states random field ising model _ , _ econophysics of agent - based models , new economic windows _ vol . 2014 , pp . 83 - 98 , springer - verlag ( italy - milan ) ( 2013 ) . e.g. gatev , w.n. goetzmann and k.g. rouwenhorst , _ pairs trading : performance of a relative value arbitrage rule _ , _ the review of financial studies _ * 19 * , issue 3 , pp . 797 - 827 ( 2006 ) ( see also _ nber working papers _ * 7032 * , _ national bureau of economic research inc . _ ( 1999 ) ) . t. ibuki and j. inoue , _ response of double - auction markets to instantaneous selling - buying signals with stochastic bid - ask spread _ , _ journal of economic interaction and coordination _ * 6 * , no . 2 , pp . 93 - 120 ( 2011 ) .
we carry out a large - scale empirical data analysis to examine the efficiency of the so - called pairs trading . on the basis of three relevant thresholds , namely , the starting , profit - taking and stop - loss conditions for the ` first - passage process ' of the spread ( gap ) between two highly correlated stocks , we construct an effective strategy to make trades via ` active ' stock pairs automatically . the algorithm is applied to stocks listed on the first section of the tokyo stock exchange , amounting to pairs in total . we numerically confirm that asset management by means of pairs trading works effectively , at least for the data sets of the past three years ( 2010 - 2012 ) , in the sense that the profit rate becomes positive ( totally positive arbitrage ) in most cases of the possible combinations of thresholds , which correspond to ` absorbing boundaries ' in the literature of first - passage processes .
in this work we will determine the mosaic number of all 36 prime knots of eight crossings or fewer . before we do this , we will give a short introduction to knot mosaics . take a length of rope , tie a knot in it , glue the ends of the rope together , and you have a _ mathematical knot _ : a closed loop in 3-space . a rope with its ends glued together without any knots is referred to as the _ trivial knot _ , or just an unknotted circle in 3-space . there are other ways to create mathematical knots aside from rope . for example , _ stick knots _ are created by gluing sticks end to end until a knot is formed ( see adams ) . in 2008 , lomonaco and kauffman developed an additional structure for considering knots , which they called _ knot mosaics _ . in 2014 , kuriya and shehab showed that this representation of knots is equivalent to tame knot theory , or knots with rope , implying that tame knots can be represented equivalently with knot mosaics . this means any knot that can be made with rope can be represented equivalently with a knot mosaic . + a _ knot mosaic _ is the representation of a knot on an grid composed of 11 tiles as depicted in figure [ f : tiles ] . a tile is said to be _ suitably connected _ if each of its connection points touches a connection point of a contiguous tile . several examples of knot mosaics are depicted in figure [ f : example ] . it should be noted that in figure [ f : example ] , the first mosaic is a knot , the trefoil knot , the second mosaic is a link , the hopf link , and the third is the composition of two trefoil knots ( remove a small arc from two trefoils , then connect the four endpoints by two new arcs , depicted in red , denoted by ) . a knot is made of one component ( i.e. one piece of rope ) , and a link is made of one or more components ( i.e. one or more pieces of rope ) . for this work , we will focus on knot mosaics of _ prime knots _ . a prime knot is a knot that can not be depicted as the composition of two non - trivial knots . the trefoil is a prime knot . + when studying knots , a useful and interesting topic used to help distinguish two knots is that of _ knot invariants _ . a knot invariant is a quantity defined for each knot that is not changed by _ ambient isotopy _ , or continuous distortion , without cutting or gluing the knot . one such knot invariant is the _ crossing number _ of a knot . the crossing number is the fewest number of crossings in any diagram of the knot . for example , the crossing number of the trefoil is three , which can be seen in figure [ f : example ] . a _ reduced _ diagram of a knot is a projection of the knot in which none of the crossings can be reduced or removed . the fourth knot mosaic depicted in figure [ f : example ] is an example of a non - reduced trefoil knot diagram . in this example the crossing number of three is not realized because there are two extra crossings that can be easily removed . + an interesting knot invariant for knot mosaics is the _ mosaic number _ . the mosaic number of a knot is the smallest integer for which can be represented on an mosaic board . we will denote the mosaic number of a knot as . for the trefoil , it is an easy exercise to show that the mosaic number of the trefoil is four , or .
to see this ,try making the trefoil on a board and arrive at a contradiction .+ next , we introduce a technique that can be used to clean up " a knot mosaic by removing unneeded crossing tiles .in 1926 , kurt reidemeister demonstrated that two knot diagrams belonging to the same knot , up to ambient isotopy , can be related by a sequence of three moves , now know as the reidemeister moves .for our purposes , we will consider two of these moves on knot mosaics , the mosaic reidemeister type i and type ii moves as described by lomonaco and kauffman . formore about reidemeister moves , the interested reader should see adams .+ the mosaic reidemeister type i moves are the following : the mosaic reidemeister type ii moves are given below : next we make several observations that will prove useful . [o : twofold ] once the inner tiles of a mosaic board are suitably placed , there are only two ways to complete the board so that it is suitable connected , resulting in a knot or a link . for any given mosaic board , we will refer to the collection of inner tiles as the _inner board_. for example , in figure [ f:5by5 ] the tiles - would make the inner board for this mosaic board .[ o : evenall ] assume is even . for a board of size with the inner board consisting of all crossing tiles , any resulting suitably connected mosaic will either be a component link mosaic or component non - reduced link mosaic , see figure [ f:4_1 ] for example . [o : oddall ] assume is odd . for a board of size with the inner board consisting of all crossing tiles , any resulting suitably connected mosaic is a non - reduced knot mosaic . observation [ o : oddall ] can be generalized in the following way .[ o : corner ] let be a knot mosaic with two corner crossing tiles in a top row of the inner board .if the top boundary of the row has an odd number of connection points , then a reidemeister type i move can be applied to either corner of the row of .this extends via rotation to the outer - most columns and the lower - most row of the inner board .[ fig:5by5 ] armed with this quick introduction to knot mosaics , we are ready to determine the mosaic number for all prime knots with a crossing number of eight or fewer in the next section .before we proceed , it should be noted that there are many other questions to consider regarding knot mosaics besides finding the mosaic number .for example , what is the fewest number of non - blank tiles needed to create a specific knot ? this could be known as the _ tile number _ of a knot . with this in mind , if we allow knot mosaics to be rectangular , can some knots have a smaller tile number if they are presented in a rectangular , configuration as opposed to a square configuration ? we will conclude this article with a number of other open questions about knot mosaics that you can consider and try to solve . +in this section , we will determine the mosaic number for all prime knots of eight or fewer crossings .we are referring to these as small " prime knots .we will see that for some knots , the mosaic number is obvious , " while others take some considerable work .we begin with knots is the second knot of six crossings in the rolfsen knot table . 
] of an obvious mosaic number as shown in table [ t : obvious ] .+ why do these knots have obvious mosaic numbers ?as previously noted the trefoil knot , , can not fit on a mosaic board , as such a board would only allow one crossing tile when requires at least three crossings .hence , the mosaic number for the trefoil is obvious .similarly , the knots , , and have more than four crossings , so they can not fit on a mosaic board which only allows at most four crossing tiles . in the appendix , we have provided representations of these knots on mosaic boards , thus determining the mosaic number for these knots .it should be noted that as the knots become larger , it is often difficult to determine whether a specific knot mosaic represents a given knot . to check that a knot mosaic represents a specific knot , we used a software packaged called knotscape developed by professor morwen thistlethawite which looks at the dowker notation of a knot to determine the knot presented . while knotscape can not determine all knots , it can determine small prime knots . for more see adams .+ next we consider knots whose mosaic number is almost obvious . " at first glance , one may think that the figure - eight knot , , should have a mosaic number of four .start with a mosaic with the four inner tiles being crossing tiles . by observation [ o :twofold ] , this mosaic can be completed in two ways as seen in figure [ f:4_1 ] . however , the knot is known as an _ alternating _ knot .an alternating knot is a knot with a projection that has crossings that alternate between over and under as one traverses around the knot in a fixed direction .so , if we were to try to place on a mosaic , there would be four crossing tiles and they would have to alternate . thus can not be placed on a board . in the appendixwe see a presentation of on a mosaic board , hence .another knot with an almost obvious mosaic number is .figure [ f : compare6_1 ] first depicts a configuration of on a mosaic board .however , by performing a move called a _ flype _ ( see adams ) we can fit on a mosaic board . it should be noted that this mosaic representation of has seven crossings instead of six .thus the mosaic number for is realized when the crossing is not .it turns out that is not the only knot with such a property .ludwig , evans , and paat created an infinite family of knots whose mosaic numbers were realized only when their crossing numbers were not . at this point , we have determined the mosaic number for all six or fewer crossing knots except .surprisingly , we will see that can not fit on a board , even though such a board has nine possible positions to place crossing tiles .+ since has six crossings , we know that . in the appendix we see a representation of on a mosaic board , so .this mean or .we now argue .+ assume to the contrary that . by the definition of the mosaic number ,this implies that there is some mosaic that represents .we will show via a case analysis that regardless of how the crossing tiles are arranged on , the resulting knot is in fact _ not . this will give us a contradiction , implying that .+ in order to help with the case analysis , we label the nine inner tiles of the mosaic , as depicted in figure [ f:5by5 ] .the knot uses at least six crossing tiles .since has a crossing number of six , by the pigeon hole principle at least one of the four corner inner tiles , , , , or must be a crossing tile . 
by rotations , it is enough to consider the four cases that are depicted in figure [ f : fourcases ] .note that in figure [ f : fourcases ] , gray tiles describe non - crossing tiles and white tiles could be crossing tiles ( but they do not have to be ) .* case 1 * : suppose that is a crossing tile , while , and are not .thus and are all crossing tiles since has exactly 6 crossing tiles .every reduced projection of an alternating knot is alternating , and since represents the alternating knot , the crossings on must alternate .a quick inspection shows that to suitably connect the inner tiles of we would need to ensure that ( i ) , and are not crossing tiles , ( ii ) the crossings are alternating , and ( iii ) there are no easily removed crossings , however this results in a mosaic that represents , as seen in figure [ fig : ex1 ] .therefore , can not be constructed in case 1 .* case 2 * : for the case when has two corner inner tiles that are crossings , we require two sub - cases . + * sub - case 2(a ) * : suppose that and are crossing tiles , while and are not .if is a crossing tile then observation [ o : corner ] may be applied to the top inner row of . applying this observaitonwill either change or to a non - crossing tile . without loss of generalization ,suppose that is changed .notice that now satisfies case 1 , and from the previous analysis , does not represent .therefore we may assume that is not a crossing tile .since has at least 6 crossing tiles , and and are not crossing tiles , the remaining 6 inner tiles must be crossing tiles .then has connection points . by observation [ o : corner ] either or be changed to a non - crossing tile .again falls into case 1 and does not represent . + * sub - case 2(b ) * suppose that and are crossing tiles while and are not .+ assume to the contrary that only has crossing tiles .then exactly one of is a non - crossing tile .up to rotation and reflection , we only need to consider situations ( i ) and ( ii ) , where since has 6 crossings , the corner inner - tiles and can not be changed from crossing tiles . with this in mind , suitably connecting the crossing tiles in figure [ f : case2(b)inner ] so that a knot ( and not a 2-component link ) is created leads to or ( see figure [ f : case2(b)knots ] ) .this is a contradiction , proving our claim . if the crossing tiles are alternating , then represents , contradicting that is .so assume that has 7 crossings and is non - alternating .+ observe that if any of the pairs , , , or are non - alternating , then a type ii reidemeister move is present , and can be reduced to five crossings .however , this contradicts that is .so each pair , , , and is alternating . since is non - alternating , at least one of the pairs , , , or is non - alternating .without loss of generality , assume creates a pair of non - alternating crossings . then , up to ambient isotopy , is or , as seen in figure [ fig_6131 ] .therefore sub - case 2(b ) does not result in .* case 3 : * suppose that and are crossing tiles , and is not. since has at least 6 crossings , there must be at least 3 more crossing tiles on the board .+ observe that if either or are crossing tiles then observation [ o : corner ] may be applied to the top inner row or the right inner column , respectively . 
as a result of this , one of the crossings on and be changed to a non - crossing tile , leaving only two corner inner - tiles that are crossings .this reverts to case 2 showing that would not represent .therefore , we may assume that neither nor are crossing tiles .+ with and eliminated as crossing tiles , and must all be crossing tiles. then by observation [ o : corner ] , either or can be changed from a crossing tile to a non - crossing tile .this again reverts to case 2 showing that would not represent . + * case 4 : * suppose that and are crossing tiles . note that at least one of the tiles in the set must be a crossing tile .this means observation [ o : corner ] applies to some row or column of , and can be reduced to case 3 .hence does not represent .+ by the above four cases , we see that the can not be placed on a mosaic . hence , by the figure for in the appendix , we see that the mosaic number of is six ; that is .we next consider the seven - crossing knots .as seen above , the knot can be placed on a mosaic board .we formalize this result in the following proposition as well as establish the mosaic number for the other seven - crossing knots .+ [ t:7 - 4 ] the mosaic number of is five , that is .moreover , is the only seven - crossing prime knot with mosaic number five ; the remaining seven - crossing prime knots have mosaic number six .we have already seen via observation [ o : corner ] that at most seven crossing tiles can be placed on a board without reduction via a reidemeister type i move . moreover , since all seven crossing knots are alternating , there is only one way to place seven alternating crossing tiles up to mirror , reflection , and rotation as depicted in figure [ f : alt_7 ] . when this arrangement is suitably connected , the only knot resulting is .therefore and all other seven - crossing knots have mosaic number six as depicted in the appendix .next we consider the eight - crossing knots .let be a knot of eight crossings . by the proof of theorem [ t:6 - 3 ] , we know that can not fit on a mosaic board .this means .furthermore there exists a knot mosaic of on a board ( see the appendix ) .this implies .+ given the above arguments , we see that the mosaic number of the eight crossing knots is greater than five . by the appendix and use of knotscape, we see that the mosaic number of all eight crossing knots is six .we summarize our findings in the table [ t : obvious ] . for each knot with at most 8 crossings , the appendix includes a mosaic of size representing .+ 10 colin c. adams , bevin m. brennan , deborah l. greilsheimer , and alexander k. woo , _stick numbers and composition of knots and links _ , j. knot theory ramifications , * 6 * ( 1997 ) , no .2 , 149161 .colin c. adams , _ the knot book _ , american mathematical society , providence , ri , 2004 .j. w. alexander _ a lemma on systems of knotted curves _ ,usa * 9 * ( 1923 ) 9395 .knotscape , http://www.math.utk.edu/~morwen/knotscape.html .kyungpyo hong , ho lee , hwa jeong lee , and seungsangoh , _ small knot mosaics and partition matrices _ , j. phys .a , * 47 * ( 2014 ) , no . 43 , 13 pp . t. kuriya and o. shebab , _ the lomonaco - kauffman conjecture _ , j. knot theory ramifications , * 23 * ( 2014 ) , no . 1hwa jeong lee , kyungpyo hong , ho lee , and seungsang oh , _mosaic number of knots _ , j. knot theory ramifications , * 23 * ( 2014 ) no .13 , 8 pp .samuel j. lomonaco and l. kauffman , _ quantum knots and mosaics _ , quantum inf ., * 7 * ( 2008 ) , no . 2 - 3 , 85115 .lewis d. 
ludwig, erica l. evans, and joseph s. paat, _an infinite family of knots whose mosaic number is realized in non-reduced projections_, j. knot theory ramifications, *22* (2013), no. 7, 11 pp. kurt reidemeister, _elementare begründung der knotentheorie_, abh. math. sem. univ. hamburg, *5* (1926), 24-32.
in 2008, lomonaco and kauffman introduced knot mosaics to define a quantum knot system. a quantum knot is used to describe a physical quantum system, such as the topology or status of vortexing that occurs at scales too small to observe directly. kuriya and shehab proved that knot mosaic type is a complete invariant of tame knots. in this article we consider the mosaic number of a knot, a natural and fundamental knot invariant defined in the knot mosaic system. we determine the mosaic number for all prime knots of eight or fewer crossings. this work is written at an introductory level to encourage undergraduates to understand and explore this topic; no prior knowledge of knot theory is assumed or required. this work was supported by the national research foundation of korea, funded by the ministry of science, ict and future planning (nrf-2015r1c1a2a01054607).
the problem of growing trees belongs to larger class of problems of evolving networks a new area with many interdisciplinary applications , from biology and computational science to linguistics . in statistical mechanics ,we often investigate the state of thermodynamic equilibrium , which is unique and therefore it can not preserve any information .however , in other sciences memory on past states is an essential ingredient of the system .here we are interested in search how the structure of the origin of a tree , i.e. of a graph from which the tree is constructed , influences the overall characteristics of the growing system .a network containing nodes is fully characterized by its connectivity matrix : if the nodes are linked together , and elsewhere .more convenient but somewhat redundant is the distance matrix , where the matrix element is the number of links along the shortest path from to .it is often simpler to describe a network statistically .a local characteristics of a network includes the degree distribution , i.e. the probability that a node is linked to a given number of neighbors .a global characteristics includes the node - node distance distribution .whereas the former can be treated as complete only conditionally , a few is known on the latter .recent progress of knowledge on the mean node - node distance ] is an average over different matrices , i.e. different graphs . by growing we mean adding subsequent nodes to an already existing graph . when each node is added with one link only , a tree a compact graph without loops and without multiple edges is formed . in trees ,a path between each two nodes is unique , and it can not be changed during the growth process . when a node is added , the node - node distance matrix is increased by one column and one row . once the matrix elements are formed , they do not change their values . however , if nodes are added with two or more links , a kind of shortcuts are formed and some node - node distances may be shortened . the main goal of this work is to demonstrate , that the node - node distance distribution of a growing tree preserves an information on the structure of the initial tree , from which it is formed . below we deal with two kinds of growing trees , which differ in the degree distribution .let us consider the linking of new nodes to randomly selected nodes .when the selection is made without any preference , we obtain a so - called exponential tree . in this case , the degree distribution , where is the number of links of a node .nodes can be selected also with some preference with respect to their degree .if the linking probability is proportional to the degree , we obtain the scale - free or barabsi albert networks . in this case , , with . to achieve our goal ,the simplest method is to calculate the mean node - node distance for trees of nodes , the formation of which has started from two different trees with four nodes .this is done in the next section with iterative equations , which has been derived recently for the exponential trees . in section [ sec_algorithm ] , the growth algorithms are introduced , basing on an evolution of the distance matrix . in section [ sec_results ] ,numerical results are presented for the exponential trees and the barabsi albert scale - free trees .we show also that the memory on the ancestral network is much reduced , if the trees are substituted by graphs with cyclic paths , i.e. with . 
the last section is devoted to discussion .consider the probability that a tree of a given structure is grown .trees are different if there is no one - to - one correspondence between their pairs of linked nodes .let us denote the number of different trees with nodes by .it is easy to check by inspection , that and . as , the probability or weight of the tree of three nodes ( fig . [fig_trees](a ) ) must be one .an exponential tree of four nodes can be formed by linking a new ( fourth ) node either to one of two end nodes , or to the central one .then , the probability of a chain of nodes ( fig . [ fig_trees](b ) ) is , and the probability of a star - like - tree ( fig . [ fig_trees](c ) ) is . from the chain , a longer chain ( fig . [ fig_trees](d ) ) can be produced in two ways , then its weight is . from the star , another star ( fig .[ fig_trees](f ) ) can appear with the probability .the remaining tree ( fig . [ fig_trees](e ) ) can be formed from either the chain or the star , then its weight is .we note that in the case of the scale - free trees , the weights of the trees presented in fig .[ fig_trees ] are : 1 , 1/2 , 1/2 , 1/6 , 7/12 and 1/4 , respectively .this is a simple demonstration , that the weights of trees in two different classes are different .any possible tree can be formed from a tree of three nodes ( fig . [ fig_trees](a ) ) .the way to form chains and stars is unique and then , their weights are relatively small .example giving , the weight of an exponential star of nodes is .we could eliminate stars , if we develop trees from the chain shown in fig .[ fig_trees](b ) .seemingly , the weights of other trees should not be changed much , but all of them are influenced by the lack of the stars .example giving , in this case the tree shown in fig .[ fig_trees](e ) can be formed in one unique way . as a consequence , the whole distribution of weights is rebuilt . with the iterative equations derived recently , we can calculate the mean distance and the mean square of distances $ ] for two `` families '' of trees .one is formed from the chain - like tree shown in fig . [ fig_trees](b ) and labeled as `` z '' , and another from the star - like tree presented in fig . [ fig_trees](c ) and marked as `` y '' .then , the first `` family '' does not contain stars , and the second one does not contain chains .the equations are : [ eq_iter ] and the information on the initial trees is encoded in the initial values of and .it is easy to check , that for the chain , and for the star , .similar method has been used in .the difference is that here , the eqs .are exact , but they apply only to the exponential trees .two initial trees with four nodes ( the chain and the star ) are represented in the computer memory as two distance matrices and . the starting point are two matrices for two trees of four nodes : for the chain and the star , respectively . selecting a nodeto link a new node is equivalent to select a number of column / row of the matrix .then the matrix is supplemented by new column and row , which are copies of the -th column / row but with all elements incremented by one and obviously the eq. 
served in the derivation of the iterative formulas .the same numerical technique is applied also to the case of the barabsi albert scale - free trees .the only difference is that in this case , the node is selected with preference of the number of its pre - existing links .namely , where is the number of links of -th node .additional matrix contains the indices of row of the distance matrix where `` 1 '' is encountered .each case indicates a link between nodes and .the matrix is useful to select nodes of given degree for the scale - free trees and graphs , according to the so - called kertsz algorithm .further , the same technique is applied to simple graphs , where new nodes are attached to previously existing ones by links .then , cyclic paths are possible and the distance matrix is to be rebuilt when adding each node .the algorithm is as follows : let us suppose that -th node is added to existing nodes and .then [ eq_graphs ] for new , -th , column / row and again for the diagonal element in the case of growing graphs .the gray sites show randomly chosen columns / rows ( nodes to which new node will be attached ) .the black sites show matrix elements which are reevaluated from eq . due to newly created shortcuts .the last columns / rows are constructed according eqs . and . starting with the y - like star new nodes were subsequently added to nodes . ]one step of construction of the matrix for simple graphs is presented in fig .[ fig_matrix ] .an example of the construction for trees is given in .in figs . [ fig_met_exp ] and [ fig_met_ab ] the dependences ( a ) and ( b ) obtained from growth simulations are presented , for exponential trees and for scale - free trees , respectively .the results of simulations are averaged over independent growths . for both kinds of treesthe difference in average node - node distance tends to the constant value during the growth process . for higher momentsthe effect is even stronger and increases with the tree size . in fig .[ fig_met_exp ] we give also the results for and calculated with eq . for exponential trees of nodes .the fact that and do not decrease with means that growing structures preserve the memory on their initial shapes . in the case of simple graphs ( ) , the distance matrix must be reevaluated , what makes the time of the calculation substantially larger .the results for graphs are averaged only over one thousand of independent growths .the curves and for both kind of simple graphs are shown in fig .[ fig_dn ] .the linear fits for are and for the exponential graphs and the scale - free graphs , respectively . the functions and for both kind of evolving graphs are shown in fig .[ fig_meg ] .for the scale - free graphs , we observe some small memory effect , which manifests as a constant mutual shift of the plots vs. . in this caseit is not clear if the effect vanishes or not , when tends to infinity . 
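the matrix-growth procedure just described translates directly into code. the sketch below is our own re-implementation under stated assumptions, not the authors' program: for trees (one link per new node) the new row of the distance matrix is a copy of the selected row with all entries incremented by one; for graphs (two or more links) the existing entries are re-evaluated because paths through the new node can act as shortcuts. the four-node chain and star seeds correspond to the z-like and y-like initial trees discussed above; the preferential selection is a simple degree-weighted choice and not the exact kertész algorithm.

import numpy as np

CHAIN4 = np.array([[0, 1, 2, 3], [1, 0, 1, 2], [2, 1, 0, 1], [3, 2, 1, 0]])  # z-like chain
STAR4 = np.array([[0, 1, 1, 1], [1, 0, 2, 2], [1, 2, 0, 2], [1, 2, 2, 0]])   # y-like star

def grow(d0, n_final, preferential=False, m=1, seed=0):
    # grow a tree (m = 1) or a simple graph (m >= 2) from the initial
    # distance matrix d0, updating the node-node distance matrix on the fly.
    rng = np.random.default_rng(seed)
    d = np.array(d0, dtype=int)
    degree = (d == 1).sum(axis=0)
    while len(d) < n_final:
        n = len(d)
        p = degree / degree.sum() if preferential else None
        targets = rng.choice(n, size=min(m, n), replace=False, p=p)
        new_row = (d[targets] + 1).min(axis=0)        # distances of the new node
        d = np.pad(d, ((0, 1), (0, 1)))
        d[n, :n] = new_row
        d[:n, n] = new_row
        degree = np.append(degree, len(targets))
        degree[targets] += 1
        if len(targets) > 1:                          # shortcuts: re-evaluate old entries
            np.minimum(d, d[:, [n]] + d[[n], :], out=d)
    return d

# mean node-node distance of exponential trees grown from the two seeds
for seed_matrix in (CHAIN4, STAR4):
    d = grow(seed_matrix, 500)
    print(d[np.triu_indices(len(d), 1)].mean())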
[figure: panels (a) and (b) for exponential trees, obtained with the iterative formulas as well as from direct growth simulations; results averaged over independent growths.] [figure: panels (a) and (b) for scale-free trees obtained from the growth simulations; results averaged over independent growths.] [figure: panels (a) and (b) for the exponential and scale-free graphs with different initial configurations; the dependence on the initial configuration is not visible on the scale of the plot.] [figure: exponential and scale-free graphs obtained from the growth simulations; results averaged over independent growths.] in the case of the exponential trees, the results of the simulations agree well with the curves obtained from the iterative equations. this fact supports the reliability of the numerical technique for the scale-free trees and the graphs, where we have no analytical calculations. the main result of this work is that the node-node distance distribution in a growing tree depends on its initial structure. our calculations indicate that both the average distance and its second moment display this kind of memory. the information is encoded in a constant term in the expression for the mean distance. this constant varies by about 0.109 and 0.164 when the shape of the initial four-node tree is changed from the y-like star to the z-like chain, for the exponential and the scale-free trees, respectively. in the second moment it is likewise a constant offset that depends on the initial shape. this is true both for the exponential and the scale-free trees. the memory effect is much reduced, or even disappears, when new nodes are linked to the network by at least two edges.
in this case , the distance matrix is rebuilt by new edges which can shorten distances between initially far nodes by providing new paths between them .concluding , we have demonstrated that the growing trees carry an information on their initial geometrical structure .the validity of this result relies not only on pure geometry , but also on a particular application of the graph theory .as remarked in the introduction , the list of examples of such applications is quite rich .if the considered network is due to the citation index , we trace the flow of a new idea from one paper to another .we see that around some seminal papers , networks of citations are formed , as it happens in the case of ref .sometimes there are two or more seminal papers , and then the shape of the network depends on their clarity , ease of mathematical formulation and individual preferences of the readership , formed in personal contacts .if new results spread just by reading papers , it spreads slowly : somebody reads it , tells to a friend , the friend s student is asked to calculate a similar problem .the tree is ` chain - like ' . on the contrary, each conference makes the tree of spreading of ideas to be more ` star - like ' , where the possible sources of getting new information are multiple and efficient .the authors are grateful to prof .kazimierz raski for valuable help and to mr .pawe kuakowski for comments on manuscript .the numerical calculations were carried out in ack cyfronet the machine time on sgi 2800 is financed by the polish state committee for scientific research ( kbn ) under grant no .kbn / sgi2800/agh/018/2003 .99 r.albert and a .- l.barabsi , rev .* 286 * ( 2002 ) 47 .s.n.dorogovtsev and j.f.f.mendes , adv .* 51 * ( 2002 ) 1079 .m.e.j.newman , siam review * 45 * ( 2003 ) 167 .s.n.dorogovtsev and j.f.f.mendes , in _ from the genome to the internet _ , eds .s.bornholdt and h.g.schuster , viley - vch , berlin , 2002 .z.burda , j.d.correia , and a.krzywicki , phys . rev .* e56 * ( 2001 ) 46118 .s.n.dorogovtsev , j.f.f.mendes , and a.n.samukhin , nucl . phys .* b653 * ( 2003 ) 307 .a.fronczak , p.fronczak , and j.hoyst , cond - mat/0212230 .p.biaas , z.burda , j.jurkiewicz , and a.krzywicki , phys . rev . *e67 * ( 2003 ) 066106 .l.barabsi and r.albert , science * 286 * ( 1999 ) 509 .k.malarz , j.czaplicki , b.kawecka-magiera , and k.kuakowski , int . j. mod . phys .* c14 * ( 2003 ) 1201 .k.malarz , j.karpiska , a.kardas , and k.kuakowski , task quarterly * 8 * ( 2004 ) 115 .r.j.wilson , _ introduction to graph theory _ , longman scientific and technical , new york , 1987 .r.v.kulkarni , e.almaas , and d.stroud , phys . rev .* e61 * ( 2000 ) 4268 .g.szab , m.alava , and j.kertsz , phys . rev .* e66 * ( 2002 ) 026101 .d.stauffer private communication .
we show that the structure of a growing tree preserves information on the shape of the initial graph. for exponential trees, evidence of this kind of memory is provided by means of iterative equations derived for the moments of the node-node distance distribution. numerical calculations confirm the result and allow us to extend the conclusion to barabási-albert scale-free trees. the memory effect almost disappears if subsequent nodes are connected to the network with more than one link. keywords: evolving networks, graphs and trees, small-world effect. pacs: 82.20.m, 05.50.+q
neurons in the visual cortex form spatial representations or maps of several stimulus features . how are different spatial representations of visual information coordinated in the brain ? in this paper , we study the hypothesis that the coordinated organization of several visual cortical maps can be explained by joint optimization .previous attempts to explain the spatial layout of functional maps in the visual cortex proposed specific optimization principles ad hoc . here , we systematically analyze how optimization principles in a general class of models impact on the spatial layout of visual cortical maps . for each considered optimization principlewe identify the corresponding optima and analyze their spatial layout .this directly demonstrates that by studying map layout and geometric inter - map correlations one can substantially constrain the underlying optimization principle .in particular , we study whether such optimization principles can lead to spatially complex patterns and to geometric correlations among cortical maps as observed in imaging experiments .neurons in the primary visual cortex are selective to a multidimensional set of visual stimulus features , including visual field position , contour orientation , ocular dominance , direction of motion , and spatial frequency . in many mammals ,these response properties form spatially complex , two - dimensional patterns called visual cortical maps . the functional advantage of a two dimensional mapping of stimulus selectivities is currently unknown . what determines the precise spatial organization of these maps ?it is a plausible hypothesis that natural selection should shape visual cortical maps to build efficient representations of visual information improving the fitness of the organism .cortical maps are therefore often viewed as optima of some cost function .for instance , it has been proposed that cortical maps optimize the cortical wiring length or represent an optimal compromise between stimulus coverage and map continuity .if map structure was largely genetically determined map structure might be optimized through genetic variation and darwinian selection on an evolutionary timescale .optimization may , however , also occur during the ontogenetic maturation of the individual organism for instance by the activity - dependent refinement of neuronal circuits .if such an activity - dependent refinement of cortical architecture realizes an optimization strategy its outcome should be interpreted as the convergence towards a ground state of a specific energy functional .this hypothesized optimized functional , however , remains currently unknown . 
as several different functional maps coexist in the visual cortex candidate energy functionals are expected to reflect the multiple response properties of neurons in the visual cortex .in fact , consistent with the idea of joint optimization of different feature maps cortical maps are not independent of each other .various studies proposed a coordinated optimization of different feature maps .coordinated optimization appears consistent with the observed distinct spatial relationships between different maps such as the tendency of iso - orientation lines to intersect od borders perpendicularly or the preferential positioning of orientation pinwheels at locations of maximal eye dominance .specifically these geometric correlations have thus been proposed to indicate the optimization of a cost function given by a compromise between stimulus coverage and continuity , a conclusion that was questioned by carreira - perpinan and goodhill .+ visual cortical maps are often spatially complex patterns that contain defect structures such as point singularities ( pinwheels ) or line discontinuities ( fractures ) and that never exactly repeat .it is conceivable that this spatial complexity arises from geometric frustration due to a coordinated optimization of multiple feature maps in which not all inter - map interactions can be simultaneously satisfied . in many optimization models , however , the resulting map layout is spatially not complex or lacks some of the basic features such as topological defects . in other studies coordinated optimizationwas reported to preserve defects that would otherwise decay .an attempt to rigorously study the hypothesis that the structure of cortical maps is explained by an optimization process thus raises a number of questions : i ) what are the genuine ground states of candidate energy functionals and how can they be calculated with precision and rigor ?ii ) how do differences in candidate optimization principles impact on the predicted map structure and conversely what can be learned about an hypothetical underlying optimization principle from observations on map structure ?iii ) is there a way to analyze the coordinated organization of cortical maps predicted by optimization principles in general ?if theoretical neuroscience was able to answer these questions with greater confidence , the interpretation and explanation of visual cortical architecture could build on a more solid foundation than currently available .to start laying such a foundation , we examined how symmetry principles in general constrain the form of optimization models and developed a formalism for analyzing map optimization independent of the specific energy functional assumed .+ minima of a given energy functional can be found by gradient descent which is naturally represented by a dynamical system describing a formal time evolution of the maps .response properties in visual cortical maps are arranged in repetitive modules of a typical spatial length called hypercolumn .optimization models that reproduce this typical length scale are therefore effectively pattern forming systems with a so - called cellular or finite wavelength instability , see .in the theory of pattern formation , it is well understood that symmetries play a crucial role .some symmetries are widely considered biologically plausible for cortical maps , for instance the invariance under spatial translations and rotations or a global shift of orientation preference . 
in this paperwe argue that such symmetries and an approach that utilizes the analogy between map optimization and pattern forming systems can open up a novel and systematic approach to the coordinated optimization of visual cortical representations .+ a recent study found strong evidence for a common design in the functional architecture of orientation columns .three species , galagos , ferrets , and tree shrews , widely separated in evolution of modern mammals , share an apparently universal set of quantitative properties .the average pinwheel density as well as the spatial organization of pinwheels within orientation hypercolumns , expressed in the statistics of nearest neighbors as well as the local variability of the pinwheel densities in cortical subregions ranging from 1 to 30 hypercolumns , are found to be virtually identical in the analyzed species .however , these quantities are different from random maps .intriguingly , the average pinwheel density was found to be statistical indistinguishable from the mathematical constant up to a precision of 2% .such apparently universal laws can be reproduced in relatively simple self - organization models if long - range neuronal interactions are dominant . as pointed out by kaschube and coworkers ,these findings pose strong constraints on models of cortical functional architecture .many models exhibiting pinwheel annihilation or pinwheel crystallization were found to violate the experimentally observed layout rules . in was shown that the common design is correctly predicted in models that were based only on intrinsic op properties . alternatively , however , it is conceivable that they result from geometric frustration due to inter - map interactions and joint optimization . in the current studywe therefore in particular examined whether the coordinated optimization of the op map and another feature map can reproduce the quantitative laws defining the common design .+ the presentation of our results is organized as follows .first we introduce a formalism to model the coordinated optimization of complex and real valued scalar fields .complex valued fields can represent for instance orientation preference ( op ) or direction preference maps .real valued fields may represent for instance ocular dominance ( od ) , spatial frequency maps or on - off segregation .we construct several optimization models such that an independent optimization of each map in isolation results in a regular op stripe pattern and , depending on the relative representations of the two eyes , od patterns with a regular hexagonal or stripe layout .a model - free , symmetry - based analysis of potential optimization principles that couple the real and complex valued fields provides a comprehensive classification and parametrization of conceivable coordinated optimization models and identifies representative forms of coupling energies . for analytical treatment of the optimization problemwe adapt a perturbation method from pattern formation theory called weakly nonlinear analysis .this method is applicable to models in which the spatial pattern of columns branches off continuously from an unselective homogeneous state . it reduces the dimensionality of the system and leads to amplitude equations as an approximate description of the system near the symmetry breaking transition at which the homogeneous state becomes unstable .we identify a limit in which inter - map interactions that are formally always bidirectional become effectively unidirectional . 
in this limit, one can neglect the backreaction of the complex map on the layout of the co - evolving scalar feature map .we show how to treat low and higher order versions of inter - map coupling energies which enter at different order in the perturbative expansion .+ second we apply the derived formalism by calculating optima of two representative low order examples of coordinated optimization models and examine how they impact on the resulting map layout .two higher order optimization models are analyzed in text s1 . for concreteness andmotivated by recent topical interest , we illustrate the coordinated optimization of visual cortical maps for the widely studied example of a complex op map and a real feature map such as the od map .op maps are characterized by pinwheels , regions in which columns preferring all possible orientations are organized around a common center in a radial fashion . in particular , we address the problem of pinwheel stability in op maps and calculate the pinwheel densities predicted by different models .as shown previously , many theoretical models of visual cortical development and optimization fail to predict op maps possessing stable pinwheels .we show that in case of the low order energies , a strong inter - map coupling will typically lead to op map suppression , causing the orientation selectivity of all neurons to vanish .for all considered optimization models , we identify stationary solutions of the resulting dynamics and mathematically demonstrate their stability .we further calculate phase diagrams as a function of the inter - map coupling strength and the amount of overrepresentation of certain stimuli of the co - evolving scalar feature map .we show that the optimization of any of the analyzed coupling energies can lead to spatially relatively complex patterns. moreover , in case of op maps , these patterns are typically pinwheel - rich .the phase diagrams , however , differ for each considered coupling energy , in particular leading to coupling energy specific ground states .we therefore thoroughly analyze the spatial layout of energetic ground states and in particular their geometric inter - map relationships .we find that none of the examined models reproduces the experimentally observed pinwheel density and spatially aperiodic arrangements .our analysis identifies a seemingly general condition for interaction induced pinwheel - rich op optima namely a substantial bias in the response properties of the co - evolving scalar feature map . in case of the low order inter - map coupling energiesstrong inter - map coupling leads to a suppression of op selectivity .this suppressive effect can be avoided by restricting coupling strengths .one aim of this article is to test different optimization principles and potentially rule out some optimization principles .when comparing our results from different optimization principles to biological data such parameter tuning reduces the practicability . 
in this supporting information we complement our study using the high order inter - map coupling energies .we show that in this case a suppression of op selectivity can not occur .we derive coupled amplitude equations which , however , involve several mathematical assumptions .a systematic treatment as it is shown in the main article would imply that low order and higher order inter - map coupling energies are in general non - zero .low order energy terms would enter at third order in the expansion and higher order corrections could potentially alter the stability properties .in addition , higher order inter - map coupling energies can affect the stability of patterns .in the following , we assume all low order inter - map coupling energies to be zero and that contributions entering the amplitude equations at higher orders can be neglected .the obtained results are confirmed numerically in part ( ii ) of this study .we model the response properties of neuronal populations in the visual cortex by two - dimensional scalar order parameter fields which are either complex valued or real valued .a complex valued field can for instance describe op or direction preference of a neuron located at position .a real valued field can describe for instance od or the spatial frequency preference .although we introduce a model for the coordinated optimization of general real and complex valued order parameter fields we consider as the field of op and as the field of od throughout this article . in this case , the pattern of preferred stimulus orientation is obtained by the modulus is a measure of the selectivity at cortical location .+ op maps are characterized by so - called _ pinwheels _ , regions in which columns preferring all possible orientations are organized around a common center in a radial fashion .the centers of pinwheels are point discontinuities of the field where the mean orientation preference of nearby columns changes by 90 degrees .pinwheels can be characterized by a topological charge which indicates in particular whether the orientation preference increases clockwise or counterclockwise around the pinwheel center , where is a closed curve around a single pinwheel center at . since is a cyclic variable in the interval ] and ] , =-\frac{\delta e}{\delta o} ] a nonlinear operator .the energy functional of this dynamics is given by in fourier representation , is diagonal with the spectrum the spectrum exhibits a maximum at .for , all modes are damped since and only the homogeneous state is stable .this is no longer the case for when modes on the _ critical circle _ acquire a positive growth rate and now start to grow , resulting in patterns with a typical wavelength .thus , this model exhibits a supercritical bifurcation where the homogeneous state looses its stability and spatial modulations start to grow .+ the coupled dynamics we considered is of the form where , and is a constant . to account for the species differences in the wavelengths of the pattern we chose two typical wavelengths and .the dynamics of and is coupled by interaction terms which can be derived from a coupling energy . 
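the pinwheel and topological-charge definitions above can be illustrated numerically: on a discretised complex field z, pinwheels sit on plaquettes around which the phase arg z winds by plus or minus 2*pi, so the preferred orientation theta = arg(z)/2 winds by plus or minus pi and the charge is plus or minus 1/2. the following sketch computes these winding numbers on a grid; it is an illustration of the definition, not the analysis pipeline used in the study, and the function name and toy field are our own.

import numpy as np

def pinwheel_charges(z):
    # winding number of arg(z) around every unit plaquette (indexed by its
    # top-left corner); a nonzero value marks a pinwheel, and winding / 2 is
    # the topological charge of the orientation angle theta = arg(z) / 2.
    theta = np.angle(z)
    wrap = lambda a: (a + np.pi) % (2 * np.pi) - np.pi
    d1 = wrap(theta[:-1, 1:] - theta[:-1, :-1])
    d2 = wrap(theta[1:, 1:] - theta[:-1, 1:])
    d3 = wrap(theta[1:, :-1] - theta[1:, 1:])
    d4 = wrap(theta[:-1, :-1] - theta[1:, :-1])
    winding = np.rint((d1 + d2 + d3 + d4) / (2 * np.pi)).astype(int)
    ii, jj = np.nonzero(winding)
    return [(i, j, w / 2) for i, j, w in zip(ii, jj, winding[ii, jj])]

# toy usage: a single pinwheel at the centre of a small patch
n = np.arange(-16, 16) + 0.5
X, Y = np.meshgrid(n, n, indexing="ij")
z = (X + 1j * Y) / np.sqrt(X**2 + Y**2)
print(pinwheel_charges(z))   # one pinwheel near the centre, charge +/- 1/2
                             # (the sign is set by the loop orientation)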
in the uncoupled casethis dynamics leads to pinwheel free op stripe patterns .how many inter - map coupling energies exist ?using a phenomenological approach the inclusion and exclusion of various terms has to be strictly justified .we did this by symmetry considerations .the constant breaks the inversion symmetry of inputs from the ipsilateral ( ) or contralateral ( ) eye .such an inversion symmetry breaking could also arise from quadratic terms such as . in the methods sectionwe detail how a constant shift in the field can eliminate the constant term and generate such a quadratic term . including either a shift or a quadratic term thus already represents the most general case .the inter - map coupling energy was assumed to be invariant under this inversion .otherwise orientation selective neurons would , for an equal representation of the two eyes , develop different layouts to inputs from the left or the right eye .the primary visual cortex shows no anatomical indication that there are any prominent regions or directions parallel to the cortical layers . besides invariance under translations and rotations of both maps we required that the dynamics should be invariant under orientation shifts , that the assumption of shift symmetry is an idealization that uncouples the op map from the map of visual space .bressloff and coworkers have presented arguments that euclidean symmetry that couples spatial locations to orientation shift represents a more plausible symmetry for visual cortical dynamics , see also . the existence of orientation shift symmetry , however , is not an all or none question .recent evidence in fact indicates that shift symmetry is only weakly broken in the spatial organization of orientation maps .a general coupling energy term can be expressed by integral operators which can be written as a volterra series with an -th .order integral kernel .inversion symmetry and orientation shift symmetry require to be even and that the number of fields equals the number of fields .the lowest order term , mediating an interaction between the fields and is given by i.e. next , we rewrite eq .( [ eq : volterra4 ] ) as an integral over an energy density .we use the invariance under translations to introduce new coordinates this leads to the kernel may contain local and non - local contributions .map interactions were assumed to be local . for local interactionsthe integral kernel is independent of the locations .we expanded both fields in a taylor series around for a local energy density we could truncate this expansion at the first order in the derivatives .the energy density can thus be written due to rotation symmetry this energy density should be invariant under a simultaneous rotation of both fields . from all possible combinations of eq .( [ eq : t4fields_part3a ] ) only those are invariant in which the gradients of the fields appear as scalar products . the energy density can thus be written as where we suppress the argument .all combinations can also enter via their complex conjugate .the general expression for is therefore from all possible combinations we selected those which are invariant under orientation shifts and eye inversions .this leads to the energy densities with prefactor to do not mediate a coupling between od and op fields and can be absorbed into the single field energy functionals .the densities with prefactors and ( also with and ) are complex and can occur only together with ( ) to be real . 
these energy densities , however , are not bounded from below as their real and imaginary parts can have arbitrary positive and negative values .the lowest order terms which are real and positive definite are thus given by the next higher order energy terms are given by here the fields and enter with an unequal power .in the corresponding field equations these interaction terms enter either in the linear part or in the cubic nonlinearity .we will show in this article that interaction terms that enter in the linear part of the dynamics can lead to a suppression of the pattern and possibly to an instability of the pattern solution .therefore we considered also higher order interaction terms .+ these higher order terms contain combinations of terms in eq .( [ eq : t4 ] ) and are given by as we will show below examples of coupling energies form a representative set that can be expected to reproduce experimentally observed map relationships . for this choice of energy the corresponding interaction terms in the dynamics eq .( [ eq : dynamicsshcoupled_ref ] ) are given by +n_\beta[o , o , z]+n_\epsilon[o , o , o , o , z , z,\overline{z}]+n_\tau[o , o , o , o , z , z,\overline{z}]\nonumber \\ & = & -\alpha o^2z+\beta\nabla \left(a\nabla o\right ) + \epsilon \, 2\nabla \left(|a|^2a\nabla o\right ) -2\tau \,o^4|z|^2z , \nonumber \\ -\frac{\delta u}{\delta o}&= & \widetilde{n}_\alpha[o , z,\overline{z}]+\widetilde{n}_\beta[o , z,\overline{z}]+ \widetilde{n}_\epsilon[o , o , o , z , z,\overline{z},\overline{z}]+ \widetilde{n}_\tau[o , o , o , z , z,\overline{z},\overline{z}]\nonumber \\ & = & -\alpha o|z|^2+\beta \nabla \left ( \overline{a } \nabla z \right ) + \epsilon \ , 2 \nabla \left(|a|^2\overline{a } \nabla z \right)-2\tau\ , o^3|z|^4+c.c . \end{aligned}\ ] ] with and denoting the complex conjugate . in general , all coupling energies in , and can occur in the dynamics and we restrict to those energies which are expected to reproduce the observed geometric relationships between op and od maps .it is important to note that with this restriction we did not miss any essential parts of the model .when using weakly nonlinear analysis the general form of the near threshold dynamics is insensitive to the used type of coupling energy and we therefore expect similar results also for the remaining coupling energies .+ numerical simulations of the dynamics eq .( [ eq : dynamicsshcoupled_ref ] ) , see , with the coupling energy eq .( [ eq : energy_all ] ) and are shown in fig .[ fig : numerics ] . .color code of op map with zero contours of od map superimposed . * * * * * * and * * .initial conditions identical in * * - * * , . ]the initial conditions and final states are shown for different bias terms and inter - map coupling strengths .we observed that for a substantial contralateral bias and above a critical inter - map coupling pinwheels are preserved from random initial conditions or are generated if the initial condition is pinwheel free . 
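a simulation of the kind shown in fig. [fig:numerics] can be sketched in a few lines with a pseudospectral scheme. the code below is a minimal illustration, not the authors' code: it integrates two swift-hohenberg-type fields, a complex field z and a real field o, coupled through the low-order interaction terms -alpha*o^2*z and -alpha*o*|z|^2 quoted above and driven by a contralateral bias gamma. the grid size, time step, and all parameter values are assumptions chosen only for illustration, and the two critical wavenumbers are set equal for simplicity, whereas the study allows them to differ.

import numpy as np

N, L = 128, 40 * np.pi                  # grid points and domain size (assumed)
k = 2 * np.pi * np.fft.fftfreq(N, d=L / N)
k2 = k[:, None] ** 2 + k[None, :] ** 2

r_z, r_o, kc = 0.1, 0.1, 1.0            # control parameters and critical wavenumber
alpha, gamma = 0.3, 0.15                # inter-map coupling and contralateral bias
dt, steps = 0.1, 5000

Lz = r_z - (kc**2 - k2) ** 2            # linear swift-hohenberg spectra
Lo = r_o - (kc**2 - k2) ** 2

rng = np.random.default_rng(0)
z = 0.01 * (rng.standard_normal((N, N)) + 1j * rng.standard_normal((N, N)))
o = 0.01 * rng.standard_normal((N, N))

for _ in range(steps):
    Nz = -np.abs(z) ** 2 * z - alpha * o**2 * z          # cubic + coupling terms
    No = -o**3 - alpha * o * np.abs(z) ** 2 + gamma      # cubic + coupling + bias
    # semi-implicit step: linear part treated implicitly in fourier space
    z = np.fft.ifft2((np.fft.fft2(z) + dt * np.fft.fft2(Nz)) / (1 - dt * Lz))
    o = np.fft.ifft2((np.fft.fft2(o) + dt * np.fft.fft2(No)) / (1 - dt * Lo)).real

orientation = 0.5 * np.angle(z)         # preferred orientation map
od_borders = np.abs(o) < 0.05 * np.abs(o).max()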
without a contralateral biasthe final states were pinwheel free stripe solutions irrespective of the strength of the inter - map coupling .we studied eq .( [ eq : dynamicsshcoupled_ref ] ) with the low order inter - map coupling energies in eq .( [ eq : energy_all ] ) using weakly nonlinear analysis .we therefore rewrite eq .( [ eq : dynamicsshcoupled_ref ] ) as -n_{3,c}[z , o , o ] \nonumber \\ \partial_t \ , o(\mathbf{x},t ) & = & r_o o(\mathbf{x},t ) -\hat{l}_o^0 \ , o(\mathbf{x},t ) + n_{2,u}[o , o]-n_{3,u}[o , o , o ] -\tilde{n}_{3,c}[o , z,\overline{z}]\ , , \end{aligned}\ ] ] where we shifted both linear operators as , . the constant term in eq .( [ eq : dynamicsshcoupled_ref ] ) is replaced by a quadratic interaction term =\tilde{\gamma } o^2 ] . in this way those parts of the maps are emphasized from which the most significant information about the intersection angles can be obtained .these are the regions where the op gradient is high and thus every intersection angle receives a statistical weight according to .for an alternative method see .we studied how the emerging od map depends on the overall eye dominance .to this end we mapped the uncoupled od dynamics to a swift - hohenberg equation containing a quadratic interaction term instead of a constant bias .this allowed for the use of weakly nonlinear analysis to derive amplitude equations as an approximate description of the shifted od dynamics near the bifurcation point .we identified the stationary solutions and studied their stability properties .finally , we derived expressions for the fraction of contralateral eye dominance for the stable solutions . herewe describe how to map the swift - hohenberg equation to one with a quadratic interaction term . to eliminate the constant term we shift the field by a constant amount .this changes the linear and nonlinear terms as we define the new parameters and .this leads to the new dynamics the condition that the constant part is zero is thus given by for the real solution to eq .( [ eq : mapping_deltaeq_part3 ] ) is given by with .for small this formula is approximated as the uncoupled od dynamics we consider in the following is therefore given by this equation has been extensively studied in pattern formation literature .we studied eq .( [ eq : od_uncoupled ] ) using weakly nonlinear analysis .this method leads to amplitude equation as an approximate description of the full field dynamics eq .( [ eq : od_uncoupled ] ) near the bifurcation point .we summarize the derivation of the amplitude equations for the od dynamics which is of the form -n_3[o , o , o]\ , , \ ] ] with the linear operator . in this sectionwe use for simplicity the variables instead of .the derivation is performed for general quadratic and cubic nonlinearities but are specified later according to eq .( [ eq : dynamicsshcoupled_ref ] ) as =o^3 ] . for the calculations in the following , it is useful to separate from the linear operator therefore the largest eigenvalue of is zero .the amplitude of the field is assumed to be small near the onset and thus the nonlinearities are small .we therefore expand both the field and the control parameter in powers of a small expansion parameter as and the dynamics at the critical point becomes arbitrarily slow since the intrinsic timescale diverges at the critical point . 
to compensate we introduce a rescaled time scale as in order for all terms in eq .( [ eq : oduncoupled ] ) to be of the same order the quadratic interaction term must be small .we therefore rescale as .this preserves the nature of the bifurcation compared to the case . + we insert the expansion eq .( [ eq : expand_o ] ) and eq .( [ eq : expand_mu ] ) in the dynamics eq .( [ eq : oduncoupled ] ) and get \right ) \nonumber \\ & + & \ , \mu^3 \left(-\hat{l}^0 o_3+r_1 \left(o_2-\partial_t o_2 \right ) + r_2 \left(o_1-\partial_t o_1 \right)-n_3[o_1,o_1,o_1 ] \right)\nonumber \\ & \vdots & \end{aligned}\ ] ] we sort and collect all terms in order of their power in . the equation can be fulfilled for only if each of these terms is zero .we therefore solve the equation order by order . in the leading orderwe get the homogeneous equation thus is an element of the kernel of .the kernel contains linear combinations of modes with wavevector on the critical circle . at this levelany of such wavevectors is possible .we choose where the wavevectors are chosen to be equally spaced and the complex amplitudes .the homogeneous equation leaves the amplitudes undetermined .these amplitudes are fixed by the higher order equations .besides the leading order homogeneous equation we get inhomogeneous equations of the form to solve this inhomogeneous equation we first apply a solvability condition .we thus apply the _ fredholm alternative theorem _ to eq .( [ eq : inhom_o_part3 ] ) . since the operator is self - adjoint , the equation is solvable if and only if is orthogonal to the kernel of i.e. the orthogonality to the kernel can be expressed by a projection operator onto the kernel and the condition can be rewritten as .+ at second order we get applying the solvability condition eq .( [ eq : solvecond_o ] ) we see that this equation can be fulfilled only for . at third orderwe get -n_3[o_1,o_1,o_1 ] \ , .\ ] ] the parameter sets the scale in which is measured and we can set .we apply the solvability condition and get -\hat{p}_c n_3[o_1,o_1,o_1 ] \ , .\ ] ] we insert our ansatz eq .( [ eq : kernelod ] ) which leads to the amplitude equations at third order -\hat{p}_i \sum_{j , k } b_j b_k b_l e^{-\imath \vec{k}_i \vec{x } } n_3[e^{\imath \vec{k}_j \vec{x}},e^{\imath \vec{k}_k \vec{x}},e^{\imath \vec{k}_l \vec{x } } ] \ , , \ ] ] where is the projection operator onto the subspace of the kernel . picks out all combinations of the modes which have their wavevector equal to . 
in our casethe three active modes form a so called triad resonance .the quadratic coupling terms which are resonant to the mode are therefore given by + n_2[e^{-\imath \vec{k}_3 \vec{x}},e^{-\imath \vec{k}_2 \vec{x } } ] \right)\ , .\ ] ] resonant contributions from the cubic nonlinearity result from terms of the form .their coupling coefficients are given by + n_3[e^{\imath \vec{k}_i \vec{x}},e^{-\imath \vec{k}_j \vec{x}},e^{\imath \vec{k}_j \vec{x}}]+ n_3[e^{\imath \vec{k}_j \vec{x}},e^{\imath \vec{k}_i \vec{x}},e^{-\imath \vec{k}_j \vec{x}}]+\nonumber \\ & & n_3[e^{-\imath \vec{k}_j \vec{x}},e^{\imath \vec{k}_i \vec{x}},e^{\imath \vec{k}_j \vec{x}}]+ n_3[e^{\imath \vec{k}_j \vec{x}},e^{-\imath \vec{k}_j \vec{x}},e^{\imath \vec{k}_i \vec{x}}]+ n_3[e^{-\imath \vec{k}_j \vec{x}},e^{\imath \vec{k}_j \vec{x}},e^{\imath \vec{k}_i \vec{x}}]\ , , \end{aligned}\ ] ] and + n_3[e^{\imath \vec{k}_i \vec{x}},e^{-\imath \vec{k}_i \vec{x}},e^{\imath \vec{k}_i \vec{x}}]+ n_3[e^{-\imath \vec{k}_i \vec{x}},e^{\imath \vec{k}_i \vec{x}},e^{\imath \vec{k}_i \vec{x}}]\ , .\ ] ] when specifying the nonlinearities eq .( [ eq : dynamicsshcoupled_ref ] ) the coupling coefficients are given by . finally , the amplitude equations ( here in the shifted variables ( ) are given by where we scaled back to the original time variable .equations for and are given by cyclic permutation of the indices .the amplitude equations ( [ eq : ampl_od_uncoupled ] ) have three types of stationary solutions , namely od stripes with , hexagons with the resonance condition and .finally , there is a homogeneous solution with spatially constant eye dominance the spatial average of all solutions is .the course of , , and of is shown in fig .[ fig : constsolutions ] .( [ eq : stripes ] ) ( blue ) , eq . ( [ eq : hex ] ) ( red ) , and of eq . ( [ eq : delta ] ) ( green ) for .the solutions are plotted in solid lines within their stability ranges . * * od map of macaque monkey . adapted from . * * details of * * with stripe - like , patchy , and homogeneous layout . ]we decomposed the amplitude equations ( [ eq : ampl_od_uncoupled ] ) into the real and imaginary parts . from the imaginary partwe get the phase equation and equations for by cyclic permutation of the indices .the stationary phases are given by .the phase equation can be derived from the potential =-2\widetilde{\gamma } \cos ( \psi_1+\psi_2+\psi_3)$ ] .we see that the solution is stable for and the solution is stable for .+ we calculate the stability borders of the od stripe , hexagon , and constant solution in the uncoupled case .this treatment follows . in case of stripesthe three modes of the amplitude equations are perturbed as assuming small perturbations , and .this leads to the linear equations with the stability matrix the corresponding eigenvalues are given by this leads to the two borders for the stripe stability in terms of the original variables the borders are given by ( ) to derive the stability borders for the hexagon solution we perturb the amplitudes as the stability matrix is then given by and the corresponding eigenvalues are given by the stability borders for the hexagon solution are given by in terms of the original variables we finally get the phase diagram of this model is depicted in fig .[ fig : constterm]**. ) . 
dashed lines : stability border of hexagon solutions , solid line : stability border of stripe solution , gray regions : stability region of constant solution * * percentage of neurons dominated by the contralateral eye plotted for the three stationary solutions .circles : numerically obtained values , solid lines : and . ] it shows the stability borders , and for the three solutions obtained by linear stability analysis . without a bias term the od map is either constant , for , or has a stripe layout , for . for positive and increasing bias term there are two transition regions , first a transition region from stripes to hexagons and second a transition region from hexagons to the constant solution .+ the spatial layout of the od hexagons consists of hexagonal arrays of ipsilateral eye dominance blobs in a sea of contralateral eye dominance , see fig .[ fig : constterm]**. to compare the obtained solutions with physiological od maps we quantified the fraction of neurons selective to the contralateral eye inputs . for stripe and hexagon solutions we thus calculatedthe fraction of contralateral eye dominated territory and . in case of stripesthis is a purely one - dimensional problem .the zeros of the field are given by with the solution as the field has a periodicity of the area fraction is given by in case of hexagons we observe that the territory of negative values is approximately a circular area .we obtain the fraction of negative values by relating this area to the area of the whole hexagonal lattice . in case of hexagons the field is given by as an approximation we project the field onto the -axis and choose for simplicity .the field has its maximum at the origin .the projection leads to the zeros are located at the circular area of positive values is now given by .the periodicity of the hexagonal pattern is given by .this leads to .the area of the hexagon is therefore given by .the contra fraction is finally given by the course of the fractions and is shown in fig .[ fig : constterm]**. at the border , where hexagons become stable . at the border , where hexagons loose stability .both quantities are independent of .we confirmed our results by direct numerical calculation of the fraction of positive pixel values .deviations from the result eq .( [ eq : chex ] ) are small .for the zeros of eq .( [ eq : zerosofhex ] ) are not that well approximated with a circular shape and the projection described above leads to the small deviations which decrease with increasing bias . 
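The direct numerical check of the contralateral fraction mentioned above amounts to counting positive pixels in synthesized stripe and hexagon planforms. In the sketch below the amplitude and the constant shift are illustrative, and positive field values are taken to mark contralateral territory, as in the text.

```python
import numpy as np

# Direct numerical estimate of the contralateral fraction (fraction of positive
# pixels) for stripe and hexagon OD planforms.  Amplitude A and shift delta are
# illustrative values, not fits to the analytical expressions.
N = 512
x = np.linspace(0, 8 * 2 * np.pi, N, endpoint=False)
X, Y = np.meshgrid(x, x, indexing="ij")
A, delta = 1.0, 0.3

# stripes: o = 2*A*cos(x) + delta
o_stripes = 2 * A * np.cos(X) + delta

# hexagons: three modes at 120 degrees (wavevectors summing to zero) plus the shift
kvecs = [np.array([1.0, 0.0]),
         np.array([-0.5, np.sqrt(3) / 2]),
         np.array([-0.5, -np.sqrt(3) / 2])]
o_hex = delta + sum(2 * A * np.cos(k[0] * X + k[1] * Y) for k in kvecs)

for name, o in [("stripes", o_stripes), ("hexagons", o_hex)]:
    print(name, "contra fraction:", np.mean(o > 0))
```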
*text s1 : supporting information for + coordinated optimization of visual cortical maps + ( i ) symmetry - based analysis *we studied the coupled swift - hohenberg equations -n_{7,c}[z , z,\overline{z},o , o , o , o ] \nonumber \\\partial_t \ , o(\mathbf{x},t ) & = & r_o o(\mathbf{x},t ) -\hat{l}_o^0 \ , o(\mathbf{x},t ) + n_{2,u}[o , o]-n_{3,u}[o , o , o ] -\tilde{n}_{7,c}[o , o , o , z , z,\overline{z},\overline{z}]\ , , \end{aligned}\ ] ] with the higher order inter - map coupling energies using weakly nonlinear analysis .we study eq .( [ eq : fieldeq_foramplderivs1 ] ) close to the pattern forming bifurcation where and are small .we therefore expand both control parameters in powers of the small expansion parameter close to the bifurcation the fileds are small and thus nonlinearities are weak .we therefore expand both fields as we further introduced a common slow timescale and insert the expansions in eq .( [ eq : fieldeq_foramplderivs1 ] ) and get \right ) \nonumber \\ & & \vdots \nonumber \\ & & + \mu^7 \left ( -\hat{l}^0 z_7 + r_{z2 } z_5 + r_{z4}z3+r_{z6}z_1 + \dots + n_{3,u}[z_5,z_1,\overline{z}_1 ] \right ) \nonumber\\ & & + \mu^7 \left(- n_{7,c}[z_1,z_1,\overline{z}_1,o_1,o_1,o_1,o_1]\right ) \nonumber \\ & & \vdots\end{aligned}\ ] ] and \right)\nonumber\\ & & + \mu^3\left(- r_{z2 } \partial_t o_1+r_{o2 } o_1+r_{o1 } o_2- r_{z1 } \partial_to_2-\hat{l}^0o_3-\tilde{n}_{3,u}[o_1,o_1,o_1]\right ) \nonumber \\ & & \vdots \nonumber \\ & & + \mu^7 \left(-\hat{l}^0 o_7+r_{o2 } o_5+r_{o4 } o_3+r_{o6 } o_1+\dots -\tilde{n}_{3,u}[o_5,o_1,o_1]-\tilde{n}_{2,u}[o_1,o_5]-\dots \right)\nonumber \\ & & + \mu^7\left(- \tilde{n}_{7,c}[o_1,o_1,o_1,z_1,z_1,\overline{z}_1,\overline{z}_1]\right ) \nonumber \\ & & \vdots\end{aligned}\ ] ] we consider amplitude equations up to seventh order as this is the order where the nonlinearity of the higher order coupling energy enters first .( [ eq : expansion_z_coupled ] ) and eq .( [ eq : expansion_o_coupled ] ) to be fulfilled each individual order in has to be zero . at linear order in get the two homogeneous equations thus and are elements of the kernel of and .both kernels contain linear combinations of modes with a wavevector on the critical circle i.e. with the complex amplitudes , and , . in view of the hexagonal or stripe layout of the od pattern shown in fig . [ fig : numerics ] , is an appropriate choice . since in catvisual cortex the typical wavelength for od and op maps are approximately the same i.e. the fourier components of the emerging pattern are located on a common critical circle . to account for species differences we also analyzed models with detuned op and od wavelengths in part ( ii ) of this study .+ at second order in we get as and are elements of the kernel . 
at third order , when applying the solvability condition ( see methods ) , we get \nonumber \\ r_{z2 } \partial_t \ , o_1&= & r_{o2 } o_1-\sqrt{r_{o2 } } \ , \hat{p}_c \tilde{n}_{2,u}[o_1,o_1]- \hat{p}_c \tilde{n}_{3,u}[o_1,o_1,o_1 ] \ , .\end{aligned}\ ] ] we insert the leading order fields eq .( [ eq_z1o1 ] ) and obtain the amplitude equations these uncoupled amplitude equations obtain corrections at higher order .there are fifth order , seventh order and even higher order corrections to the uncoupled amplitude equations .in addition , at seventh order enters the nonlinearity of the higher order inter - map coupling energies .the amplitude equations up to seventh order are thus derived from \nonumber \\ r_{z2 } \partial_t \ , z_3 & = & r_{z2 } z_3- \dots - \hat{p } n_{3,u}[z_1,z_1,\overline{z_3 } ] \\ r_{z2 } \partial_t \ , z_5 & = & r_{z2 } z_5-\dots - \hat{p}_c n_{3,u}[z_3,z_1,\overline{z_3}]- \hat{p}_c n_{7,c}[z_1,z_1,\overline{z}_1,o_1,o_1,o_1,o_1 ] \, , \nonumber\end{aligned}\ ] ] and corresponding equations for the fields , , and .the field is given in eq .( [ eq_z1o1 ] ) and its amplitudes and are determined at third order .the field contains contributions from modes off the critical circle , and on the critical circle i.e. .its amplitude are determined at fifth order .the field also contains contributions from modes off the critical circle and on the critical circle i.e. .its amplitude are determined at seventh order .this leads to a series of amplitude equations which are solved order by order .we set and and rescale to the fast time .this leads to we can combine the amplitude equations up to seventh order by introducing the amplitudes and .this leads to the amplitude equations for simplicity we have written only the simplest inter - map coupling terms .depending on the configuration of active modes additional contributions may enter the amplitude equations .in addition , for the product - type coupling energy , there are coupling terms which contain the constant , see methods . in case of the inter - map coupling terms in dynamics of the modes are small . in this limitthe dynamics of the modes decouples from the modes and we can use the uncoupled od dynamics , see methods . in the following ,we use the effective inter - map coupling strength ( and ) .in this article we demonstrated that the low order coupling terms can lead to a complete suppression of op selectivity i.e. vanishing magnitude of the order parameter .as the coupling terms are effectively linear they not only influence pattern selection but also whether there is a pattern at all .this is in general not the case for higher order coupling energies using the amplitude equations eq .( [ eq : ampleq_coupled_seventhorder ] ) . in this casethe coupling is an effective cubic interaction term and complete selectivity suppression is impossible .moreover , as in the low - order energy case , we could identify the limit in which the backreaction onto the od map formally becomes negligible .the potential is of the form where denotes the kronecker delta and the uncoupled contributions amplitude equations can be derived from the potential by .we have not written terms involving the modes or .the complete amplitude equations involving all modes and the corresponding coupling coefficients are given in text s2 .as for the low order coupling energies terms involving the constant depend only on the coupling coefficient of the product - type energy . 
in the following we specify the amplitude equations for negligible backreaction where , , or .+ first , we studied the higher order product - type inter - map coupling energy .as for the lower order version of this coupling energy the shift explicitly enters the amplitude equations resulting in a rather complex parameter dependence , see eq .( 68 ) in the methods section .in the case of od stripes the amplitude equations of op modes read where denotes the kronecker delta and the equation for the mode is given by interchanging the modes and in eq .( [ eq : ampl_tau_odstripes ] ) .the equations for the modes are given by interchanging the modes and and interchanging the modes and .+ in this case , at low inter - map coupling the op stripes given by with run parallel to the od stripes .their stationary amplitudes are given by with , , , .the parameter dependence of these stripe solutions is shown in fig .[ fig : amplitudes_tau]**. + at large inter - map coupling the attractor states of the op map consist of a stripe pattern containing only two preferred orientations , namely and .the zero contour lines of the od map are along the maximum amplitude of orientation preference minimizing the energy term .+ in addition there are rhombic solutions which exist also in the uncoupled case , see fig .[ fig : amplitudes_tau]**. however , these rhombic solutions are energetically not favored compared to stripe solutions , see fig .[ fig : amplitudes_tau]**. the inclusion of the inter - map coupling makes these rhombic solution even more stripe - like .+ in case of a od constant solution the amplitude equations read with , and .inter - map coupling thus leads to a renormalization of the uncoupled interaction terms .stationary solutions are stripes with the amplitude and rhombic solutions with the stationary phases and the stationary amplitudes in the case of od hexagons we identify , in addition to stripe - like and rhombic solutions , uniform solutions . when solving the amplitude equations numerically we have seen that the phase relations vary with the inter - map coupling strength for non - uniform solutions . but for the uniform solution the phase relations are independent of the inter - map coupling strength .we use the ansatz for uniform solutions where is the kronecker delta and a constant parameter .this leads to the stationarity condition \sin \delta = 0\ , .\ ] ] four types of stationary solutions exist namely the , which we already observed in case of the low order energies , and the solutions which depends on and and thus on the bias .the course of eq .( [ eq : unif_d_gamma ] ) as a function of is shown in fig .[ fig : pd_tau]**. stationary amplitudes for these solutions are given by we study the stability properties of op stripe - like , rhombic and uniform solutions using linear stability analysis .the eigenvalues of the stability matrix , see text s2 , are calculated numerically .linear stability analysis shows that for the solution is unstable for all bias values .the stability region of the solution and the solution eq .( [ eq : unif_d_gamma ] ) is bias dependent . the bias dependent solution eq .( [ eq : unif_d_gamma ] ) is stable for and for which , see fig .[ fig : pd_tau]**. for larger bias only the uniform solution is stable . the parameter dependence of op solutions when interacting with od stripes is shown in fig .[ fig : amplitudes_tau]**. 
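Stationary amplitudes such as those entering fig. [fig:amplitudes_tau] are obtained numerically from the stationarity condition. A minimal sketch of this step is given below; the right-hand side is an illustrative stand-in with the phases already fixed at their stationary values, and the toy coupling term only indicates where the inter-map contribution would enter.

```python
import numpy as np
from scipy.optimize import fsolve

# Sketch: solve the stationarity condition of coupled amplitude equations
# numerically, starting from an initial guess close to the expected solution.
# The right-hand side is an illustrative stand-in, not the equations of the text.
def rhs(A, r=0.1, g=0.2, eps=0.05):
    A = np.asarray(A, float)
    out = np.empty(3)
    for i in range(3):
        j, k = (i + 1) % 3, (i + 2) % 3
        out[i] = (r * A[i] + 2 * g * A[j] * A[k]
                  - A[i] * (A[i]**2 + 2 * (A[j]**2 + A[k]**2))
                  - eps * A[i])            # toy inter-map coupling contribution
    return out

A_stat = fsolve(rhs, x0=[0.3, 0.3, 0.3])   # guess near the uniform solution
print("stationary amplitudes:", A_stat)
```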
similar to the low order variant of this coupling energy the amplitude of the stripes pattern is suppressed while the amplitude of the opposite mode grows . finally both amplitudes collapse , leading to an orientation scotoma solution .in contrast to the low order variant this stripe pattern is stable for arbitrary large inter - map coupling . in case of oprhombic solutions inter - map coupling transforms this solution by reducing the amplitudes while increasing the amplitudes . without od biasthis solution is then transformed into the orientation scotoma stripe pattern , similar to the low order variant of this energy .in contrast to the low order energy , for non - zero bias the amplitudes and stay small but non - zero .+ the parameter dependence of op solutions when interacting with od hexagons is shown in fig .[ fig : amplitudes_tau]**. for a small od bias ( ) op rhombic solutions decay into op stripe - like patterns .these stripe - like patterns stay stable also for large - inter map coupling . in case of a larger od bias ( ) , both the op stripe and the op rhombic solutions decay into the uniform pwc solution .thus for small bias there is a bistability between stripe - like and uniform pwc solutions while for larger od bias the uniform pwc solution is the only stable solution .the potential of op stripe and op rhombic solutions is shown in fig .[ fig : amplitudes_tau]**. in the uncoupled case as well as for small inter - map coupling strength op stripe solutions are for all bias values the energetic ground state .for large inter - map coupling and a small bias ( ) rhombic solutions are unstable and the stripe - like solutions are energetically preferred compared to pwc solutions . for larger bias ,however , pwc solutions are the only stable solutions for large inter - map coupling . .solid ( dashed ) lines : stable ( unstable ) solutions . ** od stripes , ( blue ) , ( green ) , ( red ) . * * od hexagons , ( blue ) , ( red ) . * * transition from op stripe solutions , * * transition from op rhombic solutions .* * potential , eq .( [ eq : potential_septic_part3 ] ) , of op stripes and op rhombs interacting with od stripes . * * potential , eq .( [ eq : potential_septic_part3 ] ) , of op stripes , op rhombs , and hpwc interacting with od hexagons .arrows indicate corresponding lines in the phase diagram , fig .( [ fig : pd_tau ] ) . ]the stability properties of all stationary solutions are summarized in the phase diagram fig .[ fig : pd_tau ] ., . vertical black lines : stability range of od stripes , hexagons , and constant solutions .magenta ( orange ) line : stability border of orientation scotoma stripes .green solid line : stability border of rhombic solutions .red solid line : stability border of pwc solutions , red dashed line : , * * course of eq .( [ eq : unif_d_gamma ] ) , dashed line : .* * stability border between eq .( [ eq : unif_d_gamma ] ) solution and the solution as a function of ( vertical red line in * * ) . ] compared to the gradient - type interaction energy we can not scale out the dependence on .the phase diagram is thus plotted for .we rescale the inter - map coupling strength as where is the stationary amplitude of the od hexagons . in the regime of stable od stripesthere is a transition from op stripes towards the orientation scotoma stripe solution . in the regime of stable od hexagonsthere is a transition from op stripes towards pwc solutions ( red line ) .the stability border of pwc solutions is strongly od bias dependent and has a peak at . 
for small od bias the uniform solution eq .( [ eq : unif_d_gamma ] ) is stable . with increasing biasthere is a smooth transition of this solution until at the uniform solution becomes stable . in fig .[ fig : pd_tau ] * * the stability border between the two types of uniform solutions is plotted as a function of .we observe that there is only a weak dependence on the control parameter and .figure [ fig : unif_gamma_depend ] illustrates the uniform solutions eq .( [ eq : unif_d_gamma ] ) for different values of the od bias . ) * * , * * , * * , * * . op map , superimposed are the od borders ( gray ) , 90% ipsilateral eye dominance ( black ) , and 90% contralateral eye dominance ( white ) , .dashed lines mark the unit cell of the regular pattern . * * distribution of orientation preference . * * intersection angles between iso - orientation lines and od borders . ] for small bias , the op pattern has six pinwheels per unit cell .two of them are located at od maxima while one is located at an od minimum .the remaining three pinwheels are located near the od border . with increasing bias ,these three pinwheels are pushed further away from the od border , being attracted to the od maxima . with further increasing bias three shifted pinwheelsmerge with the one at the od maximum building a single charge 1 pinwheel centered on a contralateral peak .the remaining two pinwheels are located at an ispi and contra peak , respectively .note , compared to the braitenberg pwc of the uniform solution the charge 1 pinwheel here is located at the contralateral od peak . finally , the charge 1 pinwheels split up again into four pinwheels . with increasing biasthe solution more and more resembles the ipsi - center pwc ( solution ) which is stable also in the lower order version of the coupling energy . finally , at the ipsi - center pwc becomes stable and fixed for . the distribution of preferred orientations for different values of the bias is shown in fig .[ fig : unif_gamma_depend ] * * , reflecting the symmetry of each pattern . the distribution of intersection angles is shown in fig .[ fig : unif_gamma_depend]**. remarkably , all solutions show a tendency towards perpendicular intersection angles .this tendency is more pronounced with increasing od bias . at about parallel intersection anglesare completely absent and at there are exclusively perpendicular intersection angles .finally , we examine the higher order version of the gradient - type inter - map coupling .the interaction terms are independent of the od shift . in this casethe coupling strength can be rescaled as and is therefore independent of the bias .the bias in this case only determines the stability of od stripes , hexagons or the constant solution . as for its lower order pendanta coupling to od stripes is relatively easy to analyze .the energetic ground state corresponds to op stripes with the direction perpendicular to the od stripes for which .in addition , there are rhombic solutions with the stationary amplitudes . in casethe od map is a constant , , the gradient - type inter - map coupling leaves the op unaffected . 
as for its lower order pendantthe stationary states are therefore op stripes running in an arbitrary direction and the uncoupled rhombic solutions .+ in case of od hexagons we identified three types of non - uniform solutions .besides stripe - like solutions of with one dominant mode we find rpwcs with and distorted rpwcs with .note , that distorted rpwcs are not stable in case of the product - type coupling energy or the analyzed low - order coupling energies . for these non - uniform solutionsthe stationary phases are inter - map coupling strength dependent .we therefore calculate the stationary phases and amplitudes numerically using a newton method and initial conditions close to these solutions .+ in case of od hexagons there are further uniform solutions and . the imaginary part of the amplitude equations , see text s1 , leads to equations for the phases . the ansatz eq .( [ eq : uniformsol_part3 ] ) leads to the stationarity condition the solutions are , and where the stationary amplitude are given by we calculated the stability properties of all stationary solutions by linear stability analysis considering perturbations of the amplitudes , and of the phases , .this leads to a perturbation matrix . in general amplitude and phase perturbationsdo not decouple .we therefore calculate the eigenvalues of the perturbation matrix numerically .it turns out that for this type of coupling energy only the uniform solutions with are stable while the and solutions are unstable in general . for increasing inter - map coupling strength the amplitudes of the op stripe and op rhombic solutionsare shown in fig .[ fig : bif_eps]**. , * * solid ( dashed ) lines : stable ( unstable ) solutions .blue : rpwc , green : distorted rpwc , red : hpwc .black lines : stripe - like solutions . ** potential , eq .( [ eq : potential_septic_part3 ] ) , of op stripes ( black ) , op rhombs ( blue ) , and hpwc solutions ( red ) .arrows indicate corresponding lines in the phase diagram , fig .( s[fig : pdcoupled ] ) . ] in case of stable od hexagons there is a transition from rpwc ( blue ) towards distorted rpwc ( green ) . the distorted rpwcs then decay into the hpwc ( red ) .in case of op stripes ( black dashed lines ) inter - map coupling leads to a slight suppression of the dominant mode and a growth of the remaining modes .this growth saturates at small amplitudes and thus the solution stays stripe - like .this stripe - like solution remains stable for arbitrary large inter - map coupling .therefore there is a bistability between hpwc solutions and stripe - like solutions for large inter - map coupling .+ the stability borders for the rpwc and distorted rpwc solutions were obtained by calculating their bifurcation diagram numerically from the amplitude equations , see text s1 .with increasing map coupling we observe a transition from a rpwc towards a distorted rpwc at ( blue dashed line in fig . [ fig : pdcoupled ] * * ) , see also fig .[ fig : pwwandering]**. the distorted rpwc loses its stability at ( blue solid line in fig .[ fig : pdcoupled ] * * ) and from thereon all amplitudes are equal corresponding to the hpwc .there is a bistability between hpwc , rpwc , and stripe - like solutions . to calculate the inter - map coupling needed for the hexagonal solution to become the energetic ground state we calculated the potential eq .( [ eq : potential_septic_part3 ] ) for the three solutions . 
in case of the uniform solution eq .( [ eq : uniformsol_part3 ] ) the potential is given by the potential in case of the rhombic and stripe - like solutions was obtained by numerically solving the amplitude equations using newtons method and initial conditions close to these solutions . above the hpwc is energetically preferred compared to stripe - like solutions ( red dashed line in in fig . [fig : pdcoupled ] ) and thus corresponds to the energetic ground state for large inter - map coupling , see fig .( [ fig : bif_eps])**. we calculated the phase diagram of the coupled system in the limit , shown in fig . [fig : pdcoupled ] . , for .vertical lines : stability range of od hexagons , green line : transition from rpwc to distorted rpwc , red line : stability border of hpwc , blue line : stability border of distorted rpwc . above orange line : hpwc corresponds to ground state of energy . ]the phase diagram contains the stability borders of the uncoupled od solutions .they correspond to vertical lines , as they are independent of the inter - map coupling in the limit . at hexagons become stable .stripe solutions become unstable at . at homogeneous solution becomes stable while at hexagons loose their stability . in units the borders vary slightly with , see fig .[ fig : constterm ] , and are drawn here for .we rescale the inter - map coupling strength as where is the stationary amplitude of the od hexagons .the stability borders of op solutions are then horizontal lines . for or for pinwheel free orientation stripes are dynamically selected . for and above a critical effective coupling strength hpwc solutionsare stable and become the energetic ground state of eq .( [ eq : potential_septic_part3 ] ) above . below ,rpwc solutions are stable leading to a bistability region between rpwc and hpwc solutions .we find in this region that rhombic solutions transform into distorted rhombic solutions above an effective coupling strength of .first , we studied the spatial layout of the rhombic solutions which is illustrated in fig .[ fig : rpwc ] . .* * selectivity , white : high selectivity , black : low selectivity . ]the rpwc solutions are symmetric under rotation by 180 degree .the rhombic solution has 4 pinwheels per unit cell and its pinwheel density is thus .one may expect that the energy term eq .( [ eq : energy_ho ] ) favors pinwheels to co - localize with od extrema . in case of the rhombic layoutthere is only one pinwheel at an od extremum while the other three pinwheels are located at od saddle - points which are also energetically favorable positions with respect to .the orientation selectivity for the rpwc is shown in fig .[ fig : rpwc]**. the pattern of selectivity is arranged in small patches of highly selective regions .+ the hexagonal layout of the two stable uniform solutions is shown in fig .[ fig : hpwc ] . .* * , * * . * * distribution of orientation preference . ** op map with superimposed od map for three different values of the od bias . * * selectivity , white : high selectivity , black : low selectivity . * * distribution of intersection angles . ]the solutions have six pinwheels per unit cell .their pinwheel density is therefore .three pinwheels of the same topological charge are located at the extrema of the od map .two of these are located at the od maximum while one is located at the od minimum .the remaining three pinwheels are not at an od extremum but near the od border .the distance to the od border depends on the od bias , see fig .[ fig : hpwc]**. 
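Pinwheel numbers and positions like those quoted here, and their bias dependence discussed next, can be extracted from any planform by locating the phase singularities of the complex field z. A minimal sketch based on the winding of the phase around each grid plaquette follows; the field below is an illustrative superposition of critical-circle modes rather than one of the crystal solutions.

```python
import numpy as np

# Locate pinwheels (phase singularities of the complex OP field z) by computing
# the winding of the phase around each grid plaquette.  The field here is an
# illustrative random superposition of modes on the critical circle.
N = 256
x = np.linspace(0, 4 * 2 * np.pi, N, endpoint=False)
X, Y = np.meshgrid(x, x, indexing="ij")

rng = np.random.default_rng(2)
z = np.zeros((N, N), dtype=complex)
for j in range(3):
    kx, ky = np.cos(j * np.pi / 3), np.sin(j * np.pi / 3)
    amp = rng.standard_normal() + 1j * rng.standard_normal()
    z += amp * np.exp(1j * (kx * X + ky * Y))

phi = np.angle(z)
def dphi(a, b):                            # phase difference wrapped to (-pi, pi]
    return np.angle(np.exp(1j * (b - a)))

# winding number around each plaquette: sum of wrapped differences / (2*pi)
w = (dphi(phi[:-1, :-1], phi[:-1, 1:]) + dphi(phi[:-1, 1:], phi[1:, 1:])
     + dphi(phi[1:, 1:], phi[1:, :-1]) + dphi(phi[1:, :-1], phi[:-1, :-1]))
charges = np.round(w / (2 * np.pi)).astype(int)
print("pinwheels found:", np.count_nonzero(charges),
      "net topological charge:", charges.sum())
```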
for a small bias ( ) these three pinwheels are close to the od borders and with increasing bias the od border moves away from the pinwheels .the pinwheel in the center of the op hexagon is at the contralateral od peak . because these pinwheels organize most of the map while the others essentially only match one op hexagon to its neighbors we refer to this pinwheel crystal as the _ contra - center pinwheel crystal_. note , that some pinwheels are located at the vertices of the hexagonal pattern .pinwheels located between these vertices ( on the edge ) are not in the middle of this edge. solutions with are therefore not symmetric under a rotation by 60 degree but symmetric under a rotation by 120 degree .therefore the solution can not be transformed into the solution by a rotation of the od and op pattern by 180 degrees .this symmetry is also reflected by the distribution of preferred orientations , see fig .[ fig : hpwc]**. six orientations are slightly overrepresented .compared to the ipsi - center pwc , which have a symmetry , this distribution illustrates the symmetry of the pattern .the distribution of intersection angles is continuous , see fig .[ fig : hpwc]**. although there is a fixed uniform solution with varying od bias the distribution of intersection angles changes .the reason for this is the bias dependent change in the od borders , see fig . [fig : hpwc]**. for all bias values there is a tendency towards perpendicular intersection angles , although for low od bias there is an additional small peak at parallel intersection angles .the orientation selectivity for the hpwc is shown in fig .[ fig : hpwc]**. the pattern shows hexagonal bands of high selectivity .+ finally , we study changes in pinwheel positions during the transition from a rpwc towards a hpwc i.e. with increasing inter - map coupling strength . in case of the higher order gradient - type coupling energythere is a transition towards a contra - center pwc , see fig .[ fig : pwwandering]**.. numbers label pinwheels within the unit cell ( dashed lines ) .blue ( green , red ) points : pinwheel positions for rpwc ( distorted rpwc , hpwc ) solutions . ** , using stationary amplitudes from fig .( [ fig : bif_eps])(a ) .positions of distorted rpwcs move continuously ( pinwheel 1,3,4 ) . * * , using stationary amplitudes from fig .( [ fig : amplitudes_tau])**. positions of rpwcs move continuously ( pinwheel 5,6 ) . ] in the regime where the distorted rpwc is stable , three of the four pinwheels of the rpwc are moving either from an od saddle - point to a position near an od border ( pinwheel 1 and 3 ) or from an od saddle - point to an od extremum ( pinwheel 4 ) .one pinwheel ( pinwheel 2 ) is fixed in space . at the transition to the hpwc two additional pinwheelsare created , one near an od border ( pinwheel 5 ) and one at an od extremum ( pinwheel 6 ) .we compare the inter - map coupling strength dependent pinwheel positions of the gradient - type coupling energy with those of the product type coupling energy , see fig .( [ fig : pwwandering])**. in this case three ( pinwheel 2,3,4 ) of the four rpwc pinwheels have a inter - map coupling strength independent position . the remaining pinwheel ( pinwheel 1 ) with increasing inter - map coupling strength splits up into three pinwheels . 
while one of these three pinwheels ( pinwheel 1 ) is fixed in space the remaining two pinwheels ( pinwheel 5,6 ) move towards the extrema of od .thus for large inter - map coupling , where hpwc solutions are stable , all six pinwheels are located at od extrema .we derived amplitude equations and analyzed ground states of the higher order inter - map coupling energies .we calculated local and global optima and derived corresponding phase diagrams . a main difference between phase diagrams for low order and high order coupling energiesconsists in the collapse of orientation selectivity above a critical coupling strength that occurs only in the low order models .in contrast , for the high order versions , orientation selectivity is preserved for arbitrarily strong inter - map coupling . in order to neglect the backreaction on the dynamics of the modes we assumed .our results , however , show that for the stability of pinwheel crystals a finite amplitude is necessary . a decrease in not be compensated by another parameter ( as it would be in case of the low order inter - map coupling energies ) . for a finite higher order corrections to the amplitude equations than those presented here can thus become significant .such terms are we neglected in the present treatment . in part ( ii ) of this study we numerically confirm our main results for the higher order inter - map coupling energies . + from a practical point of view , the analyzed phase diagrams and pattern properties indicate that the higher order gradient - type coupling energy is the simplest and most convenient choice for constructing models that reflect the correlations of map layouts in the visual cortex . for this coupling ,intersection angle statistics are reproduced well , pinwheels can be stabilized , and pattern collapse can not occur . * text s2 : supporting information for + coordinated optimization of visual cortical maps + ( i ) symmetry - based analysis *here , we list the amplitude equations for the op dynamics in case of the high order inter - map coupling energies and . where the indices are considered to be cyclic i.e. . all sums are considered to run from 1 to 3 . in the article ,these amplitude equations are specified in case of od stripes , eq .( 109 ) , od hexagons , eq . ( 110 ) , or a constant od solution , eq ( 111 ) .in the following , we list the non - zero elements of the coupling coefficients . note , that coupling coefficients involving the constant shift only occur in case of the product - type inter - map coupling energy . * text s3 : supporting information for + coordinated optimization of visual cortical maps + ( i ) symmetry - based analysis *here , we state the stability matrices used in the linear stability analysis of the coupled amplitude equations . the stability matrix is defined by + for the uniform solutionswe separate the uncoupled contributions from the inter - map coupling contributions i.e. .the uncoupled contributions are given by where denotes the identity matrix and .+ the coupling part in case of the low order product - type inter - map coupling energy is given by . + the coupling part in case of the low order gradient - type inter - map coupling energy is given by . 
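In practice the eigenvalues of these stability matrices are evaluated numerically. The sketch below builds the matrix by finite differences of the amplitude-equation right-hand side around a stationary state; the right-hand side used here is an illustrative stand-in for the full coupled equations of Text S2.

```python
import numpy as np

# Sketch of the linear stability analysis of Text S3: build the stability matrix
# M_ij = d(dA_i/dt)/dA_j by finite differences around a stationary state and
# inspect its eigenvalues.  The right-hand side is an illustrative stand-in.
r, g = 0.1, 0.2

def rhs(A):
    A = np.asarray(A, float)
    out = np.empty(3)
    for i in range(3):
        j, k = (i + 1) % 3, (i + 2) % 3
        out[i] = r * A[i] + 2 * g * A[j] * A[k] - A[i] * (A[i]**2 + 2 * (A[j]**2 + A[k]**2))
    return out

def stability_matrix(f, A0, h=1e-6):
    A0 = np.asarray(A0, float)
    M = np.empty((len(A0), len(A0)))
    f0 = f(A0)
    for j in range(len(A0)):
        Ap = A0.copy(); Ap[j] += h
        M[:, j] = (f(Ap) - f0) / h
    return M

B = (2 * g + np.sqrt(4 * g**2 + 20 * r)) / 10      # stationary hexagon amplitude of this toy rhs
eigvals = np.linalg.eigvals(stability_matrix(rhs, np.full(3, B)))
print("eigenvalues:", eigvals)                      # negative values indicate stability
```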
+ the stationary amplitudes are given in eq .( 63 ) , eq .( 68 ) , eq .( 81 ) , and eq .the stationary amplitudes and the constant are given by eq .( 94 ) and eq .( 116 ) , respectively .we thank ghazaleh afshar , eberhard bodenschatz , theo geisel , min huang , wolfgang keil , michael schnabel , dmitry tsigankov , and juan daniel flrez weidinger for discussions . xu x , bosking wh , white le , fitzpatrick d , casagrande va ( 2005 ) functional organization of visual cortex in the prosimian bush baby revealed by optical imaging of intrinsic signals .j neurophysiol 94 : 27482762 .hoffsmmer f , wolf f , geisel t , lwel s , schmidt ke ( 1996 ) sequential bifurcation and dynamic rearrangement of columnar patterns during cortical development . in :bower j , editor , computation and neural systems .hoffsmmer f , wolf f , geisel t , lwel s , schmidt ke ( 1995 ) sequential bifurcation of orientation and ocular dominance maps . in : proceedings of the international conference on artificial neural networks .paris : ec2 & cie , volume i , p. 535 .hoffsmmer f , wolf f , geisel t , schmidt ke , lwel s ( 1997 ) sequential emergence of orientation and ocular dominance maps . in : elsner n , menzel r , editors , learning and memory , proceedings of the 23rd gttingen neurobiology conference 1995 .stutttgart : thieme verlag , p. 97 .bonhoeffer t , kim ds , malonek d , shoham d , grinwald a ( 1995 ) optical imaging of the layout of functional domains in area 17 and across the area 17/18 border in cat visual cortex .eur j neurosci 7 : 19731988 .kaschube m , wolf f , puhlmann m , rathjen s , schmidt ke , et al .the pattern of ocular dominance columns in cat primary visual cortex : intra- and interindividual variability of column spacing and its dependence on genetic background .eur j neurosci 18 : 32513266 .
in the primary visual cortex of primates and carnivores , functional architecture can be characterized by maps of various stimulus features such as orientation preference ( op ) , ocular dominance ( od ) , and spatial frequency . it is a long - standing question in theoretical neuroscience whether the observed maps should be interpreted as optima of a specific energy functional that summarizes the design principles of cortical functional architecture . a rigorous evaluation of this optimization hypothesis is particularly demanded by recent evidence that the functional architecture of orientation columns precisely follows species invariant quantitative laws . because it would be desirable to infer the form of such an optimization principle from the biological data , the optimization approach to explain cortical functional architecture raises the following questions : i ) what are the genuine ground states of candidate energy functionals and how can they be calculated with precision and rigor ? ii ) how do differences in candidate optimization principles impact on the predicted map structure and conversely what can be learned about an hypothetical underlying optimization principle from observations on map structure ? iii ) is there a way to analyze the coordinated organization of cortical maps predicted by optimization principles in general ? to answer these questions we developed a general dynamical systems approach to the combined optimization of visual cortical maps of op and another scalar feature such as od or spatial frequency preference . from basic symmetry assumptions we obtain a comprehensive phenomenological classification of possible inter - map coupling energies and examine representative examples . we show that each individual coupling energy leads to a different class of op solutions with different correlations among the maps such that inferences about the optimization principle from map layout appear viable . we systematically assess whether quantitative laws resembling experimental observations can result from the coordinated optimization of orientation columns with other feature maps .
the advent of reliable redshift - independent distance estimators lead to an enormous growth of activity in the field of measuring and interpreting the peculiar velocities of galaxies .the velocity field can in particular be fruitfully investigated by means of perturbation analysis .one important result is the , -dependent , velocity - density relationship that follows from linear theory ( see e.g. peebles 1980 ) .moreover , taking advantage of the fact that perturbation theory predicts that the rotational part of the velocity field vanishes , bertschinger & dekel ( 1989 ) developed the non - parametric potent method in which the local cosmological velocity field is reconstructed from the measured line - of - sight velocities ( bertschinger et al .1990 ) .then , via the velocity - density relationship it is possible to estimate the value of , where it is assumed that the bias of the galaxies can be simply represented by a linear bias factor ( see the review paper of dekel 1994 and references therein ) .there are however other methods that have been proposed that uses _ intrinsic _ properties of the large scale velocity field to estimate .for example , nusser & dekel ( 1993 ) proposed to use a reconstruction method assuming gaussian initial conditions to constrain , while dekel & rees ( 1994 ) use voids to achieve the same goal. another approach has been proposed by bernardeau et al .( 1995 ) and bernardeau ( 1994a ) based on the use of statistical properties of the divergence of the locally smoothed velocity field .preliminary comparisons of the analytical predictions with numerical simulations ( bernardeau 1994b , juszkiewicz et al .1995 , okas et al .1995 ) yielded encouraging results .however , such a comparison is complicated due to the fact that the velocities are only known at , non - uniformly distributed , particle locations . here , i report recent results obtained by bernardeau & van de weygaert ( 1996 ) and van de weygaert et al .( 1996 ) addressing specifically the issue of the discrete nature of the velocity sampling .to start with , let me remind a few analytical results obtained from perturbation theory applied to the large - scale velocity field .i consider the statistical properties of the one - point _ volume_ averaged velocity divergence , , ( in units of the hubble constant ) when the average is made with a top - hat window function . although the first analytical results that have been obtained dealt with the values of high order moments of its distribution function ( bernardeau et al .1995 , bernardeau 1994a ) it is more convenient to consider its global shape . particularly interesting is the case ( where is the index of the power spectrum ) for which there is a simple analytical fit for the pdf ( bernardeau 1994b ) , /\kappa^{1/2}+ [ \lambda-1]/\lambda^{1/2})^{-3/2 } \over \kappa^{3/4 } ( 2\pi)^{1/2 } \sigma_{\theta } } \exp\left[-{\theta^2\over 2\lambda\sigma_{\theta}^2}\right ] { \rm d}\theta,\ ] ] with where is the variance of the distribution .the resulting shape of the pdf is shown in fig .it is worth noting that the dependence shows up in the shape and position of the peak ( given by indicated on the figure ) and by the position of the large cutoff ( on the figure ) .a similar property to the later one was also found by dekel & rees ( 1994 ) using the zeldovich approximation .it allows indeed to constraint from the largest expanding void . 
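A quick way to see how the volume-averaged divergence discussed above is constructed numerically is to smooth a gridded divergence field with a spherical top-hat window in Fourier space. In the sketch below the velocity field is a random stand-in rather than a simulation output, and normalisations (Hubble units) are ignored.

```python
import numpy as np

# Minimal sketch of the volume (top-hat) averaged velocity divergence on a grid.
# The velocity field is a Gaussian random stand-in for a simulation output.
N, L, R = 64, 200.0, 15.0            # grid, box size, top-hat radius (arbitrary units)
k = 2 * np.pi * np.fft.fftfreq(N, d=L / N)
kx, ky, kz = np.meshgrid(k, k, k, indexing="ij")
k2 = kx**2 + ky**2 + kz**2

rng = np.random.default_rng(7)
v = rng.standard_normal((3, N, N, N))          # stand-in velocity components

# divergence in Fourier space: i k . v
div_hat = 1j * (kx * np.fft.fftn(v[0]) + ky * np.fft.fftn(v[1]) + kz * np.fft.fftn(v[2]))

# spherical top-hat window W(kR) = 3 (sin x - x cos x) / x^3
x = np.sqrt(k2) * R
with np.errstate(invalid="ignore", divide="ignore"):
    W = np.where(x > 0, 3 * (np.sin(x) - x * np.cos(x)) / x**3, 1.0)

theta = np.real(np.fft.ifftn(div_hat * W))     # smoothed divergence field
print("variance of smoothed divergence:", theta.var())
```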
in principleall these features can be used to constrain .before trying to apply these ideas to the data we have extensively checked these features in numerical simulations . in usual numerical analysis , the velocity field is defined with a momentum average of the closest particles on grid points .this method , however , yields very poor results when a subsequent _ volume _ average is required .the main reason is that the two smoothing scales , grid size and smoothing radius , can not be very different from each other .the problem is in fact to define properly the velocity _ field _ ( the velocity at any location in space ) from the velocities of a given set of discrete and sparse points .this is what the methods proposed by van de weygaert and i are designed for . in the first method we propose the local velocity to be given by the _ velocity of the closest particle_. for defining the velocity in the whole space onehas then to divide space in cells , each containing a particle ( of a simulation for instance ) , and so that any point inside the cell is closer to it than to any other particle .this partition is called the _ voronoi _ tessellation . in fig 2 .i present a 2d sketch of such a partition : the solid lines form the voronoi tessellation of the filled circles representing the particles .then , from the initial assumption that the velocity is constant in the voronoi cells , the velocity gradients ( in particular the divergence ) are localized on the walls .they have actually a surface density given by ( is the peculiar velocity ) where is the unit vector normal to the wall and going outward of the cell ( see fig .the local smoothed velocity divergences are then just given by the sum of the fraction of all walls that are within a given sphere of radius ( thick solid line in the figure ) multiplied by the value of the divergence on each wall , in the delaunay method the local velocity is supposed to be given by a _linear combination of the velocities of the four neighbors_. if in 1d it is easy to identify the closest neighbors , this is no longer the case in 2d or 3d .the solution is once again provided by the voronoi tessellation or rather its dual , the _ delaunay _ tessellation .this is the triangulation in which the particles are connected together when they share a wall in the voronoi tessellation ( dashed lines in fig .because of the properties of the voronoi tessellation , the delaunay triangulation satisfies a criterion of compactness : tetrahedra that are defined in such a way ( or triangles in 2d ) are as less elongated as possible .this is a crucial property for doing a subsequent linear interpolation in the tetrahedra , since it ensures that the linear interpolation will be as good as possible .so having identified the four neighbors of a point its velocity is assumed to be given by where are the barycentric weights of the points at the position , the velocity gradients are then uniform in the tetrahedra and the local smoothed divergence is given by a sum over all the tetrahedra that intersect a given sphere ( gray area in fig .2 ) . in practice to use these methods we have to select only a fraction of the points provided by the -body codes .this is done in such a way that the largest voids retain as many particles as possible .for details see bernardeau and van de weygaert ( 1995 ) .the tessellations are then built using the codes developed by van de weygaert ( 1991 ) .so far we have applied these methods to two different numerical simulations .one kindly provided by h. 
couchman ( 1991 ) with cdm initial condition for and a pm simulation with and a power law spectrum of index ( van de weygaert et al .1996 ) here i just present a very significant figure showing the scatter plots of the local divergences measured in 8000 different locations ( fig .3 ) with the various available methods .when the delaunay method is compared to the previous grid method the correlation is very noisy and there is even a systematic error in the variance . when the voronoi and the delaunay methods are compared to each other , no such features are seen and there is a perfect correlation between the two estimations .this gives us a good confidence in our methods .the resulting pdf - s are presented in fig .4 . they show a remarkable agreement between the numerical estimations and the predictions of eq .moreover the left panel demonstrates that the statistics of the velocity divergence is indeed sensitive to ( comparison of the dashed line with the solid line ) .an interesting remark to make is that , in principle , it is not only possible to measure but it must also be possible to test the gravitational instability scenario . the pdf given in ( 1 ) is indeed a two - parameter family of curves . if the actual distribution fails to reproduce one of these curves it is not compatible with the gravitational instability scenario with gaussian initial conditions. that would be the case , for instance , if the observed distribution is skewed the other way around .bernardeau , f. , 1994a , apj , 433 , 1 bernardeau , f. , 1994b , a&a , 291 , 697 bernardeau , f. , juszkiewicz , r. , dekel , a. , bouchet , f.r . , 1995 , mnras , 274 , 20 bernardeau , f. , van de weygaert , r. , 1996 , mnras in press .bertschinger , e. , dekel , a. , 1989 , apj , 336 , l5 bertschinger , e. , dekel , a. , faber , s.m . , dressler , a. , burstein , d. , 1990 , apj , 364 , 370 couchman , h.m.p ., 1991 , apj , 368 , l23 dekel , a. , 1994 , araa , 32 , 371 dekel , a. , rees , m. , 1994 , apj , 422 , l1 juszkiewicz , r. , weinberg , d.h ., amsterdamski , p. , chodorowski , m. , bouchet , 1995 , apj , 442 , 39 okas , e.l . ,juszkiewicz , r. , weinberg , d.h . ,bouchet , f.r . , 1995 , mnras , in press nusser , a. , dekel , a. , 1993 , apj , 405 , 437 peebles , p.j.e . , 1980 , the large - scale structure of the universe , princeton univ .press van de weygaert , r. , 1991 , ph.d .thesis , leiden university van de weygaert , r. , bernardeau , f. , hivon , e. , bouchet , f. , 1996 , in preparation
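A minimal two-dimensional sketch of the Delaunay interpolation step described in this contribution is given below, using scipy.spatial; the actual analysis is three-dimensional and relies on the dedicated tessellation codes cited above, so the positions and velocities here are only placeholders.

```python
import numpy as np
from scipy.spatial import Delaunay

# Delaunay interpolation sketch: the velocity at an arbitrary point is the
# barycentric combination of the velocities at the vertices of the Delaunay
# simplex containing it (2D here for brevity; the method is used in 3D).
rng = np.random.default_rng(3)
pos = rng.random((500, 2))                 # particle positions (placeholders)
vel = rng.standard_normal((500, 2))        # particle velocities (placeholders)

tri = Delaunay(pos)

def velocity_at(points):
    s = tri.find_simplex(points)                          # simplex index per point
    # note: points outside the convex hull get index -1 and need separate handling
    T = tri.transform[s]                                  # barycentric transforms
    b = np.einsum("nij,nj->ni", T[:, :2, :], points - T[:, 2, :])
    w = np.concatenate([b, 1.0 - b.sum(axis=1, keepdims=True)], axis=1)
    verts = tri.simplices[s]                              # vertex indices per point
    return np.einsum("ni,nij->nj", w, vel[verts])         # barycentric-weighted velocities

q = 0.2 + 0.6 * rng.random((10, 2))        # query points well inside the hull
print(velocity_at(q))
```

The velocity gradients, and hence the divergence, then follow from the (piecewise constant) linear interpolation coefficients within each tetrahedron.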
a lot of predictions for the statistical properties of the cosmic velocity field on large scales have recently been obtained using perturbation theory . in this contribution i report the outcomes of a set of numerical tests designed to check these results . using voronoi and delaunay tessellations to define the velocity field by interpolation between the particle velocities in numerical simulations , we obtain reliable estimates of the local velocity gradients . we thus show that the statistical properties of the velocity divergence are in very good agreement with the analytical results . in particular we confirm the expected dependence for the shape of its distribution function .
the plinth of cryptography is built upon the properties of confusion and diffusion as stated by shannon in 1949 , which can be linked to the main characteristics of chaotic systems : ergodicity and sensitivity to control parameters and initial conditions .the connection between the basic coordinates of cryptography and chaotic systems has paved the research on chaotic cryptography .a lot of different methods have been proposed in the field of chaos - based cryptography , but most of them show very serious security flaws ( * ? ? ?* chapters 8 and 9 ) .a very important family of chaotic cryptosystems is the one inheriting the characteristics of the substitution - permutation networks ( spns ) , as it is explained in .this kind of architecture is not secure unless the avalanche criterion is satisfied . as matter of fact, the inclusion of chaotic systems in this kind of architecture does not guarantee security and the assessment of the avalanche property should be thoroughly carried out . in chaotic cryptosystem is proposed to encrypt colour images through the permutation of their columns and rows , along with a substitution procedure based on the logistic map . from a general point of view , this cryptosystem can be interpreted as one round of a spn .this kind of architecture present a very low level of confusion and security pitfalls if the substitution stage can be rewritten as a way to change the plaintext according to a keystream which is independent of the plaintext . as we will discuss along this paper, this is the case of the cryptosystem described in .the rest of the paper is organized as follows . in sec .[ sec : description ] it is described the cryptosystem under examination . with the aim of underlining the shortcomings of this encryption scheme , we discuss in sec . [sec : weaknesses ] some limitations with respect to the dynamical system bearing encryption , to the key space and , finally , in regards to the diffusion property of the cryptosystem .the analysis is complemented by remarking the vulnerability of the cryptosystem against a chosen - plaintext attack . in this concern , we explain along sec . [sec : probl - deriv - from ] how to elude the security laying on the encryption architecture selected in . finally , in sec .[ sec : conclusions ] we summarize and discuss the results of the cryptanalysis .the encryption procedure defined in is applied on colour plain - images of size and coded in rgb format .the plain colour image is treated as a matrix of size , whereas the cipher - image is given by also of size . for the sake of clarity ,we have first modified the notation used in and second divided the encryption method into four stages ( see fig .[ fig : encryption ] ) : 1 ._ rows permutations_. + the colour plain - image is transformed into a gray - scale image of size , just by incorporating the rows of the green and blue components after the rows of the red one .let be a permutation matrix that transforms into by shuffling its rows in a pseudo - random way , through the iteration of the logistic map for control parameter equals to and initial condition given by .the logistic map is defined by the iteration function and the orbit can be generated from a given initial condition by doing .columns permutations_. + the matrix is converted into a matrix of size , by combining horizontally ( one after the other ) the three groups of rows that define . 
for each row of , the pixels are permuted according to the corresponding row of a permutation matrix .the resulting matrix is noted as .again , this permutation matrix is obtained by iterating the logistic map in this case with control parameter and initial condition .selection of the next pixel to encrypt_. + once pixels have been shuffled , substitution is performed using a keystream and selecting the pixel to encrypt based on a pseudo - random sequence , with .the sequence determines if the next pixel to encrypt proceeds from either the first n columns ( ) , the second group of n columns ( ) , or the third set of columns ( for ) of . in caseall pixels of a band have been already selected , the pixel to encrypt is chosen from the next colour band ( after the blue pixels , the next ones are the red ) . consequently , a vector of length is obtained by reading each colour component of from the first row and from the left to the right , according to the selection vector .substitution stage_. + finally , the output of the previous step is masked using a keystream .the update rule is given by the resulting cipher - image is derived from using , i.e. , by grouping the pixels of into colour components in the reversed order that they were grabbed from to build up . according to ,the secret key of the cryptosystem consists of the set of values , which are used to compute two orbits of the logistic map ( eq . ) .those orbits are the core of the procedures to generate the permutation matrices and , the pseudo - random sequence , and the keystream .as we discuss below , the cryptanalysis of the cryptosystem can be carried out independently of those generation procedures . for a more detailed description of any of those procedures or other design details, please refer to sec .2.1 of .as result of our previous work on the field of chaos - based cryptography , we can conclude that the most critical problems in chaotic cryptography are linked to three aspects : the selection of the chaotic system , the choice of an encryption architecture , and the implementation of the cryptosystem . in the specific scenario depicted by ,there exist some problems that we have previously highlighted in regards to both the selection of the chaotic system and the encryption architecture .those problems inform about a non exhaustive description of the cryptosystem , but also about security breaches .the drawbacks of the cryptosystem definition are derived in sec .[ sec : non - exha - defin ] by studying the key space of the cryptosystem on account of the dynamical properties of the underlying chaotic map , and in sec .[ sec : low - sens - change ] through the discussion of the diffusion property of the encryption architecture .the security analysis is the core of sec .[ sec : probl - deriv - from ] .one major concern in chaotic cryptography is on designing cryptosystems in such a way that the underlying dynamical systems evolves chaotically ( * ? ? ?* rule 5 ) . in the case of the logistic map ( and other maps ) , this resorts to the evaluation of the lyapunov exponent in order to guarantee chaoticity ( see fig . [fig : lyapunov ] ) . as a matter of fact , after the myrberg - feigenbaum point ( ) it can not be asserted that the logistic map is always chaotic due to the existence of a dense set of periodic windows ( i.e. , of values of implying _ regular _ and _ non stochastic _ behavior ) . .the selection of and should be performed guaranteeing chaoticity , i.e. , positive values for the lyapunov exponent . 
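A compact numerical illustration of the two points above, checking that a candidate control parameter yields a positive Lyapunov exponent and turning a logistic orbit into a permutation by sorting it, is sketched below. The exact permutation-generation procedure of the scheme may differ in its details; only the principle is shown.

```python
import numpy as np

# (i) estimate the Lyapunov exponent of the logistic map, lambda = <ln|lam*(1-2x)|>,
#     to check that a candidate control parameter gives chaotic behaviour;
# (ii) build a permutation from a logistic orbit by sorting it (argsort).
def logistic_orbit(x0, lam, n, transient=1000):
    x = x0
    for _ in range(transient):
        x = lam * x * (1 - x)
    out = np.empty(n)
    for i in range(n):
        x = lam * x * (1 - x)
        out[i] = x
    return out

def lyapunov(x0, lam, n=200000):
    orbit = logistic_orbit(x0, lam, n)
    return np.mean(np.log(np.abs(lam * (1 - 2 * orbit))))

def permutation_from_orbit(x0, lam, n):
    return np.argsort(logistic_orbit(x0, lam, n))

x0 = 0.3456
for lam in (3.5, 3.7, 3.83, 3.99):        # periodic windows vs. chaotic values
    print(lam, "lyapunov exponent:", round(lyapunov(x0, lam), 3))

perm = permutation_from_orbit(x0, lam=3.99, n=256)    # e.g. a 256-row permutation
rows = np.arange(256 * 4).reshape(256, 4)             # toy "image" with 256 rows
shuffled = rows[perm, :]                              # apply the row permutation
```

Values inside periodic windows (such as 3.5 or 3.83) give negative exponents and should therefore be excluded from the key space.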
] additionally , in the use of the logistic map relies not only on its positive rate of local divergence , but also on its topological properties .certainly , the permutation of columns and rows is conducted by the ordering of chaotic orbits of the logistic map of length and , respectively . in this sense, we should assess whether the number of possible permutations on the values of those orbits is at least equal to the number of possible initial conditions .the number of initial conditions is given by the inverse of the _ machine epsilon _ , which is for double precision floating - point arithmetic . on the other hand ,the number of possible permutations on a given orbit of length is . in the case of deterministic dynamical systems, this upper value is not reached due to the existence of a set of _ forbidden _ permutations .if we restrict our discussion to dynamical systems with iteration function defined as a scalar , then the cardinality of the set of possible permutations of an orbit is upper bounded by , where is the topological entropy of the map . for unimodal mapsthe topological entropy can be easily computed according to the theory of applied symbolic dynamics and , in some cases , it is even possible to give a closed analytical form . in fig .[ fig : top_log ] we show the topological entropy of the logistic map with respect to the control parameter . according to the scope depicted by the permutation phases of the cryptosystem defined in , the control parametershould be selected in such a way that is greater than .if we consider that the smallest value for and is 128 , then the previous restriction is satisfied for above .this fact implies a reduction of the key space as defined in and , although it is not a large shortening , it indeed informs about the needs of using not only the lyapunov exponent but also the topological entropy as core of the selection of the keys of the cryptosystem .finally , another problem when defining the key space of the cryptosystem arises from the symmetry of the iteration function of the logistic map . as it is commented in , the fact that eq .( [ eq : logistic ] ) satisfies implies that and are equivalent sub - keys for decryption .the same applies to and . in the context of cryptographya minor change in the input of a cryptosystem should imply a major change in the corresponding output ( * ? ? ?* rule 9 ) . in this respect , if we take into account two images and with only one different pixel , then the associated cipher - images should be very different . to assess this property for the cryptosystem in , we have encrypted the images in fig .[ fig : low_sensitivity ] using as key , , , and .the differential cipher - image is equal to zero for a meaningful set of pixels , which informs about the limitations of the diffusion property of the cryptosystem given in regarding changes in the plain - image .according to , the security assessment of any cryptosystem must be carried out ( at least ) with respect to four basic attacks : * ciphertext - only attack : the cryptanalysis only knows the result of encryption . *known - plaintext attack : several pairs of plaintext and ciphertext are accessible for the cryptographer .* chosen - plaintext attack : the attacker gains access to the encryption machine and performs cryptanalysis by selecting adequate plaintexts . *chosen - ciphertext attack : the decryption machine can be used by the cryptanalyst , which chooses ciphertexts in order to extract information about the secret parameters of the cryptosystem . 
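The differential test above is commonly quantified by the fraction of positions at which the two cipher-images differ (the NPCR measure). The sketch below uses placeholder arrays in place of the actual cipher-images; a cipher with good diffusion should give values close to 0.996 for 8-bit pixels.

```python
import numpy as np

# NPCR-style differential measure: fraction of pixel positions where the two
# cipher-images differ.  C1 and C2 are placeholders for the real outputs of the
# encryption machine applied to plain-images differing in a single pixel.
def npcr(c1, c2):
    return np.mean(c1 != c2)

rng = np.random.default_rng(4)
C1 = rng.integers(0, 256, size=(256, 256, 3), dtype=np.uint8)
C2 = C1.copy()
C2[:128] = rng.integers(0, 256, size=(128, 256, 3), dtype=np.uint8)  # half unchanged: poor diffusion
print("NPCR:", npcr(C1, C2))   # about 0.5 here; good diffusion gives about 0.996
```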
in this section we show that the cryptosystem described in does not exclude the successful application of a chosen - plaintext attack . as it has been pointed out in sec .[ sec : description ] , the encryption scheme consists of two classes of procedures : permutation and substitution of pixels .the main weakness of the proposal is a consequence of the independence between the shuffling stages and the last stage , i.e. , the one concerning the substitution of pixels .this fact can be exploited by means of the following _ divide - and - conquer attack _ , using as bottom - line chosen plain - images which are neutral elements with respect to row / column permutations . in this sense , if one encrypts a plain - image with all pixels equal to the same value , then the output of the shuffling procedures is the same plain - image .moreover , if the plain - image is selected forcing all rows / columns being equal , then encryption only shuffles columns / rows . in correspondence to the previous comments , we can mount an attack based on a chosen plain - image with all pixels equals to zero .let be a colour image with all pixels equal to zero , which implies that for .taking into account eq .( [ eq:1 ] ) , we have for . from the previous equation we can find the value of just by subtracting from .if we want to apply the recovered to get any from the corresponding , then must be obtained .this commitment can be accomplished using a second _ constant value plain - image_. for instance , we can use a chosen plain - image with all pixels equal to one .this being the case , we have for and for .let us focus on the image given as the difference between the cipher - images obtained from and respectively .since the difference between eq .( [ eq:3 ] ) and eq .( [ eq:4 ] ) is equal to , the components of are determined by looking for the pixel with value in each colour band of that difference image .if that pixel belongs to the red component , then ; if it is one of the green pixels , then ; finally , leads to a pixel in the blue band .once the substitution keystream and the selection vector have been obtained , it is possible to reconstruct the input of the shuffling procedures according to .this new goal is going to be achieved by using chosen plain - images . in this paperwe restrict our analysis to images of the same size as those used in , i.e. , images of size and , consequently , four chosen - plain images are required to elude the permutation - only phase . in order to validate our cryptanalysis , we have configured an encryption machine by selecting the key defined by the set , , , and . upon the assumption of having access to the encryption machine , we encrypt an image equal to zero and an image with all pixels equal to 1 .the cryptanalysis described in sec .[ sec : break - conf - stage ] is applied , and thus the keystream and the pseudo - random sequence are recovered . as it is commented in similarcryptanalysis works , the recovering of those sequences is equivalent to getting the secret key .nevertheless , the complete cryptanalysis of the cryptosystem in demands to infer a permutation matrix representing the composition of the permutation procedures lead by and .this goal can be achieved by using plain - images with all rows / columns equals . 
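the keystream-recovery step can be illustrated with a toy model. the exact update equations (eqs. 1-4) of the analyzed cryptosystem are not reproduced here, so the sketch below only mimics their overall shape: a row/column permutation followed by an additive substitution modulo 256. the oracle, the keystream and the image size are stand-ins, not the actual construction. the point being illustrated is that an all-zero plain-image is a fixed point of any permutation, so its cipher-image exposes the substitution keystream directly, after which the remaining problem is permutation-only.

```python
import numpy as np

rng = np.random.default_rng(1)

# --- toy encryption machine (a stand-in, not the analyzed cryptosystem) ---
H, W = 8, 8
row_perm = rng.permutation(H)
col_perm = rng.permutation(W)
keystream = rng.integers(0, 256, size=(H, W))

def encrypt(plain):
    shuffled = plain[row_perm][:, col_perm]      # permutation-only stages
    return (shuffled + keystream) % 256          # additive substitution stage

# --- chosen-plaintext step: an all-zero image is invariant under permutation ---
zeros = np.zeros((H, W), dtype=np.int64)
recovered = encrypt(zeros)                       # equals the keystream itself
assert np.array_equal(recovered, keystream)

# the substitution stage can now be stripped from any intercepted cipher-image,
# leaving a permutation-only cipher to be attacked with further chosen images
cipher = encrypt(rng.integers(0, 256, size=(H, W)))
permutation_only_output = (cipher - recovered) % 256
```

the attack described in this section applies the same two steps to the real equations: the all-zero plain-image recovers the substitution keystream, and a second constant-value plain-image fixes the remaining selection vector.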
to illustrate the cryptanalysis we are going to extract the original positions of the pixels of the first row of .first , we encrypt a plain - image with each colour component determined by if we consider the vector of length 768 given by the concatenation of the first row of red , green , and blue component of the cipher - image , it is easy to verify that it contains only three values .the values corresponding to the selected secret key are 93 , 203 , and 223 , which indicates that the first row of the cipher image comes from either the row 93 , 203 , 223 of either of the colour components of the plain image . in order to establish the colour band of each of the three candidates for row permutation, we encrypt a plain - image with red component with all pixels equal to zero , green band being 1 , and blue component being 2 .then , we look for the occurrences of 0 , 1 , and 2 in the first row of each colour component of the cipher - image . the intersection of this new vector of indexes of occurrence with the previous one enables to conclude that contains the row 93 of the blue band of the plain - image , the row 203 of the red component of the plain - image , and the row 223 of the red component of the plain - image . after identifying the source of the first row of , we need to label each pixel of the rows identified as sources of that row .this aim is fulfilled if we encrypt a colour image with its three colour components equal to afterwards , we look for the occurrences of through the vector .the indexes of occurrence are given by the set .let us begin with , which is for the selected key .the set implies that either of the referred pixels comes from the first pixel of either the row 93 of the blue component , the red row number 203 , or the row 223 of the red band of the plain - image . to select the proper value among the three candidates for the three identified pixels , we encrypt an plain - image such that the row 93 of the blue component is the red row number 203 and the row 223 of the red bandis defined as again , we look for through and we obtain the indexes of occurrence 235 , 356 , and 556 . only 356 is included in the previous set , and as a result we have that the first pixel of the row 93 of the blue component of goes to the pixel 100 ( ) of the first row of the green component of . if we proceed in the same fashion with for , then we obtain the permutations for all the pixels of the row 93 of the blue band of the plain - image .the same applies to the row 203 ( 223 ) of the red band , but taking into account that the first pixel of the row is now labeled by 255 ( 254 ) .if one applies the previous methodology for all the rows of the cipher - image , then the permutation matrix can be inferred . in this sense, we have applied the cryptanalysis based on the six chosen plain - images to an encryption machine with secret key , , , and .the cryptanalysis allows to get , , and the permutation matrix , which is equivalent to obtain the secret key . 
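the row- and pixel-labelling procedure above uses six chosen plain-images tailored to the specific row/column structure of the scheme. more generally, for a permutation-only stage acting on n positions with pixel values in {0, ..., 255}, about log_256(n) chosen plain-images suffice to pin down the permutation, which is the quantitative result of the general cryptanalysis of permutation-only ciphers cited in the references. a minimal sketch of that generic idea on a one-dimensional toy permutation (the size, the oracle and the base-256 encoding are hypothetical choices):

```python
import numpy as np

rng = np.random.default_rng(7)

n = 4096                                    # number of permuted positions (toy size)
secret_perm = rng.permutation(n)            # unknown permutation to recover

def permute_only_encrypt(plain):
    # toy oracle: cipher[i] = plain[secret_perm[i]]
    return plain[secret_perm]

# number of chosen plaintexts needed when each pixel holds one base-256 digit
digits = int(np.ceil(np.log(n) / np.log(256)))          # = 2 for n = 4096

# chosen plaintext k encodes the k-th base-256 digit of each position index
chosen = [np.array([(i >> (8 * k)) & 0xFF for i in range(n)]) for k in range(digits)]
ciphers = [permute_only_encrypt(p) for p in chosen]

# reassemble, for every output position, the index it came from
recovered = np.zeros(n, dtype=np.int64)
for k, c in enumerate(ciphers):
    recovered += c.astype(np.int64) << (8 * k)

assert np.array_equal(recovered, secret_perm)
print(f"permutation of {n} positions recovered from {digits} chosen plaintexts")
```

in either case, recovering the substitution keystream and the permutation matrix amounts to recovering the secret key.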
to verify this assertion we have encrypted an image (the result is in fig. [fig:chosen](a)), applied the cryptanalysis, and decrypted the cipher-image using the outputs of the cryptanalysis. the decrypted image is the one in fig. [fig:chosen](b), which coincides with the original plain-image.

in this paper we have studied in detail a recent proposal in the area of chaos-based cryptography. we have underlined some problems related to the dynamical properties of the system sustaining encryption, and we have also pinpointed some flaws in the encryption architecture. the goal of our work was not simply to show the problems of a given chaotic cryptosystem, but to highlight the possibility of creating secure proposals to encrypt information using chaos. in this spirit, our recommendation follows the set of rules given in .

this work was supported by the uam projects of teaching innovation and the spanish government projects tin2010-19607 and bfu2009-08473. the work of david arroyo was supported by a juan de la cierva fellowship from the ministerio de ciencia e innovación of spain.

d. arroyo, framework for the analysis and design of encryption strategies based on discrete-time chaotic dynamical systems, ph.d. thesis, etsia of the polytechnic university of madrid, madrid, spain, available online at http://digital.csic.es/handle/10261/15668 (july 2009).

d. arroyo, r. rhouma, g. alvarez, s. li, v. fernandez, on the security of a new image encryption scheme based on chaotic map lattices, chaos: an interdisciplinary journal of nonlinear science 18 (2008) 033112, 7 pages.

d. arroyo, g. alvarez, v. fernandez, on the inadequacy of the logistic map for cryptographic applications, in: l. hernandez, a. martin (eds.), x reunión española sobre criptología y seguridad de la información (x recsi), universidad de salamanca, salamanca, spain, 2008, pp. 77-82 (isbn 9788469151587).

d. arroyo, j. m. amigó, s. li, g. alvarez, on the inadequacy of unimodal maps for cryptographic applications, in: j. d. ferrer, a. m. ballesté, j. c. roca, a. s. gómez (eds.), xi reunión española sobre criptología y seguridad de la información (xi recsi), universitat rovira i virgili, tarragona, spain, 2010, pp. 37-42, isbn 9788469333044.

s. li, c. li, g. chen, n. g. bourbakis, k.-t. lo, a general quantitative cryptanalysis of permutation-only multimedia ciphers against plaintext attacks, signal processing: image communication 23 (3) (2008) 212-223.
the interleaving of chaos and cryptography has been the aim of a large set of works since the beginning of the nineties . many encryption proposals have been introduced to improve conventional cryptography . however , many of those proposals possess serious problems according to the basic requirements for the secure exchange of information . in this paper we highlight some of the main problems of chaotic cryptography by means of the analysis of a very recent chaotic cryptosystem based on a one round substitution permutation network . more specifically , we show that it is not possible to avoid the security problems of that encryption architecture just by including a chaotic system as core of the derived encryption system . image encryption , substitution permutation networks , permutation - only ciphers , unimodal maps , chosen - plaintext attack .
we study the three - user multi - way relay channel ( mwrc ) with correlated sources , where each user transmits its data to the other two users via a single relay , and where the users messages can be correlated .the mwrc is a canonical extension of the extensively studied two - way relay channel ( twrc ) , where two users exchange data via a relay . adding users tothe twrc can change the problem significantly .the mwrc has been studied from the point of view of channel coding and source coding . in channel coding problems ,the sources are assumed to be independent , and the channel noisy .the problem is to find the capacity , defined as the region of all _ achievable _ channel rate triplets ( bits per channel use at which the users can encode / send on average ) . for the gaussian mwrc with independent sources , gndz et al . obtained asymptotic capacity results for the high snr and the low snr regimes . for the finite - field mwrc with independent sources , ong et al . constructed the _ functional - decode - forward _ coding scheme , and obtained the capacity region . for the general mwrc with independent sources ,however , the problem remains open to date . in source coding problems ,the sources are assumed to be correlated , but the channel noiseless .the problem is to find the region of all _ achievable _ source rate triplets ( bits per message symbol at which the users can encode / send on average ) .the source coding problem for the three - user mwrc was solved by wyner et al . , using _ cascaded slepian - wolf _ source coding . in this paper , we study both source and channel coding in the same network , i.e. , transmitting correlated sources through noisy channels ( cf .our recent work on the mwrc with correlated sources and orthogonal uplinks ) . for most communication scenarios ,the source correlation is fixed by the natural occurrence of the phenomena , and the channel is the part that engineers are `` unwilling or unable to change '' .given the source and channel models , we are interested in finding the limit of how fast we can feed the sources through the channel . to this end , define _ source - channel rate _ ( also known as bandwidth ratio ) as the average channel transmissions used per source tuple . our aim is then to derive the minimum source - channel rate required such that each user can _ reliably _ and _ losslessly _ reconstruct the other two users messages . in the multi - terminal network, it is well known that separating source and channel coding , i.e. , designing them independently , is not always optimal ( see , e.g. , the multiple - access channel ) .designing good joint source - channel coding schemes is difficult , let alone finding an optimal one .gndz et al . considered a few networks with two senders and two receivers , and showed that source - channel separation is optimal for certain classes of source structure . in this paper , we approach the mwrc in a similar direction .we show that source - channel separation is optimal for three classes of source / channel combinations , by constructing coding schemes that achieve the minimum source - channel rate .recently , mohajer et al . solved the problem of linear deterministic relay networks with correlated sources .they constructed an optimal coding scheme , where each relay injectively maps its received channel output to its transmitted channel input .while this scheme is optimal for deterministic networks , such a scheme ( e.g. 
, the amplify - forward scheme in the additive white gaussian noise channel ) suffers from noise propagation in noisy channels and has been shown to be suboptimal for the mwrc with independent sources .we consider the mwrc depicted in figure [ fig : mwrc ] , where three users ( denoted by 1 , 2 , and 3 ) exchange messages through a noisy channel with the help of a relay ( denoted by 0 ) .for each node , we denote its source by , its input to the channel by , and its received channel output by .we let , as the relay has no source .we consider correlated and discrete - memoryless sources for the users , where , , and are generated according to some joint probability mass function the channel consists of a finite - field _ uplink _ from the users to the relay , which takes the form and a finite - field _ downlink _ from the relay to each user , which takes the form where , for all , for some finite field of cardinality with the associated addition . here, can be any prime power .we assume that the noise is not uniformly distributed , i.e. , its entropy ; otherwise , it will randomize the channel , and no information can be sent through .each user sends source symbols to the other two users ( simultaneously ) in channel uses .we refer to the source symbols of user as its message , denoted by , w_i[2 ] , \dotsc, ] , where each symbol triplet , w_2[u ] , w_3[u]) ] , for all and for all .each user estimates the messages of the other users from its own message and all its received channel symbols , i.e. , user decodes the messages from users and as , for all distinct .we denote ,y_i[2], ] .for source symbols or for channel symbols is clear from context . ]note that utilizing _feedback _ is permitted in our system model .this is commonly referred to as the _ unrestricted _ mwrc ( cf .the _ restricted _ mwrc ) .we will see later that for the classes of source / channel combinations for which we find the minimum source - channel rate , feedback is not used .this means that feedback provides no improvement to source - channel rate for these cases .user makes a decoding error if .we define as the probability that one or more users make a decoding error , and say that source - channel rate is _ achievable _ if the following is true : for any , there exists at least one block code of source - channel rate with .the aim of this paper is to find the infimum of achievable source - channel rates , denoted by . for the rest of the paper, we refer to as the minimum source - channel rate .theoretical interest aside , the finite - field channel considered in this paper shares two important properties with the awgn channel ( commonly used to model wireless environments ) .firstly , the channel is linear , i.e. 
, the channel output is a function of the sum of all inputs .secondly , the noise is additive .sharing these two properties , optimal coding schemes derived for the finite - field channel shed light on how one would code in awgn channels .for example , the optimal coding scheme derived for the finite - field mwrc with independent sources is used to prove capacity results for the awgn mwrc with independent sources .we will now state the main result of this paper .the technical terms ( in italics ) in the theorem will be defined in section [ sec : definition ] following the theorem .[ theorem : main ] the minimum source - channel rate is given by if the sources have any one of the following : 1 ._ almost - balanced conditional mutual information _ , or 2 ._ skewed conditional entropies _ ( on any _ symmetrical _ finite - field channel ) , or 3 . _their common information equals their mutual information_. for cases 1 and 2 , we derive the achievability ( upper bound ) of using existing ( i ) slepian - wolf source coding and ( ii ) functional - decode - forward channel coding for independent sources .we abbreviate this pair of source and channel coding scheme by sw / fdf - is .we derive a lower bound using cut - set arguments . while the achievability for these two cases is rather straightforward, what we find interesting is that using the scheme for independent messages is actually optimal for two classes of source / channel combinations .furthermore , although the source - channel rates achievable using sw / fdf - is can not be expressed in a closed form , we are able to derive closed - form conditions for two classes of sources where the achievability of sw / fdf - is matches the lower bound . in sw/ fdf - is , the source coding while compressing destroys the correlation among the sources , and hence channel coding for independent sources is used . for case 3 ,the sources have their common information equal their mutual information , meaning that each source is able to identify the parts of the messages it has in common with other source(s ) . for this case, we again use slepian - wolf source coding , but we conserve the parts that the sources have in common .we then design a _channel coding scheme that takes the common parts into account . here, the challenge is to optimize the functions of different parts that the relay should decode .we show that the new coding scheme is able to achieve . for all three cases ,the coding schemes are derived based on the separate source - channel coding architecture .also , for cases 1 and 3 , is found when only the sources satisfy certain conditions , and this is true independent of the underlying finite - field channel , i.e. , any and any noise distribution . in this section ,we define the technical terms in theorem [ theorem : main ] .[ def : symmetrical ] a finite - field mwrc is _ symmetrical _ if otherwise , we say that the channel is _we can think of as the noise level on the downlink from the relay to user .so , a symmetrical channel requires that the downlinks from the relay to all the users are equally noisy .we do not impose any condition on the uplink noise level , .[ def : abcmi ] the sources are said to have _ almost - balanced conditional mutual information _( abcmi ) if for all distinct .otherwise , the sources are said to have _ unbalanced conditional mutual information_. 
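since the inequalities defining abcmi reduce to comparisons among the three conditional mutual informations (each pairwise mutual information conditioned on the third source must not exceed the sum of the other two, as spelled out in the information-diagram discussion of the appendix), they are easy to test numerically for a given joint distribution. the sketch below (python) computes the three quantities from a joint pmf over a finite alphabet and checks the almost-balanced condition; the example pmf is an arbitrary random one, not a distribution taken from the paper.

```python
import numpy as np
from itertools import permutations

def cond_mutual_info(p, i, j, k):
    """I(W_i; W_j | W_k) in bits, for a joint pmf p indexed as p[w1, w2, w3]."""
    q = p.transpose((i, j, k))                 # reorder axes to (i, j, k)
    p_ik = q.sum(axis=1, keepdims=True)
    p_jk = q.sum(axis=0, keepdims=True)
    p_k = q.sum(axis=(0, 1), keepdims=True)
    mask = q > 0
    return float(np.sum(q[mask] * np.log2((q * p_k)[mask] / (p_ik * p_jk)[mask])))

def has_abcmi(p, tol=1e-12):
    # I(Wi;Wj|Wk) <= I(Wi;Wk|Wj) + I(Wj;Wk|Wi) for every assignment of the roles
    v = {(i, j, k): cond_mutual_info(p, i, j, k) for (i, j, k) in permutations(range(3))}
    return all(v[(i, j, k)] <= v[(i, k, j)] + v[(j, k, i)] + tol
               for (i, j, k) in permutations(range(3)))

rng = np.random.default_rng(0)
p = rng.random((4, 4, 4))
p /= p.sum()                                   # an arbitrary joint pmf, for illustration
print("abcmi holds:", has_abcmi(p))
```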
putting it another way , for unbalanced sources , we can always find a user , such that for some and distinct .[ def : sce ] sources with unbalanced conditional mutual information are said to have _ skewed conditional entropies _ ( sce ) if , in addition to , for the same as in .[ def : sce ] sources with unbalanced conditional mutual information are said to have _ skewed conditional entropies _( sce ) if , in addition to , for the same as in .lastly , we define _ common information _ in the same spirit as gcs and krner . for two users ,gcs and krner defined common information as a value on which two users can _ agree _ ( using the terminology of witsenhausen ) .the common information between two random variables can be as large as mutual information ( in the shannon sense ) , but no larger .the concept of common information was extended to multiple users by tyagi et al . , where they considered a value on which all users can agree . in this paper, we further extend common information to values on which different subsets of users can agree .we now formally define a class of sources , where their common information equals their mutual information .[ def : cc ] three correlated random variables are said to have their common information equal their mutual information if there exists four random variables , , , and such that for some deterministic functions , and we give graphical interpretations using information diagrams for sources that have abcmi and sce in appendix [ appendix : abcmi ] , and examples of sources that have abcmi and their common information equal their mutual information in section [ section : conclusion ] .definitions [ def : abcmi ] and [ def : sce ] are mutually exclusive , but definitions [ def : cc ] and [ def : abcmi ] ( or [ def : cc ] and [ def : sce ] ) are not .this means correlated sources that have their common information equal their mutual information must also have either abcmi , sce , or unbalanced mutual information without sce .this leads to the graphical summary of the results of theorem [ theorem : main ] in figure [ fig : results ] .the rest of this paper is organized as follows : we show a lower bound and an upper bound ( achievability ) to in section [ section : lower ] . in section [ section : upper-1 - 2 ] , we show that for cases 1 and 2 in theorem [ theorem : main ] , the lower bound is achievable . in section [section : upper-3 ] , we propose a coding scheme that takes common information into account , and show the source - channel rate achievable using this new scheme matches the lower bound .we conclude the paper with some discussion in section [ section : conclusion ] .denote the rhs of as we first show that is a lower bound to . using cut - set arguments , we can show that if source - channel rate is achievable , then , \label{eq : upper-2}\end{aligned}\ ] ] , \label{eq : upper-2}\end{aligned}\ ] ] for all distinct . 
here follows from mohajer et al .( 11)(12 ) ) and follows from ong et al .* section iii ) .re - arranging the equation gives the following lower bound to all achievable source - channel rates hence also to : [ lemma : lower - bound ] for any three - user finite - field mwrc with correlated sources , the minimum source - channel rate is lower bounded as we now present the result of sw / fdf - is coding scheme that first uses slepian - wolf source coding for the noiseless mwrc with correlated sources , followed by functional - decode - forward for independent sources ( fdf - is ) channel coding for the mwrc .this scheme achieves the following source - channel rates : [ lemma : achievability - separate ] for any three - user finite - field mwrc with correlated sources , sw / fdf - is achieves all source - channel rates in , where where is the set of real numbers .so , the minimum source - channel rate is upper bounded as where is the set of real numbers .so , the minimum source - channel rate is upper bounded as the proof is based on random coding arguments and can be found in appendix [ appendix : slepian - wolf - fdf ] .the variables are actually the channel code rates , i.e. , the number of message bits transmitted by user per channel use . from lemmas [ lemma : lower - bound ] and [ lemma : achievability - separate ] , we have the following result : [ corollary : separation ] for a three - user finite - field mwrc , if , then , meaning that the minimum source - channel rate is known and is achievable using sw / fdf - is .[ remark : separation ] the collection of source / channel combinations that satisfy corollary [ corollary : separation ] forms a class where the minimum source - channel rate is found , in addition to theorem [ theorem : main ]. the challenge , however , is to characterize in closed form classes of source / channel combinations for which . for this , we need to guarantee the existence of three positive numbers and satisfying the inequalities in for every .next , we will show that for cases 1 and 2 in theorem [ theorem : main ] , sw / fdf - is achieves all source - channel rates .in this subsection , we will show that if the sources have abcmi , then .since any relies on the existence of channel code rates , we first show the following proposition : [ proposition : existence ] consider sources with abcmi . given any source - channel rate , and any positive number , we can always find positive and such that it can be shown that choosing + \frac{\delta}{4 } , \label{eq : case-1-r1-r3}\ ] ] + \frac{\delta}{4 } , \label{eq : case-1-r1-r3}\end{aligned}\ ] ] for all distinct satisfies . the expression in the square brackets is non - negative due to the abcmi condition . with this result ,we now prove case 1 of theorem [ theorem : main ] .we need to show that any source - channel rate is achievable , i.e. , the source - channel rate for any , lies in . here , is independent of and . for a source - channel rate in ,we choose as in . substituting into , the second inequality in is satisfied . also , imply the first inequality in .hence , .this proves case 1 in theorem [ theorem : main ] . we need to show that if the sources have sce and the channel is symmetrical , then the source - channel rate in is achievable for any . recall that sources that have sce must have unbalanced conditional mutual information , for which we can always re - index the users as , , and satisfying for some fixed . 
for achievability in lemma [ lemma : achievability - separate ], we first show the existence of satisfying the following conditions : [ proposition : existence-2 ] consider sources with unbalanced mutual information . given any source - channel rate , and any positive number , we can always find positive and such that for defined in .constraint implies the following : first , we can always choose a positive number as in .in addition , we choose + \frac{\delta}{4 } , \label{eq : case-2-r2}\\ \kappa r_c & = h(w_c|w_a , w_b ) + \frac{1}{2 } \big [ i(w_b;w_c|w_a ) + i(w_a;w_c|w_b ) - i(w_a;w_b|w_c)\big ] + \frac{\delta}{4}. \label{eq : case-2-r3}\end{aligned}\ ] ] + \frac{\delta}{4 } , \label{eq : case-2-r2}\\ \kappa r_c & = h(w_c|w_a , w_b ) + \frac{1}{2 } \big [ i(w_b;w_c|w_a)\nonumber \\ & \quad + i(w_a;w_c|w_b ) - i(w_a;w_b|w_c)\big ] + \frac{\delta}{4}. \label{eq : case-2-r3}\end{aligned}\ ] ] substituting into , we get ; substituting into , we get . summing different pairs from , , and ,we get .furthermore , for a symmetrical channel , we can define so , for sce and for symmetrical channels imply that the source - channel rate in equals hence , we only need to show that the source - channel rate is achievable for any .we first choose and as in , , and , respectively . from , we get - \frac{\delta}{2 } \label{eq : b } \\ & < \kappa \big[\log f - \max\ { h(n_0 ) , h_\text{downlink } \ } \big ] , \label{eq:2a-1}\\ \kappa ( r_a + r_b ) & = h(w_a , w_b|w_c ) + \frac{\eta}{2 } + \frac{\delta}{2 } \label{eq : c } \\ & \leq h(w_b , w_c|w_a ) - \frac{\eta}{2 } + \frac{\delta}{2 } \label{eq : d}\\ & < \kappa \big[\log f - \max\ { h(n_0 ) , h_\text{downlink } \ } \big ] , \label{eq:2a-2}\\ \kappa ( r_a + r_c ) & = h(w_a , w_c|w_b ) + \frac{\eta}{2 } + \frac{\delta}{2 } \label{eq : e}\\ & \leq h(w_b , w_c|w_a ) - \frac{\eta}{2 } + \frac{\delta}{2 } \label{eq : f } \\ &< \kappa \big[\log - \max\ { h(n_0 ) , h_\text{downlink } \ } \big ] , \label{eq:2a-3}\end{aligned}\ ] ] where and follow from ; , , and follow from . , which is determined by the sources correlation , is strictly greater than zero ; see its definition in .] this means the second inequality in is satisfied . from , we know that the first inequality in is also satisfied . hence , the source - channel rate is indeed achievable for any . in this section , we give an example showing that sw / fdf - is can be suboptimal .consider the following sources : , , and , where is uniformly distributed in , is uniformly distributed in , and are each uniformly distributed in . in addition , all and are mutually independent . here, each represents common information between and . for the channel ,let the finite field be and be modulo- addition , i.e. , .furthermore , let for , and for ; let for , and for , for all . for this source and channel combinations , we have , , , , , , for all .one can verify that these sources have unbalanced conditional mutual information and do not have sce . in this example ,suppose that is achievable using sw / fdf - is .from lemma [ lemma : achievability - separate ] , there must exists three positive real numbers , , and such that from , we must have that and .these imply . 
hence , and can not be simultaneously true .this means the source - channel rate 1.05 is not achievable using sw / fdf - is .the sources described here have their common information equal their mutual information .we will next propose an alternative scheme that is optimal for this class of sources .the following proposed scheme achieves all source - channel rates for this source / channel combination , meaning that the minimum source - channel rate for this example is .so sw / fdf - is is strictly suboptimal for this source / channel combination .while the achievability for cases 1 and 2 uses existing source and channel coding schemes , for case 3 ( i.e. , sources that have their common information equal their mutual information ) , we will use an existing source coding scheme and design a new channel coding scheme to achieve all source - channel rates .while case 3 in general requires a new achievability scheme , these sources may have abcmi ( as shown in figure [ fig : results ] ) .for such cases , optimal codes can also be obtained using the coding scheme for case 1 . in this section , without loss of generality , we let this means we can re - write as follows : as mentioned earlier , we will use a separate - source - channel - coding architecture , where we first perform source coding and then channel coding .we will again use random coding arguments . more specifically, we will use random linear block codes for channel coding .we encode each , v_\mathcal{s}[2 ] , \dotsc , v_\mathcal{s}[m])$ ] to , which is a length- finite - field ( of size ) vector , for all ( see definition [ def : cc ] for the definition of ) .we also encode each to , which is a length- finite - field vector .so , each message is encoded into four subcodes , e.g. , is encoded into . some subcodes the common parts are shared among multiple sources . using the results of distributed source coding , if is sufficiently large and if then we can decode to , to , and to with an arbitrarily small error probability .we show the proof in appendix [ app : gk - source - coding ] .after source coding , user 1 has .in order for it to decode , it must receive from the other users through the channel .similarly , users 2 and 3 must each obtain subcodes that they do not already have through the channel .in contrast to the source coding used for cases 1 and 2 , here , we have generated source codes where the users share some subcodes .so , instead of using existing fdf - is channel codes ( designed for independent sources ) , we will design channel codes that take the common subcodes into account . [cols="^,^,^,^,^,^,^,^",options="header " , ] the messages in each column are transmitted simultaneously .note that in the third row of messages in the table , is split into two parts if and only if .else , the entire message will be transmitted in the first column , i.e. , together with . since , the message can always fit into the first and third columns , with the remaining `` space '' padded with zero , denoted by .the message is transmitted in a similar way .the relay decodes the modulo addition of the messages in each column . if is sufficiently large and if is satisfied , then the relay can reliably decode its intended messages , i.e. , .the relay broadcasts on the downlink . 
using lemma [ lemma : downlnk ], we can show that if , , and are satisfied , then each user can reliably decode , from which it can recover the messages of the other two users .this completes the proof for the achievability of case 3 in theorem [ theorem : main ] .note that the lower bound in lemma [ lemma : lower - bound ] and the achievable source - channel rates in lemma [ lemma : achievability - separate ] are applicable to the general finite - field mwrc with correlated sources , in addition to cases 1 and 2 in theorem [ theorem : main ] .however , the coding technique in section [ section : upper-3 ] is useful only for sources that have their common information equal their mutual information . besides the coding schemes considered in this paper, one could treat the uplink ( as a multiple - access channel with correlated sources ) and the downlink ( as a broadcast channel with receiver side information ) separately to get potentially different achievable source - channel rates . in this case , on the uplink , we let the relay decode all three users messages .an achievable channel rate region for the two - sender multiple - access channel with correlated sources was found by cover et al .then , on the downlink , we can use the result by tuncel for the relay to transmit to the users taking into account that each user has side information . extending the results of cover et al .* theorem 2 ) to three senders , for the relay to be able to reliably decode on the uplink , must satisfy the following : , \\\rightarrow & & \kappa & > \frac{h(w_1,w_2,w_3)}{\log f - h(n_0)}. \label{eq : cover - tuncel}\end{aligned}\ ] ] , \\\rightarrow & & \kappa & > \frac{h(w_1,w_2,w_3)}{\log f - h(n_0)}. \label{eq : cover - tuncel}\end{aligned}\ ] ] for the case where each user has non - zero message , for all .comparing to , we see that this coding strategy is strictly suboptimal for all cases 13 in theorem [ theorem : main ] when .however , this strategy _ may _ achieve better ( i.e. , lower ) source - channel rates than sw / fdf - is in general .we leave the derivation of the rates to the reader it is straightforward given the results by cover et al . and tuncel . in this paper, we have only investigated the separate - source - channel coding architecture without feedback .though we have identified source / channel combinations where the minimum source - channel rate is found , the problem remains open in general .two directions which one could explore are _ joint _ source - channel coding and _feedback_. we now give a numerical example where none of the coding schemes in this paper achieves the lower bound .let the sources be , , and where all are independent random variables , and denotes modulo - two addition .we choose , , , and . for this choice, we have , , . for the channel , let , which gives .we choose and for , giving . for each , we set , , and for , giving .this means , , and for all . in this example , .if is achievable , then ( from lemma [ lemma : achievability - separate ] ) we must be able to find some and satisfying , and . since the conditions can not be simultaneously met , is not achievable using sw / fdf - is . 
as the sources common information does not equal their mutual information , we can not use the coding scheme derived in section [ section : upper-3 ] .this example shows that the separation schemes derived in this paper can not achieve the lower bound in some cases .however , to show that separation is suboptimal , one has to explore all possible separation schemes and show that some joint source - channel scheme achieves a better source - channel rate . in this paper ,we have identified three classes of source / channel combinations where the minimum source - channel rate is found .the first class is sources that have almost - balanced conditional mutual information ( abcmi ) .an example of sources that have abcmi is interchangeable random variables in the sense of chernoff and teicher , where `` every subcollection of the random variables has a joint distribution which is a symmetric function of its arguments . ''this can model sensor networks where the sensors are equally - spaced on a circle to detect a phenomenon occurring at the center of the circle .however , the abcmi conditions are looser than that of interchangeable random variables as the former only requires that mutual information between any two sources has _ roughly _ the same value ( see appendix [ appendix : abcmi ] ) , and also , the marginal distribution of the variables can be vastly different .another class for which we have derived the minimum source - channel rate is when the sources have their common information equal their mutual information .an example is correlated sources in the sense of han , where the sources can be written as , , and , where and are _ mutually independent _ random variables . using sensor networks as an example again , each node here has multiple sensing capabilities , e.g. , temperature , light , sound . as these measurements display different behavior spatially , some remain constant across subsets of sensors , e.g. , nodes 1 and 2 always sense the same temperature but different light intensity . as for the class of sources with skewed conditional entropies , the conditions appear to be purely mathematical in nature .the authors would like to thank roy timo for discussions on gcs and krner s common information and other helpful comments .figure [ fig : case1 - 2 ] shows the relationship among the entropies and mutual information for the three source messages , , and for the cases described above .referring to figure [ fig : case-1 ] , the shaded areas represent the mutual information between any two source messages given the third source message .for abcmi , we have that any of the three shaded areas must not be bigger than the sum of the other two shaded areas .suppose that the sources do not have abcmi , then they must have unbalanced conditional mutual information , i.e. , we can find a user where is larger than the sum of and by an amount ( see figure [ fig : case-2a ] ) .in addition , for sources that have sce , we also have that for the two messages , and , whose mutual information conditioned on , i.e. , , is larger than the sum of the other two pairs by the amount , their entropy conditioned on , i.e. , , is also greater than that of any other pair ( conditioned on the message of the third user ) by at least . 
the information diagram for sce is depicted in figure [ fig : case-2a ] .we first quote two existing results of ( i ) channel coding for the three - user mwrc with independent messages and ( ii ) source coding with side information .the following channel - coding setting assumes that the source messages are independent : [ lemma : channel - coding ] consider the mwrc depicted in figure [ fig : mwrc ] , where each user s message is uniformly distributed in ( we consider a single copy , i.e. , ) , and where , , and are independent .the channel is used times according to the block code ( with ) specified in section [ sec : model ] .each user can then decode the messages of the other two users ( with its received channel symbols and its own message ) with an arbitrarily small error probability if is sufficiently large , and if for all distinct .the following source - coding setting assumes that the channel is noiseless : [ lemma : source - coding ] consider only the three users with their respective length- messages generated according to ( [ eq : source ] ) .each user encodes its messages to an index , and gives its index to the other two users .each user can then decode the messages of the other two users ( with the received indices and its own message ) with an arbitrarily small error probability if is sufficiently large , and if for all distinct .note that wyner et al . derived a similar result with an additional constraint on the relay . in their setup , the users present their indices to a relay ; the relay in turn re - encodes and presents its index to the users .we use slepian - wolf source coding .each user encodes its length- message to an index , satisfying and , for .each user randomly generates a dither uniformly distributed in , and forms its encoded message .the dithers are made known to all nodes .now , , , and are mutually independent , and each is uniformly distributed in .we then use fdf - is channel coding for the users to exchange the encoded independent messages , , and via the relay in channel uses . from lemma[ lemma : channel - coding ] , if is satisfied , then each user can reliably recover and . knowing the dithers , it can also recover and .from lemma [ lemma : source - coding ] , if and are satisfied , then each user can reliably recover . noting that and defining , the conditions for achievability , i.e. , , , and , can be expressed as .we perform source coding for correlated sources . consider the source message . clearly , , since are deterministic functions of . now since captures all information that and have in common ( because ) , we have .similarly .so , we can reliably reconstruct from if is sufficiently large , and if the following inequalities hold ( * ? ? ? 
* theorem 2 ) : \log f ) /m & > h(w_1 , v_{12 } | v_{13 } , v_{123 } ) = h(w_1|w_2,w_3 ) + i(w_1;w_2|w_3 ) , \\ ( [ \ell_{1 } + \ell_{13 } ] \log f ) /m & > h(w_1 , v_{13 } |v_{12 } , v_{123 } ) = h(w_1|w_2,w_3 ) + i(w_1;w_3|w_2 ) , \\ ( [ \ell_{1 } + \ell_{123 } ] \log f ) /m & > h(w_1 , v_{123 } | v_{12 } , v_{13 } ) = h(w_1|w_2,w_3 ) , \\ ( [ \ell_{12 } + \ell_{13 } ] \log f ) /m & > h(v_{12 } , v_{13 } | w_1 , v_{123 } ) = 0 , \\ & \vdots\\ ( [ \ell_{1 } + \ell_{12 } + \ell_{13 } ] \log f ) /m & > h(w_1 , v_{12 } , v_{13 } | v_{123 } ) \nonumber \\ & = h(w_1|w_2,w_3 ) + i(w_1;w_2|w_3 ) + i(w_1;w_3|w_2 ) , \\ ( [ \ell_{1 } + \ell_{12 } + \ell_{123 } ] \log f ) /m & > h(w_1 , v_{12 } , v_{123 } | v_{13 } ) = h(w_1|w_2,w_3 ) + i(w_1;w_2|w_3 ) , \\ ( [ \ell_{1 } + \ell_{13 } + \ell_{123 } ] \log f ) /m & > h(w_1 , v_{13 } , v_{123 } | v_{12 } ) = h(w_1|w_2,w_3 ) + i(w_1;w_3|w_2 ) , \\ ( [ \ell_{12 } + \ell_{13 } + \ell_{123}\ \logf ) /m & > h(v_{12 } , v_{13 } , v_{123}|w_1 ) = 0 , \\ ( [ \ell_{1 } + \ell_{12 } + \ell_{13 } + \ell_{123 } ] \log f ) /m &= h(w_1 , v_{12 } , v_{13 } , v_{123 } ) = h(w_1).\end{aligned}\ ] ] \log f ) /m & > h(w_1 , v_{12 } | v_{13 } , v_{123})\\ & = h(w_1|w_2,w_3)\\ & \quad + i(w_1;w_2|w_3 ) , \\ ( [ \ell_{1 } + \ell_{13 } ] \log f ) /m & > h(w_1 , v_{13 } |v_{12 } , v_{123})\\ & = h(w_1|w_2,w_3)\\ & \quad + i(w_1;w_3|w_2 ) , \\ ( [ \ell_{1 } + \ell_{123 } ] \log f ) /m & > h(w_1 , v_{123 } | v_{12 } , v_{13})\\ & = h(w_1|w_2,w_3 ) , \\ ( [ \ell_{12 } + \ell_{13 } ] \log f ) /m & > h(v_{12 } , v_{13 } | w_1 , v_{123 } ) = 0 , \\ & \vdots\\ ( [ \ell_{1 } + \ell_{12 } + \ell_{13 } ] \log f ) /m & > h(w_1 , v_{12 } , v_{13 } | v_{123 } ) \nonumber \\ & = h(w_1|w_2,w_3)\\ & \quad + i(w_1;w_2|w_3)\\ & \quad + i(w_1;w_3|w_2 ) , \\ ( [ \ell_{1 } + \ell_{12 } + \ell_{123 } ] \log f ) /m & > h(w_1 , v_{12 } , v_{123 } | v_{13})\\ & = h(w_1|w_2,w_3)\\ & \quad + i(w_1;w_2|w_3 ) , \\ ( [ \ell_{1 } + \ell_{13 } + \ell_{123 } ] \log f ) /m & > h(w_1 , v_{13 } , v_{123 } | v_{12})\\ & = h(w_1|w_2,w_3)\\ & \quad + i(w_1;w_3|w_2 ) , \\ ( [ \ell_{12 } + \ell_{13 } + \ell_{123}\ \log f ) /m & > h(v_{12 } , v_{13 } , v_{123}|w_1 ) = 0,\end{aligned}\ ] ] \log f ) /m & = h(w_1 , v_{12 } , v_{13 } , v_{123})\\ & = h(w_1).\end{aligned}\ ] ] here , we need to consider all possible non - empty subsets of on the lhs .we have omitted some trivial inequalities where the rhs equals zero . repeating the above for sources 2 and 3 , and simplifying , we have .d. gndz , e. tuncel , and j. nayak , `` rate regions for the separated two - way relay channel , '' in _ proc .46th allerton conf .control comput .( allerton conf . ) _ , monticello , usa , sept . 2326 2008 , pp . 13331340 . c. schnurr , s. stanczak , and t. j. oechtering , `` achievable rates for the restricted half - duplex two - way relay channel under a partial - decode - and - forward protocol , '' in _ proc .theory workshop ( itw ) _ , porto , portugal , may 59 2008 , pp . 134138 .x. liang , s. jin , x. gao , and k .- k .wong , `` outage performance for decode - and - forward two - way relay network with multiple interferers and noisy relay , '' _ ieee trans ._ , vol .61 , no . 2 ,521531 , feb .s. j. kim , b. smida , and n. devroye , `` capacity bounds on multi - pair two - way communication with a base - station aided by a relay , '' in _ proc .inf . theory ( isit ) _ , austin , usa , june 1318 2010 , pp . 425429 .r. timo , g. lechner , l. ong , and s. j. 
johnson , `` multi - way relay networks : orthogonal uplink , source - channel separation and code design , '' _ ieee trans . _ , vol . 61 , no . 2 , pp . 753-768 , feb .

j. l. massey , `` channel models for random - access systems , '' in _ performance limits in communication theory and practice , nato advances studies institutes series e142 _ , j. k. skwirzynski , ed. , kluwer academic , 1988 , pp . 391-402 .

a. jain , d. gündüz , s. r. kulkarni , h. v. poor , and s. verdú , `` energy - distortion tradeoffs in gaussian joint source - channel coding problems , '' _ ieee trans . inf . theory _ , vol . 58 , no . 5 , pp . 3153-3168 , may 2012 .
this paper studies the three - user finite - field multi - way relay channel , where the users exchange messages via a relay . the messages are arbitrarily correlated , and the finite - field channel is linear and is subject to additive noise of arbitrary distribution . the problem is to determine the minimum achievable source - channel rate , defined as channel uses per source symbol needed for reliable communication . we combine slepian - wolf source coding and functional - decode - forward channel coding to obtain the solution for two classes of source and channel combinations . furthermore , for correlated sources that have their common information equal their mutual information , we propose a new coding scheme to achieve the minimum source - channel rate . bidirectional relaying , common information , correlated sources , linear block codes , finite - field channel , functional - decode - forward , multi - way relay channel
the virial theorem ( vt ) ( chandrasekhar & fermi 1953 ) provides a direct way of analyzing the energy balance of a bounded region in a flow , describing the effect of various forces either in driving changes in the structure of a dynamical system or in determining the character of its equilibrium .the virial theorem can be cast in either eulerian or lagrangian form .the latter applies to a fluid parcel following the flow , i.e. , the volume and its bounding surface will generally be time - dependent .the eulerian version of the virial theorem ( evt ) applies to a fixed volume rather than a fixed mass ( e.g. , mckee & zweibel 1992 ) .this is best suited for application to fixed - grid , eulerian numerical simulations .however , when considering a large region of the ism , clouds constitute an ensemble in which each one is _ moving _ and _ morphing _ ( see vzquez - semadeni , this volume ) .thus , for studying the evt we have two options : \(a ) we can consider all clouds in the simulation , or `` lab '' frame ( ballesteros - paredes & vazquez - semadeni 1997 ) .however , in this case , the clouds bulk motions will appear as internal kinetic energy , and the energy budget of the volume defining the cloud is not exactly the energy budget of the cloud ; \(b ) we can consider a different frame for each cloud , so that the frame is moving with the cloud s center of mass velocity with respect to the lab frame , and has its origin at the cloud s center of mass . in the present work ,we briefly discuss the choice of reference frame , and then present preliminary statistical results of virial balance in two three - dimensional simulations of mhd isothermal turbulence with self - gravity at a resolution of grid points , and with rms mach number 2.2 .one simulation has a box size equal to the jeans length , and the other has .the transition from the lab frame is not completely straightforward .we first considered the possibility of staying in the lab frame , but taking a volume that moves with the velocity of the cloud s center of mass , although maintaining a fixed shape in time .however , in this case , the form of the evt is altered , and several extra terms appear that have no simple interpretation . due to these difficulties ,we finally have chosen to completely move to each cloud s frame , as described above . in this case, the evt remains in its usual form , at the expense that plots for the cloud ensemble comparing the various terms in the evt contain data from many different frames one for each cloud .the evt is then where is the moment of the inertia of the cloud , is the thermal energy ; is the kinetic energy , is the surface pressure term , is the surface kinetic term ; is the magnetic energy and is the surface magnetic term , with being the maxwell stress tensor ; is the gravitational term ( _ not _ equal to the gravitational energy ) , and is the flux of moment of inertia through the surface of the cloud ( sums over repeated indices are assumed ) .we define the clouds in the numerical simulations as connected sets of pixels with densities larger than an arbitrary threshold value . 
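on a fixed grid, the clump identification just described (connected sets of pixels above a density threshold) reduces to a connected-component labelling, from which the bulk quantities entering the virial terms follow directly. a minimal sketch in python, using random stand-in fields and an arbitrary threshold rather than the actual simulation data; the mass-averaged bulk velocity computed per cloud is the quantity that is subtracted from the velocity field below, so that only the fluctuating part enters the kinetic terms.

```python
import numpy as np
from scipy import ndimage

rng = np.random.default_rng(0)

# stand-ins for one snapshot of the simulation on a 64^3 grid
rho = rng.lognormal(mean=0.0, sigma=1.0, size=(64, 64, 64))    # density field
vel = rng.normal(size=(3, 64, 64, 64))                         # velocity field (vx, vy, vz)

threshold = 5.0                                                # arbitrary density threshold
labels, n_clouds = ndimage.label(rho > threshold)              # connected sets of pixels
# note: periodic boundaries are not stitched together in this simple version

clouds = []
for cid in range(1, n_clouds + 1):
    cells = labels == cid
    mass = rho[cells].sum()                                    # cell volume taken as unity
    # bulk, mass-averaged velocity of the cloud, to be removed before
    # evaluating the kinetic terms of the virial theorem
    v_bulk = np.array([(rho[cells] * vel[c][cells]).sum() for c in range(3)]) / mass
    clouds.append((cid, int(cells.sum()), mass, v_bulk))

print(n_clouds, "clouds above threshold", threshold)
```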
in order to measure only the contribution associated with the fluctuations, we subtract from the velocity the bulk, mass-averaged velocity of the cloud, defined as , where is the mass of the cloud, and subtract from the position of each pixel the position of the cloud's center of mass. we consider several density thresholds to enlarge the cloud ensemble, and consider that a cloud maintains its identity as long as it does not split into several components upon increasing the threshold. we find the following preliminary results, consistent with those of ballesteros-paredes & vázquez-semadeni (1997):

*1.* the time derivative terms (i.e., the lhs and the last term in the rhs of equation [1]) are dominant in the overall virial balance, being much larger than the remaining volume and surface terms (fig. 1). we refer to these terms as `` geometrical '', since they correspond to the (time derivatives of the) mass distribution in the cloud and its flux through the cloud's boundary. this implies that, far from being in quasi-hydrostatic equilibrium, the clouds are continually changing shape and `` morphing ''.

*2.* the surface terms, which are often neglected in virial-balance studies, have magnitudes comparable to those of the volumetric ones (fig. 2).

*3.* however, the surface and the volume terms do not cancel out exactly, leaving a net contribution for shaping the clouds and balancing gravity. in fact, we propose that the correct diagnostic for whether a cloud will undergo gravitational collapse is ; otherwise, the cloud is transient. indeed, in the simulation with , two clouds are observed to be collapsing by the final time. these two clouds satisfy the above criterion (fig. 3, left). note that comparing the absolute value of the gravitational term vs. only the volume term would suggest that none of the clouds is collapsing (fig. 3, right), contrary to what occurs in the simulation. thus, we conclude that _the correct diagnostic for determining gravitational binding must include the contribution of the surface terms, or else the gravitational term is underestimated._
we discuss the virial balance of all members of a cloud ensemble in numerical simulations of self-gravitating mhd turbulence. we first discuss the choice of reference frame for evaluating the terms entering the virial theorem (vt), concluding that the balance of each cloud should be measured in its own reference frame. we then report preliminary results suggesting that a) the clouds are far from virial _equilibrium_, with the `` geometric '' (time derivative) terms dominating the vt. b) the surface terms in the vt are as important as the volume ones, and tend to decrease the action of the latter. c) this implies that gravitational binding should be considered including the surface terms in the overall balance.
computer science and physics , although different disciplines in essence , have been closely linked since the birth of the first one . more recently, computer science has met together with statistical physics in the so called combinatorial problems and their relation to phase transitions and computational complexity ( see for a compendium of recent works ) .more accurately , algorithmic phase transitions ( threshold property in the computer science language ) , i.e. sharp changes in the behavior of some computer algorithms , have attracted the attention of both communities .it has been shown that phase transitions play an important role in the resource growing classification of random combinatorial problems .the computational complexity theory is therefore nowadays experimenting a widespread growth , melting different ideas and approaches coming either from theoretical computation , discrete mathematics , and physics .for instance , there exist striking similarities between optimization problems and the study of the ground states of disordered models .+ problems related to random combinatorics appear typically in discrete mathematics ( graph theory ) , computer science ( search algorithms ) or physics ( disordered systems ) .the concept of sudden change in the behavior of some variables of the system is intimately linked to this hallmark .for instance , erd and renyi , in their pioneering work on graph theory , found the existence of _ zero - one _ laws in their study of cluster generation .these laws have a clear interpretation in terms of phase transitions , which appear extensively in many physical systems .more recently , computer science community has detected this behavior in the context of algorithmic problems .the so called _ threshold phenomenon _ distinguishes zones in the phase space of an algorithm where the problem is , computationally speaking , either tractable or intractable .it is straightforward that these three phenomena can be understood as a unique concept , in such a way that building bridges between each other is an appealing idea .+ related to the concept of a phase transition is the task of classifying combinatorial problems .the theory of computational complexity distinguishes problems which are tractable , that is to say , solvable in polynomial time by an efficient algorithm , from those which are not .the so - called class gathers problems that can be solved in polynomial time by a non - deterministic turing machine .this class generally includes many hard or eventually intractable problems , although this classification is denoted _ worst - case _ , that is to say , a rather pessimistic one , since the situations that involve long computations can be eventually rare . 
in recent years, numerical evidence suggests the presence of the threshold phenomenon in problems. these phase transitions may in turn characterize the _average-case_ complexity of the associated problems, as pointed out recently.

in this paper we discuss a stochastic algorithm inspired by artificial chemistry models that has already been studied from a statistical physics point of view. this algorithm generates prime numbers by means of a stochastic integer decomposition. a continuous phase transition has been detected and described in a previous work: we can distinguish a stationary phase where the ability to produce primes is practically null and a stationary phase where the algorithm is able to reduce the whole system into primes. it is straightforward to reinterpret the model as a search problem which undergoes an algorithmic phase transition related to a change in its computational complexity. in this paper we first aim to give a broader characterization of the system; in this sense this work is a continuation of a previous one. further on, we will situate the model in the context of computational complexity theory, in order to relate its computational complexity to the phase transition present in the system.

the paper thus goes as follows: in section ii we will describe the model, which stands as a stochastic prime number generator. in section iii we will characterize the phase transition present in the system, following the steps depicted in a previous work and providing some new data and additional information. concretely, we will first outline the breaking of symmetry responsible for the order-disorder transition. after defining proper order and control parameters, critical exponents will be calculated numerically from extensive simulations and finite-size scaling analysis. an analytical approach to the system will also be considered at this point, in terms of an annealed approximation. in section iv, we will reinterpret the model as a search problem. we will then show that the system belongs to the class in a worst-case classification. we will find an easy-hard-easy pattern in the algorithm, as is common in many problems, related in turn to the critical slowing down near the transition point. according to , we will finally relate the nature of the phase transition to the average-case complexity of the problem. we will show that while the problem is in , the resource growth is only polynomial. in section v we will conclude.

the fundamental theorem of arithmetic states that every integer greater than one can be expressed uniquely as a product of primes or powers of primes. in a certain manner, prime numbers act as atoms in chemistry: both are irreducible. in this way, composite numbers can be understood as molecules. following this resemblance, the next algorithm has been introduced: suppose that we have a pool of positive integers, from which we randomly extract a certain number of them (this will constitute the system under study). note that the chosen numbers can be repeated, and that the integer is not taken into account. now, given two numbers and taken from the system of numbers, the algorithm collision rules are the following: * rule 1: if , there is no reaction (elastic shock) and the numbers are not modified.
if mod .the reaction will then stand for where stands for the usual notation of a chemical reaction and .* rule 3 : if but mod , no reaction takes place ( elastic shock ) .the result of a reaction will be the extinction of the composed number and the introduction of a simpler one , .+ the algorithm goes as follows : after randomly extracting from the pool a set of numbers , we pick at random two numbers and from the set : this is equivalent to the random encounter of two molecules .we then apply the reaction rules . in order to have a parallel updating, we will establish repetitions of this process ( monte carlo steps ) as a time step .note that the reactions tend to separate numbers into its irreducible elements , as molecules can be separated into atoms .hence , this dynamic when iterated may generate prime numbers in the system .we say that the system has reached stationarity when every collision is elastic ( no more reaction can be achieved ) , whether because every number has become a prime or because rule 2 can not be satisfied in any case -frozen state- .the algorithm then stops .versus , for a pool size .each run is averaged over realizations in order to avoid fluctuations .note that the system exhibits a phase transition which distinguishes a phase where every element of the system becomes a prime in the steady state and a phase with low prime density.,scaledwidth=45.0% ] as stated in the previous section , this algorithm clearly tends to generate primes as far as possible : when the algorithm stops , one may expect the system to have a large number of primes or at least have a frozen state of non - divisible pairs . a first indicator that can evaluate properly this feature is the unit percentage or ratio of primes , that a given system of numbers reaches at stationarity . in figure [ r_y_v ]we present the results of monte carlo simulations calculating , as a function of and for a concrete pool size , the steady values of . every simulation is averaged over realizations in order to avoid fluctuations .we can clearly distinguish in figure [ r_y_v ] two phases , a first one where is small and a second one where the prime number concentration reaches the unity .realizations ) of the elements , for and ( phase with low ) : this one is a uniform distribution u(2,m ) ( note that the distribution is not normalized ) .the right figure stands for the same plot for and ( phase where reaches the unity ) : this one is a power law .,scaledwidth=75.0% ] this is the portrait of a phase transition , where would stand as the control parameter and as the order parameter . 
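Since the collision rules above fully specify the dynamics, the Monte Carlo experiment behind figure [r_y_v] can be reproduced with a short simulation. The following is a minimal Python sketch under our reading of the rules; the convention that one time step equals N random collisions, the cap on the number of sweeps, and all function names are our own choices rather than the authors' specification.

```python
import random

def is_prime(n):
    """Plain trial-division primality test (adequate for the modest pool sizes used here)."""
    if n < 2:
        return False
    d = 2
    while d * d <= n:
        if n % d == 0:
            return False
        d += 1
    return True

def has_reacting_pair(system):
    """True if two distinct values a != b with a | b remain, i.e. rule 2 can still fire."""
    for i, a in enumerate(system):
        for j, b in enumerate(system):
            if i != j and a != b and b % a == 0:
                return True
    return False

def run_generator(N, M, max_sweeps=10_000, rng=random):
    """Stochastic prime number generator.

    N numbers are drawn uniformly, with repetition, from the pool {2, ..., M}.
    One time step (sweep) is taken here to be N random pair collisions.
    Returns the ratio of primes at stationarity and the number of sweeps used.
    """
    system = [rng.randint(2, M) for _ in range(N)]
    for sweep in range(1, max_sweeps + 1):
        for _ in range(N):
            i, j = rng.randrange(N), rng.randrange(N)
            a, b = system[i], system[j]
            if a == b:
                continue                      # rules 1 and 3: elastic shock
            lo, hi = min(a, b), max(a, b)
            if hi % lo == 0:                  # rule 2: the composite hi is replaced by hi / lo
                if system[i] == hi:
                    system[i] = hi // lo
                else:
                    system[j] = hi // lo
        if not has_reacting_pair(system):     # stationarity: every possible collision is elastic
            break
    rho = sum(map(is_prime, system)) / N      # ratio of primes, the order parameter
    return rho, sweep

# sweep N at fixed pool size M: the prime ratio jumps from low values to ~1, and the number
# of sweeps needed to reach stationarity peaks near the transition (the easy-hard-easy
# pattern discussed later in the text)
M = 1000
for N in (5, 10, 20, 40, 80):
    runs = [run_generator(N, M) for _ in range(10)]
    print(N, sum(r for r, _ in runs) / 10, sum(t for _, t in runs) / 10)
```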
in the phase with small , the average steady state distribution of the elements is plotted in the left side of figure ( [ distris ] ) : the distribution is uniform ( note that the vertical scale is zoomed in such a way that if we scale it between ] .again , depends on the pool size .,scaledwidth=45.0% ] ( as explained in the text ) versus , for different values of pool size , in the annealed approximation.,scaledwidth=45.0% ] the system under hands shows highly complex dynamics : correlations take place between the numbers of the system at each time step in a non trivial way .find an analytical solution to the former problem is thus completely out of the focus of this paper .however , an annealed approximation can still be performed .the main idea is to obviate these two - time correlations , assuming that at each time step , the elements are randomly generated .this way , we can calculate , given and , the probability that at a single time step , no pair of molecules between are divisible .thus , will be the probability that there exist at least one reacting pair .note that will somehow play the role of the order parameter , in this oversimplified system .+ in a first step , we can calculate the probability that two molecules randomly chosen from the pool are divisible : where the floor brackets stand for the integer part function . obviously , is the probability that two molecules randomly chosen are not divisible in any case .now , in a system composed by molecules , we can make distinct pairs . however , these pairs are not independent in the present case , so that probability is nt simply . correlations between pairs must be somehow taken into account . at this point , we can make the following ansatz : where characterizes the degree of independence of the pairs .the relation versus is plotted in figure [ campo ] for different values of the pool size .note that for a given , the behavior of is qualitatively similar to , the order parameter in the real system .+ for convenience , in this annealed approximation we will define a threshold as the one for which .this value is the one for which half of the configurations reach an ordered state .this procedure is usual for instance in percolation processes , since the choice of the percolation threshold , related to the definition of a spanning cluster , is somewhat arbitrary in finite size systems . taking logarithms in equation ( [ ansatz ] ) and expanding up to first order, we easily find an scaling relation between and , that reads ^{\alpha } .\label{escala1}\ ] ] this relation suggests that the system s characteristic size is not , as one would expect in a first moment , but . in figure [ scaling1 ]we plot , in log - log , the scaling between and the characteristic size that can be extracted from figure [ campo ] .the best fitting provides a relation of the shape ( [ escala1 ] ) where ( note that the scaling is quite good , what gives consistency to the leading order approximations assumed in equation [ escala1 ] ) . 
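The annealed picture just described can be prototyped in a few lines. In the sketch below, the pairwise divisibility probability is obtained by direct counting over the pool, and the annealed stand-in for the order parameter (the probability that a freshly drawn system contains at least one reacting pair) is estimated by plain sampling, which sidesteps the pair-correlation ansatz at the price of speed. The threshold is located where this probability first crosses 1/2, mirroring the definition above, and a power-law exponent is fitted against M/ln M; that choice of characteristic size anticipates the prime-number-theorem argument made immediately below and is our reading of it, since the symbols themselves are not legible in this extraction. All names and numerical settings are ours.

```python
import math
import random

def pairwise_divisibility_prob(M):
    """Probability that two numbers drawn independently and uniformly from {2, ..., M}
    form a reacting pair, i.e. they differ and the smaller divides the larger."""
    ordered = sum(M // a - 1 for a in range(2, M + 1))   # pairs (a, b) with a < b and a | b
    return 2 * ordered / (M - 1) ** 2

def has_reacting_pair(system):
    """Same check as in the earlier simulation sketch, in compact form."""
    return any(a != b and b % a == 0 for a in system for b in system)

def annealed_P(N, M, samples=400, rng=random):
    """Annealed estimate of the order parameter: probability that a freshly drawn set of
    N numbers from {2, ..., M} contains at least one reacting pair.  (The text instead
    handles pair correlations through an ansatz with an independence exponent; direct
    sampling avoids that assumption.)"""
    hits = sum(has_reacting_pair([rng.randint(2, M) for _ in range(N)]) for _ in range(samples))
    return hits / samples

def threshold_N(M, N_max=300):
    """Smallest N at which the annealed order parameter first reaches 1/2."""
    for N in range(2, N_max):
        if annealed_P(N, M) >= 0.5:
            return N
    raise ValueError("threshold not reached; increase N_max")

def log_log_slope(xs, ys):
    """Least-squares slope of log(y) against log(x), i.e. the exponent in y ~ x**alpha."""
    lx, ly = [math.log(x) for x in xs], [math.log(y) for y in ys]
    mx, my = sum(lx) / len(lx), sum(ly) / len(ly)
    return sum((a - mx) * (b - my) for a, b in zip(lx, ly)) / sum((a - mx) ** 2 for a in lx)

Ms = [200, 400, 800, 1600]
sizes = [M / math.log(M) for M in Ms]   # characteristic size suggested by the prime number theorem
Ncs = [threshold_N(M) for M in Ms]
print("threshold scaling exponent ~", log_log_slope(sizes, Ncs))
```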
versus the system s characteristic size in the annealed approximation .the plot is log - log : the slope of the straight line provides the exponent of equation ( [ escala1]).,scaledwidth=45.0% ] versus the characteristic system s size in the prime number generator , for pool size .the plot is log - log : the slope of the curve provides an exponent .,scaledwidth=45.0% ] the annealed approximation introduced in the preceding section suggests that the characteristic size of the system is not as one would expect but rather .this is quite reasonable if we have in mind that the number of primes that a pool of integers has is on average : the quantity of primes does nt grow linearly with .this is the so called prime number theorem , and states that the number of primes in the set of integers is asymptotic with when is large enough .+ in order to test if this conjecture also applies to the prime number generator , in figure [ scaling2 ] we represent ( in log - log ) the values of ( obtained numerically from the values where becomes non - null for the first time ) as a function of .we find the same scaling relation as for the annealed system ( equation [ escala1 ] ) , but with a different value for .this little disagreement is logical and comes from the fact that in the annealed approximation correlations are obviated .+ let us apply generic techniques of finite size scaling in order to calculate the critical exponents of this phase transition . reducing the control parameter as , the correlation exponent is defined as : where we find and .note that the transition tends to zero in the thermodynamical limit because its value increases more slowly than the system s size . ) for different values of , assuming the scaling relation [ escala1 ] .the collapse is very good , the scaling relation seems to be consistent ., scaledwidth=35.0% ] the critical exponent of the order parameter can be deduced from the calculation of the correlation exponent and from the finite size scaling relation the best fitting provides a value of .+ in figure [ colapso ] we have collapsed all curves according to the preceding developments .note that the collapse is excellent , something which provides consistency to the full development .hitherto , we have seen that the dynamical process that the prime number generator shows gives rise to a continuous phase transition embedded in a direct catalytic network . as pointed out in , phase transitions quite similar to the former one as percolation processes for instancecan be easily related to search problems . in the case under studywe can redefine the percolation process as a decision problem in the following terms : one could ask when does the clause _ every number of the system is prime when the algorithm reaches stationarity _ is satisfied .it is clear that through this focus , the prime number generator can be understood as a sat - like problem , as long as there is an evident parallelism between the satisfiability of the preceding clause and our order parameter p. thereby , in order to study the system from the focus of computational complexity theory , we must tackle the following questions : what is the algorithmic complexity of the system ? 
andhow is related the observed phase transition to the problem s tractability ?the algorithm under study , which generates primes by stochastic decomposition of integers , is related to both primality test and integer decomposition problems .although primality was believed to belong to the so - called problems ( solvable in non - deterministic polynomial time ) , it has recently been shown to be in : there exists at least an efficient deterministic algorithm that tests if a number is prime in polynomial time .the integer decomposition problem is in turn a harder problem , and to find an algorithm that would factorize numbers in polynomial time is an unsolved problem of computer science .furthermore , exploring the computational complexity of the problem under hands could eventually shed light into these aspects .+ for that task , let us determine how does the search space grows when we increase . in a given time step , the search space corresponds to the set of configurations that must be checked in order to solve the decision problem : this is nothing but the number of different pairs that can be formed using numbers . applying basic combinatorics ,the set of different configurations for elements and pairs is : we get that the search space increases with as . on the other hand , note that the decision problem is rapidly checked ( in polynomial time )if we provide a candidate set of numbers to the algorithm .these two features lead us to assume that the problem under hands belongs , in a worst - case classification , to the complexity class .+ note that this is not surprising : the preceding sections led us to the conclusion that the process is embedded in a ( dynamical ) scale - free catalytic network .as a matter of fact , the phase transition is related to a dynamical process embedded in a high dimensional catalytic network . in this hallmark , it is straightforward that this underlying network is non - planar .now , it has been shown that non - planarity in this kind of problems usually leaves to np - completeness ( for instance , the ising model in two - dimensions is , when the underlying network topology is non - planar , in ) . as defined in the text versus , for different pool sizes , from left to right : .every simulation is averaged over realizations .note that for each curve and within the finite size effects reaches a maximum in a neighborhood of its transition point ( this can be easily explored in figure [ p]).,scaledwidth=45.0% ] for the curves of figure [ t_sin_colapso ] .the goodness of the collapse validates the scaling relations.,scaledwidth=45.0% ] an ingredient which is quite universal in the algorithmic phase transitions is the so called _ easy - hard - easy pattern _ : in both phases , the computational cost of the algorithm ( the time that the algorithm requires to find a solution , that is , to reach stationarity ) is relatively small. however , in a neighborhood of the transition , this computational time reaches a peaked maximum . in terms of search or decision problems , this fact has a clear interpretation : the problem is relatively easy to solve as long as the input is clearly in one phase or the other , but not in between . in the system under study , the algorithm is relatively fast in reaching an absorbing state of low concentration of primes for small because the probability of having reactions is small . 
in the other hand ,the algorithm is also fast in reaching an absorbing state of high concentration of primes for high , because the system has enough `` catalytic candidates '' at each time step to be able to reduce them , the probability of having reactions is high . in the transition s vicinity ,the system is critical .reactions can be achieved , however , the system needs to make an exhaustive search of the configuration space in order to find these reactions : the algorithm requires in this region much more time to reach stationarity . + note that this easy - hard - easy pattern is related , in second order phase transitions , to the the phenomenon of critical slowing down , where the relaxation time in the critical region diverges .+ we have already seen in figure [ series ] that the system reaches the steady state in a different manner , depending on which phase is located the process . more properly ,when ( disordered phase ) , the system rapidly frozens , without practically achieving any reaction .when ( ordered phase ) , the system takes more time to reach the steady state , but it is in the regime where this time is maximal .in order to be able to properly compare these three regimes , let us define a characteristic time in the system as the number of average time steps that the algorithm needs to take in order to reach stationarity. remember that we defined a time step as monte carlo steps ( operations ) .thus , in order to normalize over the number of molecules , it is straightforward to define a characteristic time as : note that can be understood as a measure of the algorithm s time complexity . in figure [ t_sin_colapso ]we plot versus for a set of different pools ( simulations are averaged over realizations ) .note that given a pool size , reaches a maximum in a neighborhood of its transition point , as can be checked according to figure [ p ] .as expected , the system exhibits an easy - hard - easy pattern , as long as the characteristic time required by the algorithm to solve the problem has a clear maximum in a neighborhood of the phase transition .moreover , the location of the maximum shifts with the system s size according to the critical point scaling found in equation [ escala1 ] . in the other hand , this maximum also scales as : where the best fitting provides .note that in the thermodynamic limit , the characteristic time would diverge in the neighborhood of the transition .it is straightforward to relate this parameter with the relaxation time of a physical phase transition .according to these relations , we can collapse the curves of figure [ t_sin_colapso ] into a single universal one . in figure [ colapse_t ] this collapseis provided : the goodness of the former one supports the validity of the scaling relations .the system under study is interpreted in terms of a search problem , belonging to the class in a worst - case classification .now , an average - case behavior , which is likely to be more useful in order to classify combinatorial problems , turns out to be tough to describe . in , monasson et al .showed that there where problems exhibit phase transitions ( related to dramatic changes in the computational hardness of the problem ) , the order of the phase transition is in turn related to the average - case complexity of the problem . 
more specifically , that second order phase transitions are related to a polynomial growing of the resource requirements , instead of exponential growing , associated to first order phase transitions .+ it has been shown that the system actually exhibits a second order phase transition and an easy - hard - easy pattern . following monasson et al . , while our prime generator is likely to belong to the class , it shows however only a polynomial growing in the resource requirements , in the average case .one may argue that one of the reasons of this hardness reduction is that the algorithm does nt realize a direct search but on the contrary this search is stochastic : the search space is not exhaustively explored .thereby , the average behavior of the system and thus the average decision problem can be easily solved by the algorithm , in detriment of the probable character of this solution .in this paper a ( stochastic ) algorithmic model which stands for a prime number generator has been studied .this model exhibits a phase transition which distinguishes a phase where the algorithm has the ability to reduce every element into a prime , and a phase where the system is rapidly frozen .analytical and numerical evidences suggest that the transition is continuous . on a second part, the model has been reinterpreted as a search problem .as long as the model searches paths to reduce integers into primes , the combinatorial problem is related to primality test and decomposition problem .it has been shown that this model belongs to the class in a worst - case classification , moreover , an easy - hard - easy pattern has been found , as common in many algorithmic phase transitions . according to the fact that the transition is continuous , and based on previous works, it has been put into relevance that the average - case complexity may be only polynomial .this hardness reduction is in turn related to the fact that the algorithm only yields probable states .the authors wish to thank the instituto de fisica at unam for support and hospitality .the research was supported by unam - dgapa grant in-118306 ( om ) , grants fis2006 - 26382-e and fis2006 - 08607 ( bl and ll ) and the _tomas brody lectureship 2007 _ awarded to bl .30 this is something quite intuitive if we take into account that there are typically multiples of two , multiples of three , and so on : the probability that the prime appears in a random composite number is in average .
we introduce a prime number generator in the form of a stochastic algorithm. the stochastic character of the algorithm gives rise to a continuous phase transition which distinguishes a phase where the algorithm is able to reduce the whole system of numbers into primes and a phase where the system reaches a frozen state with low prime density. in this paper we first aim to give a broad characterization of this phase transition, both through analytical and numerical analysis. critical exponents are calculated, and data collapse is provided. further on, we redefine the model as a search problem, placing it in the framework of computational complexity theory. we suggest that the system belongs to the class _np_. the computational cost is maximal around the threshold, as is common in many algorithmic phase transitions, revealing the presence of an easy-hard-easy pattern. we finally relate the nature of the phase transition to an average-case classification of the problem.
_ boundary alignment _ is a critical feature of meshes in many applications . in a boundaryaligned mesh the boundary , or some other line , is traced by the sides of high - quality cells , see fig .[ fig : bound_align ] . the definition of a well - shaped cell may be application dependent , but in many cases , cells similar in shape to squares ( for quadrilateral cells ) or equilateral triangles ( for triangles ) are preferred . characteristics of the entire mesh are also important , such as smooth cells - size and cell - shape transitions .[ ptb ] the problem of producing boundary aligned meshes with well - shaped cells has been the subject of extensive research . still ,many popular algorithms are heuristic in nature , and a more general understanding of the subject is called for , especially when quadrilateral meshes are considered .a key difficulty is the problem s _ global _ character : the shape and position of every cell in the mesh is , at least in principle , related to that of any other cell . in a previous work , we described a relation between the problem of two - dimensional unstructured mesh generation , on both planar and curved surfaces , and another well - known problem , namely the _ inverse poisson _ ( ip ) problem .the ip problem is concerned with reconstructing a _ source distribution _ of the poisson equation , from information on the potential at the boundaries . in that work, the mesh was assumed to be _ conformal _ away from the irregular vertices ( vertices whose degree is different than four ) , like a grid mapped by a conformal mapping .such grids have the property of having square cells in the limit of an increasingly finer grid . under this assumption ,the problem of mesh generation was then shown to reduce to an ip problem .the irregular vertices of the mesh correspond to point sources ( delta functions ) of , and is interpreted as the logarithm of the local resolution .this theoretical framework turns the focus to the irregular vertices of the mesh : once their distribution is fixed , the continuum properties of the mesh - local resolution and directionality - are known .note that it is also an explicitly global formulation , since the resolution at any given point is affected , via the function , by the locations of all irregular vertices in the mesh. 
in this paper the generation of planar quadrilateral meshes is discussed .resting on the results of , a new ip algorithm is presented , designed to construct source distributions of the appropriate type , which approximate the resolution and mesh directionality inputs at user - specified points , such as at the boundaries .the ip algorithm is then incorporated into a complete mesh generation scheme , which also includes a technique for generating the final mesh .an implementation is described , and shown in example cases to generate boundary aligned meshes , where well - placed point sources create smooth cell transitions and high quality cells .a similar procedure is probably applicable to triangular meshes , but is not discussed in the present work .remeshing of curved surfaces has recently attracted considerable attention ; for a review see .many of the algorithms receive an input mesh directionality throughout the surface , usually the principal curvature directions of the surface .this setting presents different challenges than those addressed here , since the mesh structure is determined , to a large extent , when mesh directionality is given everywhere in the domain .for example , the locations of critical irregular vertices are dictated by the mesh directionality in the _ vicinity _ of these points .another related subject is surface parameterization , concerned with creating mapping of surfaces to the plane .conformal surface parameterizations are created in , but boundary alignment is not addressed .the paper is organized as follows : section [ sec : ip_review ] shortly reviews the relevant theoretical background , with emphasis on the relation between the ip problem and unstructured mesh generation .sections [ sec : algorithm ] and [ sec : create_final ] describe the proposed ip algorithm , and the mesh generation scheme . section [ sec : implementation ] describes an implementation of the algorithm , and section [ sec : examples ] gives examples of meshes generated . conclusions and possible directions for future researchare discussed in section [ sec : conclusions ] .in this section an overview of the background theory will be given .section [ sec : motivation ] explains the rationale underlying the mathematical formulation . in section [ sec : definitions ] , some key conformal geometry concepts are discussed , together with their relevance to the present problem .section [ sec : ip_background ] summarizes the results developed in , relating mesh generation with the ip problem .the exposition is limited to planar two - dimensional mesh generation ; a detailed account , in the more general setting of curved surfaces , can be found in .we start by considering mesh generation using conformal mappings , which can be viewed as a special case of the theory to be described .mapping techniques , in general , construct a function from one domain , for which a mesh already exists , to a second domain , which is being meshed . the mapping functionis then used to map the mesh into the second domain .a key idea is that continuum properties of the mapping function control the shapes of the cells of the new mesh , at least for small enough cells .for example , if the mapping is conformal , i.e. angle preserving , a cell with right inner angles ( rectangle ) will be mapped to a target cell with approximately right inner angles . in unstructured mesh generationthe connectivity of the mesh is not known in advance , and a more general framework is called for . 
in what follows ,the interplay between the two domains which serves as a paradigm of mapping techniques , is replaced with an interplay between two definitions of distances on the input domain . instead of imagining the mesh as being the image of a mesh on a different domain , we _ redefine the distances _ on the domain to be meshed , and _fix the cell edge length_. thus , the local mesh resolution will be proportional to the new local distance definition : a large new distance between two given points will mean more cells will be placed in between , hence a higher local resolution .distances are redefined using the concept of a _ metric _ , known from riemannian geometry . since we focus on cells which are squares in the limit of an increasingly finer mesh , we need to define only two local properties : the resolution ( inverse of cell - size ) , and the direction of the cell - edges .the new resolution , as noted before , is controlled by a new distance definition .we use a new distance which is localy a scaling of the old distances : the new distance between a point and other nearby points is equal to the old distance , multiplied by a scalar factor which is independent of the direction .thus , a small square measured in one distance definition is also ( approximately ) a square according to the new distance definition .in riemannian geometry terminology , the old and new metrics are said to be _ conformally related_. the mesh directionality is related to the resolution , as is described in the following section .it is convenient to work with a function , defined as the _ logarithm _ of the local scaling factor .that is , a small square of side length measured with the original , cartesian distance definition , is a square with side length as measured with the new distance definition , see fig .[ fig : lay_grid1],a .. if we imagine that the domain is covered with many small squares , all with the same side length in the new distance definition , the size of these cells in the original distance definition will be . the ( local ) resolution is the inverse of the local - cell - size , hence where the proportionality constant is a single number for the whole mesh .the second continuum property is the local directionality of the cell edges .the edges should ideally meet at right angles everywhere , except at _ irregular vertices _ , where the number of edges incident on the vertex is different than four .it is therefore natural to assign to every point a set of four directions , mutually parallel or perpendicular .this concept was expressed by many authors , and given various names , such as _ mesh directionality _4-symmetry direction field _ , and _ frame field _ , although the last may refer to a structure which also holds cell - size information .graphically , this object can be represented by a cross at every point , see fig .[ fig : lay_grid1],a . , and here will be called a _ cross - field _ . on the plane, the cross direction can be measured by the angle from the -axis to one of the directions of the cross .this angle is fixed up to an addition of radians , i.e. represent the same cross iff , for some integer .[ ptb ] the function and the cross - field are not unrelated ; due to conformality , lines that trace the edge directions bend towards the side with smaller cells , see fig . [fig : lay_grid1],a .. in the continuum theory , these lines are known as _geodesics _ of the manifold .geodesics are a generalization of the concept of straight lines to non - euclidian geometries . 
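Because a cross has the symmetry of a square, angles that differ by a multiple of π/2 describe the same cross. A tiny helper, reused conceptually in the sketches that follow, makes this equivalence explicit (the function names are ours):

```python
import math

QUARTER_TURN = math.pi / 2   # a cross is invariant under rotations by pi/2

def canonical_cross_angle(theta):
    """Representative in [0, pi/2) of the cross direction given by angle theta."""
    return theta % QUARTER_TURN

def same_cross(theta_a, theta_b, tol=1e-9):
    """True if the two angles describe the same cross (equal modulo pi/2)."""
    d = (theta_a - theta_b) % QUARTER_TURN
    return min(d, QUARTER_TURN - d) < tol
```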
in the original , cartesian coordinates , the geodesic obeys the following differential equation : where is the curvature of the geodesic in the cartesian coordinate system , and is the derivative of the function in the direction normal to the tangent , see fig .[ fig : lay_grid1],b .. eq . ( [ eq : kappa_dphi_dn ] ) allows one to calculate the change in the direction of the cross - field between two points connected by a geodesic .there is also a direct way of calculating the change in the direction of a cross - field between any two points , along any curve connecting the two points , known as _ parallel - transport_. let be some curve from point to point , and let be the angles of the crosses at points , respectively ( as noted above , the angles are defined up to an addition of a radians ) .then where the integration denotes a line integral along the curve , according to the length parameter on , as measured in the original coordinate system .the differential formulation of this equation is where are a pair of perpendicular vectors that form a right hand system .where the function is defined , that is , at any point in the domain that is not a singularity , can be shown to _harmonic _ , that is to obey the laplace equation : where , again , the derivatives are taken with the original coordinate system .( [ eq : kappa_dphi_dn],[eq : phi_laplace ] ) are well - known results of conformal geometry , .( [ eq : theta_change_par_tranport ] ) is derived in equations ( [ eq : kappa_dphi_dn],[eq : phi_laplace ] ) fully describe the relations between the cross - field and the function , at any regular point of the domain , that is , any point that is not a singularity of the function .the singularities of the function are a key ingredient of the theory , since they correspond to the irregular vertices of the mesh , and unstructured meshes are those which contain irregular vertices . a detailed analysis of the possible of singularities of the harmonic function , and their effect on the resulting mesh , was carried out in , and shows that the only type of singularity that corresponds to a mesh with a finite number of cells is of the type , where is the location of the singularity . in the ip literature , such a sigularity is known as a _ point source_. furthermore , the prefactor of the logarithm is directly related to the degree of the irregular vertex in the final mesh . more specifically , suppose there are singularities in a domain , at points , then the function can be written as * condition 1 : * where is a harmonic function , , and .the numbers are known as _ charges _ , so condition 1 states that the charges are integer multiples of .we will refer to the numbers simply as the -values of the singularities .the degree ( number of incident edges ) of the irregular vertex corresponding to the source at is equal to .so , for example , irregular vertices of degrees 3 and 5 will correspond to a singularities with and , respectively . 
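To make the parallel-transport relation above concrete, the sketch below builds φ from a few point sources of the form k ln|x - x_m| and propagates the cross angle along a polyline by numerically integrating what we read, consistently with the Cauchy-Riemann identification made later in the text, as dθ = -(∂φ/∂y)dx + (∂φ/∂x)dy. The result is compared with the harmonic conjugate Σ k_m arg(z - z_m) evaluated directly. The source positions and charge values are hypothetical and chosen only for illustration; condition 1 restricts the admissible charges, whose exact quantum is not legible in this extraction.

```python
import cmath

# hypothetical point sources (z_m, k_m); positions and charges are illustrative only
SOURCES = [(0.3 + 0.4j, +0.25), (0.8 - 0.3j, -0.25)]

def grad_phi(z, sources=SOURCES):
    """Gradient of phi(x, y) = sum_m k_m * ln|z - z_m|, returned as (d/dx, d/dy)."""
    gx = sum(k * (z - zm).real / abs(z - zm) ** 2 for zm, k in sources)
    gy = sum(k * (z - zm).imag / abs(z - zm) ** 2 for zm, k in sources)
    return gx, gy

def theta_conjugate(z, sources=SOURCES):
    """One branch of the harmonic conjugate of phi: sum_m k_m * arg(z - z_m)."""
    return sum(k * cmath.phase(z - zm) for zm, k in sources)

def transport_theta(path, theta_start, sources=SOURCES):
    """Parallel-transport the cross angle along a polyline of complex points by
    integrating d(theta) = -phi_y dx + phi_x dy with the midpoint rule."""
    theta = theta_start
    for z0, z1 in zip(path, path[1:]):
        gx, gy = grad_phi(0.5 * (z0 + z1), sources)
        dx, dy = (z1 - z0).real, (z1 - z0).imag
        theta += -gy * dx + gx * dy
    return theta

# integrate along a finely subdivided straight segment and compare with the conjugate
a, b = 0.0 + 0.0j, 1.0 + 1.0j
path = [a + (b - a) * t / 2000 for t in range(2001)]
numeric = transport_theta(path, theta_conjugate(a))
# prints a value close to zero: the transported angle agrees with the directly evaluated
# conjugate (this particular path crosses no branch cut of the principal arg)
print(abs(numeric - theta_conjugate(b)))
```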
in a geometrical context, a logarithmic singularity of represents a _ cone - point _ - the tip of a cone - in the new distance definition .the charge corresponds to the angle deficit of the tip of the cone .related subjects where manifolds with cone - points are considered include the study of disclinations in elastic media , and surface parameterizations .an unstructured mesh is required to be aligned with the boundary .namely , that one of the cross directions be parallel to the tangent to the boundary ( where a tangent exists ) : * boundary alignment definition : * where is a curve tracing the boundary , and .since is fixed up to an addition of a multiple of radians , this definition depends only on the cross - field itself , and not on the particular choice of . requiring boundary alignment , as defined in eq .( [ eq : bound_align ] ) can be shown to be equivalent to three more conditions on the function .the following , condition 2 , applies to a point on a smooth section of the boundary , .it roughly states that a smooth section of the boundary is a geodesic .this provides information on the derivative of normal to the boundary : * condition 2 : * is the boundary curvature at .the next condition is concerned with _ junction points _ of the boundary , where the tangent to the boundary is discontinuous .it assures that the cross - field will be aligned with the boundary on both sides of the junction point .let be a curve from point to point on both sides of a point of the boundary , see fig .[ fig : cond_3_4],b .. then * condition 3 : * where is the angle of the boundary , and .this means that for some inner angles will contain a singularity at ; at a distance from , the singularity will be of type .[ ptb ] the final condition is concerned with the relation between two different boundary components .it is just a restatement of eq .( [ eq : theta_change_par_tranport ] ) for two points on two different boundary segments .* condition 4 : * let , be two points on two boundary curves .let be a curve from to .then conditions 1 - 4 form a complete set of conditions on , for a cross - field that is boundary aligned to exist .that is , conditions 1 - 3 , and condition 4 on a number of selected curves , one from a selected point to each boundary component , are sufficient for a boundary aligned cross - field to exist , see .the mesh generation scheme consists of the following steps : 1 .setup and solution of the ip problem . finds the sources locations and charges . described in sections [ sec : input]-[sec : calc_locations ] .solution of the direct poisson problem , to obtain the functions throughout the domain , see section [ sec : restore_harmonic ] .generation of the final mesh , see section [ sec : create_final ] .suppose that we are given an open domain , with boundary , and are given the required resolution function on .( [ eq : phi_is_log_res ] ) , this is readily translated to the required value of on the boundary : condition 1 ( eq . 
( [ eq : cond1 ] ) ) is a solution of the poisson equation , with point sources , whose locations and charges are yet unknown .the problem is to find a distribution of sources ( location and charge ) adhering to the boundary alignment definition , eq .( [ eq : bound_align ] ) , or alternatively to conditions 2 - 4 ( eq .( [ eq : cond2])-([eq : cond4 ] ) ) , as well as to eq .( [ eq : phi_from_res ] ) .such a problem is known as an _ inverse poisson _ ( ip ) problem .the ip problem may be compared with the _ direct poisson _problem , where the source distribution is given , as well as some boundary information ( e.g. , dirichlet or newmann boundary conditions ) , and the value of the function is to be found . in the ip problem ,the source distribution is unknown , and a source distribution adhering to the known boundary information is to be found .ip problems have important applications in various areas of science and engineering ,,, , . by its nature , the ip problem is ill - posed , and the solution might not be unique , and may be sensitive to small changes of the input , such as small changes in boundary conditions . in delicate problems of this type , any prior information on the source distribution may greatly affect the applicability of a specific solution procedure . in the present problem , we seek a point source distribution. a number of algorithms for solving an ip problem with point sources appear in the literature . in , an inverse problem where all sources have the same charge is solved . in our implementation , however , at least two different charge values must be incorporated ( ) . in ,, an inverse problem where both the locations of sources and their charges are unknown , andare reconstructed .this gives more freedom in the reconstruction than we can allow , since for the present purposes the charges must be multiples of .another important aspect of the present problem is that the domain of the ip problem may be of any shape and topology ( i.e. , may contain holes ) , whereas the above works only deal with a simply connected region ( a circle , usually ) .it is important to note that in the present application , in contrast to other standard applications , the input to the ip problem is not generated by some existing source distribution ( perhaps with some added noise ) , but by the domain s shape and input resolution .the existence of a source distribution which reconstructs the input data , at least approximately , is therefore not obvious .this is an interesting and important subject , but is beyond the scope of the present work .as is well known , the real and imaginary parts of a complex analytic function are harmonic functions .this correspondence has been utilized in ip algorithms ,, , and will be used here as well .we define the complex - valued function where , with the components of , that were defined in condition 1 . is a function on , such that then , recalling that , it follows that with defined in condition 1 , and , with . the functions and for some are analytic in a neighborhood of any point that is not a singularity of .however , as functions over the entire domain , they may be _ multi - valued_. multi - valued functions accept many - values at a point , depending on the path taken to that point .it is well known that the complex function is multi - valued . defining as , where is a path from to , the imaginary part of only fixed up to an addition of multiples of , depending on the path taken .if the domain is not simply - connected ( i.e. 
, contains holes ) , then the function may also be multi - valued , since by following different paths around the holes , different values of may be obtained .the function , being a sum of multi - valued functions , may also be multi - valued . according to eq .( [ eq : re_f_is_phi ] ) , the real part of is single - valued , as it is equal to at that point .we thus turn to examine the imaginary part of .the real and imaginary parts of a complex function are related by the cauchy - reimann ( cr ) equations .recall that for a complex differentiable function , where are real - valued , the cauchy - reimann equations read , and .this allows one to recover the imaginary part of a analytic function if its real part is given .writing eq .( [ eq : phi_theta_right_hand_relation ] ) in the two right hand coordinate systems , gives the relations and .these are precisely the cr equations for the complex function , where is any real constant , hence the constant is arbitrary , and will be taken to be zero .the multi - valued nature of is explicit in the integral formulation in eq .( [ eq : f_is_phi_i_theta ] ) . the right - hand - side ( rhs ) of eq .( [ eq : f_is_phi_i_theta ] ) shows that the since the imaginary part of is , and must be fixed up to additions of , then the integral must also be fixed up to additions of . indeed ,this was shown to hold if conditions 1 - 3 ( eq .( [ eq : cond1])-([eq : cond3 ] ) ) hold , see . to summarize ,the problem is restated as finding a multi - valued complex function , given by eq .( [ eq : f_def ] ) , and whose value at points on the boundary of is : in the following sections , an algorithm for solving the problem as it was here restated is described .the ip algorithm first removes the contribution of from the boundary conditions , paying special attention to the junction points , and then constructs the source distribution .the function , and hence the function , may be singular at junction points .special procedures for addressing this behavior are taken as part of two different steps of the algorithm : a. : : choosing the input resolution near junction points .b. : : special treatment of junction points when is removed from the boudary conditions .the two subjects are discussed in the following subsections , [ sec : input_near_junc ] and [ sec : remove_harmonic_junc ] . except in the special cases when is a multiple of , the behavior of at a small neighborhood in of a junction point is singular , with the singularity at the junction point ,see condition 3 ( eq . ( [ eq : cond3 ] ) ) .if the input resolution in a small neighbourhood of the junction point does not match this singular behavior , solution of the ip problem will feature sources at any distance from the junction , no matter how small , resulting in a distribution with an infinite number of sources . in practice , this means that an ip algorithm will cluster many sources near the junction point , in a futile attempt to reconstruct the boundary conditions there . to avoid this problem ,we adjust the _ input _ at a neighborhood of the junction point. 
this adjustment should vanish rapidly at a distance larger than 1 - 2 cell - sizes , so as to have little effect on the original required resolution .condition 3 states that the gradient of near a junction point must diverge as , where is the distance from the junction point .such a flux is formed by a logarithmic singularity at the junction point ( see also , section 7.1 ) .consider a singular source term of the form , where is the distance from the junction point .then the flux through a circular arc , at distance from the junction point is ( see also fig . [fig : cond_3_4],a . ) : where is the junction inner angle .then by condition 3 , , hence is integer , which must be positive , otherwise a mesh with an infinite number of cells will result ( , section 7.1 ) .in fact , has a simple interpretation : it is the number of cells incident upon the junction point in the resulting mesh ) nearby cone - points may be joined , and cone - points may be shifted to the boundary , depending on the final cell - size .in such a case , the number of cells incident on a junction point , as well as on other boundary points , may change . ] .a reasonable choice for is therefore for which the inner angles of the cells incident on the junction point are closest to . in order to restrict the effect of this correction of to a small region in , a source term with the opposite charge be placed _ outside _ , at a distance of about 1 - 2 edge lengths .see for example the second and third examples in section [ sec : examples ] . in section [ sec :remove_harmonic ] a method will be described for calculating and removing the contribution of to the boundary conditions . at junction points ,however , the function may be singular : as is an open set , boundary singularities are part of , so care should be taken when preforming the calculations described in section [ sec : remove_harmonic ] , especially in a numerical implementation of the technique . to avoid these problems, we subtract the junction singularities from the boundary value of before proceeding with removing .let be the locations of the junction points , and the charges as given by eq .( [ eq : junc_charge ] ) , ( [ eq : junc_incidence ] ) .the complex function corresponding to is , so the contribution of the junctions is if a singularity is located on an inner boundary component , the logarithms must contain a branch cut somewhere in . in order to avoid adding a cut , we add another term to every hole , equal to minus the total charge in each hole .denote the inner boundaries by , and let be arbitrary points inside the respective holes .the following terms are therefore added to : in this way , the additional terms form an analytic function in , with no branch cut , and are a part of .since the contribution of is removed from as described in the next section , the exact form of this additional term , i.e. the choice of points , does not affect the results . in this section a technique for calculating presented .after is calculated , its contribution to at the boundary can be subtracted . 
to simplify notation ,let denote the sum of logarithms in : in the context of ip algorithms , the idea that the contribution of the harmonic part to the solution can be removed from the boundary information was suggested in .there , an ip problem in the unit disk is discussed .the value of the analytic part of on the boundary of can then be calculated by taking the fourier transform of , and leaving only the positive frequency components , as can be shown by considering the lourent series of in the unit disk . by its construction, this technique applies to functions defined on the unit disk .it can be extended to other domains , if a conformal mapping of the domain to the unit disk ( which exists according to the riemann mapping theorem ) is calculated . for our present purposes , since the domain in every problem input is different , using this technique would require constructing a mapping to the unit disk for each domain being meshed separately . furthermore , the domains in our problem may contain holes , which further complicates the matter .we therefore use an alternative approach , which is now described .it is based on the cauchy integral theorem , and is similar to the sum factorization step in the weiner - hopf technique , applied to a bounded domain ( , defined below ) in place of an infinite strip . as in , we assume there is a positive distance between the boundary of and the source closest to it , as is the case , for example , when the number of sources is finite . let be the set of points at a distance smaller than to the boundary , see fig .[ fig : cauchy],a .. as will become apparent later , the value of does not enter the calculation and is irrelevant , as long as there exists some as required .[ ptb ] we first consider a simply - connected domain , with boundary .define the flux through any curve : the flux through the boundary can be calculated : the value of at smooth boundary points is known from condition 2 ( eq .( [ eq : cond2 ] ) ) , and the flux at junction points is calculated from condition 3 , as is explained in section [ sec : input_near_junc ] above .this flux can be shown to be a multiple of .[ remark : flux_calc]using conditions 2,3 and the fact that the rotation of a tangent in a simple curve is , the total flux through a boundary component can be shown to be equal to , where are the junction points , for each junction is given by eq .( [ eq : junc_incidence ] ) , and for inner and outer boundary components , respectively .see also . to proceed with the decomposition calculation, we assume that , as calculated from eq .( [ eq : bound_flux_def ] ) above .if this is not the case , following , we subtract a source term to the boundary conditions , centered at some point inside the domain : such that is zero .as will be shown below , the choice of does not affect the results . since was equal to a multiple of before , . when , the function is single - valued and analytic in , as follows from the integral representation of in eq .( [ eq : f_is_phi_i_theta ] ) , along with .we now use the cauchy s theorem , stating that for every point : the boundary of has two boundary connectivity components : the outer boundary ( which is also the boundary of ) , and the inner boundary . denote them by and respectively , see fig .[ fig : cauchy],a .. eq . 
( [ eq : cauchy_d_simply_con ] ) now reads the function is harmonic on .according to a decomposition theorem ( see , chapter 9 ) its decomposition into and , is also unique .it follows that the decomposition of the multi - valued complex function into and is unique , up to the arbitrary constant in eq .( [ eq : f_is_phi_i_theta ] ) , chosen before to be zero .the two integrals expressions on the rhs of eq .( [ eq : cauchy_on_omega ] ) correspond exactly to the two components of the ( unique ) decomposition of in as described in , hence where we have used the fact that .we would like to find on , but is not known , nor is on , so can not be computed directly from the second equation .the first equation can however be used , since is given on , so can be calculated on ( more precisely , since is defined in , it is the limit of as is approached ) .then on is given by , according to eq .( [ eq : fp_def ] ) . finally , we add back the source term subtracted before we now turn to the case when is not simply - connected .we add source terms : one inside the domain , as discussed above ( eq . ( [ eq : source_in_d ] ) ) , and one inside every hole ( i.e. _ outside _ ) , such that the flux through every boundary connectivity element is zero , see fig .[ fig : cauchy],b .. the can be the same points used in section [ sec : remove_harmonic_junc ] , or other points .the results do not depend on the additional charges locations , as will be explained below .we now introduce cuts so that the boundary contains a single connectivity element ( the cuts play a part in the derivation , but drop out of the final calculation ) , see fig .[ fig : cauchy],b .. the cuts introduced also serve as the branch cuts of the logarithms .denote this new boundary .using , we proceed as when is simply connected : define as before , see fig .[ fig : cauchy],b . , and subtract the source term inside , as in eq .( [ eq : source_in_d ] ) . according to eq .( [ eq : cauchy_2_parts ] ) : the second integral on the rhs denotes the integral over the cuts introduced to form .note that each cut - path is traversed twice , back and forth .since the flux through each and every hole boundary is zero , the value of , when integrated along , is continuous across the branch cuts , and the integrations over each cut traversed in both directions cancel each other , and drop from the total integration .therefore as in eq . ( [ eq : cauchy_2_parts ] ) . as before , .finally , the source term inside is added back , as in eq .( [ eq : source_in_d_back ] ) .the boundary value of obtained after this source term is added back may be multi - valued , with a branch cut discontinuity in the imaginary part that is a multiple of .this is a valid input to the next step , as explained in section [ sec : calc_locations ] below ( following eq .( [ eq : xsi_def ] ) ) .the choice of the added singularities locations outside ( inside the holes ) does not affect the results : changing the location of a singularity with some charge from to is equivalent to adding a singularity with opposite charge at , and with the same charge at . since these two singularities lay outside , andare of opposite charge , this amounts to adding an analytic function to , which is removed by the cauchy integral technique described above .the result is also unaffected by the location of the source term subtracted inside , since its location only affects the and it is later added back . 
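The Cauchy-integral step just described, recovering the boundary values of the analytic component from samples of the full function on the outer boundary, is easy to prototype. Below is a rough discretisation using a trapezoid-type rule on the sampled contour; the quadrature, the test functions and the names F_P / F_L for the analytic and logarithmic parts are our own choices, and the toy check uses a zero-total-charge pair of logarithmic sources so that the source part is single-valued near the boundary and vanishes at infinity.

```python
import numpy as np

def analytic_part_on_targets(boundary_pts, F_on_boundary, targets):
    """Discretised Cauchy integral

        F_P(z) ~ (1 / (2*pi*i)) * sum_j F(w_j) / (w_j - z) * dw_j,

    where w_j are samples on the closed, positively oriented outer boundary and the
    targets lie strictly inside it.  dw_j is the centred difference of the samples."""
    w = np.asarray(boundary_pts, dtype=complex)
    F = np.asarray(F_on_boundary, dtype=complex)
    dw = (np.roll(w, -1) - np.roll(w, 1)) / 2.0
    z = np.asarray(targets, dtype=complex)[:, None]
    return (F / (w - z) * dw).sum(axis=1) / (2j * np.pi)

# toy check on the unit disk: F = F_P + F_L, with F_P analytic inside the disk and
# F_L a zero-total-charge pair of logarithmic sources well inside it
t = np.linspace(0.0, 2 * np.pi, 600, endpoint=False)
outer = np.exp(1j * t)                                     # outer boundary (unit circle)
F_P_true = lambda z: z**2 + 0.3 * z - 0.1                  # stand-in for the analytic part
F_L_true = lambda z: 0.25 * np.log((z - 0.1) / (z + 0.2))  # sources at 0.1 and -0.2, charges +/-0.25
F_bnd = F_P_true(outer) + F_L_true(outer)

inner = 0.8 * np.exp(1j * t[::30])                         # a few points on an inner contour
F_P_est = analytic_part_on_targets(outer, F_bnd, inner)
print(np.max(np.abs(F_P_est - F_P_true(inner))))           # small: the log part has been stripped off
```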
the choice of cuts in eq .[ eq : cauchy_with_cuts ] , of course , does not affect the cauchy integral calculation , since the cuts do not enter the final calculation , eq .( [ eq : cauchy_final_calc ] ) . at this point, we are given the value of on the boundary of the domain .( [ eq : f_def ] ) , we define = \exp\left [ { \displaystyle\sum\limits_{m=1}^{n_{c } } } k_{m}\ln\left ( z - z_{m}\right ) \right ] \text{.}\label{eq : xsi_def}\ ] ] note that while may be have a multi - valued imaginary part with a branch cut discontinuity : , for some integer , is single valued , since = 1 ] was assigned a varying input resolution proportional to : the resolution thus varies by a factor of within the domain . as always , eq .( [ eq : phi_from_res ] ) , .the total charge is zero , so .the boundary was sampled at points .a source distribution with was obtained with just charges , i.e. with .[ fig : multi_ex4],a . shows the source distribution , fig .[ fig : multi_ex4],b . shows the obtained vs. the input , and the final mesh is shown in fig .[ fig : multi_ex4],c .. it is instructive to compare the star - geodesics and cut - tree with the final mesh .the two are overlaid in fig .[ fig : multi_ex4],d .( this would create a crowded appearence when more sources are present ) .the black solid line traces the cut - tree , the dashed lines are additional star - geodesics , and the mesh is drawn with the background in gray lines .[ ptbh ] cell shape quality was measured using a variant of the quality measure , as defined in . represents a square cell , while represents a cell with an inner angle of .cells with a high aspect ratio are also given low -values .table 1 shows the minimum and average values , and the total number of cells , for the examples in fig .[ multi_diamond_var_res]-[fig : multi_ex4 ] . 
for each example , the tabulated information is given for the mesh shown in the corresponding figure , and for a finer mesh of the same domain , where the resolution function was doubled ( prodcing a mesh with about four times the number of cells ) .note that multiplying the resolution by a factor amounts to adding a constant to , which does not change the source distribution obtained from the ip algorithm , as this constant , which is part of , is readily removed when is subtracted .{|c|c|c|c|}\hline & num .cells & min .$ ] table 1 : mesh statistics .an unstructured quadrilateral mesh generation scheme in planar domains was presented .the method rests on a theoretical foundation , linking the mesh generation problem with the inverse poisson ( ip ) problem .an ip solution algorithm is presented , whose output is interpreted as the location and type ( degree ) of irregular vertices in the domain .the continuum fields obtained , describing mesh resolution and directionality , are conformal everywhere except on the irregular vertices , and fit the required input properties at the boundary , or at other user - defined locations .an algorithm for creating a valid final mesh is also presented .example meshes feature irregular vertices where they are needed , in combination with highly regular regions where possible .directions for future work include more sophisticated methods for solving the rational function interpolation equations , and for constructing the final mesh .the relations between conformal unstructured mesh generation , the ip problem and rational function interpolation raise many research questions .these may lead to a deeper understanding of the properties of high quality meshes , and to better algorithms for creating them .* acknowledgements . *the author would like to thank mirela ben - chen , shlomi hillel , dov levine , yair shokef and vincenzo vitelli for helpful discussions and critical reading of the manuscript .a cross is defined at every point of the domain , that is not a singularity of the function .geodesic curves that start at can be drawn in all four directions of the cross . by the definition of a cross - field, such a geodesic will be aligned with the cross - field everywhere along the curve . if is a singular point , with -value , a crossis not defined at , but there are geodesics that are incident on and follow the cross - field directions elsewhere on the curve .these will be called _ star - geodesics_. for example , the geodesics drawn in fig .[ multi_diamond_var_res],c . and [ fig : multi_ex4],d .are star - geodesics .denote the angle from the -axis around by , and the cross direction when is approached from direction by , see fig .[ fg : star_directions ] . to calculate the directions in which star - geodesics emanate from ,we first calculate the cross when is approached from some direction , e.g. the positive -axis .this can be done using eq .( [ eq : theta_change_par_tranport ] ) along a curve from the boundary to which approaches from the direction .once is known , for any can be calculated by using eq .( [ eq : theta_change_par_tranport ] ) along a small circular arc around at radius , .the singularity term at is , and according to eq .( [ eq : theta_change_par_tranport ] ) note that contributions to that are regular at do not affect for .the star - geodesic directions are those for which the cross is directed along the ray from , or with . 
substituting eq .( [ eq : app_a_th_calc ] ) into eq ( [ eq : app_a_star_cond ] ) and rearranging , we find it is easy to show that there are exactly different -values for which this equation is fulfilled .they are equally distributed around .m. hmlinen , r. hari , r. j. ilmoniemi , j. knuutila and o. v. lounasmaa , magnetoencephalography - theory , instrumentation , and applications to noninvasive studies of the working human brain , _ reviews of modern physics _ * 65 * issue 2 ( 1993 ) 413 - 497 .
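The closing statement of the appendix, a finite set of equally spaced emanation directions, can be written down explicitly. The sketch below rests on our reading of the two equations involved: approaching the singularity from direction β, the cross angle behaves as θ(β) = θ_ref + k(β - β_ref), and a star-geodesic leaves in direction β whenever β coincides with one of the cross directions, i.e. β = θ(β) modulo π/2. Under that reading there are 4(1 - k) solutions in [0, 2π); the k-values ±1/4 used in the demo are our inference for the degree-3 and degree-5 vertices mentioned earlier, since the exact values are not legible in this extraction.

```python
import math

def star_geodesic_directions(k, theta_ref=0.0, beta_ref=0.0):
    """Directions (angles from the x-axis, in [0, 2*pi)) in which star-geodesics emanate
    from a singularity with k-value k, under the assumptions stated above."""
    n_dirs = round(4 * (1 - k))                 # number of solutions of the alignment condition
    base = (theta_ref - k * beta_ref) / (1 - k)
    step = (math.pi / 2) / (1 - k)              # equally spaced, as stated in the text
    return sorted((base + n * step) % (2 * math.pi) for n in range(n_dirs))

print(star_geodesic_directions(k=+0.25))   # 3 directions -> a degree-3 irregular vertex
print(star_geodesic_directions(k=-0.25))   # 5 directions -> a degree-5 irregular vertex
```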
a novel approach to unstructured quadrilateral mesh generation for planar domains is presented. away from irregular vertices, the resulting meshes have the properties of nearly conformal grids. the technique is based on a theoretical relation between the present problem and the inverse poisson (ip) problem with point sources. an ip algorithm is described which constructs a point-source distribution whose sources correspond to the irregular vertices of the mesh. both the background theory and the ip algorithm address the global nature of the mesh generation problem. the ip algorithm is incorporated into a complete mesh generation scheme, which also includes an algorithm for creating the final mesh. example results are presented and discussed.
after bachelier s seminal paper and its re - discovery , random walk theory has been the most crucial cornerstone in economics and finance .an assumption that price dynamics is governed by stochastic process has become popular and useful in asset valuation theories such as option pricing theory or interest rate models .however , the assumption also claims that prices of financial instruments can not be predicted exactly because of the nature of brownian motion .this unpredictable nature of financial markets helps economists to establish a belief that there are no tools to find arbitrage opportunities and to make money systematically in the financial markets .it is also imposed that successful investors are considered nothing but luckier than others .the idea is crystallized in the form of the efficient market hypothesis by eugene fama and paul samuelson . according to the efficient market hypothesis , financial markets are informationally efficient andthis efficiency can not make participants systematically achieve excessive returns over the market portfolio in the long run .although there are three slightly different versions of the hypothesis to cover more general cases , what the hypothesis generally emphasizes has not been changed .however , many market practitioners intrinsically have an idea that the market could be predictable regardless of their methods used for forecast and investment because it is partially or totally inefficient .the idea is opposite to the belief of proponents for the efficient market hypothesis and it is empirically supported by the fact that there are actual market anomalies which are used as the sources of systematic arbitrage trading .these anomalies and trading strategies include fundamental analysis , technical analysis , pair trading , price momentum , sector momentum , mutual fund arbitrage , volatility arbitrage , merger arbitrage , january effect , and weekend effect etc .the anomalies let market participants create profits by utilizing the trading strategies based on the market inefficiencies .even if the market is efficient in the long run , practitioners assure that they are able to find opportunities and timings that the market stays in the inefficient phase within very short time intervals .the existence of a shortly inefficient market state is guaranteed by the success of high frequency trading based on quantitative analysis and algorithmic execution in a short time scale automated by computers . in these cases, the arbitrage does not have the traditional definition that non - negative profit is gained almost surely .it can create positive expected return with high probability but there are also downside risks which make the portfolio underperform .this kind of arbitrage is called statistical arbitrage and the arbitrage in this paper means mostly statistical arbitrage .not only the practitioners but some academic researchers also have different opinions to the efficient market hypothesis . they have taken two approaches to check the validity of the efficient market hypothesis . on the one hand , the market anomalies of which the practitioners believe the existence are empirically scrutinized .some results on the famous market anomalies are reported in academic literatures and seem to be statistically significant while their origins are not clearly revealed yet . for more detailed discussions on the market anomalies , see singal or lo and mackinlay . 
on the other hand ,psychological and behavioral aspects of investors begin to be paid attention in order to find the explanatory theories on the market anomalies .the behavioral economists focus on cognitive biases such as over- and under - reaction to news / events and bounded rationality of investors .they claim that those biases can create the inefficiencies in the market .the cognitive biases lead the investors to group thinking and herding behavior that most of investors think and behave in the similar ways .the good examples of herding are speculative bubbles , their collapses , market crashes , and panics during financial crises .momentum effect on price or valuation in assets is one of the famous examples which have attracted the interest of industry and academia . as an implemented trading strategy , momentum strategyis frequently used for statistical arbitrage .additionally , an investor gains a maximum 1.31% of monthly return by the monthly momentum strategy that constructs the portfolio which buys past winners and short - sells losers in the u.s .market .since price dynamics has a tendency that price moves along the direction it has moved , price momentum becomes the systematic proxy for forecasting future prices .if an investor buys past winners , short - sells past losers , and repeats execution of the strategy , then he / she is expected to gain positive return with high probability in the long run .it is exactly a counterexample to the efficient market hypothesis . despite the success of the momentum strategy ,the origin of price momentum is not well - understood and remains rather unclear .some possible explanations on the existence of the momentum effect answer parts of the question and the behavioral direction is one of them for understanding the nature of price momentum .physicists also have become interested in the characteristics of financial markets as complex systems .mainly , econophysics and statistical mechanics communities have used their methodologies to analyze the financial markets and several research fields have attracted their interests . in the sense of correlation ,the financial markets are interesting objects .since there are many types of interactions between market building blocks such as markets - markets , instruments - instruments , and investors - investors , correlations and correlation lengths are important .in other directions , speculation and its collapse are always hot topics because they are explained as collective behavior in physics .the analysis on speculation gives some partial answers that speculations have patterns including the resilience effect .additionally , market crash or collapse of a bubble can be understood by the log - periodic pattern . for more details , see and references therein .in particular , sornette introduced the concept of spontaneous symmetry breaking ( ssb ) of stock price to explain speculation and to resolve the growth stock paradox .he pointed out that economic speculation is understood as price dynamics caused by desirable / undesirable price symmetry .if stocks of a certain company are desirable to hold , investors try to buy the equities at extremely high prices which are the spontaneous symmetry breaking mode .however , when the equities are not desirable any more , the investors do not want to hold it and try to sell them as soon as possible to avoid damages from the downslide of price caused by the situation that nobody in the market prefers the equities . 
in his paper , the phase transition is induced by riskless interest rate above risk - adjusted dividend growth rate which also expresses herding in the sense that large growth rate gets more attention from investors and it leads to herding .positive dividend payment breaking the symmetry makes the price positive and this is why the positive price is observed .these are the origins of speculation in economic valuation .the result is also related to the well - known financial valuation theory called the gordon - shaprio formula .his work is important in speculation modeling not only because symmetry breaking concept is applied to finance but also because speculation , its collapse , and market crash are indispensable parts of the market dynamics . in this paper ,the concept of spontaneous symmetry breaking is applied to arbitrage modeling . unlike sornette s work which uses spontaneous symmetry breaking to explain speculation in the asset valuation theory , the phase transition is emergent directly from arbitrage dynamics .wyarta and bouchaud also consider symmetry breaking but their concern is self - referential behavior explained by spontaneous symmetry breaking of correlation in macroeconomic markets such as indexes not of arbitrage return generated by the trading strategy . from the viewpoint of symmetry breaking, this paper pays attention to portfolio / risk management rather than explanations on macroeconomic regime change on which both of the previous works focus .based on the dynamics which gives a spontaneous arbitrage phase and a no - arbitrage phase , the arbitrage strategy can be executed upon the phases of arbitrage .the phases are decided by a control parameter which has the same meaning to speed of adjustment in finance .the execution of the strategy aided by spontaneous symmetry breaking provides better performance than the naive strategy and also diminishes risk of the strategy . in section [ ssb ] , a brief introduction to arbitrage modeling is given and then the spontaneous arbitrage modes are emergent from the return dynamics .the momentum strategy aided by spontaneous symmetry breaking is simulated on real data and the results in various markets are posted in section [ result ] . in section [ conclu ] ,we conclude the paper with some discussions and future directions .introducing the existence of arbitrage opportunity , the value of portfolio is governed by the following differential equation , where is risk free rate , is excessive return of the portfolio , is volatility of portfolio return , and is a stochastic noise term .if the no - arbitrage theorem is imposed , the excessive return becomes zero guaranteed by the girsanov theorem that the risk - neutral measure and the brownian motion always exist under no - arbitrage situation .if the existence of arbitrage is assumed , there is no risk - neutral measure nor related brownian motion . in this case, it is more important to know how its return series has evolved .the reason why the dynamics is important has two facets .first of all , for theorists , the dynamics encodes large amount of information on market macro- and microstructure .secondly , it is helpful for practitioners to exploit the arbitrage opportunity by implementing trading strategies based on the dynamics . 
the excessive return is modeled by where is a white noise term .the structure of is decided by properties of arbitrage .one of the simplest forms for is a polynomial function of .two properties of arbitrage dynamics help to guess the structure of the function .when the excessive return of the strategy is large enough , the arbitrage opportunity usually disappears very quickly because many market participants are easily able to perceive the existence of the arbitrage and can use the opportunity profitably even with trading costs .this property imposes a constraint that coefficients of have negative values . additionally , eq . ( [ arb_gen_dyn ] ) should be invariant under parity transformation because negative arbitrage return is also governed by the same dynamics .this property makes even order terms in the function vanish . considering these properties of arbitrage, the form of is given by where for odd positive integer .in traditional finance , these are also able to be considered as the proxies incorporating the information on changes of discount rates which are covered in .the dynamics describes reversal of return that the return becomes decreased when being large and it is increased when under the trend line . in other words ,the reversal makes the return stay near the equilibrium around the trend line . by dimensional analysis , is a speed of adjustment and is broadly studied in finance . larger means the arbitrage opportunity dies out much faster. meanwhile , smaller corresponds to the situation that chances for arbitrage can survive longer .as goes to infinity , the arbitrage return goes to zero extremely quickly and this limit corresponds to the no - arbitrage theorem .when only the linear term is considered for the simplest case , the dynamics is an ornstein - uhlenbeck process in mathematical finance , where the trend line is zero .this stochastic differential equation is invariant under parity transformation of because is an ito process with standard normal distribution which has symmetric distribution around mean zero .although there are higher order terms in eq .( [ arb_dyn_poly ] ) , the dynamics is still considered as a generalized ornstein - uhlenbeck process because it is the mean - reverting process around the trend line . we begin to introduce a cubic term to the ornstein - uhlenbeck process to extend it to more general cases .the introduction of higher order terms is already used in the market crash model .then the dynamics is changed to where , , and is a white noise term .after the cubic term is introduced , adjustment on arbitrate return occurs quicker because the coefficients are all negative .the negative coefficient condition needs to be modified in order to describe not only reversal but also trend - following arbitrage return which is explained by positive coefficients . in real situations ,the trend - following arbitrage strategies are also possible to make profits by exploiting market anomalies because arbitrage opportunities fortified by transaction feedback do not disappear as quickly as expect and there could be more chances for investors .speculation , as one of the examples , can create more opportunities for the trend - following arbitrage and increases expected return . under speculation ,the investors buy the instrument even though the price is high .this transaction induces to generate the trend line and is able to give feedback to the investors trading patterns . 
during market crash or bubble collapse , they want to sell everything at very low prices although the intrinsic values of instruments are much higher than the price at which they want to sell . not in extreme cases but under the normal market condition , people tend to buy financial instruments which have shown better performance than others because they expect that the instruments will provide higher returns in the future .the prices of the instruments become higher because the investors actually buy with the expectation .it seems to be very irrational but happens frequently in the markets . to integrate these kinds of situations, we can introduce the cutoff value which can decide whether the arbitrage is originated from reversal or trend - following dynamics rather than the negative speed of adjustment . with the cutoff value ,let us change and into the forms of where , , and are positive .although the number of parameters seems to be increased , this is not true because is an external parameter . under these changes ,the arbitrage dynamics is given by after relaxation time , eq .( [ arb_dyn_ssb ] ) becomes zero up to the noise term because other transient effects die out .in other words , the deterministic part of arbitrage dynamics arrives at the equilibrium state . by setting the deterministic part of the r.h.s . in eq .( [ arb_dyn_ssb ] ) to zero , stationary solutions are found .the interesting point is that the number of stationary solutions is dependent with and . in the spontaneous symmetry breaking argument, is a control parameter and is an order parameter . when , there is only one asymptotic solution which shows the property of usual arbitrage opportunities .the meaning of this solution is that the arbitrage return finally becomes zero up to noise .it is obvious that the arbitrage opportunity vanishes after the relaxation time because it is taken by market participants who know the existence and use the chance . for ,there are three asymptotic solutions with and the solution has the same meaning to the solution for .it means that the arbitrage opportunity finally dies out .the latter solutions , , are more interesting because there exist long - living arbitrage modes in return . after the relaxation time ,the arbitrage chance still exists and lifetime of the spontaneous market anomaly is longer than that of the usual short - living arbitrage . it is noteworthy that these solutions unlike are symmetry breaking solutions although the dynamics is conserved under parity .the spontaneous mode also has the coherent meaning in the sense of speed of adjustment .if is smaller than the critical value , it is slower adjustment and the arbitrage opportunity can have longer lifetime .these solutions are also well - matched to the no - arbitrage theorem that the arbitrage chance does not exist because it disappears very quickly .the no - arbitrage theorem which corresponds to does not make the arbitrage possible after the relaxation time because is always greater than .when a weak field term is introduced to eq .( [ arb_dyn_ssb ] ) , the observation becomes more interesting . introducing the constant term ,the equation is given by where can be considered the velocity of . if , the asymptotic solution is also changed from to as positive is changed to negative .the asymptotic behaviors described in the previous subsection can be cross - checked with exact solutions . in the long run ,the noise term is ignored because its average is zero . 
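To make the preceding discussion concrete, the following is one consistent reading of the dynamics and of its stationary solutions; the symbols used here (x for the excessive return, λ for the speed of adjustment, λ_c for the critical cutoff value, b for the positive cubic coefficient and η for the white noise) are assumptions, since the original notation did not survive:

\[
\frac{dx}{dt}=-(\lambda-\lambda_{c})\,x-b\,x^{3}+\sigma\,\eta(t),\qquad b>0 .
\]

Setting the deterministic part of the right-hand side to zero gives the stationary solutions

\[
x^{*}=0
\qquad\text{and}\qquad
x^{*}=\pm\sqrt{\frac{\lambda_{c}-\lambda}{b}},
\]

where the symmetry-breaking pair exists only for λ < λ_c; for λ ≥ λ_c the only stationary solution is x* = 0, i.e. the usual short-living arbitrage mode.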
under this property , the exact solutions of eq .( [ arb_dyn_ssb ] ) are given by where is the initial time .when , exponential functions in the nominator and the denominator go to zero in the large region and it makes zero .this corresponds to the symmetry preserving solution which is the usual arbitrage . if , the exponential functions become dominant as goes to infinity . at that time , approaches which are the symmetry breaking solutions .these solutions are already seen in the asymptotic solutions . with the long - living arbitrage solutions in eq .( ) , properties of the solutions are checked graphically in fig .[ ssb_return_lambda ] and [ ssb_return_time ] . in fig .[ ssb_return_lambda ] , the left graph shows time evolution of the solutions as . in the small t region , there exist non - zero arbitrage returns regardless of the value of .however , as , the return approaches to non - zero if and it vanishes if . in the asymptotic region ,the difference becomes clear and phase transition happens where is at the critical value .it is easily seen in the graph on the right .the region is called the long - living arbitrage phase , spontaneous return phase , or arbitrage phase .another region where is considered the short - living arbitrage phase or no - arbitrage phase . in the model , market anomalies survive if they are in the long - living modes . in fig .[ ssb_return_time ] , the spontaneous arbitrage returns approach to whatever initial return values are .however , the no - arbitrage phase finally goes to zero .this property does not depend on the size of the initial return values .even if the initial value is smaller than the asymptotic value , it grows up to the asymptotic value .for example , if investors realize the arbitrage opportunity and if they begin to invest into the chance , their trading behavior affects price dynamics and the trend - following investors pay attention to the instruments .the interest leads to trading which gives feedback to their trading patterns and can increase the profitability .in other words , money flows into the instrument , boosts its price , and gives feedback to investors behaviors .if transaction cost is smaller than the asymptotic value , arbitrage opportunities created by spontaneous symmetry breaking can be utilized by the investors .when the long - living arbitrage mode is possible , can be re - parametrized by where is a dynamic field for expansion around .plugging this re - parametrization into ( [ arb_dyn_ssb ] ) , the differential equation for is solved and its solutions are given by since the latter solution goes to in the asymptotic region , we can check the transition between and .if , the initial modes stay in themselves , i.e. go to . however ,if is the latter solution , they evolve to in large limit even though we start at initially .in order to test the validity of spontaneous symmetry breaking of arbitrage , we apply the following scheme depicted in fig .[ flow_chart ] to trading strategies over real historical data . in backtest , the control parameter for the strategy should be forecasted based on historical data . at certain specific time , it is assumed that data only from are available and the control parameter for next period is forecasted from them . 
if the forecasted is smaller than the forecasted , the strategy which we want to test is expected to be in spontaneous arbitrage mode in the next period and the strategy will be executed .when the forecast tells that the strategy would not be in spontaneous arbitrage mode , it will not be exploited and the investor waits until the next execution signals .the weak field is also able to decide the method of portfolio construction .if the constant term is positive , the portfolio which the strategy suggests to build will be constructed .however , if the constant term becomes negative , weights of portfolio will become opposite to those of the portfolio originated from the positive constant term .simply speaking , the portfolio is not constructed if the speed of adjustment is larger than the critical speed .when it is smaller than the critical value , the weight of the portfolio is if the weak field is positive and the portfolio is if the weak field is negative .this kind of multi - state models is popular in the names of hidden markov model or threshold autoregressive model in econometrics and finance .the scheme is repeated in every period over the whole data set .to apply the model to real data , the model considered in the continuous time frame needs to be modified to discrete version because all financial data are in the forms of discrete time series . in the discrete form , eq .( [ arb_dyn_ssb ] ) is changed into and an additional related to the coefficient in the first term on the r.h.s comes from the time derivative in eq .( [ arb_dyn_ssb ] ) .the next step is estimation of parameters in eq .( [ arb_dyn_ssb_disc ] ) with real data .regression theory gives help on estimation but it is not easy to estimate the parameters with real data because the model is nonlinear and many methods in the regression theory are for linear models . in statistics , these parameters can be estimated by nonlinear regression theory but it is not discussed in this paper . instead of using nonlinear regression theory directly, we can get some hints from linear regression theory .with consideration on financial meanings and physical dimensions of the parameters , linear regression theory enables us to estimate the model parameters .there are some issues on the estimation of parameters .the first issue is related to stability of the parameters .when the parameters are fit to real data , if values of the parameters severely fluctuate over time , those abruptly - varying parameters hardly give a steady forecast .one of the best ways to avoid this is taking a moving average ( ma ) over a certain period .moving average over the period can make the parameters smoothly - changing parameters .for longer ma windows , the parameter is stable but it would be rather out of date to tell more on the recent situation of the market .if it is short , they can encode more recent information but they tend to vary too abruptly to forecast the future values . to check ma window size dependency , a range of mas needs to be tested and the results from different mas should be compared . another issue is the method to estimate parameters in the model . since two or three internal parameters and one external parameter are given in the model , the same number of equations should be prepared . for the simpler case ,the coefficient for each term can be considered as one parameter . in this case , two equations need to be set up .however , the values of two parameters found from two equations sometimes diverge when real data are plugged . 
Since the speed of adjustment of the strategy and the critical value are both speeds of adjustment with the same physical and financial meaning, they need to be derived from the same origin; the only difference is that the critical value is external. In addition, the symmetry breaking requires a comparison between two different speeds. One possible solution is that the speed of adjustment is derived from the return series of the strategy, while the critical value comes from the benchmark return, consistent with its definition as an external parameter. This interpretation gives the two parameters the same physical dimension and financial meaning. The specification of the critical value is also reasonable in the sense of the efficient market hypothesis: since the hypothesis tells us that it is impossible to systematically outperform the benchmark, it is natural to compare the performance of the strategy with that of the benchmark in order to test the hypothesis. For the noise amplitude, the volatility of the strategy or benchmark return is a good candidate because it has the same meaning and dimension as a variance. For the constant term, the average value of the strategy return or of the benchmark return could be considered; the dividend payment rate is also a good candidate. However, since the most important parameters in the model are the speed of adjustment and its critical value, we focus on the estimation of these two. The intuitive way to obtain them uses a hint from the autoregressive model of order 1, the AR(1) model. Ignoring the cubic term is justified by the fact that the returns are much smaller than 1. Starting with the simpler model without the cubic term, multiplying both sides by the lagged return and taking a moving average over the window makes the last term on the r.h.s. of eq. ([arb_dyn_ssb_disc]) vanish and gives the one-step-ahead forecast. In longer MA windows, the lag-0 term in the denominator can be replaced by its one-step-shifted counterpart because the two are close; in shorter MA windows the change is meaningful because it incorporates more recent information, and this variant will be tested in the next subsections. Based on this argument, the final form of the forecasted speed of adjustment has the same form as the parameter of the AR(1) model, which is found from

\[
\hat{\phi}=\frac{E[x_{i+1}\,x_{i}]}{E[x_{i}\,x_{i}]},
\]

where E[·] denotes the expectation value, the forecasted speed of adjustment being one minus this ratio. The estimator ([lambda_estimator]) is obtained intuitively, but a hand-waving argument supports it: since the benchmark return tends to be weakly autocorrelated while the return series of the arbitrage strategy is expected to be strongly positively autocorrelated, the estimator for the arbitrage strategy is usually smaller than that for the benchmark. In this case, the strategy is in the long-living arbitrage mode; a small numerical sketch of this estimator and of the resulting execution rule is given below.
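The sketch below is a minimal illustration of the estimator and execution rule just described. The function names, the exact window handling, and the use of simple sums in place of expectation values are assumptions made for illustration only.

```python
import numpy as np

def speed_of_adjustment(returns, window):
    """AR(1)-style forecast of the speed of adjustment over a trailing
    moving-average window: one minus the ratio of the lag-1 cross term
    to the lag-0 term, as in the estimator discussed in the text."""
    x = np.asarray(returns[-window:], dtype=float)
    lag1 = np.sum(x[1:] * x[:-1])   # sum of x_{i+1} * x_i over the window
    lag0 = np.sum(x[:-1] ** 2)      # sum of x_i * x_i over the window
    return 1.0 - lag1 / lag0

def trade_next_period(strategy_returns, benchmark_returns, window):
    """Execute the strategy next period only if its forecasted speed of
    adjustment is below the benchmark's, taken here as the critical value."""
    lam = speed_of_adjustment(strategy_returns, window)
    lam_c = speed_of_adjustment(benchmark_returns, window)
    return lam < lam_c
```

For example, trade_next_period(strategy_weekly_excess, benchmark_weekly_excess, window=10) would implement a 10-period moving-average variant of the rule.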
when the estimator for the strategy is larger than that for the benchmark , it is highly probable that return series for the strategy becomes much more weakly autocorrelated than the benchmark returnthis tells that the strategy has recently suffered from large downslide and it can be used as the stop signal to strategy execution .additionally , since the estimator is related to the correlation function which is in the range of -1 and 1 , the value of the estimator fluctuates between 0 and 2 and it is well - matched to the positiveness condition on .the momentum strategy is one of the famous trading strategies which use market anomalies .it is well - known that the strategy that buys past winners , short - sells past losers in returns , and then holds the portfolio for some periods in the u.s .market enables to provide positive expected returns in intermediate time frames such as 112 months .the basic assumption of the strategy is that since price has momentum in its dynamics , it tends to move along the direction it has moved .based on the assumption , the financial instruments which have shown good performance in the past are highly probable to gain profits in the future .opposite to winners , it is likely that losers in the past would underperform the benchmark in the monthly time frame . over other trading strategies such as pair trading or merger arbitrage strategies ,it is advantageous that the momentum strategy is able to be exploited at any time and in any markets .pair trading is utilized only when the correlation of two instruments weakens and when investors can find it .merger arbitrage is able to make benefits if m&a rumors or news begin to be spread in the market and if there is a price gap between actual and buy prices .when using these strategies , the investors become relatively passive to market conditions and events . however , in the case of the momentum strategy , if they look back at the price history , market participants make use of momentum strategy and the trading frequency is up to their time frames from high frequency trading to long - term investment .in addition to that , unlike merger arbitrage which is possible only in equity markets , momentum strategy can be applied to various asset markets including local equity , global index / equity , currency , commodity , future , and bond markets . to exploit momentum strategy ,first of all , returns of all equities in the market universe during a certain period called the look - back or ranking period are calculated from closing prices on the first and the last trading dates of the lookback period .then equities are sorted by their returns in ascending order .grouped into ten groups , the first group contains the worst performers and the tenth group is for the best performers .each equity in winner or loser baskets has the same weight in the basket and each basket is also equal in absolute wealth but opposite in position to make the whole portfolio zero cost .the portfolio constructed in zero cost is held during the holding period .the construction of the portfolio occurs on the first day of the holding period and it is liquidated on the last day of the holding period .the transactions happen at the closing prices of each day .the strategy with period look - back and period holding period is shortly called strategy .the expected return by momentum strategy is dependent with lengths of ranking and holding periods . 
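The portfolio construction just described can be sketched as follows. The code assumes period-end closing prices in a pandas DataFrame whose rows match the ranking-period frequency; the names and data layout are illustrative assumptions, not the authors' implementation.

```python
import pandas as pd

def contrarian_weights(closes: pd.DataFrame, lookback: int = 1) -> pd.Series:
    """Rank stocks by their return over the lookback (ranking) period, split
    them into ten equal groups, and return zero-cost weights that are long
    the loser decile and short the winner decile, equally weighted within
    each basket.  Flipping the sign gives the momentum portfolio."""
    past_return = closes.iloc[-1] / closes.iloc[-1 - lookback] - 1.0
    ranked = past_return.sort_values()        # ascending: losers first
    basket = len(ranked) // 10                # size of each decile basket
    losers = ranked.index[:basket]
    winners = ranked.index[-basket:]
    w = pd.Series(0.0, index=closes.columns)
    w[losers] = 1.0 / basket                  # buy past losers
    w[winners] = -1.0 / basket                # short-sell past winners
    return w                                  # weights sum to zero (zero cost)
```

The resulting weights would be held over the holding period and the portfolio rebuilt at the start of the next one, as in the J-period/K-period scheme described above.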
as explained , the strategy with intermediate lengths such as monthly lookback and holding periods generates positive expected return . for longer period strategies ,the momentum strategy suffers from reversal such that the winner group loses its price momentum and shows poor performance . meanwhile , the loser basket shows the opposite behavior such that the basket outperforms and can provide positive return . according to lo and mackinlay , the short - term momentum strategy in weekly scalealso has negative expected return . in both longer and shorter term strategies ,it is impossible to make a profit on the momentum portfolio but it does not mean that there is no statistical arbitrage chance nor that the market is efficient . if the portfolio is constructed by reversal momentum i.e. contrarian strategy which buys losers and sells winners , there still exist the chances of profits .the position by contrarian strategy is exactly opposite to that of the momentum strategy .the reason why the momentum strategy generates positive expected return has attracted the interest of researchers but it is not clearly revealed yet .the sector momentum is considered one of possible explanations . a behavioral approach to momentumalso can give more explanations such as under - reaction or over - reaction of market participants to news or psychology .it is ambiguous whether the momentum effect comes from either which of them or from a combination of these possible explanations .however , this paper focuses on how to use the strategy based on symmetry breaking rather than what makes markets inefficient .two different market universes are used for analysis to avoid sample selection bias .the first universe is the s&p 500 index that is the value / float - weighted average of the top 500 companies in market capitalization in nyse and nasdaq .it is one of the main indexes such as the dow jones index and russell 3000 in the u.s .standard & poor s owns and maintains the index with regular rebalancing and component change .another universe is kospi 200 in the south korean market operated by korea exchange ( krx ) .it is the value - weighted average of 200 companies which represent the main industrial sectors . unlike the s&p 500 index , kospi 200contains small - sized companies in market capitalization and considers sector diversification .its components and weights are maintained regularly and are also irregularly replaced and rebalanced in the case of bankruptcy or upon sector representation issues such as change of core business field or increase / descrease of relative weight in the sector .the significance of each index in the market is much higher than those of other main indexes such as dow jones 30 index or russell 3000 in the u.s . and the kospi index , the value - weighted average of all enlisted equities in the south korean market , because futures and options on the indexes have enough liquidities to make strong interactions between equity and derivative markets . in the case of the korean market ,the kospi 200 index among main indexes is the only index which has the futures and options related to the main indexes . 
additionally , many mutual funds target the indexes as their benchmarks and various index - related exchange - traded funds are highly popular in both markets .the whole time spans considered for two markets are slightly different but have large overlaps .s&p 500 is covered in the term of jan .1st , 1995 and jun .30th , 2010 , 15.5 years which includes various macro - scale market events such as the russian / asian crisis ( 19971998 ) , dot - com bubble ( 19952000 ) , its collapse ( 20002002 ) , bull market phase ( 2003 - 2007 ) , sub - prime mortgage crisis ( 20072009 ) , and the recovery boosted by quantitative easing i ( 20092010 ) . in the case of kospi 200 ,the market in the period of jan .2000 to dec . 31st .2010 , had experienced not only economic coupling to the u.s .market but also local economic turmoils such as the credit bubble and crunch ( 20022003 ) . given the market and time span ,the price history of each stock and whether it was bankrupt or not are stored on a database in order to remove survivor bias and all records for component change are also tracked to keep the number of index components the same .the s&p 500 data are downloaded from bloomberg .the whole data of kospi 200 components and their change records are able to be downloaded from krx .the total number of equities in the database during the covered period is 968 and 411 for s&p 500 , kospi 200 , respectively . in both markets , strategies are considered and the contrarian portfolios are constructed .the reason for choosing weekly strategies is that they show the best performance in each market among 144 strategies derived from maximum 12-week lookbacks and holdings .excessive weekly returns of the portfolios are calculated from risk - free rates and proxies for the risk - free rate are from the u.s .treasury bill with 91 days duration for s&p 500 , cd with 91 days duration for kospi 200 . since the weekly momentum portfolio is constructed at the closing price of the first day in the week and is liquidated at the closing price of the last day , the benchmark return is also calculated from the closing prices of the first and the last days in the week .the results for these markets are given in fig .[ grp_return ] there are similarities and differences in two markets .first of all , it is easily seen that strategy shows a reversal that if the winner basket is bought , it is impossible to get a significant positive return but we can achieve positive return from the loser basket . in particular , the contrarian portfolio beats the benchmark over whole periods and this comes from the fact that the loser basket outperforms and the winner basket underperforms the benchmark . additionally , the contrarian strategy looks more profitable in the korean market and it can be explained that developed markets have weaker anomalies than emerging markets because the investors in the developed market have utilized the anomalies during longer periods . in the south korea market ,the winner and the loser have more clear directions and magnitudes of the returns are much greater than those of the u.s .it is easily seen in table [ tbl_stat ] ..statistics for contrarian strategy and benchmark in s&p 500 and kospi 200 . 
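The statistics reported in Table [tbl_stat] are of a standard kind; a minimal sketch of how they can be computed from a weekly excess-return series follows. The weekly frequency, the annualization factor and the function name are assumptions.

```python
import numpy as np
from scipy import stats

def weekly_summary(excess_returns, periods_per_year=52):
    """Mean weekly excess return, t-statistic against a zero mean,
    standard deviation, annualized Sharpe ratio and excess kurtosis."""
    r = np.asarray(excess_returns, dtype=float)
    mean = r.mean()
    std = r.std(ddof=1)
    t_stat = mean / (std / np.sqrt(len(r)))           # H0: zero expected excess return
    sharpe = np.sqrt(periods_per_year) * mean / std   # annualized Sharpe ratio
    excess_kurt = stats.kurtosis(r)                   # fat tails show up as large values
    return {"mean": mean, "std": std, "t": t_stat,
            "sharpe": sharpe, "kurtosis": excess_kurt}
```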
[ cols="^,^,^,^,^,^,^,^",options="header " , ] [ tbl_stat ] in table [ tbl_stat ] , the numbers from the kospi 200 confirm much stronger and clearer contrarian patterns as shown in fig .[ grp_return ] .the contrarian return in the korean market is weekly 1.325% which is much greater than 0.225% from s&p 500 contrarian strategy and the t - value of the kospi 200 contrarian strategy is 9.491 which is 0.1% statistically significant but the u.s .strategy has only 2.065 , 5% statistically significant .the null hypothesis is that the expected excessive return is zero .similar to the contrarian returns , the winner basket and the loser basket have larger absolute returns and t - statistics in kospi 200 .both of them are 0.1% statistically significant but the s&p 500 loser return only has a 5% statistically significant t - value and a less significant t - value for another . in both markets ,benchmarks have much smaller weekly expected returns than those by the contrarian strategies and t - values are not significant .standard deviation gives another reason why the portfolio by the momentum / contrarian strategy needs to be constructed .after construction of the contrarian portfolio , the volatility of the portfolio is smaller than the volatility of the winner group and the loser group . in particular , in the south korea market , the contrarian portfolio has a smaller volatility than the benchmark and has a greater sharpe ratio than each of the winner and the loser basket has .a larger sharpe ratio imposes that the strategy is good at minimizing the risk and maximizing the return .winners , losers , and contrarian portfolios have large kurtosis by fat - tailed distribution .the results by symmetry breaking with 99 different ma windows are given in fig .[ sp500_ssb ] and [ kospi200_ssb ] .the strategies aided by spontaneous symmetry breaking show better performance than the naive momentum strategy in both markets and the results are not particularly dependent on the market where the strategy is used . in the case of return, the strategies with shorter ma windows have improved returns than longer ma windows or naive momentum strategy .as the length of the ma window becomes longer , the return plunges sharply and this plummet is observed in both markets .the sharpe ratio is also increased with the ssb - aided strategy and it is obvious that the modified strategy is under better risk management .the winning percentage also increases and it is larger for shorter ma windows .the application of spontaneous symmetry breaking also has the minor market dependencies . in s&p 500 ,average returns and sharpe ratios increase after a drop around the 20-ma window but the kospi 200 momentum does not recover its average return level and remains stagnated around returns by the naive strategies . in the case of volatility , it is helpful to reduce volatility with ssb in kospi 200 but is not useful in s&p 500 . the constant term in spontaneous symmetry breaking is also considered . as described before , average return over the ma window orreturn in previous term of the raw strategy are used as the forecasted constant term .if the constant is positive , the contrarian portfolio is constructed and if the constant is negative , the momentum strategy is used . however , the strategy including the constant term does not provide better results than the strategy without the constant .the same approach is applied to mean return or return in previous terms of the benchmark but it is not possible to find the better strategy . 
with these facts , it is guessed that the constant term is zero or the constant term is always positive if it exists and if these returns are the only possible candidates for the constant .the positiveness of the constant can be guaranteed by the fact that the arbitrage portfolio is constructed to get a positive expected return . with other estimators for speed of adjustment, it is found that the ssb - guided strategies provide similar results although the results are not given in the paper . in both markets ,the patterns of results are similar to the results depicted in fig .[ sp500_ssb ] and [ kospi200_ssb ] . specifically speaking, the estimator with in the denominator gives similar patterns in kospi 200 but the performance is slightly poorer than the result in fig .[ kospi200_ssb ] . in the u.s .market , similar patterns in longer ma windows are found but the results with shorter ma windows are worse than the result given in the paper . this is well - matched to the assumption that is almost identical to in longer ma windows .when the estimator uses the covariance of and , similar results are found but the performance becomes much poorer , especially in shorter window length . although the whole time period is same for each of the ma windows , longer ma strategies have fewer data points when the performance is calculated in backtest .this difference in number of data points comes from the assumption that even though we work with historical data already known , we pretend to be unaware of the future after the moment at which the forecast is made in backtest . in the simulation with each ma ,the first few data whose length is the same as the size of the ma window are used for forecast and are ignored in the calculation of performance .however , the difference does not make any serious difference in the patterns of performance .when the tests to calculate the performance are repeated over the same sample period for all ma windows , notable differences are not observed and the results are similar to fig . [ sp500_ssb ] and [ kospi200_ssb ] .the cubic order term and parity symmetry on return introduce the concept of spontaneous symmetry breaking to arbitrage dynamics . in the asymptotic time region , the dynamics has symmetry breaking modes triggered by the control parameter. it can provide the long - living arbitrage modes including the short - living mode in the dynamics .spontaneous symmetry breaking generated by the control parameter imposes phase transition between the arbitrage phase and no - arbitrage phase .contrasting to the short - living mode which is expected in the frame of the efficient market hypothesis , the long - living modes are totally new and exotic .the existence of a spontaneous arbitrage mode explains why the arbitrage return survives longer than expected and why the trading strategies based on market anomalies can make long term profits . with the existence of the weak field , it is possible to consider the transition between two long - living arbitrage modes , in the asymptotic region . 
based on spontaneous symmetry breaking of arbitrage , the control parameter enables to decide execution of the trading strategy .if for the strategy is smaller than for the benchmark , the strategy will be executed in next period .if the speed of adjustment for the strategy is greater than that of the benchmark , nothing will be invested .since it is difficult to estimate the parameter in the nonlinear model , the ar(1 ) model gives an insight for estimation .the estimated based on the ar(1 ) model has the theoretical ground that the speed of estimation is derived from the autocorrelation function .it is also reasonable in the sense of testing the efficient market hypothesis because it is capable of comparing the strategy with the benchmark .the simplest but most meaningful estimator for the control parameter is applied to momentum strategy in the u.s and south korean stock markets .the ssb - aided momentum strategy outperforms and has lower risk than the naive momentum strategy has .since the strategy applied to two different markets shows similar patterns , the results are not achieved by data snooping .it is also not by estimator bias because three different estimators for speed of adjustment are tested and provide similar results with some minor differences .the future study will be stretched into a few directions .first of all , parameter estimation needs to be more precise and statistically meaningful . in this paper ,the estimator for the control parameter is from the ar(1 ) model and for benchmark serves as the critical value although the cubic term exists .although it provides better performance and lower risk , estimation of the parameters is from the reasonable intuition not from regression theory . for the more precise model ,they need to be estimated from nonlinear regression theory .in particular , a statistical test on estimation should be done . in the case of , it can be estimated with the help of other researches on market phase such as wyarta and bouchaud s work .other parameters , or , also help to find the better performance strategy if they are statistically well - estimated .the second direction is considering the stochastic term in arbitrage dynamics . in the paper , only the deterministic partis considered and the stochastic term is out of interest in finding the exact solutions . if the spontaneous symmetry breaking modes are found not as the asymptotic solutions but as the exact solutions of the stochastic differential equation , they would extend our understanding on arbitrage dynamics .in addition to that , specification of relaxation time can be found from the correlation function of the stochastic solutions .finally , it would be interesting if validity of the arbitrage modeling with spontaneous symmetry breaking is tested over other arbitrage strategies . since only the momentum / contrarian strategy is the main concern in the paper , tests on other trading strategies including high frequency trading look very interesting .additionally , a cross - check with momentum strategies for different markets and frequencies would be helpful to check the effectiveness and usefulness of spontaneous symmetry breaking concepts in arbitrage modeling .it is our pleasure to thank matthew atwood , jonghyoun eun , robert j. 
frey , wonseok kang , andrew mullhaupt , and svetlozar rachev for useful discussions .we are especially indebted to sungsoo choi for helpful discussions from the early stage of this work .we are grateful to didier sornette for providing valuable advice in the revision of the first draft .we express thanks to sungsoo choi , jonghyoun eun , and wonseok kang for cooperation on the construction of the financial database on the korean stock market .we are thankful to xiaoping zhou for collecting parts of price histories on s&p 500 components . 99 bachelier , l. , theorie de la spculation , annales scientifiques de lcole normale suprieure 3 ( 1900 ) 21 - 86 cootner , p. , the random character of stock market prices , mit press , 1964 black , f. and scholes m. the pricing of options and corporate liabilities , journal of political economy 8 ( 1973 ) 637 - 654 merton , r. c. , theory of rational option pricing , bell journal of economics and management science ( the rand corporation ) 4 ( 1973 ) 141 - 183 .vasicek , o. , an equilibrium characterization of the term structure , journal of financial economics 5 ( 1977 ) 177 - 188 cox , j.c . , ingersoll , j.e . and ross , s. a. , a theory of the term structure of interest rates , econometrica 53 ( 1985 ) 385 - 407 hull , j. and white , a. , pricing interest - rate derivative securities , the review of financial studies 3 ( 1990 ) 573 - 592 fama , e. , the behavior of stock market prices , journal of business 38 ( 1965 ) 34 - 105 samuelson , p. , proof that properly anticipated prices fluctuate randomly , industrial management review 6 ( 1965 ) 41 - 49 singal , v. , beyond the random walk : a guide to stock market anomalies and low risk investing , oxford university press , 2003 lo , a. and mackinlay , a. , a non - random walk down wall street , princeton university press , 2001 shleifer , a. , inefficient markets : an introduction to behavioral finance , oxford university press , 2000 kahneman , d. and tversky , a. , choices , values and frames , cambridge university press , 2000 kahneman , d. , slovic , p. , and tversky , a. judgment under uncertainty : heuristics and biases , cambridge university press , 1982 conlisk , j. , bounded rationality and market fluctuation , journal of economic behavior and organization 29 ( 1996 ) 233 - 250 jegadeesh , n. and titman , s. , returns to buying winners and selling losers : implications for stock market efficiency , journal of finance 48 ( 1993 ) 65 - 91 mangtegna , r. n. and stanley , h. e. , introduction to econophysics : correlations and complexity in finance , cambridge university press , 2007 roehner , b. m. , patterns of speculation : a study in observational econophysics , cambridge university press , 2005 sornette , d. , why stock markets crash : critical events in complex financial systems , princeton university press , 2004 sornette , d. and malevergne , y. , from rational bubbles to crashes , physica a 299 ( 2001 ) 40 - 59 ( 2001 ) malevergne , y. and sornette , d. , multi - dimensional rational bubbles and fat tails , quantitative finance 1 ( 2001 ) 533 - 541 sornette , d. , stock market speculation : spontaneous symmetry breaking of economic valuation , physica a 284 ( 2000 ) 355 - 375 wyarta , m. and bouchaud , j. p. , self - referential behaviour , overreaction and conventions in financial markets , journal of economic behavior and organization 63 ( 2007 ) 1 - 24 girsanov , i. v. 
, on transforming a certain class of stochastic processes by absolutely continuous substitution of measures , theory prob .. 5 ( 1960 ) 285 - 301 ilinski , k. , physics of finance : gauge modelling in non - equilibrium pricing , wiley , 2001 cochrane , john h. , presidential address : discount rates , journal of finance 66 ( 2011 ) 1047 - 1108 amihud , y. and mendelson , h. , trading mechanisms and stock returns : an empirical investigation , journal of finance 42 ( 1987 ) 533 - 553 damodaran , a. , a simple measure of price adjustment coefficients , journal of finance 48 ( 1993 ) 387 - 400 theobald , m. , and yallup , p. , measuring cash - futures temporal effects in the uk using partial adjustment factors , journal of banking and finance 22 ( 1998 ) 221 - 243 theobald , m. , and yallup , p. , determining security speed of adjustment coefficient , journal of financial markets 7 ( 2004 ) 75 - 96 bouchaud , j. p. and cont r. , a langevin approach to stock market fluctuations and crashes , european physical journal b 6 ( 1998 ) 543 - 550 hong , h. and stein , j. c. , a unified theory of underreaction , momentum trading and overreaction in asset markets , journal of finance 54 ( 1999 ) 2143 - 2184 terence , h. , hong , h. , lim , t. , and stein , j. , bad news travels slowly : size , analyst coverage , and the profitability of momentum strategies , journal of finance 55 ( 2000 ) pp .265 - 295 baum , l. e. and petrie , t. statistical inference for probabilistic functions of finite state markov chains .the annals of mathematical statistics 37 ( 1966 ) : 1554 - 1563 tong , h. , threshold models in nonlinear time series analysis , springer - verlag , new york , 1983 rouwenhorst , k. g. , international momentum strategies , journal of finance 53 ( 1998 ) 267 - 284 rouwenhorst , k. g. , local return factors and turnover in emerging stock markets , journal of finance 54 ( 1999 ) 1439 - 1464 okunev , john , and derek white , do momentum - based strategies still work in foreign currency markets ? ,journal of financial and quantitative analysis 38 ( 2003 ) 425 - 447 erb , claude b. , and campbell r. harvey , the strategic and tactical value of commodity futures , financial analysts journal 62 ( 2006 ) 69 - 97 asness , c. s. , moskowitz , t. , and pedersen , l. h. , value and momentum everywhere , university of chicago working paper , 2008 moskowitz , t. j. , ooi , y. h. , and pedersen , l. h. , time series momentum , university of chicago working paper , 2010 de bondt , w. f. m. and thaler , r. , does the stock market overreact ?, journal of finance 40 ( 1985 ) 793 - 805 lo , a. and mackinlay , a. , when are contrarian profits due to stock market overreaction ? , review of financial studies 3 ( 1990 ) 175 - 205 moskowitz , t. j. and grinblatt m., do industries explain momentum ?, journal of finance 54 ( 1999 ) 1249 - 1290 daniel , k. , hirshleifer , d. , subrahmanyam , a. , investor psychology and security market under- and over - reactions. journal of finance 53 ( 1998 ) 1839 - 1886 barberis , n. , shleifer , a. , vishny , r. a model of investor sentiment .journal of financial economics 49 ( 1998 ) 307 - 343
We introduce the concept of spontaneous symmetry breaking to arbitrage modeling. In the model, the arbitrage strategy is considered to be in the symmetry-breaking phase, and the phase transition between the arbitrage mode and the no-arbitrage mode is triggered by a control parameter. We estimate the control parameter for the momentum strategy with real historical data. The momentum strategy aided by symmetry breaking shows stronger performance and has a better risk measure than the naive momentum strategy in the U.S. and South Korean markets. Keywords: spontaneous symmetry breaking, arbitrage modeling, momentum strategy.
The harmonic oscillator is one of the most important systems in quantum mechanics because it admits a closed-form solution, and this can be useful for generating approximate or exact solutions for several problems. The harmonic oscillator is customarily solved with the power-series method (see, e.g.,), with the algebraic method (see, e.g.,), and also by means of path-integration techniques (see, e.g.,). Recently, the one-dimensional harmonic oscillator was approached with the operational methods of the Fourier transform and of the Laplace transform. The Schrödinger equation with a quadratic potential supplemented by an inversely quadratic term, known as the singular harmonic oscillator, is also an exactly solvable problem. In truth, the general problem of scattering and bound states in singular potentials is an old subject (see, e.g.,). The case of the singular harmonic oscillator with time-dependent potential parameters has been the target of recent investigation. The singular oscillator lends itself to the construction of solvable many-body models, as well as serving as a basis for perturbative expansions and variational analysis for harmonic oscillators supplemented by terms with singularities much stronger than the inversely quadratic term. The singular oscillator has also been used in relativistic quantum mechanics. The exact solvability of the singular oscillator can be verified in the references for the three-dimensional case, and is also evident in the references for the one-dimensional case restricted to the positive half-line. Two more recent references address the one-dimensional problem on the whole line. In one of them there are no details of the solution of the problem nor any mention of the possible degeneracies, and it is stated there that, for an attractive singular oscillator, "the particle collapses to the point". In the other, Palma and Raff scrutinize the problem with the repulsive singular potential, draw the appropriate conclusion about the degeneracy, and simply state that there is no ground state in the case of the attractive singular oscillator ("the attractive potential has no lower energy bound"). The purpose of this work is to examine the one-dimensional Schrödinger equation with the singular harmonic oscillator. We will see that, beyond a critique of the literature concerning a problem of recent interest that has already crystallized in widely known textbooks, the treatment of the bound states of the singular harmonic oscillator presented here puts undergraduate students of quantum mechanics and mathematical physics in contact with singular differential equations and the asymptotic behavior of their solutions, the confluent hypergeometric function, Laguerre and Hermite polynomials and other special functions, the Cauchy principal value of improper integrals, the Hermiticity condition on operators associated with observable physical quantities and the discarding of spurious solutions, boundary conditions and analyticity of solutions in the neighborhood of singular points, parity and symmetric and antisymmetric extensions of eigenfunctions, degeneracy in one-dimensional systems, phase transitions and the appearance of intruder levels, et cetera.
Surely, this profusion of concepts and techniques is of interest to students and instructors. With the criterion of Hermiticity of the operators associated with observable physical quantities, the treatment of the problem makes clear to the reader that the singular oscillator, whether attractive or repulsive, exhibits an infinite number of acceptable solutions provided that the parameter responsible for the singularity is greater than a certain critical value. We will see that the energy spectrum is a monotonic function of the parameter responsible for the singularity and that the ground-state energy of the singular oscillator, regardless of the sign of that parameter, is always greater than two thirds of the ground-state energy of the nonsingular oscillator for the problem defined on the half-line, and always greater than twice the ground-state energy of the nonsingular oscillator for the problem defined on the whole line. We show that the problem defined on the whole line leads to double degeneracy in the case of the singular potential and to the intrusion of additional energy levels in the case of the nonsingular oscillator (related to the even-parity eigenfunctions). The robust criterion of Hermiticity of the operator associated with the kinetic (or potential) energy proves sufficient to discard illegitimate solutions and makes it possible to show that, if the singular potential is weakly attractive, there is no sense in speaking of collapse to the center or of the nonexistence of a ground state. Finally, we show that, whether the problem is defined on the half-line or on the whole line, the nonsingular oscillator can be thought of as a phase transition of the singular oscillator, and for that reason the solution of the singular harmonic oscillator cannot be obtained from the solution of the nonsingular harmonic oscillator via perturbation theory.

The one-dimensional Schrödinger equation for a particle of rest mass subject to an external potential is written in terms of the wave function, the reduced Planck constant and the Hamiltonian operator. It is not difficult to show that, taking into account that the Hamiltonian is a Hermitian operator (an operator is said to be Hermitian if, for any two admissible wave functions, the corresponding matrix elements are equal; in particular, the wave functions must be square-integrable), the continuity equation follows as a corollary, with the probability density and the current defined in the usual way. The continuity equation can also be written in integral form about an arbitrary point of the axis; this integral form, ([ec]), allows the current to be interpreted unambiguously as the probability flux through that point at a given instant. In the case of time-independent external potentials, the wave function admits particular solutions whose spatial part obeys the time-independent Schrödinger equation; in that case, with appropriate boundary conditions, the problem reduces to the determination of the characteristic pair. The eigenvalue equation ([auto]) can also be written in a form in which the density and the current corresponding to the solution expressed by ([2a]) become time-independent; by virtue of this, the solution ([2a]) is said to describe a stationary state. Note that, because of the continuity equation ([con]) and ([ec]), the current is not only stationary but also uniform. Bound states constitute a class of solutions of the Schrödinger equation that represent a system localized in a finite region of space. For bound states we must look for eigenfunctions that vanish as the coordinate goes to infinity. The standard expressions for these quantities are collected below for reference.
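The following block collects the standard expressions referred to above, in the usual notation (which the stripped original symbols presumably matched):

\[
\hat{H}\,\psi(x)=\left[-\frac{\hbar^{2}}{2m}\frac{d^{2}}{dx^{2}}+V(x)\right]\psi(x)=E\,\psi(x),
\qquad
\rho(x)=|\psi(x)|^{2},
\qquad
j(x)=\frac{\hbar}{m}\,\operatorname{Im}\!\left[\psi^{*}(x)\,\frac{d\psi(x)}{dx}\right],
\]

with the continuity equation

\[
\frac{\partial\rho}{\partial t}+\frac{\partial j}{\partial x}=0 ,
\]

which, for a stationary state, reduces to the statement that the current is uniform in space.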
It is obvious that, as a consequence of this asymptotic behavior, when . Thus, the uniformity of the current of the stationary states demands that it vanish everywhere, a fact expected in view of the interpretation of presented above. Also, in this case we can normalize by setting . For a potential invariant under reflection through the origin ( ), eigenfunctions with well-defined parities can be constructed. In this case, the eigenfunctions of are also eigenfunctions of the parity operator, viz. , where is the parity operator and denotes any other quantum numbers. As a consequence of the Hermiticity of and , the eigenfunctions satisfy the orthogonality condition . Because of parity, we can concentrate attention on the positive half-line and impose boundary conditions at the origin and at infinity. Naturally, the eigenfunction is continuous. Nevertheless, we have to consider the connection condition between the first derivative of the eigenfunction to the right and to the left of the origin, which must be obtained directly from the time-independent Schrödinger equation. Normalizability, as remarked a moment ago, requires that when . Eigenfunctions with well-defined parities on the whole line can be constructed by taking symmetric and antisymmetric linear combinations of defined on the positive side of the axis. These new eigenfunctions have the same energy, so, in principle, there is a double degeneracy ( ). It is well known that the bound-state spectrum of one-dimensional systems with regular potentials is non-degenerate (see, e.g., and ). However, if the potential is singular at the origin, for example, both the even and the odd eigenfunctions could obey the homogeneous Dirichlet condition at the origin, and each energy level would exhibit a degeneracy of degree two.] The connection condition obeyed by the first derivative of the eigenfunction, however, could exclude one of the two linear combinations, in which case the energy levels would be non-degenerate. Following the notation of ref. , let us now consider the potential . The dimensionless parameter characterizes three different profiles for the potential, as illustrated in figure 1. [Figure 1: the dashed, solid and dotted lines correspond to the cases with negative, zero and positive , respectively.] For we have the potential of the regular harmonic oscillator (a single well), and for we have either the case of a double well with a repulsive potential barrier that is singular at the origin ( ) or the case of a bottomless, purely attractive well ( ). For the potential ([pot]), the time-independent Schrödinger equation takes the form , with . When , the solutions of equation ([af]) are analytic on the whole axis. When , however, may exhibit a singularity at the origin. Such a singularity could compromise the vanishing of the current at , the existence of the integrals defined on the intervals ( ) and ( ), and the Hermiticity of the operators associated with the observable physical quantities. Because of this perceived threat, we will begin our treatment of the problem by examining the behavior of the solutions of ([af]) in the neighborhood of the origin.
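To fix ideas, one convenient parametrization of the problem just described is
\[
V(x)=\frac{1}{2}\,m\,\omega^{2}x^{2}+\frac{\hbar^{2}}{2m}\,\frac{\alpha}{x^{2}},
\qquad
-\frac{\hbar^{2}}{2m}\,\frac{d^{2}\psi(x)}{dx^{2}}+V(x)\,\psi(x)=E\,\psi(x),
\]
with a dimensionless singularity strength $\alpha$: $\alpha>0$ gives the repulsive double well, $\alpha<0$ the purely attractive bottomless well, and $\alpha=0$ the regular oscillator. This notation is introduced here only for illustration, since the displayed formulas are not reproduced above; it need not coincide with the symbols of the original article.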
Clearly the asymptotic behavior ( ) also deserves attention. In the neighborhood of the origin, equation ([af]) takes two distinct forms: . One way or the other, on the positive half-line we can write , where is a solution of the algebraic indicial equation . In the neighborhood of the origin, the behavior of the terms governs the Hermiticity of the operators associated with the kinetic energy and with the potential. For , we can write , and for , . In both cases . We see from these last relations that the Hermiticity of the operator associated with the kinetic energy (or potential) is verified only if when , which amounts to saying that the negative sign in front of the radical in ([beta]) must be discarded and that must be greater than . Naturally, we must consider as well as when . Therefore, we can state that behaves in the neighborhood of the origin as , even though the vanishing of the current requires only that and , while the orthonormalizability condition demands only that and . The condition , which makes us avoid an attractive potential with too strong a singularity, related to the problem of the fall to the center, as well as the appropriate solution of the indicial equation ([indi]), were obtained here in an extremely simple way, without resorting to the procedure of regularizing the potential at the origin. [In that procedure, is replaced by for and, after using the continuity conditions for and at the cutoff, the limit is taken. It turns out that the solution with is suppressed relative to the one involving when .] Note that, even when supplemented by the requirement of normalizability, the regularization procedure is unable to exclude the case with . The Hermiticity criterion for the operator associated with the kinetic energy (or potential) is licit and sufficient to discard spurious solutions. [ and in the neighborhood of the origin, respectively.] Briefly put, the homogeneous Dirichlet condition ( ) is essential whenever , yet it also occurs for when but not for . In summary, the time-independent Schrödinger equation for our problem, eq. ([af]), has the asymptotic behavior ( ), whence it follows that the asymptotic form of the square-integrable solution is given by . The asymptotic behavior of expressed by ([asym]) invites us to define , so that the eigenfunction for all can be written as , where the unknown function is a regular solution of the confluent hypergeometric equation . The general solution of ([kum]) is given by , where and are arbitrary constants and , also denoted , is the confluent hypergeometric function (also called the Kummer function), expressed by the series , with the gamma function. The gamma function has no roots, and its poles are given by , where is a non-negative integer. The Kummer function converges for all , is regular at the origin ( ), and has the asymptotic behavior prescribed by . Seeing that , and since we are seeking a solution regular at the origin, we must take in ([sg]).
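In the same assumed notation, the near-origin analysis sketched above amounts to
\[
\psi(x)\underset{x\to0}{\sim}x^{s},\qquad s(s-1)=\alpha,\qquad
s_{\pm}=\tfrac{1}{2}\left(1\pm\sqrt{1+4\alpha}\,\right),
\]
with Hermiticity of the kinetic-energy (or potential-energy) operator selecting the root $s_{+}$ and demanding $1+4\alpha>0$, i.e. $\alpha>-1/4$, the weak-singularity condition that forestalls the fall to the center. This is a hedged restatement offered for the reader's convenience, not a transcription of the missing equations.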
The presence of in ([asy]) spoils the good asymptotic behavior of the eigenfunction already dictated by ([asym]). To remedy this embarrassing situation we must consider the poles of , and thus prescribe that an acceptable behavior for occurs only if . In this case, the series ([ser]) is truncated at and the resulting polynomial of degree is proportional to the generalized Laguerre polynomial , with . Therefore, from ([k2]), ([l]), ([ab]) and ([n]) we can determine that the allowed energies are given by , and the eigenfunctions defined on the positive half-line are . In figure 2 we plot as a function of in the interval ]. Observe that, whatever the value of , even if , the spectrum is discrete and always positive. There is an infinite number of equally spaced energy levels (with spacing equal to , independently of ), and the ground state has energy . In figure 3 [the solid, dot-dashed, dashed and dotted lines correspond to the cases with equal to , , and , respectively] we illustrate the behavior of the eigenfunction for the ground state. The normalization was carried out by numerical methods, but it could have been obtained by means of formulas involving the associated Laguerre polynomials given in ref. . The comparison among the four curves shows that the particle tends to avoid the origin more and more as increases. This is a result expected in the case , and it also occurs in the case . Observe that, whatever the value of , the eigenfunctions ([afu]) are physically acceptable, even though in the interval they have a singular first derivative. Even so, provided the parameter is greater than , the characteristic pair constitutes a permissible solution of the proposed problem. The particle never collapses to the point , and there certainly is a ground state. The eigenfunction defined on the whole axis can be written as , where is the Heaviside step function, is the eigenvalue of the parity operator, and the eigenenergy is given by . The Hermiticity of the operator associated with the kinetic energy (or potential), because of the singularity at in the case ( ), depends on the existence of the Cauchy principal value [if is singular at the origin, the integral is _nonsense_; the Cauchy principal value, , is however a prescription that can attribute a useful meaning to the integral representation by means of the following recipe: ] of the integral . The Cauchy principal value could permit a relaxation of the boundary conditions imposed on the eigenfunctions: eigenfunctions more singular than those previously defined on the half-line would be tolerated if, in the neighborhood of the origin, the signs of to the right and to the left of the origin were different for any and . However, we have , so that, in the case where , the integral ([intk]) would not be finite for . We are thus led to preserve the rigidity of the Hermiticity criterion already established for the problem defined on the half-line. The continuity (or discontinuity) of at the origin can be assessed by integrating ([orig]) from to in the limit .
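Before moving on, the statements above (a discrete, positive, equally spaced spectrum and the existence of a ground state for every admissible singularity) can be checked numerically in a few lines. The explicit energy formula $E_{n}=\hbar\omega\,(2n+s_{+}+\tfrac12)$ and the unit conventions below are tied to the illustrative parametrization introduced earlier; they are assumptions, not quotations from the text.

```python
import numpy as np

HBAR = OMEGA = 1.0  # natural units (illustrative assumption)

def half_line_levels(alpha, nmax=4):
    """Energies of the singular oscillator on the positive half-line, assuming
    V = m w^2 x^2 / 2 + hbar^2 alpha / (2 m x^2) with alpha > -1/4."""
    s_plus = 0.5 * (1.0 + np.sqrt(1.0 + 4.0 * alpha))   # acceptable indicial root
    n = np.arange(nmax + 1)
    return HBAR * OMEGA * (2.0 * n + s_plus + 0.5)

for alpha in (-0.2, 0.0, 2.0):
    E = half_line_levels(alpha)
    print(f"alpha = {alpha:+.1f}: E = {np.round(E, 3)}, spacing = {E[1] - E[0]:.1f}")

# every admissible alpha gives spacing 2*hbar*omega, and the ground state stays
# above hbar*omega = (2/3)*(3/2)*hbar*omega, so there is no collapse to the center
```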
The connection formula between to the right and to the left of the origin can be summarized by . Taking the Cauchy principal value into consideration, in the case of antisymmetric with , we can state that the eigenfunctions have a continuous first derivative at the origin. Thus, the spectrum of the singular oscillator is doubly degenerate. Because of the continuity of , there is no odd eigenfunction for , where the homogeneous Neumann condition ( ) holds, and there is no even eigenfunction for . Therefore, the spectrum of the regular oscillator is non-degenerate, as it should be. In particular, the Laguerre polynomials and are proportional to the Hermite polynomials , respectively: . The Hermite polynomials are defined on the interval and enjoy the property . Thus, the solution of the non-singular oscillator can be written in the customary form in terms of the Hermite polynomials: , and the eigenfunction defined on the whole line is expressed by . The constants in ([afu]) and ([psi2]), called normalization constants, can be determined by means of the normalization condition expressed by ([normaliza]). In figure 4 we plot as a function of in the interval ] for the problem defined on the whole line. For , the spectrum is exactly equal to that of the problem defined on the half-line. When the singularity vanishes, however, the level spacing is . This change of level spacing when is due to the intruder levels that arise because of the emergence of the homogeneous Neumann boundary condition, in addition to the homogeneous Dirichlet boundary condition already present in the problem defined on the half-line. This invasion of new solutions with has, as an immediate consequence, a drastic effect on the localization of the particle. The results presented in this work show transparently that the one-dimensional singular harmonic oscillator, whether defined on the half-line or on the whole line, whether repulsive or attractive, exhibits an infinite set of acceptable solutions, in clear contrast with the pronouncements stamped in references and , and transcribed in the introduction. Indeed, for a weakly attractive singular potential collapse to the center is simply not at issue, and there certainly is a ground state with energy greater than . The generalization of the results of the problem defined on the half-line to the three-dimensional singular harmonic oscillator can easily be made by substituting for , where is the orbital quantum number, and by the concomitant substitution of by the radial function . For the problem defined on the half-line, the phase transition manifests itself only through the behavior of the first derivative of the eigenfunction at the origin: the first derivative is infinite for , constant for , and zero for . For the problem defined on the whole line, however, the phase transition manifests itself through the degeneracy, through the behavior of the eigenfunction and its first derivative at the origin, and through the localization of the particle. Another signature of the phase transition is the spacing of the energy levels. When the potential is singular at the origin, the eigenenergies are equally spaced with step equal to .
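The contrast between the full-line spectra on the two sides of the transition described above can be made concrete with the short sketch below: any admissible nonzero singularity yields doubly degenerate levels spaced by $2\hbar\omega$, while exactly at zero singularity the even-parity (Neumann) levels intrude and the spacing drops to $\hbar\omega$. The parametrization is the same illustrative assumption used earlier, not the article's own notation.

```python
import numpy as np

HBAR = OMEGA = 1.0  # natural units (illustrative assumption)

def full_line_spectrum(alpha, n_levels=4):
    """(energy, degeneracy) pairs for the whole-line problem in the assumed
    parametrization; alpha = 0 is the regular oscillator, alpha > -1/4 otherwise."""
    if alpha == 0.0:
        return [(HBAR * OMEGA * (k + 0.5), 1) for k in range(2 * n_levels)]
    s_plus = 0.5 * (1.0 + np.sqrt(1.0 + 4.0 * alpha))
    return [(HBAR * OMEGA * (2 * k + s_plus + 0.5), 2) for k in range(n_levels)]

for alpha in (0.5, 1e-6, 0.0):
    print(f"alpha = {alpha:g}:", [(round(e, 3), d) for e, d in full_line_spectrum(alpha)])

# an arbitrarily small singularity keeps the 2*hbar*omega spacing and the double
# degeneracy; only at alpha = 0 do the even (Neumann) levels appear between them
```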
It is remarkable that this level spacing is independent of the strength of the parameter responsible for the singularity of the potential. When the singularity vanishes, however, the level spacing is . This abrupt change of level spacing as passes through is due to the intruder levels that arise because of the emergence of the homogeneous Neumann boundary condition, in addition to the homogeneous Dirichlet boundary condition already present for . This intrusion allows the appearance of the even Hermite polynomials and their associated eigenvalues, which interleave with the pre-existing eigenvalues associated with the odd Hermite polynomials. The even Hermite polynomials have , and this boundary condition is never allowed when the singularity is present, however small may be. This sudden invasion of the even polynomials has an abrupt effect on the localization of the particle. One could also try to understand such a sudden transition by starting from a non-singular potential ( ), for which the solution of the problem involves both the even and the odd Hermite polynomials, and then adding the singular potential as a perturbation of the potential with . Now, the perturbing singular potential, repulsive by its very nature, demands that and thus it naturally kills the solution involving the even Hermite polynomials. Moreover, there is no degeneracy in the spectrum for the case . Lathouwers considered the one-dimensional case of the singular harmonic oscillator as a perturbed non-singular harmonic oscillator. It happens that, because of the distinct behaviors of the first derivative of the eigenfunction at the origin for ( ) and ( for , and for ), our final comments disfavor such an aspiration, even though the spectrum is continuous in the neighborhood of in the case associated with the odd eigenfunctions of the non-singular oscillator.

K.M. Case, Phys. Rev. *80*, 797 (1950); F.L. Scarf, Phys. *109*, 2170 (1958); A. Pais and T.T. Wu, Phys. Rev. *134*, B1303 (1964); W.M. Frank, D.J. Land and R.M. Spector, Rev. Mod. *43*, 36 (1971). P. Camiz et al., J. Math. Phys. *12*, 2040 (1971); V.V. Dodonov, I.A. Malkin and V.I. Manko, Phys. A *39*, 377 (1972); V.V. Dodonov, V.I. Manko and L. Rosa, Phys. A *57*, 2851 (1998); J.R. Choi, J. Korean Phys. Soc. *44*, 223 (2004). R.L. Hall, N. Saad and A. von Keviczky, J. Math. Phys. *39*, 6345 (1998); R.L. Hall and N. Saad, J. Phys. A *33*, 5531 (2000); R.L. Hall and N. Saad, J. Phys. A *33*, 569 (2000); R.L. Hall and N. Saad, J. Phys. A *34*, 1169 (2001); R.L. Hall, N. Saad and A. von Keviczky, J. Math. Phys. *43*, 94 (2002); R.L. Hall, N. Saad and A. von Keviczky, J. Phys. A *36*, 487 (2003). Nagiyev, E.I. Jafarov and R.M. Imanov, J. Phys. A *36*, 7813 (2003); A.S. de Castro, Ann. Phys. (N.Y.) *311*, 170 (2004); A. de Souza Dutra and C.-S. Jia, Phys. A *352*, 484 (2006); T.R. Cardoso, L.B. Castro and A.S. de Castro, J. Phys. A *45*, 075302 (2012).
The one-dimensional Schrödinger equation with the singular harmonic oscillator is investigated. The Hermiticity of the operators associated with observable physical quantities is used as a criterion to show that the attractive or repulsive singular oscillator exhibits an infinite number of acceptable solutions, provided the parameter responsible for the singularity is greater than a certain critical value, in disagreement with the literature. The problem defined on the whole line exhibits double degeneracy in the case of the singular oscillator and intrusion of additional energy levels in the case of the non-singular oscillator. Furthermore, it is shown that the solution of the singular oscillator cannot be obtained from the solution of the non-singular oscillator via perturbation theory. *Keywords:* harmonic oscillator, singular potential, degeneracy, fall to the center
measure - valued markov chains , or more generally measure - valued markov processes , arise naturally in modeling the composition of evolving populations and play an important role in a variety of research areas such as population genetics and bioinformatics ( see , e.g. , ) , bayesian nonparametrics , combinatorics and statistical physics . in particular , in bayesian nonparametrics there has been interest in measure - valued markov chains since the seminal paper by , where the law of the dirichlet process has been characterized as the unique invariant measure of a certain measure - valued markov chain . in order to introduce the result by , let us consider a polish space endowed with the borel -field and let be the space of probability measures on with the -field generated by the topology of weak convergence . if is a strictly positive finite measure on with total mass , is a -valued random variable ( r.v . ) distributed according to and is a r.v .independent of and distributed according to a beta distribution with parameter then , theorem 3.4 in implies that a dirichlet process on with parameter uniquely satisfies the distributional equation where all the random elements on the right - hand side of ( [ ftchain1 ] ) are independent .all the r.v.s introduced in this paper are meant to be assigned on a probability space unless otherwise stated . in , ( [ ftchain1 ] ) is recognized as the distributional equation for the unique invariant measure of a measure - valued markov chain defined via the recursive identity where is arbitrary , is a sequence of -valued r.v.s independent and identically distributed as and is a sequence of r.v.s , independent and identically distributed as and independent of .we term as the feigin tweedie markov chain . by investigating the functional markov chain , with for any and forany measurable linear function , provide properties of the corresponding linear functional of a dirichlet process .in particular , the existence of the linear functional of the dirichlet process is characterized according to the condition ; these functionals were considered by and their existence was also investigated by who referred to them as moments , as well as by and .further developments of the linear functional markov chain are provided by and more recently by .starting from the distributional equation ( [ ftchain1 ] ) , a constructive definition of the dirichlet process has been proposed by . if is a dirichlet process on with parameter , then where is a sequence of independent r.v.s identically distributed according to and is a sequence of r.v.s independent of and derived by the so - called stick breaking construction , that is , and for , with being a sequence of independent r.v.s identically distributed according to a beta distribution with parameter .then , equation ( [ ftchain1 ] ) arises by considering where now and for .thus , it is easy to see that is also a dirichlet process on with parameter and it is independent of the pairs of r.v.s .if we would extend this idea to initial samples , we should consider writing where and is a dirichlet process on with parameter independent of the random vectors and .however , this is not an easy extension since the distribution of is unclear , and moreover and are not independent . 
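As a concrete illustration of the recursive identity above, the sketch below iterates the linear-functional version of the Feigin–Tweedie chain for the identity function, taking theta_n ~ Beta(1, a) and Y_n drawn from the normalized base measure, here uniform on [0, 1]. The parameter values and names are chosen for illustration only and are not taken from the text.

```python
import numpy as np

rng = np.random.default_rng(0)
a = 1.0                                    # total mass of the base measure (assumption)
draw_from_base = lambda: rng.uniform()     # normalized base measure alpha(.)/a

def ft_functional_chain(n_steps, z0=0.0):
    """Z_n = (1 - theta_n) Z_{n-1} + theta_n Y_n with theta_n ~ Beta(1, a):
    the linear-functional chain whose invariant law is that of a Dirichlet-process mean."""
    z, cur = np.empty(n_steps), z0
    for i in range(n_steps):
        theta = rng.beta(1.0, a)
        cur = (1.0 - theta) * cur + theta * draw_from_base()
        z[i] = cur
    return z

trace = ft_functional_chain(5000)
print("long-run average of the chain:", trace[1000:].mean())   # close to the base mean 0.5
```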
for this reason , in an alternative distributional equation has been introduced .let be a strictly positive finite measure on with total mass and let be a -valued plya sequence with parameter ( see ) , that is , is a sequence of -valued r.v.s characterized by the following predictive distributions and , for any .the sequence is exchangeable , that is , for any and any permutation of the indexes , the law of the r.v.s and coincide ; in particular , according to the celebrated de finetti representation theorem , the plya sequence is characterized by a so - called de finetti measure , which is the law of a dirichlet process on with parameter . for a fixed integer , let be a random vector distributed according to the dirichlet distribution with parameter , , and let be a r.v . distributed according to a beta distribution with parameter such that , and are mutually independent .moving from such a collection of random elements , lemma 1 in implies that a dirichlet process on with parameter uniquely satisfy the distributional equation where all the random elements on the right - hand side of ( [ ftchain3 ] ) are independent . in order to emphasize the additional parameter , we used an upper - script on the dirichlet process and on the random vector .it can be easily checked that equation ( [ ftchain3 ] ) generalizes ( [ ftchain1 ] ) , which can be recovered by setting . in the present paper ,our aim is to further investigate the distributional equation ( [ ftchain3 ] ) and its implications in bayesian nonparametrics theory and methods .the first part of the paper is devoted to investigate the random element in ( [ ftchain3 ] ) which is recognized to be the random probability measure ( r.p.m . ) at the step of a measure - valued markov chain defined via the recursive identity where is a sequence of independent r.v.s , each distributed according a beta distribution with parameter , for and and the sequence is independent from .more generally , we observe that the measure - valued markov chain defined via the recursive identity ( [ nuovacatena ] ) can be extended by considering , instead of a plya sequence with parameter , any exchangeable sequence characterized by some de finetti measure on and such that is independent from .asymptotic properties for this new class of measure - valued markov chains are derived and some linkages to bayesian nonparametric mixture modelling are discussed . in particular , we remark how it is closely related to a well - known recursive algorithm introduced in for estimating the underlying mixing distribution in mixture models , the so - called newton s algorithm . in the second part of the paper , by using finite and asymptotic properties of the r.p.m . and by following the original idea of , we define and investigate from ( [ ftchain3 ] ) a class of measure - valued markov chain which generalizes the feigin tweedie markov chain , introducing a fixed integer parameter .our aim is in providing features of the markov chain in order to verify if it preserves some of the properties characterizing the feigin tweedie markov chain ; furthermore , we are interested in analyzing asymptotic ( as goes to ) properties of the associated linear functional markov chain with for any and for any function such that . 
in particular , we show that the feigin tweedie markov chain sits in a larger class of measure - valued markov chains parametrized by an integer number and still having the law of a dirichlet process with parameter as unique invariant measure .the role of the further parameter is discussed in terms of new potential applications of the markov chain with respect to the the known applications of the feigin tweedie markov chain . following these guidelines , in section [ sec2 ] we introduce a new class of measure - valued markov chains defined via exchangeable sequences of r.v.s ; asymptotic results for are derived and applications related to bayesian nonparametric mixture modelling are discussed . in section [ secthree ] , we show that the feigin tweedie markov chain sits in a larger class of measure - valued markov chains , which is investigated in comparison with . in section [ sec4 ] , some concluding remarks and future research lines are presented .let be a sequence of independent r.v.s such that almost surely and has beta distribution with parameter for . moreover , let be a sequence of -valued exchangeable r.v.s independent from and characterized by some de finetti measure on .let us consider the measure - valued markov chain defined via the recursive identity in the next theorem , we provide an alternative representation of and show that converges weakly to some limit probability for almost all , _ that is _, for each in some set with .in short , we use notation a.s.- .[ asconv ] let be the markov chain defined by ( [ eqprec ] ) . then : a. an equivalent representation of , is where , has dirichlet distribution with parameter , and and are independent .b. there exists a r.p.m . on such that , as , where the law of is the de finetti measure of the sequence . as far as ( i ) is concerned , by repeated application of the recursive identity ( [ eqprec ] ), it can be checked that , for any , where almost surely and is defined to be 1 when . defining , , it is straightforward to show that has the dirichlet distribution with parameter and , so that ( [ eqqrepr ] ) holds .regarding ( ii ) , by the definition of the dirichlet distribution , an equivalent representation of ( [ eqqrepr ] ) is where is a sequence of r.v.s independent and identically distributed according to standard exponential distribution , independent from .let be any bounded continuous function , and consider the expression in the denominator converges almost surely to 1 by the strong law of large numbers .as far as the numerator is concerned , let be the r.p.m .defined on , such that the r.v.s are independent and identically distributed conditionally on ; the existence of such a random element is guaranteed by the de finetti representation theorem ( see , e.g. , , theorem 1.49 ) .it can be shown that is a sequence of exchangeable r.v.s and , if , where , , , is a r.p.m . with trajectories in , and denotes the random distribution relative to .this means that , conditionally on , is a sequence of r.v.s independent and identically distributed according to the random distribution ( evaluated in ) of course , since is bounded .as in , example 7.3.1 , this condition implies so that a.s.- . by theorem 2.2 in , it follows that a.s.- as . 
throughout the paper, denotes a strictly positive and finite measure on with total mass , unless otherwise stated .if the exchangeable sequence is the plya sequence with parameter , then by theorem [ asconv](i ) is the markov chain defined via the recursive identity ( [ nuovacatena ] ) ; in particular , by theorem [ asconv](ii ) , where is a dirichlet process on with parameter .this means that , for any fixed integer , the r.p.m . can be interpreted as an approximation of a dirichlet process with parameter . in appendix[ appa1 ] , we present an alternative proof of the weak convergence ( convergence of the finite dimensional distribution ) of to a dirichlet process on with parameter , using a combinatorial technique . as a byproduct of this proof , we obtain an explicit expression for the moment of order ) of the -dimensional plya distribution. a straightforward generalization of the markov chain can be obtained by considering a nonparametric hierarchical mixture model .let be a kernel , that is , is a measurable function such that is a density with respect to some -finite measure on , for any fixed , where is a polish space ( with the usual borel -field ) .let be the markov chain defined via ( [ eqprec ] ) .then for each we introduce a real - valued markov chain defined via the recursive identity where by a straightforward application of theorem 2.2 in , for any fixed , when is continuous for all and bounded by a function , as , then where is the limit r.p.m . in theorem [ asconv ] .for instance , if is a dirichlet process on with parameter , is precisely the density in the dirichlet process mixture model introduced by .when is a -integrable function , not only the limit is a random density , but a stronger result than ( [ convdens ] ) is achieved .[ rigo ] if is continuous for all and bounded by a -integrable function , then where is the limit r.p.m . in theorem [ asconv ] . the functions and , defined on , are -measurable , by a monotone class argument . in fact , by kernel s definition , is -measurable .moreover , if , and , then is -measurable .let .since contains the rectangles , it contains the field generated by rectangles , and , since is a monotone class , .the assertion holds for of the form since there exist a sequence of simple function on rectangles which converges pointwise to .therefore , does not converge to .then , by fubini s theorem , hence , where is the set of such that does not converge to . for any fixed in , it holds , -a.e ., so that by the scheff s theorem we have the theorem follows since .we conclude this section by remarking an interesting linkage between the markov chain and the so - called newton s algorithm , originally introduced in for estimating the mixing density when a finite sample is available from the corresponding mixture model .see also see also and .briefly , suppose that are r.v.s independent and identically distributed according to the density function where is a known kernel dominated by a -finite measure on ; assume that the mixing distribution is absolutely continuous with respect to some -finite measure on . proposed to estimate as follows : fix an initial estimate and a sequence of weights .given independent and identically distributed observations from , compute for and produce as the final estimate .we refer to , and for a recent wider investigation of the newton s algorithm . 
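For concreteness, here is a sketch of the recursive update just described, implemented on a grid with a Gaussian kernel; the kernel, the weights w_i = 1/(i+1) and the flat initial estimate are illustrative assumptions rather than the specific choices made in the cited references.

```python
import numpy as np

rng = np.random.default_rng(1)
grid = np.linspace(-6.0, 6.0, 601)         # assumed support of the mixing density
dg = grid[1] - grid[0]

def kernel(x, theta):
    """Gaussian kernel k(x | theta) with unit variance (illustrative choice)."""
    return np.exp(-0.5 * (x - theta) ** 2) / np.sqrt(2.0 * np.pi)

def newton_recursive_estimate(xs, f0, weights):
    """f_i = (1 - w_i) f_{i-1} + w_i k(x_i|.) f_{i-1} / integral k(x_i|u) f_{i-1}(u) du."""
    f = f0.copy()
    for w, x in zip(weights, xs):
        like = kernel(x, grid) * f
        f = (1.0 - w) * f + w * like / (like.sum() * dg)
    return f

# data from a two-component normal mixture, flat initial guess, weights 1/(i+1)
data = np.concatenate([rng.normal(-2.0, 1.0, 200), rng.normal(2.0, 1.0, 200)])
rng.shuffle(data)
f0 = np.full_like(grid, 1.0 / (grid[-1] - grid[0]))
w = 1.0 / (np.arange(1, len(data) + 1) + 1.0)
f_hat = newton_recursive_estimate(data, f0, w)
print("estimated mixing mass near -2 and +2:",
      round(f_hat[np.abs(grid + 2) < 1].sum() * dg, 3),
      round(f_hat[np.abs(grid - 2) < 1].sum() * dg, 3))
```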
herewe show how the newton s algorithm is connected to the measure - valued markov chain .let us consider observations from the nonparametric hierarchical mixture model , that is , and where is a r.p.m .if we observed , then by virtue of ( ii ) in theorem [ asconv ] , we could construct a sequence of distributions for estimating the limit r.p.m . , where is a sequence of independent r.v.s such that almost surely and has beta distribution with parameters .this approximating sequence is precisely the sequence ( [ eqprec ] ) .therefore , taking the expectation of both sides of the previous recursive equation , and defining ] , we have which can represent a predictive distribution for , and hence an estimate for .however , instead of observing the sequence , it is actually the sequence which is observed ; in particular , we can assume that are r.v.s independent and identically distributed according to the density function ( [ mixmod ] ) .therefore , instead of ( [ eqqtilde ] ) , we consider where in ( [ eqqtilde ] ) has been substituted ( or estimated , if you prefer ) by . finally , observe that , if is absolutely continuous , with respect to some -finite measure on , with density , for , then we can write which is precisely a recursive estimator of a mixing distribution proposed by when the weights are fixed to be for and the initial estimate is ] where }{\lambda - a/(1+a)}.\ ] ] here the size of the small set of can be controlled by an additional parameter , suggesting the upper bounds of the rate of convergence of the chain depends on too .however , if we would establish an explicit upper bound on the rate of convergence , we would need results like theorem 2.2 in , or theorems 5.1 and 5.2 in .all these results need a minorization condition to hold for the step transition probability for any and , for some positive integer and all in a small set ; in particular , if , where is the density of and is some density such that , then where is a probability measure on . in order to check the validity of our intuition that the rate of convergence will increase as gets larger , the function should be increasing with in order to prove that the uniform error ( when the support of the s is bounded ) in total variation between the law of given and its limit distribution decreases as increases .if is the density of , which exists since , conditioning on s , is a random dirichlet mean , then therefore , the density function corresponding to is unfortunately , the explicit expression of , which for reduces to the density of if it exists , is not simple ; from proposition 5 in for instance , for , where , when for , here is the distribution of which , by definition , can be recovered by the product rule with and }(y_{i}).\ ] ] however , some remarks on the asymptotic behavior of can be made under suitable conditions . since , if the support of s is bounded ( for instance equal to ] so that for any fixed integer , the chain will be geometrically ergodic ; moreover , it can be proved that is small so that the chain is uniformly ergodic .when , showed that the convergence of is very good and there is no need to consider the chain with .we consider the cases , and , and for each of them we run the chain for , , and .we found that the trace plots do not depend on the initial values . in figure[ figfigure1bis ] , we give the trace plots of when .observe that convergence improves as increases for any fixed value of ; however the improvement is more glaring from the graph for large . 
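The trace plots discussed in this example can be reproduced, at least qualitatively, with a short simulation. Because the displayed definition of the chain is not reproduced here, the sketch below assumes the functional recursion Z_t = theta_t * sum_i q_{t,i} Y_{t,i} + (1 - theta_t) Z_{t-1} with theta_t ~ Beta(n, a), a symmetric Dirichlet weight vector, and a Pólya block of length n from the uniform base measure; these choices are inferred from the surrounding moment computations, not quoted from the text.

```python
import numpy as np

rng = np.random.default_rng(2)
a = 1.0                                     # total mass, as in the example (assumed)

def polya_block(n):
    """(Y_1,...,Y_n) from a Polya sequence with uniform base measure on [0, 1]."""
    ys = []
    for i in range(n):
        if i == 0 or rng.uniform() < a / (a + i):
            ys.append(rng.uniform())        # fresh draw from the base measure
        else:
            ys.append(ys[rng.integers(i)])  # repeat a previously drawn value
    return np.array(ys)

def mean_functional_chain(n, n_steps, z0=2.0):
    """Trace of the mean functional under the assumed generalized recursion."""
    z, cur = np.empty(n_steps), z0
    for t in range(n_steps):
        theta = rng.beta(n, a)
        q = rng.dirichlet(np.ones(n))
        cur = theta * float(q @ polya_block(n)) + (1.0 - theta) * cur
        z[t] = cur
    return z

for n in (1, 5, 20):
    tr = mean_functional_chain(n, 400)
    print(f"n = {n:>2}: last few values {np.round(tr[-4:], 3)}")

# larger n forgets the deliberately bad starting value z0 = 2 much faster, while
# the stationary values keep fluctuating around the base mean 0.5
```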
when the convergence of the chain for seems to occur at about , while for convergence is at about a value between and .for these values of , the total user times to reach convergence was 0.038 seconds for the former , and 0.066 seconds for the latter .moreover , the total user times to simulate 500 iterations of were 0.05 , 0.071 , 0.226 , 0.429 , 2.299 seconds when , respectively.=1 with the uniform distribution on , and ( solid blue line ) , ( dashed red line ) , ( dotted black line ) and ( dot - dashed violet line ) . ]this behaviour is confirmed in the next example , where the support of the measure is assumed to be unbounded .[ ex2 ] let be a gaussian distribution with parameter and let .the behavior of has been considered in .figure [ figfigura2 ] displays the trace plots of for three different initial values ( ) , with .also in this case , it is clear that the convergence improves as increases .as far as the total user times are concerned , we drew similar conclusions than in example [ ex1 ] .with the gaussian distribution with parameter , , and ( dashed red line ) , ( solid blue line ) and ( dotted black line ) . ]the next is an example in the mixture models context .[ ex3 ] let us consider a gaussian kernel with unknown mean and known variance equal to 1 .if we consider the random density , where is a dirichlet process with parameter , then , for any fixed , is a random dirichlet mean .therefore , if we consider the measure - valued markov chain defined recursively as in ( [ ftchain4 ] ) , we define a sequence of random densities , where . in each panel of figure[ figfigura3 ] , we drew for different values of when is fixed .in particular , we fixed to be a gaussian distribution with parameter , and let ; in this case , since the `` variance '' of is small , the mean density (x)=\int_{\mathbb{r } } k(x,\theta)\alpha_0(\mathrm{d}\theta) ] , while , if , , as well as the successive iterations , is a good approximation of .as in example [ ex3 ] ; , , and . ] in any case , observe that even if the `` true '' density is unknown , as when , the improvement ( as increases ) is clear as well ; see figure [ figfigura4 ] , where 5 draws of , , are plotted for different values of . as in example[ ex3 ] ; , , and . ]the paper constitutes , as far as we know , the first work highlighting the interplay between bayesian nonparametrics on the one side and the theory of measure - valued markov chains on the other . in the present paper , we have further studied such interplay by introducing and investigating a new measure - valued markov chain defined via exchangeable sequences of r.v.s . two applications related to bayesian nonparametricshave been considered : the first gives evidence that is strictly related to the newton s algorithm of the mixture of dirichlet process model , while the second shows how can be applied in order to a define a generalization of the feigin tweedie markov chain .an interesting development consists in investigating whether there are any new applications related to the feigin tweedie markov chain apart from the well - known application in the field of functional linear functionals of the dirichlet process ( see , e.g. 
, ) .the proposed generalization of the feigin tweedie markov chain represents a large class of measure - valued markov chains maintaining all the same properties of the feigin tweedie markov chain ; in other terms , we have increased the flexibility of the feigin tweedie markov chain via a further parameter .we believe that a number of different interpretations for can be investigated in order to extend the applicability of the feigin tweedie markov chain . in this respect ,an intuitive and simple extension is related to the problem of defining a ( bivariate ) vector of measure - valued markov chains , where , for each fixed , is a vector of dependent random probabilities , being fixed positive integers .marginally , the two sequences and are defined via the recursive identity ( [ ftchain4 ] ) ; the dependence is achieved using the same plya sequence and assuming dependence in or between and .for instance , if , for each , are independent r.v.s , with an exponential distribution with parameter , with an exponential distribution with parameter 1 , we could define , .of course , the dependence is related to the difference between and .work on this is ongoing .a proof of the weak convergence of the sequence of r.p.m.s on to a dirichlet process is provided here , when the s are a plya sequence with parameter measure .the result automatically follows from theorem [ asconv](ii ) , but this proof is interesting _ per se _ , since we use a combinatorial technique ; moreover an explicit expression for the moment of order ) of the -dimensional plya distribution is obtained .let defined in ( [ eqprec ] ) , where are a plya sequence with parameter .then where is a dirichlet process on with parameter . by theorem 4.2 in ,it is sufficient to prove that for any measurable partition of , characterizing the distribution of the limit . forany given measurable partition of , by conditioning on , it can be checked that is distributed according to a dirichlet distribution with empirical parameter , and \\ [ -8pt ] & & \quad={n\choose j_{1}\cdots j_{k}}\frac{(\alpha(b_{1}))_{j_{1}\uparrow1 } \cdots(\alpha(b_{k}))_{j_{k}\uparrow1}}{(a)_{n\uparrow1 } } , \nonumber\end{aligned}\ ] ] where with . for any -uple of nonnegative integers , we are going to compute the limit , for , of the moment \nonumber\\ [ -8pt]\\ [ -8pt ] & & \hspace*{-5pt}\quad= \sum_{(j_{1},\ldots , j_{k})\in\mathcal{d}^{(0)}_{k , n } } { n\choose j_{1}\cdots j_{k } } \frac{(\alpha(b_{1}))_{j_{1}\uparrow1 } \cdots(\alpha(b_{k}))_{j_{k}\uparrow1}}{(a)_{n\uparrow1 } } \frac{(j_{1})_{r_{1}\uparrow1}\cdots(j_{k})_{r_{k}\uparrow1 } } { ( n)_{(r_{1}+\cdots+r_{k})\uparrow1}},\nonumber\end{aligned}\ ] ] where in general denotes the pochhammer symbol for the factorial power of with increment , that is .we will show that , as , \\ & & \quad\rightarrow{\mathbb{e}}\biggl[(p(b_1)\big)^{r_1}\cdots(p(b_{k-1}))^{r_{k-1 } } \biggl ( 1-\sum_{i=1}^{k-1}p(b_{i})\biggr)^{r_k}\biggr],\end{aligned}\ ] ] where is a dirichlet process on with parameter measure , that is , the r.v . has dirichlet distribution with parameter . 
this will be sufficient to characterize the distribution of the limit , because of the boundedness of the support of the limit distribution .first of all , we prove the convergence for , which corresponds to the one - dimensional case .in particular , we have \\ & & \quad= \frac{1}{(n)_{(r_{1}+r_{2})\uparrow1}}\sum_{(j_{1},j_{2})\in\mathcal{d}^{(0)}_{2,n } } { n\choose j_{1},j_{2 } } \frac{(\alpha(b_{1}))_{j_{1}\uparrow1}(\alpha(b_{2}))_{j_{2}\uparrow1 } } { ( a)_{n\uparrow1 } } ( j_{1})_{r_{1}\uparrow1}(j_{2})_{r_{2}\uparrow1}\\ & & \quad=\frac{1}{(n)_{(r_{1}+r_{2})\uparrow1}}\sum_{t_{1}=1}^{r_{1}}|s(r_{1},t_{1})|\sum_{s_{1}=1}^{t_{1}}s(t_{1},s_{1 } ) \sum_{t_{2}=1}^{r_{2}}|s(r_{2},t_{2})|\sum_{s_{2}=1}^{t_{2}}s(t_{2},s_{2})\\ & & \qquad{}\times\sum_{(j_{1},j_{2})\in\mathcal{d}^{(0)}_{2,n } } { n\choose j_{1},j_{2 } } \frac{(\alpha(b_{1}))_{j_{1 } \uparrow1}(\alpha(b_{2}))_{j_{2}\uparrow1}}{(a)_{n\uparrow1 } } ( j_{1})_{s_{1}\downarrow1}(j_{2})_{s_{2}\downarrow1},\end{aligned}\ ] ] where and and are the stirling number of the first and second kind , respectively .let us consider the following numbers , where , are nonnegative integers and and prove they satisfy a recursive relation .in particular , so that the following recursive equation holds therefore , starting from , , we have and by ( [ rec1 ] ) we obtain .thus , \\ & & \quad = \sum_{t_{1}=0}^{r_{1}}|s(r_{1},t_{1})|\sum_{s_{1}=0}^{t_{1}}s(t_{1},s_{1})\sum_{t_{2}=0}^{r_{2}}|s(r_{2},t_{2})|\sum_{s_{2}=0}^{t_{2}}s(t_{2},s_{2})\lim_{n\rightarrow+\infty}\frac{c_{n}^{(s_{1},s_{2})}}{(n)_{(r_{1}+r_{2})\uparrow1 } } \\ & & \quad = |s(r_{1},r_{1})|s(r_{1},r_{1})|s(r_{2},r_{2})|s(r_{2},r_{2})\frac{(\alpha(b_{1}))_{r_{1}\uparrow1 } ( \alpha(b_{2}))_{r_{2}\uparrow1 } } { ( a)_{(r_{1}+r_{2})\uparrow1}}\\ & & \quad=\frac{(\alpha(b_{1}))_{r_{1}\uparrow1}(\alpha(b_{2}))_{r_{2}\uparrow1 } } { ( a)_{(r_{1}+r_{2})\uparrow1}}.\end{aligned}\ ] ] the last expression is exactly ] and it has at least one finite invariant measure if it is a weak feller markov chain .in fact , for a fixed since the distribution of has at most a countable numbers of atoms and is absolutely continuous . from proposition 4.3 in , if we show that is -irreducible for a finite measure , then the markov chain is positive recurrent and the invariant measure is unique .let us consider the following event .then for a finite measure we have to prove that if , then for any .we observe that therefore , since , using the same argument in lemma 2 in , we conclude that for a suitable measure such that . finally , we prove the aperiodicity of by contradiction . if the chain is periodic with period , the exist disjoint sets such that for all and for .this implies for almost every with respect to the lebesgue measure restricted to .thus , for . for generic and ,this is in contradiction with the assumption . by theorem 13.3.4(ii ) in , converges in distribution for -almost all starting points .in particular , converges weakly for -almost all starting points . from the arguments above, it follows that , for all bounded and continuous , there exists a r.v . such that as for -almost all starting points .therefore , for lemma 5.1 in there exists a r.p.m . such that as and for all .this implies that the law of is an invariant measure for the markov chain .then , as , and the limit is unique for any . since for any random measure and we know that if and only if for any ( see theorem 3.1 . in ) , the invariant measure for the markov chain is unique . 
by the definition of , it is straightforward to show that the limit must satisfy ( [ ftchain3 ] ) so that is the dirichlet process with parameter .proof of theorem [ thmharrisfunct ] the proof is a straightforward adaptation of the proof of theorem 2 in , using & \leq & \sum_{i=1}^{n}\log\bigl(1+|g(y_{1,i})|\bigr)+ \log\bigl(1+(1-\theta_{1})\big|g^{(n)}_{0}|\bigr)\end{aligned}\ ] ] instead of their inequality ( 8) . proof of theorem [ thmgeoergfunct ] as regards ( i ) , given the definition of stochastically monotone markov chain , we have that for , , & \geq & \mathbb{p}\biggl(\theta_{1}\sum_{i=1}^{n}q_{1,i}^{(n)}y_{1,i}+(1-\theta_{1})z_{2}<a\biggr)\\[-2pt ] & = & p^{(n)}_{1}(z_{2},(-\infty , s)).\end{aligned}\ ] ] as far as ( ii ) is concerned , we first prove that , under condition ( [ eqfinitmeanalpha0 ] ) , the markov chain satisfies the foster lyapunov condition for the function .this property implies the geometric ergodicity of the ( see , chapter 15 ) .we have \\[-2pt ] & \leq & 1+{\mathbb{e}}[\theta_{1}]\sum_{i=1}^{n}e\bigl[\bigl|q_{1,i}^{(n)}y_{1,i}\bigr|\bigr]+|x|{\mathbb{e}}[1-\theta_{1}]\\[-2pt ] & \leq & 1+\frac{n}{n+a}\sum_{i=1}^{n}{\mathbb{e}}\bigl[\bigl|q_{1,i}^{(n)}y_{1,i}\bigr|\bigr]+\frac{a}{n+a}|x|=1+\frac{n}{n+a}{\mathbb{e}}[|y_{1,1}|]+\frac{a}{n+a}|x|.\end{aligned}\ ] ] therefore , we are looking for the small set such that the foster lyapunov condition holds , that is , a small set such that +\frac{a}{n+a}|x|\leq\lambda(1+|x|)+b\mathbbl{1}_{c^{(n)}}(x)\ ] ] for some constant and .if ] , it is straightforward to prove that the foster lyapunov condition holds for some constant , and such that =\frac{\gamma(a+s)\gamma(a+n)}{\gamma(a)\gamma(a+s+n)}<\lambda<1\ ] ] and for some compact set .of course ( [ eqjar ] ) implies <+\infty; ] .the authors are very grateful to patrizia berti and pietro rigo who suggested the proof of theorem [ rigo ] , and to eugenio regazzini for helpful discussions .the authors are also grateful to an associate editor and a referee for comments that helped to improve the presentation .the second author was partially supported by miur grant 2006/134525 .
measure - valued markov chains have raised interest in bayesian nonparametrics since the seminal paper by ( _ math . proc . cambridge philos . soc . _ * 105 * ( 1989 ) 579585 ) where a markov chain having the law of the dirichlet process as unique invariant measure has been introduced . in the present paper , we propose and investigate a new class of measure - valued markov chains defined via exchangeable sequences of random variables . asymptotic properties for this new class are derived and applications related to bayesian nonparametric mixture modeling , and to a generalization of the markov chain proposed by ( _ math . proc . cambridge philos . soc . _ * 105 * ( 1989 ) 579585 ) , are discussed . these results and their applications highlight once again the interplay between bayesian nonparametrics and the theory of measure - valued markov chains . ,
the ability to communicate secretly is considered one of the most important challenges of the information era. for all practical purposes most modern public key systems can be considered secure. however, this security is not based on any mathematical proof but on the belief that there is no classical algorithm to factor huge numbers into their prime factors in a reasonable amount of time. some classical ciphers, such as the one-time pad, do not have the aforementioned problem. they are private key protocols whose security is entirely based on a random string of bits, the key, which only the sender (alice) and the receiver (bob) should know. once the key is secretly transmitted the communication is absolutely secure. the drawback with these ciphers is that any key transmitted through a classical channel can be passively monitored. although it may be technologically difficult to get the key without being noticed, it is in principle possible. the solution of the key transmission problem based on the laws of physics was presented for the first time by bennett and brassard in a seminal work. making use of quantum channels (polarized photons), the authors showed theoretically that a secret key can be shared with absolute security if the laws of quantum mechanics are correct. they have shown that any interference of an eavesdropper (eve) on the quantum channel can be detected by alice and bob at the end of the protocol. other interesting schemes were later proposed, in particular the e91 protocol, which was the first qkd scheme that employed maximally entangled states. for an extensive review of other protocols and of experimental feasibility we refer the reader to ref. . in this contribution we present a new qkd scheme that directly uses non-maximally entangled states (no entanglement concentration needed) and the probabilistic quantum teleportation protocol (pqt). nevertheless, the present scheme resembles the bb84 rather than the e91 protocol, i.e.
, although we make use of entanglement there is no need to check for violation of any bell inequality to assure the security of the shared key .it is the probabilistic aspect of the pqt that guarantees the security of the teleported key and , as we show later , also allows it to be encoded in a set of orthogonal states .note that in bb84-like protocols it is mandatory to encode the key in non - orthogonal states to make sure its transmission is secure .in contrast to other protocols , where departure from maximal entanglement makes them inoperable , our scheme exploits partial entanglement and , using a special generalized bell measurement , ensures flawless key distribution .furthermore , this new qkd scheme takes advantage of a new free parameter , namely the degree of partial entanglement , that enables more control over the security and transmission rate of the protocol .indeed , as explained later , this freedom allows us to introduce a minor modification in the qkd scheme that turns it into a controlled qkd protocol , where a third party ( charlie ) has the final word on whether or not alice and bob are allowed to share a secret key , even after all steps of the protocol were implemented .it is also worth mentioning that charlie decides whether or not alice and bob will share a key without ever knowing it , a feature that has practical applications .we also show other possible interesting extensions of the basic protocol and how we can improve its security and the reliability of the transmitted key .one important ingredient in this qkd protocol , and the one that allows it to depart from e91-like protocols , is the use of _ partially _ entangled states to transmit the secret key from one party to the other . indeed , as we will show , by playing with different kinds of partially entangled states and with different joint measurement basis , alice can teleport to bob a secret key .the other ingredient is , as anticipated in the last sentence , the proper use of a probabilistic teleportation protocol , which allows us to harness the teleporting power of a non - maximally pure entangled state .let us start by recalling the pqt as developed in ref . and extended in ref . . as usual, alice wants to teleport the following qubit to bob , where and are arbitrary complex numbers such that + .contrary to the original proposal alice and bob now share a non - maximally entangled state , with and , which naturally leads to the following orthonormal basis , & = & ( + m ) / , [ phiplus ] + & = & ( m^ * - ) / , + & = & ( + m ) / , + & = & ( m^ * - ) / , [ psiminus ] with denoting the complex conjugate of and . using the generalized bell states ( gbs ) abovewe can rewrite the three qubit state belonging to alice and bob as , , with the first two qubits being with alice and the last one with bob .alice now proceeds by implementing a generalized bell measurement ( gbm ) , i.e. , she projects her two qubits onto one of the four gbs . discusses three possible ways to experimentally implement a gbm . 
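Since the generalized Bell states and the associated measurement are central to what follows, a small numerical sketch may help: it builds the basis of eqs. ([phiplus])-([psiminus]) for real m, runs the probabilistic teleportation once, and checks which outcomes reproduce the input when the measurement parameter matches the channel parameter. Function names and the choice of real parameters are illustrative; this is not code from the original work.

```python
import numpy as np

def ket(*bits):
    v = np.array([1.0])
    for b in bits:
        v = np.kron(v, np.eye(2)[b])
    return v

def gbs(m):
    """Generalized Bell basis with real parameter m, as in eqs. ([phiplus])-([psiminus])."""
    norm = np.sqrt(1.0 + m * m)
    return {"phi+": (ket(0, 0) + m * ket(1, 1)) / norm,
            "phi-": (m * ket(0, 0) - ket(1, 1)) / norm,
            "psi+": (ket(0, 1) + m * ket(1, 0)) / norm,
            "psi-": (m * ket(0, 1) - ket(1, 0)) / norm}

X = np.array([[0.0, 1.0], [1.0, 0.0]])
Z = np.diag([1.0, -1.0])
CORRECTION = {"phi+": np.eye(2), "phi-": Z, "psi+": X, "psi-": Z @ X}

def pqt(alpha, beta, m, n):
    """Outcome probabilities and Bob's corrected qubit for channel parameter n and
    measurement parameter m (an illustrative reconstruction of the pqt, not the paper's code)."""
    state = np.kron(alpha * ket(0) + beta * ket(1),                    # Alice's unknown qubit
                    (ket(0, 0) + n * ket(1, 1)) / np.sqrt(1 + n * n))  # shared channel
    joint = state.reshape(4, 2)                                        # (Alice's pair, Bob's qubit)
    results = {}
    for name, b in gbs(m).items():
        bob = b.conj() @ joint                     # unnormalized post-measurement state of Bob
        p = float(np.vdot(bob, bob).real)
        results[name] = (p, CORRECTION[name] @ bob / np.sqrt(p))
    return results

for name, (p, psi) in pqt(0.6, 0.8, m=0.5, n=0.5).items():             # matched bases, m = n
    print(f"{name}: p = {p:.4f}, Bob's qubit = {np.round(psi, 4)}")

# the 'phi-' and 'psi+' outcomes return exactly 0.6|0> + 0.8|1>, each with
# probability n^2/(1+n^2)^2; the other two outcomes distort the state
```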
the probability to obtain a given gbs is
\[
\begin{aligned}
P_{\phi^{+}_{m}} &= \bigl(|\alpha|^{2}+|mn\beta|^{2}\bigr)/\bigl[(1+|m|^{2})(1+|n|^{2})\bigr],\\
P_{\phi^{-}_{m}} &= \bigl(|m\alpha|^{2}+|n\beta|^{2}\bigr)/\bigl[(1+|m|^{2})(1+|n|^{2})\bigr],\\
P_{\psi^{+}_{m}} &= \bigl(|n\alpha|^{2}+|m\beta|^{2}\bigr)/\bigl[(1+|m|^{2})(1+|n|^{2})\bigr],\\
P_{\psi^{-}_{m}} &= \bigl(|mn\alpha|^{2}+|\beta|^{2}\bigr)/\bigl[(1+|m|^{2})(1+|n|^{2})\bigr].
\end{aligned}
\]
alice then sends bob the result of her measurement (two bits) via a classical channel, whereupon he applies a unitary transformation on his qubit according to this information. these transformations are the same ones given in the original teleportation protocol: if alice gets $\phi^{+}_{m}$ then bob does nothing, if she gets $\phi^{-}_{m}$ he applies a $\sigma_{z}$ operation, if alice measures $\psi^{+}_{m}$ then bob applies $\sigma_{x}$, and finally for $\psi^{-}_{m}$ he applies $\sigma_{z}\sigma_{x}$. here, $\sigma_{x}$ and $\sigma_{z}$ are the usual pauli matrices ($\sigma_{x}|0\rangle=|1\rangle$, $\sigma_{x}|1\rangle=|0\rangle$ and $\sigma_{z}|0\rangle=|0\rangle$, $\sigma_{z}|1\rangle=-|1\rangle$). after the correct transformation bob's qubit is given by one of the following possibilities,
\[
\begin{aligned}
&\frac{\alpha|0\rangle+m^{*}n\,\beta|1\rangle}{\sqrt{|\alpha|^{2}+|mn\beta|^{2}}}, &[a]\\
&\frac{m\,\alpha|0\rangle+n\,\beta|1\rangle}{\sqrt{|m\alpha|^{2}+|n\beta|^{2}}}, &[b]\\
&\frac{n\,\alpha|0\rangle+m^{*}\beta|1\rangle}{\sqrt{|n\alpha|^{2}+|m\beta|^{2}}}, &[c]\\
&\frac{mn\,\alpha|0\rangle+\beta|1\rangle}{\sqrt{|mn\alpha|^{2}+|\beta|^{2}}}. &[d]
\end{aligned}
\]
from now on, and without loss of generality, we consider $m$ and $n$ to be real quantities. therefore, looking at eqs. ([b]) and ([c]) we realize that if $m=n$ the state with bob is $\alpha|0\rangle+\beta|1\rangle$ and the protocol works perfectly. there exist other possibilities, which come from eqs. ([a]) and ([d]), namely when $mn=1$. but this is only possible if we have maximally entangled states since that relation implies $m=n=1$. for $m=n$ the probability of success is simply
\[
P_{\mathrm{suc}} = P_{\phi^{-}_{n}}+P_{\psi^{+}_{n}} = \frac{2n^{2}}{(1+n^{2})^{2}}. \qquad [\mathrm{probsuc}]
\]
therefore, if alice knows the entanglement of the channel, which increases monotonically with $n$, she can match her measurement basis ($m=n$) in order to make the protocol work with probability $2n^{2}/(1+n^{2})^{2}$. it is important to note that if $m\neq n$ bob obtains a different state and the protocol fails. it is this property that we explore in order to build our qkd scheme. let us assume that bob prepares with equal probability two partially entangled states, one with entanglement parameter $n_{1}$ and the other with $n_{2}$, where $n_{1}\neq n_{2}$. for each state he keeps one qubit and sends the other one to alice (see fig. [fig1]). both parties previously agreed on the values of $n_{1}$ and $n_{2}$, but at this stage bob does not tell alice the respective value of $n$ for each entangled state he prepared. alice, on the other hand, prepares randomly two types of single qubit states, , which are to be associated with the secret key she wants to share with bob. for example, the parties use the convention that represents the bit and that the bit . note that we do not need another encoding for the bits and that is non-orthogonal to the previous one, as required by the bb84 protocol. and . this would increase the security of the present protocol at the expense of the key transmission rate. however, as it is clear from the security analysis, this is not mandatory.] alice then uses each qubit received from bob to implement the pqt for each one of her randomly generated states. in doing so, she also chooses in a random way whether to project each pair of qubits (hers and bob's) onto the gbs with $m=n_{1}$ or $m=n_{2}$. alice, however, does not inform bob of the value of $m$ but only which gbs she gets.
at the end of this stage bobknows what her measurement results were ( but not ) , which allows him to implement the right unitary operation on his qubits .after that , each one of his qubits are described by one of the four states given by eqs .( [ a])-([d ] ) , with .the last six steps of the protocol are as follows .first , bob projects his qubits onto the basis .second , bob and alice reveal in a public channel the following information .bob tells alice which value of ( or ) he has assigned to each partially entangled state whilst alice tells him the value of ( or ) for each gbm she made .third , they keep all the cases where she has rightly matched the entanglement of the channel with the entanglement of the measuring basis , i.e. , whenever the shared entangled state was and alice chose , . fourth , they discard all the other cases since the pqt fails there ( ) .fifth , within the cases where the matching condition is satisfied , bob and alice keep only those instances where her measurement results were or , the so called successful runs of the pqt .for those , and only those runs of the protocol , alice and bob are sure they agree on the teleported state and consequently on the random string of zeros and ones .finally , the last step consists of using half of the successful cases to test whether or not eve has tried to tamper with the key . in an idealized situation , i.e. , perfect detectors and no noise , they simply broadcast half of their valid results in a public classical channel and check if they always get the same bits .if they do , the remaining half of bits are their secret key . if they fail to agree on the public data , they discard everything and repeat the whole protocol again .however , noise and non - ideal detectors will introduce some errors even when all the steps of the protocol are successful .nevertheless , alice and bob can still achieve any desired level of security by increasing the size of the shared key and employing classical reconciliation protocols and privacy amplification techniques already developed for other qkd schemes .assuming an ideal scenario , for instance , excellent detectors and efficient measurement processes , we can calculate the maximum rate of how many teleported qubits constitute the final secret key .we know that alice implements the pqt half of the times making a gbm with and half with .therefore , the total probability of success for all pqt is , according to eq .( [ probsuc ] ) , but half of the successful cases are discarded to check for the presence of eve , and the final rate becomes on the other hand , if we look at the bb84 protocol , we see that half of the times bob measures the qubits he receives from alice using the same basis she employed to prepare them and , within these successful runs , the other half is used to test for the security of the protocol .this gives us , assuming no loss during the transmission of the qubit and ideal detectors , a total idealized rate of . returning to the protocol presented here , it is not difficult to see that , no matter what the values of and are .( if they are equal to one we have but then the protocol is useless . )however , for modest values of and ( a little greater than ) we get rates above .if we allow one of them to approach unity we do even better . for example , if we have and we already obtain rates higher than . there exists , nevertheless , an important feature that we can easily achieve employing this protocol that is unattainable using the bb84 protocol. 
we can transform it into a sort of controlled qkd scheme introducing another party ( charlie ) who can decide whether or not alice and bob are allowed to share a secret key even after they finished all steps of the protocol . in order to do that, we let charlie prepare and distribute the entangled states and to alice and bob . hence, if charlie publicly announces the values of for each entangled state he prepared he can make the protocol work without ever knowing the key .otherwise , if he does not broadcast this information , the protocol ultimately fails .note that the probability of success in this scenario , assuming charlie broadcast all the values of for each entangled pair he prepares , is the same we had before , eq .( [ finalrate ] ) .his role here is simply to distribute the entangled states between alice and bob , without changing the final success rate for the protocol .we also remark that a similar third party control can be achieved using a different qkd scheme based on maximally entangled bell states . at this pointwe wish to emphasize the main differences between the present scheme and the bb84 and the e91 protocol .as described above , here we can achieve a level of third party control that is unattainable using the former two protocols .this is an important and practical characteristic of this scheme that , as we show below , can also be extended to a fourth , fifth , , -th party level of control .moreover , in the present protocol the key is never transmitted from alice to bob as in the bb84 protocol .rather , it is teleported from one party to the other , which gives an additional flexibility for this protocol in its third party formulation . indeed , once alice and bob have shared the partially entangled states distributed by charlie they can easily exchange their roles . instead of alice teleporting the key to bob ,he is the one who teleports the key to her . also , contrary to the e91 protocol where a maximally entangled state is directly responsible to the generation of the secret key , here we use a non - maximallyentangled state as a channel through which the key is teleported .in other words , the non - maximally entangled states of the present scheme have no direct role on the generation of the secret key .the security of this protocol is based on the same premises of the bb84 protocol and , therefore , we can understand the security of the former by recalling the security analysis of the latter . 
the key ingredient here is the recognition that there are two unknown sets of actions throughout the implementation of the bb84 protocol that are only publicly revealed at the end of it : the basis in which alice prepared her qubits and the basis in which bob measured the qubits received from alice .a similar thing happens for the present protocol .we have two unknown sets of actions throughout each run of the protocol that are revealed only at its end : the entanglement of the shared qubits between alice and bob and what basis alice used to implement the gbm .this lack of information prevents eve from always obtaining the right bit being sent from alice to bob without being noticed .as we show below , the laws of quantum mechanics forbid eve from acquiring information about the key being transmitted without disturbing the quantum state carrying it if she does not know which entangled state is shared between alice and bob .let us assume , for definiteness , that in one of the runs of the protocol alice prepared the state and that eve , somehow , replaced the entangled state produced by bob with one produced by her .eve wants the state prepared by alice to be teleported to her . by doingso she thinks she can obtain information on the key .however , she does not know which gbm alice implemented to perform the pqt ( or ) .this information is only revealed after bob confirms he measured his qubit .she only knows that the measurement result of alice is , say , .( this is the best scenario for eve . )therefore , eve s qubit is described by eq .( [ b ] ) , which can be written as , .\label{stateeve}\ ] ] looking at eq .( [ stateeve ] ) we see that unless eve guessed correctly alice s choice for ( and this only happens half of the times ) , preparing the right entangled state with , we have a superposition of the states and .this implies that she obtains the wrong bit being transmitted with probability . in other words , since we have a superposition of the right and wrong answers quantum mechanics forbids eve from always getting the right one with a single measurement .it is clear now that this is similar to the argument used to prove the security of the bb84 protocol .hence , no matter what eve does , if she prepared the wrong entangled state and bob the right one , she will be caught trying to tamper with the key when alice and bob publicly compare part of it .this is true since eve can not with certainty send bob another qubit which mimics the right one .furthermore , eq . 
( [ stateeve ] ) tells us that the greater the difference between and the more likely will eve be detected .we can see this by noting that increases as a function of or , equivalently , as a function of the difference in entanglement between the channels .lastly , the chances of eve being caught also increases with the size of the string of bits being publicly announced .we can also estimate the optimal range of parameters ( and ) for this protocol , assuming we want to maximize the transmission rate while at the same time minimizing the chances of eve guessing the correct qubit being teleported .in other words , we want to maximize a function proportional to , where is , as given above , the probability of eve guessing the wrong qubit and is eq .( [ finalrate ] ) , the total rate of success in the transmission of the key .both and are now considered functions of and .a simple numerical analysis shows that the best strategies occur for and ( and vice - versa ) , while the worst cases for alice and bob occur when .note that the security check outlined above is an idealization . in real - life situations we always have noise and imperfect devices that give wrong answers even in the absence of eve .however , this can be controlled using classical reconciliation protocols and privacy amplification . a more detailed security analysis based on bounds for the mutual information between alice , bob , and eve using , for example , the techniques of refs . , is beyond our goals here and is left for future work .the qkd scheme presented here is very versatile and allows for arbitrary control over the protocol parameters .this can be achieved by introducing two extensions , where one increases its security and the other increases the distance of reliable transmission of the key .the security of the protocol is increased by allowing bob to generate more than two partially entangled states .for example , instead of just creating the states , , he can create three or more states with different . with only two states ,eve can guess the right gbm in half of the successful runs of the pqt .however , with more entangled states , her chances are reduced to , where is the number of partially entangled states produced by bob . on the other hand , this increase in security reduces the transmission rate of the key since it becomes less likely that alice and bob achieve the matching condition ( ) .to extend the distance of reliable key transmissions we can use quantum repeater stations . in this scenarioit is the first station ( the closest to alice ) that generates the partially entangled states and then publicize the values of , only after bob measures his qubits .the other stations use maximally entangled states to successively teleport alice s qubit to bob . note that security increases if other repeater stations use partially entangled states too .the repeater stations can also be used to extend the third party control described before to any number of parties . 
indeed , if we allow each station to freely choose its own partially entangled states we are increasing the number of parties that can decide whether or not alice and bob will share a secret key .this is true for the protocol will work if , and only if , all the repeater stations disclose to alice and bob which partially entangled states they generated at each run of the protocol .while noise and decoherence of entangled qubits usually result in mixed states , partially - entangled states used in the aforementioned qkd protocol can be considered in the scenario of coupling to a zero - temperature bath . in this regime dynamical control of decoherence one to determine the amount of partial entanglement of the channel by properly tuning the relative decoherence between the qubits .thus , the party sending the partially entangled states ( either bob in the standard qkd or charlie in the controlled version ) can select the degree of partial entanglement and is not restricted by the amount of noise in the system .we showed that partially entangled states are useful resources for the construction of a direct qkd scheme by the proper use of probabilistic teleportation protocols .this has an interesting implication on the practical implementation of entangled based qkd schemes , as it is extremely difficult to produce maximally entangled qubits . using the protocol presented here, one can alleviate the experimental demands on the production of entangled pairs without rendering qkd inoperable .furthermore , the present partially entangled state based qkd scheme is flexible enough that we were able extend it in at least three directions , each one augmenting its usability .the first one turned the protocol into a controlled qkd scheme , where a third party decides whether or not alice and bob are able to share a secret key .then we demonstrated how one can increase its security by letting the parties use more and more different partially entangled states to implement one of the steps of the qkd protocol , namely , the probabilistic teleportation protocol .and third , we discussed how the use of quantum repeaters extends the distance of reliable key transmission without diminishing the key rate .finally , the present qkd protocol naturally leads to new interesting questions .for instance , can we increase the key transmission rate using partially entangled qudits instead of qubits ? is there any possible way of devising a similar approach using mixed entangled states ? or using continuous variable entangled systems ?can we do better by using different types of entangled qubit - like states , such as the cluster state or the cluster - type coherent entangled states ?it is our hope that the ideas presented here might lead to clues on how to answer these questions .g. r. thanks the brazilian agency coordenao de aperfeioamento de pessoal de nvel superior ( capes ) for funding this research . c. h. bennett and g. brassard , _quantum cryptography : public key distribution and coin tossing _ ( proceedings of ieee international conference on computers systems and signal processing : bangalore , india , 1984 )
we show that non - maximally entangled states can be used to build a quantum key distribution ( qkd ) scheme where the key is probabilistically teleported from alice to bob . this probabilistic aspect of the protocol ensures the security of the key without the need of non - orthogonal states to encode it , in contrast to other qkd schemes . also , the security and key transmission rate of the present protocol are nearly equivalent to those of standard qkd schemes and these aspects can be controlled by properly harnessing the new free parameter in the present proposal , namely , the degree of partial entanglement . furthermore , we discuss how to build a controlled qkd scheme , also based on partially entangled states , where a third party can decide whether or not alice and bob are allowed to share a key . quantum communication , quantum cryptography and communication security , entanglement production and manipulation 03.67.hk , 03.67.dd , 03.67.bg
spontaneous parametric down - conversion ( spdc ) has proven to be an excellent technology for quantum communication , with spdc photons functioning as `` flying qubits . ''the discovery by knill _et al . _ that linear optics and single photon detectors are sufficient for scalable quantum computation has opened the possibility that spdc may also be useful for quantum computation .as the proposals for quantum information processing with spdc become more sophisticated , the technical demands placed on spdc sources become more stringent . for example , a quantum key distribution experiment based on polarization entanglement requires that two two - photon amplitudes ( and , for example ) be made indistinguishable .the spectral properties of the two photons are only important if they are correlated to the polarization degree of freedom . a more stringent form of indistinguishability is typically required for quantum computation with linear optics : it must be impossible to determine which source produced a certain photon after that photon emerges from a beamsplitter .this in turn requires that all of the photons properties be controlled such that it is impossible to learn any information about the identity of a given photon s source .photon pairs produced by spdc are often correlated in one or more of their properties ( frequency , direction , etc . ) .these correlations can destroy the requisite indistinguishability by enabling one to learn about one photon by performing measurements on its twin . in this paper, we describe an spdc source that produces photon pairs that have arbitrary correlation in frequency .the source we propose enables an unusual flexibility in the control of the marginal spectra of the spdc photons .specifically , our source can produce frequency - uncorrelated photon pair in which the center frequency and the bandwidth of each photon is controlled independently , regardless of the nonlinear material s dispersion curves .this makes the source well - suited for applications that span quantum communication and computation , such as teleportation and entanglement swapping . in these applications ,one photon of a pair ( the `` communication '' photon ) is often sent to another party through a long span of optical fiber , while the other photon ( the `` computation '' photon ) is analyzed and detected in a localized interferometer .it is desirable that the communication photon be narrow - band ( such that effects like polarization mode dispersion are mitigated ) and have a wavelength in the infrared ( where optical fiber is least lossy ) .contrariwise , the computation photon should be broad - band ( such that it can be used in interferometers with small path - length differences ) and have a wavelength suited for high - efficiency single - photon counters .the paper is organized as follows . in section [ gapmspdc ], we introduce the new technique and demonstrate that it permits the generation of photon pairs with arbitrary joint spectrum . in section [ example ] , we show how this spectral control can be combined with polarization entanglement by considering a specific example involving a bbo waveguide . 
in section [ int ], we discuss the possibility of using this source as part of a distributed quantum information processor based on integrated optics .finally , we summarize our results in section [ conclusions ] .our source is a generalization of the design we previously introduced under the name _ auto - phase - matched _spdc ; thus , we name the new scheme _ generalized _ auto - phase - matched spdc . like the original scheme , the new scheme features counter - propagating spdc created in a single - mode nonlinear waveguide by a transverse pump pulse . in the original scheme ( fig .[ setup]a ) , the pump pulse is cross - spectrally pure ( _ i.e. _ the complex envelope factors into separate functions of space and time ) , and impinges on the waveguide at normal incidence . in the new scheme ( fig . [ setup]b ) , the pump pulse may have cross - spectral correlations , and may approach the waveguide at non - normal incidence . with these constraints on the pump pulserelaxed , the center frequency and bandwidth of each spdc photon may be controlled independently . in typical spdc experiments ,a monochromatic pump beam is used . in this situation ,the sum of the frequencies of the signal and idler photons is fixed , thus the photons frequencies are anticorrelated .spdc with generalized joint spectral properties ( _ i.e. _ correlated , uncorrelated , anticorrelated ) was studied theoretically by campos _et al . _ . herewe review the experimental proposals for generating frequency - uncorrelated spdc .a number of techniques have been proposed for creating frequency - uncorrelated spdc ; however , they all impose certain constraints on the center frequencies and/or bandwidths of the spdc photons .proposed a method for creating frequency - uncorrelated spdc based on a group - velocity matching condition introduced by keller and rubin .their method can be used to create degenerate , frequency - uncorrelated photons ; however , the center frequency of down - conversion is fixed by the nonlinear material , and the bandwidth of two spdc photons must be equal .they also demonstrate that degenerate , frequency - uncorrelated photons with different bandwidths may be generated ; however , in this case the bandwidths are fixed and can not be independently controlled .proposed extending the approach of grice _ el al ._ by using a periodically poled nonlinear crystal .this allows one to to satisfy the zeroth - order term in the phase - matching relation at an arbitrary pump wavelength , making the group - velocity matching relation easier to satisfy .even with such an enhancement , this approach does not have sufficient flexibility to allow independent control of the marginal spectra .a distinct approach for creating frequency - uncorrelated spdc was independently discovered by uren _et al . _ and our group . 
instead of relying on the satisfaction of a group - velocity matching condition , these approaches rely on the geometrical symmetry of degenerate , non - collinear type - i spdc ( the previously mentioned techniques only worked with type - ii spdc ) .the essential difference between the two proposals is that for uren _ et al ._ , the phase - matching relation in the pump propagation direction is a constraint that must be satisfied , while for our auto - phase - matched technique , the single - mode waveguide ensures that this relation is satisfied , regardless of the system parameters .the relative lack of constraints for these techniques makes them attractive , since it suggests that they remain viable options even if the center frequency of spdc is constrained by some other factor ( optical fiber loss , detector efficiency , etc . ) .nonetheless , both techniques suffer a lack of flexibility that is reminiscent of the previously mentioned schemes .the spdc photons must be degenerate , and the bandwidths of the two photons must be equal . in the next section ,we show that by generalizing our auto - phase - matched technique , we can obtain independent spectral control of the spdc photons .to begin , we review the relationship between the pump pulse and the spdc in the geometry of fig .[ setup ] . following the derivation in ref . , a classical pump pulse described on the free - space side of the waveguide - air interface by stimulates the creation of a pair of photons described by the two - photon wavefunction where \ ] ] is the refractive index for the pump polarization . here , and for the rest of the paper , we use the variable to refer to the component of the pump wavevector along the -axis .the ket represents a signal photon in the frequency mode and an idler photon in the frequency mode with corresponding propagation constants and , respectively . equation ( [ wavefunction ] ) conveys the main result of this paper : assuming the dispersion properties of the medium are known , it is possible to generate a down - converted photon pair with arbitrary joint spectrum by appropriately engineering the spatial and temporal characteristics of the pump pulse . in fig .[ setup]a , the pump pulse is parameterized by three numbers : the center frequency , the temporal coherence length , and the spatial coherence length .these parameters may be chosen to produce degenerate spdc with controllable entanglement , as described in ref . ; however , in order to obtain independent control of the center frequency and bandwidth of each spdc photon , one must relax the constraints on the pump pulse , as in fig .[ setup]b . using eqs .( [ pump ] ) and ( [ wavefunction ] ) , it is straightforward to show that a pump pulse described by \ ] ] will yield the following frequency - uncorrelated two - photon state \,|\omega_s\rangle_s\,|\omega_i\rangle_i,\ ] ] where and are the center frequencies of the signal and idler beams , respectively , and we have used the following definitions and approximations : these approximations are valid in typical situations ; however , if required , more terms may be used at the expense of a more complicated expression for the pump pulse .equations ( [ genpump ] ) and ( [ genwavefunction ] ) summarize the central result of this work . 
taken together , these relations can be thought of as an algorithm for producing frequency - uncorrelated spdc with arbitrary marginal spectra .the wavefunction in eq .( [ genwavefunction ] ) describes a frequency - uncorrelated two - photon state in which the signal photon is centered on with a bandwidth , and the idler photon is centered on with a bandwidth .note that these two photons are not themselves indistinguishable ( unless and ) .as previously mentioned , the indistinguishability arises in a multi - photon experiment when one photon of the pair enters an interferometer with one or more photons that have identical spectra . in this case, the lack of frequency correlations between the signal and idler prevents a loss of interferometric visibility by ensuring that spectral measurements on one photon of the pair wo nt reveal any spectral information about the other .equations ( [ genpump]-[end ] ) demonstrate that the four numbers , , , and , along with the dispersion properties of the waveguide , are sufficient to determine the form of the pump pulse required to generate the desired wavefunction .we can simplify the description of the pump pulse by rewriting eq .( [ genpump ] ) as ,\ ] ] where the algorithm for creating the appropriate pump pulse to produce the state in eq .( [ genwavefunction ] ) is as follows .a pulse is created with center frequency , spectral bandwidth and spatial bandwidth .next , a dispersive element such as a wedge of quartz or a diffraction grating is used to correlate and by effecting the substitution finally , the pulse it directed towards the nonlinear waveguide at incidence angle where is measured outside the waveguide ( see fig . [ setup]b ) , and is the speed of light in vacuum . in fig .[ jointspectrum ] , we present a graphical depiction of the relationship between the pump pulse and the resulting two - photon state , for both auto - phase - matched spdc ( fig .[ jointspectrum]a ) and generalized auto - phase - matched spdc ( fig .[ jointspectrum]b ) . in both cases ,a plot of is superposed over the joint spectrum of the signal and idler photons . by plotting at the correct location and on the correct inner axes, one can immediately infer the joint spectrum of the down - converted photons , simply by interpreting the plot using the outer axes . in fig .[ jointspectrum]a , the non - zero portion of is centered on the axis , and one of the inner axes is scaled by the factor , which is the first derivative of evaluated at . in fig .[ jointspectrum]b , the non - zero portion of is located at a general position , and the inner axes are no longer orthogonal ( unless ) . 
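Alongside the graphical picture in fig. [jointspectrum], it may help to write down the simplest concrete instance of the target state: a separable (frequency-uncorrelated) joint spectral amplitude with independently chosen centers and bandwidths. The Gaussian form below is an illustrative assumption, not the exact amplitude of eq. ([genwavefunction]).

```latex
f(\omega_s,\omega_i)\;\propto\;
\exp\!\left[-\frac{(\omega_s-\omega_s^{0})^{2}}{2\sigma_s^{2}}\right]
\exp\!\left[-\frac{(\omega_i-\omega_i^{0})^{2}}{2\sigma_i^{2}}\right]
\;=\; f_s(\omega_s)\,f_i(\omega_i).
```

Because the amplitude factors, a spectral measurement on one photon reveals nothing about the frequency of its twin, which is precisely the property needed for the multi-photon interference applications mentioned above; the four parameters (two centers, two bandwidths) play the role of the four numbers that fix the pump pulse.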
using this figure ,the two desirable features of the two - photon joint spectrum ( non - degeneracy and independently controlled bandwidths ) are easily interpreted in terms of the pump pulse .that is , the non - degeneracy of the photon pair derives from the condition , which in turn derives from the non - normal incidence of the pump pulse .similarly , the independent control of the two photons respective bandwidths derives from the cross - spectral correlation in the pump pulse ( does not factor into a function of times a function of ) .[ table ] .parameters that describe the pump pulse required to produce non - degenerate , frequency - uncorrelated , polarization - entangled spdc in a single - mode bbo waveguide , using the technique depicted in fig .[ setup]b .the signal photon is at with coherence length mm , and the idler photon is at with coherence length cm .the pump pulse is comprised of coherently superposed , independently controlled pulses and in the two polarization modes and , respectively .the parameters , , , and are defined in sec .[ gapmspdc ] .the negative values of indicate that the projection of the pump wavevector along the waveguide is oriented in the negative direction ( see fig .[ setup]b ) . [ cols="<,<,<",options="header " , ] in the generalized auto - phase - matched technique , one can obtain polarization entanglement by adjusting the polarization state of the pump pulse , without sacrificing the spectral control described above . to illustrate this feature , we present an example involving non - degenerate, polarization - entangled spdc produced in a single - mode bbo waveguide .the general idea is to use two of the nonlinear medium s tensor elements at the same time , by preparing a coherent superposition of two polarization modes of the pump pulse . when producing polarization - entangled photon pairs , it is typically desirable that a given photon have the same spectral properties for both two - photon polarization amplitudes. therefore , in creating the pump pulse , we use the same four numbers , , , and in calculating the desired pulse shape for both pump polarization modes .however , since the two two - photon amplitudes relate to spdc processes taking place in distinct polarization modes , the dispersion properties of the waveguide will in general be different .thus , using the notation of fig .[ setup ] , the pump pulse will be characterized by two functions : , which describes the -polarized component of the pump - pulse , and , which describes the -polarized component of the pump - pulse in the case of bbo , the relevant tensor elements are pm / v and pm / v and are not equal , we adjust the ratio of the optical powers in each pump polarization mode in order to obtain the maximally entangled state , where the relative phase is determined by the relative phase between the two pump polarization modes .it is straightforward to produce non - maximally entangled and/or mixed polarization states with this technique .non - maximally entangled states may be produced by adjusting the ratio of the optical powers in each of the pump s polarization modes .mixed states may be produced by allowing the pump pulse to become partially depolarized .although in this paper we are concerned with frequency - uncorrelated ( and thus , frequency - unentangled ) photon pairs , analogous generalizations for partial entanglement and mixedness in the frequency degrees of freedom are possible , given the appropriate manipulations of the pump pulse . 
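A small side calculation makes the power-ratio adjustment above concrete. It assumes that, in the spontaneous (low-gain) regime, each two-photon amplitude scales linearly with the pump-field amplitude in its polarization mode and with the corresponding nonlinear tensor element (denoted d_a and d_b below; their numerical values are not reproduced in this excerpt).

```latex
A_{1}\propto d_{a}E_{x},\qquad A_{2}\propto d_{b}E_{y}
\;\Longrightarrow\;
|A_{1}|=|A_{2}|
\iff
\frac{P_{x}}{P_{y}}=\frac{|E_{x}|^{2}}{|E_{y}|^{2}}
=\left(\frac{d_{b}}{d_{a}}\right)^{2},
```

with the relative phase between E_x and E_y carried over to the relative phase of the two-photon polarization state, consistent with the description above.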
] .therefore , using the notation of fig .[ setup ] , the pump beam will approach the waveguide in the - plane , and will be composed of a -polarized pulse and a -polarized pulse . in tablei , we list the calculated values of , , , and ( defined in sec .[ gapmspdc ] ) that correspond to a frequency - uncorrelated polarization - entangled pair of photons with the signal photon at with coherence length mm , and the idler photon at with coherence length cm .these values for center wavelength and coherence length were chosen in order to make the signal photon suitable for long - distance optical fiber transmission , and the idler photon suitable for local processing in an integrated optical circuit .in calculating the values in table i , we have ignored waveguide dispersion , using instead the selmeier curves to describe the material dispersion in the bbo single - mode waveguide .generalized auto - phase - matched spdc is particularly well - suited for quantum information processing on an integrated optical circuit ( see fig . [ integrated ] ) . among the advantages of replacing an array of discrete optical elements with an integrated optical circuitare the following : reduced size , reduced loss due to fewer connectors , and `` common mode '' noise processes because of the close proximity of optical elements .however , there are substantial experimental challenges associated with constructing an integrated optical quantum information processor .perhaps the most obvious challenge is finding a material that can perform as many of the required functions ( photon source , modulation , detection ) as possible . a significant advantage of generalized auto - phase - matched spdc in that the choice of material places essentially no limitation on the spectral and polarization properties of the photon pairs that will be produced .all that is required is that the material s tensor have the appropriate non - zero elements such that , for a given orientation of the optic axis with respect to the waveguide , the desired spdc process will occur .figure [ integrated ] depicts a conceptual schematic of an integrated optical quantum information processor which employs generalized auto - phase - matched spdc for generating photons .the figure highlights several of the practical advantages associated with this technology .first , the sources may be placed at the edge of the circuit and combined with single - photon counters to implement conditional single - photon sources .second , due to the transverse - pump configuration , the photon pairs may be created within the interior of the optical circuit . finally , since there is no group - velocity matching relation associated with generalized auto - phase - matched spdc , poling of the nonlinear waveguide at each source is not required .we have described a scheme for generating polarization - entangled photons pairs with arbitrary spectrum . 
by controlling the spatial , temporal , and polarization properties of the pump pulse , it is possible to generate the desired two - photon state , regardless of the dispersion properties of the nonlinear medium .we provided a calculation of the parameters describing the pump pulse required to generate a photon - pair with a particular joint spectrum in a single - mode bbo waveguide .finally , we discussed the role this source technology might play in a distributed quantum information processor based on integrated optics .this work was supported by the national science foundation ; the center for subsurface sensing and imaging systems ( censsis ) , an nsf engineering research center ; the defense advanced research projects agency ( darpa ) ; and the david and lucile packard foundation .
we present a scheme for generating polarization - entangled photons pairs with arbitrary joint spectrum . specifically , we describe a technique for spontaneous parametric down - conversion in which both the center frequencies and the bandwidths of the down - converted photons may be controlled by appropriate manipulation of the pump pulse . the spectral control offered by this technique permits one to choose the operating wavelengths for each photon of a pair based on optimizations of other system parameters ( loss in optical fiber , photon counter performance , etc . ) . the combination of spectral control , polarization control , and lack of group - velocity matching conditions makes this technique particularly well - suited for a distributed quantum information processing architecture in which integrated optical circuits are connected by spans of optical fiber .
non - uniform filter banks are of interest in speech processing applications since they can be used to exploit the perceptual properties of the human ear . a well known and efficient technique to realize a non - uniform filter - bank is the all - pass transformed polyphase filter - bank - , where the delay elements of the input and output delay chains are replaced by first - order all - pass filters , as shown in fig . 1 .such a warped filter bank has been found to be beneficial in applications such as speech enhancement and beamforming . in addition, the warped filter banks also involve much lower delay and complexity in comparison to non - uniform filter banks realized by a tree structure .since most hands - free and speech enhancement systems are coupled with an acoustic echo canceller , it is important that the analysis and synthesis filter banks are optimized for echo cancellation .other realizations of non - uniform filter structures are obtained by joining two or more uniform filter bank structures of different bandwidths by transitions banks - , or by combining a subset of varying numbers of subbands of a uniform filter bank - . in sampled non - uniform filter banks for adaptive filtering are realized by incorporating extra filters in between the non - uniform sub - bands to cancel the aliasing. in echo cancellation for speech signals , cancellation of low - frequency echoes is most critical for two important reasons .the first is because most of the speech energy is distributed in the low - frequency end of the audio spectrum .the second is due to room acoustics : in a typical room environment the higher - frequency components of an audio signal are more easily absorbed by the materials in the room ( walls , carpets , curtains , etc . ) and , as a result , the lower frequency sub - bands require much longer adaptive filter lengths to cancel the echoes . consequently , by using non - uniform filter banks that have bandwidths that increase with frequency , the convergence rate of the lower sub - bands can be improved significantly thereby resulting in more effective cancellation of the low - frequency echoes .the use of sub - band adaptive filters in acoustic echo cancellation has been quite popular , especially when the impulse response is very long , due to their fast convergence rate and low computational complexity in comparison to full - band adaptive filters - . in sub - band echo cancellation ,one of the critical aspects of filter bank design is the minimization of the aliasing component during the analysis stage , as aliasing disturbs the convergence process of the adaptive filter .it is well known that aliasing in the sub - band signals caused by finite stop - band attenuation influences the mmse , - .efforts to quantify the mmse via aliasing have been carried out in , - . in - non - uniform filter - bankswere designed with emphasis on near - perfect reconstruction ( npr ) of the analysis - synthesis system .although these designs are useful in applications such as speech coding , they usually do not work well in adaptive filtering since the signal components in the adjacent bands that are required for npr are often severely modified by an adaptive filter . in non - uniform filter - banks that minimize aliasing during the analysis stagewere developed for beamforming and speech processing applications . 
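A minimal numerical sketch of the frequency warping produced by the all-pass substitution may be useful before discussing the design methods. It assumes the common first-order all-pass form A(z) = (z^-1 - a)/(1 - a z^-1) with a real coefficient a; the coefficient value used below is only an illustrative choice in the range typically quoted for Bark-like warping at telephone-band sampling rates, not the value selected later in the paper.

```python
import numpy as np

# Assumed first-order all-pass: A(z) = (z^-1 - a) / (1 - a z^-1).
# Replacing each delay z^-1 of the prototype by A(z) means the warped
# filter evaluated at physical frequency omega takes the value the
# prototype had at nu(omega) = -arg A(e^{j*omega}).
a = 0.4  # illustrative warping coefficient

omega = np.linspace(0.0, np.pi, 9)            # physical frequencies
z = np.exp(1j * omega)
A = (1.0 / z - a) / (1.0 - a / z)
nu = -np.unwrap(np.angle(A))                  # prototype frequencies

for w, v in zip(omega, nu):
    print(f"omega = {w/np.pi:4.2f}*pi  ->  nu = {v/np.pi:4.2f}*pi")
# For a > 0, nu runs ahead of omega at low frequencies, so the uniformly
# spaced prototype bands are squeezed into narrow physical bands at the
# low end of the spectrum -- the non-uniform resolution exploited above.
```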
with this approach ,a linear phase constraint is imposed on both the analysis and synthesis prototype filters , and the filter group - delay , which may not be optimal , must be specified . in , we framed the design method without phase constraints on the filters , which increases the degrees of freedom during optimization , and , in turn , improves the aliasing - suppression performance of the filters .then , in we modified the objective function so that overall signal - to - alias ratio ( sar ) is maximized .the sar characterizes the factor by which the error signal power can be reduced by adaptive filtering and is equivalent to the widely used erle quality measure .since each subband in a non - uniform filter bank has a different bandwidth , the contribution to sar from each subband will be different .consequently , to ensure that the overall sar is maximized the contribution from each of the analysis banks , as well as the psds of the input signal , , and the unknown system , , are taken into account during optimization in . in this paper, we extend and improve on the method developed in .we describe how the maximization of sar across the subbands leads to an increase in erle performance ; then , we formulate a convex optimization problem so that the sar is maximized across the subbands .experimental results show that the filter bank designed using the proposed method results in a much lower erle when compared to existing design methods .the paper is organized as follows .section ii describes the non - uniform filter - bank implementation while section iii describes the subband adaptive filter . in sectionsiv and v , the design of the analysis and synthesis prototype filters , respectively , are discussed . in sectionvi , experimental results are presented to show the effectiveness of the proposed approach .conclusions are drawn in section vii .the non - uniform filter bank in fig . 1 is a generalization of the uniform dft filter bank where the delay element , , is replaced by a first - order allpass filter , , of the form using an -point dft analysis bank , the transfer function , , and z - domain output signal , , of the analysis subband filter are given by where is the analysis prototype filter , is the z - domain input signal , is the downsampling factor in the sub - band , and is the complex modulating factor . the corresponding synthesis bank is an -point inverse - dft , with the synthesis subband filter given by where is the synthesis prototype filter .the overall input - output relationship for the analysis - synthesis system can be expressed as in general , the input - output transfer function of the analysis - synthesis system is a linear , periodically time varying system with period equal to the maximum downsampling factor .therefore , to account for this behaviour , the overall transfer function is computed by using a sequence of time - shifted impulses as input and given by where we have assumed for , and as such , by replacing the delay element by the all - pass filter , the frequency response of the filter at frequency is mapped into frequency , given by \ ] ] consequently , the subband filter will lie between frequencies and where parameters and can be obtained by solving for using a simple line search optimization algorithm on the convex function where is the optimization variable . in section iii , frequencies and will be used as integration limits when computing the aliasing power in the subband filter .in fig . 
2 the sub - band adaptive filter structure is shown .as can be seen , the input signal and desired signal are split into subbands by analysis filter banks . the resulting subband signals and in the subband are adapted independently of the other subband signals .the resulting errors from each of the subbands are then recombined to form the fullband error signal . in most adaptive filtering applications ,the signal is represented as a stochastic signal with known power spectral density ( psd ) . to characterize by a spectrum rather than a psdwe represent as the output of a source model which is excited by a white noise signal of unit variance . using the spectrum representation ,the psd of is given by from fig . 2, the desired signal will be a combination of the unknown system , the subband filter , and source model for the input signal given by for a fullband adaptive filter with filter coefficients , the erle of an echo canceller is defined as }{e[((d(n)-\hat{d}(n))^2}\ ] ] where is the adaptive filter estimate of the desired signal and is given by if we assume , for simplicity , a stationary white noise input signal , the erle can be expressed as \displaystyle \sum_{i=0}^\infty s_{(i)}^2(n)}{e[x^2(n)]\left ( \displaystyle \sum_{i=0}^\infty s_{(i)}^2(n ) - 2 \displaystyle \sum_{i=0}^{n-1}s_{(i)}(n)\hat{s}_{(i)}(n ) + \displaystyle \sum_{i=0}^{n-1 } \hat{s}_{(i)}^2(n ) \right ) } \end{split}\ ] ] assuming perfect match of the coefficients of the adaptive filter , so that the upper bound of the erle simplifies to as can be seen from ( [ erlesimpl ] ) , if the filter length is made long enough the erle can be made arbitrarily small in a full - band adaptive filter . in a subband adaptive filter , however , the erle is dependent not only on the length of the adaptive filter , but also on the amount of aliasing power present after analysis and synthesis .if the length of each subband adaptive filter is made sufficiently long , the erle will then be dependent only on the power ratio between the desired signal and the steady state error due to aliasing , or sar .therefore , for a sub - band adaptive filter with sufficiently long sub - band adaptive filters so that the impulse response of the unknown system is adequately modelled , we have to compute the sar , we use the approximation in and extend it to the non - uniform filter bank case , giving where the sar in each sub - band is given by equations ( [ sig1 ] ) and ( [ sig2 ] ) can be simplified if we exchange summation and squaring by ignoring the mixed product terms in the source model , which is justified if the unknown system is comprised of statistically independent frequency components .therefore , and become using ( [ model ] ) , above can be expanded as if and the average power spectrum of are not readily available , we can simplify further by setting design the analysis filter with no phase constraint , the square of the magnitude of the frequency response is used . to this end , from ( [ eq:2 ] ) we get the magnitude - squared function in ( [ eq:7 ] ) can be further simplified as \label{eq:8}\ ] ] where ] by taking the inverse cepstrum .the prototype filter is designed by minimizing the sar across all of the analysis subbands . to this end, we solve the optimization problem : with the prototype filter - magnitude coefficients as the optimization variables . 
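The inverse-cepstrum step mentioned above -- obtaining a minimum-phase prototype filter from optimized magnitude(-squared) coefficients -- can be sketched generically as follows. This is the textbook homomorphic reconstruction from a sampled magnitude response, shown only to make the step concrete; the actual grid sizes, filter lengths, and optimization variables of the paper are not reproduced.

```python
import numpy as np

def minimum_phase_from_magnitude(mag: np.ndarray, n_taps: int) -> np.ndarray:
    """Minimum-phase FIR whose magnitude approximates `mag` (sampled on [0, 2*pi))."""
    # If magnitude-squared samples are available, take their square root first.
    c = np.fft.ifft(np.log(np.maximum(mag, 1e-12))).real   # real cepstrum
    n = len(c)
    fold = np.zeros(n)                                      # fold onto causal part
    fold[0] = c[0]
    fold[1:(n + 1) // 2] = 2.0 * c[1:(n + 1) // 2]
    if n % 2 == 0:
        fold[n // 2] = c[n // 2]
    h = np.fft.ifft(np.exp(np.fft.fft(fold))).real          # back from cepstrum
    return h[:n_taps]

# Toy usage: a smooth low-pass magnitude on a 512-point grid.
w = np.linspace(0.0, 2.0 * np.pi, 512, endpoint=False)
mag = 1.0 / np.sqrt(1.0 + (np.minimum(w, 2.0 * np.pi - w) / 0.5) ** 8)
h = minimum_phase_from_magnitude(mag, n_taps=64)
print("max magnitude deviation:",
      np.max(np.abs(np.abs(np.fft.fft(h, 512)) - mag)))
```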
to obtain the global minimum , we frame the optimization as a convex optimization problem , which is done by ensuring that the cost function is convex and the equality constraint is affine . by using the coefficients of the magnitude squared coefficients in ( [ magcoeff ] ) as the optimization variable and combining ( [ simpsig2 ] ) , ( [ modelexpand ] ) , and ( [ eq:8 ] ) we can express the cost function in affine form , which is convex , as where q = 0 q = 0 ] . the term in ( [ eq:22 ] ) is a regularization parameter that is introduced in case the matrix is ill - conditioned ; for example , this may happen when some of the coefficients in are 0 .if we assume that the magnitude of the aliased signal component , , is adequately minimized , the frequency response of the analysis - synthesis system is dependent only on . therefore , from ( [ eq:19 ] ) , it becomes apparent that the frequency response of the analysis - synthesis system is that of a cascade of first - order all - pass filters .consequently , the phase response of the analysis - synthesis system is no longer linear and it becomes necessary to correct the phase using an additional filter operation . in , for example , a non - recursive filter having an impulse response that is a time - limited , time - inverted impulse response of the analysis - synthesis filter bank is used for correcting the phase .alternatively , lower - order recursive group - delay equalizers that approximate the inverse group - delay of the cascade of all - pass filters may also be utilized .in this section , we show the effectiveness of the proposed method by comparing it with two variants of existing methods , method a and method b. we compare their performance for three different types of reference signals : white noise , colored noise and speech . for methoda , we design the prototype analysis filter using the method described in . in this method, the filter is designed by simultaneously minimizing the mean - square error in the passband together with the inband aliasing power in the the subband with the widest bandwidth .the desired passband response is constrained to be linear phase with a magnitude of unity .the prototype synthesis filter is designed using a modified optimization algorithm where the cost function in ( [ eq:21 ] ) is replaced with the one in , given by however , unlike the synthesis design algorithm in , we do not impose any linear phase constraint in the synthesis filter design for our method a , since it reduces the degrees of freedom during optimization thereby reducing the performance of the filter even further . at the same time, we also extend the cost function to incorporate variable decimation factors across the subbands . for method b , we design the analysis prototype filter by maximizing the sar only for the subband with the largest bandwidth . for , the general optimization equation for obtaining the analysis filter design in method bis given by method b essentially demonstrates the performance that can be attained when only the largest subband is considered , as was done in , or when uniform filter bank design methods are employed .for the synthesis prototype filter design , we use the same optimization algorithm as in section v. we compare the proposed method with method a and method b for two filter - bank design specifications : ( a ) specification 1 : , , and and ( b ) specification 2 : , , and , 8 , 8 , 4 , 4 , 4 , 2 , 2 , 2 , 2 , 2 , 4 , 4 , 4 , 8 , 8 . 
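As an aside on the synthesis step above, the role of the regularization parameter can be made concrete with a generic ridge-regularized least-squares solve. The matrices below are random stand-ins, deliberately rank-deficient, and are not the matrices of eq. ([eq:22]).

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for an ill-conditioned system: the last 10 columns are zero.
A = rng.standard_normal((40, 20)) @ np.diag(np.r_[np.ones(10), np.zeros(10)])
b = rng.standard_normal(40)

def ridge_solve(A, b, eps):
    """Minimize ||A g - b||^2 + eps * ||g||^2 via the normal equations."""
    return np.linalg.solve(A.T @ A + eps * np.eye(A.shape[1]), A.T @ b)

for eps in (0.0, 1e-8, 1e-2):
    try:
        g = ridge_solve(A, b, eps)
        print(f"eps={eps:g}: ||g|| = {np.linalg.norm(g):.3e}, "
              f"residual = {np.linalg.norm(A @ g - b):.3e}")
    except np.linalg.LinAlgError:
        print(f"eps={eps:g}: normal equations are singular")
```

Without regularization the singular normal equations have no unique solution; a small eps restores a well-posed problem and, as noted in the appendix of this paper, selects the minimum-norm solution among the candidates.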
we select as it closely approximates the bark frequency scale . furthermore , for the erle performance comparison experiments in this paper , the adaptive - filter weights are initially set to zero and the adaptation process is started 1 second after the application of the reference and desired signal .this is done so that the error - signal power obtained during the first 1 second can be normalized to 0 db in the erle plots .parameters and , required for computing the analysis filter cost function in ( [ anacostfun ] ) , are obtained after solving the line search equation in ( [ linesearch ] ) .their computed values for specification 1 and specification 2 are listed in table [ spec0 ] .it should be noted that the values listed in the table are not unique but have a period of . in this sub - section ,we compare the performance when the reference signal is white noise ; therefore , we set to unity when designing the filters using the proposed method . as such , when we shall refer to the design method as ` proposed - white ' .we also assume no knowledge of the average spectrum of the unknown system , and therefore set to unity for all of the experiments in this paper .the desired signal , , is white noise convolved with an impulse response of length 200 that is randomly generated from a normal distribution of unit variance .the length of the adaptive filter in each subband varies with the decimation factor and is set to for subband .the nlms algorithm is employed for adapting the adaptive - filter coefficients in each subband .the erle plot for the two filter bank designs are shown in figs .[ erlewhite1](a ) and ( b ) with the corresponding steady - state values tabulated in table ii .as can be seen , the proposed method results in an improvement of several dbs over method a and method b. , , and ( b ) specification 2 : , , and , 8 , 8 , 4 , 4 , 4 , 2 , 2 , 2 , 2 , 2 , 4 , 4 , 4 , 8 , 8.,scaledwidth=90.0% ] ( a ) , , and ( b ) specification 2 : , , and , 8 , 8 , 4 , 4 , 4 , 2 , 2 , 2 , 2 , 2 , 4 , 4 , 4 , 8 , 8.,scaledwidth=90.0% ] ( b ) next , we show comparative plots for the amplitude responses of the prototype analysis filters in fig .[ anaampresp ] . then , the full - band sars computed using ( [ overallsar ] ) are tabulated for the three methods in table iii . comparing the values in table ii and tableiii we observe that the sar values are about 10 db smaller than the corresponding erle values , but vary proportionally to the erle values .the difference between the erle and sar values arises because the sar in ( [ overallsar ] ) is computed right after analysis whereas the erle is estimated after analysis and synthesis . the additional aliasing signal suppression by the synthesis filters results in higher erle values that are proportional to the respective sar values .we then use ( [ subbandsar ] ) to compute the corresponding sub - band sars , , which are plotted in figs .[ sar1](a ) and ( b ) . 
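For reference, ERLE figures of the kind compared above can be estimated directly from the desired and residual-error signals. The snippet below is a full-band white-noise toy with a synthetic echo path (not the experimental setup or the measured car response of the paper), included only to make the ERLE estimate concrete.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic echo path and white-noise reference (illustrative only).
h_true = rng.standard_normal(200) * np.exp(-0.02 * np.arange(200))
x = rng.standard_normal(20000)
d = np.convolve(x, h_true)[: len(x)]

# Full-band NLMS, just to produce an error signal for the ERLE estimate.
L, mu, delta = 200, 0.5, 1e-6
w = np.zeros(L)
e = np.zeros(len(x))
for n in range(L, len(x)):
    xn = x[n - L + 1 : n + 1][::-1]
    e[n] = d[n] - w @ xn
    w += mu * e[n] * xn / (xn @ xn + delta)

def erle_db(d_seg, e_seg):
    """ERLE = 10*log10( E[d^2] / E[(d - d_hat)^2] ) over a segment."""
    return 10.0 * np.log10(np.mean(d_seg ** 2) / np.mean(e_seg ** 2))

print(f"steady-state ERLE (last 5000 samples): {erle_db(d[-5000:], e[-5000:]):.1f} dB")
```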
from the plots , it is apparent that the filters designed using the proposed method have higher sub - band sar in all the other sub - bands , except in bin 9 , which corresponds to the highest frequency sub - band .the improvement in sub - band sar in the other sub bands at the expense of a decrease in the highest sub band is not undesirable in acoustic echo cancellation where cancellation of the lower frequency echoes is usually most critical .( a ) ( b ) in figs .[ anasynampresp1](a ) and ( b ) , we compare the overall amplitude response of the analysis - synthesis system .as can be seen in fig .[ anasynampresp1](a ) , for specification 1 the proposed method and method b have the smallest deviation and are also identical .the reason for the identical response is given in appendix of the paper .for specification 2 , however , the proposed method has the least deviation , even better than method b. since the synthesis filter design algorithm for the proposed method and method b are identical , we can conclude that the better overall response in the proposed method is due to better analysis prototype filters .( a ) ( b ) for the proposed method , the filter coefficients of the analysis and synthesis prototype filters for specification 1 and specification 2 are given in table [ spec1 ] and [ spec2 ] , respectively , and their corresponding amplitude responses are shown in figs.[propanasynampresp](a ) and ( b ) .( a ) ( b ) in this subsection , we compare the erle performance when the reference signal is colored noise .we therefore design a second set of analysis and synthesis prototype filters that takes the spectrum of the reference signal , , into account when designing the filters for specification 1 and specification 2 .that is , we set in ( [ model ] ) as the power spectrum of the colored noise . a plot of the spectrum is shown in fig .[ colordnoisespec ] . to differentiate from the design in subsection vi - a where is unity, we shall refer to this design method as ` proposed - colored ' .the colored noise is obtained by passing the signal through a low - pass fir filter of order 5 ; it is estimated to have an eigenvalue spread of 126 .like in subsection vi - a , the desired signal is obtained by convolving the colored noise with a randomly generated impulse response of length 200 . the erle plot for the two filter bank designsare shown in figs .[ erlecolored](a ) and ( b ) .as can be seen , the proposed methods show an improvement of several dbs over method a and method b. it is interesting to note that for specification 2 , the improvement of the ` proposed - color ' method over the ` proposed - white ' method is not as high as in specification 1 .this is because the constraints imposed by the higher decimation factors in specification 2 limits the degree of freedom in the minimization of the aliasing power for a certain change in .( a ) ( b ) in this subsection , we compare the erle performance when the reference signal is speech . as in subsection vi - b , we design a second set of analysis and synthesis prototype filters where is set to the average power spectrum of speech .we refer to this design method as ` proposed - speech ' .to compute the average power spectrum of speech , we took speech signals of 3 males and 3 females speakers from the atis database and computed their average spectrum , which is plotted in fig .[ speechspec ] .the duration of the signal is about 5 minutes with a nyquist frequency of 8 khz . 
to avoid including the silence portion of speech when computing the average , we use a simple energy detector to make the classification . unlike the experiments in the previous subsection where we used a randomly generated impulse response , in this section we use a real impulse response , measured in a compact - sized car , to generate the desired signal from the reference speech signal ; a plot of the impulse response is shown in fig.[carimpresp ] .the reference speech signal to the adaptive filter is shown in fig.[erlespeech](a ) and the erle plot for the two filter bank designs are shown in figs . [ erlespeech](b ) and ( c ) . as can be seen , the proposed methods show improvements of several dbs over method a and method b. and , like in subsection vi - b , the improvement of the ` proposed - speech ' method over the proposed - white method is higher for specification 1 . ( a )( b ) ( c ) it should be noted that for the sake of comparison , we have used the nlms algorithm with a fixed step size in our experiments . however , in practical applications the convergence rate of the adaptive filters can be significantly improved by employing various techniques , such as varying the step - sizes as the adaptation progresses , or using more powerful adaptation algorithms like the improved - pnlms or the affine projection algorithms .a new method for designing non - uniform filter - banks for acoustic echo cancellation has been described . in the method ,the analysis prototype filter is framed as a convex optimization problem that maximizes the sar in the analysis banks . since each subband has different bandwidth , the contribution to the overall sar from each subbandis taken into account during optimization . to increase the degrees of freedom during optimizationno constraints are imposed on the phase of the filter . and to ensure low delay , the filter is constrained to be minimum phase .experimental results show that the proposed method results in filter banks with fast convergence and superior erle when compared to filter banks designed using existing methods .the authors are grateful to the natural sciences and engineering research council of canada for supporting this work .in this appendix , we show that if the decimation factors across the sub - bands are the same and the analysis filter used in deriving the synthesis filter in ( [ eq:22 ] ) has no zero coefficients , the analysis - synthesis amplitude response is , up to a scale factor , independent of the analysis prototype filter . setting the decimation factors to be equal across the subbands in ( [ eq:20 ] ) we get upon expanding and ,interchanging the summations and simplifying we obtain where is the optimization variable and is known .therefore , from ( [ dotproduct ] ) it is apparent that if is not zero , in ( [ aliasingsynsamedsimple ] ) remains unconstrained , and , consequently , the minimization of the the cost function in ( [ eq:21 ] ) under the constraint that , is independent of the analysis prototype filter .if , however , is zero for , then is also constrained to zero , and can have arbitrary values .because of this scenario , we introduce the regularization term in the optimization problem in ( [ eq:22 ] ) so that solution of with the minimum l2 norm is always selected .
a new method for designing non - uniform filter - banks for acoustic echo cancellation is proposed . in the method , the analysis prototype filter design is framed as a convex optimization problem that maximizes the signal - to - alias ratio ( sar ) in the analysis banks . since each sub - band has a different bandwidth , the contribution to the overall sar from each analysis bank is taken into account during optimization . to increase the degrees of freedom during optimization , no constraints are imposed on the phase or group delay of the filters ; at the same time , low delay is achieved by ensuring that the resulting filters are minimum phase . experimental results show that the filter bank designed using the proposed method results in a sub - band adaptive filter with a much better echo return loss enhancement ( erle ) when compared with existing design methods . = 1 acoustic echo cancellation , non - uniform filter - banks , sub - band adaptive filter
experts have predicted that the number of devices with communication capability will rise to 50 billions by 2020 . the resulting web of devices connected by the internet - of - things ( iot ) and machine - to - machine ( m2 m ) communications for instance will lead to more sophisticated network topologies .communication over such networks is in general multi - way , where communicating pairs of nodes exchange information in both directions such as in the two - way channel . beside multi - way communication ,a key aspect of future networks is relaying which can play a key role in improving transmission rates . in multi - way networks in particular ,the potential of multi - way relaying can be of great importance .this is especially true in scenarios where physical - layer network coding can be applied , which can significantly boost the performance of a network . for the aforementioned reasons , the multi - way relay channel ( mwrc ) which combines both aspects( multi - way and relaying ) is an integral part of future networks .the mwrc consists of multiple users that want to exchange information via a common relay node . in its simplest form with two users, we get the so called two - way relay channel twrc .the twrc is a fundamental scenario that has been introduced in , and studied thoroughly recently in .several transmission strategies for the twrc including compress - forward and lattice coding have been examined lately , leading to the capacity of the twrc within a constant gap .although the twrc has become well - understood recently , the mwrc has not reached a similar status yet , although several researches have focused on this network recently .for instance , study the multi - pair twrc , study the multi - cast mwrc , studies the mwrc with cyclic message exchange , and study the mwrc with multiple uni - cast message exchange . in this paper , we focus on the latter variant of the mwrc , i.e. , the mwrc with multiple uni - cast message exchange , also known as the y - channel . in the -user y - channel ,several users want to exchange information in all directions via the relay .in particular , user wants to communicate with user .the extension of the twrc to the y - channel is not straightforward , and many challenges have to be tackled when making this step .one of the challenges is in deriving capacity upper bounds .while the capacity of the twrc can be approximated with a high - precision using the cut - set bounds , the capacity of the -user y - channel requires new bounds .such bounds have been derived in .another challenge is in finding the best communication strategy .the -user y - channel requires , in addition to bi - directional communication strategies used in the twrc , more involved strategies such as cyclic communication and detour schemes . the single - input single - output ( siso )-user y - channel has been studied in . here , we focus on the multiple - input multiple - output ( mimo ) case .the mimo y - channel has been initially introduced in , where the strategy of signal - space alignment for network - coding was used . 
in their paper , lee _ et al ._ characterized the optimal sum degrees - of - freedom ( dof ) of the 3-user mimo y - channel under some conditions on the ratio of the number of antennas at the users and the relay .however , a complete sum - dof characterization of the general 3-user mimo y - channel was not available until where a novel upper bound and a general transmission strategy were developed , thus settling this problem .the mimo y - channel with more than 3 users has also been studied in . in , tian and yenerhave studied the multi - cluster mimo y - channel and characterized the sum - dof of the channel under some conditions on the number of antennas , while in , lee _ et al . _ proposed a transmission strategy for the -user mimo y - channel and derived its achievable dof . despite the intensive work on the y - channel ,many questions remain open .for instance , the sum - dof of the general -user mimo y - channel remains open to date .a recent development on this front has been achieved recently , when wang has characterized the sum - dof of the 4-user mimo y - channel in .another question is on the dof region of the mimo y - channel which is still unknown .recently , the dof region of the 3-user and 4-user cases was studied in .the importance of the dof region is that it reflects the trade - off between the achievable different dof of different users , contrary to the sum - dof which does not .this trade - off is essential in cases where the dof demand by different users is not the same , such as in a network with prioritized users . in such cases, it is interesting to know what is the maximum dof that can be achieved by some users under some constraints on the dof of other users .this question can be answered by the dof region . also , by obtaining the dof region , the sum - dof is obtained as a by - product . in this paper , we focus on the dof _ region _ of the -user mimo y - channel .we develop a communication strategy for the -user mimo y - channel with antennas at the users , and antennas at the relay .this case models a situation where it is easier to mount antennas at the users than at the relay node , such as when the relay is a satellite node .our proposed strategy revolves around two ideas : ( i ) channel diagonalization and ( ii ) cyclic communication using physical - layer network - coding .channel diagonalization is performed by zero - forcing beam - forming using the moore - penrose pseudo - inverse .after channel diagonalization , the mimo y - channel is decomposed into a set of parallel siso y - channels ( sub - channels ) .then , cyclic communication is performed over these sub - channels .a cyclic communication strategy ensures information exchange over a set of users in a cyclic manner , such as exchanging a signal from user to , to , and to thus constituting the cycle . in cyclic communication ,the users send a set of symbols to the relay , which decodes functions of these symbols and forwards these functions to the users .these functions have to be designed appropriately , so that each user can extract his desired symbol from these functions after reception .note that the -user y - channel has cycles of length 2 ( e.g. ) up to length ( e.g. ) .we call the transmission strategy corresponding to an -cycle ( cycle of length ) an -cyclic strategy . the efficiency of the proposed -cyclic strategy is symbol / sub - channel ( or dof / dimension ) . 
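as a small illustration of the cycle bookkeeping that is set up formally later in the paper, the following python sketch enumerates the cyclically-distinct cycles of each length among k users; the count matches the stated rule of "number of permutations divided by the number of cyclically equivalent permutations". the variable names and the example with k = 4 are ours, not the paper's.

```python
from itertools import permutations
from math import factorial

def distinct_cycles(K, L):
    """All cyclically-distinct ordered cycles of length L among users 1..K."""
    seen, cycles = set(), []
    for perm in permutations(range(1, K + 1), L):
        i = perm.index(min(perm))
        canon = perm[i:] + perm[:i]       # rotate so the smallest user comes first
        if canon not in seen:
            seen.add(canon)
            cycles.append(canon)
    return cycles

K = 4
for L in range(2, K + 1):
    cycles = distinct_cycles(K, L)
    expected = factorial(K) // (factorial(K - L) * L)   # permutations / cyclic shifts
    assert len(cycles) == expected
    print(L, len(cycles), cycles)
```

note that only cyclic shifts are identified: the cycle (1, 2, 3) carries different messages than (1, 3, 2), so the two directions of a cycle are counted separately, consistent with the multiple uni-cast message exchange.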
note that after channel diagonalization , the channel has similarities to the linear - deterministic 3-user siso y - channel studied in which is a set of parallel binary y - channels , some of which are not fully - connected .the difference is that the parallel siso y - channels obtained after diagonalization of the mimo y - channel are complex - valued .furthermore , the work in considers only the 3-user case , and the extension to the -user case is not considered .thus , the main difference between this work and the one in is that here we : 1 .extend the scheme to the complex - valued channel with user , 2 .provide a graphical illustration of the problem in the form of a message flow graph , 3 .show that with users , cyclic communication over cycles of various lengths has to be considered , and 4 .propose an optimal resource allocation strategy which distributes the streams to be communicated over the available sub - channels , and uses the optimal strategies over these sub - channels .the question that arises at this point is : is it optimal to treat each sub - channel of the mimo y - channel separately ? or is it better to encode jointly over sub - channels ? to answer this question , one has to optimize the transmission strategy , and observe if the optimized solution requires joint encoding over spatial - dimensions . with this goal in mind , we propose a resource allocation that allocates sub - channels to cyclic strategies based on their efficiencies .the proposed resource allocation is proved to be optimal by deriving a dof region outer bound using a genie - aided approach .similar to , the derived genie - aided bound converts the y - channel into a mimo point - to - point channel whose dof is known . as a result ,a dof region characterization for the -user mimo y - channel with is obtained .this provides _ the first dof region characterization for the -user mimo y - channel ._ with the optimal strategy at hand , we can go back to the channel separability question .we observe that the dof - region - optimal strategy for the mimo y - channel treats the parallel sub - channels jointly , where encoding over spatial dimensions is necessary .we conclude that the mimo y - channel is not separable .however , from sum - dof point - of - view ( instead of dof - region ) , separate encoding over each sub - channel is optimal .another interesting observation is that the optimal strategy is in fact a combination of different cyclic strategies with different efficiencies . in other words , it is not enough to rely on the cyclic strategy with highest efficiency , i.e. , the 2-cyclic strategy . in the next section ,we formally define the -user mimo y - channel .we introduce the main result of the paper , which is a dof region characterization of the case in section [ secmainresult ] .next , we introduce our communication strategy by using a toy - example consisting of a 3-user y - channel in section [ sec:3userychannel ] . 
the communication strategy for the -user caseis described in detail in section [ sec : achievability ] .comments on the regime where and on the inseparability of the y - channel are given in sections [ sec : ngm ] and [ sec : subchannelsep ] , respectively .finally , we conclude the paper with a discussion in section [ sec : conclusion ] .the following notation is used throughout the paper .we use bold - face lower - case ( ) and upper - case ( ) letters to denote vectors and matrices , respectively , and we use normal fonts ( ) and calligraphic fonts ( ) to denote scalars and sets , respectively . we denote the identity matrix and the zero vector by and , respectively . we say that when is a complex gaussian random vector with mean and covariance matrix .we use and to denote the hermitian transpose and the inverse of a matrix , respectively .we also use to denote the length- sequence .a sequence is i.i.d . if its components are independent and identically distributed .the function is an indicator function which returns 1 if and 0 otherwise , and is the inverse indicator function .the -user mimo y - channel consists of users which want to establish full message - exchange via a relay as shown in figures [ fig : modelu ] and [ fig : modeld ] .all nodes are assumed to be full - duplex with power . can be assumed without loss of generality , since different powers can be incorporated into the channel . ]the relay has antennas , and the users are assumed to be identical in terms of the number of antennas , with antennas at each user .user has a message to be sent to user for all .the message is a realization of a random variable uniformly distributed over the set where denotes the rate of the message , and denotes the number of transmissions ( channel uses ) . at time instant , user sends which is a codeword symbol constructed from the messages , , and from , the received signals of user up to time instant .this transmit signal has to satisfy the power constraint , i.e. , )&\leq \rho.\end{aligned}\ ] ] the received signal at the relay is given by ( cf .figure [ fig : modelu ] ) which is an vector , where the noise is i.i.d . over time . here is the complex channel matrix from user to the relay , which is assumed to be constant throughout the channel uses , and has rank .the relay transmit signal at time is denoted , it satisfies )&\leq \rho,\end{aligned}\ ] ] and it is constructed from , the received signal at the relay up to time instant .the received signal at user is given by ( cf .[ fig : modeld ] ) which is an vector , where the noise is i.i.d . over time will be suppressed henceforth . ] , and is the downlink constant complex channel matrix from the relay to user , and has rank .after channel uses , user has from which it tries to decode , , by using its messages as side information .after decoding , it obtains , .an error occurs if for some distinct .a rate is said to be achievable if there exist a strategy ( encoding and decoding strategies ) that provides an error probability ] , and where is the normalized right - mppi of given by with ^{-1} ] is not a diagonal matrix .although this noise correlation can be exploited at the receiver to increase the achievable rate , this is not necessary from a dof point of view .thus , we can assume that these noises are independent , which delivers a worst - case performance .the result of this diagonalization is a decomposition of the mimo y - channel into parallel siso y - channels as shown in figure [ fig : diagonalization ] . 
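the following numerical sketch (python / numpy, with assumed antenna numbers) illustrates the channel diagonalization step described above: each user pre-codes with the right moore-penrose pseudo-inverse of its uplink channel and post-codes with the left pseudo-inverse of its downlink channel, so that the end-to-end channels become identity matrices and the mimo y-channel splits into parallel scalar sub-channels. the power normalization factor and the resulting noise coloring are omitted since, as noted above, they do not affect the dof argument.

```python
import numpy as np

rng = np.random.default_rng(0)
K, M, N = 4, 6, 4          # users, antennas per user, relay antennas (assumed, M >= N)

# generic full-rank uplink channels H_k (N x M) and downlink channels D_k (M x N)
H = [rng.standard_normal((N, M)) + 1j * rng.standard_normal((N, M)) for _ in range(K)]
D = [rng.standard_normal((M, N)) + 1j * rng.standard_normal((M, N)) for _ in range(K)]

# zero-forcing pre-coders: right Moore-Penrose pseudo-inverse, so H_k @ V_k = I_N
V = [np.linalg.pinv(Hk) for Hk in H]
# zero-forcing post-coders: left pseudo-inverse of the downlink channel, U_k @ D_k = I_N
U = [np.linalg.pinv(Dk) for Dk in D]

for Hk, Vk, Dk, Uk in zip(H, V, D, U):
    assert np.allclose(Hk @ Vk, np.eye(N), atol=1e-9)
    assert np.allclose(Uk @ Dk, np.eye(N), atol=1e-9)
# the end-to-end channels are now identity matrices: the MIMO y-channel behaves as
# N parallel scalar y-channels (power normalization and noise coloring omitted)
```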
from this point on, we deal with the mimo y - channel after pre- and post - coding as a set of parallel siso y - channels .now let us describe the transmission strategies to be used over these sub - channels . in this subsection, we describe the different communication strategies that will be used to achieve the dof region of the y - channel .cycles will play an important role in the discussion in this subsection and the next one .so we start by introducing some notation related to cycles .an -cycle is denoted by the tuple .note that this notation is cyclic - shift invariant .in other words , if is a cyclic - shift of by positions , then and are equivalent cycles for all .let us denote the set of all distinct -cycles in the -user y - channel by .this set contains all -tuples which are not cyclically equivalent , i.e. , recall that .the cardinality of is given by , which is the number of permutations with elements from given by divided by the number of cyclically equivalent permutations .we denote the -th element of by } ] the set of all edges of the cycle } ] , }}=\{{\ensuremath{\boldsymbol{c}}}_{\ell [ n],1}{\ensuremath{\boldsymbol{c}}}_{\ell [ n],2},\ { \ensuremath{\boldsymbol{c}}}_{\ell [ n],2}{\ensuremath{\boldsymbol{c}}}_{\ell [ n],3},\cdots,{\ensuremath{\boldsymbol{c}}}_{\ell [ n],\ell -1}{\ensuremath{\boldsymbol{c}}}_{\ell [ n],\ell},\ { \ensuremath{\boldsymbol{c}}}_{\ell [ n],\ell}{\ensuremath{\boldsymbol{c}}}_{\ell [ n],1}\},\end{aligned}\ ] ] where ,b} ] .note that we denote the edges by ,b}{\ensuremath{\boldsymbol{c}}}_{\ell [ n],b+1} ] in order to avoid confusion with the 2-cycle ,b},{\ensuremath{\boldsymbol{c}}}_{\ell [ n],b+1}) ] is given by }}=\{12,\ 23,\ 31\} ] , e.g. , where the communicating partners want to exchange one symbol with each other .for this cycle , users and send symbols }},u_{j,{\ensuremath{\boldsymbol{c}}}_{2[n]}}\in\mathbb{c} ] and }} ] ( see appendix [ app : rate2dof ] ) .the relay then forwards this sum to the two users over sub - channel in channel uses of the downlink after multiplying by a normalization factor for power allocation .thus , the relay sets }}+\alpha_ju_{j,{\ensuremath{\boldsymbol{c}}}_{2[n]}}) ] , , we allocate the dof to the bi - directional strategy according to }}=\min_{{\ensuremath{\boldsymbol{e}}}\in\mathcal{e}_{{\ensuremath{\boldsymbol{c}}}_{2[n]}}}\left\{d_{{\ensuremath{\boldsymbol{e}}}}\right\},\end{aligned}\ ] ] where represents component of corresponding to edge .in other words , each user in a -cycle achieves }} ] sub - channels .consider the cycle }=(1,2) ] for instance .for this -cycle , we get , which determines the dof to be achieved by each of users 1 and 2 using the bi - directional strategy .the involved partners in this cycle ( ,1} ] ) apply the bi - directional strategy over }} ] , .we allocate resources to the -cyclic strategy corresponding to this -cycle as follows }}&=\min_{{\ensuremath{\boldsymbol{e}}}\in\mathcal{e}_{{\ensuremath{\boldsymbol{c}}}_{3[n ] } } } \left\ { d_{{\ensuremath{\boldsymbol{e } } } } -\sum_{m=1}^{|\mathcal{s}_2|}{\ensuremath{\mathsf{i}}}_{\mathcal{e}_{{\ensuremath{\boldsymbol{c}}}_{2[m]}}}({\ensuremath{\boldsymbol{e}}})d_{{\ensuremath{\boldsymbol{c}}}_{2[m]}}-\sum_{m=1}^{n-1}{\ensuremath{\mathsf{i}}}_{\mathcal{e}_{{\ensuremath{\boldsymbol{c}}}_{3[m]}}}({\ensuremath{\boldsymbol{e}}})d_{{\ensuremath{\boldsymbol{c}}}_{3[m ] } } \right\}.\end{aligned}\ ] ] with this allocation , each user in the -cycle } ] dof , and the corresponding -cyclic strategy is performed over }} ] is an indicator 
function which is equal to 1 if }} ] , and the second one represents the dof that have been already allocated to -cycles } ] . as an example , assume that after allocating resources for -cycles in a 4-user y - channel , we end up with a residual dof tuple with cycles }=(1,2,3) ] ( see figure [ fig : messageflow4users ] ) .we subsequently set }}&=\min\{d_{12}-d_{(1,2)},d_{23}-d_{(2,3)},d_{31}-d_{(1,3)}\},\\ d_{{\ensuremath{\boldsymbol{c}}}_{3[2]}}&=\min\{d_{12}-d_{(1,2)}-d_{{\ensuremath{\boldsymbol{c}}}_{3[1]}},d_{24}-d_{(2,4)},d_{41}-d_{(1,4)}\},\end{aligned}\ ] ] so that each user in the -cycles } ] achieves }} ] dof by using the -cyclic strategy over }} ] sub - channels , respectively .this resolves all -cycles in figure [ fig : messageflow4users ] . after allocating resources to the -cyclic strategy, we obtain the number of sub - channels to be used for each -cycle .the transmission of the corresponding signals is done as described in section [ sec : kcycstrategy ] .the cycles of length to can be treated similarly .next , we illustrate the resource allocation for a general -cycle strategy . after handling all cycles of length to , we consider -cycles , .consider an -cycle } ] achieve }} ] sub - channels . in , we subtract from all the dof that have been allocated to -cycles , , sharing edge with } ] , ) sharing the edge with } ] .suppose that the relay wants to construct an observation involving the first components of .the relay constructs as where is the matrix consisting of the last columns of .the first component of is a combination of the first components of .note that in addition to this elimination of variables , some additional variables can be eliminated by the relay if they are aligned by the transmitters .in other words , if user sends the signal where is the -th row of , so that for some and some , then the signals and align at the relay . in this case , eliminating also eliminated . according to this discussion ,the design of the optimal scheme is not an straightforward extension of the case considered in this paper .the main additional ingredient is the _ design of the optimal pre - coders and post - coders for a given dof tuple _ so that the desired observations are obtained at the relay .we did not have to go through this step in this paper since for , the same pre - coding allows achieving all dof tuples .given the pre- and post - coders , the coding schemes discussed in this paper ( uni - direction , bi - directional , and cyclic ) can be used over the resulting sub - channels . it is not clear whether such a combination would be optimal in general .the problem of designing the optimal scheme for thus remains an open problem .the sum dof of the 4-user case has been characterized in .it is worth to mention that the outer bound derived in this paper also applies for . in general , the outer bound can be stated as where is a permutation of and is its -th component ( see appendix [ app : optproof ] ) .combined with the the cut - set bounds for all , we get a general outer bound . as discussed above , this outer bound is tight for and for .however , we expect that it is not tight in the intermediate regime .similarly , the inner bound developed in this paper holds in for a general mimo y - channel as since if , we can deactivate antennas at the relay and still apply our scheme .this inner bound is also not tight in general . 
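returning to the resource allocation described earlier in this section, the following python sketch implements a simplified version of it: cycles are processed from the shortest (most efficient) to the longest, each cycle is allocated the minimum residual demand over its edges, and whatever remains is served uni-directionally. the indicator-based subtractions in the paper amount to the same residual bookkeeping; the demands below are illustrative, and the sub-channel accounting needed to check feasibility against the number of relay antennas is omitted.

```python
from itertools import permutations

def distinct_cycles(K, L):
    seen, out = set(), []
    for p in permutations(range(1, K + 1), L):
        i = p.index(min(p))
        c = p[i:] + p[:i]
        if c not in seen:
            seen.add(c)
            out.append(c)
    return out

def edges(cycle):
    return [(cycle[i], cycle[(i + 1) % len(cycle)]) for i in range(len(cycle))]

def greedy_cycle_allocation(K, demand):
    """demand[(i, j)]: requested DoF for the message from user i to user j.
    Returns the per-cycle allocations and the residual uni-directional demands."""
    residual = dict(demand)
    alloc = {}
    for L in range(2, K + 1):                 # shortest (most efficient) cycles first
        for cyc in distinct_cycles(K, L):
            d = min(residual[e] for e in edges(cyc))
            if d > 0:
                alloc[cyc] = d
                for e in edges(cyc):
                    residual[e] -= d          # same bookkeeping as the indicator sums
    return alloc, residual

K = 4
demand = {(i, j): 1 for i in range(1, K + 1) for j in range(1, K + 1) if i != j}
demand[(1, 2)] = 3                            # an asymmetric request, purely illustrative
alloc, residual = greedy_cycle_allocation(K, demand)
print("cycle allocations:", alloc)
print("served uni-directionally:", {e: d for e, d in residual.items() if d > 0})
```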
in conclusion ,the dof region of the general mimo y - channel remains an open problem , and requires further investigation .an interesting aspect of mimo systems is their channel separability / inseparability .separability of a mimo channel means that independent coding on each sub - channel suffices to achieve the dof of the channel .a mimo point - to - point channel is an example of a separable mimo channel .the main consequence of this separability is that the transmission can be optimized ( in terms of achievable rate ) using water - filling .the optimal scheme in this case consists of channel diagonalization , separate coding , plus power allocation .inseparability on the other hand means that joint encoding over multiple sub - channels is necessary to achieve the dof of the channel .in particular , in an inseparable channel , signals sent over different sub - channels are not always independent , and decoding is performed by considering multiple sub - channels jointly at the receiver .a mimo interference channel is an example of an inseparable mimo channel .the optimal scheme in such cases becomes more sophisticated .in general , the processing at the transmitters and the receivers of a separable channel is simpler compared to that of an inseparable channel . in this section, we make some remarks on channel separation of the y - channel .we have seen that the optimal strategy that achieves the dof region of our setup is a combination of bi - directional , cyclic , and uni - directional strategies .the resulting combination leads to coding over several sub - channels of the mimo system .more precisely , the cyclic strategy with cycle length requires coding over sub - channels .let us examine what would happen if one were to use a channel separation approach instead . in the channel separation approach, there is no interaction between different sub - channels , and the signals transmitted over a sub - channels can be decoded by only observing this particular sub - channel . while this is not possible for cyclic strategies with cycle length , this is possible for the bi - directional and the uni - directional strategies .so what would happen if we would rely only on those two strategies ?we have seen in section [ sec : bidirstra3user ] that using these two schemes only over a 3-user mimo y - channel with is not sufficient .namely , the dof tuple can not be achieved by this combination as shown in the example in section [ sec : bidirstra3user ] .consequently , a channel separation approach is not optimal in the given scenario . rather than channel separation , one has to code over several sub - channels by using the -cyclic strategy to achieve the given dof tuple .the same behaviour can be shown for a general -user mimo y - channel with . in conclusion ,the mimo y - channel is in general not separable .however , a channel separation approach is optimal in terms of sum - dof .if we are not interested in the dof trade - off between different dof component ( a trade - off which is reflected by the dof region ) but we are rather interested in the sum - dof , then the bi - directional strategy ( which can be applied in a channel separation approach ) suffices . to show this , note that the dof region implies that the sum - dof is given by can be shown by summing up the upper bound corresponding to and the one corresponding to in theorem [ thm : dofkusers ] . 
to achieve dof in total , the resources ( sub - channels ) can be distributed among the -cycles of the y - channel in any desired manner .then , each pair of users in a -cycle exchange two signals ( one signal in each direction ) over each sub - channel assigned to this -cycle .we have sub - channels in total , and thus , this strategy achieves dof. a simple allocation would be to serve one pair of users at a time , and to change the served pair of users in a round - robin fashion .since we have -cycles in the -user y - channel , this round - robin technique would achieve dof per message .consequently , for all , for a sum - dof of which is the optimal sum - dof .note that this scheme is fair ; it achieves a symmetric dof tuple where all users get the same dof . in conclusion ,the mimo y - channel is separable from sum - dof point of view .note that throughout this work , the uplink and downlink of the y - channel were considered separately .no adaptive coding has been used at the source nodes . in other words ,the signals sent by the users in the uplink are independent of what they received in the downlink .this separation turns out to be optimal for our problem .this kind of separability first appeared in the context of the gaussian two - way channel where adaptive coding is not necessary , and separation is optimal from capacity point of view .we have characterized the dof region of the mimo y - channel with users , antennas at the relay , and antennas at the users .the dof region is proved to be achievable by using channel diagonalization in addition to a combination of bi - directional , cyclic , and uni - directional communication strategies .channel diagonalization decomposes the mimo channel into parallel siso sub - channels over which the cyclic and uni - directional strategies are performed .the bi - directional and cyclic strategies use compute - forward at the relay to deliver several linear combinations of different signals to the users , such that each user is able to extract his desired signals . in other words , the main ingredient of these strategies is physical - layer network - coding . the uni - directional strategy is based on decode - forward .this combination of strategies is optimized by using a simple resource allocation approach .namely , we allocate resources ( sub - channels ) to different strategies based on their efficiency , starting with the most efficient and ending with the least efficient one .although this optimal resource allocation solution is intuitive , it has an interesting property . in order to design an optimal scheme , we have to combine strategies with different efficiencies .in other words , relying on the strategy with highest efficiency ( bi - directional strategy ) is not enough . as a by - product, we conclude that the mimo y - channel can not be separated into disjoint parallel sub - channels without degrading its performance . in general , one has to code over multiple sub - channels to achieve the whole dof region of the channel .the approach used in this paper can be applied to derive the capacity region of -user siso y - channels within a constant gap . 
to do this ,the cyclic communication strategies should be applied to derive the capacity region of the linear deterministic y - channel .then , the results can be extended to the gaussian case as in .this is left for future work .the authors would like to express their gratefulness to dr .karlheinz ochs ( rub , germany ) for the fruitful discussions .in this section , we prove the converse of theorem [ thm : dofkusers ] . we need to show that the dof region of the mimo y - channel with is outer bounded by where is a permutation of and is its -th component .let us consider the permutation and prove the upper bound holds for this particular permutation .we need to show that any achievable dof tuple must satisfy this bound is shown by using the genie - aided upper bound in .let us consider uses of the channel , and let us give , for all and to user 1 as side information .let us also give , to user 1 as side information .now , consider any achievable rate for the channel , for which every node can obtain its messages with an arbitrarily small probability of error .this means that , after channel uses , user 1 can decode from , and . after decoding its desired messages ,user 1 combines its side information with the decoded messages to obtain , which is the same observation as that of user 2 .this makes user 1 able to decode since user 2 can decode them from the same observation .similarly , after this step , user 1 has knowledge of the observation of user 3 and can use it to decoded , and so on , until user 1 knows all messages in the network through side information and through decoding .thus , user 1 knowing his own messages ( messages ) and the messages in the side information ( messages ) , and knowing his received signals , and the received signals of user 2 to , can decode his desired messages ( messages ) and all remaining messages . using fano s inequality , and defining and for , we can write on for clarity . ] where as , and where the second step follows by using the definition of mutual information , the fact that conditioning does not increase entropy , and the markov chain we can write this bound as where but this is the mutual information between the input and the output of a mimo point - to - point channel .this channel has dof .therefore , by dividing by and then letting we get which proves that which is equivalent to .this proves for the permutation .the upper bounds for all other permutations can be proved similarly .this concludes the proof of the converse of theorem [ thm : dofkusers ] and shows the optimality of the diagonalization strategy , transmission strategies , and resource allocation strategy .consider two users 1 and 2 sending codewords and , respectively , to a relay node .the codewords are constructed by using a nested - lattice code with power and rate . in particular , both users uses a nested - lattice code with a shaping lattice .user constructs ]this is the required number of sub - channels for achieving by our strategy .since we have sub - channels in our y - channel , we need the condition to hold for any . to show that , we need to show that the mfg defined by the dof components in satisfies the no - cycle property .we denote this mfg by .the subtraction of the dof of all cycles }} ] , we have }}=d_{(i , j)}=\min\{d_{ij},d_{ji}\} ] .this resolves the -cycle .let us define the set as the set of edges that remain after removing the edges }}}\{d_{{\ensuremath{\boldsymbol{e}}}}\} ] in guarantee that the mfg defined by has no -cycles . 
the first sum in might constitute -cycles .however , if we write in as }},\end{aligned}\ ] ] where }},\end{aligned}\ ] ] we can show that the mfg described by has no -cycles ( but possible cycles of length 4 or more ) . to this end , suppose that the first sum in has a -cycle }=(i_1,i_2,i_3) ] defined as ( cf . ) }}&=\min_{{\ensuremath{\boldsymbol{e}}}\in\mathcal{e}_{{\ensuremath{\boldsymbol{c}}}_{3[n ] } } } \left\ { d_{{\ensuremath{\boldsymbol{e } } } } -\sum_{m=1}^{|\mathcal{s}_2|}{\ensuremath{\mathsf{i}}}_{\mathcal{e}_{{\ensuremath{\boldsymbol{c}}}_{2[m]}}}({\ensuremath{\boldsymbol{e}}})d_{{\ensuremath{\boldsymbol{c}}}_{2[m]}}-\sum_{m=1}^{n-1}{\ensuremath{\mathsf{i}}}_{\mathcal{e}_{{\ensuremath{\boldsymbol{c}}}_{3[m]}}}({\ensuremath{\boldsymbol{e}}})d_{{\ensuremath{\boldsymbol{c}}}_{3[m ] } } \right\},\end{aligned}\ ] ] is strictly positive .further , assume that the minimization in is achieved by the edge , i.e. , }}=d_{i_1i_2}-d_{(i_1,i_2 ) } -\sum_{m=1}^{n-1}{\ensuremath{\mathsf{i}}}_{\mathcal{e}_{{\ensuremath{\boldsymbol{c}}}_{3[m]}}}(i_1i_2)d_{{\ensuremath{\boldsymbol{c}}}_{3[m]}}.\end{aligned}\ ] ] then , we can write as }}-\sum_{\substack{m=1\\m\neq n}}^{|\mathcal{s}_3| } d_{{\ensuremath{\boldsymbol{c}}}_{3 [ m]}}\\ & = \sum_{i=1}^k\sum_{j = i+1}^k\max\{d_{ij},d_{ji}\}-\sum_{{\ensuremath{\boldsymbol{e}}}\in\mathcal{e}_{{\ensuremath{\boldsymbol{c}}}_{3[n]}}}d_{{\ensuremath{\boldsymbol{e } } } } + \sum_{{\ensuremath{\boldsymbol{e}}}\in\mathcal{e}_{{\ensuremath{\boldsymbol{c}}}_{3[n]}}}d_{{\ensuremath{\boldsymbol{e } } } } -d_{{\ensuremath{\boldsymbol{c}}}_{3 [ n ] } } -\sum_{\substack{m=1\\m\neq n}}^{|\mathcal{s}_3| } d_{{\ensuremath{\boldsymbol{c}}}_{3 [ m]}}\\ \label{eq : subdc3ni1i2 } & = \sum_{i=1}^k\sum_{j = i+1}^k\max\{d_{ij},d_{ji}\}-\sum_{{\ensuremath{\boldsymbol{e}}}\in\mathcal{e}_{{\ensuremath{\boldsymbol{c}}}_{3[n]}}}d_{{\ensuremath{\boldsymbol{e}}}}+d_{i_2i_3}+d_{i_3i_1}+d_{(i_1,i_2 ) } + \sum_{m=1}^{n-1 } { \ensuremath{\mathsf{i}}}_{\mathcal{e}_{{\ensuremath{\boldsymbol{c}}}_{3[m]}}}(i_1i_2)d_{{\ensuremath{\boldsymbol{c}}}_{3[m ] } } -\sum_{\substack{m=1\\m\neq n}}^{|\mathcal{s}_3| } d_{{\ensuremath{\boldsymbol{c}}}_{3 [ m]}}\\ \label{eq : n_s3cycleresolve } & = \sum_{i=1}^k\sum_{j = i+1}^k\max\{d_{ij},d_{ji}\ } -\sum_{{\ensuremath{\boldsymbol{e}}}\in\mathcal{e}_{{\ensuremath{\boldsymbol{c}}}_{3[n]}}}d_{{\ensuremath{\boldsymbol{e } } } } + d_{i_2i_3}+d_{i_3i_1}+d_{(i_1,i_2 ) } -\sum_{m=1}^{n-1 } \bar{{\ensuremath{\mathsf{i}}}}_{\mathcal{e}_{{\ensuremath{\boldsymbol{c}}}_{3[m]}}}(i_1i_2)d_{{\ensuremath{\boldsymbol{c}}}_{3[m]}}-\sum_{m = n+1}^{|\mathcal{s}_3| } d_{{\ensuremath{\boldsymbol{c}}}_{3 [ m]}},\end{aligned}\ ] ] where in we have substituted .now , since }}>0 ] resolves the cycle by replacing }}}d_{{\ensuremath{\boldsymbol{e}}}}=d_{i_1i_2}+d_{i_2i_3}+d_{i_3i_1} ] resolves all -cycles for . for cycles } ]is zero . 
as a result , the mfg defined by contains neither -cycles nor -cycles .thus , we can write as where the set is the set of edges that remain after removing the edges , defined as } } } \hspace{-.1cm}\left\ { d_{{\ensuremath{\boldsymbol{e } } } } -\sum_{m=1}^{|\mathcal{s}_2|}{\ensuremath{\mathsf{i}}}_{\mathcal{e}_{{\ensuremath{\boldsymbol{c}}}_{2[m]}}}({\ensuremath{\boldsymbol{e}}})d_{{\ensuremath{\boldsymbol{c}}}_{2[m ] } } -\sum_{m=1}^{n-1}{\ensuremath{\mathsf{i}}}_{\mathcal{e}_{{\ensuremath{\boldsymbol{c}}}_{3[m]}}}({\ensuremath{\boldsymbol{e}}})d_{{\ensuremath{\boldsymbol{c}}}_{3[m ] } } \hspace{-.1cm}\right\}\end{aligned}\ ] ] ( corresponding to ) for , from .thus , clearly the set has no -cycles nor -cycles , but possibly cycles of length 4 or more .is not fixed for all since the remaining edges after resolving -cycles and -cycles depend on . ] by substituting in , we can write }}.\end{aligned}\ ] ] now , it is obvious that has no -cycles .next , we show that it also has no -cycles , .we begin by writing in as }},\end{aligned}\ ] ] where }}.\end{aligned}\ ] ] again , we can show that the mfg defined by does not contain -cycles .in particular , suppose that the edges in constitute the -cycle }=(i_1,i_2,i_3,i_4) ] by .similarly , all -cycles are resolved by the terms }}$ ] leading to where is defined similar to , i.e. , and } } } \left\ { d_{{\ensuremath{\boldsymbol{e } } } } -\sum_{i=2}^{3 } \sum_{m=1}^{|\mathcal{s}_i|}{\ensuremath{\mathsf{i}}}_{\mathcal{e}_{{\ensuremath{\boldsymbol{c}}}_{i[m]}}}({\ensuremath{\boldsymbol{e}}})d_{{\ensuremath{\boldsymbol{c}}}_{i[m]}}-\sum_{m=1}^{n-1}{\ensuremath{\mathsf{i}}}_{\mathcal{e}_{{\ensuremath{\boldsymbol{c}}}_{4 [ m]}}}({\ensuremath{\boldsymbol{e}}})d_{{\ensuremath{\boldsymbol{c}}}_{4 [ m ] } } \right\}.\end{aligned}\ ] ] the edges of do not constitute - , - , or -cycles , but might constitute cycles of length 5 or more . by substituting in in, we get }}.\end{aligned}\ ] ] by proceeding similarly , we can show that all -cycles in are resolved , and that can be written as where is a set of edges that constitute no cycles of length .we conclude that is the sum of dof components of whose corresponding mfg satisfies the no - cycle property . since implies that the sum of all permutations of components of constituting no cycles is less than for all dof tuples , then by , which proves the achievability of .a. chaaban and a. sezgin , `` multi - way communications : an information theoretic perspective , '' _ foundations and trends in communications and information theory _ , vol .12 , no . 3 - 4 ,pp . 185371 , 2015 .[ online ] .available : http://dx.doi.org/10.1561/0100000081 m. p. wilson , k. narayanan , h. d. pfister , and a. sprintson , `` joint physical layer coding and network coding for bidirectional relaying , '' _ ieee trans .on info . theory _ , vol .56 , no . 11 , pp .56415654 , nov .2010 .d. gndz , e. tuncel , and j. nayak , `` rate regions for the separated two - way relay channel , '' in _ proc . of the 46th annual allerton conference on communication , control , and computing _ , urbana - champaign , il , sep .2008 , pp . 13331340 .m. shaqfeh , a. zafar , h. alnuweiri , and m .- s .alouini , `` joint opportunistic scheduling and network coding for bidirectional relay channel , '' in _ proc . of ieee international symposium on info . theory ( isit ) _ , istanbul , turkey , 2013 .a. sezgin , a. s. avestimehr , m. a. khajehnejad , and b. 
hassibi , `` divide - and - conquer : approaching the capacity of the two - pair bidirectional gaussian relay network , '' _ ieee trans . on info .58 , no . 4 , pp . 24342454 , apr . 2012 .a. sezgin , h. boche , and a. s. avestimehr , `` bidirectional multi - pair network with a mimo relay : beamforming strategies and lack of duality , '' in _ proc .of allerton conference _ , monticello , il , usa , 2010 .m. mokhtar , y. mohasseb , m. nafie , and h. el - gamal , `` on the deterministic multicast capacity of bidirectional relay networks , '' in _ proc . of the 2010 ieee info .theory workshop ( itw ) _ , dublin , aug .2010 .b. matthiesen , a. zappone , and e. a. jorswieck , `` spectral and energy efficiency in 3-way relay channels with circular message exchanges , '' in _ proc .of 11th internation symposium on wireless communication systems ( iswcs ) _ , barcelona , spain , 2014 .n. lee and j .- b .lim , `` a novel signaling for communication on mimo y channel : signal space alignment for network coding , '' in _ proc . of ieee international symposium on info .theory ( isit ) _ , vol . 1 ,seoul , jun .2009 , pp .28922896 .a. chaaban and a. sezgin , `` the capacity region of the linear shift deterministic y - channel , '' in _ ieee international symposium on info .theory ( isit ) _ , st .petersburg , july 31-aug .5 2011 , pp . 24572461 .a. zewail , y. mohasseb , m. nafie , and h. el - gamal , `` the deterministic multicast capacity of 4-node relay networks , '' in _ proc .of ieee international symposium on info . theory ( isit ) _ , istanbul , turkey , july 2013 .a. a. zewail , m. nafie , y. mohasseb , and h. el - gamal , `` achievable degrees of freedom region of mimo relay networks using detour schemes , '' in _ proc . of ieee international conference on communications ( icc ) _ , sydney , australia , 2014 .m. shaqfeh , f. al - qahtani , and h. alnuweiri , `` optimal relay selection for decode - and - forward opportunistic relaying , '' in _ international conference on communications , signal processing , and their applications ( iccspa ) _ , sharjah , uae , feb .park , m .- s .alouini , s .- h . park , and y .- c .ko , `` on the achievable degrees of freedom of alternate mimo relaying with multiple af relays , '' in _ third international conference on communications and networking ( comnet ) _ , hammamet , tunesia , march 2012 .p. cao , z. chong , z. ho , and e. jorswieck , `` energy - efficient power allocation for amplify - and - forward mimo relay channel , '' in _ieee 17th international workshop on computer aided modeling and design of communication links and networks ( camad ) _ , barcelona , spain , sept .2012 .k. narayanan , m. p. wilson , and a. sprintson , `` joint physical layer coding and network coding for bi - directional relaying , '' in _ proc . of the forty - fifth allerton conference _ ,illinois , usa , sep .
the -user mimo multi - way relay channel ( y - channel ) consisting of users with antennas each and a common relay node with antennas is studied in this paper . each user wants to exchange messages with all the other users via the relay . a transmission strategy is proposed for this channel . the proposed strategy is based on two steps : channel diagonalization and cyclic communication . the channel diagonalization is applied by using zero - forcing beam - forming . after channel diagonalization , the channel is decomposed into parallel sub - channels . cyclic communication is then applied , where signal - space alignment for network - coding is used over each sub - channel . the proposed strategy achieves the optimal dof region of the channel if . to prove this , a new degrees - of - freedom outer bound is derived . as a by - product , we conclude that the mimo y - channel is not separable , i.e. , independent coding on separate sub - channels is not enough , and one has to code jointly over several sub - channels .
the laws of physics are framed in precise mathematical language . mastering physics involves learning to do abstract reasoning and making inferences using these abstract laws of physics framed in mathematical forms . the answers to simple questions related to motioncan be very sophisticated requiring a long chain of reasoning .it is not surprising then that developing a solid grasp of physics even at the introductory level can be challenging .learning quantum mechanics is even more challenging [ 1 - 12 ] . unlike classical mechanics, we do not have direct experience with the microscopic quantum world .also , quantum mechanics has an abstract theoretical framework in which the most fundamental equation , the time - dependent schroedinger equation ( tdse ) , describes the time evolution of the wave function or the state of a quantum system according to the hamiltonian of the system .this wave function is in general complex and does not directly represent a physical entity .however , the wave function at a given time can be exploited to make inferences about the probability of measuring different physical observables associated with the system .for example , the absolute square of the wave function in position - space is the probability density .since the tdse does not describe the evolution or motion of a physical entity , unlike newton s second law , the modeling of the microscopic world in quantum mechanics is generally more abstract than the modeling of the macroscopic world in classical mechanics .quantum theory provides a coherent framework for reasoning about microscopic phenomena and has never failed to explain observations if the hamiltonian of the system is modeled appropriately to account for the essential interactions .however , the conceptual framework of quantum mechanics is often counter - intuitive to our everyday experiences .for example , according to the quantum theory , the position , momentum , energy and other observables for a quantum mechanical entity are in general not well - defined .we can only predict the probability of measuring different values based upon the wave function when a measurement is performed .this probabilistic interpretation of quantum mechanics , which even einstein found disconcerting , is challenging for students .moreover , according to the copenhagen interpretation of quantum mechanics , which is widely taught to students , the measurement of a physical observable changes the wave function if the initial wave function is not an eigenfunction of the operator corresponding to the observable measured .thus , the usual time evolution of the system according to the tdse is separated from what happens during the measurement of an observable .students often have difficulty with this notion of an instantaneous change or collapse " of the wave function during the measurement .our prior research shows that many students have common alternative conceptions about the collapse of the wave function during the measurement , e.g. 
, many believe that the wave function gets stuck in the collapsed state after the measurement or it must go back to the original wave function if one waits long enough after the measurement .we found that when students were given the possibility that the wave function may neither stay stuck nor go back to the original wave function , many students had difficulty understanding how anything other than those two outcomes was possible .it was clear from the discussions that the students had not internalized that after the measurement , the wave function will again evolve according to the tdse starting from the collapsed wave function .in quantum theory , position and momentum are not independent variables that evolve in a deterministic manner but are operators in the hilbert space in which the state of the system is a vector . for a given state of the system , the probabilities of measuring position or momentum in a narrow range depend on each other .in particular , specifying the position - space wave function that can help us determine the probability of measuring the position in a narrow range specifies ( via a fourier transform ) the momentum - space wave function that tells us the probability of measuring the momentum in a narrow range .the eigenstates of the position or momentum operators span the hilbert space so that any state of the system can be written as a linear combination of a complete set of position eigenstates or momentum eigenstates .the measurement of position ( or momentum ) yields a position ( or momentum ) eigenvalue with a certain probability depending upon the state of the system .these concepts are challenging for students .in addition to the lack of direct exposure to microscopic phenomena described by quantum theory and the counter - intuitive nature of the theory , the mathematical facility required in quantum mechanics can increase the cognitive load and make learning quantum mechanics even more challenging . the framework of quantum mechanics is based on linear algebra .in addition , a good grasp of differential equations , special functions , complex variables etc . 
is highly desired .if students are not facile in mathematics , they may become overwhelmed by the mathematical details and may not have the opportunity to focus on the conceptual framework of quantum mechanics and build a coherent knowledge structure .our earlier research shows that a lack of mathematical facility can hinder conceptual learning .similarly , alternative conceptions about conceptual aspects of quantum mechanics can lead to students making mathematical errors that they would otherwise not make in a linear algebra course .many of the alternative conceptions in the classical world are over - generalizations of everyday experiences to contexts where they are not applicable .for example , the conception that motion implies force often originates from the fact that one must initially apply a force to an object at rest to get it moving .people naively over - generalize such experiences to conclude that even an object moving at a constant velocity must have a net force acting on it .one may argue that quantum mechanics may have an advantage here because the microscopic world does not directly deal with observable phenomena in every day experience so students are unlikely to have alternative conceptions .unfortunately , that is not true and research shows that students have many alternative conceptions about quantum mechanics [ 1 - 12 ] .these conceptions are often about the quantum mechanical model itself and about exploiting this model to infer what should happen in a given situation .students often over - generalize their intuitive notions from the classical world to the quantum world which can lead to incorrect inferences .as discussed earlier , the wave function is central to quantum mechanics . here, we discuss an investigation of difficulties with the wave function that was carried out by administering written surveys to more than two hundred physics graduate students and advanced undergraduate students enrolled in quantum mechanics courses and by conducting individual interviews with a subset of students .students were given a potential energy diagram for a one - dimensional finite square well of width and depth between .they were asked to draw a qualitative sketch of ( a ) the ground state wave function , ( b ) any one scattering state wave function and comment on the shape of the wave function in each case in all the three regions , and .the individual interviews were carried out using a think aloud protocol . during the semi - structured interviews , students were asked to verbalize their thought processes while they answered the questions . they were not interrupted unless they remained quiet for a while . in the end , we asked them for clarifications of the issues they did not make clear . here ,we cite examples of students difficulties .we note that students were provided separate spaces for drawing the bound and scattering state wave functions so that they do not confuse the vertical axis in the potential energy diagram given with the vertical axis of their sketch of the wave function . but instead of simply showing the location of and in their sketches , many students redrew the potential energy diagram , situated their wave function in the potential energywell and did not specify what the vertical axes of their plots were . 
in response to the question , one interviewed student claimed that it is impossible to draw the bound and scattering state wave functions for a finite square well because one must find the solution of a transcendental equation which can only be solved numerically . when the student was encouraged to make a qualitative sketch , he drew two coordinate axes and then drew some parallel curves and a straight line from the origin intercepting the curves . he claimed that all he could say without solving the equation on the computer was that the intercepts would give the wave function . while one must solve a transcendental equation to find the finite number of bound states for a finite square well , the student was asked to draw a qualitative sketch of the wave function , something that is taught even in a modern physics course . in particular , students are taught that the bound state wave functions for a finite square well look sinusoidal inside the well with an exponential tail outside in the classically forbidden region . it appeared that the student had memorized a procedure but had not developed a qualitative `` feel '' for what the bound and scattering state wave functions should look like for a finite square well . figure 1 shows a sketch from a student who incorrectly believed that the bound and scattering states can be part of the same wave function . he felt that the sinusoidal wave function inside the well was the bound state and the part of the wave function outside the well was the scattering state and corresponded to the `` free particle '' . some interviewed students claimed that the shapes of the various bound state wave functions for the finite square well can not be sinusoidal inside the well since only the infinite square well allows sinusoidal bound states . one student incorrectly claimed that the ground state of the finite square well should be gaussian in shape to ensure that the wave function has no cusp and exponentially decays to zero outside the well . figure 2 shows a sketch of the scattering state wave function by a student who incorrectly claimed that the wave function has no slope because the potential is zero in regions i and iii . while the probability density may be uniform , the wave function can not be constant in those regions . the student also incorrectly believed that the constant value of the wave function is lower in region iii compared to region i since it is affected by the potential in region ii and `` dies '' . figure 3 shows a sketch of the scattering state by a student who incorrectly drew the wave function to be higher in region ii and claimed : `` higher because some of the wave is reflected at the wall '' . figure 4 shows sketches by three students , all of whom incorrectly believed that the wave function will decay exponentially in region ii . these students had not learned what one should expect when the potential energy is lower in the well in region ii . instead , they plotted a decaying wave function from rote memory that may correspond to a potential barrier . moreover , similar to a student s sketch in figure 3 , the student who drew figure 4(a ) incorrectly claimed : `` typical particle wave function but lowered by potential well '' , as though the oscillations in regions i and iii should be around different references . these types of confusions are partly due to the inability to distinguish the vertical axis of the potential well ( which has the units of energy ) from the vertical axis used when drawing the wave function .
also , in figure 4(c ) , the student drew the incoming and reflected waves separately in region i but only drew the incoming part to be continuous with the wave function in region ii which is incorrect . figure 5 shows three students plots in which the wave functions drawn have discontinuities and figure 6 shows a plot in which there is a cusp .interviews and written explanations suggest that many students drew diagrams of the wave function from memory without thinking about the physical meaning of the wave function. this may partly be due to the fact that the wave function itself is not physical and can not be observed experimentally .additional cognitive resources are required to make sense of the wave function in order to draw it correctly .for example , a discontinuity in the wave function is not physical because the absolute square of the wave function is related to the probability density for finding the particle and a discontinuity at a point would imply that the probability of finding the particle will depend on whether we approach that point from the left or the right side .similarly , the wave function can not have a cusp because it would imply that the expectation value of the kinetic energy ( related to the second derivative of the wave function ) is infinite .while quantum mechanics may require reasoning at the formal operational level in the piagetian hierarchy of cognitive levels , it is possible to design instruction that helps students develop intuition .the notion of the zone of proximal development " ( zpd ) attributed to vygotsky focuses on what a student can do on his / her own vs. with the help of an instructional strategy that accounts for his / her prior knowledge and skills and builds on it . in quantum mechanics, we can exploit students prior knowledge of probability and mathematical skills .but the non - intuitive nature of quantum mechanics and other issues discussed earlier imply that scaffolding , which is at the heart of zpd , is critical for helping students learn concepts .scaffolding can be used to stretch students learning far beyond their initial knowledge by carefully crafted instruction .we are taking into account these issues and students prior knowledge to develop quantum interactive learning tutorials ( quilts ) and tools for peer - instruction .these learning tools employ computer - based visualization tools and help students take advantage of the visual representation of the quantum mechanical concepts , e.g. , wave function , in order to develop intuition about quantum phenomena .
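computer-based visualization of the kind mentioned above can be as simple as diagonalizing a discretized hamiltonian. the sketch below (python / numpy, with assumed well depth and width and units hbar = m = 1) computes the ground state of a finite square well by finite differences; the resulting wave function is cosine-like inside the well and decays exponentially outside, with no cusp or discontinuity, which is exactly the qualitative shape students are asked to draw.

```python
import numpy as np

hbar = m = 1.0
V0, a = 10.0, 1.0                         # well depth and half-width (assumed values)
x = np.linspace(-5 * a, 5 * a, 2001)
dx = x[1] - x[0]
V = np.where(np.abs(x) <= a, -V0, 0.0)    # -V0 inside the well, 0 outside

# finite-difference Hamiltonian: -(hbar^2 / 2m) d^2/dx^2 + V(x), three-point stencil
main = hbar**2 / (m * dx**2) + V
off = -hbar**2 / (2 * m * dx**2) * np.ones(len(x) - 1)
Hmat = np.diag(main) + np.diag(off, 1) + np.diag(off, -1)

E, psi = np.linalg.eigh(Hmat)             # eigenvalues in ascending order
ground = psi[:, 0] / np.sqrt(dx)          # normalize so that sum |psi|^2 dx = 1
print("ground-state energy:", E[0])       # negative, i.e. a bound state
# inside |x| <= a the state is cosine-like; outside it decays exponentially,
# with no cusp and no discontinuity (the qualitative shape discussed above)
```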
we are investigating cognitive issues in learning quantum mechanics in order to develop effective teaching and learning tools . the analysis of cognitive issues is particularly important for bridging the gap between the quantitative and conceptual aspects of quantum mechanics and for ensuring that the learning tools help students build a robust knowledge structure . we discuss the cognitive aspects of quantum mechanics that are similar to or different from those of introductory physics and their implications for developing strategies to help students develop a good grasp of quantum mechanics . department of physics and astronomy , university of pittsburgh , pittsburgh , pa , 15260 , usa
during cell differentiation , the cell evolves from undifferentiated phenotypes in a multipotent stem or progenitor state to differentiated phenotypes in a mature cell . in this process , the gene regulatory network , which governs the progressive changes of gene expression patterns of the cell , forces the cell to adopt the cell type - specific phenotypes . cells can have states with higher probability of appearance , which leads to different cell phenotypes . different cell phenotypes correspond to different basins of attraction on the potential landscape . therefore the differentiation and developmental process of the cell can be thought of as the evolution of the underlying landscape topography from one basin to another . one grand challenge is to explain how this occurs , what the underlying mechanism is , and how to quantify the differentiation and developmental process . furthermore , the unidirectional developmental process poses another challenge , namely to explain the origin of the time arrow . in the cell , intrinsic fluctuations are unavoidable due to the limited number of protein molecules . there have been increasing numbers of studies on how gene regulatory networks can be stable and functional under such highly fluctuating environments . in addition , the gene state fluctuations from the regulatory proteins binding / unbinding to the promoters can be significant for gene expression dynamics . conventionally , it was often assumed that the binding / unbinding is significantly faster than the synthesis and degradation ( adiabatic limit ) . this assumption may hold in some prokaryotic cells in certain conditions , but in general there is no guarantee that it is true . in fact , one expects that in eukaryotic cells and some prokaryotic cells , binding / unbinding can be comparable to or even slower than the corresponding synthesis and degradation ( non - adiabatic limit ) . this can lead to nontrivial stable states and coherent oscillations appearing as a result of new time scales introduced due to the non - adiabaticity . therefore , the challenge for us is to understand how biological differentiation and reprogramming can be functional under both intrinsic fluctuations and non - adiabatic fluctuations . previous studies showed that the change in the self activation regulatory strengths can cause the differentiation of phenotypes . in this article , we used a canonical gene regulatory circuit module to study cell fate decision and commitment in multipotent stem or progenitor cells . we will study a model of a cell developmental circuit ( fig . [ circuit ] ) which is composed of a pair of mutually inhibiting but self activating genes . this gene regulatory motif has been found in various tissues where a pluri / multipotent stem cell has to undergo a binary cell fate decision . for example , in the multipotent common myeloid progenitor cell ( cmp ) facing the binary cell fate decision between the myeloid and the erythroid fate , the fate determining transcription factors ( tf ) , pu.1 , and gata1 , which promote the myeloid or the erythroid fates , respectively , form such a gene network circuit .
the relative expression levels a ( pu.1 ) and b ( gata1 ) of these two reciprocal tfs can bias the decision toward either lineage .we found that the change in the time scale of the binding / unbinding of regulatory proteins to the promoters may provide an new important mechanism for the cell differentiation .we studied the underlying potential landscapes associated with the differentiation and developmental process and found that the underlying landscapes developed from un - differentiated multipotent state to the differentiated states as the binding / unbinding rate decreased to the slow non - adiabatic binding region .in addition , in the slow non - adiabatic binding region , we predicted the emergence of multiple meta - stable states in the development of multipotent stem cells and explained the origin of this observation in the experiments .we also calculated the mean first passage transition time for the differentiation and reprogramming .we found that the mean first passage transition time strongly depends on the time scale of the promoter binding / unbinding processes .there is an optimal speed for differentiation and development with certain promoter binding / unbinding rates. it will be natural to ask whether the differentiation and development happens at this optimal speed ?future experimental and bioinformatics studies might be able to give the answer .we quantified the kinetic pathways for the differentiation and reprogramming .we found that they are irreversible .this captures the non - equilibrium prosperities for the biological processes of the underlying gene regulatory networks in multipotent stem or progenitor cells .it may provide the origin of time arrow for development .as shown in fig . [ circuit ] , the gene regulatory circuit that governs binary cell fate decision module consists of mutual regulation of two opposing fate determining master tf a and b. the module has been shown to control developmental cell fate decision and commitment in several instances of multipotent stem or progenitor cells that faces a binary fate decision , ( i.e. , gata1 and pu.1 ) .a and b are coexpressed in the multipotent undecided cell and committed to either one of the two alternative lineages is associated with either one factor dominating over the others , leads to expression patterns in a mutually exclusive manner .importantly , in many cases the genes a and b also self - activate ( positive autoregulate ) themselves ( fig .[ circuit ] ) . here , the hybrid promoter can be bound by the regulatory protein with the binding rate and dissociation rate ( both and can depend on protein concentration ) .the synthesis of protein is controlled by the gene state of promoter .there are two types of genes , and , to be translated into proteins and respectively .the proteins ( ) can bind to the promoter of the gene ( ) to activate the synthesis rate of ( ) , which makes a self - activation feedback loop .the proteins ( ) can bind to the gene ( ) to repress the synthesis rate of ( ) , which makes a mutual repression loop . here ,both protein and protein bind on promoters as a dimer with the binding rate and respectively .therefore , each gene has 4 states with self activator binding or non - binding and with mutual repression from another gene binding or non - binding ( assuming we have two different binding sites , one for self activator and one for the other gene ) .the whole system has 16 gene states in total . 
for simplicity , we neglect the role of mrnas by assuming that the translation processes are very fast . the model can be expressed by the following chemical reactions :
\begin{aligned}
& \mathcal{o}_{\alpha}^{11 } + 2 a \xrightleftharpoons[f_{\alpha a}]{h_{\alpha a } } \mathcal{o}_{\alpha}^{01 } , \quad \mathcal{o}_{\alpha}^{10 } + 2 a \xrightleftharpoons[f_{\alpha a}]{h_{\alpha a } } \mathcal{o}_{\alpha}^{00 } \\
& \mathcal{o}_{\alpha}^{11 } + 2 b \xrightleftharpoons[f_{\alpha b}]{h_{\alpha b } } \mathcal{o}_{\alpha}^{10 } , \quad \mathcal{o}_{\alpha}^{01 } + 2 b \xrightleftharpoons[f_{\alpha b}]{h_{\alpha b } } \mathcal{o}_{\alpha}^{00 } \\
& \mathcal{o}_{a}^{ij } \overset{g_{a}^{ij}}{\longrightarrow } a , \quad \mathcal{o}_{b}^{ij } \overset{g_{b}^{ij}}{\longrightarrow } b , \quad a \overset{k_{a}}{\longrightarrow } \emptyset , \quad b \overset{k_{b}}{\longrightarrow } \emptyset
\end{aligned}
with for the hybrid promoter of gene . for the gene state index of gene , the first index stands for the activator protein unbound ( bound ) on the promoter ; the second index stands for the repressor protein unbound ( bound ) on the promoter . ( ) is the synthesis rate of the protein ( ) when the gene ( ) is in state . the probability distribution of the microstate is indicated as , where and are the concentrations of the activator and the repressor respectively . the index ( ) represents the gene occupation state by the protein ( ) and the index ( ) represents the gene occupation state by the protein ( ) . this results in sixteen master equations for the probability distribution , which are shown explicitly in the supporting material ( * sm * ) . the steady state probability distribution satisfies for all . the total probability distribution is . the generalized potential function of the non - equilibrium network can be quantified as : . it maps to the potential landscape , which gives a quantitative measure of the global stability and function of the underlying network . the above equations are difficult to deal with because each one actually represents an infinite number of equations ( n ranges from to ) . a direct way to find the steady state is through kinetic simulations . here , we use monte carlo simulation to find the steady state distribution of the master equations ( see supporting material ( * sm * ) ) . in our calculations , we only consider the a - b symmetric case : we define the normalized binding / unbinding rates of the gene states : , , and equilibrium constants : , , which indicate the ratio between unbinding and binding speed . there are four gene states for each gene and the synthesis rates from genes a and b are also symmetric : . when gene a is bound by protein a ( self activation ) while not bound by protein b ( mutual repression ) , the synthesis rate of protein a is the largest : , where is the activation strength and is the repression strength . here , we choose equilibrium constants , symmetric binding / unbinding speed , the repression strength , and scale the time to make .
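the steady state of this reaction network can be sampled directly with a stochastic simulation . the sketch below is a minimal , illustrative gillespie ( ssa ) set - up for the two - gene circuit ; the parameter values , the state encoding ( here a flag of 1 means the site is occupied ) and the basal synthesis rate are assumptions made only for illustration , not the values used in this work .

```python
import random

# illustrative parameter values (assumed for this sketch, not taken from the paper)
k = 1.0                    # protein degradation rate (time scaled so k = 1)
h, Xeq = 0.01, 100.0       # dimer binding rate and equilibrium constant (assumed)
f = h * Xeq                # unbinding rate
g_act, g_rep, g_0 = 60.0, 2.0, 12.0   # synthesis: activated / repressed / neither (assumed)

def syn_rate(act_bound, rep_bound):
    """synthesis rate of a gene given the occupancy of its activator/repressor sites."""
    return g_rep if rep_bound else (g_act if act_bound else g_0)

def reactions(s):
    """all reactions leaving state s = [aA, aR, bA, bR, nA, nB] as (rate, delta) pairs;
    aA/aR are gene A's activator/repressor sites, bA/bR those of gene B (1 = occupied)."""
    aA, aR, bA, bR, nA, nB = s
    return [
        # gene A: self-activator (A dimer) and repressor (B dimer) sites
        (0 if aA else h * nA * (nA - 1) / 2, (1, 0, 0, 0, -2, 0)),
        (f if aA else 0,                     (-1, 0, 0, 0, 2, 0)),
        (0 if aR else h * nB * (nB - 1) / 2, (0, 1, 0, 0, 0, -2)),
        (f if aR else 0,                     (0, -1, 0, 0, 0, 2)),
        # gene B: self-activator (B dimer) and repressor (A dimer) sites
        (0 if bA else h * nB * (nB - 1) / 2, (0, 0, 1, 0, 0, -2)),
        (f if bA else 0,                     (0, 0, -1, 0, 0, 2)),
        (0 if bR else h * nA * (nA - 1) / 2, (0, 0, 0, 1, -2, 0)),
        (f if bR else 0,                     (0, 0, 0, -1, 2, 0)),
        # synthesis and degradation of the two proteins
        (syn_rate(aA, aR), (0, 0, 0, 0, 1, 0)),
        (syn_rate(bA, bR), (0, 0, 0, 0, 0, 1)),
        (k * nA,           (0, 0, 0, 0, -1, 0)),
        (k * nB,           (0, 0, 0, 0, 0, -1)),
    ]

def gillespie_step(s, rng=random):
    """advance the state by one reaction; returns (new_state, waiting_time)."""
    rx = reactions(s)
    total = sum(r for r, _ in rx)
    dt = rng.expovariate(total)
    pick, acc = rng.random() * total, 0.0
    for r, d in rx:
        acc += r
        if pick <= acc:
            return [x + dx for x, dx in zip(s, d)], dt
    return s, dt
```

long runs of ` gillespie_step ` , started for instance from ` [ 0 , 0 , 0 , 0 , 20 , 20 ] ` , then provide the samples of the protein copy numbers used in the landscape and kinetics estimates below .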
such circuits with the above control parameters can generate asymmetric attractors representing the differentiated states with almost mutually exclusive expression of protein ( i.e. gata1 ) and ( i.e. pu.1 ) . in addition , central symmetric attractor states characterized by approximately equal levels of and expression can also be generated , which represent the multipotent state that exhibits the characteristic balanced or promiscuous expression of the two opposing , fate - determining factors - a hallmark of the indeterminacy of the undecided multipotent stem cell . we plotted the potential landscape in the - plane for different activation strengths and binding / unbinding speeds in fig . [ fig4 ] , [ fig5 ] , [ fig6 ] , [ fig7 ] , [ fig8 ] , [ fig9 ] , [ fig1 ] , [ fig2 ] , [ fig3 ] for the contour view , and fig . [ fig4_1 ] , [ fig5_1 ] , [ fig6_1 ] , [ fig7_1 ] , [ fig8_1 ] , [ fig9_1 ] , [ fig1_1 ] , [ fig2_1 ] , [ fig3_1 ] for the 3 - dimensional view . in these figures , we found two kinds of mechanisms for cell differentiation . during the developmental process , the self activation comes from an effective regulation , and its change is due to the regulation of these transcription factors by other regulators such as klf4 . when the self activation is strong ( large ) , the system is mono - stable with one un - differentiated central basin , as in fig . [ fig4 ] ( or [ fig4_1 ] ) . as the self activation strength decreases , the central basin gets weaker and differentiated basins on both sides start to develop , which results in tri - stability as in fig . [ fig7 ] ( or [ fig7_1 ] ) . when the self activation strength , the circuit reduces to a normal symmetric toggle switch . for the toggle switch , and can not both be large in the adiabatic limit , because they suppress each other . then , the un - differentiated central basin disappears and the two differentiated basins on both sides survive , which gives bi - stability as in fig . [ fig1 ] ( or [ fig1_1 ] ) . therefore , decreasing the self activation regulatory strength will lead the cell system to differentiate . changing the effective self activation regulatory strengths of the transcription factors binding to the genes therefore provides a possible differentiation mechanism , which is currently under study . we would like to point out that there is another possible mechanism of cell differentiation , arising from the slow binding / unbinding of protein regulators to gene promoters . we noticed that for a fixed activation strength , cells can develop more stable differentiated states on both sides . as shown in fig . [ fig4 ] ( or [ fig4_1 ] ) , [ fig5 ] ( or [ fig5_1 ] ) , [ fig6 ] ( or [ fig6_1 ] ) and fig . [ fig7 ] ( or [ fig7_1 ] ) , [ fig8 ] ( or [ fig8_1 ] ) , [ fig9 ] ( or [ fig9_1 ] ) , when the binding / unbinding rate decreases , the un - differentiated central basin becomes weaker and less stable , while the differentiated basins on both sides become stronger and more stable . we also noticed that in the non - adiabatic slow binding limit ( small binding / unbinding rate ) , multiple meta - stable basins show up . in addition , in the non - adiabatic slow binding limit , cells have a chance to go extinct and there are `` extinction basins '' near , as shown in fig . [ fig6 ] ( or [ fig6_1 ] ) , [ fig9 ] ( or [ fig9_1 ] ) , [ fig3 ] ( or [ fig3_1 ] ) . these behaviors are directly due to the non - adiabatic effect : slow binding / unbinding of protein regulators to promoters . when the binding / unbinding rate is small , the interactions ( either repressions or activations ) between gene states are weak and different gene states statistically co - exist in cells . each gene state will give a basin in the concentration space , and the sum of these basins will lead to a multi - stable potential landscape . this results in development and differentiation under slow binding , starting from the original undifferentiated , equally populated single basin of attraction under fast binding . slow binding thus provides another possible mechanism for differentiation and development .
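the population landscape discussed above can be estimated directly from such simulations . a minimal sketch , assuming steady - state samples of the protein copy numbers are available ( e.g. from the ssa sketch above ) , is

```python
import numpy as np

def landscape(samples, nmax=120):
    """estimate U = -ln P_ss on an (n_A, n_B) grid from steady-state samples."""
    hist = np.zeros((nmax + 1, nmax + 1))
    for na, nb in samples:
        if 0 <= na <= nmax and 0 <= nb <= nmax:
            hist[na, nb] += 1
    p = hist / hist.sum()
    with np.errstate(divide="ignore"):
        u = -np.log(p)        # unvisited states end up at +inf
    return u
```

basins of attraction then appear as low - lying regions of ` u ` , and the number of distinct minima can be tracked as the binding / unbinding rate is varied .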
to quantitatively characterize the dynamics of the differentiation and of the reverse process of reprogramming , we study the speed of differentiation and reprogramming in terms of the mean first passage time ( mfpt ) , as shown in fig . [ fig11 ] , [ fig12 ] and [ fig13 ] . in an attractor landscape , the lifetime of an attractor reflects its stability , which can be measured by the mfpt . the mfpt is the average transition time between attractors on a landscape induced by the intrinsic statistical fluctuations of molecule numbers , and the traversing time represents how easy it is to switch from one place to another . when the binding / unbinding rate is relatively large , the un - differentiated central basin becomes more stable , as in fig . [ fig4 ] ( or [ fig4_1 ] ) , [ fig7 ] ( or [ fig7_1 ] ) , [ fig1 ] ( or [ fig1_1 ] ) , and cells are more likely to stay in the un - differentiated state . therefore , the differentiation process will be more difficult and the mfpt is longer for faster binding . for the differentiation process , it is noticed that , as the binding / unbinding rate increases , the mfpt first decreases and then increases . in the non - adiabatic limit ( small binding / unbinding rate ) , the rate - limiting step for differentiation is the binding / unbinding events themselves . therefore , increasing the binding / unbinding speed will accelerate the kinetics from the un - differentiated central basin to the differentiated side basins . so for the differentiation process , driven by the change from faster to slower binding of regulatory proteins to the genes , we notice that the speed of differentiation is slow when the system is dominated by the undifferentiated state for faster binding , and is also slow for slower binding , where the occasional binding events become the rate - limiting step for differentiation . there is an optimal speed for differentiation . as binding becomes faster from the low speed end ( non - adiabatic limit ) , the speed of differentiation is controlled by the binding speed and therefore increases . as the binding becomes even faster , the differentiation is dominated by the escape from the undifferentiated basin of attraction and is therefore significantly slowed down . this creates an optimal speed for differentiation and development . the reverse process of cell differentiation is the reprogramming of differentiated cells back to a multi - or pluripotent state . in fig . [ fig11 ] , [ fig12 ] and [ fig13 ] , the mfpt for reprogramming is plotted for different self activation strengths and binding / unbinding speeds . we observed that , for a typical differentiated system , as in fig . [ fig6 ] ( or [ fig6_1 ] ) and fig . [ fig1 ] ( or [ fig1_1 ] ) , the reprogramming probability is very low and reprogramming requires a very long time .
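a minimal sketch of how the mfpt could be estimated numerically from stochastic trajectories is given below ; the single - step simulator and the basin test passed in are assumed interfaces of this sketch .

```python
import random

def mfpt(initial_state, step, in_target, n_runs=200, t_max=1e6, rng=random):
    """average first passage time into the target basin over many stochastic runs;
    `step` is a one-step simulator (e.g. the SSA sketch above) and `in_target`
    tests membership of the destination basin -- both are assumed interfaces."""
    times = []
    for _ in range(n_runs):
        s, t = list(initial_state), 0.0
        while t < t_max and not in_target(s):
            s, dt = step(s, rng)
            t += dt
        times.append(t)
    return sum(times) / len(times)

# hypothetical usage: time to reach an A-dominated differentiated basin
# mfpt([0, 0, 0, 0, 20, 20], gillespie_step, lambda s: s[4] > 80 and s[5] < 20)
```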
for self activation strengths ( fig . [ fig11 ] ) and ( fig . [ fig12 ] ) , the mfpt for reprogramming decreases as the binding / unbinding speed increases , because the stability of the un - differentiated symmetric central state increases with the binding / unbinding speed , as we can see in the potential landscapes , fig . [ fig4 ] ( or [ fig4_1 ] ) , [ fig5 ] ( or [ fig5_1 ] ) , [ fig6 ] ( or [ fig6_1 ] ) , [ fig7 ] ( or [ fig7_1 ] ) , [ fig8 ] ( or [ fig8_1 ] ) and [ fig9 ] ( or [ fig9_1 ] ) . in fig . [ fig13 ] , by contrast , since there is no self - activation and no stable symmetric central basin in the landscape , reprogramming is difficult and the mfpt is very long regardless of the binding / unbinding speed . both differentiation and reprogramming can be caused by changes of gene regulation during the developmental process . here we consider the evolution of the binding / unbinding rate from fast to slow for the differentiation , and the evolution of the binding / unbinding rate from slow to fast for the reprogramming . the transition paths from the gillespie simulation are plotted in fig . [ path ] , accompanied by the potential landscapes for the corresponding binding / unbinding speeds . it is interesting to observe that the biological dynamic paths are irreversible , i.e. the differentiation path and the reprogramming path are totally different . in the differentiation process , the system stays in the multipotent undifferentiated state for a while until the binding becomes slower . as the binding becomes slower , the undifferentiated state becomes less stable . furthermore , the gene state can be switched through binding / unbinding events of regulatory proteins to the promoters , and the system will then evolve from the undifferentiated basin to the differentiated basin of attraction .
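the differentiation and reprogramming paths described here amount to sweeping the binding / unbinding speed during the simulation . a hedged sketch of such a protocol follows ; the schedule and the ` step ` interface ( a one - step simulator whose binding and unbinding rates are rescaled by ` omega ` ) are illustrative assumptions .

```python
def sweep_path(initial_state, step, omegas=(10.0, 1.0, 0.1, 0.01), t_per_stage=500.0):
    """sweep the binding/unbinding speed omega from fast to slow (differentiation);
    reversing `omegas` probes the reprogramming direction instead."""
    path, s = [], list(initial_state)
    for omega in omegas:
        t = 0.0
        while t < t_per_stage:
            s, dt = step(s, omega)     # one stochastic step with h, f rescaled by omega
            t += dt
            path.append((omega, t, s[4], s[5]))   # record stage, time, n_A, n_B
    return path
```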
in the reprogramming process , the system will be gradually attracted into the undifferentiated basin as the binding / unbinding rate increases . the paths of differentiation do not follow the steepest - descent gradient of the potential landscape , nor do they follow the paths of reprogramming ( the reverse of the differentiation process ) . this irreversibility reflects the underlying non - equilibrium nature of the differentiation and developmental network systems . it can give us a fundamental understanding of the biological origin of the time arrow in cell development . we developed a theoretical framework to quantify the potential landscape and the biological paths for cell development and differentiation . we found a new mechanism for differentiation : the differentiated states can emerge from slow promoter binding / unbinding processes . we found that under slow promoter binding , there can be many meta - stable differentiated states . this has been observed experimentally , and our theory gives a possible explanation for the origin of those meta - stable states in the experiments . we show that the developmental process can be quantitatively described and uncovered by the biological paths on the potential landscape , and that the dynamics of the developmental process is controlled by a combination of the intrinsic fluctuations of protein concentrations and the gene state fluctuations through promoter binding . we also show that the biological paths of the reverse differentiation process , or reprogramming , are irreversible and different from the ones of the differentiation process . we explored the kinetic speed of differentiation and found that the cell differentiation and reprogramming dynamics strongly depend on the binding / unbinding rate of the regulatory proteins to the gene promoters . we found an optimal speed for differentiation and development at certain binding / unbinding rates of regulatory proteins to the gene promoters . an interesting question is whether differentiation and development occur at this optimal speed ; more experimental and bioinformatics studies might be able to pin down the answer . furthermore , the irreversibility in cell development gives biological examples , easily observed in experiments , for understanding the origin of the time arrow in general non - equilibrium systems .
the sixteen master equations referred to in the main text share a common structure . as an example , the equation for $ p_{1111 } ( n_{a } , n_{b } ) $ reads
\begin{aligned}
\frac{\partial p_{1111 } ( n_{a } , n_{b } ) }{\partial t } = & -\frac{h_{aa}}{2 } [ n_{a } ( n_{a}-1 ) ] p_{1111 } ( n_{a } , n_{b } ) + f_{aa } p_{0111 } ( n_{a}-2 , n_{b } ) \\
& -\frac{h_{ab}}{2 } [ n_{b } ( n_{b}-1 ) ] p_{1111 } ( n_{a } , n_{b } ) + f_{ab } p_{1011 } ( n_{a } , n_{b}-2 ) \\
& -\frac{h_{ba}}{2 } [ n_{a } ( n_{a}-1 ) ] p_{1111 } ( n_{a } , n_{b } ) + f_{ba } p_{1101 } ( n_{a}-2 , n_{b } ) \\
& -\frac{h_{bb}}{2 } [ n_{b } ( n_{b}-1 ) ] p_{1111 } ( n_{a } , n_{b } ) + f_{bb } p_{1110 } ( n_{a } , n_{b}-2 ) \\
& + k_{a } [ ( n_{a}+1 ) p_{1111 } ( n_{a}+1 , n_{b } ) - n_{a } p_{1111 } ( n_{a } , n_{b } ) ] \\
& + k_{b } [ ( n_{b}+1 ) p_{1111 } ( n_{a } , n_{b}+1 ) - n_{b } p_{1111 } ( n_{a } , n_{b } ) ] \\
& + g^{a}_{11 } [ p_{1111 } ( n_{a}-1 , n_{b } ) - p_{1111 } ( n_{a } , n_{b } ) ] \\
& + g^{b}_{11 } [ p_{1111 } ( n_{a } , n_{b}-1 ) - p_{1111 } ( n_{a } , n_{b } ) ]
\end{aligned}
the remaining fifteen equations are built in the same way . for every promoter site that is unbound in the state under consideration , there is a loss term of the form $ -\frac{h}{2 } \, n ( n-1 ) \, p $ ( a dimer of the corresponding protein binds , removing two molecules ) together with a gain term $ + f \, p' $ from the state in which that site is bound , evaluated at a copy number lowered by 2 . for every site that is bound , there is a gain term of the form $ + \frac{h}{2 } \, ( n+2 ) ( n+1 ) \, p'' $ from the corresponding unbound state , evaluated at a copy number raised by 2 , together with a loss term $ - f \, p $ . the degradation terms with rates $ k_{a } , k_{b } $ and the synthesis terms with rates $ g^{a}_{ij } $ and $ g^{b}_{kl } $ have the same form in all sixteen equations , with the synthesis rate determined by the promoter state of the corresponding gene .
network diagram of the canonical gene regulatory circuit ( fig . [ circuit ] ) of two mutually opposing proteins that positively self - regulate themselves . two types of genes , and , are translated into proteins and respectively . the proteins ( ) can bind to the promoter of the gene ( ) to activate the synthesis rate of ( ) , which makes a self - activation feedback loop . the proteins ( ) can bind to the gene ( ) to repress the synthesis rate of ( ) , which makes a mutual repression loop . both protein and protein bind to promoters as dimers with the binding rates and the unbinding rates , respectively , with .
understanding differentiation , the biological process that takes a cell from a multipotent stem or progenitor state to a mature cell , is critically important . we develop a theoretical framework to quantify the underlying potential landscape and the biological paths for cell development and differentiation . we propose a new mechanism of differentiation and development through the binding / unbinding of regulatory proteins to the gene promoters . we find that differentiated states can indeed emerge from slow promoter binding / unbinding processes . furthermore , under slow promoter binding / unbinding , we find multiple meta - stable differentiated states , which can explain the origin of the multiple states observed in recent experiments . in addition , the kinetic time , quantified by the mean first passage transition time for differentiation and reprogramming , strongly depends on the time scale of the promoter binding / unbinding processes . we discover an optimal speed for differentiation at certain binding / unbinding rates of regulatory proteins to promoters . more experiments in the future might be able to tell whether cells differentiate at that optimal speed . in addition , we quantify the kinetic pathways for differentiation and reprogramming and find that they are irreversible . this captures the non - equilibrium dynamics in multipotent stem or progenitor cells . such inherent time - asymmetry , resulting from the irreversibility of the state transition pathways , may provide the origin of the time arrow for cell development .
link discovery frameworks are of utmost importance during the creation of linked data . this is because they are the key to implementing the fourth linked data principle , i.e. , the provision of links between datasets . two main challenges need to be addressed by link discovery frameworks . first , they need to address the _ accuracy challenge _ , i.e. , they need to generate correct links . a plethora of approaches have been developed for this purpose and include algorithms ranging from genetic programming to probabilistic models . in addition to addressing the need for accurate links , link discovery frameworks need to address the _ challenge of time efficiency _ . this challenge comes about because of the sheer size of the knowledge bases that need to be linked . in particular , large knowledge bases such as linkedtcga contain more than 20 billion triples . one of the approaches to improving the scalability of link discovery frameworks is to use planning algorithms in a manner akin ( but not equivalent ) to their use in databases . in general , planners rely on cost functions to estimate the runtime of particular portions of link specifications . so far , it has been assumed that this cost function is linear in the parameters of the planning , i.e. , in the size of the datasets and the similarity threshold . however , this assumption has never been verified . in this paper , we address exactly this research gap and _ study how well other models for runtime approximation perform _ . in particular , we study linear , exponential and mixed models for runtime estimation . the contributions of this paper are thus as follows : * we present three different models for runtime approximation in planning for link discovery . * we compare these models on six different datasets and study how well they can approximate the runtimes of specifications as well as how well they generalize across datasets . * we integrate the models with the helios planner for link discovery as described in and compare their performance using 400 specifications . the rest of the paper is structured as follows : in section [ sec : preliminaries ] , we present the concepts and notation necessary to understand this work . the subsequent section , section [ sec : approach ] , presents the runtime approximation problem and how it can be addressed by different models . we then delve into a thorough evaluation of these models in section [ sec : evaluation ] and compare the expected runtimes generated by the models at hand with the real runtimes of the link discovery framework . we also study the transferability of the results we achieve and their performance when planning whole link specifications . finally , we recapitulate our results and conclude . in this section , we present the necessary concepts and notation to understand the rest of the paper . we begin by giving a description of a knowledge base and of link discovery ( ld ) , continue by providing a formal definition of a link specification ( ls ) and its semantics , and finish the preliminaries with an explanatory presentation of a plan , its components and its relation to a ls . knowledge base . a knowledge base is a set of triples , where is the set of all rdf resources , is the set of all rdf properties , is the set of all rdf blank nodes and is the set of all literals .
link discovery . given two ( not necessarily distinct ) sets of rdf resources and and a relation ( e.g. , ` directorof ` , ` owl : sameas ` ) , the main goal of ld is to discover the set ( _ mapping _ ) . given that this task can be very tedious ( especially when and are large ) , ld frameworks are commonly used to achieve this computation . link specification . declarative ld frameworks use link specifications ( lss ) to describe the conditions under which holds for a pair . an ls consists of two basic components : * _ similarity measures _ , which allow the comparison of property values of resources found in the input datasets and . we define an _ atomic similarity measure _ as a function . a complex ls consists of two lss , and . we call the _ left sub - specification _ and the _ right sub - specification _ of . we denote the semantics ( i.e. , the results of an ls for given sets of resources and ) of an ls as . we will thus use the same graphical representation for filters and atomic specifications . we call the _ filter of _ and denote it with . for our example in fig . [ fig : specexample ] , . we denote the _ operator of an ls _ with . for , . the operator of the ls shown in our example is . the semantics of lss are then as shown in table [ tab : semantics ] . execution plan . to compute the mapping of an ls , ld frameworks convert the ls into an execution plan ( ) that is potentially faster to execute . the most common planner is the _ canonical planner _ ( dubbed canonical ) , which simply traverses in post - order and has its results computed in that order by the execution engine . for the ls shown in fig . [ fig : specexample ] , the execution plan returned by canonical would thus foresee to first compute the mapping . the computation of the mapping would follow . step 3 would be to compute while abiding by the semantics described in table [ tab : semantics ] . step 4 would be to obtain by filtering the results and keeping only the pairs that have a similarity above 0.5 . step 5 would be . the runtime approximation problem can then be stated as follows : we seek a function , whose value at is an approximation of the runtime for the link specification with these parameters . if are the measured runtimes for the parameters , and , then we constrain the mapping to be a local minimum of the l2 - loss : writing . within this paper , we consider the following parametrized families of functions : the parameters are then determined by for some local minimum . in the case of and , the problem is linear in nature and we solved it using the pseudo - inverse of the associated vandermonde matrix . for we used the levenberg - marquardt algorithm for nonlinear least squares problems , using as initial guess for all parameters . we chose as the baseline linear fit . is the standard log - linear fit , except for the term . we included this term during a grid search for polynomials to perform a log - polynomial fit . higher orders of or or did not contribute to a better fit . can be interpreted as an interpolation of and with a constant offset . to exemplify our approach for , assume we have measured and . inserting into eq . ( 1 ) and taking the logarithm , one arrives at the optimization problem . the solution to this least squares problem is also the unique solution of its normal equations : by multiplying and inverting matrices , we arrive at the linear equation , where denotes the moore - penrose pseudo - inverse of .
multiplying the matrices , we arrive at the coefficients of the fit function . we evaluated the three runtime estimation models using six data sets . the first three are the benchmark data sets for ld dubbed amazon - google products , dblp - acm and dblp - scholar described in . we also created two larger additional data sets ( movies and villages , see table [ tab : datasetsproperties ] ) from the data sets dbpedia , linkedgeodata and linkedmdb . the sixth dataset was the set of all english labels from dbpedia 2014 . table [ tab : datasetsproperties ] describes the characteristics of the datasets and presents the properties used when linking the retrieved resources for the first four datasets . the mapping properties were provided to the link discovery algorithms underlying our results . each of our experiments consisted of two phases : during the _ training _ phase , we trained each of the models independently . for each model , we computed the set of coefficients of the approximation model that minimized the root mean squared error ( rmse ) on the training data provided . the aim of the subsequent _ test _ phase was to evaluate the accuracy of the runtime estimation provided by each model and the performance of the currently best ld planner , helios , when it relied on each of the three models for runtime approximations . throughout our experiments , we used the algorithms _ ed - join _ ( which implements the levenshtein string distance ) and _ ppjoin+ _ ( which implements the jaccard , overlap , cosine and trigrams string similarity measures ) to execute atomic specifications . as thresholds we used random values between and . the aim of our evaluation was to answer the following set of questions regarding the performance of the three models _ exp _ , _ linear _ and _ mixed _ : * : how do our models fit each class separately ? to answer this question , we began by splitting the source and target data of each of our datasets into two non - overlapping parts of equal size . we used the first half of each source and each target for training and the second half for testing . * * _ training _ : we trained the three models on each dataset . for each model , dataset and mapper , we a ) selected 15 source and 15 target random samples of random sizes from the first half of a dataset ( amazon - google products , dblp - acm , dblp - scholar , movies and villages ) and b ) compared each source sample with each target sample 3 times . note that we used the same samples across all models for the sake of fairness . overall , we ran 675 training experiments to train each model on each dataset . * * _ testing _ : to test the accuracy of each model , we ran the corresponding algorithm ( _ ed - join _ and _ ppjoin+ _ ) with a random threshold between and and recorded the real runtime of the approach and the runtimes predicted by our three models . each approach was executed 100 times against the whole of the second half of the same dataset . * : how do our models generalize across classes , i.e. , can a model trained on data from one class be used to predict runtimes accurately on another class ? * * _ training _ : we trained each model in the same manner as for on exactly the same five datasets , with the sole difference that the samples were selected randomly from the whole dataset .
** _ testing _ : like in the previous series of experiments , we ran _ ed - join _ and _ ppjoin+ _ with a random threshold between and .each of the algorithms was executed 100 times against the remaining four datasets .* : how do our models perform when trained on a large dataset ? * * _ training _: we trained in the same fashion as to answer with the sole differences that ( 1 ) we used 15 source and 15 target random samples of various sizes between and from ( 2 ) the english labels of dbpedia to train our model . * * _ testing _ : we learned 100 lss for the amazon - gp , dblp - acm , movies and villages datasets using the unsupervised version of the eagle algorithm .we chose this algorithm because it was shown to generate meaningful specifications that return high - quality links in previous works .for each dataset , we ran the set of 100 specifications learned by eagle on the given dataset by using each of the models during the execution in combination with the helios planning algorithm , which was shown to outperforms the canonical planner w.r.t .runtime while producing exactly the same results . throughout our experiments, we configured eagleby setting the number of generations and population size to 20 , mutation and crossover rates were set to 0.6 .all experiments for all implementations were carried out on the same 20-core linux server running _ openjdk _ 64-bit server on ubuntu 14.04.4 lts on intel(r ) xeon(r ) cpu e5 - 2650 v3 processors clocked at 2.30ghz .train _ experiment and each _ test _ experiment for was repeated three times . as evaluation measure , we computed root mean square error ( _ rmse _ ) between the _ expected _ runtime and the average _ execution _ runtime required to run each ls .we report all three numbers for each model and dataset .to address , we evaluated the performance of our models when trained and tested on the same class .we present the results of this series of experiments in table [ tab : exp1 ] . for _ ppjoin+ _( in particular the _ trigrams _ measure ) , the _ mixed _ model achieved the lowest error when tested upon amazon - gp and dblp - scholar , whereas the _ linear _ model was able to approximate the expected runtime with higher accuracy on the movies and villages datasets . 
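as a concrete illustration of how the three model families defined earlier could be fitted , the sketch below uses the pseudo - inverse for the linear model and levenberg - marquardt for the non - linear ones . the functional forms are illustrative stand - ins ( the exact terms of the models are not reproduced here ) , and dataset sizes are assumed to be normalized so that the exponential stays well - behaved .

```python
import numpy as np
from scipy.optimize import curve_fit    # Levenberg-Marquardt for the nonlinear fits

# x = (s, t, th): normalized source size, target size and similarity threshold; y = runtime
def f_linear(x, a0, a1, a2, a3):
    s, t, th = x
    return a0 + a1 * s + a2 * t + a3 * th

def f_exp(x, b0, b1, b2, b3):
    # log-linear family: runtime = exp(linear function of the parameters)
    s, t, th = x
    return np.exp(b0 + b1 * s + b2 * t + b3 * th)

def f_mixed(x, c0, c1, c2, c3, c4):
    # interpolation of a linear part and a non-linear part plus a constant offset
    s, t, th = x
    return c0 + c1 * s + c2 * t + c3 * th + c4 * np.exp(-th) * np.log1p(s * t)

def fit_all(x, y):
    # linear model: solved with the pseudo-inverse of the design matrix
    A = np.column_stack([np.ones_like(y), x[0], x[1], x[2]])
    lin = np.linalg.pinv(A) @ y
    # nonlinear models: Levenberg-Marquardt from an arbitrary all-ones starting point
    exp_, _ = curve_fit(f_exp, x, y, p0=np.ones(4), method="lm")
    mix_, _ = curve_fit(f_mixed, x, y, p0=np.ones(5), method="lm")
    return lin, exp_, mix_

def rmse(y_true, y_pred):
    return float(np.sqrt(np.mean((np.asarray(y_true) - np.asarray(y_pred)) ** 2)))
```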
on average , the _ linear _ model was able to achieve a lower _ rmse _ compared to the other two models . for _ ed - join _ , the _ mixed _ model outperformed _ linear _ and _ exp _ in the majority of datasets ( dblp - scholar , movies and villages ) and obtained the lowest _ rmse _ on average . as we observe in table [ tab : exp1 ] , for both measures , the _ exp _ model retrieved the highest error on average and is thus the model least suitable for runtime approximations . in particular , for _ ed - join _ , _ exp _ had the worst performance in four out of the five datasets and retrieved the highest _ rmse _ among the different test datasets for villages . this clearly answers our first question : the _ linear _ and _ mixed _ approximation models are able to achieve the smallest error when trained on the class on which they are tested . to continue with , we conducted a set of experiments in order to observe how well each model could generalize among the different classes included in our evaluation data . tables [ tab : exp2acm ] , [ tab : exp2amazon ] , [ tab : exp2scholar ] , [ tab : exp2movies ] and [ tab : exp2villages ] present the results of training on one dataset and testing the resulting models on the set of the remaining classes . the highest _ rmse _ error was obtained when both measures were tested using the _ exp _ model in all datasets but villages . however , table [ tab : exp2villages ] shows that the fitting error when trained on villages is relatively low among all three models . additionally , we observe that the _ exp _ model 's _ rmse _ increased exponentially as the quantity of the training data decreased , which renders this model inadequate and unreliable for runtime approximations . by observing tables [ tab : exp2acm ] and [ tab : exp2scholar ] , we see that the _ rmse _ of the _ exp _ model increased by 38 orders of magnitude for _ ed - join _ . for both measures , the _ linear _ model outperformed the other two models on average when trained on the amazon - gp , dblp - acm and dblp - scholar datasets , and achieved the lowest _ rmse _ when trained on movies for _ ed - join _ compared to _ exp _ and _ mixed _ . both _ linear _ and _ mixed _ achieved minuscule approximation errors compared to _ exp _ , but _ linear _ was able to produce at least 35% lower _ rmse _ compared to _ mixed _ . therefore , we can answer by stating that the _ linear _ model is the most suitable model and is sufficient to generalize among different classes . for our last question , we tested the performance of the different models when trained on a bigger and more diverse dataset . table [ tab : exp3 ] shows the results of our evaluation , where each model was trained on the dbpedia english labels and tested on the four evaluation datasets . the _ linear _ model error was 1 order of magnitude smaller than the _ rmse _ obtained by _ exp _ and 3 orders of magnitude smaller than the _ mixed _ error . in all four datasets , the _ mixed _ model produced the highest _ rmse _ .
for the villages dataset , the _ mixed _ model 's error was 1916 and 214 times higher than that of _ linear _ and _ exp _ , respectively . figs . [ fig : exp ] and [ fig : linear ] present the plans produced by helios for the ls ` minus(and(levenshtein(x.description , y.description)|0.5045,trigrams(x.title , y.name)|0.4871)|0.2925,or(levenshtein(x.description , y.description)|0.5045,trigrams(x.title , y.name)|0.4871)|0.2925)>=0.2925 ` of the amazon - gp dataset , when the planner used the _ exp _ model and when it used the _ linear _ or the _ mixed _ model , respectively . for the child ls ` and(levenshtein(x.description , y.description)|0.5045,trigrams(x.title , y.name)|0.4871)|0.2925 ` , the _ linear _ and the _ mixed _ models chose to execute only ` trigrams(x.title , y.name)|0.4871 ` and to use the other child as a filter . moreover , the plan retrieved by using the _ exp _ model for runtime approximations aims to execute both children lss , which results in an overhead in the execution of the ls . it is obvious that the _ linear _ model achieved by far the lowest _ rmse _ on average compared to the other two models , which concludes the answer to . the task of efficient query execution in database systems is similar to the task of execution optimization using runtime approximations in ld frameworks . efficient and scalable data management has been of central importance in database systems . over the past few years , there has been extensive work on query optimization in databases that is based on statistical information about relations and intermediate results . the author of provides an analytic overview regarding the procedure of query optimization and the different approaches used at each step of the process . a novel approach in this field was presented by , in which the proposed approach introduced the concept of parametric query optimization . in this work , the authors provided the necessary formalization of the aforementioned concept and conducted a set of experiments using the buffer size as a parameter . in order to minimize the total cost of generating all possible alternative execution plans , they used a set of randomized algorithms .
in a similar manner , the authors of introduced the idea of multi - objective parametric query optimization ( mpq ) , where the cost of a plan is associated with multiple cost functions and each cost function is associated with various parameters . their experimental results showed , however , that the mpq method performs an exhaustive search of the solution space , which renders this approach computationally inefficient . another set of approaches in the field of query optimization has focused on creating dynamic execution plans . dynamic planning is based on the idea that the execution engine of a framework knows more than the planner itself . therefore , information generated by the execution engine is used to re - evaluate the plans generated by the optimizer . there has been a vast number of approaches towards dynamic query optimization , such as query scrambling for initial delays , dynamic planning at compile - time , adaptive query operators and re - ordering of operators . moreover , the problem addressed in this work focuses on identifying scalable and time - efficient solutions towards ld . a large number of frameworks have been developed to address this issue , such as silk , limes , knofuss and zhishi.links . silk and knofuss implement blocking approaches in order to achieve efficient linking between resources . the silk framework incorporates a rough index pre - match , whereas the knofuss blocking technique is highly influenced by database systems techniques . to this end , the only ld framework that provides both theoretical and practical guarantees towards scalable and accurate ld is limes . as we mentioned throughout this work , the limes execution strategy incorporates the helios planner , which is ( to the best of our knowledge ) the first execution optimizer in ld . helios is able to provide accurate runtime approximations , which we have extended in this work , and is able to find the least costly execution plan for a ls while consuming only a minute portion of the overall execution runtime . in this paper , we studied approximation functions that allow predicting the runtime of link specifications . we showed that on average , linear models are indeed the approach to choose to this end , as they seem to overfit the least . still , mixed models also perform in a satisfactory manner . exponential models either fit very well or not at all and are thus not to be used . in future work , we will study further models for the evaluation of runtime and improve upon existing planning mechanisms for declarative ld . in particular , we will consider other features when approximating runtimes , e.g. , the distribution of characters in the strings to compare .
time - efficient link discovery is of central importance to implement the vision of the semantic web . some of the most rapid link discovery approaches rely internally on planning to execute link specifications . in recent works , linear models have been used to estimate the runtime of the fastest planners . however , no other category of models has been studied for this purpose so far . in this paper , we study non - linear functions for runtime estimation . in particular , we study exponential and mixed models for the estimation of the runtimes of planners . to this end , we evaluate three different runtime models on six datasets using 400 link specifications . we show that exponential and mixed models achieve better fits when trained but are only to be preferred in some cases . our evaluation also shows that the use of better runtime approximation models has a positive impact on the overall execution of link specifications .
since antiquity the regular hexagonal arrangement of bee honeycombs ( see fig [ comb]a ) has been an object of fascination . indeed it was the roman scholar marcus terentius varro who proposed in 36bc what is now known as the honeycomb conjecture : that hexagons are the best way to divide a surface into regions of equal area with the least total perimeter . the conjecture was eventually proven in 1999 by thomas hales . however , the debate as to the exact reasons why bees build such striking structures continues . while the need to minimise the total amount of wax used is part of it ( according to some estimates eight ounces of honey are consumed to produce one ounce of wax ) , the reality is more complicated than the simple model considered in the honeycomb conjecture . ( b ) an initial attempt to build a honeycomb on a curved surface - by placing a schwarz - p surface inside the hive . it can be seen that the bees do not confine the comb to the template surface and instead build around it . image credit : dr tim wetherell ] one complication is that a honeycomb is not a single sided object . two layers of cells meet on a surface to make a single comb . the layers are slightly off - set from each other , giving rise to a faceted wall between cells from opposing sides of the comb . surprisingly this arrangement ( which is realised in wild honeycombs ) is not the optimal least - wax - using solution . further deviations from the ideal arrangement also arise from the fact that cells are not always perfectly hexagonal ; distorted cells are often observed - especially when the bees encounter an obstacle in building the comb . the exact process by which the honeycomb is built also remains largely a mystery . there are two opposing schools of thought . the first supposes that the bees actively mold the arrangement of the cells until a regular pattern is achieved ; in his account of the process darwin proposed that the bees build the honeycomb by forming rough walls and refining them . the second school , dating back to at least d'arcy wentworth thompson , rejects this iterative model . they suggest that simple physical forces play a dominating role : the bees begin by building cells as a close packed ( triangular ) arrangement of cylindrical precursors , but the heat inside the hive keeps the wax molten , allowing it to flow and minimise surface tension . thus mechanical tension between adjacent cells leads to a regular structure in a self - organised manner . attempts at real time imaging of the process in action have been made but the debate remains unresolved . in light of this , we propose a novel experiment to probe the response of the bees when they are subjected to a constraint . we force the hive to build a honeycomb under conditions where the usual hexagonal arrangement is known not to be the optimal solution .
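the quantitative content of the conjecture is easy to check for the three regular tilings . the following small computation ( a sketch , charging each shared wall half to each of its two cells ) reproduces the familiar ranking .

```python
import math

def wall_per_unit_area(n_sides):
    # regular n-gon of unit area: area = n * s^2 / (4 * tan(pi/n))  =>  solve for side s
    s = math.sqrt(4.0 * math.tan(math.pi / n_sides) / n_sides)
    return n_sides * s / 2.0      # half the perimeter, since each wall is shared by two cells

for n, name in [(3, "triangles"), (4, "squares"), (6, "hexagons")]:
    print(f"{name:10s}: {wall_per_unit_area(n):.4f}")
# hexagons come out lowest (~1.86, versus 2.00 for squares and ~2.28 for triangles)
```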
here we describe our experimental attempt to build a honeycomb within the confines of two parallel negatively curved schwarz primitive surfaces ( see fig [ comb]b for our previous attempt involving only a single schwarz - p surface ) . such surfaces have a number of interesting properties , including the fact that they have zero mean curvature everywhere and are periodic in all three directions . however , our interest in such surfaces is motivated by the fact that they present a unique challenge for any hive trying to build on them : the gaussian curvature is continuously changing , and as such any patch of honeycomb built by the bees would be frustrated if they try to extend the local arrangement out over the rest of the surface . although our attempt to build such a negatively curved honeycomb is only partially successful , it may nevertheless add something to the continuing debate . the question we seek to answer is : are the bees hard wired through evolution to build hexagons , or do they optimise the arrangement `` on the fly '' as they are building ? it is well known that a curved surface can not be tiled entirely by hexagons alone . thus if the bees are forced to build a honeycomb within the confines of an unusual ( although completely well understood ) geometry , what will be the result ? will the hive adapt the regular honeycomb structure by including topological defects ( i.e. cells that have more or fewer than six sides ) ? such defects are often observed in the self - assembly of two - dimensional crystals on curved surfaces , and their presence may hint at the underlying building processes at work . this leads to a second problem which we also discuss here : what is the least perimeter way to enclose a number of equal volume cells between parallel curved surfaces ? to answer such questions we turn to a related system that may provide further insight : monodisperse foams in confined geometries . due to surface tension , the energy of a dry foam depends directly on the interfacial area separating bubbles . as such , dry foams are a useful means to investigate area minimising partitions . indeed the similarity between the hexagonal bee honeycomb and the geometry of the hexagonal two - dimensional dry foam has long been recognised . thus it is natural to ask how the foam structure is modified in the presence of curvature , to compare with bee honeycombs between curved surfaces . in recent years there has been considerable progress in the study of two dimensional ( 2d ) foams obtained by trapping bubbles between two parallel flat glass plates ( the so called hele - shaw cell ) . such foams are not strictly two - dimensional and are more correctly referred to as being quasi-2d . however , provided the distance between the bounding plates is less than the bubble size , the effect of the third dimension can be neglected . in addition to the hele - shaw arrangement , foams have also been studied in a number of other confining geometries . an example of this is the injection of monodisperse bubbles into a wedge - like geometry . in this case , the increasing plate separation forces the bubbles to adopt an initial monolayer structure which gives way to a bilayer ( i.e.
double - layer ) and subsequently a multilayer arrangement .variations in plate separation can also be used to control the rheology of highly ordered foams in microfluidic devices ; strategically placed `` bumps '' in the channel force rows of bubbles to swap positions .other notable examples include foams in narrow cylindrical channels where the geometry compels the bubbles to spontaneously self organise into various helical arrangements .bubble statistics and bubble dynamics of a foam trapped between narrowly separated concentric spheres have also been studied , as a direct means of testing the modified von neumann law for coarsening in the presence of curvature . here - using the surface evolver package - we present a framework for simulating individual bubbles , and multicellular dry foams , that are confined between narrowly separated curved plates .the bounding surfaces are modelled using level - set functions and the resulting bubble morphology is then governed by the minimisation of surface tension energy .where possible we make use of content integrals ( see below ) to reduce the number of simulation elements required , thereby further reducing the numerical burden .we consider the following three cases : * bubbles between concentric spherical plates ( i.e. surfaces of constant positive gaussian curvature ) ; * bubbles between concentric tori ( an arrangement with both positive and negative gaussian curvature ) ; and , * foams between confined between two schwarz primitive surfaces ( an important example of a surface with negative gaussian curvature ) , the basic geometry of each is shown in fig [ three ] . in the first two cases the bounding surfaces are curved and parallel to each other . by parallelwe mean that the two plates are parallel surfaces of each other .thus if the points on the first surface are translated along the surface normal by a distance we obtain the second bounding plate , however , in the third case ( i.e. the p surface ) it is not possible to define a pair of parallel surfaces in this manner . below we describe the use of adjacent level set surface of the nodal approximation to the p surface , which we use to confine the bubble .however , this case will be more complex - compared to the spherical and toroidal cases - as variations in separation are anticipated to have significant consequences in determining the equilibrium morphology of a bubble ( or foam ) trapped between the substrates .there are clear analogies between cellular structures on curved surfaces ( whether they are foams or honeycombs ) and previous experimental attempts to pack particles on curved surfaces . 
in the latter casethe curvature of the bounding surfaces induces a geometric frustration in the local crystalline order .this frustration is relieved by the presence of topological defects including disclinations , dislocations and more complex scar - like arrangements .we suppose that a similar mechanism may also govern the behaviour of quasi-2d foams on curved surfaces .where , such defects have already been observed in the case of strictly 2d foams on a sphere .however , the case of quasi-2d foams between curved plates may prove to be richer than either that of particles or strictly 2d - foams on curved surfaces .recently , we have shown that for a single bubble between a pair of parallel curved surfaces the surface tension energy of the bubble is sensitive to the gaussian curvature of the bounding plates .a bubble has a low surface tension energy when it is in a region of positive gaussian curvature and a high energy in a region of negative curvature .this energy difference can drive bubbles from a region of negative to positive curvature .thus , at least in the case of soap froths there is an additional force or potential as compared to the problem of packing particles on a curved surface .here we demonstrate some of the technical detail how curved interfaces can be represented by level set constraints in the surface evolver .we discuss the instructive example of a single bubble on a sphere , followed by a discussion of the toroidal geometry as an example of a complex geometry with negative and positive curved regions .we then discuss the generation of multicellular foams on the schwarz primitive surface using nodal representations , and a voronoi method to create the initial non - optimised partition .a bubble consists of a liquid interface with a surface tension enclosing a volume of gas .the total surface energy of the bubble is given by , where is assumed to be constant and is the surface area of the bubble .we assume that on two sides the bubble is bounded by solid surfaces , called substrates .the substrates are not necessarily flat but are smooth and frictionless .this allows the bubble interface to slide along the walls and relax to equilibrium .an interface makes a contact with angle with the wall given by young s law : where and is the surface tension on the two sides of the interface .we assume the soap solution wets the solid surface and that the wetting films on both sides have same surface tension . since both sides of the film contain the same gas , so that , this gives . as a resultthe soap film meets the confining wall at right angles ( normal incidence ) .furthermore , since the surface tension energy of the bubble depends entirely on the surface area of the ( transverse ) film between the walls , the wetting films in contact with the bounding surfaces make no contribution .simulations are conducted using the surface evolver package , which is an interactive _ finite element _ program for the study of interfaces shaped by surface tension .bubbles are represented by `` bodies '' which are comprised of oriented faces which are broken down into edges , which are in turn defined by vertices . below we give details for modeling bubbles trapped between various plate geometries .the evolver can handle such boundaries by constraining vertices to lie on level sets functions specified by the user in the input data file .the data files ( below ) consist of an initial starting geometry and level set constraints that model the walls . 
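For reference, the contact-angle condition can be written explicitly (in notation introduced here): if γ is the tension of the transverse film and γ₁, γ₂ are the tensions of the wetting films on its two sides along the wall, Young's law reads
\[
\gamma\cos\theta=\gamma_{1}-\gamma_{2},
\]
and since both wetting films separate the same gas from the same solution, γ₁ = γ₂, so cos θ = 0 and θ = π/2. This is the normal-incidence condition used throughout: the transverse film meets each bounding plate at right angles, and only its area enters the energy.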
in some casesthe geometry of the bubble is further simplified through the use of _ content integrals _ ( as described immediately below ) .once the initial geometry of the bubble has been defined , the energy of the bubble is then minimised by applying gradient descent ( and other methods such as conjugate gradient ) while repeatedly refining the mesh to improve accuracy .the wetting surfaces of the bubble ( i.e. the surfaces in contact with the bounding substrates ) do not contribute to the surface tension energy of the bubble .as such the presence of these wetting surfaces in the surface evolver model represents an unnecessary numerical burden which can be removed .the problem then remains how to compute the volume of the the resulting open body ( i.e. the numerical representation of the bubble ) ?the solution is to compute the volume by a closed contour integral over the edges of the missing surface - such integrals are known as _ content integrals _ and are described fully in the surface evolver manual . in two cases below ( the spherical and toroidal geometries ) we are able to take advantage of the cylindrical symmetry of the situation and derive the appropriate content integrals .as such the bubble model in these cases consists solely of the transverse film between the walls . for a simple example of the use of content integrals for a bubble trapped between two horizontal _ flat _ plates see the canonical example _plates___column.fe _ .and an inner plate of radius . ] in spherical polar coordinates with the parametrisation , describes a sphere as the locus of points that satisfy the equation .the outer bounding plate is described by the equation and the inner plate is given by where , as shown in fig [ sphere ] .note , for this geometry the bounding substrates are curved but parallel to each other everywhere .here we derive the content integral for a bubble between two concentric spheres .consider the spherical cap of radius on a sphere of radius as shown in fig [ sphereint ] .we can integrate over a series of infinitesimally thin concentric cylinders , each of radius , length and area to find the volume under the spherical cap , so that , using the method of cylindrical shells ] where we have used the relationship . carrying out the integration gives , since we have - and eq .( [ eq : sv2 ] ) simplifies to , here is the volume under the entire cap obtained by integrating over , we can obtain the differential volume if we replace by to give , now , note that so that , where we have used the identity in the last step .upon substituting eq .( [ eq : dphi ] ) into eq .( [ eq : dv1 ] ) we have , or cancelling the common factor , gives , the volume under the sphere between the patches labelled and is then a contour integral - that can be evaluated by tracing a positively orientated closed path along the edges of the patch , as shown in fig [ sphereint ] .note the above integrand has no singularity at the north pole .it does have a singularity at the south pole , and so * must not * be used near the south pole . in the case of a bubble between two spherical platesthe evolver computes two such integrals one on the inner sphere and one on the outer sphere ( see data file below ) .since the surface normals on these two constraints point in the opposite directions the resulting volume is arrived at by the difference between these two volume integrals .the use of the spherical constraints and the content integral is demonstrated in the following surface evolver data file . 
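Written out explicitly (in notation introduced here, with R the sphere radius and z = sqrt(R² − x² − y²) on the constraint), the content field used below follows from the cap-volume argument above. For a patch of the sphere whose projection onto the plane z = 0 is Ω, the volume under the patch is
\[
\iint_{\Omega} z\,dx\,dy \;=\; \oint_{\partial\Omega}\frac{R^{2}+Rz+z^{2}}{3\,(R+z)}\,\bigl(x\,dy-y\,dx\bigr),
\]
which for a circular patch bounded at height z = z_c reduces to (2π/3)(R³ − z_c³), the volume beneath the cap down to the plane z = 0. The overall sign is fixed by the orientation of the constraint edges in the evolver, hence the signs in the c1 and c2 entries below. The integrand is regular at the north pole z = R but singular at the south pole z = −R, and the bubble volume is obtained as the difference of the integrals evaluated on the outer (radius r1) and inner (radius r2) spherical constraints.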
....parameter r1=20.5 /*radius of outer sphere*/ parameter r2=20.0 /*radius of inner sphere*/ constraint 1 / * the outer spherical plate * / formula : x^2 + y^2 + z^2 = r1 ^ 2 content : / * sphere volume element * / c1 : -(r1 ^ 2 + r1*z + z^2)*(-y)/(r1+z)/3 c2 : -(r1 ^ 2 + r1*z + z^2)*(x)/(r1+z)/3 c3 : 0 constraint 2 / * the inner spherical plate * / formula : x^2 + y^2 + z^2 = r2 ^ 2 content : / * sphere volume element * / c1 : -(r2 ^ 2 + r2*z + z^2)*(-y)/(r2+z)/3 c2 : -(r2 ^ 2 + r2*z + z^2)*(x)/(r2+z)/3 c3 : 0 function real zz1 ( real xx , real yy ) { return sqrt ( ( r1 ^ 2 ) - ( ( xx*xx ) + ( yy*yy ) ) ) } function real zz2 ( real xx , real yy ) { return sqrt ( ( r2 ^ 2)- ( ( xx*xx ) + ( yy*yy ) ) ) } vertices 1 1.0 1.0 zz2(1.0 , 1.0 ) constraint 2 2 2.0 1.0 zz2(2.0 , 1.0 ) constraint 2 3 2.0 2.0 zz2(2.0 , 2.0 ) constraint 2 4 1.0 2.0 zz2(1.0 , 2.0 ) constraint 2 5 1.0 1.0 zz1(1.0 , 1.0 ) constraint 1 6 2.0 1.0 zz1(2.0 , 1.0 ) constraint 1 7 2.0 2.0 zz1(2.0 , 2.0 ) constraint 1 8 1.0 2.0 zz1(1.0 , 2.0 ) constraint 1 edges 1 1 2 constraint 2 2 2 3 constraint 2 3 3 4 constraint 2 4 4 1 constraint 2 5 1 5 6 2 6 7 3 7 8 4 8 9 5 6 constraint 1 10 6 7 constraint 1 11 7 8 constraint 1 12 8 5 constraint 1 faces 1 1 6 -9 -5 2 2 7 -10 -6 3 3 8 -11 -7 4 4 5 -12 -8 bodies 1 1 2 3 4 volume 0.5 ....in the angular coordinates with the parametrisation , defines a torus as the locus of points that satisfy the equation where , the centre line of the torus is a circle of radius centred on the origin and the toroidal tube itself has a radius - see fig [ torus ] .this defines the inner toroidal plate while the outer plate is described by , where is again the gap width .note , that as in the spherical case , the bounding substrates are curved but remain strictly parallel to each other everywhere . .the outer torus has a tube radius while the inner torus has a tube radius . ] here we derive the content integral for a bubble trapped between two toroidal substrates .consider for example the patch on the surface of a torus , as shown fig [ torusint ] , and another patch on the plane , where is generated by projecting on to the plane .here we shall calculate the volume of the box between the two patches using the method of cylindrical shells .as shown in fig [ torusint ] we can let the radial distance along the plane be given by .then the resulting arc of radius and angular extent has a length , hence we can integrate over a series of cylindrical sections each of area , thickness , and volume , where , thus upon substitution of eq .( [ eq : arclen ] ) and eq .( [ eq : drelement ] ) into eq .( [ eq : dvtorus ] ) we have , \left [ \frac{xdx+ydy}{r } \right ] , \nonumber \\ & = & z\tan^{-1}\left(\frac{y}{x}\right)(xdx+ydy).\end{aligned}\ ] ] as in the previous ( spherical ) example : the volume under the torus between the patches labelled and can be evaluated by tracing a positively orientated path along the edges of the patch , as shown in fig [ torusint ] .again , in the data file ( below ) there are two integrals to be evaluated - one on the inner torus and one on the outer torus .the difference between these two gives the volume of the bubble .the use of the toroidal constraints and the content integral is demonstrated in the following surface evolver data file ..... 
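/* Content field for the toroidal plates: with r = sqrt(x^2+y^2) and phi = atan2(y,x),
   the cylindrical-shell argument above gives the volume element under a surface patch
   as dV = z*phi*(x dx + y dy), so each constraint below carries
   c1 = x*z*atan2(y,x), c2 = y*z*atan2(y,x), c3 = 0.
   The bubble volume is the difference between the line integrals evaluated on the
   inner and outer tori, with the sign fixed by the orientation of the facet edges. */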
parameter r1=1.0 /*tube radius of outer torus*/ parameter r2=0.9 /*tube radius of inner torus*/ parameter ro=5.0 /*toroidal radius*/ parameter ss=0.1 constraint 1 / * the outer toroidal plate * / formula : ( ro - sqrt ( x^2 + y^2 ) ) ^2 + z^2 - r1 ^ 2 = 0 content : / * torus volume element * / c1 : x*z*atan2(y , x ) c2 : y*z*atan2(y , x ) c3 : 0 constraint 2 / * the inner toroidal plate * / formula : ( ro - sqrt ( x^2 + y^2 ) ) ^2 + z^2 - r2 ^ 2 = 0 content : / * torus volume element * / c1 : x*z*atan2(y , x ) c2 : y*z*atan2(y , x ) c3 : 0 function real zz1 ( real xx , real yy ) { return sqrt ( ( r1 ^ 2 ) - ( ro - sqrt((xx*xx ) + ( yy*yy)))^2 ) } function real zz2 ( real xx , real yy ) { return sqrt ( ( r2 ^ 2 ) - ( ro - sqrt((xx*xx ) + ( yy*yy)))^2 ) } vertices 1 ( ro - ss ) 0.0 zz2(ro - ss , 0.0 ) constraint 2 2 ( ro+ss ) 0.0 zz2(ro+ss , 0.0 ) constraint 2 3 ( ro+ss ) ss zz2(ro+ss , 0.0 ) constraint 2 4 ( ro - ss ) ss zz2(ro - ss , 0.0 ) constraint 2 5 ( ro - ss ) 0.0 zz1(ro - ss , 0.0 ) constraint 1 6 ( ro+ss ) 0.0 zz1(ro+ss , 0.0 ) constraint 1 7 ( ro+ss ) ss zz1(ro+ss , 0.0 ) constraint 1 8 ( ro - ss ) ss zz1(ro - ss , 0.0 ) constraint 1 edges 1 1 2 constraint 2 2 2 3 constraint 2 3 3 4 constraint 2 4 4 1 constraint 2 9 1 5 10 2 6 11 3 7 12 4 8 5 5 6 constraint 1 6 6 7 constraint 1 7 7 8 constraint 1 8 8 5 constraint 1 faces 1 1 10 -5 -9 2 2 11 -6 -10 3 3 12 -7 -11 4 4 9 -8 -12 bodies 1 1 2 3 4 volume 0.005 ....triply periodic minimal surfaces can be defined exactly using the enneper - weierstrass complex integration representation for some limited cases ( including the schwarz - p surface ) .while these methods have been used extensively to model triply periodic minimal surfaces , for our purposes they are unwieldy and instead we use the periodic nodal approximation of the schwartz - p surface , where , as shown in fig [ single ] .the surface is periodic in all three directions with a fundamental cubic unit cell of length . in the followingwe shall consider the surface generated by setting the level - set as well as adjacent surfaces - for which . lies between the two substrates and . for the purposes of the foam simulation ( below )the surface is a fictitious surface .the middle surface is used only by the simulating annealing routine to arrange the voronoi centres evenly . in the foam simulation the surface is discarded and only the substrates and are used . ] to generate a realistic foam structure on such a surface our method is as follows .n points , are distributed randomly over the p - surface .these points represent initial coordinates for particles that move to minimise a repulsive inter - particle potential ; in addition there is another potential which acts to keep the particles on the surface ( as described below ) . by this processthe points are eventually evenly distributed over the surface .we then define two surfaces adjacent to , which are narrowly separated surfaces , where .a voronoi partition of these n seed points is calculated .the resulting structure is the is then imported into the surface evolver , where each voronoi cell represents a bubble .the bubbles are constrained by the bounding p - surfaces ( using the constraints described below ) .the bubble areas are prescribed to be equal and we then use evolver s gradient descent implementations to converge to a minimum area configuration .we begin first by distributing points randomly over the surface ( i.e. 
the approximation to the p surface ) , this initial arrangement is the starting point for metropolis simulated annealing algorithm .the simulation is addressed to a three dimensional cube shaped cell of side length .the box is periodic in all three direction .contained within this space are points which represent the centres of softly repelling spheres , each of diameter . if a pair of spheres is sufficiently close that they overlap we account for this using a pairwise potential as described below .a second potential is used to force the sphere centres to lie on the p - surface .we model the overlap potential between spheres using a hookean , or `` spring - like '' , pairwise interaction between the and spheres , which have their centres at and , the interaction energy between spheres is then given by , where is the distance between the centres of the spheres .note the interaction energy falls to zero when there is no overlap between the spheres . for the sphere a measure of its distance from the p surfaceis given by if the sphere is on the implicit surface then and non - zero otherwise . from thiswe can associate an energy cost for the sphere if it is not on the surface as , thus the total energy of the system is given by the sum of the sphere - sphere and sphere - surface interactions .the energy of the system is then minimised by a metropolis simulated annealing routine .this yields a set of sphere centres that are evenly distributed over the surface and with being negligibly small for any given centre . .then ( b ) a voronoi partition is generated from the sphere centres - the partition is clipped at the bounding surfaces and respects the local curvature of the substrates , as shown in ( c ) .the final state ( d ) is then used as the starting point for a surface evolver simulation . ] as a result of simulated annealing all the particle centres lie close to the surface and are distributed evenly over it . before computing the voronoi partition around the sphere centres , if any of the centres are not exactly on the surface we project it onto the surface and use this set of adjusted points for the partition .the surface is of no further interested and is deleted .next , we define two bounding surfaces and , so that the spheres centres lie on the mid - surface between these boundaries , see fig [ single]b and fig [ vprocess]a . from these bounding surfaces and the sphere centreswe compute the voronoi partition which is periodic in all three directions , using the graphics package , see fig [ vprocess]b .the resulting partition is clipped at the boundaries - i.e. the package calculates the confined voronoi diagram that respects the local curvature of the bounding substrates , see fig [ vprocess]c . the final state is shown in fig [ vprocess]d .the result is voronoi cell around each particle represented in terms of the vertices of the voronoi partition and polys .the latter are a data structure that consists of a closed positively orientated loop over the vertices .each loop defines a face with a unit normal pointing to the exterior of the cell .the voronoi partition is then converted into the appropriate format for surface evolver simulations .again , the bounding constraints are imposed through level set functions . 
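A minimal sketch of the seed-point preparation described above is given below, assuming Python with numpy. It follows the two energy terms of the text (a Hookean overlap penalty between soft spheres and a quadratic penalty on the value of the nodal level-set function), with Metropolis moves in a periodic cubic cell. The sphere diameter, spring constants, cooling schedule and move size are illustrative guesses rather than the values used in the actual simulations, and the subsequent bounded, periodic Voronoi construction and export to Surface Evolver are only indicated in comments.
....
import numpy as np

L_CELL = 1.0       # side of the periodic cubic cell (one period of the nodal surface); illustrative
N_PTS = 100        # number of seed points; illustrative
DIAM = 0.12        # soft-sphere diameter; illustrative
K_OVERLAP = 1.0    # Hookean overlap spring constant; illustrative
K_SURF = 10.0      # strength of the surface-attraction penalty; illustrative

def p_surface(r):
    """Nodal approximation of the Schwarz P surface; f(r) = 0 on the mid-surface."""
    return (np.cos(2 * np.pi * r[..., 0] / L_CELL)
            + np.cos(2 * np.pi * r[..., 1] / L_CELL)
            + np.cos(2 * np.pi * r[..., 2] / L_CELL))

def pair_energy(r):
    """Spring-like penalty for every overlapping pair, with minimum-image distances."""
    d = r[:, None, :] - r[None, :, :]
    d -= L_CELL * np.round(d / L_CELL)              # periodic minimum-image convention
    dist = np.sqrt((d ** 2).sum(-1))
    np.fill_diagonal(dist, np.inf)                  # ignore self-pairs
    overlap = np.clip(DIAM - dist, 0.0, None)
    return 0.25 * K_OVERLAP * (overlap ** 2).sum()  # 0.25: each pair appears twice in the sum

def surface_energy(r):
    """Quadratic penalty keeping every centre close to the level set f(r) = 0."""
    return 0.5 * K_SURF * (p_surface(r) ** 2).sum()

def total_energy(r):
    return pair_energy(r) + surface_energy(r)

def anneal(steps=20000, t_hot=0.05, t_cold=1e-4, move=0.02, seed=0):
    """Metropolis simulated annealing of the seed points (the full energy is
    recomputed at every step for clarity; a production code would update it
    incrementally)."""
    rng = np.random.default_rng(seed)
    r = rng.random((N_PTS, 3)) * L_CELL
    e = total_energy(r)
    for s in range(steps):
        temp = t_hot * (t_cold / t_hot) ** (s / steps)   # geometric cooling
        trial = r.copy()
        i = rng.integers(N_PTS)
        trial[i] = (trial[i] + rng.normal(scale=move, size=3)) % L_CELL
        e_trial = total_energy(trial)
        if e_trial < e or rng.random() < np.exp((e - e_trial) / temp):
            r, e = trial, e_trial
    return r

if __name__ == "__main__":
    centres = anneal()
    # Next steps (not shown): project each centre exactly onto f(r) = 0, compute a
    # periodic Voronoi partition clipped by the level sets f(r) = +delta and
    # f(r) = -delta, and export the clipped cells as bodies in a Surface Evolver
    # data file carrying those two level-set constraints.
    print("mean |f| over centres:", np.abs(p_surface(centres)).mean())
....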
however , due to the lack of cylindrical symmetry the derivation of the content integrals is not attempted here .instead we manually set the surface tension to zero on the edges and faces that are in contact with the bounding walls .an example of the data file for a simple bubble between two schwarz - p surfaces is given below . it should be noted ( again ) that unlike the previous examples , the separation between the adjacent level - set p surfaces is not a constant .unlike the simpler cases ( i.e. concentric spherical and toroidal plates ) it is not possible to define a surface that is strictly parallel to the p surface everywhere . as such in addition to the effects of substrate curvature in determining the surface tension energy of the bubble there will be another analogous effect due to the effect of varying plate separation , see for further details . ....parameter delta=0.2constraint 1 / * the outer plate * / formula : cos(2*pi*x)+cos(2*pi*y)+cos(2*pi*z ) = delta constraint 2 / * the inner plate * / formula : cos(2*pi*x)+cos(2*pi*y)+cos(2*pi*z ) = -delta function real zz1 ( real xx , real yy , real sl ) { return acos ( sl - cos(2*pi*xx ) - cos(2*pi*yy ) ) /(2*pi ) ; } function real zz2 ( real xx , real yy , real sl ) { return acos ( -sl - cos(2*pi*xx ) - cos(2*pi*yy ) ) /(2*pi ) ; } vertices 1 0.1 0.2 zz1(0.1 , 0.2 , delta ) constraint 1 / * vertices on outer plate * / 2 0.2 0.1 zz1(0.2 , 0.1 , delta ) constraint 1 3 0.2 0.2 zz1(0.2 , 0.2 , delta ) constraint 1 4 0.15 0.25 zz2(0.15,0.25,delta ) constraint 2 / * vertices on inner plate * / 5 0.25 0.15 zz2(0.25,0.15,delta ) constraint 2 6 0.25 0.25 zz2(0.25,0.25,delta ) constraint 2 edges 1 1 2 constraint 1 2 2 3 constraint 1 3 3 1 constraint 1 4 4 5 constraint 2 5 5 6 constraint 2 6 6 4 constraint 2 7 1 4 8 2 5 9 3 6 faces 1 1 2 3 constraint 1 tension 0 color yellow 2 4 5 6 constraint 2 tension 0 color yellow 3 7 -6 -9 3 4 -4 -7 1 8 5 -5 -8 2 9 bodies 1 1 2 3 4 5 volume 0.01 read re1 : = { refine edges where on_constraint 1 } re2 : = { refine edges where on_constraint 2 } // typical ( but crude ) evolution gogo : = { re1 ; re2 ; g 5 ; r ; g 20 ; r ; g 50 ; r ; g 50 } ....in the following we describe our experimental attempt to print a honeycomb onto a schwarz - p surface using bees . our motivation was to analyse how the bees cope with or respond to negatively curved surfaces that have two essential properties .firstly , negatively curved surfaces ( such as the schwarz - p surface ) can never have constant gaussian and constant mean curvature .therefore , the bees would experience frustration in building the honeycomb - since at different points of the surface they encounter different surface curvatures . secondly ,it is not possible to tile a negatively - curved surface with hexagons only and so we were curious to see how the bees would modify the usual hexagonal motif to adapt to this constraint . with this in mind we designed an enclosure comprised of two schwarz - p surfaces with a small gap between them ( of the type described above ) ,see fig [ bees]a .our hope was that the bees would then build a natural two - sided honeycomb in - between these two surface - that is one layer of cells are formed on the inner surface and another layer on the outer surface .the gap width was carefully chosen to be that of the height of two honeycomb cells and a little extra space for the bees to manoeuvre within the gap between cells from adjacent surfaces .we deliberately used only half the schwarz - p surface for the cell . 
With this design we did not need to use pins or other spacers between the surfaces to hold them at the appropriate distance. As such, the bees had maximum freedom to build in any way they chose, without obstructions arising from the design of the cell. The cell was then constructed by rapid prototyping (fig [bees]b) and the bees were allowed to enter the structure. The result, after repeated attempts, was that the bees did build a honeycomb on part of the structure (see fig [bees]c and fig [bees]d). However, we did not obtain a complete tiling of the two surfaces. Ultimately, we believe that the strongly curved surface proved too much of a frustration to the bees, and they only attempted to build on the regions that were immediately accessible, neglecting to proceed further inwards. We have presented some of our ongoing work to simulate foams between curved surfaces. In the future we will implement these numerical techniques and examine the role of curvature in determining the morphology and dynamics of quasi-2D foams. We also hope to repeat the bee experiment using the insight gained from our first attempt. We are greatly indebted to Ken Brakke (the inventor of Surface Evolver) for all his kind and patient help in implementing the content integrals. AM acknowledges support from the Aberystwyth University Research Fund.
We present a Surface Evolver framework for simulating single bubbles and multicellular foams trapped between curved parallel surfaces. We are able to explore a range of geometries using level-set constraints to model the bounding surfaces. Unlike previous work, in which the bounding surfaces are flat (the so-called Hele-Shaw geometry), we consider surfaces with non-vanishing Gaussian curvature, specifically the sphere, the torus and the Schwarz primitive surface. In the case of multicellular foams, our method is first to distribute a set of points evenly over the surface (using an energy-minimisation approach); these seed points are then used to generate a Voronoi partition, clipped to the confining space, which in turn forms the basis of a Surface Evolver simulation. In addition, we describe our experimental attempt to generate a honeycomb on a negatively curved surface by trapping bees between two Schwarz primitive surfaces. Our aim is to understand how bees adapt the usual hexagonal motif of the honeycomb to cope with a curved surface. To our knowledge, this is the first time that an attempt has been made to realise a biological cellular structure of this type. Keywords: curvature, foams, Hele-Shaw.
in order to estimate the prevalence of diseases , traits or behaviors in particular social groups or even in the entire society , researchers typically rely on samples of the target population . a carefully selected sample may generate satisfactory low standard errors with a bonus of optimizing research resources and time .a common challenge is to obtain a significant and unbiased sample of the target population .this is particularly difficult if this population of interest is somehow segregated , stigmatized , or in some other way difficult to reach such that a sampling frame can not be well defined .these so - called hidden ( or hard - to - reach ) populations may be for example man - who - have - sex - with - man ( msm ) , sex - workers , injecting drug users , criminals , homeless , or minority groups . in 1997 , heckathorn introduced a new methodology to sample hidden populations named respondent - driven sampling ( rds ) .rds exploits the underlying social network structure in order to reach the target population through the participants own peers .the method consists in a variation of the snowball sampling where the statistical estimators have weights to compensate the non - random nature of the recruiting process , i.e. that individuals with many potential recruiters have a higher chance to be sampled . in rds ,researchers select seeds to start the recruitment . a seed person then invites a number of other individuals to participate in the survey by passing a coupon to them .those successfully recruited respond a survey and get new coupons to invite a number of other individuals within their own social network , and the process is repeated until enough participants are recruited .successful recruitment and participation in the survey are both financially compensated .a fundamental assumption is that each participant knows the number of his or her own acquaintances in the target population , or in the network jargon , his or her own degree .this information is used as weights to estimate the prevalence of the variable of interest in the study population .the perhaps most popular rds statistical estimator is due to volz and heckathorn , who devised a markov process whose equilibrium distribution is the same as the distribution of the target population .this estimator is derived after a series of assumptions regarding both the underlying network structure and the recruitment process _ per se_. although the assumptions are generally reasonable , sometimes they are relatively strict for realistic settings , as for example , the uniformly random selection of peers , persistent successful recruitment , and sampling with replacement .these and other assumptions have been scrutinized in previous theoretical studies and the estimator has performed satisfactory in different scenarios using both synthetic and real networks . a number of real life studies have also concluded that rds is an effective sampling method for various categories of hidden populations ( see for example refs . 
) .social networks are however highly heterogeneous in the sense that the structure of connections can not be represented by characteristic values .this is the case of the number of contacts per individual or of the level of clustering between them .since the rds dynamics is constrained by the network structure , one may expect that different patterns of connectivity affect the recruitment chains .for example , the network structure may be such that a recruitment tree grows only in one part of the network .in realistic settings using sampling without replacement , even if all individuals are willing to participate , trees may simply die out because a network has been locally exhausted and bridging nodes block further propagation of coupons to other parts of the network .such situation is not unlikely in highly clustered sub - populations where coupons may simply move around the same group of people .previous theoretical studies have addressed some of these network constrains by studying the rds performance on either synthetic structures or samples of real networks .each approach to model social networks has its own advantages and limitations . on one hand , simple synthetic structures and sampling processes are unrealistic but allows some mathematical treatability and thus intuitive understanding . on the other hand, samples of real networks may suffer biases themselves due to their own sampling and thus potential incompleteness of data .network clustering is particularly important in the context of social networks and should be carefully assessed .it may have different meanings but here we associate clustering to social triangles , i.e. the fact that common contacts of a person are also in contact themselves .network communities are also a form of clustering in which groups of individuals are more connected between themselves than with individuals in other groups .as already mentioned , clustering in all its forms is not uniform across a network . in practice , it means that one may find hidden sub - populations within the study population .examples include social groups with particular features ( e.g. wealth , foreigners , ethnic minorities ) embedded in the target population , transsexuals in populations of msm , or geographically sparse populations . while these sub - populations may potentially be removed by defining a more strict sampling frame , social groups ( or communities ) are inherent of social and other human contact networks .note that network clustering is not the same as homophily , that is the tendency of similar individuals to associate , but one may enhance the other .for example , individuals may share social contacts because they live geographically close , share workplaces , or are structured in organizations ( potentially leading to network clustering ) but may be completely different in other aspects ( low homophily in wealth , health status , gender , infection status , and so on ) . in this paper, we use computational algorithms to generate synthetic networks with various levels of clustering and with network communities of various sizes , aiming to reproduce structures observed in real social networks . 
using realistic parameters ,we simulate a rds process using these networks and quantify the performance of the rds estimator in different scenarios of the prevalence of an arbitrary variable of interest .the paper is organized such that we first analyze how triangles and community structure affect how the rds spread in the network when it comes to size of transmission trees and generation of recruitment .then we investigate how clustering affects the validity and reliability of the rdsii estimator as a function of different willingness to participate ( response - rates ) in the population .we also test the effect of clustering for scenarios where the variable under study is correlated with the degree of the nodes and the size of the network community .thereafter , we study the consequences of the biased selection of seeds , the bias induced by network structure in samples of real social networks , and the effect of restarting the seeds during the sampling experiment .we describe in this section the models used to generate the synthetic networks with different number of triangles and varying levels of community structure , the empirical networks , the model to simulate the rds dynamics , the protocols to artificially distribute the infections in the target population , and the estimator and other statistics used for the analysis .a social network is defined by a set of nodes representing the population and a set of links representing the social contacts , as for example acquaintances or friendship , between two individuals .the network structure can be characterized by different network quantities .the most fundamental quantity is the degree that represents the number of links of a node or equivalently the number of contacts of an individual .the assortativity of a network measures the tendency of nodes with similar degree to be connected .the number of triangles and the clustering coefficient are used to measure the local clustering in the network .a triangle corresponds to the situation where two contacts of a node are also in contact themselves , and the clustering coefficient is a normalized count of the number of triangles .a network community , on the other hand , is a group of nodes that are more connected between themselves than with nodes of other groups .a fundamental property of the network community structure is that only a few nodes link ( or bridge ) different communities , these nodes are also known as bottlenecks because they constrain the diffusion , or the sampling process , in the network .if there are only a few bridging nodes , one says that the community structure is strong , whereas many bridging nodes weaken the community structure reducing the bottlenecks between groups .we use computational algorithms able to generate synthetic networks with tunable number of triangles ( fig .[ fig:01]b ) or of community structure ( fig .[ fig:01]c ) .these algorithms are not expected to reproduce a particular social network but to generate various structures observed in social networks more realistically than previously studied structures .our reference random network is obtained by simply connecting pairs of nodes for a given degree sequence , a procedure that results on a negligible number of triangles and no network community structure ( fig .[ fig:01]a ) .this model is also know as the configuration model .the first algorithm , due to serrano and bogua , generates networks with a varying number of triangles and assortativity . 
in this algorithm , an _ a priori _ degree sequence is chosen following a given distribution of node degree .we choose a power - law degree distribution with a small exponential cutoff , i.e. .if no or very small costs are associated with keeping links alive , scale - free distributions are reasonable models for empirical distribution , otherwise we usually observe broad scale distributions not necessarily power - law - like . generally speaking , this degree distribution is thus not expected to be the most appropriate distribution of contacts in real populations but it captures the right - skewed degree heterogeneity typically observed in social groups .this heterogeneity means that the majority of nodes has only a few contacts whereas a small number of them has several contacts .we fix the minimum possible degree to in order to obtain an average degree .furthermore , an _ a priori _ clustering coefficient is chosen such that a given number of triangles is defined for each degree class .the algorithm evolves by randomly selecting three different nodes and forming a triangle between them , respecting the distribution of triangles per degree class .as soon as no new triangles can be formed , the remaining links are uniformly connected ( i.e. the configuration model ) such that no links are left unconnected .self - links are forbidden .a parameter controls the assortativity ( assortativity increases with decreasing ) and the parameters and control the expected clustering coefficient ( clustering increases with increasing and decreases with increasing ) . in this paper , we use , and ( for the configuration with many triangles ) and , and ( for the configuration with few triangles ) . the second algorithm , developed by lancichinetti and fortunato , is used to create networks with community structure .here one starts by choosing the distribution of degrees and the distribution of community sizes . in both cases , we use power - law distributions to capture the heterogeneity in the degree and in the community size as observed in some real social networks . other choices of probability distribution may be more suitable for specific populations but here again we want to study the heterogeneity in the community sizesthe degree distribution has the same parameters as used in the first algorithm , the power - law distribution of community sizes has exponent and community sizes are limited between and nodes . 
these values are chosen to guarantee that a sufficient number of communities are large in size and at the same time , enough small - sized communities are represented .for example , higher values of the exponent would result in relatively more small - sized communities .these values are also constrained by the number of links and the level of overlapping of communities ( see below ) , and are chosen to generate a network with a single connected component .the number of overlapping nodes and the number of communities that each node belongs to are inputs of the algorithm .overlapping means that a number of nodes belong to more than one community ( these are the bridging nodes ) while the rest of the nodes only belong to single communities .one may further select a mixing parameter to add random links between the bridge nodes and randomly chosen communities ( to weaken the community structure ) .therefore , small overlapping and small mixing generate stronger community structures .we set , and select or overlapping nodes in communities respectively for strong and strong - moderate community structures . for moderate - weak and weak community structures , we set respectively , and and overlapping nodes ( in communities as well ) . for each algorithm , to obtain the statistics , we generate versions of the network with the same set of parameters and with nodes each , which is also the size of the target or study population .we also study rds using real - life networks .we perform simulations on 5 samples of empirical contact networks representing different forms of human social relations .three data sets correspond to email communication , two between members of two distinct universities in europe ( ema1 , ema2 ) and one between employees of a company ( enr ) . in these datasets, nodes correspond to people and social ties are formed between those who have sent or received at least one email during a given time interval .one data set corresponds to friendship ties between us high - school students ( adh ) .the last data set corresponds to online communication between members of an online dating site ( pok ) .similarly to the email networks , if two members have exchanged a message through the online community , a link is made between the respective nodes .although some of these data sets do not correspond to social networks in which rds would take place , they serve as realistic settings capturing the network structure of actual social relations .we have selected data sets with diverse sample sizes and network structure in order to cover various contexts and configurations ( table [ tab:01 ] ) ..*summary statistics of the empirical networks used in this study . *number of nodes ( ) ; number of links ( ) ; clustering coefficient ; number of communities ; size of the smallest community and size of the largest community , according to the mapequation algorithm . [ cols="^,^,^,^,^,^",options="header " , ] we simulate the sampling by using a stochastic process reproducing several features of a realistic rds dynamics .our model further adds a continuous - time framework and the response - rate can be controlled .we use similar parameters as typically used in the literature .we start by uniformly selecting ( unless otherwise stated ) random nodes as seeds for the recruitment . 
after a time , sampled from an exponential distribution ,each seed chooses uniformly three of its contacts and pass one coupon to each of them .the exponential distribution is chosen because in our model we assume that the recruitment follows a poisson process .we select the average waiting time to be , meaning that a node waits on average time steps ( e.g. 5 days ) before inviting its contacts . therefore , after waiting time steps , and with probability , that represents the probability of participation ( or response - rate , i.e. one minus the probability of not returning a coupon ) , each of these contacts recruits three of their own contacts that have not participated yet ( sampling without replacement ) .if a node accepts to invite its own contacts ( i.e. accepts to participate ) , we add this node in the sample .the process continues until all possibilities of new recruitments are exhausted or , at maximum , when a specific sample size is reached . note that this continuous - time model is equivalent to a discrete - time model in which randomly chosen nodes update their status sequentiallywe assume that if a node refuses to participate once , it becomes available for recruitment by other nodes as if it was never invited .we repeat the simulation of the rds dynamics times for each synthetic network and times for each empirical network . in rds studies ,one is interested in quantifying the prevalence of some variable in the target population .this variable may represent , for instance , being tested positive for a given disease , being male or female , the ethnicity , or having a particular physical trait .in this paper , to simplify the notation , we say that an individual and its respective node is infected with or not infected with .we use different protocols to infect a fraction of of the network nodes with the quantity .the remaining nodes are thus assumed to be non - infected .the reference case ( ri ) corresponds to uniformly selecting the nodes within the target population , i.e. the infection is uniformly distributed in the network .the preferential case ( pi ) corresponds to selecting nodes in decreasing order of degree , i.e. we start at nodes with the highest degree and infect them with until of the nodes become infected . to add some noise ( case pri ) , we select of the infected nodes , cure them , and redistribute these infections uniformly in the network such that the total number of infected nodes remains fixed .the other two cases consist on infecting nodes according to the community structure . in the first case ( si ) , we initially infect nodes in the smallest communities until of the nodes become infected . in the second case ( bi ) , we infect nodes in the largest communities until the same fraction of of nodes get infected .to reduce homophily , we add noise by selecting of the infected nodes , curing them , and redistributing these infections uniformly in the network while keeping the total number of infected nodes fixed ( these configurations are named sri for small and bri for large communities ) . to analyze the recruitment trees , we measure the total number of participants ( i.e. 
the sample - size ) , and the size and the number of generations ( or waves ) of each recruitment tree , starting from a seed node .the proportion of individuals in the population with a certain feature ( ) is estimated by using the rdsii estimator : where is the reported degree of an individual in the social network .we thus define : as the average estimate of the prevalence of for simulations with the same set of parameters , with standard deviation given by .complementary , we define the average bias , i.e. the difference between the estimate of the prevalence of and the true prevalence of , for simulations , as : in the results , we show the relative bias in respect to the true value of the prevalence , i.e. we show . the design effect is defined as : where is the variance of the estimator using rds , and is the variance of the same estimator using simple uniform sampling ( srs ) , i.e. the same number of nodes ( as in the rds sample ) is uniformly selected in the study population .the design effect thus measures the number of the sample cases necessary to obtain the same statistics as if a simple random sample was used . in our study , ( rds simulations for each of the generated network with fixed parameters , and rds simulations for each of the empirical networks ) .we first discuss the statistics of recruitment trees for synthetic networks with various levels of clustering and community structure .we then analyze the performance of the rdsii estimator for different network structures and for different scenarios of prevalence of the infection .this analysis is followed by results on the convergence of the estimator for increasing sample size on networks with strong community structure .afterwards , we study the rds performance considering the same scenarios of prevalence of the infection using real social networks and conclude the results section showing the increased bias as a consequence of running a single recruitment tree per time .we first look at some statistics of the recruitment trees in the case that the entire target population can potentially be recruited , i.e. the recruitment only stops if no new subject is recruited or if the network is exhausted ( everyone is recruited ) . sincethe population is fixed to individuals , this limiting case provides us the maximum possible coverage of the sampling for a given configuration of the rds . in the reference case ( fig .[ fig:02]a - e ) , only the degree distribution is fixed and the nodes are uniformly connected ( configuration model , see section [ methods_a ] ) . in this case , if every recruited individual responds to the survey , i.e. ( see section [ methods_b ] ) , nearly all the population is recruited .the recruitment dynamics however is not robust to variations in the response - rate , for example , in our simulations , for , only about of the population is recruited , and this percentage falls to negligible values if .successful recruitment in fact occurs only if in the absence of any ( or negligible ) triangles and community structure .we observe a broad distribution in the size of the recruitment trees ( fig .[ fig:02]b , c ) .there is a relatively high chance for the recruitment trees to break down quickly and thus to contain only a few individuals .this typically happens when a recruitment tree reaches a high - degree node .high - degree nodes are easily reachable because they have many connections .once the first recruitment tree passes through one of these high - degree nodes , they become unavailable . 
consequently, the recruitment trees arriving afterwards simply die out as soon as they reach these nodes . at the same time , a few recruitment chains persist long enough and generate large trees , potentially sampling large parts of the network from a single initial seed . as expected, there is a characteristic peak in the number of waves ( fig .[ fig:02]d , e ) .( eq . [ eq:02 ] ) and the respective standard deviation , b , e , h , k ) the average bias ( eq . [ eq:03 ] ) , and c , f , i , l ) the design effect ( eq . [ eq:04 ] ) .the underlying networks have a few number of triangles and weak community structure ( see section [ methods_a ] ) , and recruitment is limited respectively to ( 1st column ) and to ( 2nd column ) participants . in both cases , of the populationis infected with preferentially towards high degree nodes following either protocol pi ( top 3 rows ) or protocol pir ( bottom 3 rows ) ( see section [ methods_c ] ) . ]the increasing level of clustering has some effect in the statistics of the recruitment trees .in particular , in the absence of communities , a large number of triangles improve recruitment for intermediate values of response - rates ( fig .[ fig:02]f - j ) .triangles create redundant paths eliminating bottlenecks in the network , as for example , bottlenecks due to high degree nodes .high degree nodes make a large number of contacts and thus connect different parts of the network . as mentioned before , as soon as these nodes are recruited , the recruitment chain may not be able to expand beyond them .on the other hand , if the network has weak ( fig .[ fig:02]k ) or strong ( fig .[ fig:02]p , u ) community structure , the number of triangles becomes irrelevant , and the level of community structure defines the sample size . in case of strong community structure ( with low or large number of triangles ) , a maximum of of the population may be recruited ( fig .[ fig:02]p , u ) .bottlenecks in this case correspond to nodes bridging communities .these bottlenecks can not be removed by adding triangles , that only produce local network redundancy , but by connecting more nodes between different communities , i.e. weaken the community structure . moreover , strong communities imply that response - rates should be higher ( in comparison to the absence of or to weaker communities ) for the recruitment chains to take off and gather sufficient participants . if response - rates are bellow , recruitment is insufficient .this is a fundamental issue in realistic settings , meaning that highly clustered ( or in other words , highly segregated and marginalized ) populations need a bit higher compensation in order to achieve the same sampling size as one would obtain if studying less segregated groups .we see that irrespective of the number of triangles or level of community structure , lower response - rates cause a relatively larger number of small recruitment trees together with a few waves ( fig .[ fig:02]b , d , g , i , l , n , q , s , v , x ) .this is not only undesirable because the final sample remains small but also because a few waves is not sufficient for the stochastic process to forget the initial conditions and thus reach the stationary state , the condition in which the estimator is expected to be unbiased . 
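To make the quantities used in these comparisons concrete, the recruitment model and the estimators can be summarised in a short sketch. In notation introduced here, with d_i the reported degree of participant i, S the sample, S_X ⊆ S the sampled carriers of X, and n the number of repeated surveys,
\[
\hat p_{X}=\frac{\sum_{i\in S_{X}} 1/d_{i}}{\sum_{i\in S} 1/d_{i}},\qquad
\bar p_{X}=\frac{1}{n}\sum_{j=1}^{n}\hat p_{X}^{(j)},\qquad
\mathrm{bias}=\bar p_{X}-p_{X},\qquad
\mathrm{DE}=\frac{\operatorname{Var}_{\mathrm{RDS}}\bigl(\hat p_{X}\bigr)}{\operatorname{Var}_{\mathrm{SRS}}\bigl(\hat p_{X}\bigr)}.
\]
The Python sketch below pairs this estimator with a stripped-down version of the recruitment model from the methods (uniformly chosen seeds, three coupons per participant, a response-rate, sampling without replacement, and a cap on the sample size); the exponential waiting times are replaced by the equivalent random-order discrete updates mentioned there. The test network is a generic clustered power-law graph from networkx, standing in for (not reproducing) the Serrano and Boguñá or Lancichinetti and Fortunato constructions used in the paper, the infection protocol shown is the degree-biased (PI-like) one, and all parameter values are illustrative.
....
import random
import statistics
import networkx as nx

def infect_top_degree(G, fraction=0.2):
    """Degree-biased (PI-like) protocol: the top `fraction` of nodes by degree carry X."""
    ranked = sorted(G.nodes, key=G.degree, reverse=True)
    return set(ranked[: int(fraction * G.number_of_nodes())])

def simulate_rds(G, n_seeds=10, coupons=3, p_respond=0.8, max_sample=500, rng=None):
    """Random-order recruitment without replacement: each participant passes up to
    `coupons` coupons to uniformly chosen contacts, each accepted with p_respond."""
    rng = rng or random.Random()
    sample = set(rng.sample(list(G.nodes), n_seeds))
    frontier = list(sample)
    while frontier and len(sample) < max_sample:
        recruiter = frontier.pop(rng.randrange(len(frontier)))   # random update order
        candidates = [v for v in G[recruiter] if v not in sample]
        rng.shuffle(candidates)
        for v in candidates[:coupons]:
            if len(sample) >= max_sample:
                break
            if rng.random() < p_respond:                         # invitation accepted
                sample.add(v)
                frontier.append(v)
    return sample

def rds_ii(G, sample, infected):
    """RDS-II (Volz-Heckathorn) prevalence estimate from reported degrees."""
    wsum = sum(1.0 / G.degree(v) for v in sample)
    wpos = sum(1.0 / G.degree(v) for v in sample if v in infected)
    return wpos / wsum

def design_effect(G, infected, runs=200, **kw):
    """Variance of the RDS estimate over repeated surveys divided by the variance of
    the sample proportion in simple random samples of the same sizes."""
    rng = random.Random(1)
    rds_est, srs_est = [], []
    for _ in range(runs):
        s = simulate_rds(G, rng=rng, **kw)
        rds_est.append(rds_ii(G, s, infected))
        srs = rng.sample(list(G.nodes), len(s))
        srs_est.append(sum(v in infected for v in srs) / len(srs))
    return statistics.variance(rds_est) / statistics.variance(srs_est)

if __name__ == "__main__":
    G = nx.powerlaw_cluster_graph(5000, 3, 0.5, seed=1)   # stand-in clustered network
    infected = infect_top_degree(G, 0.2)
    s = simulate_rds(G, rng=random.Random(2))
    print("sample size:", len(s))
    print("RDS-II estimate:", round(rds_ii(G, s, infected), 3), "(true prevalence 0.2)")
    print("design effect over 200 runs:", round(design_effect(G, infected), 2))
....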
in case of strong community structure ( fig .[ fig:02]s , t , x , y ) , we note a broader variance in the number of waves suggesting that each seed samples the network non - homogeneously .this may be related to the fact that the communities have different sizes ( or number of nodes ) and thus the bottlenecks between communities are reached at different times by different recruitment chains . to study the impact of the network structure and response - rates in the rds estimator , we measure four statistics : ( i ) the average rdsii estimator ( eq . [ eq:02 ] ) and its respective ( ii ) standard deviation , ( iii ) the average bias ( see eq . [ eq:03 ] ) , and ( iv ) the design effect ( eq. [ eq:04 ] ) ( see section [ methods_d ] ) .figures [ fig:03]a - f , m - r show the reference case , i.e. the configuration model where the only structure is in the degree sequence and the rest is random ( see section [ methods_a ] ) . in this reference case , if there is no restriction to the sample size in respect to the size of the target population ( i.e. up to 10000 individuals may be sampled , but actual sample size depends on the response - rate ) , the estimator performs well , although with substantial standard deviation and biases for even if the quantity is uniformly spread in the network ( fig .[ fig:03]a - c ) .this is a result of the insufficient sample size for low response - rates .individuals with a large number of contacts are believed to be more central in a network .these individuals may be for example more likely to get an infection or propagate a piece of information .we thus test an hypothetical scenario where is concentrated in high - degree nodes ( see protocol pri in section [ methods_c ] ) .note that this assumption may be however completely irrelevant in some contexts , but it is useful to understand the mechanisms of sampling . in this case, the accuracy of the estimator is poor for , i.e. is under - estimated for both situations , with and without many triangles , and precision is worse for response - rates ( respectively fig .[ fig:03]j - l and fig .[ fig:03]d - f ) . as before ,the poor accuracy is a result of the rds not recruiting sufficient participants .the under - estimation of the prevalence however suggests that low - degree nodes are not being sufficiently sampled as the sample size gets close to the network size . a substantial bias , given by ,is also observed .the design effect varies between 1 and 2 , with some exceptions for in case of many triangles . if the number of participants is limited to only individuals , i.e. of the total population ( a small fraction of the target population , as usually recommended to guarantee unbiased estimates ) , the performance of the estimator and the average bias improves substantially .however , remains slightly underestimated and the standard deviation increases in the case of many triangles irrespective of the response - rates ( fig .[ fig:03]v - x ) .the cost of this improvement however is a much higher design effect ( fig .[ fig:03]x ) .figure [ fig:04]a - f shows that in networks with weak community structure , if is concentrated at the high - degree nodes , the estimates remain good for if the maximum number of participants is low ( up to 500 ) compared to the total size of the target population . 
in the limiting case where all individuals can potentially participate ( up to 10000 ) , is slightly overestimated and substantially underestimated respectively for small and large response - rates , being accurate only for moderate values , i.e. ( fig .[ fig:04]g - l ) .the results suggest that for larger response - rates , there is a significant under - representation of low - degree nodes in the final sample .this happens because low - degree nodes become increasingly more difficult to sample as the sample size gets close to the network size ( causing finite - size effects ) .biases are also larger if the community structure is stronger because the recruitment chains die out before exploring some of the communities .altogether , these results are in accordance with previous recommendations that the sample size should be much smaller than the size of the target population in order to achieve good estimates using the rdsii estimator .some caution however should be pointed out since it is not straightforward to know in advance the size of the target population and thus to estimate the optimal sample size in respect to the target population . if too many subjects are recruited , relatively to the size of the target population , saturation occurs and the network structure induces biases in the estimator due to finite - size effects .we now simulate scenarios where the variable is concentrated in specific communities , irrespective of the degree of the nodes .this is a reasonable assumption considering that an infection ( or other particular quantities ) may affect only the population of some geographical region , or for example , a particular group of injecting drug users among msm may be sharing contaminated paraphernalia . by using the know structure of each network , we select of the nodes associated to the smallest communities and infect them with the quantity ( see section [ methods_c ] ) . in thissetting , the prevalence is underestimate and the estimator has relatively large deviations ( fig .[ fig:05]a , g ) for strong and strong - moderate community structure .estimators improve for weaker community structure ( fig .[ fig:05]m , s ) . even for weak community structure ,the minimum average bias is about ( fig .[ fig:05]t ) , being at least in case of strong communities ( fig .[ fig:05]b ) for . for lower response - rates ,the bias gets substantially larger , as in the previous experiments .the design effect is also significantly affected for any level of community structure ( fig .[ fig:05]c , i , o , u ) .this means that for strong communities , for example , in order to have the same statistics as if a standard simple random sample was performed , the rds needs up to 40 times the same sample size .furthermore , if we redistribute the infection of randomly chosen infected nodes to decrease homophily , the overall quality of the statistics improves but still with significant bias , and larger standard deviation and design effect for stronger community structure ( fig .[ fig:05]d - f , j - l , p - r , v - x ) . and the respective standard deviation for networks with a , b , e , f , i , j ) strong and c , d , g , h , k , l ) weak community structure . in the 1st column, is concentrated in the small communities ( sri protocol ) , in the 2nd column , is concentrated in the large communities ( bri protocol ) , and in the 3rd column , is concentrated in the high degree nodes ( pri protocol ) . ]( eq . [ eq:02 ] ) and the respective standard deviation , the average bias ( eq . 
[ eq:03 ] ) , and the design effect ( eq . [ eq:04 ] ) . in all cases , of the population is infected with in a - c , g - i ) the smallest communities and in d - f , j - l ) the largest communities ( see section [ methods_c ] ) .a - f ) networks with strong communities and g - l ) networks with weak communities . ]( eq . [ eq:02 ] ) and the respective standard deviation , the average bias ( eq . [ eq:03 ] ) , and the design effect ( eq . [ eq:04 ] ) . in all cases , of the population is infected with in a - c , g - i ) the smallest communities and in d - f , j - l ) the largest communities ( see section [ methods_c ] ) .a - f ) networks with strong communities and g - l ) networks with weak communities . ] on the other hand , we can assume that is unlike to occur in small communities because , for example , nodes associated to these communities are simply less likely to get an infection due to isolation .social control is also often higher in small groups. it may therefore be easier to behave in certain ways in larger groups .people who want to or who have particular behaviors or traits may thus decide to move to larger groups . to simulate this hypothetical scenario , we infect of the nodes in the largest communities ( see section [ methods_c ] ) .figure [ fig:06]a shows that is overestimated for for strong community structure .these estimates improve for weaker communities , also resulting on smaller standard deviations ( fig .[ fig:06]g , m , s ) for larger response - rates .the standard deviation is generally slightly larger in this case in comparison to the case where is concentrated in the small communities .the design effect is very high for strong community structure ( fig .[ fig:06]c , i ) , even if homophily is reduced ( fig .[ fig:06]f , l ) .we perform the same analysis using networks with the same configuration studied until now but with higher clustering coefficient ( between 0.5 and 0.6 ) and the results remain quantitatively the same ( apart for a few fluctuations ) .this finding reinforces the previous observation that triangles have a relatively small impact in rds if communities are present in the network .altogether , these results show the key difference between clustering and homophily that was mentioned in the introduction . in both scenarios ,the network community structure and the number of triangles are the same , and homophily is high . in the later case ( fig .[ fig:06 ] ) homophily occurs inside the largest communities whereas in the first case ( fig .[ fig:05 ] ) it happens in the smallest communities .the structure - induced biases however remain relatively high even if the homophily is reduced by redistributing a fraction of the infections . in the previous section ,we have studied the bias induced by the network structure and the response - rates .if we fix the response - rate , each realization of the simulation generates a different sample size due to the stochastic nature of the process . 
in this section , therefore , we fix the response - rate and analyze the effect of the sample size on the estimator .since recruitment may stop at different times on each simulation , here we estimate the mean and standard deviation for sample size using only simulations in which the recruitment reaches this size .this means that the estimates for large sample sizes have less data points ( to calculate the mean ) than those for small sample sizes .previous studies report that in real settings , response - rates may vary between ( for female sex - workers ) and ( for msm ) , with mean and median at about .we thus study 3 scenarios for the response - rates : .figure [ fig:07 ] shows that for strong community structure , is slightly overestimated for sample sizes smaller than and underestimated for larger sample sizes if is concentrated in the smallest communities ( fig .[ fig:07]a , b ) . on the other hand ,the prevalence of is overestimated for sample sizes larger than if is concentrated in the largest communities ( fig .[ fig:07]e , f ) . in both cases ,the mismatch is maximized when the sample size is between about and ( i.e. 100 and 1500 participants respectively ) of the study population . if is concentrated in the high degree nodes , is underestimated for increasing sample size ( fig .[ fig:07]i , j ) but not as much as for the previous cases . on the other hand, if the community structure is weak , the estimator performs well ( with slight over- and under - estimation of the prevalence for small ( fig .[ fig:07]c , d ) and large ( fig .[ fig:07]g , h ) communities , except in the case of being concentrated in high degree node ( fig .[ fig:07]k , l ) . in this case , the estimates are only good in the range of sample sizes between and nodes . we have assumed so far that seeds are uniformly chosen within the target population . while this is a reasonable standard assumption in theoretical studies ,it is hardly met in real contexts because the inherent fact that the study population is hard - to - reach and seed selection is non trivial . a biased selection of seeds can increase the bias in the rds estimators as shown in figs .[ fig:08 ] and [ fig:09 ] .if seeds are selected only between subjects associated to small communities ( here defined as communities with less than 200 members ) , recruitment chains are generally unable to reach beyond those communities and thus the prevalence is overestimated when the infection is concentrated in the smaller communities ( fig .[ fig:08]a - c ) . 
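the recruitment process itself can be prototyped in a few lines; the sketch below grows chains wave by wave from uniformly chosen seeds, hands out a fixed number of coupons, and lets each invited neighbour accept with probability equal to the response-rate. it is only an approximation of the protocol used in our simulations (for instance, it does not re-invite earlier non-responders), and all parameter names are ours.

```python
import random
import networkx as nx

def simulate_rds(G, n_seeds=10, coupons=3, response_rate=0.5,
                 max_sample=500, rng=random):
    """minimal wave-by-wave rds recruitment on a networkx graph G;
    sampling is without replacement (each person participates once)."""
    seeds = rng.sample(list(G.nodes()), n_seeds)
    sampled, frontier = set(seeds), list(seeds)
    while frontier and len(sampled) < max_sample:
        next_wave = []
        for v in frontier:
            neighbours = [u for u in G.neighbors(v) if u not in sampled]
            rng.shuffle(neighbours)
            for u in neighbours[:coupons]:
                if len(sampled) >= max_sample:
                    break
                if u not in sampled and rng.random() < response_rate:
                    sampled.add(u)
                    next_wave.append(u)
        frontier = next_wave
    return sampled

# example: recruit from a random graph and inspect the sample size
# G = nx.erdos_renyi_graph(10000, 0.001)
# print(len(simulate_rds(G, response_rate=0.6)))
```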
on the other hand ,the prevalence is underestimated if the infection is concentrated in the larger communities ( fig .[ fig:08]d - f ) .the mismatch in the estimators are particularly significant if the community structure is stronger , however , the prevalence is also strongly biased for low response - rates ( and weakly biased for high response - rates ) even if the community structure is weak ( fig .[ fig:08]g - l ) .this is in contrast to the our previous findings when seeds are uniformly sampled ( fig .[ fig:05 ] and [ fig:06 ] ) .if one selects the seeds in the largest communities ( here defined as communities with more than 500 members ) , recruitment chains tend to stay within the largest communities , which leads to an under - estimation of the prevalence and relatively high biases if the infection is mostly prevalent in the small communities ( fig .[ fig:09]a - c ) .the prevalence is overestimated , however , if the infection is mostly prevalent in the largest communities ( fig .[ fig:09]d - f ) .results improve for weak community structure , but also in this case , biases and large standard deviations are observed for low response - rates ( fig .[ fig:09]g - l ) .note that in these experiments homophily is relatively weak since we use protocols sri and bri . in the previous sections ,we have studied the impact of various levels of community structure and number of triangles in rds estimates in contact networks generated using theoretical models .although the algorithms used to generate the synthetic networks include several properties of real - life networks , empirical networks , with their own sampling and scope limitations , contain correlations that may be challenging to reproduce theoretically . in this section, we analyze the rds performance using real - life human contact networks in order to be able to extend the conclusions to real scenarios .following the same protocols to infect preferentially the largest ( bri protocol ) or the smallest ( sri protocol ) communities , or the high degree nodes ( pri protocol ) , we find that in most studied networks , rds performs well to estimate the mean prevalence in these hypothetical scenarios ( although the standard deviations are relatively large ) , with a small variation for different response - rates ( fig . [ fig:10 ] ) .the estimates are worse for ema1 and enr datasets , respectively , the smallest and the largest networks ( see table [ tab:01 ] in section [ methods_a ] ) .we see that the average bias is larger than with a few exceptions .it is also typically larger for .the design effect is generally somewhere between 1 and 3 ( one exception for and ema2 ) , a result inline to previous suggestions that a design effect of may be used as a general guideline on unknown populations .( eq . [ eq:02 ] ) and the respective standard deviation , the average bias ( eq . [ eq:03 ] ) , and the design effect ( eq . [ eq:04 ] ) .recruitment is limited to participants . in all cases , of the populationis infected with in a - c , g - i ) the largest communities and in d - f , j - l ) the smallest communities ( see section [ methods_c ] ) . a - f ) networks with strong communities and g - l ) networks with weak communities . ]recruitment trees often break down after a few waves in real settings . 
as discussed above , this is not only a consequence of low response - rates but also the effect of multiple recruitment trees , originating from different seeds , bumping each other and then dying out .a practical solution to obtain sufficiently large sample sizes is to restart the recruitment with new seeds as soon as the recruitment stops . in this sectionwe test the effect on the estimators if we start a new seed after the previous recruitment chain has completely stopped , i.e. seeds start at different points in time .we assume that the recruitment chain stops either naturally or after reaching a certain size .in particular , we test the case when the target sample size is participants out of the population of people , using 10 seeds as done in the previous sections . each seed is allowed to recruit ( successfully ) at maximum 50 participants , and a new seed is only selected ( uniformly among non - recruited nodes ) when the current recruitment stops . as usual , the same person may participate only once .figure [ fig:11 ] shows that the average estimator is affected and the prevalence is under - estimated if is concentrated in the largest communities and over - estimated if is more likely in smaller communities .the standard deviations are relatively large in case of strong communities ( fig .[ fig:11]a , d ) and decreases , but still maintaining relatively large values , for weaker communities ( fig .[ fig:11]g , j ) .the average bias and design effect substantially change in comparison to the case when seeds are selected simultaneously , particularly for strong community structure .the restarting of seeds introduces mixing , or equivalently , random links , in the network structure .moreover , the restriction in the size of the recruitment trees possibly inhibits the sampling process to reach the stationary state , a factor known to cause biases in the estimators .the major consequence of these very large biases is that one is not sure that a single rds experiment ( as is usually the case in reality ) provides a reliable estimate .selecting exactly one new seed after the current seed has being exhausted is an hypothetical situation .this extreme case however illustrates that the non - simultaneous selecting of seeds may increase the biases substantially unless only a few re - starts occur .we expect that more realistic scenarios ( e.g. initially selecting multiple seeds simultaneously and eventually selecting a few new seeds if the original recruitment trees die out ) lie somewhere between this case and the simultaneous seed sampling studied in section [ results_b ] .respondent - driven sampling has been proposed as an effective methodology to estimate the prevalence of variables of interest in hard - to - reach populations .the approach exploits information on the social contacts for both recruitment and weighting in order to generate accurate estimates of the prevalence .social networks however are not random but contain patterns of connectivity that may constrain the cascade of sampling . in particular, nodes have a high heterogeneity in the number of contacts , and networks typically have many triangles and a community structure . 
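the sequential-restart protocol tested in this section differs from the sketch given earlier only in how seeds are introduced: a single seed is active at a time, it may recruit at most 50 participants, and a new seed is drawn uniformly from the not-yet-recruited nodes whenever the current chain stops. a minimal sketch of this variant, under the same simplifying assumptions as before, is given below.

```python
import random

def simulate_rds_with_restarts(G, target_sample=500, per_seed_cap=50,
                               coupons=3, response_rate=0.5, rng=random):
    """one seed runs at a time; when its chain dies out or hits per_seed_cap
    successful recruits, a fresh seed is drawn uniformly among nodes that
    have not yet participated, until target_sample is reached."""
    sampled = set()
    while len(sampled) < target_sample:
        candidates = [v for v in G.nodes() if v not in sampled]
        if not candidates:
            break
        seed = rng.choice(candidates)
        sampled.add(seed)
        frontier, recruited = [seed], 0
        while frontier and recruited < per_seed_cap and len(sampled) < target_sample:
            next_wave = []
            for v in frontier:
                neighbours = [u for u in G.neighbors(v) if u not in sampled]
                rng.shuffle(neighbours)
                for u in neighbours[:coupons]:
                    if recruited >= per_seed_cap or len(sampled) >= target_sample:
                        break
                    if u not in sampled and rng.random() < response_rate:
                        sampled.add(u)
                        next_wave.append(u)
                        recruited += 1
            frontier = next_wave
    return sampled
```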
in this paper, we have studied the bias induced by community structure and network triangles in the rds by using both synthetic and empirical network structures with various levels of clustering , size , degree heterogeneity , and so on .we have also analyzed the impact of various response - rates in the estimators and quantified the relative bias for combinations of parameters .altogether , we have identified that the structure of social networks have a relevant impact on rds leading to potential biases in the rds estimator .the estimator generally performs sufficiently well if response - rates are sufficiently high , the community structure is weak and the prevalence of the variable of interest is not much concentrated in some parts of the network ( low homophily ) .the high heterogeneity of the network communities implies that sampling chains may get constrained to certain parts of the network and thus the prevalence of the infection may be either under- or over - estimated depending on which part of the network concentrates more infections .some parts of the network may only be accessed through tight bottlenecks , i.e. key individuals that bridge the small well - hidden sub - groups and the rest of the population .if these bridging nodes are not willing to participate in the recruitment or once they are recruited , recruitment trees get trapped within a group of nodes , oversampling them , and generating biases .the structure of empirical networks may vary in different contexts .consequently , the expected biases may be also lower or higher for certain social networks .in particular , biases should increase for sparser networks because less paths are available between the nodes .in other words , there are more bridging nodes maintaining the network connected and thus the recruitment becomes more sensitive to lower response - rates .similarly , lower biases are expected in denser networks .the number of network communities and the distribution of community sizes may be also different than the ones we consider .many small communities have a significant effect in the sampling , increasing the biases , because they imply on the existence of many bridging nodes and higher chances to divert or break down the recruitment .we have also assumed that those people who choose to not participate in the first invitation may be invited again .this possibly introduces a positive correlation between chance to answer the survey and the degree of the node , i.e. a tendency to oversample high degree nodes not related to clustering or community structure .since this is not possible for response - rates and we generally observe relatively similar results for decreasing , if this effect occurs , it is only relevant for low response - rates .this may however explain why we generally observe a transition in the average bias at values above the critical response - rates ( the point where a significant number of individuals is recruited ) .on the other hand , the effect of clustering and communities should be even higher if we assume that people can not be invited more than once ( or equivalently , if someone refuses to participate the first time , it may refuse the following times as well ) since this is further blocking the access to certain parts of the network . to understand the effect of the participation probability , we may consider the simple case where a one single coupon is exchanged between individuals , and the sampling is done with replacement . 
in that case , the stochastic process is equivalent to a random walk process if .the probability of finding a coupon with person is driven by the rate equation where is the adjacency matrix of the social network . in the case of undirected and unweighted networks , where each link is reciprocated and carries the same importance, the element matrix of the matrix is equal to 1 if there is a link between and and zero otherwise .the study of this stochastic process has a long tradition in applied mathematics and statistical physics ( e.g. ) .relevant to our results , it is known that the system converges to equilibrium if the underlying network is connected . in this regime , nodes would be visited by coupons with a probability proportional to their degree and the whole network is explored , independently on the initial conditions .equilibrium is reached after a characteristic time scale defined as , where is the first non - zero eigenvalue of the laplacian matrix driving in eq .( [ ctrw ] ) .this time scale is associated to the presence of a bottleneck ( the bridging nodes ) between two strongly connected communities in the network . for times smaller than , the random walk has essentially explored almost uniformly one single community , but has not sufficiently explored the other one .this time scale therefore provides us with a way to estimate the minimal value of needed for the whole graph to be sampled , that is .the case of sampling with restart is related to the process of random walk with teleportation . in that case , the choice of the seed where to restart the process is known to affect the statistical properties of the sampling of the network .a future theoretical exercise is to adapt those ideas to this context in order to improve the rds estimators on situations where restarting is necessary .furthermore , using non - backtracking random walks may be a possible theoretical direction to model rds considering sampling without replacement .those random walks avoid to go back from where they come from , at the previous step , and they are known to explore the network faster . finally , the results of our numerical exercise suggest some general recommendations for studies in real settings : i. experimental researchers should be aware of the potential critical bridge nodes in the study population , which may vary according to the characteristics of the population ; ii .experimental researchers should aim to response - rates at least above 0.4 in order to reduce the associated biases and uncertainty of the estimates .this recommended response - rate may be increased if more coupons are used ; iii .attention should be taken on selecting the seeds as uniformly as possible , particularly aiming to avoid many seeds either in the small or in the large groups ( typically the most reachable individuals ) .the temptation to start all seeds within well - hidden groups may cause the recruitment to not move beyond these groups ; iv . 
restarting the seeds ( to get larger sample sizes ) during the ongoing recruitment should be generally avoided .a better strategy may be to either start the experiment with more seeds or to increase response - rates to avoid dropouts .lecr is a fnrs charg de recherches .lecr and aet thank vr for financial support .lecr and fl conceived the study .lecr performed the simulations and analyzed the results .lecr , aet , rl , fl wrote the manuscript .the authors declare that they have no competing financial interests .correspondence and requests for materials should be addressed to lecr .a. abdul - quader , d. heckathorn , c. mcknight , h. bramson , c. nemeth , k. sabin , k. gallagher , d. des jarlais effectiveness of respondent - driven sampling for recruiting drug users in new york city : findings from a pilot study . j. urban health 83 ( 3 ) 459 - 476 ( 2006 ) w. robinson , j. risser , s. mcgoy , a. becker , h. rehman , m. jefferson , v. griffin , m. wolverton , s. tortu recruiting injection drug users : a three - site comparison of results and experiences with respondent - driven and targeted sampling procedures j. urban health 83 29 - 38 ( 2006 ) c. mcknight , d. des jarlais , h. bramson , l. tower , a. s. abdul - quader , c. nemeth , d. heckathorn respondent - driven sampling in a study of drug users in new york city : notes from the field j. urban health 83 54 - 59 ( 2006 ) d. abramovitz , e. m. volz , s. a. strathdee , t. l. patterson , a. vera , s. d. frost , e. proyecto using - respondent - driven sampling in a hidden population at risk of hiv infection : who do hiv - positive recruiters recruit sex .26 ( 12 ) 750 - 756 ( 2009 ) m. y. iguchi , a. j. ober , s. h. berry , t. fain , d. d. heckathorn , p. m. gorbach , r. heimer , a. kozlov , l. j. ouellet , s. shoptaw , w. s. zule simultaneous recruitment of drug users and men who have sex with men in the united states and russia using respondent - driven sampling : sampling methods and implications j. urban health 86 ( 1 ) 5 - 13 ( 2009 ) l. f. costa , o. n. oliveira jr ., g. travieso , f. a. rodrigues , p. r. villas boas , l. antiqueira , m. p. viana , l. e. c. rocha analyzing and modeling real - world phenomena with complex networks : a survey of applications adv .60(3 ) ( 2011 ) r. d. burt , h. hagan , k. sabin , h. thiede evaluating respondent - driven sampling in a major metropolitan area : comparing injection drug users in the 2005 seattle area national hiv behavioral surveillance system survey with participants in the raven and kiwi studies ann epidemiol .20 ( 2 ) 159 - 67 ( 2010 ) n. mccreesh , l. g. johnston , a. copas , p. sonnenberg , j. seeley , r. j. hayes , s. d. w. frost , r. g. white evaluation of the role of location and distance in recruitment in respondent - driven sampling int . j. health geographics 10 ( 56 ) ( 2011 ) l. g. johnston , y .- h .chen , a. silva - santisteban , h. f. raymond an empirical examination of respondent driven sampling design effects among hiv risk groups from studies conducted around the world aids behav .17 ( 6 ) 2202 - 2210 ( 2013 ) m. malekinejad , l. g. johnston , c. kendall , l. kerr , m. r. rifkin , g. w. rutherford using respondent - driven sampling methodology for hiv biological and behavioral surveillance in international settings : a systematic review aids behav .12 ( 4 ) s105-s130 ( 2008 )
sampling hidden populations is particularly challenging using standard sampling methods mainly because of the lack of a sampling frame . respondent - driven sampling ( rds ) is an alternative methodology that exploits the social contacts between peers to reach and weight individuals in these hard - to - reach populations . it is a snowball sampling procedure where the weight of the respondents is adjusted for the likelihood of being sampled due to differences in the number of contacts . in rds , the structure of the social contacts thus defines the sampling process and affects its coverage , for instance by constraining the sampling within a sub - region of the network . in this paper we study the bias induced by network structures such as social triangles , community structure , and heterogeneities in the number of contacts , in the recruitment trees and in the rds estimator . we simulate different scenarios of network structures and response - rates to study the potential biases one may expect in real settings . we find that the prevalence of the estimated variable is associated with the size of the network community to which the individual belongs . furthermore , we observe that low - degree nodes may be under - sampled in certain situations if the sample and the network are of similar size . finally , we also show that low response - rates lead to reasonably accurate average estimates of the prevalence but generate relatively large biases .
we consider the inverse problem of reconstructing an inhomogenuous isotropic electrical conductivity in a domain , , from interior knowledge of the magnitude of one current density field and of corresponding boundary data .most of the existing results on this problem ( see a brief survey of previous work at the end of this introduction ) consider dirichlet boundary conditions . in this paperwe study boundary conditions which model what can actually be measured in practical experiments .we work with the beautiful complete electrode model ( cem ) originally introduced in and shown to best describe the physical data : for , let denote the surface electrode of constant impedance through which one injects a net current .the cem assumes the voltage potential inside and the constant voltages s on the surface of the electrodes distribute according to the boundary value problem is the outer unit normal .if a solution exists , an integration of over together with and show that is necessary .physically , the zero sum of the boundary currents account for the absence of sources / sinks of charges .the constants appearing in represent _ unknown _ voltages on the surface of the electrodes , and the difference from the traces of the interior voltage potential governs the flux of the current through the skin to the electrode .we refer to the problem , , , and as the _ forward problem_. under the assumptions that is a lipschitz domain , the conductivity is essentially bounded with real part bounded away from zero , the electrodes are ( relatively ) open connected subsets of whose closure are disjoint , the impedances have positive real part , and the injected currents satisfy , the forward problem has a unique solution up to an additive constant , as shown in .we normalize this constant by imposing the electrode voltages to lie in the hyperplane the net input currents , as in , generate a current density field , where is the solution of the forward problem . in this paper we consider the inverse problem of determining , given the magnitude of the current density field inside .the conductivity is _ unknown _ but assumed real valued and satisfying each electrode , is a known lipschitz domain subset of the boundary .the surface impedances are assumed real valued . 
in generalwe allow them to be inhomogenous functions on the electrodes satisfying further smoothness conditions will be assumed for some of the results in this paper .we note that , in practice , interior measurements of all three components of the current density can be obtained from three magnetic resonance scans involving two rotations of the object .however recent engineering advances in ultra - low field magnetic resonance may be used to recover without rotating the object .we hope that the results presented here may lead to further experimental progress on easier ways to measure directly just the magnitude of the current .we start by remarking that there is non - uniqueness in the inverse problem stated above , as can be seen in the following example : let be the unit square .we inject the current through the top electrode of impedance , `` extract '' the current through the bottom electrode of impedance , and measure the magnitude of the current density field in .then , for every \to [ \varphi(0),\varphi(1)] ] .we are now ready to prove our main uniqueness result .[ unique determination][unique_determination ] let be a -domain , .assume that the electrodes have lipschitz boundaries , and , .for the currents satisfying , let be the solutions of the forward problem , , and corresponding to unknown conductivities , which are assumed -smooth near the boundary and satisfying .if we will give the proof for the three dimensional case and indicate where arguments simplify in the two dimensional case . from corollary [ holderregularity ]we know that and are continuous up to the boundary . in particular, the identity in theorem [ main ] shows that on each electrode , . since on by hypothesis, we conclude that for each .so far we showed that and coincide on , and following proposition [ max ] , ] as being regular if the corresponding -level set is free of singular points . for -regular ,let be a connected component of the - level set .the arguments in ( * ? ? ?* theorem 1.3 ) showing that reaches do not use any boundary information , and thus they remain valid for the cem boundary conditions ; we recall them in appendix b for completeness .to prove unique determination , it now suffices to show that intersects .we reason by contradiction : assume that misses . then the intersection is entirely contained in a connected component . by hypothesis is simply connected . moreover , as transversal ( in fact orthogonal )intersection of -smooth surfaces , the set is a one dimensional immersed - submanifold without boundary , i.e. a closed curve ( in two dimensions it consists of two points ) .since has no singular points , the curve has no self - intersection and thus is a simple closed curve embedded in the simply connected subset . by the jordan curve theorem, separates in two parts , one of which , say , is enclosed by .let be the subset of whose boundary is , and define the new function that may have modified values at the boundary , but only off the electrodes . since is an extension domain ( has a unit normal everywhere )the new map and strictly decreases the functional unless .this contradicts the minimizing property of .therefore in , which now contradicts .therefore intersects , and thus .since the set is dense in , the identity follows . 
now yields that , a.e .in , and by continuity in .in this section we propose an iterative algorithm which minimizes the functional in .it is the analogue of an algorithm in adapted to the cem boundary conditions .the following lemma is key to constructing a minimizing sequence for the functional .[ decreasingg_a]assume that satisfies some , and let be the unique solution to the forward problem for . then moreover , if equality holds in then .let be arbitrary . since is a global minimizer of as in with as shown in theorem [ existence ], we have the inequality : \nonumber\\ & = \frac{1}{2}\int_\omega a|\nabla v|dx + f_{\frac{a}{|\nabla v|}}(v , v)\nonumber\\ & \geq\frac{1}{2}\int_\omega a|\nabla v|dx + f_{\frac{a}{|\nabla v|}}(u , u).\label{estim1}\end{aligned}\ ] ] writing ^{\frac{1}{2}}|\nabla v|\left[\frac{a}{|\nabla v|}\right]^{\frac{1}{2}}|\nabla u|dx\\ & \leq\left(\int_\omega \frac{a}{|\nabla v|}|\nabla v|^2dx\right)^{\frac{1}{2}}\left(\int_\omega\frac{a}{|\nabla v|}|\nabla u|^2dx\right)^{\frac{1}{2}}\\ & \leq\frac{1}{2}\int_\omega a|\nabla v|dx + \frac{1}{2 } \int_\omega \frac{a}{|\nabla v|}|\nabla u|^2dx,\end{aligned}\]]we also obtain \nonumber\\ & \leq\frac{1}{2}\int_\omega a|\nabla v|dx + \frac{1}{2 } \int_\omega \frac{a}{|\nabla v|}|\nabla u|^2dx + \frac{1}{2}\sum_{k=0}^n\left [ \int_{e_k } \frac{1}{z_k } ( u - u_k)^2d s- 2i_k u_k\right]\nonumber\\ & = \frac{1}{2}\int_\omega a|\nabla v|dx+ f_{\frac{a}{|\nabla v|}}(u , u).\label{estim2}\end{aligned}\ ] ] from and we conclude . moreover , if the equality holds in then equality holds in , and thus since is a solution to the forward problem ( for ) it is also a global minimizer of over . but shows that is also a global minimizer for . now the uniqueness of the global minimizers in theorem [ existence ] ( for ) yields .* algorithm : * we assume the magnitude of the current density satisfies let be the lower bound in , and a measure of error to be used in the stopping criteria . * step 1 : solve ( [ conductivity_eq ] , [ robin_kforward ] , [ inject_kforward ] ) and ( [ no_flux_off_electrodesforward ] ) for , and let be its unique solution. define * step 2 : for given : solve ( [ conductivity_eq ] , [ robin_kforward ] , [ inject_kforward ] ) and ( [ no_flux_off_electrodesforward ] ) for the unique solution ; * step 3 : if define repeat step 2 ; * else stop .we illustrate the theoretical results on a numerical simulation in two dimensions . given a current pattern and a set of surface electrodes with impedances ( taken to be constant)for satisfying , our iterative algorithm consists in solving the forward problem , , and for an updated conductivity at each step .a piecewise linear ( spline ) approximation of the solution to the forward problem is sought on an uniform triangulation of the unit box \times[0,1] ] ; see figure [ fig : torso_truth ] on the left .the values of the conductivity range from to .[ fig : torso_truth ] two currents are respectively injected / extracted through the electrodes \times [ 0,1 ] : ~y = 0 \right\ } \quad \mbox{and } \quad e_1 = \left\{(x , y)\in [ 0,1]\times[0,1]:~y = 1\right\}\ ] ] of equal impedances . 
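a minimal sketch of the alternating scheme in the algorithm above is given below. the forward solve is delegated to a hypothetical user-supplied routine `solve_forward(sigma)` (for example a finite-element solver of the cem problem) returning the interior potential and its gradient on the measurement grid, and the stopping rule used here (stagnation of the conductivity update) is a simplification of the functional-based criterion of the algorithm; both are assumptions of this sketch, not part of the original method.

```python
import numpy as np

def minimize_weighted_gradient(J_magnitude, solve_forward, sigma0,
                               tol=1e-6, max_iter=500):
    """iterate sigma_{n+1} = |J| / |grad u_n|, where u_n solves the cem
    forward problem with conductivity sigma_n.  solve_forward(sigma) is an
    assumed external solver returning (u, grad_u) on the same grid as
    J_magnitude; a small floor avoids division by vanishing gradients."""
    sigma = np.asarray(sigma0, dtype=float)
    for _ in range(max_iter):
        u, grad_u = solve_forward(sigma)
        grad_norm = np.maximum(np.linalg.norm(grad_u, axis=-1), 1e-12)
        sigma_new = J_magnitude / grad_norm
        if np.max(np.abs(sigma_new - sigma)) < tol:
            return sigma_new, u
        sigma = sigma_new
    return sigma, u
```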
for the given we solve the forward problem , , , for .the interior data of the magnitude of the current density field is defined by ; see figure [ fig : torso_truth ] on the right .knowing the injected currents and , the electrode impedances and , and the corresponding magnitude of the current density we find an approximate minimizer of via the iterative algorithm in section [ algorithm ] .the iterations start with the guess .an approximate solution is computed on a grid .the stopping criterion for this experiment used , and was attained with 320 iterations .an intermediate conductivity is computed using the computed minimizer , see figure [ fig : min_sigma ] .we note that in this example everywhere , thus all the level sets of are connected .it follows from the arguments in section 3 that there is a unique such that and the function can be determined from knowledge of on the curve , which connects the two electrodes .more precisely , for each point on the function maps the computed value of to the measured value of at the same point .figure [ fig : voltage_gamma ] on the left shows the resulting on the range .figure [ fig : voltage_gamma ] on the right shows , which is the scaling factor needed to obtain the true conductivity . in figure [ conductivities6.5 ]the reconstructed conductivity is shown on the right against the exact conductivity on the left .the error of the reconstruction is .we are grateful to the anonymous referees for their valuable comments . in particular one of the comments uncovered a gap in a previous version of the manuscript , and another comment suggested that our arguments would also work for the neumann problem ( see the remark at the end of section 3 ) .the work of a. tamasan has been supported by the nsf grant dms-1312883 , as was that of j. veras as part of his ph.d .research at the university of central florida .the work of a. nachman has been supported by the nserc discovery grant 250240 .in this appendix we show solvability of the forward problem for the complete electrode model of by recasting it into a minimization problem .while this approach is less general than the one given in ( we assume a real valued conductivity and positive electrode impedances ) , it explains how we are led to introduce the functional in the solution of the inverse problem . for existence and uniqueness of solutions of the forward problem , the conductivity and electrode impedances need not be smooth : and satisfy let be the space of functions which together with their gradients lie in , and be the hyperplane in .we seek weak solutions to , , , and in the hilbert space , endowed with the product and the induced norm we ll need the following variant of the poicar inequality , suitable for the complete electrode model . [ lemma_coercivity]let , , be an open , connected , bounded domain with lipschitz boundary , and be the hyperplane in . for ,let be disjoint subsets of the boundary of positive induced surface measure : .there exists a constant , dependent only on and the s , such that for all and all , we have we will show that we reason by contradiction : assume the infimum in is zero . 
without loss of generality ( else normalize to 1 ) , there exists a sequence in the unit sphere of , , and such that due to the compactness of the unit sphere in and of the weakly compactness of the unit sphere in it follows that there exists some with that , on a subsequence ( relabeled for simplicity ) , since the sequence is bounded in , the trace theorem implies that is ( uniformly in ) bounded in , hence also in , for each . using and in ,\end{aligned}\]]we obtain .since , we conclude that now using and and , since is connected , from and we conclude that restricts to the same constant on each electrode , and thus since , we must have and then , thus contradicting .[ quadraticproperties ] let , , and , be as in proposition [ lemma_coercivity ] . for , and , satisfying , and ,let us consider the quadratic functional defined by then \(i ) is strictly convex\(ii ) is gateaux differentiable in , and the derivative at in the direction is given by ( iii ) is coercive , more precisely , for some constant dependent on the lower bound in , and in .\(i ) the functional has two quadratic terms , each strictly convex , and one linear term , hence the sum is strictly convex .( ii ) the gateaux differentiability and the formula follow directly from the definition of .\(iii ) proposition [ lemma_coercivity ] above shows that by completing the square one obtains the proposition below revisits ( * ? ? ?* proposition 3.1 . ) and separates the role of the conservation of charge condition .this becomes important in our minimization approach , where we shall see that has a unique minimizer independently of the condition of being satisfied .however , it is only for currents satisfying , that the minimizer satisfies .this result does not use the reality of and of s . recall that the gateaux derivative of is given in .[ carification ] let , , , , , and be as in proposition [ quadraticproperties ] .\(i ) if is a weak solution to , , and , then holds and \(ii ) if satisfies , then it solves , and .in addition , if s satisfy , then also holds .\(i ) follows from a direct calculation and green s formula .\(ii ) assume that holds . by choosing arbitrary and inwe see that thus is a weak solution of . for each fixed keep as above , but now choose arbitrary with .a straightforward calculation starting from shows that since were arbitrary follows .now choose as above but arbitrary with for all .it follows from that since the trace of is arbitrary off the electrodes holds . finally , for an arbitrary choose with the trace on each , and off the electrodes . by using the already established relations , , and green s formula inwe obtain on the one hand , by introducing the notation with we just showed that .note that so far we have not used the conservation of charge condition .on the other hand , by using , , and in the divergence formula , we have yields . therefore , and holds .the following result establishes existence and uniqueness of the weak solution to the foward cem problem ; contrast with the proof of theorem 3.3 in .[ existence ] let , , , , for , and be as in proposition [ quadraticproperties ] . 
let be defined in .\(i ) then has a unique minimizer .if , in addition , the injected currents s satisfy the minimizer is the weak solution of the problem , , , and .\(ii ) if the problem , , , has a solution , then it is a minimizer of in the whole space and hence unique .moreover , the current s satisfy .\(i ) let and consider a minimizing sequence in , since we have .following , thus the minimizing sequence must be bounded , hence weakly compact . in particular , for a subsequence ( relabeled for simplicity ) there is some , such that on the other hand since is convex , and gateaux differentiable at in the direction , we have we take the limit as . the weak convergence in yields thus which shows that is a global minimizer .strict convexity of implies it is unique . at the minimum the euler - lagrange equations are satisfied .an application of proposition [ carification ] part ( ii ) shows that is a weak solution to the forward problem .\(ii ) proposition [ carification ] part ( i ) shows that solves the euler - lagrange equations , and due to the convexity it is a minimizer of . due to the strict convexity ofthe functional the minimizer is unique , hence the weak solution is unique .if , then interior elliptic regularity yields .the following result considers the regularity up to the boundary ; part a ) in the proposition below was already proved in ( * ? ? ?* remark 1 ) .we reproduce the proof for the reader s convenience .let be the union set of the electrodes .[ regularity ] let be a -domain , and let be -smooth near the boundary .assume that the electrodes has lipschitz boundary , and , .let be the solution to the forward problem .then \a ) , for all .\b ) for any , there is a neighborhood of , and a function for all , such that .\a ) since it follows from that .since , we have . by ( * ? ? ?* theorem 11.4 ) the extension by zero to the whole boundary yields , and thus .now apply the elliptic regularity for the neumann problem ( * ? ? ?* remark 7.2 ) to conclude , for all .\b ) let , and choose be sufficiently small so that is -smooth in and .let .we define where is the cutoff function with , if , and if .then , by part a ) we have all .also by part a ) we have that the trace . now , by , and thus , now apply the elliptic regularity ( * ? ? ?* theorem 7.4 , remark 7.2 ) to conclude . if , the same proof holds if we choose such that .let be one of the values for which the level set is a - smooth hypersurface ( which is the case for a.e . ) , and be one of its connected components .we show here that .the arguments in the proof of ( * ? ? ?* theorem 1.3 ) use only the interior points of , and thus apply to the cem as well .we include them here for the convenience of the reader .arguing by contradiction , assume that . then is a compact manifold with two connected components . using the alexander duality theorem in algebraic topology for ( see , e.g. theorem 27.10 in , ) we have that is partitioned into three open connected components : .since we have and then for .we claim that at least one of the or is in .assume not , i.e. for each , . since is connected ( by assumption ), we have that is connected which implies is also connected . by applying once againalexander s duality theorem for , we have that has exactly two open connected components , one of which is unbounded : . since is connected and unbounded , we have , which leaves .this is impossible since is open and is a hypersurface .therefore either or or both has the boundary in . to fix ideas ,consider . 
if this were the case, then we claim that in .indeed , since is an extension domain ( has a unit normal everywhere ) the new map defined by is in and decreases the functional , thus contradicting the minimizing property of .therefore in , which makes in .again we reach a contradiction since the set of critical points of is negligible .
we consider the inverse problem of recovering an isotropic electrical conductivity from interior knowledge of the magnitude of one current density field generated by applying current on a set of electrodes . the required interior data can be obtained by means of mri measurements . on the boundary we only require knowledge of the electrodes , their impedances , and the corresponding average input currents . from the mathematical point of view , this practical question leads us to consider a new weighted minimum gradient problem for functions satisfying the boundary conditions coming from the complete electrode model ( cem ) of somersalo , cheney and isaacson . we show that this variational problem has non - unique solutions . the surprising discovery is that the physical data is still sufficient to determine the geometry of ( the connected components of ) the level sets of the minimizers . we thus obtain an interesting phase retrieval result : knowledge of the input current at the boundary allows determination of the full current vector field from its magnitude . we characterize locally the non - uniqueness in the variational problem . in two and three dimensions we also show that additional measurements of the voltage potential along a curve joining the electrodes yield unique determination of the conductivity . the proofs involve a maximum principle and a new regularity up to the boundary result for the cem boundary conditions . a nonlinear algorithm is proposed and implemented to illustrate the theoretical results .
keywords : minimum gradient , conductivity imaging , complete electrode model , current density impedance imaging , minimal surfaces , magnetic resonance electrical impedance tomography
msc : 35r30 , 35j60 , 31a25 , 62p10
stability is ubiquitous in biology , ranging from physicochemical homeostasis in cellular microenvironments to ecological constancy and resilience .it is noteworthy that not only can the stability phenomenon arise in normal living systems , but it can also happen in abnormal organisms such as cancer . as a large family of diseases with abnormal cell growth, cancer is generally acknowledged to be the malignant progression along with a series of stability - breaking changes ( _ e.g. _ genomic instability ) within the normal organisms .however , some recent researches reveal the other side of cancer .an interesting _ phenotypic equilibrium _ was reported in some cancers .that is , the population composed of different cancer cells will tend to a fixed equilibrium of phenotypic proportions over time regardless of initial states ( fig .these findings provided new insights to the research of cancer heterogeneity .the experimental works also stimulate theoreticians to put forward reasonable models for interpreting the phenotypic equilibrium .in particular , it was reported that the intrinsic interconversion between different cellular phenotypes , also called _ phenotypic plasticity _ , could play a crucial role in stabilizing the mixture of phenotypic proportions in cancer . as a pioneering work , gupta _ et al _ proposed a discrete - time markov chain model to describe the phenotypic transitions in breast cancer cell lines . in their model , three phenotypeswere identified : stem - like cells ( s ) , basal cells ( b ) and luminal cells ( l ) .the phenotypic transitions among them can be captured by the transition probability matrix as follows : where represents the probability of the transition from phenotype to . according to the limiting theory of discrete - time finite - state markov chain , there exists unique equilibrium distribution such that , provided is irreducible and aperiodic . the markov chain will converge to regardless of where it begins . by fitting the markov chain model to their experimental data , the equilibrium proportions of stem - like ,basal and luminal cells were predicted by the equilibrium distribution respectively . even though the markov chain model fitted the experimental results in breast cancer cell lines very well , zapperi and la porta questioned the validity of the phenotypic transitions and gave an alternative explanation to the phenotypic equilibrium , which was based on the conventional cancer stem cell ( csc ) model with imperfect csc biomarkers .moreover , liu _et al _ showed that the negative feedback mechanisms of non - linear growth kinetics of cancer cells can also control the balance between different cellular phenotypes .these works suggested that the phenotypic plasticity may not be the only explanation to the phenotypic equilibrium . to further reveal the mechanisms giving rise to the phenotypic equilibrium , it is more convincing to study the models integrating the phenotypic plasticity with the other conventional cellular processes of cancer .motivated by this , a series of works discussed the phenotypic equilibrium by establishing the models coordinating with both hierarchical cancer stem cell paradigm and phenotypic plasticity . 
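the equilibrium distribution invoked above is easy to compute numerically once a transition matrix is specified; the sketch below extracts it as the left eigenvector of the transition matrix for eigenvalue 1. the matrix shown is purely hypothetical (the fitted probabilities of the original study are not reproduced in this text) and only serves to illustrate the computation.

```python
import numpy as np

def stationary_distribution(P):
    """equilibrium row vector pi with pi @ P = pi and sum(pi) = 1,
    assuming P is an irreducible, aperiodic stochastic matrix."""
    eigvals, eigvecs = np.linalg.eig(P.T)
    k = np.argmin(np.abs(eigvals - 1.0))
    pi = np.real(eigvecs[:, k])
    return pi / pi.sum()

# hypothetical transition probabilities among (S, B, L), rows summing to 1
P = np.array([[0.60, 0.25, 0.15],
              [0.05, 0.90, 0.05],
              [0.02, 0.08, 0.90]])
print(stationary_distribution(P))  # proportions the chain converges to
```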
in these works ,the phenotypic equilibria were intimately related to the stable steady - state behavior of the corresponding ordinary differential equations ( odes ) models .in other words , if one can model the dynamics of the phenotypic proportions as the following system of odes the unique stable fixed point ( if exists ) corresponds to the equilibrium proportions .the aforementioned works have showed that the phenotypic equilibrium can be explained by different concepts of stabilities in different models .thus a natural question is whether there exists a unified framework to harmonize the equilibrium distribution of the markov chain model and the stable steady - state behavior of the odes model . in this study, we try to address this issue by establishing a multi - phenotype branching ( mpb ) model .on one hand , our model integrates the phenotypic plasticity with the cellular processes ( such as cell divisions ) that have extensively been studied in cancer biology . on the other hand ,the model is stochastic and closer to the reality with finite population size .based on this model , it is shown that the odes model can be derived by taking the expectation of our model .more specifically , the odes model is just the _ proportion equation _ of the mpb model . besides, the markov chain model is also shown to be closely related to our model .that is , the kolmogorov forward equation of the continuous - time markov chain model is a special case of the proportion equation provided that the division rates of stem - like , basal and luminal cells are the same . interestingly , ``same doubling time '' of the three phenotypes was just observed in gupta _et al _ s experiment when they used the markov chain model to explain the phenotypic equilibrium , which is in line with our theoretical prediction .moreover , our result also shows that one should be more cautious about the application of the markov chain in modeling cell - state dynamics in larger time scales , since the markov chain model takes no account of different capabilities of divisions by cancer stem cells and non - stem cancer cells .more importantly , by showing _ almost sure convergence _ of the mpb model , the stationarity of the markov chain model and the stability of the odes model can be unified as the average - level stability of our model .note that the almost sure convergence indicates the path - wise stability of stochastic samples , providing a more profound explanation to the phenotypic equilibrium .in other words , the phenotypic equilibrium is actually rooted in the stochastic nature of ( almost ) every path sample ; the average - level stability just follows from it by averaging all the stochastic samples .furthermore , it is also shown that , not only can the model with phenotypic plasticity give rise to the path - wise convergence , but the conventional cancer stem cell model without phenotypic plasticity can also lead to the convergence under certain conditions .this echoes the works that the phenotypic plasticity is not the only explanation to the phenotypic equilibrium .the paper is organized as follows .the model is presented in section 2 .main results are shown in section 3 .conclusions are in section 4 .in this section we give the assumptions of our model . 
consider a population composed of different cancer cell phenotypes .for pure theoretical investigations , the number of the phenotypes can be any in general .however , to better illustrate our theoretical results on the basis of more concrete biological background , enlightened by , we here focus on the specific model consisting of three phenotypes : stem - like cells ( s ) , basal cells ( b ) and luminal cells ( l ) .the main assumptions are listed as follow : + _ 1_. stem - like cells can perform three types of divisions : symmetric division , symmetric differentiation and asymmetric division .that is , a stem - like cell can divide into two identical stem - like cells ( symmetric division ) or two identical differentiated cancer cells ( symmetric differentiation ; it can also divide into a stem - like cell and a differentiated cancer cell ( asymmetric division ) .* symmetric division : s s+s ; * symmetric differentiation : s b+b or s l+l ; * asymmetric division : s s+b or s s+l . is the division rate ( or termed synthesis rate ) , with the meaning that a stem - like cell will wait an exponential time with expectation and then perform one particular type of division with probability ( note that ) .suppose the waiting time and the division strategy are independent to each other , then the product of and governs the reaction rate of the corresponding division type .2_. for non - stem cancer cells , _ i.e. _ basal or luminal cells , we assume that not only can they undergo symmetric divisions with limited times , but they can also perform phenotypic conversions . to illustrate this ,let us take b phenotype as an example .suppose a newly - born b cell can divide at most times .if we denote as the b cell that has already divided times , then we have the following hierarchical structure : * + ; * ... * + ; * . is the division rate , and is the death rate of .moreover , assume that a b cell can convert into an s cell ( termed _ de - differentiation _ ) by phenotypic plasticity . let the dedifferentiation rate of be , then we have * s ; * ... * s. for simplicity , it is often assumed that , denoted as for short .meanwhile , note that a b cell can also convert into an l cell .since the biological mechanisms of the phenotypic conversions between different non - stem cancer cells are still poorly understood , for simplicity it is assumed that the phenotypic transitions between b and l can only happen in the same hierarchical level . that is , supposing that a newly - born l cell can also divide at most times, is the l cell that has already divided times , then we have * ; is the transition rate . in fact , this assumption implies with constant rate overall , which is in line with the assumption in . for luminal cells , similarly ,their cellular processes are shown as follows : * + ( ) ; * . * s ( ) ; * ( ) . based on the cellular processes listed in the last subsection , we can model this cellular system as a continuous - time markov process in the discrete state space of cell numbers ( chapter 11 in ) .if we let be the cell number of s phenotype , be the vector representing the cell numbers of cells , and be the vector representing the cell numbers of cells , then the dynamics of can be modeled as a multi - phenotype branching process .if we define be the probability of at time , according to the theory of _ chemical master equation _( cme ) , the rate of change of is equal to the transitions into minus the transitions out of it , _i.e. 
_ where is the transition rate from to and is the transition rate from to ( see [ appendix1 ] for more details ) .in next section we will show that the odes model and the markov chain model can be derived from our model . for convenience we term our multi - phenotype branching modelthe _ mpb model_.to relate our mpb model to the odes model , we consider the mean dynamics of the mpb model by averaging all the stochastic samples of it .let be the expectation of , that is , for each component we define we multiply on the both sides of eq .( [ cme ] ) , and then calculate the summation over all for s cells : for b cells : for l cells : then it is not difficult to see that the dynamics of can be captured by a system of linear odes , where {(2m+3)\times(2m+3)}=\left(\begin{smallmatrix } \alpha_s\left(p_1-p_2-p_3\right ) & \beta_{b } & \cdots & \beta_{b } & \beta_{l } & \cdots & \beta_{l } \\ \alpha_s\left(2p_2+p_4\right ) & -\left(\alpha_{b}+\beta_{b}+\gamma_{b}\right ) & 0 & \cdots & \gamma_{l } & \cdots & 0 \\ 0 & 2\alpha_{b } & -\left(\alpha_{b}+\beta_{b}+\gamma_{b}\right ) & 0 & \cdots & \cdots & 0 \\ \cdots & \cdots & \cdots & \cdots & \cdots & \cdots & \cdots\\ \end{smallmatrix}\right ) .\label{matrix2}\ ] ] furthermore , it should be noted that , eq . ( [ ode1 ] ) describes the cell number dynamics of each phenotype at each hierarchical level .if we denote and as the total cell numbers of b and l phenotypes respectively , then it is often the dynamics of that interests people .that is , we can see that eq .( [ odex ] ) is not linear of , which also depends on and separately .technically this is due to the limited capability of divisions of b and l phenotypes . in the limit of , orwhen is relatively large in comparison to observational time scales ( _ e.g. _ ) , eq .( [ odex ] ) can approximately be expressed as a linear system of : where {3\times3}=\left(\begin{smallmatrix } \alpha_s\left(p_1-p_2-p_3\right ) & \beta_{b } & \beta_{l}\\ \alpha_s\left(2p_2+p_4\right ) & \alpha_b-\beta_b-\gamma_b & \gamma_{l}\\ \alpha_s\left(2p_3+p_5\right ) & \gamma_{b } & \alpha_l-\beta_l-\gamma_l \end{smallmatrix}\right ) . \label{matrixx}\ ] ] in this way the model reduces to the three - phenotypic model investigated in . however , eq . ( [ odex ] )should be adopted for describing larger time scales ( _ e.g. _ ) .note that it is inconvenient to analyze eq .( [ odex ] ) directly , we will show later that analyzing eq .( [ ode1 ] ) is quite helpful for the understanding of eq .( [ odex ] ) , especially in the study of the phenotypic equilibrium . since eq .( [ ode1 ] ) describes the dynamics of the absolute numbers of different cellular phenotypes , we term it the _ number equation_. however , to investigate the phenotypic equilibrium , we are more concerned about the dynamics of the relative numbers ( _ i.e. _ proportions ) of different cellular phenotypes .let be the vector representing the proportions of different cellular phenotypes . by replacing in eq .( [ ode1 ] ) with , we have the equation governing the phenotypic proportions as follows ( see [ appendix2 ] ) where . we term eq .( [ ode2 ] ) the _ proportion equation_. it is noteworthy that the stable steady - state behavior of eq .( [ ode2 ] ) just corresponds to the phenotypic equilibrium investigated in .the proportion equation thus connects the mpb model and the odes model in previous literature , implying that the odes model can be seen as the average - level counterpart of the stochastic mpb model . 
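as a quick numerical illustration of the proportion equation: for a linear number equation dN/dt = A N, the proportions x = N / (1·N) obey dx/dt = A x - (1·A x) x, and the sketch below integrates this for the reduced three-phenotype system of eq. ( [ odex ] ). the rate matrix used is hypothetical and only mimics the sign structure of eq. ( [ matrixx ] ); different initial proportions can be checked to converge to the same fixed point.

```python
import numpy as np
from scipy.integrate import solve_ivp

# hypothetical 3x3 rate matrix for (S, B, L); entries are illustrative only
A = np.array([[-0.10, 0.30, 0.20],
              [ 0.40, 0.10, 0.10],
              [ 0.30, 0.10, 0.00]])

def proportion_rhs(t, x):
    """proportion equation: dx/dt = A x - (sum of A x) * x."""
    Ax = A @ x
    return Ax - Ax.sum() * x

for x0 in ([0.8, 0.1, 0.1], [0.1, 0.1, 0.8]):
    sol = solve_ivp(proportion_rhs, (0.0, 100.0), x0, rtol=1e-8)
    print(sol.y[:, -1])   # both runs approach the same stable fixed point
```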
to show the stability of eq .( [ ode2 ] ) , we have the following theorem ( see [ appendix2 + ] for the proof ) : there exists unique positive stable fixed point in eq .( [ ode2 ] ) provided that is irreducible . ][ thm1 ] theorem [ thm1 ] shows that the deterministic population dynamics of cancer cells will tend to an equilibrium mixture of phenotypic proportions as time passes .besides , let be the proportion vector of , _ given ( theorem [ thm1 ] ) , thus we have the following result for : under the same condition in theorem [ thm1 ] , will tend to a fixed positive vector as .[ cor1 ] corollary [ cor1 ] indicates the phenotypic equilibrium of the three - phenotypic model in eq .( [ odex ] ) .moreover , it should be pointed out that , the results in theorem [ thm1 ] and corollary [ cor1 ] can be seen as the average - level stabilities following from the the path - wise convergence of the mpb model , which will be discussed in sec . [ sectioniii ] . note that the markov chain model eq .( [ matrix1 ] ) is discrete - time and the mpb model is continuous - time ; to compare the two models in the same time scale , we turn our attention from discrete - time markov chain to continuous - time markov chain .consider the standard model of continuous - time markov chain .that is , let be the probability of the markov chain being in state at time , its dynamics can be captured by the kolmogorov forward equation : where -matrix {3\times3} ] is larger than 0 . ] * when is irreducible , from theorem 2.6 in , there exists a perron - frobenius eigenvalue satisfying that _ 1 ) _ is real and for any eigenvalue ; _ 2 ) _ is simple , _ i.e. _ a simple root of the characteristic equation of ; _ 3 ) _ is associated with ( up to constant multiples ) unique positive right eigenvector .here we assume that is the normalized right eigenvector of , that is , . since is simple , the solution of eq .( [ ode1 ] ) can be expressed as where are the different eigenvalues of , is the algebraic multiplicity of , is the corresponding eigenvector of , is determined by initial states .suppose , since re , thus before completing the proof , we need to discuss the case . in this case , the above argument does not work .however , since fluctuations are inevitable in real world , will hardly happen in reality . to show this ,let in eq .( [ solution ] ) this is a linear equation of . by cramer s rule we have where $ ] , is just with its first column replaced by .it is easy to add a small perturbation to , so that all the columns of are linear independent , hence holds .the proofs of theorems [ thm2 ] and [ thm3 ] are both on the basis of the following lemma : [ theorem 5 in ] assume that the perron - frobenius eigenvalue of in eq .( [ matrix2 ] ) is simple and positive .conditioned on essential non - extinction , will tend to almost surely as . is the normalized right eigenvector of , which is non - negative .[ lemma ] for proving theorems [ thm2 ] and [ thm3 ] , firstly we need to explain the concept of essential non - extinction .we are not going to discuss the general mathematical definition of it ( see sec .4.2 in ) . in our mpb model ,essential non - extinction specifically means non - extinction of the phenotype corresponding to the perron - frobenius eigenvalue . for theorem [ thm3 ], the assumptions ( 2 ) and ( 3 ) implies that the perron - frobenius eigenvalue of is ( we will show this later ) . in other words ,the essential non - extinction here just means non - extinction of stem - like phenotype . 
for theorem [ thm2 ] , since is irreducible , it is possible for any two phenotypes to ( directly or indirectly ) inter - convert into each other . in this case ,non - extinction of one particular phenotype is equivalent to non - extinction of any phenotype .this implies that , no matter which phenotype corresponds to , to guarantee essential non - extinction , it is sufficient to assume non - extinction of the population in general .therefore , the conditions provided in theorems [ thm2 ] or [ thm3 ] ensure the essential non - extinction of the model .we now start to prove the two theorems . on one hand, we need to show that of is simple and positive in both theorems . on the other hand ,since lemma [ lemma ] only concludes the non - negativity of , we need to further show the positivity of .the proof for theorem [ thm2 ] is straightforward , since we assume that is irreducible , according to the second remark in [ appendix2 + ] , is simple and is positive .note that is also assumed positive , by lemma [ lemma ] we have almost surely as . for theorem [ thm3 ] , according to the assumptions , reduces to a lower triangular matrix as follows =\left(\begin{smallmatrix } \alpha_s\left(p_1-p_2-p_3\right ) & 0 & \cdots & \cdots & \cdots & \cdots & 0 \\ \alpha_s\left(2p_2+p_4\right ) & -\alpha_{b } & 0 & \cdots & \cdots & \cdots & 0\\ 0 & 2\alpha_{b } & -\alpha_{b } & 0 & \cdots & \cdots & 0\\ \cdots & \cdots & \cdots & \cdots & \cdots & \cdots & \cdots\\ \alpha_s\left(2p_3+p_5\right ) & & & 0 & -\alpha_l & \cdots & 0\\ 0 & \cdots & \cdots & \cdots & 2\alpha_l & -\alpha_l & \cdots \\ \cdots & \cdots & \cdots & \cdots & \cdots & \cdots & \cdots\\ \end{smallmatrix}\right ) .\label{matrix3}\ ] ] it is easy to know that the eigenvalues of correspond to the diagonal elements . by assumptions ( 2 ) and ( 3 ) , is the perron - frobenious eigenvalue which is positive and simple . by lemma [ lemma ], we have almost surely as , where is the normalized right eigenvector of .to complete the proof , we need to show that is positive .note that satisfies the following equation by expanding this equation , we have suppose , then we have since and . with the same logic, we can show the positivity of recursively , which completes the final proof .10 url # 1`#1`urlprefixhref # 1#2#2 # 1#1 c. chaffer , i. brueckmann , c. scheel , a. kaestli , p. wiggins , l. rodrigues , m. brooks , f. reinhardt , y. su , k. polyak , et al ., normal and neoplastic nonstem cells can spontaneously convert to a stem - like state , proc .usa 108 ( 19 ) ( 2011 ) 79507955 .p. gupta , c. fillmore , g. jiang , s. shapira , k. tao , c. kuperwasser , e. lander , stochastic state transitions give rise to phenotypic equilibrium in populations of cancer cells , cell 146 ( 4 ) ( 2011 ) 633644 .g. yang , y. quan , w. wang , q. fu , j. wu , t. mei , j. li , y. tang , c. luo , q. ouyang , et al . , dynamic equilibrium between cancer stem cells and non - stem cancer cells in human sw620 and mcf-7 cancer cell populations , br . j. cancer 106 ( 9 ) ( 2012 ) 15121519 .w. wang , y. quan , q. fu , y. liu , y. liang , j. wu , g. yang , c. luo , q. ouyang , y. wang , dynamics between cancer cell subpopulations reveals a model coordinating with both hierarchical and stochastic concepts , plos one 9 ( 1 ) ( 2014 ) e84654 .x. liu , s. johnson , s. liu , d. kanojia , w. yue , u. p. singh , q. wang , q. wang , q. nie , h. chen , nonlinear growth kinetics of breast cancer stem cells : implications for cancer stem cell targeted therapy , sci .rep . 
3 ( 2013 ) 2473 .m. k. jolly , b. huang , m. lu , s. a. mani , h. levine , e. ben - jacob , towards elucidating the connection between epithelial mesenchymal transitions and stemness , j. r. soc .interface 11 ( 101 ) ( 2014 ) 20140962 . c. d. may , n. sphyris , k. w. evans , s. j. werden , w. guo , s. a. mani , epithelial - mesenchymal transition and cancer stem cells : a dangerously dynamic duo in breast cancer progression , breast cancer res 13 ( 1 ) ( 2011 ) 202 .
the phenotypic equilibrium , i.e. a heterogeneous population of cancer cells tending to a fixed equilibrium of phenotypic proportions , has received much attention in cancer biology very recently . in previous literature , various theoretical models were used to predict the experimentally observed phenotypic equilibrium , which was often explained through different concepts of stability of those models . here we present a stochastic multi - phenotype branching model that integrates the conventional cellular hierarchy with phenotypic plasticity mechanisms of cancer cells . based on our model , it is shown that : _ ( i ) _ our model can serve as a framework unifying the previous models for the phenotypic equilibrium , and thereby harmonizes the different kinds of average - level stability proposed in these models ; and _ ( ii ) _ path - wise convergence of our model provides a deeper understanding of the phenotypic equilibrium from a stochastic point of view . that is , the emergence of the phenotypic equilibrium is rooted in the stochastic nature of ( almost ) every sample path , and the average - level stability simply follows from it by averaging the stochastic samples .
1 . school of mathematical sciences , xiamen university , xiamen 361005 , p.r . china + ( author , zhouda.edu.cn )
2 . school of mathematics and statistics , central south university , changsha 410083 , p.r . china
3 . department of applied mathematics , university of washington , seattle , wa 98195 , usa
recent progress in the ability to manipulate single biomolecules such as double stranded dna and protein filaments , prompted the development of continuum models of complex polymers capable of describing bending fluctuations and finite extensibility under tension ( extensions of the worm - like chain model ) as well as models that can describe twist rigidity , spontaneous twist and chiral response to torque ( the ribbon model ) .these models are defined by their elastic energy functions which are then used to generate the equilibrium ensemble of polymer conformations , based on the conventional gibbs distribution approach ( i.e. , weighting the conformations by an appropriate boltzmann factor ) .while this approach proved to be quite successful for open polymers , it could not be applied to generate the conformations of polymer loops and to study properties of circular double stranded dna such as supercoiling and formation of knots . in order to cope with the latter problem, we developed a purely mathematical procedure of generating closed curves , based on the expansion of the components of the polymer conformation vector ( is some parametrization of the contour of the loop ) in finite fourier series and taking the corresponding fourier coefficients from some random distribution .we found that this distribution of fourier knots could be fine tuned to mimic some of the large scale properties of closed gaussian loops and small scale properties of worm - like chain , this could not be achieved using a single persistence length .the present deals attempts to establish a common framework for the discussion of the above physical and mathematical models of polymers , by classifying them according to their smoothness . a continuous curve ( is the contour parameter which measures the distance along the contour )is defined as an -smooth curve if its derivative is a continuous function of . in section [ sec : frenet ] we introduce the fundamental equations of differential geometry of space curves and of stripes and show that these equations describe space curves that are at least -smooth and stripes that are at least -smooth .this is equivalent to saying that while such space curves must have finite torsion , stripes must have finite rate of twist but their torsion is free to take any value along their contour . in section [ sec : physicalmodel ]we show that worm - like chains belong to the class of freely rotating models , with uniformly distributed dihedral angles , divergent torsion and a normal whose direction jumps discontinuously as one moves along the contour of the chain and , therefore , such objects can not be described by the frenet - serret ( fs ) equation .we also show that because the energy of a ribbon depends on its twist , the typical conformations of ribbons have finite rate of twist but their torsion diverges at many points along the contour . 
in section [ sec : the models ]we compare the ensembles of -smooth ( ribbons ) and -smooth ( fourier knots ) curves and show that typical realizations of the latter ( but not the former ) ensemble , have finite torsion and a smoothly varying normal and can be described by the fs equation .we also calculate the distributions of spatial distances between two points on the contour of the curve in the above ensembles and find that these distributions differ significantly only for distances of the order of persistence length .finally , in section [ sec : discussion ] we discuss our results and conclude that unlike worm - like chains and ribbons which possess no torsional rigidity , the ensemble of curves generated by the fourier knot algorithm can be characterized by finite torsional persistence length .it is often convenient to represent a space curve defined in a space - fixed coordinate frame by intrinsic coordinates , as follows . at every point along the curve ( )one constructs a set of three orthogonal unit vectors known as the frenet frame : the tangent defined as , the normal which points in the direction of and the binormal .the rotation of the frenet frame as one moves along the contour of the curve is described by the frenet - serret ( fs ) equation : where is the curvature and is the torsion ( in general , both are functions of ) .the condition of validity of the above equation is that as everywhere along the curve .it is straightforward to show that and we conclude that since the condition of validity of the fs equation is that the torsion is finite everywhere along the curve , the curve should be at least -smooth . given the curvature and the torsion at each point along the curve , one can solve eq .( [ eq : frenet equation ] ) , calculate the tangent and integrate it to construct the parametric representation of the space curve , .the above construction is unique in the sense that any pair of functions and defined on the interval , can be uniquely mapped to a space curve of length .the simplest examples are ( a ) which yields a planar circle of radius and ( b ) which corresponds to a helix .we now turn to consider stripes of length , width ( such that ) and thickness . unlike a space curve which is uniquely defined by the tangent vector ( the normal and the binormal are auxiliary constructs , needed only to calculate the tangent , given the curvature and the torsion ) ,a stripe is a slice of a plane with which one can associate two orthogonal unit vectors ( in plane ) and ( normal to the plane ) . the spatial configuration of the stripe is thus defined by the local orientation ( at each point on the centerline ) of the orthogonal triad known as the darboux frame which specifies the directions of the two axes and and that of the tangent to the centerline that runs along the long axis of the stripe , . 
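as a concrete illustration of the curve construction described above ( solving the fs equation for given curvature and torsion and integrating the tangent ) , the following sketch , with arbitrary constant values of the curvature and torsion , checks numerically that the result is the helix mentioned in example ( b ) .

```python
# a minimal sketch: given kappa(s) and tau(s), integrate the frenet-serret
# system for the frame (t, n, b) together with dr/ds = t, then verify that
# constant kappa and tau produce a helix.  the values below are illustrative.
import numpy as np
from scipy.integrate import solve_ivp

kappa, tau = 1.0, 0.5

def frenet_rhs(s, y):
    r, t, n, b = y[0:3], y[3:6], y[6:9], y[9:12]
    drds = t
    dtds = kappa * n
    dnds = -kappa * t + tau * b
    dbds = -tau * n
    return np.concatenate([drds, dtds, dnds, dbds])

y0 = np.concatenate([np.zeros(3),          # r(0)
                     [1.0, 0.0, 0.0],      # t(0)
                     [0.0, 1.0, 0.0],      # n(0)
                     [0.0, 0.0, 1.0]])     # b(0)
sol = solve_ivp(frenet_rhs, (0.0, 20.0), y0, max_step=0.01)
r = sol.y[0:3].T

# for constant kappa, tau the curve is a helix of radius kappa/(kappa^2 + tau^2)
# whose axis points along the (constant) darboux vector tau*t + kappa*b
R = kappa / (kappa**2 + tau**2)
u = tau * y0[3:6] + kappa * y0[9:12]
u /= np.linalg.norm(u)
p0 = y0[0:3] + R * y0[6:9]                 # a point on the helix axis
d = r - p0
dist = np.linalg.norm(d - np.outer(d @ u, u), axis=1)
print("distance from axis: %.4f .. %.4f (expected %.4f)" % (dist.min(), dist.max(), R))
```

setting the torsion to zero in the same sketch reproduces the planar circle of example ( a ) .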
while a space curve is completely defined by the two functions and , a stripe is represented by three generalized curvatures ( ) that determine the unit vectors via the generalized frenet - serret equation , this can be written compactly as where ( is the levi - civita tensor ) .inspection of eq .( [ eq : ribbon equation ] ) shows that is the infinitesimal angle of rotation about the direction and the condition of validity of the generalized fs equation is that this angle vanishes in the limit .however , unlike the torsion which is completely determined by the space curve and can be expressed in terms of its first three derivatives , the rate of twist is the local rate ( per unit length ) of rotation about the tangent to this curve and , as such , it depends only on the orientation of and and can not be expressed in terms of the centerline and its derivatives !we conclude that in order for eq .( [ eq : ribbon equation ] ) to hold , the centerline of the stripe should be represented by a-smooth curve .since both couples of unit vectors and lie in the plane perpendicular to the local tangent , the triads and are connected by where the matrix generates a rotation by an angle about the axis .assuming that the centerline of the stripe can be represented by the fs equation ( i.e. , that it is an -smooth curve ) , one can express the generalized fs equation in terms of the curvature , torsion and the angle between the directions of the binormal and the axis : comparing ( [ eq : ribbequ ] ) with ( [ eq : ribbon equation ] ) yields : conversely , however , since the centerline of the stripe is required to be only -smooth , these relations do not hold in general !physical models of polymers are based on the choice of a geometrical model and an energy functional .the simplest and the most prevalent model is that of a continuous gaussian random walk , with which one can associate a free energy that describes the entropic cost of stretching the polymer chain , here is the monomer ( cutoff ) length , is the boltzmann constant and is the temperature . notice that this free energy is expressed only in terms of the first derivative of the trajectory , and , therefore , space curves that describe polymer conformations in the continuous gaussian random walk model have to be only -smooth ( only the curve itself and not its derivatives , has to be continuous everywhere ) .the worm - like chain model of polymers combines bending elasticity and inextensibility ( the latter condition can be expressed as ) and , assuming that the stress - free state corresponds to a straight line , the energy can be written as since the energy depends only on the curvature and does not depend on the torsion the corresponding space curve has only -smooth . note that curves described by the fs equations have to be at least -smooth and ,therefore , worm - like chains are not sufficiently smooth to be described by the fundamental equations of differential geometry of space curves ! in order to get physical intuition about the origin of the problem , lets consider a discretized model of a continuous curve in which the polymer is made up of connected straight segments of length each , such that the direction of the segment at point is given by the tangent to the original chain at this point , ( the continuum limit is recovered as ) . 
the angle between neighboring segmentsis denoted as and , in order to describe the non - planar character of a general space curve , one has to introduce the dihedral angle between the two successive planes and determined by three successive segments at points and .since is also the angle between neighboring binormals and , the frenet frames at points and are related by a simple rotation where the rotation matrix is given by notice that no assumption is made so far about the magnitude of the angles and .defining and and subtracting the vector from both sides of eq .( [ eq : b ] ) , yields with the unit matrix .notice that unlike the fs equation which is valid only for infinitesimal rotations of the frenet frame , eq . ( [ eq : discrete frenet equation ] ) describes finite rotations ; the fs equation can be derived from it by dividing both sides of the equation by and taking the limit ( , etc . ) . in order for the right hand side of eq .( [ eq : discrete frenet equation ] ) to remain finite in this limit , all the elements of have to vanish this is equivalent to the condition , one can expand the cosine and sine functions in eq .( [ eq : b ] ) and , upon substituting the result into eq .( [ eq : discrete frenet equation ] ) , one recovers the fs equation with and returning to the worm - like chain model , we notice that while the bending energy ensures that the curvature is finite and the angle is always small , there is no corresponding physical restriction on the magnitude of and we conclude that the worm - like chain corresponds to the class of freely rotating chain models in which the angle can attain any value in the interval ] and , therefore , the probability distribution of is given by using the ribbon algorithm to obtain the ensemble of conformations of a ribbon with a symmetric cross section and without spontaneous curvature , we generate the corresponding distribution ] and is an effective cutoff ( ) , the long wavelength properties of the ensemble generated by the fourier knot algorithm , are in good agreement with those obtained from the worm - like chain model .these properties include second moments such as the mean square distance between two points on the contour ^{2}\right\rangle ] where with the persistence length determined by the cutoff as .however , even though the tangent auto - correlation function decays exponentially with on length scales comparable to ( just like in the worm - like chain model ) , the corresponding decay length smaller than the persistence length obtained from the long - wavelength properties of fourier knots . in ref we suggested that the ensemble of configurations generated by the fourier knot algorithm is equivalent to a physical ensemble of polymers which possess both bending and twist rigidity and , while the short range properties of the tangent - tangent correlation function are determined by the bending persistence length only , both bending and twist persistence length control its long distance behavior . in any case, the fact that the ensemble of fourier knots can not be characterized by a single persistence length suggests that the statistical properties of this ensemble differ from those of worm - like chains and that it is important to investigate not only the second moments but the entire distributions .the first property we examine is the probability distribution . as can be seen in fig .[ fig : cosphi ] , the distribution has a peak at i.e. , at . 
since ,we conclude that the ensemble of curves generated by the fourier knot algorithm is characterized by finite torsion and a normal whose direction varies smoothly along the contour of each curve and , therefore , such curves can be described by the fs equation .we would like to stress that even though the torsion is described by the first three derivatives of all of which are finite for fourier knots , the observation that the ensemble of fourier knots is dominated by curves with finite torsion is non - trivial since the expression for the torsion diverges at points along the contour where the curvature vanishes ( see eq .[ curvtor ] ) .notice that for ribbons with no spontaneous curvature and twist , the partition function can be written as the product of bending and twist parts with is given by the functional integral ( assuming a symmetric ribbon of bending persistence length ) since , the measure can be written as the product of a radial contribution an angular one .we therefore conclude that the probability of points with vanishes linearly with and since , the torsion should be finite everywhere , as observed . strictly speaking the above argumentwas derived for open ribbons and not to closed curves , but since it involves only the measure and not the form of the energy function , it applies to fourier knots as well ( see fig . [fig : kappa distribution ] where the measured distribution is plotted for fourier knots with ) .we now turn to compare the statistical properties of -smooth curves generated by the fourier knot algorithm and the -smooth centerlines of ribbons generated by the ribbon algorithm .consider the probability distribution of the distance between points and ( ) along the contour ( the second moment of this distribution for fourier knots was calculated in ref .the difficulty in comparing the two ensembles is that while the ribbon algorithm generates open curves , the fourier knot algorithm yields closed loops . in order to compare the latter with the former , we make use of the fact that , as long as we consider contour distances ( much shorter than the total length of the loop ( ) , approaches the probability distribution for an open , infinitely smooth curve . in fig . [fig : rms ] we plot ( ribbon ) and ( knot ) .as expected , in the long wavelength limit ( ) the two distributions approach the gaussian random walk result , ] and we conclude that both worm - like chains and centerlines of ribbons belong to the class of freely rotating models , with divergent torsion and discontinuous jumps of the normal to the curve .however , unlike worm - like chains , ribbons have twist rigidity which means that the rate of twist of the physical axes of the cross section remains finite everywhere along the contour of the ribbon and guarantees that the triad of unit vectors associated with the ribbon obeys the generalized fs equation familiar from the differential geometry of stripes .we compared some statistical properties of ensembles of -smooth and -smooth curves generated by the ribbon and the fourier knot algorithms , respectively .we showed that in the latter case the dihedral angle is peaked about and , therefore , typical configurations of fourier knots have finite torsion everywhere and can be described by the fs equation .we also compared the distribution functions of the spatial distance between two points along the contour of a ribbon and of a fourier knot . 
as expected , both distribution functions approach the limiting gaussian distribution for length scales much larger than the persistence length , but are quite different on length scales comparable to the persistence length.finally we would like to stress that while the physical ensembles of conformations of worm - like chains and ribbons are generated using the standard methods of statistical physics ( each conformation is weighted with an appropriate boltzmann factor , ) , the ensemble generated by the fourier knot algorithm is a purely mathematical construction and there is no elastic energy associated with different conformations of fourier knots . nevertheless , the observation of two persistence lengths reported in ref . and the present finding that fourier knots have finite torsion , suggest that the statistical properties of this mathematical ensemble ( notice that persistence lengths can be measured directly from the ensemble of conformations of the space curves , just as is done in afm experiments ) are quite similar to those of a physical ensemble of conformations of polymers with both bending and torsional rigidity .the detailed exploration of this analogy is the subject of future work .this work was supported by a grant from the us - israel binational science foundation .love a e h 1944 _ a _ _ treatise on the mathematical theory of elasticity _( new york : dover ) koniaris k and muthukumar m 1991 _ phys ._ 66 2211 deguchi t and tsurusaki k 1997 _ phys .e. _ 55 6245
we analyze several continuum models of polymers : worm - like chains , ribbons and fourier knots . we show that the torsion of worm - like chains diverges and conclude that such chains cannot be described by the frenet - serret ( fs ) equation of space curves . while the torsion of ribbons diverges as well , their rate of twist is finite and , therefore , they can be described by the generalized fs equation of stripes . finally , fourier knots have finite curvature and torsion and , therefore , are sufficiently smooth to be described by the fs equation of space curves .
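a compact numerical check of the smoothness statements above is to evaluate the curvature and torsion of an explicit closed curve from its first three derivatives ; the trefoil - style parametrization in the sketch below is merely a convenient stand - in with analytically known derivatives , not one of the ensembles discussed in the paper .

```python
# a sketch: curvature and torsion of an explicit closed curve from its first
# three derivatives,  kappa = |r' x r''|/|r'|^3,  tau = (r' x r'').r''' / |r' x r''|^2.
import numpy as np

s = np.linspace(0.0, 2.0 * np.pi, 4000, endpoint=False)
# r(s) = (sin s + 2 sin 2s, cos s - 2 cos 2s, -sin 3s) and its derivatives
r1 = np.stack([ np.cos(s) + 4*np.cos(2*s), -np.sin(s) + 4*np.sin(2*s),  -3*np.cos(3*s)], axis=1)
r2 = np.stack([-np.sin(s) - 8*np.sin(2*s), -np.cos(s) + 8*np.cos(2*s),   9*np.sin(3*s)], axis=1)
r3 = np.stack([-np.cos(s) - 16*np.cos(2*s), np.sin(s) - 16*np.sin(2*s), 27*np.cos(3*s)], axis=1)

cross = np.cross(r1, r2)
kappa = np.linalg.norm(cross, axis=1) / np.linalg.norm(r1, axis=1)**3
tau = np.einsum("ij,ij->i", cross, r3) / np.linalg.norm(cross, axis=1)**2

# both stay finite as long as kappa does not vanish; where kappa -> 0 the torsion
# formula blows up, which is the divergence discussed for the less smooth ensembles
print("kappa in [%.3f, %.3f], tau in [%.3f, %.3f]" % (kappa.min(), kappa.max(), tau.min(), tau.max()))
```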
the safely correct development and credible presentation of a physical theory implies identifying physical phenomena to which the theory is assuredly applicable , however small such a validity domain would seem to be at the outset .historically , the special relativity theory came along in attempts to make newtonian / euclidean description of moving macroscopic bodies consistent with optical phenomena observable by means of these bodies .seminal paper establishes the principle of relativity and an idea that propagation of light does not depend on the motion of its source and can be used to define simultaneous events at different places as premises of a physically reasonable way to arrive at lorentz transformation , reproduced in subsequent monographs and textbooks .the alternative post - einsteinian presentation of the relativity theory brought einstein s attempts to identify physical foundations of the problem to its actual formal source : the required change of a coordinate system should preserve the form of free motion of point particles along with the speed of light as a universal limitation on the speed of any particle .such a mathematical approach appears physically important because it provides a consistent description of the macroscopically perceived ( commonly referred to as kinematic ) part of experimental particle physics : registering devices embedded in the solid wall of an accelerator can embody an inertial reference frame along with its euclidean geometry ; registrable / inferable collisions between particles are events . up to the present time, all expositions of the special relativity theory heavily rely on cartesian coordinates for both formulating premises and developing inferences . apparently , within macroscopic physics resorting to that handy mathematical techniqueimposes no restriction on the applicability of such a reasoning .however , good practice of using coordinates in physics and engineering implies that the choice of coordinate systems should facilitate the application of basic / general regularities / rules to a particular problem , so that it is the spatial configuration of the problem that actually determines whether a suitable coordinate system is cartesian , or orthogonal curvilinear , or even special such as barycentric etc . 
in the next sectionthe reader can find a remainder of what physics secures the existence of frames and actually underlies the validity of euclidean geometry .section [ basic effects ] presents the concept of a boost direction , which enables the subsequent physics oriented inference of the well known spatio - temporal effects related to the special relativity theory .section [ transformation ] exploits these effects to formulate the relation between the time moments and the position vectors of a given event in two frames .this relation entails a logically consistent and physically meaningful presentation of the well known coordinate transformations in section [ coordinate transformations ] .within macroscopic physics , a ( necessarily inertial ) frame can be provisionally viewed as a rigid construct , which means that the spatial relations between its parts obey the rules of euclidean geometry .making use of this well known mathematical structure in the description of undeformed ( or , equivalently , non - accelerating and non - rotating ) macroscopic solids is evidently valid , which suggests that the regularities of statics ( supplemented with hooke law to resolve statics ambiguities ) can underlie euclidean geometry .usually , this formal struture is believed to be applicable to a wider range of spatial scales as well as a larger class of physical phenomena .so other regularities , such as those of electromagnetism and gravity , may take part in maintaining euclidean geometry for the set of all possible positions of sufficiently small interacting bodies. as of now , the group of motions of rigid bodies ( described axiomatically ) is the only mathematical structure that reminds us of the physical foundation of euclidean geometry .the group includes spatial translations and rotations . the additive representation of a spatial translation is usually referred to as a spatial vector .one can use rotations to introduce an angle between two vectors etc .when the orthonormal vectors represent the translations along three mutually perpendicular directions , the decomposition of a displacement ( the change of a position vector ) is just what defines cartesian coordinates .this formal structure refers those interested in the origin of euclidean geometry to the physics of solids , which may not be a reasonable starting point of an investigation aimed something in the physics of fields and particles .meanwhile , it is newtonian laws that make an established formalism of mechanics . within the limitations imposed by microscopic phenomena ,newtonian formalism is believed to be applicable to each sufficiently small part of a macroscopic body , referred to as a point particle . to calculate how the position vector of each particle is changing over time ,a theorist should invoke newton s second law here and hereinafter the notation is used for the list of expressions where the label runs over all its values . within the purely mechanical formalism, the force of action of a particle on a particle can not be but conservative : where denotes the nabla operator designed to act on functions of the position vector .for an isolated system of n particles and given functions , the decomposition turns eqs . into a closed set of 3n differential equations of second order .its general solution involves 6n arbitrary constants , of which one is the time reference shift while seven can be the total energy and the components of the total momentum and the total angular momentum . 
in other words , inversion of the set of eqs .can yield 6n-8 constants of motions , possibly specific for each sufficiently small interval of , in addition to the seven universal constants of motion : .\ ] ] of these functions , only essentially represent the internal / relative motions of the particles while describes the overall intensity of the whole motion , which can be arbitrary normalized by the choice of the unit of the time ; and correspond to the well known global directions related to the whole motion .the formalism based on the use of position vectors , their decompositions and eqs . may not be a consistent introduction to foundations since it refers to the regularities / notions left without description in terms of the physics phenomena involved . to build a foundational construction straightforwardly, one should identify some real or possible stationary objects with euclidean points so that the appropriate constants of motion can approximate the values of the one - point vector field in eq ., two - point scalar field known as distance ( euclidean length ) etc .the well - known manifestations of quantization preclude implementing such a construction at sufficiently small scales. as a result , nor eqs . nor eqs .appear applicable at those scales .nevertheless , to construct euclidean geometry ( or , at least , some pregeometry ) one could still adopt the motions of the interacting particles as primitive notions described with a set of experimentally identifiable relations between them .the formulation of such relations ( which would then make a low - level automatics of mechanics ) is far from the goals of this text , but there is hardly any doubt that the set of motions of interacting particles is sufficiently rich to support euclidean geometry and , therefore , the concept of a frame . at this logical level ,the relations that underlie euclidean geometry are not separable from those that eventually give rise to the existence of the constants of motion .as discussed in the previous section , within the familiar high - level formalism the concept of a frame manifests itself by means of a position vector , euclidean nature of which is most likely secured by non - relativistic classical physics .at least one connection between frames and regularities of classical physics reveals itself in terms of a position vector : eqs .keep their form when one changes a frame b for a frame a in accordance with the transformation rule where , and are given parameters . here and hereinafter the superscript ( f ) indicates that a quantity is initially defined in a frame f. ( but as far as the transformation is valid between any pair of frames , one can actually define a position vector in any frame . ) from the early days of the relativity theory , the following generalization of the above statement is regarded as a more or less universal principle , called the principle of relativity : the mutual disposition and the relative translational uniform rectilinear motion of two frames can not manifest itself in the description of physical phenomena within one of these frames .equivalently , physical laws have the same formulations in different ( necessarily non - accelerated non - rotating ) frames. 
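a small numerical check of the form - invariance just mentioned ( ignoring the fixed rotation of axes that the full transformation may also contain ) : the added term is affine in time , so accelerations and relative positions , and hence conservative pair forces , are unchanged ; the trajectory and the values of the relative velocity and origin offset below are arbitrary illustrations .

```python
# sketch: under r_A = r_B + V t + r_0 the second time derivative of the added
# piece vanishes, so accelerations (and any force depending only on relative
# positions) are the same in both frames.
import numpy as np

t = np.linspace(0.0, 10.0, 2001)
dt = t[1] - t[0]
rB = np.stack([np.cos(t), np.sin(t), np.zeros_like(t)], axis=1)   # some smooth r_B(t)

V = np.array([0.3, -0.2, 0.5])                                    # arbitrary relative velocity
r0 = np.array([1.0, 2.0, -1.0])                                   # arbitrary origin offset
rA = rB + np.outer(t, V) + r0

acc = lambda r: (r[2:] - 2.0 * r[1:-1] + r[:-2]) / dt**2          # finite-difference acceleration
print("max |a_A - a_B| =", np.abs(acc(rA) - acc(rB)).max())       # zero up to rounding
```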
in the context of newtonian mechanics , the meaning of the relativity principle is plain : translational uniform rectilinear motion of a physical system as a whole with respect to some external ( reference ) bodies does not affect the motion of the internal parts of the system with respect to each other .however , this idea presumes a partition of the physical system that can not be seamlessly extended to include electromagnetic phenomena since the decomposition of an electromagnetic field into non - interfering components is possible only without electric charges .this apparent gap is accompanied ( and aggravated ) by the fact that the transformation does not preserve the form of source - free maxwell s equations .naturally , an attempt to derive the general transformation that keeps the full formalism of maxwellian electrodynamics would lead a researcher to a complicated problem .so the developers of the special relativity theory can not but begin with the simple coordinate transformation known as ( the original form of ) lorentz transformation ( which , in accordance with the purpose and logic of this article , is explicitly reproduced as late as in section [ lorentz transformation ] . )looking into early presentations of the special relativity theory , one can identify the following physics premise for the formal derivation of lorentz transformation : the law of motion of a free particle and the laws of propagation of a free electromagnetic field have the same form in all frames. to be exact , in an arbitrary frame f the position of a free particle is changing along with the time as while the positions taken by an electromagnetic wavefront ( wave phase - front ) at the time from a point source flashed at the position at the time satisfy the propagation speed of an electromagnetic spherical wavefront is the same in all frames . as long as the goals of one s inference are limited by the derivation of rules equivalent to those of lorentz transformation , one can confine oneself with the limiting form of eq . for an infinitely far source position , i.e. the equation for a plane wavefront which propagates in the direction of the unit vector .one can view the above statements as a partial realization of einstein s original intention to extend the principle of relativity to electromagnetic phenomena .meanwhile , the next generation of authors has dispensed with both maxwell equations and eq . in their introductions to the relativity theory . in the post - einsteinian derivations of the transformation rules between two framesone finds the principle of relativity replaced by the requirement to preserve the form of eq .supplemented with the condition to include `` the motion of a light signal '' ( in effect , the propagation of the intersection point of an electromagnetic plane wavefront with one of its associated ray paths.) as a result , the physics that underlies einstein s special relativity theory has been reduced to that of free point particles with the universal limitation on their speed , equal to in all frames .the use of free particles in a reasoning is evidently restricted by the processes of particles interaction .but as long as one neglects the extention and the duration of those processes , one can exploit an interaction act as a representation of a basic identifiable entity usually referred to as an event . 
in other words , each event appears to be real or possible interception of two ( or more ) free particles .then one can exploit experimentally identifiable relations between motions of free particles to establish relations between events .if need , the interaction between particles can be assumed so small as not to change their motion .( in general , one need exploiting eq . for the motion of each particle between the points of interception , where the parameters may change .) this technique along with the basic appliance of the relativity principle is a main tool used in the next section .here it is also worth noting that the free motion of particles and the propagation of light rays are not sufficient to construct euclidean geometry , so the attempts to extend its validity to arbitrarily fast processes could end up with nothing but a new postulate. in order to have material carriers of euclidean geometry , presentations of the relativity theory have no choice but to borrow frames from newtonian mechanics .when someone applies newton s second law to a physical system which consists of weakly interacting ( e.g. , widely separated ) parts , he might think that newtonian mechanics should involve some means to identify motions in such parts as simultaneous processes . actually , eqs . are well known to have originated from the regularities [ position evolution ] revealed by experiments / observations related to strongly interacting physical bodies , especially gravitationally bound ones , such as sun and planets .but in such a system , the existence of the time is an inherent property of its ( almost periodic or quasi - periodic ) motion . if is a characteristic speed of such motion , then for a given timescale it secures synchronization in changing physical quantities over the region of size .however , the special relativity theory implies the speed of light as a characteristic speed and , therefore , considers the region of size .in other words , within the special relativity theory , newtonian mechanics can secure the synchronization two processes ( `` clocks '' ) but only at one spatial point . exploiting eq . /eq . or the light rays only ( without specifying geometrical structures prematurely ) one can synchronize events happened to particles at different positions ( as far as one neglects the time delay and the position shift due to interaction of a charged particle and electromagnetic field . ) this turns the time moment into a global variable , similar to a spatial coordinate . in principle , within sufficiently small timescales, newtonian mechanics can maintain euclidean geometry only in the vicinity of each event , which means that the geometry of the subset might be riemannian. in this text , the global geometry of particles positions within each frame is still assumed euclidean , since it is appropriate for usual practical applications of the special relativity theory as well as common teaching curricula .for any pair of frames a and b , there are the velocity of a with respect to b and , vice versa , the velocity .since each of these vectors is defined as a spatial object in its own frame , there can be no procedure of comparing them directly on the basis of newtonian mechanics . 
nevertheless ,since all frames are supposed to be identical in their essential internal properties , one should accept denote a process that starts with an event and ends with an event , and let a number denote the elapsed time in an arbitrary frame f , so that and .if in the frame a one has for arbitrary events , , , , then in the frame b one finds the same equality because the contrary would allow one to judge about the motion of b with respect to a on the basis of internal data in b , in contradiction with the principle of relativity .further generalization is possible if one exploits eq . to partition a process into a series of shorter processes over even intervals of time : entails for any natural , rational and , finally , real .it follows that for any frame f the relation shows no dependence on the frame while for any process the relation depends only on defined by eq . .let a point body resting in the frame a and a point body resting in the frame b meet each other at the event and let another freely moving point body meet and at the events and , respectively . by making use of the laws of free motion, one can get an expression for the velocity of the body in the frame f where is actually independent of f in accordance with the previous analysis .. allows one to define a one - parametric family of frames ] for a given , and the whole family ] , one can find that moves just along the boost line of ] is an arbitrary member of the helicoboost class and can represent an arbitrary point body ( or even a light signal ) that moves along the boost line in $ ] , the boost lines in different frames of one helicoboost class prove equivalent in a sense : any point bodies that stay ( moving or resting ) on the boost line in one frame of a given helicoboost class remain ( moving or resting ) on the boost line in another frame of the same helicoboost class .if , in addition to the bodies and , one chooses more representatives of the frames a and b , one gets more parallel boost lines . to avoid specifying particular boost lines in a reasoning, one can invoke the direction of or , which represents a bundle of parallel boost lines in each member g of the helicoboost class .hereinafter this direction is referred to as a boost direction .it is possible to define both a boost direction and a helicoboost class of frames without prior reference to its two members : if there are a frame where a set of free point bodies have their velocities parallel or antiparallel to each other , then there are other frames where velocities of the these bodies are also parallel or anti - parallel to each other ; an equivalence class of such frames is a helicoboost class ; the direction of motion of such free bodies in each frame of the helicoboost class is a boost direction .the motion of light signals along the boost direction in one frame of a given helicoboost class is an important limiting case of the motions considered in the previous section .since the light signals represent the propagation of electromagnetic plane wavefronts , one can conclude that they propagate along the boost direction in any frame of the helicoboost class .it follows that simultaneous events in a plane perpendicular to the boost direction in one frame appear simultaneous in any other frame of the same helicoboost class , where they also occupy a plane perpendicular to the boost direction .let the locations of the above simultaneous events make a certain instantaneous arrangement over the plane wavefront . 
to represent it as a stationary geometric configuration, one should consider intersections some bundle of boost lines with a dense series of parallel wavefronts which propagate along the boost direction . since such wavefronts andthe boost lines are observable in any frame of the helicoboost class , so is the stationary planar configuration they generate .the possibility of the common geometric configuration shows that the observers in different frames can come to agreement with each other about the orientation of their frames around the boost direction or , in other words , to another equivalence relation between two frames .this relation allows one to partition the helicoboost class into subclasses of identically oriented frames . in the following ,a subclass of this kind is referred to as a boost class . to present the above equivalence relation in terms of a relative position vector, one can write : here and in the rest of the paper , the notation describes the decompositions of spatial vectors and in their respective frames . since the symbol in eq . denotes an arbitrary relative position vector , which connects two arbitrary points in a plane perpendicular to , one can view eq . as a notation for mapping a geometric configuration in the frame a to that in the frame b. to be accurate and limit onself by the use of spatial vectors only , without any explicit reference to the concept of euclidean points, one must define the relation " so that and for any spatial vectors in the frame a and their counterparts in the frame b. suppose in the frame a point bodies p , q , r , s are moving with the same velocity while momentarily ( detected as ) arranged in a straight line along the boost direction , i.e. parallel to . in the frameb these point bodies are at rest in a straight line along the boost direction . if in the frame a one has , then in the frame b one finds the correspondent equality due to the relativity principle . herethe notation refers to a distance between moving bodies g and h momentarily observed in a frame a while denotes a distance between stationary bodies g and h in a frame b. if one exploits partitioning a line segment in a manner similar to the division of a time interval , one can eventually come to the similar conclusion that entails for any natural , rational and , finally , real .it follows that for any frame f of the boost class the relation shows no dependence on the frame while for any pair of bodies p and q stationary in the frame b the relation depends only on .the application of the above result is not bounded by a comparison of the distances between two bodies in the two frames .in fact , the distance between two moving point bodies in the frame a is a distance between two _ simultaneous _ events of detecting these bodies in the frame a. in the frame b , these events are not necessarily simultaneous but happening to the same bodies .so the distance between the bodies is a distance between the events , too .thus , the relation is the same for any pair of events in the straight line along the boost direction , provided that they are simultaneous in the frame a. let simultaneous elementary events in the frame a occur over the length in a straight line along the boost direction , i.e. along the direction of motion of the frame b. physically , they together may be an act of detecting a rigid rod embedded into the frame a. 
if a flashlight occurs in the middle of the rod , it takes the same time interval for that light to get to each end of the rod .when instantaneously observed in the frame b , the length of the rod appears to be .the arrivals of the above mentioned flashlight at the ends of the moving rod are not simultaneous in the frame b : along the direction of the rod s motion the time difference makes while the distance between these events is here and hereafter . the equations and take account of the possibility that in different frames a set of the same events may occupy segments of different sizes along their boost direction .considering the events involved in the inference and the formulation of eqs .and one can find that and in accordance with eq . .. entails , i.e. the length contraction effect to get to another well known effect , one can exploit such thought construct as the light clock , where a light pulse propagates back and forth between two mirrors held parallel and apart at the fixed distance in their rest frame . in that frame , the line of the light propagation is perpendicular to the mirrors and can be referred to as the axis of the clock .the proper duration of the clock s cycle , i.e. the round - trip time of the light pulse in the clock s rest frame , makes let an observer be moving with a speed in the light clock s rest frame and along the axis of the light clock . the observer can detect the contracted length of the light clock and find that the clock s cycle takes here it is worth remarking that the light clock is an axiliary dedicated thought construct , which can in no way include the actual reflections of an electromagnetic wave from substance of the mirrors but embodies the properly collated sequence of events only .this means that eq . andeq . together entail the time dilation effect which gives the laboratory time interval between two events at a moving point during its given proper time interval . to show relations between the spatiotemporal effects of the relativity theory in one s practice of teaching, one may also address the following inference .let the light clock of proper length rest in the frame a so that the axis of the clock is perpendicular to the direction of motion of the frame b , i.e. to the boost direction . in the frame b ,the axis of the clock is also perpendicular to the boost direction , and the main cycle of the clock corresponds to the light signal path shown in fig . [ moving light clock ] . here is a length of the clock simultaneously observed in the frame b as being in the plane perpendicular to the direction of the clock s motion . in accordance with fig .[ moving light clock ] the duration of the clock s main cycle in the frame b should obey the equation this is consistent with the time dilation effect and the proper duration of the clock s main cycle because the spatial transversal effect yields when the motion of the light clock and the propagation of the light pulse occur in the same plane .. suggests that one can generalize eq . and eq . 
:if in the frame a simultaneous events occur in the transversal spatial slab thick , then in the frame b they occur within the slab and with the time spread describe the relation between the frames a and b in terms of the position vectors and and the time moments and of a given arbitrary event one should first of all identify the place of a common reference event with the origins and of the frames a and b as well as its time moment with the readings and of their clocks .let two elementary events occur in the frame a at a given time moment : an event occurs at the origin while occurs at . due to the time dilation effect , the moment of observing in the frame b equals to the reading of a clock fixed at the origin .due to the time spread effect , in the frame b the difference in time between and therefore , the time moment of observing in the frame b further , if in the frame b the plane is perpendicular to and keeps passing through the origin , then , in accordance with the analysis in section [ boost direction ] , in the frame a it forms the plane perpendicular to and moving with this velocity .evidently , the distance from the event to the plane makes .if denotes the position of in the frame b , then the length contraction effect helps us conclude that in the frame b the distance from the event to the plane is while eq .yields within the described coordinate - free approach , the relationships , and can be view as transformation rules . to show the dependence on the frames a and b explicitly one should only decipher the notation introduced by eq . and eq . :,\ ] ] ,\ ] ] in a more customary form , the transformation rules - can also be presented as the mapping where where the symbol unites the meaning of and the meaning of while the symbol unites the meaning of the usual product of two numbers and the meaning of the dot product of two spatial vectors ; the symbol denotes the dyadic ( outer ) product . historically , the early attempts to obtain such a mapping did not lead to a correct / unambiguous expression of via because of the failure to distinguish between a column vector and a true vector. to keep being mathematically correct the later treatment could not avoid resorting to coordinates but appeared limited to boost transformations ( see the final remark in section [ boosts ] . )as soon as one specifies coordinate systems in the frames a and b , in terms of relations between their unit base vectors and themselves as well as to some physical directions , one can arrive at the relationship between the column vectors made of time and spatial coordinates of a given event as observed in the inertial coordinate systems a and b , respectively .the choice for the unit base vectors turns eqs . - into eq . with i.e. the matrix of the original form of lorentz transformation .boosts make a well known class of the transformations introduced in graduate level physics courses with an aid of its matrix where , the elements of the column vector are the a components of the velocity of b with respect to a. due to the identities with the rotation matrix one can say that incorporates as an observable direction .it is easy to find that textbooks authors exploit boost transformations them only to show a derivation of the thomas precession contribution to the spin motion of a relativistically moving charged particle. 
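to make the coordinate - free rules above easy to experiment with , here is a sketch of one standard convention for the general boost ( coordinates of an event in a frame b moving with velocity v relative to a frame a ) written without choosing axes along v ; the sign conventions are a common textbook choice and may differ from the particular matrix displayed above , and the numerical values of the event and of v are arbitrary .

```python
# sketch of a general boost, one common convention:
#   t' = gamma * (t - v.r / c^2)
#   r' = r + ((gamma - 1)/|v|^2) * (v.r) * v - gamma * v * t
# the check verifies invariance of the interval c^2 t^2 - |r|^2.
import numpy as np

c = 299_792_458.0                                   # m/s

def boost(t, r, v):
    r, v = np.asarray(r, float), np.asarray(v, float)
    v2 = v @ v
    gamma = 1.0 / np.sqrt(1.0 - v2 / c**2)
    t_p = gamma * (t - (v @ r) / c**2)
    r_p = r + ((gamma - 1.0) / v2) * (v @ r) * v - gamma * v * t
    return t_p, r_p

t, r = 2.0e-6, np.array([100.0, -50.0, 30.0])       # an arbitrary event (s, m)
v = np.array([0.4, 0.2, -0.1]) * c                  # an arbitrary subluminal velocity

t_p, r_p = boost(t, r, v)
print("interval before:", (c*t)**2 - r @ r)
print("interval after: ", (c*t_p)**2 - r_p @ r_p)   # equal up to rounding
```

applying the same map twice along non - collinear velocities does not yield a pure boost but a boost combined with a rotation , which is the origin of the thomas precession mentioned above .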
however , despite this seemingly definite connection to observable phenomena , in all the texts up till now the presentations that facilitate algebraic manipulations of boosts are favored over those which would treat a boost as a physically defined operation / motion or relations between physical objects .the matrices parametrized by various belong to the grouplike structure formed by general transformation matrices where and represent two arbitrary rotations .the algebraic properties of that structure were the subject of extensive investigation , such as the treatment in ref . , but could not provide any better understanding of the spin precession or anything else because the physical meaning of itself remained out of consideration .the only benefit of such an analysis for physicists seems to be that it has found no reason to consider a boost as the lorentz transformation `` without rotation , '' in attempt to generalize the idea of parallel transport , such as the definition ( ii ) at p. 871 in ref . .as a result , to `` a boost transformation '' mathematicians prefer to apply cautious terms such as `` an aligned axis lorentz transformation '' ( see p. 236 in ref . .) the most visible manifestation of the problem is that no text attempts to formulate a physics based definition of a boost , thereby preventing any reasonable use of that concept .the reference formula makes it difficult to get an idea how mutual orientations of several bodies change when set in motion , because alone provides no hints how to choose the axes of the coordinate systems in use .the seemingly key formula appears to be nothing but a relation between two representations and for one relative motion , of which the formal simplicity of may even be misleading : when asked about the direction of the -axis , someone may correctly infer from that it is the direction of the -axis instantaneously observed in the coordinate system a. but the problem is that a researcher must be able to identify any directions before establishing / verifying ( theoretically / experimentally ) the relationships . to avoid that apparent logical circle, one might take the above description of the direction as a definition and an explicit starting point in a derivation of the transformation rule .needless to say that such an accurate approach can hardly be found in the existing , history - oriented , presentations of the relativity theory .now the coordinate - free description for the relation between two arbitrary frames in section [ transformation ] allows one to formulate logically consistent and physically explicit definition of a boost coordinate transformation . aside from the origins , the aboveconsideration refers to no elements of coordinate systems . to set up cartesian coordinate systems in the frames a and b one should specify the direction of their spatial base unit vectors and for .let and in other words , in view of eq ., makes the same angles with in the frame b as does with in the frame a. then , by definition , the boost is the transformation of time along with the transformation of coordinates of an event between the coordinate systems satisfying the conditions and .the equivalence relations and along with the property yield the equality of numbers eqs . and allows one to rewrite the dot products as and here the common pithy notation is used : while the repeated index implies the summation over all its values .then eq .entails which is just the spatial part of the transformation with the matrix . 
in terms of column vectorsthe above equation can be written as this form of a boost transformation was obtained from eq . in ref . .one can address the definition of a boost in the previous section so as to come to physically meaningful conclusions directly .an important example is a simple analytical description for the distortion that a cartesian coordinate basis exhibits while simultaneously observed in the frame where it experiences a boost operation .is instantaneously observed as the vector .,width=377 ] fig . [ moving vector ]presents a plane parallel to a spatial vector and the velocity of the frame b , both the vectors being sets of simultaneous events in the frame a. the boost operation applied to the vector results in the vector which is spatial in the frame b and parallel to the above plane , too . due to the definition of the boost and the length contraction effect , in the frame athe vector is perceived ( instantaneously observed ) as the vector since for the angle between and as well as between and , and for the angle between and , one can find that for the angle between and ( see fig . [ moving vector ] . )sometimes it may be reasonable to change the laboratory coordinate system with the aid of the boost transformation so as to reach simpler state of motion , to reduce the form of interaction law etc .then , in the limit , the angle shows what relative error is not properly compensated in case one fails to transform all relevant quantities appropriately .a boost direction for a pair of inertial reference frames is that spatial direction in one of the frames along which the other frame moves .free motions of point particles make an instrumentation for identifying the boost direction as well as events on a straight line along that direction .the concept of a boost direction secures the formulation of the basic relativity effects in a physics - based manner , which , eventually , results in the relation between two arbitrary frames in terms of their position vectors and time moments for a given event .within the physics - based approach , addressing the transformation of coordinates implies specifying some coordinate systems in each frame in terms of physical objects / directions .this yields a logically consistent and physically meaningful presentation of the coordinate transformations commonly exploited in the special relativity theory , which makes observable effects associated with those transformations evident .in particular , for a cartesian coordinate system subjected to a boost coordinate transformation , the coordinate - free technique of reasoning allows one to evaluate its instantaneously observable apparent distortion easily .since both electromagnetism and gravity manifest themselves via the concept of force , they allow one to distinguish directions and thereby support projective geometry at least . the next step should involve point particles , i.e. such entities that can strongly interact over a sufficiently short range .then the application of first newton s law to several particles on the same straight line helps one identify stationary particles , which can embody euclidean points .it enables one to introduce the concept of length in the usual manner .the discussed approach implies that the physical regularities involved can be presented in pre - euclidean / prenumeric terms to describe experiments / observations more directly .to get an idea about a technique of such description , see sections v and vi in ref . .serge a. 
wagner , how to introduce physical quantities physically , " < http://arxiv.org/pdf/1506.04122v1>[<http://arxiv.org/pdf/1506.04122v1 > ] for example , if someone would choose to store information about a local direction in the angular momentum of a hydrogen atom , he will inevitably fail both due to the well known unavoidable quantization of that quantity and because of the small lifetime of the corresponding excited state of motion . in accordance to the simple consideration known as bohr s theory of a hydrogen atom , it occurs over the spatial scale , called the bohr radius and recognized as a characteristic atomic scale .in general , the elementary charge , plank s constant , electron s rest mass , proton s rest mass are well known magnitudes which reveal the quantization of electric charge , angular momentum and mass / energy , respectively .the dimensional and/or provisional theoretical analysis can yield a set of characteristic spatial scales where is a dimensionless function of the dimensionless quantities the cases , and yield the well known scales : the classical electron radius ( in fact , the characteristic size of proton / neutron ) , compton wavelength of electron and the bohr radius . herethe use of the cgs system of units is assumed , so one can take as a parameter in maxwell equations , not necessarily the observable speed of light .though no formal analysis of applicability of euclidean geometry has yet been published , it is over scales less than compton wavelength of electron where some authors have found it difficult to provide a consistent concept of a spatial coordinate . see p. 40 in ref . and p. 2 in ref .for simple order - of - magnitude considerations and , e.g. , ref . for the exemplars of existing theoretical approach to the problem .the thought that first newton s law has the same form in all inertial frames was viewed as trivial and , for this reason , exploited implicitly ( until the accurate presentation in ref . has appeared . ) in contrast , einstein s idea that electromagnetic phenomena , such as the propagation of a spherical wave , was apparently perceived as nontrivial .so the statement that a spherical wavefront keeps its form in all frames appeared to be a popular explicit premise for the derivation of lorentz transformation ; see , e.g. , p. 100 in ref . andp. 9 in ref . .this is just the condition in the equations ( 8.05 ) and ( 8.06 ) at p. 21 in ref . .interestingly , fock starts his consideration with the general wavefront equation , which , in principle , can describe non - spherical wavefronts , e.g. , from two interfering point sources .but eventually he has narrowed his inference with no explicit reasoning .j. ehlers , f. a. e. pirani , a. schild , `` the geometry of free fall and light propagation , '' in _ general relativity , papers honour of j. l. synge _ , edited by l. oraifeartaigh ( clarendon press , oxford , 1972 ) , p. 6384 .this is an impicit reason why fock starts his inference of a boost with a generally nonlinear transformation of the space and time variables and addresses some concepts of riemannian geometry as elements of a suitable notation .
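As a quick numerical check of the hierarchy of characteristic scales invoked above, the classical electron radius, the (reduced) Compton wavelength of the electron, and the Bohr radius differ by successive powers of the fine-structure constant. The sketch below uses SI constants from scipy.constants (the text assumes cgs units, where the 4*pi*epsilon_0 factor is absent):

```python
import math
import scipy.constants as sc

four_pi_eps0 = 4 * math.pi * sc.epsilon_0                    # SI convention
r_e      = sc.e**2 / (four_pi_eps0 * sc.m_e * sc.c**2)       # classical electron radius
lambda_C = sc.hbar / (sc.m_e * sc.c)                         # reduced Compton wavelength
a_0      = four_pi_eps0 * sc.hbar**2 / (sc.m_e * sc.e**2)    # Bohr radius

print(f"r_e      = {r_e:.3e} m")       # ~2.82e-15 m
print(f"lambda_C = {lambda_C:.3e} m")  # ~3.86e-13 m
print(f"a_0      = {a_0:.3e} m")       # ~5.29e-11 m

# successive scales differ by one power of the fine-structure constant ~ 1/137
alpha = sc.e**2 / (four_pi_eps0 * sc.hbar * sc.c)
print(r_e / lambda_C, lambda_C / a_0, alpha)
```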
the concept of an inertial reference frame , which actualizes euclidean geometry , is not confined to the statics of hardly deformable solids but extendible to the dynamical phenomena where newtonian mechanics is valid , defining its concept of time . the laws of propagation of electromagnetic disturbances modify newtonian formalism for sufficiently fast free motions within each spatial domain of its validity for slow motions and introduce the extended concept of time by uniting those of newtonian which can exist in different spatial domains of their validity . a boost direction for a pair of inertial reference frames is that spatial direction in one of the frames along which the other frame moves . free motions of point particles make an instrumentation for identifying the boost direction as well as events on a straight line along that direction . the concept of a boost direction secures the physics - based formulation of the basic relativity effects : the time dilation and retardation , the contraction of the length along and the spatial invariance across the direction of relative motion of two frames . eventually , that formulation results in the relation between two arbitrary frames in terms of their position vectors and time moments for a given event . the obtained transformation rules for the components of the position vector differ from the vector - like relationship known in the literature because the latter actually deals with column vectors made of cartesian coordinates of true vectors and appears identical to a boost coordinate transformation . within the physics - based approach , addressing the transformation of coordinates implies specifying some coordinate systems in each frame in terms of physical objects / directions . this yields a logically consistent and physically meaningful presentation of the coordinate transformations commonly exploited in the special relativity theory , which makes observable effects associated with those transformations evident . in particular , for a cartesian coordinate system subjected to a boost coordinate transformation , the coordinate - free technique of reasoning allows one to evaluate its instantaneously observable apparent distortion easily .
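The instantaneously observable apparent distortion mentioned in the last sentence can be written down directly for a rod: only the component parallel to the relative velocity is Lorentz-contracted, so the rod appears shortened and tilted away from the direction of motion. A minimal sketch (c = 1; "observed" here means simultaneous positions in the observing frame, not a photograph, since light-travel delays are ignored):

```python
import numpy as np

def observed_rod(length, theta, beta):
    """Instantaneously observed shape, in frame A, of a rod at rest in frame B.

    theta: angle between the rod and the relative velocity, measured in B;
    beta:  speed of B relative to A (c = 1).  Only the component parallel to
    the motion is Lorentz-contracted; the transverse component is unchanged.
    """
    gamma = 1.0 / np.sqrt(1.0 - beta**2)
    par, perp = length * np.cos(theta), length * np.sin(theta)
    par_obs = par / gamma                      # length contraction along the motion
    return np.hypot(par_obs, perp), np.arctan2(perp, par_obs)

L_obs, th_obs = observed_rod(1.0, np.radians(30.0), beta=0.8)
print(L_obs, np.degrees(th_obs))   # shorter than 1, and tilted away from the velocity
# the observed angle satisfies tan(theta_A) = gamma * tan(theta_B)
```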
in beam - driven techniques , a high - charge `` drive '' bunch passes through a high - impedance medium and experiences a decelerating field . the resulting energy loss can be transferred to a properly delayed `` witness '' bunch trailing the drive bunch .a critical parameter associated to this class of acceleration method is the transformer ratio where is the maximum accelerating field behind the drive bunch , and is the maximum decelerating field within the drive bunch . generally the transformer ratio is limited to values due to the fundamental beam - loading theorem .however larger values can be produced using drive bunches with tailored ( asymmetric ) current profiles .furthermore , it can be shown that both and for a given charge are maximized when the decelerating field over the drive bunch is constant .additionally , bunch current profiles that minimize the accumulated energy spread within the drive bunch are desirable as they enable transport of the drive bunch over longer distances .+ to date , several current profiles capable of generating transformer ratios have been proposed .these include linearly ramped profiles combined with a door - step or exponential initial distribution .more recently a piecewise double - triangle " current profile was suggested as an alternative with the advantage of being experimentally realizable . a main limitation common to all these shapes resides in their discontinuous character which make their experimental realization either challenging or relying on complicated beam - manipulation techniques . in addition these shapesare often foreseen to be formed in combination with an interceptive mask which add further challenges when combined with high - repetition - rate linacs .+ in this paper we introduce several smooth current profiles which support large transformer ratios and lead to quasi - constant decelerating fields across the drive bunch .we describe a simple scheme for realizing one of these shapes in a photoemission radiofrequecy ( rf ) electron source employing a shaped photocathode - laser pulse .finally , we discuss a possible injector configuration that could form drive bunches consistent with the multi - user free - electron laser ( fel ) studied in ref .for simplicity we consider a wakefield - assisting medium ( e.g. a plasma or a dielectric - lined waveguide ) that supports an axial wakefield described by the green s function where is the loss factor and with being the wavelength of the considered mode . here ( in our convention ) is the distance behind the source particle responsible for the wakefield . in this sectionwe do not specialize to any wakefield mechanism and recognize that , depending on the assisting medium used to excite the wakefield , many modes might be excited so that the green s function would consequently consist of a summation over these modes . given the green s function , the voltage along and behind a bunch with axial charge distribution can be obtained from the convolution we take to be non vanishing on two intervals ] and zero elsewhere . in our convention the bunch head starts at and the tail population lies at .we also constrain our search to functions such that and are continuous at . introducing the function ( to be specified later ), we write the charge distribution as based on our previous work we first consider the following function where and are positive constants , is again the spatial frequency seen above , and is an integer . 
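A direct numerical illustration of the convolution above: for a single cosine mode the wake voltage along and behind the bunch follows from summing the Green's function over the charge ahead, and the transformer ratio is the ratio of the extreme fields behind and within the bunch. The sketch below (the loss factor, wavelength, and the linearly ramped test profile are illustrative choices, not the paper's parameters, and the sign convention is ours) already shows a ramped bunch exceeding the R <= 2 limit of symmetric bunches; for a ramp spanning two mode wavelengths the printed ratio should come out near k*L/2, about 2*pi.

```python
import numpy as np

def wake_potential(z, lam, k_mode, kappa=1.0):
    """Wake voltage along and behind a line-charge density lam on the grid z,
    for a single-mode Green's function W(zeta) = 2*kappa*cos(k_mode*zeta), zeta >= 0.

    Head of the bunch at z = 0, tail at larger z; signs chosen so that negative
    values mean deceleration (conventions are illustrative, not the paper's).
    """
    dz = z[1] - z[0]
    V = np.zeros_like(z)
    for i in range(z.size):
        zeta = z[i] - z[:i + 1]                    # distances to the charge ahead
        V[i] = -2.0 * kappa * np.sum(lam[:i + 1] * np.cos(k_mode * zeta)) * dz
    return V

k_mode = 2 * np.pi                   # mode wavelength = 1 in these arbitrary units
z = np.linspace(0.0, 6.0, 3000)
inside = z <= 2.0                    # the bunch occupies 0 <= z <= 2 (two wavelengths)
lam = np.where(inside, z, 0.0)       # a simple linear ramp
lam /= lam.sum() * (z[1] - z[0])     # normalize to unit total charge

V = wake_potential(z, lam, k_mode)
E_dec = -V[inside].min()             # maximum decelerating field within the bunch
E_acc = V[~inside].max()             # maximum accelerating field behind it
print("transformer ratio R =", E_acc / E_dec)   # well above 2 for a ramped bunch
```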
consequently , using eq .[ eq : genramp ] , the axial bunch profile is written as in this section we report only on solutions pertaining to .additional , albeit more complicated , solutions also exist for larger ; however , these solutions lead to additional oscillations which ultimately lowers the transformer ratio .+ from eq .[ eq : wakepot ] , the decelerating field then takes the form the oscillatory part in the tail ( ) can be eliminated under the condition which leads to the following decelerating and accelerating fields respectively finally , the transformer ratio can be calculated by taking the ratio of the maximum accelerating field ( see appendix [ app : a ] ) over the maximum decelerating field which yields two sets of solutions occur for even and odd which can be interpreted as a phase shift in the oscillatory part . additionally ,larger multiples of even and odd lead to more oscillations in the head which ultimately reduce the transformer ratio . in fig .[ fig : sinramp ] we illustrate the simplest even ( a ) and odd ( b ) solutions corresponding to and respectively .( shaded line ) with the corresponding induced voltages .the parameters are , and plots ( a ) and ( b ) respectively correspond to the cases and .the head of the bunch is at .[ fig : sinramp],scaledwidth=48.0% ] we now consider an even simpler quadratic shape which was inspired by our previous work which leads to the current profile the resulting decelerating field within the bunch is -\sin(kz)+2k \xi}{k^3 } & \text{if , } \\ 0 & \text{elsewhere . } \end{cases } \nonumber\end{aligned}\ ] ] again , the decelerating field can be made constant for $ ] when with .in such a case the previous equation simplifies to ( shaded line ) with corresponding induced voltage .the parameters are , and .the head of the bunch is at .[ fig : quadratic],scaledwidth=45.0% ] the accelerating field trailing the bunch is , \end{aligned}\ ] ] yielding the transformer ratio ^{1/2}. \end{aligned}\ ] ] in fig .[ fig : quadratic ] we illustrate an example of the quadratic shape ( green trace ) as well as its corresponding longitudinal electric field ( blue trace ) for and .we now turn to compare the smooth longitudinal shapes from the previous section with the doorstep and double - triangle which also provide constant decelerating fields over the bunch - tail ( see appendix [ app : b ] for our formulation of these distributions ) . for a fair comparison ,we stress the importance of comparing the various current profiles with equal charge .consequently , we normalize each of the current profile to the same bunch charge where is the scaling parameter associated with each bunch shape ( see section [ sec : theory ] and appendix [ app : b ] ) , and is the total bunch length which is assumed to be larger than the given shape s bunch - head length ( ) . for each distribution , the charge normalization generates a relationship between and which enables us to rexpress in terms of and . in tab .[ tab : rcompare ] we tabulate the analytical results for ( the conventional notation ) and , and also list the maximum decelerating field for each distribution . additionally in fig .[ fig : evranalytic ] we illustrate these results in a log - log plot where , for each distribution , the scaling parameter ( ) was varied for a fixed charge and wavelength . 
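Continuing the previous sketch, the equal-charge comparison described here can be mimicked with a few generic smooth stand-in shapes (illustrative only, not the paper's quadratic or sinusoidal-ramp distributions): normalize each profile to the same charge, compute its wake, and read off the maximum decelerating field and the transformer ratio.

```python
# reuses z, inside, k_mode and wake_potential() from the previous sketch
dz = z[1] - z[0]
shapes = {
    "gaussian":       np.exp(-0.5 * ((z - 1.0) / 0.3) ** 2) * inside,
    "linear ramp":    np.where(inside, z, 0.0),
    "parabolic ramp": np.where(inside, z ** 2, 0.0),
}
for name, lam in shapes.items():
    lam = lam / (lam.sum() * dz)                 # same unit charge for every shape
    V = wake_potential(z, lam, k_mode)
    E_dec, E_acc = -V[inside].min(), V[~inside].max()
    print(f"{name:15s} E_dec = {E_dec:6.3f}   R = {E_acc / E_dec:5.2f}")
```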
to complete our comparisonwe also added the linear - ramp and gaussian distributions .the results indicate that all of the distributions with constant decelerating fields over the bunch - tail ` live ' on the same curve ; additionally , by varying the scaling parameter for a distribution , you can shift a distribution to have a larger ( resp .smaller ) ( resp . ) and vice - versa .ultimately , this suggests that the distribution which is simplest to make is as useful as any other and it can be scaled accordingly ( , ) for a specific application .these results confirm our previous studies regarding the numerical investigation of the trade - off between and . [cols="<,^,^,>",options="header " , ] + we finally note that the generated current profiles are capable of supporting electric fields and transformer - ratios in a dlw structure with performances that strike a balance between the two cases listed as case 1 " and case 2 " in table 1 of ref . ; see tab .[ tab : dlwperf ] .a simple estimate indicates that our drive bunch would require a dwfa linac of m in order to accelerate an incoming 350-mev witness bunch to a final energy of gev .in conclusion , we have presented a set of smooth current profiles for beam - driven acceleration which display comparable performances with more complex discontinuous shapes discussed in previous work .we find that all proposed current profiles which lead to uniform decelerating fields `` live '' on the same performance curve and that a given profile can be scaled to a particular accelerating field or transformer ratio .we also presented a simple laser - shaping technique combined with a photoinjector to generate our proposed quadratic current profile .we finally illustrated the possible use of this technique to form an electron bunch with a tailored current profile .the distribution obtained from these start - to - end simulations were shown to result in a transformer ratio and peak accelerating field of mv / m in a dielectric - lined waveguide consistent with the proposal of ref .the method offers greater simplicity over other proposed techniques , e.g. , based on complex phase - space manipulations .finally , we point out that the proposed method could provide bunch shapes consistent with those required to mitigate energy - spread and transverse emittance dilutions due coherent - synchrotron - radiation in magnetic bunch compressors . +we would like to acknowledge members of the anl - lanl - niu working group on dwfa - based short wavelength fel led by j. g. power and a. zholents for useful discussions that motivated the study presented in this paper .thanks r. legg ( jefferson lab ) and j. bisognano ( u. wisconsin ) for providing the 200-mhz quarter - wave field map used in section [ sec : linac ] .this work was supported by the u.s .department of energy contract no .de - sc0011831 to northern illinois university , and the defense threat reduction agency , basic research award # hdtra1 - 10 - 1 - 0051 , to northern illinois university .work is also supported by the u.s .department of energy under contract de - ac02 - 07ch11359 with the fermi research alliance , llc , and f.l . 
was partially supported by a dissertation - completion award granted by the graduate school of northern illinois university .the accelerating field behind the bunch often assumes the functional form the procedure to evaluate the transformer ratio entails to determining the maximum value of .such a value if found by solving for =0 , \end{aligned}\ ] ] with solution given by squaring the previous equation , it is straightforward to show that expressing the value of using the previous equation in [ eq : cossin ] leads to the maximum value of the latter equation is used at several instances throughout section [ sec : theory ] .in this appendix we summarize and rewrite in notations consistent with our section [ sec : theory ] the equations describing the linear ramp and double - triangle current profiles .these equations are the ones used in section [ sec : theory - comp ] . for both cases, leads to the flat decelerating fields over the tail of distribution and leads to the and tabulated and illustrated in tab . [tab : rcompare ] and fig .[ fig : evranalytic ] respectively .99 p. chen , j.m .dawson , robert w. huff , t. katsouleas , phys .lett . * 54 * , 693 ( 1985 ) .g. a. voss , and t. weiland , particle acceleration by wakefields " , report desy - m-82 - 10 available from desy hamburg ( 1982 ) .w. gai , p. schoessow , b. cole , r. konecny , j. norem , j. rosenzweig , and j. simpson , phys .lett . * 61 * , 2756 ( 1988 ) .r. d. ruth , a. chao , p. l. morton , p. b. wilson , part .* 17 * , 171 ( 1985 ) .v. v. tsakanov , nucl .instrum . meth .a , * 432 * , 202- ( 1999 ) . c. jing , a. kanareykin , j. g. power , m. conde , z. yusof , p. schoessow , and w. gai , phys .lett . * 98 * , 144801 ( 2007 ) .power , w. gai , and p. schoessow , phys .e , * 60 * , 6061 ( 1999 ) .k. l. f. bane , p. chen , p. b. wilson , on collinear wakefield acceleration , " slac - pub-3662 ( 1985 ) .b. jiang , c. jing , p. schoessow , j. power , and w. gai , phys . rev .beams * 15 * , 011301 ( 2012 ) .p. muggli , v. yakimenko , m. babzien , e. kallos , and k. p. kusche , phy .* 101 * , 054801 ( 2008 ) .p. piot , y .- e sun , j. g. power , and m. rihaoui , phys . rev .beams * 14 * , 022801 ( 2011 ) . y .- e sun , p. piot , a. johnson , a. h. lumpkin , t. j. maxwell , j. ruan , and r. thurman - keup , phys .lett . * 105 * , 234801 ( 2010 ) .g. ha , m.e .conde , w. gai , c .- j .jing , k .- j .kim , j.g .power , a. zholents , m .- h .cho , w. namkung , c .- j .jing , in proceedings of the 2014 international particle accelerator conference ( ipac14 ) , dresden germany , 1506 ( 2014 ) .a. zholents , w. gai , r. limberg , j. g. power , y .- e sun , c. jing , a. kanareykin , c. li , c. x. tang , d. yu shchegolkov , e. i. simakov , a collinear wakefield accelerator for a high - repetition - rate multi - beamline soft x - ray fel facility , " in proceedings of the 2014 free - electron laser conference ( fel14 ) , 993 ( 2014 ) .a. chao , _ physics of collective instabilities in high - energy accelerators _ , wiley series in beams & accelerator technologies , john wiley and sons ( 1993 ) .f. lemery and p. piot , alternative shapes and shaping techniques for enhanced transformer ratios in beam driven techniques , " in proceedings of the 16th advanced accelerator concepts workshop ( aac 2014 ) , san jose , ca , july 13 - 18 , 2014 ( in press ) ; also fermilab preprint fermilab - conf-14 - 365-ad ( 2014 ) .g. 
andonian , title holder , " in proceedings of the 16th advanced accelerator concepts workshop ( aac 2014 ) , san jose , ca , july 13 - 18 , 2014 ( in press ) . f. lemery , d. mihalcea , and p. piot , in proceedings of ipac2012 , new orleans , louisiana , usa , 3012 ( 2012 ) .d. t. palmer , r. h. miller , h. winick , x.j .wang , k. batchelor , m. woodle , and i. ben - zvi , microwave measurements of the bnl / slac / ucla 1.6-cell photocathode rf gun `` , in proceedings of the 1995 particle accelerator conference , pac95 ( dallas , tx , 1995 ) , 982 ( 1996 ) .k. flttmann , _ astra : a space charge algorithm , user s manual _ , available from the world wide web at http://www.desy.de//astradokumentation ( unpublished ) . f. lemery and p. piot , n proceedings of the 2014 international particle accelerator conference ( ipac14 ) , dresden germany ,1454 ( 2014 ) . o. j. luiten , s. b. van der geer , m. j. de loos , f. b. kiewiet , and m. j. van der wiel phys .93 * , 094802 ( 2004 ) .a. m. wiener , rev .* 71 * , 1929 ( 2000 ) .j. w. gibbs , nature * 59 * ( 1539 ) , 606 ( 1899 ) .p. piot , y .- e sun , t. j. maxwell , j. ruan , e. secchi , j. c. t. thangaraj , phys . rev .beams * 16 * , 010102 ( 2013 ) .g. ferrini , p. michelato , and f. parmigiani , solid state commun . *106 * , 21 ( 1998 ) .g. penco , m. trov and s. m. lidia , in proceedings of fel 2006 , bessy , berlin , germany , 621 ( 2006 ) .g. penco , m. danailov , a. demidovich , e. allaria , g. de ninno , s. di mitri , w. m. fawley , e. ferrari , l. giannessi , and m. trov , phys .* 112 * , 044801 ( 2014 ) .b. aunes , et al ., phys . rev .beams * 3 * , 092001 ( 2000 ) .r. legg , w. graves , t. grimm , and p. piot , in proceedings of the 2008 european particle accelerator conference ( epac08 ) , genoa , italy , 469 ( 2008 ) .j. bisognano , m. bissen , r. bosch , m. efremov , d. eisert , m. fisher , m. green , k. jacobs , r. keil , k. kleman , g. rogers , m. severson , d. d. yavuz , r. legg , r. bachimanchi , c. hovater , t. plawski , t. powers , in proceedings of the 2013 north - american particle accelerator conference ( napac13 ) , pasadena , usa , 622 ( 2013 ) . j. bisognano , r. a. bosch , d. eisert , m. v. fisher , m. a. green , k. jacobs , k. j. kleman , j. kulpin , g. c. r. edit , in proceedings of 2011 particle accelerator conference ( pac11 ) , new york , ny , usa , 2444 ( 2011 ) .k. flttmann , t. limberg , and p. piot , '' generation of ultrashort electron bunches by cancellation of nonlinear distortions in the longitudinal phase space , " desy report tesla fel 2001 - 06 , available from desy , hamburg germany ( 2001 ) .n. solyak , i. gonin , h. edwards , m. foley , t. khabiboulline , d. mitchell , j. reid , l. simmons , in proceedings of the 2003 particle accelerator conference ( pac03 ) , portland , or , usa , 1213 ( 2003 ) .m. borland , and h. shang , geneticoptimizer , private communication ( 2005 ) .p. emma , bunch compressor options for the new tesla parameters " , internal unpublished report dapnia / sea-98 - 54 available from service des accl rateurs , cea saclay , france ( 1998 ) t. limberg , ph .piot and f. stulle , in proceedings of the 2002 european particle accelerator conference ( epac2002 ) , paris france , 1544 ( 2002 ) .p. piot , c. behrens , c. gerth , m. dohlus , f. lemery , d. mihalcea , p. stoltz , m. vogt , phys .* 108 * , 034801 ( 2012 ) .m. rosing , and w. gai , phys .d * 42 * , 1829 ( 1990 ) . c. mitchell , j. qiang , and p. emma , physbeams * 16 * , 060703 ( 2013 ) .
collinear high - gradient beam - driven wakefield methods for charged - particle acceleration could be critical to the realization of compact , cost - efficient accelerators , e.g. , in support of tev - scale lepton colliders or multiple - user free - electron laser facilities . to make these options viable , the high accelerating fields need to be complemented with large transformer ratios , a parameter characterizing the efficiency of the energy transfer from a wakefield - exciting `` drive '' bunch to an accelerated `` witness '' bunch . while several potential current distributions have been discussed , their practical realization appears challenging due to their often discontinuous nature . in this paper we propose several alternative current profiles that are smooth and that also lead to enhanced transformer ratios . we especially explore a laser - shaping method capable of generating one of the suggested distributions directly out of a photoinjector , and discuss a linac concept that could possibly drive a dielectric accelerator .
the dynamics of a flashing ratchet can be translated into a counterintuitive phenomenon in gambling games which has recently attracted considerable attention .it is the so - called _parrondo s paradox _ consisting of two losing games , a and b , that yield , when alternated , a winning game . in game a , a player tosses a coin and makes a bet on the throw .he wins or loses 1 euro depending on whether the coin falls heads or tails .the probability of winning is ; so game a is fair when and losing when . by losing , winning , and fair games here we mean that the average capital is a decreasing , increasing , and a constant function of the number of turns , respectively .the second game or game b consists of two coins .the player must throw coin 2 if his capital is not a multiple of three , and coin 3 otherwise .the probability of winning with coin 2 is and with coin 3 is .they are called `` good '' and `` bad '' coins respectively .it can be shown that game b is also a losing game if and that makes b a fair game .the rules of both game a and b are depicted in fig .[ fig : rules ] surprisingly , switching between games a and b in a random fashion or following some periodic sequences produces a winning game , for sufficiently small , i.e. , the average of player earnings grows with the number of turns . therefore , from two losing games we actually get a winning game .this indicates that the alternation of stochastic dynamics can result in a new dynamics , which differs qualitatively from the original ones .alternation is either periodic or random in the flashing rachet and in the paradoxical games . on the other hand ,we have recently studied the case of a _ controlled _ alternation of games , where information about the state of the system can be used to select the game to be played with the goal of maximising the capital .this problem is trivial for a single player : the best strategy is to select game a when his capital is a multiple of three and b otherwise .this yields higher returns than any periodic or random alternation .therefore , choosing the game as a function of the current capital presents a considerable advantage with respect to `` blind '' strategies , i.e. , strategies that do not make use of any information about the state of the system , as it is the case of the periodic and random alternation .also , in a flashing ratchet , switching on and off the ratchet potential depending on the location of the brownian particle allows one to extract energy from a single thermal bath , in apparent contradiction with the second law of thermodynamics .this is nothing but a maxwell demon , who operates having at his disposal information about the position of the particle ; and it is the acquisition or the subsequent erasure of this information what has an unavoidable entropy cost , preventing any violation of the second law . whereas a controlled alternation of games is trivial for a single player , interesting and counter - intuitive phenomena can be found in _ collective _ games .we have recently considered a collective version of the original parrondo s paradox . in this model , the game a or b that a large number of individuals play can be selected at every turn .it turns out that blind strategies are winning whereas a strategy which chooses the game with the highest average return is losing . in this paper, we extend our investigation of controlled collective games considering a new strategy based on a majority rule , i.e. 
, on voting .this type of rule is relevant in several situations , such as the modelling of public opinion or the design of multi - layer neural networks by means of _ committee machines _ .we will show that , in controlled games , the rule is very inefficient : if every player votes for the game that gives him the highest return , then the total capital decreases , whereas blind strategies generate a steady gain .the same effect can be found for the capital - independent games introduced in . as mentioned above , for a single player, the majority rule does defeats the blind strategies .the inefficiency of voting is consequently a purely collective effect .the paper is organised as follows . in section [ sec : model ] we present the model and the counter - intuitive performance of the different strategies . in section[ sec : analysis ] we discuss and provide an intuitive explanation of this behaviour . in sec .[ sec : finite ] , we analyse how the effect depends on the number of players . in section [ sec : history ] , we extend these ideas to the capital - independent games introduced in . finally , in sec . [sec : conclusions ] we present our main conclusions .the model consists of a large number of players . in every turn, they have to choose one of the two original parrondo games , described in the introduction and in fig .[ fig : rules ] .then _ every _ individual plays the selected game against the casino .we will consider three strategies to achieve the collective decision . _a ) _ the _ random strategy _, where the game is chosen randomly with equal probability . _b ) _ the _ periodic strategy _ , where the game is chosen following a given periodic sequence . the sequence that we will use throughout the paper is since it is the one giving the highest returns . _c ) _ the _ majority rule ( mr ) strategy _ , where every player votes for the game giving her the highest probability of winning , with the game obtaining the most votes being selected . the model is related to other extensions of the original parrondo games played by an ensemble of players , such as those considered by toral . however , in our model the only interaction among players can occur when the collective decision is made .once the game has been selected , each individual plays , in a completely independent way , against the casino .moreover , in the periodic and random strategies there is no interaction at all among the players , the model being equivalent to the original parrondo s paradox with a single player .the mr makes use of the information about the state of the system , whereas the periodic and random strategies are blind , in the sense defined above . oneshould then expect a better performance of the mr strategy .however , it turns out that , for large , these blind strategies produce a systematic winning whereas the mr strategy is losing .this is shown in figure [ capital ] , where the capital per player as a function of the number of turns is depicted for the three strategies and an infinite number of players ( see appendix [ app ] for details on how to obtain fig .[ capital ] ) . and the three strategies discussed in the text ., width=264 ]how many players vote for each game ? the key magnitude to answer this question and to explain the system s behaviour is , the fraction of players whose money is a multiple of three at turn .this fraction of players vote for game a in order to avoid the bad coin in game b. on the other hand , the remaining fraction vote for game b to play with the good coin . 
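A small simulation makes the counter-intuitive ranking of the three strategies easy to reproduce. Since the numerical parameter values are not shown in this text, the sketch below assumes the canonical Parrondo parameters (game A: 1/2 - eps; game B: 3/4 - eps and 1/10 - eps for the good and bad coins) with eps = 0.005, and an AABB periodic sequence chosen purely for illustration, which need not coincide with the paper's optimal sequence:

```python
import numpy as np

EPS = 0.005                                   # assumed; the paper's value is not shown here
P_A, P_BAD, P_GOOD = 0.5 - EPS, 0.10 - EPS, 0.75 - EPS
rng = np.random.default_rng(0)

def play_collective(strategy, n_players=1000, n_turns=500):
    """One realization: at every turn a single game is selected for all players."""
    capital = np.zeros(n_players, dtype=int)
    for t in range(n_turns):
        mult3 = capital % 3 == 0
        game = strategy(t, mult3)
        p_win = np.full(n_players, P_A) if game == "A" else np.where(mult3, P_BAD, P_GOOD)
        capital += np.where(rng.random(n_players) < p_win, 1, -1)
    return capital.mean()

majority = lambda t, m: "A" if m.mean() > 0.5 else "B"   # each player votes selfishly
periodic = lambda t, m: "AABB"[t % 4]                    # an assumed sequence, for illustration
random_s = lambda t, m: rng.choice(["A", "B"])

print(play_collective(majority))   # negative: the majority rule loses for large N
print(play_collective(periodic))   # positive
print(play_collective(random_s))   # positive
```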
therefore , if , there are more votes for game a and , if , then game b is preferred by the majority of the players .let us focus now on the behaviour of for when playing both games separately .if game a is played a large number of times , tends to 1/3 because the capital is a symmetric and homogenous random walk under the rules of game a. on the other hand , if b is played repeatedly , tends to 5/13 .this can be proved by analyzing game b as a markov chain .it is also remarkable that , for , the average return when game b is played is zero .figure [ scheme ] represents schematically the evolution of under the action of each game , as well as the prescription of the mr strategy explained above .now we are ready to explain why the mr strategy yields worse results than the periodic and random sequences . under the action of gamea and game b. the prescription of the mr is also represented.,width=264 ] we see that , as long as does not exceed , the mr strategy chooses game b. however , playing b takes closer to , well below , and thus more than half of the players vote for game b again . after a number of runs ,the mr strategy gets trapped playing game b forever .then asymptotically approaches 5/13 , and as this happens , game b turns into a fair game when . as a consequence ,the mr will not produce earnings any more , as can be seen in figure [ mayoriae0 ] .( left ) and the capital per player ( right ) for , for the mr and random strategies .the mr chooses game b when is below the straight line depicted at 1/2 and game a otherwise ., width=491 ] the introduction of turns game b into a losing game if played repeatedly .consequently , the mr strategy becomes a losing one as in figure [ capital ] . to overcome this losing tendency, the players must sacrifice their short - range profits , not only for the benefit of the whole community but also for their own returns in the future .hence , some kind of cooperation among the players is needed to prevent them from losing their capital .a similar effect has been found by toral in another version of collective parrondo s games . there , sharing the capital among players induces a steady gain . in our case , the striking result is that no complex cooperation is necessary .it is enough that the players agree to vote at random .in the previous analysis an infinite number of players has been considered . remarkably , for just one player the mr strategy trivially performs better than any periodic or random sequence , since it completely avoids the use of the bad coin . in this sectionwe analyse the crossover between the winning behaviour for a small number of players and the losing behaviour when this number is large .figure [ figura5 ] shows numerical results of the average capital per player for an increasing number of players ranging from 10 to 1000 . 
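The two limiting fractions quoted here are stationary distributions of the capital-mod-3 Markov chain, and the value 5/13 (together with the fairness of game B at the critical parameter value) can be checked in a few lines. The coin probabilities 1/10 and 3/4 are the canonical Parrondo values, assumed here because the text does not show them:

```python
import numpy as np

# capital mod 3 as a Markov chain under repeated play of game B, at eps = 0
p_bad, p_good = 0.10, 0.75
P_B = np.array([[0.0,        p_bad,      1 - p_bad],
                [1 - p_good, 0.0,        p_good   ],
                [p_good,     1 - p_good, 0.0      ]])

w, v = np.linalg.eig(P_B.T)                 # stationary distribution = left eigenvector
pi = np.real(v[:, np.argmin(np.abs(w - 1.0))])
pi /= pi.sum()
print(pi[0], 5 / 13)                        # fraction of players with capital multiple of 3

p_win = np.array([p_bad, p_good, p_good])
print(2 * pi @ p_win - 1)                   # ~0: repeated game B is fair at eps = 0
# the same computation with both coins fair (1/2) gives pi[0] = 1/3, the game-A limit
```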
, 50 , 100 , and 1000 players , , and the three different strategies .the simulations have been made over a variable number of realizations , ranging from 10000 realizations for to 10 realizations for .simulations for the random and periodic strategies have been made with players and averaging over 100 realizations .for these blind strategies , the result does not depend on the number of players .,width=377 ] one can observe that , the larger the number of players , the worse the results for the mr strategy , becoming losing for a number of players between 50 and 100 .the above discussion for an infinite ensemble allows us to give a qualitative explanation .the difference between large and small is the magnitude of the fluctuations of around its expected value .if game b is chosen a large number of times in a row , then the expected value of is 5/13 . on the other hand ,the mr selects b unless is above 1/2 .therefore , for the mr to select a , fluctuations must be of order . for players ,the fraction of players with capital multiple of three , , will be a random variable following a binomial distribution , at least if b has been played a large number of times in a row .if the expected value of is 5/13 , fluctuations of around this value are of order .then , fluctuations will allow the mr strategy to choose a if . far above this value ,fluctuations that drive above 1/2 are very rare , and mr chooses b at every turn . on the other hand , for around or below 20 , there is an alternation of the games that can even beat the optimal periodic strategy .we see that the mr strategy can take profit of fluctuations much better than blind strategies , but it loses all its efficiency when these fluctuations are small .we believe that this is closely related to the second law of thermodynamics .the law prohibits any decrease of entropy only in the thermodynamic limit or for average values .on the other hand , when fluctuations are present , entropy can indeed decrease momentarily and this decrease can be exploited by a maxwell demon .a similar phenomenon is exhibited by the games introduced in ref . , whose rules depend on the history rather than on the capital of each player . gamea is still the same as above , whereas game b is played with three coins according to the following table : [ cols="^,^,^,^ " , ] with , , and . introducing a large number of players but allowing just a randomly selected fraction of them to vote and play , the same `` voting paradox '' is recovered for sufficiently small . again, blind strategies achieve a constant growth of the average capital with the number of turns while the mr strategy returns a decreasing average capital , as it is shown in figure [ histo ] . , and three different strategies.,width=302 ]we have shown that the paradoxical games based on the flashing ratchet exhibit a counterintuitive phenomenon when a large number of players are considered . a majority rule based on selfishvoting turns to be very inefficient for large ensembles of players .we have also discussed how the rule only works for a small number of players , since in that case it is able to exploit capital fluctuations .the interest of the model presented here is threefold .first of all , it shows that cooperation among individuals can be beneficial for everybody . in this sense, the model is related to that presented by toral in ref . . 
sincejohn maynard smith first applied game theory to biological problems , games have been used in ecology and social sciences as models to explain social behaviour of individuals inside a group .some generalizations of the voting model might be useful for this purpose .for instance , it could be interesting to analyse the effect of mixing selfish and cooperative players or the introduction of players who could change their behaviour depending on the fraction of selfish voters in previous turns .secondly , the effect can also be relevant in random decision theory or the theory of stochastic control since it shows how periodic or random strategies can be better than some kind of optimization . in this sense , there has been some work on general adaptive strategies in games related with parrondo s paradox .thirdly , this model and , in particular , the analysis for finite , prompts the problem of how information can be used to improve the performance of a system . in the models presented here , information about the fluctuations of the capital is useful only for a small number of players , that is , when these fluctuations are significant. it will be interesting to analyse this crossover in further detail , not only in the case of the games but also for brownian ratchets .work in this direction is in progress .the authors gratefully acknowledge fruitful discussions with christian van den broeck , who suggested us the introduction of the mr strategy .we also thank h. leonardo martnez for valuable comments on the manuscript .this work has been financially supported by grant bfm2001 - 0291-c02 - 02 from ministerio de ciencia y tecnologa ( spain ) and by a grant from the _ new del amo program _( universidad complutense ) .in this section we describe the semi - analytical solution of the model for an infinite number of players , used to depict fig . [ capital ] .let , be the fraction of players whose capital at turn is of the form with and an integer number .if game a is played in turn , these fractions change following the expression : which can be written in a vector notation as : similarly , when b is played , the evolution is given by : with notice that the mr strategy is the only one inducing a nonlinear evolution in the population fractions . to calculate the evolution of the capital ,we compute the winning probability in each game : finally , the average capital per player evolves as : and is replaced by or , depending on the game played at turn in each strategy .
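A sketch of this semi-analytical procedure in the infinite-player limit: the fractions of players with capital congruent to 0, 1, 2 (mod 3) evolve deterministically under the transition matrix of the selected game, and the average capital accumulates the corresponding mean gain at each turn. Parameter values are assumed as before (canonical Parrondo coins, eps = 0.005); the deterministic evolution reproduces the losing majority rule and the winning random strategy:

```python
import numpy as np

EPS = 0.005                                  # assumed value; the paper's epsilon is not shown
P_A, P_BAD, P_GOOD = 0.5 - EPS, 0.10 - EPS, 0.75 - EPS

def transition(p0, p1):
    """Markov matrix for the capital mod 3, win probability p0 in state 0 and p1 otherwise."""
    return np.array([[0.0,    p0,     1 - p0],
                     [1 - p1, 0.0,    p1    ],
                     [p1,     1 - p1, 0.0   ]])

PI_A, PI_B = transition(P_A, P_A), transition(P_BAD, P_GOOD)

def evolve(select, n_turns=500):
    """Infinite-player limit: fractions x = (x0, x1, x2) evolve deterministically."""
    x = np.array([1.0, 0.0, 0.0])            # every player starts with capital 0
    capital, history = 0.0, []
    for _ in range(n_turns):
        if select(x) == "A":
            p0, p1, P = P_A, P_A, PI_A
        else:
            p0, p1, P = P_BAD, P_GOOD, PI_B
        capital += 2 * (x[0] * p0 + (1 - x[0]) * p1) - 1   # mean gain this turn
        x = x @ P                                          # updated fractions
        history.append(capital)
    return np.array(history)

rng = np.random.default_rng(7)
mr  = evolve(lambda x: "A" if x[0] > 0.5 else "B")         # majority rule
rnd = evolve(lambda x: rng.choice(["A", "B"]))             # random strategy
print(mr[-1], rnd[-1])    # MR ends below zero, the random strategy well above it
```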
we study a modification of the so - called parrondo s paradox where a large number of individuals choose the game they want to play by voting . we show that it can be better for the players to vote randomly than to vote according to their own benefit in one turn . the former yields a winning tendency while the latter results in steady losses . an explanation of this behaviour is given by noting that selfish voting prevents the switching between games that is essential for the total capital to grow . results for both finite and infinite numbers of players are presented . it is shown that the extension of the model to the history - dependent parrondo s paradox also displays the same effect . keywords : parrondo s paradox , majority rule , brownian ratchets . pacs : 02.50.-r , 05.40.-a , 87.23.ge
the _ gambler s ruin problem _ is a classical subject of probability theory . consider a random walker on a one - dimensional lattice hopping to the right with probability and left with in a single time step .the problem seeks statistical properties of duration , the time at which the walker from first hits the origin .( of course , is a random variable . ) not only a model of bankruptcy and gambling , but queuing theory and genetic algorithm are concerned with the gambler s ruin problem .a standard textbook of probability theory gives detailed instructions on this problem .the central quantity is the probability that the walker at position first hits the origin after time . as the solution of the equation with initial and boundary conditions and ( ) , the coefficient before is the number of different paths from hitting the origin first at time , and it is connected with the reflection principle of a random walk .obviously , holds when , because . in statistical physics , a similar problem is called the _ first passage problem _ .this is a more general problem than the ruin problem , in that the state space of a random walker ( or a diffusion particle ) is not necessarily a one - dimensional lattice but higher - dimensional spaces and networks .many fields in statistical physics , including reaction - rate theory , neuron dynamics , and economic analysis , have been formulated and analyzed based on first - passage properties .the present paper analyzes an extended form of the classical ruin problem ; a random walker hops to the right with probability , to the left with , and it does not hop with probability .figure [ fig1 ] schematically shows the problem .the only difference from the original ruin problem is that the _ halting _ probability is introduced .the difference seems very small , but the results become quite distinct . in fact , the solution of non halting case ( ) is superseded by a formula involving a gauss hypergeometric function in halting case .moreover , moment analysis and asymptotic ( long - time ) behavior are developed . , and to the left with ; hopping does not occur with probability .( . ) the problem focuses on statistical properties of the duration , which is the time the walker first hits the origin . ]as in the classical problem described in the previous section , let be the probability that the walker starting from has duration . satisfies the equation with initial and boundary conditions and ( ) .however , it is hard to solve this equation directly , and we take an another way by employing a classical result effectively . to calculate ,we classify the walker s paths according to the number of hopping .we count the paths consisting of hops and halts .first , the different patterns of putting halts into steps are given by in total , where ` ' ( not ) comes from the fact that a halting step never comes to the last -th step .next , if we focus on only the hopping steps ( and ignore the halting steps ) , the paths are reduced to those of classical ruin problem ; the probability that the walker from hits the origin after hops is given by .thus the total occurring probability of a path with hops and halts is summing up all , we get in the third equality , the range of summation is changed ; the terms corresponding to have no contribution because . 
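The counting argument above can be verified numerically: a dynamic-programming recursion for the first-passage probability must agree with the sum over the number of hops n, weighted by the C(N-1, N-n) placements of the halts and by the classical (no-halt) first-passage probability. The probabilities below are arbitrary illustrative values, with the leftward (toward-the-origin) probability larger so that the mean duration is finite; the last line also checks the mean duration against the x/(q - p) value derived in the moment analysis below.

```python
import numpy as np
from math import comb

p, q, r = 0.3, 0.5, 0.2          # hop right, hop left (towards the origin), halt
x0, N_max = 3, 400

# (1) direct dynamic programming for the first-passage probability F[t, x]
X = N_max + x0 + 2
F = np.zeros((N_max + 1, X + 1))
F[0, 0] = 1.0                    # "first hit at time 0" only if the walker starts at 0
for t in range(1, N_max + 1):
    F[t, 1:X] = p * F[t - 1, 2:X + 1] + q * F[t - 1, 0:X - 1] + r * F[t - 1, 1:X]

# (2) the halting-sum formula: n hops among N steps, with C(N-1, N-n) ways of
#     inserting the N-n halts (a halt never occupies the last step)
def f_classic(x, n):             # classical (no-halt) first-passage probability
    if n < x or (n - x) % 2:
        return 0.0
    lefts = (n + x) // 2
    return x / n * comb(n, lefts) * q ** lefts * p ** (n - lefts)

def P_halt(x, N):
    return sum(comb(N - 1, N - n) * r ** (N - n) * f_classic(x, n)
               for n in range(x, N + 1))

print(F[25, x0], P_halt(x0, 25))                               # the two agree
print((np.arange(N_max + 1) * F[:, x0]).sum(), x0 / (q - p))   # mean duration ~ x0/(q - p)
```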
in the last equality we change the summation variable as .this result can be also obtained by solving directly the equation with conditions , and ( , ) .we further proceed with calculation ; we rewrite factorials using gamma functions , which is a preparatory step toward a hypergeometric function . employing some formulas of the gamma function ,one obtains ( see appendix [ apdx : a ] for the calculation in detail . )the probability is expressed as where is the gauss hypergeometric function .this is an explicit form of the solution of our problem .this solution can not be deduced from the non - halting solution . the hypergeometric function in eq .is a _ genuine _ hypergeometric function , in the sense that it can not be expressed using simpler functions .( it has been studied that some hypergeometric functions have tractable expressions , e.g. , . ) we comment here on the convergence of the sum in eq .at first sight , it seems to diverge when . by using the pochhammer symbol instead of gamma functions , if , holds automatically because . on the other hand, if , either or becomes zero for .( more precisely , the former becomes zero when and are of same parity , and the latter becomes zero otherwise . )the sum consists of a finite number of terms in reality , so one does not need to worry about the convergence .the exact solution is difficult to understand intuitively .we show numerical evaluation of in fig .the initial position of the walker is fixed as .we separate graphs according to the parameter ; the panels ( a ) , ( b ) , and ( c ) respectively correspond to , , and .( is a key parameter for the average duration , as discussed in the following section . ) is a unimodal function of for each parameter value. power - law behavior is suggested in large when ( see ( d ) ) , which is further discussed in sec . [sec : asymptotic ] . as a function of ( ) .the graphs are separated according to the value of : ( a ) , ( b ) , and ( c ) .the two curves in each panel represent and .long - time behavior of in is shown in ( d ) on logarithmic scales , which suggests a power law ., title="fig : " ] + ( a ) as a function of ( ) .the graphs are separated according to the value of : ( a ) , ( b ) , and ( c ) .the two curves in each panel represent and .long - time behavior of in is shown in ( d ) on logarithmic scales , which suggests a power law ., title="fig : " ] + ( b ) + as a function of ( ) .the graphs are separated according to the value of : ( a ) , ( b ) , and ( c ) .the two curves in each panel represent and .long - time behavior of in is shown in ( d ) on logarithmic scales , which suggests a power law ., title="fig : " ] + ( c ) as a function of ( ) .the graphs are separated according to the value of : ( a ) , ( b ) , and ( c ) .the two curves in each panel represent and .long - time behavior of in is shown in ( d ) on logarithmic scales , which suggests a power law ., title="fig : " ] + ( d )in order to see properties of the random variable , moment analysis is developed in this section .first , we calculate the moment generating function of , defined as calculation process is summarized in appendix [ apdx : b ] , and the result is contains no special functions , so this is simpler than of eq . .remarkably , is described by elementary functions , though its definition is an infinite sum of which each term contains a genuine hypergeometric function remarkably , contains no special functions , though the infinite summation in eq .involves a genuine hypergeometric function via . 
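The power law suggested by panel (d) can be checked directly from the defining recursion: for symmetric hopping the tail of the duration distribution decays as a power of time, and a log-log fit of the numerically computed probabilities gives a slope consistent with -3/2, the standard first-passage exponent (the asymptotic form itself is worked out analytically below; the parameter values here are illustrative):

```python
import numpy as np

p = q = 0.4                       # symmetric hopping
r = 1.0 - p - q                   # halting probability 0.2
x0, N_max, X = 3, 4000, 400       # X: spatial cutoff, ample for this time horizon

F = np.zeros((N_max + 1, X + 1))
F[0, 0] = 1.0
for t in range(1, N_max + 1):
    F[t, 1:X] = p * F[t - 1, 2:X + 1] + q * F[t - 1, 0:X - 1] + r * F[t - 1, 1:X]

N = np.arange(1, N_max + 1)
tail = F[1:, x0]
slope = np.polyfit(np.log(N[500:]), np.log(tail[500:]), 1)[0]
print(slope)                      # close to -1.5, the usual unbiased first-passage exponent
```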
repeatedly differentiating and putting , we have the first and second moments of the duration as the average and the other moments remain finite when , and they diverge to infinity as .the variance of is given by we can give a plain explanation for the average .the walker moves to the left by a length on average in a single hop . in other words, is a mean velocity of the walker toward the origin .hence , it takes time steps on average to reach the origin .the moment generating function looks too complicated to calculate higher moments by differentiation .alternatively , we provide another calculation method for moments of the duration .we start from the difference equation for : multiply and take summation for to get the left - hand side can be expressed as the last equality uses , which means that a random walker surely hits the origin .( this normalization breaks if the discussion of total probability of ruin in sec .[ sec : discussion ] in more detail . )we have a difference equation a particular solution is given by , and the characteristic equation admits the two roots and .thus , the general solution is which has two constants and . in a limiting case , the problem is still well - defined , and should be finite , so . also , it is obvious that , so .the appropriate solution is therefore next we calculate the second moment . multiplying and taking summation , we derive a difference equation where we use for the calculation on the left - hand side .we assume a particular solution in the form , and determine two constants as the second moment is the homogeneous - solution part vanishes for the same reason as the average . in short , this `bottom - up ' method calculates higher moments inductively , and generally becomes a polynomial of degree .for instance , the third moment is calculated as look at long - time behavior of in this section . in particular, we confirm the power - law behavior mentioned in sec .[ sec:2 ] when ( see fig .[ fig2 ] ( d ) for reference ) .a summation and factorial , as well as a hypergeometric function , are not suitable to study limiting behavior , so we first reexpress the probability into a tractable form .( it may also be possible to estimate factorials directly by stirling s formula . ) according to ref . , the solution of the classic ruin problem has another expression this form is convenient in that one does not need to care about the parities of and .in fact , the integral holds the information of parities : in appendix [ apdx : c ] we prove this formula . applying eq . to eq ., we have the integral form of the probability : here we estimate the integral where the integral variable is changed as in the second equality . for large , hence the integrand is a rapidly decreasing function of , and we can extend the upper limit of the integration to infinity . together with the approximation , we can carry out the integration as therefore , the asymptotic form is expressed as by the arithmetic mean - geometric mean inequality , we note with equality holding if and only if ( or by the notation in sec . [ sec:2 ] ) . thus , the long - time behavior of is completely different according to whether equals zero or not . *if ( i.e. , ) , , and hence asymptotically exhibits exponential decay due to . * if ( i.e. , ) , a power law with exponent is concluded ( recall fig . 
[ fig2 ] ( d ) ) .in the present paper , we have solved a gambler s ruin problem where a random walker hops to the right with probability , left with , and does not hop with .we have calculated exactly the probability that the walker starting from position has duration . the average and higher moments of calculated in two ways : by the moment generating function , and by the bottom - up calculation .the asymptotic form of is derived , and a power law is obtained when .we make a brief discussion on the continuum limit , which is an appropriate scaling limit where the step width and time interval of the walker tend to zero .let the step width be and time interval be , and position and time are scaled as and . in the continuum limitwe take , together with are both kept finite . is the mean displacement per time unit , and is the diffusioin coefficient corresponding to a non - halting ( or formally called a _ simple _ )random walk .the probability density function is suitable in the continuum limit , rather than the probability distribution .( ` ' represents the continuum limit . )carrying out calculation similar to that in ref . , is obtained .the probability density is called the inverse gaussian distribution . in the continuum limit ,the halting effect is reflected only upon the diffusion coefficient as . comparing of a discrete problem in eq . and above of a continuum limit , we conclude that the discrete random walk is far more difficult than the continuous diffusion , and that the results about the discrete random walk in this paper can not be attained from continuous diffuision problem . combining eqs ., , and , we get we have not used the condition in the derivation of eqs . and, so , , and can take any value independently .set and , furthermore , set ( i.e. , ) , these formulas are nontrivial , but the author can not tell whether they are useful in practice . we stress that an infinite series of which each term involves a genuine hypergeometric function is hardly known .we comment on the total probability of ruin . for the discrete problem ,putting the moment generating function , if , the gambler surely comes to ruin , but if , the gambler can manage to avoid running out of the bankroll with nonzero probability .the equation describes a traffic jam , where stands for the probability distribution that the jam consists of cars at time , and and are involved in the rates at which cars enter and leave the jam .the scaling behavior with exponent has been observed in the lifetime distribution of jams .we think that results obtained in this paper can become theoretical bases ; in particular , our discrete problem is comparable to a microscopic description of traffic , where discreteness is not negligible .the author is grateful to dr .yoshihiro yamazaki and dr .jun - ichi wakita for their beneficial comments .here we follow the calculation of eq . in detail . employing the duplication formula , we rewrite by euler s reflection formula , note that because is an integer .similarly , is obtained .substituting eqs . and into eq ., then using the duplication formula again as we eventually come to the result follow here the calculation of eq . .substituting eq . into eq . , where a summation variable is changed as in the last equality . applying the negative binomial expansion with and , one obtains and simplifies eq . 
into can transform as follows using the duplication formula , the moment generating function can be expressed by a gauss hypergeometric function it can be expressed by an elementary function ( see ref . ) : this leads to the conclusion .we calculate the integral to prove eq . .we first break up the product of circular functions into the sum . real part is this is just a fourier series expansion of , and is associated with the chebyshev polynomial .moreover , by formulas in trigonometry , thus , since , , and are integers , the first term in the integral is which behaves like the kronecker delta subscript fraction may be unreadable , so we use instead of the standard symbol .the following terms in the integral are also substituted by , , and , respectively .if and have opposite parity ( i.e. , and are odd ) , each kronecker delta vanishes for all integer .otherwise , if and have same parity , each kronecker delta becomes nonzero at some . by picking out such , therefore ,eq . is derived .10 j. scott , j. bank .finance 5 , 317 - 344 ( 1981 ) .a. maitra and w. sudderth , annals of the international society of dynamic games 4 , 251 - 269 ( 1999 ) .j. keilson , oper .11 , 570 - 576 ( 1963 ) .g. r. harik , e. cant - paz , d. e. goldberg , and b. l. miller , evolutionary computation 7 , 231 - 253 ( 1999 ) .w. feller , an introduction to probability theory and its applications , vol . 1 , wiley , new york , 1957 .d. stanton and d. white , constructive combinatorics , springer , new york , 1986 .s. redner , a guide to first - passage processes , cambridge university press , cambridge , 2001 .p. hnggi , rev .62 , 251 - 341 ( 1990 ) .t. verechtchaguina , i. m. sokolov , and l. schimansky - geier , phys .e 73 , 031108 ( 2006 ) .n. sazuka , j. inoue , and e. scalas , physica a 388 , 2839 - 2853 ( 2009 ) .m. abramowitz and i. a. stegun , handbook of mathematical functions , dover , new york , 1970 .m. c. k. tweedie , ann .28 , 362 - 377 ( 1957 ) .k. nagel and m. paczuski , phys .e 51 , 2909 - 2918 ( 1995 ) .a. gil , j. segura , and n. m. temme , numerical methods for special functions , society for industrial and applied mathematics , philadelphia , 2007 .
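as a numerical sanity check on the closed-form results derived above, the halting walk can be simulated directly. the sketch below is a minimal monte carlo check, writing p for the rightward, q for the leftward and r = 1 - p - q for the halting probability and starting the walker at z > 0; the parameter values, the escape threshold and the step cutoff are illustrative, and the quoted ruin probability (q/p)**z for p > q is the classic result rather than a formula taken verbatim from this paper.

```python
import random

def duration(z, p, q, rng, escape=10**9, t_max=10**7):
    """simulate one halting walk from z > 0; return the first-passage time to the
    origin, or None if the walker reaches `escape` (treated as never returning)
    or t_max steps elapse."""
    x, t = z, 0
    while 0 < x < escape and t < t_max:
        u = rng.random()
        if u < p:
            x += 1            # hop right with probability p
        elif u < p + q:
            x -= 1            # hop left with probability q
        # otherwise halt: stay put with probability r = 1 - p - q
        t += 1
    return t if x == 0 else None

rng = random.Random(1)

# leftward drift (q > p): ruin is certain and the mean duration should be z/(q-p)
z, p, q = 5, 0.2, 0.5
runs = [duration(z, p, q, rng) for _ in range(20000)]
runs = [t for t in runs if t is not None]          # the cutoffs are never hit here
print("empirical mean duration:", sum(runs) / len(runs))
print("predicted   z / (q - p):", z / (q - p))

# rightward drift (p > q): the classic total ruin probability is (q/p)**z, and it
# does not depend on the halting probability r (halting changes when, not whether)
z, p, q = 3, 0.5, 0.3
trials = 20000
ruined = sum(duration(z, p, q, rng, escape=z + 300) is not None for _ in range(trials))
print("empirical ruin probability:", ruined / trials)
print("predicted         (q/p)**z:", (q / p) ** z)
```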
this paper treats a kind of gambler's ruin problem, which seeks the probability that a random walker first hits the origin at a certain time. in addition to the usual random walk, which hops either rightwards or leftwards, the present paper introduces a `halt', in which the walker does not hop, occurring with a certain probability. the solution to the problem is obtained exactly in terms of a gauss hypergeometric function. the moment generating function of the duration is also calculated, and a calculation technique for the moments is developed. the author derives the long-time behavior of the ruin probability, which exhibits power-law behavior if the walker hops to the right and left with equal probability. e-mail: yamamoto.chuo-u.ac.jp pacs numbers: 05.40.fb, 02.50.-r _keywords_: first-passage problem; gambler's ruin; hypergeometric function; asymptotic analysis
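the power-law behavior highlighted above can also be checked exactly by propagating the position distribution of the halting walk with an absorbing origin. the sketch below uses the same notation (p right, q left, r = 1 - p - q halt); for the unbiased case p = q the local log-log slope of the tail should approach the classic first-passage exponent -3/2. the cutoffs t_max and x_max are illustrative.

```python
import math

def first_passage_pmf(z, p, q, t_max, x_max=800):
    """evolve the position distribution of the halting walk with an absorbing
    origin; the returned list pt satisfies pt[t] = P(first hit of 0 at step t+1)."""
    r = 1.0 - p - q
    dist = [0.0] * (x_max + 2)
    dist[z] = 1.0
    pt = []
    for _ in range(t_max):
        new = [0.0] * (x_max + 2)
        for x in range(1, x_max + 1):
            w = dist[x]
            if w:
                new[x + 1] += w * p   # hop right
                new[x - 1] += w * q   # hop left
                new[x] += w * r       # halt
        pt.append(new[0])             # mass absorbed at the origin at this step
        new[0] = 0.0
        dist = new
    return pt

# unbiased case p = q: the tail of P(T = t) should decay like t**(-3/2), so the
# local slope of log P(T = t) against log t should approach -1.5 for large t
pt = first_passage_pmf(z=3, p=0.3, q=0.3, t_max=4000)
for t in (500, 1000, 2000, 3999):
    slope = (math.log(pt[t]) - math.log(pt[t // 2])) / (math.log(t + 1) - math.log(t // 2 + 1))
    print(f"t = {t:4d}   local log-log slope: {slope:+.3f}")
```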
the problem of state observability for systems driven by unknown inputs ( ui ) is a fundamental problem in control theory .this problem was introduced and firstly investigated in the seventies .a huge effort has then been devoted to design observers for both linear and nonlinear systems in presence of ui , e.g. , .the goal of this paper is not to design new observers for systems driven by ui but to provide simple analytic conditions in order to check the weak local observability of the state .the obtained results hold for systems whose dynamics are nonlinear in the state and affine in both the known and the unknown inputs .additionally , the unknown inputs are supposed to be smooth functions of time ( specifically , they are supposed to be , for a suitable integer ) . in the observability properties of a nonlinear systemare derived starting from the definition of indistinguishable states . according to this definition ,the lie derivatives of any output computed along any direction allowed by the system dynamics take the same values at the states which are indistinguishable .hence , if a given state belongs to the indistinguishable set of a state ( i.e. , to ) all the lie derivatives computed at and at take the same values .this is a fundamental property . in particular ,based on this property , the observability rank condition was introduced in .our first objective is to extend the observability rank condition . for, we introduce a new definition of _ indistinguishable states _ for the case ui ( section [ sectiondefinitions ] ) .then , in section [ sectionextendedsystem ] we introduce a new system by a suitable state extension . for this extended system , we show that , the lie derivatives of the outputs up to a given order , take the same values at the states which are indistinguishable . in other words, the new system satisfies the same property derived in mentioned above and this allows us to extend the observability rank condition ( section [ sectioneorc ] ) .we will refer to this extension as to the extended observability rank condition ( ) .the new system is obtained by a state augmentation . in particular , the augmented state is obtained by including the unknown inputs together with their time - derivatives up to given order .this augmented state has already been considered in the past .specifically , in the authors adopted this augmented state to investigate the observability properties of a fundamental problem in the framework of mobile robotics ( the bearing slam ) .in particular , starting from the idea of including the time - derivatives of the unknown input in the state , in a sufficient condition for the state observability has been provided .the based on the computation of a codistribution defined in the augmented space . in other words ,the us to check the weak local observability of the original state together with its extension and not directly of the original state .this makes the computational cost dependent on the dimension of the augmented state .additionally , the only provides sufficient conditions for the weak local observability of the original state since the state augmentation can be continued indefinitely . for these reasons ,the paper focuses on the following two fundamental issues : * understanding if it is possible to derive the weak local observability of the original state by computing a codistribution defined in the original space , namely a codistribution consisting of covectors with the same dimension of the original state . 
*understanding if there exists a given augmented state such that , by further augmenting the state , the observability properties of the original state provided by remain unvaried .both these issues have been fully addressed in the case of a single unknown input ( see theorems [ theoremseparation ] and [ theoremstop ] ) .thanks to the result stated by theorem [ theoremseparation ] ( section [ sectionseparation ] ) , the algorithm in definition [ definitionomega ] in section [ sectionseparation ] ( for the case of a single known input ) and in definition [ definitionomegae ] in section [ sectionextension ] ( for the case of multiple known inputs ) can be used to obtain the entire observable codistribution . in other words ,the observability properties of the original state are obtained by a very simple algorithm . as it will be seen , the analytic derivations required to prove theorem [ theoremseparation ] are complex and we are currently extending them to the multiple unknown inputs case .theorem [ theoremstop ] ( section [ sectionstop ] ) ensures the convergence of the algorithm in a finite number of steps and it also provides the criterion to establish that this convergence has been reached .also this proof is based on several tricky and complex analytical steps .both theorems [ theoremseparation ] and [ theoremstop ] are first proved in the case of a single known input ( sections [ sectionseparation ] and [ sectionstop ] ) but in section [ sectionextension ] their validity is extended to the case of multiple known inputs .all the theoretical results are illustrated in section [ sectionapplicationssystem ] by deriving the observability properties of several nonlinear systems driven by unknown inputs .in the sequel we will refer to a nonlinear control system with known inputs ( ^t ] ) .the state is the vector , with an open set of .we assume that the dynamics are nonlinear with respect to the state and affine with respect to the inputs ( both known and unknown ) .finally , for the sake of simplicity , we will refer to the case of a single output ( the extension to multiple outputs is straightforward ) .our system is characterized by the following equations : where , , and , , are vector fields in and the function is a scalar function defined on the open set . for the sake of simplicity , we will assume that all these functions are analytic functions in .let us consider the time interval ] .it is immediate to realize that the product of this gradient by any vector filed in ( [ equationextendedstateevolution ] ) depends at most on , i.e. , it is independent of , .2 cm a simple consequence of this lemma are the following two properties : [ prlieder1 ] let us consider the system .the lie derivatives of the output up to the order along at least one vector among ( ) are identically zero .* proof : * from the previous lemma it follows that all the lie derivatives , up to the are independent of , which are the last components of the extended state in ( [ equationextendedstate ] ) .then , the proof follows from the fact that any vector among ( ) has the first components equal to zero [ prlieder2 ] the lie derivatives of the output up to the order along any vector field , for the system with the same lie derivatives for the system * proof : * we proceed by induction on for any . when we only have one zero - order lie derivative ( i.e. 
, ) , which is obviously the same for the two systems , .let us assume that the previous assert is true for and let us prove that it holds for .if it is true for , any lie derivative up to the order is the same for the two systems . additionally , from lemma [ lmlieder1 ] , we know that these lie derivatives are independent of , .the proof follows from the fact that the first components of , coincide with the first components of , when .2 cm in the sequel we will use the notation : ^t ] .we also denote by original system , i.e. , the one characterized by the state and the equations in ( [ equationstateevolution ] ) . the definition [ defindistinguishablestates ] , given for , can be applied to .specifically , in , two states ] are indistinguishable if , for any ( the known inputs ) , there exist two vector functions and ( the time derivative of two disturbance vectors ) such that , ; ~u ; ~w_a^{(k ) } ) ) = h ( x(t ; ~ [ x_b , \xi_b ] ; ~u ; ~w_b^{(k ) } ) ) ] and ] and ] and ] .thanks to the results stated by propositions [ prlieder2 ] and [ prconstantlieder ] we will introduce the extension of the observability rank condition in the next section .according to the observability rank condition , the weak local observability of the system in ( [ equationstateevolution ] ) with at a given point can be investigated by analyzing the codistribution generated by the gradients of the lie derivatives of its output .specifically , if the dimension of this codistribution is equal to the dimension of the state on a given neighbourhood of , we conclude that the state is weakly locally observable at ( theorem in ) .we can also check the weak local observability of a subset of the state components . specifically , a given component of the state is weakly locally observable at , if its gradient belongs to the aforementioned codistribution if it is constant on the indistinguishable set . ] .the proof of theorem in is based on the fact that all the lie derivatives ( up to any order ) of the output computed along any direction allowed by the system dynamics take the same values at the states which are indistinguishable .let us consider now the general case , i.e. , when . in the extended system ( )we know that the lie derivatives up to the satisfy the same property ( see proposition [ prconstantlieder ] ) .therefore , we can extend the validity of theorem in to our case , provided that we suitably augment the state and that we only include the lie derivatives up to the to build the observable codistribution . in the sequel , we will introduce the following notation : * will denote the observable codistribution for includes all the lie derivatives of the output along up to the order ; * the symbol will denote the gradient with respect to the extended state in ( [ equationextendedstate ] ) and the symbol will denote the gradient only respect to ; * for a given codistribution and a given vector field , we will denote by the codistribution whose covectors are the lie derivatives along of the covectors in ( we are obviously assuming that the dimension of these covectors coincides with the dimension of ) .* given two vector spaces and , we will denote with their sum , i.e. , the span of all the generators of both and . * for a given and a given \in v ] the set of all the states from ] if at ] in .we have the following result , which is the extension of the result stated by theorem in : for , if ( ) satisfies the observability rank condition at ] . 
additionally , remains weakly locally observable by further extending the state ( i.e. , in every system ( ) ) . * proof : * we prove that it exists an open neighbourhood of ] , is constant on the set } ] , it exists some open neighborhood of ] , then proposition [ prconstantlieder ] and remark [ remark ] imply that all the lie derivatives up to the order are constant on the set } ] where the parenthesis ] by a direct computation it is easy to realize that has the last components identically null . in the sequel, we will denote by the vector in that contains the first components of .in other words , ^t ] .we define the codistribution as follows ( see definition [ definitionomegae ] in section [ sectionextension ] for the case when ) : [ definitionomega ] this codistribution is defined recursively by the following algorithm : 1 . ; 2 . note that this codistribution is completely integrable by construction .more importantly , its generators are the gradients of functions that only depend on the original state ( ) and not on its extension . in the sequel , we need to embed this codistribution in .we will denote by ] the codistribution consists of two parts .specifically , we can select a basis that consists of exact differentials that are the gradients of functions that only depend on the original state ( ) and not on its extension ( these are the generators of ]. we have the following result : let us denote with the component of the state ( ) .we have : if and only if * proof : * the fact that implies that is obvious since \subseteq\tilde{\omega}_m ] and are generators of .we want to prove that .we proceed by contradiction .let us suppose that .we remark that the first set of generators have the last entries equal to zero , as for .the second set of generators consists of the lie derivatives of along up to the order .let us select the one that is the highest order lie derivative and let us denote by this highest order .we have . by a direct computation ,it is immediate to realize that this is the only generator that depends on . specifically , the dependence is linear by the product ( we remind the reader that ) . butthis means that has the entry equal to and this is not possible since ] the proof of this theorem is complex and is based on several results that we prove before . based on them , we provide the proof of the theorem at the end of this section ..2 cm [ lemmalemma1 ] * proof : * we have .the first term .hence , we need to prove that .we repeat the previous procedure times .specifically , we use the equality , for , and we remove the first term since [ lemmarisa ] , i.e. , the vector is a linear combination of the vectors ( ) , where the coefficients ( ) depend on the state only through the functions that generate the codistribution * proof : * we proceed by induction . by definition , .* inductive step : * let us assume that .we have : =\sum_{j=1}^{m-1}\left[c_j \left[\begin{array}{c } \phi_j\\ 0_k \\ \end{array } \right],~g\right]=\ ] ] ,~g\right ] - \sum_{j=1}^{m-1}\mathcal{l}_g c_j \left[\begin{array}{c } \phi_j\\ 0_k \\ \end{array } \right]\ ] ] we directly compute the lie bracket in the sum ( note that is independent of the unknown input and its time derivatives ) : ,~g\right]= \left[\begin{array}{c } [ \phi_j,~g ] w\\ 0_k \\ \end{array } \right]=\left[\begin{array}{c } \phi_{j+1 } \mathcal{l}^1_gh\\ 0_k \\ \end{array } \right]\ ] ] regarding the second term , we remark that . 
by setting for and , and by setting for and , we obtain , which proves our assert since is a function of it also holds the following result : [ lemmarisb ] , i.e. , the vector is a linear combination of the vectors ( ) , where the coefficients ( ) depend on the state only through the functions that generate the codistribution * proof : * we proceed by induction . by definition , .* inductive step : * let us assume that .we need to prove that .we start by applying on both members of the equality the lie bracket with respect to .we obtain for the first member : = \hat{\phi}_m \mathcal{l}^1_gh ] . on the other hand , .the first terms are in .hence we have : + \mathcal{l}_f \mathcal{l}^{m-1}_g dh + \mathcal{l}_g \bar{\omega}_{m-1} ] . by using again the induction assumptionwe obtain : + l^{m-1 } + [ \mathcal{l}_f \omega_{m-1},0_k ] + \mathcal{l}_{\phi_{m-1 } } dh + \mathcal{l}_g [ \omega_{m-1},0_k ] + \mathcal{l}_g l^{m-1}=[\omega_{m-1},0_k ] + l^m + [ \mathcal{l}_f \omega_{m-1},0_k ] + \mathcal{l}_{\phi_{m-1 } } dh + [ \mathcal{l}_{\frac{g}{l^1_g } } \omega_{m-1},0_k] ] [ theoremseparation ] is fundamental .it allows us to obtain all the observability properties of the original state by restricting the computation to the codistribution , namely a codistribution whose covectors have the same dimension of the original space . in other words ,the dimension of these covectors is independent of the state augmentation .the codistribution is defined recursively and ( see definition [ definitionomega ] in section [ sectionseparation ] ) .this means that , if for a given the gradients of the components of the original state belong to , we can conclude that the original state is weakly locally observable . on the other hand , if this is not true , we can not exclude that it is true for a larger .the goal of this section is precisely to address this issue .we will show that the algorithm converges in a finite number of steps and we will also provide the criterion to establish that the algorithm has converged ( theorem [ theoremstop ] ). this theorem will be proved at the end of this section since we need to introduce several important new quantities and properties ..2 cm for a given positive integer we define the vector by the following algorithm : 1 . ; 2 . ] it is immediate to repeat all the steps carried out in section [ sectionseparation ] and extend the validity of theorem [ theoremseparation ] to the system characterized by ( [ equationstateevolution1e ] ) .this extension states that all the observability properties of the state that satisfies the nonlinear dynamics in ( [ equationstateevolution1e ] ) can be derived by analyzing the codistribution defined by definition [ definitionomegae ] .finally , also theorem [ theoremstop ] can be easily extended to cope with the case of multiple known inputs . in this case , requiring that means that must be invariant with respect to and all simultaneously ..5 cm we conclude this section by outlining the steps to investigate the weak local observability at a given point of a nonlinear system driven by a single disturbance and several known inputs .in other words , to investigate the weak local observability of a system defined by a state that satisfies the dynamics in ( [ equationstateevolution1e ] ) .the validity of the following procedure is a consequence of the theoretical results previously derived ( in particular theorem [ theoremseparation ] and theorem [ theoremstop ] ) . 1 . for the chosen , compute and . 
in the case when , introduce new local coordinates , as explained at the end of section [ sectionsingle ] and re -define the output functions .the most convenient choice is the one that corresponds to the highest relative degree ( if this degree coincides with it means that the state is weakly locally observable and we do not need to pursue the observability analysis ) . ] .2 . build the codistribution ( at ) by using the algorithm provided in definition [ definitionomegae ] , starting from and , for each , check if .3 . denote by the smallest such that .4 . for each check if and denote by where is the smallest integer such that and ( note that ) . 5 .if the gradient of a given state component ( , ) belongs to ( namely if ) on a given neighbourhood of , then is weakly locally observable at .if this holds for all the state components , the state is weakly locally observable at .finally , if the dimension of is smaller than on a given neighbourhood of , then the state is not weakly locally observable at .we apply the theory developed in the previous sections in order to investigate the observability properties of several nonlinear systems driven by unknown inputs . in [ sectionapplication1 ]we consider systems with a single disturbance , namely characterized by the equations given in ( [ equationstateevolution1e ] ) . in this casewe will use the results obtained in sections [ sectionsingle ] , [ sectionseparation ] , [ sectionstop ] and [ sectionextension ] .in particular , we will follow the steps outlined at the end of section [ sectionextension ] . in [ sectionapplication2 ] we consider the case of multiple disturbances , i.e. , when the state dynamics satisfy the first equation in ( [ equationstateevolution ] ) . in this section, we also consider the case of multiple outputs and we use directly the , as discussed in section [ sectioneorc ] .we consider a vehicle that moves on a -environment .the configuration of the vehicle in a global reference frame , can be characterized through the vector ^t ] and ] .hence , . additionally , .we need to compute and , in order to do this , we need to compute . we obtain : ] .it is immediate to check that , meaning that .additionally , by a direct computation , it is possible to check that meaning that and , whose dimension is .we conclude that the dimension of the observable space is equal to and the state is not weakly locally observable .in this case we have ] we follow the five steps mentioned at the end of section [ sectionextension ] .we easily obtain .hence , we have to introduce new local coordinates , as explained at the end of section [ sectionsingle ] .we obtain and we obtain that the relative degree of the associated system in ( [ equationstateevolutionass ] ) is .let us denote the new coordinates by .in accordance with ( [ equationlocalcoordinates1 ] ) and ( [ equationlocalcoordinates2 ] ) we should set and . on the other hand , to simplify the computation , we set .finally , we set . we compute the new vector fields that characterize the dynamics in the new coordinates .we have : additionally , we set and ,~ [ 0 , -\sin(x_2'),0]\} ] and ] .hence , , meaning that .additionally , by a direct computation , it is possible to check that meaning that and , whose dimension is .we conclude that the dimension of the observable space is equal to and the state is not weakly locally observable .in this case we have ] we follow the five steps mentioned at the end of section [ sectionextension ] .we have and . hence , ] , , \right. 
] .additionally , we obtain , meaning that and , whose dimension is .we conclude that the dimension of the observable space is equal to and the state is not weakly locally observable .in this case we have ] .we follow the five steps mentioned at the end of section [ sectionextension ] .we have and .additionally : \ ] ] we also have \} ] and , ~\frac{1}{\sin^2(\theta_v-\phi)}\left[0 , 1,-1\right]\right\} ] and ] . in the new coordinateswe obtain : and .since depends on , . since the dimension of is already , because of lemma [ lemmarhoinomegam ] , we know that it exists a given integer such that the dimension of is larger than . hence , we conclude that the entire state is weakly locally observable .in this case we refer to the general case , i.e. , to systems characterized by the dynamics given in ( [ equationstateevolution ] ) . for this general case we do not have the results stated by the theorem of separation ( theorem [ theoremseparation ] ) and we have to compute the entire codistribution and to proceed as it has been described in section [ sectioneorc ] .we derive the observability properties of two systems with unknown inputs .the first system characterizes a localization problem in the framework of mobile robotics .the state and its dynamics are the same as in the example discussed in [ sectionapplication1 ] .however , we consider a different output and also the case when both the inputs are unknown . for this simple example , the use of our theory is not required to derive the observability properties , which can be obtained by using intuitive reasoning .the second system is much more complex and describes one of the most important sensor fusion problem , which is the problem of fusing visual and inertial measurements .we will refer to this problem as to the visual - inertial structure from motion problem ( the vi - sfm problem ) .this problem has been investigated by many disciplines , both in the framework of computer science and in the framework of neuroscience ( e.g. , ) .inertial sensors usually consist of three orthogonal accelerometers and three orthogonal gyroscopes . all together, they constitute the inertial measurement unit ( imu ) .we will refer to the fusion of monocular vision with the measurements from an imu as to the _ standard _ vi - sfm problem . in and observability properties of the standard vi - sfm have been investigated in several different scenarios .very recently , following two independent procedures , the most general result for the standard vi - sfm problem has been provided in and .this result can be summarized as follows . in the standard vi - sfm problemall the independent observable states are : the positions in the local frame of all the observed features , the three components of the speed in the local frame , the biases affecting the inertial measurements , the roll and the pitch angles , the magnitude of the gravity and the transformation between the camera and imu frames .the fact that the yaw angle is not observable is an obvious consequence of the system invariance under rotation about the gravity vector .we want to use here the theory developed in the previous sections in order to investigate the observability properties of the vi - sfm problem when the number of inertial sensors is reduced , i.e. 
, when the system is driven by unknown inputs .we consider the system characterized by the same dynamics given in ( [ equationsimpleexampedynamicsc ] ) .additionally , we assume that the vehicle is equipped with a gps able to provide its position .hence , the system output is the following two - components vector : ^t\ ] ] let us start by considering the case when both the system inputs , i.e. , the two functions and , are available . by comparing ( [ equationstateevolution ] ) with ( [ equationsimpleexampedynamicsc ] )we obtain ^t ] , ^t ] . in order to investigate the observability properties , we apply the observability rank crondition introduced in .the system has two outputs : and . by definition ,they coincide with their zero - order lie derivatives .their gradients with respect to the state are , respectively : ] . hence , the space spanned by the zero - order lie derivatives has dimension two .let us compute the first order lie derivatives .we obtain : , , .hence , the space spanned by the lie derivatives up to the first order span the entire configuration space and we conclude that the state is weakly locally observable .we now consider the case when both the system inputs are unknown . in this case , by comparing ( [ equationstateevolution ] ) with ( [ equationsimpleexampedynamicsc ] ) we obtain , , , , ^t ] and ^t ] .we can easily obtain the analytical expression for the quantities appearing in ( [ equationextendedstateevolution ] ) .we have : ^t ] ( similarly , we obtain ) .we obtain : , [ 0,1,0 , 0,0],\ ] ] ,[0,0,\cos\theta_v v , \sin\theta_v , 0]\}\ ] ] from which we obtain that the gradient of belongs to .therefore , also is weakly locally observable and so the entire original state . for the brevity sake, we do not provide here the computation necessary to deal with this problem .all the details are available in ( see also the work in for the definition of continuous symmetries ) . herewe provide a summary of these results .first of all , we remark that the vi - sfm problem can be described by a nonlinear system with six inputs ( 3 are the accelerations along the three axes , provided by the accelerometers , and 3 are the angular speeds provided by the gyroscopes ) .the outputs are the ones provided by the vision . in the simplest case of a single point feature , they consist of the two bearing angles of this point feature in the camera frame .we analyzed the following three cases : 1 .camera extrinsically calibrated , only one input known ( corresponding to the acceleration along a single axis ) ; 2 .camera extrinsically uncalibrated , only one input known ( corresponding to the acceleration along a single axis ) ; 3 .camera extrinsically uncalibrated , two inputs known ( corresponding to the acceleration along two orthogonal axes ) .the dimension of the original state is in the first case and in the other two cases .additionally and in the first two cases while and in the last case . in prove that the observability properties of vi - sfm do not change by removing all the three gyroscopes and one of the accelerometers .in other words , exactly the same properties hold when the sensor system only consists of a monocular camera and two accelerometers . to achieve this result, we computed the lie derivatives up to the second order for the third case mentioned above . by removing a further accelerometer ( i.e. 
, by considering the case of a monocular camera and a single accelerometer )the system loses part of its observability properties .in particular , the distribution , , contains a single vector .this vector describes a continuous symmetry that is the invariance under the rotation around the accelerometer axis .this means that some of the internal parameters that define the extrinsic camera calibration , are no longer identifiable .although this symmetry does not affect the observability of the absolute scale and the magnitude of the velocity , it reflects in an indistinguishability of all the initial speeds that differ for a rotation around the accelerometer axis .on the other hand , if the camera is extrinsically calibrated ( i.e. , if the relative transformation between the camera frame and the accelerometer frame is known ( first case mentioned above ) ) this invariance disappears and the system still maintains full observability , as in the case of three orthogonal accelerometers and gyroscopes .the analysis of this system ( the first case mentioned above ) has been done in the extreme case when only a single point feature is available .this required to significantly augment the original state .in particular , in we compute all the lie derivatives up to the order , i.e. , we included in the original state the 5 unknown inputs together with their time - derivatives up to the six order .we prove that the gradient of any component of the original state , with the exception of the yaw angle , is orthogonal to the distribution , ( see the computational details in ) .in this paper we investigated the problem of nonlinear observability when part ( or even all ) of the system inputs is unknown .we made the assumption that the unknown inputs are differentiable functions of time ( up to a given order ) .the goal was not to design new observers but to provide simple analytic conditions in order to check the weak local observability of the state .an unknown input was also called disturbance .the analysis started by extending the observability rank condition .this was obtained by a state augmentation and was called the extended observability rank condition . in general , by further augmenting the state , the observability properties of the original state also increase . as a result, the extended observability rank condition only provides a sufficient condition for the weak local observability of the original state since the state augmentation can be continued indefinitely .additionally , the computational cost demanded to obtain the observability properties through the extended observability rank condition , dramatically depends on the dimension of the augmented state . for these reasons, we focused our investigation on the following two fundamental issues .the former consisted in understanding if there exists a given augmented state such that , by further augmenting the state , the observability properties of the original state provided by the extended observability rank condition remain unvaried .the latter consisted in understanding if it is possible to derive the observability properties of the original state by computing a codistribution defined in the original space , namely a codistribution consisting of covectors with the same dimension of the original state .both these issues have been fully addressed in the case of a single unknown input . 
in this case, we provided an analytical method to operate a separation on the codistribution computed by the extended observability rank condition , i.e. , the codistribution defined in the augmented space .thanks to this separation , we introduced a method able to obtain the observability properties by simply computing a codistribution that is defined in the original space ( theorem [ theoremseparation ] ) .the new codistribution is defined recursively by a very simple algorithm .specifically , the algorithm in definition [ definitionomega ] in section [ sectionseparation ] ( for the case of a single known input ) and in definition [ definitionomegae ] in section [ sectionextension ] ( for the case of multiple known inputs ) .hence , the overall method to obtain all the observability properties is very simple . on the other hand, the analytic derivations required to prove the validity of this separation are complex and we are currently extending them to the multiple unknown inputs case . finally , we showed that the recursive algorithm converges in a finite number of steps and we also provided the criterion to establish if the algorithm has converged ( theorem [ theoremstop ] ) . also this proof is based on several tricky and complex analytical steps . both theorems [ theoremseparation ] and [ theoremstop ]have first been proved in the case of a single known input ( sections [ sectionseparation ] and [ sectionstop ] ) but in section [ sectionextension ] their validity was extended to the case of multiple known inputs .f. a. w. belo , p. salaris , and a. bicchi , 3 known landmarks are enough for solving planar bearing slam and fully reconstruct unknown inputs , the 2010 ieee / rsj international conference on intelligent robots and systems october 18 - 22 , 2010 , taipei , taiwan a. berthoz , b. pavard and l.r .young , perception of linear horizontal self - motion induced by peripheral vision basic characteristics and visual - vestibular interactions , exp .brain res .23 , 471489 ( 1975 ) .darouach , m. , zasadzinski , m. , xu , s. j. ( 1994 ) .full - order observers for linear systems with unknown inputs .ieee transactions on automatic control , 39(3 ) dokka k. , macneilage p. r. , de angelis g. c. and angelaki d. e. , estimating distance during self - motion : a role for visual - vestibular interactions , journal of vision ( 2011 ) 11(13):2 , 1 - 16 m. hou , p.c .mller design of observers for linear systems with unknown inputs ieee transactions on automatic control , 37 ( 6 ) ( 1992 ) .isidori a. , nonlinear control systems , 3rd ed . , springer verlag , 1995 .e. jones and s. soatto , `` visual - inertial navigation , mapping and localization : a scalable real - time causal approach '' , the international journal of robotics research , vol .4 , pp . 407430 , apr. 2011 .d. koening , b. marx , d. jacquet unknown input observers for switched nonlinear discrete time descriptor system ieee transactions on automatic control , 53 ( 1 ) ( 2008 ) .d. g. kottas , j. a. hesch , s. l. bowman , and s. i. roumeliotis , on the consistency of vision - aided inertial navigation , in proc . of the int .symposium on experimental robotics , canada , jun 2012 .a. martinelli , vision and imu data fusion : closed - form solutions for attitude , speed , absolute scale and bias determination , ieee transactions on robotics , volume 28 ( 2012 ) , issue 1 ( february ) , pp 4460 .mirzaei f.m . androumeliotis s.i . 
, a kalman filter-based algorithm for imu-camera calibration: observability analysis and performance evaluation, ieee transactions on robotics, 2008, vol. 24, no. 5, 1143-1156.
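to make the rank computation concrete, the standard observability rank condition can be checked symbolically for the planar-vehicle example with a gps-like position output and both inputs known, as discussed in the applications section. the sketch below illustrates the classic rank test only, not the extended condition or the recursive codistribution algorithm introduced in this paper; the symbol names are illustrative.

```python
import sympy as sp

# planar vehicle: state (x_v, y_v, theta_v), known inputs v (linear) and omega (angular)
xv, yv, th = sp.symbols('x_v y_v theta_v')
state = sp.Matrix([xv, yv, th])
f1 = sp.Matrix([sp.cos(th), sp.sin(th), 0])   # direction associated with the input v
f2 = sp.Matrix([0, 0, 1])                     # direction associated with the input omega
outputs = [xv, yv]                            # gps-like measurements of the position

def lie_derivative(h, f):
    """lie derivative of the scalar function h along the vector field f."""
    return (sp.Matrix([h]).jacobian(state) * f)[0]

# gradients of the lie derivatives up to first order; their span is the candidate
# observable codistribution and its rank decides weak local observability
rows = []
for h in outputs:
    rows.append(sp.Matrix([h]).jacobian(state))
    for f in (f1, f2):
        rows.append(sp.Matrix([lie_derivative(h, f)]).jacobian(state))
codistribution = sp.Matrix.vstack(*rows)
print(codistribution)
print("rank =", codistribution.rank(), "  state dimension =", len(state))
# the rank is 3 for every theta_v (cos and sin never vanish together), so the full
# state is weakly locally observable when both inputs are known, as stated above
```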
this paper investigates the unknown input observability problem in the nonlinear case under the assumption that the unknown inputs are differentiable functions of time ( up to a given order ) . the goal is not to design new observers but to provide simple analytic conditions in order to check the weak local observability of the state . the analysis starts by extending the observability rank condition . this is obtained by a state augmentation and is called the extended observability rank condition ( first contribution ) . the proposed extension of the observability rank condition only provides sufficient conditions for the state observability . on the other hand , in the case of a single unknown input , the paper provides a simple algorithm to directly obtain the entire observable codistribution ( second and main contribution ) . as in the standard case of only known inputs , the observable codistribution is obtained by recursively computing the lie derivatives along the vector fields that characterize the dynamics . however , in correspondence of the unknown input , the corresponding vector field must be suitably rescaled . additionally , the lie derivatives must be computed also along a new set of vector fields that are obtained by recursively performing suitable lie bracketing of the vector fields that define the dynamics . in practice , the entire observable codistribution is obtained by a very simple recursive algorithm . however , the analytic derivations required to prove that this codistribution fully characterizes the weak local observability of the state are complex . finally , it is shown that the recursive algorithm converges in a finite number of steps and the criterion to establish that the convergence has been reached is provided . also this proof is based on several tricky analytical steps . several applications illustrate the derived theoretical results , both in the case of a single unknown input and in the case of multiple unknown inputs . * keywords : * nonlinear observability ; unknown input observability ;
treewidth , tree decomposition and related graph decomposition concepts have been studied extensively as a means for finding theoretically efficient algorithms for optimization problems in graphs . for graphs of bounded treewidth, polynomial time algorithms can be found for a large number of graph optimization problems .however , due to large constants hidden in the time complexity as well as ( super)exponential dependency on the treewidth , in practice these algorithms are often too slow to solve optimization problems . though heuristic methods for finding tree decompositions of small width have been developed , most applications of tree decompositions are in speeding up exact algorithms .little work has been done in using tree decompositions as a tool for high performance heuristic optimization algorithms . to the best of our knowledge the only work in combinatorial optimization exploring this avenue is the tour merging algorithm for the traveling salesman problem ( tsp ) by cook and seymour , using the related concept of branch decomposition . in their paperthey describe an algorithm that first generates a pool of high quality solutions to the tsp using a local search heuristic with different starting points . in the merging phase , the graph union of these solutionsis then taken to produce a sparse subgraph of the original graph .this makes the computation of a low width branch decomposition feasible , which they then use to quickly find the optimal solution to the tsp instance induced by this sparse subgraph .experimental results showed a fair improvement over the best solution found , in a small amount of additional time . in this paperwe report experimental results applying the same paradigm described in on the steiner tree problem in graphs ( stp ) .a set of locally optimal solutions is generated to create a sparse subgraph , and subsequently tree decomposition is used to quickly solve the restricted instance to optimality .the main difference with the technique by cook and seymour is that we allow the algorithm to discard some of the generated solutions , if it helps finding a tree decomposition of sufficiently small width on the graph union of the remaining solutions . though this hurts solution quality in some cases , the improvement in running time warrants this trade off . for generating solutions we use a multistart heuristic by ribeiro et al . available under the name _ bossa_. the instances induced by the generated solutions are solved using dynamic programming ( dp ) , for which we use a fairly recent tree decomposition based implementation by fafianie et al .we compare the performance of our algorithm to the _ path relinking _ solution merging strategy proposed in which is part of the bossa implementation .experimental results show that our method is very promising .test runs on sparse benchmark sets showed up to an average 6-fold improvement of the optimality gap provided by the best generated solution , within only one or two percent of the running time of the solution generating phase . on the other hand for dense graphs it often was nt possible to find a combination of local solutions within our predefined treewidth limit . by using a fast greedy heuristic for finding tree decompositions however, it takes little time to identify this , and therefore the overhead of running the merging algorithm is negligible in such situations .it should be noted that bossa is no longer competitive in terms of performance . 
as pointed out by an anonymous reviewer , a heuristic by polzin and daneshmand ( * ? ? ?* chapter 4 ) was shown to give similar or better solutions compared to bossa in a fraction of the time on established benchmarks . however , a very recent advancement by pajor et al . indicates that our results are still highly relevant .they present an improved multistart heuristic and experimental results indicate that this implementation outperforms the heuristic by polzin and daneshmand again .the proposed heuristic has a strong similarity to bossa .though some structural changes yield better quality solution pools in the same number of iterations , most of the performance gain is actually achieved by faster implementations of the same local search techniques . since the improved implementationcould be directly plugged in as a solution generator for our method , we expect the positive results to carry over when replacing the multistart heuristic with the improved version of pajor et al ., though further experiments are need to confirm this . the rest of this article is organized as follows . in section [ sec : preliminaries ] basic notation is introduced , we give a formal definition of treewidth and we discuss a greedy algorithm for treewidth that plays an important part in the performance of our algorithm . in section[ sec : algorithm ] we describe the heuristic for selecting solutions for merging and briefly discuss the algorithms used for generating solutions and solving the instance induced by those solutions . in section [ sec : results ] we report experimental results on a variety of benchmark instances .we denote an undirected graph with vertex set and edge set , or and when no confusion is possible . together with a weight function we have a weighted graph . in this paperwe assume graphs to be simple : no loops or parallel edges are allowed .a graph union is equal to the graph found by taking the union of both the vertex and the edge sets of the operands .the neighbours of in are denoted .the subject of this paper is the classical steiner tree problem .this famous np - complete graph optimization problem should need no introduction to the reader but we include a formal definition for completeness : + given a connected weighted graph with non - negative weights and a set of terminal vertices , find a minimum weight subgraph of such that all terminal vertices are pairwise connected .the concept of treewidth is a graph invariant that indicates how _ tree - like _ a graph is .it is derived from the tree decomposition , a transformation that projects a general graph onto a tree .the formal definition is as follows : a tree decomposition of a graph is a tree , where each node is labelled with a vertex set , called a bag , satisfying the following conditions : 1 . 2 . for all is an such that 3 . if and then for all on the path between and in the tree the width of a tree decomposition is equal to the size of the largest bag minus one .the treewidth of a graph is the smallest width over all possible tree decompositions of . as findingthe treewidth of a graph is np - complete no polynomial time exact algorithms exists unless .heuristic approaches come in many shapes , including local search techniques and heuristics derived from exact algorithms , see bodlaender and koster .we will use a simple but very effective greedy heuristic described in . 
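the min-degree heuristic is simple enough to state in a few lines of code; the sketch below is a minimal python version, assuming an undirected networkx graph, and returns the elimination order together with the width of the associated decomposition (the construction of the bags from the order is omitted). the pseudocode that follows describes the same procedure.

```python
import networkx as nx

def greedy_degree(graph):
    """min-degree elimination: repeatedly pick a minimum-degree vertex, turn its
    neighbourhood into a clique, and remove it.  the width of the resulting tree
    decomposition equals the largest degree seen at the moment of elimination."""
    g = graph.copy()
    order, width = [], 0
    while g.number_of_nodes():
        v = min(g.nodes, key=g.degree)            # a minimum-degree vertex
        nbrs = list(g.neighbors(v))
        width = max(width, len(nbrs))
        for i, a in enumerate(nbrs):              # make the neighbourhood a clique
            for b in nbrs[i + 1:]:
                g.add_edge(a, b)
        g.remove_node(v)
        order.append(v)
    return order, width

# usage: an upper bound on the treewidth of a small grid graph
order, width = greedy_degree(nx.grid_2d_graph(4, 4))
print("width of the greedy decomposition:", width)
```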
a minimum degree vertex in add edges to such that is a clique remove from algorithm [ algo : greedydegree ] , greedydegree , constructs an _ elimination order _ , which is a permutation of vertices of the graph .it does so by iteratively choosing the minimum degree vertex , adding edges between all its neighbours , and then removing it from the graph . these last two steps are called _vertex elimination_. any elimination order can be used to construct a ( unique ) tree decomposition in linear time ( see ( * ? ? ?5 ) ) . for conveniencewe will directly treat the output of algorithm [ algo : greedydegree ] as a tree decomposition .the width of the tree decomposition produced by algorithm [ algo : greedydegree ] is equal to the highest degree of any vertex at the moment it is eliminated from the graph . in this paperwe often abuse language by referring to the width found by greedydegree as the treewidth of , especially when is a graph induced by a set of solutions .of course this is just an upper bound , but since we never solve for exact treewidth in our algorithm , it is not necessary to make the distinction when from context it is clear that we mean the width of the tree decomposition found .the basic outline of our approach consists of three steps .let an instance of the stp be denoted by stp . 1 .generate as set of locally optimal solutions for stp .2 . pick a subset such that = greedydegree has width , where .3 . solve stp using dp guided by the decomposition found in 2 .the dp implementation we used for the last step , more on that in section [ subsec : dp ] , has running time linear in , but exponential in the treewidth .the multistart heuristic used to generate locally optimal solution is an implementation of a hybrid greedy randomized adaptive search procedure ( grasp ) for the stp ( see section [ subsec : grasp ] ) .as the first and last steps are basically black box routines with respect to the solution merging heuristic , we will first explain how we construct a suitable subset of solutions . in the implementation of tour merging for the tsp in a fixed number of solutionsis generated , and quite some time is spent on finding a good branch decomposition . if the algorithm can not find a decomposition of sufficiently small width , the merging heuristic is deemed intractable and returns no solution .our method is a little different .we also generate a fixed number of heuristic solutions , and limit the width of the tree decomposition deemed acceptable to proceed with the dp step .however we allow more flexibility by accepting a subset of solutions such that greedydegree finds a decomposition of width at most on their graph union .an initial approach to finding a good subset of solutions is motivated by the idea that if we can not use all solutions , we give priority to those with the highest quality .let be the set of solutions generated in step 1 and their weights .initially we sort the solutions in ascending order of and apply algorithm [ algo : greedysteinerunion ] .this keeps iteratively adding solutions to the graph union as long as the limit is not violated by the decomposition found by greedydegree . 
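the selection step of algorithm [ algo : greedysteinerunion ] can be sketched the same way. the version below assumes each generated solution is available as a networkx subgraph of the instance, takes the solutions in the order in which they should be tried (e.g. sorted by ascending weight for the initial variant described above), and uses networkx's built-in treewidth_min_degree, which implements the same min-degree heuristic, to test each candidate union.

```python
import networkx as nx
from networkx.algorithms.approximation import treewidth_min_degree

def greedy_solution_union(ordered_solutions, width_limit):
    """ordered_solutions: steiner trees (networkx subgraphs) in the order in which
    they should be tried.  keep each tree whose addition to the running graph
    union still admits a min-degree decomposition of width at most width_limit;
    return the accepted trees and their union."""
    union, accepted = nx.Graph(), []
    for tree in ordered_solutions:
        candidate = nx.compose(union, tree)          # union of vertex and edge sets
        width, _ = treewidth_min_degree(candidate)
        if width <= width_limit:
            union = candidate
            accepted.append(tree)
    return accepted, union

# initial variant: try the pool in ascending order of solution weight
# accepted, union = greedy_solution_union(sorted(pool, key=weight.get), k)
# (`pool`, `weight` and the limit `k` come from the surrounding pipeline)
```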
in a sense the algorithm finds a _ maximal _ subset of solutions , that is , no solution can be added without breaching the width limit .+ : list of solutions , ordered + : maximum treewidth + : list of solutions , such that a minimum degree vertex in add edges to such that is a clique remove from this procedure usually gives reasonably good improvements in the dp step if the number of solutions rejected by algorithm [ algo : greedysteinerunion ] is low .however , if only small sets of solutions stay within width limit , and there are consequently many possible maximal solution sets , the chance of the greedy procedure finding a good set from the possible alternatives is small .specifically , experiments showed that increasing the width limit may often result in a decrease in the eventual solution quality , a highly undesirable result . to improve the robustness of the solution picking stepwe introduce the randomized _ ranking _ procedure described in algorithm [ algo : rankingprocedure ] .this procedure is akin to a simulation of step 2 and step 3 of the solution merging algorithm with a lower width limit , where we shuffle the solutions instead of sorting them by .we use the value of the solution found in each iteration to adjust the rank of all solutions that were picked by algorithm [ algo : greedysteinerunion ] in that iteration .+ : list of solutions + : map giving the weigth of every steiner tree in + : maximum treewidth + : number of random ranking iterations + : map assigning an adjusted value to every solution in shuffle the order of at random weight of steiner tree found by dp on the graph add to all sets for which the adjusted values can be interpreted as a metric for how promising the inclusion of a solution is in terms of the improvement found in step 3 .these values are then used to sort the solutions before a final run of algorithm [ algo : greedysteinerunion ] with maximum width .this yields a much more robust algorithm as in experiments we never observed an increase in resulting in a decrease in solution quality .experimental results indicate that the execution time of the dp grows roughly with where is the width of the tree decomposition. therefore if we run algorithm [ algo : rankingprocedure ] , for example , with for 10 iterations , its execution time is still expected to be an order of magnitude smaller than directly running the dp once on a graph with decomposition of width .a byproduct is that we can check more combinations of solutions for improvement .in fact , sometimes the best solution found during the execution of algorithm [ algo : rankingprocedure ] is better than the final solution found on the graph union with maximum width , even after ranking according to the adjusted values .however , this does not happen too often and in general it pays off to execute a last iteration with the higher limit . taking it all together the steps for picking the set of solutionsare : * find * sort the solutions ascending according to * find the graph union of and its tree decomposition are then used as input for the final dp run in step 3 .a recent implementation of dynamic programming for the stp was introduced in .it uses the greedydegree algorithm to find a decent tree decomposition of the input graph , and then proceeds with a novel dynamic programming algorithm that reduces the search space in every stage by removing entries that can not affect optimality .we will not reproduce the formal dynamic program here , for which we refer to the paper . 
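the ranking step and the final merge can then be sketched on top of the greedy_solution_union routine above. in the sketch below, solve_restricted is a placeholder for the tree-decomposition-based dp (assumed to return the value of the best steiner tree on the restricted graph), the reduced limit used during ranking is written as width_limit - 2 purely for illustration, and the scoring rule (average value of the merges a solution took part in) is a plausible stand-in rather than the exact adjustment rule of the implementation.

```python
import random

def rank_and_merge(pool, width_limit, iterations, solve_restricted, seed=0):
    """pool: list of steiner trees (networkx subgraphs).  run `iterations` rounds
    of the ranking procedure with a reduced width limit, score every tree by the
    values of the merges it took part in, then sort by that score and perform one
    final merge with the full limit.  solve_restricted(graph) stands in for the
    tree-decomposition dp and returns the best steiner tree value found."""
    rng = random.Random(seed)
    score = {t: 0.0 for t in pool}
    hits = {t: 0 for t in pool}

    for _ in range(iterations):
        order = pool[:]
        rng.shuffle(order)                           # shuffled, not sorted by weight
        picked, union = greedy_solution_union(order, width_limit - 2)
        value = solve_restricted(union)
        for t in picked:
            score[t] += value
            hits[t] += 1

    def adjusted(t):                                 # lower average value = more promising
        return score[t] / hits[t] if hits[t] else float("inf")

    final_order = sorted(pool, key=adjusted)
    _, final_union = greedy_solution_union(final_order, width_limit)
    return solve_restricted(final_union)
```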
however , the idea is that the dp is guided by a tree decomposition , such that the size of the state space is governed by the number of partitions of the vertex sets in each bag . in the paper multiple methods for reducing the size of the search spaceare proposed and implemented in the corresponding software .we use the default _ classic _ dp however , as the relative speed ups are not large enough to make a significant contribution in our implementation .a hybrid for the stp was introduced in for which the code is publicly available under the name bossa .using a simple multistart approach , in which a construction heuristic is started from different nodes to produce a solutions that is then improved to a local optimum , does not work particularly well for the stp . for reasons that seem to be inherent to the problemmost construction heuristics usually produce the same or a few different solutions even for widely different starting points . to still be able to improve on deterministic heuristics ,the hybrid grasp algorithm in bossa employs a variety of techniques to force the algorithm to explore different areas of the search space .these include multiple different construction heuristics , randomization in the local search procedure and weight perturbations .this makes the hybrid grasp particularly useful for our algorithm , as it can generate a set of good but disjoint solutions . for a full explanation of these techniquesplease see the paper .the bossa code also includes a solution merging heuristic called path relinking , which can be used in combination with grasp .we use it to compare the performance of our algorithm .the algorithm was implemented in java integrating the existing java code from for the merging part and using system calls and text files to interface with the binary executable of bossa , to generate the solutions pool .though working with text files gave some overhead , this effect was insignificant as the time spent on read / write operations was usually small compared to the computation time .all experiments were run in a single thread on 16 core intel xeon e5 - 2650 v2 @ 2.6 ghz and 64 gb of ram . at any time no more than 15 processes were running to make sure one core was free for background processes .the maximum heap space for the java virtual machine was set to 1 gb for all instances .for all experiments , in the solution generation phase 16 solutions were generated with grasp , where each run of the grasp was set to 8 iterations and with a different random seed .we set the maximum treewidth for the final dp to and the maximum treewidth for the ranking procedure to , with iterations of random shuffling . in all experiments where grasp alone solved an instance to optimality, the instance was dropped from the test set .r0.6 [ cols="^,<,<,^,^,^,^,^,^,^ " , ] an initial test was run on the last 50 instances of the classic i640 benchmark set available through the steinlib repository .all instances are randomly generated .this benchmark is a little outdated in that nowadays most instances can quickly be solved to optimality , but the clear distinction in parameters with which the instances were generated facilitates an easy analysis of the results .all instances in the benchmark set have 640 vertices , but differ in edge densities and number of terminals . 
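for reference, the quantities reported in the result tables below are computed under the natural convention assumed here: the gap as a percentage of the optimal (or best known) value, and impr.% as the relative reduction of that gap.

```python
def optimality_gap(value, optimum):
    """gap of a heuristic solution, as a percentage of the optimal (or best known) value."""
    return 100.0 * (value - optimum) / optimum

def gap_improvement(gap_grasp, gap_smh):
    """impr.%: relative reduction of the optimality gap achieved by the merging step."""
    return 100.0 * (gap_grasp - gap_smh) / gap_grasp

# example: optimum 1000, best grasp solution 1004, merged solution 1001
g1, g2 = optimality_gap(1004, 1000), optimality_gap(1001, 1000)
print(f"{g1:.2f} {g2:.2f} {gap_improvement(g1, g2):.1f}")   # 0.40 0.10 75.0
```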
for most instancesthe optimal value is known , in the other cases we used the best known upper bound as an approximation to find the optimality gap .this is only the case for instances i640 - 311 - 315 .the results are in table [ tab : i640results ] .next to the instance name the number of terminals and edges is shown .the optimality gaps of grasp and our solution merging heuristic ( smh ) are given as a percentage of the optimal value .the column impr.% gives the percentage improvement of the optimality gap by smh compared to grasp .the running time for smh does not include grasp .the column rel . gives the time spent on smh relative to the time spent on grasp .the last column , # trees is the number of local solutions that were eventually accepted after the sorting procedure(see algorithm [ algo : rankingprocedure ] ) in the solution union for the smh .the table clearly reveals the difference in performance between sparse and denser graphs . for instances withless than 1280 edges smh usually gives a good improvement , even solving the instances to optimality in three instances , yet for none of the most dense instances an improvement was found .this is also reflected in the number of solutions that were used by the algorithm in the final run of the merging phase .there is a clear inverse relation between the density and the number of solutions the algorithm can merge while keeping treewidth within limits . as results e.g. instance 201 and 205 show , a high number of solutions merged does not guarantee improvement , although apparently it is a good indicator .also the running time of the smh relative to grasp is usually lower when no improvement can be found . as stated before most of the instances in the i640 are not particularly hard to solve with todays hardware . to get a better view of the power of the smh algorithm we wanted to apply it to some bigger instances .the most notouriously hard test set in steinlib is the puc testset , of which most instances have no known optimal solution after more than 13 years in the field .no results are plotted but for completeness that smh gave poor results on these instances : for all but the smallest instances we were not able to find any combination of solutions within width limit .we do nt know if this is because our greedy tree decomposition works particularly bad for these graphs , or because high treewidth is an inherent property of the graph . in any casemost instances from puc are denser than the second highest density instances from i640 , for which smh was already hardly able to show improvement .fortunately there are some other test sets in the steinlib repository that are big enough to justify the use of our merging heuristic but not so dense as to make it run into trouble because of the treewidth limit .results on these test sets are discussed in the rest of this section . to compare performancewe also ran the path relinking algorithm ( pr ) from bossa .the path relinking algorithm is itself a solution merging heuristic which comes in two flavours . on standard settings it first tries these different flavours and then picks the one that seems to perform best . for our experiments we forced it to use the random relink heuristic , as this turned out to perform best on all tested instances , and the initial run that determines the best settings takes a considerable amount of time .this makes for a more fair comparison . 
for more information see . the es1000fst test set contains 15 instances of randomly generated points on a grid , with l1 distances as edge weights . each instance has 1000 terminal vertices and between 2500 and 2900 vertices in total . due to preprocessing techniques applied to the graphs these instances only have between 3600 and 4500 edges , making them very sparse . the results for grasp , grasp+smh and grasp+pr are shown in table [ tab : resultses1000fst ] . again , for smh and pr the time does not include the initial grasp iterations . results are averaged over all instances . the number of instances for which an algorithm produced the best solution among all produced solutions for that instance is given by the row # best .

[ table tab : resultses1000fst ]
                         grasp     smh      pr
  opt gap %              0.392     0.061    0.109
  time ( s )             402       6        493
  # best                 0         14       1

[ table tab : resultses10kfst ]
                         grasp     smh
  opt gap %              0.441     0.189
  total cpu time ( s )   194485    310
  wall clock time ( s )  12155     310

overall the smh seems to perform better on these instances . it is also nice to note that on average 15.4 solutions were used in the final dp run of the merging phase . this is probably caused by the inherently low treewidth of the instances . however , the treewidth of these instances is not so low that direct use of dp would be feasible . as the treewidth found by greedydegree for the es1000fst instances had a minimum of 14 and an average of 22 , running the tree decomposition based dp on the original instances would take prohibitively long . we also tested on the single instance in the es10000fst set . this graph is created in a similar way but with a factor 10 more terminals and vertices . because this instance is so large that only running the 128 grasp iterations needed for the smh takes more than two cpu days , we did not compare it with path relinking in a sequential run . instead grasp was run on 16 cores in parallel and the solution merging heuristic was run on a single core thereafter . results are in table [ tab : resultses10kfst ] . both the wall clock time ( the time until all threads were finished ) and the total computation time summed over all threads are shown . though not the most spectacular improvement in optimality gap , it shows the good scaling properties of smh . the relative time spent on smh compared to grasp has about the same ratio as seen in the es1000fst instances when we compare wall clock time , yet it is an order of magnitude smaller when we compare the total cpu time . we note however that for this instance smh was only able to use 3 solutions , as the treewidth of the entire solution pool combined was rather high at 22 . the lin test set from steinlib has very similar properties to the es1000fst set . these instances are generated by placing rectangles of different sizes in a plane , such that their corners become vertices and their edges become graph edges . though no preprocessing is done on these instances , this still makes for a very sparse graph , with no vertex having degree more than 4 . after dropping instances that were solved to optimality by grasp , only the last 13 instances remained . the number of vertices of these instances is in the range 3700 - 39000 .

                         grasp     smh      pr
  opt gap %              0.33      0.09     0.13
  time ( s )             336       2        157
  # best                 1         10       6

                         grasp     smh      pr
  opt gap %              0.27      0.07     0.11
  time ( s )             1386      7        1057
  # best                 1         8        4

results are in table [ tab : resultslin ] . again the smh performs very well compared to pr , in a smaller amount of time .
in all cases the merging phase could use all 16 solutions , often producing a union well below the treewidth limit . the last test sets we ran experiments on are the alut and alue sets from steinlib , which we combined because of their strong similarities . the structural properties of these instances are very much like those of the lin test set ; however , these come from very large scale integration ( vlsi ) applications . the result is a grid graph with rectangular holes in it . this graph again has a maximum degree of 4 . after dropping instances which grasp solved to optimality , 10 instances remained , ranging in number of vertices between 3500 and 37000 and in number of terminals between 68 and 2344 . because of the fairly large size of some of these instances , we put a maximum of 3.5 hours on the running time for the combination of grasp and the merging heuristic . this gave a timeout for pr on the largest instance , so we took the best solution found up to that point . for comparison , smh only took 40 seconds to run for this instance , while grasp took about 2 hours . one of the nice properties of using the tree decomposition based approach is that for graphs with a regular structure , such as the last two sets we tested on , the size of the graph does not seem to matter much for the treewidth of the union of solutions , while the dp runs in time linear in the number of vertices . in the experiment run on the alut / alue sets , for all but two of the remaining instances the merging phase was able to use all 16 generated solutions . the two exceptions , where only 15 solutions were used , were the largest instance and , surprisingly , the smallest instance . this illustrates that observation quite well . experimental results showed that a tree decomposition based exact algorithm can be employed as an efficient means to merge local heuristic solutions to the stp on sufficiently sparse graphs . as we have seen in the results on the alut / alue test set , the sparse structure natural to vlsi derived graphs is exactly that at which our heuristic performs well . as vlsi is one of the major applications of the stp , this makes the heuristic practically relevant . as mentioned in the introduction , the algorithm we used to generate solutions is no longer state of the art . in theory any algorithm capable of generating distinct locally optimal solutions could be employed with our algorithm . we plan to investigate the competitiveness of our solution merging heuristic when combined with a faster implementation such as for generating solutions , in preparation of a journal version of this paper . that fixed parameter tractable algorithms can be used as a heuristic solution merging technique for the tsp had been shown in , while we established results for the stp . it seems likely that there are more optimization problems where this technique can be used . a minimal requirement seems to be that any feasible solution has a low value for the chosen parameter . however , whether a low width decomposition can be found on a combination of local solutions depends on the instance , and in the case of the stp the density of the input graph seems a good indicator for that . it would be interesting to see if such a characterization is possible for other optimization problems that have low width solutions . as a final remark , in our algorithm we managed the treewidth of the solution union by discarding solutions .
a simple extension would be to use an iterative scheme to reduce treewidth of the solution pool : first run the solution merging heuristic on ( small ) subsets of the generated solutions to generate a new solution pool with less solutions , and repeat until all solutions are within the treewidth limit .it seems likely this could further improve the performance .bodlaender , h.l . ,koster , a.m.c.a .: treewidth computations i. upper bounds , information and computation 208 , 259 - 275 ( 2010 ) bodlaender , h. l. , fomin , f. v. , kratsch , d. , koster , a. m. c. a. , and thilikos , d. m : on exact algorithms for treewidth .acm trans . algor . 9 , 1 , article 12 ( 2012 ) cook , w. , seymour , p. tour merging via branch - decomposition . informs journal on computing , 15(3 ) , 233 - 248 ( 2003 ) fafianie , s. , bodlaender , h. l. , nederlof , j. : speeding up dynamic programming with representative sets . in : g. gutin and s. szeider ( eds . ) : ipec 2013 , lncs 8246 , pp .321334 , ( 2013 ) koch , t. , martin , a. , vo , s. : steinlib , an updated library on steiner tree problems in graphs .technical report zib - report 00 - 37 , konrad - zuse zentrum fur informationstechnik berlin ( 2000 ) , http://steinlib.zib.de/ ribeiro , c. c. , uchoa , e.,werneck , r. f. a hybrid grasp with perturbations for the steiner problem in graphs .informs journal on computing , 14(3 ) , 228 - 246 .( 2002 ) uchoa , e. , poggi de aragao , m. , werneck , r. , ribeiro , c.c . : bossa .( 2002 ) , http://www.cs.princeton.edu/~rwerneck/bossa/ polzin , t. : algorithms for the steiner problem in networks .phd thesis , universit'at des saarlandes ( 2003 ) pajor , t. , uchoa , e. , werneck , r.f . : a robust and scalable algorithm for the steiner problem in graphs arxiv preprint arxiv:1412.2787 ( 2014 )
fixed parameter tractable algorithms for bounded treewidth are known to exist for a wide class of graph optimization problems . while most research in this area has been focused on exact algorithms , it is hard to find decompositions of treewidth sufficiently small to make these algorithms fast enough for practical use . consequently , tree decomposition based algorithms have limited applicability to large scale optimization . however , by first reducing the input graph so that a small width tree decomposition can be found , we can harness the power of tree decomposition based techniques in a heuristic algorithm , usable on graphs of much larger treewidth than would be tractable to solve exactly . we propose a solution merging heuristic for the steiner tree problem that applies this idea . standard local search heuristics provide a natural way to generate subgraphs with lower treewidth than the original instance , and we subsequently extract an improved solution by solving the instance induced by this subgraph . as such , the fixed parameter tractable algorithm becomes an efficient tool for our solution merging heuristic . for a large class of sparse benchmark instances the algorithm is able to find small width tree decompositions of the union of generated solutions , and can then often improve on the generated solutions quickly .
besides the numerous attempts to implement a large - scale quantum machine for local and centralized applications such as efficient factoring , there are other vast areas of research on interesting and unique distributed quantum algorithms with no classical variants . perceived advantages with respect to security , time complexity or communication complexity are sufficient to motivate researchers to focus on design , implementation and optimization of the algorithms .quantum key distribution ( qkd ) protocols and quantum leader election algorithms are a subset of these algorithms .however , although detailed , resource - aware analyses of monolithic algorithms are increasing , the literature for equivalent analysis of distributed algorithms remains sparse .a quantum approach for solving the classical problem of _ byzantine agreement _ is another important distributed quantum algorithm , proposed by ben - or and hassidim .their pure - theoretical distributed algorithm terminates in expected number of rounds in the presence of a computationally unbounded , full information and adaptive adversary .there is no similar variant in classical deterministic or randomized solutions with these unique features at the same time .distributed algorithms for solving the byzantine agreement problem are crucial for designing fault - tolerant systems in many domains .these algorithms have broad applications in areas ranging from fault - tolerant real - time and online services , to secure and large - scale peer - to - peer services .although quantum sharing - based byzantine agreement can be theoretically faster and more secure than the classical algorithms , a considerable gap has remained from the abstract layer to experimental layer . to fill this gap, we must address challenges ranging from finding a practical architecture for quantum processing elements in each independent node to consideration of a quantum - based infrastructure for communication known as a _quantum repeater network ( qrn ) _ , by using entanglement and teleportation .analysis of faults in end - nodes and network imperfections in comparison to the ideal model of the original algorithm ( which assumes perfect computation in each node and a perfect point - to - point quantum communication link between each pair ) increase the complexity of the problem .we face some questions about the quantum architecture of the abstract algorithm in addition to computational and communication resources required for running this algorithm . in this workwe have : * extracted the minimal architecture requirements for the quantum part of the qba protocol , * proposed two optimization techniques for the architecture and circuit for a minimum setup of qba with 5 nodes , and * estimated computation and communication costs for the minimum setup , including required fidelity .the remainder of the paper is organized as follows : in section [ sec : preliminary ] , we review classical and quantum aided byzantine agreement and quantum repeater networks .a high level analysis of the qba protocol is presented in section [ sec : highlevelanalysis ] . in section [ sec :design ] , the overall architecture design is proposed .we provide required optimization techniques in section [ sec : implementation ] .the assessment results is shown in section [ sec : result ] .finally we conclude the paper in section [ sec : conclusion ] .in this section , we review essential concepts and related background . 
we start with the standard definition of classical byzantine agreement and the history of byzantine - tolerant solutions . we continue with the first scalable quantum sharing - based byzantine agreement , proposed by ben - or and hassidim in . a general review of the network requirements for execution of the algorithm concludes this section . the history of the byzantine agreement problem goes back to a proposal by lamport et al . for defining a more sophisticated fault model with active and malicious behavior , now known as byzantine faults . tolerating this form of fault , which is stronger than fail - stop faults , requires more computation and communication resources in a distributed system . to analyze the behavior of this type of fault , lamport et al . employed the colorful metaphor of a distributed system as a group of byzantine generals arrayed around a city , trying to decide as a group whether to attack or retreat . it must be assumed that less than a third of these generals are traitors ( an active fault acting as an internal adversary ) . the mission of the distributed generals will be successful if all the loyal generals agree on a unique command ( attack or retreat ) in the presence of inconsistent messages sent by traitors ( byzantine faults ) , otherwise the protocol fails . in this problem , all communication is done by messenger . from the communication point of view , byzantine agreement is a form of reliable broadcast between multiple nodes in a network which supports point to point channels between nodes . one of the first attempts to exploit quantum advantages in a weaker version of agreement ( known as detectable broadcast ) as a form of reliable broadcast was presented by fitzi et al . in . while the work does not solve lamport's original problem , the solution is suitable for a small - scale distributed system ( with 3 nodes ) in a detectable broadcast application instead of byzantine agreement . in this paper , the focus is on ben - or and hassidim's algorithm . they proposed a scalable solution for creating a quantum aided byzantine agreement protocol by modifying feldman & micali's classical probabilistic algorithm to share and verify a known quantum state , instead of sharing and verifying classical random numbers . ben - or and hassidim's algorithm is based on the following assumptions : * there exists a full - duplex ideal quantum channel between each pair of players ( end - nodes ) . note that during execution of the algorithm we also need an ideal classical channel between each pair of players . * to tolerate an upper bound on the number of faulty players ( ) , the communication model needs to be synchronous . each round consists of two separate phases : a communication phase and a computation phase . for the asynchronous case , they prove the effectiveness of their algorithm with an upper bound of . * the adversary can be adaptive , full information and computationally unbounded . at the end of algorithm execution , the agreement between all non - faulty players , the validity of the output between them and the termination of the algorithm ( with probability 1 ) are guaranteed . the algorithm can be analyzed from two points of view : 1 . performance analysis under security assumptions : the algorithm requires a constant expected number of rounds in the presence of a full information adversary with an upper bound of malicious players between players ( ) .
table [ tab : roundcomplexity ] shows the upper bound of the round complexity in classical deterministic , classical randomized and quantum - aided algorithms .security analysis under performance assumptions : as another view , the quantum algorithm is more secure than deterministic and randomized algorithm in the case of the lowest bound of round complexity . if we analyze the security of byzantine agreement protocols for fixed - round algorithms , we get the result similar to table [ tab : securityqba ] .there is no fixed - round , deterministic algorithm with the strongest type of adversaries ( adaptive , computationally unbounded and full information ) . for the case of the randomized algorithms ,the best available algorithm suffers from the assumption of communication security between each pair of non - faulty nodes . & & & + adaptive adv .& & yes & yes + unbounded adv . &no solution & yes & yes + full info . adv .& & no & yes + we will describe the details of the algorithm in addition to its theoretical behavior in the next section .execution of fundamentally distributed quantum algorithms requires a quantum - based solution for communication known as a _ quantum repeater network ( qrn ) _ ( fig .[ fig : qrn-00 ] ) .quantum repeater networks provide an efficient infrastructure for distributed systems by using entanglement , teleportation and some forms of error detection and correction . in these networks, information could be represented by entangled states and a link between two quantum nodes creates entangled states supporting quantum teleportation .purification and entanglement swapping repeaters , error correction - based repeaters and quasi - asynchronous repeaters have been proposed for qrns . despite many ongoing demonstrations of quantum key distribution ( qkd ) application without the benefits of entangled repeater networks ,the research on architecture analysis and design of other promising distributed quantum applications is rare and narrow . to the best of our knowledge, there is no detailed analysis and design for quantum - aided byzantine agreement algorithm .therefore , there is a need for analysis of the minimum requirements in quantum repeater networks for complete execution of quantum byzantine agreement .the modeling must be extended beyond that described by ben - or and hassidim .in particular , the algorithm as proposed has been analyzed assuming only pure states , and without reference to the demands made on the repeater network . for the remaining sections , the focus will be on finding appropriate answers for the following questions : * what are the required quantum resources for qba protocol ? *how resilient is it to network and gate error ? *is it practical and attractive in the real world ?* can we use qba as an early demonstration application of quantum repeater networks ? for exploration of an efficient solution , we need to analyze the relevant part of graded verifiable quantum secret sharing ( vqss ) and the qba protocol in detail , and determine the overall architecture of the quantum processing elements as will be presented in the next section .in the previous section , we introduced the quantum aided byzantine agreement protocol and enumerated its features in qualitative terms . in this sectionwe begin the quantitative analysis of the algorithm . 
as shown in fig .[ fig : img01 - 1 ] , to run the quantum - aided byzantine agreement ( qba ) , all the end - nodes concurrently and independently run the agreement protocol with 3 sub - protocols .the overall protocol is based on the original randomized algorithm and has a constant expected number of rounds . as shown in fig .[ fig : img01 - 1 ] , the sequential sub - protocols of quantum - aided byzantine agreement are , and . at the beginning , all nodes are supposed to start the protocol with an input value .execution of the first sub - protocol ( ) advances toward an agreement for non - faulty and uncertain nodes .the other sub - protocols and are executed sequentially after .they are pure classical protocols for biasing the outcome of the coin flipping procedure into zero or one respectively .the only quantum part of all of these sub - protocols is in the quantum aided oblivious common coin procedure of sub - protocol , labeled qocc .qocc is a modification of the original oblivious common coin ( occ ) procedure for independent and random coin flipping . in the quantum version ,the procedure requires sufficiently random numbers among the nodes to be generated .this procedure can be successfully applied by sharing the following quantum state : in the above equation is the number of end - nodes in the distributed system and is a known quantum state for producing sufficiently random numbers between the nodes . in general , a secret sharing procedure can be effectively used to generate a common coin between the distributed nodes in the presence of a limited set of faulty nodes . for this procedure ,one of the end - nodes can be selected as a dealer to share known quantum state . to avoid cheating from a faulty dealer , the verifiable version of secretsharing has been employed .the original state is encoded by the dealer into a multi - qubit register : in the above equation , is the encoded state of the original state by using a quantum error correction code such as quantum reed - solomon code .as described in the next sections , the encoded state is shared by the dealer and if the dealer is honest , the shared state can verified by all the honest nodes . for this procedure ,a quantum error correction scheme is employed .the encoded qubits are shared between all nodes by using a variant of verifiable quantum secret sharing ( vqss ) described in the next section .in the modified version of vqss ( known as graded vqss ) , the players independently run three phases : the sharing phase , the verification phase and the measurement phase . in general , after sharing and verification of the known state , the players measure their remaining qubits .the results produce sufficient random numbers .the only difference between the original vqss and graded vqss is replacement of ideal broadcast in with the gradecast procedure in .the overall analysis of all phases of quantum aided byzantine agreement will be presented in the next section .the protocol as defined calls for p - level quantum variables ( qupits ) with the minimum prime number which is larger than number of nodes ( ) . 
in our implementation we encode them in qubits , because bell pairs support only distribution of qubits , and error correction , circuits for addition , and physical systems are all best developed for two - level systems . there are two sub - phases to sharing , with an agreed upon security parameter . this parameter is employed to raise the probability of catching a dishonest dealer to the order of . in the sharing phase a two level tree is created and distributed among the nodes . in the first level , qupit registers are prepared by the dealer ; the first register is the encoded state of the state ( ) . in the second level , the prepared system is also encoded and distributed by the other nodes using additional qupit systems . as shown in fig . [ fig : vqss - phases-01 ] , the protocol starts with the dealer's action of sharing the qupits of the encoded ( ) . the dealer is considered to be one of the participant nodes in the distributed system and , like any other player , may be traitorous . the sharing phase continues by encoding and sharing the received qubits between all nodes . after the known state has been shared and encoded by the dealer and then by the others , all the non - faulty nodes need to be confident about the correctness of the shared quantum state . a verification phase is executed to ensure that the dealer has shared a recoverable quantum state . the verification phase is applied by each node separately . in this phase , all nodes apply local quantum operations on their shares . each node measures some of the remaining qubits , step by step . the results of the measurements are distributed by using a classical gradecast protocol . in the measurement phase , the remaining qubits are measured by all nodes and the results are distributed , providing the sufficiently random numbers required by the oblivious common coin procedure . to establish the node architecture , we extracted the requirements for a minimum setup for vqss . we require 5 nodes and need to set the prime number to be 7 for this setup . the security parameter ( ) is selected to have the value two . these parameters lead to a minimum classical - quantum distributed system with five nodes which tolerates one malicious node inside the system . the communication design of the system for execution of the qba protocol is based on the following assumptions : * both reliable classical and quantum channels exist between each pair of nodes . * for quantum communication , the communication resource is bell pairs . we focus on bell states instead of w states or ghz states in consideration of the capabilities of quantum repeater networks . * the bell pairs may have fidelity . the above assumptions result in a complete graph topology . although the protocol requires an expected constant number of rounds to complete successfully , each round is computationally intensive , in addition to the high rate of quantum and classical communication . this requires long - lived connections between each pair of nodes . since the encoding and sharing scheme for the data qubits is generally similar for the dealer and the other players , the general architecture is the same . each player applies the encoding scheme as shown in fig . [ fig : encoder - arch-01 ] . as described in the previous section , for the sharing phase we need two levels of encoding ; the first is applied by the dealer and the second by the other players .
before the verification phase, each player has distributed the encoded qubits and in this phase the non - faulty nodes can verify the correctness of the original share ( encoded by the dealer ) . the required architecture for each node is shown in fig .[ fig : verificationarch-02_1 ] . each player is required to have a quantum verifier module for every other node in the network .one of the main part of the quantum verifier module is modular multiplication and addition ( ) described as below : the other components of quantum verifier consist of the quantum fourier transform ( ) and its inverse ( ) . the required module for these operations is presented in fig .[ fig : verificationarch-02_1 ] . in the last phase ,the only internal operation is measurement of the remaining qubits .since it is not required to use any specific quantum operation , it may be considered as the simplest part of the protocol . but maintaining the remaining qubits is considered to be an important challenge for this phase .the result of the measurement will be gradecast by the corresponding node .in this section we present some techniques for implementing and optimizing the quantum architecture level and quantum circuit level . for the first level , we apply resource sharing in a pipelined approach , and for the last one we propose an efficient low - cost modulo 7 classical - quantum multiplier . to reduce the number of required quantum operations , reusing the quantum modules should be considered . in this technique, it is not required that all the modules be available at the same time . in the verification circuit , since multiplication and addition ( ) and ( ) are dependent on complete execution of the previous and ( ) circuits , a resource sharing scheme has been designed to reduce the number of quantum modules and active number of qubits . as shown in fig .[ fig : parallel-01 ] , we divide the execution time into 7 stages and in each stage a subsection of the required quantum operations is bound to the designed quantum modules .although the overall execution time is increased by one stage in comparison to full circuit architecture execution time , the number of required quantum modules decreased dramatically . by applying this technique , in each verification module and in the worst case, we need to implement two and one and/or , instead of 8 active modules , 4 active and modules .in addition to reduction of the number of quantum modules , this optimization technique can reduce the required number of active qubits in the architecture . as shown in the previous section ,the most challenging computational part of the quantum modules is the circuit for computation of multiplication and addition modulo ( ) .the concept for standard design of quantum circuit for computation of modular multiplication is based on modular addition . motivated by designing efficient modular multiplication for the integer factorization application ,many researchers have proposed low cost quantum circuits .the standard approach for computation of ( for classical integers and and quantum variable ) is using computation of modular addition .although this problem is similar to the state of the art modular multiplication ( and modular addition ) , the concept of computation and the resulting circuit may be different from the state of the art computations and related circuits .the first difference is related to modulo computation . 
for integer factorization , the modulus is a parameter and can be changed for each new session . in contrast , in the qba protocol the modulus depends on the upper bound on the number of players , and can be considered a fixed number for a given quantum distributed system design . the next difference is related to the classical operand in integer factorization versus the classical random numbers in the modular multiplication and addition circuit of the verification module . for the former , the operand can be implicitly built into the circuit design , but for the latter the random number must be supplied as a parameter of the modular multiplication . these differences call for new quantum circuits designed specifically for the qba protocol . thus we have proposed a multiplier modulo 7 for the minimum setup . we have not followed the standard approach used in integer factorization , and instead compute the multiplication modulo 7 directly . the proposed modular multiplier is shown in fig . [ fig : proposedmultiplier-03 ] . note that in the protocol , and the enable classical signals for the 3 swap gates ( ) and not gates ( ) are based on the following boolean functions : . ( a small classical reference model of this multiply - and - add operation is given below . ) in this section the required quantum resources have been analyzed and enumerated for the minimum setup described in section [ sec : design ] . for the analysis the following criteria have been considered : * number of qubits per node * number of quantum operations ( including total quantum cost and circuit depth ) * number of bell pairs consumed by the application . for the analysis of quantum cost in the end - nodes , we consider the basic quantum circuits to be used in the architecture . the most complex quantum operations are in the verification phase . we used the basic quantum circuits for 3-qubit and . as we described in the previous section , the most challenging operation is the modular multiplication and addition ( ) . we consider five different designs for the analysis of the quantum depth and total quantum cost in the verifier module : * _ vbe96 : _ use the adder , modulo adder and modulo multiplier proposed in . * _ cdkm04 : _ replace the basic adder of by the second adder proposed in . * _ vi05 : _ use the quantum addition and modulo addition circuits of cdkm04 ; the only difference is the replacement of the modulo multiplier in by the multiplier proposed in . * _ custom : _ use the proposed multiplier modulo 7 instead of the previous standard design . * _ custom+pipelined : _ apply the proposed pipeline technique , using the circuit of the previous design . to evaluate the total cost of the required quantum operations , we estimate the required quantum gates based on the cost of cnot as the basic gate . due to the reduction of the number of ancillae achieved by the proposed multiplier modulo 7 , we gain at least a 20% improvement in this parameter . fig . [ fig : result - both-01 ] plots the number of qubits and the total depth of the quantum circuit . in this paper , quantum error correction has not been applied in the architecture because the target was to present a lightweight and simple design . this architecture can be effectively optimized to be demonstrated in experiments .
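as a cross - check of any such design , it can help to have a classical reference model of what the verifier's arithmetic unit must compute , namely the map taking a basis state to the basis state labelled by the multiply - and - add result modulo 7 for fixed classical operands . the short python sketch below ( our own illustration , not part of the proposed circuit , with hypothetical names ) enumerates the permutation of basis states that a reversible multiply - and - add circuit has to realise , which is the truth table against which a gate - level design can be verified .

```python
P = 7  # modulus of the minimum setup: the smallest prime larger than the number of nodes

def mul_add_mod_p(b, c):
    """Permutation on {0,...,P-1} realised by x -> b*x + c (mod P).
    For b != 0 this is a bijection, so it is exactly the truth table a
    reversible (quantum) multiply-and-add circuit has to implement."""
    assert b % P != 0, "b must be invertible modulo P"
    return [(b * x + c) % P for x in range(P)]

# Every choice of classical operands must give a permutation of the basis states.
for b in range(1, P):
    for c in range(P):
        assert sorted(mul_add_mod_p(b, c)) == list(range(P))

# Example: action of x -> 3*x + 5 (mod 7) on the basis states |0>,...,|6>.
print(mul_add_mod_p(3, 5))   # [5, 1, 4, 0, 3, 6, 2]
```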
in this situation , the error threshold for local gates must be considered as an important criterion . we evaluate the error threshold based on an estimate of the parameter , where is the total depth of the circuit and is the number of qubits in each node . the result is shown in fig . [ fig : threshold-01 ] . to estimate the traffic cost , we divide the result into two parts : the first is the quantum communication cost , which is based on the transmission of bell pairs between each pair of nodes in the network ; we also consider the cost of sending classical messages during the execution of the protocol . the results are shown in table [ tab : networkcost ] .

[ table tab : networkcost ]
  step  | phase        | comm type | source node | destination node | quantum cost ( bell pairs ) | classical cost ( bits )
  1(a)  | sharing      | unicast   | dealer      | all nodes        | 108                         | 0
  1(b)  | sharing      | unicast   | node(i)     | node(j)          | 540                         | 0
  2(a)  | verification | broadcast | ttp         | all nodes        | 0                           | 30
  2(b)  | verification | gradecast | all nodes   | all nodes        | 0                           | 14160
  2(c)  | verification | broadcast | ttp         | all nodes        | 0                           | 30
  2(d)  | verification | gradecast | all nodes   | all nodes        | 0                           | 4720
  3(b)  | measurement  | gradecast | all nodes   | all nodes        | 0                           | 2360
  total | -            | -         | -           | -                | 648                         | 21300

in this paper , we described an optimized quantum architecture for end - nodes in the quantum aided byzantine agreement protocol . the node architecture in the protocol is not as complex as that required for e.g. factoring a large number , but at a minimum of qubits per node the experimental demands are substantial compared to current capabilities . in addition , the number of required bell pairs is of the order of the number required for quantum key distribution . during the design process we found that modular classical - quantum multiplication and fully modular quantum addition are the most critical parts of the quantum computation in the qba protocol . this requires a novel circuit design that fully depends on the minimum prime number satisfying the inequality ( ) . the other requirement is related to long lived qubits to maintain the shared and encoded qubits . this work is supported by jsps kakenhi grant number 25280034 . the authors would like to thank shota nagayama , shigeru yamashita , shigeya suzuki , takaaki matsuo and takahiko satoh for valuable technical conversations . p. w. shor , `` algorithms for quantum computation : discrete logarithms and factoring , '' in proc . 35th annual symposium on foundations of computer science , 1994 .
quantum aided byzantine agreement ( qba ) is an important distributed quantum algorithm with unique features in comparison to classical deterministic and randomized algorithms , requiring only a constant expected number of rounds in addition to giving higher level of security . in this paper , we analyze details of the high level multi - party algorithm , and propose elements of the design for the quantum architecture and circuits required at each node to run the algorithm on a quantum repeater network . our optimization techniques have reduced the quantum circuit depth by 44% and the number of qubits in each node by 20% for a minimum five - node setup compared to the design based on the standard arithmetic circuits . these improvements lead to an architecture with per node and error threshold for the total nodes in the network . the evaluation of the designed architecture shows that to execute the algorithm once on the minimum setup , we need to successfully distribute a total of 648 bell pairs across the network , spread evenly between all pairs of nodes . this framework can be considered a starting point for establishing a road - map for light - weight demonstration of a distributed quantum application on quantum repeater networks . january 2017 _ keywords _ : distributed quantum algorithms , byzantine agreement , quantum repeater network ( qrn )
the deviation from scale invariance of the primordial scalar power spectrum is a critical prediction of inflation , and unlike other potential signatures such as tensor modes or non - gaussianity , it is the only signature that is _ generic to all inflationary models_. it is therefore a vital test of the inflationary paradigm , and we address it with a minimally parametric approach . briefly , the idea is as follows . choose a functional form which allows a great deal of freedom in the form of the deviation from scale invariance ( _ e.g. _ smoothing splines ) . naively fitting this to the data will lead one to fit the fluctuations due to cosmic variance and experimental noise , with arbitrary improvement in the chi - square . instead , one performs cross - validation : throw out some of the data ( the validation set ) , fit the rest ( the training set ) , and see how well the fit predicts the validation set . a very good fit to the training set which poorly predicts the validation set indicates over - fitting of noisy data . the final ingredient in the algorithm is a roughness penalty , a parameter that penalizes a high degree of structure in the functional form . by performing cross - validation as a function of this penalty , one can judge when the amount of freedom in the smoothing spline is what the data require without fitting the noise . a minimally parametric power spectrum reconstruction combined with a roughness penalty set by cross - validation thus provides a method of determining smooth departures from scale invariance which avoids two pitfalls . firstly , a strong theory prior on the form of the power spectrum ( _ e.g. _ the commonly used power law prescription ) can lead to artificially tight constraints on , or even a spurious detection of , a deviation from scale invariance , which is due more to the strength of the prior than to that of the data . secondly , simple binning techniques or direct inversion of the data to obtain the primordial power spectrum can lead one to fit noisy data with a large improvement in chi - square / d.o.f . ( easily plausible with current data ) : it would not be surprising to see an improvement of relative to a smooth power spectrum by `` fitting the noise '' with a power spectrum containing a high degree of structure . a minimally parametric approach combined with cross - validation avoids these issues , providing a way to actually determine the strength of the shape prior justified by the quality of the data . cross - validation would also be helpful for alternative minimally - parametric methods , _ e.g. _ in choosing the number of basis functions . in this work , we use the best available data over a wide range of scales , corresponding to the longest `` lever arm '' of wavenumbers currently extant , to reconstruct the shape of the primordial power spectrum in a minimally - parametric way . esa's planck satellite , which has already begun taking data , is expected to provide superior constraints on the shape of the primordial scalar power spectrum by 2012 . our goal here is to establish a benchmark of what was known about the shape of the power spectrum before the planck analysis . we perform a minimally - parametric reconstruction of the primordial power spectrum based on the method of ref . . since the simplest inflationary models , which are consistent with the data , predict the primordial power spectrum to be a smooth function , we search for smooth deviations from scale invariance with a cubic smoothing spline technique ( for details , see refs .
which we only briefly summarize here ) . in this approach ,one aims to recover a function from measurements at discrete points . consider a description of by a piecewise cubic spline .it is uniquely defined by the values of at `` knots '' once we ask for continuity of and its first and second derivatives at the knots , and two boundary conditions : we require the second derivative to vanish at the exterior knots . in our application, is the primordial power spectrum , and the data are : the angular power spectrum of the 5 year wilkinson microwave anisotropy probe ( wmap5 ) cosmic microwave background ( cmb ) temperature and polarization ; alone or in combination with higher resolution , ground - based cmb experiments ( quad and acbar ) ; or with large scale structure data : the sloan digital sky survey ( sdss ) data release 7 ( dr7 ) luminous red galaxy ( lrg ) power spectrum ; and the lyman - alpha forest ( ly ) power spectrum constraints from ref .this work thus represents a significant advance over previous work , with a new wmap release ( two further years of data and a significant advance in the understanding of systematic errors ) plus substantial improvements in both ground - based cmb data and large - scale structure data .we use 5 to 7 knots depending on the data set considered ( see table [ tab : specs ] for details ; the locations of the knots in space are indicated in figs .[ fig : cvcmb ] and [ fig : cvlss ] ) . if the knot values were allowed infinite freedom and were set simply by minimizing the chi - square , in general the reconstruction would fit features created by the random noise present in the data .it is therefore necessary to add a roughness penalty which we chose to be the integral of the second derivative of the spline function .the roughness penalty is weighted by a smoothing parameter : by increasing the smoothing parameter the roughness penalty effectively reduces the degrees of freedom , disfavouring jagged functions that `` fit the noise '' ..[tab : specs ] the cross - validation set - up and the adopted number of knots for each data set used in the analysis . [ cols="^,^,^,^",options="header " , ] in generic applications of smoothing splines , cross - validation is a rigorous statistical technique for choosing the optimal smoothing parameter .cross - validation ( cv ) quantifies the notion that if the underlying function has been correctly recovered , it should accurately predict new , independent data . to make the problem computationally manageable, we opt for a cross - validation , where is the number of data points .that is , the data set is split into two halves , say , and .a markov chain monte carlo ( mcmc ) parameter estimation analysis ( for a given smoothing parameter ) is carried out on one half of the data , finding the best fit model .then the log likelihood of the second half of the data given the best fit model for the first half , cv , is computed and stored .this is repeated by switching the roles of the two halves , obtaining cv .the sum , cv+cv , gives the cv score " for that smoothing parameter .finally , the smoothing parameter that best describes the entire data set is the one that minimizes the cv score .table [ tab : specs ] gives details of the implementation .note that , as in ref . , the basic cosmological parameters ( , , , ) are varied in the mcmc as well as the values of the smoothing spline at the knots , which describe the primordial power spectrum . 
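to make the selection of the smoothing parameter by cross - validation concrete , the following python sketch fits a smoothing spline to each half of a toy data set and scores its prediction on the other half , picking the penalty value with the lowest cv score . this is only a schematic illustration : it uses the smoothing factor of scipy's univariatespline as a stand - in for the roughness penalty described above , a plain chi - square in place of the full likelihood analysis , and synthetic placeholder data rather than the cmb and large scale structure likelihoods used in the actual analysis .

```python
import numpy as np
from scipy.interpolate import UnivariateSpline

def cv_score(x, y, sigma, smooth):
    """Two-fold cross-validation score for one value of the smoothing
    parameter: fit a smoothing spline on each half of the data and
    compute the chi-square of its prediction on the other half."""
    order = np.argsort(x)
    x, y, sigma = x[order], y[order], sigma[order]
    halves = (slice(0, None, 2), slice(1, None, 2))   # alternate points, as for the CMB bins
    score = 0.0
    for train, valid in (halves, halves[::-1]):
        fit = UnivariateSpline(x[train], y[train], w=1.0 / sigma[train], k=3, s=smooth)
        score += np.sum(((y[valid] - fit(x[valid])) / sigma[valid]) ** 2)
    return score

def choose_penalty(x, y, sigma, candidates):
    """Return the smoothing value with the lowest CV score."""
    scores = [cv_score(x, y, sigma, s) for s in candidates]
    return candidates[int(np.argmin(scores))], scores

# Toy usage with synthetic, nearly scale-invariant "data" (placeholders only).
rng = np.random.default_rng(0)
logk = np.linspace(-4.0, 0.0, 60)                    # stand-in for log10 of the wavenumber
truth = 1.0 - 0.02 * logk                            # mildly tilted spectrum
sigma = np.full_like(logk, 0.05)
obs = truth + sigma * rng.standard_normal(logk.size)
best, _ = choose_penalty(logk, obs, sigma, np.logspace(0, 3, 10))
```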
the mcmc is implemented with modified versions of the camb and cosmomc packages , with very stringent convergence criteria . now we will describe our treatment of the data . [ fig : cvlss caption : power spectra as a function of wavenumber ( /mpc ) , showing knot placement ( triangles , arbitrary normalization ) and cross - validation set - up . cv is red and cv is blue . red points represent the lrg power spectrum from ref . ; the lyman alpha measurement is represented by a filled box encompassing the constraints on the observed flux power spectrum from ref . ; the light blue line is a concordance lcdm model . ] _ cmb data : _ we use the version of the wmap5 likelihood function with standard options , with the temperature data divided into alternate ( roughly equal signal - to - noise ) bins for cv and cv respectively , exactly as in ref . the polarization data are always used in both cv cases . for cv we use the acbar bandpowers from ref . between . for cv we use the pipeline 1 quad bandpowers between from ref . ( see fig . [ fig : cvcmb ] ) . _ sdss dr7 lrg power spectrum : _ the lrg data are used in cv with the wmap5 data . the data span the range of wavenumbers [ /mpc ] . the likelihood function we use is identical to that presented in ref . ( see fig . [ fig : cvlss ] ) . _ lyman - alpha constraints : _ the ly data are used in cv with the wmap5 data ( see fig . [ fig : cvlss ] ) . we use the publicly - released likelihood function by a. slosar to obtain lyman- forest constraints . for this likelihood to be valid , the model must be well described by a three parameter model of amplitude , spectral slope and running at the lyman - alpha forest scales , _ i.e. _ <3 . to check that this assumption holds in this -range for our more general description of , we extrapolated the from the monte carlo markov chains of ref . to the lyman- scales and found that in this -range the resulting spline can be well approximated by the prescription of ref . the residuals are at the percent level , well below the intrinsic lyman- errors . with the more recent data sets we consider here , the approximation is expected to be even better .
for wmap5, we find that cv becomes less sensitive to the value of the penalty , and the cv score dependence on the penalty flattens out at .while this may indicate a preference for a less smooth , the data can not distinguish between and a penalty an order of magnitude higher .the reconstructed are shown in the left and right bottom panels of fig .[ fig : fourpanel ] for penalties and , respectively .the dark and light blue regions enclose the best ( ordered by likelihood ) 95% and 68% reconstructions .the 95% constraints are not significantly broader than the 68% because the reconstructed spectra are simply more wiggly ; they are not allowed by the data to deviate more from the best fit , consistently across scales .cross - validation is a useful tool to check for indications of unidentified systematic biases in the data .for example , in ref . we found that the 3 year wmap data ( wmap3 ) by itself favored a primordial power spectrum with a downward deviation from a power law at small scales ( see fig . 2 of ref .however , this feature disappeared when combining wmap3 with other data sets ( see figs . 3 and 5 of ref . ) which overlapped wmap3 on the scales corresponding to the feature an inconsistency suggestive of a small residual systematic effect in the high wmap3 data . ref . argued ( based on considerations of frequency dependence ) that the unresolved residual point source contribution to be subtracted from the raw should have been smaller by % and its uncertainty increased by 60% compared to the wmap3 official values . to judgeif smoothing spline cross - validation could give some insights on possible residual systematic errors , we investigated how the point source subtraction level should have been changed for the aforementioned downturn at small scales to disappear from the reconstructed power spectrum .we obtained a point source amplitude % lower than the wmap estimated value , which is tantalizingly close to the estimate of ref . .we find that wmap5 , cmb experiments at smaller scales , and the lrg power spectrum are all consistent with each other . with the addition of ly data ,a lower penalty value is allowed .this could be a tentative indication of possible tension between ly and the other data sets , but not a very significant one : there is a cancellation between the effect of penalty and the effect of the likelihood over a wide range of penalty values as shown in the bottom panels of fig .[ fig : fourpanel ] ) . in addition , as lrg and ly scales do not overlap , we can not exclude the possibility of a low - significance local feature in the power spectrum . in fig .[ fig : cmbdr7 ] we show the reconstructed for the cmb and lrg data , with optimal penalty .the cv setup for wmap5 is the same as before , lrgs are added in cv , and quad+acbar are included together in cv .we have excluded the ly data as it is the only non - overlapping data set . for comparison, we also show the 95% and 68% constraints for wmap5 data when a power - law spectral index is assumed to describe the shape of the primordial power spectrum .we see no evidence that any -dependence of is necessary to describe the data in the cv reconstruction . 
while is disfavoured , the significance of the departure from scale invariance is weaker than when the `` inflation - motivated '' power law spectral index prior is adopted .this minimally - parametric reconstruction highlights how constraints relax when generic forms of are allowed .while this reconstruction is in agreement with the inflationary prior , it illustrates that better data are needed to justify its adoption observationally .forthcoming data from planck will significantly reduce the current reliance on priors in our understanding of the shape of the primordial power spectrum .future large - scale structure data and planck will overlap over a decade in scale , offering extra consistency checks .lyman alpha data , on the other hand , offer the potential to extend the lever arm by at least another decade .we hope that the results presented here will form a basis to judge the robustness of our present knowledge when confronted with the precision measurements that are on the horizon . from wmap5 , acbar , quad and sdss dr7 lrg data with optimal penalty determined from cross - validation excluding ly .the orange - red band shows the 95% and 68% constraints for wmap5 data with a power - law prior . ]
we present a minimally - parametric reconstruction of the primordial power spectrum using the most recent cosmic microwave background and large scale structure data sets . our goal is to constrain the shape of the power spectrum while simultaneously avoiding strong theoretical priors and over - fitting of the data . we find no evidence for any departure from a power law spectral index . we also find that an exact scale - invariant power spectrum is disfavored by the data , but this conclusion is weaker than the corresponding result assuming a theoretically - motivated power law spectral index prior . the reconstruction shows that better data are crucial to justify the adoption of such a strong theoretical prior observationally . these results can be used to determine the robustness of our present knowledge when compared with forthcoming precision data from planck .
one of the main difficulties for the grounding of a platform for quantum technologies is the effect of noise or `` decoherence '' on quantum states .no physical system is ever truly closed due to interactions with its environment . as a result of such interactions ,the quantum state of the system will approach classicality , thus ceasing to be of interest for quantum - empowered protocols . before being able to create useful quantum technologies , we need to understand these processes , and eventually control them .an intriguing aspect of decoherence is that , in specific cases ( such as quantum stochastic resonance , to throw an example ) , weak decoherence mechanisms give rise to sizeable advantages in , say , the performance of some quantum protocols or the transport of excitations across a quantum medium . such counterintuitive effects are tightly linked to quantum interference phenomena : decoherence changes the way the wave function of a given system evolves in time , thus affecting the occurrence of constructive and destructive interference . accidental " constructive effects may be induced , without spoiling the working principle of a given quantum process , for sufficiently weak decoherence mechanisms .all this is particularly important ( and evident ) in the quantum walk framework , whose advantage in terms of the spreading rate of the position of a walker on a given path " may be magnified by small degrees of phase noise . in this paper, we build on the already well established body of research into the behaviour of quantum walks when affected by decoherence , reported for instance in refs . and surveyed in ref . .after quickly revisiting the paradigm of quantum walks , we proceed to discuss the protocol under investigation , introducing phase noise and addressing the performance of the scheme for various strengths of such mechanism .a quantum walk is best described as the quantum analogue of the classical random walk . however , unlike the classical random walk , the evolution of a quantum walk is entirely deterministic .quantum walks of course allow for superposition states of the walker , enabling them to exhibit interesting behaviours not shown by their classical counterparts . a comprehensive survey of quantum walks , covering both continuous and discrete time variants , and detailing the behaviours of quantum walks on various structures , can be found in ref. . in this paper, we focus on discrete time quantum walks . a discrete time quantum walk operates within the hilbert space , where known as the position space describes the position of the walker on a well - defined structure ( here we shall refer to this structure as the walk s _ terrain _ ) , and known as the coin space describes an additional degree of freedom affecting the evolution of the walk : this degree of freedom determines the walker s behaviour in the next time step . 
for the evolution of the walk, we define two operators : the shift operator , , and the coin operator , .the shift operator will `` move '' the walker on to a new part of its terrain , depending on the coin state .for example , if the terrain of a walk is a graph , and the walker is on some vertex of the graph , the shift operator will move it along one of the vertex s edges to another vertex .the coin operator is analogous to the flipping of a coin in a classical random walk , it will act on the coin space which in turn affects how the walk shall evolve in its terrain when the shift operator is applied .we move the walker along by one step by applying the coin operator followed by the shift operator ; the state of the walker , starting in some initial state , is thus described after total steps by ^t { \left\vert\psi(0)\right\rangle},\ ] ] where is the identity operator on the position space .in other words , we apply the coin operator and the shift operator times to the initial walker state . in order to provide a complete view of the main features of the walk protocol, we now give some concrete examples of quantum walks . to represent the walk terrain , a line, we shall use the set of integers .the walker can be anywhere on the line , so we give the basis .as previously described , each step of the walk involves a coin flip and a shift . in the walk on the line ,the walker has a `` choice '' of two directions , left and right . in the classical random walk , the decision of which direction to walk in at each step is reached by flipping a fair coin .likewise , in our quantum walk we shall use a coin space of degree two , viz . is given the basis .we decide to use the hadamard operator for our coin , as in ref .this has the effect of putting our walker into a superposition of coin states and will allow for interference to occur during the course of the evolution of the walk . with regards to the walker s behaviour on the terrain , the classical random walk will move one step to the left or one step to the right depending on the most recent coin flip .the same idea applies for the quantum walk .we define the shift operator as ambainis _ et al . _have shown in ref . that the quantum walk on the line spreads out quadratically faster than the classical random walk on the line . in general , a graph is specified by fixing a set of vertices along with a set of edges connecting them .-regular graphs are graphs with edges attached to each vertex .we affix a label to each end of each edge , as illustrated in fig .[ labelled_graph ] * ( a)*. the walker will traverse the graph s vertices , moving along the edges , so we define the position space as having the basis .-0.5cm*(a)*5.25cm*(b ) * layers , .,title="fig : " ] layers , .,title="fig : " ] at each time step , the walker has a fan - out of vertices to move to and we thus have to use an iso - dimensional coin space .we now give the basis .we then introduce the grover coin which , as described in ref . , generalises the hadamard coin to hilbert spaces of dimension larger than 2 .in order for to be unitary , the conditions and have to hold ( with ) .the values of and can be changed to vary the behaviour of the walk on the graph .we shall use a grover coin later on to perform the simulations at the core of our work . 
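one standard choice of the grover coin , used purely for illustration here , takes a = 2/d - 1 on the diagonal and b = 2/d off the diagonal ; this satisfies the unitarity conditions above , although other valid choices of the two coefficients exist , and the glued - trees walk discussed below uses the specific values given in the cited reference . the short python sketch below builds this coin for a few degrees and checks unitarity .

```python
import numpy as np

def grover_coin(d):
    """d-dimensional Grover coin with a = 2/d - 1 on the diagonal and
    b = 2/d off the diagonal (one choice satisfying the unitarity conditions)."""
    return (2.0 / d) * np.ones((d, d)) - np.eye(d)

for d in (2, 3, 4):
    C = grover_coin(d)
    assert np.allclose(C @ C.T, np.eye(d))   # real, symmetric and unitary

print(grover_coin(3))
```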
as for the shift operator, this must take the walker along the appropriate edge to a new vertex , depending on the coin state .again , we state that this idea is a generalisation of the walk on the line in which we give the walker a choice of directions at each step rather than 2 .we define our shift operator as where and is labelled on s end , and is the label assigned to the destination node s end of the edge .the goal of the glued trees ( gt ) algorithm for quantum search is the following : beginning from the left - most vertex of a given gt graph , traverse the graph and reach the right - most vertex , referred to as the target vertex .childs _ et al . _ use this algorithm to show quantum walk search to be fundamentally more effective than classical random walk search by presenting a class of graphs ( the gt graphs ) that force classical random walks to make exponentially many queries to an oracle encoding the structure of the graph , but that are traversable by quantum walks with a polynomial number of queries to such an oracle . in order to study the robustness of the algorithm to the detrimental effects of decoherence, we shall determine how effectively it achieves its goal when subjected to an increasing degree of phase damping noise .for this reason , we will focus on the probability that the walker is on the target vertex at the end of the walk .we thus consider gt graphs such as the one illustrated in fig .[ labelled_graph ] * ( b ) * , i.e. consisting of layers before the gluing stage , and thus labelled as .the continuous time quantum walk exploited by childs _et al . _ can be reformulated as a discrete time one by means of a shift operator analogous to the one that has been previously described , and a grover coin with and as described in ref .more explicitly we shall use this discrete time reformulation of the protocol to study its behaviour when affected by phase damping decoherence .the gt graphs described in ref . can straightforwardly be converted to -regular graphs by adding two self loops ( one each to the left - most and right - most vertices ) , thus allowing us to use the discrete time walk model described in sec .[ discreteongraph ] . to model decoherencewe use the kraus operators for a phase damping channel acting on the -dimensional system embodied by the coin only .this modifies the initial density matrix of a system as = \sum_{k}e_k(\tau ) \rho(0)e^{\dagger}_{k}(\tau),\ ] ] where we have introduced the kraus operators for phase damping with $ ] the strength of the phase damping . by taking the parameterisation with the probability rate of a phase error, we can say that the effect of the channel is weak ( strong ) for ( ) .we have introduced the orthonormal basis spanning the coin space only .we assume an initial position - coin state decomposed as where we have introduced the orthonormal basis spanning the position space , and the position - coin density matrix elements .evolves under the action of the phase damping channel on the coin space as =\sum_{x , y}\sum^{d-1}_{l , l'=0}\rho_{x , l , y , l'}\eta^{(l - l')^2}{|x\rangle\langle y|}_{p}\otimes{|l\rangle\langle l'|}_c.\ ] ] our investigation consists of simulating a process such that , for each walk , we select a value of , apply the channel to the density matrix of the system after each time step of the walk , and evaluate the probability of the walk reaching a given vertex of the graph . 
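as a concrete illustration of the channel just introduced, the sketch below applies the phase damping map to the coin part of a joint position and coin density matrix: each coherence between coin states l and l' is multiplied by the factor eta^{(l - l')^2} appearing in the expression above, while the populations are left untouched. the index ordering of the joint matrix is an assumption made for this example and is not taken from the paper.

```python
# A minimal sketch of the phase-damping channel acting on the coin only.
import numpy as np

def phase_damp_coin(rho, n_pos, d, eta):
    """Dephase the coin degree of freedom of a position-coin density matrix.

    Assumed convention: rho is (n_pos*d) x (n_pos*d) with |x> (x) |l> -> index x*d + l.
    Coherences between coin states l and l' are multiplied by eta**((l - l')**2);
    eta = 1 reproduces the ideal walk, small eta corresponds to strong damping.
    """
    l = np.arange(d)
    damp_coin = eta ** ((l[:, None] - l[None, :]) ** 2)   # d x d dephasing mask
    damp_full = np.tile(damp_coin, (n_pos, n_pos))        # same mask in every (x, y) block
    return rho * damp_full

if __name__ == "__main__":
    d, n_pos = 3, 2
    rho = np.zeros((n_pos * d, n_pos * d))
    rho[:d, :d] = 1.0 / d        # toy state: vertex 0, coin in an equal superposition
    print(np.round(phase_damp_coin(rho, n_pos, d, eta=0.8)[:d, :d], 3))
    # off-diagonal coin elements are suppressed, diagonal populations are preserved
```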
in this way, we compare the behaviour of the walks affected by phase damping ( ) to that which is found for an ideal walk ( corresponding to ) .it is important to note that the probability depends on the initial coin state . in our studywe have considered walks starting in the initial coin state that maximises the performance of the ideal protocol ( such state also depends on the way in which we label the graph ; in particular , for the labelling scheme that we have mainly used , this corresponds to , with and ) .however , we have tested different ways of labelling the graph ( and therefore different initial coin states that maximise the walk s performances ) and we have obtained results similar to those presented in the remainder of the paper . in order to grasp the temporal behaviour of the walk, we consider the change in the probability distribution of the walker s position on the graph at time . in fig .[ nodeco ] we illustrate such a probability distribution for an ideal 25-step walk on the gt graph .the target vertex ( vertex number 253 ) is reached , with the highest probability , on step 16 ( as shown by the large peak in the line corresponding to step 16 ) .we will now focus on the probability of the walker being on particular vertices on the graph and the overall vertex probability distribution for specific time steps . in ref . , the effect of decoherence on the walk on the hypercube was studied quantitatively .starting from one corner , a decohered walker will reach the opposite corner in less time than in the ideal walk , hence highlighting a counterintuitive beneficial effect of such an incoherent process .our goal here is to address similar questions for the walk on a gt graph , as well as to investigate the limits of validity of the claim made in ref . , where a lingering effect of a decohered walk on the target vertex was suggested .we will thus look for the possibility that a decohered walk reaches the target vertex in less steps than the ideal one , and also attempt to determine whether it `` lingers '' on the target vertex for a longer time than in the ideal walk .here we present and discuss the results of our analysis , showing the behaviour of walks affected by decoherence of variable magnitude .all the results reported in this section , unless otherwise specified , were generated by simulating the previously discussed discrete time reformulation of the algorithm by childs _ et al . _ on the gt graph . 
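for reference, the vertex probability distributions discussed in this section are obtained from the joint density matrix by summing the populations over the coin states; a short sketch, under the same indexing assumption as in the previous example, is:

```python
import numpy as np

def vertex_probabilities(rho, n_pos, d):
    """p(x) = sum_l <x, l| rho |x, l>, assuming the |x> (x) |l> -> x*d + l ordering."""
    populations = np.real(np.diag(rho))
    return populations.reshape(n_pos, d).sum(axis=1)

if __name__ == "__main__":
    d, n_pos = 3, 2
    rho = np.zeros((n_pos * d, n_pos * d))
    rho[:d, :d] = 1.0 / d        # walker on vertex 0, coin in an equal superposition
    print(vertex_probabilities(rho, n_pos, d))   # -> [1. 0.]
```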
in fig .[ target_over_time ] we plot how the probability of the walker being on the target vertex changes over time for various decoherence magnitudes , ranging from the ideal walk with to a decohered walk with .we see from this plot that the ideal walk takes 13 steps to achieve its goal of reaching the target vertex , where we say that the walker has reached some vertex on some step of the walk when there is a non - negligible probability of the walker being on on the time step .the ideal walker first appears on the target vertex on step 13 with probability , then the probability comes to a peak on step 16 with a value before steadily decreasing .we henceforth refer to the probability that a walker is on some vertex at a time step as s _ vertex probability _ at time step .we see in fig .[ target_over_time ] that , as decreases , the target vertex probability steadily decreases .the peak on step 16 decreases at a faster rate than the target vertex probabilities associated with steps 13 , 14 , 15 and 17 .some interesting behaviour can be observed on steps 21 through 27 of the walk .decoherence magnitude slightly increases the target vertex probability ; in other words , the ideal walker is less likely to be on the target vertex on steps 21 through 27 than the decohered walkers .this behaviour was reported in ref . and can be seen more clearly in fig .[ target_over_mag ] , in which we plot the effect of decoherence on the target vertex probability on various steps of the walk . in fig .[ target_over_mag ] , in order to present this behaviour more clearly , we show a curve for step 22 .we see , for step 22 , that the target vertex probability is higher for walks affected by decoherence of magnitude , with the probability peaking at .decoherence has spread out the range of time steps over which the target vertex probability is large compared to the classical value . in line with the claims in ref . , we have indeed found that weak decoherence increases the target vertex probability for an extended number of time steps .however , when we consider the difference between the and walks on step 22 , we see that the difference between the two target probabilities is very small . in our simulationswe have determined that there is a difference of approximately .as previously stated , in our simulations we have confirmed that this `` lingering '' effect lasts until step 27 .[ target_over_mag ] , as previously discussed , is a plot of the effect of decoherence on the target vertex probability on various time steps .the curve representing step 16 illustrates the effect of decoherence very well , and confirms the results reported in ref . : as the decoherence magnitude decreases , the target vertex probability decreases exponentially . in other words ,the algorithm by childs _et al . _ becomes exponentially less effective at achieving its goal as decreases .we shall now investigate the extent of the `` damage '' that decoherence has on the algorithm s effectiveness .we concede that as long as the target vertex probability is higher than the other vertex probabilities then the decoherence has not had a particularly damaging effect on the effectiveness of the scheme . on the other hand , if phase damping decreased the peak associated with the target vertex below the other probability peaks then the algorithm would end up in a state involving a more probable `` false - positive '' than a `` positive '' when affected by decoherence we would say that this is a serious blow to any scheme s usefulness . 
in fig .[ step16zoom ] we have plotted the entire graph s vertex probabilities on step 16 ( the step on which , as previously mentioned , the probability of the ideal walker being on the target vertex was the highest ) of walks with various decoherence magnitudes .observe that the probability of the target vertex ( corresponding to vertex number 253 ) is always higher than the non - target vertex probabilities , regardless of the value of . with fig .[ step13tophits ] we can get a clearer picture on how the probability peaks change in the walks affected by decoherence . in order to improve the legibility of the plots while maintaining the number of shown vertex probabilities at an acceptable level , we present only the vertex probabilities where is the target vertex probability . from fig .[ step16zoom ] and fig .[ step13tophits ] we can see that , in steps 13 to 16 of the walk , the peak representing the target vertex probability never drops below any of the other vertex probability peaks .we find this significant and can conclude from it that the algorithm by childs _et al . _ is not affected to the extent previously described : the walker is never on a non - target vertex with greater probability than the target vertex when allowed to run for sufficient time and phase damping decoherence does not change this fact . the authors of ref . investigate the evolution of a discrete time quantum walk on the hypercube using a grover coin .they plot the probability that the walker is on the target vertex ( they begin the walk on a corner of the hypercube and take the target vertex to be the vertex in the opposite corner of the hypercube ) for a number of time steps , in the same way we have done in this paper .they observe that decoherence lowers the probability peaks of their plots in the same way that it does in our plots regarding the walk on the gt graphs , but they also observe that the `` troughs '' in their plots ( sections of the curve representing vertices with probability much lower than others ) become raised when decoherence is applied . this same effect can be observed in the walks that we simulated on the gt graphs and can be seen in the middle plot on fig .[ step16zoom ] , where the vertices have their probabilities boosted slightly by the decoherence , with the lowest decoherence magnitude we investigated , , raising the probability by the greatest amount . for the sake of completeness, we finally extend our investigation to different sizes of the gt graph .we show in fig .[ plotvslayers ] the behaviour of the target vertex probability against the number of layers before the gluing stage in the gt graph ( i.e. we have generated gt graphs , with between 4 and 8) .we focus our interest on the step on which the probability of the walker being on the target vertex is the highest . also in this case we consider a range of decoherence magnitudes .the range of values for has been chosen as a reasonable trade - off between the computational power required by the simulation and the readability of the plot . of layers before the gluing stage in the gt graph , for range of decoherence magnitudes ( from the top to the bottom , respectively ) . ]in this paper we discussed a discrete time reformulation of the continuous time quantum walk algorithm described by childs _. 
.we simulated this discrete time quantum walk on gt graphs and applied phase damping to the coin system , in order to analyse the algorithm s resilience to decoherence .we did this by studying how effectively it achieved its goal when affected by decoherence of various magnitudes : we investigated how decreasing the decoherence magnitude ( making the decoherence `` stronger '' ) lowered the probability of the walker being on the target vertex at the end of the walk .we first simulated the walk with no decoherence to find how many steps it took to reach the target vertex ( we say that the walk has reached a vertex when the probability that the walker is on that vertex is non - negligible ) . we then included a range of decoherence magnitudes to see the extent of the drop in the target vertex probability at the end of the walk .we observed that the ideal walk found the target vertex at time step 13 , with a probability peak at step 16 ( for a gt graph ) .we noted that strengthening the decoherence ( decreasing ) lowered the target vertex probability on time steps 13 to 17 , but that the target vertex always had a higher probability than any other vertex , regardless of decoherence magnitude .we also observed that a decoherence magnitude of boosted the target vertex probability very slightly above the ideal walk s ( ) vertex probability on steps 21 through 27 .finally , we found that vertices with very low vertex probability had this boosted slightly by decoherence .our results have touched on some unexplored features of the algorithm by childs _et al . _ on gt graphs , in turn opening up new questions to address .the first of such behaviours is the rate at which the target vertex probability on time step 16 decreases with : out of the time steps 13 - 17 , step 16 s target vertex probability decreases at the highest rate .the second behaviour that deserves a deeper investigation in the future is observed in steps 21 through 27 of the walks studied in this paper . when compared to the target vertex probability for the ideal walk , we see a slight increase in the target vertex probability for walks . to give a more specific example , we see an increase of in the target vertex probability for on step 22 when we compare it to ideal walk . finally , the boosting of the troughs in fig .[ step16zoom ] by decoherence , which appears to be a similar phenomenon to the boosting of the target vertex probability in steps 21 through 27 .our results add more weight to the claims made in ref . on decohered walks : the `` lingering '' effect is shown to be present , but only marginally relevant because the boost in probability caused by the decoherence for the steps after the target vertex probability drop ( step 18 ) is very small .on the other hand , we have observed that decoherence does not cause any upset to the notion that , at the end of the walk , the walker should be on the target vertex with a higher probability than any other vertex .this work has been supported by the uk epsrc through a career acceleration fellowship , a grant under the new directions for research leader " initiative ( ep / g004579/1 ) , and the equipment grant ( ep / k029371/1 ) . jl thanks the centre for theoretical atomic , molecular , and optical physics for hospitality during the early stages of this work .
we study the behaviour of the glued trees algorithm described by childs _et al._ in [stoc '03, proc. 35th acm symposium on theory of computing (2004) 59] under decoherence. we consider a discrete time reformulation of the continuous time quantum walk protocol and apply a phase damping channel to the coin state, investigating the effect of such a mechanism on the probability of the walker appearing on the target vertex of the graph. we pay particular attention to any potential advantage coming from the use of weak decoherence for the spreading of the walk across the glued trees graph. + keywords: quantum walks, quantum algorithms, decoherence
gene regulatory networks describe the effective interactions between genes. the activity of a gene, i.e., its current rate of being transcribed into rna molecules, can affect the activity levels of other genes, which as a result become up- or down-regulated. the sum of all up- and down-regulation relations in the whole genome is the gene regulatory network. complete knowledge of the gene network would provide a large part of an understanding of life. however, this goal is far from being achieved. with present dna-chip technology it is possible to measure the transcription rates of an entire genome at a given point in time, but even these technologies only allow a glimpse of the structure of the underlying network, due to the underdeterminedness of the problem. this situation has drawn the interest of the physics community, which aims to statistically characterize the available data and to (crudely) estimate the structure of the complex networks governing gene dynamics. a step toward an identification of potential gene interaction networks is to identify and quantify meaningful _statistical_ indicators of gene cooperative behavior, which is the main purpose of the present work. the idea is that fluctuations of gene expressions over time, e.g., during a cell-cycle, can be considered as an output of an interacting gene collective forming a structured network. the hope is that a network structure estimate can be inferred from statistical properties. at least it should be possible to statistically characterize the types of potential candidate networks. we consider the time-course expression data for the genome of yeast _s. cerevisiae_. we determine some statistical indicators of collective dynamical behavior of genes, such as the q-exponential fit of the cumulative distribution, a ranking distribution and a mean-variance analysis of differential gene expressions. we construct and estimate the expression-correlation network from time increments of expression data and analyse clusters and spanning trees. we identify the biological function of genes with the use of a yeast database. we find that the resulting correlation-based clusters match considerably well with specific biological functions of genes in the cell. the genome-wide gene expression data are given in the form of a matrix in which every row represents one of the yeast genes and each column contains the time evolution of the gene expression of that gene. gene expressions are measured at 17 time points, taken every 10 minutes, which covers approximately two full cell-cycles. we first properly normalize the gene expressions for each of the 17 measurements separately, by dividing each gene expression value by the average value of gene expression for the corresponding column. in order to avoid systematic trends in the time series we use _differential_ expression data, defined for each gene as the increment of its normalized expression between consecutive time points. we determine the cumulative distribution for each time interval separately and also for all measurements (all entries in the matrix). the results are given in fig. 1a. this distribution can be fitted to a q-exponential form, \[ P(>x)=\left[1-(1-q)\,\frac{x}{x_0}\right]^{\frac{1}{1-q}}, \qquad q \neq 1, \label{pdf} \] where $q$ represents the non-extensivity parameter and $x_0$ sets the scale. the fitted values of $q$ for the various time intervals are in the range . the average over all times yields , potentially indicating a non-trivial collective behavior of genes along the cell-cycle.
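a hedged sketch of such a fit is given below: it builds the empirical complementary cumulative distribution of the absolute differential expressions and fits the q-exponential form above to it with scipy. the fitting routine, the starting values and the synthetic test data are illustrative assumptions and do not reproduce the exact procedure behind fig. 1a.

```python
# Illustrative q-exponential fit of an empirical complementary cumulative distribution.
import numpy as np
from scipy.optimize import curve_fit

def q_exponential(x, x0, q):
    """P(>x) = [1 - (1 - q) x / x0]^(1/(1-q)); the bracket is clipped to stay positive."""
    base = np.clip(1.0 - (1.0 - q) * x / x0, 1e-12, None)
    return base ** (1.0 / (1.0 - q))

def fit_q_exponential(values):
    """Fit the empirical complementary cumulative distribution of |values|."""
    x = np.sort(np.abs(values))
    ccdf = 1.0 - np.arange(1, len(x) + 1) / len(x)
    mask = ccdf > 0                       # drop the trivial last point
    popt, _ = curve_fit(q_exponential, x[mask], ccdf[mask],
                        p0=[x.std(), 1.2], maxfev=10000)
    return popt                           # (x0, q)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    synthetic = rng.standard_t(df=5, size=20000)   # heavy-tailed stand-in for the data
    x0, q = fit_q_exponential(synthetic)
    print(f"fitted x0 = {x0:.3f}, q = {q:.3f}")
```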
for , the functional clusters and branches remain for a while before they gradually disappear in the noise. this is in agreement with the observed character of the correlation distributions in fig. 2, where smaller deviations from a gaussian distribution are found for . in conclusion, we made several observations about the statistical nature of gene expression data which seem to suggest that at least a significant fraction of genes is up/down regulated in a highly collective manner. indicators pointing in this direction are: (_i_) the cumulative distribution of differential gene expressions can be fitted to q-exponentials, with a non-trivial value of $q$; (_ii_) an approximate zipf's law holds in the ordering distribution of differential expressions; (_iii_) an almost linear mean-variance dependence signals tightly driven dynamics; (_iv_) the correlation matrix element distributions are non-gaussian and non-poissonian; and finally, (_v_) even a crude correlation-coefficient network displays the emergence of clusters and functional branches in minimum spanning trees, which seem to be biologically relevant.
s. tavazoie, j.d. hughes, m.j. campbell, r.j. cho and g.m. church, nature genet. *22*, 281 (1999); w. wang, j.m. cherry, d. botstein and h. li, proc. natl. acad. sci. *99*, 16893 (2002); k. rho, h. jeong and b. kahng, cond-mat/0301110; f. li, t. long, y. lu, q. ouyang and c. tang, proc. natl. acad. sci. *101*, 4781 (2004); d. balcan and a. erzan, eur. phys. j. b *38*, 253 (2004).
in the available databases of yeast, apart from genes with unknown functions, multiple functions can often be assigned to a single gene. to keep one color for each gene we select the first listed function in the database. we have also grouped several functions related to protein synthesis into a single one, in order to keep the number of colors visually distinguishable.
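a hedged sketch of the network construction used in this analysis is given below: correlation coefficients between differential expression profiles define edge weights, a threshold on the absolute correlation gives the cluster (giant component) analysis, and a minimum spanning tree is built on a simple correlation-derived distance (here 1 - correlation, an assumed choice). the threshold value and the random stand-in data are illustrative assumptions.

```python
# Illustrative correlation network and minimum spanning tree construction.
import numpy as np
import networkx as nx

def correlation_network(expr, threshold=0.8):
    """expr: (n_genes, n_times) array of differential expressions."""
    corr = np.corrcoef(expr)
    n = corr.shape[0]
    g_threshold = nx.Graph()              # graph of strongly correlated gene pairs
    g_full = nx.Graph()                   # complete weighted graph used for the MST
    g_threshold.add_nodes_from(range(n))
    for i in range(n):
        for j in range(i + 1, n):
            g_full.add_edge(i, j, weight=1.0 - corr[i, j])
            if abs(corr[i, j]) >= threshold:
                g_threshold.add_edge(i, j, weight=corr[i, j])
    return g_threshold, nx.minimum_spanning_tree(g_full)

if __name__ == "__main__":
    rng = np.random.default_rng(3)
    expr = rng.normal(size=(60, 16))      # stand-in for 16 time increments of 60 genes
    clusters, tree = correlation_network(expr)
    comps = sorted((len(c) for c in nx.connected_components(clusters)), reverse=True)
    print("largest cluster size:", comps[0], "| mst edges:", tree.number_of_edges())
```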
we analyze gene expression time-series data of yeast (_s. cerevisiae_) measured along two full cell-cycles. we quantify these data by using q-exponentials, gene expression ranking and a temporal mean-variance analysis. we construct gene interaction networks based on correlation coefficients and study the formation of the corresponding giant components and minimum spanning trees. by coloring genes according to their cellular function we find functional clusters in the correlation networks and functional branches in the associated trees. our results suggest that a percolation point of functional clusters can be identified on these gene expression correlation networks. + pacs numbers: 87.10.+e, 89.75.-k, 89.75.hc + keywords: spanning trees, functional clustering, q-statistics, ranking distribution
tests of gravity theories within the solar system are usually analyzed in the framework of the so - called _ parametrized post - newtonian framework _ which enables the comparison of several theories through the estimation of the value of a limited number of parameters . among these parameters , and are of particular importance for astrometry since they are connected with the classical astrometric phenomena of the light deflection and of the excess of perihelion precession in the orbits of massive objects .the same parameters are of capital importance in fundamental physics , for the problem of characterizing the best gravity theory , and for the dark energy / dark matter . moreover , precise astrometric measurements are also important in other tests of fundamental physics since , e.g. , they have the potentiality to improve on the ephemeris of the solar system bodies .these are the reasons why solar system astrometric experiments like gaia and other projects presently under study have received a particular attention from the community of fundamental physicists and cosmologists . such kind of experiments , however , call for a reliable model applicable to the involved astrometric measurements which has not only to include a correct relativistic treatment of the propagation of light , but a relativistic treatment of the observer and of the measures as well . at the same time , the large amount of data to be processed , and the complexity of the problem to be solved , call for the use of high - performance computing ( hpc ) environments in the data reduction .the development of an astrometric model based on a relativistic framework can be dated back to at least 25 years ago . in their seminal work of 1992 klioner andkopeikin described a relativistic astrometric model accurate to the level foreseen for the next generation astrometric missions .this model is built in the framework of the post - newtonian ( pn ) approximation of gr , where the finite dimensions and angular momentum of the bodies of the solar system are included and linked to the motion of the observer in order to consider the effects of parallax , aberration , and proper motion , and the light path is solved using a matching technique that links the perturbed internal solution inside the near zone of the solar system with the assumed flat external one .the light trajectory is solved in a perturbative way , as a straight line plus integrals containing the perturbations which represent , i.e. , the effects of the aberrational terms , of the light deflection , etc .an extension of this model accurate to called grem ( gaia relativistic model ) was published in 2003 .this has been adopted as one of the two model for the gaia data reduction , and it is formulated according to the ppn ( parametrized post - newtonian ) formalism in order to include the estimation of the parameter .a similar approach was followed by kopeikin , schfer , and mashhoon in the post - minkowskian approximation . in this case , however , the authors used a linard - wiechert representation of the metric tensor to describe a retarded type solution of the gravitational field equations and to avoid the use of matching techniques to solve the geodesic equations .ramod ( relativistic astrometric model ) is another family of models , whose development started in 1995 . in this approachthe definition of the observable according to the theory of measure and the immediate application to the problem of the astrometric sphere reconstruction was privileged . 
as a consequence , it started as a simplified model based on a plain schwarzschild metric .further enhancement brought to the first realistic estimation of the performances of gaia for the determination of the ppn parameter , and to the development of a fully accurate n - body model of the light propagation and of an observer suitable for application to space missions . since the so - called ramod3 , this model was built on a complete pm background , and the light propagation was described with the equation of motion of measurable quantities varying all along the geodesic connecting the starting point to the observer .this approach brought to a specific form of the geodesic equations as a set of coupled nonlinear differential equations which could be solved only by numerical integration .this represented a problem for an extensive application of this model to practical astrometric problems , which has been solved only recently for ramod3 by crosta who applied a re - parametrization of these equations of motion to demonstrate their equivalence to the model in , thus opening the road to an analytical solution of ramod3 and to its full application to astrometry problems .finally , another class of models based on the time transfer function ( ttf ) technique , has been developed since 2004 .the ttf formalism stands as a development of the synge world function which , contrary to all the method described so far , is an integral approach based on the principle of least action . in this modelsone does not solve the system of differential equations of the geodesic equations , and thus does not retrieve the solution of the equations of motion of the photons , but it concentrates on obtaining some essential information about the propagation of these particles between two points at finite distance ; the coordinate time of flight , the direction triple of the light ray at either the point of emission ( ) and of reception ( ) , and the ratio of the temporal components of the tangent four - covector , which is related to the frequency shift of a signal between two points .all these models are conceived to be used at least at the level , suitable for the accuracy foreseen by future astrometric experiments like gaia and game .nonetheless it has to be considered that , because of the unprecedented level of accuracy which is going to be reached , both the astrometric models and the data processing software will be applied for the first time to a real case .moreover , in the case of gaia this problem is even more delicate since here the satellite is self - calibrating and will perform _ absolute measurements _ which is equivalent to the definition of a unit of measure .these are some of the reasons why extensive analytical and numerical comparisons among the different models are being conducted . from the theoretical and analytical point of view ,a first comparison was conducted in showing that grem and ramod3 have an equivalent treatment of the aberration .later the equivalence of ramod3 and the model in at the level of the ( differential ) equations of motion has then been shown in , while the explicit formulae for the light deflection and the flight time of grem , ramod3 , and ttf was compared in where it is demonstrated the equivalence of ttf and grem at 1pn in a time - dependent gravitational field and that of ttf and ramod in the static case .numerical comparisons between the grem and the pm models showed that they give the same results at the sub- level . 
on the other side ,grem has been compared with a low - accuracy ( ) version of ramod proving that even a relatively unsophisticated modeling of the planetary contributions can take into account of the light deflection up to the level almost everywhere in the sky .this means that , in principle , some experiments like the reconstruction of the global astrometric sphere of gaia could initially be done by models .both the analytical and the numerical comparison , however , showed that the correct computation of the retarded distance of the ( moving ) perturbing bodies is fundamental to achieve the required accuracy .the reduction of the data coming from astrometric missions bring to the attention of the scientific community another kind of new problems , i.e. those connected to the need of reducing a huge amount of astrometric data in ways that were never experienced before .a significant example is given by the problem of the reconstruction of the global astrometric sphere in the gaia mission . from a mathematical point of view , the satellite observations translate into a large number of equations , linearized with respect to the unknown parameters around known initial values , which constitute an overdetermined and sparse equations system that is solved in the least - squares sense to obtain the astrometric catalog with its errors . in the gaia missionthese tasks are done by the astrometric global iterative solution ( agis ) but the international consortium which is in charge of the reduction of the gaia data decided to produce also an independent sphere reconstruction named avu - gsr .this was motivated by the absolute character of these results , and by uniqueness of the problem which comes from several factors , the main being represented by the dimensions of the system which are of the order of .a brute - force solution of such system would require about flops , a requirement which can not be decreased at acceptable levels even considering that the sparsity rate of the reduced normal matrix is of the order of .it is therefore necessary to resort to iterative algorithms .agis uses additional hypotheses on the correlations among the unknowns which are reflected on the convergence properties of the system and permit a separate adjustment of the astrometric , attitude , instrument calibration , and global parameters , allowing the use of an embarrassingly parallel algorithm .the starting hypotheses , however , can hardly be proved rigorously , and have only be verified `` a posteriori '' by comparing the results with simulated true values , a situation which can not hold in the operational phase with real data . moreover ,this method by definition prevents the estimation of the correlations between the different types of unknown parameters , which constitute the other unique characteristic of this problem .these considerations about the agis module lead to the solution followed by avu - gsr , which uses a modified lsqr algorithm ) to solve the system of equations which , however , can not be solved without resorting to hpc parallel programming techniques as explained in .the increasing precision in the modern astrometric measurements from space makes high - accuracy tests of the dm / de vs. gravity theory debate a target accessible to future space - born astrometric missions . 
to this aim ,viable relativistic astrometric models are needed , and three classes of models have been developed during the last two decades .work is still on - going to cross - check their mutual compatibility at their full extent , but what has been done so far demonstrated that they are equivalent at least at the level of accuracy required for the gaia measurements . at the same timethese missions put new challenges to the efforts of data reduction .we have briefly shown how the problem was faced in gaia , in the limited contest of the reconstruction of the global astrometric sphere , where an additional constraint is put by the absolute character of its main product .this work has been partially funded by asi under contract to inaf i/058/10/0 ( gaia mission - the italian participation to dpac ) .10 url # 1#1urlprefix[2][]#2 t , piazza f and veneziano g 2002 _ phys .lett . _ * 89 * 81601 s g , shaom and nordtvedt jr k 2004 _ astronomische nachrichten _ * 325 * 267277 m , vecchiato a , ligori s , sozzetti a and lattanzi m g 2012 _ experimental astronomy _ * 34 * 165180 m h 1989 _ relativity in astrometry , celestial mechanics and geodesy _ astronomy and astrophysics library ( berlin heidelberg new york : springer - verlag ) bandieramonte m , becciani u , vecchiato a , lattanzi m and bucciarelli b 2012 _ 2012 ieee 21st international workshop on enabling technologies : infrastructure for collaborative enterprises _ * 0 * 167172 issn 1524 - 4547
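as a schematic companion to the sphere-reconstruction discussion above, the sketch below solves a toy sparse, overdetermined least-squares problem with scipy's lsqr routine. the real gaia system is many orders of magnitude larger, and the avu-gsr implementation is constrained, preconditioned and parallelised on hpc resources, so this is only an illustration of the iterative approach, not of that pipeline.

```python
# Illustrative sparse least-squares solution with LSQR (toy sizes, synthetic data).
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import lsqr

rng = np.random.default_rng(1)
n_obs, n_unknowns = 200_000, 5_000        # toy sizes; the real system is vastly larger
nnz_per_row = 12                          # each observation touches only a few unknowns

# random sparsity pattern as a stand-in for the astrometric/attitude/calibration
# partial derivatives of the linearized observation equations
rows = np.repeat(np.arange(n_obs), nnz_per_row)
cols = rng.integers(0, n_unknowns, size=n_obs * nnz_per_row)
vals = rng.normal(size=n_obs * nnz_per_row)
A = sp.csr_matrix((vals, (rows, cols)), shape=(n_obs, n_unknowns))

x_true = rng.normal(size=n_unknowns)
b = A @ x_true + 1e-3 * rng.normal(size=n_obs)   # synthetic observations with noise

# LSQR only needs matrix-vector products, so A is never densified
x_hat, istop, itn = lsqr(A, b, atol=1e-10, btol=1e-10, iter_lim=500)[:3]
print(f"stop flag {istop} after {itn} iterations, "
      f"rms error {np.sqrt(np.mean((x_hat - x_true) ** 2)):.2e}")
```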
we review the mathematical models available for relativistic astrometry, discussing the different approaches and their accuracies in the context of modern space experiments such as gaia and game. we show how these models can be applied to the real world, and we discuss their consequences from the mathematical and numerical point of view, with specific reference to the case of gaia, whose launch is due before the end of the year.
consumer spending is an essential component of the economy , accounting for 71% of the total us gross domestic product ( gdp ) in 2013 . in spite of representing just over 10% of all consumer spending , online shopping is rapidly growing as people are becoming more comfortable with the online payment systems , security and delivery of the purchased goods . over the last three years, online sales grew over and are showing signs of exponential growth .one of the largest and fastest - growing online markets is apple s ios market , where people can make digital purchases from several different categories .apple s revenue from digital purchases surpassed 4.6b .there are six main categories of iphone purchases : applications ( apps ) , songs , movies , tv shows , books , and in - app purchases ( purchases within an app , e.g. , bonuses or coins in games ) .these categories differ vastly in numbers of purchasers : 16 m people purchased at least one song , but only 671k people purchased a tv show . the number of purchases by category varies greatly as well : there are 430 m song and 255 m in - app purchases , while movies , books , and tv shows have fewer than 40 m purchases all together .the total money spent in each category varies even more : in - app purchases account for 31.1 , and for men it is 40k annual income ( figure [ fig : spending_income ] ) .this is in contrast with online shopping , where users with higher income tend to spend more money shopping online . [cols="^,^ " , ] & accuracy + app embeddings & 4.7% + nmf & 4.1% + top apps & 2.2% + lda & 1.7% + [ table : baselines_knn ] in the case that the model from section [ subsec : novelty ] predicts that a re - purchase is most likely to happen , we use the frequency and recency of the user s previous app consumption to predict the app from which the re - purchase will occur .this may appear to be an easy prediction problem , as one might think that users almost always purchase from the last app they purchased from , or the app from which they made the majority of purchases .however , in case of in - app re - purchases , only 46.5% of re - purchases come from the latest app they purchased from , and only 45.3% of them come from the app from which the user made most of the purchases . this justifies the need for a more involved re - purchase model .we follow a similar approach to the one from , and use both recency and popularity of previous apps to predict from which app the user s next in - app purchase will come .we use a weight function and a time function that maps the frequency of the usage and time since previous usage to the learnt values .this repeat consumption model could be used for the consumption : in this equation , function represents the frequency of the purchase from the app , and function t represents the time between the purchases .these functions are optimized jointly by calculating the negative log - likelihood over the equation .the negative log - likelihood is not convex in and , but is convex in each function when the other one is fixed .thus , we use a standard gradient descent to maximize the likelihood with respect to and , separately . after learning the weight functions , we are able to predict the correct app from which the user is going to make a purchase with 54.8% accuracy , which is considerably higher than the baselines mentioned above , i.e. 
46.5% and 45.3% accuracy by always predicting the latest or the most consumed app , respectively .online shopping is becoming more popular as people learn to trust online payment systems , which was not the case in the past .multiple studies , aimed at profiling online shoppers , found that online shoppers tend to be younger , wealthier , and more educated compared to the average internet user .a more recent work showed that while women are more likely to be online shoppers , men spend more money per purchase and make more purchases overall . in our work , we focus on a particular subset of online purchases , iphone digital purchases .there are considerable differences in characteristics of iphone purchases and purchases of physical goods .one of the main differences is that people are much more likely to purchase the same item multiple times .similar to online shopping , spending on mobile digital goods is increasing , and people have spent more than $ 20 billion dollars in the apple app store in 2015 , which is four times more per user than in android app stores . this might be due to the different demographics of iphone users .given this high level of spending , understanding the market would help us to more effectively target apps toward users who are likely to become regular users and frequent spenders . despite the popularity of the iphone digital market , there has not been any large - scale study of how people are spending money on this platform . in this work ,we show that most of the money is spent on in - app purchases , and we present a demographic and prediction analysis of spending .usage and purchases from apps have been the subject of a few studies .sifa et al . studied the purchase decisions in free - to - play mobile games .they built a classifier that predicts whether a user is going to make any purchase in the future and also built a regression model to estimate the amount of money that will be spent by each user .the models are moderately accurate .schoger studied the monetization of popular apps in the global market , identifying growing markets and that in - app purchases are increasingly accounting for a larger fraction of total purchases .our study , unlike those studies , includes the full history of iphone purchases by the users and considers that many users make purchases from multiple apps .moreover , the large scale of the data set allows us have enough big spenders to analyze their behavior accurately .we also study changes in user purchases over time , how users becomes frequent buyers in a particular app , and how their purchases evolve over time .the abandonment of a service is called _ consumer attrition _ or _churn_. the importance of consumer attrition analysis is driven by the fact that retaining an existing consumer is much less expensive than acquiring a new consumer .thus , prediction of consumer churn is of great interest for companies , and has been studied extensively . for example ., ritcher et al .exploit the information from users social networks to predict consumer churn in mobile networks .braun and schweidel focus on the causes of churn rather than when churn will occur .they find that a considerable fraction of churn in the service they studied happens due to reasons outside the companies control , e.g. , the consumer moving to another state . in the context of mobile games , runge et al .study user churn for two mobile games and predict it using various machine learning algorithms . 
they also implement an a / b test and offer players bonuses before the predicted churn .they find that the bonuses do not result in longer usage or spending by the users .kloumann et al .study the usage of apps by users who use a facebook login and model the lifetime of apps using the popularity and sociality of apps , showing that both of these affect the lifetime of the app .baeza - yates et al . addressed the problem of predicting the next app the user is going to open through a supervised learning approach . in our work , we model the whole sequence of purchases that users make , including adoption , churn , and prediction of the next app .our work is the first work that studies the details of all iphone purchases made by a large number of users .this allows us to better understand the interplay between usage of multiple apps that are competing for the same users , their attention and their purchasing power .mobile devices have grown wildly in popularity and people are spending more money purchasing digital products on their devices . to better understand this digital marketplace , we studied a large data set of more than 776 m purchases made on iphones , including songs , apps , and in - app purchases .we find that , surprisingly , 61% of all the money spent is on in - app purchases , and a small group of users are responsible for most of this spending : the top 1% of users are responsible for 59% of all spending on in - app purchases .we characterize these users , showing that they are more likely to be men , older , and less likely to be from the us .then , we focus on how these big spenders start and stop making purchases from apps , finding that as users gradually lose interest , the delay between purchases increases .the amount of money spent per day on purchases initially increases , then decreases , with a sharp drop before abandonment .nevertheless , from the perspective of app developers these big spenders are a valuable user segment as they are 4.5x more likely to be a big spender in a new app than a random app user . in the last part of our study, we model the purchasing behavior of users by breaking it down into three different steps .first , we model the time between purchases by testing a variety of different distributions , and we find the pareto distribution fits the data most accurately .second , we take a supervised learning approach to predict whether a user is going to make purchase from a new app .finally , if the purchase is from a new app , we use a novel approach to predict the new app based on the previous in - app purchases . if the purchase is from an app that the user purchased from in the past , we combine the earlier frequency of the purchases and the time between the purchases to predict from which app the re - purchase will come .the models proposed in our study can be leveraged by app developers , app stores and ad networks to better target the apps to users .
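to make the modelling steps summarised above more tangible, two small sketches follow. the first fits a few candidate distributions to the gaps between consecutive purchases and ranks them by log-likelihood; the candidate set, the pinned location parameter and the synthetic gaps are assumptions made for this example, not the paper's exact model selection.

```python
# Illustrative comparison of candidate distributions for inter-purchase gaps.
import numpy as np
from scipy import stats

def compare_gap_models(gaps_in_days):
    gaps = np.asarray(gaps_in_days, dtype=float)
    candidates = {"pareto": stats.pareto, "lognorm": stats.lognorm, "expon": stats.expon}
    results = {}
    for name, dist in candidates.items():
        params = dist.fit(gaps, floc=0)            # MLE fit, location pinned at zero
        results[name] = (np.sum(dist.logpdf(gaps, *params)), params)
    return dict(sorted(results.items(), key=lambda kv: -kv[1][0]))

if __name__ == "__main__":
    rng = np.random.default_rng(2)
    synthetic_gaps = rng.pareto(2.5, size=5000) + 1.0   # stand-in for real gap data
    for name, (loglik, _) in compare_gap_models(synthetic_gaps).items():
        print(f"{name:8s} log-likelihood = {loglik:.1f}")
```

the second sketch illustrates recency-and-frequency scoring for picking the app of the next re-purchase; the paper learns the weight function and the time function from data by maximising the likelihood, whereas here both are fixed parametric forms with assumed hyper-parameters alpha and tau, chosen purely for illustration.

```python
# Simplified recency/frequency scorer for the next re-purchase app (illustrative only).
import math
from collections import Counter

def predict_repurchase_app(history, now, alpha=0.5, tau=7.0):
    """history: list of (app_id, purchase_time_in_days); returns the highest-scoring app.

    score(app) = w(frequency) * t(recency) with w(n) = n**alpha and t(dt) = exp(-dt/tau).
    """
    counts = Counter(app for app, _ in history)
    last_seen = {}
    for app, ts in history:
        last_seen[app] = max(ts, last_seen.get(app, ts))
    return max(counts, key=lambda a: (counts[a] ** alpha)
               * math.exp(-(now - last_seen[a]) / tau))

if __name__ == "__main__":
    history = [("game_a", 1.0), ("game_a", 2.0), ("game_a", 3.0), ("game_b", 9.5)]
    # "game_a" dominates on frequency, "game_b" on recency; the trade-off decides
    print(predict_repurchase_app(history, now=10.0))
```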
with mobile shopping surging in popularity, people are spending ever more money on digital purchases through their mobile devices. however, few large-scale studies of mobile shopping exist. in this paper we analyze a large data set consisting of more than 776 m digital purchases made on apple mobile devices, including songs, apps, and in-app purchases. we find that 61% of all the spending is on in-app purchases and that the top 1% of users are responsible for 59% of all the spending. these big spenders are more likely to be male and older, and less likely to be from the us. we study how they adopt and abandon individual apps, and find that, after an initial phase of increased daily spending, users gradually lose interest: the delay between their purchases increases and the spending decreases, with a sharp drop toward the end. finally, we model the in-app purchasing behavior in multiple steps: we model the time between purchases; we train a classifier to predict whether the user will make a purchase from a new app or continue purchasing from an app they already use; and, based on the outcome of the previous step, we attempt to predict the exact app, new or existing, from which the next purchase will come. the results yield new insights into spending habits in the mobile digital marketplace.
graph analytics has drawn much attention from research and industry communities , due to the wide applications of graph data in different domains .one of the major issues in graph analytics is identifying cohesive subgraphs .there are lots of indexes to depict the cohesiveness of a graph , such as cliques , k - truss , k - core , f - groups , n - clans and so on , among which -core is recognized as one of the most efficient and helpful one . given a graph ,the -core is the largest subgraph in , such that the minimum degree of the subgraph is at least .the core number of a vertex is defined as the largest such that there exists a -core containing . in static graphs ,the computation of the core number of each vertex is known as the -core decomposition problem . besides the analysis of cohesive subgroup ,-core decomposition are widely used in a large number of applications to analyze the structure and function of a network .for example , the k - core decomposition can be used to analyze the topological structure of internet , , to identify influential spreader in complex networks , to analyze the structure of large - scale software systems , to predict the function of biology network , and to visualize large networks and so on . in static graphs ,the -core decomposition problem has been well studied .the algorithm presented in can compute the core number of each vertex in time , where is the number of edges in the graph .however , in many real - world applications , graphs are subject to continuous changes like insertion or deletion of vertices and edges . in such dynamic graphs ,many applications require to maintain the core number for every vertex online , given the network changes over time .but it would be expensive to recompute the core numbers of vertices after every change of the graph , though the computation time is linear , as the size of the graph can be very large .furthermore , the graph change may only affect the core numbers of a small part of vertices .hence , the _ core maintenance _ problem is recommended , which is to identify the vertices whose core numbers will be definitely changed and then update the core numbers of these vertices .there are two categories of core maintenance , _ incremental _ and _ decremental _ , which handle edge / vertex insertion and deletion respectively .previous works focus on maintaining the core numbers of vertices in the scenario that a single edge is inserted or deleted from the graph . for multiple edge/ vertex insertions / deletions , the inserted / deleted edges are processed sequentially .the sequential processing approach , on the one hand , incurs extra overheads when multiple edges / vertices are inserted / deleted , as shown in fig .[ fig1 ] , and on the hand , it does not fully make use of the computation power provided by multicore and distributed systems .therefore , it is necessary to investigate the parallelism in the edge / vertex processing procedure and devise parallel algorithm that suits to implement in multicore and distributed systems .but to the best of our knowledge , there are no known parallel algorithms proposed for the core maintenance problem . 
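for reference, a compact sketch of static core decomposition by repeatedly peeling minimum-degree vertices is shown below; it follows the spirit of the linear-time algorithm mentioned above, but is written for clarity rather than for its exact bucket bookkeeping and constant factors.

```python
# Illustrative k-core decomposition by peeling minimum-degree vertices.
from collections import defaultdict

def core_decomposition(adj):
    """adj: dict mapping vertex -> set of neighbours (undirected, simple graph).
    Returns a dict vertex -> core number."""
    degree = {v: len(nbrs) for v, nbrs in adj.items()}
    buckets = defaultdict(set)            # buckets[d] = vertices with current degree d
    for v, d in degree.items():
        buckets[d].add(v)
    core, removed, k = {}, set(), 0
    for _ in range(len(adj)):
        d = min(d for d, b in buckets.items() if b)   # smallest non-empty bucket
        k = max(k, d)                                  # core values never decrease
        v = buckets[d].pop()
        core[v] = k
        removed.add(v)
        for u in adj[v]:
            if u not in removed:                       # "remove" v: lower neighbours' degrees
                buckets[degree[u]].discard(u)
                degree[u] -= 1
                buckets[degree[u]].add(u)
    return core

if __name__ == "__main__":
    # a triangle with a pendant vertex: vertices 0, 1, 2 have core number 2, vertex 3 has 1
    adj = {0: {1, 2}, 1: {0, 2}, 2: {0, 1, 3}, 3: {2}}
    print(core_decomposition(adj))
```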
in the core maintenance problem ,the insertions / deletions of vertices can be handled by implementing an edge insertion / deletion algorithm .specifically , inserting a vertex is equivalent to the following process : first inserting the vertex into the graph by setting its core number as 0 , and then inserting the edges connected to the new vertex .similarly , the deletion of a vertex is equivalent to the process that deleting the edges connected to the vertex and finally deleting the vertex .hence , in this paper , we only consider the edge insertions and deletions .it is a very difficult task to design parallel algorithms for core maintenance in dynamic graphs .different from the single edge insertion / deletion case , where the core number of each vertex changes by at most 1 , it is hard to identify the change of a vertex s core number in the multiple edge insertion / deletion scenario , as the change of a vertex core number may be affected by several inserted edges . an intuitive manner is to split the inserted / deleted edges into sets that affect disjoint sets of vertices in the original graph .however , the parallelism of this manner is poor . in this work ,we take a more efficient approach that exhibits better parallelism .specifically , we propose a structure called _ superior edge set_. the inserted / deleted edges can be split into multiple superior edges sets , and for each vertex connected to inserted / deleted edges , a superior edge set contains at least one inserted edge connected to it .it is shown that the insertion or deletion of edges in a superior edge set can change the core number of every vertex by at most 1 .hence , the core numbers of vertices when inserting or deleting a superior edge set can be maintained using a parallel procedure : first identifying the vertices whose core numbers will change due to the insertion or deletion of every edge in parallel , and then updating the core number of these vertices by 1 .a parallel algorithm can then be obtained by iteratively handling the insertions / deletions of split superior edge sets using the above parallel procedure . in summary ,our contributions are summarized as follows .* we propose a structure called _ superior edge set _ , and show that if the edges of a superior edge set is inserted into / deleted from a graph , the core number of each vertex can change by at most 1 .it implies that the insertion / deletion of these edges can be processed in parallel .we also give sufficient conditions for identifying the vertices whose core numbers will change , when inserting / deleting a superior edge set . *we then present parallel algorithms for incremental and decremental core maintenance respectively .comparing with sequential algorithms , our algorithms reduce the number of iterations for processing inserted / deleted edges from to the maximum number of edges inserted to each vertex . 
in large - scale graphs , the acceleration is significant , since each vertex can connect to only a few inserted or deleted edges .for example , as shown in the experiments , even if inserting edges to the livejournal graph ( refer to table [ table_graph ] in section [ sec : experiment ] ) , the number of iterations is just 3 in our parallel algorithms , in contrast with ones in sequential processing algorithms .we also conduct extensive experiments over both real - world and synthetic graphs , to evaluate the efficiency , stability and scalability of our algorithms .the results show that comparing with sequential processing algorithms , our algorithms significantly speed up core maintenance , especially in cases of large - scale graphs and large amounts of edge insertions / deletions .[ fig1 ] and are inserted . *traversal * algorithm in processes them one by one .first for edge , it will visit vertices and update their core numbers . andthen when inserting , it will visit , and update core numbers of .however in our parallel algorithm , edges and are handled in parallel using two processes . in the process handling ,the algorithm execution will visit and update , and in another process for , the algorithm will visit and update .hence , the parallel algorithm avoids duplicate visiting of .,title="fig:",width=192 ] the rest of this paper is organized as follows . in section [ sec : relate ] , we briefly review closely related works . in section [ sec : problem ] , the problem definitions are given .theoretical results supporting the algorithm design are presented in section [ sec : basis ] .the incremental and decremental parallel algorithms are proposed in section [ sec : in ] and section [ sec : de ] respectively . in section[ sec : experiment ] , the experiment results are illustrated and analyzed . the whole paper is concluded in section [ sec : conclusion ] .in static graphs , the core decomposition problem has been extensively studied .the state - of - the art algorithm was given in , the runtime of which is linear in the number of edges . in ,an external - memory algorithm was proposed when the graph is too large to hold in memory .core decomposition in the distributed setting was studied in .the above three algorithms were compared in under the graphchi and webgraph models .parallel core decomposition was studied in .core maintenance in dynamic graphs has also been widely studied .however , all previous works focus on the case of single edge insertion / deletion , and sequentially handle multiple edge insertions / deletions .efficient algorithms were proposed in . in ,an algorithm was proposed to improve the i / o efficiency .furthermore , and solved the core maintenance problem in the distributed environment .we consider an undirected , unweighted simple graph , where is the set of vertices and is the set of edges .let and .for a node , the set of its neighbors in is denoted as , i.e. , .the number of s neighbors in is called the degree of , denoted as .so .the maximum and minimum degree of nodes in is denoted as and respectively .we next give formal definitions for the _ core number _ of a vertex and other related concepts .given a graph and an integer , the -core is a maximal connected subgraph of , where each vertex has at least neighbors in , i.e. , . given a graph , the core number of a vertex , denoted by , is the the largest , such that there exists a -core containing . 
for simplicity, we use to denote when the context is clear .the max - k - core associated with a vertex , denoted by , is the -core with . in this work, we aim at maintaining the core numbers of vertices in dynamic graphs .specifically , we define two categories of graph changes : _ incremental _ , where a set of edges are inserted to the original graph , and _ decremental _ , where a set of edges are deleted . based on the above classification , we distinguish the core maintenance problem into two scenarios , as defined below . given a graph , the incremental core maintenance problem is to update the core numbers of vertices after an incremental change to . given a graph , the decremental core maintenance problem is to update the core numbers of vertices after a decremental change to . in this paper , we proposed efficient parallel algorithms for both the incremental and decremental core maintenance problem , which are named the * superiorinsert * and * superiordelete * algorithm .the main idea of our algorithms is to find the definitely updated vertex set when the graph changes , which was first proposed in . besides, our algorithms can deal with both edge and vertex insertions / deletions , this to say , we can deal with the insertion and deletion of an arbitrary new graph .in this section , we give some theoretical lemmas that constitute the theoretical basis of our algorithms . at first , we introduce some definitions . given a graph , an edge called a superior edge for if .notice that in the definition , we do not require , i.e. , may be an edge that is about to insert to graph .furthermore , we define the core number of an edge as the smaller one of its endpoints , i.e. , . an edge set is called an -superior edge set , if for each edge , it satisfies : \(i ) is a superior edge with core number .( ii ) if and have an common endpoint , . in other words , in a -superior edge set , each edge is a superior edge for a vertex with core number , and in , each vertex connects to at most one superior edge for it .the union of several -superior edge sets with distinct values is called a _superior edge set_. it can be known that in a superior edge set , each vertex can still connect to at most one superior edge for it . in the following , we will first show that when inserting / deleting a superior edge set ( lemma [ them : superioredgesetinsert ] and lemma [ them : superioredgesetdelete ] ) , the core number of every vertex can change by at most 1 , and then give a sufficient condition for identifying vertices whose core numbers change ( lemma [ le : exkpaths ] , lemma [ corollary : csd ] and lemma [ corollary : sd ] ) .we first prove a result on the core number increase of every vertex when inserting a -supeior edge set . for simplicity, we use to denote when the context is clear .[ lem : k - superioredgesetinsert ] given a graph , if a -superior edge set is inserted to , where , for each node , it holds that : + if , can increase by at most 1 ; + if , will not change . for , we need to show that for a vetex with , can increase by at most 1 .otherwise , assume increases by to , where .let and be the max--core of before edge insertion and the max--core of after edge insertion respectively .then , .it can be concluded that one of inserted edges must belong to , as otherwise before insertion as well .let . for a vertex ,if , the degree of does not change when deleting the edges in , so . if , can lose at most one neighbor that is connected by a superior edge for it in , so . 
if , must have at least + 1 neighbors whose core numbers are not smaller than + 1 in . we add the vertices whose is larger than back to , and denote the induced graph as . it can be obtained that . andfrom to , does not lose any neighbor whose is not smaller than + 1 .hence , in , . then it can be seen that each vertex in has a degree at least + 1 , i.e. , .this means that , which contradicts with .hence , can increase by at most 1 . for , we need to show that for a vertex if , can not change .we consider two cases : and .assume = increases by to , where .let and be the max--core of before edge insertion and the max--core after edge insertion respectively .then we have , .we first consider the case .there must be at least one of the edges in belonging to , as otherwise before edge insertion .consider the edge .at least one of its endpoints has a core number , since is a -superior edge set .denote by the endpoint of with core number .as shown before , can increase by at most 1 .hence , after the edge insertion , .this means that is not in , which is a contradiction .therefore , if , will not change after the edge insertion .we next consider the case .similar as beofore , it can be shown that at least one of the edges in that is contained in .let .let be a vertex in , we consider three cases . if and as proved before , it can be obtained that .if , as shown before , the core number of will not be affected by the edge insertions .if , because , and does not connect to edges in , we can get that .let .based on above , it can be obtained that is a -core and . butthis contradicts with the fact that . then we can get that the core number of does not change after inserting . combining all above together, the lemma is proved .using a similar argument as that for proving lemma [ lem : k - superioredgesetinsert ] , we can get that the core number changes of vertices after deleting a -superior edge set from graph , as given in the following lemma .[ lem : k - superioredgesetdelete ] given a graph , if a -superior edge set is deleted from , where , for each vertex , it holds that : + if , can decrease by at most 1 ; + if , will not change . from the above lemma [ lem : k - superioredgesetinsert ] and lemma [ lem : k - superioredgesetdelete ] , we have known that for a graph , after a -superior edge set is inserted into or deleted from , only vertices with core numbers may increase / decrease , and the change is at most 1 .this implies that if a -superior edge set is inserted / deleted , it will be enough to only visit vertices whose core numbers are and check if their core numbers will be updated . and because the core numbers of these vertices can change by at most 1 , we can handle these edge insertions in parallel : first we find the update set of vertices that will change core numbers because of the insertion of each particular edge in parallel , and the union of these update sets is just the set of vertices whose core numbers will change by one . in fact , we can get even better results , which are given in the following lemma [ them : superioredgesetinsert ] and lemma [ them : superioredgesetdelete ] .[ them : superioredgesetinsert ] given a graph and a superior edge set , where for is a -superior edge set and if , it holds that after inserting into , the core number of each vertex can increase by at most 1 .it can be seen that inserting edges in into all together has the same result with inserting one by one .we next assume are inserted one by one . 
to prove the lemma, we need to prove that if inserting makes a vertex increase its core number from to + 1 , its core number can not change any more when inserting for .clearly , we only need to prove the above result for .there are two cases we need to consider .if , by lemma [ lem : k - superioredgesetinsert ] , the core number of will not increase any more when inserting , since only vertices with core numbers of may increase their core numbers .we next consider the case of .we claim that if there is a vertex increasing its core number from to after the insertions of and , the vertex must have a neighbor which increases the core number from to as well during the insertions .let be a vertex whose core number is increased from to after inserting and .notice that does not connect to edges in .hence , the degree of does not change when inserting .furthermore , by lemma [ lem : k - superioredgesetinsert ] , the core number of each neighbor of can be increased by at most 1 .so has at least neighbors whose core numbers are not smaller than and some of these neighbors have a core number of .denote by the vertices in whose core numbers are before inserting .it can be obtained that there must be a vertex whose core number is before inserting , as otherwise , the core number of is before inserting , which contradicts with our assumption .let denote the set of vertices whose core numbers change from to after the insertions of and .because inserting does not change the degrees of vertices in , there must be a vertex whose core number change is caused because of the core number change of vertices in , as otherwise no vertex in can change the core number .let be a vertex in whose core number change causes the core number change of .then is before inserting and is increased to after inserting . to make increase its core number to after inserting , there must be at least neighbors in whose core numbers are initially not smaller than before inserting and .it concludes that before inserting and .however , this contradicts with the fact that is before insertions .the contradiction shows that if the core number of a vertex is changed when inserting , its core number will not change any more when inserting . combining all above together ,the lemma is prove .similarly , for the case of a superior edge set deletion , we have the following result .[ them : superioredgesetdelete ] given a graph and a superior edge set , where for is a -superior edge set and if , it holds that after deleting from , the core number of each vertex can decrease by at most 1 . 
in above, we have shown that when inserting or deleting a superior edge set from a graph , the core numbers of vertices can change by at most 1 .this implies that the core updates of inserting / deleting edges in a superior edge set can be processed in parallel by distributing distinct -superior edge sets to distinct processes .furthermore , we have also shown which set of vertices may change due to the insertion or deletion of a -superior edge set .in the subsequent section , we give more accurate conditions for a vertex to change its core number when inserting / deleting a superior edge set .we first introduce some notations .[ de : sd ] for a vertex in a graph , is a _ superior neighbor _ of if the edge is a _ superior edge _ of .the number of s superior neighbors is called the _ superior degree _ of , denoted as .it can be known that only superior neighbors of a vertex may affect the change of its core number .the _ constraint superior degree _ of a vertex is the number of s neighbors that satisfies or . for a vertex ,its constraint superior degree is the number of neighbors , that has a larger core number than or has the same core number but has enough neighbors to support itself to increase core number .[ def : kpt_u ] for a vertex with a core number , the _ -path - tree _ of is a dfs tree rooted at and each vertex in the tree satisfies . for simplicity we use to represent k - path - tree of .the * * includes all vertices with that are reachable from via paths that consists of vertices with core numbers equal to .when a superior edge of is inserted or deleted , as shown in lemma [ lem : k - superioredgesetinsert ] , only vertices in may change their core numbers . and for the insertion case , a more accurate condition was given in for identifying the set of vertices that may change core numbers , as shown below .[ them : kpath ] given a graph , if an edge is inserted and , then only vertices in the * * of and may have their core numbers increased , and the increase is no more than 1 .however , the above lemma [ them : kpath ] is just suitable for the one edge insertion scenario .we next generalize the above result to the scenario of inserting a -superior edge set , as shown in lemma [ them : exk - path ] below , which will help find the set of vertices with core number changes when inserting multiple edges . before giving the result, we need to generalize the concept of -path - tree to -path - tree .[ def : exk - path - tree ] for a -superior edge set = \{ } , w.l.o.g . ,assume that for each = , .the union of for every is called the _ exk - path - tree _ of . for simplicitywe use to represent exk - path - tree of . by lemma[ them : kpath ] , we can get that when inserting a -superior edge set , only vertices in the satisfying may have their core numbers change , and lemma [ lem : k - superioredgesetinsert ] ensures that these vertices can change their core numbers by at most 1 .this result is summarized in the following lemma .[ them : exk - path ] given a graph , if a -superior edge set is inserted , then only vertices in the satisfying may have their core numbers increased , and the core change is at most 1 .we only prove the insertion case , and the deletion case can be proved similarly .notice that the core number changes of vertices are the same in scenarios of inserting all edges in together and inserting the edges in one by one .so we can assume that the edges in are inserted one by one .by lemma [ ] first we should know that , when a -superior edge set = \{ , , ... 
, }is inserted / deleted , the vertices who update cores is the same as when inserting / deleting edges in one by one .assume and core( ) core( ) = , after inserting / deleting into the graph , vertices who update cores construct a vertex set , after inserting / deleting the update vertex set is .so we can have = and , where , as otherwise a vertex may update its core more than once , which is a contradiction to theorem [ lem : k - superioredgeset ] . and according to theorem [ them : kpath ] , vertices in are all in the * * of and sd( ) core( ) = .so all vertices in are in the and satisfies sd( ) .the proof is completed .the theorem [ them : exk - path ] identifies vertices who may change cores after inserting a -superior edge set .further , for a superior edge set we can similarly have the following theorem .[ def : superior - tree ] for a superior edge set = , is a -superior edge set where .we denote the union of for all as the * superior - tree * of .the vertices in the * superior - tree * is denoted as * * , and all neighbors of all vertices in * * is defined as * *. [ them : superior - tree ] given a graph , if a superior edge set is inserted / removed then only vertices in the * superior - tree * and sd( ) where is the -superior edge set belongs to , may have their core numbers increased / decreased .the proof is similarly as theorem [ them : exk - path ] .since the insertion / deletion of a -superior edge set only affects vertices whose core is as proved before , we can insert / delete a -superior edge set at a time , the vertices that update core numbers will be exactly the same as inserting / deleting them all at a time .we denote all vertices update cores upon the insertion / deletion of as . besides as theorem [ them :superioredgeset ] and theorem [ them : exk - path ] proves , all vertices can only update core number once and all update vertices are in an and satisfies sd( ) the above lemma [ them : exk - path ] implies that after an edge in a -superior edge set is inserted , the vertices whose core numbers change during the insertion will not change any more when inserting other edges in . based on the above result and lemma [ them : superioredgesetinsert ] and lemma [ them : superioredgesetdelete ], we can get the set of vertices whose core number change when inserting a superior edge set .[ le : exkpaths ] given a graph , if a superior edge set is inserted , then only vertices in every * * **s of every for satisfying may have their core numbers increased , and the core number change can be at most 1 . by the definition of , we have the following result .[ corollary : csd ] given a graph and a vertex with core number . after inserting a superior edge set into ,if , then can not be in a -core .lemma [ le : exkpaths ] and lemma [ corollary : csd ] give accurate conditions to determine the set of vertices that will change the core numbers , after inserting a superior edge set . for deletion case , we have the following result , which can be obtained directly from definition [ de : sd ] .[ corollary : sd ] after deleting an edge set from a graph , for , if = and , then will decrease its core number . 
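as a concrete illustration of the superior degree and of the deletion condition in lemma [ corollary : sd ] , the sketch below ( our own code , with assumed function names , not the authors' implementation ) computes sd for a vertex from the current core numbers and then , after a batch of edges has been removed , flags the endpoints whose remaining superior degree has fallen below their core number ; by the lemma these vertices are guaranteed to decrease their core number and are natural seeds for the negative dfs of the decremental algorithm in section [ sec : de ] .

```cpp
#include <cstdio>
#include <set>
#include <utility>
#include <vector>

// superior_degree: number of neighbors u of v with core[u] >= core[v].
int superior_degree(int v, const std::vector<std::vector<int>>& adj,
                    const std::vector<int>& core) {
    int sd = 0;
    for (int u : adj[v])
        if (core[u] >= core[v]) ++sd;
    return sd;
}

// After the edges in `deleted` have been removed from adj (core[] still holds
// the OLD core numbers), return the endpoints that must lose one core level:
// the condition of lemma [corollary:sd], SD(v) < core(v).
std::vector<int> deletion_seeds(const std::vector<std::pair<int, int>>& deleted,
                                const std::vector<std::vector<int>>& adj,
                                const std::vector<int>& core) {
    std::set<int> touched;
    for (const auto& e : deleted) { touched.insert(e.first); touched.insert(e.second); }
    std::vector<int> seeds;
    for (int v : touched)
        if (superior_degree(v, adj, core) < core[v]) seeds.push_back(v);
    return seeds;
}

int main() {
    // triangle {0,1,2} plus pendant 3; old core numbers: 2, 2, 2, 1
    std::vector<std::vector<int>> adj = {{1, 2, 3}, {0, 2}, {0, 1}, {0}};
    std::vector<int> core = {2, 2, 2, 1};
    // delete edge (1,2): vertices 1 and 2 keep only one neighbor with
    // core >= 2, so SD drops below 2 and both must fall to core 1
    adj[1] = {0};
    adj[2] = {0};
    for (int v : deletion_seeds({{1, 2}}, adj, core))
        std::printf("vertex %d must decrease its core number\n", v);
    return 0;
}
```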
in this section ,we have given accurate conditions for the core number changes of vertices after inserting / deleting a superior edge set .in the subsequent section [ sec : in ] and section [ sec : de ] , we will show how to utilize these theoretical results to design parallel algorithms for incremental and decremental core maintenance respectively .in this section , we present how to find a superior edge set given an arbitrary edge set .the main structure we proposed is called * superior edge set * , in this structure , each vertex has only one neighbor that has a core number no smaller than itself .as theorem[them : superioredgeset ] proved , when such a structure is inserted into or deleted from a graph , all vertices can change its core number by at most one , which is easy to update . in this subsection ,we present how to find such a * superior edge set*. since the hierarchy of k - core , the finding process can be executed in parallel .each child process searches for the * superior edge set * of a given core number and return the * k - superior edge set * to the main process .[ htb][alg : findksuperioredges ] * input * + the graph , ; + the update edge set , ; + the vertices set connected to edges in , a core number ; + let be an empty edge set return the algorithm [ alg : findksuperioredges ] finds a superior edge structure for a given in an arbitrary edge set .the judgement in line 3 ensures that there can be only one superior edge for each vertex in the final result .in this section , we present the algorithm for incremental core maintenance , whose pseudo - code is given in algorithm [ alg : superioredgeinsert ] .we consider the core number update of vertices after inserting a set of edges to graph .let denote the set of vertices connecting to edges in .the set of core numbers of vertices in is denoted as .[ htb][alg : superioredgeinsert ] * input * + the graph , ; + the inserted edge set , ; + the set of vertices connected to edges in the core number of each vertex in ; + [ htb][alg : findksuperioredges ] * input * + the graph , ; + the update edge set , ; + the set of vertices connected to edges in a core number ; + * return * [ htb][alg : insertk ] * input * + the graph , ; + the -superior edge set , ; + the core number of each vertex in * initially * , empty stack for each vertex , \gets{false } , removed[v]\gets{false } , cd[v]\gets{0} ] the algorithm is executed in iterations . basically , the algorithm split the inserted edges into multiple superior edge sets , and process the insertion of one superior edge set in one iteration . 
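algorithm [ alg : findksuperioredges ] is only given as pseudo - code above ; the following is one possible sequential rendering of the greedy selection for a single core value ( an illustration under our own naming and data - structure choices , not the authors' code ) . an update edge qualifies for the -superior edge set if its edge core number equals the given value , and it is kept only when no previously selected edge is already a superior edge for one of the same endpoints , which is the role of the test referred to as line 3 .

```cpp
#include <algorithm>
#include <cstdio>
#include <unordered_set>
#include <utility>
#include <vector>

using Edge = std::pair<int, int>;

// Select a k-superior edge set out of the pending update edges:
//  - the edge core number min(core[u], core[v]) must equal k, and
//  - at most one selected edge may be a superior edge for any given vertex
//    (the edge (u,v) is a superior edge for u when core[v] >= core[u]).
// `used` remembers vertices that already have a selected superior edge.
// Selected edges are removed from `pending`; the rest wait for later rounds.
std::vector<Edge> find_k_superior_edges(std::vector<Edge>& pending,
                                        const std::vector<int>& core, int k,
                                        std::unordered_set<int>& used) {
    std::vector<Edge> chosen, rest;
    for (const Edge& e : pending) {
        int u = e.first, v = e.second;
        if (std::min(core[u], core[v]) != k) { rest.push_back(e); continue; }
        // endpoints for which e is a superior edge (those with the minimum core)
        std::vector<int> sup;
        if (core[u] <= core[v]) sup.push_back(u);
        if (core[v] <= core[u]) sup.push_back(v);
        bool free = true;
        for (int w : sup) if (used.count(w)) free = false;
        if (free) {
            for (int w : sup) used.insert(w);
            chosen.push_back(e);
        } else {
            rest.push_back(e);          // deferred to a later iteration
        }
    }
    pending.swap(rest);
    return chosen;
}

int main() {
    // toy update: three edges touching vertex 1, all with edge core number 1
    std::vector<int> core = {1, 1, 1, 2};
    std::vector<Edge> pending = {{0, 1}, {1, 2}, {1, 3}};
    std::unordered_set<int> used;
    std::vector<Edge> s1 = find_k_superior_edges(pending, core, 1, used);
    std::printf("selected %zu edge(s), %zu left for later iterations\n",
                s1.size(), pending.size());   // 1 selected, 2 deferred
    return 0;
}
```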
in each iteration , it first uses a parallel algorithm to find a suporior edge set from the inserted edges that have not been processed so far ( line 3 ) .then a parallel algorithm is executed for each edge in parallel to identify the set of vertices whose core numbers change , and increase the core numbers of these vertices by 1 ( line 6 - 7 ) .it deserves to point out that we do not use directly algorithms handling single edge insertion / deletion as subroutine .instead , we make the edges inserted with the same core number processed together , as we find that this can efficiently avioding duplicate visiting of vertices , which further accelerates our parallel processing procedure .we next introduce the two parts in each iteration respectively .because the superior edges of vertices with different core numbers are disjoint , the -superior edge sets for different core numbers can be computed in parallel using algorithm [ alg : findksuperioredges ] . then the computed superior edge set is inserted into the graph and deleted from .the set of vertices with core number changes is also computed in parallel .specifically , for each , a child process is assigned to find the vertices whose core number changes are caused by the insertion of the computed -superior edge set , using algorithm [ alg : insertk ] .algorithm [ alg : insertk ] first computes values for each vertex in of , and then for each edge in a -superior edge set , finds the set of vertices whose core numbers change due to the insertion of . for , a _ positive _ depth - first - search ( dfs )is conducted on vertices in from the root vertex , which is one of or that has a core number and have a core number equal to , then can be either or . ] , to explore the set of vertices whose core numbers potentially change . in the algorithm ,the value of each vertex is used to evaluate the potential of a vertex to increase its core number , which records the dynamic changes of value .the intial value of is set as . for a vertex , if \le k ] is traversed in the positive dfs procedure , a _ negative _ dfs procedure initiated from will be started , to remove and update the values of other vertices with core number .after all vertices in are traversed , the vertices that are visited but not removed will increase the core numbers by 1 .* performance analysis .* we next analyze the correctness and efficiency of the proposed incremental algorithm . at first ,some notations are defined , which will be used in measuring the time complexity of the algorithm . for graph , the inserted edge set and a subset of ,let and be the set of core numbers of vertices in .for , let .as shown later , is the max times a vertex can be visited by negative dfs procedures in the algorithm execution . for , let be the set of vertices with core number , and be the neighbors of vertices in .. denoted by ] , will not increase its core number , and it will affect the potential of its neighbors to increase their core numbers . notice that this influence procedure should be spread across vertices in * * , which is done by the negative dfs procedure .when all eges in are handled , the potential vertices are visited and the ones that can not increase core numbers are removed .all above ensures the corretness of the algorithm . 
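the positive / negative dfs of algorithm [ alg : insertk ] can be emulated , for one -superior edge set , by the simplified sequential sketch below : it first collects the candidate vertices of the exk - path - tree ( vertices with the given core number reachable from the endpoints of the inserted edges through vertices of the same core number , in the graph after insertion ) , and then repeatedly discards candidates that cannot keep enough support , which plays the role of the negative dfs ; the surviving candidates increase their core numbers by one . this is our own iterative - peeling rendering of the identify - and - eliminate idea , with assumed names , not the stack - based implementation described above .

```cpp
#include <cstdio>
#include <deque>
#include <unordered_map>
#include <unordered_set>
#include <utility>
#include <vector>

using Edge = std::pair<int, int>;

// Update core numbers after inserting the edges of one k-superior edge set.
// `adj` is the adjacency list AFTER the insertion, `core` holds the old core
// numbers and is modified in place; each promoted vertex gains exactly 1.
void insert_k_superior_set(const std::vector<std::vector<int>>& adj,
                           std::vector<int>& core,
                           const std::vector<Edge>& inserted, int k) {
    // 1. candidates: core == k vertices reachable from an endpoint of an
    //    inserted edge through core-k vertices (the exk-path-tree)
    std::unordered_set<int> cand;
    std::deque<int> bfs;
    for (const Edge& e : inserted)
        for (int r : {e.first, e.second})
            if (core[r] == k && cand.insert(r).second) bfs.push_back(r);
    while (!bfs.empty()) {
        int v = bfs.front(); bfs.pop_front();
        for (int u : adj[v])
            if (core[u] == k && cand.insert(u).second) bfs.push_back(u);
    }
    // 2. peel: a candidate survives only if it keeps at least k+1 neighbors
    //    that either have core > k or are surviving candidates themselves
    //    (this plays the role of the negative DFS / removal step)
    std::unordered_map<int, int> support;
    for (int v : cand) {
        int s = 0;
        for (int u : adj[v])
            if (core[u] > k || cand.count(u)) ++s;
        support[v] = s;
    }
    std::deque<int> doomed;
    std::unordered_set<int> removed;
    for (int v : cand)
        if (support[v] <= k) { doomed.push_back(v); removed.insert(v); }
    while (!doomed.empty()) {
        int v = doomed.front(); doomed.pop_front();
        for (int u : adj[v])
            if (cand.count(u) && !removed.count(u) && --support[u] <= k) {
                removed.insert(u);
                doomed.push_back(u);
            }
    }
    // 3. surviving candidates move up by exactly one core level
    for (int v : cand) if (!removed.count(v)) core[v] += 1;
}

int main() {
    // path 0-1-2-3 (all core 1); inserting edge (0,2) turns {0,1,2} into a 2-core
    std::vector<std::vector<int>> adj = {{1, 2}, {0, 2}, {1, 3, 0}, {2}};
    std::vector<int> core = {1, 1, 1, 1};
    insert_k_superior_set(adj, core, {{0, 2}}, 1);
    for (int v = 0; v < 4; ++v)
        std::printf("core(%d) = %d\n", v, core[v]);   // prints 2 2 2 1
    return 0;
}
```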
as for the time complexity , because in each iteration , for each vertex , at least one inserted edge connected to it can be selected into the superior edge set and processed , there are at most iterations in the algorithm execution .we next consider the time used in each iteration .now consider an iteration , and we denote the graph obtained after iteration is .denote by the superior edge set computed in iteration .the computation of values for vertices in exkpt of takes time .the positive dfs visits each vertex in exkpt for one time .hence the positive dfs procedure takes time . for the negative dfs procedures ,notice that after each dfs procedure , if a vertex is visited , is decreased by 1 .hence , each vertex can be visited by at most times , since a vertex will be removed if its values is decreased to its core number .combining together , the total time for an iteration is . by above, it can be got the time complexity of the algorithm as stated in the theorem .the decremental algorithm is showed in algorithm [ alg : superioredgedelete ] .similar with the incremental algorithm , we deal with deleted edges in iterations . in each iteration, a superior edge set is found using a parallel approach .after that , the graph is updated by deleting the computed superior edge set and the -superior edge sets are assigned to child processes . in each child process , the edges in a -superior edge set is handled one by one similarly .the main difference is that we use values to evaluate if a vertex will decrease its core number , and only execute the negative dfs to remove vertices that can not be in the current -core . when deleting an edge with core( ) core( ) , it is checked if still has enough superior neighbors that can help it keep the core number .if is decreased , algorithm [ alg : deleteremove ] is executed to remove it and disseminate the influence .[ htb][alg : superioredgedelete ] * input * + the graph , ; + the deleted edge set , ; + the set of vertices connected to edges in the core number of each vertex in [ htb][alg : deletek ] * input * + the graph , ; + the -superior edge set , ; + the core number of each vertex in * initially * , empty stack for each vertex , \gets{false } , removed[v]\gets{false } , cd[v]\gets{0} ] * performance analysis .* we next analyze the correctness and efficiency of the proposed decremental algorithm . at first , some notations are defined , which will be used in measuring the time complexity of the algorithm . for graph , the deleted edge set and a subset of ,let and be the set of core numbers of vertices in . for ,let . for ,let be the set of vertices with core number and .denote by the set of edges connected to vertices in .we then define as follows , |\}. m_r=\max_{k\in k(g_r)}\{|e(v_r(k))|\}.\ ] ] , and will depict the time used in each iteration in the algorithm execution .furthermore , we define the _ maximum deleted degree _ as the maximum number of edges deleted from each vertex in , denoted as . using a similar argument as that for analyzing the incremental algorithm, we can get the following result , which states the correctness and efficiency of the decremental algorithm . 
the detailed proof is put in the appendix . [ deletecorrectness ] algorithm [ alg : superioredgedelete ] can update the core numbers of vertices after deleting an edge set in time . the proof is similar to the insertion case ; the only difference lies in the second part of each iteration . we only conduct the negative dfs to remove vertices that will decrease core numbers according to corollary [ corollary : sd ] . for each edge , if core( ) core( ) and , we start the negative dfs rooted at to traverse the edges in * * , where is or , whichever has the smaller core number . if core( ) = core( ) and , we will traverse * * to remove and disseminate the influence first ; then , if and is not removed yet , we remove and disseminate the influence through the negative dfs rooted at . during the dfs , a vertex can be visited at most times , since a visit will decrease its value ; when this value drops below its core number , will be marked as removed and will not be visited again . after all edges are handled , the vertices that are marked as visited and removed will decrease core numbers . algorithm [ alg : deletek ] first computes the values for the vertices in the exkpt of , and the time complexity is . the time needed for the negative dfs is , since we traverse each vertex in at most times to remove it . so in one iteration the time complexity of algorithm [ alg : superioredgedelete ] is . then , similar to the insertion case , the number of iterations is bounded by the maximum deleted degree . so the time complexity of the whole algorithm is . in this section , we conduct empirical studies to evaluate the performance of our proposed algorithms . the experiments use three synthetic datasets and seven real - world graphs , as shown in table [ table_graph ] . there are two main variations in our experiments , the original graph and the inserted / deleted edge set . we first evaluate the efficiency of our algorithms on real - world graphs , by changing the size and core number distribution of inserted / deleted edges . then we evaluate the scalability of our algorithms using synthetic graphs , by keeping the inserted / deleted edge set fixed and changing the sizes of the synthetic graphs . finally , we compare our algorithms with the state - of - the - art core maintenance algorithms for single edge insertion / deletion , the traversal algorithms given in , to evaluate the acceleration ratio of our parallel algorithms . the comparison experiments are conducted on four typical real - world datasets . all experiments are conducted on a linux machine with an intel xeon e5 - 2670 2.60ghz cpu and 64 gb main memory , implemented in c++ and compiled with the g++ compiler . * datasets . * we use seven real - world graphs and random graphs generated by three models . the seven real - world graphs can be downloaded from snap , including social network graphs ( livejournal , youtube , soc - slashdot ) , collaboration network graphs ( dblp , ca - astroph ) , communication network graphs ( wikitalk ) and web graphs ( web - berkstan ) . the synthetic graphs are generated by the snap system using the following three models : the erdős - rényi ( er ) graph model , which generates a random graph ; the barabási - albert ( ba ) preferential attachment model , in which each node creates preferentially attached edges ; and the r - mat ( rm ) graph model , which can generate large - scale realistic graphs similar to social networks .
for all generated graphs , the average degree is fixed to 8 , such that when the number of vertices in the generated graphs is the same , the number of edges is the same as well . fig . [ core1 ] and fig . [ core2 ] show the core number distributions of the seven real - world graphs and of the generated graphs with vertices . from fig . [ core1 ] , it can be seen that in real - world graphs , more than 60 percent of the vertices have core numbers smaller than 10 . in particular , in wt ( wiki - talk ) , more than 70% of the vertices have core number 1 . for the core distributions of the generated graphs , as shown in fig . [ core2 ] , in the ba graphs all vertices have a core number of 8 . in the er graph , the core numbers of vertices are small and the max core number is 10 , but almost all vertices have core numbers close to the max one . the rm graph is closer to real - world graphs , where most vertices have small core numbers and , as the core number increases , the percentage of vertices with that core number decreases . as shown later , the core distribution of a graph will affect the performance of our algorithms . the _ core number _ of an edge is defined as the smaller core number of its two endpoints . we use the _ average processing time per edge _ as the efficiency measurement of the algorithms , such that the efficiency of the algorithms can be compared in different cases . table [ table_graph ] : real - world graph datasets . we evaluate the impacts of three factors on the algorithm performance : the size of the inserted / deleted edge set , the core number distribution of the edges inserted / deleted , and the original graph size . the first factor affects the number of iterations needed to process the inserted / deleted edges , and the last two factors affect the processing time in each iteration . the first two evaluations are conducted on real - world graphs , and the third one on synthetic graphs . we first evaluate the impact of the number of inserted / deleted edges on the performance of our algorithms . the results for the incremental and decremental maintenance algorithms are illustrated in fig . [ size_ins ] and fig . [ size_del ] respectively . in the experiments , we randomly insert / delete % edges with respect to the original graph , where for . in fig . [ size_ins ] and fig . [ size_del ] , the x - axis represents the datasets , and the y - axis represents the average processing time per edge . it can be seen that the processing time per edge is less than in all cases , and except for wt and lj , the processing time is much smaller than . the figures show that the processing time decreases as the number of inserted / deleted edges increases , which demonstrates that our algorithms are suitable for handling large amounts of edge insertions / deletions . in this case , more edges can be selected into the superior edge set in each iteration , and hence our algorithms achieve better parallelism . furthermore , fig . [ size_ins ] and fig . [ size_del ] also illustrate that the average processing time becomes larger when the size of the original graph increases . the only exception is the wt graph : though this graph has a smaller size than the bs and lj graphs , its average processing time is larger . this is because the core distribution of wt is rather unbalanced , as shown in fig . [ core1 ] , where most vertices possess the same core number .
in this extremal case ,on the one hand , each iteration in the algorithm takes more time in processing the inserted edges , as more vertices need to be traversed , and on the other hand , the parallelism of the algorithm is very limited , as most edges are inserted to vertices with the same core .we then evaluate the impact of the core number distribution of inserted / deleted edges on the algorithm performance .the results are illustrated in fig .[ corechange ] .in particular , by the core distributions showed in fig .[ core1 ] , we choose five typical core numbers \{ } in an increasing order for each of the seven graphs . for each core number , 20% edges of that core numberare selected randomly as the update edge set . from fig .[ corechange ] , it can be seen that larger core number induces a larger average processing time .this is because , when inserting / deleting edges to vertices with larger core numbers , the degree of these vertices generated by these inserted edges is larger . in our algorithm , only one superior edge can be handled for each vertex in each iteration .hence , it takes more iterations to process the inserted / deleted edges .but on the other hand , it can be also seen that the processing time per edge does not vary significantly .we finally evaluate scalability of our algorithms in synthetic graphs , by letting the number of vertices scale from to and keeping the average degree fixed as 8 .the results are shown in fig .[ graphchange ] . in the experiments, for each graph , we randomly select 10000 edges as the update set . in fig .[ graphchange ] , the x - axis represents the number of vertices in the graph , and the y - axis represents the average processing time per edge .[ graphchange ] shows that though the graph size increases exponentially , the average processing time increases linearly .it demonstrates that our algorithms can work well in graphs with extremely large size . from the figures, it can be also seen that the processing time in the ba graph is larger than those of the other two graphs .this is because all vertices in the ba graph have the same core number 8 .this means that in our algorithm , all edges are initially handled in one process , and hence the parallelism is poor in this extreme case .this can be seen as the worst case for our algorithms . however , as shown in fig .[ core1 ] and fig .[ core2 ] , real - word graphs exhibit much better balance in core number distribution . in this section ,we evaluate the acceleration ratio of our parallel algorithms , comparing with algorithms sequentially handling edge insertions / deletions .we compare with the state - of - the - art sequential algorithm , * traversal * algorithms given in .the comparison is conducted on four typical real - world graphs , db , wt , yt and lj in table [ table_graph ] . for each graph, we randomly select 5k-20k edges as the update set .the evaluation results are illustrated in fig .[ cmp_ins ] and fig .[ cmp_del ] respectively . in the figures ,the x - axis and y - axis represent the number of inserted / deleted edges and the acceleration ratio , respectively . 
from fig . [ cmp_ins ] and fig . [ cmp_del ] , it can be seen that in almost all cases our algorithms achieve an acceleration ratio as large as times in both incremental and decremental core maintenance . the acceleration ratio increases as the number of edges inserted / deleted increases , which illustrates that our algorithms have better parallelism in scenarios of large amounts of graph changes . furthermore , it is also shown that our algorithms achieve larger acceleration ratios as the graph size increases . all evaluation results show that our algorithms exhibit good parallelism in core maintenance of dynamic graphs , compared with sequential algorithms . the experiments illustrate that our algorithms are suitable for handling large amounts of edge insertions / deletions in large - scale graphs , which is desirable in realistic implementations . in this paper , we present the first known parallel algorithms for core maintenance in dynamic graphs . our algorithms achieve significant accelerations compared with sequential processing algorithms that handle inserted / deleted edges one by one , and reduce the number of iterations for handling inserted / deleted edges from to the maximum number of edges inserted to / deleted from a vertex . experiments on real - world and synthetic graphs illustrate that our algorithms perform well in practice , especially in scenarios of large - scale graphs and large amounts of edge insertions / deletions . for future work , it deserves more effort to discover structures other than the superior edge set that can help design parallel core maintenance algorithms . furthermore , it is also meaningful to design parallel algorithms for maintaining other fundamental vertex parameters , such as betweenness centrality . the authors would like to thank ... * proof of theorem [ deletecorrectness ] . * + the deletion algorithm is executed in iterations , and each iteration includes two parts . the first part is similar to the insertion case : it computes the superior edge set in parallel , and then deletes the computed superior edge set from the graph . by lemma [ them : superioredgesetdelete ] , after deleting such a superior edge set from the graph , each vertex can decrease its core number by at most 1 . then in the second part , we identify the vertices that will decrease their core numbers by executing algorithm [ alg : deletek ] in parallel . in each child process , it is sufficient to visit the vertices in the exkpt of to find all vertices whose core numbers may decrease according to lemma [ le : exkpaths ] . for each edge , we start a negative dfs to remove vertices that are confirmed to decrease their core numbers . by lemma [ corollary : sd ] , for a vertex , if its value is at most , will decrease its core number , and this will affect the values of its neighbors . so we use a variable to represent the dynamic changes of this value . after all edges are handled , the vertices in the exkpt are visited and the ones that can not be in the current -core are marked as removed . all the above ensures the correctness of the algorithm . now consider an iteration , and denote the superior edge set computed in the current iteration as . the computation of the values for the vertices in the exkpt of takes time . for the negative dfs procedures , if a vertex is visited , its value is decreased by 1 . hence , each vertex can be visited at most times , since a vertex will be removed once its value is decreased below its core number . combining the above , the total time for an iteration is .
this paper initiates the studies of parallel algorithms for core maintenance in dynamic graphs . the core number is a fundamental index reflecting the cohesiveness of a graph , which are widely used in large - scale graph analytics . the core maintenance problem requires to update the core numbers of vertices after a set of edges and vertices are inserted into or deleted from the graph . we investigate the parallelism in the core update process when multiple edges and vertices are inserted or deleted . specifically , we discover a structure called _ superior edge set _ , the insertion or deletion of edges in which can be processed in parallel . based on the structure of superior edge set , efficient parallel algorithms are then devised for incremental and decremental core maintenance respectively . to the best of our knowledge , the proposed algorithms are the first parallel ones for the fundamental core maintenance problem . the algorithms show a significant speedup in the processing time compared with previous results that sequentially handle edge and vertex insertions / deletions . finally , extensive experiments are conducted on different types of real - world and synthetic datasets , and the results illustrate the efficiency , stability and scalability of the proposed algorithms .
the organizational principles driving the evolution and development of natural and social large - scale systems , including populations of bacteria , ant colonies , herds of predators and human societies , rely on the cooperation of a large population of unrelated agents .even if cooperation seems to be a ubiquitous property of social systems , its spontaneous emergence is still a puzzle for scientists since cooperative behaviors are constantly threatened by the natural tendency of individuals towards self - preservation and the never - ceasing competition among agents for resources and success . the preference of selfishness over cooperation is also due to the higher short - term benefits that a single ( defector ) agent obtains by taking advantage of the efforts of cooperating agents .obviously , the imitation of such a selfish ( but rational ) conduct drives the system towards a state in which the higher benefits associated to cooperation are no longer achievable , with dramatic consequences for the whole population .consequently , the relevant question to address is why cooperative behavior is so common , and which are the circumstances and the mechanisms that allow it to emerge and persist . in the last decades, the study of the elementary mechanisms fostering the emergence of cooperation in populations subjected to evolutionary dynamics has attracted a lot of interest in ecology , biology and social sciences .the problem has been tackled through the formulation of simple games that neglect the microscopic differences among distinct social and natural systems , thus providing a general framework for the analysis of evolutionary dynamics .most of the classical models studied within this framework made the simplifying assumption that social systems are characterized by homogeneous structures , in which the interaction probability is the same for any pair of agents and constant over time .however , this assumption has been proven false for real systems , as the theory of complex networks has revealed that most natural and social networks exhibit large heterogeneity and non trivial interconnection topologies .it has been also shown that the structure of a network has dramatic effects on the dynamical processes taking place on it , so that complex networks analysis has become a fundamental tool in epidemiology , computer science , neuroscience and social sciences .the study of evolutionary games on complex topologies has allowed a new way out for cooperation to survive in some paradigmatic cases such as the prisoner s dilemma or the public goods games .in particular , it has been pointed out that the complex patterns of interactions among the agents found in real social networks , such as scale - free distributions of the number of contacts per individual or the presence of tightly - knit social groups , tend to favor the emergence and persistence of cooperation .this line of research , which brings together the tools and methods from the statistical mechanics of complex networks and the classical models of evolutionary game dynamics , has effectively became a new discipline , known as evolutionary graph theory .recently , the availability of longitudinal spatio - temporal information about human interactions and social relationships has revealed that social systems are not static objects at all : contacts among individuals are usually volatile and fluctuate over time , face - to - face interactions are bursty and intermittent , agents motion exhibits long spatio - temporal correlations . 
consequently , static networks , constructed by aggregating in a single graph all the interactions observed among a group of individuals across a given period , can be only considered as simplified models of real networked systems . for this reason , time - varying graphs have been lately introduced as a more realistic framework to encode time - dependent relationships . in particular, a time - varying graph is an ordered sequence of graphs defined over a fixed number of nodes , where each graph in the sequence aggregates all the edges observed among the nodes within a certain temporal interval .the introduction of time as a new dimension of the graph gives rise to a richer structure .therefore , new metrics specifically designed to characterize the temporal properties of graph sequences have been proposed , and most of the classical metrics defined for static graphs have been extended to the time - varying case . lately , the study of dynamical processes taking place on time - evolving graphs has shown that temporal correlations and contact recurrence play a fundamental role in diverse settings such as random walks dynamics , the spreading of information and diseases and synchronization . herewe study how the level of cooperation is affected by taking into account the more realistic picture of social system provided by time - varying graphs instead of the classical ( static ) network representation of interactions .we consider a family of social dilemmas , including the hawk - dove , the stag hunt and the prisoner s dilemma games , played by agents connected through a time - evolving topology obtained from real traces of human interactions .we analyze the effect of temporal resolution and correlations on the emergence of cooperation in two paradigmatic data sets of human proximity , namely the mit reality mining and the infocom06 co - location traces .we find that the level of cooperation achievable on time - varying graphs crucially depends on the interplay between the speed at which the network changes and the typical time - scale at which agents update their strategy .in particular , cooperation is facilitated when agents keep playing the same strategy for longer intervals , while too frequent strategy updates tend to favor defectors .our results also suggest that the presence of temporal correlations in the creation and maintenance of interactions hinders cooperation , so that synthetic time - varying networks in which link persistence is broken usually exhibit a considerably higher level of cooperation .finally , we show that both the average size of the giant component and the weighted temporal clustering calculated across different consecutive time - windows are indeed good predictors of the level of cooperation attainable on time - varying graphs .we focus on the emergence of cooperation in systems whose individuals face a social dilemma between two possible strategies : _ cooperation _ ( c ) and _ defection _ ( d ) . a large class of social dilemmas can be formulated as in via a two - parameter game described by the payoff matrix : where , , and represent the payoffs corresponding to the various possible encounters between two players .namely , when the two players choose to cooperate they both receive a payoff ( for _ reward _ ) , while if they both decide to defect they get ( for _ punishment _ ) . when a cooperator faces a defector it gets the payoff ( for _ sucker _ ) while the defector gets ( for _ temptation _ ) . 
in this version of the gamethe payoffs and are the only two free parameters of the model , and their respective values induce an ordering of the four payoffs which determines the type of social dilemma .we have in fact three different scenarios .when and , defecting against a cooperator provides the largest payoff , and this corresponds to the _ hawk - dove _ game . for and , cooperating with a defector is the worst case , and we have the _ stag hunt _ game . finally , for and , when a defector plays with a cooperator we have at the same time the largest ( for the defector ) and the smallest ( for the cooperator ) payoffs , and the game corresponds to the _ prisoner s dilemma_. in this work we consider the three types of games by exploring the parameter region ] . in real social systems ,each individual has more than one social contact at the same time .this situation is usually represented by associating each player to a node of a _ static _network , with adjacency matrix , whose edges indicate pairs of individuals playing the game . in this framework ,a player selects a strategy , plays a number of games equal to the number of her neighbors and accumulates the payoffs associated to each of these interactions .obviously , the outcome of playing with a neighbor depends both on the strategiy selected by node and that of the neighbor , according to the payoff matrix in eq .[ payoffs ] .when all the individuals have played with all their neighbors in the network , they update their strategies as a result of an evolutionary process , i.e. , according to the total collected payoff .namely , each individual compares her cumulated payoff with that of one of her neighbors , say , chosen at random .the probability that agent adopts the strategy of her neighbor increases with the difference ( see methods for details ) .the games defined by the payoff matrix in eq . [ payoffs ] and using a payoff - based strategy update rule have been thoroughly investigated in static networks with different topologies .the main result is that , when the network is fixed and agent strategies are allowed to evolve over time , the level of cooperation increases with the heterogeneity of the degree distribution of the network , being scale - free networks the most paradigmatic promoters of cooperation .however , in most cases human contacts and social interactions are intrinsically dynamic and varying in time , a feature which has profound consequences on any process taking place over a social network .we explore here the role of time on the emergence of cooperation in time - varying networks .we consider two data sets describing the temporal patterns of human interactions at two different time scales .the first data set has been collected during the mit reality mining experiment , and includes information about spatial proximity of a group of students , staff , and faculty members at the massachusetts institute of technology , over a period of six months .the resulting time - dependent network has nodes and consists of a time - ordered sequence of graphs ( snapshots ) , each graph representing proximity interactions during a time interval of minutes .remember that each graph accounts for all the instantaneous interactions taking place in the temporal interval ] .the value of at which is comparable with the number of nodes , i.e. 
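before moving to the results , the two - parameter game described above can be made concrete in a few lines of code . the snippet below encodes the payoff matrix and the classification of the ( s , t ) plane into the three dilemmas ; it assumes the common normalization of reward and punishment ( r = 1 and p = 0 ) , which is our choice here since the numerical values are not restated in the text , and the struct and function names are ours .

```cpp
#include <cstdio>
#include <string>

// Two-strategy social dilemma with payoffs R (reward), P (punishment),
// S (sucker) and T (temptation). R = 1 and P = 0 is an assumed normalization.
struct Game {
    double R, P, S, T;
    // payoff received by a player using `me` against a player using `other`
    // (true = cooperate, false = defect)
    double payoff(bool me, bool other) const {
        if (me && other)  return R;   // C vs C
        if (me && !other) return S;   // C vs D
        if (!me && other) return T;   // D vs C
        return P;                     // D vs D
    }
    // region of the (S,T) plane, following the ordering described in the text
    std::string dilemma() const {
        if (T > R && S > P) return "hawk-dove";
        if (T < R && S < P) return "stag hunt";
        if (T > R && S < P) return "prisoner's dilemma";
        return "harmony (no dilemma)";
    }
};

int main() {
    Game games[] = {{1, 0, 0.5, 1.5}, {1, 0, -0.5, 0.5}, {1, 0, -0.5, 1.5}};
    for (const Game& g : games)
        std::printf("S=%+.1f T=%.1f -> %s\n", g.S, g.T, g.dilemma().c_str());
    return 0;
}
```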
when , coincides with the value of at which the cooperation diagram becomes indistinguishable from that obtained for the aggregate network , , for both the original and the reshuffled sequences of snapshots .this result confirms that the size of the giant connected component , of the graph corresponding to a given aggregation interval , plays a central role in determining the level of cooperation sustainable by the system , in agreement with the experiments discussed in for the case of static complex networks .it is also interesting to investigate the role of edge correlations on the observed cooperation level . to this aim, we analyze the temporal clustering ( see methods ) , which captures the average tendency of edges to persist over time . in fig .[ fig5 ] we plot the evolution of the temporal clustering as a function of the strategy update interval .the results reveal clearly that , for small values , the persistence of ties in the two original data sets is larger than in the randomized and the activity - driven graphs . in this regimethe average giant component is small in both the real and randomized cases , thus pointing out that the temporally connected components are composed of small clusters .however , the larger temporal clustering observed for small in the original data implies that the node composition of these small components changes very slowly compared to the faster mixing observed in the random data sets .these are then the two ingredients depressing the cooperation levels in the original data as compared to the random cases : the size of the giant component and how much such components change even at fixed size . as further confirmation, we notice that link persistence grows in a similar way as increases in randomized and activity - driven networks .this growth points out that the randomization of snapshots in one null model and the redistribution of links in the other one make the ties more stable as increases .instead , the results found on the original data sets suggest ( in particular for the case of infocom ) that ties are rather volatile , being active for a number of consecutive snapshots and then inactive for a large time interval . in randomized and activity - driven graphs the stabilization of ties together with the fast increase in the size of the giant componentmake the resulting time - varying graph much more similar to static networks , thus improving the survival of cooperation , as compared to the volatile and strongly fragmented scenario of the real time - varying graphs .although the impact of network topology on the onset and persistence of cooperation has been extensively studied in the last years , the recent availability of data sets with time - resolved information about social interactions allows a deeper investigation of the impact on evolutionary dynamics of time - evolving social structures .here we have addressed two crucial questions : does the interplay between graph evolution and strategy update affect the classical results about the enhancement of cooperation driven by network reciprocity ? 
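the two structural quantities used above as predictors of cooperation — the size of the largest connected component of the graph aggregated over a window of snapshots , and the tendency of edges to persist from one window to the next — can be estimated with a short routine . the sketch below is our own illustration ( the snapshot representation , the window length and the function names are assumptions ) : it aggregates a sequence of edge - list snapshots over windows of dt steps , measures the largest component with a union - find structure , and uses the fraction of edges shared by consecutive aggregated windows as a simple proxy for the link persistence captured by the temporal clustering defined in the methods .

```cpp
#include <algorithm>
#include <cstdio>
#include <numeric>
#include <set>
#include <utility>
#include <vector>

using Edge = std::pair<int, int>;       // undirected edge, stored as (min,max)
using Snapshot = std::vector<Edge>;     // edges active in one time step

// union-find used to extract the largest connected component of a window
struct DSU {
    std::vector<int> p;
    explicit DSU(int n) : p(n) { std::iota(p.begin(), p.end(), 0); }
    int find(int x) { return p[x] == x ? x : p[x] = find(p[x]); }
    void unite(int a, int b) { p[find(a)] = find(b); }
};

// aggregate snapshots [start, start+dt) into a single edge set
std::set<Edge> aggregate(const std::vector<Snapshot>& seq, size_t start, size_t dt) {
    std::set<Edge> agg;
    for (size_t t = start; t < start + dt && t < seq.size(); ++t)
        for (Edge e : seq[t])
            agg.insert({std::min(e.first, e.second), std::max(e.first, e.second)});
    return agg;
}

int largest_component(const std::set<Edge>& edges, int n) {
    DSU dsu(n);
    for (const Edge& e : edges) dsu.unite(e.first, e.second);
    std::vector<int> size(n, 0);
    int best = 0;
    for (int v = 0; v < n; ++v) best = std::max(best, ++size[dsu.find(v)]);
    return best;
}

// fraction of the edges of window a that are still present in window b
// (a simple proxy for link persistence / temporal clustering)
double edge_persistence(const std::set<Edge>& a, const std::set<Edge>& b) {
    if (a.empty()) return 0.0;
    int shared = 0;
    for (const Edge& e : a) shared += b.count(e);
    return static_cast<double>(shared) / a.size();
}

int main() {
    int n = 5;
    std::vector<Snapshot> seq = {
        {{0, 1}}, {{1, 2}}, {{0, 1}, {3, 4}}, {{1, 2}, {3, 4}}};
    size_t dt = 2;
    for (size_t w = 0; w + dt <= seq.size(); w += dt) {
        std::set<Edge> agg = aggregate(seq, w, dt);
        std::printf("window starting at %zu: giant component size = %d\n",
                    w, largest_component(agg, n));
    }
    std::printf("persistence between the two windows = %.2f\n",
                edge_persistence(aggregate(seq, 0, dt), aggregate(seq, dt, dt)));
    return 0;
}
```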
and what is the role of the time - correlations of temporal networks in the evolution of cooperation ?the results of the simulations confirm that , for all the four social dilemmas studied in this work , cooperation is seriously hindered when ( i ) agent strategy is updated too frequently with respect to the typical time - scale of agent interaction and ( ii ) real - world timecorrelations are present .this phenomenon is a consequence of the relatively small size of the giant component of the graphs obtained at small aggregation intervals .however , when the temporal sequence of social contacts is replaced by time - varying networks preserving the original activity attributes of links or nodes but breaking the original temporal correlations , the structural patterns of the network at a given time - scale of strategy update changes dramatically from those observed in real data . as a consequence , the effects of temporal resolution over cooperation are smoothed and , by breaking the real temporal correlations of social contacts , cooperation can emerge and persist also for moderately small strategy update frequencies .this result highlights that both the interplay of strategy update and graph evolution and the presence of temporal correlations , such as edge persistence and recurrence , seem to have fundamental effects on the emergence of cooperation .our findings suggest that the frequency at which the connectivity of a given system are sampled has to be carefully chosen , according with the typical time - scale of the social interaction dynamics .for instance , as stock brokers might decide to change strategy after just a couple of interactions , other processes like trust formation in business or collaboration networks are likely to be better described as the result of multiple subsequent interactions .these conclusions are also supported by the results of a recent paper of ribeiro _ et al . _ in which the effects of temporal aggregation interval in the behavior of random walks are studied .also , the fundamental role played by the real - data time correlations in dynamical processes on the graph calls for more models of temporal networks and for a better understanding of their nature . in a nutshell ,the arguments indicating network reciprocity as the social promoter of cooperator have to be revisited when considering time - varying graphs . 
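the imitation rule used throughout these simulations is the fermi - type update formally defined in the methods section below ; a minimal sketch is given here , written with the standard fermi choice function and an inverse - temperature parameter beta . this parametrization , as well as the numerical values of beta and of the payoffs , are our own illustrative choices , consistent with the two limits described in the methods ( probability 1/2 when the smoothness parameter makes the payoff difference irrelevant , and a step function of the payoff difference in the deterministic limit ) .

```cpp
#include <cmath>
#include <cstdio>
#include <random>

// Fermi strategy update: player i imitates the strategy of a randomly chosen
// neighbour j with probability 1 / (1 + exp(-beta * (Pi_j - Pi_i))), where
// Pi_* are the accumulated payoffs and beta plays the role of a selection
// intensity (an assumed parametrization of the rule in the methods section).
bool imitates(double payoff_i, double payoff_j, double beta, std::mt19937& rng) {
    double p = 1.0 / (1.0 + std::exp(-beta * (payoff_j - payoff_i)));
    std::uniform_real_distribution<double> coin(0.0, 1.0);
    return coin(rng) < p;
}

int main() {
    std::mt19937 rng(42);
    double beta = 10.0;                 // illustrative selection intensity
    int copies = 0, trials = 10000;
    for (int t = 0; t < trials; ++t)
        copies += imitates(/*payoff_i=*/1.0, /*payoff_j=*/1.5, beta, rng);
    std::printf("imitation frequency = %.3f (expected ~ %.3f)\n",
                copies / static_cast<double>(trials),
                1.0 / (1.0 + std::exp(-beta * 0.5)));
    return 0;
}
```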
in particular, one should always bear in mind that both the over- and the under - sampling of time - evolving social graph and the use of the finest / coarsest temporal resolution could substantially bias the results of a game - theoretic model played on the corresponding network .these results pave the way to a more detailed investigation of social dilemmas in systems where not only structural but also temporal correlations are incorporated in the interaction maps .the data set describes proximity interactions collected through the use of bluetooth - enabled phones .the phones were distributed to a group of 100 users , composed by 75 mit media laboratory students and 25 faculty members .each device had a unique tag and was able to detect the presence and identity of other devices within a range of 5 - 10 meters .the interactions , intended as proximity of devices , were recorded over a period of about six months .in addition to the interaction data , the original dataset included also information regarding call logs , other bluetooth devices within detection range , the cell tower to which the phones was connected and information about phone usage and status . here , we consider only the contact network data , ignoring any other contextual metadata .the resulting time - varying network is an ordered sequence of 41291 graphs , each having n=100 nodes .each graph corresponds to a proximity scan taken every 5 minutes .an edge between two nodes indicates that the two corresponding devices were within detection range of each other during that interval .we refer to such links as _active_. during the entire recorded period ,2114 different edges have been detected as active , at least once .this corresponds to the aggregate graph having a large average node degree .however , this is an artefact of the aggregation ; the single snapshots tend to be very sparse , usually containing between 100 and 200 active edges . the data set consists of proximity measurements collected during the ieee infocom06 conference held in a hotel in barcelona in 2006 .a sample of participants from a range of different companies and institutions were chosen and equipped with a portable bluetooth device , intel imote , able to detect similar devices nearby .area `` inquiries '' were performed by the devices every 2 minutes , with a random delay or anticipation of 20 seconds .the delay / anticipation mechanism was implemented in order to avoid synchronous measurements , because , while actively sweeping the area , devices could not be detected by other devices . a total number of 2730 distinct edgeswere recorded as active at least once in the observation interval , while the number of edges active at a given time is significantly lower , varying between 0 and 200 , depending on the time of the day .the fermi rule consists in the following updating strategy .a player chooses one of her neighbors at random and copies the strategy of with a probability : where is the difference between the payoffs of the two players , and is a parameter controlling the smoothness of the transition from for small values of , to for large values of .notice that for we obtain regardless of the value of , which effectively corresponds to a random strategy update . on the other hand , when then , being the heaviside step function . in this limit, the strategy update is driven only by the ordering of the payoff values . the activity - driven model , introduced in ref . 
, is a simple model to generate time - varying graphs starting from the empirical observation of the activity of each node , in terms of number of contacts established per unit time .given a characteristic time - window , one measures the activity potential of each agent , defined as the total number of interactions ( edges ) established by in a time - window of length divided by the total number of interactions established on average by all agents in the same time interval .then , each agent is assigned an activity , which is the probability per unit time to create a new connection or contact with any another agent .the coefficient is a rescaling factor , whose value is appropriately set in order to ensure that the total number of active nodes per unit time in the system is equal to , where is the total number of agents .notice that effectively determines the average number of connections in a temporal snapshot whose length corresponds to the resolution of the original data set .the model works as follows . at each time the graph starts with disconnected nodes .then , each node becomes active with probability and connects to other randomly selected nodes . at the following time - step ,all the connections in are deleted , and a new snapshot is sampled .notice that time - varying graphs constructed through the activity - driven model preserve the average degree of nodes in each snapshot , but impose that connections have , on average , a duration equal to , effectively washing out any temporal correlation among edges .several metrics have been lately proposed to measure the tendency of the edges of a time - varying graph to persist over time .one of the most widely used is the unweighted temporal clustering , introduced in ref . , which for a node of a time - varying graph is defined as : where are the elements the adjacency matrix of the time - varying graph at snapshot , is the total number of edges incident on node at snapshot and is the duration of the whole observation interval .notice that takes values in $ ] . in general ,a higher value of is obtained when the interactions of node persist longer in time , while tends to zero if the interactions of are highly volatile .if each snapshot of the time - varying graph is a weighted network , where the weight represents the strength if the interaction between node and node at time , we can define a weighted version of the temporal clustering coefficient as follows : finally , if we focus more on the persistence of interaction strength across subsequent network snapshots , we can define the extremal temporal clustering as : where by considering the minimum between and one can distinguish between persistent interactions having constant strength over time and those interactions having more volatile strength . as in our case social interactions are seen to be highly volatile in real data sets , the extremal version of the temporal clustering seems to be the best choice to unveil the persistence of social ties at short time scales .this work was supported by the eu lasagne project , contract no.318132 ( strep ) , by the eu multiplex project , contract no.317532 ( strep ) , by the spanish mineco under projects mtm2009 - 13848 and fis2011 - 25167 ( co - financed by feder funds ) , by the comunidad de aragn ( grupo fenol ) and by the italian to61 infn project .is supported by spanish mineco through the ramn y cajal program .is supported by the fet project topdrim " ( ist-318121 ) .is supported by the james s. mcdonnell foundation .l. isella , m. 
romano , a. barrat , c. cattuto , v. colizza , _ et al . _ , `` close encounters in a pediatric ward : measuring face - to - face proximity and mixing patterns with wearable sensors , '' plos one * 6 * , e17144 ( 2011 ) . j. tang , m. musolesi , c. mascolo , v. latora , and v. nicosia , `` analysing information flows and key mediators through temporal centrality metrics , '' proceedings of the 3rd acm workshop on social network systems ( sns10 ) ( 2010 ) .
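as a concrete illustration of the two ingredients described above , the fermi imitation rule and the activity - driven construction of synthetic temporal graphs , the following minimal python sketch implements both . it is an illustration only : the function names , the sign convention inside the exponential and every parameter value are our own choices and are not taken from the original model specification .

import math
import random

def fermi_probability(payoff_i, payoff_j, beta):
    # probability that player i copies the strategy of a randomly chosen
    # neighbour j ; delta is the payoff difference ( neighbour minus focal
    # player ) . beta -> 0 gives 1/2 ( random update ) , beta -> infinity
    # gives the heaviside step of delta , as described in the text .
    delta = payoff_j - payoff_i
    return 1.0 / (1.0 + math.exp(-beta * delta))

def activity_driven_snapshot(activities, eta, m=1, rng=random):
    # one snapshot of the activity - driven model : node i fires with
    # probability eta * a_i and draws m edges to uniformly chosen distinct
    # partners ; every edge is discarded before the next snapshot is drawn .
    n = len(activities)
    edges = set()
    for i, a in enumerate(activities):
        if rng.random() < eta * a:
            for j in rng.sample([k for k in range(n) if k != i], m):
                edges.add((min(i, j), max(i, j)))
    return edges

# example : weak selection keeps imitation close to a coin flip , strong
# selection makes the poorly performing player imitate almost surely .
print(fermi_probability(1.0, 2.0, 0.01))   # ~0.50
print(fermi_probability(1.0, 2.0, 10.0))   # ~1.00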
cooperation among unrelated individuals is frequently observed in social groups when their members join efforts and resources to obtain a shared benefit that is unachievable by any single one of them . however , understanding why cooperation arises despite the natural tendency of individuals towards selfish behavior is still an open problem and represents one of the most fascinating challenges in evolutionary dynamics . very recently , the structural characterization of the networks upon which social interactions take place has shed some light on the mechanisms by which cooperative behavior emerges and eventually overcomes the individual temptation to defect . in particular , it has been found that the heterogeneity in the number of social ties and the presence of tightly - knit communities lead to a significant increase in cooperation as compared with the unstructured and homogeneous connection patterns considered in classical evolutionary dynamics . here we investigate the role of the dynamics of social ties in the emergence of cooperation in a family of social dilemmas . social interactions are in fact intrinsically dynamic , fluctuating and intermittent over time , and can be represented by time - varying networks , that is , graphs where connections between nodes appear , disappear , or are rewired over time . by considering two experimental data sets of human interactions with detailed time information , we show that the temporal dynamics of social ties has a dramatic impact on the evolution of cooperation : the dynamics of pairwise interactions favor selfish behavior . _ popular abstract.- _ * why do animals ( including humans ) cooperate even when selfish actions provide larger benefits ? this question has challenged evolutionary theory for decades . the success of cooperation is essential to humankind and ubiquitous , no matter the cultural and religious traits of particular populations . scientists have pointed out in the past a series of possible mechanisms that could favor cooperation between humans , for example the peculiar way humans establish social relations , which takes the form of a complex network . basically , the structural attributes of these networks , such as the presence of a few individuals with a large number of social ties , promote cooperation and discourage the imitation of free - riders . however , social interactions are inherently dynamic and change over time , an issue which has usually been disregarded in the study of cooperation on networks . we study evolutionary models on time - varying graphs and show that the volatility of social relations tends to decrease cooperation with respect to static graphs . our results thus point out that the time - varying nature of social ties cannot be neglected and that the relative speed of graph evolution and strategy update is a crucial ingredient governing the evolutionary dynamics of social networks , having as much influence as the structural organization . *
modelling nonparametric time series has received increasing interest among scholars for a few decades , see , for example , . in classical time series analysis , the stationarity of time series is a fundamental assumption .yet , it may be violated on some occasions in such the fields as finance , sound analysis and neuroscience , especially when the time span of observations tends to infinity .so , it is necessary to generalize the stationary process to the nonstationary process .priestley ( 1965 ) first introduced a stochastic process with evolutionary spectra , which locally displays an approximately stationary behavior .but in his framework , it is impossible to establish an asymptotic statistical inference .dahlhaus ( 1997 ) proposed a new generalization of stationarity , called locally stationary process , and investigated its statistical inference .more details can refer to .in essence , the locally stationary process is locally close to a stationary process over short periods of time , but its second order characteristic is gradually changing as time evolves . a formal description of locally stationary process can refer to assumption ( a1 ) in the appendix . in parametric context , the statistical inference of locally stationary process has been studied extensively by . in nonparametric context , vogt ( 2012 ) considered the time - varying nonlinear autoregressive ( tvnar ) models including its general form and estimated the time - varying multivariate regression function using the kernel - type method .however , it still suffers the `` curse of dimensionality '' problem when the dimension of covariates is high . in order to solve the aforementioned problem ,a familiar way is to adopt the additive nonparametric regression model suggested by .it not only remedies the `` curse of dimensionality '' , but also has an independent interest in practical applications due to its flexibility and interpretability .there exists abound research findings about the additive regression model in the literature . in the case of iid observations ,the additive nonparametric component functions can be estimated by kernel - based methods : the classic backfitting estimators of , the marginal integration estimators of , the smoothing backfitting estimators of , and two - stage estimators of . in the stationary time series context , there are kernel estimators via marginal integration of , spline estimators of , and the spline - backfitted kernel ( sbk ) estimators which borrow the strength of both kernel estimation and spline estimation , see .vogt considered the locally stationary additive model and proposed smooth backfitting method to estimate bivariate additive component functions . on the other hand ,the varying - coefficient model is a natural extension of linear model which allows the coefficients to change over certain common covariates instead of being invariant .this model succeeds to relax the parameter limitation of linear model and may have practical as well as theoretical significance . 
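for orientation , the three model classes involved in this discussion can be written side by side in a schematic notation of our own ( the symbols below are not taken from the original displays ) ; the third line anticipates the varying - coefficient additive form introduced in the next paragraph .

\[
\begin{aligned}
\text{additive : } \qquad & y_{t} = \sum_{k=1}^{p} f_{k}\big(x_{t}^{(k)}\big) + \varepsilon_{t},\\
\text{varying - coefficient : } \qquad & y_{t} = \sum_{k=1}^{p} \beta_{k}(t/T)\, x_{t}^{(k)} + \varepsilon_{t},\\
\text{varying - coefficient additive : } \qquad & y_{t} = \sum_{k=1}^{p} \beta_{k}(t/T)\, f_{k}\big(x_{t}^{(k)}\big) + \varepsilon_{t}.
\end{aligned}
\]

in this notation the varying - coefficient additive form reduces to an additive model when all the coefficient functions are constant , and to a varying - coefficient model when all the component functions are linear ; this is precisely the degeneracy that the identification procedure developed later is designed to detect .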
for this model ,there are three types of estimation methods : local polynomial smoothing method , polynomial spline estimation method and smoothing spline method .zhang and wang proposed a so - called varying - coefficient additive model to catch the evolutionary nature of time - varying regression function in the analysis of functional data .their model assumes the evolutionary regression function has the form which is more flexible in the sense that it covers both varying - coefficient model and additive model as special cases . specifically speaking, it reduces to an additive model when are all constants , and a varying - coefficient model if are all linear functions . extracting the special meaning of time in functional data analysis, one can generalize time to some other common covariates . in this paper , we model locally stationary time series . to concreteness ,let be a length- realization of dimension locally stationary time series , and assume that the data is generated by varying - coefficient additive model as follows where s are i.i.d , is the varying - coefficient component function , is the additive component function and is a bivariate nonparametric function , which allows the heteroscedasticity case . without loss of generality , we require ^{p}}, ] such that for functional data , zhang and wang proposed a two - step spline estimation procedure . in the first step , sorting the data within each subject in ascending order of time and averaging the response for each subject using trapezoidal rule to fit an additive model , then , in the second step , fitting a varying - coefficient model by substituting the estimated additive function into varying - coefficient additive model .his estimation methodology works since there are dense observation for every subject and covariates is independent of observation time within subject. however , for some other practical problems , such as longitudinal data with finite observertion time , time series data , such an assumption fails . to circumvent this problem , under mild assumptions ,we derive an initial estimation of additive component by employing a segmentation technique. then we can fit a varying - coefficient model and an additive model , respectively , to get spline estimators of varying - coefficient function and additive function . as expected , we show that the proposed estimators of and are consistent and present the corresponding rate of convergence . on the other hand, the product term in may simply reduce to a varying - coefficient term or an additive term in the case of being linear function or being constant .so , in the parsimony sense , identifying additive terms and varying - coefficient terms in are of interest . to this end, we propose a two - stage penalized least squares estimator based on scad penalty function , and , furthermore , show that our model identification strategy is consistent , i.e. 
, the additive term and the varying - coefficient term are correctly selected with probability approaching to 1 .meantime , rate of convergence of penalized spline estimator of each component function achieves the rate of the spline estimator of univariate nonparametric function .the rest of this paper is organized as follows .we propose a three - step spline estimation method in section 2 and a two - stage model identification procedure in section 3 .section 4 describes the smoothing parameter selection strategies .section 5 establishes the asymptotic properties of the proposed model estimation and identification methods .simulation studies are illustrated in section 6 .the main technical proofs are presented in the appendix .lemmas and other similar proofs are given in the supplementary .in this section , we propose a three - step spline estimation method for the proposed locally stationary varying - coefficient additive model .* step i : segment the rescaled time into several groups , and approximate each varying - coefficient function within the same group by a local constant . thus , model can be approximated by an additive model , and a scaled - version of the additive component functions can be obtained using spline - based method . * step ii : substitute the initial estimates of the scaled additive component functions into model to yield an approximated varying - coefficient model , and then obtain spline estimators of varying - coefficient component functions * step iii : plug - in spline estimators of varying - coefficient component functions into model to yield an approximated additive model , and then update the spline estimation of additive component functions .we first present with some notations before detailing our proposed estimation method .let be order b - spline basis with interior knots and is the number of b - spline functions estimating additive component function similarly , we denote as order b - spline basis with interior knots , and is the number of b - spline functions estimating varying - coefficient component function here , ` ' and ` ' in the subscript of b - spline functions and knots number mean that is for the additive component function and varying - coefficient function , respectively . denote and the nice properties of scaled b - spline basis are listed in the appendix . +* step i : initial estimators of scaled additive component functions * we segment the sample in ascending order of time into groups with observations in each group , where hinges on the sample size and . then approximate in the group , i.e. by a constant , where is some constant dependent on and such that for the sake of convenient presentation , we suppress the triangular array index in locally stationary time series , and represent time index in the group as for given then one can approximate model as where if are all known , one can easily construct the spline estimator of suppose that minimizes ^ 2,\ ] ] then where however , s are unknown. we instead rewrite as an additive model , where for each given let minimize ^ 2.\ ] ] by and , it is easy to see that which implies in a word , although the additive component function in can not be estimated directly , the scaled additive component function with is estimable using the proposed segmentation techniques . + * step ii : spline estimators of varying - coefficient component functions * define and , . by , substituting into yields where . 
] model can be viewed as a varying - coefficient model .suppose that minimizes ^ 2,\ ] ] then , spline estimators of additive component functions in are given by * remark 1 : * the spline estimators and can be updated by iterating step ii and step iii .however , one step estimation is enough and there is no great improvement through iteration procedure .* remark 2 : * one may employ different b - spline basis functions in step i and step iii for estimating the additive component functions .yet , we do nt distinguish them in symbols for the sake of simplicity .the proposed varying - coefficient additive model is more general and flexible than either varying - coefficient model or additive model , and covers them as special cases .but , in practice , a parsimonious model is always one s preference when there exist several potential options .hence , it is of great interest to explore whether the varying - coefficient component function is truly varying and whether the additive component function degenerates to simply linear function . in this paper, we decompose varying - coefficient additive terms into additive terms and varying - coefficient terms , and , motivated by , propose a two - stage penalized least squares ( pls ) model identification procedure to identify the term that is constant ( ) or / and is linear ( ) .* stage i : plug - in the spline estimators of additive component functions obtained in the estimation stage into model , and penalize to identify linear additive terms . *stage ii : given the penalized spline estimators of additive component functions obtained in stage i of the model identification process , penalize to select constant varying - coefficient terms .we first introduce some notations .let and denote and * stage i : identifying linear additive terms * by substituting the additive component functions by their spline estimates obtained in the estimation stage , model becomes where . ] let with , and assume is given by ^ 2\\ & + t\sum_{k=1}^{p}p_{\mu_{t}}\left(k_{a}^{-3/2}\parallel\beta''_{k}\parallel_{l_{2}}\right ) , \end{aligned}\ ] ] where and is a penalty function with a tuning parameter therefore , the penalized spline estimators of are given by ,\ k=1,\cdots , p.\ ] ]in this section , we discuss various implementation issues for the proposed model estimation and identification procedures .we predetermine the degree of polynomial spline .usual options are 0 , 1 or 2 , that is to choose linear , quadratic or cubic spline functions .it is known that , when sufficient number of knots is used , the spline approximation method is quite stable .therefore , we suggest to use the same number of interior knots for all component functions and order b - spline basis functions in the step estimation to facilitate the computation . by experience, it is reasonable to choose in addition , in order to solve the least squares problem in each group in step i estimation , we require under this constraint , we choose the optimal and by bic where ^ 2 ] , is the number of b - spline basis functions adopted in the second step estimation and is the number of linear additive terms , i.e. 
, is sufficiently small , say , no larger than second , to select optimal , we define where ^{2} ] as =\{m|m^{\left(l\right)}\in c[0,1]\} ] + the necessary conditions to prove asymptotic properties are listed as below .* the process is stationary locally in time , that is , for each rescaled time point , ] for some and independent of and * at each rescaled time point , ] ^{p}}f_{u}\left(\mathbf{x}\right)\leq\sup_{\mathbf{x}\in[0,1]^{p}}f_{u}\left(\mathbf{x}\right)\leq c_{f } \\text{uniformly on}\ u\in[0,1].\ ] ] meantime , has density function with respective to certain measure .* iid , and given * there exists positive constants and such that for all where is the -mixing coefficients for process and defined as latexmath:[\[\alpha\left(k\right)=\sup_{b\in\sigma\{\mathcal{z}_s , s\leq t\ } , c\in \sigma\{\mathcal{z}_s , s\ge t+k\ } } * the conditional standard deviation function is bounded below and above uniformly on , ] and ,c_{1,k}) ] and ,c_{2,k}) ] therefore , let then on the one hand , ^ 2 \asymp|\mathbf{\tilde{\alpha}}^{\left(s\right)}-\mathbf{\breve{\alpha}}^{\left(s\right)}|^2 ] however , ^{2 } \preceq & \sum_{k=1}^{p}\sum_{l=1}^{j_{k , a}}e\big[\psi_{kl}(x_{t_{sj},t}^{\left(k\right ) } ) -\psi_{kl}(x_{t_{sj}}^{\left(k\right)}\left(t_{sj}/t\right))\big]^{2}\\ & + \sum_{k=1}^{p}\sum_{l=1}^{j_{k , a}}e[\psi_{kl}(x_{t_{sj}}^{\left(k\right)}\left(t_{sj}/t\right))]^{2}. \end{aligned}\ ] ] by assumption ( a2 ) and the properties of b - spline , ^ 2 \asymp\sum_{k=1}^{p}j_{k , a}=o\left(k_{a}\right ) . \ ] ] on the other hand , ^{2 } = j_{k , a}e[b_{kl , a}(x_{t , t}^{\left(k\right)})-b_{kl , a}(x_{t}^{\left(k\right)}\left(t / t\right))]^{2}\\ \leq & cj_{k , a}e[|\mathbf{x}_{t , t}-\mathbf{x}_{t}\left(t / t\right)|^2 ] \leq cj_{k , a}\frac{1}{t^{2}}e[u_{t , t}^{2}\left(t / t\right ) ] .\end{aligned}\ ] ] therefore , ^{2 } = o\left(k_{a}i_{t}\right)+\frac{i_{t}}{t^{2}}\sum_{k=1}^{p}j_{k , a}^{2}. \end{aligned}\ ] ] and in turn which completes the proof of and hence the first half of theorem 1 .the rest is direct from lemma [ eigenvalue ] . + * proof for theorem [ alpha ] .* let where denote denote where and then furthermore , assuming that with is given by and for by cauchy - schwartz inequality and identifiable condition we get * approximate error term : * we show the rate of approximate error term as follows by the definition of there exists such that satisfying }|\breve{\alpha}_{k}\left(u\right)-\alpha_{k}\left(u\right)|=o\left(\rho_{c}\right)\ ] ] for thus note that and the normal equation yields according to proposition 1 and boundness of ^{2 } = o_{p}\left(\frac{\rho_{a}^{2}}{n_{t}}+\frac{k_{a}}{t}\right ) .\end{aligned}\ ] ] on the other hand , from assumption ( a7 ) ^{2 } + \frac{1}{t}\sum_{t=1}^{t}\sum_{k=1}^{p}\hat{\gamma}_{k}^{2}(x_{t , t}^{\left(k\right ) } ) [ \delta_{k}\left(t / t\right)-\breve{\delta}_{k}\left(t / t\right)]^{2}\\ \preceq&\rho_{c}^{2}+\rho_{c}^{2}\frac{1}{t}\sum_{k=1}^{p}\sum_{t=1}^{t}\hat{\gamma}_{k}^{2}(x_{t , t}^{\left(k\right)})\\ = & o\left(\rho_{c}^{2}\right ) .\end{aligned}\ ] ] * stochastic error terms : * we next show the following rate of stochastic error term it is easy to see based on assumption ( a5 ) , it is sufficient to bound let and then obviously , under assumption ( a3 ) , . 
] which yields similarly , \\ = & \frac{1}{t^{2}}\sum_{t=1}^{t}\delta\left(t / t\right)^{\tau } e[\mathbf{\psi}\left(\mathbf{x}_{t , t}\right)\mathbf{\psi}\left(\mathbf{x}_{t , t}\right)^{\tau}]\delta\left(t / t\right ) .\end{aligned}\ ] ] note that = & e\big[\sum_{l=1}^{j_{k , a}}\psi_{kl}^{2}(x_{t , t}^{\left(k\right)})\big]\leq\int \big\{\sum_{l=1}^{j_{k , a}}\psi_{kl}\left(z\right)\big\}^{2}f_{x_{t , t}}^{\left(k\right ) } \left(z\right)\mathrm{d}z\\=&j_{k , a } , \end{aligned}\ ] ] where is the marginal density of -th component of , =diag\left(j_{k , a}\right)_{k=1}^{p}=k_{a}i_{p}\ ] ] and which completes the proof of . + * proof for theorem [ cadditive ] : * * without loss of generality , we assume the true model is let and as the collection of all functions having form it is sufficient to show for any such that and for any where such that + for the sake of convenient presentation , we also denote as if let and then furthermore , where is the inner product of vector and + let we have and where the last step holds since assumption ( a7 ) and theorem [ beta ] .+ therefore , in which lies between 0 and and the proof of part(i ) is completed since and * according to theorem 6 ( p149 ) of , under assumption ( a6 ) , there exists and such that let and with next , we will show that for any given there is a sufficiently large such that according to lemma 3 of , has eigenvalues bounded away from 0 and with probability tending to one as therefore , where we use the fact that and for notice that the first term we also may choose the sufficiently large such that the third term can be dominated by the first term uniformly on finally , we observe that the -th element of is given by which is bounded by thus , the second is bounded by which is also dominated by the first term . in combination with the nonnegativity of the first term , we show , which implies with probability at least that there exists a local minimizer in the ball i.e. , again by the property of b - spline , we have that the proof is finished in combination with the proofs for theorem [ cvarying ] is very similar to theorem [ cadditive ] , and thus omitted here .9 cai , z. , fan , j. and yao , q. ( 2000 ) .functional - coefficient regression models for nonlinear time series ._ j. amer .assoc . _ * 95 * , 941956 .cai , z. and xu , x. ( 2008 ) .nonparametric quantile estimations for dynamic smooth coefficient models ._ j. amer .assoc . _ * 103 * , 15951607 .chiang , c .- t . , rice , j. a. andwu , c. o. ( 2001 ) .smoothing spline estimation for varying coefficient models with repeatedly measured dependent variables ._ j. amer .assoc . _ * 96 * , 605619 .mr1946428 dahlhaus , r. ( 1996a ) .asymptotic statistical inference for nonstationary processes with evolutionary spectra , _ in : athens conference on applied probability and time series analysis , springer ._ 145259 .dahlhaus , r. ( 1996b ) . on the kullback - leibler information divergence of locally stationary processes ._ stochastic process .appl . _ * 62 * , 139168 .dahlhaus , r. ( 1997 ) .fitting time series models to nonstationary processes .statist . _* 25 * , 137 .dahlhaus , r. , neumann , m. h. , and von sachs .nonlinear wavelet estimation of time - varying autoregressive processes . _ bernoulli . _ * 5 * , 873906 .dahlhaus , r. and rao , s. s. ( 2006 ) .statistical inference for time - varying arch processes .statist . _* 34 * , 10751114 .de boor , c. ( 1978 ) ._ a practical guide to splines_. springer - verlag , new york .fan , j. , hrdle , w. and mammen , e. 
( 1998 ) .direct estimation of low dimensional components in additive models .statist . _* 26 * , 943971 .fan , j. and li , r. ( 2001 ) .variable selection via nonconcave penalized likelihood and its oracle properties ._ j. amer .assoc . _ * 96 * , 13481360 .fan , j. , yao , q. , and cai , z. ( 2002 ) .adaptive varying - coefficient linear models ._ j. r. stat .methodol . _ * 65 * , 5780. fan , j. and zhang , w. ( 1999 ) .statistical estimation in varying coefficient models . _ ann .statist . _* 27 * 14911518 .mr1742497 fryzlewicz , p. , sapatinas , t. and rao , s. s. ( 2008 ) .normalized least - squares estimation in time - varying arch models._ann .statist . _* 36 * , 742786 .fu , w. ( 1998 ) . penalized regression : the bridge versus the lasso ._ j. comput . graph .statist . _* 7 * , 397416 .hafner , c. m. and linton , o. ( 2010 ) .efficient estimation of a multivariate multiplicative volatility model. _ j. econometrics . _ * 159 * , 5573 .hastie , t. j. and tibshirani , r. j. ( 1990 ) ._ generalized additive models_. crc press .hastie , t. j. and tibshirani , r. j. ( 1993 ) varying - coefficient models .b. _ * 55 * , 757796 .mr1229881 hoover , d. r. , rice , j. a. , wu , c. o. and yang , l .- p .nonparametric smoothing estimates of time - varying coefficient models with longitudinal data ._ biometrika . _* 85 * , 809822 .mr1666699 horowitz , j. , klemel , j. and mammen , e. ( 2006 ) .optimal estimation in additive regression models ._ bernoulli ._ * 12 * , 271298 .huang , j. z. ( 1998 ) .projection estimation in multiple regression with application to functional anova models_ * 26 * , 242272 .huang , j. z. , wu , c. o. and zhou , l. ( 2002 ) .varying - coefficient models and basis function approximations for the analysis of repeated measurements ._ biometrika . _ * 89 * , 111128 .mr1888349 huang , j. z. and shen , h. ( 2004 ) .functional coefficient regression models for nonlinear time series : a polynomial spline approach .j. stat . _ * 31 * , 515534 .mr2101537 huang , j. z. and yang , l. ( 2004 ) .identification of non - linear additive autoregressive models ._ j. r. stat .methodol . _ * 66 * , 463477 .huang , j. z.,wu , c. o. and zhou , l. ( 2004 ) .polynomial spline estimation and inference for varying coefficient models with longitudinal data . _ statist .* 14 * 763788 .mr2087972 huang , j. , horowitz , j. l. and wei f. ( 2010 ) .variable selection in nonparametric additive models .statist . _* 38 * , 22822313 .kim , w. , linton , o. b. , and hengartner , n. w. ( 1999 ) .a computationally efficient oracle estimator for additive nonparametric regression with bootstrap confidence intervals ._ j. comput . graph .statist . _* 8 * , 278297 .linton , o. and nielsen , j. p. ( 1995 ) .a kernel method of estimating structured nonparametric regression based on marginal integration ._ biometrika .linton , o. ( 1997 ) .efficient estimation of additive nonparametric regression models. _ biometrika . _ * 84 * , 469473 .r. , yang .l. and hrdle , w. k. ( 2013 ) .oraclly efficient two - step estimation of generalized aditive model ._ j. amer .assoc . _ * 108 * , 619631 .koo , b. and linton , o. ( 2012 ) .semiparametric estimation of locally stationary diffusion models. _ j. econometrics . _ * 170 * , 210233 .mammen , e. , linton , o. and nielsen , j. ( 1999 ) . the existence and asymptotic properties of a backfitting projection algorithm under weak conditions . _statist . _* 27 * , 14431490 .priestley , m. b. ( 1965 ) .evolutionary spectra and non - stationary process ._ j. r. stat .methodol . 
_* 27 * , 204237 .stone , c. j. ( 1982 ) .optimal global rates of convergence for nonparametric regression .statist . _* 10 * , 10401053 .mr0673642 stone , c. j. ( 1985 ) .additive regression and other nonparametric models .statist . _* 13 * , 689705 .stone , c. j. ( 1994 ) .the use of polynomial splines and their tensor products in multivariate function estimation .statist . _* 22 * , 118171 .tibshirani , r. ( 1996 ) .regression shrinkage and selection vis the lasso . _ j. r. stat .soc . ser .methodol . _ * 58 * , 267288 .tjstheim , d. and auestad , b. h. ( 1994 ) .nonparametric identification of nonlinear time series : projections .vogt , m. ( 2012 ) .nonparametric regression for locally stationary time series ._ * 40 * , 26012633 .wang , l. and yang , l. ( 2007 ) .spline - backfitted kernel smoothing of nonlinear additive autoregression model ._ * 35 * , 24742503 .wang , j. and yang , l. ( 2009 ) .efficient and fast spline - backfitted kernel smoothing of additive regression model ._ * 61 * , 663690 .wu , c. o. , chiang , c. t. and hoover , d. r. ( 1998 ) .asymptotic confidence regions for kernel smoothing of a varying - coefficient model with longitudinal data ._ j. amer ._ * 93 * 13881402 .mr1666635 xue , l. and yang , l.(2006 ) .estimation of semi - parametric additive coefficient model. _ j. statist .plann . inference ._ * 136 * , 25062534 .xue , lan .consistent variable selection in additive models ._ _ statist .sinica.__**19 * * , 12811296 .yang , l. , hardle , w. and nielsen , j. ( 1999 ) .nonparametric autoregression with multiplicative volatility and additive mean ._ j. time series ._ * 20 * , 579604 .yuan , m. and lin , y. ( 2006 ) .model selection and estimation in regression with grouped variables ._ j. r. stat .. b. stat ._ * 68 * , 4967 .zhang , x. k. , park , b.u . and wang , j. l. time - varying additive models for longitudinal data ._ j. amer .* 108 * , 983998 .zhang , x. k , and wang , j. l. ( 2013 ) .varying - coefficient additive models for functional data ._ biometrika ._ * 102 * , 1532 .zou , h. ( 2006 ) .the adaptive lasso and its oracle properties ._ j. amer .* 101 * , 14181429 .
nonparametric regression models with locally stationary covariates have received increasing interest in recent years . as an effective remedy for the `` curse of dimensionality '' induced by a large number of covariates , the additive regression model is commonly used . however , in the locally stationary context , to capture the dynamic nature of the regression function , we adopt a flexible varying - coefficient additive model in which the regression function has the form . for this model , we propose a three - step spline estimation method for each univariate nonparametric component function , and establish its consistency and rate of convergence . furthermore , based upon the three - step estimators , we develop a two - stage penalization procedure to identify pure additive terms and varying - coefficient terms in the varying - coefficient additive model . as expected , we demonstrate that the proposed identification procedure is consistent , and that the penalized estimators achieve the same rate of convergence as the polynomial spline estimators . simulation studies are presented to illustrate the finite - sample performance of the proposed three - step spline estimation method and two - stage model selection procedure . keywords : locally stationary process , varying - coefficient additive regression model , b - spline , scad , penalized least squares
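as a minimal illustration of the b - spline building block used throughout the three - step procedure described above , the following python sketch constructs a cubic b - spline design matrix on [ 0 , 1 ] and fits a single univariate component function by ordinary least squares on simulated data . the knot placement , the number of interior knots and the simulated signal are illustrative choices of our own and do not reproduce the simulation settings of the paper .

import numpy as np
from scipy.interpolate import BSpline

def bspline_design(x, n_interior, degree=3):
    # design matrix of a clamped b - spline basis on [0 , 1] with equally
    # spaced interior knots ; each column is one basis function evaluated at x .
    knots = np.concatenate([np.zeros(degree + 1),
                            np.linspace(0.0, 1.0, n_interior + 2)[1:-1],
                            np.ones(degree + 1)])
    n_basis = len(knots) - degree - 1
    cols = []
    for j in range(n_basis):
        coef = np.zeros(n_basis)
        coef[j] = 1.0
        cols.append(BSpline(knots, coef, degree)(x))
    return np.column_stack(cols)

# least squares fit of one additive component on simulated data
rng = np.random.default_rng(1)
x = np.sort(rng.uniform(0.0, 1.0, 500))
y = np.sin(2.0 * np.pi * x) + 0.1 * rng.standard_normal(500)
B = bspline_design(x, n_interior=6)
coef, *_ = np.linalg.lstsq(B, y, rcond=None)
fitted = B @ coef          # spline estimate of the component function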
the scattering matrix is a useful tool to describe the multiple scattering that occur when planes waves enter to a system which in general is of complex nature , like atomic nucleus , chaotic and/or disordered systems , etc . although it is well known for particle waves in quantum mechanics it can also be applied to any kind of plane waves . in the electromagnetic contextthe transfer matrix is known instead of but they are equivalent and related . by definition , relates the outgoing plane waves amplitudes to the incoming ones to the system , from which the _ reflection _ and _ transmission _ coefficients are obtained ; they are called _ reflectance _ and _ transmitance _ in electromagnetism . in the absence of dissipation , as happens in quantum electronic and some electromagnetic systems , becomes a unitary matrix , in particular .however , if the system contains a dissipative medium , is no longer unitary , in fact it is a sub - unitary matrix and , where the lack of unity is called _ absorbance _ in the electromagentic subject . we are concerned with the electromagnetic case . there, a natural definition of through one of its properties arises from the poynting theorem .this theorem is a energy balance equation which in the simplest form , i. e. for linear and non dispersive media , is given by where is the poynting vector which gives the energy flux per unit area per unit time , and is the electromagnetic energy density .\ ] ] here , we have assumed that the electric , electric displacement , magnetic induction , and magnetic fields , as well as the current density , are complex vectors whose real part only has physical meaning .we have also written explicitely the dependence on the position and time .the term in the right hand side of eq .( [ ec : cenergia ] ) is the negative of the work done by the fields per unit volume and represents the conversion of electromagnetic energy to thermal ( or mechanical ) energy .the version of poynting s theorem for dispersive media will not be touched here .our purpose in this paper is to apply the poynting theorem to the simplest scattering system , an interface between two different media , to illustrate the relation of poynting s theorem and one property of .first , we will consider the absence of dissipation for the dielectric - dielectric interface for which the defined matrix is unitary .second , the dissipative case is considered in the dielectric - conductor interface for which the scattering matrix , called , is sub - unitary .the last system is the simplest example to explain quantitatively a model , that we call the `` parasitic channels '' model , introduced in contemporary physics to describe complex systems with losses . also , could represents the scattering of an `` absorbing patch '' used to describe surface absorption . the paper is organized as follows . in the next sectionwe write the time averaged poynting s theorem and calculate the corresponding poynting s vector for planes waves and its flux through an open surface in a dielectric medium , as to be used in the sections that follow sect .[ sec : poynting ] .the dielectric - dielectric interface is considered in sect .[ sec : dielectric ] while sect .[ sec : conductor ] is devoted to the dielectric - conductor interface . finally , we conclude in sect . 
[sec : conclusions ]in what follows we will consider monocromatic high frequency oscillating fields such that where have written the precise dependence on the spatial and temporal variables to avoid confusion due to the abuse of the notation . here, the temporal average is of importance . using the definition for the average of a time dependent function over a period , the average of eq .( [ ec : cenergia ] ) can be written as where we have used that , with help of eqs .( [ ec : e ] ) and ( [ ec : h ] ) , also , by substitution of eqs . ( [ ec : e ] ) and ( [ ec : h ] ) into eq .( [ ec : sp ] ) , it is easy to see that with ( we write again the precise dependence ) being the time averaged poynting s vector . in equivalent way , using eqs .( [ ec : e ] ) and ( [ ec : j ] ) , we get .\ ] ] finally , the time averaged energy flux conservation law is given by where only the real part has physical meaning . if we integrate over a volume enclosed by a close surface , it says that the net flux of through is the negative of the work done by the fields if there are dissipative components in .i. e. , where , from one side , and , for the other side , using the divergence theorem can be written as a surface integral over as for a monocromatic plane wave with linear polarization in the axis , propagating along the positive -direction , the spatial component is being the wave number , the index of refraction of the medium and the speed of light in vacuum ; is a unit vector pointing in the positive -axis , and the complex number is the amplitude of .the magnetic field , calculated from using one of the maxwell equations ( faraday s law of induction ) , is where is a unit vector pointing in the positive -direction and is the permeability of the free space ( we have assumed a non - magnetic medium ) .direction traveling in the direction cross the open surface of area .the long arrows represent the corresponding time averaged poynting s vector . ] sustituting eqs .( [ ec : eplana ] ) and ( [ ec : hplana ] ) into eq .( [ ec : sprom2 ] ) we get where is a unit vector pointing in the positive -axis .this equation means that the time averaged energy flux is constant along the propagation .if we consider an open surface of area as in fig .[ fig : flux ] , the flux of through it is ( see eq .( [ eq : flux ] ) ) where we have taken the normal unit vector of as .let us consider an interface between two dielectrics , with refractive indices and , in the plane as shown in fig . [ fig : energia ] .for the shake of simplicity , we consider plane waves with linear polarization in the axis with normal incidence on both sides of the interface . therefore ,the spatial part of the electric fields , on the left and on the right , are of the form where and ( and ) are the complex amplitudes of the incoming ( outgoing ) planes waves . and . the amplitudes for the incoming ( and ) and outgoing ( and ) planes waves are shown with the corresponding poynting s vectors represented by long arrows .short arrows denote the unit vectors and of the covers of the cylindrical surface .the cross section of the cylinder is . ] for this case , the right hand side of eqs .( [ ec : cenergia2 ] ) and ( [ eq : balance ] ) is zero ( ) because there is not dissipation .hence , . for convenience ,we take a cylinder of cross section , including the covers , to be the closed surface ( see fig . [fig : energia ] ) . due to normal incidenceonly the flux through the covers of section contribute to .then , eq . 
( [ eq : balance ] ) gives where ( ) is the flux through one of the covers with normal unit vector ( , 2 ) due to the -th plane wave ; it is given by with the convention that points outwards of , and are positive quantities while and are implicitely negative .using the result of eq .( [ eq : fluxplane ] ) for a plane wave , eq .( [ ec : cflujo ] ) can be written as which can be arranged in a matricial form that we will use later , namely by definition the fresnel coefficients relate the outgoing to the incoming plane waves amplitudes as where is the matrix the fresnel coefficients of reflection , and transmission , are although for this particular case they are real numbers , in general they are complex , in which case is a complex matrix . in order to be more general , in what followswe will assume that is complex .by substitution of eq .( [ ec : fresnel ] ) , eq .( [ ec : cflujo4 ] ) can be written as from which we see that or , equivalently , where is the identity matrix and is known in the literature as the _ scattering matrix _ which has the following general structure here , and ( and ) are the reflection and transmission amplitudes for incidence on the left ( right ) . from eqs .( [ ec : fresnel2 ] ) , ( [ ec : cfresnel1 ] ) , ( [ ec : cfresnel2 ] ) and ( [ ec : sdef1 ] ) it is easy to see that two remarks are worthy of mention . the first one is that because the system described by is invariant under inversion of time , but in a more general context that is not necessary the case . the second is that because the optical path is not the same for the reflected trajectory when incidence is from right or left . of course , eq .( [ ec : sflujo ] ) is satisfied in the case we are considering as can be easily checked using eqs .( [ eq : rrp ] ) and ( [ eq : ttp ] ) . in particular , where and are the reflection and transmission coefficients . eqs .( [ ec : fresnel ] ) and ( [ ec : sdef1 ] ) implies that by definition relates the outgoing to incoming plane waves amplitudes but normalized with the index of refraction .i. e. , where ( ) , and the electric field on both sides is given by we recall that eq .( [ ec : sdef3 ] ) is a definition of that has arised in a natural way from the restriction imposed by flux conservation , eq .( [ ec : sflujo ] ) , and its structure , given by eq .( [ ec : sdef2 ] ) , reflects the symmetries present in the problem , one of them being the _ time reversal invariance_.when one of the two media is a conductor with electric conductivity , the one on the right in fig .[ fig : energia2 ] let say , the treatment is equivalent as in sect .[ sec : dielectric ] but with a complex index of refraction : , where and are the optical constants . also , the corresponding wave number becomes complex : we replace with then , on the conductor side the electric field is an evanescent wave while on the dielectric side we have incoming and outgoing plane waves .they are from eq .( [ ec : fresnel ] ) , or eq . ( [ ec : sdef3 ] ) , with we see that where and are obtained from eqs .( [ eq : rrp ] ) and ( [ eq : ttp ] ) but replacing by , namely .the refraction index , as well as the wave vector , becomes complex such that only evanescent waves are present with an amplitude decaying exponentially as increases . ] in this case , the scattering matrix of the system is and we denote it by . 
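before analysing this lossy interface further , the unitarity statement obtained above for the dielectric - dielectric case can be verified numerically . the short sketch below uses the standard textbook normal - incidence fresnel amplitudes together with the sqrt(n) flux normalisation ; it is stated as an assumption - based illustration with names of our own choosing , not a transcription of the equations above .

import numpy as np

def interface_smatrix(n1, n2):
    # scattering matrix of a lossless dielectric - dielectric interface at
    # normal incidence , written for flux - normalised amplitudes sqrt(n) e ,
    # using r = (n1 - n2) / (n1 + n2) and t = 2 n1 / (n1 + n2) .
    r = (n1 - n2) / (n1 + n2)
    tau = 2.0 * np.sqrt(n1 * n2) / (n1 + n2)   # flux - normalised transmission
    return np.array([[r, tau],
                     [tau, -r]])

s = interface_smatrix(1.0, 1.5)                    # e.g. vacuum / glass
print(np.allclose(s.conj().T @ s, np.eye(2)))      # True : s is unitary
print(s[0, 0]**2 + s[1, 0]**2)                     # reflectance + transmittance = 1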
by definition ( it is not necessary to normalize with respect to the index ) comparing with eq .( [ eq : rf1x1 ] ) we see that where , from eqs .( [ eq : rrpcomplex ] ) , as for , the poynting theorem impose a restriction to . to apply eq .( [ eq : balance ] ) we consider that the closed surface in fig .[ fig : energia2 ] extends to infinity on the right of the interface , such that eq .( [ eq : w ] ) gives where we used that .then , eq . ( [ eq : balance ] ) can be written as where the area has been cancelled . using eqs .( [ eq : tf1x1 ] ) and ( [ eq : tildes ] ) , after an arrangement , eq .( [ eq : psub1 ] ) gives latexmath:[\[\label{eq : psub2 } subunitary matrix with the lack of unitarity ( _ strength of absorption _ or absorbance ) , the last equality is valid for metals in the upper infrared part of the spectrum and for metals at microwave and lower frequencies . of course as can be easily verified using eqs .( [ eq : rtheta ] ) .( [ eq : psub2 ] ) can be seen as resulting of the unitarity condition for an -matrix that satisfy flux conservation ( compare with eq .( [ eq : unit1 ] ) ) .this -matrix should has the structure ( see eq .( [ ec : sdef2 ] ) ) ,\ ] ] where the phase can be taken as the phase of ( see eqs .( [ eq : tf1x1 ] ) and ( [ eq : tpdef ] ) ) ; using eq .( [ eq : ttpcomplex ] ) this form of says that the losses because of the conductor can be interpreted as due to a single mode of absorption whose `` coupling '' to the interface is .this is the simplest version of what we call the `` parasitic channels '' model appeared in the literature of contemporary physics in recent years to describe power losses in more complex systems ( see ref . and references there in ) .that model consists in simulate losses with equivalent absorbing modes , each one having an imperfect coupling to the system .the total absorption is quantified by but and can not be determined separately .our result not only explain this abstract model but quantify in an exact way the coupling of each absorbing mode as well the scattering matrix of a single absorbing patch in the surface absoprtion model . our treatment presented here can also be used to construct artificially a system with multiple absorbing modes . reduced the time averaged poynting theorem to a property of the scattering matrix . forthat we applied this balance equation to a simplest scattering system , consisting of normally incident planes waves at an interface between two media .the simplest version of this theorem was used such that dispersive media and the corresponding dissipation were ignored .two kind of interfaces were considered . in the first one , a dielectric - dielectric interface, the energy flux conservation leads to a natural definition of the scattering matrix which is restricted to be a unitary matrix .we recalled that the structure of reflects the symmetries present on the problem in other contexts . in the second one , the dielectric - conductor interface ,the definiton of -matrix was used to describe the scattering taking into account the losses on the conductor side via the poynting s theorem .this allowed us introduce the parasitic channels model used in contemporary physics to describe the scattering with losses in more complex systems .we should were able to quantify the coupling of the single parasitic mode of absorption in our simple system .a system with multiple absorbing modes can be constructed and the results will be published elsewhere . 
finally , the same treatment can be generalized to the case of oblique incidence .
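as a closing numerical illustration of the dielectric - conductor case and of the single parasitic channel discussed above , the following sketch computes the reflectance and absorbance for an assumed complex index and embeds the resulting sub - unitary reflection coefficient into a unitary matrix with one absorbing channel . the optical constants and the particular unitary embedding are illustrative choices of our own , not values or formulas taken from the text .

import numpy as np

def conductor_interface(n1, n2):
    # normal incidence from a lossless dielectric ( real n1 ) onto a
    # semi - infinite absorbing medium with complex index n2 = n + 1j * kappa ;
    # all flux that is not reflected is eventually dissipated , so the
    # absorbance is a = 1 - |r|**2 .
    r = (n1 - n2) / (n1 + n2)
    reflectance = abs(r)**2
    absorbance = 1.0 - reflectance
    return r, reflectance, absorbance

r, big_r, big_a = conductor_interface(1.0, 0.5 + 10.0j)   # illustrative constants
print(big_r, big_a)

# the sub - unitary 1x1 scattering coefficient r can be embedded into a 2x2
# unitary matrix whose second channel plays the role of the single parasitic
# ( absorbing ) mode , with coupling t_p = 1 - |r|**2 :
theta = np.sqrt(big_a)
s_full = np.array([[r, theta],
                   [theta, -np.conj(r)]])
print(np.allclose(s_full.conj().T @ s_full, np.eye(2)))   # True : unitary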
we apply the poynting theorem to the scattering of monochromatic electromagnetic plane waves at normal incidence on the interface between two different media . we use this energy conservation theorem to introduce a natural definition of the scattering matrix . for the dielectric - dielectric interface , the balance equation leads to energy flux conservation , which expresses one of the properties of the scattering matrix : it is a unitary matrix . for the dielectric - conductor interface , the scattering matrix , which we denote by , is no longer unitary due to the presence of losses in the conductor . however , the dissipative term appearing in the poynting theorem can be interpreted as a single absorbing mode at the conductor , such that a whole scattering matrix , satisfying flux conservation and containing both the sub - unitary matrix and this absorbing mode , can be defined . this is the simplest version of a model introduced in the current literature to describe losses in more complex systems .
it is imperative to know the frequency response of an optical device ( e.g. a cavity or high - speed modulator ) for it to be useful in optics or communication applications .for example , the frequency spacing between resonances ( the free spectral range , fsr ) of a cavity must be known before being used to calibrate a laser s frequency , perform absolute length measurements , generate non - classical light , or produce large cluster states using frequency - entangled photons .dense wavelength division multiplexed systems often use fabry - perot etalons to select and stabilize the wavelength of a tunable diode laser .a precise measurement of the etalon s fsr is necessary for matching its transmission channel with the international telecommunication union grid .similarly , it is necessary to calibrate a high - speed amplitude modulator before use in long - haul optical fiber transmission systems , or as a wideband optical signal source for characterizing the frequency response of photo - detectors or optical fibers .several methods exist for measuring the frequency response of an optical cavity or modulator .the most - straightforward method is direct detection with a calibrated fast receiver .however , this technique is highly dependent on the bandwidth of the optical receiver , and the ability to calibrate it .most characterization techniques involve a combination of either a calibrated wide bandwidth optical receiver , a commercial optical spectrum analyzer or a calibrated high - speed modulator .for example , techniques based on frequency modulation are effective in determining a cavity s fsr .however , these methods require both a calibrated modulator and a fast photo - detector with a known frequency response .several techniques exist that can determine the response of a modulator that do not require a fast photo - detector , such as optical heterodyning or swept - frequency techniques .unfortunately , these methods still rely on fast electronics or specialized laser sources that only operate at particular wavelengths . as optical device bandwidths continue to increase, it is necessary to develop a measurement technique that determines their frequency response without depending on the precise calibration of a high - speed photo - detector , modulator , or optical cavity . in this paper, we use a recursive method to obtain the frequency response of an optical system .this system consists of both an optical cavity and a high - speed ( uncalibrated ) amplitude modulator , which can be simultaneously characterized from the same measurement configuration .our approach extends a technique demonstrated by locke _ , and involves a fiber - coupled amplitude ( intensity ) modulator , and a low - frequency ( dc ) photo - detector , both of which are commercial laboratory equipment . the photo - detector is used to measure the dc cavity transmitted or reflected power as a function of the modulation frequency , while the carrier remains frequency locked to a resonance . instead of characterizing different cavity resonances by shifting the center laser wavelength , we characterize the full bandwidth up to 15.5 from the optical carrier ( 28 resonances ) by locking the center wavelength , and only adjusting the modulation frequency . by measuring the entire cavity response ( both on and off - resonance ) over this frequency range , we are able to estimate the frequency - dependent modulation depth , and use this knowledge to more precisely determine the cavity characteristics ( fsr and linewidth ) . 
in addition , we explore the precision of our technique by quantifying the effect modulation harmonics have on the measured linewidth , and determining the different resonant frequencies of orthogonal polarization cavity modes caused by the presence of a birefringent material ( non - linear crystal ) . once the modulator has been calibrated , this measurement technique can be used to characterize the frequency response of any passive optical element , and is not limited to cavities .in the next three subsections , we will present our proposed method , theories related to characterizing optical cavities , and how to estimate the frequency response of an optical cavity and amplitude modulator . in this section, we will briefly explain our proposed method , which is shown in fig .[ fig : theorysetup ] .the setup consists of a laser , a high - frequency amplitude modulator , an optical element to be measured , and low - frequency ( dc ) photo - detectors .we first use a cavity as the optical element under investigation as we can use the cavity s frequency response to calibrate the modulator s response .once the modulator has been calibrated , our method can be applied to characterize any optical element to a fine resolution ( e.g. , kilohertz ) over tens of gigahertz .first we modulate the laser beam with an amplitude modulator . the electric field of the output of the amplitude modulator , ,can be written as } ) , \label{eq : em } \end{aligned}\ ] ] where is the amplitude of the input to the amplitude modulator , is the laser carrier angular frequency , and are phase shifts caused by dc and ac voltages applied to the amplitude modulator , respectively .note that here we assume that the amplitude modulator consists of a mach zehnder interferometer with a phase modulator in one of two paths , but our derivation will also be valid for other amplitude modulator configurations .the ac signal applied to the modulator is a sine wave with an angular frequency , as shown in fig .[ fig : theorysetup ] . the phase shift caused by this ac signalis given as , where is the modulation depth .the modulation depth is usually frequency dependent , as we will later show is the case in our experiment .the electric field is now written as where are -order bessel functions of the first kind .the modulated beam is then injected into the optical element under investigation , and the output ( either transmission or reflection ) is measured with a low - frequency ( dc ) photo - detector , as depicted in fig .[ fig : theorysetup ] . in order to calculate the output signal, we apply a fourier transform to the modulator output , which gives after passing through the optical element , whose transmission / reflection coefficient ( transfer function ) is given as , the output electric field is written as note that we ignored the pick - off beam splitter in fig .[ fig : theorysetup ] as it only changes the power scaling of the detected signal . by applying the inverse fourier transform ,we obtain the output electric field is measured by a dc photo - detector whose cut - off frequency is much lower than the modulation frequency , .the detector output is given by where is the dc gain of the detector , , and we let and . 
this is the general expression of our method , where the modulation depth , , and the coefficient are frequency dependent .we also note that the laser power ( ) and the dc offset ( ) may fluctuate during experiments .in particular , internal heating due to conductive losses in the modulator may change the dc offset .since this loss is frequency dependent , the dc offset can drift with changing modulation frequency .our procedure makes it possible to precisely determine both and in the presence of these fluctuations . before detailing our procedure, we will briefly present a mathematical description of an optical cavity in the next section . in this section, we will briefly explain the theories related to the characterization of optical cavities .firstly , the transmission and reflection of ideal cavities are written as where and are intensity reflection coefficients for the input and output mirrors , respectively , and is the intra - cavity loss for a round trip ( is the loss for a half trip , ) . is the phase shift for a round trip of the cavity , which is given as , where is a speed of light , and is the effective length of the cavity .we assume the laser carrier is locked to the cavity ( i.e. , ) .the cavity is on resonance when , where is the resonant angular frequency , which satisfies , and is an integer . near a cavity resonance ( i.e. , ), and can be approximated to a lorentzian function as ^{1/4 } } f_{fsr } , \\f_{fsr } = & \ : \frac{c}{l } , \end{aligned}\ ] ] where and are the ( constant ) intensity transmission and reflection coefficients at the resonance , respectively , is the free spectral range ( fsr ) , and is the linewidth ( full width at half maximum , fwhm ) .figure [ fig : theoryplot ] shows the theoretical detector outputs for a cavity under investigation using our method , which is derived by substituting eq . [ eq : tandr ] ( or eq . [ eq : tandr2 ] ) into eq .[ eqn : idc ] . in this case , the transmission and reflection are symmetric , that is , and .we chose similar model parameters to our experimental values .the intensity reflection coefficients for the input and output mirrors are and , respectively .the total intra - cavity loss is , and the effective path length is m . we also assume that the modulation depth , , is frequency dependent as , and that the laser power and dc offset of the modulator are constant for simplicity .the input optical power to the amplitude modulator is distributed between the power at the carrier frequency and at the sideband frequencies .if the input power is constant during the measurement , then the sum of the optical powers at these frequencies will also be constant .as a result of the modulator s frequency response , if the modulation signal strength is held constant while the frequency is swept , the optical power in the sidebands will decrease for higher modulation frequencies .thus the optical power at the carrier frequency will increase .the overall upward ( downward ) trend of the off - resonance data in the transmitted ( reflected ) data , located inbetween the resonances , is due to the increasing power at the carrier frequency as a function of the modulator s frequency response .also , evidence of the second harmonic modulation sidebands from the amplitude modulator coupling into the cavity is visible as smaller peaks located halfway between the main resonances . in the next section , we will present the mathematical description of our measurement procedure . 
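before doing so , the qualitative behaviour just described can be reproduced with a short simulation . the sketch below is a simplified forward model : the carrier stays locked on resonance , the first and second order sidebands are weighted by squared bessel functions , the cavity is represented by a normalised airy transmission , and the interference terms involving the dc bias as well as the frequency dependence of the modulation depth are deliberately neglected . all numerical values ( fsr , mirror parameter rho , beta ) are illustrative and are not the values used in the experiment .

import numpy as np
from scipy.special import jv

def airy_transmission(f_offset, fsr, rho):
    # relative intensity transmission of a two - mirror cavity for a field
    # component detuned by f_offset from the locked carrier resonance ;
    # rho = sqrt( r1 * r2 * (1 - loss) ) , normalised to 1 on resonance .
    phi = 2.0 * np.pi * f_offset / fsr
    return (1.0 - rho)**2 / (1.0 - 2.0 * rho * np.cos(phi) + rho**2)

def dc_transmission(f_mod, beta, fsr, rho):
    # simplified normalised dc output : the carrier contributes j0(beta)**2 ,
    # while the nth - order sidebands sit at +/- n * f_mod and are weighted
    # by jn(beta)**2 ; dc - bias interference terms are neglected .
    out = jv(0, beta)**2 * np.ones_like(f_mod)
    for order in (1, 2):
        out += 2.0 * jv(order, beta)**2 * airy_transmission(order * f_mod, fsr, rho)
    return out

f_mod = np.linspace(1e6, 1.7e9, 20000)              # sweep roughly three fsr
signal = dc_transmission(f_mod, beta=0.4, fsr=0.55e9, rho=0.95)
# main peaks appear whenever f_mod is a multiple of the fsr ( first - order
# sidebands resonant ) , and smaller peaks halfway between them come from the
# second - order sidebands , reproducing the qualitative shape described above .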
in this section, we will derive how to estimate the frequency response of an optical cavity and the frequency - dependent modulation depth , , of a high - speed amplitude modulator .first we scan the modulation frequency over tens of gigahertz to determine the responses of the cavity and modulator . during the frequency scan ,the laser power may fluctuate , as well as the dc offset of the waveguide modulator may change due to internal mechanisms ( e.g. internal heat effects via pyroelectric effect , or changing environmental conditions ) .we assume that the dc offset is set to at the start of the scan ( i.e. , at low frequency ) , and that the modulator has no significant acoustic resonances over the measurement bandwidth .we also assume that the modulation depth is approximately constant over the cavity resonance ( which is ) . under these assumptions, we can first estimate the resonant frequencies and linewidth of cavity from the data , and then determine the modulator response by using the cavity as a reference .finally , we determine the cavity characteristics more precisely using the information about the modulator response .first we derive an expression for the modulator s output , which is monitored using a dc photo - detector , as depicted in fig .[ fig : theorysetup ] .the output of the modulator is derived by letting in eq .[ eqn : idc ] , which gives where is proportional to the incident laser power , ( i.e. ) .next we will look at the transmission or reflection of a cavity . in order to eliminate the effects from laser power fluctuations, we normalize eq .[ eqn : idc ] with eq .[ eqn : idc0 ] , and define , which is constant .the normalized dc detector output becomes once we measure over a wide frequency range , we can determine the fsr of the cavity from the resonance peaks visible in fig . [fig : theoryplot ] . to determine the linewidth, we may assume that and are approximately constant over the linewidth , which gives where and are constants .we can use the derived expressions of and to approximately fit the data and obtain the linewidth .if the modulation depth , , is unknown at this point , we may use as a fitting parameter , and neglect to realize a reasonable fit .next we will extract over the measured frequency range , and then revisit the fitting to improve our precision in determining the fsr and linewidth .we can extract the on - resonance and off - resonance data from to determine the modulation depth . if the modulation sidebands are on resonance with the cavity , then .the normalized on - resonance detector output is this expression indicates that the on - resonance detector response is constant regardless of the modulation depth , , and the dc offset , .we can estimate from the multiple resonance peaks visible in the cavity s frequency response , as illustrated in fig . [fig : theoryplot ] . note that is constant over the entire measurement bandwidth . if the sidebands are off resonance , and we measure the transmitted power , then and we find , this off - resonance data can be obtained from by eliminating data around the resonant frequencies , and then interpolating . by interpolating the data ( assuming the modulation depth and the dc offset are approximately constant within the resonance linewidth ) , we can obtain continuous off - resonance data . combining eq .[ eqn : idcon ] and [ eqn : idcoff ] , we get since we can assume that at , the dc detector output of the modulator s response at is . 
therefore , the normalized dc detector output of the modulator s response without the presence of a cavity is combining eq .[ eqn : idcratio ] and [ eqn : idcmod ] , we obtain using this expression , we can determine the optical modulator response , , over the entire measurement bandwidth .once we obtain , we can revisit our derivation of the linewidth and fsr frequency to more precisely determine these values . from eq .[ eq : t2 ] , [ eq : r2 ] and [ eqn : idcprime ] , we can derive an expression for the lineshape of a cavity resonance as note here we use rather than to clarify the modulation frequency .this function takes the form of a constant plus two lorentzian functions .the first function , scaled by , represents the first - order modulation sidebands being on resonance with the cavity , and has a bandwidth equal to the linewidth , . while the second function , scaled by , represents the second harmonic sidebands being on resonance , and has a bandwidth of .since the modulation depth is not expected to be strong enough to excite the third harmonic , we can ignore modulation sidebands of order .therefore , our technique can characterize an optical system that consists of a cavity and a modulator with a single measurement configuration .the on and off resonance data from the cavity measurements is used to learn about the frequency - dependent modulation depth , .then we use to fit the lorentzian lineshape described by eq .[ eqn : lineshape ] to the on - resonance data to give a more precise fsr and linewidth .a clear advantage of this method is that we only require a dc photo - detector .we do not need an expensive or elaborate high - frequency detector , or to calibrate the frequency response of the detector or modulator beforehand . in the next section, we will present experimental results from applying this technique to characterize an optical system , consisting of a waveguide modulator and an optical cavity with a non - linear crystal .our frequency response measurement technique involves a recursive approach that allows us to simultaneously characterize both the optical element under investigation ( cavity ) and the measurement device ( amplitude modulator ) .thus both the cavity and modulator are characterized from a single setup , and the technique is not restricted by , or dependent on , the frequency response of the photo - detector .we determine the frequency response of our optical system using the experimental set - up shown in fig .[ fig : exptdiagram ] .the output from a 1550 nm fiber laser is first sent through a phase modulator , followed by an amplitude modulator .both devices are fiber - coupled broadband low - loss lithium niobate electro - optical modulators ( eospace ) .the amplitude modulation frequency , , is varied while the light before and after the cavity is monitored using low - frequency ( dc ) photo - detectors .the cavity has a birefringent non - linear crystal inside , which has different refractive indices for horizontally and vertically - polarized light .this results in non - degenerate polarization modes that have distinguishable resonant frequencies .since we use a polarization beam splitter to send linearly - polarized light to the cavity , we can measure either horizontal or vertical polarization modes by simply rotating a half - wave plate before the cavity .the incident power of the laser is quite low ( a few milliwatts ) to avoid exciting second harmonic generation from the non - linear crystal .the cavity is locked on resonance with the 
laser optical carrier frequency during the frequency measurements using a phase modulator and the pound - drever - hall locking technique .it must remain locked on resonance while the amplitude modulation frequency is swept to ensure that the cavity is not disturbed by any environmental noise while the data is captured .the cavity s response to the amplitude modulation as a function of cavity path length is illustrated in fig .[ fig : exptdiagram ] .this graph depicts the energy ratio between the sidebands and carrier during our measurements .we can control this energy ratio in two ways : the ac modulation signal strength , and the dc voltage applied to a mach - zehnder interferometer inside the amplitude modulator .we operate the modulator near the dc quadrature point ( ) to ensure a linear response .as mentioned in the theory section , we found that the dc offset changes during the frequency sweep .our calibration method allows us to eliminate the effect of the drifting dc offset .one limitation of this technique is it will not work if both the measurement setup ( amplitude modulator ) and the optical element ( cavity ) are completely unknown .we have to assume that the modulator s response is relatively smooth with no acoustic resonances around the cavity resonant frequencies , and that the variation in modulation depth over the linewidth of the cavity is small .fortunately , the s21 response data provided by the manufacturer with the modulator can be used to determine whether these assumptions are valid ( they are valid for our modulator ) .we also need to assume that the cavity has a lorentzian - type response with periodic resonances .this can be easily determined by using the dc photo - detectors to monitor the cavity s transmitted or reflected light as a function of cavity path length , as illustrated in fig .[ fig : exptdiagram ] . once the modulator has been characterize using the cavity s response , this technique can be used to determine the frequency response of any passive optical element including , but not limited to , cavities .we will present the results from our frequency response measurements in the next section .in the next three subsections , we will present wide bandwidth measurements of the frequency response of our cavity and amplitude modulator , and how we use this information to more precisely determine the cavity s fsr and linewidth .we begin our method by capturing the cavity s transmitted and reflected light with dc photo - detectors as a function of the amplitude modulation frequency over a wide frequency range ( ) . based on the measured wide bandwidth response of the cavity shown in fig .[ fig : refltx16ghz ] , we can estimate the cavity s resonant frequencies and linewidth .this data is the normalized dc detector output , which corresponds to in eq .[ eqn : idcnorm ] . 
the input polarization to the cavity was set to horizontally - polarized light for this wide bandwidth measurement , and the cavity was locked to the carrier frequency of this polarization .note how the measured transmission increases to a maximum as the modulation frequency approaches the cavity s first resonance at , while the reflected intensity decreases to a minimum .the modulation frequency at the center of each peak ( or dip ) is equal to a multiple of the cavity fsr , while the full width at half maximum of the on - resonance data gives the linewidth .thus from this data , we estimate the cavity s fsr to be and the linewidth to be .the overall upward ( downward ) trend of the off - resonance data in the transmitted ( reflected ) data is in good agreement with the theoretical prediction shown in fig .[ fig : theoryplot ] .since we are using dc photo - detectors , the bandwidth of this measurement is not dependent on , or limited by , the photo - detector s response . despite the modulatorhaving a bandwidth , we can clearly distinguish resonances at modulation frequencies up to .we normalize the cavity data with the modulator s output measured before the cavity to correct for any laser power drifts .we also note that the on - resonance detector levels are approximately constant , as predicted by eq .[ eqn : idcon ] . in the next section, we will discuss how the modulator s response can be extracted from this cavity measurement .once the cavity characteristics have been estimated , we can extract the modulator s response using the on and off - resonance data in the cavity s transmission response .the on - resonance data , corresponding to , is shown as a relatively constant level in fig .[ fig : refltx16ghz]b given by the 28 distinguishable peaks .the data inbetween these peaks corresponds to just the carrier being on resonance with the cavity while the sidebands are off resonance .since the fsr frequency and cavity linewidth have been estimated , the on - resonance data can be removed from the transmission response to reveal just the off - resonance data , corresponding to .we interpolate the off - resonance data with a high - order polynomial to extract the modulation depth for the entire frequency range , including at the on - resonance frequencies .the modulator s frequency - dependent modulation depth , , calculated from the on and off resonance cavity data is shown in fig .[ fig : beta ] .note that varies from 1.6 to almost 0 over the frequency range .a of 1.6 corresponds to a measurable portion of optical power present in the second harmonic modulation sidebands .evidence of the second harmonic coupling into the cavity is visible in fig .[ fig : refltx16ghz]a and [ fig : refltx16ghz]b as smaller resonances located halfway between the main resonances . in the next section, we will explore how the presence of second harmonic modulation sidebands in the modulator s output affects the measured cavity linewidth , and how knowing allows us to correct for this effect .we use the measured modulation s response to more precisely determine the cavity s characteristics by fitting a lorentzian function to the on - resonance data . in order to improve the accuracy of the fitting, we used an average of five data sets taken around each resonance , and normalized to the modulator s transmitted power .the amplitude modulation settings and dc bias voltage are kept constant while the frequency is swept in this narrow range .a least squares algorithm was used to fit the lineshape function ( eq . 
[eqn:lineshape]) to the on-resonance data to determine the FSR and linewidth. The resulting linewidths of the first 26 cavity resonances are shown in Fig. [fig:fwhm_refl], which correspond to the first 26 resonances visible in Fig. [fig:refltx16ghz]. The linewidth results from the reflected and transmitted cavity data are quite similar, so only the reflected data results are shown for clarity. The data in Fig. [fig:fwhm_refl] shown in black are from fitting the on-resonance data with just a single Lorentzian function (first term in Eq. [eqn:lineshape]), which only models the first-order sidebands. The data shown in red correspond to fitting with the complete lineshape function. Fitting the on-resonance data with these two methods illustrates the importance of including the second harmonic when modelling the lineshape. Since our technique measures the DC power, if modulation harmonics are present, then the detector output will consist of the sum of the responses from all the sidebands. If harmonics are present but not accounted for, then the fitted linewidth will be narrower than the intrinsic cavity linewidth. We have quantified this effect from the frequency-dependent modulation depth, and determined the true cavity linewidth to be (  ) (FWHM), averaged over the 26 resonances. The error bars are calculated by individually fitting each of the five data sets, and then taking the standard deviation of the resulting five linewidths. The two fitting methods agree within error bars for frequencies above 9  (19th FSR), where the first-order sidebands clearly dominate. Thus, assuming the variation in  over the cavity's linewidth is small, knowing the frequency-dependent modulator response allows us to precisely determine the characteristics of our cavity. The presence of second-harmonic modulation sidebands does not corrupt the FSR measurement, as all sidebands are simultaneously on resonance with the cavity. The precision and repeatability of our technique are demonstrated by our ability to measure the different resonant frequencies of orthogonal polarization modes caused by the presence of a non-linear crystal (MgO-doped LiNbO$_3$) inside the cavity. The crystal has different refractive indices along the ordinary and extraordinary axes, which causes horizontally-polarized light to experience a slightly different cavity path length compared to vertically-polarized light. We used a half-wave plate located before the cavity to rotate the polarization of the incident field and characterize the distinguishable cavity modes. The two polarization modes are non-degenerate, as shown in Fig. [fig:hvfsrs], so we can lock the cavity to the carrier frequency of either polarization. Measurements of the reflected light captured around the first resonance for horizontally and vertically-polarized light are shown in Fig. [fig:hvfsrs], as well as the corresponding fit from Eq. [eqn:lineshape]. This data corresponds to the first visible resonance in the wide bandwidth measurements presented in Fig. [fig:refltx16ghz]. We have taken the normalized average of five data sets for each polarization mode. The cavity mode due to horizontally-polarized light has a measured FSR of (  ), whereas the vertically-polarized mode has a slightly lower FSR of (  ).
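The refined fit and the error-bar procedure described in this section can be sketched as follows. The lineshape is modelled, as above, by a constant plus two Lorentzians, one of full width and one of half that width; the free amplitudes below stand in for the beta-dependent prefactors of Eq. [eqn:lineshape], which are not reproduced here, and the error bar is the standard deviation over the individually fitted data sets.

```python
# Sketch of the refined linewidth fit: constant plus two Lorentzians (first-order
# and second-harmonic sidebands), with the error bar taken over repeated data sets.
import numpy as np
from scipy.optimize import curve_fit

def lineshape(f, f0, fwhm, a1, a2, offset):
    lor1 = (fwhm / 2) ** 2 / ((f - f0) ** 2 + (fwhm / 2) ** 2)      # width fwhm
    lor2 = (fwhm / 4) ** 2 / ((f - f0) ** 2 + (fwhm / 4) ** 2)      # width fwhm / 2
    return offset + a1 * lor1 + a2 * lor2

def fit_linewidth(f, I_norm, f0_guess, fwhm_guess):
    p0 = [f0_guess, fwhm_guess, np.ptp(I_norm), 0.1 * np.ptp(I_norm), np.min(I_norm)]
    popt, _ = curve_fit(lineshape, f, I_norm, p0=p0, maxfev=20000)
    return popt[0], abs(popt[1])                 # fitted resonance frequency and FWHM

def linewidth_with_errorbar(datasets, f0_guess, fwhm_guess):
    """`datasets` is a list of (f, I_norm) arrays taken around the same resonance."""
    widths = np.array([fit_linewidth(f, I, f0_guess, fwhm_guess)[1] for f, I in datasets])
    return widths.mean(), widths.std(ddof=1)     # value and error bar
```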
to illustrate the effectiveness of our measurement technique, we can use the known refractive indices of the non - linear crystal to predict the difference in resonant frequencies of the two polarization modes .the refractive indices for the ordinary and extraordinary axes of a mgo - doped linb can be calculated using the temperature - independent sellmeier equation , where is the refractive index of the ordinary axis , is the refractive index of the extraordinary axis , and is the wavelength of the laser in microns ( 1.55032 ) .the resonant frequency of a cavity can be calculated based on the total path length as where is the speed of light in a vacuum , is the index of refraction of air ( 1.000273 ) , is the cavity path length in air , is the index of refraction of the non - linear crystal , and is the crystal length ( 10.18 mm ) .first we can determine from the measured fsr for horizontally - polarized light .then we can predict the fsr for vertically - polarized light based on this value , and the refractive index of the ordinary axis , . using this method, we predict an fsr of 514.384 , which is very close to the measured value of ( ( relative difference of only ) .thus , our technique is sensitive enough to measure the effect of a path length difference of 785 ( of the total path length ) between two distinct cavity modes .the effectiveness of our measurement technique is highlighted by the fact that such an inappreciable path length difference due to the birefringent material can be precisely measured .we have described a simple yet powerful calibration technique that can determine the frequency response of an optical system consisting of both an optical cavity and a high - speed amplitude modulator .we characterized both the cavity and the modulator by measuring the cavity s response over a wide frequency range with a dc photo - detector .our method allowed us to extract the frequency - dependent modulator depth of our amplitude modulator , and characterize an optical cavity , without needing a calibrated broadband photo - detector or optical spectrum analyzer .we used the on and off - resonance cavity data to precisely identify the intrinsic cavity linewidth , which would otherwise be corrupted by the presence of higher - order modulation harmonics from the amplitude modulator .in addition , we demonstrated the precision and repeatability of our technique by measuring the different resonant frequencies of orthogonal polarization cavity modes .once the modulator has been characterized , our method can be applied to characterize any passive optical element including , by not limited to , cavities .this work was supported financially by the australian research council centres of excellence scheme number ce110001027 , the office of naval research ( onr ) , and industry canada .the authors would like to thank greg milford for lending us the rf signal generator , and darryl budarick , shane brandon , and mitchell sinclair for much appreciated technical support .we would also like to thank david moilanen for fruitful discussions . c. gamache , m. ttu , c. latrasse , n. cyr , m. a. duguay , and b. villeneuve , `` an optical frequency scale in exact multiples of 100 ghz for standardization of multifrequency communications , '' ieee photon .. lett . * 8*(2 ) , 290292 ( 1996 ) .h. haitjema , p. h. j. schellekens , and s. f. c. l. 
wetzels , `` calibration of displacement sensors up to m with nanometre accuracy and direct traceability to a primary standard of length , '' metrologia * 37*(1 ) , 2533 ( 2000 ) .r. j. senior , g. n. milford , j. janousek , a. e. dunlop , k. wagner , h - a .bachor , t. c. ralph , e. h. huntington , and c. c. harb , `` observation of a comb of optical squeezing over many gigahertz of bandwidth , '' opt . express * 15*(9 ) , 53105317 ( 2007 ) .r. medeiros de arajo , j. roslund , y. cai , g. ferrini , c. fabre , and n. treps , `` full characterization of a highly multimode entangled state embedded in an optical frequency comb using pulse shaping , '' phys . rev . a * 89* , 053828 ( 2014 ) .s. k. korotky , a. h. gnauck , b. l. kasper , j. c. campbell , j. j. veselka , j. r. talman , and a. r. mccormick , `` 8-gbit / s transmission experiment over km of optical fiber using a ti : linbo3 external modulator , '' ieee j. lightw . technol . *lt-5*(10 ) , 15051509 ( 1987 ) .t. okiyama , h. nishimoto , i. yokota , and t. touge , `` evaluation of 4-gbit / s optical fiber transmission distance with direct and external modulation , '' ieee j. lightw . technol . *6*(11 ) , 16861692 ( 1988 ) .e. l. wooten , k. m. kissa , a. yi - yan , e. j. murphy , d. a. lafaw , p. f. hallemeier , d. maack , d. v. attanasio , d. j. fritz , g. j. mcbrien , and d. e. bossi , `` a review of lithium niobate modulators for fiber - optic communications systems , '' ieee j. sel .topics quantum electron . , *6*(1 ) , 6982 ( 2000 ) .i. ozdur , s. ozharar , f. quinlan , s. gee , and p. j. delfyett , `` modified pound - drever - hall scheme for high - precision free spectral range measurement of fabry - perot etalon , '' electron . lett . *44*(15 ) , 927928 ( 2008 ) .m. aketagawa , s. kimura , t. yashiki , h. iwata , t. q. banh , and k. hirata , `` measurement of a free spectral range of a fabry - perot cavity using frequency modulation and null method under off - resonance conditions , '' meas .* 22 * , 025302 ( 2011 ) . t. s. tan , r. l. jungerman , and s. s. elliott , `` optical receiver and modulator frequency response measurement with a nd : yag ring laser heterodyne technique , '' ieee trans .theory techn . * 37*(8 ) , 12171222 ( 1989 ) .r. t. hawkins ii , m. d. jones , s. h. pepper , and j. h. goll , `` comparison of fast photodetector response measurement by optical heterodyne and pulse response techniques , '' ieee j. lightw .technol . * 9*(10 ) , 12891294 ( 1991 ) .a. k. m. lam , m. fairburn , and n. a. f. jaeger , `` wide - band electrooptic intensity modulator frequency response measurement using an optical heterodyne down - conversion technique , '' ieee trans .theory techn . * 54*(1 ) , 240246 ( 2006 ) .a. a. chtcherbakov , r. j. kisch , j. d. bull , and n. a. f. jaeger , `` optical heterodyne method for amplitude and phase response mmeasurement for ultrawideband electrooptic modulators , '' ieee photon .. lett . * 19*(1 ) , 1820 ( 2007 ) .r. l. jungerman , c. johnsen , d. j. mcquate , k. salomaa , m. p. zurakowski , r. c. bray , g. conrad , d. cropper , and p. hernday , `` high - speed optical modulator for application in instrumentation , '' ieee j. lightw. technol . * 8*(9 ) , 13631370 ( 1990 ) .r. w. p. drever , j. l. hall , f. v. kowalski , j. hough , g. m. ford , a. j. munley , and h. ward , `` laser phase and frequency stabilization using an optical resonator , '' appl .b * 31*(2 ) , 97105 ( 1983 ) .
precise knowledge of an optical device s frequency response is crucial for it to be useful in most applications . traditional methods for determining the frequency response of an optical system ( e.g. optical cavity or waveguide modulator ) usually rely on calibrated broadband photo - detectors or complicated rf mixdown operations . as the bandwidths of these devices continue to increase , there is a growing need for a characterization method that does not have bandwidth limitations , or require a previously calibrated device . we demonstrate a new calibration technique on an optical system ( consisting of an optical cavity and a high - speed waveguide modulator ) that is free from limitations imposed by detector bandwidth , and does not require a calibrated photo - detector or modulator . we use a low - frequency ( dc ) photo - detector to monitor the cavity s optical response as a function of modulation frequency , which is also used to determine the modulator s frequency response . knowledge of the frequency - dependent modulation depth allows us to more precisely determine the cavity s characteristics ( free spectral range and linewidth ) . the precision and repeatability of our technique is demonstrated by measuring the different resonant frequencies of orthogonal polarization cavity modes caused by the presence of a non - linear crystal . once the modulator has been characterized using this simple method , the frequency response of any passive optical element can be determined .
in a rayleigh - bnard experiment , a horizontal viscous fluid layer is heated from below .when the temperature difference between upper and lower sides is small , heat transfer solely occurs through thermal conduction .yet once beyond a critical temperature difference , a regular pattern of convection cells or rolls emerges ( bnard , 1901 ) . this sudden shift from conduction to convectionis referred to as the rayleigh - bnard ( rb ) instability , and is often quoted as an archetypal example of self - organization in non - equilibrium systems ( nicolis and prigogine , 1989 ; prigogine , 1967 ) . intuitively , it makes sense to try to apply the concept of self - organization in non - equilibrium systems to ecologicaly systems , as there are similarities between ecological and physical systems . like the rayleigh - bnard set - up, ecological systems are open systems that receive a throughput of energy and/or mass via coupling to two environments ( morowitz , 1968 ; schrdinger , 1944 ) .these two environments are typically large reservoirs and they drive the system from equilibrium .consider the example of a laboratory chemostat ecosystem ( smith and waltman , 1995 ) .this is a prime example of a chemotrophic ecosystem whereby a resource of energetic high quality chemical substrate is pumped from a reservoir into the system . in the ecosystemthis resource is degraded into low quality waste products which are emitted to the waste reservoir .when there is low feeding of resource , no biota can survive , and the resource is degraded by abiotic processes only .but when the feeding is above a critical threshold , biota can survive by consuming the resource .there is a sudden shift from a lifeless to a living state . in other words , the energetic quality difference between incoming and outgoing chemical substratesis exploited by various abiotic and biotic processes .the latter biotic processes contain the biomass synthesis and turnover of consumer micro - organisms feeding on the resource . so it is tempting to look for a deeper connection .can one compare biological processing with convection ?both mechanisms involve self - organizing structures , biological cells or convection cells , that can only survive after a critical threshold .both energetic pathways , biotic resource conversion and thermal convection , degrade energy from high quality to low quality form . and these energetic pathways are additional to the abiotic conversion or thermal conduction processes of the background . 
here, our ambition is to examine the link between ecological processes and convective fluid motions in a quantitative way .the first part of this article contains a highly intriguing result : the mathematical expressions of the resource - consumer chemostat ecosystem dynamics are exactly the same as the dynamics that describes the basics of the rayleigh - bnard system .furthermore , not only are the mathematical equations identical , also the physical / ecological interpretations give appealing results .particularly , by looking at the energetic pathways of the ecosystem , the ecological quantities can be mapped to the quantities used in the fluid system and vice versa .the second part tries to extend the correspondence between fluid convection and ecosystem functioning to include new processes .we will study two extensions .first , one can look at ecological competition and translate the notion of competitive fitness to the fluid system .the convection cells are in darwinian competition with each other and the fittest ones will survive. one can generalize the lorenz model to include this fluid competition . as the size of a convection cell will depend on the fitness measure, we will demonstrate that the mathematical identity of the ecological and the fluid dynamics predicts the experimentally correct size of the cells at the onset of convection .second , one can look at ecological predation . translating this notion to the fluid system leads to a new conjecture to extend the lorenz model in order to describe more complex convection patterns .these new patterns only appear when the system is driven beyond a second critical value for the temperature gradient .the predatory behavior in the convective fluid system leads us to a conjecture which we will not prove , but will be successfully tested by looking at the energy dissipation .let us start by deriving the dynamics that describes the rayleigh - bnard ( rb ) convective fluid system , named after bnard ( 1901 ) and rayleigh ( 1916 ) who were the first to study this system experimentally and theoretically .a full mathematical treatment of thermal convection requires the combined solution of the heat transport , navier - stokes and incompressibility equations , resulting in a set of five coupled non - linear partial differential equations ( chandrasekhar , 1961 ; rayleigh , 1916 ) . rather than solving this full set , we employ the approximation adopted by lorenz ( 1963 ) , which became famous as it gave an impulse to the development of chaos theory .the model describes the lowest modes of an expansion of the temperature and velocity fields for a rb system with free - free boundary conditions ( see e.g. getling 1998 ) . in appendix[ appendix xyz ] , the derivation of the lorenz system is given in a way that will suit our further discussion . a non - linear set of three ordinary differential equationsis obtained , with three variables ( see fig . [ yzandxfig ] ) : measures the rotational rate of the rolls and represents the maximal velocity at the bottom of the rolls . and are temperature deviations , where the linear profile of the conduction state is taken as a reference . 
, and .the temperature ( ) and horizontal velocity ( ) profiles at three vertical sections ( dashed lines ) are shown .these vertical sections are parallel with the axes of the convection rolls , where the fluid is moving up , moving horizontal or moving down .the thin linear profiles correspond with the conduction state , the thick profiles with the convection state .as indicated , and are temperature deviations and is the velocity at the bottom of a roll . ] with these three variables , the xyz lorenz system is rich enough to describe the rayleigh - bnard instability , the sudden shift from conduction to convection .but there is an even simpler model , the xz system with only two variables , that is rich enough as well .it is this xz model that allows us to make the correspondence . roughly speaking, we will perform an averaging over the horizontal directions , such that only the average vertical profile remains . as is the temperature deviation in horizontal direction ,it is this variable that will disappear after the averaging .specifically , this is done by making the pseudo steady state assumption for the variable .the latter becomes a constant and the dynamics turns into : with the height of the fluid layer , a geometric factor such that is the width of the straight convection rolls , the thermal expansion coefficient , the gravitational acceleration , the heat conduction coefficient , the kinematic viscosity and the temperature gradient .this important quantity is the thermodynamic gradient that drives the system out of equilibrium . is the high temperature of the heat reservoir below the fluid layer and is the low temperature of the heat reservoir above the layer . for further reference, we will also need measures for the temperatures at the middle and the lower halve of the fluid layer .define as the horizontally average temperature at height , and fig .[ thfig ] shows the interpretation of as a temperature measure for a linearized temperature profile in the lower half of the fluid layer .( due to symmetry in the approximation leading to the lorenz system , we will not have to include the upper half of the fluid layer . ) .the lower half of the fluid layer is shown , with the vertical ( horizontally averaged ) temperature profiles in the conduction state ( thin line ) , the convection state ( thicker line ) and the linearized convection profile ( thickest line ) .the variables and are at height . ]next , we discuss the ecosystem model , which is in essence a simple chemotrophic resource - consumer food web model , one of the mainstay models of ecology ( e.g. yodzis and innes , 1992 ) .consumer organisms are feeding on some food resource ( r ) , which is partly converted to consumer biomass ( c ) and partly to waste product ( w ) . for reference, one can think of a chemostat set - up where a chemical reactor tank contains a monoculture of micro - organisms that are feeding on a chemical substrate like methane or glucose , while respiring . .the color denotes the energetic quality of the substances , from high ( red ) to low ( light yellow ) . ]figure [ rcwmodelfig ] gives a schematic overview of the ecosystem coupled with the two environments ( denoted with the superscripts ) .there are two environmental compartments , the resource at constant concentration and the waste at constant concentration ) and three ecosystem compartments , with variable concentrations , and for the resource , the consumer biomass and the waste respectively . 
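As a concrete illustration of the resource-consumer limit described above, the following sketch integrates a minimal chemostat of this type. The functional forms and every parameter value are illustrative assumptions rather than the exact equations of the rc model; the waste concentration is pinned at the reservoir value, corresponding to the fast waste-exchange limit.

```python
# Sketch of a resource--consumer chemostat with abiotic conversion and growth that
# shuts off at chemical equilibrium.  All forms and values are illustrative
# assumptions, not the paper's exact dynamics.
import numpy as np

def rc_rhs(state, alpha=1.0, kappa=0.05, g=1.0, q=0.3, d=0.2,
           C_R_env=10.0, C_W_env=1.0, K_eq=100.0):
    C_R, C_C = state
    drive = C_R - C_W_env / K_eq             # distance from chemical equilibrium
    dC_R = alpha * (C_R_env - C_R) - (kappa + g * C_C) * drive
    dC_C = q * g * C_C * drive - d * C_C      # growth stops when drive -> 0
    return np.array([dC_R, dC_C])

def run(state=np.array([10.0, 0.01]), dt=0.01, n_steps=20000):
    for _ in range(n_steps):
        state = state + dt * rc_rhs(state)    # a simple Euler step is enough for a sketch
    return state

# Above the feeding threshold the consumer survives; below it, C_C decays to zero
# and only the abiotic pathway degrades the resource.
print("steady state (C_R, C_C):", run())
```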
in appendix[ appendix rcw ] , the complete dynamics of the resource - consumer - waste ( rcw ) ecosystem is given , explaining the fluxes between the compartments .however , as we will see , the correspondence only works in a limiting case , whereby roughly speaking we will average over the waste concentrations of the system and the environment .specifically , this can be done by taking a very small relaxation time for the exchange of the waste between the ecosystem and the reservoir .this means that by studying the ecosystem at longer time scales than this relaxation time , the dynamics for w is forced to be in a pseudo steady state condition .hence , w is no longer a variable and we end up with the resource - consumer ( rc ) model , with two dynamical equations for two variables : with , the resource exchange rate parameter , the abiotic conversion ( from r to w ) rate parameter , the consumer growth rate parameter , the yield factor for the consumer growth , the consumer decay ( biomass turnover ) rate parameter , and the equilibrium constant for the chemical reaction ( oxidation ) from r to w which always slowly proceeds at the background .this is the well - known chemostat dynamics ( smith and waltman , 1995 ) , which is extended in two ways : first , abiotic conversion is included in terms of chemical oxidation with parameters and .second , instead of the classical dependence of the growth on the resource , the growth is now made dependent on .this is done for thermodynamic consistency : at chemical equilibrium , biomass synthesis should also cease ..two corresponding models with analogous mechanisms for gradient degradation [ cols="^,^,^",options="header " , ] the abiotic conversion is a chemical reaction with equilibrium constant and a constant abiotic conversion rate parameter . the latter abiotic conversion rate is increased due to a parallel biotic conversion , described by a simple linear functional response with parameter .this biotic conversion has two parts : a fraction of the resource is used for consumer growth , the other part of the resource turns immediately into waste . from a thermodynamic perspective, the latter resource turnover is necessary to drive the growth process .this fractioning is described by the yield parameter : this is the growth efficiency which denotes the amount of resource required to build up one unit of biomass .the third metabolic transformation is the biotic decay ( biomass turnover ) , represented by the rate constant .when the resource is turned into waste , the latter is emitted into the waste reservoir from the environment .the latter has a constant waste concentartion and the exchange flux can be described as putting the two exchange fluxes and the three metabolic fluxes together , the complete dynamics for the resource concentration , the consumer biomass concentration and the waste concentration now look like this is the rcw model .next , we have to simplify this model to the rc model , by assuming to be very large .this means that the relaxation time of the waste exchange is negligibly small , and we get the condition that , resulting into ( [ dyn c_r]-[dyn c_c ] ) .in this appendix , we will give all approximations and a schematic derivation in order to arrive at the lorenz system for the rayleigh - bnard convective fluid ( see berge and pomeau , 1984 or lorenz , 1963 ) .* there are no pressure terms in the energy balance equation .* the heat conduction coefficient and the kinetic viscosity are constants . 
*the local density field depends on the temperature as with and the constant reference density and temperature , the local temperature field , and the constant thermal expansion coefficient . * the above dependence of the density on the temperature is taken into account only in the gravitational force term in the momentum balance equation . at other places in the equations, we will write the density as . * the fluid is incompressible ( except in the thermal expansion term ) : , which results in an equality between heat capacities at constant pressure and volume : , or it can be written in terms of the velocity field as : * the local internal energy differential is . with these approximations ,the heat transport equation can be derived from an energy balance equation , and looks like the first term on the right hand side is the advective heat transport term , and the second is the heat conduction term .the equation for the velocity field is derived from the momentum balance , and results into the navier - stokes equation . in the boussinesq approximation, this leads to with the pressure field , the gravitational acceleration and the unit vector in the vertical z - direction . on the right hand sidewe see respectively the advection term , the pressure gradient term , the external gravitational force term and the viscous diffusion term . as a final step , in order to fully describe our system , we need boundary conditions .the boundary condition for the temperature is simply for the velocity , we have because there is no fluid flowing out of the layer .this is not enough , and we need another condition on the velocity .we will take free - free boundary conditions to make the description of the solutions easier .this gives in summary , we have five partial differential equations : three from the three velocity components , one from the incompressibility condition and one from the temperature .our five local variables are the velocity , pressure and temperature fields .lorenz made some further assumptions in order to turn these five p.d.e.s into three o.d.e.s with only three global variables .due to ( [ div v ] ) , one can write the velocity field as , with the streamfunction . we know from experiment that at the onset of convection ( near the critical gradient ) , a convection roll pattern will arise ( getling , 1998 ) .suppose that the axis of the rolls are along the horizontal y - direction .hence , there will be no component . the simplest way to obtainthis is by assuming . as a final step, we will expand the temperature and fields in fourier modes , taking the boundary conditions into account , and we will retain only three of these modes : with the width of the convection cell equal to . in fig .[ yzandxfig ] a physical interpretation is given to the variables , and .plugging these expressions into the above partial differential equations ( [ dyn t ] ) and ( [ curlpdev ] ) , and collecting the factors with the same spatial dependence , gives : as can be seen , the system does not close because there is a term .a final approximation consists of taking this cosine equal to one .we finally arrive at the lorenz equations , which we will call the xyz model .next , we have to simplify this xyz model to the xz model , by assuming the pseudo steady state condition for ( i.e. 
taking ) , resulting into ( [ dyn x]-[dyn z ] ) .we conclude this appendix with an important remark .there are two important approximations for the xz model .the first is the cancelation of the cosine factor in ( [ dyn y lorenz with cos ] ) .therefore , solutions of the lorenz system are not exact solutions of the complete fluid system in the boussinesq approximation . in this sense , the result of section [ competitive exclusion ] is not trivial , because we arrived at the correct answer whereas the underlying dynamics does not give exact solutions .our second approximation is the pseudo steady state restriction .this means that the steady states of the xz system are also steady states of the xyz model ( but as mentioned above , not necessarily of the complete fluid system ) . in this articlewe mostly restricted the discussion to the steady states of the xz model , but one should be cautious to use this model to try to find correct transient solutions for the xyz or the complete fluid systems . as an example , the xyz system has chaotic solutions with unstable steady states , whereas these chaotic solutions are absent in the xz model .the latter has always stable steady states .
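The distinction drawn above between the xyz and xz models can be seen directly in a short numerical sketch. For simplicity the sketch uses the standard nondimensional Lorenz parameters (sigma, r, b) rather than the physical parameters of the main text (an assumption of this illustration), and obtains the xz model by the same pseudo-steady-state substitution for the middle variable.

```python
# Sketch of the xyz Lorenz model and its xz reduction under the pseudo-steady-state
# assumption for y, in the standard nondimensional form (an assumption of the sketch).
import numpy as np

def lorenz_xyz(state, sigma=10.0, r=28.0, b=8.0 / 3.0):
    x, y, z = state
    return np.array([sigma * (y - x), r * x - y - x * z, x * y - b * z])

def lorenz_xz(state, sigma=10.0, r=28.0, b=8.0 / 3.0):
    # Pseudo-steady state for y:  0 = r x - y - x z  =>  y = x (r - z)
    x, z = state
    y = x * (r - z)
    return np.array([sigma * (y - x), x * y - b * z])

def integrate(rhs, state, dt=1e-3, n_steps=50000):
    """Fixed-step RK4 integration, returning the trajectory."""
    traj = np.empty((n_steps, len(state)))
    for i in range(n_steps):
        k1 = rhs(state)
        k2 = rhs(state + 0.5 * dt * k1)
        k3 = rhs(state + 0.5 * dt * k2)
        k4 = rhs(state + dt * k3)
        state = state + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)
        traj[i] = state
    return traj

# The xz model relaxes to a stable steady convection state, while the full xyz model
# at the same parameters is chaotic, consistent with the caution expressed above.
xyz = integrate(lorenz_xyz, np.array([1.0, 1.0, 1.0]))
xz = integrate(lorenz_xz, np.array([1.0, 1.0]))
print("final xz state:", xz[-1])
```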
Both ecological systems and convective fluid systems are examples of open systems which operate far from equilibrium. This article demonstrates that there is a correspondence between a resource-consumer chemostat ecosystem and the Rayleigh-Bénard (RB) convective fluid system. The Lorenz dynamics of the RB system can be translated into an ecosystem dynamics. Not only is there a correspondence between the dynamical equations, but the physical interpretations also show interesting analogies. By using this fluid-ecosystem analogy, we are able to derive the correct value of the size of convection rolls by competitive fitness arguments borrowed from ecology. We finally conjecture that the Lorenz dynamics can be extended to describe more complex convection patterns that resemble ecological predation.

Stijn Bruers, Instituut voor Theoretische Fysica, Katholieke Universiteit Leuven, Celestijnenlaan 200D, B-3001 Leuven, Belgium

Filip Meysman, Centre for Estuarine and Marine Ecology, Netherlands Institute of Ecology (NIOO-KNAW), Korringaweg 7, 4401 NT Yerseke, The Netherlands

Keywords: Rayleigh-Bénard system; Lorenz model; resource-consumer chemostat; ecosystem metabolism; thermodynamics
synchronous phenomena abound in nature and in our daily lives and have been studied from centuries past , right from huygen s observations of synchronizing clocks .various kinds of synchronous phenomena occur and have been identified ( see for example ) ( other references on the subject may also be found in ) . among these complete synchronization ( cs )is one of the most interesting since the phase , frequency and amplitude of a subsystem all coincide with those of the other subsystem it is coupled to .it is seen therefore that in cs the trajectories of the coupled elements match exactly .cs is known to occur in identical systems , and was first demonstrated in chaotic systems in .neurons and neuronal networks have been a subject of frequent theoretical and experimental studies .synchronization of neural activity has elicited a great deal of interest since it is believed that such phenomena enable cognitive tasks such as feature extraction and recognition to be performed .hodgkin and huxley classified neuron excitability mechanisms broadly into two classes : in type - ii neurons the transition from a quiescent state to a periodically spiking state occurs through a hopf bifurcation with a finite nonzero oscillation frequency . in type - i neurons ,oscillations emerge through a saddle - node bifurcation on an invariant circle . as the bifurcation parameter changes , the stable and the unstable fixed points coalesce and then disappear , leaving a large amplitude stable periodic orbit .this is a global bifurcation and the frequency of the global loop can be arbitrarily small .since axonal excitability patterns of mammalian neurons fall under the type - i class , it is but natural that this class has received special attention in the literature .various observations have been made on type - i neurons , some prominent points of which are as follows .equations for type - i neuronal dynamics can be reduced to the canonical normal form for a saddle - node bifurcation .repetitive firing occurs in the parameter regime when the system is in the close proximity of a saddle - node bifurcation on an invariant circle .et al _ and ermentrout have shown that such neurons coupled via a certain class of time - dependent synaptic conductances are difficult to synchronize . brgers and kopell made further investigations of such coupled systems .in particular they discussed the effects of random connectivity on synchronization and the ping mechanism in networks of excitatory ( e ) and inhibitory ( i ) neurons both in the presence and in the absence of external noise . in this work ,we present some computer studies of generic type - i neurons coupled via synaptic conductances such as those considered in , which are governed by ordinary differential equations and which depend upon the outputs of the presynaptic neurons , and are subject to weak additive gaussian white noise .we consider both excitatory - excitatory ( ee ) and inhibitory - excitatory ( ie ) bidirectional couplings and show that in certain regimes of the coupling constants and inputs , the system of coupled neurons shows complete synchronization ( cs ) .the issue of cs in type - i neurons was not discussed in . 
as discussed in section-2 ,largest lyapunov exponents are shown to not adequately give information about cs in the system .we make an observation on the inputs to the neurons ( and which are also modulated by the feedback in the system ) : we point out that the variation of the instantaneous frequency of the input received at each neuron with the instantaneous phase of the input it receives exactly coincides with that of the other neuron , in the event of complete synchronization of their outputs . it will be noted that in the presence of noise and feedback , this is not a trivial statement .we discuss the utility of this result in practical situations .+ in general , for ee synapses , our results indicate that when a common , externally applied constant input is used to perturb two bidirectionally coupled type - i neurons having identical coupling strength magnitudes and synaptic rise & decay times , weak noise induces them to exhibit cs upto a critical value of the coupling strength . for coupling strengths larger than , we find the system de - synchronizes through a power - law before locking on to a partially synchronized state for larger coupling strengths .we obtain a functional dependence for the synchronization error for neuronal outputs on coupling and noise strengths , in the regimes leading to partial synchronization .such functional dependencies have not been reported in the literature yet , to the best of our knowledge . in the noiseless case for identical ee neurons separated by different initial conditions ,we observe that the antiphase states are stable in agreement with and become completely in - phase in the presence of noise . for just two neurons with ie coupling , noise does not induce complete synchrony . in an ensemble of 200 non - identical neuronshowever , we find unexpectedly that noise - induced cs is possible with all - to - all bidirectional ie random couplings .in a system of neurons , the activity of the neuron is described by a variable which can be related to the membrane conductance .its dynamics is represented by where denotes an inverse time constant for the membrane potential , denotes its total input comprising of a constant external input and the contributions from the presynaptic neurons , with . is the synaptic gating variable and represents the fraction of ion channels open in the presynaptic neuron . is the measure of the strength of the synapse from neuron to neuron ; we have taken . when this equation has no fixed points .any initial condition tends to infinity in a finite time . 
to avoid this blow - up of solutions , a nonlinear transformation to new variables may be made : which maps the real line onto a circle .eqn.(2 ) then becomes the point then gets mapped to the point and is interpreted as firing of a spike .we set and work in the parameter regime in which so that the width of the spikes turn out to be in milliseconds as in real neurons in these units .the evolve in time according to the differential equation which was considered in where denotes the synaptic decay time and the synaptic rise time .the values of always lie in the range 0 to 1 , reaching the maximal value when the neuron spikes .the synapse is an excitatory one if , and if it models an inhibitory synapse .lest there be any confusion , we would like to clarify at the outset that when we refer to _ identical _ neurons , we mean neurons that have the same nature of synaptic coupling ( ee ) and have same coupling strengths , have the same value of , , and receive the same constant input , while their initial conditions differ by very little .+ transmembrane voltage and neuronal firing can be affected by various sources of neuronal noise , but predominantly by synaptic noise .the synaptic noise itself occurs due to several factors , but chief among them is the synaptic bombardment at the inputs through the large number of neuronal connections , with each input spike adding a random contribution .we model this through an additive gaussian white noise added to the neuronal input .we study the dynamics of a system of two such neurons coupled bidirectionally as depicted in fig.(1 ) and subject to gaussian white noise with the following properties : , , where the stochastic variables are taken to obey stratonovich calculus .addition of gaussian white noise to eqn.(1 ) , manifests as multiplicative noise in eqn.(2 ) because of the change of variables to , so that the equations now take the form eqns.(3 ) for the define the feedback regulating the activity of the neuron since depends upon which in turn depends upon ( ) via .the feedback increments or decrements the constant external input received by neuron .thus the control parameter acquires time dependence through the dynamical variables . as in , we consider the neuronal output to be described by the variable as its time evolution pattern resembles that of a membrane potential in real neurons .this transformation maps the resting point corresponding to to , and the spiking point to via the relation .we choose to work with these variables as we get some new and interesting insights upon the dynamics underlying the phenomenon of complete synchronization . in terms of these variables , eqns.(4 ) and ( 3 ) become : numerous studies in the literature have reported the phenomenon of complete synchronization in various coupled systems .an adequately satisfying explanation of why and under what conditions cs can occur for systems with more complicated couplings , such as , for instance , that described by eqn.(5 ) , is however , still lacking in our opinion .+ in particular , in the authors study noise induced cs in systems subjected to a common additive white noise and show that a necessary condition for cs is the existence of a significant contraction region in phase space .the systems studied in were lorenz and rossler systems which are far more amenable to analytical treatment than the equations above in eqn.(5 ) . 
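A minimal simulation of two such coupled neurons can be sketched as follows. The canonical theta-neuron form dtheta/dt = (1 - cos theta) + (1 + cos theta) I is used for each unit, the synaptic gating variable rises smoothly when the presynaptic phase passes through the spiking point and decays otherwise, and weak Gaussian white noise is added to the input. The specific gating function, the simple Euler-Maruyama step (the Ito-Stratonovich correction is ignored), and all parameter values are assumptions of the sketch, not the exact choices behind Eqns.(3)-(5).

```python
# Sketch (not the authors' code) of two bidirectionally coupled theta neurons with
# synaptic gating and weak additive Gaussian white noise on the inputs.
import numpy as np

def simulate(g12=0.1, g21=0.1, I_ext=0.005, noise=0.02,
             tau_r=0.1, tau_d=3.0, eta=5.0, dt=0.01, n_steps=100000, seed=0):
    rng = np.random.default_rng(seed)
    theta = np.array([0.1, 0.3])                 # slightly different initial phases
    s = np.zeros(2)                              # synaptic gating variables in [0, 1]
    g = np.array([[0.0, g12], [g21, 0.0]])       # positive = excitatory, negative = inhibitory
    spikes = [[], []]
    for step in range(n_steps):
        I = I_ext + g @ s + noise * rng.standard_normal(2) / np.sqrt(dt)
        dtheta = (1.0 - np.cos(theta)) + (1.0 + np.cos(theta)) * I
        rise = np.exp(-eta * (1.0 + np.cos(theta)))   # close to 1 only near theta = pi
        new_theta = theta + dt * dtheta
        for i in range(2):                       # a spike is a crossing of theta = pi
            if theta[i] < np.pi <= new_theta[i]:
                spikes[i].append(step * dt)
        s = np.clip(s + dt * (-s / tau_d + rise * (1.0 - s) / tau_r), 0.0, 1.0)
        theta = np.mod(new_theta, 2.0 * np.pi)
    return spikes

spk = simulate()
print("spike counts:", len(spk[0]), len(spk[1]))
```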
+ since excitability in type - i neurons results from a saddle node on an invariant circle bifurcation , complete synchronization of two uncoupled neurons by common noise alonecan be expected because of the existence of a contraction region close to the stable manifold of the saddle . on the other hand , when two such neurons are coupled together as in eqn.(5 ) , the existence and nature of a contraction region would depend upon the eigenvalues of the jacobian at the fixed points of the coupled system .however for eqns.(5 ) , an analytical study becomes difficult since the jacobian becomes singular at the fixed points .we therefore perform some computer studies on the system to learn more about the underlying dynamics .moreover , since as we show below , lyapunov exponents need not adequately give information about cs , we seek other explanations for occurrence of cs .+ as in , we define cs between the activities of neurons 1 and 2 as a vanishing value for the quantity which is the synchronization error averaged over all iterations .largest lyapunov exponents for the system in eqn.(5 ) in the presence of noise for both ee and ie couplings were calculated following and are shown in fig.(2 ) . to incorporate noise in the numerical calculations ,the stochastic runge - kutta-4 method was used .we note that in both ee and ie cases , the largest lyapunov exponent becomes more negative on the addition of noise .we observe also that for some intermediate values of the coupling constants ( such as ) , the values could be larger , i.e. , less negative , than those for lower coupling strengths ( e.g. , ) . for ee coupling , is almost always less than or equal to zero .in the case of ie coupling however , we find that for smaller noise - strengths and smaller couplings , for small , fluctuates between positive and negative values in the presence of noise .this happens because of the oscillation of the bifurcation parameter ( total input ) between two regimes , depending upon the relative strengths of and , since for neuron .hence calculation of lyapunov exponents may not adequately give information about cs or large windows of zero synchronization error , such as , for example , for the situation shown in fig.(3 ) , though they may certainly show the emergence of a definite order in the presence of noise and possible synchronization between the coupled units .indeed , cs is expected to occur between identical systems and finding cs between non - identical oscillators would be unusual . as we describe later, however , we do find noise - induced cs in a system of 200 non - identical oscillators with random all - to - all couplings .equally intriguingly , we find that there are parameter regimes where identical oscillators do not show cs at all , though the lyapunov exponents shown in fig.(2a ) remain negative .negative transverse lyapunov exponents are widely accepted as characterizing cs , but their calculation for the system of 200 neurons , with a nontrivial feedback mechanism for phase resetting , as in eqn.(5 ) is a difficult task and we have not attempted it here . in our case , even for , if we were to define new variables : and , expressing the synchronization error dynamics through in terms of and alone is not at all straightforward for the equations in ( 5 ) .+ moreover when each of the subsystems has a saddle - node - on - an - invariant circle bifurcation in the uncoupled limit , one could expect windows of intermittent firing patterns . 
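The two diagnostics used above can be written down generically: the time-averaged synchronization error of the output series, and a Benettin-style estimate of the largest Lyapunov exponent obtained by evolving two nearby copies of the system under the same noise realization and periodically renormalizing their separation. The `step_fn(state, noise)` interface below is an assumed stand-in for one integration step of Eqns.(5), for example a stochastic Runge-Kutta step; it is not the authors' implementation.

```python
# Generic sketch of the synchronisation error and of a Benettin-style largest
# Lyapunov exponent for a noise-driven system.
import numpy as np

def sync_error(x1, x2):
    """<|x1 - x2|> averaged over the whole run."""
    return float(np.mean(np.abs(np.asarray(x1) - np.asarray(x2))))

def largest_lyapunov(step_fn, state0, dt, n_steps, d0=1e-8, seed=0):
    rng = np.random.default_rng(seed)
    a = np.array(state0, dtype=float)
    offset = rng.standard_normal(a.shape)
    b = a + d0 * offset / np.linalg.norm(offset)
    log_sum = 0.0
    for _ in range(n_steps):
        noise = rng.standard_normal(a.shape)     # the same noise drives both copies
        a = step_fn(a, noise)
        b = step_fn(b, noise)
        d = np.linalg.norm(b - a)
        log_sum += np.log(d / d0)
        b = a + (d0 / d) * (b - a)               # renormalise the separation vector
    return log_sum / (n_steps * dt)
```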
In the coupled system, this is indeed observed in some parameter regimes, interspersed with large windows showing zero synchronization error, even in identical oscillators (EE case) for lower noise strengths (Fig.(3)). Hence we believe that negative largest Lyapunov exponents alone may not constitute conclusive proof for predicting noise-induced CS in coupled neurons.

We therefore looked for other indicators which could help in understanding the mechanism of CS better in systems with feedback, such as in Eqns.(5). We found one such simple indicator for CS in the context of the model under study, which we now describe. The same methods and analysis should also hold for getting information on CS in any other system. Since synchronous activity is brought about by a common input or through mutual interactions, and since these include components which are highly random, we study the _instantaneous_ values of the sum total _of the inputs_ received by each unit of a coupled system. We first set up a framework for this purpose and then provide a physical motivation and explanation for understanding CS through this indicator. We construct the analytical signal  for the inputs , and similarly for , using Hilbert transforms. The instantaneous amplitudes and phases evolve according to , where  denotes the instantaneous difference between the phase of the output of the  neuron and that of the part it sends as feedback to the presynaptic conductance of the  neuron; this feedback is a stochastic component of the input for neuron . We have constructed the noise terms  and  from the analytical noise signal.  and  are periodically modulated by , which evolves according to the differential equation
$$\begin{aligned} &= \cdots - r_i\sin\phi_i\left[r_i\cos(\zeta_i-\psi_i)-b_i r_i\cos(\zeta_i-\psi_i+\rho_i)+b_i\cos(\psi_i+\rho_i)+\xi_{r_i}\right] \\ &= \frac{1}{2\sqrt{1+r_i^{2}-2r_i\cos\phi_i}}\left\{(r^{2}-1)\sqrt{r}\,\dot{\phi}-\sin\phi\,\dot{r}\right\}. \end{aligned}$$
The instantaneous phase therefore has no deterministic time scales, and its drift and diffusion in time in the presence of noise are influenced by  and , the instantaneous values of the amplitude and phase, respectively, of the input, and also by the instantaneous amplitude of the neuronal output.  and  evolve as follows: ; these show the effect of feedback on the neuronal response. It is seen that CS between the outputs of neurons 1 and 2 occurs when the changes in the instantaneous phases and amplitudes of the two neuronal _inputs_ exactly match each other. In other words, CS in the inputs to the neurons is required for the outputs to synchronize in phase, amplitude and frequency. This observation is in general not obvious, since the system is nonlinear and has a feedback mechanism which depends upon the outputs of the other neurons.

In Fig.(4) we present the instantaneous-phase versus instantaneous-frequency plots of the inputs received by the two neurons. In all the numerous cases we studied for the coupled system for , we found that the signature of CS is the almost identical nature of these plots for the two systems that are in synchrony, be it with or without noise.
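The analytic-signal construction described above is straightforward to reproduce numerically; a sketch using scipy.signal.hilbert is given below. Plotting the instantaneous frequency against the instantaneous phase of each neuron's total input, and overlaying the two curves, gives the CS indicator discussed in the text. The preprocessing choices (mean removal, the finite-difference derivative) are assumptions of the sketch.

```python
# Sketch of the Hilbert-transform construction of instantaneous amplitude, phase and
# frequency for a sampled input signal.
import numpy as np
from scipy.signal import hilbert

def instantaneous_phase_frequency(signal, dt):
    analytic = hilbert(np.asarray(signal) - np.mean(signal))
    amplitude = np.abs(analytic)
    phase = np.unwrap(np.angle(analytic))
    frequency = np.gradient(phase, dt) / (2.0 * np.pi)
    return amplitude, np.mod(phase, 2.0 * np.pi), frequency

# Overlaying the (phase, frequency) curves of the two inputs gives the CS indicator:
#   _, p1, f1 = instantaneous_phase_frequency(I1, dt)
#   _, p2, f2 = instantaneous_phase_frequency(I2, dt)
# near-identical curves signal complete synchronization of the outputs.
```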
on the other hand , the absence of cs gets reflected in the non - identical variation between instantaneous phase and instantaneous rates of phase - change of inputs to the two neurons , in the plots shown in fig.(5 ) .this is again true for both the noiseless as well as noisy cases .+ the system of equations ( eqn.(5 ) ) is of the form where and since feedback to each neuron through the synaptic coupling is oscillatory , in a fokker - planck description of the stochastic process , the probability distribution of the neuronal ensemble will not be stationary in the limit .in certain regimes of the noise strengths and coupling constants , where competing contributions from the drift and diffusion terms would make the noise - averaged difference in outputs zero , cs occurs .physically , cs is brought about through the following sequence of events .addition of a small amount of noise increases the decay time of the synaptic conductances gradually , and eventually , lowers their minimum to zero .this delays the onset of the next peaks , and hence the input to each neuron at any further instant of time . increasing the noise strength further increases the decay time of .the periodically maximal values of the inputs thus take longer to arrive at the neurons and this becomes visible in the neuronal firing pattern as departures from the previous ( noiseless ) values of the phase differences between the neurons , and those of their output differences .the instantaneous values of the phases and the rate at which they change in time , i.e. , the instantaneous frequencies of each neuron , is determined by the strengths of the synaptic inputs it receives and the noise strength for any given set of .hence for given initial conditions and different amplitudes for & , it would be reasonable to expect cs to occur when the following is satisfied for the inputs to the neurons : the variation of instantaneous values of the frequencies with instantaneous phases of the _ input _ for neuron 1 matches with that for neuron 2 .this results in the instantaneous values and of neuron 1 changing in step with and respectively of neuron 2 .this forces the amplitudes of neurons 1 & 2 to become identical with each other , since otherwise both conditions and can not be simultaneously maintained .thus cs results .a striking feature of all these plots is their strange , flame - like structure .the flame shape is reminiscent of canards that are typically associated with systems exhibiting relaxation oscillations .+ indeed , from the common factor occurring in the inverse square root on the right hand side of eqn.(8 ) , it is apparent that does evolve on a time scale different from that for & in eqns.(6 ) & ( 9 ) , though it is a dynamically varying time scale , determined also by the stochasticity of the system .the separation of time scales is in fact manifested in the time series for the neuronal inputs which show relaxation oscillations ( figs.(4 ) & ( 5 ) ) .this gives the input instantaneous phase - frequency curve its characteristic shape whenever noise is introduced .this is indicative of some order emerging in the phase .in fact , noise - induced phase synchronization can be demonstrated even in the ie system by interpreting the instantaneous phase differences in a statistical sense as in . the distribution , of cyclic instantaneous phase differences , $ ] is shown in fig.(6 ). 
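The statistical indicator just mentioned, the distribution of cyclic instantaneous phase differences, can be computed in a few lines; sharper, taller peaks at larger noise strength indicate noise-induced phase coherence. The bin count and normalization are arbitrary choices of the sketch.

```python
# Sketch: normalised histogram of the cyclic instantaneous phase difference.
import numpy as np

def phase_difference_distribution(phase1, phase2, n_bins=60):
    dphi = np.mod(np.asarray(phase1) - np.asarray(phase2), 2.0 * np.pi)
    counts, edges = np.histogram(dphi, bins=n_bins, range=(0.0, 2.0 * np.pi), density=True)
    return counts, edges
```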
preferred phase differences between the 2 neurons manifest as peaks in which become sharper and taller with increasing noise - strengths a clear indication of noise - induced phase coherence .in fig.(7a ) we have plotted the synchronization error for two coupled identical ( ee ) neurons and for coupled non - identical ( ie ) neurons , as a function of the coupling strength for different noise - strengths .we have considered here the special case of , and each neuron receives inputs only from the other neurons ; _ i.e. _ , we look at the effect of feedback .results and analysis for non - zero are presented elsewhere .we see that although cs is expected between identical oscillators , it does not happen for the noiseless ( deterministic ) case .increasing noise - strength brings down and indeed takes it down to zero for certain ranges of the coupling constant . for the identical ee coupled neurons ,when feedback constitutes the only input ( there being no other explicit input ) , as in figs.(7a,7b ) , then beyond a maximal critical coupling constant strength , the system gets de - synchronized , with . for , there then exists a regime in the de - synchronized system where noise strength still plays a role in determining the output . in this transition regime , for , we find that the difference in outputs depends upon through the expression where is a constant that depends upon the noise strength .+ the critical coupling strength depends upon the noise strength as well and we find that it varies as where . at very high the system gets locked to a partially synchronized state , with approaching a constant value , wherein noise strength no longer influences the difference in the outputs of the neurons . in fig.(7a ) , this constant value approaches 0.5 . for the curves in fig.(7a , b ) , for the entire regime following the beginning of desynchronization, we find a functional dependence on the coupling constant given by an equation of the form where and depend on the noise - strength through so that this expression is plotted on the numerical data points in fig.(7b ) .a rigorous theoretical treatment of the system needs to be done in future studies to establish these relations for the synchronization and desynchronization transitions through a fokker - planck approach .+ in the ie case ( fig.(7c ) ) , we see that noise - induced cs does not occur for two coupled neurons ; however , there is partial synchronization ( by this we mean that ) since gets locked to a finite , non - zero value ( ) for large .further , increasing noise - strength increases rather than decreases , in the region before partial synchronization , in contrast to the observation for the ee case .we were unable to achieve cs in the ie case for two coupled neurons , even in the presence of noise .however , for an ensemble of 200 coupled theta neurons of which 150 neurons are excitatory and 50 inhibitory , we obtain very different results . in this simulation shown in fig.(8 ) , each neuron receives the same input and there is all to all random coupling with the coupling strengths varying between and and with different initial conditions .the excitatory neurons are shown in red while the inhibitory ones are in green .on introducing gaussian white noise of strength into the system we observe synchronous phenomena emerging between the excitatory and inhibitory neurons . 
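a brief sketch of how the synchronization error and the power - law dependence of the critical coupling on the noise strength discussed above could be extracted numerically ; the time average used for the error and the log - log least - squares fit are assumptions about the procedure , which the text does not spell out .

```python
import numpy as np

def synchronization_error(x1, x2):
    """time-averaged absolute difference between the two neuronal outputs
    (the precise averaging -- over time and/or noise realizations -- is assumed)."""
    return float(np.mean(np.abs(np.asarray(x1) - np.asarray(x2))))

def fit_power_law(noise_strengths, critical_couplings):
    """fit eps_c ~ a * D**gamma by linear regression in log-log space."""
    slope, intercept = np.polyfit(np.log(noise_strengths),
                                  np.log(critical_couplings), 1)
    return np.exp(intercept), slope   # prefactor a, exponent gamma
```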
the interesting thing to note here is that not only do most of the inhibitory neurons fire in synchrony with other inhibitory neurons , but also that most of them are synchronized with the excitatory neurons . this kind of noise - induced near - cs in coupled type - i neurons with random non - zero coupling strengths has not , to our knowledge , been reported previously in the literature . a detailed study of this situation , aimed at explaining the observed synchrony , is under way and will be reported elsewhere : it is beyond the scope of the present work . interestingly , spatiotemporal synchronization has been shown in the literature to occur in networks of coupled chaotic maps with varying degrees of randomness in the coupling connections . we have studied the issue of noise - induced cs in coupled type - i neurons , a class of neurons that is especially important since mammalian neurons fall under this category . we find that complete synchronization between any two neurons is signalled whenever the variations of the instantaneous input phases versus the instantaneous input frequencies are identical for the two neurons being studied . we point out that such identical plots of the neuronal inputs would be a signature of cs between the neurons . this suggests the possibility of producing completely synchronized outputs of coupled systems having feedback mechanisms in the presence of noise , by ensuring that the plots of the instantaneous frequency versus instantaneous phase of the _ inputs _ to the subsystems are identical . that this is a significant point will be appreciated when one recalls that this condition is required in the continued presence of noise and feedback . this result becomes important when neurons in a living organism must be monitored or controlled , and a synchronized neuronal output is required at another , inaccessible spot through a given external input . though cs is expected between coupled , identical neurons , we find that cs occurs only up to a critical value of the coupling constant for a given noise strength , beyond which the system desynchronizes again and then , for large coupling , gets locked to a partially - synchronized state . we find that the critical coupling strength depends upon the noise - strength through a power law . for couplings greater than the critical value , from the transition regime up to the onset of partial synchronization , we find a functional dependence of the noise - averaged output difference on the coupling and the noise strength , given by eqn.(14 ) . for a larger ensemble of 200 neurons , we find unexpectedly that non - identical neurons can show near - complete synchronization . since type - i neurons model axonal excitability patterns in mammals , the results presented here would be useful in the study of synchronous mechanisms underlying the neural code . as an immediate application , we believe our results would be useful in explaining the experimental observations reported on cat and awake monkey visual cortex , which show synchronization of neuronal activity with a single stimulus that disappears when activated by different , independent stimuli .
since a single stimulus would correspond to neuronal inputs of identical amplitudes , instantaneous phase and frequency , this is actually the same scenario that we have found for cs to occur . it is likewise clear why cs was experimentally observed to vanish on the activation of different stimuli , since the necessary conditions of identical input amplitude , phase and frequency are no longer met . + * figure 4 . * instantaneous phase - frequency " flame " plots of the stochastic * input * to a neuron , for ee coupling , for cases * ( a ) * ( top ) and * ( b ) * ( bottom ) . in these plots the feedback constitutes the only input , showing the effect of feedback . the extended panels below each set of flame plots show the corresponding time - series of the neuronal input ( at left ) and of the difference in neuronal outputs ( at right ) . + * figure 5 . * further instantaneous input phase - frequency " flame " plots with the corresponding time - series of the neuronal input ; rows 1 and 2 : ee coupling ; rows 3 and 4 : ie coupling . + * figure 7 . * noise - induced synchronization in coupled type - i neurons . ( a ) ee case ; transition from the synchronized to the partially synchronized state . ( b ) ee case , as in ( a ) ; solid lines correspond to eqn.(16 ) . ( c ) ie case ; noise - induced cs is absent but there is partial synchronization as the system gets locked to a finite output difference for large coupling . +
for a system of type - i neurons bidirectionally coupled through a nonlinear feedback mechanism , we discuss the issue of noise - induced complete synchronization ( cs ) . for the inputs to the neurons , we point out that the rate of change of instantaneous frequency with the instantaneous phase of the stochastic inputs to each neuron matches exactly with that for the other in the event of cs of their outputs . our observation can be exploited in practical situations to produce completely synchronized outputs in artificial devices . for excitatory - excitatory synaptic coupling , a functional dependence for the synchronization error on coupling and noise strengths is obtained . finally we report an observation of noise - induced cs between non - identical neurons coupled bidirectionally through random non - zero couplings in an all - to - all way in a large neuronal ensemble . complete synchronization , noise , coupled type - i neurons +
networks have been widely used to describe systems in a multitude of disciplines , such as genetic networks , protein networks or the internet . in ecology ,networks are mainly used to visualize and describe food webs .but not only trophic interactions are the focus of attention . in the last yearsresearchers show a growing interest in the study of other species interactions such as parasitism ( vzquez et al . , 2005 ) , scavenger species ( selva and fortuna , 2007 ) and mostly mutualism ( bascompte and jordano , 2007 and references therein ) . studies on mutualistic food webs focus on specific pairwise interactions between a plant and an animal and how they are shaped by a community context , either in a single locality , or geographically ( bascompte and jordano , 2007 ) .pairwise interactions can be described in the form of a bipartite graph or an interaction matrix .these webs are characterized by nodes that represent species or species groups and observed interactions are drawn as links that , when not binary , can render their intensity or frequency in graded thicknesses . in the interaction matrix , links are represented as nonzero cells on the intersection of a row and column . according to almeida - neto et al.,2007 , bipartite webs do in fact offer several advantages of their own : first , they are often fully resolved , without the problems of uneven resolution which haunt the analysis of complete webs .second , all links are of a single kind of ecological interaction ( e.g. mutualism ) , which ensures structural integrity as well as similar ecological and evolutionary processes throughout the entire assemblage . the most studied structure within a bipartite graph is the nested pattern of species interactions , although other structures are also possible ( prado et al .2006 ; almeida - neto et al . , 2007 ) .in nested assemblages , plants with few interactions are related only with generalist animals ; conversely , specialized animals are found related to plants with many links , that is , with large associated faunas .moreover , generalists in one species set tend to interact with generalists in the other , forming a dense core of interactions ( prado et al . , 2006 ; bascompte and jordano , 2007 ) .a nested structure is very cohesive and stable .the fact that few species are involved in many interactions ( functional redundancy ) , poses the community with the possibility for alternative routes if some interactions disappear ( bascompte and jordano , 2007 ) .a nested structure is also quite robust : it is less prone to sampling bias than number of species and links ( nielsen and bascompte , 2007 ) and not generated by the random combination of sets of plants and animals solely in proportion to their different abundances as previously thought ( prado et al . 
, 2006 ) .recently , a large series of mutualistic interaction assemblages have shown a significantly nested structure ( bascompte et al .substantial effort has been done in developing various measures and forms of calculating nestedness ( atmar and patterson , 1993 ; guimaraes and guimaraes , 2006 ; rodrguez - girons and santamara , 2006 ; almeida - neto et al ., submitted ) .the most commonly used nestedness metric is the `` nestedness temperature calculator '' , or `` nestcalc '' , used to calculate nestedness in binary matrices ( atmar and patterson , 1993 ) .the nestedness from this algorithm has a problem , though : the absolute value of the nestedness temperature is dependent on matrix size and fill .some studies show that the nestedness temperature of randomly assembled matrices increases with network size and attains its maximum value for intermediate fills ( rodrguez - girons and santamara , 2006 ; almeida - neto et al . , submitted ) .so , smaller networks need lower temperature than larger ones do to be significantly nested ( nielsen and bascompte , 2007 ) . from a mathematical point of view , the central object in the discussion about nestedness is a matrix of zeros and ones .the ecologist in the field interpret this matrix as a table where she ( he ) marks a cross at the column and row each time a species of group one ( e.g. plant ) is related to group two (e.g .insect ) . in order to visualize nestedness in the studied ecological communitythe ecologist has to rank rows and columns of the table .in fact , each time one row ( or column ) is permuted to another row ( or to column ) , the interactions among species of groups one and two do not change . for a matrix of species in group one and species in group 2 , there are possibilities to represent the matrix following different permutations of rows and columns .each one of these possibilities is just a different visualization of the same network structure .ranking rows and columns is a very practical option to visualize nestedness in a interaction matrix .when we rank the elements of a matrix we choose one of the possibilites , that one where the elements of the matrix are the most packed .in other words , we choose the representations where the elements of the matrix are as closest as possible from the , corner . in the literature .packing procedure is a previous step before the evaluation of a nestedness index ( atmar and patterson , 1993 ) . in this articlewe also pack the matrix before the evaluation of our nestedness index .we introduce here a new nestedness measure applied to digraphs originated from ecological data .however , the method is more general than its predecessor and can be naturally applied to graphs ( networks ) other than digraphs . in section we describe the formal objects used in this article : adjacency matrix , manhattan distance in a matrix , projection of a generic matrix into the unit square lattice , packing process , random matrix , maximum nested matrix and nestedness estimator . in section we apply the nestedness estimator to a set of insect - plant herbivory networks extracted from the community ecology literature . in section we summarize the article , point out potential applications of the method and give the final words .in this section we introduce the concept of distance in a matrix to characterize the nestedness of digraphs . in order to fix the notation we call digraph an object formed by two sets of vertices and and a set of links between these two sets . 
the digraph is completely described by the adjacency matrix , , of size , where and are the number of elements of and , respectively . by definition if there is a link between vertices of and of and if and are not linked .it is useful to visualize as a versus lattice with empty ( zero ) or full ( one ) sites .moreover , the number of links of a vertex is and the distribution of links of and is and respectively . in ecology ,the field data corresponding to the digraph is composed by two sets of species and the corresponding links ( interactions ) between them . as we pointed out in the introduction , the standard procedure in this area consists in packing the adjacency matrix of the data .the packing is performed in the following way : the link distributions and are ordered such that the most connected species go to the first position of the matrix . in this way the matrix shows more ones close to and corner and zeros at the opposite corner , and . from the matrix point of view , the packing process consists in replacing lines and columns until and are ordered .we emphasize that since the packing process do not change the links between species it does not alter the phenomenology underlying the network .the idea behind packing the matrix is to better visualize network nestedness .in addition , nestedness is related with the dispersion of ones and zeros after the packing process .a very nested matrix is one that , after packing , has a minimal mixing of ones and zeros . using a lattice analogy, a very nested lattice shows a minimum of holes . to introduce distance properties in the original matrix we map it into a cartesian space . in order to avoid distortionswe map the matrix to the unit square .to perform this task the cell elements assume the positions and that are done by : in this article we use the manhattan distance because it is broadly employed to measure matrix distances .euclidean distance is used for estimating distance between elements apart in continuum space , which is not the case here .in fact , in the context of abstract metric spaces ( courant , r. and hilbert , d. , 1937 ) set of distances that depends on the parameter , the case corresponds to the manhattan distance and the case to the euclidean distance .we define the occupancy number as the fraction of occupied sites in the adjacency matrix . for total number of ones in we have .to quantify nestedness of a given matrix we use two matrix benchmarks with the same , and : the maximal nested matrix and the random matrix .the maximal nested matrix is constructed in such a way that it has no holes and its elements are as close as possible to the corner .we construct filling the elements along equidistant diagonals to .in fact all elements along the same diagonal have the same distance to the corner .the construction of is the following : the first element occupied is , after that it comes and , followed by , and , etc .figure [ fig1 ] illustrates the optimal filling strategy to build .in contrast , the random matrix is constructed in such a way that all its elements are uniformly occupied with the same probability .the maximum packed matrix of the species interaction will be in - between these two .we use the manhattan distance to evaluate the distances of the filled elements ( the distances are defined for the matrix elements projected into the unit square in the cartesian plane ) .we call the sum over the distances of all the elements of the matrix projected into the unit square , that means , for . 
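the packing and distance computations described above are straightforward to implement . the sketch below ( python / numpy ) packs a binary adjacency matrix by decreasing row and column degree , projects the occupied cells onto the unit square and sums their manhattan distances to the packing corner ; the exact cell - to - coordinate mapping is an assumption , since the paper 's formula was not recoverable from the text .

```python
import numpy as np

def pack(adjacency):
    """reorder rows and columns so that the most connected species come first."""
    a = np.asarray(adjacency, dtype=int)
    rows = np.argsort(-a.sum(axis=1), kind="stable")
    cols = np.argsort(-a.sum(axis=0), kind="stable")
    return a[rows][:, cols]

def total_manhattan_distance(a):
    """sum of manhattan distances of the occupied cells to the (0, 0) corner,
    after projecting the n x m matrix onto the unit square
    (cell (i, j) -> ((i + 0.5)/n, (j + 0.5)/m), an assumed convention)."""
    n, m = a.shape
    i, j = np.nonzero(a)
    return float(np.sum((i + 0.5) / n + (j + 0.5) / m))
```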
in order to define the nestedness estimator we introduce two additional distances : the total distance of the artificial matrices and .we note that has the smallest total distance among all the lattices with the same and we call its total distance , while the total distance of is .consider a sample of points ramdomly distributed along the unit square , the expected value of the distance to the origin , , is the manhatann distance from the origin to the center of the square of size , that means , .therefore : to get an insight about distances in we start exploring the behavior of and against occupancy in figure [ fig2 ] .we use in this picture .as expected , the distances follow the relation .the total distance for any matrix , after the packing process , shows the property : in fact , since is derived from an artificial matrix whose components , by construction , have the minimal distance to the origin .otherwise , because is derived from a packed matrix , and in the packing process the matrix reduces the distances of their elements when compared with a similar random matrix .the distance as defined above depends on the matrix size and the occupation .in fact , the total distance observes the relation for a given and the relation for a constant .this behavior can be visualized in figures [ fig2 ] and [ fig3 ] . in order to have a free nestedness index of the system we define the nestedness index as follows : we emphasize that and are computed over a artificial matrix with the same , and of the original system . in the next sectionwe test over a set of digraphs from the context of community ecology and discuss the results .in this chapter we select a set of insect - plant herbivory networks in the literature and apply the nestedness index we develop in this article . in table[ tab1 ] we enumerate the set of networks with its main properties : the occupancy , size and , the nestedness estimator , the temperature ( according to atmar and patterson , 1993 ) and the reference of the network . a visual inspection of the table does not reveal correlation between and .in fact , a linear correlation analysis between the two variables revels no significant correlation ( and ) . the range of values of our estimator is and the average value is . in contrast , the usual temperature estimator have the range and average value . in order to improve the visual intuition about the problem we plot in figure four lattices of insect - plant networks .it is clear in the figure that ( a ) and ( b ) are highly nested , and that on the contrary , ( c ) and ( d ) are not nested at all .this intuitive idea is corroborated by the estimator , but not by the temperature , the estimator of the two initial matrices are 0.18 and 0.23 ( low values ) , and of the last two 0.79 and 0.76 ( high values ) .the temperature estimator , on the other hand , shows an intermediary value in case ( a ) , where our estimator shows a low value and a very high temperature in ( c ) where our estimator points for a very large value .in fact , figures ( c ) and ( d ) have a large number of specialists , and in consequence the matrix can not be well nested .our estimator corroborates this observation . as a final remark concerning this set of figures, we point that all the matrices in this figure are well packed , that means , the number of elements of lines and columns are ordered . on the other hand ,the nestedness calculator from atmar and patterson , 1993 , usually fails in packing well the matrices . 
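the analytic benchmark for the random matrix rests on the fact that a point drawn uniformly on the unit square has an expected manhattan distance of 1 to the origin ( 0.5 per coordinate ) , so that the expected total distance is simply the number of occupied cells ; a two - line monte carlo check of this expectation is sketched below .

```python
import numpy as np

rng = np.random.default_rng(0)
points = rng.random((100_000, 2))   # uniform points in the unit square
print(points.sum(axis=1).mean())    # ~1.0 : expected manhattan distance to (0, 0)
```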
from the observation of figure [ fig5 ] we see that the variables are correlated ; a linear correlation analysis confirms this , and an exponential regression fits the data slightly better than the linear one . the dependence of a nestedness index on occupancy has already been pointed out in the literature ( rodrguez - girons and santamara , 2006 ) . at first sight , a relation between the estimator and the occupancy seems intuitive : as the number of occupied sites increases , the matrix will , on average , be more nested after the packing process . we leave a more careful analysis of this point for future work . in this work we develop a new nestedness estimator based on distances over the adjacency matrix of the network . we think that this estimator will be useful in the methodological discussion involving nestedness in community ecology . to make the method clearer to the reader we summarize the algorithm to find the estimator in the following sequence of steps : 1 . evaluate the row and column link distributions of the adjacency matrix of the network . 2 . pack the matrix , that is , permute rows and columns of the matrix so that the link distributions are ranked . this step defines a corner of nestedness . 3 . project the matrix onto the unit square in order to avoid distortion due to the difference between the row and column sizes of the matrix . 4 . find the manhattan distance of all occupied elements of the matrix and sum them to obtain the total distance . 5 . determine analytically the distance of the associated random matrix with the same occupancy ( on average , one unit of manhattan distance for each occupied element ) . 6 . determine computationally the distance of the associated maximally nested matrix with the same occupancy . 7 . finally , calculate the estimator from these three distances . as estimated above , the estimator lies between zero and one ; the value zero corresponds to a completely nested network and the value one to the random limit . we tested our estimator for a set of insect - plant networks and the data are summarized in table [ tab1 ] . an interesting result is that our estimator depends on the occupancy number . this result is in agreement with the intuitive idea that the matrix nestedness increases with its occupancy density . the temperature parameter is a widely used nestedness estimator in community ecology . this parameter , despite its popularity , is not well defined and presents several problems ( fischer and lindenmayer , 2002 ; rodrguez - girons and santamara , 2006 ; almeida - neto et al . , submitted ) . we are perfectly aware that our estimator will be compared with the usual temperature estimator developed by atmar and patterson ( 1993 ) . what we should do is show the strong points of our method and leave the methodological discussion to the scientific community . in this spirit we stress the strong points of our method in the following : 1 . our algorithm is based on plain geometry and metric statements ; it is simple and can be calculated with the help of a short computer program . 2 . we have two benchmarks clearly defined : the total distance of the random matrix and that of the completely nested one . 3 . we do not use any ad hoc parameter in the equation that defines the nestedness estimator . 4 . our estimator gives a number between zero and one . 5 . the visual inspection criterion of nestedness of empirical matrices agrees with our estimator . this paper opens a new perspective in the study of nestedness . we develop an original index to measure nestedness that is based on a direct metric analysis of the matrix .
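putting the steps listed above together , the sketch below reuses the ` pack ` and ` total_manhattan_distance ` helpers from the earlier sketch and adds the maximally nested benchmark and the final index ; since the defining equation was lost from the text , the normalisation ( zero for a perfectly nested matrix , one at the random limit ) is inferred from the stated limiting cases and should be read as a reconstruction , not as the authors ' exact formula .

```python
import numpy as np

def maximally_nested(n, m, k):
    """fill k cells along successive anti-diagonals closest to the (0, 0) corner."""
    b = np.zeros((n, m), dtype=int)
    order = sorted((i + j, i, j) for i in range(n) for j in range(m))
    for _, i, j in order[:k]:
        b[i, j] = 1
    return b

def nestedness_index(adjacency):
    """eta = (d - d_min) / (d_rand - d_min); eta -> 0 for a perfectly nested
    matrix and eta -> 1 at the random limit (normalisation reconstructed
    from the limits stated in the text)."""
    a = pack(adjacency)                        # helper from the previous sketch
    k = int(a.sum())
    d = total_manhattan_distance(a)            # helper from the previous sketch
    d_min = total_manhattan_distance(maximally_nested(*a.shape, k))
    d_rand = float(k)                          # analytic expectation: 1 per occupied cell
    return (d - d_min) / (d_rand - d_min)
```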
instead of considering the dispersion of elements around an artificial isocline , we estimate directly the distances of all the matrix elements from the packing corner . the nestedness of a matrix is a measure of how close the elements of the matrix are to the corner where the matrix is packed . we hope this paper will be useful in improving the understanding of nestedness in the community ecology context . the authors thank m. almeida - neto for helpful comments on the manuscript , and umberto kubota , graciela valadares and thomas lewinsohn , who made unpublished data available . the authors gratefully acknowledge the financial support of fapesp and cnpq , brazil . prado , p. i. and lewinsohn , t. m. 1994 . genus tomoplagia ( diptera , tephritidae ) in the serra do cip , mg , brazil : host records and notes of taxonomic interest . revista brasileira de entomologia 38 : 3 - 4 .
a recent problem in community ecology lies in defining structures behind matrices of species interactions . the interest in this area is to quantify the nestedness degree of the matrix after its maximal packing . in this work we evaluate nestedness using the sum of all distances of the occupied sites to the vertex of the matrix . we calculate the distance for two artificial matrices with the same size and occupancy : a random matrix and a perfect nested one . using these two benchmarks we develop a nestedness estimator . the estimator is applied to a set of real networks of insect - plant interactions . nestedness , networks , insect - plant interactions
to apply photonic technologies to data processing has been taking place for several decades . yet , their implementation in practical systems remained limited and failed to penetrate practical processors .many demonstrated optical devices are still oversized , were based on exotic materials and needed rather complicated interfacing with other electronic components .this paper takes advantage of recent developments in silicon microphotonics and novel interfacing schemes in order to realize photonic components for fast communication and signal processing .in addition to its practical potential , as presented , the ideas exposed herein present basic physical and mathematical challenges . + microring modulators were proposed and demonstrated for analog signal modulation and for simple digital modulation , such as on - off - keying ( ook ) . recently ,microring modulators were investigated for advanced digital modulation formats such as pam , qpsk and even qam . + all the works mentioned above ( excluded ) use an analog voltage signal to drive the modulator .the application of analog signals usually calls for mediating electronic circuitry , such as digital - to - analog conversion . in the current workwe promote the application of the , so - called , _ direct digital drive _ ( ddd ) method for use with microring resonators . ddd allows the utilization of only two voltage levels directly on the photonic device ; it makes the need for mediating devices , such as electrical digital - to - analog converter , unnecessary .+ in the first part of this work we present a design of an n - bit digital - to - analog converter .a digital - to - analog converter is a device that converts an n - bit digital word to a corresponding analog ( voltage ) representation .a 4-bit dac produces 16 , equally spaced analog levels and can therefore be viewed as ( a digitally controlled ) 16-pam modulator . in the second part of this work we extended a previous work for generating m - qam signals with microring modulators by utilizing the ddd approach .an all - digital m - qam modulator is thus presented .as an example , consider the 4-bit optical dac based on a microring modulator , illustrated in figure [ fig : mring_da ] .the device basic layout is similar to previously published microring resonators .a cw light of wavelength and intensity is coupled from the waveguide to the microring .the coupling coefficient between the waveguide and the microring is denoted by and the loss per round trip inside the microring is .the light inside the microring is modulated by several , in this example , independent phase shifter segments ( electrodes ) .the phase shifter can be implemented as reversed - bias pn - diode or as zig - zag pn diode , etc .hereinafter , each phase shifter will be referred to as an _electrode_. at its electrical input , the device accepts an bits digital word , denoted , where .the input word is mapped onto each of the electrodes via the digital - to - digital converter ( ddc ) .in essence , the ddc converts a -bit input to a -bit output , which , in turn , control the electrodes . note that each electrode is driven by one of two voltage levels , and , representing binary and .described as such , the ddc is basically a lookup - table that can be realized by a ( high speed ) digital memory .an optical modulator is typically characterized by the product where is the voltage whose application to an electrode of length , induces a phase shift of . without loss of generality , we set , i.e. 
as the circumference of the microring , where is the radius of the ring .let , denote the index of the electrodes ( in this example ) .assume that the length of each electrode is given by : .note that . if a voltage is applied to electrode , the induced phase shift will be : where is the effective refractive index modulation due to the applied voltage on phase shifter , is an empirical constant that accounts for both the optical confinement and the coefficient of the charge density induced refractive index change .the parameter is a binary quantity that indicates whether voltage was applied to phase shifter .the dependance between and the applied voltage is known to be a nonlinear function , .the exact relation depends on the phase shifter design .the intensity transmission of the ddd multi - electrode microring structure can be written as : where a binary matrix of dimensions holds the mapping of the -bit input word on the electrodes - evidently , a highly non - linear transmission . to maximize the output dynamic range ( dr ), the microring resonator should be in critical coupling , .this will allow the smallest possible output intensity level to approach .figure [ fig : ringresoiout ] shows the output intensity of a microring resonator for phase shifts between and .phase shifts greater than about $ ] contribute very little to the output dynamic range because of the high nonlinearity of the microring modulator .hence , the voltage will be set so as to produce the intensity .the quantity is chosen as a tradeoff between ( best achievable ) linearity and output dynamic range .smaller values of can increase the output dynamic range , but also will reduce the linearity of the device , unless additional electrodes are added. a dynamic range of about of the total available dr will keep the electrode count low and close to the input bit length . in this example set to induce a phase shift that leads to an output dynamic range of about .figure [ fig : ioutn=4m=5 ] shows the output intensity of a 4-bit dac ( 16 levels on a straight line ) , based on a microring modulator with 5 electrodes .figure [ fig : sinewaven4m5 ] shows a sinewave generated with the proposed device .the linearity of the dac can be quantified by standard figure of merits : its differential non - linearity ( dnl ) is 0.2 bits ; the integral non - linearity ( inl ) is 0.4bits and the effective number of bits ( enob ) is 3.74bits . and .,width=264 ] [ fig : sinewaven4m5],width=264 ]in the previous sections we presented the design of a dac , which is equivalent to a pam modulator , by using a single microring modulator with several electrodes driven by a digital signal . in this sectionwe use a two microring configuration to generate m - qam ( two - dimensional ) signal constellations .quadrature amplitude modulation ( qam ) is a modulation scheme that conveys data by means of modulating both the amplitude and the phase of a sinusoid carrier thus providing spectral efficiencies in excess of 2bit / symbol . in previous work , we introduced a method for generating optical m - qam signals by using two mutually decoupled microring modulators . in this configuration ,one ring was used to generate the amplitude of the desired signal ( perturbed by some deterministic phase shift ) , while the second ring was used to complement the phase required to obtain the desired complex signal . 
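a compact numerical sketch of the dac construction described above is given below . the all - pass ring intensity response and the binary - weighted electrode phases follow common textbook conventions rather than the exact expressions of the paper ( whose equations were partly lost ) , and all numerical parameter values are illustrative ; the ` choose_dac_mapping ` routine plays the role of the ddc lookup table , selecting , for each of the 2^n ideal equally spaced levels , the electrode pattern whose output intensity is closest .

```python
import numpy as np
from itertools import product

def ring_intensity(phi, t=0.98, a=0.98):
    """all-pass microring intensity transmission for round-trip phase phi;
    t is the self-coupling coefficient, a the round-trip amplitude
    transmission (critical coupling when t == a).  convention assumed."""
    return (t**2 - 2 * t * a * np.cos(phi) + a**2) / \
           (1.0 - 2 * t * a * np.cos(phi) + (t * a)**2)

def choose_dac_mapping(n_bits=4, n_electrodes=5, phi_max=0.45 * np.pi, t=0.98, a=0.98):
    """build a ddc-style lookup: enumerate all 2**M electrode patterns
    (electrode i contributing phi_max / 2**(i+1) when driven high),
    then pick the pattern whose output is closest to each of the 2**N
    ideal equally spaced intensity levels."""
    weights = np.array([phi_max / 2**(i + 1) for i in range(n_electrodes)])
    patterns = np.array(list(product((0, 1), repeat=n_electrodes)))
    pool = ring_intensity(patterns @ weights, t, a)
    targets = np.linspace(pool.min(), pool.max(), 2**n_bits)
    chosen = np.array([int(np.argmin(np.abs(pool - x))) for x in targets])
    return patterns[chosen], pool[chosen]      # lookup table and its output levels
```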
the electronic input to this modulator was an analog voltage .herein , in order to drive the two rings with digital ( two - level ) signals , we split the electrode into segments .an m - qam modulator is schematically depicted in figure [ fig : mring_qam ] .the design process is similar to the one described in section [ sec : mpamdesign ] except that we now have to generate complex signals rather than intensity levels only .the role of the modulator is to generate a specific constellation of points consisting of distinct complex points ( also refereed to as _ signals _ ) ; the constellation points can be generally formulated as follows : the proposed modulator is described by means of an example .figure [ fig : mring_qam ] depicts a 16qam optical modulator based on two micro - ring resonators equipped with multiple electrodes .the electrodes in this example are divided between the two micro - rings : electrodes on each micro - ring . ,is mapped by the ddc to a 14-bit output that drives the 14 segmented electrodes .the electrode lengths ( per micro - ring ) follow a divide - by - two sequence .the ddc , shown here for exposition purposes as two separate boxes , is actually a single memory device ., width=302 ] as input , this qam modulator accepts a 4-bit digital word , denoted .the 4-bit input word is mapped onto each of the electrodes via the ddc .thus , each electrode is driven by one of two voltage levels , or , representing binary and , respectively .more generally , let and be two vectors of dimensions and , whose elements correspond to the lengths of the electrodes of the first microring and the second microring , respectively. let be a binary matrix .row of , , holds the mapping from input onto the electrodes of the left ring .likewise , binary matrix , of dimensions , holds the mappings from each of the input digital words to the electrodes of the right ring . with this nomenclature ,the output of the modulator can be formulated as a function of the digital input : \frac{\alpha_1-t_1 exp{-j\phi_1}}{1-t_1\alpha_1 exp\left(j\phi_1\right)}\cdot % \ ] ] \frac{\alpha_2-t_2 exp{-j\phi_2}}{1-t_2\alpha_2 exp\left(j\phi_2\right ) } % \ ] ] and the phase shift are where denotes the amplitude of the optical field entering the modulator .the geometrical structure and the loss and coupling parameters are set in a similar manner to the description in .the first ring is designed to work in critical coupling for which in order to generate the largest span of amplitudes .the second microring is designed to work in under - coupling regime , , to act as a phase shifter with minimum amplitude loss . by applying all possible combinations of the digital words , a finite pool of points , of maximum cardinality , can be generated .as an example , a pool of ( green ) points is shown in figure [ fig:16qam_n12=7 ] for . from this large pool of possible points , one can choose a finite set of points that form a constellation for digital optical communications , -qam in this example .an ideal set of points that constitute a -qam constellation , as suggested by eq .[ eq : target_signals ] with , is portrayed by the red triangles in figure [ fig:16qam_n12=7 ] .out of the green pool of signal points , we first select the signals that provide the best match for the ideal points .the selected set of best points amounts to a mapping between digital input values and the corresponding s , and is executed by the ddc . 
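the constellation - selection step lends itself to a short numerical sketch : enumerate the pool of complex output points reachable with the two digitally driven rings , pick the pool point nearest to each ideal 16 - qam symbol , and quantify the residual mismatch with the evm . the field - transmission convention , the binary - weighted electrode phases , the constellation scaling and the rms normalisation of the evm are all assumptions made for illustration , and every numerical parameter is illustrative rather than taken from the paper .

```python
import numpy as np
from itertools import product

def ring_field(phi, t, a):
    """all-pass ring complex field transmission (sign convention assumed)."""
    return (t - a * np.exp(-1j * phi)) / (1.0 - t * a * np.exp(-1j * phi))

def qam_selection(n1=7, n2=7, m=16, phi_max=np.pi,
                  t1=0.98, a1=0.98, t2=0.90, a2=0.995):
    """ring 1 (near critical coupling) mainly sets the amplitude, ring 2
    (under-coupled) mainly the phase.  returns the ideal symbols, the
    selected pool points and the evm in db (rms-normalised, an assumed
    convention)."""
    w1 = np.array([phi_max / 2**(i + 1) for i in range(n1)])
    w2 = np.array([phi_max / 2**(i + 1) for i in range(n2)])
    pats = np.array(list(product((0, 1), repeat=n1 + n2)))
    pool = ring_field(pats[:, :n1] @ w1, t1, a1) * ring_field(pats[:, n1:] @ w2, t2, a2)

    side = int(np.sqrt(m))
    levels = np.arange(side) - (side - 1) / 2.0
    ideal = np.array([x + 1j * y for x in levels for y in levels])
    ideal *= 0.8 * np.max(np.abs(pool)) / np.max(np.abs(ideal))   # crude scaling into the pool

    selected = np.array([pool[np.argmin(np.abs(pool - s))] for s in ideal])
    err = np.sqrt(np.mean(np.abs(selected - ideal)**2))
    ref = np.sqrt(np.mean(np.abs(ideal)**2))
    return ideal, selected, 20.0 * np.log10(err / ref)
```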
.the evm obtained is .,width=302 ] as can be seen in the figure , the selected points are not identical to the ideal points , thus producing an error . to quantify this error, we employ the error vector magnitude ( evm ) measure . for the above examplethe evm is .note that the evm can be further reduced by increasing the number of electrodes in each microring .such an increase will allow generating a denser pool of points from which one can select a desired set of constellation points with higher accuracy .table [ tbl : qam_evm ] presents the obtained evm for various constellations and varying number of electrodes .it shows the evm for 16-qam with and electrodes .the difference between these configurations is .it can be seen that the inherent nonlineariy of the microring leads to a more involved implementation of m - qam modulators compared to m - pam modulators .the table also presents results for 64-qam and 256qam .while for 64qam we achieve evm better than -30db with electrode configuration , 256qam requires one additional electrode in the amplitude - related modulator ring to achieve such evm .evm as a function of number of electrode [ cols="^,^,^,^",options="header " , ] the number of distinct electrode segments that can be effectively mounted on a single microring is obviously limited .this , however , does not limit the proposed technology , as the original configuration can be augmented by additional rings with additional electrodes .thus for example , rather than generating -qam with and electrodes on each ring , one can utilize rings with electrodes on each ring : two of the rings will act as amplitude modulators while the other two will act as phase modulators , as depicted in figure [ fig : microring_qam_p=2 ] . for the same parameters as above , with the new configuration of microrings , evm of is achieved .the practical implementation of the ddd modulators is not obvious .next , we briefly discuss issues that are associated with the implementation of the above devices in silicon photonic technology .silicon has the potential of full integration of optics and electronics either in a monolithic or a hyrbrid process .the first problem that arises with silicon is that the plasma dispersion effect ( which is used widely to make phase shifters for silicon modulators ) induces a voltage dependent loss .meaning , it is impossible to make a lossless phase shifter .however , this problem can be alleviated by the ddd approach .more specifically , the problem is that the vdl ( voltage dependent loss ) will reduce the average optical power and hence `` compress '' the signal pool .the solution is simply to configure the ddc to map ( shrink ) the required constellation to a lower average optical power .the second problem that might arise is that in order to generate a phase shifter to induce phases between and , a large voltage signal will be needed .this problem can be solved by using the multiple microring configuration as discussed in the previous section .for example , instead of using a single ring to induce the phase shift , we can use multiple modulators to induce the phase we are targeting .we presented application of the direct digital drive approach to microring resonators .we showed that this approach enables one to realize a digitally driven optical devices , that include compact m - pam modulator and dacs .extending the approach to more then one microring , enables the generation of optical m - qam constellations .we showed that it is possible to generate any constellation 
order either by using a large number of electrodes , or by using a small number of electrodes but employing additional microring modulators . p. dong , c. xie , l. l. buhl , and y .- k . chen , `` silicon microring modulators for advanced modulation formats , '' in _ optical fiber communication conference _ , optical society of america , 2013 , paper ow4j.2 . y. ehrlichman , o. amrani , and s. ruschin , `` improved digital - to - analog conversion using multi - electrode mach - zehnder interferometer , '' _ j. lightw . technol . _ , vol . 26 , no . 21 , pp . 3567 - 3575 , nov. 1 , 2008 .
the method of direct digital drive is applied to a microring resonator . the microring resonator is thus controlled by a segmented set of electrodes each of which is driven by binary ( digital ) signal . digital linearization is performed with the aid of digital memory lookup table . the method is applied to a single microring modulator to provide an m - bit digital - to - analog converter ( dac ) , which may also be viewed as an m - level pulse amplitude modulator ( m - pam ) . it is shown , by means of simulation , that a 4-bit dac can achieve an effective number of bits ( enob ) of 3.74bits . applying the same method for two rings , enables the generation of two - dimensional optical m - qam signals . it is shown , by means of simulation , that a 16-qam modulator achieves an evm better than -30db . micoring modulators , optical digital - to - analog converter , optical m - pam , optical m - qam .
[ [ sketch - of - the - proof - of - the - generality . ] ] sketch of the proof of the generality. + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + we found the two commuting hamiltonians and in the -qubit model depicted in fig .[ fig : model ] ( example a ) , whose projected counterparts and with the projection ( [ eqn : p1 ] ) of the structure generate .this single example makes us sure that it is the case for almost all systems . to see this ,let us formalize in the following way .take of example a again .we extract the relevant sector specified by from each element of and call it ( ) , which is a matrix with dimension and is a function of and . together with the identity matrix , the matrices form .this fact can be mathematically expressed as follows .we `` vectorize '' each matrix to a -dimensional column vector by lining up the columns of the matrix from top to bottom , and gather the column vectors side by side to make up a matrix .then , the fact that the matrices span is expressed as . note that this determinant is also a function of and . now take a generic couple of commuting hamiltonians and of qubits , i.e. , we randomly choose their eigenvalues , and a common unitary matrix which diagonalizes and simultaneously . inserting this couple of hamiltonians , the determinant is , by construction , a polynomial in the parameters ( ) .we already know that this polynomial is non - vanishing for the parameter set corresponding to the above specific choice of the hamiltonians and .therefore , the determinant is a non - zero polynomial in the parameters , implying that its roots are of measure zero in the parameter space . in other words , for almost all parameters , the determinant is non - vanishing , and in turn , almost all couples of commuting hamiltonians become universal , generating , by the projection on the first qubit .this argument can be generalized to any rank projection , and also to any qubit amplitude damping channel in the strong - damping limit .the continuous projection required for the qubit - chain model depicted in fig . [ fig : model ] can be induced by an amplitude damping channel acting on qubit 1 .in fact , consider the master equation with a single lindblad operator which describes the decay of qubit 1 from to , where is associated with the projection in ( [ eqn : p1 ] ) and is the state orthogonal to . solving the system dynamics under the master equation yields + e^{-\gamma t/2}[p_1\rho(0)q_1+q_1 \rho(0 ) p_1]$ ] , where , and represents the partial trace over qubit 1 .thus , in the limit , we have , and qubit 1 is projected into the state with probability 1 . if this process takes place on a time scale much shorter than any other time scales involved in the dynamics or the controls , then it is effective in inducing a quantum zeno effect on qubit 1 , and it is essentially equivalent to repeating projective measurements .d. deutsch , `` physics , philosophy and quantum technology . '' in _ proceedings of the sixth international conference on quantum communication , measurement and computing _ , edited by j. h. shapiro and o. hirota ( rinton press , princeton , nj , 2003 ) .p. zoller , th .beth , d. binosi , r. blatt , h. briegel , d. bruss , t. calarco , j. i. cirac , d. deutsch , j. eisert , a. ekert , c. fabre , n. gisin , p. grangiere , m. grassl , s. haroche , a. imamoglu , a. karlson , j. kempe , l. kouwenhoven , s. krll , g. leuchs , m. lewenstein , d. loss , n. ltkenhaus , s. massar , j. e. mooij , m. b. plenio , e. 
polzik , s. popescu , g. rempe , a. sergienko , d. suter , j. twamley , g. wendin , r. werner , a. winter , j. wrachtrup , and a. zeilinger , `` quantum information processing and communication . ''j. d _ * 36 * , 203228 ( 2005 ) .k. stannigel , p. hauke , d. marcos , m. hafezi , s. diehl , m. dalmonte , and p. zoller , `` constrained dynamics via the zeno effect in quantum simulation : implementing non - abelian lattice gauge theories with cold atoms . '' _arxiv:1308.0528 [ quant - ph ] _
we show that mere observation of a quantum system can turn its dynamics from a very simple one into a universal quantum computation . this effect , which occurs if the system is regularly observed at short time intervals , can be rephrased as a modern version of plato s cave allegory . more precisely , while in the original version of the myth , the reality perceived within the cave is described by the projected shadows of some more fundamental dynamics which is intrinsically more complex , we found that in the quantum world the situation changes drastically as the ` projected ' reality perceived through sequences of measurements can be more complex than the one that originated it . after discussing examples we go on to show that this effect is generally to be expected : almost any quantum dynamics will become universal once ` observed ' as outlined above . conversely , we show that any complex quantum dynamics can be ` purified ' into a simpler one in larger dimensions . in the last 30 years the possibility of using quantum effects to develop an alternative approach to engineering has emerged as a realistic way to improve the efficiency of computation , communication and metrology . at the very core of this revolutionary idea , the possibility of designing arbitrary dynamics of quantum systems without spoiling the rather fragile correlations characterizing them is crucial . what experimentalists typically do is to apply sequences of control pulses ( e.g. , by sequentially switching on and off different electromagnetic fields ) to steer quantum systems . in the quantum world , however , there is another option associated with the fact that the measurement process itself can induce a transformation on a quantum system . in this context an intriguing possibility is offered by the quantum zeno effect . it forces the system to evolve in a given subspace of the total hilbert space by performing frequent projective measurements ( zeno dynamics ) , without the need of monitoring their outcomes ( _ non - adaptive _ feedback strategy ) . several attempts have already been discussed to exploit such effects for quantum computation , see e.g. , . in this work we show that the constraint imposed via a zeno projection can in fact _ enrich _ the dynamics induced by a series of control pulses , allowing the system of interest to explore an algebra that is _ exponentially larger _ than the original one . in particular this effect can be used to turn a small set of quantum gates into a universal set . furthermore , exploiting the non - adaptive character of the scheme , we show that this zeno enhancement can also be implemented by a non - cooperative party , e.g. , by noisy environment . by the zeno effect , the dynamics of the system is forced to evolve in a given subspace of the total hilbert space . one might therefore think that the constrained dynamics is less `` rich '' than the original one . this naive expectation will turn out to be incorrect . these surprising aspects of constraints bear interesting similarities to einstein s precepts , according to which one can give a geometric description of complicated motion . the key geometrical idea is to embed the motion of the system of interest in a larger space , obtaining a forceless dynamics taking place along straight lines . the real dynamics , with interactions and potentials , is then obtained by projecting the system back onto the original space . clearly , the constrained dynamics is more _ complex _ than the higher - dimensional linear one . 
in classical mechanics these reduction procedures , linking a given dynamical system with the one constrained on a lower - dimensional manifold , have been extensively studied as an effective method for integrating the dynamics . in particular , different classes of completely integrable systems arise as reductions of free ones with higher degrees of freedom . notable examples include the three - dimensional kepler problem , the calogero - moser model , toda systems , kdv and other integrable systems . the moral is that in classical mechanics , by constraining the dynamics , one often obtains an increase in complexity . here we find a quantum version of this intriguing effect , which exploits the inherent non - commutative nature of quantum mechanics . the main idea is that even if two hamiltonians and are commutative , their projected counterparts can be non - commutative =0 \quad \not\rightarrow\quad [ php , ph'p]=0,\ ] ] where is a projection . due to this fact we show that when passing from a set of control hamiltonians to their projected versions one can induce an enhancement in the complexity of the system dynamics which can be exponential , to the extent that it can be used to transform a small number of quantum gates which are not universal into a universal set capable of performing arbitrary quantum - computational tasks . we find that this effect is completely general and happens in almost all systems . conversely , we prove that any complex dynamics can be viewed as a simple dynamics in a larger dimension , with the original dynamics realized as a projected dynamics . what is interesting is that , in contrast to the classical case , the constraint which transforms a hamiltonian into can be imposed not by force but by a simple projective measurement whose outcomes need not be recorded ( the process being effectively equivalent to the one associated with an external noise that is monitoring the system ) . the underlying mechanism can be rephrased as a modern version of plato s cave allegory . in the original version of the myth , the reality perceived within the cave is described by the projected shadows of some more fundamental dynamics which is intrinsically more complex . in the quantum world , however , the situation changes drastically and the _ projected _ reality perceived within plato s cave can be more complex than the one that has originated it . . ( b ) we perform projective measurements at regular time intervals during the control to check whether or not the state of the system belongs to a given subspace of the global hilbert space . ( c ) in the limit of infinitely frequent measurements ( zeno limit ) , the system is confined in the subspace , where it evolves unitarily with the zeno hamiltonians ( zeno dynamics ) . the zeno dynamics can explore the subspace more thoroughly than the purely unitary control without measurement . ] [ [ unitary-control-vs.zeno-dynamics . ] ] unitary control vs. zeno dynamics. + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + in controlled quantum dynamics , two hamiltonians can commute , but their projected versions need not . this contains , in embryo , the simple idea discussed in the introductory paragraph : interaction can arise from constraints ( in this case projections ) . to describe this mechanism it is worth reminding a few facts about the quantum control theory and the quantum zeno effect . 
in a typical quantum control scenario it is assumed that the system of interest ( say the quantum register of a quantum computer , or the spins in an nmr experiment ) can be externally driven by means of sequences of unitary pulses , activated by turning on and off a set of given hamiltonians [ fig . [ fig : zenodynamics](a ) ] . if no limitations are imposed on the temporal durations of the pulses , it is known that by properly arranging sequences composed of one can in fact force the system to evolve under the action of arbitrary transformations of the form with the anti - hermitian operators being elements of the real lie algebra formed by the linear combinations of and their iterated commutators , ] , etc . full controllability is hence achieved if the dimension of is large enough to permit the implementation of all possible unitary transformations on the system , i.e. , with being the dimension of the system . suppose now that between the applications of consecutive pulses we are allowed to perform von neumann s projective measurements [ fig . [ fig : zenodynamics](b ) ] , aimed at checking whether or not the state of the system belongs to a given subspace of the global hilbert space . specifically , we will assume that the system is originally initialized in while the various are infinitesimal transformations . under this condition , the zeno effect can be invoked , in the limit of infinitely frequent measurements , to ensure that with high probability the system will be always found in after each measurement , following a trajectory described by the effective hamiltonians , with the projection onto [ fig . [ fig : zenodynamics](c ) ] . in other words , alternating the control pulses under the frequent applications of the projection the sequence can be effectively transformed into a rotation which on is defined by the unitary operator where . accordingly the real lie algebra now replaces in defining the space of unitary transformations which can be forced upon the system . the fundamental result of this paper is to observe that by properly choosing the system setting , the dimension of can be made larger than , to the extent that the former can be used to fully control the system on , in spite of the fact that the latter is not capable of doing the same . to better elucidate the idea we find it useful to introduce a simple example , where the system is identified with a two - qubit system with control hamiltonians ( we hereafter use to denote pauli operators , and write tensor products as strings , with systems being specified with subscripts and omitting the identity operators ) . notice that their commutator vanishes =0 ] , which makes the dimension of equal to 3 ( the situation is schematically illustrated in fig . [ fig : schematic - of - the ] ) . this in particular implies that can now be used to fully control the system in the subspace ( which is isomorphic to the hilbert space of qubit 2 ) , a task that could not be fulfilled with the original . [ [ zeno - yields - full - control . ] ] zeno yields full control. + + + + + + + + + + + + + + + + + + + + + + + + + + the example presented in the previous paragraph clarifies that the constrained dynamics can be more complex than the original unconstrained one . the natural question arises : how big can such a difference become ? to what extent can the presence of a measurement process increase the complexity of dynamics in quantum mechanics ? 
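since the concrete operators of the two - qubit example did not survive in the text , the following sketch uses a plausible reconstruction : two commuting two - qubit hamiltonians h1 = z1z2 and h2 = x1x2 , and a zeno projection of qubit 1 onto a state that is an eigenstate of neither x nor z ( the choice of state is an assumption ) . the numerics confirm that the hamiltonians commute while their projected counterparts do not .

```python
import numpy as np

X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.diag([1.0, -1.0]).astype(complex)
I2 = np.eye(2, dtype=complex)

def comm(a, b):
    return a @ b - b @ a

# two commuting control hamiltonians (reconstructed example)
H1 = np.kron(Z, Z)
H2 = np.kron(X, X)
print(np.allclose(comm(H1, H2), 0))        # True: [H1, H2] = 0

# projection of qubit 1 onto |eta> = cos(pi/8)|0> + sin(pi/8)|1>,
# an eigenstate of neither X nor Z (assumed choice)
eta = np.array([np.cos(np.pi / 8), np.sin(np.pi / 8)], dtype=complex)
P = np.kron(np.outer(eta, eta.conj()), I2)

PH1P, PH2P = P @ H1 @ P, P @ H2 @ P
print(np.allclose(comm(PH1P, PH2P), 0))    # False: the projected hamiltonians do not commute
```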
in the following we provide a couple of examples in which the enhancement in complexity is exponential . while the unprojected dynamics are only two or three dimensional , the projected ones are _ univeral for quantum computation_. this shows that the simple ingredient of projective measurement can strongly influence the complexity of dynamics . -qubit model described in example a of the text . straight edges represent the heisenberg interactions , while the triple edge represents the three - body interaction among qubits 13 . the red part in the upper figure corresponds to acting on qubits 1 and 2 , while the remainder including a local term on qubit 3 corresponds to acting on all the qubits . the zeno projection on qubit 1 transforms the upper hamiltonians to the lower model , where the state of qubit 1 is frozen , while we are left with a heisenberg chain with the local term and a control on qubit 2 . the lie algebra of the upper system is only two dimensional , while the lower allows us to perform full control over the system apart from the frozen qubit 1 . ] example a : consider qubits ( fig . [ fig : model ] , upper ) , the first two of which are manipulated via the control hamiltonians , and complement it with consisting of the nearest - neighbor heisenberg interactions involving all the qubits but the first two , together with a coupling term acting on the first three qubits and a local term on the third , i.e. , due to the anticommutation of the pauli operators , one can easily verify that the two hamiltonians and commute with each other =0 ] , and the projection by yields and , which act as and in the original space before the extension . we can furthermore apply this procedure iteratively to larger sets of hamiltonians , which means that any complex dynamics can be thought of as a simple one taking place on a larger space , with the complexity arising only from projections . [ [ local - noise - yields - full - control . ] ] local noise yields full control. + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + in a classical setting the measurement process is typically perceived as a passive resource that enforces control only when properly inserted in a feedback loop . as explicitly shown by our analysis , and more generally by the results of refs . , this is no longer the case in quantum mechanics : measurements can indeed be used to directly drive a quantum system even in the absence of a feedback mechanism . interestingly enough , for the control scheme we are analyzing here , measurement is not the only way to implement the required projection . the same effect is attainable by fast unitary kicks and by strong continuous coupling . furthermore , owing to the non - adaptive character of the procedure ( we never need to use the measurement outcomes to implement the control ) , it is also achievable by tailoring a strong dissipative process . the latter option is of particular interest for us since , along the line of refs . , it points out the possibility of taking advantages of the interaction of the system of interest with an external environment , which are typically considered detrimental for quantum processing . specifically , for the qubit chain analyzed above ( example a ) , one can show that the action of a simple amplitude damping channel can raise the dynamical complexity to the level of universal quantum computation . 
in fact , the decay process bringing qubit 1 to the state can act as a projection ( see appendix ) , and in the strong - damping limit it is effective in inducing a quantum zeno effect on qubit 1 , yielding the full lie algebra in the rest of the qubit chain . moreover , due to the same reasoning as the one outlined above , almost all qubit amplitude damping channels induce exponential complexity . [ [ conclusions . ] ] conclusions. + + + + + + + + + + + + + the schemes presented in this work are not meant to be a practical suggestion to implement quantum computers , because the implementation of a control scheme using heisenberg chains would probably be inefficient ( note however ) . instead they should be viewed as a proof of the fact that generally adding a simple projection or noise to a dynamical system can profoundly modify the global picture and provoke a drastic _ increase _ in complexity . this bears some similarities to measurement - based quantum computation , although there are important differences , in that i ) one does not require the system to be initialized in a complex state , ii ) the measurement is constant , and iii ) its outcome is not used adaptively in future computations . our results can be presented as a quantum version of the plato s cave myth , where the projection plays a more active role , making the dynamics of the associated _ quantum shadows _ as complex as universal quantum computation ; and , conversely through hamiltonian purification , a non - commutative dynamics simple . [ [ acknowledgements . ] ] acknowledgements. + + + + + + + + + + + + + + + + + + this work was partially supported by prin 2010llkjbx on `` collective quantum phenomena : from strongly correlated systems to quantum simulators , '' by a grant - in - aid for scientific research , jsps , by the erasmus mundus - beam program , by a grant for excellent graduate school from the ministry of education , culture , sports , science and technology ( mext ) , japan , and by a waseda university grant for special research projects .
let be a bounded polygonal or polyhedral domain with lipschitz boundary in .consider the following fourth - order singularly perturbed elliptic equation with boundary conditions or where , is the standard laplace operator , and denotes the outer normal derivative on . in two dimensional cases ,the boundary value problems ( [ pde1])-([boundar condition1 ] ) and ( [ pde1])-([boundary condition2 ] ) arise in the context of linear elasticity of thin bucking plate with representing the displacement of the plate .the dimensionless positive parameter , assumed to be small ( i.e. , ) , is defined by where , is the thickness of the plate , is the young modulus of the elastic material , is the poisson ratio , is the characteristic diameter of the plate , and is the absolute value of the density of the isotropic stretching force applied at the end of the plate . in three dimensions , problems( [ pde1])-([boundar condition1 ] ) and ( [ pde1])-([boundary condition2 ] ) can be a gross simplification of the stationary cahn - hilliard equations with being the length of the transition region of phase separation .conforming , nonconforming , and mixed finite element methods for fourth order problem have been extensively studied . however , its _ a posteriori _error estimation is a much less explored topic . even for the kirchhoff plate bending problem , the finite element _ a posteriori _ error analysis is still in its infancy . in 2007 , beir et al . developed an estimator for the morley element approximation using the standard technique for nonconforming element .later , hu et al . improved the methods of by dropping two edge jump terms in both the energy norm of the error and the estimator , and by dropping the normal component in the estimators of .therefore , a naive extension of the estimators in to the current problem may probably not be robust in the parameter .designing robust _ a posteriori _ estimators is challenging , especially for singularly perturbed problems , since constants occurring in estimators usually depend on the small perturbation parameter .this motivates us to think about the question : what method and norm are suitable for the singularly perturbed fourth - order elliptic problem ? in the literature , is a widely used measure for the primal weak formulation .we recall _ a priori _ estimates in for boundary condition ( [ boundary condition2 ] ) and convex domain : hereafter , we use to denote a generic constant independent of with different value at different occurrence .this leads to multiply both sides of ( [ pde1 ] ) by , and then integrate over . using integration by parts and boundary condition ( [ boundar condition1 ] ) or ( [ boundary condition2 ] ) , we have from the poincar inequality that as a consequence , .this suggests that the two components of are unbalanced with respect to if .furthermore , if we set , then problem ( [ pde1 ] ) is written as note that has boundary layer , but usually does not have one . thus , approaches to as , which fails to describe the layer of .an observation of the two decoupled equations and suggests that the two measures and can portray the layer of and the first and second derivatives of . 
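the decoupling just described is easy to visualize in one space dimension. the sketch below is an illustration we add here (a 1d analogue with f = 1, not the paper's test case): it solves the split system -ε²ψ'' + ψ = f and -u'' = ψ on (0,1) with homogeneous dirichlet data by a standard second-order finite-difference scheme. the intermediate variable ψ develops boundary layers of width O(ε), while the recovered u stays smooth, exactly as stated above.

import numpy as np

eps, n = 1e-2, 1000
h = 1.0 / n

# second-difference matrix on the interior nodes (homogeneous dirichlet conditions)
D2 = (np.diag(np.ones(n - 2), -1) - 2.0 * np.eye(n - 1) + np.diag(np.ones(n - 2), 1)) / h**2
f = np.ones(n - 1)

# step 1: reaction-diffusion equation for psi ~ -u'' (boundary layers of width ~eps)
psi = np.linalg.solve(-eps**2 * D2 + np.eye(n - 1), f)
# step 2: poisson equation recovering u, which has no layer
u = np.linalg.solve(-D2, psi)

print("psi at x = 0.5 :", psi[n // 2 - 1])   # ~1 away from the boundary
print("psi at x = h   :", psi[0])            # strongly suppressed inside the O(eps) layer
print("max of u       :", u.max())           # smooth profile, ~1/8, no layer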
from ( [ maysp ] ) , we have notice that if , then and are balanced with respect to for the boundary condition ( [ boundary condition2 ] ) .these inspire us to think about the mixed finite element method for the problem ( [ pde1 ] ) and the two aforementioned measures .however , the mixed finite element method for the problem ( [ pde1 ] ) is a much less explored topic , since there exist some special problems such as the fourth order problem , where attempts at using the results of brezzi and babuka were not entirely successful since not all of the stability conditions were satisfied , cf . and the reference therein . to overcome this difficulty , falk et al . developed abstract resultsfrom which optimal error estimates for these ( biharmonic equation ) and other problems could be derived ( ) .however , it is not easy to extend the results of to the problem ( [ pde1 ] ) , because of the existence of an extra term and the singular perturbation parameter .recently , for a fourth order reaction diffusion equation , the error estimates of its mixed finite element method was derived in .we refer to about the _ a posteriori _ estimation of ciarlet - raviart methods for the biharmonic equation . in this work ,our goal is to develop robust residual - type _ a posteriori estimators for a _ mixed finite element method for the problem ( [ pde1 ] ) in the two aforementioned measures .the main difficulty lies in the fact that the boundary condition ( [ boundary condition2 ] ) does not include any information on the immediate variable . in order to overcome this difficulty, we develop a novel technique to analyze residual - based _ a posteriori _ error estimator .the key idea is to replace a function ( such that ) without boundary restriction by a function with boundary restriction , which catches at least times " of in the -weighted energy norm ( see lemma 3.3 below ) . combining this novel design with standard tools , we develop uniformly robust residual - type _ a posteriori _ estimators with respect to the singularly perturbed parameter in the two aforementioned measures .we refer to the reference on balanced norm for mixed formulation for singularly perturbed reaction - diffusion problems .the rest of this paper is organized as follows : in section 2 , we introduce mixed weak formulations and some notations , and prove an equivalent relation between the primal weak solution and the weak solution determined by its mixed formulation .some preliminary results are provided in section 3 .residual - type _ a posteriori _ estimators are developed and proven to be reliable in section 4 .an efficient lower bound is proved in section 5 . in section 6 ,numerical tests are provided to support our theory .setting , and employing the boundary condition ( [ boundar condition1 ] ) , we attain the ciarlet - raviart mixed problem : similarly , using the boundary condition ( [ boundary condition2 ] ) , we arrive at the ciarlet - raviart mixed formulation : for any bounded open subset of with lipschitz boundary , let and be the standard lebesgue and sobolev spaces equipped with standard norms and , ( see for details ) .note that .we denote the semi - norm in . 
similarly , denote and the inner products on and , respectively .we shall omit the symbol in the notations above if .the weak formulation of problem reads : find such that the weak formulation of the problem reads : find such that note that , by the lax - milgram lemma , both systems ( [ discrete mixed formulation p1 ] ) and ( [ discrete mixed formulation p2 ] ) have a unique solution .in fact , by regularity theory for elliptic problems , if is convex and , then and . thus ( [ discrete mixed formulation p1 ] ) has solution , which is unique since its homogeneous system has only one solution satisfying .similar conclusion can be drawn for the system ( [ discrete mixed formulation p2 ] ) .it is well known that the primal weak formulation of ( [ pde1])-([boundar condition1 ] ) is : find such that and that the one of ( [ pde1])-([boundary condition2 ] ) is : find such that the classical results of pdes imply that ( [ may1 ] ) and ( [ may2 ] ) have unique solutions ( see ) .a natural question is whether the determined by ( [ discrete mixed formulation p1 ] ) ( or ( [ discrete mixed formulation p2 ] ) ) is the solution of ( [ may1 ] ) ( or ( [ may2 ] ) ) . in , for biharmonic equation on a reentrant corners polygon , a counterexample is shown .the following theorems answer this question .[ mayy ] the solution of ( [ may1 ] ) and the determined by ( [ discrete mixed formulation p1 ] ) are identical if and only if .the necessity is trivial . if the solution of ( [ discrete mixed formulation p1 ] ) is such that , then we have from the second equation notice that is dense in .it follows that integration by parts yields which implies we obtain from ( [ may3 ] ) and the second equation of ( [ discrete mixed formulation p1 ] ) that in terms of ( [ may1 ] ) , we proved that is the solution of ( [ may1 ] ) .the solution of ( [ may2 ] ) and the determined by ( [ discrete mixed formulation p2 ] ) are identical if and only if .the necessity is trivial . from the second equation of ( [ discrete mixed formulation p2 ] ) , integration by parts , and variational principle , we know that the neumann boundary condition on is automatically satisfied . following the proof of theorem [ mayy ] , we know that if the solution of ( [ discrete mixed formulation p2 ] ) is in , then is the solution of ( [ may2 ] ) .let be a shape regular partition of into triangles ( tetrahedra for ) or parallelograms ( parallelepiped for ) satisfying the angle condition , i.e. , there exists a constant such that where .let be the space of polynomials of total degree at most if is a simplex , or the space of polynomials with degree at most for each variable if is a parallelogram / parallelepiped .define the finite element spaces and by and respectively .we introduce the mixed finite element method for problem : find such that for problem , the mixed problem reads : find such that by standard arguments , problem ( [ mixed from p1 ] ) possesses a unique solution provided there exist functions and satisfying then is the trivial solution to the system . in fact , taking in the first equation of ( [ mixed form p11 ] ) , one gets . setting in the second equation of ( [ mixed form p11 ] ) , one obtains .similarly , it is verified that problem ( [ mixed form p2 ] ) has also a unique solution .we define a measure of the error between the exact solution and the numerical solution by where is the standard energy norm of the numerical error . 
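the displayed weak formulations above were lost in extraction. one standard way of writing the ciarlet-raviart system for the clamped case ([pde1])-([boundar condition1]), recorded here as our own reconstruction (the paper's sign convention and test spaces may differ in inessential details), is: with \psi=-\triangle u, find \psi\in h^{1}(\omega) and u\in h_{0}^{1}(\omega) such that

\varepsilon^{2}(\nabla\psi,\nabla v)+(\psi,v)=(f,v)\quad \text{for all } v\in h_{0}^{1}(\omega), \qquad (\psi,\varphi)-(\nabla u,\nabla\varphi)=0\quad \text{for all } \varphi\in h^{1}(\omega).

for the simply supported case ([pde1])-([boundary condition2]) the condition \triangle u=0 on the boundary becomes an essential condition \psi=0, both fields are sought in h_{0}^{1}(\omega), and the two equations decouple, consistently with the discussion following ([maysp]).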
in this paper , we aim at robust _ a posterior _ error estimators for the numerical errors , , and .we next introduce some notations that will be used later .we denote the set of interior sides ( if ) or faces ( if ) in , the set of sides or faces of , and the union of all elements in sharing at least one point with . for a side or face in , which is the set of element sides or faces in , let be the diameter of , and be the union of all elements in sharing . for a function in the broken sobolev space" , we define |_{e } : = ( v|_{t_{+}})|_{e}-(v|_{t_{-}})|_{e} ] , denote and the two bubble functions defined in , and a continuation operator introduced in by which maps polynomials onto piecewise polynomials of the same degree .[ year3 ] the following estimates hold for all ( the set of polynomials of degree at most ) and furthermore , for , set .then there hold the following estimates for all and . following the line of the proof of lemma 3.3 in , we attain ( [ year4])-([year9 ] ) .for all , define and the elementwise indicators of and , respectively , by \big\|_{e}^{2 } \big\}^{1/2}\ ] ] and \big\|_{e}^{2 } \big\}^{1/2}.\ ] ] let and be the solutions to ( [ discrete mixed formulation p1 ] ) and ( [ mixed from p1 ] ) , respectively . then there exist positive constants , and , independent of the mesh - size function and , such that from the definition of the measure , ( [ year12 ] ) follows from ( [ year10 ] ) and ( [ year11 ] ) .we need to prove ( [ year10 ] ) and ( [ year11 ] ) .we have from the first equations of ( [ discrete mixed formulation p1 ] ) and ( [ mixed from p1 ] ) that for any , let be the clemnt interpolation of in , i.e. , . applying integration by parts and ( [ year16 ] ) , we get \big\|_{e } \|\varphi-\varphi_{h}\|_{e } \big\}.\end{aligned}\ ] ] notice that the first estimate ( [ year10 ] ) follows from a combination of ( [ year18 ] ) , ( [ year17 ] ) , and ( [ year1])-([year2 ] ) .we next prove ( [ year11 ] ) . from the second equation of ( [ discrete mixed formulation p1 ] ) and ( [ mixed from p1 ] ) , we get similarly , we have , for any and , \big\|_{e}\|v - v_{h}\|_{e}\big\ } \\ \label{year20 } & + \|\psi-\psi_{h}\|\|v\|.\end{aligned}\ ] ] recall the following estimates on clemnt interpolation ( cf . ) : and for any , the poincar inequality implies a combination of ( [ year20 ] ) and ( [ year20+])-([year21 ] ) yields \big\|_{e}^{2 } \big ) + \|\psi-\psi_{h}\| \big\ } |v|_{1}.\end{aligned}\ ] ] notice that ) , ( [ year21 + ] ) , and ( [ year10 ] ) yields this completes the proof of ( [ year11 ] ) .let and be the solutions to ( [ discrete mixed formulation p2 ] ) and ( [ mixed form p2 ] ) , respectively .if , then there exist positive constants , and , independent of the mesh - size function and , such that we have from for satisfying , from the proofs of lemmas [ julylemma1 ] and [ julylemma3 ] , there exist and , such that let be the clemnt interpolation of in . from the first equation of ( [ discrete mixed formulation p2 ] ) and ( [ mixed form p2 ] ), we have from ( [ year25 ] ) , we have repeating the proof of ( [ year17 ] ) , and applying ( [ year1])-([year2 ] ) , we have \big\|_{e}\|\tilde{v}-\tilde{v}_{h}\|_{e}\big\}\vspace{2mm}\\ & \ & \leq\displaystyle c\big\{\sum\limits_{t\in\mathcal{t}_{h}}\eta_{\psi , t}^{2}\big\}^{1/2}\|\tilde{v}\|_{\mathcal{e}}. 
\end{array}\ ] ] using the triangle inequality and ( [ year24 ] ) , we have a combination of ( [ year25 + ] ) , ( [ year24 ] ) , ( [ year26 ] ) , and ( [ year27 ] ) yields from ( [ year23 ] ) and ( [ year28 ] ) , we obtain which leads to the desired estimate ( [ year13 ] ) . repeating the proof of ( [ year11 ] ) and ( [ year12 ] ) , we obtain ( [ year14 ] ) and ( [ year15 ] ) . the condition is usually satisfied , since is the residual , which does nt vanish in usual . here is the piecewise laplacian of .in this section , we analyze the efficiency of the _ a posteriori _ error estimates developed in section 4 . to avoid the appearance of high order term, we assume that is a piecewise polynomial .[ year29 ] for all , there hold and we first prove ( [ year30 ] ) . to this end , let .recall the bubble function introduced in section 3 . from ( [ year4 ] ) , integration by parts , and ( [ year6 ] ), we have the desired estimate ( [ year30 ] ) follows .we next prove ( [ year31 ] ) . for convenience , denote .similarly , we have from that applying inverse estimate and ( [ year5 ] ) , we have the estimate ( [ year31 ] ) follows immediately . [ year32 ] for all , there hold \big\|_{e } \lesssim\|\psi-\psi_{h}\|_{\mathcal{e},\omega_{e}}\ ] ] and \big\|_{e } \lesssim|u - u_{h}|_{1,\omega_{e } } + h_{e}\|\psi-\psi_{h}\|_{\omega_{e}}.\ ] ] we first prove ( [ year33 ] ) . to this end , let ] and , where for .similarly , we have ,v_{e } \big)_{e},\ ] ] which leads to the following estimate : ,v_{e } \big)\vspace{2mm}\\ & = & ( \psi+\triangle_{h}u_{h},v_{e})_{\omega_{e}}-(\nabla(u - u_{h}),\nabla v_{e})_{\omega_{e}}\vspace{2mm}\\ & = & ( \psi-\psi_{h},v_{e})_{\omega_{e}}+(\triangle_{h}u_{h}+\psi_{h},v_{e})_{\omega_{e}}-(\nabla(u - u_{h}),\nabla v_{e})_{\omega_{e}}\vspace{2mm}\\ & \lesssim&\|\psi-\psi_{h}\|_{\omega_{e}}h_{e}^{1/2}\|\sigma\|_{e}+h_{e}^{1/2}\|\triangle_{h}u_{h } + \psi_{h}\|_{\omega_{e}}\|\sigma\|_{e}\vspace{2mm}\\ & \ & \ + |u - u_{h}|_{1,\omega_{e}}h_{e}^{-1/2}\|\sigma\|_{e}. \end{array}\ ] ] we obtain from the above inequality that in the last step above , we employ the estimate ( [ year31 ] ) .we complete the proof of ( [ year34 ] ) .let and be the solutions to ( [ discrete mixed formulation p1 ] ) and ( [ mixed from p1 ] ) , respectively .then there exist positive constants and , independent of the mesh - size function and , such that and summing ( [ year30 ] ) and ( [ year33 ] ) over all , we obtain the first estimate .similarly , we get the second one. let and be the solutions to ( [ discrete mixed formulation p2 ] ) and ( [ mixed form p2 ] ) , respectively .then there exist positive constants and , independent of the mesh - size function and , such that and these two estimates follow from lemmas [ year29 ] and [ year32 ] .theorems 4.1 and 5.3 ( and theorem 4.2 and 5.4 ) indicate that the ratios between the upper and lower bounds , i.e. , and ( and and ) , do not depend on the singular perturbation parameter .therefore , the estimators developed in this paper are fully robust with respect to .this further implies that each component of the new measure of the error is balanced with respect to the perturbation parameter .in this section , we test our _ a posteriori _ error estimators on two model problems .note that all programs were developed by ourselves . consider problem ( [ pde1 ] ) and ( [ boundary condition2 ] ) on the unit square . we suppose the exact solution of this model has the form the function has a boundary layer , which varies significantly near . 
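the adaptive computations reported below rely on dörfler (bulk) marking: a minimal set of elements carrying a fixed fraction θ of the total estimated error is selected for refinement. the following short python sketch, with a hypothetical array of elementwise indicators, illustrates that step; it is a generic illustration, not the authors' code.

import numpy as np

def dorfler_mark(eta, theta=0.5):
    """indices of a minimal set M with sum_{T in M} eta_T^2 >= theta * sum_T eta_T^2."""
    eta2 = np.asarray(eta) ** 2
    order = np.argsort(eta2)[::-1]              # largest indicators first
    cumulative = np.cumsum(eta2[order])
    k = int(np.searchsorted(cumulative, theta * eta2.sum())) + 1
    return order[:k]

# hypothetical indicators on 8 elements
eta = np.array([0.9, 0.05, 0.3, 0.02, 0.6, 0.01, 0.1, 0.2])
print(dorfler_mark(eta, theta=0.5))   # the few largest-indicator elements, then refined by longest-edge bisection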
.lower : the mesh after 10 iterations with 478 triangles ( left ) and the mesh after 12 iterations with 1010 triangles ( right ) , generated by the elementwise indicator . here and for all plots.,width=432 ] our initial mesh consists of eight isosceles right triangles .we employ drfler marking strategy with the marking parameter and use the longest edge " refinement to obtain an admissible mesh .plots in figure [ fig1 ] depict the estimators of ( upper and middle ) , and ( lower ) , respectively .we observe that strong mesh refinement near the line , which indicates the estimators of the errors and capture boundary layers well .figure [ fig4 ] demonstrates finite element approximations to ( left ) and ( right ) .it is observed that the function does nt possess layer , and that has boundary layer near .on the other hand , the upper two plots of figure [ fig6 ] display the estimated and exact errors for ( left ) and ( right ) , respectively .it is observed that the estimated convergence curve overlaps the curve of , which indicates that the estimator for is asymptotically exact even for very small .we also observe that the estimated convergence curve is parallel to the curve independent of , and both curves decrease in optimal rates .note that the study of convergence and optimality of adaptive algorithms is still in its infancy , and has been carried out mainly for standard adaptive finite element method for general second order elliptic problems ; see , e.g. , .( left ) and ( right ) against the number of elements in adaptively refined meshes for .lower : exact errors of ( left ) and ( right ) against the number of elements in adaptively refined meshes for and . herethe marking parameter .,width=480 ] the two lower plots of figure [ fig6 ] depict error curves for ( left ) and ( right ) , respectively .it is observed that the convergence curves for and are consistent , which indicates that the errors reduce uniformly with respect to .in addition , we include in figure [ fig6 ] an optimal theoretical convergence line with slope .the plots indicate that and decrease in the optimal convergence rates .tables [ tab : egone1 ] and [ tab : egone2 ] show some results of the actual errors and , the _ a posteriori _ indicators and , and the effectivity indexes eff - index for and eff - index for for example 1 , where eff - index , eff - index .it is observed that the effectivity indices of the error are close to 1 , and that the effectivity indices of the error are about 1.5 .this suggests that our estimators are robust with respect to ..example 1 : number of iterations ; numerical result of estimated error for ; eff - index the corresponding effectivity index for ( the ratio of estimated and exact errors ) .here , . [ cols="^,^,^,^,^,^,^,^,^",options="header " , ] table [ egtwo1 ] reports the given tolerance tol , the number of iterations , the estimated error ( ) for , the degrees of freedom dof , the smallest mesh size for example 2 , which show that the required dof depends on both tol and , and that the layer is gradually resolved , because the smallest mesh size has arrived at the magnitude of after 22 iterations .figure [ fig11 ] shows the estimated errors of ( left ) and ( or ) ( right ) , respectively .we observe again that the estimated errors reduce uniformly with respect to in both norms with almost optimal rate .g. awanou , _ robustness of a spline element method with constraints .36 ( 2008 ) , pp .421 - 432 .i. babuka , j. osborn , and j. 
pitkranta , _ analysis of mixed methods using mesh dependent norms .comput . , 35 ( 1982 ) , pp .1039 - 1062 .bank and a. weiser , _ some a posteriori error estimators for elliptic partial differential equations . _ math .44 ( 1985 ) , pp .283 - 301 .a. charbonneau , k. dossou , and r. pierre , _ a residual - based a posteriori error estimator for the ciarlet - raviart formulation of the first biharmonic problem .methods partial differential equations , 13 ( 1997 ) , pp . 93 - 111. l. chen , m. holst , and j. xu , _ convergence and optimality of adaptive mixed finite element methods .comput . , 78 ( 2009 ) , pp .z. chen and r.h .nochetto , _ residual type a posteriori error estimates for elliptic obstacle problems .84 ( 2000 ) , pp .527 - 548 .p. danumjaya and a.k .pani , _ mixed finite element methods for a fourth order reaction diffusion equation .meth . part .d. e. , 28 ( 2012 ) , pp .1227 - 1251 .a. demlow and n. kopteva , _ maximum - norm a posteriori error estimates for singularly perturbed elliptic reaction - diffusion problems ._ numer . math .133 ( 2016 ) , pp .707 - 742 .falk and j.e .osborn , _ error estimates for mixed methods . _ m2an math . model ., 14 ( 1980 ) , pp .249 - 277 .l. s. fank , _ singular perturbations in elasticity theory , analysis and its applications ._ 1 . , ios press , amsterdam , 1997 .t. gudi , _ residual - based a posteriori error estimator for the mixed finite element approximation of biharmonic equation . _ numer .methods partial differential equations , 27 ( 2011 ) , pp .315 - 328 .j. guzmn , d. leykekhman , and m. neilan , _ a family of non - conforming elements and the analysis of nitsche s method for a singularly perturbed fourth order problem ._ calcolo , 49 ( 2012 ) , pp .h. han and z. huang , _ an equation decomposition method for the numerical solution of a fourth - order elliptic singular perturbation problem .methods partial differential equations , 28 ( 2012 ) , pp .942 - 953 .j. hu and z.c .shi , _ a new a posteriori error estimate for the morley element ., 112 ( 2009 ) , pp .r. lin and m. stynes , _ a balanced finite element method for singularly perturbed reaction - diffusion problems ._ siam j. numer ., 50 ( 2012 ) , pp .2729 - 2743 .p. morin , r.h .nochetto , and k.g .siebert , _ convergence of adaptive finite element methods ._ siam review , 44 ( 2002 ) , pp .631 - 658 .morley , _ the triangular equilibrium element in the solutionof plate bending problems . _aero . quart ., 19 ( 1968 ) , pp .149 - 169 .b. semper , _ conforming finite element approximations for a fourth - order singular perturbation problem ._ siam j. numer ., 29 ( 1992 ) , pp .1043 - 1058 .z. shi , _ error estimates of morley element .sinica , 12 ( 1990 ) , pp .113 - 118 .m. wang and s. zhang , _ local a priori and a posteriori error estimates of finite elements for biharmonic equation ._ research report , 13 ( 2006 ) , school of mathematical science and institute of mathematics , peking university .s. zhang and z.m .zhang , _ invalidity of decoupling a biharmonic equation to two piosson equations on non - convex polygons ._ international journal of numerical analysis and modeling , 5 ( 2007 ) , pp .
we consider mixed finite element approximation of a singularly perturbed fourth - order elliptic problem with two different boundary conditions , and present a new measure of the error whose components are balanced with respect to the perturbation parameter . robust residual - based _ a posteriori _ estimators for the new measure are obtained via a novel analytical technique based on an approximation result . numerical examples are presented to validate our theory . key words : fourth order elliptic singularly perturbed problems , mixed finite element methods , a new measure of the error , robust residual - based _ a posteriori _ error estimators . ams subject classifications : 65n15 , 65n30 , 65j15
we consider a wiretap set - up , in which a message is transmitted to its legitimate receiver bob in the presence of eve the eavesdropper .eve is assumed to have unlimited computational power , but to experience an additional noise compared to bob .lattice coset coding is utilized to maximize eve s confusion , cf .bob s lattice is referred to as the code lattice or dense lattice , and eve s lattice as the sparse or coarse lattice .the channel is assumed to exhibit additive white gaussian noise ( awgn ) but no fading .the respective channel equations for bob and eve are where is the received vector , the transmitted coset - coded vector , and is awgn with respective variances . in finding optimal lattice wiretap codes , there are three main objectives : * maximizing the data rate , which is determined by the size of the codebook and the decoding delay as bits per channel use ( bpcu ) . * minimizing the legitimate receiver s decoding error probability . * minimizing the eavesdropper s probability of correct decision . considering only the first two problems , the largest codebooks for a fixed transmission power and an upper bound for the receiver s error probability are the solutions to the widely investigated sphere - packing problem .this results in lattices that are typically nonorthogonal ( see , e.g. , ) .orthogonal lattices have still traditionally been preferred due to an easy - to - implement bit - labeling algorithm , namely the gray - mapping . due to this mapping ,the encoding and decoding procedures are more straightforward for orthogonal lattices than for nonorthogonal , i.e. , skewed lattices .nevertheless , computationally efficient closest - point algorithms such as the sphere decoder also exist for nonorthogonal lattices ( for an explicit construction , see , sec . 4 ) . in , it was also demonstrated how skewed lattices can be efficiently encoded and decoded by using a modified power - controlled sphere decoder or sphere decoding adjoined with minimum - mean - square - error generalized - decision - feedback - equalization ( mmse - gdfe ) , both resulting in optimal ( maximum - likelihood ) performance . hence , skewed lattices should not be excluded when searching for optimal lattices , in particular in the light of the present paper showing that they are not only better in terms of bob s performance , but also in terms of confusing the eavesdropper .similarly to the sphere - packing problem in bob s case , we now include the third objective in our consideration .our approach is to fix the data rate and the transmission power and then compare skewed and orthogonal lattices from the point of view of the latter two objectives in an awgn channel .we study an expression that has two alternative interpretations as either the receiver s error probability ( rep ) for any lattice code in an awgn channel or the eavesdropper s correct decision probability ( ecdp ) for a lattice coset code in an awgn channel .we prove the following results ( notation will be defined in the subsequent section ) . * skewing bob s orthogonal code lattice will decrease the rep of any code . * skewing eve s orthogonal sparse lattice will decrease the ecdp of any lattice coset code . * combining the previous two results , the common set - up of the dense lattice being orthogonal and the commonly used choice of an orthogonal sublattice are suboptimal in terms of both the ecdp and the rep . 
according to whether gray - labeling is insisted or not , this common set - up can be improved by either choosing a skewed sublattice of the same orthogonal dense lattice , leaving bob s lattice orthogonal and the rep suboptimal , or skewing both lattices .these results suggest that skewed lattices deserve more attention in the study of the awgn wiretap channels even though their encoding and decoding are admittedly somewhat more complicated than that of orthogonal lattices .it is also worthwhile to keep in mind that in any practical system , an outer error correcting code , e.g. , a low - density parity - check ( ldpc ) code , is used in addition to the inner lattice code .the true decoding bottle - neck in this case is the outer code requiring soft input , not the lattice code .in this section , we present some necessary definitions and their information - theoretic interpretations .a _ lattice _ is a discrete additive subgroup of .any point in a lattice can be expressed in terms of a _ generator matrix _ follows we assume that the columns of are linearly independent over and hence , the _ lattice coordinates _ of a lattice point are unique .if , the lattice is of _ full rank_. a _ sublattice _ of a lattice of dimension in is an additive subgroup ; it has a generator matrix , where . here is the dimension of the sublattice and for a square matrix , the _ volume _ of the lattice is the volume of the fundamental parallellotope spanned by the column vectors of , given by for full - rank lattices .differing from some information theory references , here vectors are identified with _ column _ matrices and the lattice generator vectors with the _ columns _ of the generator matrix . the _ dual lattice _ of a full - rank lattice generated by is the one generated by let be a full - rank lattice with generator and let be a continuous function with and such that the partial sums of converge uniformly whenever is restricted onto a compact set .then , where the fourier transform is defined as the proof is given in .we point out that the condition on the continuity of is essential for the proof and is missing in the book .the function that we will optimize is the following .the _ psi function _ of a lattice at a point is given by this is a variant of lattice theta series restricted on the imaginary axis , .the convergence properties of the psi series follow from those of the theta series . in , an upper approximation for the ecdp for a lattice coset codeis derived as here is the dense and the sparse lattice , inteded for the receiver and the eavesdropper , respectively .the lattices are assumed to be of full rank and the eavesdropper s noise is assumed to be awgn with variance .the inequality is tight for large . for small ,the upper bound is larger than and hence useless . on the other hand ,using the union bound technique as is done in ( * ? ? ?* appendix ii ) for rayleigh - fading channels and setting the rayleigh fading coefficients equal to one , the rep can be approximated from above as : this formula is valid for any lattice code ( not just a coset code ) in an awgn channel and the approximation is good for small receiver s noise variances . based on these two formulae and the fact that the variances and vary with the random channels , our subsequent aim will be to provide inequalities of the form for all . 
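the psi function can be evaluated to good accuracy by direct summation, truncating the lattice to points with small coordinates, since the terms decay as a gaussian. the sketch below is our own illustration and takes \psi_{\lambda}(\tau)=\sum_{x\in\lambda}e^{-\pi\tau\|x\|^{2}}, i.e. the theta series restricted to the imaginary axis as described above; the exact normalisation of the exponent used in the paper may differ by a constant.

import numpy as np
from itertools import product

def psi(gen, tau, shift=None, window=8):
    """truncated psi function of the lattice whose generator columns are `gen`, optionally shifted."""
    G = np.asarray(gen, dtype=float)
    n = G.shape[1]
    s = np.zeros(G.shape[0]) if shift is None else np.asarray(shift, dtype=float)
    total = 0.0
    for c in product(range(-window, window + 1), repeat=n):
        x = G @ np.array(c, dtype=float) + s
        total += np.exp(-np.pi * tau * (x @ x))
    return total

Z2 = np.eye(2)
print(psi(Z2, 0.7))                          # psi of the cubic lattice z^2
print(psi(Z2, 0.7, shift=[0.5, 0.25]))       # a translated copy gives a strictly smaller value

the second evaluation anticipates the translation lemma used below: shifting the lattice by a vector that does not belong to it strictly decreases the psi function.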
when comparing different lattices sharing the same dimension , their volumes are first normalized to one .this ensures that for a relatively large fixed transmission power , the finite codebooks carved from the infinite lattices will be approximately equally large , and hence we can fairly compare the lattice codes without considering the actual data rates , as these will coincide . due to the obvious connection between the formulae and for the rep and ecdp ,respectively , one would intuitively guess that a solution for the sphere - packing problem also yields an optimal ecdp .this , however , does not seem to work on the level of mathematical proofs ; eq .is obtained by the union bound technique , whereas in the sphere - packing problem , the upper bound for rep is based on integrating a gaussian function over a ball , yielding a much tighter bound for large receiver s noise variances or , equivalently , for small arguments of . to minimize the ecdp , we want to minimize for small arguments . hence ,even if the sphere - packing probability bound is small , it does not provide us with immediate information as to how small the function is for small arguments , i.e. , how small the ecdp is .in this section , we show that skewing a lattice will always improve a code both in terms of eve s and bob s probabilities .[ translation lemma ] for any full - rank lattice , for any .denote summands of the respective sides as and , so .then , by the elementary properties of fourier transform , we have .hence , using the poisson formula , where is the generator matrix of . in continuation , we will use the knowledge that the fourier transform of the gaussian function is another gaussian , hence a real , positive and even function .( the explicit form of could be calculated but it is not necessary . ) first , since is even , the imaginary parts of the summand for lattice points cancel out , yielding next , we need the positivity of to be able to approximate the cosine by . first , note that we assumed , equivalently , with some component of , say the one , not integer . also note that where . hence , .now , choosing the lattice point such that , we immediately see that and .hence , replacing by in the preceding step , we get a strict inequality where we have again applied the the poisson formula to .finally , the double fourier transform is in general a reflection operator , so , and using the fact that we obtain the result , [ def : skew ] let be a full - rank orthogonal lattice in with generator vectors , for .we call a lattice a _ skewing _ of , if it has a generator matrix that is an upper triangular matrix with the diagonal elements . this definition has a simple geometric interpretation , depicted in fig . [ skewfig ] .we point out that skewing can be interpreted as a matrix operation .if and are the generator matrices of and , respectively , then the non - singular skewing matrix can be solved from the matrix equation this equation is non - singular , since and by the determinant rule of upper triangular matrices .this also implies that .now we are ready to state the main theorem .after this , we will provide an illustrative interpretation of the theorem and prove it . 
[ skewing theorem ] for a skewing of a full - rank orthogonal lattice , for all .skewings provide several easy ways to improve lattice coset codes .we point out that since skewing keeps the lattice volume constant ( in matrix representation ) , it will not affect the size of a spherical codebook .hence , a lattice comparison between skewings only requires considering the ecdp and the rep . with ths knowledge , the theorem has the following immediate implications . * comparing a dense lattice and its skewings , theorem [ skewing theorem ] applied to eq .shows that the rep is always smaller for the skewings .this holds for all codes , not just coset codes .* consider a coset code arising from a fixed nonorthogonal lattice .then , to minimize the ecdp , it seems that should not be chosen orthogonal ( if orthogonal sublattices exist ) .note that then no skewing of the orthogonal is necessarily a sublattice of , so this is just heuristics . * consider a typical set - up of generated by being orthogonal and . in this case both the rep and the ocdp are suboptimal .there are two remedies : * * first , we can skew both and .skewing by so that the skewed lattices and are generated by and , respectively , will yield a nonorthogonal lattices but preserve the volumes : ( since ) .hence , applying this and theorem [ skewing theorem ] in eqs . and , we see that skewing will decrease both the rep and the ecdp .however , the skewed lattice will not allow for a simple gray mapping , or in other words , the gray mapping is not guaranteed to give an optimal bit - labeling . * * second , we can only opt for skewing the sublattice , while leaving orthogonal .this means that the rep will remain suboptimal , but the lattice will allow for gray labeling and maintains simpler encoding and decoding for bob along the lines discussed in the introduction .moreover , the ecdp is decreased .the idea is that if is orthogonal and generated by , and , then any sublattice generated by , where is an upper triangular integer matrix with diagonal etries , is easily proven to be a skewing of ( or equal to ) .then , applying theorem [ skewing theorem ] to eq . , we see that will yield a lower ecdp .let us use the notation of definition [ def : skew ] .furthermore , denote by the embedding into , , of .equivalently , is the lattice in generated by the first columns of the generator of . continuing to ease the notation ,denote the projection of the column of onto by , so the column is .now , it is apparent from the definition of that and that on the other hand , using lemma [ translation lemma ] for the lattice , , which is a full - rank lattice of , we obtain and , as stated in lemma [ translation lemma ] , the equality holds if and only if for all , equivalently , .this is furthermore equivalent to that the column of can be replaced by without changing the lattice .next , starting from eq ., using the identity inductively , and finally using eq . , we obtain the equality holds if and only if it has been possible to modify , for all , the column of into without changing the lattice generated by . butthis is equivalent to and generating the same orthogonal lattice .this is impossible by the definition of a skewing .hence , for any skewing of , we have a strict inequality for all .this completes the proof . 
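as a quick numerical illustration of the theorem (a toy example of ours, not taken from the text), take the orthogonal lattice z^2 and a skewing whose generator is upper triangular with unit diagonal and a single off-diagonal entry 1/2. the skewing matrix has determinant one, so the volume is preserved, and a truncated evaluation of the psi function confirms the strict inequality at every sampled value of the argument.

import numpy as np
from itertools import product

def psi(gen, tau, window=10):
    G = np.asarray(gen, dtype=float)
    total = 0.0
    for c in product(range(-window, window + 1), repeat=G.shape[1]):
        x = G @ np.array(c, dtype=float)
        total += np.exp(-np.pi * tau * (x @ x))
    return total

G_orth = np.eye(2)                              # orthogonal lattice z^2
G_skew = np.array([[1.0, 0.5], [0.0, 1.0]])     # upper triangular, unit diagonal: a skewing
T = G_skew @ np.linalg.inv(G_orth)              # skewing matrix
print(np.linalg.det(T))                         # 1: the volume is preserved

for tau in (0.4, 0.8, 1.6):
    print(tau, psi(G_orth, tau), psi(G_skew, tau))   # the skewed lattice has the smaller psi at each tau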
the gosset lattice has the generator matrix given by so it is a skewing of the orthogonal lattice generated by .the theta series of the gosset lattice is expressible by the jacobi theta functions as where .the theta series of the orthogonal lattice is where and is the diagonal element of .now , recalling that , we can compare the psi series of these two lattices by evaluating jacobi theta functions .the plots of the psi functions are depicted in fig .[ psi_skewings ] .the figure shows that for all , as predicted by theorem [ skewing theorem ] . in coset coding, this has the following interpretation : and are both index subgroups of .if bob s lattice is , then the coset lattices and will yield the same code rates , but with a better secrecy . and its skewing .,scaledwidth=50.0% ]in the construction of lattice codes for awgn wiretap channels , skewed lattices should be taken more seriously .namely , we have proved that orthogonal lattices are suboptimal not only in terms of the receiver s error probability as we already know from the sphere - packing theorems , but also in terms of the eavesdropper s correct decision probability when using lattice coset codes .hence , the design of secure lattice codes should ideally be based on skewed lattices .however , due to implementation purposes , one may opt for only skewing the eavesdropper s lattice , while preserving the orthogonality of the legitimate receiver s lattice , which results in suboptimal performance but easy - to - implement algorithms for bob , as well as improved security .this work was carried out during a. karrila s msc thesis project .the department of mathematics and systems analysis at aalto university is gratefully acknowledged for the financial support .e. viterbo and f. oggier , `` algebraic number theory and code design for rayleigh fading channels '' , _ foundations and trends in communications and information theory _ , vol . 1 , no .3 , now publishers inc . , 2004 .j. boutros , e. viterbo , c. rastello and j .- c .belfiore , `` good lattice constellations for both rayleigh fading and gaussian channels '' , _ ieee transactions on information theory _ , vol .42 , no 2 , march 1996 .
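appendix-style numerical note on the gosset-lattice example above: the comparison of fig. [psi_skewings] can be reproduced without enumerating lattice points, using the standard theta-constant decompositions \theta_{z}(q)=\vartheta_{3}(q) and \theta_{e_{8}}(q)=\tfrac{1}{2}(\vartheta_{2}^{8}+\vartheta_{3}^{8}+\vartheta_{4}^{8}) with nome q=e^{-\pi\tau}. the sketch below is our own check (the psi normalisation is the one we assumed earlier); it evaluates truncated series and shows \psi_{e_{8}}(\tau)<\psi_{z^{8}}(\tau) at the sampled points, as the figure states.

import numpy as np

def theta2(q, terms=40):
    n = np.arange(terms)
    return 2.0 * np.sum(q ** ((n + 0.5) ** 2))

def theta3(q, terms=40):
    n = np.arange(1, terms)
    return 1.0 + 2.0 * np.sum(q ** (n ** 2))

def theta4(q, terms=40):
    n = np.arange(1, terms)
    return 1.0 + 2.0 * np.sum((-1.0) ** n * q ** (n ** 2))

def psi_Z8(tau):
    q = np.exp(-np.pi * tau)
    return theta3(q) ** 8

def psi_E8(tau):
    q = np.exp(-np.pi * tau)
    return 0.5 * (theta2(q) ** 8 + theta3(q) ** 8 + theta4(q) ** 8)

for tau in (0.5, 1.0, 2.0):
    print(tau, psi_Z8(tau), psi_E8(tau))    # psi_E8 < psi_Z8 at every sampled tau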
we consider lattice coset - coded transmissions over a wiretap channel with additive white gaussian noise ( awgn ) . examining a function that can be interpreted as either the legitimate receiver s error probability or the eavesdropper s correct decision probability , we rigorously show that , albeit offering simple bit labeling , orthogonal nested lattices are suboptimal for coset coding in terms of both the legitimate receiver s and the eavesdropper s probabilities .
the configuration - mediated directionality of non - covalent bonds between proteins explains their propensity to self - assemble into fibrils and filaments .protein filaments are ubiquitous in biology , forming inside the cells or in the extra - cellular matrix individually , in bundles , or in randomly crosslinked networks .they facilitate the propulsion in bacteria , they control the mechanical strength in cytoskeleton and the bending stiffness in axons , they allow positional control of organelles and provide the transport routes all around the cell . in a different situation ,the self - assembly of proteins into amyloid fibrils impairs physiological activity and is the root cause of a number of organic dysfunctions . in yet another context, filaments are artificially or spontaneously assembled to achieve a specific function in the material , such as directed conductivity , plasmonic resonances , or just the mechanical strength in a fiber composite , with important technological applications .finally , a conceptually related issue emerges in the denaturation of dna , for which the available theoretical framework can not provide predictions about the topology of the disassembly process .the typical size of all these aggregates , and its time - evolution , are a non - trivial function of the rate at which bonds along the filament spontaneously dissociate due to the thermal motion of the assembled molecules .the dissociation rate and the distribution of fragments are important parameters which enter the master kinetic equation description of self - assembling filament size and populations .a filament growth can be summarized by the reversible reaction : , where the monomer subunit is added to an existing filament of -units long . for the forward reaction, it is commonly accepted that association proceeds by the addition of a single subunit as opposed to the joining of larger segments because of the greater abundance of monomers with respect to active fragments .in contrast , despite the importance of thermal breakup in many fields of colloid science and technology , its basic understanding is far from satisfactory .several studies aimed to explain thermally - activated filament breakup in physical terms , came to the conclusion that fibrils of any respective size can aggregate , while the filament breakup can occur with equal probability anywhere along its length .in particular , lee has demonstrated that the thermal breakup occurs randomly along the chain , leading to daughter fragments of any size . in yet another classical model based on equilibrium detailed - balance between the various aggregation and breakup events , by hill , the highest breakup probability is for two fragments of equal size , i.e. the breakup rate is maximum in the middle .theoretical models in the past have focused on the simplified case of chains of harmonically bonded particles ( subunits ) , so that the binding force is linear in the inter - protein displacement . in this approximationthe normal modes of vibration of the chain are de - coupled , which makes the problem amenable to simpler analysis . even in this case, previous theoretical models reached contradictory conclusions , with either flat breakup distribution or a pronounced maximum in the middle .however , the physical bonds linking protein filament subunits ( such as hydrogen bonds and hydrophobic attraction ) are strongly anharmonic. 
then the problem becomes one of coupled nonlinear oscillators as in the famous fermi - pasta - ulam problem , for which the typical vibration modes are no longer delocalized periodic waves but solitons .this is also consistent with the finding that in a strained lennard - jones chain , the strain is not uniformly distributed , but localized around the bond which is going to break first .the standard tools of chemical dynamics and stochastic rate theory , all based on the harmonic approximation and on normal modes , are therefore inapplicable . here we develop a systematic microscopic understanding of this process based on brownian dynamics simulation and theoretical arguments , focusing on the nonequilibrium breakup phenomena .hence we study the intrinsic breakup rates independent of any recombination phenomena which may occur at later stages leading eventually to an equilibrium size .first of all , we discover that the topology of filament breakup critically depends on the bending stiffness of the chain .secondly , a clear connection is found between the anharmonicity of subunit interaction and the fragment distribution resulting from thermal breakup .the anharmonic lennard - jones or morse - like binding potential in stiff or semiflexible filaments inevitably leads to a very strong preference for the breakup to occur at chain ends , but recover the uniform , flat fragment distribution in the limit of harmonic ( or any other symmetric ) potential .importantly , it is not the bare anharmonicity which controls this effect , but , more precisely , the _ asymmetry _ of the bonding potential about the minimum ( larger force for bond compression than for extension ) , which is inherent to the most common anharmonic potentials .as we will show below , it is precisely the asymmetry in the potential which `` breaks the symmetry '' between dissociation rates at the middle of the filament and at the ends .those rates are equal only for symmetric potentials like harmonic , and they always differ for asymmetric potentials . in contrast , when the intermolecular interaction is purely of the central - force type , i.e. a fully flexible chain with no bending resistance , a bell - like distribution peaked in the middle is obtained in accord with the prediction of the hill model . these findings can be understood with an argument based on counting the degrees of freedom per particle for the different potentials .these results provide a fundamental link between the features of intermolecular interaction and the filament breakup rate and topology , and can be used in the future to predict , control and manipulate the filament length distribution in a variety of self - assembly processes in biological and nanomaterials applications .to model a non - covalently bonded filament we use a coarse - grained model of linear chains of brownian particles ( fig.[fig1]a ) bonded by the truncated - shifted lennard - jones ( lj ) potential , - { u_c},~~{\rm { for } } ~r < { r_c}}\\ { 0 , \qquad { \rm { for } } ~r \ge { r_c } } \end{array } } \right.\ ] ] where is the distance between two neighbor proteins and , is the linear size of the monomer unit , and ] is set to maintain a constant well depth equal to , independently of .the lj potential is inherently anharmonic , except in the close proximity of its minimum. 
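the displayed potential above lost its first branch in extraction. a common reading, which we adopt here as an assumption, is u(r)=a\,\{4[(\sigma/r)^{12}-(\sigma/r)^{6}]-u_{c}\} for r<r_{c} and 0 otherwise, with u_{c} the value of the bare lj term at r_{c} and the prefactor a chosen so that the well depth stays at the prescribed value (20 k_{b}t in the text) for every cutoff. a short sketch:

import numpy as np

def lj_truncated_shifted(r, r_c=1.7, depth=20.0, sigma=1.0):
    """truncated-shifted lj, rescaled so that the well depth equals `depth` for any cutoff r_c."""
    bare = lambda x: 4.0 * ((sigma / x) ** 12 - (sigma / x) ** 6)
    r_min = 2.0 ** (1.0 / 6.0) * sigma
    a = depth / abs(bare(r_min) - bare(r_c))
    r = np.asarray(r, dtype=float)
    return np.where(r < r_c, a * (bare(r) - bare(r_c)), 0.0)

r_min = 2.0 ** (1.0 / 6.0)
for rc in (1.2, 1.4, 1.7):
    print(rc, float(lj_truncated_shifted(r_min, r_c=rc)))        # the minimum is -20 kT for every r_c

# asymmetry about the minimum: compressing a bond by 0.1 sigma costs far more than stretching it
print(float(lj_truncated_shifted(r_min - 0.1)), float(lj_truncated_shifted(r_min + 0.1)))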
an alternative could be the morse potential , and we have checked that the results do not change qualitatively with its use .figure [ fig1]b explains what we mean by truncation : the attractive region stretches up to a distance ( indicated by arrows in the plot and measured in terms of lj length scale ) , while the depth of the potential well is kept independently fixed ( measured by , in units of ) .the shorter the attraction range , the closer is the potential to its harmonic approximation .for all the data we use , which well approximates the strength of the most common physical interactions such as hydrogen bonds and hydrophobic attraction .we also include in our analysis the local bending energy , in the form , where is the angle between the directions of bonds from the particle to the preceding ( ) and the subsequent ( ) subunits .figure [ fig1]d illustrates the way this effect is implemented by imposing pairs of equal and opposite forces on the joining bonds , providing a net torque on the junction .it is the same algorithm that is used in , e.g. lammps ` angle - harmonic ' system . the bending modulus , in units of ,is directly related to the persistence length of the filament via the standard expression . in the chain , for several values of attractive range , measured by and indicated by arrows in the plot .( c ) the contrast between a combined potential felt by an inner subunit in the filament , bonded on both sides , and the end - subunit bonded by the regular lj potential .( d ) scheme of the bond - bending force which opposes changes in the angle between two adjacent bonds by applying couples on each adjacent bond . ]the dynamics of the chain of subunits is governed by the overdamped langevin equation , where is the vector containing the positions of all molecules , is the friction coefficient , the total potential force acting on a given particle , , has contribution from both the lj and the bending couples , and the gaussian stochastic force defined such that and , according to the fluctuation - dissipation theorem . for numerical integration eq .( [ a1 ] ) is discretised in the form known as the ermak - mccammon equation : where is randomly extracted from a normal distribution with zero average and unit standard deviation .the discrete time step is taken as , where the reduced time uint is defined as , and is the diffusion coefficient . for a typical globular protein ( e.g. lysozyme ) , with diameter nm and diffusion coefficient m/s , we obtain .therefore ps .each run is initialized with the equilibrium interparticle distance , as a straight chain ( all ) , corresponding to the minimum of all interaction potentials .a dissociation event is assumed to take place when one of the bonds exceeds the cut - off length ( ) , i.e. , at which point the simulation is terminated and the location of the rupture recorded . the location of the rupture is recorded . to generate the probability distributions plotted in figs .[ fig3 ] and [ fig4 ] , independent runs are performed and the normalised breakup probability is calculated as where is the total number of recorded breakup events for the bond . for most datawe have reached ; since the runs are independent , the are binomially ( bernoulli ) distributed and the error bars are estimated as /n}} ] , which is the same quantity as measured from the exponential fits in simulations . 
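the bending-energy formula and several numerical details above were lost in extraction, so the sketch below is a deliberately simplified stand-in rather than a reproduction of the paper's code: nearest-neighbour truncated-shifted lj bonds, a discrete bending term \kappa(1-\cos\varphi) (our assumption for the dropped form), forces obtained by numerical differentiation, a much shallower well (2.5 k_{b}t instead of 20 k_{b}t) and far fewer runs, so that it executes in seconds. it does, however, follow the same ermak-mccammon update and the same first-breakup bookkeeping described above.

import numpy as np

rng = np.random.default_rng(7)

N = 10                      # beads
sigma, r_c, depth = 1.0, 1.3, 2.5   # shallow well so that events occur quickly in a demo
kappa = 5.0                 # bending stiffness in kT; set to 0.0 for the fully flexible chain
dt = 5e-5                   # reduced time step (D = kT = gamma = 1)

bare = lambda x: 4.0 * ((sigma / x) ** 12 - (sigma / x) ** 6)
r_min = 2.0 ** (1.0 / 6.0) * sigma
a = depth / abs(bare(r_min) - bare(r_c))

def energy(pos):
    """nearest-neighbour truncated-shifted lj bonds plus a discrete bending term kappa*(1 - cos phi)."""
    bonds = pos[1:] - pos[:-1]
    r = np.linalg.norm(bonds, axis=1)
    u = np.sum(np.where(r < r_c, a * (bare(r) - bare(r_c)), 0.0))
    if kappa > 0.0:
        cosphi = np.sum(bonds[1:] * bonds[:-1], axis=1) / (r[1:] * r[:-1])
        u += kappa * np.sum(1.0 - cosphi)
    return u

def forces(pos, h=1e-6):
    """numerical gradient of the total energy: slow but transparent for a short demonstration."""
    f = np.zeros_like(pos)
    for i in range(pos.shape[0]):
        for k in range(3):
            d = np.zeros_like(pos); d[i, k] = h
            f[i, k] = -(energy(pos + d) - energy(pos - d)) / (2.0 * h)
    return f

def first_breakup(max_steps=40000):
    pos = np.zeros((N, 3))
    pos[:, 0] = r_min * np.arange(N)                    # straight chain at the bond minimum
    for step in range(1, max_steps + 1):
        # ermak-mccammon update: dx = F dt + sqrt(2 dt) * gaussian noise
        pos = pos + forces(pos) * dt + np.sqrt(2.0 * dt) * rng.standard_normal(pos.shape)
        r = np.linalg.norm(pos[1:] - pos[:-1], axis=1)
        broken = np.flatnonzero(r >= r_c)
        if broken.size:
            return broken[0], step * dt
    return None, max_steps * dt

counts = np.zeros(N - 1, dtype=int)
for _ in range(10):                                     # the paper uses ~10^4 runs per parameter set
    bond, t = first_breakup()
    if bond is not None:
        counts[bond] += 1
print(counts)   # histogram of the first-broken bond; many more runs are needed to resolve the u- or bell-shape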
the exponential dependence on time can be understood from the analysis of the many - particle fokker - planck equation , eqs .( [ 2])-([3 ] ) .its general solution is , where labels the eigenfunctions and eigenvalues of the many - body operator ., normalized such that it is equal to unity at , is plotted against simulation time measured in timesteps ( ts ) .different data sets represent the different attraction range , which is our measure of potential asymmetry .the fitted lines are all simple exponentials , from which we extract the characteristic rate of the first breakup , . ]the probability to break in the middle as and the one at the end as , with subunits from each end affected , the total rate can be estimated as , which is the solid line in the plot with only a single fitting normalisation factor : ; the deviations at small are clearly due to the overlapping end effects ( see fig.[fig5 ] ) . ] according to the ground - state dominance principle , the time evolution for long filaments ( ) is dominated by the smallest non - zero eigenvalue , such that , recalling the expression for , the time dependence of the first - breakup probability is given by .hence the breakup probability is indeed exponential in time with a characteristic frequency - scale given by the smallest finite eigenvalue of the many - body operator .this result explains the exponential dependence on time of the breakup probability observed in the simulations in fig .[ fig6 ] . also , combining the expressions for and for , it is possible to show that , which confirms that the ground - state of the many - body fokker - planck equation indeed sets the time scale of breakup .furthermore , the rate grows roughly linearly with the chain length , which is demonstrated in fig .this particular dependence arises because the number of escape attempts increases with the chain size .one can show by means of the standard supersymmetric transformation of the fokker - planck equation into the schrdinger equation , that is analogous to the quantum ground - state energy of an ensemble of bound states , and the ground state energy is extensive ( ) within the quasiparticle approximation .we find a useful representation in a map that covers all of the - parameter space to study how the location of first - breakup events along the filament changes upon varying both the stiffness and the cutoff or asymmetry .the results can be represented as a contour plot for the ratio as a function of and .the contour plot is shown in fig .the bottom left corner , corresponding to flexible ( low- ) filaments with short - ranged potential close to harmonic ( low- ) , represents conditions where the filament breaks in the middle and the fragment distribution is bell - shaped , in conformity with hill s model predictions . upon increasing both and at the same time ,breakup in the middle becomes less favourable and the distribution tends to flatten out .eventually , for very stiff filaments and asymmetric potentials with large the opposite limit of u - shaped fragment distributions with preferential breakup at the filament ends is recovered .this occurs in the top - left region of the map in fig .[ fig8 ] . for symmetric binding potentials close to harmonic ( low : along the axis of the contour plot ) , the bell - shaped distribution persists longer upon increasing , eventually transforming into a flat distribution for stiff filaments . 
on the other side of the map , where is increased for flexible chains, the bell - shaped distribution persists for flexible chains up to which corresponds to the lj with no cutoff. in general , the most dramatic change in the breakup location and fragment - distribution shape occurs along the path of steepest ascent , defined as the path parallel to the gradient of the surface . based on our results , the path of steepest ascent and most dramatic evolution in the breakup topologyis approximately identified by the line . as a function of filament stiffness and the asymmetry parameter .the bottom - left lagoon ( dark ) represents conditions where filaments break in the middle ( bell - shaped distribution , according to hill ) , while the upper - right ( light ) region of this map represents conditions where filaments break at the ends ( the u - shaped distribution ) and negligibly in the inner locations .arrows on the top signify that these geodesic lines extrapolate towards .arrows to the right indicate that there is little further change past .the dashed geodesic line marks the condition , separating the regions of bell- and u - shape distributions . ] in figs .[ fig3 ] and [ fig4 ] we have shown that depending on the relative extent of bond - bending and central forces in the intermolecular interaction , the fragment size distribution can change from a u - shaped distribution in the limit of large bond - bending rigidity , to a bell - shaped distribution with opposite curvature in the limit of a purely central - force lennard - jones potential .intermediate bending stiffness values yield distributions with shape in between the two limiting cases .it is first important to understand the microscopic origin of this qualitative difference upon varying the bending stiffness in the intermolecular interaction .since the flexible chain breakup statistics closely resembles the prediction of hill , we take a similar approach and consider the fragment - size dependence of the breakup rate within a chemical equilibrium assumption and for the special simplifying case of harmonic bonds .we have checked that with harmonic bonds the same behaviour trend as in fig .[ fig3 ] is reproduced , with the only difference that the distribution for the stiff filament is flat ( as indeed proven by lee ) instead of u - shaped in the case of stiff filaments ( as the last curve in fig .[ fig4 ] shows ) .that is , the hill - like bell - shaped is the universal result for fully flexible chains .the equilibrium constant for a dissociation reaction of a filament into two fragments takes the form : , where is the partition function of fragment . is the dissociation rate , while is the recombination rate of these two fragments .the latter can be estimated from the diffusion - controlled collision rate of two linear chains , upon accounting for the diffusion coefficient of the two chains ( kirkwood - riseman approximation ) and for the encounter efficiency of end - to - end collisions of the two chains . in this way, the size - dependence was found to be .the size - dependence of the dissociation rate ( and hence the fragment size - distribution ) can be obtained by replacing this form for the association rate in the expression for the equilibrium constant , and upon evaluating the fragment - size dependence of the partition functions in the numerator of . 
from classical statistical mechanics , rigid - body translational degrees of freedom of the chaincontribute to the partition function a factor , and rigid - body rotational degrees of freedom contribute an extra factor , since the overall mass of the filament is .together these two factors give a partition function .the vibrational contributions of the monomers in the chain factorise in the partition function , as for a chain of harmonic oscillators , resulting in standard factors of the type , where is the einstein frequency . clearly these factors do not contribute to because the corresponding terms in the numerator and denominator cancel .a full consideration of the normal modes of the linear chain with free ends , beyond the einstein model , leads to an additional nontrivial size - dependence , for vibrations of harmonic spheres in 1d , and to for vibrations in a flexible 3d chain . in simple terms , upon increasing the chain length ,more low - energy modes can be accommodated in the spectrum , which causes the partition function to decrease .the importance of this effect was first recognized by j. frenkel in the context of nucleation phenomena .hence with purely central - force interaction in 3d ( flexible chain ) the overall contribution is .akin to covalent bonds in molecular physics , the bending stiffness introduces additional degrees of freedom for rotations about the bond symmetry axis , which then leads to an overall dependence .one should note that with spheres and purely central - force bonds there is no such axis of symmetry for the rotations , and the three translational degrees of freedom per particle suffice to describe the vibrational behavior . including all these considerations ,the dissociation rate will have a dependence on the fragment sizes given by the exponent , which collects all size - dependent contributions of the partition function , is different depending on whether the interaction is purely central - force , or has a bond - bending stiffness .for central forces , , whereas with semiflexible or stiff chains one has .the leading contribution is then , with a pronounced bell - shape peaked in the middle for the exclusively central - force flexible chain , and , leading to a much flatter distribution for a chain with bond - bending penalty .the fact that the slightly u - shaped distribution observed in simulations for stiff filaments is not recovered by this model should be attributed to the various approximations ( kirkwood - riseman for chain diffusion , detailed balance , etc . )involved in the model , and also to the harmonic approximation of independent linear oscillators underlying the factorization of partition functions .this argument , however , explains , qualitatively , that a flatter distribution of fragments is to be expected in the presence of bond - bending , due to the additional rotational degrees of freedom about the stiff intermolecular bond symmetry axis , which is absent with purely central - force interactions. 
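as a purely illustrative check of how an exponent of this kind controls the shape of the fragment distribution , one can tabulate a rate of the form k(j) proportional to [ j ( n - j ) ]^alpha for placeholder values of alpha ( the exponents quoted in the text are not reproduced here ) :

```python
import numpy as np

N = 100                       # chain length (illustrative)
j = np.arange(1, N)           # size of one of the two fragments

def frag_dist(alpha):
    """Normalised breakup rate of the form [j*(N-j)]**alpha.

    alpha is a placeholder exponent standing in for the combined size-dependence
    of the rigid-body and vibrational partition functions; it is NOT the value
    derived in the text.
    """
    w = (j * (N - j)).astype(float) ** alpha
    return w / w.sum()

for alpha in (0.5, 0.05):
    p = frag_dist(alpha)
    print(f"alpha = {alpha}: middle/end ratio = {p[N // 2 - 1] / p[0]:.2f}")
# a clearly positive exponent gives a bell shape peaked in the middle,
# while an exponent close to zero gives an almost flat distribution.
```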
we can briefly comment on the qualitative predictions of this model for the distribution of breakup fragments in realistic amyloid fibrils .realistic intermolecular forces which bind proteins in amyloid fibrils crucially depend on both electrostatics and temperature .we shall start considering the role of electrostatics first .electrostatic repulsion between two bound proteins in a filament is ubiquitous except for solutions at very high ionic strength .electrostatic repulsion acts to `` lift up '' the bonding minimum , and it may also contribute an additional small energy barrier to the total interaction , with a maximum co - existing or competing with the new lifted attractive minimum .we denote the new attractive minimum as . due to the fact that the electrostatic energy decreases with , and the maximum is typically at , the lifting up of the bonding minimum by the electrostatic repulsion is not entirely compensated by the energy barrier ( the new maximum in ) .hence the total energy to be overcome for the particle to escape from the bonding minimum is .this consideration points towards a role of electrostatics which promotes breakup , or at least , restructuring into a different morphology where the electrostatic energy density is reduced .this outcome of our analysis is compatible with recent experimental observations where an increased electrostatic repulsions ( e.g. at lower ionic strengths ) is responsible for fission or scission phenomena of larger compact aggregates into smaller and more anisotropic aggregates .our simulations show a crossover from a u - shaped fragment distribution into a bell - shaped distribution upon going from high values of bond - bending stiffness to lower values . in our simulations , is fixed and set independently of , the latter being kept constant throughout at varying . in reality , however , and may not be decoupled for a realistic model of amyloid fibrils .the reason is that the inter - protein bending stiffness originates , microscopically , from the strength of -sheets which bind two adjacent proteins in the fibril .the mechanism is known : due to the planar , sheet - like , nature of two hydrogen - bonded -sheets , there is an intrinsic bending resistance against sliding or rolling of the two proteins past each other . the same mechanism provides bending rigidity when two surfaces bonded by many anchored central - force springs are displaced tangentially apart . upon increasing ,the hydrogen and hydrophobic bonds which keep the two -sheets together start to dissociate , leading to lower bending stiffness and lower values of . hence , based on our simulation results , we can predict that the fragment distribution function of realistic amyloid fibrils should evolve from a u - shaped distribution at low temperature , where the -sheets of two adjacent proteins are tightly bound , into a bell - shaped distribution at higher where the -sheet bonding becomes looser , which makes the bending stiffness decrease .this prediction seems to be confirmed by preliminary experiments , and future work using ab - initio simulations should focus on identifying the relationship between and , which controls the evolution of the fragment distribution with . 
in future research it will be important to combine all these effects into a general coarse - grained approach along the lines of , to achieve a bottom - up description of realistic filaments and their size evolution . when the bending rigidity of the chain is high , the probability of spontaneous bond breaking is flat when the bond potential is harmonic , yet it adopts a very distinct and very strongly biased u - shape when the cutoff / asymmetry of the potential increases ( fig . [ fig4 ] ) . how can we quantitatively explain why the asymmetry of the interaction potential between any two bonded subunits leads to higher breakup rates at the chain ends , and much smaller breakup rates in the middle ? for a high bending modulus one can treat the bond at the filament end as a classical diatomic molecule , and a subunit in the middle of the chain as the inner particle in a linear triatomic molecule . in the latter case , the combined potential felt by the particle in the middle is sketched in fig . [ fig1]c . one would be tempted to explain the difference between the higher dissociation rate at the filament end and the lower one in the middle by referring to the overall lower energy ( deeper potential well ) felt by the particle in the middle sitting in the minimum of the combined potential . applying a kramers - type escape - rate argument would then lead to an arrhenius dependence of the escape rate on the depth of the energy well and an overall large difference between the two rates . however , such an approach cannot explain the observation that the rate is the same in the middle and at the end for the case of the harmonic potential ; in that case the same argument about the well depth applies , hence one would expect a lower rate in the middle , which is not observed , in agreement with previous calculations . what is different in the case of the harmonic potential is the fact that the asymmetry of the bonding potential is removed for the particle at the end of the chain ( while the subunits in the middle effectively experience the harmonic potential in both cases ) . it is in fact this asymmetry which facilitates dissociation at the termini of the chain , where less resistance is encountered by the particle escaping outwardly from the bound state . in order to verify that this is indeed the right physics , we also ran a test simulation with a quartic potential , which is anharmonic yet fully symmetric about the minimum , just like the harmonic potential . also in this case we found a completely flat distribution of fragments , as for the harmonic potential , which supports the proposed claim . it is therefore the asymmetry , in the case of anharmonic potentials , which plays the major role in facilitating the preferential bond breakup at the chain ends . the explanation can be found in the different values of the mean thermal fluctuation from the equilibrium position ( energy minimum ) for the particle sitting in the asymmetric lj potential at the chain end , and the particle moving in the more symmetric combined potential in the middle of the chain . an analysis of the mean thermal fluctuation done long ago by j. frenkel shows that the mean thermal fluctuation of the particle feeling the anharmonic / asymmetric potential at the end is typically larger , because of the shallower slope of the potential in the outward direction .
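a small numerical check of this statement ( illustrative reduced units , not taken from the paper ) compares the boltzmann - averaged displacement from the minimum for a single lj bond with that of its harmonic approximation , before turning to the particle in the middle :

```python
import numpy as np

kT, eps, sigma = 1.0, 5.0, 1.0                 # illustrative reduced units
r0 = 2.0 ** (1.0 / 6.0) * sigma                # position of the LJ minimum
r = np.linspace(0.9, 1.8, 4000)

def mean_shift(U):
    """Boltzmann-averaged displacement <r - r0> in the potential U(r)."""
    w = np.exp(-(U - U.min()) / kT)
    return np.trapz((r - r0) * w, r) / np.trapz(w, r)

U_lj = 4.0 * eps * ((sigma / r) ** 12 - (sigma / r) ** 6)   # asymmetric, anharmonic bond
k_h = 72.0 * eps / (2.0 ** (1.0 / 3.0) * sigma ** 2)        # LJ curvature at the minimum
U_h = 0.5 * k_h * (r - r0) ** 2                             # symmetric harmonic approximation

print("LJ bond:       <r - r0> =", round(mean_shift(U_lj), 4))   # positive, outward shift
print("harmonic bond: <r - r0> =", round(mean_shift(U_h), 4))    # essentially zero
```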
for the particle in the middle , the situation is different because the combined potential does not become shallower as the particle in the middle moves away from one of the two neighbours , due to the presence of the interaction with the other neighbour . by means of brownian dynamics simulations , we have shown that the thermal breakup rates and the breakup topology of model protein filaments ( and other linear nanoparticle aggregates ) are strongly affected by the presence of bond - bending stiffness in the interaction between subunits , and by the degree of asymmetry of the anharmonic binding potential . with stiff chains bonded by inter - particle forces with anharmonicity and asymmetry of the potential typical for intermolecular interaction potentials ( van der waals , hydrophobic attraction , etc . ) , we find a strongly preferential breakup at the chain ends , and an overall u - shaped fragment distribution . in contrast , with purely central - force interactions between subunits , that is , fully flexible chains , the fragment size distribution is bell - shaped , with a pronounced peak in the middle ( symmetric breakup ) , and the lowest breakup rate is found at the ends of the chain . while the preferential breakup at the end of stiff chains ( filament depolymerization ) can be explained in terms of the larger thermal fluctuations at the chain end associated with potential anharmonicity / asymmetry in a perfectly stiff quasi-1d chain model , the dramatic change of breakup topology upon varying the strength of the bond - bending interaction is more subtle . in this case we found a tentative explanation upon considering the degrees of freedom associated with the vibrational partition function of the fragments . in general , breakup into two equal fragments is favoured with purely central - force bonds because the product of the partition functions of the two fragments is maximised ( which is intuitive if one considers that the classical partition function for rigid - body motions increases strongly with the fragment size ) . the vibrational partition function , instead , decreases with fragment size because more low - energy modes can be accommodated in longer fragments . this effect becomes stronger in the case of bond - bending , where the total number of vibrational degrees of freedom is larger due to the rotation axis of the stiff bond . as a result of this compensation between the size dependencies of the vibrational and rigid - body partition functions , the size - dependence of the fragmentation rate with bond - bending is much weaker compared to the central - force case . hence , we found some general laws which govern the fragmentation behavior of model linear aggregates , as a function of the relative importance of central - force and bond - bending interactions between subunits . these findings are important towards achieving a bottom - up control over the length and time - evolution of filament populations , both in biological problems ( actin , amyloid fibrils , etc . ) and in nanoparticle self - assembly for photonic applications . we are grateful for many discussions and input of t.p.j . knowles , t. michaels , c.m . dobson and a. bausch . this work has been supported by the ernest oppenheimer fellowship at cambridge ( az , ld ) and by the technische universität münchen institute for advanced study , funded by the german excellence initiative and the eu 7th framework programme under grant agreement nr . 291763 ( az ) . ld also acknowledges the marie curie itn - comploids grant no . 234810 . f. oosawa , s.
asakura , _ thermodynamics of the polymerization of protein ._ ( academic press , 1975 ) .t. d. pollard , ann . rev. biochem . * 55 * , 987 - 1035 ( 1986 ) .f. chiti , c. m. dobson , annu .biochem . * 75 * , 333 ( 2006 ) .d. chandler , nature * 437 * , 640 - 647 ( 2005 ) .a. irbck , s. a. jnsson , n. linnemann , b. linse , s. wallin .lett . * 110 * , 058101 ( 2013 ) .p. s. niranjan , p. b. yim , j. g. forbes , s. c. greer , j. dudowicz , k. f. freed , j. f. douglas , j. chem . phys .* 119 * , 4070 - 4084 ( 2003 ) .j. adamcik , j .-jung , j. flakowski , p. de los rios , g. dietler , r. mezzenga , nature nanotech . * 5 * , 423 - 428 ( 2010 ) .m. tanaka , s. r. collins , b. h. toyama , j. s. weissman , nature * 442 * , 585 ( 2006 ) .a. zaccone , d. gentili , h. wu , m. morbidelli , j. chem .132 , 134903 ( 2010 ) .h. wu , a. tsoutsoura , et al .langmuir 26 , 2761 ( 2010 ) .t. p. j. knowles , c. a. waudby , g. l. devlin , s. i. a. cohen , a. aguzzi , m. vendruscolo , e. m. terentjev , m. e. welland , c. m. dobson , science * 326 * , 1533 - 1537 ( 2009 ) .v. fodera , a. zaccone , m. lattuada , a. m. donald , phys .lett . * 111 * , 108105 ( 2013 ) .l. di michele , e. eiser , v. fodera , j. phys .lett . * 4 * , 3158 ( 2012 ) .s. odenbach , ed ._ colloidal magnetic fluids . basics , development and applications of ferrofluids_. ( berlin , springer , 2009 ) .b. bonn , h. kellay , m. prochnow , k. ben - djiemiaa , j. meunier , science * 280 * , 265 - 267 ( 1998 ) .l. dimichele , et al .136 , 6538 ( 2014 ) .m. peyrard , a. r. bishop , phys .. lett . * 62 * , 2755 - 2758 ( 1989 ) . c. b. mast , s. schink , u. gerland , d. braun , proc .usa * 110 * , 8030 - 8035 ( 2013 ) . c. f. lee , phys .e * 80 * , 031134 ( 2009 ) . t. l. hill ,j. 44 , 285 ( 1983 ) .e. fermi , j. r. pasta , s. ulam , los alamos scientific laboratory report no .la-1940 , may 1955 .n. j. zabusky , m. d. kruskal , phys .lett . * 15 * , 240 - 243 ( 1965 ) .f. a. oliveira , p. l. taylor , j. chem.phys .* 101 * , 10118 ( 1994 ) .a. ghosh , d. i. dimitrov , v. g. rostiashvili , a. milchev , t. a. vilgis , j. chem .* 132 * , 204902 ( 2010 ) .a. zaccone , e. m. terentjev , phys .rev . lett .* 108 * , 038302 ( 2012 ) .p. haenggi , p. talkner , m. borkovec , rev .* 62 * , 251 - 341 ( 1990 ) .f. a. l. mauguiere , p. collins , g. s. ezra , s. wiggins , j. chem . phys . * 138 * , 134118 ( 2013 ) .j. paturej , a. milchev , v. g. rostiashvili , t. a. vilgis , j. chem . phys . * 134 * , 224901 ( 2011 ) .s. j. plimpton , j. comput . phys .* 117 * , 1 ( 1995 ) .d. l. ermak , j. chem . phys . * 62 * , 4189 ( 1975 ) .d. l. ermak , j. a. mccammon , j. chem . phys . * 69 * , 1352 ( 1978 ) .d. burne , s. kim , proc .natl . acad .90 * , 3835 ( 1993 ) .t. yanagida , m. nakase , k. nishiyama , f. oosawa , nature , * 307 * , 58 - 60 ( 1984 ) .f. gittes , b. mickey , j. nettleton , j. howard , j. cell biol . *120 * , 923 - 934 ( 1993 ) .t. p. j. knowles , j. f. smith , a. craig , c. m. dobson , m. e. welland , phys .lett . * 96 * , 238301 ( 2006 ) .j. riseman , j. g. kirkwood , j. chem . phys . * 18 * , 512 ( 1950 ) .f. f. abraham and j. canosa , j. chem .50 * , 1303 ( 1969 ) .j. lothe and g. m. pound , phys . rev .* 182 * , 339 ( 1969 ) .j. frenkel , _ kinetic theory of liquids _( dover , new york , 1946 ) .m. doi , s. f. edwards ._ the theory of polymer dynamics _ ( oxford university press , 1986 ). n. g. van kampen ._ stochastic processes in physics and chemistry _( elsevier , amsterdam , 1997 ) .w. 
ebeling and i.m .sokolov , _ statistical thermodynamics and stochastic theory of nonequilibrium systems _ ( world scientific , singapore , 2005 ) .d. pines and p. nozieres , _ the theory of quantum liquids _ , vol . 1 ( w.a .benjamin , reading massachusetts , 1966 ) .a. dehsorkhi , v. castelletto , i. w. hamley , j. adamcik , r. mezzenga , soft matter 9 , 6033 - 6036 ( 2013 ) .l. nicoud , s. lazzari , d. balderas barragan , and m. morbidelli , preprint ( 2015 ) .knowles , et al .109 , 158101 ( 2012 ) .s. assenza , j. adamcik , r. mezzenga , p. de los rios , phys .113 , 268103 ( 2014 ) .
protein molecules often self - assemble by means of non - covalent physical bonds to form extended filaments , such as amyloids , f - actin , intermediate filaments and many others . the kinetics of filament growth is limited by the disassembly rate , at which inter - protein bonds break due to the thermal motion . existing models often assume that the thermal dissociation of subunits occurs uniformly along the filament , or even preferentially in the middle , while the well - known propensity of f - actin to depolymerize from one end is mediated by chemical factors ( adp complexation ) . here we show for a very general ( and generic ) model , using brownian dynamics simulations and theory , that the breakup location along the filament is strongly controlled by the asymmetry of the binding force about the minimum , as well as by the bending stiffness of the filament . we provide the basic connection between the features of the interaction potential between subunits and the breakup topology . with central - force ( that is , fully flexible ) bonds the breakup rate is always maximum in the middle of the chain , whereas for semiflexible or stiff filaments this rate is either a minimum in the middle or flat . the emerging framework provides a unifying understanding of biopolymer fragmentation and depolymerization , and recovers earlier results in its different limits .
resampling methods for dependent data such as time series have been studied extensively over the last decades . for an overview of existing bootstrap methodssee the monograph of and the review papers by , , , or the recent review paper by . among the most popular bootstrap procedures in time series analysis , we mention the autoregressive ( ar ) sieve bootstrap [ cf . ( ) , , ] and block bootstrap and its variations ; cf . , , ( ) , etc .a recent addition to the available time series bootstrap methods was the linear process bootstrap ( lpb ) introduced by who showed its validity for the sample mean for univariate stationary processes without actually assuming linearity of the underlying process .the main idea of the lpb is to consider the time series data of length as one large -dimensional vector and to estimate appropriately the entire covariance structure of this vector .this is executed by using tapered covariance matrix estimators based on flat - top kernels that were defined in .the resulting covariance matrix is used to whiten the data by pre - multiplying the original ( centered ) data with its inverse cholesky matrix ; a modification of the eigenvalues , if necessary , ensures positive definiteness .this decorrelation property is illustrated in figures 5 and 6 in .after suitable centering and standardizing , the whitened vector is treated as having independent and identically distributed ( i.i.d . ) components with zero mean and unit variance .finally , i.i.d . resampling from this vector and pre - multiplying the corresponding bootstrap vector of residuals with the cholesky matrix itself results in a bootstrap sample that has ( approximately ) the same covariance structure as the original time series . due to the use of flat - top kernels with compact support , an abruptly dying - out autocovariance structureis induced to the bootstrap residuals .therefore , the lpb is particularly suitable for but not limited to time series of moving average ( ma ) type . in a sense , the lpb could be considered the closest analog to an ma - sieve bootstrap which is not practically feasible due to nonlinearities in the estimation of the ma parameters .a further similarity of the lpb to ma fitting , at least in the univariate case , is the equivalence of computing the cholesky decomposition of the covariance matrix to the innovations algorithm ; cf . , and , the latter addressing the multivariate case .typically , bootstrap methods extend easily from the univariate to the multivariate case , and the same is true for time series bootstrap procedures such as the aforementioned ar - sieve bootstrap and the block bootstrap . by contrast, it has not been clear to date if / how the lpb could be successfully applied in the context of multivariate time series data ; a proposal to that effect was described in who refer to an earlier preprint of the paper at hand but it has been unclear to date whether the multivariate lpb is asymptotically consistent and/or if it competes well with other methods .here we attempt to fill this gap : we show how to implement the lpb in a multivariate context and prove its validity for the sample mean and for spectral density estimators , the latter being a new result even in the univariate case .note that the limiting distributions of the sample mean and of kernel spectral density estimators depend only on the second - order moment structure . 
hence it is intuitive that the lpb would be well suited for such statistics since it generates a linear process in the bootstrap world that mimics well the second - order moment structure of the real world .furthermore , in the spirit of the times , we consider the possibility that the time series dimension is increasing with sample size and identify conditions under which the multivariate linear process bootstrap ( mlpb ) maintains its asymptotic validity , even in this case .the key here is to address the subject of consistently estimating the autocovariance sequence ; this is a sequence of matrices that we conveniently stack into one huge matrix .we are then able to show consistency of an estimator based on the aforementioned flat - top tapers ; most importantly , the consistency holds true even when the time series dimension is allowed to increase with the sample size .the paper is organized as follows . in section [ preliminaries ], we introduce the notation of this paper , discuss tapered covariance matrix estimation for multivariate stationary time series and state assumptions used throughout the paper ; we then present our results on convergence with respect to operator norm of tapered covariance matrix estimators .the mlpb bootstrap algorithm and some remarks can be found in section [ bootstrapscheme ] , and results concerned with validity of the mlpb for the sample mean and kernel spectral density estimates are summarized in section [ asymptoticresults ] .asymptotic results established for the case of increasing time series dimension are stated in section [ asymptoticresultsincreasing ] , where operator norm consistency of tapered covariance matrix estimates and a validity result for the sample mean are discussed .a finite - sample simulation study is presented in section [ secsim ] . finally ,all proofs , some additional simulations and a real data example on the weighted mean of an increasing number of stock prices taken from the german stock index dax can be found at the paper s supplementary material [ ] , which is also available at http://www.math.ucsd.edu/\textasciitilde politis / paper / mlpbsupplement.pdf[http://www.math.ucsd.edu/\textasciitilde politis / paper / mlpbsupplement.pdf ] .suppose we consider an -valued time series process with , and we have data at hand .the process is assumed to be strictly stationary and its autocovariance matrix at lag is where , and the sample autocovariance at lag is defined by where is the -variate sample mean vector .here and throughout the paper , all matrix - valued quantities are written as bold letters , all vector - valued quantities are underlined , indicates the transpose of a matrix , the complex conjugate of and denotes the transposed conjugate of .note that it is also possible to use unbiased sample autocovariances , that is , having instead of in the denominator of ( [ samplecovariance ] ) .usually the biased version as defined in ( [ samplecovariance ] ) is preferred because it guarantees a positive semi - definite estimated autocovariance function , but our tapered covariance matrix estimator discussed in section [ taperedestimator ] is adjusted in order to become positive definite in any case .now , let be the -dimensional vectorized version of the data matrix ] , respectively . 
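to fix ideas , the following is a rough sketch ( in python , not the authors' implementation ) of how a tapered , stacked covariance estimate can be formed from the vectorised data and then used for the whitening / re - colouring cycle of the ( m)lpb ; the trapezoidal taper , the banding parameter l and the eigenvalue floor are assumptions made for illustration only .

```python
import numpy as np

def tapered_block_toeplitz(x, l):
    """Flat-top (trapezoidal) tapered estimate of the (n*d x n*d) covariance of vec(X).

    x : (n, d) array, one row per time point of a d-variate stationary series
    l : banding parameter of the taper (assumed given)
    """
    n, d = x.shape
    xc = x - x.mean(axis=0)
    gam = [xc[k:].T @ xc[:n - k] / n for k in range(n)]    # biased Gamma_hat(k), k >= 0
    def kappa(u):                                          # trapezoid: 1 on [0,1], down to 0 at 2
        return 1.0 if abs(u) <= 1.0 else max(0.0, 2.0 - abs(u))
    big = np.zeros((n * d, n * d))
    for i in range(n):
        for j in range(n):
            k = i - j
            block = gam[k] if k >= 0 else gam[-k].T        # Gamma(-k) = Gamma(k)^T
            big[i * d:(i + 1) * d, j * d:(j + 1) * d] = kappa(k / float(l)) * block
    return big

def mlpb_sample(x, l, rng, eig_floor=1e-6):
    """One bootstrap pseudo-sample: whiten vec(X), resample i.i.d., re-colour (a sketch)."""
    n, d = x.shape
    sigma = tapered_block_toeplitz(x, l)
    w, v = np.linalg.eigh(sigma)
    sigma = (v * np.maximum(w, eig_floor)) @ v.T           # crude positive-definiteness fix
    chol = np.linalg.cholesky(sigma)
    vec = (x - x.mean(axis=0)).reshape(-1)                 # stacked (vectorised) centred data
    z = np.linalg.solve(chol, vec)                         # whitened residuals
    z = (z - z.mean()) / z.std()
    z_star = rng.choice(z, size=n * d, replace=True)       # i.i.d. resampling
    return (chol @ z_star).reshape(n, d) + x.mean(axis=0)  # re-coloured pseudo-series

rng = np.random.default_rng(1)
x_star = mlpb_sample(rng.standard_normal((50, 2)), l=5, rng=rng)
print(x_star.shape)   # (50, 2)
```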
for statistics contained in the broad class of functions of generalized means , discussed how by using a preliminary blocking scheme tailor - made for a specific statistic of interest , the mlpb can be shown to be consistent .this class of statistics contains estimates of with such that for some sufficiently smooth functions , and fixed .they propose to block the data first according to the known function and to apply then the ( m)lpb to the blocked data .more precisely , the multivariate lpb - of - blocks bootstrap is as follows : define , and let be the set of blocked data .apply the mlpb scheme of section [ bootstrapscheme ] to the -dimensional blocked data to get bootstrap observations .compute .repeat steps 2 and 3 -times , where is large , and approximate the unknown distribution of by the empirical distribution of .the validity of the multivariate lpb - of - blocks bootstrap for some statistic can be verified by checking the assumptions of theorem [ validitysamplemean ] for the sample mean of the new process .in this section , we consider the case when the time series dimension is allowed to increase with the sample size , that is , as . in particular , we show consistency of tapered covariance matrix estimates and derive rates that allow for an asymptotic validity result of the mlpb for the sample mean in this case .the recent paper by gives a thorough discussion of the estimation of toeplitz covariance matrices for univariate time series . in their setup , that covers also the possibility of having multiple datasets from the same data generating process , establish the optimal rates of convergence using the two simple flat - top kernels discussed in section [ taperedestimator ] , namely the truncated ( i.e. , case of pure banding no tapering ) and the trapezoid taper . when the strength of dependence is quantified via a smoothness condition on the spectral density ,they show that the trapezoid is superior to the truncated taper , thus confirming the intuitive recommendations of .the asymptotic theory of allows for increasing number of time series and increasing sample size , but their framework does not contain the multivariate time series case , neither for fixed nor for increasing time series dimension , which will be discussed in this section .note that theorem 1 in for the univariate case , as well as our theorem [ operatornormconvergence1 ] for the multivariate case of fixed time series dimension , give upper bounds that are quite sharp , coming within a log - term to the ( gaussian ) optimal rate found in theorem 2 of . instead of assumptions ( a1)(a5 ) that have been introduced in section [ assumptions ] and used in theorem [ validitysamplemean ] to obtain bootstrap consistency for the sample mean for fixed dimension , we impose the following conditions on the sequence of time series process of now increasing dimension . is a sequence of -valued strictly stationary time series processes with mean vectors and autocovariances defined as in ( [ covariance ] ) . 
here, is a nondecreasing sequence of positive integers such that as and , further , suppose for some to be further specified .there exists a constant such that for all and all with , we have there exists an large enough such that for all and all the eigenvalues of the covariance matrix are bounded uniformly away from zero and from above .define the sequence of projection operators for , and suppose and for the sample mean , a cramr wold - type clt holds true .that is , for any real - valued sequence of -dimensional vectors with for all and , we have assumptions ( a1)(a4 ) are uniform analogues of ( a1)(a4 ) , which are required here to tackle the increasing time series dimension . in particular , ( a1 ) implies observe also that the autocovariances are assumed to decay with increasing lag , that is , in time direction , but they are not assumed to decay with increasing , that is , with respect to increasing time series dimension .therefore , we have to make use of square summable sequences in ( a5 ) to get a clt result .this technique has been used , for example , by and to establish central limit results for the estimation of an increasing number of autoregressive coefficients .a simple sufficient condition for ( a5 ) is , for example , the case of being a sequence of i.i.d .gaussian processes with eigenvalues of bounded uniformly from above and away from zero .the following theorem generalizes the results of theorems [ operatornormconvergence1 ] and [ operatornormconvergence2 ] and of corollary [ operatornormconvergence3 ] to the case where is allowed to increase with the sample size .in contrast to the case of a stationary spatial process on the plane ( where a data matrix is observed that grows in both directions asymptotically as in our setting ) , we do not assume that the autocovariance matrix decays in all directions .therefore , to be able to establish a meaningful theory , we have to replace ( a1)(a5 ) by the uniform analogues ( a1)(a5 ) , and due to ( [ rate ] ) , an additional factor turns up in the convergence rate and has to be taken into account . [ operatornormconvergence3d ] under assumptions with specified below , and, we have : and are terms of order , where and if . and are both terms of order and if . , , and are bounded from above and below . and as well as and are bounded from above and below in probability if and , respectively .the required rates for the banding parameter and the time series dimension to get operator norm consistency can be interpreted nicely .if is chosen to be large enough , becomes the leading term , and there is a trade - off between capturing more dependence of the time series in time direction ( large ) and growing dimension of the time series in cross - sectional direction ( large ) . the subsequent theorem is a cramr wold - type generalization of theorem [ validitysamplemean ] to the case where is allowed to grow at an appropriate rate with the sample size . to tackle the increasing time series dimension and to prove such a clt result , we have to make use of appropriate sequences of square summable vectors as described in ( a5 ) above .[ validitysamplemeand ] under assumptions with specified below , , , for , as well as and for some sequence , the mlpb is asymptotically valid for the sample mean . that is , for any real - valued sequence of -dimensional vectors with for all and , we have and . in practice, the computational requirements can become very demanding for large and . 
in this case, we suggest to split the data vector in few subsamples , say , and to apply the mlpb scheme to each subsample separately .this operation can be justified by the fact that dependence structure is distorted only few times . precisely , we suggest the following procedure : for small , define and such that , and let , , where is filled up with zeros if .apply the mlpb bootstrap scheme as described in section [ bootstrapscheme ] separately to the subsamples to get .put end - to - end together , and discard the last values to get and . here ,computationally demanding operations as eigenvalue decomposition , cholesky decomposition and matrix inversion have to be executed only for lower - dimensional matrices , such that the algorithm above is capable to reduce the computation time considerably .further , to regain efficiency , we propose to use the pooled sample mean for centering and for whitening and re - introducing correlation structure for _ all _ subsamples in step 2 . here, is obtained analogously to ( [ gammahatepsilon ] ) , but based on the upper - left sub - matrix of .in this section we compare systematically the performance of the multivariate linear process bootstrap ( mlpb ) to that of the vector - autoregressive sieve bootstrap ( ar - sieve ) , the moving block bootstrap ( mbb ) and the tapered block bootstrap ( tbb ) by means of simulation . in order to make such a comparison ,we have chosen a statistic for which all methods lead to asymptotically correct approximations .being interested in the distribution of the sample mean , we compare the aforementioned bootstrap methods by plotting : root mean squared errors ( rmse ) for estimating the variances of and coverage rates ( cr ) of 95% bootstrap confidence intervals for the components of for two data generating processes ( dgps ) and three sample sizes in two different setups . first , in section [ secsimtuning ] , we compare the performance of all aforementioned bootstraps with respect to ( w.r.t . ) tuning parameter choice .these are the banding parameter ( mlpb ) , the autoregressive order ( ar - sieve ) and the block length ( mbb , tbb ) .furthermore , we report rmse and cr for data - adaptively chosen tuning parameters to investigate how accurate automatic selection procedures can work in practice .second , in section [ secsimdimension ] , we investigate the effect of the time series dimension on the performance of the different bootstrap approaches . for each case , we have generated time series and bootstrap replications have been used in each step . for ( a ) , the exact covariance matrix of is estimated by 20,000 monte carlo replications .further , we use the trapezoidal kernel defined in ( [ trapezoid ] ) to taper the sample covariance matrix for the mlpb and the blocks for the tbb . 
to correct the covariance matrix estimator to be positive definite , if necessary, we set and to get .this choice has already been used by and simulation results ( not reported in this paper ) indicate that the performance of the mlpb reacts only slightly to this choice .we have used the sub - vector resampling scheme , that is , steps 3 and 4 described in section [ bootstrapscheme ] .some additional simulation results and a real data application of the mlpb to the weighted mean of an increasing number of german stock prices taken from the dax index can be found in the supplementary material to this paper [ ] .the r code is available at http://www.math.ucsd.edu/\textasciitilde politis / soft / function_mlpb.r[http://www.math.ucsd.edu/\textasciitilde politis / soft / function_mlpb.r ] .we consider realizations of length from two bivariate ( ) dgps .precisely , we study a first - order vector moving average process and a first - order vector autoregressive process where is a normally distributed i.i.d .white noise process and have been used in all cases .it is worth noting that ( asymptotically ) all bootstrap procedures under consideration yield valid approximations for both models above . for the vma(1 )model , mlpb is valid for all ( sufficiently small ) choices of banding parameters , but ar - sieve is valid only asymptotically for tending to infinity at an appropriate rate with increasing sample size .this relationship of mlpb and ar - sieve is reversed for the var(1 ) model . for the mbb and the tbb, the block length has to increase with the sample size for both dgps . and cr of bootstrap confidence intervals for by mlpb ( solid ) , ar - sieve ( dashed ) , mbb ( dotted ) and tbb ( dash - dotted )are reported vs. the respective tuning parameters for the vma(1 ) model with sample size .line segments indicate results for data - adaptively chosen tuning parameters .mlpb with individual ( grey ) and global ( black ) banding parameter choice are reported . ] , but with var(1 ) model . ]in addition to the results for tuning parameters , we show also rmse and cr for tuning parameters chosen by automatic selection procedures in figures [ fig2 ] and [ fig3 ] . for the mlpb , we report results for data - adaptively chosen global and individual banding parameters as discussed in section [ selectionsec ] . for the ar - sieve , the order of the var model fitted to the data has been chosen by using the * r * routine var contained in the package * vars * with _ lag_.__max__ .the block length is chosen by using the * r * routine _ b_.__star__ contained in the package * np*. in figures [ fig2 ] and [ fig3 ] , we report only the results corresponding to the first component of the sample mean , as those for the second component lead qualitatively to the same results . we show them in the supplementary material , which contains also corresponding simulation results for a normal white noise dgp . for data generated by the vma(1 ) model , figure [ fig2 ] shows that the mlpb outperforms ar - sieve , mbb and tbb for adequate tuning parameter choice , that is , . 
in this case, the mlpb generally behaves superiorly , with respect to rmse and cr , to the other bootstrap methods for all tuning parameter choices of and .this was not unexpected since , by design , the mlpb can approximate very efficiently the covariance structure of moving average processes .nevertheless , due to the fact that all proposed bootstrap schemes are valid at least asymptotically , ar - sieve gets rid of its bias with increasing order , but at the expense of increasing variability and consequently also increasing rmse .mlpb with data - adaptively chosen banding parameter performs quite well , where the individual choice tends to perform superiorly to the global choice in most cases . in comparison ,mbb and tbb seem to perform quite well for adequate block length , but they lose in terms of rmse as well as cr performance if the block length is chosen automatically . the data from the var(1 ) model is highly persistent due to the coefficient near to unity .this leads to autocovariances that are rather slowly decreasing with increasing lag and , consequently , to large variances of .figure [ fig3 ] shows that ar - sieve outperforms mlpb , mbb and tbb with respect to cr for small ar orders .this is to be expected since the underlying var(1 ) model is captured well by ar - sieve even with finite sample size .but the picture appears to be different with respect to rmse .here , mlpb may perform superiorly for adequate tuning parameter choice , but this effect can be explained by the very small variance that compensates its large bias , in comparison to the ar - sieve ( bias and variance not reported here ) leading to a smaller rmse .this phenomenon is also illustrated by the poor performance of mlpb with respect to cr for small choices of .however , more surprising is the rather good performance of the mlpb if the banding parameter is chosen data - adaptively , where the mlpb appears to be comparable to the ar - sieve in terms of rmse and is at least close with respect to cr .further , as observed already for the vma(1 ) model in figure [ fig2 ] , the individual banding parameter choice generally tends to outperform the global choice here again .similarly , it can be seen here that the performance of ar - sieve worsens with increasing at the expense of increasing variability .the block bootstraps mbb and tbb appear to be clearly inferior to mlpb and ar - sieve , particularly with respect to cr , but also with respect to rmse if tuning parameters are chosen automatically .we consider -dimensional realizations with from two dgps of several dimensions .precisely , we study first - order vector moving average processes and first - order vector autoregressive processes of dimension , where is a -dimensional normally distributed i.i.d. white noise process , and and are such that observe that the vma(1 ) and var(1 ) models considered in section [ secsimtuning ] are included in this setup for . in figures [ fig4 ] and [ fig5 ] , we compare the performance of mlpb , ar - sieve , mbb and tbb for the dgps above using rmse and cr averaged over all time series coordinates .precisely , we compute rmse individually for the estimates of , and plot the averages in the upper half of figures [ fig4 ] and [ fig5 ] . 
similarly , we plot averages of individually calculated cr of bootstrap confidence intervals for , in the lower halfs .all tuning parameters are chosen in a data - based and optimal way , as described in section [ secsimtuning ] , and to reduce computation time , the less demanding algorithm , as described in section [ secreduction ] with , is used . , and average cr of bootstrap confidence intervals for , , by mlpb ( solid ) , ar - sieve ( dashed ) , mbb ( dotted ) and tbb ( dash - dotted ) with data - based optimal tuning parameter choices are reported vs. the dimension for the vma(1 ) model with sample size .mlpb with individual ( grey ) and global ( black ) banding parameter choice are reported . ] , but with var(1 ) model . ] for the vma(1 ) dgps in figure [ fig4 ] , the mlpb with individual banding parameter choice outperforms the other approaches essentially for all time series dimension under consideration with respect to averaged rmse and cr .in particular , larger time series dimensions do not seem to have a large effect on the performance of all bootstraps for the vma(1 ) dgps , with the only exception being the mlpb with global banding parameter choice .in particular , the latter is clearly inferior in comparison to the mlpb with individually chosen banding parameter , which might be explained by sparsity of the covariance matrix . in figure[ fig5 ] , for the var(1 ) dgps , the picture is different from the vma(1 ) case above .the influence of larger time series dimension on rmse ( and less pronounced for cr ) performance is much more pronounced and clearly visible .in particular , the rmse blows up with increasing dimension for all four bootstrap methods , which is due to the also increasing variance of the process .note that the zig - zag shape of the rmse curves is due to the back and forth switching from to on the diagonal of . as already observed for the vma(1 ) dgps , the mlpb with individual banding parameter choice again performs best over essentially all time series dimensions with respect to average rmse and average cr .in particular , mlpb with individual choice is superior to the global choice . here, the good performance of the mlpb is somewhat surprising as the var(1 ) dgps have rather slowly decreasing autocovariance structure , where we expected an ar - sieve to be more suitable .the authors thank timothy mcmurry for his helpful advice on the univariate case and three anonymous referees and the editor who helped to significantly improve the presentation of the paper .
multivariate time series present many challenges , especially when they are high dimensional . the paper's focus is twofold . first , we address the subject of consistently estimating the autocovariance sequence ; this is a sequence of matrices that we conveniently stack into one huge matrix . we are then able to show consistency of an estimator based on the so - called _ flat - top tapers _ ; most importantly , the consistency holds true even when the time series dimension is allowed to increase with the sample size . second , we revisit the linear process bootstrap ( lpb ) procedure proposed by mcmurry and politis [ _ j . time series anal . _ * 31 * ( 2010 ) 471 - 482 ] for univariate time series . based on the aforementioned stacked autocovariance matrix estimator , we are able to define a version of the lpb that is valid for multivariate time series . under rather general assumptions , we show that our multivariate linear process bootstrap ( mlpb ) has asymptotic validity for the sample mean in two important cases : ( a ) when the time series dimension is fixed and ( b ) when it is allowed to increase with sample size . as an aside , in case ( a ) we show that the mlpb works also for spectral density estimators which is a novel result even in the univariate case . we conclude with a simulation study that demonstrates the superiority of the mlpb in some important cases .
the human brain contains about neurons , and each neuron is connected to approximately other neurons . these connections , called synapses , are arranged in a highly complex network . they are responsible for neuronal communication and can be classified into two categories : electrical and chemical synapses . in electrical synapses , the transmission of information from one neuron to another is performed directly from the pre - synaptic cell to the post - synaptic cell via gap junctions . in chemical synapses , the process occurs via neurotransmitters , which cross the synaptic cleft and bind to receptors on the membrane of the post - synaptic cell . neurotransmitters may increase or decrease the probability of an action potential of a post - synaptic neuron , and the synapses are called excitatory or inhibitory , respectively . furthermore , the intensity of the chemical synapses can be modified ; in other words , they can be weakened or potentiated . the mechanism responsible for these adjustments is known as synaptic plasticity . synaptic plasticity , that is , the ability of synapses to weaken or strengthen over time , is an important property of the mammalian brain . in addition , synaptic plasticity is also related to processes of learning and memory . this adjustment of the intensities of the chemical synapses can be correlated with phenomena of synchronisation of the neuronal firing . the occurrence of synchronisation in some specific areas of the brain may be associated with some diseases , such as epilepsy and parkinson 's disease . on the other hand , it is also responsible for some vital brain functions , such as the processing of sensory information and motor function . methods to suppress synchronisation have been proposed in neuroscience , such as the introduction of external perturbations . tass and collaborators have verified the possibility of desynchronisation in hippocampal neuronal populations through coordinated reset stimulation . meanwhile , popovych and collaborators have found that the introduction of a perturbation in a globally connected neuronal network combined with synaptic plasticity can provide a positive contribution to the firing synchronisation . in this work , we study firing synchronisation in a random hodgkin - huxley neuronal network with plasticity according to spike timing - dependent plasticity ( stdp ) . this synaptic plasticity model adjusts the connection strengths by means of the temporal interval between pre - synaptic and post - synaptic spikes . bi and poo have reported that the change in synaptic efficiency after several repetitions of the experiment is due to the time difference between pre - and post - synaptic firing . if a pre - synaptic spike precedes a post - synaptic spike , long - term potentiation occurs ; otherwise , long - term depression appears . a computational neuronal network specifies the connection architecture among neurons . a globally coupled hodgkin - huxley neuron model was also considered by popovych and collaborators . they studied the synchronisation behaviour considering stdp , and found that the mean synaptic coupling presents a dependence on the input level .
in this work , we consider a random neuronal network with stdp , and input , where the connections are associated with chemical synapses . one main result is to show that spike synchronisation in a neuronal network , depending on the probability of connections , can be improved due to spike timing - dependent plasticity . this improvement is also observed when an external perturbation is applied to the network . another important result is that the orientation of the connections among neurons with different spike frequencies affects the synchronised behaviour . this paper is organised as follows : in section ii we introduce the hodgkin - huxley neuronal model . in section iii , we show the random neuronal network . in section iv , we study the synchronisation considering spike timing - dependent plasticity . finally , in the last section , we draw the conclusions . one of the most important models in computational neuroscience is the neuronal model proposed by hodgkin and huxley . in this model , the mechanism of generation of an action potential was elucidated in a series of experiments with the squid giant axon . they found three different ionic currents , consisting of sodium ( na ) , potassium ( k ) and leak ( l ) , the latter mainly due to chlorine . moreover , there are voltage - dependent channels for sodium and potassium that control the entry and exit of these ions through the cell membrane . the model is composed of a system of four coupled differential equations , given by where is the membrane capacitance ( measured in / ) , is the membrane potential ( measured in mv ) , the functions and are the variables of activation for sodium and potassium , and is the variable of inactivation for sodium . the functions , , , , , are given by the parameters and represent the conductance and reversal potentials for each ion . the constant is an external current density ( measured in / ) that determines a regime with a single spike ( / ) , or a regime with periodic spikes ( / ) , as illustrated in fig . [ fig1](a ) and ( b ) , respectively . moreover , the spike frequency increases when the constant increases . for instance , / and / approximately correspond to and , respectively . the parameters that we use in this work are presented in table [ parametros_hh ] . [ table [ parametros_hh ] caption : parameters of the hodgkin - huxley neuronal model with a resting potential equal to . ] [ figure [ fig1 ] caption : ( a ) a single spike with a subsequent resting state , and ( b ) a regime with periodic firing . ] ( a minimal single - neuron sketch of these equations is given after this paragraph . ) computational models of neuronal networks depend on the architecture , which specifies how neurons are connected and how the dynamics is applied to each unit or node .
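a minimal single - neuron sketch of the hodgkin - huxley equations ( forward euler in python , using standard textbook rate functions with a resting potential near -65 mv ; the exact entries of table [ parametros_hh ] are not reproduced here , so all numbers below are illustrative ) is :

```python
import numpy as np

# membrane and channel parameters (illustrative textbook values, resting potential ~ -65 mV)
C_m = 1.0                                   # uF/cm^2
g_Na, g_K, g_L = 120.0, 36.0, 0.3           # mS/cm^2
E_Na, E_K, E_L = 50.0, -77.0, -54.4         # mV
I_ext = 10.0                                # uA/cm^2 : above threshold -> periodic spiking

def rates(V):
    """Voltage-dependent opening/closing rates for the n, m, h gating variables."""
    a_n = 0.01 * (V + 55.0) / (1.0 - np.exp(-(V + 55.0) / 10.0))
    b_n = 0.125 * np.exp(-(V + 65.0) / 80.0)
    a_m = 0.1 * (V + 40.0) / (1.0 - np.exp(-(V + 40.0) / 10.0))
    b_m = 4.0 * np.exp(-(V + 65.0) / 18.0)
    a_h = 0.07 * np.exp(-(V + 65.0) / 20.0)
    b_h = 1.0 / (1.0 + np.exp(-(V + 35.0) / 10.0))
    return a_n, b_n, a_m, b_m, a_h, b_h

dt, T = 0.01, 200.0                          # ms
V, n, m, h = -65.0, 0.32, 0.05, 0.6
spike_times = []
for step in range(int(T / dt)):
    a_n, b_n, a_m, b_m, a_h, b_h = rates(V)
    I_ion = g_Na * m**3 * h * (V - E_Na) + g_K * n**4 * (V - E_K) + g_L * (V - E_L)
    V_new = V + dt * (I_ext - I_ion) / C_m   # membrane equation
    n += dt * (a_n * (1.0 - n) - b_n * n)    # gating dynamics
    m += dt * (a_m * (1.0 - m) - b_m * m)
    h += dt * (a_h * (1.0 - h) - b_h * h)
    if V < 0.0 <= V_new:                     # upward zero-crossing counted as a spike
        spike_times.append(step * dt)
    V = V_new
print("spike times (ms):", [round(t, 2) for t in spike_times])
```

lowering the external current density below the repetitive - firing threshold reproduces the single - spike regime of fig . [ fig1](a ) , while larger values increase the spike frequency , as described above .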
in this work ,we consider a random network , that is , the network is constructed by connecting neurons randomly .each connection is included with probability independent from every other connection .figure [ fig2 ] exhibits a schematic representation of the neuronal network considered in this work .each neuron is connected to others by randomly chosen neurons with probability .when we have a global network , where all neurons are connected ..,width=226,height=226 ] we consider a random neuronal network with chemical synapses where the connections are unidirectional , and the local dynamics is described by the hodgkin - huxley model .the network is given by where is the membrane potential of neuron ( ) , is a constant current density randomly distributed in the interval ] ) and neurons with low frequency ( values of within the range $ ] ) .figure [ fig10 ] exhibits the time averaged coupling strength as a function of the percentage of connections from neurons with high frequency to neurons with low frequency .we can see that the time average coupling strength depends on the connections . considering the case for a small percentage of connections from hfn to lfn , the average coupling is small , indicating absence of synchronisation , a situation that changes with increasing the percentage of connections to a synchronised state .this means that , when the coupling strengths increase , a desynchronised state can suddenly become synchronised .consequently , the abrupt transition from desynchronised to synchronised state , that is observed in fig . [ fig7 ] , is due to directed synapses among spiking neurons with high and low frequency . and .,width=264,height=264 ]we have been studying a neuronal network model with spiking neurons .we have chosen , as local dynamics , the hodgkin - huxley model due to the fact that it has essential features of spiking dynamics .the hodgkin - huxley model is a coupled set of nonlinear differential equations that describes the ionic basis of the action potential .these equations are able to reproduce biophysical properties of the action potential .we have used a random coupling architecture where the connections are randomly distributed according to a probability . when the probability is equal to unity we have a globally coupled network .the connections were considered unidirectional representing excitatory chemical synapses .we have studied the effects of spike timing - dependent plasticity on the synchronisation in a hodgkin - huxley neuronal network .studies about spike synchronisation are important to understand not only progressively degenerative neurological disorders , but also processing of the sensory information .popovych and collaborators showed that stdp combined with an external perturbation can improve the spike synchronisation in a globally neuronal network .the novelty in this paper is that we have considered a random neuronal network and we have verified that the spike synchronisation depend on the probability of connections . considering a strong external perturbation the spike synchronisationis suppressed . 
however , when there is stdp , depending on the probability of connections , the synchronisation in the perturbed network can be improved due to a constructive effect on the synaptic weights .we have also shown that the direction of synapses has an important role on the effects of spike timing - dependent plasticity on the synchronisation in a random hodgkin - huxley neuronal network .this study was possible by partial financial support from the following brazilian government agencies : cnpq , capes and fapesp .financial support by the spanish ministry of economy and competitivity under project number fis2013 - 40653-p is also acknowledged .00 chialvo dr .critical brain networks .phys a 2004;340:756 - 765 .gerstner w , kistler w. spiking neuron models : single neurons , populations , plasticity .cambridge university press : cambridge ; 2002 . bear mf , connors bw , paradiso ma .neuroscience : exploring the brain .lippincott williams and wilkins : england ; 2008 .fundamental neuroscience . academic press : new york ; 2008 .dayan p , abbott lf .theoretical neuroscience : computational and mathematical modelling of neural systems .mit press : massachusetts ; 2001 .purves d , augustine dj , fitzpatrick d , hall wc , lamantia a - s , mcnamara jo , williams sm . neuroscience .sinauer associates inc publishers : massachusetts : 2004 .dagostin aa , mello cv , leo rm . increased bursting glutamatergic neurotransmission in an auditory forebrain area of the zebra finch ( taenopygia guttata ) induced by auditory stimulation . j comp physiol a 2012:198;705 - 716 .the organization of behavior .wiley : new york ; 1949 .citri a , malenka rc .synaptic plasticity : multiple forms , functions , and mechanisms .neuropsychopharmacol 2008:33;18 - 41 .kelso sr , ganong ah , brown th .hebbian synapses in hippocampus .proc natl acad sci usa 1986:83;5326 - 5330 .caporale n , dan y. spike timing - dependent plasticity : a hebbian learning rule .annu rev neurosci 2008:31;25 - 46 .zhigulin vp , rabinovich mi , huerta r , abarbanel hd .robustness and enhancement of neural synchronization by activity - dependent coupling .phys rev e 2003:67;021901 .abuhassan k , coyle d , maguire l. compensating for thalamocortical synaptic loss in alzheimers disease .front comput neurosci 2014:8;65 .modolo j , bhattacharya b , edwards r , campagnaud j , legros a , beuter a. using a virtual cortical module implementing a neural field model to modulate brain rhythms in parkinsons disease .front neurosci 2010:4;45 .hammond c , bergman h , brown p. pathological synchronization in parkinsons disease : networks , models and treatments .trends neurosci 2007:30;357 - 364 .nini a , feingold a , slovin h , bergman h. neurons in the globus pallidus do not show correlated activity in the normal monkey , but phase - locked oscillations appear in the mptp model of parkinsonism .j neurophysiol 1995:74;1800 - 1805 .uhlhass p , singer w. neural synchrony in brain disorders : relevance for cognitive dysfunctions and pathophysiology .neuron 2006:52;155 - 168 .lameu el , batista cas , batista am , iarosz kc , viana rl , lopes sr , kurths j. 
suppression of bursting synchronization in clustered scale - free ( rich - club ) neuronal networks .chaos 2012:22;043149 .popovych ov , tass pa .desyncrhonizing electrical and sensory coordinated reset neuromodulation .front hum neurosci 2012:6;58 .popovych ov , yanchuk s , tass pa .self - organized noise resistance of oscillatory neural networks with spike timing - dependent plasticity .sci rep 2013:3;2926 .nordenfelt a , used j , sanjun maf . bursting frequency versus phase synchronization in time - delayed neuron networks .phys rev e 2013:87;052903 .tass pa , silchenko an , hauptmann c , barnikol ub , speckmann e - j .long - lasting desynchronization in rat hippocampal slice induced by coordinated reset stimulation .phys rev e 2009:80;011902 .feldman de .the spike - timing dependence of plasticity .neuron 2012:75;556 - 571 .markram h , gerstner w , sjstrm pj .spike - timing - dependent plasticity : a comprehensive overview .front synaptic neurosci 2012:4;8 .bi gq , poo mm .synaptic modifications in cultured hippocampal neurons : dependence on spike timing , synaptic strength , and postsynaptic dell type .j neurosci 1998:18;10464 - 10472 .gerstner w , sprekeler h , deco g. theory and simulation in neuroscience .sci 2012:338;60 - 65 .electron microscopy of synaptic contacts on dendrite spines of the cerebral cortex .nature 1959:183;1592 - 1593 .hodgkin al , huxley af . a quantitative description of membrane current and its application to conduction and excitation in nerve .j physiol 1952:117;500 - 544 .izhikevich em . which model to use for cortical spiking neurons ?ieee t neur net 2004:15;1063 - 1070 .izhikevich em .dynamical systems in neuroscience : the geometry of excitability and bursting .mit press : london ; 2006 .erds p , rnyi a. on random graphs i. publ math 1959:6;290 - 297 .nordenfelt a , wagemakers a , sanjun maf .frequency dispersion in the time - delayed kuramoto model .phys rev e 2014:89;032905 .nordenfelt a , wagemakers a , sanjun maf .cyclic motifs as the governing topological factor in time - delayed oscillator networks .phys rev e 2014:90;052920 .destexhe a , mainen zf , sejnowki tj . an efficient method for computing synaptic conductances based on a kinetic model of receptor binding .neural comput 1994:6;14 - 18 .golomb d , rinzel j. dynamics of globally coupled inhibitory neurons with heterogeneity .phys rev e 1993:48;4810 .kuramoto y. international symposium on mathematical problems in theoretical physics .springer - verlag : new york ; 1975 .kuramoto y. chemical oscillations , waves , and turbulence .springer : berlin ; 1984 .acebrn ja , bonilla ll , vicente cjp , ritort f , spigler r. the kuramoto model : a simple paradigm for synchronization phenomena .rev mod phys 2005:77;137 - 185 .ramon y cajal s. degeneration and regeneration of the nervous system .oxford university press : london ; 1928 .bliss tvp , gardner - medwin ar .long - lasting potentiation of synaptic transmission in the dentate area of the unanaestherised rabbit following stimulation of the perforant path .j physiol 1973:232;357 - 374 .bliss tvp , collingridge gl . a synaptic model of memory : long - term potentiation in the hippocampus . nature 1993:361;31 - 39 .. brain mechanisms and learning .oxford university press : london ; 1961 .
in this paper , we study the effects of spike timing - dependent plasticity on synchronisation in a network of hodgkin - huxley neurons . neuronal plasticity is the ability of a neuron , and of the network it belongs to , to change its biochemical , physiological , and morphological characteristics , temporarily or permanently , in order to adapt to the environment . for the plasticity we consider hebbian rules , specifically spike timing - dependent plasticity ( stdp ) , and for the network we consider connections that are randomly distributed . we analyse synchronisation and desynchronisation as functions of the external input level and of the probability of connections . moreover , we verify that the transition to synchronisation depends on the neuronal network architecture and on the external perturbation level . plasticity , neuronal network , synchronisation
mixing is one of the most important process in industries , including polymer processing , rubber compounding , and food processing , because it directly affects the quality of multi - component materials .several types of mixing devices , such as twin - screw extruders , twin - rotor mixers , and single - screw extruders , have been developed for different material processabilities and different processing purposes . to select an appropriate mixing element for a variety of complex materials and product qualities ,one fundamental issue is the fluid mechanical characterization of the mixing capability of various types of the mixing elements . the mixing process , or ,more specifically , the reduction of the inhomogeneity of material mixtures , is achieved through material flow driven by mixing devices .therefore , in principle , the capability of the mixing elements can be evaluated through an analysis of the flow in the device . along these lines ,the visualization of the flow patterns has been performed numerically or experimentally to obtain a qualitative insight into the global mixing kinetics . while the global flow pattern characterizes the evolution of the material distribution , the divergence of the material trajectoriesis locally associated with the deformation of the fluid elements . since the substantial local deformation rate is described by the strain rate tensor ,several approaches for the characterization of the local strain rate have been developed .the degree of irrotationality of the deformation rate , which is often called the `` mixing index '' , has been used to quantify dispersive mixing efficiency .this characterization was motivated by the experimental fact that elongational flows are more effective than simple shear for droplet / agglomerate breakup . in a rheometric flow setup ,the elongational flows are irrotational , while the simple shear flow is a superposition of a planar shear flow and a rotational flow in a plane .therefore , the irrotationality can define the flow type for the rheometric flow setup .however , in generic flows in mixing devices , the type of elongational flows described by the strain rate tensor can not be assessed by the irrotationality by definition .another quantity , called the mixing efficiency , is the relative magnitude of the elongational rate along a certain direction to the magnitude of the strain rate tensor .the mixing efficiency is useful when the interface between phases is well defined .if , in the mixing efficiency , the maximal elongational direction is taken , the modes of the strain rate , such as the uniaxial / biaxial elongational flows and the planar shear flow , are discriminated . in this case , the eigenvalue problem for the strain rate tensor should be solved .the distribution of the strain - rate modes in combination of the magnitude of the strain - rate in general flows should be useful in understanding the dispersive mixing capability .the strain - rate modes in three - dimensional flows can be identified in principle by combining the different eigenvalues of the strain - rate tensor , and thus different forms for quantification of the strain - rate mode can be designed .however , such quantification has not been derived in a numerically tractable manner . 
to the best of our knowledge, the role of elongational flow in distributive mixing has been only rarely discussed , whereas the effectiveness of irrotational elongational flows for dispersive mixing has been well recognized .flow trajectories diverge by elongational flows however small the elongational rate is .the divergence directions are restricted to a plane for planar shear flow , but involve three directions for non - planar elongational flows .if distributive mixing is effectively promoted in the confined space of a mixing device during a finite period of time , a flow pattern being effective to distributive mixing should be developed .such a flow pattern is expected to be associated with a certain distribution of the non - planar or volumetric elongational flows .the distribution of the volumetric flows is therefore considered to give essential information for a better understanding of the mixing capability of mixing elements .especially , the flow patterns in the regions of smaller strain rates largely determine net mixing capabilities of mixing devices .although the area - stretching ability is very low in the small strain rate regions , such regions occupy a large fraction of the channel , and the materials are conveyed to larger strain rate regions by the flow in the small strain rate regions .the distribution of the volumetric flows is expected to be useful in understanding the relation between the structure of the flow patterns in the small strain regions and the mixer geometry .in general , the flow field in a mixing device is an arbitrary combination of volumetric elongational flows and a planar shear flow .thus , for a general three - dimensional flow , rather than only rheometric flows , the characterization of volumetric elongational flows from the strain rate tensor is required .although the strain - rate mode should be an important quantity both in dispersion capability and analysis of flow - pattern structures in relation to distributive mixing , its characterization in three - dimensional flows has not established so far . in this paper , we derive a measure which identifies the volumetric elongational flows from the strain rate tensor .we apply this measure to melt - mixing flow in twin - screw extrusion . using the distribution of the volumetric elongational flows , we discuss the differences in the flow patterns and the mixing characteristics of the different screw elements .in any kind of flow in a mixing device , the local deformation rate is a combination of shear flow , elongational flow , and rotational flow .the deformation rate of a fluid element , , is decomposed into the strain rate tensor and the vorticity tensor : where is the velocity field , , and , the superscript indicates the transpose .concerning mixing processes , the change in the distance between two nearby points is represented by .thus , the strain rate tensor is mainly responsible for the local mixing capability . for incompressible flows , _i.e. _ , , the strain rate tensor can be diagonalized with an orthonormal matrix , where and , respectively , represent the largest and the smallest eigenvalues of by assuming .the value of specifies the mode of the strain rate .for example , for a uniaxial elongational flow , for a biaxial elongational flow , and for a planar shear flow . 
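a minimal numerical sketch of these definitions follows. it also includes the eigenvalue-free, invariant-based measure that the text constructs next from the second invariant D:D and the determinant of D; the 3·sqrt(6) prefactor used for normalization here is an assumption, chosen so that planar shear maps to 0 and uniaxial/biaxial elongation map to +1/-1 in line with the stated range, and it may differ from the paper's exact constant.

```python
import numpy as np

def strain_rate_tensor(grad_v):
    """symmetric part of a 3x3 velocity gradient tensor (the strain rate D);
    the antisymmetric part is the vorticity tensor W."""
    return 0.5 * (grad_v + grad_v.T)

def mode_from_eigenvalues(D, tol=1e-12):
    """classify the strain-rate mode from the ordered eigenvalues of D.
    for incompressible flow they sum to zero; with e1 >= e2 >= e3 the
    intermediate eigenvalue separates uniaxial elongation (e2 < 0),
    biaxial elongation (e2 > 0) and planar shear (e2 = 0)."""
    e1, e2, e3 = np.sort(np.linalg.eigvalsh(D))[::-1]
    if abs(e2) < tol:
        return "planar shear"
    return "uniaxial-like" if e2 < 0 else "biaxial-like"

def mode_measure(grad_v):
    """eigenvalue-free mode measure built from the second invariant D:D and
    the third invariant det(D); the 3*sqrt(6) normalization is an assumption
    mapping planar shear to 0, uniaxial elongation to +1 and biaxial to -1."""
    D = strain_rate_tensor(grad_v)
    dd = np.sum(D * D)
    return 0.0 if dd == 0.0 else 3.0 * np.sqrt(6.0) * np.linalg.det(D) / dd**1.5

# simple shear v = (gamma*y, 0, 0): planar, measure ~ 0
shear = np.array([[0.0, 1.0, 0.0], [0.0, 0.0, 0.0], [0.0, 0.0, 0.0]])
# uniaxial elongation v = (2e*x, -e*y, -e*z): measure ~ +1
uniax = np.diag([2.0, -1.0, -1.0])
# biaxial elongation v = (e*x, e*y, -2e*z): measure ~ -1
biax = np.diag([1.0, 1.0, -2.0])
for g in (shear, uniax, biax):
    print(mode_from_eigenvalues(strain_rate_tensor(g)), mode_measure(g))
```

taking the absolute value of such a measure gives the quantity used further on for the spatial distributions: zero for planar shear and unity for pure elongational flow, regardless of its uniaxial or biaxial character.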
a typical value of the strain rate is , but the magnitude of the strain rate is commonly evaluated by the second invariant of without explicit calculation of the eigenvalues , because it is always positive for finite values of by definition .since we are interested in the mode of the strain rate , or , equivalently , the value of without solving the eigenvalue problem , we consider another invariant of .the determinant of , a third - order invariant , is expressed in terms of the eigenvalues by from eq .( [ eq : determinant ] ) , we can see that the determinant of becomes zero for planar shear ( ) , irrespective of .in other words , means that the strain directions are restricted to be within a certain plane , and indicates that the strain directions extend three - dimensionally . with this property of , we can get an insight into whether the local strain rate is more planar or more volumetric without explicitly calculating the eigenvalues of . combining eqs .( [ eq : dd ] ) and ( [ eq : determinant ] ) , we define a measure for the mode of the strain rate , independent of the value of , by where the numerical prefactor is chosen so that the range of is normalized to ] while is normalized to be within d d d d$}}}\protect stays rather small . ` ] next , we compare the distributions of the volumetric strain rate between the conveying screw and the kneading disks .we use the absolute value of as defining , which is zero for a planar shear , unity for pure elongational flows , and becomes for an arbitrary superposition of planar and elongational flows , for we are interested in how the volumetric strain rate is distributed .figure [ fig3 ] shows the distributions of at a cross section in the conveying screw ( location b in fig .[ fig1 ] ) and at a cross section in the kneading disks ( location a in fig .[ fig1 ] ) . in the case of the conveying screw shown in fig .[ fig3](a ) , although an elongation flow occurs in some small regions around the tip - barrel clearance and in the intermeshing regions , small values of prevail in most of the section .this was observed in the other phase of the screw rotation . from this observation ,circumferential planar shear is predominant in the flow driven by a conveying screw , which corresponds to the flow along the screw root .the elongational flows around the tip - clearance regions in the conveying screw are explained as follows .when fluid elements go across a screw tip , that flow is a bifurcation from the upper stream flow along the screw root , followed by the confluence to another flow along the next screw root .these flow patterns are observed as the elongational flows in fig .[ fig3](a ) . except for these flow patterns ,interchange of materials rarely occurs along screw rotations .this fact is consistent with the well - known low level of the mixing capability of a conveying screw , because a volumetric bifurcation of the trajectories rarely occurs , so that the distributive mixing is not much promoted . in the case of the kneading disks shown in fig .[ fig3](b ) , we found that the volumetric strain rate develops in a remarkably large fraction of the section , and its distribution forms a characteristic pattern .elongational flows occur in the region far from the screw tips and the surfaces , which are , coincidentally , small strain rate regions .the locations of the elongational flows correspond to those of the opening space between neighboring staggered disks . 
as demonstrated in fig .[ fig3 ] , the distribution of the volumetric strain rate closely reflects the flow pattern structure caused by the geometric shapes of the elements , and characterizes the flow pattern in the small strain rate regions . , ( b ) the section in the kneading disks at position a in fig .[ fig1 ] . ` ] as for the kneading disks , the characteristic pattern in the distribution of is supposed to be related to their known good mixing capability . for understanding the relation between the distribution of and the mixing capability, we discuss the flow pattern in the small strain rate region in the kneading zone .figure [ fig4](a ) shows the velocity field at the mid - plane of the channel at the third disk in the kneading zone . from fig .[ fig4](a ) , we see the flow trajectories from a tip to two neighboring disks along the screw rotation . simultaneously , converging flows from two neighboring disksare developed behind another side of screw tips .in other words , the flow driven by the kneading disks bifurcates into forward and backward extrusion directions on the one place and converges from neighboring disks on the other place .because of these bifurcated trajectories , the fluid elements go back and forth within the zone of consecutively arranged disks , and repeatedly bifurcate and converge having many chances to be stretched and folded resulting in the high mixing capability of the kneading disks .this property of the flow pattern is reflected by the distribution of the biaxial / uniaxial elongational flow from the viewpoint of the strain rate mode .figure [ fig4](b ) shows the distribution of corresponding to the velocity field in fig .[ fig4](a ) and clearly captures this characteristic flow pattern structure developed in the small strain rate region of the kneading disks .this analysis demonstrates that the volumetric strain rate distribution can be useful to characterize the flow pattern structure , especially in the small strain rate regions . for comparison , the distribution of the mixing index ( or the irrotationality ) , , of eq .( [ eq : mz ] ) at the cross sections a and b in fig. [ fig1 ] are shown in fig .we found that both the conveying screw and the kneading disks share some common characteristics in the distribution .the deformation rate is almost half rotational near the surfaces of the screws and the barrel , while it is almost irrotational in the small strain rate region far from the surfaces .although the irrotational regions for both elements are different , depending on their geometric shapes , the distribution does not reflect the flow pattern shown in fig .[ fig4](a ) , suggesting that the mixing index itself hardly offers insight into the flow pattern structure in general three - dimensional flow . ,( b ) the section in the kneading disks at position a in fig .[ fig1 ] . ` ] in order to discuss the large scale characteristics of the screw elements , and are averaged in each section and over one screw rotation .the axial profiles of the mean and are shown in fig .[ fig6 ] . 
in proximity to the inlet and the outlet , the meanvalues are supposed to be affected by the boundary conditions , and so , physically irrelevant .we hence consider the axial locations from 20 mm to 200 mm .the means of and remain at an almost constant level in the regions of the two conveying screws .in addition , these mean quantities do not vary much at the first and last blocks in the kneading disks because they are rather smoothly connected to the conveying screws .in contrast , the means of and show large variation along the inner three blocks of the kneading disks . the piecewise variations of these values are due to the piecewise structure of the kneading disks .in particular , the variation in the mean within the inner three blocks is remarkable .the mean has a peak at the center of each of the three blocks , which reflects the bifurcated flow trajectories back and forth as observed in fig .the axial profile of the mean clearly shows that the flow pattern structure specific to the kneading disks occurs in the inner blocks and originates from the consecutive staggered arrangement of the blocks .as demonstrated above , a flow pattern being effective for distributive mixing is closely related to the distribution of the volumetric strain rate which indicates three - dimensional bifurcation and converging of trajectories .the distribution of offers a physical insight into the high mixing capability of the kneading disks as well as the low mixing capability of the conveying screws .we derived a scalar measure , , which characterizes the mode of the local strain rate by combining the second and the third invariants of the strain rate tensor . using the value of , the mode of the strain rate tensor including a uniaxial / biaxial elongational flow , a planar shear flow , and arbitrary combinations , is defined for the general three - dimensional flows observed in fluid mechanical processes without solving the eigenvalue problem for the strain rate tensor . the spatial distribution of non - planar , or volumetric , strain rate is closely related to the flow patterns , irrespective of the magnitude of the strain rate , and therefore it was found to be useful to understand the relation between the mixer geometry and the flow pattern structures . from the viewpoint of mixing processes , the flow pattern in the small strain rate regions has an important role in efficient and repetitive transport of the fluid elements to the large strain rate regions in order to enhance the net mixing capability of the mixer . 
for understanding the effectiveness of the flow pattern especially in the small strain rate regions , the distribution of the volumetric strain rates can be a useful tool .based on the numerical simulation of a melt - mixing flow in twin - screw extrusion , flows driven by the conveying screws and the kneading disks have been analyzed .we found that the flow patterns specific to these elements are clearly characterized by the distribution of the strain rate mode .understanding the relation between the geometric structure of the mixing elements and the flow pattern they drive is an important issue in the essential evaluation of the mixing capability of the mixing devices used in different industries .the analysis employing the strain rate mode and its distribution is effective for discussing the flow pattern in the mixing device and offers an insight into the role of the small strain rate regions on the distributive mixing .finally , we would like to mention a limitation of the analysis only by for the mixing .although the distribution of the volumetric strain rates can give information about the flow pattern structures specifying the regions where the flow bifurcates and converges , it can just distinguish the potential of particular flows . for direct evaluation of the mixing , kinetic evolution of interfaceis needed to discuss the area growth , interface folding , and material distribution .computation of the interface kinetics requires the directions of the area segments , and therefore the mixing is a function of the orientation of the interface relative to the flow .this aspect of the mixing is another important problem from the evaluation of the potential of the flow . the combined use of our measure for the strain rate mode with other fluid mechanical analyses , including kinetic evolution of area elements is an important future direction of research into predicting the mixing capabilities of different mixing elements and in designing improved novel mixing elements .the numerical calculations have been partly carried out using the computer facilities at the research institute for information technology at kyushu university . this work has been supported by grants - in - aid for scientific research ( jsps kakenhi ) under grants nos . 26400433 , 24656473 , and 15h04175 .funatsu k , kihara si , miyazaki m , katsuki s , kajiwara t. numerical analysis on the mixing performance for assemblies with filled zone of right - handed and left - handed double - flighted screws and kneading blocks in twin - screw extruders ._ polym eng sci_. 2002;42(4):707723 .bravo vl , hrymak an , wright jd .study of particle trajectories , residence times and flow behavior in kneading discs of intermeshing co - rotating twin - screw extruders ._ polym eng sci_. 2004;44(4):779793 .lawal a , kalyon dm .simulation of intensity of segregation distributions using three - dimensional fem analysis : application to corotating twin screw extrusion processing ._ j appl polym sci_. 1995;58(9):15011507 .
understanding the mixing capability of mixing devices based on their geometric shape is an important issue both for predicting mixing processes and for designing new mixers . the flow patterns in mixers are directly connected with the modes of the local strain rate , which is generally a combination of elongational flow and planar shear flow . we develop a measure to characterize the modes of the strain rate for general flow occurring in mixers . the spatial distribution of the volumetric strain rate ( or non - planar strain rate ) in connection with the flow pattern plays an essential role in understanding distributive mixing . with our measure , flows with different types of screw elements in a twin - screw extruder are numerically analyzed . the difference in flow pattern structure between conveying screws and kneading disks is successfully characterized by the distribution of the volumetric strain rate . the results suggest that the distribution of the strain rate mode offers an essential and convenient way for characterization of the relation between flow pattern structure and the mixer geometry .
fluids dissipate energy as they flow through pipes or past any smooth or rough surface .examples include river flow or wind blowing across the land .this energy dissipation is proportional to the velocity gradient , or shear rate , of the flow at the bounding surface .this frictional energy loss , and its dependence on the reynolds number of the flow , is not yet fully understood a century after the first explanation was advanced by l. prandtl .here we introduce a new scheme for measuring the shear rate near a bounding surface .it also might be applicable in the interior of a fluid . unlike some widely - used methods , the shear rate is recorded at a single `` point '' of size .the motivation for developing this technique was to improve the usual method for measuring the shear rate in turbulent flows .the scheme introduced here is that of photon correlation spectroscopy ( pcs ) .it is a variant of that used by fuller and leal to study laminar flows . for turbulence ,the shear rate is a random variable .the pcs method enables determination of the time - averaged shear rate , its standard deviation , and the gaussian transform of the probability density function ( pdf ) itself . because the method has not been used before , the values of the mean shear obtained by pcs are compared with those measured by laser doppler velocimetry ( ldv ) .the pcs scheme has the advantage of improved signal - to - noise , short data - collection times , and also the compactness of the apparatus .the pcs scheme can be used when the mean flow rate is absent or present .hence it may be useful outside of the domain of turbulence studies . with the pcs scheme ,a single beam illuminates a group of moving particles that scatter light into a photodetector at some scattering angle .the inset of fig.[setup]a shows the incident and scattered laser beam of momentum and , respectively and the scattering vector at a point in the flowing soap film , an incident beam is focused to a bright spot of size . the intensity of the incident beam is taken to be gaussian form , figure [ setup]b , a side view of the setup ,will be discussed below .the velocity at any point can then be written as the velocity at the center of the spot =0 plus a term proportional to the shear rate tensor , which is the quantity of interest .the dominant component of near a wall in this experiment is , where is in the flow direction while is in the transverse direction in the film plane .note that is a scalar quantity .let be the velocity of an illuminated particle at a horizontal distance from the center of the incident beam ( ) .then where the higher order terms have been neglected . within a multiplicative constant , the scattered electric field from particles within the incident beam at time here is the incident gaussian field at the position of the particle . because the scattering from micron - size particles is almost perfectly elastic , where is the vacuum wavelength of the incident light beam ( 633 nm ) and is the refractive index of the soap film , which is 99 % water. it will first be assumed that the flow is laminar , so that is time - independent , that is to say , the pdf of the shear tensor is a delta function centered at the mean value of . 
then the intensity correlation function which is simply related to the electric field autocorrelation function through the bloch - siegert theorem ( which is applicable to any gaussian pdf , including a delta function ) is where evaluating ( 3 ) , using ( 1)-(2 ) and averaging over gives a result previously obtained by fuller et al . for laminar flow , as opposed to a turbulent one .they evaluated rather than . in the experiments described below, the turbulent soap film flows in the direction with mean velocity , where this average is over the width of the soap film . then where is an average over time .use has been made here of the gaussian form of the incident beam . because is a random function for turbulent flows , an additional average over needed , giving ( with having its maximum near ) ; is the gaussian transform of two important parameters , in addition to , are and the reynolds number , , where is the kinematic viscosity of the soap solution .if the supporting walls that bound the film flow are not smooth , their roughness is another important control parameter . as in three dimensions one expects the dimensionless frictional drag to be independent of when it is sufficiently large .it now depends on the ratio . at intermediate values of ,experiment and theory support the result , where is just a number , and in 3d flows , if the shear tensor has more than one component with when the fluid is incompressible , as in this experiment .the factors to the left of the integral take into account the extraneous effects of particle diffusion and transit time broadening ; they are discussed later in the text .( a ) : setup for vertically flowing soap film .the film flows down from reservoir rt through valve v between strips rw and sw , separated by width .the weight w keeps the the nylon wires taut .inset shows scattering diagram .( b ) : side view of the setup , showing laser source , focusing lens and photodetector ] the experiments were performed on a soap film channel , shown in fig.[setup ] with =2 cm . the flow is driven by gravity , but there is an appreciable opposing force from air friction . however , near the vertical plastic strips that support the film , the viscous force from the wires dominates .these strips are glued to thin plastic wires 0.5 mm in diameter that join to form an inverted v at the top and at the bottom , as indicated in fig.[setup ] . at the apex, a small tube connects the reservoir to a valve v that controls the flow rate .the wires at the bottom connect and deliver the spent soap solution to reservoir rb , where it is pumped back to the top reservoir rt to keep the flow rate steady .typical flow rates of the soap solution are 0.2 ml / s . the soap solution is 1% dawn dishwashing detergent in water . it is loaded with neutrally buoyant polystyrene particles which scatter the incident beam from a 5 mw 633 nm he - ne laser into the photodetector , a perkin elmer spcm - aqr-12-fc .the laser source is located behind the soap film while the photodetector is located in front of it as shown in fig.[setup]b .the laser beam is focused onto the soap film with a lens of focal length 25 cm .the photon stream is delivered to the photodetector through an optical fiber , where the fiber tip is located 7 cm from the illuminated spot on the film . 
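the analysis outlined above can be prototyped in a few lines; this is a sketch under stated assumptions, not the authors' acquisition code. it evaluates the scattering-vector magnitude for elastic scattering, estimates the intensity correlation function from binned photon counts, and fits the gaussian decay whose width carries the shear-rate information. the scattering angle below is illustrative only, since its value is not quoted here.

```python
import numpy as np

# magnitude of the scattering vector for elastic scattering,
# q = (4*pi*n/lambda) * sin(theta/2); the 30-degree angle is an assumption
lambda_vac, n_film = 633e-9, 1.3
theta = np.deg2rad(30.0)
q = 4.0 * np.pi * n_film / lambda_vac * np.sin(theta / 2.0)

def g2_from_counts(counts, max_lag):
    """normalized intensity correlation g2(lag) from binned photon counts."""
    counts = np.asarray(counts, dtype=float)
    norm = counts.mean() ** 2
    return np.array([np.mean(counts[:-k] * counts[k:]) / norm
                     for k in range(1, max_lag + 1)])

def fit_gaussian_decay(tau, g2):
    """fit g2 = 1 + beta * exp(-(tau/tau_c)^2) by regressing ln(g2 - 1)
    against tau^2 over the early lags where g2 > 1 (the slope is negative
    for a decaying correlation); returns (beta, tau_c)."""
    mask = g2 > 1.0
    slope, intercept = np.polyfit(tau[mask] ** 2, np.log(g2[mask] - 1.0), 1)
    return np.exp(intercept), np.sqrt(-1.0 / slope)
```

converting the fitted decay time into the mean shear rate and its standard deviation then uses q, the spot size w and the variance relation discussed further on, after dividing out the slow multiplicative factors for particle diffusion and transit-time broadening.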
in these experiments , is limited by the wavelength of visible light , focal length of the focusing lens and the diameter of the incident beam and has a value of = 100 m .the scattering vector is in the vertical ( ) direction and the dominant component of the shear rate is the diameter of the seed particles ( 0.4 m ) is sufficiently small that their stokes number in the strongest turbulence is less than 0.1 . hence the particle velocities are adequately close to that of the fluid .the refractive index of the soap solution is roughly 1.3 .typically , the scattering angle , m . using a seed - particle density of 1.5 gm / l yields an average photon counting rate of 10 hz .experiments were performed with a horizontally oriented comb penetrating the soap film at a point above the measuring point and with the comb absent . only with the comb presentis the turbulence reasonably developed and the energy spectrum is of scaling form , , with .this is the , defined as the interval where vorticity of larger size fluctuations cascade to smaller scales . in two dimensionsthere is also a cascade of energy fluctuations to larger scales , where = 5/3 , as in three dimensions .however , it is not accessible for decaying turbulence , as in this experiment . by making the bounding walls rough , so that turbulence is constantly being generated there , the inverse energy cascadecan also be seen .the teeth of the comb as well as their spacing is 2 mm . to further test the pcs technique ,measurements are also made with the comb absent . in this case , there is no well - defined energy spectrum decaying as a power law .nevertheless the flow is far from laminar , so that can be measured by both pcs and ldv .to first order , and .however , experimentally is found to be a non - gaussian function . if , which is clearly non - gaussian in .both panels of fig .[ corrwall ] show that while is close to gaussian form , the gaussian fit ( solid lines ) is not perfect .these `` good '' fits to gaussian form were unexpected .there are two other effects that can contribute to the decay of : thermal diffusion of the seed particles and transit time broadening , which can be dominant for large .both of these contributions are small in these experiments but are easy to correct for . to take diffusion into account , one multiplies eq . ( 4 ) by the factor , where is the diffusion constant , which , for spherical particles of diameter is , where is boltzmann s constant and is viscosity . as for the transit time effect ,particles passing through a beam of size produce a burst of light intensity that temporally modulates the scattered light .the multiplicative correction factor here is .the decay times for both of these effects is long compared to the viscous decay time of interest so these multiplicative time factors can be dropped .for example , with a spot size m and a typical mean velocity of = 1 m / sec , the transit time associated with the effect is or order 0.1 ms .this is fifty times longer than typically measured .diffusion times are much longer than this and hence contribute insignificantly to the decay of .fig.[corrwall ] shows for measurements made with the comb absent ( a ) and present ( b ) , respectively . 
here 2 m / s in both experiments , =2 cm , and = 100 m .the vertical axis is linear , but the horizontal axis is , so as to display several decades of lag time .the insets to both figures show vs , so that a gaussian decay of appears as a straight line .the straight lines in the lower insets indicate that is indeed of gaussian form for very small .they are a best fit to the experimental curves and correspond = 1600 s and 1000 s for the experiments with and without the comb .the solid lines in the upper insets to fig.[corrwall ] are best fits under the assumption of a gaussian a good fit clearly extends beyond the small- limit and enables the determination of the standard deviations of the mean shear as well as itself . the mean shear is calculated from the definition of variance , .the results are = 950 hz = 300 hz with the comb absent and = 1620 hz = 500 hz with the comb present .the ratio of to is near 20 % .the shear measurement is done in the viscosity - dominated layer of width , where is the distance from the comb .ideally the spot size should be much smaller than .the function is proportional to within .prior experiments have established that at = 20 cm below the comb , where the measurements were made , is roughly 200 m .thus the beam size is small enough to correctly measure the viscous shear rate .the single - point pcs measurements of are now compared with those of ldv , made in the traditional way ; the vertical velocity component is measured at two nearby horizontally - spaced points in the viscosity - dominated interval .the ldv measurements were made 2.5 cm below the pcs beam spot , which is 80 cm below point p in fig.[setup ] and 20 cm below the comb .the ldv measurement point is advanced in 50 m steps starting at =0 .the minimum useful value of is dictated by the necessity of avoiding strong light scattering from the supporting plastic strip with its edge at =0 .the ldv laser source is 514 nm line from a coherent argon - ion laser operating at a power of 500 mw , roughly one hundred times that used in the pcs device .the data collection time for each measurement of is roughly 20 s. because the correlation time is of the order of microseconds , and the counting rate is of the order of mhz the function form of emerges after only a few seconds of data collection with the correlator .the mean shear rate in the viscous region obtained by ldv and pcs agree to within one standard deviation , as seen in table [ tab : comb_shear_rate ] .the uncertainties are deduced from seven measurements made at the indicated values of . from an individual run, one can not extract from the ldv data , because noise fluctuations can change even the sign of .the ldv and pcs measurements span the range 29000 and from 40000 , with and without comb respectively . with the comb in place , the taylor microscale reynolds number , where = 1 mm .the errors from one run to another are _ not _ statistical in origin .rather , the source is variations in the flow speed through the valve and the motion of the film plane caused by velocity fluctuations of the surrounding air which could be only partially suppressed by placing the entire apparatus in a tent ..[tab : comb_shear_rate]mean shear rate as measured by ldv and pcs in a narrow range of mean flow speeds ( comb inserted ) . [cols="^,^,^",options="header " , ] fig.[pcs_ldv_comp_15 ] shows measurements of as a function of in units of 50 m obtained using pcs ( circles ) and ldv ( triangles ) in the range out to = 1.50 mm with the comb present . 
here = 2.16 m / s , =2 cm and the kinematic viscosity of the soap solution is close to that of water ( = 0.01 /s ) , =45,000 .the main messages conveyed by this graph are ( a ) the two schemes give roughly the same results for , ( b ) near one of the walls , the ldv measurements are noisier ( for reasons already discussed ) , and ( c ) decreases with increasing . even in the absence of air friction , this decrease is expected and is well - studied in 3d flows . in soap filmflows , air friction slows the flow far from the walls , making analysis of the data there difficult .this experiment indicates it should be possible to measure near the wall and in the interior of 3d flows , though care must be taken to collect scattered photons from only a small volume in the fluid .far from a bounding wall in 3d turbulence , the pcs method will suffer from the limitation that should be smaller than the smallest eddy size , defined as . even in these soap film experiments , is estimated to be comparable to or smaller than . yet , as fig.[pcs_ldv_comp_15 ] shows , the ldv measurements of agree with the pcs result up to = 1.5 mm from a wall , well outside the viscous region . as a function of distance from the wall ( in mm ) with a comb in place to strengthen the turbulence .the mean flow speed = 2.16 m / s , =45,000.,width=3 ]though the photon correlation scheme has been used here to measure properties of the shear rate in a two - dimensional soap film , it can be used in three dimensional flows as well .the pcs method has good signal - to - noise , is compact , and uses a laser in the mw range .the method yields the variance of the shear rate as well as its mean value .the correlation function itself is the gaussian transform of the probability density function , .we wish to thank t. tran , t. adamo , w. troy , n. goldenfeld , n. guttenberg , and k. nguyen for their contributions to this work .this work is supported by nsf grant no .dmr 0604477 and nsf fellowship 60016281 to s. steers .
a photon correlation method is introduced for measuring components of the shear rate tensor in a turbulent soap film . this new scheme , which is also applicable to three - dimensional flows , is shown to give the same results as laser doppler velocimetry , but with less statistical noise . the technique yields the mean shear rate , its standard deviation , and a simple mathematical transform of the probability density function of the shear rate itself .
emerging technologies for computing promise to outperform conventional integrated circuits in computation bandwidth or speed , power consumption , manufacturing cost , or form factor .however , razor - sharp focus on any one nascent technology and its benefits sometimes neglects serious limitations or discounts ongoing improvements in established approaches . to foster a richer context for evaluating emerging technologies, we review limiting factors and salient trends in computing that determine what is achievable _ in principle _ and _ in practice_. several fundamental limits remain substantially loose , possibly indicating viable opportunities for emerging technologies . to clarify this uncertainty , we study_ limits on fundamental limits_. * universal and general - purpose computers . * viewing _ clocks and watches _ as early computers , it is easy to see the importance of _ long - running calculations that can be repeated with high accuracy by mass - produced devices_. the significance of _ programmable _ digital computers became clear at least 200 years ago , as illustrated by _ jacquard looms _ in textile manufacturing .however , the existence of _ universal computers _ that can efficiently simulate ( almost ) all other computing devices analog or digital was only articulated in the 1930s by church and turing ( turing excluded quantum physics when considering universality ) . efficiency was studied from a theoretical perspective at first , but strong demand in military applications in the 1940s lead turing and von neumann to develop detailed hardware architectures for universal computers turing s design ( pilot ace ) was more efficient , but von neumann s was easier to program .the _ stored - program architecture _ made universal computers practical in the sense that a single computer design could be effective in many diverse applications .such practical universality thrives in economies of scale in computer hardware and among extensive software stacks .not surprisingly , the most sophisticated and commercially successful computer designs and components , such as intel and ibm cpus , were based on the von neumann s paradigm . the numerous uses and large markets of general - purpose chips , as well as the exact reproducibility of their results ,justify the enormous capital investment in the design , verification and manufacturing of leading - edge integrated circuits . today general - purpose cpus power cloud server - farms and displace specialized ( but still universal ) mainframe processors in many supercomputers . emerginguniversal computers based on field - programmable gate - arrays ( fpgas ) and general - purpose graphics processing units ( gpgpus ) outperform cpus in some cases , but their efficiencies remain complementary to those of cpus . the success of deterministic general - purpose computing manifests in the convergence of diverse functionalities in portable inexpensive smartphones . 
after steady improvement , general - purpose computing displaced entire industries ( newspapers , photography , etc ) and launched new applications ( video conferencing , gps navigation , online shopping , networked entertainment , etc ) .application - specific integrated circuits ( asics ) streamline input - output and networking , or optimize functionalities previously performed by general - purpose hardware .they speed up biomolecular simulation 100-fold and improve the efficiency of video decoding 500-fold , but require design effort with keen understanding of specific computations , impose high costs and financial risks , need markets where general - purpose computers lag behind , and often can not adapt to new algorithms .recent techniques for _ customizable domain - specific computing _ offer better tradeoffs , while many applications favor the combination of _ general - purpose hardware and domain - specific software _ , including specialized programming languages such as erlang used in whatsapp .* limits as aids to evaluating emerging technologies . * without sufficient history , we can not extrapolate _ scaling laws _ for emerging technologies , yet expectations run high . for example , new proposals for analog processors appear frequently ( as illustrated by adiabatic quantum computers ) , but fail to address concerns about analog computing , such as its limitations on scale , reliability , and long - running error - free computation .general - purpose computers meet these requirements with digital integrated circuits ( ic ) and now command the electronics market . in comparison , quantum computers both digital and analog hold promise _ only in niche applications _ and do not offer _ faster general - purpose computing _ as they are no faster for _ sorting _ and other specific tasks .in exaggerating the engineering impact of quantum computers , popular press missed this nuance .but in scientific research , building quantum computers may help simulating quantum - chemical phenomena and reveal new fundamental limits .sections [ sec : space ] and [ sec : conc ] discuss limits on emerging technologies .* technology extrapolation versus fundamental limits . *the scaling of commercial computing hardware regularly runs into formidable obstacles , but near - term technological advances often circumvent them .the international technology roadmap for semiconductors ( itrs ) keeps track of such obstacles and possible solutions with a focus on frequently - revised consensus estimates .for example , consensus estimates initially predicted 10 ghz cpus for the 45 nm technology node , versus the 3 - 4 ghz range seen in practice . in 2004 , the unrelated quantum information science and technology roadmap forecast 50 physical qubits by 2012 .such optimism arose by assuming technological solutions long before they were developed and validated , and by overlooking important limits .the authors of classify limits to device and interconnect as _ fundamental , material , device , circuit _ , and _system limits_. 
these categories define the rows of table [ tab : limits ] , and the columns reflect sections of this paper where specific limits are examined for tightness .engineering obstacles limit specific technologies and choices .for example , a key bottleneck today is ic manufacture , which packs billions of transistors and wires in several of silicon with astronomically low defect rates .layers of material are deposited on silicon and patterned with lasers , fabricating all circuit components simultaneously .precision optics and photochemical processes ensure accuracy . * limits on manufacturing . *no account of limits to computing is complete without the abbe diffraction limit : light with wavelength , traversing a medium with refractive index , and converging to a spot with angle ( perhaps , focused by a lens ) creates a spot with diameter , where is the numerical aperture . reaches 1.4 for modern optics , so it would seem that semiconductor manufacturing is limited to feature sizes , hence arf lasers with 193 nm wavelength should not support photolithographic manufacturing of transistors with 65 nm features .yet , they supported _ sub - wavelength lithography _ for the 45nm-22 nm technology nodes using _ asymmetric illumination _ and _ computational lithography _ .here one starts with optical masks that look like the intended image , but when the image gets blurry , alter masks by gently shifting edges to improve the image , possibly giving up the semblance between the two .clearly , some limits are formulated to be broken !ten years ago , researchers demonstrated patterning of nanomaterials by live viruses .known virions exceed 20 nm in diameter , whereas subwavelength lithography with 193nm - wavelength arf laser recently extended to 14 nm semiconductor manufacturing .hence , viruses and microorganisms are no longer at the forefront of semiconductor manufacturing . extreme ultra - violet ( x - ray ) lasers have been energy - limited , but are improving .their use requires changing refractive optics to reflective .additional progress in _ multiple patterning _ and _ directed self - assembly _ promises to support photolithography beyond the 10 nm technology node .* limits on individual interconnects . * despite the doubling of transistor density with moore s law , semiconductor integrated circuits ( ics ) would not work without fast and dense interconnects .metallic wires can be either fast or dense , but not both at the same time smaller cross - section increases electrical resistance , while greater height or width increase parasitic capacitance with neighboring wires ( wire delay grows with ) . in 1995 ,an intel researcher pointed out that _ on - chip interconnect scaling _ is the real limiter of high - performance ics .the scaling of interconnect is also moderated by _electron scattering against rough edges of metallic wires _ , inevitable with atomic - scale wires . 
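two of the numbers behind this discussion can be checked in a few lines: the abbe-limited feature size for a 193 nm arf immersion scanner, and a crude first-order estimate of distributed rc wire delay that shows why a wire can be dense or fast but not both. the wire geometries, copper resistivity and dielectric permittivity below are illustrative assumptions, not itrs figures.

```python
import numpy as np

# abbe-limited feature size for a 193 nm arf immersion scanner
wavelength_nm, numerical_aperture = 193.0, 1.4
print(wavelength_nm / (2.0 * numerical_aperture))     # about 69 nm

# crude first-order estimate of distributed rc wire delay, ~0.4 * r * c * L^2
# for resistance r and capacitance c per unit length; all geometry and
# material values below are illustrative assumptions
rho_cu = 1.7e-8                  # bulk copper resistivity, ohm*m (thin wires are worse)
eps = 3.0 * 8.85e-12             # low-k dielectric permittivity, F/m

def wire_delay(length, width, height, spacing, ild=100e-9):
    r = rho_cu / (width * height)                     # ohm per meter
    c = 2.0 * eps * (height / spacing + width / ild)  # F per meter: coupling to the
                                                      # two neighbours plus plates
                                                      # to the layers above and below
    return 0.4 * r * c * length ** 2

print(wire_delay(1e-3, 30e-9, 60e-9, 30e-9))     # thin dense local wire: ~0.5 ns over 1 mm
print(wire_delay(1e-3, 500e-9, 900e-9, 500e-9))  # fat upper-level wire: a few ps over 1 mm
```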
hence, ic interconnect stacks have evolved from four equal - pitch layers in 2000 to 16 layers with pitches varying by 32 times , including a large amount of dense ( thin ) wiring and fast ( thick ) wires used for global on - chip communication ( figure [ fig : wires ] ) .aluminum and copper remain unrivaled for conventional interconnects and can be combined in short wires ; _ carbon - nanotube _ and _ spintronic _ interconnects are also evaluated in ._ photonic waveguides _ and _ rf links_ offer alternative ic interconnect , but obey fundamental limits derived from maxwell s equations , such as the _ maximum propagation speed of em waves _ .i / o links are limited by the perimeter or surface area of a chip , whereas chip capacity grows with area or volume , respectively .* limits on conventional transistors . * transistors are limited by their tiniest feature _ the width of the gate dielectric _ , which recently reached the size of several atoms ( figure [ fig : xtor - atoms ] ) , creating problems : a few missing atoms could alter transistor performance , manufacturing variation makes all transistors slightly different ( figure [ fig : xtor - field ] ) , electric current tends to leak through thin narrow dielectrics . instead of a _ thinner _dielectric , transistors can be redesigned with _ wider _ dielectric layers that surround a _ fin shape _ ( figure [ fig : xtor - fin ] ) .such configurations improve the control of electric field , reduce current densities and leakage , and diminish process variations .each transistor can use several fins , extending transistor scaling by several generations .semiconductor manufacturers adopted finfets for upcoming technology nodes .one step further , in _ tunneling transistors _ a gate wraps around the channel to control tunnelling rate .as mosfet transistors shrink , gate dielectric ( yellow ) thickness approaches several atoms ( 0.5 nm at the 22 nm technology node ) .atomic spacing limits device density to 1 device / nm even for radical devices ., width=294 ] as mosfet transistors shrink , the shape of electric field departs from basic rectilinear models , and level curves become disconnected .atomic - level manufacturing variations , especially for dopant atoms , start affecting device parameters , making each transistor slightly different .image credit : gold standard simulations ., width=294 ] the evolution of metallic wire stacks from 1997 to 2010 by semiconductor technology nodes .image credit : ibm research ( modified ) ., width=302 ] finfet transistors possess much wider gate dielectric ( surrounding the fin shape ) than mosfet transistors and can use multiple fins ., height=139 ] * limits on design effort . * in the 1980s , mead and conway formalized ic design using a regular grid , enabling automated layout through algorithms . butresulting optimization problems remain hard , and heuristics are only good enough for practical use . besides frequent algorithmic improvements , each technology generation alters circuit physics and requires new cad software . 
the cost of design has doubled in a few years , becoming prohibitive for ics with limited market penetration .emerging technologies , such as finfets and high - k dielectrics , circumvent known obstacles using forms of design optimization .therefore , reasonably tight limits should account for potential future optimizations .low - level technology enhancements , no matter how powerful , are often viewed as one - off improvements , in contrast to architectural redesigns that affect many processor generations . between technology enhancements and architectural redesignsare global and local optimizations that alter `` the texture '' of ic design , such as _ logic restructuring , gate sizing and device parameter selection_. moore s law promises higher transistor densities , but some transistors are designed to be 32 times larger than others .large gates consume greater power to drive long interconnects at acceptable speed and satisfy performance constraints . minimizing circuit area and power , subject to timing constraints ( by configuring each logic gate to a certain size , threshold voltage , etc ) ,is a hard but increasingly important optimization with a large parameter space .a recent convex optimization method saved 30% power in intel chips , and the impact of such improvements grows with circuit size .many aspects of ic design are being improved , continually raising the bar for technologies that compete with cmos .completing new ic designs , optimizing and verifying them requires great effort and continuing innovation , e.g. , the lack of scalable design automation is a limiting factor for analog ics . in 1999 , bottom - up analysis of digital ic technologies outlined design scaling up to self - contained modules with 50k standard cells ( each cell contains 1 - 3 logic gates ) , but further scaling was limited by global interconnect . in 2010 ,physical separation of modules became less critical , as large - scale placement optimizations assumed greater responsibility for ic layout and learned to blend nearby modules . in a general trend ,powerful design automation frees circuit engineers to focus on microarchitecture , but increasingly relies on algorithmic optimization . 
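as a toy illustration of why gate sizing is a global optimization problem rather than a local tweak, the sketch below sizes a chain of inverters driving a large load under the simple logical-effort delay model. it is only meant to convey the flavour of the problem, not the convex program mentioned above, and all capacitances are in arbitrary normalized units.

```python
def chain_delay(sizes, c_in, c_load, p=1.0):
    """total delay of an inverter chain in normalized logical-effort units:
    each stage contributes a parasitic term p plus its fanout c_next / c_this."""
    caps = [c_in] + list(sizes) + [c_load]
    return sum(p + caps[i + 1] / caps[i] for i in range(len(caps) - 1))

c_in, c_load, n_stages = 1.0, 256.0, 4
f = (c_load / c_in) ** (1.0 / n_stages)             # equal stage effort, here 4
geometric = [c_in * f ** k for k in range(1, n_stages)]
print(chain_delay(geometric, c_in, c_load))         # 20.0, the best 4-stage delay
print(chain_delay([2.0, 8.0, 64.0], c_in, c_load))  # 22.0, an unbalanced sizing is slower
```

under this model, the balanced geometric sizing beats any unbalanced one for a fixed number of stages; the same trade-off, scaled to millions of gates with timing, power and slew constraints, is what production sizing tools have to solve.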
until recently, this strategy suffered significant losses in performance and power compared to ideal designs, but it has now become both successful and indispensable due to the rapidly increasing complexity of digital and mixed-signal electronic systems. hardware and software must now be co-designed and co-verified, with software efforts increasing at a faster rate. _platform-based design_ combines high-level design abstractions with effective reuse of components and functionalities in engineered systems. customizable domain-specific computing and domain-specific programming languages offload specialization to software running on reusable hardware platforms.

in predicting the main obstacles to improving modern electronics, the international technology roadmap for semiconductors (itrs) highlights _the management of system power and energy_ as the dominant grand challenge. the faster the computation, the more energy it consumes, but actual power-performance tradeoffs depend on the physical scale. while the itrs, by its charter, focuses on near-term projections and ic design techniques, fundamental limits reflect available energy resources, properties of the physical space, power-dissipation constraints, and energy waste. a 1961 result by landauer shows that erasing one bit of information entails an energy loss of at least kT ln 2 (the _thermodynamic threshold_), where k is the boltzmann constant and T is the temperature in kelvin. this principle was validated empirically in 2012 and seems to motivate _reversible computing_, where all input information is preserved, incurring additional costs. formally speaking, zero-energy computation is prohibited by _the energy-time form of the heisenberg uncertainty principle_ (ΔE Δt ≥ ħ/2): faster computation requires greater energy. however, recent work in applied superconductivity demonstrates ``highly exotic'' _physically-reversible_ circuits operating at 4 kelvin with energy dissipation below the thermodynamic threshold. they apparently fail to scale to large sizes, run into other limits, and remain no more practical than ``mainstream'' superconducting circuits and refrigerated low-power cmos circuits. technologies that implement _quantum circuits_ can _approximate_ reversible boolean computing, but currently do not scale to large sizes, are energy-inefficient at the system level, rely on fragile components, and require heavy fault-tolerance overhead. conventional ics also do not help in obtaining energy savings from reversible computing because they dissipate 30-60% of all energy in (reversible) wires and repeaters. at room temperature, landauer's limit amounts to 2.85 zj (about 2.85 × 10^-21 j) per bit, a very small fraction of the total, given that modern ics dissipate 0.1-100 watts and contain billions of logic gates. with the increasing dominance of interconnect (section [sec:space]), more energy is spent on communication than on computation. logically-reversible computing is important for reasons other than energy, in cryptography, quantum information processing, etc.

* the end of cpu frequency scaling. * in 2004, intel corp.
abruptly cancelled a 4ghz cpu project because high power density required awkward cooling technologies .other cpu manufacturers kept clock - frequencies in the 1 - 6ghz range , but also resorted to multicore cpus .since dynamic circuit power grows with clock frequency and supply voltage squared , energy can be saved by distributing work among slower , lower - voltage parallel cpu cores _ if parallelization overhead is small_. * dark , darker , dim , gray silicon . * a companion trend to moore s law the dennard scaling theory shows how to keep power consumption of semiconductor ics constant while increasing their density . butdennard scaling broke down ten years ago .extrapolation of semiconductor scaling trends for cmos the dominant semiconductor technology for 20 years shows that the power consumption of transistors available in modern ics reduces more slowly than their size ( which is subject to moore s law ) . to ensure the performance envelope of transistors ,chip power density must be limited , and a fraction of transistors must be kept dark at any given time .modern cpus have not been able to use all their circuits at once , but this asymptotic effect termed the _ utilization wall _ will soon black out 99% of the chip , hence the term _ dark silicon _ and a reasoned reference to the apocalypse . saving power by slowing cpu cores down is termed _dim silicon_. detailed studies of dark silicon show similar results . to this end, executives from microsoft and ibm have recently proclaimed an end to the era of multicore microprocessors .two related trends appeared earlier : increasingly large ic regions remain transistor - free to aid routing and physical synthesis , to accommodate power - ground networks , etc we call them _ darker silicon _ , increasingly many gates do not perform useful computation but reinforce long , weak interconnects or slow down wires that are too short call them _gray silicon_. today , 50 - 80% of all gates in high - performance ics are repeaters .* limits for power supply and cooling .* data centers in the us consumed 2.2% of total u.s .electricity in 2011 .as powerplants take time to build , we can not sustain past trends of doubled power consumption per year .it is possible to improve the efficiency of transmission lines ( using high - temperature superconductors ) and power conversion in datacenters , but the efficiency of on - chip power - networks may soon reach 80 - 90% .modern ic power management includes _ clock _ and _ power gating _ , per - core voltage scaling , _ charge recovery _ and , in recent processors , a cpu core dedicated to power scheduling .ic power consumption depends quadratically on supply voltage , which has decreased steadily for many years , but recently stabilized at 0.5 - 2v .supply voltage typically exceeds the _ threshold voltage _ of field - effect transistors by a safety margin that ensures circuit reliability , fast operation and low leakage .threshold voltage depends on the thickness of gate dielectric , which reached a practical limit of several atoms ( section [ sec : eng ] ) .supply voltage is limited by around 200mv five times below current practice and simple circuits reach this limit . 
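two of the energy statements above are easy to make quantitative. the sketch below (plain python; the chip power, gate count, switching rate, capacitance and voltage-frequency scaling are illustrative assumptions, not figures from this article) first compares landauer's kT ln 2 threshold at room temperature with the energy spent per switching event by a notional 10-watt chip, and then estimates what a fixed amount of work costs when it is spread over more, slower, lower-voltage cores:

import math

k_B = 1.380649e-23                     # boltzmann constant, j/k
T = 300.0                              # room temperature, k
landauer = k_B * T * math.log(2)       # minimum energy to erase one bit
print(f"landauer threshold at {T:.0f} k: {landauer:.2e} j (~{landauer*1e21:.2f} zj)")

# assumed chip figures: 10 w dissipation, 1e9 gates switching at 1 ghz on average
energy_per_event = 10.0 / (1e9 * 1e9)
print(f"energy per switching event (assumed chip): {energy_per_event:.1e} j, "
      f"{energy_per_event / landauer:.0f}x the threshold")

# dynamic power model p = c * f * v^2 per core; energy for a fixed task when the
# work is split over n cores running at f0/n, with voltage scaled in proportion
# to frequency (a first-order approximation that fails near threshold voltage).
def task_energy(n_cores, f0=3e9, v0=1.0, c=1e-9, cycles=3e9, overhead=0.05):
    f = f0 / n_cores
    v = v0 * f / f0
    time = cycles * (1 + overhead * (n_cores - 1)) / (f * n_cores)
    return n_cores * (c * f * v * v) * time

for n in (1, 2, 4):
    print(f"{n} core(s): {task_energy(n):.2f} j for the same task")

the numbers are crude, but they reproduce the two qualitative points of the text: practical switching energies sit orders of magnitude above the thermodynamic threshold, and, when parallelization overhead is small, slower low-voltage cores finish the same work with considerably less energy.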
with slower operation, _near-_ and _sub-threshold circuits_ may consume 100 times less energy. cooling technologies can improve too, but fundamental quantum limits bound the efficiency of heat removal. one line of study explores a general _binary-logic switch model_ with binary states represented by two _quantum wells separated by a potential barrier_. representing information by electric charge requires a minimum energy for binary switching and thus limits the logic-switching density, if a significant fraction of the chip can switch simultaneously. to circumvent this limit, one can encode information in _spin states, photon polarizations, superconducting currents_, or _magnetic flux_, noting that these carriers have already been in commercial use. spin states are particularly attractive because they promise high-density nonvolatile storage and scalable interconnects. more powerful limits are based on the amount of material in the earth's crust (where silicon is the second most common element after oxygen), on atomic spacing (section [sec:eng]), radii and energies, bandgaps, as well as the wavelength of the electron. we are currently using only a tiny fraction of the earth's mass for computing, and yet various limits could be circumvented if new particles are discovered. beyond atomic physics, some limits rely on basic constants: the speed of light, the gravitational constant, the quantum (planck) scale, the boltzmann constant, etc. lloyd, as well as krauss, extend well-known bounds by bremermann and bekenstein, and give moore's law 150 and 600 years, respectively. these results are too loose to obstruct the performance of practical computers. in contrast, current consensus estimates from the itrs give moore's law only 10-20 years, due to both _technological_ and _economic_ considerations. engineering limits for deployed technologies can often be circumvented, while first-principles limits on energy and power are very loose. reasonably tight limits are rare.

* limits to parallelism. * suppose we wish to compare a parallel and a sequential computer built from the same units, to argue that a new parallel algorithm is many times faster than the best sequential algorithm (the same reasoning applies to logic gates on an ic). given N parallel units and an algorithm that runs k times faster on sufficiently large inputs, one can _simulate_ the parallel system on the sequential system by dividing its time between N computational slices. since this simulation is roughly N times slower, it runs k/N times faster than the original sequential algorithm. if that original algorithm was best possible, we must have k/N ≤ 1, i.e., k ≤ N. the bound k ≤ N is reasonably tight in practice for small N, and can be violated slightly since N cpus include more cpu cache, but such violations do not justify parallel algorithms: one could instead buy or build one cpu with a larger cache. such linear speedup is optimistically assumed for _the parallelizable component_ in gustafson's 1988 law, which suggests scaling the number of processors with input size (as illustrated by instantaneous web search queries over massive data sets).
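the counting argument above, together with amdahl's and gustafson's laws, fits in a few lines of python; the 95% parallel fraction used here is an arbitrary illustrative choice:

def amdahl(n, p):
    # speedup on n units when a fraction p of a fixed-size job is parallelizable
    return 1.0 / ((1.0 - p) + p / n)

def gustafson(n, p):
    # scaled speedup when the problem grows so that all n units stay busy
    return (1.0 - p) + p * n

for n in (4, 64, 1024):
    print(f"n={n:4d}  amdahl: {amdahl(n, 0.95):6.1f}   gustafson: {gustafson(n, 0.95):7.1f}")

# the simulation argument caps any honest speedup claim k at n: a single unit
# can emulate the n parallel units in round-robin, so k > n would contradict
# the optimality of the best sequential algorithm.
assert all(amdahl(n, 0.95) <= n for n in (4, 64, 1024))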
also in 1988, fisher employed _asymptotic runtime estimates_ instead of numerical limits and avoided the breakdown into parallel and sequential runtime components assumed in amdahl's and gustafson's laws. asymptotic estimates neglect leading constants and offer a powerful way to capture nonlinear phenomena occurring at large scale. fisher assumes a sequential computation with T(n) elementary steps for an input of size n, and limits the performance of its parallel variants that can use an unbounded d-dimensional grid of finite-size computing units (electrical switches on a semiconductor chip, logic gates, cpu cores, etc) communicating at a finite speed, say, bounded by the speed of light. we highlight only one aspect of this four-page work: the parallel computation requires at least on the order of the (d+1)-th root of T(n) steps, that is, T(n)^(1/3) steps in 2d and T(n)^(1/4) steps in 3d. comparing the two bounds shows that 3d integration asymptotically yields a significant but not dramatic speedup. this speedup requires an unbounded number of 2d device layers, otherwise there is no asymptotic speedup. for 3d ics with 2-3 layers, the main benefits of 3d ic integration today are in improving manufacturing yield, improving i/o bandwidth, and combining 2d ics that are optimized for random logic, dense memory, fpga, analog, mems, etc. ultra-high-density cmos logic ics with _monolithic_ 3d integration suffer higher routing congestion than traditional 2d ics. emerging technologies promise to improve device parameters, but often remain limited by scale, faults, and interconnect; e.g., _quantum dots_ enable terahertz switching but hamper nonlocal communication. cnt-fets leverage extraordinary carrier mobility in semiconducting carbon nanotubes to use interconnect more efficiently by improving drive strength, while reducing supply voltage. emerging interconnects include _silicon photonics_, shown by intel in 2013 as a 100 gb/s replacement of copper cables connecting adjacent chips; it promises to reduce power consumption and form factor. quantum physics alters the nature of communication with einstein's ``spooky action at a distance'' facilitated by entanglement. however, the flows of information and entropy are subject to quantum limits. several quantum algorithms run asymptotically faster than the best conventional algorithms, but fault-tolerance overhead offsets their potential benefits in practice, and empirical evidence of quantum speedups has not been compelling so far. several stages in the development of quantum information processing remain challenging, and the surprising difficulty of _scaling up_ reliable quantum computation could stem from limits on _communication_ and _entropy_. in contrast, lloyd notes that _individual_ quantum devices now approach energy limits for switching, whereas nonquantum devices remain orders of magnitude away. this suggests an obstacle to simulating quantum physics on conventional computers (abstract models aside). in terms of computational complexity, though, quantum computers _cannot_ attain significant advantage for many problem types. such lack of _consistent general-purpose speedup_ limits the benefits of several emerging technologies in mature applications with diverse algorithmic steps, e.g.
, computer - aided design and web search .accelerating one step usually does not greatly speed up the entire application , as noted by amdahl in 1967 .figuratively speaking , _the most successful computers are designed for the decathlon , rather than for sprint only_.section [ sec : space ] enabled tighter limits by neglecting energy and using asymptotic rather than numeric bounds a more abstract model focuses on the impact of scale , and recurring trends quickly overtake one - off device - specific effects .next , we neglect spatial effects and focus on the nature of computation in an abstract model ( used by software engineers ) that represents computation by elementary steps with input - independent runtimes .such limits survive many improvements in computer technologies , and are often stronger for specific problems .for example , the best - known algorithms for multiplying large numbers are only slightly slower than reading the input ( an obvious speed limit ) , but only in the asymptotic sense for numbers with bits , those algorithms lag behind simpler algorithms in actual performance . to focus onwhat matters , we now do not just track asymptotic worst - case complexity of best algorithms for a given problem , but merely distinguish _ polynomial _ asymptotic growth from _exponential_. limits formulated in such crude terms ( unsolvability in polynomial time _ on any computer _ ) are powerful : the hardness of number - factoring underpins internet commerce , while the p conjecture explains the lack of satisfactory , scalable solutions to important algorithmic problems , e.g. , in optimization and verification of ic designs .a similar conjecture p seeks to explain why many algorithmic problems that can be _ solved _ efficiently have not _ parallelized _ efficiently .most of these limits have not been proven .some can be circumvented by using radically different physics , e.g. , quantum computers solve number factoring in polynomial time ( in theory ) .but quantum computation does not affect p .the lack of proofs , despite heavy empirical evidence , requires faith and is an important limitation of many nonphysical limits to computing .this faith is not universally shared donald knuth argues that p = np would not contradict anything we know today .a rare _ proven _ result by turing ( also invulnerable to quantum physics ) states that checking if a given program ever halts is _ undecidable _ : no algorithm solves this problem in all cases regardless of runtime . 
yet, software developers solve this problem during peer code reviews , and computer science teachers when grading exams in programming courses ._ worst - case analysis _ is another limitation of nonphysical limits to computing , but suggests potential gains through approximation and specialization .for some np - hard optimization problems , such as the _ euclidean travelling salesman problem _ ( euctsp ) , polynomial - time approximations exist , but in other cases , such as the _ maximum clique problem _ , accurate approximation is as hard as finding optimal solutions .for some important problems and algorithms , such as the simplex algorithm for _ linear programming _ , few inputs lead to exponential runtime , and minute perturbations reduce runtime to polynomial .the death march of moore s law invites discussions of fundamental limits and alternatives to silicon semiconductors .near - term constraints invariably tie to _ costs _ and _ capital _ , but are explained away by new markets for electronics , increasing earth population , and growing world economy .such economic pressures emphasize the value of _ computational universality _ and broad applicability of ic architectures to solve multiple tasks under conventional environmental conditions . in a likely scenario ,only cpus , gpus , fpgas and dense memory ics will remain viable at the end of moore s law , while specialized circuits will be manufactured with less advanced technologies . indeed , memory chips have lead moore scaling by leveraging their simpler structure , modest interconnect , and more controllable manufacturing , but their scaling is slowing down . the decelerated scaling of cmos ics still outperforms the scaling of the most viable emerging technologies. empirical scaling laws describing the evolution of computing are well - known .in addition to moore s law , dennard scaling , as well as amdahl s and gustafson s laws reviewed earlier , metcalfe s law states that the value of a computer network , such as the internet or facebook , scales as the number of user - to - user connections that can be formed .grosch s law ties -fold improvements in computer performance to -fold cost increases ( in equivalent units ) .applying it in reverse , we can estimate acceptable performance of cheaper computers .but such laws only capture _ ongoing scaling _ and will break down in the future . the _ roadmapping process _ represented by the international technology roadmap for semiconductors ( itrs ) relies on consensus estimates and works around engineering obstacles .it tracks improvements in materials and tools , collects best practices and outlines promising design strategies . as suggested in , it can be enriched by analysis of limits .we additionally focus on how closely such limits can be approached . aside from historical `` wrong turns '' recalled in sections [ sec : eng ] and [ sec : energy ] , we find interesting effects when examining the tightness of individual limits . while energy - time limits are most critical in computer design , space - time limits appear tighter and capture bottlenecks formed by interconnect and communication .they suggest optimizing gate locations and sizes , and placing gates in three dimensions .one can also adapt algorithms to spatial embeddings and seek space - time limits .but the gap between current technologies and energy - time limits hints at greater rewards . 
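to illustrate the point that polynomial-time approximations exist for some np-hard problems such as the euclidean travelling salesman problem, here is a minimal sketch (plain python, random illustrative instance) of the classic minimum-spanning-tree heuristic, whose tour is guaranteed to be at most twice the optimal length on any metric instance:

import math, random

def mst_tsp_2approx(points):
    """mst 2-approximation for metric tsp: build a minimum spanning tree with
    prim's algorithm, then shortcut a preorder walk of the tree into a tour."""
    n = len(points)
    d = lambda i, j: math.dist(points[i], points[j])
    in_tree = [False] * n
    best = [math.inf] * n
    parent = [-1] * n
    best[0] = 0.0
    children = [[] for _ in range(n)]
    for _ in range(n):
        u = min((i for i in range(n) if not in_tree[i]), key=lambda i: best[i])
        in_tree[u] = True
        if parent[u] >= 0:
            children[parent[u]].append(u)
        for w in range(n):
            if not in_tree[w] and d(u, w) < best[w]:
                best[w], parent[w] = d(u, w), u
    tour, stack = [], [0]
    while stack:                          # preorder walk of the mst
        u = stack.pop()
        tour.append(u)
        stack.extend(reversed(children[u]))
    length = sum(d(tour[i], tour[(i + 1) % n]) for i in range(n))
    return tour, length

random.seed(0)
pts = [(random.random(), random.random()) for _ in range(200)]
tour, length = mst_tsp_2approx(pts)
print("tour length:", round(length, 3), "(within a factor 2 of optimal)")

by contrast, as noted above, no comparably accurate polynomial-time approximation is believed to exist for problems such as maximum clique.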
_ charge recovery _ , _ power management _ , voltage scaling , and _ near - threshold computing _ reduce energy waste .optimizing algorithms and circuits simultaneously for energy and spatial embedding gives biological systems an edge ( from the 1d worm _ c. elegans _ with 302 neurons to the 3d human brain with 86 billion neurons ) .yet , using mass - energy to compute can be a veritable _ nuclear option_. in a 1959 talk , which predated moore s law , richard feynman suggested that there was `` plenty of room at the bottom , '' forecasting the miniaturization of electronics . today, with relatively little physical room left , _ there is plenty of _ energy _ at the bottom_. if this energy is tapped for computing , how can resulting heat be removed ? recycling heat into mass or electricity seems ruled out by limits to energy conversion and the acceptable thermal envelope .technology - specific limits for modern computers tend to express tradeoffs , especially for systems with conflicting performance parameters and properties .little is known about limits on _design technologies_. given that large - scale complex systems are often designed and implemented hierarchically with multiple levels of abstraction , it would be valuable to capture losses incurred at abstraction boundaries and between levels of design hierarchies .it is common to estimate resources required for a subsystem and then implement the subsystem to satisfy resource budgets .underestimation is avoided because it leads to failures , but overestimation results in overdesign .inaccuracies in estimation and physical modeling also lead to losses during optimization , especially in the presence of uncertainty .clarifying engineering limits gives hope to circumvent them .technology - agnostic limits look simple and have had significant impact in practice , for example aaronson explains why np - hardness is unlikely to be circumvented by through physics .limits to _ parallel computation _ became prominent after cpu speed levelled off ten years ago .they suggest using faster interconnect , local computation that reduces communication , time - division multiplexing of logic , architectural and algorithmic techniques , solving larger problem instances , and altering applications to embrace parallelism .john gustafson advocates a _ natural selection _ : the survival of applications fittest for parallelism . in another twist ,the performance and power consumption of industry - scale distributed systems is often described by probability distributions , rather than single numbers , making it harder to even formulate appropriate limits .we also can not yet formulate fundamental limits related to the complexity of the software - development effort , the efficiency of cpu caches , and computational requirements of incremental functional verification , but we have noticed that many known limits are either loose or can be circumvented , leading to _secondary limits_. to wit , the limit is worded in terms of worst - case rather than average - case performance , and has not been proven despite heavy evidence .researchers have ruled out entire categories of proof techniques as insufficient to complete such a proof . while esoteric , such _ tertiary limits _ can be effective in practice in august 2010 , they helped researchers quickly invalidate vinay deolalikar s highly - technical attempt at proving . 
on the other hand , the correctness of lengthy proofs for some key results could not be established with acceptable level of certainty by reviewers , prompting efforts in verifying mathematics by computation . in summary, we have reviewed what is known about limits to computation , including existential challenges arising in the sciences , design and optimization challenges arising in engineering , as well as current state of the art .these categories are closely linked due to the rapid pace of technology development . when a specific limit is approached and obstructs progress , understanding its assumptions is a key to circumventing it .some limits are hopelessly loose and can be ignored , while other limits remain conjectured based on empirical evidence and may be very difficult to establish rigorously .such _ limits on limits to computation _ deserve further study .* acknowledgments .* this work was supported in part by the semiconductor research corporation ( src ) task 2264.001 ( funded by intel and ibm ) , us airforce research laboratory award fa8750 - 11 - 2 - 0043 , and us national science foundation ( nsf ) award 1162087 .hameed , r. , qadeer , w. , wachs , m. , azizi , o. , solomatnikov , a. , lee , b. c. , richardson , s. , kozyrakis , c. , horowitz , m. understanding sources of ineffciency in general - purpose chips , _ commun .54(10 ) : 85 - 93 october ( 2011 ) .davis , j. a. , venkatesan , r. , kaloyeros , a. , beylansky , m. , souri , s. j. , banerjee , k. , saraswat , k. c. , rahman , a. , reif , r. , meindl , j. d. interconnect limits on gigascale integration ( gsi ) in the 21st century , _ proc . ieee _89(3):305 - 324 ( 2001 ) .hisamoto , d. , lee , w .- c . ,kedzierski , j. , takeuchi , h. , asano , k. , kuo , c. , anderson , e. , king , t .- j ., bokor , j. , hu , c. finfet - a self - aligned double - gate mosfet scalable to 20 nm , _ ieee trans . on electron devices _ 47(12):2320 - 2325 ( 2002 ) .brut , a. , arakelyan , a. , petrosyan , a. , ciliberto , s. , dillenschneider r. , and lutz , e. experimental verification of landauer s principle linking information and thermodynamics , _ nature _ 483 : 187 - 189 ( 2012 ). monroe , c. , raussendorf , r. , ruthven , a. , brown , k. r. , maunz , p. , duan , l .- m . , and kim , j. large - scale modular quantum - computer architecture with atomic memory and photonic interconnects , _ phys . rev .a _ 89 , 022317 .borkar , s. thousand - core chips : a technology perspective , in _ proc .design automation conf ._ ( dac ) ) 746 - 749 ( 2007 ) .rabaey , j. m. , chandrakasan , a. , nikolic , b. digital integrated circuits a design perspective , _ pearson education , inc _ ( 2003 ) .bohr , m. a 30 year retrospective on dennard s mosfet scaling paper , _ ieee solid - state circuits society newsletter _12(1 ) : 11 - 13 ( 2007 ) .taylor , m. b. is dark silicon useful ?harnessing the four horsemen of the coming dark silicon apocalypse , in _ proc .design automation conf . _( dac ) 1131 - 1136 ( 2012 ) .dreslinski , r. g. , wieckowski , m. , blaauw , d. , sylvester , d. , mudge , t. near - threshold computing : reclaiming moore s law through energy efficient integrated circuits , _ proc .98(2 ) : 253 - 266 ( 2010 ) .wolf , s. a. , awschalom , d. d. , buhrman , r. a. , daughton , j. m. , von molnr , s. , roukes , m. l. , chtchelkanova , a. y. , treger , d. m. , spintronics : a spin - based electronics vision for the future , _ science _ 294:1488 - 1494 ( 2001 ) .lee , y .- j . ,morrow , p. , lim , s. k. 
, `` ultra high density logic designs using transistor - level monolithic 3d integration , '' in _ proc .. computer - aided design of integrated circuits _ _ iccad _ 2012 : 539 - 546 .dror , r. o. , grossman , j. p. , mackenzie , k. m. , towles , b. , chow , e. , salmon , j. k. , young , c. , bank , j. a. , batson , b. , shaw , d. e. , kuskin , j. , larson , r. h. , moraes , m. a. , shaw , d. e. overcoming communication latency barriers in massively parallel scientific computation , _ ieee micro _31(3 ) : 8 - 19 ( 2011 ) .barroso , l. a. , clidaras , j. , hlzle , u. the datacenter as a computer : an introduction to the design of warehouse - scale machines , 2nd ed . : synthesis lectures on computer architecture ( _ morgan & claypool publishers _ 2013 ) .
an indispensable part of our lives , _ computing _ has also become essential to industries and governments . steady improvements in computer hardware have been supported by periodic doubling of transistor densities in integrated circuits over the last fifty years . such _ moore scaling _ now requires increasingly heroic efforts , stimulating research in alternative hardware and stirring controversy . to help evaluate emerging technologies and enrich our understanding of integrated - circuit scaling , we review fundamental limits to computation : in manufacturing , energy , physical space , design and verification effort , and algorithms . to outline what is achievable _ in principle _ and _ in practice _ , we recall how some limits were circumvented , compare loose and tight limits . we also point out that engineering difficulties encountered by emerging technologies may indicate yet - unknown limits .
consider a finite volume containing interacting particles and that is coupled at opposite boundaries to particle reservoirs held at _ different _ chemical potentials , as shown in figure [ fig : chempotgrad ] .in this situation one anticipates a net flux of particles from the reservoir with the greater chemical potential through the system and into the opposite reservoir . after some initial transients we expect a state to arisein which there is a nonzero mean flux that is constant over space and time .this flux reveals the system to be in a _nonequilibrium steady state_. more generally , we have in mind any physical system all of whose observables do not change with time , but nevertheless exhibits an irreversible exchange of heat , particles , volume or some other physical quantity with its environment . a channel held out of equilibrium through the application of a chemical potential gradient .it is anticipated that a nonequilibrium steady state will be reached in which particles flow from one reservoir to another at a constant rate . ]when heat , particle or volume exchanges are reversible , the system is at equilibrium with its environment and , as is well established , its microstates have a gibbs - boltzmann distribution .this knowledge allows one then to predict the macroscopic physics of a many - body system given a microscopic model couched in terms of energetic interactions .furthermore , these gibbs - boltzmann statistics emerge from the application of a very simple principle , namely that of equal _ a priori _ probabilities for the system combined with its environment . by contrast ,when a system is out of equilibrium , very little is known about the statistics of the microstates , not even in the steady state .this is despite a wide range of approaches aimed at addressing this deficiency in our understanding of nonequilibrium physics .these approaches fall roughly into two broad categories .first , one has macroscopic theories , such as onsager and machlup s pioneering work on near - equilibrium fluctuations . in this regime , it is assumed that the thermodynamic forces that restore an equilibrium are linear , and it is possible to determine the probability of witnessing certain fluctuations in the equilibrium steady state , the most probable trajectory given by a principle of minimum energy dissipation . in this work ,we are interested in steady states that are far from equilibrium , i.e. , where the system is driven beyond the linear response regime .recently there has been some success in extending the onsager - machlup theory of equilibrium fluctuations to such nonequilibrium steady states , as long as one has been able to derive macroscopic ( hydrodynamic ) equations of motion for the system .furthermore , over the past few years fluctuation theorems have been derived under a wide range of conditions . typically , these relate the probabilities of entropy changes of equal magnitude but opposite sign occurring in a system driven arbitrarily far from equilibrium ( see for a brief introductory overview ) .although some connections between these various topics have been established ( see , e.g. , ) , it is fair to say that a complete coherent picture of the macroscopic properties of nonequilibrium steady states is still lacking .somewhere between this macroscopic approach and a truly microscopic approach ( i.e. 
, one that would follow directly from some microscopic equations of motion ) lie a range of mesoscopic models that are intended to capture the essential features of nonequilibrium dynamics irreversibility , currents , dissipation of heat and so on but are nevertheless simple enough that analytical treatment is possible . in particular , it is possible to explore the macroscopic consequences of the underlying dynamics , in the process identifying both similarities with equilibrium states of matter and the novel features peculiar to nonequilibrium systems .it is this approach that is the focus of this review article .specifically , we will discuss a set of models that have a steady - state distribution of microstates that can be expressed mathematically in the form of a matrix product .we will explain in detail how these expressions can be used to calculate macroscopic steady - state properties exactly .these will include particle currents , density profiles , correlation functions , the distribution of macroscopic fluctuations and so on .these calculations will reveal phenomena that occur purely because of the far - from - equilibrium conditions : for example , boundary induced phase transitions and shock fronts . meanwhile ,we will also find conceptual connections with equilibrium statistical physics : in some of the models we discuss , for example , it is possible to construct partition functions and free - energy - like quantities that are meaningful both for equilibrium and nonequilibrium systems .we begin by explaining this modelling approach in more detail . in figure[ fig : microtubule ] we have sketched some molecular motors ( kinesins ) that are attached to a microtubule .this is essentially a track along which the motors can walk by using packets of energy carried by atp molecules present in the surroundings .clearly , at the most microscopic level , there are a number chemical processes at play that combine to give rise to the motor s progress along the track . at a morecoarse - grained , mesoscopic level , we can simply observe that the motor takes steps of a well - defined size ( approximately 8 nm for a kinesin ) at a time , and thus model the system as a set of particles that hop _ stochastically _ between sites of a lattice .this stochastic prescription is in part intended to reflect the fact that the internal degrees of freedom that govern when a particle hops are not explicitly included in the model .kinesins attached to a microtubule that move in a preferred direction by extracting chemical energy from the environment . at a mesoscopic level , this can be modelled as a stochastic process in which particles hop along a one - dimensional lattice as shown . 
]the simplest such stochastic model for the particle hops is a poisson process that occurs at some prescribed rate .let and be two configurations of the lattice that differ by a single particle hop .we then define as the rate at which this hop occurs , such that in an infinitesimal time interval the probability that that hop takes place is .the rate of change of the probability for the system to be in configuration at time is then the solution of the master equation subject to some initial condition .the first term on the right - hand side of this equation gives contributions from all possible hops ( transitions ) into the configuration from other configurations ; the second term gives contributions from hops out of into other configuration .we are concerned here with steady states , where these gain and loss terms exactly balance causing the time derivative to vanish .such solutions of the master equation we denote . in this work ,nearly all the models we discuss have the property that a single steady state is reached from any initial condition : i.e. , they are ergodic . given then that the distribution is unique for these models , we concentrate almost exclusively on how it is obtained analytically , and how expectation values of observables are calculated. the remaining models that are not ergodic over the full space of configurations turn out to have a unique steady - state distribution over some subspace of configurations .we will see that such a distributions of this type can also be handled using the methods described in this work .solutions of the master equation ( [ int : me ] ) can assume a number of structures .some general classes of steady state not intended to be mutually exclusive can be summarised as follows : [ [ equilibrium - gibbs - boltzmann - steady - state ] ] equilibrium ( gibbs - boltzmann ) steady state + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + if the internal energy of a microstate is and the system as a whole is in equilibrium with a heat bath with inverse temperature , the microstates have a gibbs - boltzmann distribution .the dynamics is reversible in that the probability of witnessing any particular trajectory through phase space is equal to that of its time reversal .this then implies that stochastic dynamics expressed in terms of transition rates must satisfy the detailed balance condition where here is the gibbs - boltzmann distribution .a simple consequence of ( [ int : db ] ) is that there are no fluxes in the steady state .hence any system that does exhibit currents in the steady state , i.e. , nonequilibrium systems , generally do not have a stationary distribution that satisfies detailed balance . [[ factorised - steady - state ] ] factorised steady state + + + + + + + + + + + + + + + + + + + + + + + sometimes , whether in or out of equilibrium , one has a factorised steady state .typically one has in mind a lattice ( or graph ) with sites with configurations specified by occupancy variables ( i.e. , site contains particles ) . 
a factorised steady statethen takes the form .for such a structure to arise , certain constraints on the transition rates must be satisfied .for example , in an equilibrium system , one would need the energy of the system to be the sum of single - site energies .out of equilibrium , one finds factorised steady states in zero - range processes , in which particles hop at a rate that depends only on the occupation number at the departure site .moreover , necessary and sufficient conditions for factorisation in a broad class of models which includes the zrp have been established .zero - range processes and various generalisations have been reviewed recently in a companion paper and so we do not discuss them further here .[ [ matrix - product - steady - state ] ] matrix product steady state + + + + + + + + + + + + + + + + + + + + + + + + + + + a matrix product steady state is an extension to a factorised steady state that is of particular utility for one - dimensional models .the rough idea is to replace the scalar factors with matrices , the steady state probability then being given by an element of the resulting matrix product . since the matrices for different occupancy numbers need not commute , one opens up the possibility for correlations between the occupancy of different sites ( above those that emerge from global constraints , such as a fixed particle number , that one sees in factorised states ) .it turns out that quite a number of nonequilibrium models have a matrix product steady state , and we will encounter them all in the course of this review . [ [ most - general - steady - state ] ] most general steady state + + + + + + + + + + + + + + + + + + + + + + + + + in principle , the ( assumed unique ) stationary solution of the master equation ( [ int : me ] ) can always be found if the number of configurations is finite , since ( [ int : me ] ) is a system of linear equations in the probabilities .one way to express this solution is in terms of _ statistical weights _ , each of which is given by the determinant of the matrix of transition rates obtained by removing the row and column corresponding to the configuration ( see e.g. ) .to arrive at the probability distribution one requires the _ normalisation _ , so that then .this normalisation has some interesting properties : it is uniquely defined for any ergodic markov process with a finite number of configurations ; it can be shown to be equal to the product of all nonzero eigenvalues of the matrix of transition rates ; and it is a partition function of a set of trees on the graph of transitions between different microscopic configurations . 
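the statements just made about the most general steady state are easy to check numerically on a small example. the sketch below (python with numpy; the model and rates are our illustrative choices, not taken from the review) builds the generator of the master equation ([int:me]) for a two-site toy model in which particles enter on the left at rate alpha, hop to the right at rate 1, and leave on the right at rate beta, and then obtains the stationary state three ways: from the null space of the generator, from the determinant (principal-minor) weights described above, and via the product of the nonzero eigenvalues as the normalisation. the sign convention (minors of minus the generator) is our choice, made so that the weights come out positive:

import numpy as np
from itertools import product

alpha, beta = 1.0, 1.0                              # illustrative boundary rates
configs = list(product((0, 1), repeat=2))           # configurations (tau_1, tau_2)
idx = {c: i for i, c in enumerate(configs)}

M = np.zeros((4, 4))                                # generator: M[c', c] = w(c -> c')
def add_rate(c_from, c_to, rate):
    M[idx[c_to], idx[c_from]] += rate
    M[idx[c_from], idx[c_from]] -= rate

for (t1, t2) in configs:
    if t1 == 0:
        add_rate((t1, t2), (1, t2), alpha)          # injection at site 1
    if t1 == 1 and t2 == 0:
        add_rate((t1, t2), (0, 1), 1.0)             # hop from site 1 to site 2
    if t2 == 1:
        add_rate((t1, t2), (t1, 0), beta)           # extraction from site 2

# (i) stationary distribution from the null space of the generator
evals, evecs = np.linalg.eig(M)
p = np.real(evecs[:, np.argmin(np.abs(evals))])
p /= p.sum()

# (ii) unnormalised weights as principal minors of a = -M, with the normalisation
#      equal to the product of the nonzero eigenvalues of a
A = -M
f = np.array([np.linalg.det(np.delete(np.delete(A, i, 0), i, 1)) for i in range(4)])
Z = f.sum()
prod_nonzero = np.real(np.prod([lam for lam in np.linalg.eigvals(A) if abs(lam) > 1e-9]))

print("configurations :", configs)
print("p (null space) :", np.round(p, 4))
print("p (minors)     :", np.round(f / Z, 4))
print("Z =", round(Z, 3), "  product of nonzero eigenvalues =", round(prod_nonzero, 3))

for alpha = beta = 1 both routes give the weights 1, 1, 2, 1 (probabilities 0.2, 0.2, 0.4, 0.2) and Z = 5, in agreement with the eigenvalue product.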
in this latter case ,the densities of particular edges in the ensemble of trees are controlled by tuning the transition rates corresponding to those edges , so in this interpretation these transition rates are equivalent to equilibrium fugacities .although this interpretation of the normalisation is rather abstract , concrete connections between transition rates in certain nonequilibrium models that have a matrix product steady state and fugacities in an equilibrium ensemble have been made , and these we shall discuss later in this review .we also remark that the equilibrium theory of partition function zeros to characterise phase transitions also carries over to the normalisation as just defined , at least for those models that have been tested .we have three principal aims in this work .first , we wish to illustrate the insights into nonequilibrium statistical mechanics that have been gained from exactly solvable models , particularly those that have a steady state of the matrix product form .secondly , we seek to provide a self - contained pedagogical account of the various analytical methods and calculational tools that can be used to go from the matrix product expressions to predictions for the macroscopic physics .finally , we wish to present a thorough review of the progress that has been made using the matrix product approach over the last few years .as far as we are aware , the matrix product approach has not been the focus of a review for nearly a decade , and we feel that it is high time that the significant developments that have occurred in the meantime should be collected together in one place . in order to prevent this review from becoming unmanageably long , we focus purely on static physical properties exhibited in the steady states of models solvable by the matrix product method .this means we have unfortunately had to omit discussion of some very interesting topics , for example how some dynamical properties , such as fluctuation phenomena , have been elucidated through the use of the bethe ansatz , determinental solutions and their connection to random matrix theory and dynamical matrix products .some of these topics , however , have recently been reviewed elsewhere .we also direct the reader to established reviews , such as , for any other background that we have been forced to leave out here . to realise the aims stated above , we provide in the next section a general account ofthe physics one expects to see in nonequilibrium dynamical systems , and outline the essential ideas underlying the matrix product approach .thereafter , we go into the details of how the simplest models are solved , and show a number of contrasting ( but equivalent ) approaches that have found application to more complex problems .these cases then form the material of the remainder of the review , which we round off by posing some open problems for future research .the asymmetric simple exclusion process ( asep ) is a very simple model of a driven system in one dimension that has biased diffusion of hard - core particles .we shall discuss two versions : the periodic system and the open system .the latter is coupled to particle reservoirs at either end so that , as described in the previous section , there is a steady state with a constant particle flux .the open system has a long history , having first appeared in the literature to our knowledge as a model of biopolymerisation and transport across membranes . 
in the mathematical literaturemeanwhile diffusion with collisions between particles was first studied by harris and the terminology simple exclusion was first defined by spitzer . over the years , applications to other transport processes have appeared , e.g. , as a general model for traffic flow and various other theoretical and experimental studies of biophysical transport .whilst interesting and important , these many applications are not our primary concern here . also we shall not do justice to the many rigorous mathematical results which have been summarised in the books by liggett . rather , our interest in the asep lies in its having acquired the status of a fundamental model of nonequilibrium statistical physics in its own right in much the same way that the ising model has become a paradigm for equilibrium critical phenomena .in particular , the asep and related models show despite their superficial simplicity a range of nontrivial macroscopic phenomena , such as phase transitions , spontaneous symmetry breaking , shock fronts , condensation and jamming . in common with the vast majority of the models we discuss in this review , the asep is defined as a stochastic process taking place in continuous time on a discrete one - dimensional lattice of sites .hopping is totally asymmetric : a particle sitting to the left of an occupied site hops forwards one site ( as in figure [ fig : microtubule ] ) , each hop being a poisson process with rate .that is , in an infinitesimal time interval , there is a probability that one of the particles that can hop , does hop , the identity of that particle being chosen at random . in simulation terms, this type of dynamics corresponds to a _ random - sequential _ updating scheme , in which bonds between particles are chosen at random and then , if there is a particle at the left - hand end of the bond and a vacancy at the right , the particle is moved forwards . in this scheme ,each update corresponds ( on average ) to units of time ; in the following we take without loss of generality . the definition of the model is completed by specifying the boundary conditions . asymmetric exclusion process with ( a ) periodic and ( b ) open boundary conditions .the labels indicate the rates at which the various particle moves can occur . ]the simplest version of the model has a ring of sites as depicted in figure [ fig : asep](a ) : a particle hopping to the right on site lands on site if the receiving site is empty . under these dynamics, the total particle number is conserved .furthermore , as we will show in section [ rrproof ] , the steady state is very simple : all configurations ( with the allowed number of particles ) are equally likely .as there are allowed configurations the probability of any one is .the steady - state current of particles , , through a bond is given by ( here equal to 1 ) multiplied by the probability that there is a particle at site and site is vacant .one finds in the thermodynamic limit where at a constant ratio .one may also ask , starting from a known configuration , how long the system takes to relax to the steady state .it turns out that this timescale depends on the system size via the relation which defines a _ dynamic exponent _ .one can show using the bethe ansatz ( not covered here , but see , e.g. 
, ) that for the asep on a ring , .also in the symmetric case where particles hop both to the left and to the right with equal rates , but still with exclusion , one obtains which is the value one would find for a purely diffusive process . to model the interaction of an open system with reservoirs at different densities , poisson processes acting at the boundary sites are added .specifically , a particle may be inserted onto the leftmost site with rate ( if it is vacant ) , and may leave the system from the rightmost site at rate : see figure [ fig : asep](b ) . herewe will consider only , although there is no problem in considering rates outwith this range. then we may think of reservoirs at site 0 with density and a reservoir at site with density . to include these moves in the simulation scheme described above, one would additionally allow the bonds between the system and the reservoirs to be chosen , and perform the particle updates with probability ( entry ) or ( exit ) according to the bond chosen ( assuming and both less than ) .these moves admit the possibility of a nonzero current in the steady state .the open system is more interesting in that phase transitions may occur in the steady state . in the followingwe shall explore the origins of these phase transitions , both through approximate treatments and an exact solution . in this contextwe recognise by direct analogy with equilibrium phase transitions a phase transition as a sudden change in form of macroscopic quantities such as the particle current across a bond , or the density at a site .it turns out that the particle current plays a central role , analogous to the free energy of an equilibrium system , in determining the nature of the phase transition i.e. the order of the transition is determined by which derivative of the current with respect to some external parameter exhibits a discontinuity .we remark that other types of updating scheme can also be considered .for example , in applications to traffic flow or pedestrian dynamics , it is more natural to consider parallel dynamics where many particles can hop in concert . the asep remains solvable under certain classes of parallel updating schemes which will be discussed towards the end of this review ( section [ discretetime ] ) .we begin by reviewing a classical phenomenological theory , first applied to traffic flow , which serves as a first recourse in our understanding of the phase diagrams of driven diffusive systems .the idea is to postulate a continuity equation for the local density where is the current of particles .( note we use here a different nomenclature and notation to which discusses concentration and flow rather density and current . 
) the analysis rests on the key assumption that there is a unique relation, J = J(ρ), between the current and the local density. also, it is assumed that there is a maximum current (the capacity of the road) J_max at some density ρ*. the first assumption implies that ([cty]) becomes

∂ρ/∂t + v_g(ρ) ∂ρ/∂x = 0 ,   ([cty2])

where

v_g(ρ) = dJ(ρ)/dρ .   ([vg])

now, an implicit solution of ([cty2]) is of travelling-wave form ρ(x,t) = f( x - v_g(ρ) t ), as can readily be checked by substitution into ([cty2]). the arbitrary function f is determined by the initial density profile. the interpretation of the solution is that a patch with local density ρ propagates with velocity v_g(ρ). the propagation of such a patch is a collective phenomenon known as a _kinematic wave_, and v_g(ρ) is a group velocity. the velocity v(ρ) of a single particle in an environment of density ρ, on the other hand, is defined through J(ρ) = ρ v(ρ). one sees that v_g(ρ) = v(ρ) + ρ dv/dρ; thus, if the single-particle velocity is a decreasing function of density, which is what we expect, we find v_g(ρ) < v(ρ) and kinematic waves travel backwards in the frame of a moving particle.

(caption, figure [fig:kinematic]: kinematic waves and sharpening of a shock: the figure sketches the evolution of a density profile from an initial profile according to ([cty]) and illustrates how sharp discontinuities in the profile may develop; see text for discussion.)

if v_g(ρ) is a decreasing function of ρ we have the phenomenon of shock formation (see figure [fig:kinematic]). that is, since the patches of an initial density profile travel with different speeds, the low-density regions catch up with higher-density regions and discontinuities in the density profile, known as shocks, develop. strictly, after the formation of a shock the description of the density profile by the first-order equation ([cty]) breaks down and one has to supplement ([cty]) with second-order spatial derivatives to describe the shock profile. to deduce the velocity of a shock, consider two regions of density ρ_- to the left and ρ_+ to the right, separated by a shock. mass conservation implies that the velocity (positive to the right) of the shock is given by

v_s = [ J(ρ_+) - J(ρ_-) ] / ( ρ_+ - ρ_- ) .   ([vs])

note that if the current-density relation has a maximum then one can have a stationary shock, v_s = 0, when J(ρ_-) = J(ρ_+). a particular form for the current-density relationship, studied in the traffic-flow literature and relevant to the asep, is

J(ρ) = ρ ( 1 - ρ ) .   ([jasep])

in this case J reaches its maximum value 1/4 at ρ = 1/2 and is symmetric about the maximum. expression ([jasep]) may be easily understood for the asep by noting that in order for a particle to hop across a bond, the site to the left must be occupied and that to the right empty. for a bond at position i, the probability of the former is ρ_i and that of the latter is 1 - ρ_{i+1}. assuming these events are uncorrelated (which they are not, but we shall come to this later), we have J = ρ(1 - ρ). in the case ([jasep]) one can compute the kinematic wave and shock velocities from ([vg],[vs]):

v_g(ρ) = 1 - 2ρ ,   v_s = 1 - ρ_- - ρ_+ ,

where, as above, ρ_- and ρ_+ are the densities on either side of the shock. thus the kinematic wave velocity is negative when ρ > 1/2 and the shock velocity is negative when ρ_- + ρ_+ > 1. the kinematic wave theory can be used to predict the phase diagram of the open-boundary asep, in which distinct steady-state behaviours are demarcated (figure [fig:asepmfpd]). the left-hand boundary is considered as a reservoir of particles of density α and the right-hand boundary (site L+1) as a reservoir of density 1 - β.
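the argument that now follows, kinematic waves entering from the reservoirs of density α on the left and 1 - β on the right with a shock between them, can be packaged in a few lines and checked against a direct simulation. in the sketch below (plain python; system size, rates and run lengths are illustrative choices) the classification function anticipates the bulk densities and currents summarised in table [tab:asepmf], and the simulator implements the random-sequential dynamics defined earlier:

import random

def J(rho):                        # mean-field current-density relation ([jasep])
    return rho * (1.0 - rho)

def predicted_phase(alpha, beta):
    """bulk density and current from the kinematic-wave argument."""
    if alpha < beta and alpha < 0.5:
        return "low density", alpha, J(alpha)
    if beta < alpha and beta < 0.5:
        return "high density", 1.0 - beta, J(1.0 - beta)
    if alpha >= 0.5 and beta >= 0.5:
        return "maximal current", 0.5, 0.25
    return "coexistence line (diffusing shock)", None, J(alpha)

def simulate(L=100, alpha=0.3, beta=0.7, sweeps=10000, warmup=2000, seed=1):
    """random-sequential open-boundary tasep; one sweep = L+1 bond updates = 1 time
    unit.  returns the density of the middle half of the chain and the current
    through the middle bond."""
    rng = random.Random(seed)
    tau = [0] * L
    mid, hops, dens = L // 2, 0, 0.0
    for sweep in range(sweeps):
        for _ in range(L + 1):
            b = rng.randrange(L + 1)            # bond b joins site b-1 and site b
            if b == 0:
                if tau[0] == 0 and rng.random() < alpha:
                    tau[0] = 1                   # injection from the left reservoir
            elif b == L:
                if tau[-1] == 1 and rng.random() < beta:
                    tau[-1] = 0                  # extraction into the right reservoir
            elif tau[b - 1] == 1 and tau[b] == 0:
                tau[b - 1], tau[b] = 0, 1        # bulk hop at unit rate
                if sweep >= warmup and b == mid:
                    hops += 1
        if sweep >= warmup:
            dens += sum(tau[L // 4: 3 * L // 4]) / (L // 2)
    measured = sweeps - warmup
    return dens / measured, hops / measured

for a, b in ((0.3, 0.7), (0.7, 0.3), (0.8, 0.8)):
    print((a, b), predicted_phase(a, b),
          "simulated:", tuple(round(x, 3) for x in simulate(alpha=a, beta=b)))

for the three parameter pairs chosen, the simulated bulk density and current land close to α and α(1-α), to 1-β and β(1-β), and to 1/2 and 1/4 respectively, as the argument below predicts.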
associated with these boundary densitiesare kinematic waves with velocities in the case where and both kinematic waves propagate into the system .so , for example , from an initially empty system the kinematic waves of densities and will enter from the left and right of the system and meet somewhere in the middle forming a shock which then moves with velocity if the shock moves to the right hand boundary and density associated with the left hand boundary , , is adopted throughout the bulk of the system . on the other handif the shock moves to the left hand boundary and density associated with the right hand boundary , , is adopted throughout the bulk of the system . in the case , and the shock is stationary . in the stochastic systemthe shock , although on average stationary , diffuses around the system and effectively reflects off the boundaries .the result is that the shock is equally likely to be anywhere in the system . in the case where one of or , the kinematic wave associated with that boundary does not propagate into the system and the kinematic wave which does propagate from the other boundary controls the bulk density .thus , the boundary with or controls the bulk density finally if both , .the kinematic waves from both boundaries do not penetrate . to describe this phase one needs to add a diffusive contribution to the current ( [ jasep ] ) i.e. to consider second - order spatial derivative of which we shall do in the next section .the result is that steady state of the system has density ; the system adopts the maximal current density which is the density associated with kinematic wave velocity zero .the resulting bulk densities and currents are then as shown in table [ tab : asepmf ] which corresponds to the phase diagram given in figure [ fig : asepmfpd ] .here we have adopted what has become the standard nomenclature for the three phases the terminology should hopefully be self - explanatory ..[tab : asepmf ] properties of the asep obtained using the extremal - current principle under a mean - field approximation for the density - current relationship . [ cols="^,^,^,^",options="header " , ] whilst the matrix representation given in ( [ pasep : d1 ] ) and ( [ pasep : e1 ] ) provides a quick route to the normalisation ( [ pasep : zint ] ) , it is less suited to the task of calculating density profiles . here, progress is made by using a representation that involves the -deformed harmonic oscillator algebra , a well - studied mathematical object for which many results are known .central to this approach is a pair of creation and annihilation operators and that satisfy a -commutation relation and act on basis vectors according to if we take we find from ( [ pasep : qcom ] ) that the reduction relation ( [ pasep : de ] ) is satisfied .thus this representation looks like in order that ( [ pasep : dv ] ) and ( [ pasep : ew ] ) are also respected ( bearing in mind ) we require where the parameters and are as previously given in ( [ pasep : ab ] ) . thus this representation is the generalisation to of that used in section [ diag ] .the constant appearing in ( [ pasep : wvnn ] ) is fixed by the convention that .that is , we require this sum , which converges when , features prominently in the literature on -series as it is the -analogue of the exponential function .when it converges , it can be expressed as an infinite product and .diagonalisation of the matrix proceeds in much the same way as described above in subsection [ alsalam ] . 
here ,the eigenfunctions are a vector of functions with eigenvalue as before .the polynomials are found to satisfy the three - term recurrence using the same approach as previously .taking , allows the functions to be identified as -hermite polynomials , the properties of which are discussed in .of particular importance is the -exponential generating function of these polynomials , since this allows computation of the scalar products and that appear during the calculation of the normalisation .one finds for the former and the corresponding expression with for the latter . with the knowledge of the orthogonality relation satisfied by the polynomials can write down the identity which is the final part of the jigsaw that is an expression for the normalisation .one can verify that the same integral ( [ pasep : zint ] ) is obtained using this alternative representation .the key benefit offered by this approach is that expressions for the difference in density between neighbouring pairs of sites c^{n - i-2 } { | v \rangle}}{z_n}\ ] ] involve the matrix which is diagonal in this representation .this one sees from ( [ pasep : deaa ] ) and the definitions ( [ pasep : adag ] ) and ( [ pasep : a ] ) .specifically , { | m \rangle } & = & \frac{{\langle n | } [ \hat{a } \hat{a}^\dagger - \hat{a}^\dagger \hat{a } ] { | m \rangle}}{1-q } \\ &= & \frac{\left[(1-q^{n+1 } ) - ( 1-q^n)\right]}{1-q } \delta_{n , m } = q^n \delta_{n , m } \;.\end{aligned}\ ] ] to express the density gradient ( [ pasep : delta ] ) in terms of integrals over the -hermite polynomials , one inserts two of the identities ( [ pasep : hp2 ] ) , one in front of the combination ] which contains some large number of lattice sites .as we shall see explicitly below , each macroscopic profile can be realised in number of ways that grows exponentially with the number of microscopic degrees of freedom .this exponential growth leads to the key property of statistical mechanical systems that in the thermodynamic limit , , one particular macrostate let us call this very much more likely to be realised than any other ( except possibly at a phase transition ) .that is , the probability ] is a functional of the density profile that vanishes when . in an equilibrium system , ] that results as a free energy functional .progress in calculating such free energy functionals and related quantities has been summarised in the lecture notes of derrida . in this subsection, we will sketch the derivation of the free energy functional for the nonequilibrium symmetric simple exclusion process ( ssep ) with open boundaries that has been achieved through the matrix product solution for the steady - state distribution .this version of the model has symmetric hopping of a single species at unit rate in the bulk , entry at the left and right boundaries at rate and respectively , and exit at the left and right boundaries at rate and .in other words , we have the dynamics illustrated in figure [ fig : pasep ] of section [ pasep ] with the bias . in order to illustrate the procedure by which the free energy functional was first calculated in from the matrix product expressions, we shall follow it here for the much simpler case in which the model parameters are chosen so as to give rise to an _ equilibrium _ steady state . 
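the normalisation and density-profile formulas being manipulated here can also be evaluated by brute force in a special case. the sketch below (python with numpy; a cross-check added for illustration, not part of the original analysis) uses the familiar bidiagonal representation of the q = 0 (tasep) algebra at α = β = 1, in which D is the identity plus an upper shift, E is its transpose, and the boundary vectors are the first basis vector, so that DE = D + E, D|V> = |V> and <W|E = <W| are all satisfied. because the boundary vectors have a single nonzero entry, truncating the matrices at dimension N+2 gives exact results for a chain of N sites:

import numpy as np

def tasep_matrices(N):
    dim = N + 2
    D = np.eye(dim) + np.diag(np.ones(dim - 1), 1)   # identity plus upper shift
    E = D.T
    w = np.zeros(dim); w[0] = 1.0                    # <W|
    v = np.zeros(dim); v[0] = 1.0                    # |V>
    return D, E, w, v

def Z(N):
    """normalisation <W| (D+E)^N |V> for alpha = beta = 1."""
    D, E, w, v = tasep_matrices(N)
    return w @ np.linalg.matrix_power(D + E, N) @ v

def density_profile(N):
    """site densities <tau_i> = <W| C^(i-1) D C^(N-i) |V> / Z_N, with C = D + E."""
    D, E, w, v = tasep_matrices(N)
    C = D + E
    ZN = w @ np.linalg.matrix_power(C, N) @ v
    return [float(w @ np.linalg.matrix_power(C, i - 1) @ D
                  @ np.linalg.matrix_power(C, N - i) @ v) / ZN
            for i in range(1, N + 1)]

# because DE = C, the current <W| C^(i-1) D E C^(N-i-1) |V> / Z_N reduces to
# Z_(N-1) / Z_N, independent of the bond i
for N in (1, 2, 3, 10, 20):
    print(f"N={N:2d}  Z_N={Z(N):12.0f}  current Z_(N-1)/Z_N = {Z(N-1)/Z(N):.4f}")
print("density profile, N = 10:", np.round(density_profile(10), 3))

the normalisations come out as 2, 5, 14, ... and the current decreases slowly towards 1/4, consistent with α = β = 1 lying in the maximal-current phase of the phase diagram discussed above; the N = 10 profile is symmetric about density 1/2 with boundary deviations that decay into the bulk.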
to find this set of parameters, we employ a kolmogorov criterion .this states that if , for every loop in configuration space , the equality holds , the steady state of the system is an equilibrium state for which the detailed balance relation also holds for every pair of configurations and between which transitions occur at nonzero rate .here we find that ( [ free : kol ] ) is automatically satisfied unless , in total , some nonzero number particles exits at one boundary and re - enters at the other to return to the starting configuration going one way round the loop .then , the ratio of the two products in ( [ free : kol ] ) is .therefore , to realise an equilibrium steady state one must have . since the bulk hopping occurs at the same rate in both directions , the detailed balance relation ( [ free : db ] ) implies that all configurations that have particles on the lattice are equally likely in the steady state . by considering a pair of configurations that differ by the addition or removal of a single particle at the left boundary, one finds that if is the steady - state weight of any one of the -particle configurations , hence , the probability of seeing a particular configuration ( where , as previously , if site is occupied , and zero otherwise ) is given by the product once the underlying distribution of microstates has been established , the free energy functional is obtained by taking an appropriate combination of thermodynamic and continuum limits . to this end, we divide the system into a number of boxes , box containing sites and particles .the pairs can thus be used to specify a macrostate of the system in the limit . to find the probability of this macrostate we note that where is the probability that particles are in a box of size in the steady state .this is given by \right)\end{aligned}\ ] ] where the second expression , valid for large , has been obtained using stirling s approximation . inserting this into ( [ free : boxes ] ) , and taking the thermodynamic limit , one finds that \ ] ] in terms of the intensive box sizes and densities .the continuum limit is now straightforward : we take and where is some portion of the interval ] vanishes if ] for a given profile .one can verify , by asking for a vanishing functional derivative , that this function must satisfy the nonlinear differential equation additionally , the detailed analysis presented in shows that must further be a monotonic function , and satisfy the boundary conditions we note that the monotonicity requirement on ensures that the argument of the final logarithm appearing in ( [ free : ssep ] ) is positive , and further that the value of the companion function at some point will in general depend on the _ entire _ profile through the differential equation ( [ free : comp])the free energy density is non - local , as previously claimed .the companion function that appears in the nonequilibrium free energy functional ( [ free : ssep ] ) is somewhat mysterious at a first encounter .however , a physical meaning can be ascribed to this function , starting from the observation that the values imposed at the boundaries coincide with the mean densities of the left and right boundary sites in the steady state , and hence also the densities of the particle reservoirs at each end of the system . in the bulk, one finds that an infinitesimal segment of the system located at a point can be thought of as being in local equilibrium with a reservoir at density . 
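As an aside, the product-measure structure of this equilibrium special case is easy to confirm by brute force for a short chain. The sketch below builds the full master-equation generator of the open-boundary SSEP for five sites, chooses boundary rates so that the two reservoir densities coincide (assumed here to be the equilibrium condition, with the left reservoir density read off as alpha/(alpha+gamma) and the right as delta/(delta+beta)), and checks that the stationary probability of a configuration depends on it only through the particle number, with successive sectors in the ratio alpha/gamma:

```python
import itertools
import numpy as np

N = 5
alpha, gamma = 0.6, 0.4      # entry / exit at the left boundary
delta, beta = 0.3, 0.2       # entry / exit at the right boundary
# assumed equilibrium condition: equal reservoir densities at the two ends
assert abs(alpha / (alpha + gamma) - delta / (delta + beta)) < 1e-12

configs = list(itertools.product((0, 1), repeat=N))
idx = {c: i for i, c in enumerate(configs)}
G = np.zeros((2 ** N, 2 ** N))          # master-equation generator, G[j, i] = rate i -> j

def add(c_from, c_to, rate):
    G[idx[c_to], idx[c_from]] += rate
    G[idx[c_from], idx[c_from]] -= rate

for c in configs:
    for k in range(N - 1):              # symmetric bulk hops at unit rate
        if c[k] != c[k + 1]:
            add(c, c[:k] + (c[k + 1], c[k]) + c[k + 2:], 1.0)
    add(c, ((1,) if c[0] == 0 else (0,)) + c[1:], alpha if c[0] == 0 else gamma)
    add(c, c[:-1] + ((1,) if c[-1] == 0 else (0,)), delta if c[-1] == 0 else beta)

vals, vecs = np.linalg.eig(G)
P = np.real(vecs[:, np.argmin(np.abs(vals))])
P /= P.sum()

# the stationary measure depends on a configuration only through its particle number,
# with successive particle-number sectors weighted by alpha/gamma
for n in range(N + 1):
    sector = [P[idx[c]] for c in configs if sum(c) == n]
    print(n, max(sector) - min(sector) < 1e-12, sector[0] / P[idx[(0,) * N]])
```

The local-equilibrium interpretation of the companion function can now be made precise.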
to see this , it is convenient first to generalise the functional ( [ free : ssep ] ) to a density profile defined on an arbitrary interval ], the free energy per unit length is unchanged .the functional that has this property and coincides with ( [ free : ssep ] ) for the case is }([\rho ] ; \sigma(a ) , \sigma(b ) ) = \int_a^b \rmd x \left [ \rho(x ) \ln \frac{\rho(x)}{\sigma(x ) } + ( 1 - \rho(x ) ) \ln \frac{1 - \rho(x)}{1-\sigma(x ) } + \ln \frac{(b - a)\sigma'(x)}{\sigma(b)-\sigma(a ) } \right ] \;.\ ] ] in the limit of an infinitesimal interval , , this expression approaches the corresponding expression for the equilibrium system }([\rho ] ; \bar{\rho } ) = \int_a^b \rmd x \left [ \rho(x ) \ln \frac{\rho(x)}{\bar{\rho } } + ( 1 - \rho(x ) ) \ln \frac{1 - \rho(x)}{1-\bar{\rho } } \right]\ ] ] if we identify with , the density of the reservoirs coupled to the equilibrium system .the observation that an effective local thermal equilibrium applies does not itself allow one to recover the full expression ( [ free : ssepg ] ) for the free energy functional since _ a priori _ one does not know what densities the effective intermediate reservoirs should have .however , it was noticed in that one can construct the free energy functional through an additivity principle that involves the modified free energy }([\rho ] ; \sigma(a ) , \sigma(b ) ) = { \cal f}_{[a , b]}([\rho ] ; \rho_a,\rho_b ) + ( b - a ) \ln j(\sigma(a ) , \sigma(b))\ ] ] where is the steady - state current through a system of length coupled to a reservoir of density at the left boundary , and of density at the right . for the ssep ,this current is where we have assumed that .( since the dynamics are symmetric , the case can be treated by making the replacement . )the additivity principle given in relates the free energy functional for a system to that of two subsystems created by inserting a reservoir at a point .it reads }([\rho ] ; \sigma(a ) , \sigma(b ) ) = \max_{\sigma(y ) } \left\ { { \cal h}_{[a , y]}([\rho ] ; \sigma(a ) , \sigma(y ) ) + { \cal h}_{[y , b]}([\rho ] ; \sigma(y ) , \sigma(b ) ) \right\ } \;.\ ] ] that is , is the density one should choose for the reservoir placed at the point so that the combined free energy of the two subsystems is maximised .one can verify by recursively subdividing the subsystems created by inserting intermediate reservoirs , and by assuming that each infinitesimal segment created in this way is in a local equilibrium with its boundary reservoirs , that the expression ( [ free : ssep ] ) results .the fact that one is looking for a maximum in ( [ free : add ] ) further implies that the reservoir densities must satisfy the differential equation ( [ free : comp ] ) .thus the additivity principle ( [ free : add ] ) in tandem with the effective local equilibrium property and the free energy functional given by ( [ free : comp ] ) are equivalent .it is worth reiterating a few properties exhibited by the free energy functional ( [ free : ssep ] ) noted in .first , as is required , the equilibrium version ( [ free : eqm ] ) is recovered in the limit where the right boundary reservoir density approaches that of the left . 
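The integrand quoted above is also convenient for numerics: given a trial profile and a trial monotone companion function with the correct boundary values, the functional can be evaluated directly, and maximising over even a crude family of companion functions gives a lower bound on the large-deviation cost. A rough sketch follows; the one-parameter family used for the companion is only a stand-in for the true solution of the companion equation, so the second number printed is a lower bound:

```python
import numpy as np

rho_a, rho_b = 0.8, 0.2
x = np.linspace(0.0, 1.0, 2001)

def trapz(f):
    return np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(x))

def functional(rho, sig):
    """SSEP functional on [0,1] for a trial monotone companion profile sig(x)."""
    kl = rho * np.log(rho / sig) + (1 - rho) * np.log((1 - rho) / (1 - sig))
    return trapz(kl + np.log(np.gradient(sig, x) / (rho_b - rho_a)))

linear = rho_a + (rho_b - rho_a) * x

# stationary (linear) profile with the companion equal to the profile itself: the functional vanishes
print(functional(linear, linear))                 # ~ 0

# a non-stationary profile: maximise over a crude one-parameter monotone family for sig
bump = linear + 0.1 * np.sin(np.pi * x)
family = lambda c: rho_a + (rho_b - rho_a) * (x + c * x * (1 - x))   # monotone for |c| < 1
cost = max(functional(bump, family(c)) for c in np.linspace(-0.9, 0.9, 181))
print(cost)    # > 0 : the profile appears with probability ~ exp(-N * cost)
```

We now verify the first of the properties listed above, the recovery of the equilibrium functional as the two reservoir densities approach one another.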
to see this , we write ( without loss of generality ) for the effective reservoir density in the bulk + \delta\sigma(x ) \;.\ ] ] if one substitutes this expression into the differential equation ( [ free : comp ] ) that governs , one finds that the function must be proportional to ^ 2 ] .this is achieved by asking for the functional derivative of ( [ free : ssep ] ) with respect to to vanish .that is , }{\delta \rho(x ) } = \ln \left [ \frac{\rho(x)}{\sigma(x ) } \ , \frac{1-\sigma(x)}{1-\rho(x ) } \right ] = 0 \;,\ ] ] and hence we must have .the differential equation ( [ free : comp ] ) then implies that the second derivative of must vanish , which ( along with the boundary conditions ) results in the optimal and coinciding with the linear stationary profile .it is easy to see that then ( [ free : ssep ] ) is zero , indicating that this profile appears with probability in the thermodynamic limit ; it can also be shown that any other macroscopic profile appears with a probability exponentially small in the number of lattice sites .it is interesting to compare the magnitude of the free energy functional for a given profile with that for an equilibrium system that has the same stationary profile .this can be realised by coupling a one - dimensional chain of sites along its length to a series of particle reservoirs that have their chemical potentials tuned in such a way that the desired equilibrium density in a small interval ] which means that the probability of witnessing a particular fluctuation away from the optimal profile for a system driven out of equilibrium is suppressed compared to that for an equilibrium system with the same optimal profile .this is not always the case however : a similar analysis for the asymmetric simple exclusion process ( see below ) shows that fluctuations can also be enhanced relative to the equilibrium state .a final further application of the free energy functional is to explore the optimal profiles in the nonequilibrium system after imposing additional global constraints .for example , one can ask for the most likely profile given that the overall density is fixed .it turns out that the the most likely profile has an exponential form and , unless the overall imposed density happens to be equal to the equilibrium density ( in which limit the linear profile is recovered ) , the density is discontinuous at the boundaries .free energy functionals have also been derived for the partially asymmetric exclusion process ( pasep ) in the case where entry and exit of particles occurs at the left and right boundaries respectively , and the bulk bias is to the right ( i.e. , the forward - bias regime , ) . in principle , one could follow through the procedure outlined above in subsection [ hardway ] , where one starts with the full stationary distribution of microstates and takes the continuum limit .it turns out to be more straightforward instead to use the microscopic distribution to extend the additivity principle discussed in subsection [ easyway ] .then , by the effective local equilibrium property , one can construct the full free energy functional as before .when the bulk dynamics are asymmetric , it turns out that two versions of the additivity principle come into play .one applies when the left boundary reservoir density is less than that at the right ; the other when the reverse is true . 
in both cases, the additivity relation involves the modified free energy defined by equation ( [ free : mod ] ) , but where now the current is that found ( for example ) by an application of the extremal current principle discussed in section [ asep ] .that is , } \rho(1-\rho ) & \sigma \le \sigma ' \\\displaystyle \max_{\rho \in [ \sigma ' , \sigma ] } \rho(1-\rho ) & \sigma \ge \sigma ' \end{array } \right . \;.\ ] ] when , the additivity formula is the same as for the symmetric case , equation ( [ free : add ] ) ( see for details of the calculation ). using the local equilibrium property , one then finds the free energy functional to be }([\rho ] ; \sigma(a ) , \sigma(b ) ) = -(b - a ) \ln j(\sigma(0),\sigma(1 ) ) + { } \nonumber\\ \!\!\ !\max_{\sigma(x ) } \int_a^b \rmd x \left [ \rho(x ) \ln ( \rho(x)[1-\sigma(x ) ] ) + ( 1 - \rho(x ) ) \ln [ ( 1 - \rho(x))\sigma(x ) ] \right ] \;.\end{aligned}\ ] ] again , the companion function that gives the effective reservoir densities in the bulk must match the actual reservoir densities at the boundaries , and must also be a nonincreasing function .when , the additivity formula takes on the different form }([\rho ] ; \sigma(a ) , \sigma(b ) ) = \min_{\rho_y \in \{\sigma(a ) , \sigma(b)\ } } \left\ { { \cal h}_{[a , y]}([\rho ] ; \sigma(a ) , \sigma(y ) ) + { \cal h}_{[y , b]}([\rho ] ; \sigma(y ) , \sigma(b ) ) \right\ } \;,\ ] ] where again we refer the reader to for the details of the calculation .the fact that the intermediate reservoir inserted at the point always takes on a density that is equal to that of one of the boundary reservoirs implies that effective reservoir density is constant on one side of the point .further subdivision on that side then has no effect .on the other side the situation is the same as for the previous subdivision : on one side of the division , the density will be constant and equal to that of the appropriate boundary reservoir , whereas on the other further subdivisions will be required .the upshot of this is that the reservoir density function will be a step function , taking the value for and for for some value .this value will be such that the overall free energy is minimised , that is }([\rho ] ; \sigma(a ) , \sigma(b ) ) = -(b - a ) \ln j(\sigma(a ) , \sigma(b ) ) + { } \nonumber\\ \!\!\ !\min_{a\le y\le b } \bigg\{\int_a^y \rmd x \left [ \rho(x ) \ln ( \rho(x)[1-\sigma(a ) ] ) + ( 1 - \rho(x ) ) \ln [ ( 1 - \rho(x))\sigma(a ) ] \right ] + { } \nonumber\\ \hspace{3em } \int_y^b \rmd x \left [ \rho(x ) \ln ( \rho(x)[1-\sigma(b ) ] ) + ( 1 - \rho(x ) ) \ln [ ( 1 - \rho(x))\sigma(b ) ] \right ] \bigg\ } \;.\end{aligned}\ ] ] note that despite considerable cosmetic differences between this formula and ( [ free : asepd ] ) , the two expressions are in fact very similar : in both cases , one needs to find the set of reservoir densities that leads to an extremum of the same joint functional of and .the fact that in one case , the additivity principle involves a maximum and in another a minimum has its origins in the nature of the saddle - point which provides the relevant asymptotics .we also note that the bias parameter does not enter explicitly into these equations , only implicitly through the relationship between the boundary reservoir densities and , and the microscopic transition rates and . 
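The extremal-current prescription entering these formulas is simple to encode, and evaluating it for a few pairs of reservoir densities reproduces the mean-field phase structure discussed in section [asep] (low-density, high-density or maximal-current behaviour, depending on which density realises the extremum). A minimal sketch, with illustrative density pairs:

```python
import numpy as np

def mean_field_current(rho):
    return rho * (1.0 - rho)

def extremal_current(sig_left, sig_right, samples=10001):
    """J(sigma, sigma'): minimise the current over [sigma, sigma'] when sigma <= sigma',
    maximise it over [sigma', sigma] otherwise."""
    rho = np.linspace(min(sig_left, sig_right), max(sig_left, sig_right), samples)
    j = mean_field_current(rho)
    k = np.argmin(j) if sig_left <= sig_right else np.argmax(j)
    return j[k], rho[k]          # the current and the bulk density that realises it

for sl, sr in [(0.2, 0.1), (0.9, 0.8), (0.7, 0.3), (0.2, 0.9)]:
    print((sl, sr), extremal_current(sl, sr))
# (0.2, 0.1): bulk density 0.2, the left boundary controls (low-density behaviour)
# (0.9, 0.8): bulk density 0.8, the right boundary controls (high-density behaviour)
# (0.7, 0.3): bulk density 0.5, current 1/4 (maximal-current behaviour)
# (0.2, 0.9): bulk density 0.9, the smaller of the two boundary currents wins
```

The boundary reservoir densities themselves are fixed by the microscopic rates.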
specifically and so affects phase boundaries in the - plane , as was seen in section [ pasep ] .the -independence of the free energy functionals does not contradict the rich structure seen in the density profiles in section [ pasep ] , since the deviations from the bulk density at the boundaries vanish under rescaling in the limit .this lack of -dependence also implies that ( [ free : ssep ] ) is not obtained as : in other words , the limits and do not commute ( as we have already seen ) .the free energy functional in the weakly asymmetric limit has been computed explicitly using the matrix product approach and on this scale a ( or ) dependence is apparent .closer scrutiny of these free energy functionals for the pasep shows similar properties to those seen for the ssep above in subsection [ ssepprops ] .for example , one finds that the stationary profile ( here , a constant profile after rescaling in the thermodynamical limit ) is also the most likely profile , except along the boundary between the high- and low - density phases along which as has been discussed before there is a superposition of shocks with the shock location distributed uniformly across the system .then , any one of these shock profiles is found to minimise the free energy functional .again , one can compare the relative size of a fluctuation away from the most likely profile in the nonequilibrium system , and an equilibrium system coupled to a reservoir with a spatially - varying chemical potential .in the case where , it is found that ( as for the ssep ) , such fluctuations are suppressed ; however when the opposition between the boundary densities and the bulk bias results in these fluctuations being enhanced .a final interesting feature of the pasep is that density fluctuations in the maximal current phase are non - gaussian .evidence for this is provided by the existence of a discontinuity at a density of in the probability of witnessing a density in a box located somewhere in the bulk of the system . that the distribution is indeed non - gaussian is confirmed by explicit calculations for the totally asymmetric case ( ) that exploit the relationship to equilibrium surface models outlined in subsection [ surf ] . in this sectionwe have seen that once one has found a function that is additive when two subsystems are connected together via a reservoir with an appropriate density , the distribution of density profile macrostates can be constructed given the existence of a local equilibrium property .this is an appealing approach , and one hopes that it can be applied to a much wider range of systems than exclusion processes .the difficulty is , however , that one does not know _ a priori _ what form the additivity principle should take : here , we have relied on the complete knowledge of the underlying distribution of microstates to construct it .nevertheless , it is worth remarking that the free energy functional for the symmetric exclusion process ( [ free : ssep ] ) has been obtained in a purely macroscopic formalism . 
in this approachit is assumed that , in the combined thermodynamic and continuum limit , the macroscopic density profile evolves deterministically as a consequence of the law of large numbers .then , it can be shown that the probability of witnessing a deviation from the most likely profile in the steady state is given by a functional of the most likely trajectory through phase space that begins at time at stationarity and is constrained to reach at .so far we have seen exact matrix product solutions for the steady state of two classes of model systems with nonequilibrium dynamics : the open boundary asep and pasep various aspects of which were discussed in detail in sections [ asep ] , [ open ] , [ pasep ] and [ free]which had a single species of particles hopping on a one - dimensional lattice with open boundaries ; a two - species model with periodic boundary conditions , which we examined in section [ ring ] . in this sectionwe are going to search for -species exclusion models that can be solved using the matrix product approach . for clarity, we reiterate that in this work the number of species relates to the number of particle species , excluding vacancies .it is not known how to perform an exhaustive search of _ all _ possible matrix product steady states , but it is possible to search a restricted subset where the matrix products involved can be systematically _ reduced _ using expressions like ( [ int : de])([int : ew ] ) and ( [ ring : de])([ring : da ] ) . to this end , let us recall the proof of the reduction relations that applied for exclusion models on the ring geometry given in subsection [ rrrproof ] .this proof concerned stationary weights that were given by a trace of matrices where the variable indicates the species of particle occupying site ( denotes a vacancy ) and is the corresponding matrix whose forms is to be determined .we showed that when is the rate at which particles on sites and exchange places , if one can find auxiliary matrices such that is satisfied , the weights given by ( [ 2s : f ] ) are stationary .if we are to obtain matrix reduction relations , we require that the auxiliaries are scalars ( rather than matrices ) .we must also insist that these reduction relations describe an associative algebra , i.e. , that no matter what order the reduction relations are applied , one always ends up with the same sum of irreducible strings .the various ways in which this can be be achieved were formalised , classified and catalogued by isaev , pyatov and rittenberg . in this section , we briefly outline the classification scheme for the the case of two particle species , , and show what physical dynamics the various possibilities correspond to .we then move on to discuss geometries other than the ring . in this sectionwe will mostly use the notation established above , where vacancies are denoted by and particles by and .occasionally , it will be helpful to state model dynamics using a natural `` charge representation '' , where particles of species 1 and vacancies are relabelled as positive and negative charges and particles of species 2 as vacancies : note that our notation differs from that used by .we will also use a shorthand for the transition rates and , since we are assuming the auxiliary matrices are scalars , we will write them as .we first restrict ourselves to the case , and , i.e. 
there are exchanges in at least one direction between particles and vacancies and between the two species of particles .then , the relations ( [ 2s : m ] ) become it remains to check whether the relations ( [ sqa ] ) are consistent . the approach of isaev , pyatov and rittenberg ( see also that of karimipour ) is to generalise the reduction process leading to ( [ int : reord ] ) .that is , one seeks to use ( [ sqa ] ) to reduce any product of matrices to a sum of irreducible strings .first we fix the order of irreducible strings as .then we require that reducing an arbitrary product of the matrices to a sum of irreducible strings is independent of the order in which the reduction is carried out using the elementary rules ( [ sqa ] ) .reduction rules which satisfy this requirement are referred to by isaev , pyatov and rittenberg as _ pbw - type algebras _ .for the two - species case the requirement is that the two possible reductions of the product , illustrated schematically below , give equivalent results .( 17,4 ) ( 1.5,1.5) ( 4.5,2)(1,1)1 ( 4.5,1.4)(1,-1)1 ( 6,3) ( 6,0) ( 9,3.2)(1,0)1 ( 9,0.2)(1,0)1 ( 10.5,3) ( 10.5,0) ( 13.5,3)(1,-1)1 ( 13.5,0.4)(1,1)1 ( 15,1.5) calculating explicitly using ( [ sqa ] ) yields which results in the following six conditions on the hopping rates from the requirement that each of the six terms in the above equation vanish : the solutions of these equations were classified in . here we summarise the various nontrivial solutions which are physically relevant and the corresponding models which have been previously studied in the literature .the classification hinges on how many of the are zero .the first class of solutions has none of the .there are then two possible solutions to ( [ pbw1])([pbw6 ] ) . ** * solution ai** .physically , this corresponds to symmetric exclusion with two species which , although labelled distinctly , have identical dynamics .matrices are easily found by setting which reduces ( [ sqa ] ) to the single species condition . on the ringthis system has a simple steady state where all allowed configurations are equally likely . * * * solution aii** for and for .this corresponds to a model where the hop rates are totally asymmetric and of the form that is , since , a particle of species 1 is faster than a particle of species 2 and overtakes with rate .the corresponding matrix algebra is this solution of ( [ pbw1])([pbw6 ] ) was first noted and studied by karimipour and followed up in .as it happens , this model can be generalised to more than two species , and as such will be discussed in its full generality in section [ sec : karimipour ] .representations are given in appendix [ appreps ] equation ( [ karimrep ] ) .this second class of solutions has one of .the first two families of solutions within this class are obtained by taking , and then choosing the hop rates to satisfy ( [ pbw1 ] ) , ( [ pbw3 ] ) and ( [ pbw5 ] ) in two different ways . ** * solution bi** , , and with .this corresponds to a model with the hop rates or , in the charge representation , this model is thus an asymmetric generalisation of the second - class particle of section [ ring ] and was first used by for the special case , .the corresponding matrix algebra is a representation of this algebra is given in appendix [ appreps ] . 
* * solution bii * , , , and .the corresponding model has hop rates or , in the charge representation , this model is another generalisation of the second - class problem first studied with , for a single species two particle and later generalised in .the matrix algebra is this algebra can be mapped onto that used to solve the pasep with open boundaries , ( [ pasep : de])([pasep : ew ] ) , if one takes and , .thus one can make use of results from section [ pasep ] , along with techniques for models on the ring described in section [ ring ] to analyse various cases of this model . choosing or to be zero results in the same solution as bii under relabelling particles , and one additional solution when .this is * * * solution biii** , , , , and .the corresponding model has hop rates and the matrix algebra is however on the ring this results in a steady state in which all allowed configurations are equally likely : as can be seen from ( [ biiialg ] ) in any periodic string of matrices , all and can be eliminated through the first and third relations. the third class of solutions is obtained by taking two of the scalar quantities equal to zero . equations ( [ pbw4])([pbw6 ] ) and two of ( [ pbw1])([pbw3 ] ) are then automatically satisfied .the equation that remains has a structure that is independent of which of the are taken to be nonzero , so we take which leaves us to satisfy ( [ pbw3 ] ) .this can be done in two ways . * * solution ci * , , , and .this corresponds to a model in which all six exchanges may take place and that has the matrix algebra unfortunately this algebra is not useful in describing a physical system with periodic boundary conditions . to see thisone can take for example a representation where and which satisfies the first two relations of ( [ cialg ] ) .however the third relation of ( [ cialg ] ) has the form of a vanishing deformed commutator thus if we commute say a matrix all the way around the ring we will end up with the same matrix product multiplied by a factor where is the number of species 2 particles on the ring and .thus , only for ( ) can we use this algebra on a periodic system ; alternatively , it can be used for general on a closed segment as we discuss in section [ sec : closeg ] . * * solution cii * equation ( [ pbw3 ] ) can also be satisfied by taking and unrestricted choices for the remaining rates , , , .this has similar dynamics to the previous model , but with no exchange of species 1 and 2 the matrix algebra is similar to ( [ cialg ] ) but with the third relation absent .in contrast to the previous case , this algebra can be used to describe the above dynamics on a ring .this model and its generalisation to multiple species will be discussed further in section [ sec : multispecies ] . in this case, we take all which leaves all six hop rates free . the algebra is a set of deformed commutators explicit forms for , and are discussed in section [ sec : closeg ] . 
on a periodic system , as with case ci discussed above , we require that a matrix product is left invariant after commuting one of the matrices once around the ring .the condition for this is that ^{n_0 } \left[\frac{w_{21}}{w_{12}}\right]^{n_2 } = \left[\frac{w_{02}}{w_{20}}\right]^{n_0 } \left[\frac{w_{12}}{w_{21}}\right]^{n_1 } = \left[\frac{w_{10}}{w_{01}}\right]^{n_1 } \left[\frac{w_{20}}{w_{02}}\right]^{n_2 } = 1 \label{ddbcond}\ ] ] where is the number of particles of species .these conditions are satisfied for example by the abc model in the special case where all particle numbers are equal .the fact that the rhs of ( [ dalg ] ) are all zero implies that to use these algebraic relations , detailed balance must hold in the steady state .therefore the condition ( [ ddbcond ] ) is actually the condition for detailed balance to hold .the corresponding energy turns out to be an interesting long - range function . in the abc model ,the particles and vacancies are relabelled as and have dynamics where we take .this corresponds to the choice of rates and .note that the dynamics are invariant under cyclic permutation of the three particle types , and that particles tend to move to the left of particles , particles tend to move to the left of particles and particles tend to move to the left of particles . in the steady state of this modelthis results in a strong phase separation into three domains of , and in the order . in the limit with fixed , the domains are pure in that far away from the boundary of the domain the probability of finding a particle of a different species ( to that of the domain ) tends to zero .that is , particles from a neighbouring domain may penetrate only a finite distance . in the case where we have equal numbers of each species condition ( [ ddbcond ] )is satisfied and one can use the matrix product to calculate the steady state exactly .this model also exhibits an interesting phase transition in the weakly asymmetric limit where varies with system size as , thus approaching unity in the thermodynamic limit .then , according to the value of , the steady state can either order into three domains which are rich in and ( but are not pure domains ) or a disordered phase where the particles are typically in a disordered configuration . in the previous subsection, we summarised a classification of all the possible sets of dynamics for which a matrix product state with scalar auxiliaries ( i.e. , one that has a set of reduction relations ) exists with a unique decomposition into irreducible strings of matrices .we also assessed whether these matrix product states could be used for models on the ring .we now extend this enquiry to other one - dimensional geometries that can be constructed .we first consider open boundary conditions , i.e. , those where particles can enter and leave at the boundaries .the most general dynamics is to have rates at which a particle of species ( or a vacancy if ) is transformed into a particle of species at site and rates at which a particle of species is transformed into a particle of species ( or a vacancy of ) at site .if we are to have statistical weights of the form ( [ int : mp ] ) , i.e. , we then obtain additional conditions involving the matrices and vectors and .these may be written as where and . 
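Stepping back briefly to the abc model just described: its tendency to order into three pure domains is easy to see in a direct simulation, even on a small ring. The sketch below is our own illustration and assumes the standard abc rates, with the exchanges BA to AB, CB to BC and AC to CA occurring at rate 1 and their reverses at rate q < 1, together with equal numbers of the three species so that the detailed-balance condition above is met:

```python
import numpy as np

rng = np.random.default_rng(1)
q, L, sweeps = 0.2, 30, 20000
state = np.repeat([0, 1, 2], L // 3)          # equal numbers of A (=0), B (=1), C (=2)
rng.shuffle(state)

# favoured exchanges (rate 1): BA -> AB, CB -> BC, AC -> CA; the reverses occur at rate q
favoured = {(1, 0), (2, 1), (0, 2)}

for _ in range(sweeps * L):
    i = rng.integers(L)
    j = (i + 1) % L
    a, b = state[i], state[j]
    if a == b:
        continue
    rate = 1.0 if (a, b) in favoured else q
    if rng.random() < rate:                    # rates are at most 1, so they serve as probabilities
        state[i], state[j] = b, a

# typically three nearly pure blocks ...AAABBBCCC... appear (up to a rotation of the ring)
print(''.join('ABC'[s] for s in state))
```

We return now to the conditions that open boundaries impose on these algebras.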
since we require in the open boundary case , typically , in addition to the conditions for the solution classes of section [ ipr ] , one has further constraints on the boundary rates to allow conditions ( [ alphaij],[betaij ] ) to be satisfiedwe do not attempt to catalogue all solutions here , rather we point out which of the solution classes of section [ ipr ] may , in principle , be used in an open system and reference some examples .some general considerations of the open boundary conditions for which one has a matrix product state using the quadratic algebra ( [ sqa ] ) are discussed in the two species cases in . ** * class a**solution ai , which corresponds to symmetric exclusion of two differently labelled species with identical bulk dynamics , can be used in the open boundary case where the two species are injected and extracted with different rates , under certain conditions .solution aii can be combined with certain open boundary conditions that were found by karimipour .we shall specify these boundary interactions and discuss this model more thoroughly in section [ sec : multispecies ] . ** * class b**both solutions bi and bii can be used in models with open boundary conditions .one case that has been studied is the dynamics implied by solution bii with .this is sometimes referred to as the bridge model because one has two particle species with opposite velocities that slow down to exchange places when they meet , rather like cars on a narrow bridge .unfortunately , the full phase diagram of the bridge model and in particular an interesting broken symmetry region cannot be described by a quadratic algebra ( i.e. , one where the auxiliaries are scalars and reduction relations are implied on the steady - state weights ) .the bii case has also been studied . * * * class c**recall that class c comprises solutions where only one of the are nonzero .hence , it is not possible to satisfy that sum rule ( [ xsum ] ) , and hence the dynamics implied by these solutions can not be solved in an open system using a simple matrix product algebra . * * * class d**algebras within class d can only be used in models with open boundaries under the special conditions of detailed balance , which generally does not hold . on the other hand , as we discuss below , detailed balance _ does _ apply in conservative models with closed boundaries and thus such algebras are relevant there see below .a closed segment is a one - dimensional lattice of size with _ closed _ boundary conditions .that is , particles can not enter or leave at the left boundary or right boundaries ( sites 1 and ) . 
for the moment , let us consider conserving models where particles are neither created nor destroyed in the bulk : the only moves that are allowed are those where particles or vacancies on neighbouring sites exchange places .it is straightforward to show using a kolmogorov criterion ( [ free : kol ] ) that detailed balance is satisfied in the steady state of any such model .the reason for this is that in order to create a loop in configuration space , every exchange of a pair must be accompanied by an exchange in the opposite direction ( as long as the dynamics is not totally asymmetric ) .hence the forward loop contains the same set of exchanges as the reverse , and ( [ free : kol ] ) is satisfied .some quadratic algebras for closed segments and segments closed at one end and open at the other were discussed in .typical configuration in the steady state of the partially asymmetric exclusion process on a closed segment of sites with . ]the simplest example of this type is the partially asymmetric exclusion process ( pasep ) on the closed segment i.e. just one species of particle .the appropriate matrix algebra in this case is taking the configuration with the highest weight will be the configuration with all particles stacked up to the right end of the lattice as shown in figure [ fig : pasep - closed ] .the weight of any configuration can then be obtained by moving particles to the left from the maximal weight configuration and multiplying the weight by a factor at each move as implied by ( [ qdefcom ] ) .then the statistical weight of any configuration will be given by and the maximal weight is the steady state weight ( [ closegw ] ) is in fact a boltzmann weight with an energy function and the dynamics respects detailed balance with respect to this weight .let us now turn to class d models , where the matrix algebra takes the form of three deformed commutators ( [ dalg ] ) .an explicit form for the operators ,, involves tensor products of matrices which obey deformed commutation relations where so , for example , \otimes e(\omega_{21}/\omega_{12})\otimes d = 0 \;.\end{aligned}\ ] ] to obtain statistical weights here , we need to prescribe a suitable contraction operation i.e. appropriate boundary vectors . using the representation of the deformed commutator algebra in appendix [ appreps ] ( [ defcomrep ] )we see that if there are species 1 particles , species 2 particles and vacancies then will give a nonzero contraction for those configurations with the correct number of each species .in particular for the reference configuration the weight is in order to investigate the structure of shocks , derrida , lebowitz and speer studied a single second - class particle on an infinite system .the idea was to describe the stationary measure as seen from the second - class particle by a matrix product .the model considered was the partially asymmetric generalisation with dynamics taking the second - class particle to be at the origin and considering a window of sites to the left and sites to the right of the second - class the stationary probabilities ( note , not weights ) are written as a \left [ \prod _ { j=1}^n x_{\tau_j}\right ] { | v \rangle}\ ] ] where is once more the matrix corresponding to a second - class particle . in order for this form to hold for a window of arbitrary size i.e. all values of , one requires that where . 
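As an aside, the Boltzmann-like form of the closed-segment weights described above is easy to confirm by brute force. The sketch below builds the generator of the partially asymmetric exclusion process on a short closed segment and checks that the stationary probability of a configuration is proportional to (q/p) raised to the number of elementary left moves separating it from the right-packed configuration; here p and q denote the rightward and leftward hop rates, and the identification of the factor per move with q/p is an assumption consistent with detailed balance.

```python
import itertools
import numpy as np

N, M = 6, 3                  # sites and particles on the closed segment
p, q = 1.0, 0.3              # hop rates to the right and to the left

configs = [c for c in itertools.product((0, 1), repeat=N) if sum(c) == M]
idx = {c: i for i, c in enumerate(configs)}
G = np.zeros((len(configs), len(configs)))
for c in configs:
    for k in range(N - 1):
        if c[k] == 1 and c[k + 1] == 0:        # hop to the right at rate p
            d = c[:k] + (0, 1) + c[k + 2:]
            G[idx[d], idx[c]] += p; G[idx[c], idx[c]] -= p
        if c[k] == 0 and c[k + 1] == 1:        # hop to the left at rate q
            d = c[:k] + (1, 0) + c[k + 2:]
            G[idx[d], idx[c]] += q; G[idx[c], idx[c]] -= q

vals, vecs = np.linalg.eig(G)
P = np.real(vecs[:, np.argmin(np.abs(vals))]); P /= P.sum()

def left_moves(c):
    """Elementary left moves from the right-packed configuration:
    the total number of vacancies to the right of each particle."""
    return sum(c[i] * c[i + 1:].count(0) for i in range(N))

w = np.array([(q / p) ** left_moves(c) for c in configs])
print(np.allclose(P, w / w.sum()))   # True: weight (q/p)^{moves}, maximal for the right-packed state
```

Returning to the second-class particle construction on the infinite system: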
to ensure that the probabilities are correctly normalised we further require . the bulk dynamics falls within the solution class bi , and so the matrix algebra ( [ bialg ] ) should be used with and . this particular application has the unusual property that the choice of and has physical consequences . this is because and can not be scaled out of equations ( [ cw ] ) . the correct choice is made by insisting that the desired asymptotic densities far to the right of the second - class particle ( ) and far to the left ( ) are obtained . it turns out that one should take . choosing corresponds to a shock profile as seen from the second - class particle , with the second - class particle tracking the position of the shock . the structure of the shock was analysed ; in particular the decay to the asymptotic values is exponential . an interesting result is that the characteristic length of the decay becomes independent of the asymmetry when . to obtain a representation of the required matrices and vectors , one can use ( [ birep1])([birep0 ] ) given in appendix [ appreps ] for solution class bi . then , using ( [ cw ] ) , one can construct the boundary vectors to be eigenvectors with eigenvalue 1 . the expressions for the boundary vectors are , however , quite complicated in this representation . an alternative approach , used in , exploits an infinite - dimensional ( rather than semi - infinite ) representation . finite dimensional representations along special curves were studied in . in the foregoing , we have assumed that the bulk dynamics conserve particle numbers . in this section we show how the matrix algebras which were encountered can be adapted to models that have non - conservative dynamics . the basic idea is to augment the dynamics with additional processes that move between different sectors ( i.e. , configurations with a particular number of particles ) . these moves will be constructed in such a way that detailed balance between sectors is satisfied , even though detailed balance does not hold within a sector . as such , one can realise any prescribed distribution of sectors : we choose weights that correspond to the grand canonical ensemble that was introduced in section [ ring ] as a means to study systems with fixed particle numbers . we first discuss this method in detail , and then give two concrete examples . as we have just described , the aim is to construct some dynamics such that the statistical weight of a configuration with particles ( one a designated species ) on an -site ring is where are the matrices that give the steady state for some conservative process , and is a fugacity that controls the mean particle number .
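Perhaps the simplest toy instance of such a construction (our own illustration, not one of the two examples treated below) is the single-species totally asymmetric exclusion process on a ring supplemented by single-site creation and annihilation with rates in the ratio z : 1. Within a particle-number sector all ring configurations carry equal weight, so the sector-to-sector detailed-balance condition reduces to this ratio of rates, and the stationary weight of a configuration with m particles is z^m even though the hopping itself is irreversible:

```python
import itertools
import numpy as np

L, z, w = 5, 0.7, 0.3                        # ring size, fugacity, scale of the non-conserving rates
configs = list(itertools.product((0, 1), repeat=L))
idx = {c: i for i, c in enumerate(configs)}
G = np.zeros((2 ** L, 2 ** L))

def add(c, d, r):
    G[idx[d], idx[c]] += r; G[idx[c], idx[c]] -= r

for c in configs:
    for k in range(L):                       # totally asymmetric hops on the ring (no reverse moves)
        j = (k + 1) % L
        if c[k] == 1 and c[j] == 0:
            d = list(c); d[k], d[j] = 0, 1
            add(c, tuple(d), 1.0)
    for k in range(L):                       # creation at rate z*w, annihilation at rate w
        d = list(c); d[k] = 1 - c[k]
        add(c, tuple(d), z * w if c[k] == 0 else w)

vals, vecs = np.linalg.eig(G)
P = np.real(vecs[:, np.argmin(np.abs(vals))]); P /= P.sum()
expected = np.array([z ** sum(c) for c in configs]); expected /= expected.sum()
print(np.allclose(P, expected))              # True: weight z^m per configuration, by construction
```

The general construction proceeds along the same lines.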
let us denote transition rates for this conservative process as , and the additional rates that serve to increase or decrease by one as .the statistical weights must then satisfy the master equation + { } \nonumber\\ \fl\qquad \sum_{{\mathcal{c } } ' } \left [ f({\mathcal{c}}';m-1 ) w_{+}({\mathcal{c } } ' \to { \mathcal{c } } ) - f({\mathcal{c}};m ) w_{-}({\mathcal{c}}\to { \mathcal{c } } ' ) \right ] + { } \nonumber\\ \sum_{{\mathcal{c } } ' } \left [ f({\mathcal{c}}';m+1 ) w_{-}({\mathcal{c } } ' \to { \mathcal{c } } ) - f({\mathcal{c}};m ) w_{+}({\mathcal{c}}\to { \mathcal{c } } ' ) \right ] = 0 \;.\end{aligned}\ ] ] now , the first summation is what appears in the master equation for the conservative process , and since all the weights for fixed are proportional to , this first summation vanishes .the second and third summations can be made to vanish if we choose where and are the site occupation variables specifying to configurations and that have and particles respectively . as we can see, this takes the form of a detailed balance relation ( [ free : db ] ) .the two examples that follow serve to demonstrate .recall that in section [ ring ] we considered a version of the asep with a second - class particle , i.e. , a model with dynamics this falls within the family of solutions denoted bii above . in the analysis of this model we introduced a fugacity as a trick to expediate calculation of statistical weights , tuning its value to yield the desired mean particle density on the ring in the canonical ensemble .this ensemble can also be generated using physical dynamics as we have just described .this is achieved by noting that the algebra let count the number of positive charges ( species particles ) .we note that the algebra ( [ biialg ] ) implies hence from ( [ dbens ] ) , we see that the grand canonical ensemble is realised physically if we introduce two non - conserving processes where the rates satisfy we remark that and as free parameters that enforce a particular fugacity in the grand canonical ensemble generated by these non - conservative dynamics .this particular implementation of a model with non - conserving dynamics was used in to generate a grand canonical ensemble dynamically .a second non - conserving generalisation of the asep on a ring with a second - class particle is best described using the charge representation . for simplicity , we take the dynamics of bii with and and take .the case was studied in .thus , the conserving dynamics are to these dynamics we seek to add the following non - conserving moves in such a way that the bii matrix algebra ( [ biialg ] ) can be exploited . to achieve this , we require matrices such that ( [ dbens ] ) is satisfied for the non - conserving moves. that is , where is a fugacity counting the total number of positive and negative charges . to make contact with more familiar problems ,let us put , and .we see that the previous equation is satisfied if i.e. , we require to be a projector .this is achieved using the matrices for the second - class particle problem of section [ ring ] , ( [ ring : de][ring : a ] ) i.e. , where , and .the statistical weight of a configuration is then given by where is the number of vacancies in the configuration .in it was shown that an interesting phase transition arises as is varied .for there is a tendency to create positive and negative particles and this ultimately leads to a vanishing steady - state density of vacancies . 
on the other hand for is a tendency to eliminate positive and negative charges at the left and right boundaries respectively of domains of vacancies .thus domains of vacancies tend to grow and in the steady state this results in a finite density of vacancies .the transition is remarkable in that it is a phase transition in a periodic one dimensional system with local dynamics without an absorbing state . here, we shall use the generating function technique of section [ gfa ] to quickly obtain the asymptotics of the partition function and hence demonstrate the phase transition .first , notice from ( [ noncon2 ] ) that configurations with no vacancies are dynamically inaccessible .therefore all configurations have at least one vacancy and we write the partition function as = w { \langle w | } ( d + e + w a)^{n-1}{| v \rangle}\ ] ] ( an unimportant subtlety is that we have ignored the degeneracy factor in placing a given configuration of particles on the periodic lattice ; this factor is bounded from above by and hence does not contribute to the exponential part of which controls the macroscopic physics , as will be seen below . ) we introduce the generating function where we define .now , we may write where then using the generating function technique and ( [ znu ] ) we obtain the singularities of are a square root singularity at ( coming from ) and a pole at which only exists if . using the results of appendix [ gf ] one quickly obtains the asymptotic behaviour of due to the form ( [ wweight ] ) of the weights , , the density of vacancies , is given by where is the average number of vacancies in the system .therefore we obtain demonstrating the phase transition in the density of vacancies at .in the previous section we identified all possible two - species dynamics that have a steady state of matrix product form with a quadratic algebra i.e. obeying relations of the type given by ( [ sqa ] ) .we deferred the discussion of two families of dynamics , as both can be generalised in a straightforward way to arbitrarily many species of particles .we now examine these cases in more detail .further discussions of multi - species quadratic algebras are given in .we first consider a variant of the asep on the ring in which each particle has a different forward and reverse hop rate ( and there is no overtaking) .this is a generalisation of the two species model , case cii of section [ 2species ] .there are particles and vacancies on the ring of size .we give each of the particles an index , and introduce for each a pair of rates and so that the dynamics of particle are the corresponding matrix algebra is a generalisation of class cii of section [ 2species ] to particle species .it takes the form where we have adopted the notation and .one possible representation of these matrices is given in appendix [ appreps ] as equation ( [ dmuealg ] ) .however , it is simplest to perform calculations not with a representation , but directly from the algebra ( [ msalg1 ] ) . 
on a periodic systemlet the steady state weight of a configuration specified by where is the number of empty sites in front of particle , be .the matrix product form for is \label{msmatf1}\ ] ] using ( [ msalg1 ] ) for in ( [ msmatf1 ] ) gives the relation the procedure is continued using , in sequence , ( [ msalg1 ] ) with to obtain f_n ( n_1,\ldots , n_m ) = \left [ \sum_{i=0}^{m-1 } \frac{1}{p_{m - i } } \prod_{j = m+1-i}^{m } \frac{q_j}{p_j}\right ] f_{n-1 } ( n_1,\ldots , n_m-1)\ ] ] the effect of this manipulation has been to commute a hole initially in front of the particle backwards one full turn around the ring .the result is that the weight of a configuration of size is expressed as a multiple of the weight of a configuration of size with one hole fewer in front of particle . repeating the commutation procedure for a hole initially in front of particles implies that the weights are \\left[1-\prod_{k=1}^{m}\frac{q_{k}}{p_{k } } \right]^{-1 } \ ; \label{diss}\ ] ] and we can take .an interesting phenomenon that may occur in this system is that of condensation . in that casethe steady state is dominated by configurations where one particle has an extensive value for .this is easiest to understand when i.e. we consider only forward hops. then ( [ diss ] ) reduces to and we may write the weight ( [ bef ] ) clearly has the form of the weight of an ideal bose gas with . herethe ` bosons ' are vacancies and the bose states correspond to particles with the energy of the state determined by the hop rate of the particle .the equivalent of the density of states for the bose system is the distribution of particle hop rates which we denote .if there is a minimum hop rate this will correspond to the ground state of the bose system . then, if for low hop rates the distribution of particle hop rates follows bose condensation will occur for high enough vacancy density or equivalently _ low _ enough particle density . when condensation occurs the slowest particle ( the one with the minimum hop rate ) has an extensive number of vacancies in front of it whereas the rest of the particles will have gaps in front of them comprising some finite number of vacancies .thus a ` traffic jam ' forms behind the slowest particle . in is shown that the asep on a periodic system may be mapped onto a zero range process and the condensation transition is fully discussed in the context of the zero range process .recently , further progress has been made in understanding the case where by using extreme values statistics and renormalisation arguments .this is the generalisation of the two - species case aii , which had the two particle species hopping with different speeds , and the faster ones overtaking the slower .the many - species generalisation allows an arbitrary number of particle species labelled .a particle of species has a forward hop rate or ` velocity ' , and adjacent particles and exchange places with rate if , i.e. , the model provides an elegant generalisation of the open boundary tasep and its algebra .the generalisation of ( [ aiialg ] ) to the multispecies case is writing , and making the convenient choice , for yields on a periodic lattice the relations ( [ kalg ] ) are the only ones which need be satisfied and this can be done by choosing a scalar representation and where is chosen to be less than the lowest hop rate. thus on a ring all allowed configurations are equally likely .more interestingly , the relations ( [ msalg1 ] ) may be used for an open system with suitable boundary conditions . 
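Returning briefly to the condensation scenario sketched above, the criterion can be explored numerically. In the grand canonical treatment (as in the zero-range-process mapping referred to above) the gap in front of particle i is geometrically distributed with parameter z/p_i, so its mean is z/(p_i - z) and the fugacity z cannot exceed the smallest hop rate; the bulk particles can therefore hold only a bounded density of vacancies. A rough sketch, in which the hop-rate distribution and all parameter values are illustrative choices:

```python
import numpy as np

rng = np.random.default_rng(0)
M = 2000                                  # number of particles
gamma, c = 3.0, 0.5                       # hop-rate distribution f(p) ~ (p - c)^gamma on (c, 1]
p = c + (1 - c) * rng.random(M) ** (1.0 / (1.0 + gamma))

# the fugacity cannot exceed the smallest hop rate; the mean gap of particle i is z / (p_i - z)
slowest = np.argmin(p)
z_max = p[slowest]
bulk = np.delete(p, slowest)
rho_c = np.mean(z_max / (bulk - z_max))   # critical vacancies-per-particle ratio carried by the bulk
print(z_max, rho_c)
# a vacancy density above rho_c cannot be accommodated by the bulk particles:
# the surplus piles up in front of the slowest one, i.e. a traffic jam forms behind it
```

We now return to the hierarchical overtaking model and its open boundaries.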
as discussed in section [ sec : extop ] we require which with the above choice of and implies at the left hand boundary a particle of species is injected with rate if the first site is empty. the choice reduces ( [ alphaij ] ) to at the left hand boundary a particle of species leaves the system with rate .the choice ensures that explicit representations of the matrices can be constructed ( see appendix [ appreps ] ) and ( [ betaij ] ) becomes relations ( [ kalg ] ) , ( [ kb ] ) and ( [ ka ] ) provide an elegant generalisation of the usual asep relations ( [ int : de])([int : ew ] ) which are recovered when we have just one species of particle with hop rate . in general , an infinite dimensional representation of ( [ kalg ] )is needed to satisfy the boundary conditions ( [ kb ] ) ; see , for example , ( [ karimrep ] ) in appendix [ appreps ] .the fully disordered system is realised when each particle that enters has a velocity drawn from some distribution , with support ] the three phases ( low - density , high - density and maximal current ) of the open boundary asep remain although the phase boundaries depend on parameters such as ] the high - density phase is suppressed and only the low - density and maximal current phases exist .recall that in section [ algproof ] we set out a general cancellation scheme for matrices that guarantees a stationary solution of a master equation for a process on the ring .this involved a set of ordinary matrices and their auxiliaries . by restricting to the case of scalar auxiliaries, we showed , in the case of two particle species in section [ 2species ] , that it was possible to determine all possible sets of dynamics that give rise to a matrix product steady state . in this section, we shall consider the more complicated case where the auxiliaries are matrices or more complicated operators .here it is not possible as far as we are aware to perform an exhaustive search for solutions .however , it has been shown that if one has a one - dimensional lattice with particle species , open boundary conditions and arbitrary nearest - neighbour dynamics in the bulk , there does exist a matrix product solution , involving auxiliaries that in general will not be scalars , obeying the algebraic relations set out in section [ algproof ] . in the next subsection we review this existence proof which is due to krebs and sandow .unfortunately , this proof does not lead to any convenient reduction relations , like ( [ int : de])([int : ew ] ) for the asep , that might be in operation , also , the proof is not constructive in that it requires the steady state to already be known in order to construct explicit matrices .rather , the proof demonstrates that there are no internal inconsistencies in the cancellation mechanism of section [ algproof ] . in the last two subsections we discuss two models of which we are aware that give examples of a matrix product state with matrix auxiliaries .the existence proof of a matrix product state ( see also for the discrete time case ) applies to models with open boundary conditions and arbitrary nearest - neighbour interactions .let us restate the master equation for a general process in the form introduced in section [ algproof ] .it reads where the operator applied to the function of state variables generates the gain and loss terms arising from interactions between neighbouring pairs of sites in the bulk , and and do the same at the boundaries . 
herewe shall consider models that have distinct particle species in addition to vacancies : i.e. , the occupation variables take the values .the matrix product expressions provide the steady state solution to ( [ formal : me ] ) for some set of matrices and vectors and if one can find a second set of auxiliary matrices such that the relations \cdots { | v \rangle } \\ \label{formal : tilde2 } \hat{h}_l { \langle w | } x_{\tau_1 } \cdots { | v \rangle } = - { \langle w | } \tilde{x}_{\tau_1 } \cdots { | v \rangle } \\\hat{h}_r { \langle w | } \cdots x_{\tau_n } { | v \rangle } = { \langle w | } \cdots \tilde{x}_{\tau_n } { | v \rangle } \label{formal : tilde3}\end{aligned}\ ] ] hold .then , one obtains a zero right - hand side of the master equation ( [ formal : me ] ) via a pairwise cancellation of terms coming from ( [ formal : mp])([formal : tilde3 ] ) .thus , relations ( [ int : h3][int : hr3 ] ) for the asep generalise to relations ( [ ft1][ft2 ] ) give the general form for a cancellation mechanism involving matrix ( or possibly tensor ) auxiliaries .such a cancellation scheme has been proposed in various contexts including a generalisation to longer ( but finite ) range interactions .some consequences of this scheme have been explored in . the existence of the matrices and and vectors and appearing in ( [ ft1][ft2 ] ) is proved by constructing them within an explicit representation .this representation has basis vectors that correspond to configurations of the rightmost sites of the -site lattice .these we denote as .the vector is then ascribed the role of a vacuum state , and the matrices that of creation operators in such a way that one then defines the vector via the scalar products so that ( [ formal : mp ] ) gives the desired statistical weights . the auxiliary matrices are defined as where the operators and are extended to the full space of all sub - configurations as follows .first , the bulk operator is defined in terms of the microscopic transition rates as when and , and under the same condition that .all other elements of these operators are zero .similarly , at the right boundary we have when , and with these definitions established , the relations ( [ formal : tilde1])([formal : tilde3 ] ) follow after some straightforward manipulations .let us take stock of this construction .we have seen that not only does the existence of a set of vectors and matrices that satisfy ( [ formal : tilde1])([formal : tilde3 ] ) imply a stationary solution of matrix product form of the master equation ( [ formal : me ] ) , but also that such a set of vectors and matrices can always be constructed _ once _one knows the weights ( [ formal : sp ] ) . as we previously saw in section [ pasep ] for the pasep, one can find a number of different representations of the matrices and ( which correspond to and in the more general setting ) even when the auxiliary matrices and are the same . in the representation detailed above the auxiliaries are not scalars but rather complicated objects . 
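As an aside, the contrast with the tractable cases is worth making concrete: when convenient reduction relations do exist, the stationary weights follow from pure algebra, with no explicit representation needed at all. The sketch below does this for the open-boundary totally asymmetric case, assuming the familiar relations DE = D + E, D|V> = |V>/beta and <W|E = <W|/alpha (the single-species specialisation of the relations referred to throughout), and then checks stationarity against the master equation for a four-site lattice:

```python
import itertools
from functools import lru_cache

alpha, beta = 0.6, 0.4

@lru_cache(maxsize=None)
def weight(word):
    """<W| word |V> using DE = D + E, D|V> = |V>/beta, <W|E = <W|/alpha and <W|V> = 1."""
    i = word.find('DE')
    if i >= 0:                               # reduce the leftmost DE pair
        return weight(word[:i] + 'D' + word[i + 2:]) + weight(word[:i] + 'E' + word[i + 2:])
    # no DE pair left: the word has the ordered form E...ED...D
    return (1.0 / alpha) ** word.count('E') * (1.0 / beta) ** word.count('D')

N = 4
configs = list(itertools.product('ED', repeat=N))       # E = empty site, D = occupied site
Z = sum(weight(''.join(c)) for c in configs)
prob = {c: weight(''.join(c)) / Z for c in configs}

def rate(c_from, c_to):
    """Open-boundary TASEP rates: enter at alpha, hop right at 1, exit at beta."""
    diff = [k for k in range(N) if c_from[k] != c_to[k]]
    if diff == [0] and c_from[0] == 'E':
        return alpha
    if diff == [N - 1] and c_from[N - 1] == 'D':
        return beta
    if len(diff) == 2 and diff[1] == diff[0] + 1 and c_from[diff[0]] == 'D' and c_from[diff[1]] == 'E':
        return 1.0
    return 0.0

for c in configs:
    gain = sum(prob[d] * rate(d, c) for d in configs)
    loss = prob[c] * sum(rate(c, d) for d in configs)
    assert abs(gain - loss) < 1e-12
print("matrix-product weights are stationary")
```

No such shortcut is guaranteed by the general construction above.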
hence we see that ( i ) there may be many choices of both the matrices and their auxiliaries that correspond to the stationary solution of a single master equation ; ( ii ) matrix reduction relations , like those found for the asep ( [ int : de])([int : ew ] ) , constitute only sufficient conditions on , and , since valid representations where these relations do not hold can be found ( that used in this section provides an example ) ; and ( iii ) the construction of the auxiliary matrices through the generators of the stochastic process in ( [ formal : xtilde ] ) does not necessarily imply the existence of any convenient reduction relations for the matrices that allow , for example , efficient computation of statistical properties . as we saw in section [ ring ] the matrix product state onthe ring involves using a trace operation .the algebraic relations to be satisfied are given in the general case by ( [ ft1 ] ) .however , as we saw in section [ 2species ] , sometimes it happens that , although the algebraic relations are consistent , the rotational invariance of the trace operation leads to global constraints on particle numbers .for example , in the abc model discussed in [ sec : abc ] we found that the matrix product state was consistent only for models in which the numbers of each particle species was the same .it is essentially this property of models with periodic boundary conditions which has precluded the development of an existence proof for a matrix product state parallel to that given above .these difficulties are discussed in more detail in .we now discuss a specific reaction - diffusion model whose steady state can be represented in matrix product form , but where the auxiliaries are matrices .this model contains one species of particle and the stochastic processes are diffusion , coagulation and decoagulation .these last two updates involve the annihilation or creation of a particle adjacent to another particle .if the bulk rates are chosen in the following way then for suitable boundary conditions the steady state may be written in matrix product form . originally in a closed segment was considered .then , one requires the conditions to be satisfied come from the bulk algebra a four - dimensional representation of the matrices was found d & = & \left ( \begin{array}{cccc } 0 & 0 & 0&0\\ 0&\delta/(\delta + 1 ) & \delta/(\delta+1)&0\\ 0&0 & \delta & 0\\ 0&0&0 & 0 \end{array } \hspace{0.1 in } \right)\ , , \\[0.5ex ] \hat{e } & = & \left ( \begin{array}{cccc } 0 & 0 & q^{-1 } & -(q - q^{-1})^{-1}\\ 0&0 & ( q - q^{-1 } ) & -q\\ 0&0 & \delta ( q - q^{-1 } ) & -\delta q\\ 0&0&0&0 \end{array } \hspace{0.1 in } \right)\ , , \\[0.5ex ] \hat{d } & = & \left ( \begin{array}{cccc } 0 & -\delta q^{-1 } & 0 & 0\\ 0 & -\delta(q - q^{-1 } ) & 0 & 0\\ 0&0 & -\delta ( q - q^{-1 } ) & \delta q\\ 0&0&0 & 0 \end{array } \hspace{0.1 in } \right)\ , , \end{aligned}\ ] ] with boundary vectors where so that .however there is a subtlety with this model on a closed segment which is that the empty lattice is dynamically inaccessible and itself comprises an inactive steady state .if we wish to exclude this configuration we should choose and so that vanishes .this can be accomplished but in doing so renders and -dependent .a first order phase transition occurs at . 
for system is in a low - density phase whereas for the system is in a high - density phase .within the matrix product form of the steady state the transition can be understood as the crossing of eigenvalues of .this may occur here not because the matrices are of infinite dimension but because they have negative elements ( cf .the transfer matrices that arise in equilibrium statistical mechanics , as described in section [ ising ] ) .various other non - conserving models with finite - dimensional matrix product states have been identified .for example , the bulk dynamics just described has also been studied in the case of an open left boundary , where particles enter with rate and leave with rate and a closed right boundary by jafarpour .then , one requires and then if one can find two - dimensional representations of the algebra .further generalisations of this class of model have been shown to have finite - dimensional matrix product states . a general systematic approach to determine a necessary condition for the existence of a finite - dimensional matrix product statehas been put forward by hieida and sasamoto in which more examples are given .second - class particles were discussed in section [ sscp ] and the matrix product solution to the second - class particle problem was detailed in section [ ring ] .the matrix algebras for two - species generalisations of the second - class particle arose in the bi and bii solution classes of the classification scheme of section [ 2species ] .we now consider a natural generalisation of the second - class particle to many classes of particle , labelled with dynamics in this case it turns out one requires more complicated operators and auxiliaries than matrices . for , the case of three particle classes ,the algebraic relations to be satisfied , ( [ ft1 ] ) , become and the two - class problem , where the last lines of ( [ 3sp ] ) are absent , can be solved ( as we have seen in section [ rrrproof ] ) with scalar choices , and .it turns out that three - class problem is not a simple generalisation of the two - class case , but instead a much more complication solution is needed .matrices which satisfy ( [ 3sp1],[3sp ] ) have been given as the objects and that appear as elements of these infinite - dimensional matrices are themselves infinite dimensional matrices and which satisfy the familiar relation . in other words the are rank four tensors , as indeed are the auxiliaries in this representation .these were also given by , and take the form the tensor , or matrix within matrix , form of ( [ mat ] ) is closely related to that of the operators used to determine the steady state current fluctuations in an open system .in , some algebraic relations involving _ only _ the matrices ( i.e. , not the auxiliaries ) were quoted .these included relations which had triples and quartets of matrices reducing to pairs , and others which transformed triples and pairs to other triples and pairs .as noted in , the relations given comprise only a subset of those that would actually be required to perform a complete reduction of any string of matrices to an irreducible form .hence , a concise statement of the steady state of this system , like that encoded by the reduction relations ( [ int : de])([int : ew ] ) for the asep , is not easy . 
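for very small systems the stationary weights of the multi-class model can be obtained by brute force from the master equation, which provides a concrete target against which any proposed tensor representation or set of reduction relations can be checked. the sketch below assumes the usual convention that across a bond a pair (a, b) with a < b exchanges to (b, a) at rate 1, with the largest label playing the role of a hole; the ring size and particle content are illustrative.

```python
import itertools
import numpy as np

# ring of N sites; labels 1 < 2 < ... are particle classes and the largest
# label plays the role of a hole
N = 5
content = (1, 1, 2, 3, 3)          # two first-class, one second-class, two holes

states = sorted(set(itertools.permutations(content)))
index = {s: i for i, s in enumerate(states)}
Q = np.zeros((len(states), len(states)))

for s in states:
    for k in range(N):
        a, b = s[k], s[(k + 1) % N]
        if a < b:                               # allowed exchange at rate 1
            t = list(s); t[k], t[(k + 1) % N] = b, a
            Q[index[tuple(t)], index[s]] += 1.0
            Q[index[s], index[s]] -= 1.0

A = Q.copy(); A[-1, :] = 1.0                    # impose normalisation
p = np.linalg.solve(A, np.eye(len(states))[-1])

for s, prob in sorted(zip(states, p), key=lambda item: -item[1])[:6]:
    print(s, round(prob / p.min(), 3))          # weights relative to the least likely state
```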
as such ,no calculations beyond simple cases have yet been attempted for the stationary properties of this model .however recently , generalising a construction by angel for the two - class problem , ferrari and martin have shown how to generate the steady state of the multi - class problem .this construction can then be used to determine operators and auxiliaries to solve the steady state for more than three classes .so far we have reviewed interacting particle systems involving continuous time , or equivalently , random sequential dynamics as discussed in section [ asep ] .however , this is not necessarily the most natural choice of dynamics with which to model some physical systems of interest . for example , in traffic flow and pedestrian modelling it is desirable that the microscopic constituents are able to move simultaneously this often originates from the existence of a smallest relevant timescale , e.g. reaction times in traffic .therefore , in these systems , parallel dynamics in which all particles are updated in one discrete timestep , may be used . in this sectionwe consider the asep under three types of discrete - time dynamics : sublattice , ordered sequential and fully parallel . in all these discrete - time casesthe steady state weights are given by the eigenvector with eigenvalue one of the transfer matrix of the dynamics , which we write in a schematic notation is the transfer matrix applied to the weight generates a sum of terms , each being the weight of a configuration multiplied by the transition _ probability _ in one discrete timestep from that configuration to . in sublattice updating the timestepis split into two halves : in the first half site 1 , the even bonds , and site are updated simultaneously ; in the second half time step the odd bonds , are updated simultaneously .note that we require the total number of sites to be even . in figure[ fig : sublattice ] we show how this type of updating is applied to partially asymmetric exclusion dynamics with open boundary conditions . in an update of site 1 ,if site 1 is empty a particle enters with probability . in an update of a bond ,the two possible transitions are : if site is occupied and site is empty the particle moves forward with probability , or else if site is occupied and site is empty the particle moves backward with probability . in an update of site , if site is occupied the particle leaves the system with probability .we note that this two - step process avoids the possibility of a conflict occurring ( e.g. , two particles attempting to hop into the same site simultaneously ) . typical configuration in the steady state of the partially asymmetric exclusion process on a closed segment of sites with .] the asep with sublattice dynamics was first studied in the case of deterministic bulk dynamics ( , ) by schtz .a matrix product solution for this case was found by hinrichsen and a matrix product solution for the general case of stochastic bulk dynamics was found by rajewsky et al .we outline this general case here .one proceeds by constructing the transfer matrix for the whole timestep as a product of operators for each half - timestep where when acting on generate the weights of configurations from which could have been reached by a transition associated with that bond or boundary site , multiplied by the transition probability. 
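the sublattice update rules just described are easy to simulate directly, which is often the quickest independent check on expressions obtained from the matrix product state. in the sketch below the bonds updated within each half-step are disjoint, so updating them in a loop is equivalent to a simultaneous update; the values of p, q, alpha and beta are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
L = 50                       # number of sites (even)
p, q = 0.6, 0.1              # forward / backward hop probabilities per half-step
alpha, beta = 0.5, 0.5       # entry / exit probabilities

def update_bond(s, i, j):
    if s[i] == 1 and s[j] == 0 and rng.random() < p:
        s[i], s[j] = 0, 1
    elif s[i] == 0 and s[j] == 1 and rng.random() < q:
        s[i], s[j] = 1, 0

s = np.zeros(L, dtype=int)
T_relax, T_meas = 2000, 20000
profile = np.zeros(L)
for t in range(T_relax + T_meas):
    # first half-step: site 1, the even bonds (2,3), (4,5), ..., and site L
    if s[0] == 0 and rng.random() < alpha:
        s[0] = 1
    for i in range(1, L - 2, 2):
        update_bond(s, i, i + 1)
    if s[-1] == 1 and rng.random() < beta:
        s[-1] = 0
    # second half-step: the odd bonds (1,2), (3,4), ..., (L-1,L)
    for i in range(0, L - 1, 2):
        update_bond(s, i, i + 1)
    if t >= T_relax:
        profile += s

rho = profile / T_meas
print("bulk density :", round(rho[L // 4: 3 * L // 4].mean(), 3))
print("near boundaries :", np.round(rho[:3], 3), np.round(rho[-3:], 3))
```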
we may write \hat{l } = \left ( \begin{array}{cc } 1-\alpha&0\\ \alpha & 1 \end{array } \right)\qquad \hat{r } = \left ( \begin{array}{cc } 1&\beta\\ 0 & 1-\beta \end{array } \right)\label{lrrep}\end{aligned}\ ] ] where the basis for is ,,, and for and is , .so , for example , the matrix product state for sublattice updating takes the form in which , for odd , is a matrix if site is occupied by a particle ( ) or a matrix otherwise ; and for , even , is a matrix if site is occupied by a particle or a matrix otherwise .note that different matrices , hatted and unhatted , are used for the odd and even sublattices .the cancellation mechanism is as follows \;=\ ; \hat{x}_{\tau_i } x_{\tau_{i+1 } } \label{slcancel } \\\langle w| \hat{l } \hat{x}_{\tau_1 } \;=\ ; \langle w| x_{\tau_1}\qquad \hat{r } x_{\tau_{2l } } |v\rangle\;=\ ; \hat{x}_{\tau_{2l } } |v\rangle \ , .\label{slbccancel}\end{aligned}\ ] ] note that the action of the first half time step transfer matrix is to put hatted matrices at the even sites and unhatted matrices on the odd sites , then the action of is to restore them to their original sites .thus , the matrix product state ( [ slmp ] ) is indeed an eigenvector with eigenvalue one of the full transfer matrix ( [ fulltm ] ) . using the form ( [ trep],[lrrep ] ) of the operators ,the mechanism ( [ slcancel ] , [ slbccancel ] ) implies the following algebraic relations & = & [ d,\hat{d } ] \;=\ ; 0 \nonumber \\ \hat{e}d & = & ( 1-q)e\hat{d } + p d \hat{e } \\\hat{d}e & = & ( 1-p ) d \hat{e } + q e \hat{d } \nonumber\end{aligned}\ ] ] and the boundary conditions \langle w| ( \alpha \hat{e}+\hat{d } ) \;=\ ; \langle w| d \end{array } \qquad\begin{array}{c } ( 1-\beta)d|v\rangle \;=\ ; \hat{d}|v\rangle \\[1 mm ] ( e+\beta d)|v\rangle \;=\ ; \hat{e}|v\rangle \end{array}\,.\ ] ] remarkably , the same matrices and vectors as in the random sequential case can be used to solve these relations . to see this we make the ansatz where is a scalar to be fixed , and we find that ( [ spbulk ] , [ spbc ] ) reduce to \label{rs1}\\ d|v\rangle \;=\ ; \frac{\lambda}{\beta}|v\rangle \quad ; \quad \langle w |e \; = \ ; \langle w |\frac{\lambda ( 1-\alpha)}{\alpha}\;. \label{rs2}\end{aligned}\ ] ] finally we may set and define which then satisfy thus , we have rewritten ( [ rs1],[rs2 ] ) in terms of matrices obeying the usual algebra for the pasep ( [ pasep : de])([pasep : ew ] ) with redefined and . 
from this rewriting , the phase diagram can , in principle , be deduced using the general results of .another discrete - time updating scheme is to update each site in a fixed sequence in each time step .two particularly obvious choices of sequence are as follows .[ [ forward - ordered - update ] ] forward - ordered update + + + + + + + + + + + + + + + + + + + + + + here , the timestep begins by updating site 1 wherein if site 1 is empty a particle enters with probability .next , the bond is updated such that if site 1 is occupied and site 2 is empty the particle moves forward with probability , or else if site 2 is occupied and site 1 is empty the particle moves backward with probability .then the bonds for are similarly updated in order .the time step concludes with site being updated wherein if site is occupied the particle leaves the system with probability .note that in the forward sequence it is possible for a particle to move several steps forward in one timestep .[ [ backward - ordered - update ] ] backward - ordered update + + + + + + + + + + + + + + + + + + + + + + + in this case , the time step begins by updating site , then bonds to in backward sequence and finally site , where all the individual updates follow the usual rules , as described above .note that due to particle - hole symmetry the dynamics of the vacancies in the backward order is the same as the updating of particles in the forward order .therefore there is a symmetry between the steady states for the forward and backward orders . let us focus on the backward - ordered dynamics .we construct the transfer matrix for the whole timestep as a product of operators for each update of the sequence where are as in ( [ trep],[lrrep ] ) .the matrix product solution is of the usual form the cancellation mechanism is precisely the same as for the sublattice parallel case ( [ slcancel ] , [ slbccancel ] ) . in the ordered casethe hat matrices do not appear in the steady state weights at the end of the time step , rather they are auxiliary matrices which appear during the update procedure and move from right to left through the lattice .we conclude that backward ordered updating has the exact same phase diagram as sublattice updating , although exact expressions for correlation functions do depend on the details of the updating .finally the phase diagram for forward ordered updating can be obtained from the particle - hole symmetry mentioned above .current and density profiles have been calculated in . in parallel dynamics ( sometimes referred to as fully parallel to distinguish from the sublattice updating ) _all _ bonds and boundary sites are updated simultaneously .this dynamics is considered the most natural for modelling traffic flow . in an update at the bond , if site is occupied and site is empty the particle moves forward with probability . under this type of updating scheme one can not include reverse hopping without introducing the possibility of conflicts occuring , or of a particle hopping to two places at once .also note that it is the occupancies at the beginning of the timestep which determine the dynamical events . 
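a minimal simulation of fully parallel dynamics on a ring brings out its traffic-flow character. for the ring geometry the stationary current is known in closed form from the literature on the v_max = 1 nagel-schreckenberg model, j(rho) = (1/2)[1 - sqrt(1 - 4 p rho (1 - rho))]; quoting that formula is our addition for comparison purposes, and the system size and p below are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)
L, p = 200, 0.5              # ring size and hop probability (illustrative)

def parallel_current(density, sweeps=4000, relax=1000):
    s = np.zeros(L, dtype=int)
    s[:int(density * L)] = 1
    rng.shuffle(s)
    hops, counted = 0, 0
    for t in range(sweeps):
        movable = (s == 1) & (np.roll(s, -1) == 0)   # decided from occupancies at the start of the step
        move = movable & (rng.random(L) < p)
        s[move] = 0
        s[np.roll(move, 1)] = 1
        if t >= relax:
            hops += move.sum(); counted += 1
    return hops / (counted * L)

for rho in (0.1, 0.3, 0.5):
    exact = 0.5 * (1.0 - np.sqrt(1.0 - 4.0 * p * rho * (1.0 - rho)))
    print(rho, round(parallel_current(rho), 3), round(exact, 3))
```

the simulated and quoted currents should agree to within a few times 1e-3 for these run lengths.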
in this casethe matrix product solution is of the usual form ( [ mss ] ) but the algebraic relations have a more complicated structure which was first elucidated in .there it was found that recursion relations between systems of different sizes were higher than first order : for example the relation which relates the weights of size to those of size and size , was discovered to hold .this in turn implies that algebraic relations between the operators are _ quartic _ rather than quadratic .for example the rules in the bulk are and there are other rules which we do not quote here for matrices near to the boundary ( see ) .these rules were proved using a domain approach related to , but more complicated than , that presented in in section [ rrproof ] .subsequently , an algebraic proof similar in spirit to the cancellation mechanism for the ordered sequential updating case was found but it is still too involved to present here . the quartic algebra , ( [ bulk1])([bulk4 ] ) , as well as the other conditions mentioned above , can be reduced to quadratic rules by making a convenient choice for the operators involved .the trick is to write where , , , and are matrices of , in general , infinite dimension ; that is , and are written as rank four tensors with two indices of ( possibly ) infinite dimension and the other two indices of dimension two .correspondingly , we write and in the form where , , , and are vectors of the same dimension as and . , , , and satisfy the quadratic relations \ ; , \label{d1e1con}\]] and , , , and satisfy , \label{e2d2con}\]] and satisfying ( [ d1e1con],[d1v1 ] ) are presented in . in addition , one can choose , , , so that ( [ e2d2con ] ) , ( [ e2v2 ] ) reduce to ( [ d1e1con ] ) , ( [ d1v1 ] ) . along the curve scalar representations of , , and can be found and are operators . the continuous time limit is recovered by setting , replacing , and letting the time step . then ( [ d1e1con],[d1v1 ] ) reduce to the usual asep quadratic algebra ( [ int : de])([int : ew ] ) . the phase diagram is presented in figure [ parpd ] .it appears similar to the familiar continuous time phase diagram except that the transition lines from low density to maximal current and high density to maximal current are at and respectively . 
in the continuous timelimit described above one recovers and .also , in the limit of deterministic bulk dynamics , the maximal current phase disappears from the phase diagram as is also the case in other updating schemes in this limit .[ parpd ] , .mc is the maximum current phase , ld and hd are the low and high - density phases , respectively .the straight dashed lines are the boundaries between the low density and maximal current phases and the high density and maximal current phase at and respectively .the curved dashed line is the line given by ( [ specialline ] ) and intersects the line at ]in this work we have reviewed the physical properties of systems that are driven out of equilibrium , with particular reference to those one - dimensional models that can be solved exactly using a matrix product method .the most prominent physical phenomena exhibited by these systems are phase transitions which in open systems are induced by changes in the interactions with the boundaries , and in periodic systems by the addition of particle species that have different dynamics .additionally we have seen in a number of cases the formation of shocks in these one - dimensional systems .as we have also discussed , the matrix - product approach can be used to determine the steady - state statistics of models with a variety of microscopic dynamics .the majority of models that have matrix product states involve diffusing particles with hard - core interactions and whose numbers are conserved , except possibly at boundary sites . under such conditions , certain generalisations to multi - species models have been found .furthermore , there are a few cases in which non - conserving particle reactions can occur that also admit a convenient solution in terms of matrix products .however , it has to be said that the dream scenario , of being able to systematically construct any nonequilibrium steady state in matrix product form starting from the microscopic dynamics of the system , appears a remote goal .although the existence proofs discussed in section [ formal ] tell us that a matrix product formulation should nearly always be possible , we are in practice still feeling our way with particular examples .there remain a number of simple physical systems that exhibit nontrivial out - of - equilibrium behaviour , but that have nevertheless been unyielding to exact solution by matrix products , or any other means .it is perhaps appropriate here to highlight some of these cases as challenges for future research .as was discussed in section [ pwdis ] , it is possible to solve for the steady state of the asep on the ring when particles have different hop rates , as would occur when each particle is randomly assigned a hop rate from some distribution . 
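a quick way to build intuition for particle-wise disorder is to simulate it: each particle on a ring carries its own quenched hop rate, and because overtaking is impossible the particle directly ahead of particle i is always particle i+1, so headways are easy to track. in the sketch below (rates, density and system size are illustrative assumptions) a single very slow particle accumulates a large empty stretch ahead of it, the platoon effect behind the slowest particle referred to in section [ bec ].

```python
import numpy as np

rng = np.random.default_rng(2)
L, M = 120, 40                                  # ring size and number of particles
rates = rng.uniform(0.3, 1.0, size=M)           # quenched hop rate carried by each particle
rates[0] = 0.05                                 # particle 0 is much slower than the rest
pos = np.sort(rng.choice(L, size=M, replace=False))
occ = np.zeros(L, dtype=bool); occ[pos] = True

gap_sum = np.zeros(M); samples = 0
for step in range(600_000):
    i = rng.integers(M)
    if rng.random() < rates[i]:                 # rejection method handles the unequal rates
        nxt = (pos[i] + 1) % L
        if not occ[nxt]:
            occ[pos[i]] = False; occ[nxt] = True; pos[i] = nxt
    if step > 200_000 and step % 500 == 0:
        gap_sum += (pos[(np.arange(M) + 1) % M] - pos - 1) % L   # headway ahead of each particle
        samples += 1

gaps = gap_sum / samples
print("mean gap ahead of the slowest particle :", round(gaps[0], 1))
print("mean gap ahead of the other particles  :", round(gaps[1:].mean(), 1))
```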
as far as we aware, the corresponding dynamics with open boundaries has not been solved apart from the karimipour s model of overtaking which we reviewed in section [ sec : karimipour ] .furthermore , if the disordered hop rates are associated with sites , rather than particles , even the model on the ring has so far eluded a complete exact solution .indeed , even if only a single site has a different hop rate to the rest , the steady state does not appear to have a manageable form .on the other hand , some nontrivial symmetries in the disordered case have been identified .one phenomenon that can emerge when disorder is present is phase separation .one sees this most clearly for the periodic system with particle - wise disorder , where as we noted in section [ bec ] , a condensation of particles behind the slowest may occur . with site - wise disorder , a flattening of the macroscopic current - density relation seen ( recall that it is parabolic in the asep without disorder ) , and the densities at which transitions into a maximal current phase are correspondingly lowered ; it has been suggested that a linear portion in the curve may also provide a mechanism for phase separation .another interesting observation is that under site - wise disorder the location of the first - order transition for the asep becomes sample - dependent ( i.e. , the free - energy - like quantity for the nonequilibrium system is not self - averaging ) . in all the models described in this work ,particles occupy a single site of a lattice . if one has extended objects on a ring , the dynamics are unaffected . with open boundaries , however , the situation changes and even if there is a single particle species and all particles are the same length , a matrix product solution for the steady state has proved elusive .although it is quite easy to arrange for cancellation of terms in the master equation corresponding to particles joining and leaving domains ( see section [ rrproof ] ) , the necessary cancellation with boundary terms is much harder to arrange : it is not even clear what the most convenient choice of boundary conditions should be ( e.g. 
, extended particles sliding on and off site - by - site , or appearing and disappearing in their entirety , or a mixture of the two ) .a phenomenological diffusion equation for the macroscopic density profile in the open system with extended objects appearing and disappearing at both boundaries has been presented .furthermore , it was conjectured in that work to be the equation that would result in the continuum limit ( along the lines of that taken in section [ sec : mfh ] ) , if an exact matrix - product solution of this model were to be found .whether this should turn out to be the case or not , an application of the extremal current principle ( see [ sec : pd ] ) gives predictions for the phase diagram that are in good agreement with simulation data : the same three phases that emerge in the asep are seen , but with shifted transition points .meanwhile an approach to this problem based on a local - equilibrium approximation has shown excellent agreement with monte carlo results and have also been conjectured to be exact .the case of the open system with extended objects and a single spatial inhomogeneity in the hop rates has also been considered .the bridge model comprises oppositely - charged particles in the sense of section [ 2species ] .positive charges enter at the left at rate , hop with unit rate to the right and exit at the right boundary at rate .the dynamics of negative charges is exactly the same , but in the opposite direction .when two opposing particles meet they may exchange with some rate .thus this model has both a parity and charge - conjugation symmetry . in certain parameter ranges ,this symmetry is reflected in the steady state that results : both positive and negative charges flow freely in their preferred directions . in particular, the limit at , for which an exact solution in terms of matrix products exists , is included within this range .on the other hand , when is small the model s symmetries are found to be broken .this has been proved in the limit but there is no known exact solution for general . in a finite system , the current flips between the positive and negative direction , but as the system size increases the time between flips increases exponentially , and so in the thermodynamic limit only the state with positive or negative current state is seen .it is unclear whether the structure of the matrix product solution allows for the description of such symmetry - broken states .this may also be related to whether the boundary conditions correspond to particle reservoirs with well - defined densities the abc model is another deceptively simple model that has yet to be completely solved .this model has a ring that is fully occupied by particles from three species , which are labelled , and and exhibit a symmetry under cyclic permutation of the labels .specifically , the rates at which neighbouring particles exchange are that is , if , particles prefer to be to the left of , to the left of and to the left of .this model was discussed in section [ sec : abc ] and was shown to be exactly solvable when the numbers of , and particles are all equal as a consequence of detailed balance being satisfied . herea phase separation is seen when . 
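the phase separation is easy to observe in a direct simulation. the sketch below assumes the cyclic rate convention ab->ba, bc->cb, ca->ac at rate q with the reverse exchanges at rate 1 (so q < 1 favours the fully separated arrangement), and simply checks that a separated initial condition survives for q < 1 while it scrambles at q = 1; the system size, q and run length are illustrative.

```python
import numpy as np

rng = np.random.default_rng(3)

def run_abc(q, sweeps=3000, n_each=10):
    # assumed cyclic convention: ab->ba, bc->cb, ca->ac at rate q, reverses at rate 1
    s = np.array([0] * n_each + [1] * n_each + [2] * n_each)   # a=0, b=1, c=2, separated start
    slow = {(0, 1), (1, 2), (2, 0)}
    N = len(s)
    for _ in range(sweeps * N):
        i = rng.integers(N); j = (i + 1) % N
        if s[i] == s[j]:
            continue
        rate = q if (s[i], s[j]) in slow else 1.0
        if rng.random() < rate:
            s[i], s[j] = s[j], s[i]
    return "".join("abc"[x] for x in s)

print("q = 0.1 :", run_abc(0.1))    # the three blocks remain largely intact
print("q = 1.0 :", run_abc(1.0))    # uniform stationary measure: the blocks scramble
```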
if , on the other hand , all configurations become equivalent and the stationary distribution is uniform .when there is even the slightest imbalance and particle numbers _ and _ hop rates , a small current starts to flow and nothing is known about the properties of the stationary distribution .the model is of particular importance because in the special case of equal numbers of each species , it is one of few models along with the ssep and kmp model where a free energy functional for the density profile is known .we end by mentioning the original paradigm of a system driven out of equilibrium , the model due to katz , lebowitz and spohn .this model has ising - like interactions between pairs of neighbouring spins and evolves by neighbouring spins exchanging places ( kawasaki dynamics ) but with a symmetry - breaking external field that favours hops of up - spins along a particular axis . when the system is open , or has periodic boundaries , a nonequilibrium steady state ensues . in the limit where spins become non - interacting andthere is only one spatial direction , the partially asymmetric exclusion process ( pasep ) described in section [ pasep ] is recovered .when the spins interact , an ising - like steady state that involves products of matrices ( as described in section [ ising ] ) can be found , albeit only if certain conditions on the hop rates are satisfied .( these conditions do admit nonequilibrium steady states in which a current flows , however ) . in two dimensions , one has , of course , a finite critical temperature in the absence of a driving force , and there has been particular interest in the nature of the low - temperature ordered phase in the presence of an external drive .we discussed in section [ discretetime ] different updating schemes for which matrix product steady states have been determined . herewe mention some other updating schemes which have not been solved so far .first an ordered discrete time sequential updating scheme could have the order as a randomly chosen sequence .the forward and backward schemes of section refdiscretetime correspond to special sequences .it would be of interest to construct the a matrix product steady state for an arbitrary fixed sequence .recently a _ shuffled _ updating scheme has been considered , in which a new random sequence is chosen at each timestep .it has been argued that this scheme is relevant to the modelling of pedestrian dynamics .the shuffled updating scheme guarantees that each site or bond is updated exactly once in each timestep but is more stochastic than an ordered sequential scheme . againit would be of interest to see if a matrix product could be used to describe the steady state .we thank all the many colleagues with whom it has been a pleasure to discuss nonequilibrium matrix product states over the years . in particularwe acknowledge our collaborators and other authors whose work we have summarised . 
for their useful discussions and insightful comments during the preparation of this manuscriptwe would wish to thank bernard derrida , rosemary harris , des johnston , joachim krug , kirone mallick , andreas schadschneider , gunther schtz and robin stinchcombe .rab further acknowledges the royal society of edinburgh for the award of a research fellowship .the continuity equation ( [ cty2 ] ) is a particularly simple case of a first order quasi - linear differential equation for the density the term on the right hand side would represent a source or sink of particles and is relevant for example to the case where creation and annihilation processes exist .such an equation can generally be solved by the method of characteristics which involves identifying the characteristic curves along which information from the boundary or initial condition propagates through the space - time domain .the characteristic curves satisfy of which the two independent conditions may be written these equations generally have two families of solution then the solution to ( [ cty3 ] ) is of the form where the function is fixed by the initial data . for the case ( [ cty2 ] ) , i.e. , the characteristics in the plane are straight lines with slope and is constant along these lines , as can be seen from ( [ xrho ] ) .more generally the characteristics are curves in the plane and the density varies along the characteristic .this can be seen by integrating ( [ xrho ] ) which gives implicitly as a function of , then the trajectory of the characteristic is given by if we consider a patch of the initial density profile of density at , its density evolves according to ( [ patden ] ) and ( [ patpos ] ) gives the position of the patch .these more complicated characteristics can produce a richer for boundary induced phase transitions including stationary shocks .in this appendix , we collect together general formul for obtaining both exact and asymptotic expressions for the coefficients appearing in a generating function if the latter is known .we refer the reader to for proofs and more detailed discussion . for brevity , we introduce the notation to mean `` the coefficient of in the ( formal ) power series '' . [ [ lagrange - inversion - formula ] ] lagrange inversion formula + + + + + + + + + + + + + + + + + + + + + + + + + + if a generating function satisfies the functional relation where satisfies , one has an expression for as a function of which can be inverted to find as a function of .this procedure then allows one to determine with relative ease the coefficient of in the expansion of any _ arbitrary _ function .the formula for doing so reads \;.\ ] ] [ [ asymptotics ] ] asymptotics + + + + + + + + + + + if a generating function has a singularity at a point in the complex plane , it contributes a term to the coefficient , where and are constants to be determined . for sufficiently large , this coefficient is dominated by the singularity closest to the origin . to be more precise, we say that two sequences and are asymptotically equivalent , denoted , if then , we have that .let us decompose the generating function into its regular and singular parts around , denoting the former by .that is , the cases that interest us are poles and algebraic singularities ( noninteger ) . 
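both formulae are easy to exercise on a standard example. the sketch below (python/sympy) takes phi(lambda) = 1/(1 - lambda), for which b(z) = z*phi(b) means b = z/(1 - b), solved by b(z) = (1 - sqrt(1 - 4z))/2 with a square-root singularity at z_c = 1/4, and compares directly extracted coefficients with the lagrange-inversion formula and with the singularity-analysis estimate; the choice of phi is an illustrative assumption.

```python
import sympy as sp

z, lam = sp.symbols('z lambda')

# illustrative choice: b(z) = (1 - sqrt(1 - 4z))/2 solves b = z/(1 - b)
B = (1 - sp.sqrt(1 - 4 * z)) / 2
coeffs = [sp.series(B, z, 0, 13).removeO().coeff(z, n) for n in range(13)]

for n in (4, 8, 12):
    # lagrange inversion:  [z^n] b = (1/n) [lam^(n-1)] phi(lam)^n
    lagrange = sp.Rational(1, n) * sp.series((1 - lam) ** (-n), lam, 0, n).removeO().coeff(lam, n - 1)
    # singularity analysis: [z^n] b ~ 4^n / (4 sqrt(pi) n^(3/2))
    asym = 4 ** n / (4 * sp.sqrt(sp.pi) * sp.Integer(n) ** sp.Rational(3, 2))
    print(n, coeffs[n], lagrange, sp.N(asym, 4))
```

the direct and lagrange coefficients coincide exactly, while the asymptotic estimate converges to them as n grows, as expected from the square-root singularity.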
for the former case onefinds for sufficiently large , the binomial coefficient behaves as replacing the factorial with a gamma function gives the extension of ( [ gf : pole ] ) to algebraic singularities the prefactor in this expression can be obtained by taking the limit order to show that the integral representation of the normalisation ( [ asep : z0 ] ) is equivalent to the summation ( [ open : z2 ] ) , the simplest approach is to show that the generating functions for computed from the two expressions are equal . as we have seen in section [ gfa ] the generating function computed from the finite sum ( [ open : z2 ] ) yields ( [ open : zed ] ) which can be developed further to give }{2 \left[z-\alpha(1-\alpha)\right ] \left[z-\beta(1-\beta)\right]}\;.\label{open : zed2}\end{aligned}\ ] ] we now turn to the integral representation ( [ asep : z0 ] ) which may be recast as a contour integral by change of variable where is the ( positively oriented ) unit circle in the complex plane .we compute the generating function ( [ gendefa ] ) using ( [ zinta ] ) . summing a geometric series ( which converges for ) we find we now use the residue theorem to evaluate this integral .we take and ( although we get the same final result for other ranges of and ) . in this casethe singularities within the unit circle are simple poles at , and where we define the residues from the three poles yield } -\frac{(1-b^2)}{2(a - b ) } \frac{\beta^2}{\left[z-\beta(1-\beta)\right]}\\ & & -\frac{(1-ab)(2-u_-^2-u_-^{-2 } ) } { 2z(1+a^2-a(u_-+u_-^{-1}))(1+b^2-b(u_-+u_-^{-1}))(u_- - u_+)}\;.\end{aligned}\ ] ] some simple algebra then shows that this expression is equivalent to ( [ open : zed2 ] ) .here , we collect together some representations of the various matrix product algebras that have been discussed in this work .we begin by recapping representations for the pasep algebra in the case of injection at the left boundary and extraction at the right boundary these representations are generalisations of the three representations for the totally asymmetric case first given in .from section [ alsalam ] we have where and the boundary vectors are from this representation one sees that for certain parameter curves , namely that the representations become finite dimensional since and the upper left corner of the matrices become disconnected from the rest .curves of this type were first noted in the more general case in and finite dimensional representations were catalogued in . in the limit ( [ pasep : d1 ] ) and ( [ pasep : d1 ] ) have well - defined limits where .a second representation of ( [ pasep : dea][pasep : ewa ] ) given in section [ pasymp ] is with where the parameters and are given in ( [ abcndef ] ) , and is a constant usually chosen so that . 
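a truncated version of such a representation can be checked numerically. the sketch below builds the q-deformed oscillator form (a commonly used choice whose matrix elements are an assumption here, consistent with the relations quoted above, in the convention where the forward hop rate is 1 and the backward rate is q) and verifies the bulk relation DE - qED = D + E away from the truncation edge; the boundary vectors, whose components are q-dependent, are not constructed.

```python
import numpy as np

q, K = 0.5, 8                                   # asymmetry parameter and truncation size
n = np.arange(1, K)
a = np.diag(np.sqrt(1.0 - q ** n), k=1)         # q-deformed lowering operator
ad = a.T                                        # raising operator

D = (np.eye(K) + a) / (1.0 - q)
E = (np.eye(K) + ad) / (1.0 - q)

lhs = D @ E - q * E @ D
rhs = D + E
# the truncation only corrupts the last row, so compare the interior block
print(np.max(np.abs((lhs - rhs)[:K - 1, :K - 1])))     # ~ 1e-16
```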
for these matrices , the limit is not defined .a third representation , which we have not yet encountered , is where in the limit the elements of reduce to we now give representations corresponding to the physically relevant and nontrivial algebras classified in section [ 2species ] and the multispecies generalisations of section [ sec : multispecies ] .[ [ solution - aii - and - multispecies - generalisation ] ] solution aii and multispecies generalisation + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + this is a multispecies model with matrices and , with an algebra where .one possible representation is this is the generalisation of the third pasep representation ( [ de3 ] ) in the case .[ [ solution - bi - and - bii ] ] solution bi and bii + + + + + + + + + + + + + + + + + + + with the algebra ( [ bialg ] ) , one has a representation where as noted in the main text a representation for solution bii is to use a presentation of the pasep algebra and write , i.e. a projector .[ [ solution - cii - and - multispecies - generalisation ] ] solution cii and multispecies generalisation + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + this is the case of the asep with disordered hopping rates .the matrix algebra ( [ msalg1 ] ) can be represented via where [ [ solution - d - and - deformed - commutators ] ] solution d and deformed commutators + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + as noted in [ sec : closeg ] the solution class d operators can be constructed from tensor products of operators obeying deformed commutator relations .the deformed commutator is defined as where is a deformation parameter : corresponds to the usual commutator =0 $ ] .deformed commutators appear in the analysis of the pasep , and in some of the two - species models of section [ 2species ] .one possible representation of these matrices is we note that both and any product of and including at least one are traceless .100 url # 1`#1`urlprefix zia r k p and schmittmann b 2007 probability currents as principal characteristics in the statistical mechanics of non - equilibrium steady states _ j. stat ._ to appear ; cond - mat/0701763 leduc c , camps o , zeldovich k b , roux a , jolimaitre p , bourel - bonnet l , goud b , joanny j f , bassereau p and prost j 2004 cooperative extraction of membrane nanotubes by molecular motors _ proc .natl . acad .sci . _ * 101 * 17096 schtz g m 1996 experimental realizations of integrable reaction - diffusion processes in biological and chemical systems in _ proceedings of the nankai workshop on symmetry , statistical mechanics models , and applications _, ed f y wu and m l ge ( world scientific , singapore ) godreche c , luck j m , evans m r , mukamel d , sandow s and speer e r 1995 spontaneous symmetry breaking : exact results for a biased random walk model of an exclusion process _ j. phys .a : math . gen ._ * 28 * 6039
we consider the general problem of determining the steady state of stochastic nonequilibrium systems such as those that have been used to model ( among other things ) biological transport and traffic flow . we begin with a broad overview of this class of driven diffusive systems which includes exclusion processes focusing on interesting physical properties , such as shocks and phase transitions . we then turn our attention specifically to those models for which the exact distribution of microstates in the steady state can be expressed in a matrix product form . in addition to a gentle introduction to this matrix product approach , how it works and how it relates to similar constructions that arise in other physical contexts , we present a unified , pedagogical account of the various means by which the statistical mechanical calculations of macroscopic physical quantities are actually performed . we also review a number of more advanced topics , including nonequilibrium free energy functionals , the classification of exclusion processes involving multiple particle species , existence proofs of a matrix product state for a given model and more complicated variants of the matrix product state that allow various types of parallel dynamics to be handled . we conclude with a brief discussion of open problems for future research .
in non equilibrium statistical mechanics , the boltzmann transport equation ( bte ) is a widely used tool for describing the transport properties of a classical thermodynamic system .since this equation is applicable mainly to low density systems for which binary collisions dominate , it gives , for example , the most suitable way of analysing the composition and dynamics of the upper atmosphere of the earth . in this paper ,the properties of the solution of this equation are explored in a manner to be of value primarily in the study of thermosphere .the boltzmann equation , when applied to a gas mixture like the one in consideration in upper atmosphere , leads to many difficult situations .the general interactions are varied and complex .moreover , highly non - equilibrium conditions are found , for example , at high latitudes in the thermosphere , where the ions move at extremely high speeds compared to the neutral atoms due to the presence of magnetospheric electric fields .another example can be the case of exobase ( lower boundary of the exosphere ) where the extremely sparse population fails to give a sufficiently high rate of collisions to justify the applicability of the collision term of the bte . in this case, one would need a way to find whether or not the equation is valid . in order to address situations like these, the ordinary collision integrals used to simplify the bte , fail to provide an advantageous insight in the situation due to the insurmountable complexity of the integrals .the methods previously equipped to attempt at solving the bte include the eigenvalue approach of chapman and cowling , simplification of the collision term by chapman and enskog and orthogonal polynomial method by grad . of these , the method used by grad which primarily exploits the tensor hermite polynomials to obtain a series solution to the transport equation , presents explorable prospects .+ in this paper , we study the tensor hermite polynomials and their properties to understand their applicability in areas other than dynamics of upper atmosphere involving applications in computational fluid dynamics , study of magneto plasma in tokomak for fusion research , dynamics in the core of a nuclear reactor , etc . the basic work on tensor hermite polynomials was done by grad in 1949 introducing the need for them .he introduced them as the basis for complete set of orthonormal polynomials in n variables in tensorial form .as mentioned in his work , this set of scalar orthonormal polynomials often lacks symmetry and an advantage is gained by expressing these polynomials in tensor invariant notation .other than grad , work on these tensor polynomials has been done by viehland to solve the bte for trace amounts of ions in dilute gases and by knio to study the uncertainty propagation in computational fluid dynamics .+ in this paper , three major aspects of the tensor hermite polynomials are studied . 
in the first one ,the effects of scaling are examined .this directly provides a mathematical criterion governing the ambient conditions that must be satisfied for justifying the applicability of the collision integral term of the bte in gas dynamical procedures .the second studies the change in hermite polynomials under a translation of axes .the third one inspects the polynomials under rotation of axes from particle velocity coordinates to the centre - of - mass and relative velocity coordinates .this results in tensors of order 6 which are required to evaluate the collision integrals for a two component system of low density gas mixture .the tensorial hermite polynomials can be written iteratively as ( [ app : a3 ] ) : ,\ ] ] where the permutation operator is explained in [ app : a2 ] . hence , the first few polynomials are & _ 0 = 1 , [ eq:2 ] + & _ 1 = 2 , [ eq:3 ] + & _ 2 = 4_i_j - 2_ij , [ eq:4 ] + & _ 3 = 8_i_j_k - 4(_i_jk + _ j_ki + _ k_ij ) . [ eq:5 ] the entities stated in bold are tensors .the rank of these entities are denoted by the number of underlines below it or by the numerical subscript .these polynomials differ from the tensor hermite polynomials introduced by grad described in [ app : a1 ] and are related to the above definition .+ these polynomials are orthonormal and symmetric .hence , any scalar function , for example , the distribution function of the boltzmann transport equation , can be expanded in terms of these hermite polynomials provided the integral here , is the dimensionless velocity vector such that : where is the particle velocity , is its mass , is the boltzmann constant and is the thermodynamic temperature . in equation [ eq:7 ] , is the weight , which , in this context is the normalised maxwellian velocity distribution function displaced by an amount . the s in equation [ eq:6 ]are called the expansion coefficients .they can be obtained by exploiting the orthonormality of the hermite polynomials described by [ eq:9 ] + the notation represents the scalar product of the tensors and ( [ app : a4 ] ) .in this section we explore the possibility of conversion from one type of functional dependence to another where the other one is scaled with respect to .consider the relation , where and are constants .henceforth , summation over repeated indices is implied unless stated otherwise . considering the general functional dependence of the form , can be written in terms of as , [ eq:11 ] + if the same function has to be equivalently representable in terms of , assuming the same functional dependence , through the expression : [ eq:12 ] + then the series should be convergent . here , and are constants . from the above expressions ,the two series are related as : } ( \boldsymbol{a}_n , \boldsymbol{h}_n),\ ] ] [ eq:13 ] + and the series in will converge if the following integral _ -^ = ( f_0/f_0)^2 _ -^ , exists . from the orthogonality of the hermite polynomials , we can write are functions of quadratic combinations of s . for a finite value of ,the expression in equation [ eq:14 ] will be finite since depends on finite powers of .hence , the existence of the integral depends only on that of the integral of the exponent , i.e. , }\enspace dz } < \infty.\ ] ] [ eq:15 ] + this happens when has a negative coefficient in the exponent . 
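the tensorial bookkeeping can obscure how simple the underlying expansion is, so the one-dimensional analogue is worth checking explicitly: orthogonality of the physicists' polynomials with respect to the gaussian weight, and the expansion of a displaced maxwellian whose coefficients follow from the generating function exp(2xt - t^2). the sketch below verifies both numerically with gauss-hermite quadrature; the displacement a is an illustrative assumption.

```python
import numpy as np
from numpy.polynomial.hermite import hermval, hermgauss
from math import factorial, sqrt, pi

x, w = hermgauss(60)            # nodes and weights for the weight exp(-x^2)

def H(n, t):                    # physicists' hermite polynomial H_n(t)
    c = np.zeros(n + 1); c[n] = 1.0
    return hermval(t, c)

# orthogonality:  integral exp(-x^2) H_m H_n dx = sqrt(pi) 2^n n! delta_{mn}
print(np.sum(w * H(3, x) * H(5, x)))                                    # ~ 0
print(np.sum(w * H(4, x) ** 2) / (sqrt(pi) * 2 ** 4 * factorial(4)))    # ~ 1

# expansion of a displaced gaussian: exp(-(x-a)^2) = exp(-x^2) * sum_n (a^n/n!) H_n(x)
a = 0.7
for n in range(5):
    integrand = np.exp(2 * a * x - a * a) * H(n, x)      # exp(-(x-a)^2) H_n divided by the weight
    c_n = np.sum(w * integrand) / (sqrt(pi) * 2 ** n * factorial(n))
    print(n, round(c_n, 6), round(a ** n / factorial(n), 6))
```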
by substituting for from equation ( [ eq : eq10 ] ), we get the condition as .+ from the expression of in equation ( [ eq : eq8 ] ) , we can see that where is the temperature of the species with the scaled velocities and is the temperature of particles with unscaled velocity .+ hence the interchange of variables can be done if .+ in studies of ion - neutral mixture in upper atmosphere , where ( is the ion temperature and is the neutral atom temperature ) , interconversions to and from a temperature is possible when it falls in the range : [ eq:16 ] + this also implies that for the boltzmann equation to hold , with the collision term , the ion temperature must be less than four times the neutral atom temperature .translation can be made from one frame of reference to another .there are generally three frames of reference : the inertial , the one moving with the entire gas , and the one moving with a particular component species .let two translations be defined as : & _ 0 = - _ 00 , [ eq:17 ] + & _ r = - _ a , [ eq:18 ] where is an arbitrary constant and is the average value of . + the hermite polynomials under both the translations are inter - related through : & _ n^0 = s_(n,3)[_p=0^n ( _ p^n ) _p^r [ 2(_a-_00)]^t(n - p ) ] , [ eq : eq19 ] + & _ p^r = s_(p,3)[_n=0^p ( _ n^p ) _ n^0 [ 2(_00 - _ a)]^t(p - n ) ] .[ eq : eq20 ] in the above expressions , and represent the hermite polynomials in terms of and defined as per equation [ eq:17 ] and [ eq:18 ] respectively . here, is the usual binomial coefficient .these expressions can be obtained by the principle of mathematical induction and using equation ( 1 ) as explained in [ app : a7 ] .it should be realised that translation destroys the property of orthogonality . in order to evaluate the collision term of boltzmann equation for the gas flow problem, one has to consider the collision integral and the collision cross section depends on the relative velocity of the collision partners .more often than not , this relative velocity can have a direction different from that of the species velocity . in that case ,a rotation of axes makes the calculation a lot less cumbersome .+ considering for example , and as the space vectors and and as the corresponding distribution functions for two species , the axes can be rotated to : & c_r = ( m_s v_rs + m_s v_rs)/ , [ eq:21 ] + & g_r = ( v_rs - v_rs ) , [ eq:22 ] + & v_rs = z_rs , [ eq:23 ] + & v_rs = z_rs , [ eq:24 ] + & = m_sm_s/(m_s + m_s ) , [ eq:25 ] where and are the rotated coordinates , and are the molecular masses of the two species , is the boltzmann constant , is the temperature , and are the species velocities and is the reduced mass . + the rotation matrix can be expressed as : & r_1;ssgc = r_1;gcs s = y & y + y & -y , + & y^2 = / m_s , y^2 = / m_s. and the corresponding relations can be expressed as : [ eq:26 ] + here , the vectors that are getting transformed are of order 6 , i.e. 
, two components corresponding to the two species each of order 3 .the two components can be represented by upper and lower indices .hence , we can also define the mixed hermite polynomials : & _ 0 = 1 , [ eq:27 ] + & _ 1 = ( _ 1a _ 1b)^t , [ eq:28 ] + & _ * h*__n+1 = s_(n+1,3)^(n+1,2 ) ( _ * h*__n _ * h*__1 - 2n ^u _ * h*__n-1 ) , [ eq:29 ] where and represents the permutation operator over the indices of both the components .+ therefore , using and , we can define two sets of hermite polynomials .it is observed that the distribution function is the same in rotated and in original axes frame ( [ app : a7 ] ) : [ eq : eq30 ] f_ss = f_rsf_rs = f_cg .the description of tensor hermite polynomials and a study of their behavior under scaling , translation and rotation of axes gives us an insight on their applicability in various situations . since the assumptions in the process were minimal , they provide quite an accurate and generalised computational procedure .this procedure ensures that with the inclusion of each additional tensorial polynomial in the truncated series , the discrepancy between the actual distribution function and the approximated one decreases monotonically .the investigation of these polynomials under scaling gives a condition on temperature which must be satisfied for the collision term of the bte to be applicable .the polynomials when treated under rotation , provide a simplified way of evaluation .it should be noted that because of their orthonormality and symmetry , they can be advantageously exploited in simplifying systems other than the one given by boltzmann equation . + as further research in this area , we plan to work on exploring the properties of the expansion coefficients under various transformation of variables .we also plan to look at the tensorial hermite polynomials in spherical polar coordinates which can be useful for certain systems .one of the authors ( pm ) wishes to thank tata consultancy services , mumbai for funding this project and sarthak bagaria for helping at various stages of the project .general formulae : & _ iz_j = _ ij , + & _ j^n_i^n = _ ij^n , + & _ ( n ) = ^n , + & _ ( n+1 ) = ( z_i - _ i)_(n ) , [ eq : eqa5 ] + & _ i_(n ) = _ i_(n-1 ) , [ eq : eqa6 ] + & _ ( n ) = ( - ) ^n . , + & . where is the hermite polynomial with indices and represents the normalized weight : + and the following holds & _i= -_i , + & _ i ( ) = . is a second order unit tensor , i.e. , the identity matrix and represents sum of all terms in which is attached with as explained in reference .the relation between physicist s polynomial used in this paper and mentioned in the thesis of sengupta and the probabilist s polynomial used by grad is as follows : & _ n(z ) = 2^_(n)(z ) , + & _ ( n)(z ) = 2 ^ -_n(z/ ) . where is the probabilist s hermite polynomial .now substituting for in equation [ eq : eqa6 ] gives : replacing by , we get : & _ i_n = 2_i_n-1 . similarly , in equation [ eq : eqa5 ] , substituting for the probabilist s polynomial, we get : , % \\&or \frac{1}{\sqrt{2}}h_{n+1 } = \frac{1}{\sqrt{2}}[2x_i - \nabla_i]h_n , \\&or \quad \boldsymbol{h}_{n+1 } = ( 2x_i - \nabla_i)\boldsymbol{h}_n . \end{split}\ ] ]the permutation operator acts on an argument and replaces the argument by a sum of all possible permutations over n indices for tensors in m - dimensional space .+ for example , & s_(2,3 ) ( ) = _ ij + _ ji , + & s_(3,3)(x ) = x_i_jk + x_i_kj + x_j_ki + x_j_ik + x_k_ij + x_k_ji . 
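the conversion between the physicists' convention used in this paper and grad's probabilists' convention, quoted in the appendix above, can be checked numerically in one dimension:

```python
import numpy as np
from numpy.polynomial.hermite import hermval
from numpy.polynomial.hermite_e import hermeval

x = np.linspace(-2.0, 2.0, 7)
for n in range(6):
    c = np.zeros(n + 1); c[n] = 1.0
    he_n = hermeval(x, c)                                    # probabilists' polynomial (grad's convention)
    scaled = 2.0 ** (-n / 2) * hermval(x / np.sqrt(2.0), c)  # 2^(-n/2) H_n(x / sqrt(2))
    print(n, np.max(np.abs(he_n - scaled)))                  # ~ 0
```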
\\ & = \ ; s_{(n+1,3)}[\boldsymbol{h}_{n}\boldsymbol{h}_{1 } ] - s_{(n+1,3)}[2n\boldsymbol{h}_{n-1}\underline{\underline{i } } ] , \\ & = \ ; [ 2x_{i_1}s_{(n,3)}(\boldsymbol{h}_n ) + 2x_{i_2}s_{(n,3)}(\boldsymbol{h}_n ) + 2x_{i_3}s_{(n,3)}(\boldsymbol{h}_n ) + \dots ] \\ & \ ; \ ; \;- 2n[\delta_{i_1i_2}s_{(n-1,3)}(\boldsymbol{h}_{n-1 } ) + \delta_{i_1i_3}s_{(n-1,3)}(\boldsymbol{h}_{n-1 } ) + \dots \\ & \qquad \qquad \delta_{i_2i_1}s_{(n-1,3)}(\boldsymbol{h}_{n-1 } ) + \delta_{i_2i_3}s_{(n-1,3)}(\boldsymbol{h}_{n-1 } ) + \dots \\ & \qquad \qquad \dots \\ & \qquad \qquad \dots \qquad + \qquad \dots \qquad + \delta_{i_{n+1}i_n}s_{(n-1,3)}(\boldsymbol{h}_{n-1 } ) ] .\end{split}\ ] ] here , the permutation operator explained in the previous section .+ since the are the same for any permutations of the indices , we have therefore the above expression is : ( putting ) = & n![2x_{i_1}\boldsymbol{h}_n + 2x_{i_2}\boldsymbol{h}_n + \dots ] \\&-2n(n-1)![\delta_{i_1i_2}(\boldsymbol{h}_{n-1 } ) + \delta_{i_1i_3}(\boldsymbol{h}_{n-1 } ) + \dots \\ & \qquad \dots \\ & \qquad \dots \qquad + \qquad \dots \qquad + \delta_{i_{n+1}i_n}(\boldsymbol{h}_{n-1 } ) ] , \\= & n![2x_{i_1}\boldsymbol{h}_n + 2x_{i_2}\boldsymbol{h}_n + \dots ] \\&-2n![\boldsymbol{\delta}_{i_1}(\boldsymbol{h}_{n-1 } ) + \boldsymbol{\delta}_{i_2}(\boldsymbol{h}_{n-1 } ) + \dots + \boldsymbol{\delta}_{i_{n+1}}(\boldsymbol{h}_{n-1 } ) ] .\end{split}\ ] ] where the s are obtained from the expression in equation [ eq : eqa6 ] . = & n![2x_{i_1}\boldsymbol{h}_n + 2x_{i_2}\boldsymbol{h}_n + \dots ] \\&-n![\nabla_{i_1}\boldsymbol{h}_n + \nabla_{i_2}\boldsymbol{h}_n + \dots + \nabla_{i_{n+1}}\boldsymbol{h}_n ] , \\= & n![(2x_{i_1}-\nabla_{i_1})\boldsymbol{h}_n + ( 2x_{i_2}-\nabla_{i_2})\boldsymbol{h}_n + \quad \\ & \dots \quad + ( 2x_{i_{n+1}}-\nabla_{i_{n+1}})\boldsymbol{h}_n ] .\end{split}\ ] ] using the relation : we get : = & n![\boldsymbol{h}_{n+1 } + \boldsymbol{h}_{n+1 } + \quad \dots \quad + \boldsymbol{h}_{n+1 } ] , % \\= & n!(n+1)h_{n+1 } , \\= & ( n+1)!\boldsymbol{h}_{n+1}. \end{split}\ ] ]the scalar inner product of two tensors * a * and * b * of _ n_-th order is defined as : where n is the number of dimensions in the space under consideration .+ for example , in 3 dimensions , for two second order tensors ( matrices ) , the scalar inner product will be given as : & ( ^(2).^(2 ) ) = a_11b_11 + a_12b_12 + a_13b_13 + & + a_21b_21 + a_22b_22 + a_23b_23 + & + a_31b_31 + a_32b_32 + a_33b_33 where , is a tensor of rank and order 6 and is the corresponding set of rotated mixed tensor expansion coefficients . is the n - th rank rotation tensor .this proof can be done by principle of mathematical induction . substituting for the first few values of or , we observe that the expressions hold true . assuming that the expression ^{t(n - p)}],\ ] ] holds true for then proving that this implies it also holds true for shows that this is true in general . from equation ( 1 ) , we have since the are symmetric in indices , and cancel . 
+ subsituting for the series expression of }^{t(n - p)}2(\underline{z}-\underline{z}_{00 } ) - 2n\sum_{p=0}^{n-1 } ( _ p^{n-1})\boldsymbol{h}_p^r{[2(\underline{z}_a -\underline{z}_{00})]}^{t(n - p-1)}\underline{\underline{i } } , \\ & = \sum_{p=0}^n ( _ p^n)\boldsymbol{h}_p^r{[2(\underline{z}_a -\underline{z}_{00})]}^{t(n - p)}2(\underline{z}-\underline{z}_{00 } ) \\ & \qquad \qquad - 2\sum_{p=0}^{n-1 } ( _ { p+1}^{n})(p+1)\boldsymbol{h}_p^r{[2(\underline{z}_a -\underline{z}_{00})]}^{t(n - p-1)}\underline{\underline{i } } , \\ & = { [ 2(\underline{z}_a -\underline{z}_{00})]}^{t(n)}2(\underline{z}-\underline{z}_{00 } ) + \sum_{p=1}^n ( _ p^n)\boldsymbol{h}_p^r{[2(\underline{z}_a -\underline{z}_{00})]}^{t(n - p)}2(\underline{z}-\underline{z}_{00 } ) \\ & \qquad \qquad - 2\sum_{p=1}^{n } ( _ { p}^{n})(p)\boldsymbol{h}^r_{p-1}{[2(\underline{z}_a -\underline{z}_{00})]}^{t(n - p)}\underline{\underline{i } } , \\ & = { [ 2(\underline{z}_a -\underline{z}_{00})]}^{t(n+1 ) } + { [ 2(\underline{z}_a -\underline{z}_{00})]}^{t(n ) } \boldsymbol{h}^r_{1 } + \sum_{p=1}^n ( _ p^n){[2(\underline{z}_a -\underline{z}_{00})]}^{t(n - p ) } \\ & \qquad \qquad \quad [ \boldsymbol{h}_p^r\boldsymbol{h}^r_{1 } + \boldsymbol{h}_p^r2(\underline{z}_a-\underline{z}_{00 } ) - 2p\boldsymbol{h}^r_{p-1 } \underline{\underline{i } } ] , \\ & = { [ 2(\underline{z}_a -\underline{z}_{00})]}^{t(n+1 ) } + { [ 2(\underline{z}_a -\underline{z}_{00})]}^{t(n ) } \boldsymbol{h}^r_{1 } + \sum_{p=1}^n ( _ p^n)\boldsymbol{h}^r_{p+1}{[2(\underline{z}_a -\underline{z}_{00})]}^{t(n - p ) } \\ & \qquad \qquad + \sum_{p=1}^n ( _ p^n)\boldsymbol{h}_p^r{[2(\underline{z}_a -\underline{z}_{00})]}^{t(n - p+1 ) } , \\ & = { [ 2(\underline{z}_a -\underline{z}_{00})]}^{t(n+1 ) } + { [ 2(\underline{z}_a -\underline{z}_{00})]}^{t(n ) } \boldsymbol{h}^r_{1 } + \sum_{p=1}^n ( _ p^n)\boldsymbol{h}^r_{p+1}{[2(\underline{z}_a -\underline{z}_{00})]}^{t(n - p ) } \\ & \qquad \qquad + \sum_{p=0}^{n-1 } ( _ { p+1}^n)\boldsymbol{h}^r_{p+1}{[2(\underline{z}_a -\underline{z}_{00})]}^{t(n - p ) } , \\ & = { [ 2(\underline{z}_a -\underline{z}_{00})]}^{t(n+1 ) } + ( n+1){[2(\underline{z}_a -\underline{z}_{00})]}^{t(n ) } \boldsymbol{h}^r_{1 } \\ & \qquad \qquad + \sum_{p=1}^{n-1 } ( _ { p+1}^{n+1})\boldsymbol{h}^r_{p+1}{[2(\underline{z}_a -\underline{z}_{00})]}^{t(n+1-(p+1 ) ) } + \boldsymbol{h}^r_{n+1 } , \\ & = { [ 2(\underline{z}_a -\underline{z}_{00})]}^{t(n+1 ) } + ( n+1){[2(\underline{z}_a -\underline{z}_{00})]}^{t(n ) } \boldsymbol{h}^r_{1 } \\ & \qquad \qquad + \sum_{p=0}^{n-2 } ( _ { p}^{n+1})\boldsymbol{h}_p^r{[2(\underline{z}_a -\underline{z}_{00})]}^{t(n+1-p ) } + \boldsymbol{h}^r_{n+1 } , \\ & = \sum_{p=0}^{n+1 } ( _ { p}^{n+1})\boldsymbol{h}_p^r{[2(\underline{z}_a -\underline{z}_{00})]}^{t(n+1-p)}. % \end{dmath } % \end{split}\end{aligned}\ ] ]
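the one-dimensional analogue of the translation formula proved above is the classical identity H_n(x + y) = sum_k (n choose k) H_k(x) (2y)^(n-k), which follows from the generating function exp(2xt - t^2); a short numerical check (x and y are arbitrary illustrative values):

```python
import numpy as np
from math import comb
from numpy.polynomial.hermite import hermval

def H(n, t):                    # physicists' hermite polynomial H_n(t)
    c = np.zeros(n + 1); c[n] = 1.0
    return hermval(t, c)

x, y = 0.4, -1.1
for n in range(7):
    lhs = H(n, x + y)
    rhs = sum(comb(n, k) * H(k, x) * (2 * y) ** (n - k) for k in range(n + 1))
    print(n, round(float(lhs), 6), round(float(rhs), 6))     # identical up to rounding
```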
a description of orthogonal tensor hermite polynomials in 3-d is presented . these polynomials , as introduced by grad in 1949 , can be used to obtain a series solution to the boltzmann transport equation . the properties that are explored are scaling , translation and rotation . order 6 hermite tensors are studied while obtaining the rotation relations . from the scaling of the independent variables of particle velocities , a criterion on temperature is obtained which implies that the equation can be applied to binary gas mixtures only if the temperature of the hotter constituent is less than four times that of the cooler one . this criterion and other properties of the tensor hermite polynomials obtained in this paper can be used to study gas dynamics in the thermosphere .
multiple - access channels and two - way channels are two of the earliest channels that were considered in the literature . the multiple - access channel capacity region was determined in .the two - way channel was initially examined by shannon , , where he found inner and outer bounds for the general two - way channel , and determined the capacity region for some special cases . in , it was shown that the inner bound found by shannon was not tight in general .the capacity region of the gaussian two - way channel was found by han in .a related , somewhat more general case called two - user channels was studied in . for a comprehensive review of these channels ,the reader is referred to .a rigorous analysis of information theoretic secrecy was first given by shannon in . in this work, shannon showed that to achieve _ perfect secrecy _ in communications , which is equivalent to providing no information to an enemy cryptanalyst , the conditional probability of the _ cryptogram given a message _ must be independent of the actual transmitted message . in other words , the _ a posteriori _ probability of a message must be equivalent to its _ a priori _ probability . in , wyner applied this concept to the discrete memoryless channel .he defined the wire - tap channel , where there is a wire - tapper who has access to a degraded version of the intended receiver s signal . using the normalized conditional entropy of the transmitted message given the received signal at the wire - tapper as the secrecy measure, he found the region of all possible pairs , and the existence of a _ secrecy capacity _ , , the rate up to which it is possible to limit the rate of information transmitted to the wire - tapper to arbitrarily small values . in , it was shown that for wyner s wire - tap channel , it is possible to send several low - rate messages , each completely protected from the wire - tapper individually , and use the channel at close to capacity .however , if any of the messages are available to the wire - tapper , the secrecy of the rest may also be compromised .reference extended wyner s results in and carleial and hellman s results in to gaussian channels . the seminal work by csiszr and krner , , generalized wyner s results to less noisy " and more capable " channels .furthermore , it examined sending common information to both the receiver and the wire - tapper , while maintaining the secrecy of some private information that is communicated to the intended receiver only .reference suggested that the secrecy constraint developed by wyner needed to be strengthened , since it constrains the rate of information leaked to the wire - tapper , rather than the total information , and the information of interest might be in this small amount .it was then shown that the results of can be extended to strong " secrecy constraints for discrete channels , where the limit is on the total leaked information rather than just the rate , with no loss in achievable rates , . in the past two decades ,common randomness has emerged as a valuable resource for secret key generation , . 
in , it was shown that the existence of a public " feedback channel can enable the two parties to be able to generate a secret key even when the wire - tap capacity is zero .references and examined the secret key capacity and _ common randomness _capacity , for several channels .these results also benefit from to provide strong " secret key capacities .maurer also examined the case of active adversaries , where the wire - tapper has read / write access to the channel in .the secret key generation problem was investigated from a multi - party point of view in and .notably , csiszr and narayan considered the case of multiple terminals where a number of terminals try to distill a secret key and a subset of these terminals can act as helper terminals to the rest in , .recently , several new models have emerged , examining secrecy for parallel channels , relay channels , and fading channels . fading and parallel channels were examined together in .broadcast and interference channels with confidential messages were considered in .references examined the multiple access channel with confidential messages where two transmitters try to keep their messages secret from each other while communicating with a common receiver . in , an achievable regionwas found in general , and the capacity region was found for some special cases .mimo channels were considered in . in , we investigated multiple access channels where transmitters communicate with an intended receiver in the presence of an external wire - tapper from whom the messages must be kept confidential . in , we considered the case where the wire - tapper gets a degraded version of a gmac signal , and defined two separate secrecy measures extending wyner s measure to multi - user channels to reflect the level of trust the network may have in each node .achievable rate regions were found for different secrecy constraints , and it was shown that the secrecy sum - capacity can be achieved using gaussian inputs and stochastic encoders .in addition , tdma was shown to also achieve the secrecy sum - capacity . in this paper, we consider the general gaussian multiple access wire - tap channel ( ggmac - wt ) and the gaussian two - way wire - tap channel ( gtw - wt ) , both of which are of interest in wireless communications as they correspond to the case where a single physical channel is utilized by multiple transmitters , such as in an ad - hoc network .we consider an external _ eavesdropper _ that receives the transmitters signals through a general gaussian multiple access channel ( ggmac ) in both system models .we utilize a suitable secrecy constraint which is the normalized conditional entropy of the transmitted secret messages given the eavesdropper s signal , corresponding to the collective secrecy " constraints used in .we show that satisfying this constraint implies the secrecy of the messages for all users . 
in both scenarios ,transmitters are assumed to have one secret and one open message to transmit .this is different from in that the secrecy rates are not constrained to be at least a fixed portion of the overall rates .we find an achievable _ secrecy rate region _, where users can communicate with arbitrarily small probability of error with the intended receiver under _ perfect secrecy _ from the eavesdropper , which corresponds to the result of for the degraded case .we note that , in accordance with the recent literature , when we use the term perfect secrecy , we are referring to weak " secrecy , where the _ rate _ of information leaked to the adversary is limited .as such , this can be thought of as almost perfect secrecy " .we also find the sum - rate maximizing power allocations for the general case , which is more interesting from a practical point of view .it is seen that as long as the users are not _ single - user decodable _ at the eavesdropper , a secrecy - rate trade off is possible between the users .next , we show that a non - transmitting user can help increase the secrecy capacity for a transmitting user by effectively jamming " the eavesdropper , and even enable secret communications that would not be possible in a single - user scenario .we term this new scheme _cooperative jamming_. the gtw - wt is shown to be especially useful for secret communications , as the multiple - access nature of the channel hurts the eavesdropper without affecting the communication rate .this is due to the fact that the transmitted messages of each user essentially help hide the other user s secret messages , and reduce the extra randomness needed in wire - tap channels to confuse the eavesdropper .the rest of the paper is organized as follows : section [ sec : system ] describes the system model for the ggmac - wt and gtw - wt and the problem statement .section [ sec : ach ] describes the general achievable rates for the ggmac - wt and gtw - wt .sections [ sec : summax ] and [ sec : jam ] give the sum - secrecy rate maximizing power allocations , and the achievable rates with cooperative jamming . section [ sec : results ] gives our numerical results followed by our conclusions and future work in section [ sec : conclusion ] .we consider users communicating in the presence of an eavesdropper who has the same capabilities .each transmitter has two messages , which is secret and which is open , from two sets of equally likely messages , . 
let , , , , and .the messages are encoded using codes into , where .the encoded messages are then transmitted .we assume the channel parameters are universally known , and that the eavesdropper also has knowledge of the codebooks and the coding scheme .in other words , there is no shared secret .the two channels we consider in this paper are described next .this is a scenario where the users communicate with a common base station in the presence of an eavesdropper , where both channels are modeled as gaussian multiple - access channels as shown in figure [ fig : gmacwt2 ] .the intended receiver and the wire - tapper receive and , respectively .the receiver decodes to get an estimate of the transmitted messages , .we would like to communicate with the receiver with arbitrarily low probability of error , while keeping the wire - tapper ( eavesdropper ) ignorant of the secret messages , .the signals at the intended receiver and the wire - tapper are given by [ eqn : mac ] where are the awgn , is the transmitted codeword of user , and are the channel gains of user to the intended receiver ( _ main _ channel , m ) , and the eavesdropper ( _ wire - tap _ channel , w ) , respectively .each component of and .we also assume the following transmit power constraints : similar to the scaling transformation to obtain the standard form of the interference channel , , we can represent any gmac - wt by an equivalent standard form , : [ eqn : macstd ] where , for each , * the codewords are scaled to get ; * the new power constraints are ; * the wiretapper s new channel gains are ; * the noises are normalized to get and . we can show that the eavesdropper gets a stochastically degraded version of the receiver s signal if .we considered this special case in . in this scenario ,two transmitter / receiver pairs communicate with each other over a common channel .each receiver gets and the eavesdropper gets .receiver decodes to get an estimate of the transmitted messages of the other user .the users would like to communicate the open and secret messages with arbitrarily low probability of error , while maintaining secrecy of the secret messages .the signals at the intended receiver and the wiretapper are given by [ eqn : tw ] where and .we also assume the same power constraints given in ( with ) , and again use an equivalent standard form as illustrated in figure [ fig : tw ] : [ eqn : twstd ] where * the codewords are scaled to get and ; * the maximum powers are scaled to get and ; * the transmitters new channel gains are given by and ; * the wiretapper s new channel gains are given by and ; * the noises are normalized by and . in this section , we present some useful preliminary definitions including the secrecy constraint we will use . in particular , the secrecy constraint we used is the collective secrecy constraint " we defined in , and is suitable for the multi - access nature of the systems of interest .we use the _ normalized joint conditional entropy _ of the transmitted messages given the eavesdropper s received signal as our secrecy constraint , i.e. , for any set of users . 
for perfect secrecy of alltransmitted secret messages , we would like assume for some arbitrarily small as required .then , where as .if , then we define .thus , the perfect secrecy of the system implies the perfect secrecy of any group of users , guaranteeing that when the system is secure , so is each individual user .[ def : achrate ] let .the rate vector is said to be _ achievable _ if for any given there exists a code of sufficient length that [ eqn : achdef ] and is the average probability of error .in addition , we need where denotes our secrecy constraint and is defined in .we will call the set of all achievable rates , the _ secrecy - capacity region _ , and denote it for the ggmac - wt , and for the gtw - wt , respectively .before we state our results , we also define the following notation which will be used extensively in the rest of this paper : }}^+ } & \triangleq \max { { \left [ { \xi,0}\right ] } } \\ \label{eqn : capm } { c^{{{{\scriptscriptstyle \text{m}}}}}}_{{{\mathcal{s}}}}({{{\mathbf{p } } } } ) & \triangleq { \frac{1}{2}}\log { \left({1+{{\textstyle \sum}}_{k \in { { { \mathcal{s } } } } } p_k}\right ) } , \quad { { { \mathcal{s}}}}\subseteq { { { \mathcal{k}}}}\\ \label{eqn : capw } { c^{{{{\scriptscriptstyle \text{w}}}}}}_{{{\mathcal{s}}}}({{{\mathbf{p } } } } ) & \triangleq { \frac{1}{2}}\log { \left({1+{{\textstyle \sum}}_{k \in { { { \mathcal{s } } } } } h_k p_k}\right ) } , \quad { { { \mathcal{s}}}}\subseteq { { { \mathcal{k}}}}\\ \label{eqn : capws } { \tilde{c}^{{{{\scriptscriptstyle \text{w}}}}}}_{{{\mathcal{s}}}}({{{\mathbf{p } } } } ) & \triangleq { \frac{1}{2}}\log { \left({1+\frac{{{\textstyle \sum}}_{k \in { { { \mathcal{s } } } } } h_k p_k } { 1+{{\textstyle \sum}}_{k \in { { { { { \mathcal{s}}}}^c } } } h_k p_k}}\right ) } , \quad { { { \mathcal{s}}}}\subseteq { { { \mathcal{k}}}}\\ \label{eqn : pset } { { { \mathcal{p}}}}&\triangleq { { \left\ { { { { { \mathbf{p } } } } : 0 \le p_k \le { \bar{p}}_k , \ , \forall k}\right\}}}\\ \label{eqn : pmmax } { \bar{{{\mathbf{p}}}}}&\triangleq { { \left\ { { { \bar{p}}_1,\dotsc,{\bar{p}}_k}\right\}}}\end{aligned}\ ] ]lastly , we informally call the user _ strong _ if , and _ weak _ if .this is a way of indicating whether the intended receiver or the wiretapper is at a more of an advantage concerning that user , and is equivalent to stating whether the single - user secrecy capacity of that user is positive or zero .we later extend this concept to refer to users who can achieve positive secrecy rates and those who can not .in addition , we will say that a user is _ single - user decodable _ if its rate is such that it can be decoded by treating the other user as noise .a user group is single - user decodable by the eavesdropper if .our achievable rates can not guarantee secrecy for such a group of users .in this section , we present our main results for the ggmac - wt . we first define two separate regions and then give an achievable region : let for all .then , the superposition region , , is given by }}^+ } , \ ; \forall { { { \mathcal{s}}}}\subseteq { { { \mathcal{k}}}}\bigr\ } \end{split}\ ] ] which can be written as ^+ \!\!,\ ; \forall { { { \mathcal{s}}}}{\subseteq } { { { \mathcal{k}}}}\biggr \}. \end{split}\ ] ] let be such that for all and .let for all .then , the tdma region , , is given by }}^+ } \ !, \ ; \forall k { \in } { { { \mathcal{k}}}}\bigr \ } \end{split}\ ] ] which is equivalent to ^+ \!\ ! , \forall k { \in } { { { \mathcal{k}}}}\biggr \}.\ ! 
\end{split } \ ] ] the superposition and tdma regions can also be written as follows : }}^+ } , \quad \forall { { { \mathcal{s}}}}\subseteq { { { \mathcal{k}}}}\bigr \ } \end{split } \\ \begin{split } { { \ensuremath{{{\mathcal{g}}}^{\scriptscriptstyle \text{\text{ma - tdma}}}}\xspace}}({{{\mathbf{p}}}},{\boldsymbol{\alpha } } ) = \bigl \ { { { { \mathbf{r}}}}\colon \hspace{-1.2 in } & \\ & { { { r}^s}}_k + { { { r}^o}}_k \le \alpha_k { c^{{{{\scriptscriptstyle \text{m}}}}}}_k { \left({\frac{{\bar{p}}_k}{\alpha_k}}\right ) } , \quad \forall k \in { { { \mathcal{k}}}}\\ & { { { r}^s}}_k \le \alpha_k { { { \left [ { { c^{{{{\scriptscriptstyle \text{m}}}}}}_k { \left({\frac{{\bar{p}}_k}{\alpha_k}}\right ) } { - } { c^{{{{\scriptscriptstyle \text{w}}}}}}_k{\left({\frac{{\bar{p}}_k}{\alpha_k}}\right)}}\right ] } } ^+}\ ! , \ , \forall k \in { { { \mathcal{k}}}}\bigr \ } \end{split}\end{gathered}\ ] ] in accordance with the definitions in . [ thm : macach ]the rate region given below is achievable for the ggmac - wt : we first show that the superposition encoding rate region given in for a fixed power allocation is achievable . consider the following coding scheme for rates for some : * superposition encoding scheme : * for each user , consider the following scheme : generate codebooks and . consists of codewords , each component of which is drawn from .codebook has codewords with each component randomly drawn from and has codewords with each component randomly drawn from where is an arbitrarily small number to ensure that the power constraints on the codewords are satisfied with high probability and .define and . to transmit message , user finds the codewords corresponding to components of and also uniformly chooses a codeword from .user then adds all these codewords and transmits the resulting codeword , , so that it actually transmits one of codewords .let .note that since all codewords are chosen uniformly , user essentially transmits one of codewords at random for each message , and its overall rate of transmission is .specifically , we choose the rates to satisfy ^+ \ ! , \ , \forall { { { \mathcal{s}}}}{\subseteq } { { { \mathcal{k}}}}\!\end{aligned}\ ] ] which we can also write as : ^+\ ! , \quad \forall { { { \mathcal{s}}}}{\subseteq } { { { \mathcal{k}}}}. \label{eqn : maccolsec}\end{aligned}\ ] ] note that if is zero for a group of users , we can not achieve secrecy for those users . when , if the sum - capacity of the main channel is less than that of the eavesdropper channel , i.e. , , secrecy is not possible for the system .assume this quantity is positive . to ensure that we can mutually satisfy both ,, we can reclassify some open messages as secret .clearly , if we can guarantee secrecy for a larger set of messages , secrecy is achieved for the original messages . from the first set of conditions in and the gmac coding theorem , , with high probability the receiver can decode the codewords with low probability of error . 
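before turning to the secrecy condition , it is worth making the rate quantities above concrete . the constraints of the achievable region are partly garbled by the extraction , but the quantities they are built from are stated explicitly : the main - channel sum - capacity c^m_s(p) = 1/2 log(1 + sum_{k in s} p_k) and the eavesdropper sum - capacity c^w_s(p) = 1/2 log(1 + sum_{k in s} h_k p_k) in standard form . the short python sketch below evaluates these per - subset quantities and the collective secrecy sum - rate [ c^m_k(p) - c^w_k(p) ]^+ that we read theorem [ thm : macach ] to achieve with superposition coding ; the function names , the base-2 logarithms and the example powers and gains are illustrative assumptions rather than values from the paper .

from itertools import combinations
from math import log2

def c_main(users, p):
    # main-channel sum-capacity c^m_s(p) = 1/2 log2(1 + sum_{k in s} p_k), in bits
    return 0.5 * log2(1.0 + sum(p[k] for k in users))

def c_wiretap(users, p, h):
    # eavesdropper sum-capacity c^w_s(p) = 1/2 log2(1 + sum_{k in s} h_k p_k), in bits
    return 0.5 * log2(1.0 + sum(h[k] * p[k] for k in users))

def secrecy_sum_rate(p, h):
    # collective secrecy sum-rate [ c^m_k(p) - c^w_k(p) ]^+ over all users
    all_users = range(len(p))
    return max(0.0, c_main(all_users, p) - c_wiretap(all_users, p, h))

# assumed example: two users in standard form, user 1 "strong" (h < 1), user 2 "weak" (h > 1)
p = [2.0, 1.0]
h = [0.3, 1.5]
for size in range(1, len(p) + 1):
    for s in combinations(range(len(p)), size):
        print(s, round(c_main(s, p), 3), round(c_wiretap(s, p, h), 3))
print("secrecy sum-rate:", round(secrecy_sum_rate(p, h), 3))

since the region constraints run over all nonempty subsets , the same two helpers are enough to check numerically whether a candidate pair of secrecy and open rate vectors satisfies every bound of the superposition or tdma region for a given power allocation .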
to show the secrecy condition in , first note that, the coding scheme described is equivalent to each user selecting one of messages , and sending a uniformly chosen codeword from among codewords for each .define , and we have where we used , and thus we have to get .we will consider the two terms individually .first , we have the trivial bound due to channel capacity : now write since user independently sends one of codewords equally likely for each secret message , we can also write where as since , with high probability , the eavesdropper can decode given due to and code generation . using , , and in , we get now , let us consider the tdma region given in . this region is obtained when users who can achieve single - user secrecy use a single - user wire - tap code as in in a tdma schedule , where the time - share of each user is given by and . a transmitter who can achieve secrecy , i.e. , having , tranmits for portion of the time when all other users are silent , using power , satisfying its average power constraint over the tdma time - frame .this approach was used in to achieve secrecy sum - capacity for individual constraints . when the channel is degraded , i.e. , for all , then for collective constraints the tdma region is seen to be a subset of the superposition region . however , this is not necessarily true for the general case , and by time - sharing between the two schemes we can generally achieve a larger achievable region , given in .we remark that it is possible to further divide the open " messages to get more sets of private " messages which are also perfectly secret , i.e. , if we let , then as long as we impose the same restrictions on as , we can achieve perfect secrecy of , as in .however , this does not mean that we have perfect secrecy at channel capacity , as the secrecy sub - codes carry information about each other . .] observe that even for users , a rate point in this region is four dimensional , and hence can not be accurately drawn .we can instead focus on the _ secrecy rate region _, the region of all achievable .the sub - regions are shown for different channel gains in figure [ fig : ggmacwtreg ] for fixed transmit powers , and users .figure [ fig : ggmacwtureg ] represents how these regions change with different transmit powers when the channel gains are fixed . for the caseshown , we need the convex hull operation , as the achievable region is a combination of different superposition and tdma regions .note also that the main extra condition for the superposition region is on the _ total extra randomness _ added . as a result , it is possible for stronger " users to help weak " users by contributing more to the necessary extra number of codewords , which is the sum - capacity of the eavesdropper .such a weak user only has to make sure that it is not single - user decodable , provided the stronger users are willing to sacrifice some of their own rate and generate more superfluous codewords . in other words, we see that users in a set are further protected from the eavesdropper by the fact that users in set are also undecodable , compared to the single - user case .the tdma region , on the other hand , does not allow users to help each other this way .as such , only users whose channel gains allow them to achieve secrecy on their own are allowed to transmit . .] for the special degraded case of , the perfect secrecy rate region for becomes the region given by ( * ? ? 
?* theorem 1 ) for .we also observe that even though there is a limit on the secrecy sum - rate achieved by our scheme , it is possible to send open messages to the intended receiver at rates such that the sum of the secrecy rate and open rate for all users is in the capacity region of the mac channel to the intended receiver . even though we can not send at capacity with secrecy , the codewords used to confuse the eavesdropper may be used to communicate meaningful information to the intended receiver . in this section ,we present an achievable region for the gtw - wt using a superposition coding similar to that used to achieve the region for the ggmac - wt .we first define let .then , the gtw - wt superposition region , , is given by ^+ \!\ !, \ , \forall { { { \mathcal{s}}}}{\subseteq } { { { \mathcal{k}}}}\biggr \ } \end{split}\ ] ] which can be written as ^+ \!\ ! , \ , \forall { { { \mathcal{s}}}}{\subseteq } { { { \mathcal{k}}}}\biggr \}. \ ! \end{split}\ ] ] we can also write this region more compactly as the following : ^+\ ! , \quad \forall { { { \mathcal{s}}}}{\subseteq } { { { \mathcal{k}}}}\biggr\}. \end{split}\ ] ] [ thm : twach ] the rate region given below is achievable for the gtw - wt : . ]the proof is very similar to the proof of theorem [ thm : macach ] .we use the same coding scheme as theorem [ thm : macach ] , the main difference is that we choose the rates to satisfy ^+ \ ! , \ ; \forall { { { \mathcal{s}}}}{\subseteq } { { { \mathcal{k}}}}\end{aligned}\ ] ] or equivalently ^+ , \quad \forall { { { \mathcal{s}}}}\subseteq { { { \mathcal{k}}}}.\end{aligned}\ ] ] assuming is positive .the decodability of from comes from and the capacity region of the gaussian two - way channel .this gives the first set of terms in the achievable region .the key here is that since each transmitter knows its own codeword , it can _ subtract its self - interference _ from the received signal and get a clear channel .therefore , the gaussian two - way channel decomposes into two parallel channels .the second group of terms in , resulting from the secrecy constraint , can be shown the same way as the proof of theorem [ thm : macach ] , since has the same form for both channels .in other words , as far as the eavesdropper is concerned , the channel is still a gmac with users . as such, we need to send extra codewords in total , which need to be shared by the two - terminals provided they are not single - user decodable . .] for different channel gains , the region of all satisfying is shown in figure [ fig : gtwwtreg ] .since we require four dimensions for an accurate depiction of the complete rate region , we only focus on our main interest , i.e. 
, the secrecy rate region .figure [ fig : gtwwtureg ] shows the achievable secrecy rate region as a function of transmit powers .we note that higher powers always result in a larger region .we indicate the constraint on the overall rates , corresponding to the capacity region of the gaussian two - way channel , by the dotted line .note that the secrecy region has a structure similar to the ggmac - wt with .as far as the eavesdropper is concerned , there is no difference between the two channels .however , since the main channel between users decomposes into two parallel channels , higher rates can be achieved between the legitimate terminals ( users ) .thus , in effect , each user s transmitted codewords act as a _ secret key _ for the other user s transmitted codewords , requiring fewer extraneous codewords overall to confuse the eavesdropper , and a larger secrecy region .we note that a user may either achieve secrecy or not , depending on whether it is single - user decodable or not . as a result, tdma does not enlarge the region , since each user can at least achieve their single - user secrecy rates . to see this , note that the constraint on the secrecy sum - rate can be written as : so that transmitting in the two - way channel always provides an advantage over the single - user channels .the achievable regions given in theorems [ thm : macach ] and [ thm : twach ] depend on the transmit powers .we are , thus , naturally interested in the power allocation that would maximize the total secrecy sum - rate .recall that the standardized channel gain for user is , and that the higher is , the better the corresponding eavesdropper channel . without loss of generality , assume that users are ordered in terms of increasing standardized eavesdropper channel gains , i.e. , .note that , we only need to concern ourselves with the case , since we can combine users with the same channel gains into one super - user .we can then split the resulting optimum power allocation for a super - user among the actual constituting users in any way we choose , since they would all result in the same sum - rate .in addition , from a physical point of view , assuming that the channel parameters are drawn according to a continuous distribution and then fixed , the probability that two users would have the same exact standardized channel gain is zero .we first examine the superposition region given in .the secrecy sum - rate achievable with superposition coding for the ggmac - wt was given in theorem [ thm : macach ] as }}^+ } \!\!\ ] ] and we would like to find the power allocation that maximizes this quantity .stated formally , we are interested in the transmit powers that solve the following optimization problem : } } \notag \hspace{-1.5 in } & \\ & = \min_{{{{\mathbf{p}}}}\in { { { \mathcal{p } } } } } \ ; { \frac{1}{2}}\log { \phi}_{{{\mathcal{k}}}}({{{\mathbf{p } } } } ) \\ & \equiv \min_{{{{\mathbf{p}}}}\in { { { \mathcal{p } } } } } \ ; { \phi}_{{{\mathcal{k}}}}({{{\mathbf{p } } } } ) \label{eqn : macsumprob1}\end{aligned}\ ] ] where and yields . in obtaining , we simply used the monotonicity of the function .the solution to this problem is given below : [ thm : summac ] the secrecy sum - rate maximizing power allocation for satisfies if and is where is some limiting user satisfying and we define . 
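the closed - form condition of theorem [ thm : summac ] is not recoverable from the garbled span above , but its structure is : order the users by increasing standardized eavesdropper gain , let a prefix of that ordering transmit at full power , and keep the remaining users silent . the sketch below is a brute - force search consistent with that structure ; the objective [ c^m(p) - c^w(p) ]^+ is our reading of the ( garbled ) secrecy sum - rate expression , and all numerical values are assumptions made for illustration .

from math import log2

def sum_secrecy_rate(powers, gains):
    # [ c^m(p) - c^w(p) ]^+ in bits per channel use for a given power vector
    cm = 0.5 * log2(1.0 + sum(powers))
    cw = 0.5 * log2(1.0 + sum(h * p for h, p in zip(gains, powers)))
    return max(0.0, cm - cw)

def best_prefix_allocation(p_max, h):
    # search the prefix sets of the users ordered by increasing h (the structure
    # the theorem establishes), each transmitting at full power; silent users get 0
    order = sorted(range(len(h)), key=lambda k: h[k])
    best_rate, best_p = 0.0, [0.0] * len(h)
    for l in range(1, len(h) + 1):
        p = [0.0] * len(h)
        for k in order[:l]:
            p[k] = p_max[k]
        r = sum_secrecy_rate(p, h)
        if r > best_rate:
            best_rate, best_p = r, p
    return best_rate, best_p

# assumed example: user 1 is "strong" (h < 1), user 2 is "weak" (h > 1)
print(best_prefix_allocation([2.0, 2.0], [0.3, 1.4]))

for these example values the search keeps only the strong user , at full power , which matches the observation that follows .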
note that this allocation shows that only a _ subset of the strong users _ must be transmitting .we start with writing the lagrangian to be minimized : equating the derivative of the lagrangian to zero , we get where we define for any set . it is easy to see that if , then , and we have . if , then we similarly find that . finally , if , then we also have and does not depend on , so we can set with no effect on the secrecy sum - rate . thus , we have if , and if .then , the optimal set of transmitters is of the form since if a user is transmitting , all users such that must also be transmitting .we also note that .let be the last user satisfying this property , i.e. and .note that in other words , all sets for also satisfy this property , and are viable candidates for the optimal set of transmitting users .therefore , we can claim that is the optimum set of transmitting users , since from above we can iteratively see that for all . note that , for the special case of users , the optimum power allocation is we also need to consider the tdma region . in this case , the maximum achievable secrecy sum - rate is : }}. \ ] ] this is a simple complex optimization problem that can easily be solved numerically .for the degraded case , we can obtain a closed form solution : as in .in general , we can not obtain such a solution .however , it is trivial to note that users with should not be transmitting in this scheme .the secrecy sum - rate is then the maximum of the solutions given by the superposition and tdma regions .now , we will examine the power allocation that maximizes the secrecy sum - rate given in theorem [ thm : twach ] as ^+ \!.\ ] ] this problem is formally stated below : } } \\\equiv \min_{{{{\mathbf{p}}}}\in { { { \mathcal{p } } } } } \ ; { \psi}_{{{\mathcal{k}}}}({{{\mathbf{p } } } } ) \label{eqn : twsumprob1}\end{gathered}\ ] ] where and yields .the optimum power allocation is stated below : [ thm : sumtw ] the secrecy sum - rate maximizing power allocation for the gtw - wt is given by the lagrangian is , equating the derivative of the lagrangian to zero for user , we get where an argument similar to the one for the ggmac - wt establishes that if , or equivalently if , then .when equality is satisfied , then regardless of , and as such can be seen to not depend on .to conserve power , we again set in this case . on the other hand , if , then .consider user 1 .if , and , this implies that . since , we can not have . as a consequence of this contradiction, we see that whenever .assume , and consider the two alternatives for .we will have if ; and if .these cases correspond to and , respectively .thus , we have as the secrecy sum - rate maximizing power allocation .observe that the solution in theorem [ thm : sumtw ] has a structure similar to that in theorem [ thm : summac ] . in summary, it is seen that as long as a user is not single - user decodable , it should be transmitting with maximum power .hence , when both users can be made to be non - single - user decodable , then the maximum powers will provide the largest secrecy sum - rate .if this is not the case , then the user who is single - user decodable can not transmit with non - zero secrecy and will just make the secrecy sum - rate constraint tighter for the remaining user by transmitting open messages . 
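for the gtw - wt , theorem [ thm : sumtw ] is likewise summarized in words above : a user transmits at full power as long as it is not single - user decodable at the eavesdropper , and otherwise stays silent . following the parallel - channel decomposition argument , the sketch below assumes the achievable secrecy sum - rate to be [ 1/2 log(1+p_1) + 1/2 log(1+p_2) - 1/2 log(1 + h_1 p_1 + h_2 p_2) ]^+ and simply compares the four on / off power combinations , which is enough to reproduce the stated rule ; the gains and power limits are illustrative assumptions .

from math import log2

def gtw_sum_secrecy_rate(p, h):
    # assumed gtw-wt secrecy sum-rate: the main two-way channel decomposes into
    # two parallel single-user channels, while the eavesdropper still sees a
    # two-user gaussian mac:
    # [ 1/2 log(1+p_1) + 1/2 log(1+p_2) - 1/2 log(1 + h_1 p_1 + h_2 p_2) ]^+
    cm = sum(0.5 * log2(1.0 + pk) for pk in p)
    cw = 0.5 * log2(1.0 + sum(hk * pk for hk, pk in zip(h, p)))
    return max(0.0, cm - cw)

def best_gtw_allocation(p_max, h):
    # each user either transmits at full power or stays silent, so checking the
    # four on/off combinations reproduces the rule stated in the theorem
    best = (0.0, (0.0, 0.0))
    for a in (0.0, p_max[0]):
        for b in (0.0, p_max[1]):
            r = gtw_sum_secrecy_rate((a, b), h)
            if r > best[0]:
                best = (r, (a, b))
    return best

# assumed example: both standardized eavesdropper gains exceed 1
print(best_gtw_allocation((5.0, 5.0), (1.2, 1.3)))

in this example neither user has a positive single - user secrecy rate , yet transmitting simultaneously at full power yields a positive secrecy sum - rate , which is exactly the mutual key - like protection discussed above .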
comparing to , we see that the same form of solutions is found , but the range of channel gains where transmission is possible is larger , showing that gtw - wt allows secrecy even when the eavesdropper s channel is not very weak .in the previous section , we found the secrecy sum - rate maximizing power allocations .for both the ggmac - wt and gtw - wt , if the eavesdropper is not disadvantaged enough " for some users ,then these users transmit powers are set to zero .we posit that such a user may be able to help " a transmitting user , since it can cause more harm to the eavesdropper than to the intended receiver .we only consider the superposition region , since in the tdma region a user has a dedicated time - slot , and hence does not affect the others .we will next show that this type of cooperative behavior is indeed useful , notably exploiting the fact that the established achievable secrecy sum - rate is a difference of the sum - capacity expressions for the intended channel(s ) and the eavesdropper s channel . as a result , reducing the latter more than the former actually results in an _ increase _ in the achievable secrecy sum - rate .formally , the scheme we are considering implies partitioning the set of users , into a set of transmitting users , and a set of jamming users . if a user is jamming , then it transmits instead of codewords . in this case, we can show that we can achieve higher secrecy rates when the weaker " users are jamming .we also show that the gtw - wt , has an additional advantage compared to the ggmac - wt , that is the fact that the receiver already knows the jamming sequence . as such, this scheme only harms the eavesdropper and not the intended receivers , achieving an even higher secrecy sum - rate . once again , without loss of generality , we consider .in addition , we will assume that a user can either take the action of transmitting its information or jamming the eavesdropper , but not both .it is readily shown in section [ sec : jammac ] below that we do not lose any generality by doing so , and that splitting the power of a user between the two actions is suboptimal from the secrecy sum - rate maximization point of view . the problem is formally presented below : \\ & \hspace{.2 in } \equiv \label{eqn : jammac } \min_{{{{\mathcal{t}}}}\subseteq { { { \mathcal{k } } } } , { { { \mathbf{p}}}}\in { { { \mathcal{p } } } } } \frac{{\phi}_{{{\mathcal{k}}}}({{{\mathbf{p}}}})}{{\phi}_{{{{{\mathcal{t}}}}^c}}({{{\mathbf{p}}}})}\end{aligned}\ ] ] where we recall that is given by , such that to see that a user should not be splitting its power among jamming and transmitting , it is sufficient to note that regardless of how a user splits its power , will be the same , and the user only affects .assume the optimum solution is such that user splits its power , so and .then , it is easy to see that if , the sum - rate is increased when that user uses its jamming power to transmit , and when , the sum - rate is increased when the user uses its transmit power to jam . 
when , then regardless of how its power is split , the sum - rate is the same , and we can assume user either transmits or jams .note that we must have to have a non - zero secrecy sum - rate , and to have an advantage over not jamming .this scheme can be shown to achieve the following secrecy sum - rate : [ thm : jammac ] the secrecy sum - rate using cooperative jamming is where is the set of transmitters and the optimum power allocation is of the form with }}^+}\ ] ] and whenever the positive real root exists , and otherwise .we first solve the subproblem of finding the optimal power allocation for a set of given transmitters , .the solution to this will also give us insight into the structure of the optimal set of transmitters , .we start with writing the lagrangian : the derivative of the lagrangian depends on the user : since a user satisfies , it must have .consider a user .the same argument as in the sum - rate maximization proof leads to if and if .now examine a user .we can write as where let then , we have iff , and iff .thus , we again find that we must have for all .also , if , then .only if , can we have .now , since , we must have .thus , we find that .then , we know that for a given set of transmitters , , the solution is such that all users transmit with power if . in the set of jammers , all users have , and when this inequality is not satisfied with equality , the jammers jam with maximum power .if the equality is satisfied for some users , their jamming powers can be found from solving . by rearranging terms in ,we note that the optimum power allocation for this user , call it user , is found by solving the quadratic the solution of which is given in .note that defines an ( upright ) parabola .if the root given in exists and is positive , then .this comes from the fact that if , then for all , and we must have .if , on the other hand , gives a complex or negative solution , then the parabola does not intersect the axis , and is always positive .hence , , and does not belong to , i.e. . the form of this solution is intuitively pleasing , since it makes more sense for weaker " users to jam as they harm the eavesdropper more than they do the intended receiver .what we see is that all transmitting users , such that , transmit with maximum power as long as their standardized channel gain is less than some limit , and all jamming users must have .we claim that all users in must have and all users in have . to make this argument, we need to show that a such that there exists some with and such that can not be the optimum set . to see this ,let be the optimum power allocation for a set .consider a new power allocation and set such that , i.e. , user is now jamming , and let , and , for some small .we then have which is a lower value for the objective function , proving that is not optimum .this shows that all users must have for all users .since the last user in has , necessarily for all , and for all . summarizing , the optimum power allocation is such that there is a set of transmitting users with for , there is a set of silent users , and a set of jamming users with for and is found from .this is what is presented in the statement in theorem [ thm : jammac ] .note that to find , we can simply do an exhaustive search as we have narrowed the number of possible optimal sets to instead of and found the optimal power allocations for each .* two - user ggmac - wt : * for illustration purposes , let us consider the familiar case with transmitters . 
in this case , we know that either user 2 jams , or no user does . the solution can be found from comparing the two cases .if , without jamming , user can transmit , then it is optimal for it to continue to transmit , and jamming will not improve the sum - rate .otherwise , user 2 may be jamming to improve the secrecy rate of user 1 .the optimum power allocation for user 1 is equivalent to if and if .the power for user is found from .for users , we can simply write as where }}.\end{aligned}\ ] ] with different for .the circles indicate optimum jamming power . ] if , we automatically have . in addition , we have , so we only need to concern ourselves with the possibly positive root , .we first find when .we see that for all if , equivalent to having two negative roots , or , equivalent to having no real roots of .now examine when .this is possible if and only if .since , this happens only when or .however , if , we are better off transmitting than jamming .the last case to examine is when .this implies that , and is satisfied when .assume . in this case ,we are guaranteed .if , then we must have since the secrecy rate is 0 .we would like to find when we can have . since , we must have , and .this implies .it is easy to see that if and otherwise. thus , for users , the solution simplifies to : }}^+ } ) , & \text{if } h_1 { \le } 1 , h_2 { > } 1\\ ( 0,0 ) , & \text{if } h_1 { \ge } 1 , h_2 { \ge } h_1 , \frac{h_1 - 1}{h_2-h_1}{\ge } { \bar{p}}_1 \\ ( { \bar{p}}_1,\min { { \left\ { { p,{\bar{p}}_2}\right\ } } } ) , & \text{if } h_1 { \ge } 1 , h_2 { > } h_1 , \frac{h_1 - 1}{h_2-h_1 } { < } { \bar{p}}_1 \end{cases } \hspace{-\multlinegap}\end{gathered}\ ] ] where }}}}{h_2 ( h_2{-}h_1)}.\ ] ] this solution can be checked to be in accordance with the sum - rate maximizing power allocation of theorem [ thm : summac ] .we note that in the case unaccounted for in , i.e. , when and , both users should be transmitting . in general, the solution shows that the weaker " user should jam if it is not single - user decodable , and if it has enough power to make the other user strong " in the new effective channel . with different for .the circles indicate optimum jamming power . ] once again , we propose to maximize the secrecy sum - rate using cooperative jamming when useful .this problem is formally stated as follows : }}\\ \equiv \min_{{{{\mathcal{t}}}}\subseteq { { { \mathcal{k } } } } } \min_{{{{\mathbf{p}}}}\in { \bar { { { \mathcal{p } } } } } } \frac{{\psi}_{{{\mathcal{k}}}}({{{\mathbf{p}}}})}{{\psi}_{{{{{\mathcal{t}}}}^c}}({{{\mathbf{p } } } } ) } \label{eqn : jamtw}\end{gathered}\ ] ] where we recall that is given by and note that since there are only two terminals . a similar argument to the ggmac - wt casecan easily be used to establish that we can assume a user to be either transmitting or jamming , but not both . since the jamming user is also the receiver that the other user is communicating with and knows the transmitted signal , this scheme entails no loss of capacity as far as the transmitting user is concerned .the optimum power allocations are given as follows .[ thm : jamtw ] the achievable secrecy sum - rate for the the collaborative scheme described is where is the set of transmitting users and the optimum power allocations are given by & 1{<}h_1{<}1{+}h_2{\bar{p}}_2 , \\ & { \psi}_2({\bar{{{\mathbf{p}}}}}){>}{\psi}_1({\bar{{{\mathbf{p } } } } } ) \end{aligned } \\ \ !( { \bar{p}}_1,{\bar{p}}_2 ) , \ , \text{2 transm . 
, 1 jams , } & \!\!\text{if } \begin{aligned}[t ] & 1{<}h_1{<}h_2{<}1{+}h_1{\bar{p}}_1 , \\ & { \psi}_1({\bar{{{\mathbf{p}}}}}){>}{\psi}_2({\bar{{{\mathbf{p } } } } } ) \end{aligned}\\ \ ! ( 0,0 ) , & \!\ !\text{otherwise } \end{cases } \hspace{-\multlinegap}\end{gathered}\ ] ] similar to the ggmac - wt , we start with the sub - problem of finding the optimal power allocation given a jamming set .the lagrangian is given by taking the derivative we have since a user satisfies , it must have .consider user .we again argue that if , then and if , then .now examine a user .it is easy to see that since such a user only harms the jammer , the optimal jamming strategy should have , i.e. , the maximum power .this can also be seen by noting that for this case simplifies to and hence we must have for all . the jamming set will be one of , since there is no point in jamming when there is no transmission . also , if any of the two users is jamming , by the argument above , , .we can easily see that jamming by a user only offers an advantage if , i.e. , iff for .thus , when , both users should be transmitting instead of jamming . however , when any user has , jamming always does better than the case when both users are transmitting . in this case , for some user , and the objective function in is minimized when this user is jamming , and the other one is transmitting .if , however , , then it will not transmit , and we should not be jamming . consolidating all of these results ,we come up with the power allocation in in theorem [ thm : jamtw ] . with different for .] a sufficient , but not necessary condition for the weaker user to be the jamming user is if ; this case corresponds to having higher snr at the eavesdropper for the original , non - standardized model .this can be interpreted as jam with maximum power if it is possible to change user 1 s effective channel gain such that it is no longer single - user decodable " . for the simple case of equal power constraints , , it is easily seen that user 1 should never be jamming .the optimal power allocation in that case reduces to this section , we present numerical results to illustrate the achievable rates obtained , as well as the cooperative jamming scheme and its effect on achievable secrecy sum - rates .as mentioned earlier in the paper , examples of achievable secrecy rate regions are given in figures [ fig : ggmacwtureg ] and [ fig : gtwwtureg ] for the ggmac - wt with and gtw - wt respectively .comparing figures [ fig : ggmacwtureg ] and [ fig : gtwwtureg ] , we see that the gtw - wt achieves a larger secrecy rate region then the ggmac - wt , and offers more protection to weak " users . in addition , tdma does not enlarge the achievable region for gtw - wt since superposition coding always allows users to achieve their single - user secrecy rates for any transmit power .let us have a closer look at the secrecy advantage of the two - way channel over the mac with two users .for the ggmac - wt with , the achievable maximum secrecy sum - rate , is limited by the channel parameters .it was shown in that for the degraded case , , the secrecy sum - capacity , , is an increasing function of the total sum power , .however , it is limited since as . for the general case , where , theorem [ thm : summac ] implies that the sum - rate is maximized when only user 1 transmits ( assuming ) , and is bounded similarly by . 
on the other hand , for the gtw - wt , unlike the ggmac - wt , it is possible to increase the secrecy capacity by increasing the transmit powers .this mainly stems from the fact that the users now have the extra advantage over the eavesdropper that they know their own transmitted codewords . in effect, _ each user helps encrypt the other user s transmission_. to see this more clearly , consider the symmetric case where and , which makes all users receive a similarly noisy version of the same sum - message .the only disadvantage the eavesdropper has , is that he does not know any of the codewords whereas user knows . in this case, is achievable , and this rate approaches as .thus , it is possible to achieve a secrecy - rate increase at the same rate as the increase in channel capacity .next , we examine the secrecy sum - rate maximizing power allocations and optimum powers for the cooperative jamming scheme .figures [ fig : ggmacwtjamp2 ] and [ fig : ggmacwtjamp100 ] show the achievable secrecy rate improvement for the cooperative jamming scheme for various channel parameters for the ggmac - wt with .the plots are the secrecy rates for user 1 when user 2 is jamming with a given power , which correspond to user 1 s single - user secrecy _ capacity _ , , since only one user is transmitting .when , the secrecy capacity is seen to be zero , unless user has enough power to _ convert _ user 1 s re - standardized channel gain to less than 1 . for the gtw - wt , it is always optimal for user to jam as long as it enables user to transmit , as seen in figure [ fig : twwtjamp2 ] .the results show , as expected , that secrecy is achievable for both users so long as we can keep the eavesdropper from single - user decoding the transmitted codewords by treating the remaining user as noise .since the coding schemes considered here assume knowledge of eavesdropper s channel gains , applications are limited .one practical application could be securing of a physically protected area such as inside a building , when the eavesdropper is known to be outside .in such a case we can design for the worst case scenario .an example is given in figure [ fig : ggmacwtcoljam ] for the ggmac - wt , where we assume a simple path - loss model and fixed locations for two transmitters ( t ) and one receiver ( r ) at the center .we examine the transmit / jam powers for this area when the eavesdropper is known to be at using a fixed path - loss model for the channel gains , and plot the transmit / jam powers and the achieved secrecy sum - rates as a function of the eavesdropper location .it is readily seen that when the eavesdropper is close to the bs , the secrecy sum - rate falls to zero . also , when the eavesdropper is in the vicinity of a transmitter , that transmitter can not transmit in secrecy .however , in this case , the transmitter can jam the eavesdropper effectively , and allow the other transmitter to transmit and/or increase its secrecy rate with little jamming power .the situation for the gtw - wt is similar , and is shown in figure [ fig : gtwwtcoljam ] . 
in this case , jamming is more useful as compared to the ggmac - wt , and we see that it is possible to provide secrecy for a much larger area where the eavesdropper is located , as the jamming signal does not hurt the intended receiver .in this paper , we have considered the gaussian multiple access and two - way channels in the presence of an external eavesdropper who receives the transmitted signals through a multiple - access channel , and provided achievable secrecy rates .we have shown that the multiple - access nature of the channels considered can be utilized to improve the secrecy of the system .in particular , we have shown that the total extra randomness is what matters mainly concerning the eavesdropper , rather than the individual randomness in the codes .as such , it may possible for users whose single - user wire - tap capacity are zero , to communicate with non - zero secrecy rate , _ as long as it is possible to put the eavesdropper at an overall disadvantage_. this is even clearer for two - way channels , where even though the eavesdropper s channel gain may be better than a terminal s , the extra knowledge of its own codeword by that terminal enables communication in perfect secrecy as long as the eavesdropper s received signal is not strong enough to allow single - user decoding .we found achievable secrecy rate regions for the general gaussian multiple - access wire - tap channel ( ggmac - wt ) and the gaussian two - way wire - tap channel ( gtw - wt ) .we also showed that for the ggmac - wt the secrecy sum - rate is maximized when only users with strong " channels to the intended receiver as opposed to the eavesdropper transmit , and they do so using all their available power .for the gtw - wt , the sum - rate is maximized when both terminals transmit with maximum power as long as the eavesdropper s channel is not good enough to decode them using single - user decoding . finally , we proposed a scheme termed _ cooperative jamming _ , where a disadvantaged user may help improve the secrecy rate by jamming the eavesdropper .we found the optimum power allocations for the transmitting and jamming users , and showed that significant rate gains may be achieved , especially when the eavesdropper has much higher snr than the receivers and normal secret communications is not possible .the gains can be significant for both the ggmac - wt and gtw - wt .this cooperative behavior is useful when the maximum secrecy sum - rate is of interest .we have also contrasted the secrecy rates of the two channels we considered , noting the benefit of the two - way channels where the fact that each receiver has perfect knowledge of its transmitted signal brings an advantage with each user effectively encrypting the communications of the other user . in this paper , we only presented achievable secrecy rates for the ggmac - wt and gtw - wt .the secrecy capacity region for these channels are still open problems . in , we also found an upper bound for the secrecy sum - rate of the ggmac - wt and noted that the achievable secrecy sum - rate and the upper bound we found only coincide for the degraded case , so that we have the secrecy sum - capacity for the degraded gmac - wt . 
even though there is a gap between the achievable secrecy sum - rates and upper bounds , cooperative jammingwas shown in to give a secrecy sum - rate that is close to the upper bound in general .finally , we note that the results provided are of mainly theoretical interest , since as of yet there are no currently known practical codes for multi - access wire - tap channels unlike the single - user case where in some cases practical codes have been shown to be useful for the wire - tap channel , .furthermore , accurate estimates of the eavesdropper channel parameters are required for code design for wire - tap channels where the channel model is quasi - static , as in our models considered in this paper .e. tekin and a. yener , `` achievable rates for the general gaussian multiple access wire - tap channel with collective secrecy , '' in _ proc .allerton conf ._ , monticello , il , sep 2729 2006 , [ online . ]available : http://arxiv.org/abs/cs/0612088 .u. maurer and s. wolf , `` information - theoretic key agreement : from weak to strong secrecy for free , '' in _ proceedings of eurocrypt 2000 , lecture notes in computer science _ ,1807.1em plus 0.5em minus 0.4em springer - verlag , 2000 , pp .351368 .u. maurer and s. wolf , `` secret - key agreement over unauthenticated public channels - part i : definitions and a completeness result , '' _ ieee trans .inform . theory _49 , no . 4 , pp . 822831 , apr 2003 .r. liu , i. maric , r. d. yates , and p. spasojevic , `` discrete memoryless interference and broadcast channels with confidential messages , '' in _ proc .allerton conf ._ , monticello , il , sep 27 - 29 2006 .y. liang and v. poor , `` generalized multiple access channels with confidential messages , '' _ ieee trans .inform . theory _ , submitted for publication , [ online .] available : http://arxiv.org/format/cs.it/0605014 .r. liu , i. maric , r. d. yates , and p. spasojevic , `` the discrete memoryless multiple access channel with confidential messages , '' in _ proc .inform . theory ( isit )_ , seattle , wa , jul 9 - 14 2006 .a. thangaraj , s. dihidar , a. r. calderbank , s. mclaughlin , and j .-merolla , `` applications of ldpc codes to the wiretap channel , '' _ ieee trans .53 , no .29332945 , aug 2007 .m. bloch , j. barros , m. r. d. rodrigues , and s. w. mclaughlin , `` wireless information - theoretic security - part ii : practical implementation , '' _ ieee trans .inform . theory _ ,submitted for publication .
the general gaussian multiple access wire - tap channel ( ggmac - wt ) and the gaussian two - way wire - tap channel ( gtw - wt ) are considered . in the ggmac - wt , multiple users communicate with an intended receiver in the presence of an eavesdropper who receives their signals through another gmac . in the gtw - wt , two users communicate with each other over a common gaussian channel , with an eavesdropper listening through a gmac . a secrecy measure that is suitable for this multi - terminal environment is defined , and achievable secrecy rate regions are found for both channels . for both cases , the power allocations maximizing the achievable secrecy sum - rate are determined . it is seen that the optimum policy may prevent some terminals from transmitting in order to preserve the secrecy of the system . inspired by this construct , a new scheme , _ cooperative jamming _ , is proposed , where users who are prevented from transmitting according to the secrecy sum - rate maximizing power allocation policy " jam " the eavesdropper , thereby helping the remaining users . this scheme is shown to increase the achievable secrecy sum - rate . overall , our results show that in multiple - access scenarios , users can help each other to collectively achieve positive secrecy rates . in other words , cooperation among users can be invaluable for achieving secrecy for the system . index terms : secrecy capacity , gaussian multiple access channel , gaussian two - way channel , wire - tap channel , confidential messages
the last century was marked by the birth of two major scientific and engineering disciplines : silicon - based computing and the theory and technology of genetic data analysis .the research field very likely to dominate the area of scientific computing in the foreseeable future is the merger of these two disciplines , leading to unprecedented possibilities for applications in varied areas of engineering and science .the first steps toward this goal were made in 1994 , when leonard adleman solved a quite unremarkable computational problem , an instance of the directed travelling salesmen problem on a graph with seven nodes , with an exceptional method .the technique used for solving the problem was a new technological paradigm , termed dna computing .dna computing introduced the possibility of using genetic data to tackle computationally hard classes of problems that are otherwise impossible to solve using traditional computing methods .the way in which dna computers make it possible to achieve this goal is through massive parallelism of operation on nano - scale , low - power , molecular hardware and software systems .one of the major obstacles to efficient dna computing , and more generally dna storage and signal processing , is the very low reliability of single - stranded dna sequence operations .dna computing experiments require the creation of a controlled environment that allows for a set of single - stranded dna codewords to bind ( hybridize ) with their complements in an appropriate fashion .if the codewords are not carefully chosen , unwanted , or non - selective , hybridization may occur .even more detrimental is the fact that a single - stranded dna sequence may fold back onto itself , forming a secondary structure which completely inhibits the sequence from participating in the computational process .secondary structure formation is also a major bottleneck in dna storage systems .for example , it was reported in that of read - out attempts in a dna storage system failed due to the formation of special secondary structures called _ hairpins _ in the single - stranded dna molecules used to store information .so far , the focus of coding for dna computing was exclusively directed towards constructing large sets of dna codewords with fixed base frequencies ( constant gc - content ) and prescribed hamming / reverse - complement hamming distance .such sets of codewords are expected to result in very rare hybridization error events . as an example, it was shown in that there exist codewords of length with minimum hamming distance and with exactly / bases . at the same time , the wisconsin dna group , led by d. shoemaker , reported that through extensive computer search and experimental testing , only sequences of length at hamming distance at least were found to be free of secondary structure at temperatures of . since at lower ambient temperatures the probability of secondary structure formation is even higher , it is clear that the effective number of codewords useful for dna computing is extremely small . in this paper, we investigate properties of dna sequences that may lead to undesirable folding . 
our approach is based on analysis of a well - known algorithm for approximately determining dna secondary structure , called nussinov s method .this analysis allows us to extract some design criteria that yield dna sequences that are unlikely to fold undesirably .these criteria reduce to the requirement that the first few shifts of a dna codeword have the property that they do not contain watson - crick complementary matchings with the original sequence .we consider the enumeration of sequences having the shift property and provide some simple construction strategies which meet the required restrictions . to the best of our knowledge , this is the first attempt in the literature aimed at providing a rigorous setting that links dna folding properties to constraints on the primary structure of the sequences .dna of higher species consists of two complementary chains twisted around each other to form a double helix .each chain is a linear sequence of nucleotides , or _bases_ two _ purines _ , adenine ( ) and guanine ( ) , and two _ pyrimidines _ , thymine ( ) and cytosine ( ) . the purine bases and pyrimindine bases are _ watson - crick ( wc ) complements _ of each other , in the sense that more specifically , in a double helix , the base pairs with by means of two hydrogen bonds , while pairs with by means of three hydrogen bonds ( i.e. the strength of the former bond is weaker than the strength of the latter ) . for dna computing purposes ,one is only concerned with single - stranded ( henceforth , _ oligonucleotide _ ) dna sequences .oligonucleotide dna sequences are formed by heating dna double helices to denaturation temperatures , at which they break down into single strands .if the temperature is subsequently reduced , oligonucleotide strands with large regions of sequence complementarity can bind back together in a process called _ hybridization_. hybridization is assumed to occur only between complementary base pairs , and lies at the core of dna computing . as a first approximation, oligonucleotide dna sequences can be simply viewed as words over a four - letter alphabet , with a prescribed set of complex properties .the generic notation for such sequences will be , with indicating the length of the sequences .the wc complement of a dna sequence is defined as , being the wc complement of as given by ( [ bar_def ] ) .the _ secondary structure _ of a dna codeword is a set , , of disjoint pairings between complementary bases with .a secondary structure is formed by a chemically active oligonucleotide sequence folding back onto itself due to _ self - hybridization _ , _i.e. _ , hybridization between complementary base pairs belonging to the same sequence . as a consequence of the bending ,elaborate spatial structures are formed , the most important components of which are loops ( including branching , internal , hairpin and bulge loops ) , stem helical regions , as well as unstructured single strands .figure [ sec - str ] illustrates these concepts for an rna strand .it was shown experimentally that the most important factors influencing the secondary structure of a dna sequence are the number of base pairs in stem regions , the number of base pairs in a hairpin loop region as well as the number of unpaired bases .determining the exact pairings in a secondary structure of a dna sequence is a complicated task , as we shall try to explain briefly . 
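as a small concrete companion to the definitions above , the snippet below spells out watson - crick complementarity in code : a base - wise complement of a dna sequence and a check of whether two positions of an oligonucleotide hold complementary bases and could therefore pair in a secondary structure . whether the paper's sequence - complement notation also reverses the sequence can not be recovered from the garbled formula , so only the base - wise map is implemented here .

WC = {"A": "T", "T": "A", "C": "G", "G": "C"}

def wc_complement(seq):
    # base-wise watson-crick complement of a dna sequence
    return "".join(WC[b] for b in seq)

def can_pair(seq, i, j):
    # true if positions i and j (0-based) hold complementary bases
    return WC[seq[i]] == seq[j]

print(wc_complement("AGGCT"))   # -> TCCGA
print(can_pair("AGGCT", 0, 4))  # a-t pair -> True

the same helper is all that is needed later to count complementary matches between a codeword and its shifts .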
for a system of interacting entities ,one measure commonly used for assessing the system s property is the free energy .the stability and form of a secondary configuration is usually governed by this energy , the general rule - of - thumb being that a secondary structure minimizes the free energy associated with a dna sequence .the free energy of a secondary structure is determined by the energy of its constituent pairings .now , the energy of a pairing depends on the bases involved in the pairing as well as all bases adjacent to it . adding complication is the fact that in the presence of other neighboring pairings , these energies change according to some nontrivial rules .nevertheless , some simple dynamic programming techniques can be used to _approximately _ determine the secondary structure of a dna sequence .such approximations usually have the correct form in of the cases considered . among these techniques , _nussinov s folding algorithm _ is the most widely used scheme .nussinov s algorithm is based on the assumption that in a dna sequence , the energy between a pair of bases , , is independent of all other pairs .for simplicity of exposition , we shall assume that if , if , and otherwise and . ] .let denote the minimum free energy of the subsequence .the independence assumption allows us to compute the minimum free energy of the sequence through the recursion where for and for .the value of is the minimum free energy of a secondary structure of .note that .a very large negative value for the free energy of a sequence is a good indicator of the presence of stacked base pairs and loops , _i.e. _ , a secondary structure , in the physical dna sequence .nussinov s algorithm can be described in terms of free - energy tables , two of which are shown below .we first describe how such a table is filled out , after which we will point out some important properties of such tables ..free - energy table for the sequence [ cols="^,^,^,^,^,^,^,^,^,^ " , ] in a free - energy table , the entry at position ( the top left position being ( 1,1 ) ) , contains the value of .the table is filled out by initializing the entries on the main diagonal and on the first lower sub - diagonal of the matrix to zero , and calculating the energy levels according to the recursion in .the calculations proceed successively through the upper diagonals : entries at positions are calculated first , followed by entries at positions , and so on .note that the entry at , depends on and the entries at , , , , and .the minimum - energy secondary structure itself can be found by the _ backtracking algorithm _ which retraces the steps of nussinov s algorithm .the secondary structures for the sequences in tables [ table1 ] and [ table2 ] , shown in figures [ sec - str1 ] and [ sec - str2 ] , have been found using the vienna rna / dna secondary structure package , which is based on the nussinov algorithm , but which uses more accurate values for the parameters , as well as more sophisticated prediction methods for base pairing probabilities .tables [ table1 ] and [ table2 ] show that the minimum free energy for the sequence is , while that for the sequence is ., the sequence is deemed to have no secondary structure ; this is due to the fact that the one possible complementary base pairing , namely that of and , forms too weak a bond for the resultant structure to be stable .] 
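The table-filling procedure just described is short enough to state as code. The following is a rough sketch under a simplifying assumption: every Watson-Crick pair is assigned energy -1 and every other pair energy 0, which is not the parameterization used for the tables above but suffices to show how the recursion and the diagonal-by-diagonal fill work.

```python
# A rough sketch of the Nussinov-style fill of the free-energy table,
# assuming the simplified pair energies e(x, y) = -1 for Watson-Crick pairs
# and 0 otherwise (an illustrative choice, not the parameters used above).

WC_PAIRS = {("A", "T"), ("T", "A"), ("G", "C"), ("C", "G")}

def pair_energy(x, y):
    return -1 if (x, y) in WC_PAIRS else 0

def nussinov_min_energy(seq):
    """Return E(1, n) and the full table E(i, j) for subsequences seq[i..j]."""
    n = len(seq)
    E = [[0] * n for _ in range(n)]   # main and first lower diagonals stay 0
    for span in range(1, n):          # fill successive upper diagonals
        for i in range(n - span):
            j = i + span
            best = min(E[i + 1][j],                                   # i unpaired
                       E[i][j - 1],                                   # j unpaired
                       E[i + 1][j - 1] + pair_energy(seq[i], seq[j])) # i pairs with j
            for k in range(i + 1, j):                                 # bifurcation
                best = min(best, E[i][k] + E[k + 1][j])
            E[i][j] = best
    return E[0][n - 1], E

energy, table = nussinov_min_energy("GCGCTTAAGCGC")
print(energy)   # a large negative value is an indicator of secondary structure
```

Applied to candidate codewords, the sign and magnitude of the returned energy play the role of the minimum free energies compared in the tables above.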
this fact alone indicates that the number of paired bases in the first sequence ought to be larger than in the second one , and hence the former is more likely to have a secondary structure than the latter .more generally , one can observe the following characteristics of free - energy tables : if the first upper diagonal contains a large number of or entries , then some of these entries `` percolate '' through to the second upper diagonal , where they get possibly increased by or if complementary base pairs are present at positions and , in the dna sequence .the values on the second diagonal , in turn , percolate through to the third diagonal , and so on .hence , the free energy of the dna sequence depends strongly on the number of non - zero values present on the first diagonal .this phenomenon was also observed experimentally in , where the free energy was modelled by a function of the form with denoting a correction factor which depends on the number of and bases in the sequence * c*. the stability of a secondary structure , as well as its melting properties can be directly inferred from ( [ app - energy ] ) .note that under the assumption that for all pairings , the absolute value of the sum in ( [ app - energy ] ) is simply the total number of pairings of complementary bases between the dna codeword * c * and the codeword shifted one position to the right or equivalently , the sum of the entries in the first upper diagonal of the free - energy table .these observations imply that a more accurate model for the free energy should be of the form for a correction factor and some non - zero weighting factors .furthermore , the same observation implies that from the stand - point of designing dna codewords without secondary structure , it is desirable to have codewords for which the respective sums of the elements on the first several diagonals are either all zero or of some very small absolute value .this requirement can be rephrased in terms of requiring a dna sequence to satisfy a _shift property _ , in which a sequence and its first few shifts have few or no complementary base pairs at the same positions .in this section , we consider the enumeration and construction of dna sequences satisfying certain shift properties , which we shall define rigorously . it is clear that a dna sequence is counted by iff it contains no pair of complementary bases. such a sequence must be over one of the alphabets , , and .there are such sequences , since there are sequences over each of these alphabets , of which , , and are each counted twice .let denote the set of all sequences of length for which , .thus , .note that for any , } ] and } ] , }$ ] , then and , defined as in lemma [ wc - binary ] , are just truncations of codewords from the simplex code .since the simplex code is a constant - weight code , with minimum distance , each pair of codewords intersects in exactly positions .this implies that there exist exactly positions for which one given codeword contains all zeros , and the other codeword contains all ones .these are the positions that are counted in , which proves the claimed result .consider the previous construction for , and a generating codeword .there are dna codewords of length obtained based on the outlined method .these codewords have minimum hamming distance equal to four , and they also have constant content . a selected subset of codewords from this code is listed below . 
the last two codewords consist of the bases and only , and clearly satisfy the shift property with for all . on the other hand , for the first three codewordsone has , while for the next three codewords it holds that ( meeting the upper bound in the theorem ) . due to the cyclic nature of the generating code, one can easily generate the nussinov folding table for all the codewords .such an evaluation , as well as the use of the program package vienna , show that none of the codewords exhibits a secondary structure .the largest known dna codes with the parameters specified above consists of codewords .this code is generated by a simulated annealing process which does not allow for simple secondary structure testing .adleman , `` molecular computation of solutions to combinatorial problems , '' _ science _ ,266 , pp.10211024 , nov . 1994 .k. breslauer , r. frank , h. blocker , and l. marky , `` predicting dna duplex stability from the base sequence , '' _ proceedings of the national academy of science _, usa 83 , pp . 37463750 , 1986 .p. gaborit , and h. king , `` linear constructions for dna codes , '' preprint .goulden and d.m .jackson , _ combinatorial enumeration _ ,dover , 2004 .hall , lecture notes on error - control coding , available online at ` http://www.mth.msu.edu/\simjhall/ ` .macwilliams , and n.j.a .sloane , _ the theory of error correcting codes _ , north - holland , 1977 .m. mansuripur , p.k .khulbe , s.m .kuebler , j.w .perry , m.s.giridhar and n. peyghambarian , `` information storage and retrieval using macromolecules as storage media , '' _ university of arizona technical report _ , 2003 .o. milenkovic and n. kashyap , `` new constructions of codes for dna computing , '' accepted for presentation at wcc 2005 , bergen , norway .s. mneimneh , `` computational biology lecture 20 : rna secondary structures , '' available online at ` engr.smu.edu/\simsaad/courses/ cse8354/lectures / lecture20.pdf ` .r. nussinov , g. pieczenik , j.r .griggs and d.j .kleitman , `` algorithms for loop matchings , ''_ siam j. appl ._ , vol .35 , no . 1, pp . 6882 , 1978 . s. tsaftaris , a. katsaggelos , t. pappas and e. papoutsakis , `` dna computing from a signal processing viewpoint , '' _ ieee signal processing magazine _ , pp .100106 , sept .2004 . the vienna rna secondary structure package , ` http://rna.tbi.univie.ac.at/cgi-bin/rnafold.cgi `
in this paper , we consider the problem of designing codewords for dna storage systems and dna computers that are unlikely to fold back onto themselves to form undesirable secondary structures . secondary structure formation causes a dna codeword to become less active chemically , thus rendering it useless for the purpose of dna computing . it also defeats the read - back mechanism in a dna storage system , so that information stored in such a folded dna codeword can not be retrieved . based on some simple properties of a dynamic - programming algorithm , known as nussinov s method , which is an effective predictor of secondary structure given the sequence of bases in a dna codeword , we identify some design criteria that reduce the possibility of secondary structure formation in a codeword . these design criteria can be formulated in terms of the requirement that the watson - crick distance between a dna codeword and a number of its shifts be larger than a given threshold . this paper addresses both the issue of enumerating dna sequences with such properties and the problem of practical dna code construction .
to maintain the desired luminosity of the nlc , the focusing components on the main linac must be kept stable to within a few nanometers at frequencies above a few hz . these components can be affected by far - field ( natural ) and near - field ( man - made ) vibration sources . this paper is concerned only with near - field sources ( e.g. mechanical and electrical equipment , rf generating equipment , etc . ) . these sources are mainly located either in the support tunnel , fig.[2tunnel ] , or far away ( 100 m ) from the beam tunnel . the characterization of near - field vibration sources and their effects on the main linac components is part of an ongoing r&d program at nlc that is presented in this paper . the first part of the paper will present the influence of the vibration induced by rf power generating elements and by the rf itself . the second part will deal with the transfer of vibration from the surface to the tunnel invert . at first , we will focus on the vibration contribution of high - power rf generating components . the study was carried out near the supporting structure of the 8-pack project . the 8-pack project is the test bench of klystrons and their modulator to produce high - power rf for the next linear collider test accelerator ( nlcta ) . one of the geophone sensors was placed at the base of the modulator , 76 cm above the concrete floor , and the second one was located on the concrete floor either at .2 m away from the base of the modulator support or at its base ( 0 m ) . the signals of the mark-4 geophones were measured simultaneously . we performed tests in two different conditions of the modulator . in the first case , the water - cooled modulator was under a 20 kv voltage and was running at 30 hz or 10 hz ( at this voltage the klystrons were not conducting and thus did not deliver rf power ) . in the second case the modulator was running at 10 hz and was under 400 kv ( the klystron was delivering mw of rf power at 1.6 ) . fig.[speedtimesensor ] shows the response of the sensors vs time for the modulator under 20 kv .
in this casethe modulator was running at different frequencies .the seismometer located at the base of the modulator clearly shows vibrations due to modulator pulsed operation , while the geophone located on the concrete floor does not indicate any change due to the modulator running conditions .[ speedtimesensor ] fourier analysis also does not reveal any additional noises due to modulator on the floor .fig.[displc8packjan ] and fig.[displc8packfeb ] show the average integrated displacement ( aid ) of the two seismometers .the blue lines are the response of the sensor placed at the base of the modulator , where the difference between the modulator on and off cases is clearly seen .the green lines are the response of the sensor placed on the concrete floor either at .2 m or at the base of the stand ( 0 m ) , and the on and off cases overlap .interesting that , when the modulator is under 400 kv and delivering power to the klystron , its vibration is slightly less than for a 20 kv running modulator .one can also note that background noise was different in the conditions of fig.[displc8packjan ] and fig.[displc8packfeb ] ( see green curves below 60 hz ) which was due to different level of activity of the construction crew working at nlcta .the main conclusion of the modulator vibration study is that the transmission of vibration from the modulator to the concrete floor is not significant , and , on the level of background noise at nlcta , not noticeable .[ displc8packjan ] [ displc8packfeb ] we also studied vibrations of accelerating structures due to cooling water ( reported earlier , see ) , and due to rf pulse ( presented below ) . when the klystrons are delivering power , the rf heating produces acoustical vibration in the accelerating structure .possible vibration produced by the rf were measured by 3 piezo - accelerometers placed on the accelerating travelling wave structure bolted on its support or girder , placed on the girder , and placed on the waveguide .the x - rays coming out from the structure when filled by the rf did not affect the measurements of our accelerometers . 
[ rfinnlcta ] these measurements , fig.[rfinnlcta ] , show that feeding a structure ( 60 cm long travelling wave structure h60vg3r ) , via its waveguide , with 100 mw at 400 ns of rf power at 60 hz ( corresponded to about 70 mv / m accelerating gradient ) does not lead to any significant increase of vibration in comparison to vibrations produced by cooling water .the water induced vibrations dominate , but they are tolerable , since , with an appropriate design , the vibration transmission to the linac quadrupoles is rather small .vibration of the loosely supported rf waveguide is higher than of the rf structure , but apparently this does not significantly increase vibration of either structure or the quadrupoles .one of the questions to be addressed is whether there is a significant difference in the vibration attenuation characteristics between a tunnel bored at great depth ( in bedrock ) or excavated in cut - and - cover construction at lesser depth , with regard to vibrations generated at the surface .a study was carried out at slac , representing the vibration attenuation characteristics of the soil , assuming the slac beam line housing to be representative of a cut - and - cover construction .the second vibration measurement study is currently underway in the red line tunnels in los angeles .this study will establish the vibration characteristics between two tunnels , along the tunnel as well as from surface to the tunnel .the following is a brief presentation of data from the study carried out at slac .[ relativeloc ] fig.[relativeloc ] shows the relative locations of sources and the receivers .the receivers locations were on the floor along the centerline of the beam .the sources were on the ground surface .drive location s1 was at the same approximate elevation as of the klystron gallery floor , which lies about 36 ft ( 11 m ) above the elevation of the receiver locations .drive location s2 was about 10 ft ( 3 m ) uphill from s1 .[ hammerblow ] fig.[hammerblow ] shows two spectra .one is the ambient measured at the receiver location r1 .the other is a fast fourier transform of the response to several hammer blows .several observations may be made .the peaks at 60 hz and 120 hz are electronic artefacts .the peaks centered at .75 hz and at .25 hz in both ambient and response have a nearly identical amplitudes .the peak at 18.75 hz has a slightly different amplitude . 
at these frequencies ,the `` response '' ( red line ) is being governed more by ambient than by the input force , so the transfer functions derived from the `` response '' will be invalid .the `` ambient '' ( blue line ) at frequencies less than 7 hz ( in this case ) lies above the `` response '' .this suggests that there is some variability to the ambient environment at these low frequencies , and these will degrade the accuracy of transmission function at lower frequencies .[ groundtransmis ] fig.[groundtransmis ] shows a smoothed spectra which were calculated by log averaging the amplitudes over a 10 hz interval centered on the plotted point .it provides the transfer function ( showing change of amplitude ) measured at r1 to r5 using s1 as a drive point , fig.[relativeloc ] ..attenuation factor , refering to fig.[relativeloc ] .[ cols="^,^,^,^,^ " , ] the table [ tblea ] summarizes the apparent attenuation at several frequencies at which mechanical equipment usually operates .the figures in table [ tblea ] represent the attenuation factor a for a vibration with its source near s1 propagating along the same path .let us give an example how such data can be used .suppose a pump is installed at s1 , and it produces vibrations at 30 hz of amplitude x. the amplitude at 30 hz that we would measure at r5 would be the greater of either ambient or 0.0009x .suppose we take the ambient measured shown for the tunnel in ( fig.[hammerblow ] ) as representative the tunnel in general .the amplitude at 30 hz is about 1.5 m/s . if we were to place a pump at s1 and be sure to avoid having itsvibrations exceed ambient at r5 , we would need to impose a limit on the resulting vibration at s1 of 1.5/0.0009=1.7 mm / s .it has been shown that the vibration transmitted by the rf generating equipment to the floor is insignificant .hence , klystrons and or modulators running in the support tunnel of the nlc should not effect alignment of the linac . vibration contribution of an rf pulse to an accelerating structure has also been found negligible relative to water - cooling .thus , it leaves electrical and mechanical rotating equipment as possibly a dominating source of vibration .the attenuation factors presented in the paper can be used in planning stage of the nlc project for specifying and locating the mechanical rotating equipment , as well as to assess their vibration effects on the focusing components on the main linac and provide a means for establishing the vibration budgeting scheme for the project .
the vibration stability requirements for the next linear collider ( nlc ) are far more stringent than for the previous generation of colliders . to meet these goals , it is imperative that the effects of vibration on nlc linac components from near - field sources ( e.g. compressors , high - vacuum equipment , klystrons , modulators , pumps , fans , etc . ) be well understood . the civil construction method , whether cut - and - cover or parallel bored tunnels , can determine the proximity and possible isolation of noise sources . this paper presents a brief summary and analysis of recently completed and planned studies for characterization of near - field vibration sources under either construction method . the results of in - situ vibration measurements will also be included .
in other work two dimensional fields of quantum reference frames were described that were based on different quantum theory representations of the real numbers . because the description of the fields does not seem to be understood , it is worthwhile to approach a description in a way that will , hopefully , help to make the fields better understood .this the goal of this contribution to the third feynman conference proceedings .the approach taken here is based on three main points : * there are a large number of different quantum theory representations of natural numbers , integers , and rational numbers as states of qubit strings .these arise from gauge transformations of the qubit string states . * for each representation , cauchy sequences of rational string states give a representation of the real ( and complex ) numbers .a reference frame is associated to each representation . *each frame contains a representation of all mathematical and physical theories .each of these is a mathematical structure that is based on the real and complex number representation base of the frame .this approach is summarized in the next section with more details given in the following sections .as is well known , the large amount of work and interest in quantum computing and quantum information theory is based on quantum mechanical representations of numbers as states of strings of qubits and linear superpositions of these states .the numbers represented by these states are usually the nonnegative integers or the natural numbers. examples of integer representations are states of the form and their linear superpositions here is a state of a string of qubits where the qubit at location denotes the sign ( as ) and \rightarrow \{0,1\} ] where this last condition removes the redundancy of leading this description can be extended to give quantum mechanical representations of rational numbers as states of qubit strings and of real and complex numbers as cauchy sequences of rational number states of qubit strings . as will be seen , string rational numberscan be represented by qubit string states where is a valued function on an integer interval ] here , is the location of the qubit , and the qubits occupy all positions in the interval .[{a^{\dag}_{{\alpha},j}},{a^{\dag}_{{\alpha}^{{\prime}},j^{{\prime}}}}]=[{a_{{\alpha},j}},{a_{{\alpha}^{{\prime}},j^{{\prime}}}}]=0 ] is a valued function on the integer interval ] of a function defined on all the integers .an operator can be defined whose eigenvalues correspond to the values of the rational numbers one associates with the string states . is the product of two commuting operators , a sign scale operator , and a value operator one has the operator is given for reference only as it is not used to define arithmetic properties of the rational string states .there is a large amount of arithmetical redundancy in the states .for instance the arithmetic properties of a rational string state are invariant under a translation along the integer axis .this is a consequence of the fact that these properties of the state are determined by the distribution of relative to the position of the sign and not on the value of .the other redundancy arises from the fact that states that differ in the number of leading or trailing are all arithmetically equal .these redundancies can be used to define equivalence classes of states or select one member as a representative of each class . 
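As a concrete, purely classical illustration of the arithmetic content of these basis states (superpositions are ignored), a rational string state can be modelled by a sign together with a map from integer positions to binary digits; its value and a canonical representative of its arithmetic class are then computed as below. The encoding and names are illustrative assumptions, not the operator formalism of the text.

```python
# A small classical sketch (an assumption for illustration, not the Fock-space
# formalism above): a basis rational string state is modelled as a sign in
# {+1, -1} plus a map from integer positions j to digits in {0, 1}, with value
# sign * sum_j s(j) * 2**j; positions with negative j carry fractional bits.

from fractions import Fraction

def value(sign, digits):
    """Rational value encoded by the state (sign, {j: s_j})."""
    return sign * sum(Fraction(2) ** j for j, bit in digits.items() if bit)

def canonical(sign, digits):
    """Drop leading/trailing zeros: one representative per arithmetic class."""
    support = {j: 1 for j, bit in digits.items() if bit}
    return (sign, support) if support else (+1, {})

state = (-1, {2: 1, 1: 0, 0: 1, -1: 1})             # the binary string -101.1
print(value(*state))                                 # -11/2
print(value(*canonical(*state)) == value(*state))    # True: same arithmetic class
```

Selecting such a canonical representative of each class is the route taken next in the text.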
here the latter choice will be made in that rational number states will be restricted to those with for the sign location and those with restricted so that if and if this last condition removes leading and trailing the state is the number . for ease in notation from now on the variables will be dropped from states . thus states be represented as with the values of included in the definition of .there are two basic arithmetic properties , equality , and ordering arithmetic equality is defined by here the set of integers for which and similarly for that is , two states are arithmetically equal if one has the same distribution of relative to the location of the sign as the other .arithmetic ordering on positive rational string states is defined by where the extension to zero and negative states is given by the definitions of can be extended to linear superpositions of rational string states in a straightforward manner to give probabilities that two states are arithmetically equal or that one state is less than another state .operators for the basic arithmetic operations of addition , subtraction , multiplication , and division to any accuracy are represented by the state }}{a^{\dag}}_{1,-\ell}|0\rangle.\ ] ] where } } = { a^{\dag}}_{0,0}{a^{\dag}}_{0,-1}\cdots{a^{\dag}}_{0,-\ell+1}, ] is a sum over all integer intervals ] from this one can define the probability that the arithmetic absolute value of the arithmetic difference between and is arithmetically less than or equal to by the sequence satisfies the cauchy condition if where here is the probability that the sequence satisfies the cauchy condition .cauchy sequences can be collected into equivalence classes by defining if the cauchy condition holds with replacing and replacing in eq .[ cauchy ] . to this endlet ] of all equivalence classes .it can be shown that ] another representation of real numbers can be obtained by replacing the sequences by operators .this can be achieved by replacing each index by the rational string state that corresponds to the natural number .these are defined by where and where is the lower interval bound , for the domain of as a function over the integer interval . ] be a function from the set of all integers where define a sequence of states the sequence is cauchy as for all however for any gauge transformation the sequence is not cauchy as expansion of the in terms of the by eq .[ adu ] gives as a sum of terms whose arithmetic divergence is independent of . to show that is cauchy in the transformed frame if and only if is cauchy in the original frame one can start with the expression for the cauchy condition in the transformed frame : from eq .[ defou ] one gets from eqs . [ addn ] and [ addnpla ] applied to and eqs .[ addau ] and [ addaua ] one obtains use of the absolute value operator gives finally from eq .[ = au ] one obtains which is the desired result .thus one sees that the cauchy property is preserved in unitary transformations from one reference frame to another . 
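The Cauchy condition above can also be checked numerically once states are replaced by their rational values, as in the previous sketch. The following toy check uses truncated binary expansions of 1/3 as the candidate sequence; the tolerance 2**(-l) and the cut-off h are the parameters appearing in the condition, and everything else is an illustrative choice.

```python
# A toy numerical check of the Cauchy condition, with states replaced by
# their rational values (illustrative only; it verifies a finite tail, not
# the full "for all m, n >= h" statement).

from fractions import Fraction

def truncations(num, den, length):
    """Successive n-bit binary truncations of num/den: a candidate Cauchy sequence."""
    x = Fraction(num, den)
    return [Fraction(int(x * 2 ** n), 2 ** n) for n in range(1, length + 1)]

def tail_is_cauchy(seq, l, h):
    """Check |t_m - t_n| <= 2**(-l) for all listed terms with index >= h."""
    tol = Fraction(1, 2 ** l)
    tail = seq[h:]
    return all(abs(a - b) <= tol for a in tail for b in tail)

seq = truncations(1, 3, 20)
print(tail_is_cauchy(seq, l=8, h=10))   # True: the tail stays within 2**-8
```

Equivalence classes of such sequences are what the next part collects into the real-number representation carried by a frame.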
as was done with the cauchy sequences and operators , the u - cauchy sequences or their equivalents , u - cauchy operators ,can be collected into a set of equivalence classes that represent the real numbers .this involves lifting up the basic arithmetic relations and operations to real number relations and operations and showing that is a complete , ordered , field .it is also the case that for almost all gauge the real numbers in are orthogonal to those in in the following sense .one can see that each equivalence class in contains a state sequence }\rangle ] both sequences satisfy their respective cauchy conditions .however the overlap }|u_{0}{\gamma}_{n},us_{[u ,-n]}\rangle \rightarrow 0 $ ] as this expresses the sense in which and are orthogonal .as has been seen , one can define many quantum theory representations of real numbers as cauchy sequences of states of qubit strings or as cauchy operators on the qubit string states .the large number of representations stems from the gauge ( global and local ) freedom in the choice of a quantization axis for the qubit strings .complex number representations are included either as ordered pairs of the real number representations or as extensions of the description of cauchy sequences or cauchy operators to complex rational string states .it was also seen that for each gauge transformation the real and complex number representations are the base of a frame .the frame also contains representations of physical theories as mathematical structures based on the work in the last two sections shows that the description of rational string states as states of finite strings of qubits given is a description in a fock space .( a fock space is used because of the need to describe states of variable numbers of qubits and their linear superpositions in one space . )the arithmetic operations on states of these strings are represented by fock space operators .the properties of these operators acting on the qubit string states are used to show that the states represent binary rational numbers. finally equivalence classes of sequences of these states or of operators that satisfy the cauchy condition are proved to be real numbers .the essential point here is that the fock space , , and any additional mathematics used to obtain these results are based on a set of real and complex numbers . for example , superposition coefficients of basis states are in , the inner product is a map from pairs of states to , operator spectra are elements of , etc .the space time manifold used to describe the dynamics of any physical representations of the qubit strings is given by .it follows that one can assign a reference frame to and here contains all physical and mathematical theories that are represented as mathematical structures based on and .however , unlike the case for the frames the only properties that and have are those based on the relevant axioms ( complete ordered field for ) .other than that , nothing is known about how they are represented .this can be expressed by saying that and are external , absolute , and given .this seems to be the usual position taken by physics in using theories based on and .physical theories are silent on what properties and may have other than those based on the relevant axioms . 
however , as has been seen , one can use these theories to describe many representations and and associated frames based on gauge transformations of the qubit strings .as noted , for each contains representations of all physical theories as mathematical structures over . for these framesone can see that and have additional properties besides those given by the relevant axioms .they are also equivalence classes of cauchy sequences or cauchy operators fig .[ rcst1 ] is a schematic illustration of the relation between frame and the frames .only three of the infinitely many frames are shown .the arrows indicate the derivation direction in that based theory in is used to describe , for each and that are the base of all theories in .note that the frame with the identity gauge transformation is also included as one of the it is _ not _ the same as because is not the same as .the above relations between the frames and shows that one can repeat the description of real and complex numbers as cauchy sequences of ( or cauchy operators on ) rational string states in each frame . in this casethe fock space representation , , used to describe the qubit string states in , is different from from in that it is based on instead of on however the two space representations are related by an isomorphism that is based on the isomorphism between and it is useful to examine this more .first consider the states of a qubit at site j. for an observer in frame these states have the general form where and are complex numbers in .let be an gauge transformation where is defined by where the states and in frame are seen by an observer in as the states and respectively as the quantization axis is different .a similar situation holds for states of qubit strings . to an observer in state is different from however an observer in frame with a different set of quantization axes for each qubit , would represent the state as as to him it is the same state relative to his axis set as it is to the observer in for his axis set . the situation is slightly different for linear superpositions of basis states . to an observer in ,the coefficients in the state represent abstract elements of .the same observer sees that this state in is represented by where as elements of , represent the same abstract complex numbers as do however an observer in sees this same state as to him the real and complex number base of his frame is abstract and is represented by in general an observer in any frame sees the real and complex number base of his own frame as abstract and given with no particular representation evident .however the observer in frame also sees that what to him is the abstract number is the number as and element of in frame these considerations also extend to group representations as matrices of complex numbers . if the element as an abstract element of is represented in frame by the matrix where are elements of , then , to an observer in is represented in frame by here , as elements of , correspond to the same abstract complex numbers as do however an observer in sees this representation as which is the same as the observer sees in his own frame .following this line of argument one can now describe another generation of frames with each frame in the role of a parent to progeny frames just as is a parent to the frames as in fig .[ rcst1 ] .this is shown schematically in fig .[ rcst2 ] .again , only three of the infinitely many stage 2 frames emanating from each stage 1 frame are shown . 
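The frame dependence of expansion coefficients described above can be made concrete with a toy linear-algebra calculation; the particular rotation and state used below are arbitrary illustrative choices.

```python
# A toy sketch: the same qubit state has different expansion coefficients
# relative to a rotated quantization axis (a different reference frame).

import numpy as np

theta = 0.3
U = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])     # unitary defining the new axes

psi = np.array([0.6, 0.8])          # coefficients in the original frame
psi_new = U.conj().T @ psi          # coefficients of the same state in the new frame

print(psi, psi_new)                                               # different coefficients
print(np.isclose(np.linalg.norm(psi), np.linalg.norm(psi_new)))   # same norm
```

With this picture in mind, the two-stage frame structure of fig . [ rcst2 ] just introduced is taken up next.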
heresomething new appears in that there are many different paths to a stage 2 frame . for each path , such as is the product of two gauge transformations where an observer in this frame sees the real and complex number frame base as abstract , and given .to him they can be represented as an observer in sees the real and complex number base of as however an observer in sees the real and complex number base of as the subscript denotes the fact that relative to the number base of is constructed in two stages .first the fock space is used to construct representations of and as cauchy sequences of states then in frame the fock space based on is used to construct the number representation base of as cauchy sequences of qubit string states in one sees , then , that , for each path leading to a specific stage 2 frame , there is a different two stage construction of the number base of the frame .this view from the parent frame is the same view that we have as observers outside the whole frame structure .that is , our external view coincides with that for an observer inside the parent frame .the above description of frames emanating from frames for 2 stages suggests that the process can be iterated .there are several possibilities besides a finite number of iterations exemplified by fig.[rcst2 ] for 2 iterations .[ rcst3 ] shows the field structure for a one way infinite number of iterations .here one sees that each frame has an infinite number of descendent frame generations and , except for frame a finite number number of ancestor generations .the structure of the frame field seen by an observer in is the same as that viewed from the outside . for both observersthe base real and complex numbers for are seen as abstract and given with no structure other than that given by the axioms for the real and complex numbers .there are two other possible stage or generation structures for the frame fields , two way infinite and finite cyclic structures .these are shown schematically in figs .[ rcst4 ] and [ rcst5 ] .the direction of iteration in the cyclic field is shown by the arrows on the circle rather than example arrows connecting frames as in the other figures .for both these frame fields each frame has infinitely many parent frames and infinitely many daughter frames .there is no single ancestor frame and no final descendent frames .the cyclic field is unique in that each frame is both its own descendent and its own ancestor .the distance between these connections is equal to the number of iterations in the cyclic field .these two frame fields differ from the others in that the structure seen from the outside is different from that for an observer in any frame .there is no frame from which the view is the same as from the outside . 
viewed from the outsidethere are no abstract , given real and complex number sets for the field as a whole .all of these are internal in that an observer in frame at generation sees the base as abstract with no properties other than those given by the relevant axioms .the same holds for the representations of the space time manifold .viewed from the outside there is no fixed abstract space time representation as a 4-tuple of real numbers associated with the field as a whole .all space time representations of this form are internal and associated with each frame .this is based on the observation that the points of a continuum space time as a 4-tuple of representations of the real numbers are different in each frame because the real number representations are different in each frame .also , contrary to the situation for the fields in figs .[ rcst1]-[rcst3 ] , there is no representation of the space time points that can be considered to be as fixed and external to the frame field .the lack of a fixed abstract , external space time manifold representation for the two - way infinite and cyclic frame fields is in some ways similar to the lack of a background space time in loop quantum gravity .there are differences in that in loop quantum gravity space is discrete on the planck scale and not continuous .it should be noted though that the representation of space time as continuous is not a necessary consequence of the frame fields and their properties .the important part is the real and complex number representation base of each frame , not whether space time is represented as discrete or continuous .it is useful to summarize the views of observers inside frames and outside of frames for the different field types .for all the fields except the cyclic ones an observer in any frame at stage sees the real number base of his frame as abstract and external with no properties other than those given by the axioms for a complete ordered field .the observer also can not see any ancestor frames .he / she can see the whole frame field structure for all descendent frames at stages except as noted below , the view of an outside observer is different in that he / she can see the whole frame field structure .this includes the point that , to internal observers in a frame , the real and complex number base of the frame is abstract and external . 
for frame fields with a fixed ancestor frame , figs .[ rcst1 ] , [ rcst2 ] , [ rcst3 ] , the view of an outside observer is almost the same as that of an observer in frame .both see the real and complex number base of as abstract and external .both can also see the field structure for all frames in the fields .however the outside observer can see that frame has no ancestors .this is not available to an observer in as he / she can not see the whole frame field .the cyclic frame field is different in that for an observer in any frame at stage frames at other stages are both descendent and ancestor frames .this suggests that the requirement that a frame observer can not see the field structure for ancestor frames , but can see it for descendent frames , may have to be changed , at least for this type of frame field .how and what one learns from such changes are a subject for future work .so far , frame fields based on different quantum mechanical representations of real and complex numbers have been described .each frame contains a representation of physical theories as mathematical structures based on the real number representation base of the frame .the representations of the physical theories in the different frames are different because the real ( and complex ) number representations are different .they are also isomorphic because the real ( and complex ) number representations are isomorphic .the description of the frame field given so far is incomplete because nothing has been said about the relation of the frame field to physics .so far the only use of physics has been to limit the description of rational number representations to quantum mechanical states of strings of qubits .the main problem here is that to date all physical theories make no use of this frame field .this is evidenced by the observation that the only properties of real numbers relevant to physical theories are those derivable from the real number axioms for a complete , ordered field .so far physical theories are completely insensitive to details of different representations of the real numbers .this problem is also shown by the observation that there is no evidence of this frame structure and the multiplicity of real number representations in our view of the existence of just one physical universe with its space time manifold , and with real and complex numbers that can be treated as absolute and given .there is no hint , so far , either in physical theories or in properties of physical systems and the physical universe , of the different representations and the frame fields .this shows that the main problem is to reconcile the great number of different representations of the real and complex numbers and the space time manifold as bases for different representations of physical theories with the lack of dependence of physical theories on these representations and no evidence for them in our view of the physical universe .one possible way to do this might be to collapse the frame field to a smaller field , ideally with just one frame . as a step in this direction one could limit quantum theory representations of rational string numbers to those that are gauge invariant .this would have the effect of collapsing all frames at any stage into one stage frame is a gauge transformation .it is not the element of one . 
] .the resulting frame field would then be one dimensional with one frame at each stage .the idea of constructing representations that are gauge invariant for some gauge transformations has already been used in another context .this is the use of the decoherent free subspace ( dfs ) approach to quantum error correction .this approach is based to quantum error avoidance in quantum computations .this method identifies quantum errors with gauge transformations . in this casethe goal is to find subspaces of qubit string states that are invariant under at least some gauge and are preserved by the hamiltonian dynamics for the system .one way to achieve this is based on irreducible representations of direct products of as the irreducible subspaces are invariant under the action of some . as an example, one can show that the subspaces defined by the irreducible 4 dimensional representation of are invariant under the action of any global .the subspaces are the three dimensional subspace with isospin , spanned by the states and the subspace containing the action of any global on states in the subspace can change one of the states into linear superpositions of all states in the subspace . but it does not connect the states in the subspace with that in the subspace .it follows that one can replace a string of qubits with a string of new qubits where the and states of the new qubit correspond to any state in the respective and subspaces of the 4 dimensional representation of any state of the new qubits is invariant under all global gauge transformations and all local gauge transformations where this replacement of states of strings of qubits by states of strings of new qubits gives the result that , for all satisfying eq . [ ueq ] , the frames at any stage all become just one frame at stage .however this still leaves many gauge for which the new qubit string state representation is not gauge invariant .another method of arriving at a gauge invariant description of rational string states is based on the description of the kinematics of a quantum system by states in a hilbert space based on the group manifold .details of this , generalized to all compact lie groups , are given in and .in essence this extends the well known situation for angular momentum representations of states based on the manifold of the group to all compact lie groups . for the angular momentum casethe the action of any rotation on the states gives linear combinations of states with different values but with the same value .the hilbert space spanned by all angular momentum eigenstates can be expanded as a direct sum where labels the irreducible representations of qubits can be associated with this representation by choosing two different values , say and then any states in the subspaces and correspond to the and qubit states respectively .these states are invariant under all rotations .extension of this construction to all finite qubit strings gives a representation of natural numbers , integers and rational numbers that is invariant under all gauge transformations .this development can also be carried out for any compact lie group where the quantum kinematics of a system is based on the group manifold . in the case of eq .[ hl ] holds with .the momentum subspaces are invariant under all transformations . 
as in the angular momentum case onecan use this to describe states of qubits as that are invariant .this construction can be extended to states of finite strings of qubits .details of the mathematics needed for this , applied to graphs on a compact space manifold , are given in . in this way one can describe representations of rational string numbers that are gauge invariant for all gauge .there is a possible connection between these representations of numbers and the ashtekar approach to loop quantum gravity .the ashtekar approach describes valued connections on graphs defined on a space manifold where is a compact lie group .the hilbert space of states on all graphs can be represented as a direct sum of spaces for each graph .the space for each graph can be further decomposed into a sum of gauge invariant subspaces .this is similar to the spin network decomposition first described by penrose .the connection to qubit strings is made by noting that strings correspond to simple one dimensional graphs .states of qubit strings are defined as above by choosing two values for the space of invariant subspaces as in eq .[ invqub ] .it is hoped to describe more details of this connection in future work .implementation of this approach to reduction of the frame field one still leaves a one dimensional line of iterated frames .the line is finite , fig .[ rcst2 ] , one way infinite , fig .[ rcst3 ] , two way infinite , fig .[ rcst4 ] , or closed , fig .[ rcst5 ] .because the two way infinite and cyclic fields have no abstract external sets of real and complex numbers and no abstract external space time , it seems appropriate to limit consideration to them . herethe cyclic field may be the most interesting because the number of the iterations in the cycle is undetermined .if it were possible to reduce the number to , then one would end up with a picture like fig .[ rcst1 ] except that the and base of the frame would be identified in some sense with the gauge invariant representations described in the frame . whether this is possible , or even desirable , or not , is a problem left to future work .another approach to connecting the frame field to physics is based on noting that the independence of physical theories from the properties of different real and complex number representations can be expressed using notions of symmetry and invariance .this is that note that the gauge transformations apply not only to the qubit string states but also to the arithmetic relations and operations on the string states , eqs .[ = au ] and [ addau ] , to sequences of states , to the cauchy condition eq .[ cauchyu ] , and to the definition of the basic field operations on the real numbers .the symmetry of physical theories under these gauge transformations suggests that it may be useful to drop the invariance and consider at the outset candidate theories that break this symmetry .these would be theories in which some basic aspect of physics is representation dependent .one approach might be to look for some type of action whose minimization , coupled with the above requirement of gauge invariance , leads to some restriction on candidate theories .this gauge theory approach is yet another aspect to investigate in the future .there are some points of the material presented here that should be noted .the gauge transformations described here apply to finite strings of qubits and their states .these are the basic objects . 
since these can be used to represent natural numbers , integers , and rational numbers in quantum mechanics , one can , for each type of number , describe different representations related by gauge transformations on the qubit string states . herethis description was extended to sequences of qubit string states that satisfied the cauchy condition to give different representations of the real numbers .a reference frame was associated to each real and complex number representation .each frame contains a representation of all physical theories as mathematical structures based on the real and complex number representation base of the frame .if the space time manifold is considered to be a of the real numbers , then each frame includes a representation of space time as a of the real number representation .if one takes this view regarding space time , it follows that for all frames with an ancestor frame , an observer outside the frame field or an observer in an ancestor frame sees that the points of space time in each descendent frame have structure as each point is an equivalence class of cauchy sequences of ( or a cauchy operator on ) states of qubit strings .it is somewhat disconcerting to regard space time points as having structure .however this structure is seen only by the observers noted above .an observer in a frame does not see his or her own space time points as having structure because the real numbers that are the base of his frame do not have structure .relative to an observer in frame , the space time base of the frame is a manifold of featureless points .it should also be noted that even if one takes the view that the space time manifold is some noncompact , smooth manifold that is independent of one still has the fact that functions from the manifold to the real numbers are frame dependent in that the range set of the functions is the real number representation base of the frame .space time metrics are good examples of this . as is well known they play an essential role in physics . in quite general terms, this work is motivated by the need to develop a coherent theory that treats mathematics and physics together as a coherent whole .it may be hoped that the approach taken here that describes fields of frames based on different representations of real and complex numbers will shed light on such a theory .the point that these representations are based on different representations of states of qubit strings shows the basic importance of these strings to this endeavor .finally it should be noted that the structure of frames emanating from frames has nothing to do with the everett wheeler view of multiple universes .if these multiple universes exist , then they would exist within each frame in the field .this work was supported by the u.s .department of energy , office of nuclear physics , under contract no .w-31 - 109-eng-38 .
in this paper fields of quantum reference frames based on gauge transformations of rational string states are described in a way that , hopefully , makes them more understandable than their description in an earlier paper . the approach taken here is based on three main points : ( 1 ) there are a large number of different quantum theory representations of natural numbers , integers , and rational numbers as states of qubit strings . ( 2 ) for each representation , cauchy sequences of rational string states give a representation of the real ( and complex ) numbers . a reference frame is associated to each representation . ( 3 ) each frame contains a representation of all mathematical and physical theories that have the real and complex numbers as a scalar base for the theories . these points and other aspects of the resulting fields are then discussed and justified in some detail . also two different methods of relating the frame field to physics are discussed .
the diamond channel introduced by schein consists of a broadcast channel ( bc ) from a source to two relays and a multiple access channel ( mac ) from the two relays to a destination .the capacity of the diamond channel is not known in general . to simplify the problem ,let us consider a diamond channel having bc with two orthogonal links and gaussian mac . in this setup , there is a tension between increasing the amount of information sent over the bc and increasing the coherent combining gain for the mac .two coding schemes corresponding to the extremes would be partial decode - and - forward , where independent partial messages are sent to the relays , and decode - and - forward ( df ) , where the whole message is sent to each of the relays . by incorporating multicoding at the source , , proposed a coding scheme in which the relays send independent partial messages using dependent codewords and showed that this coding scheme strictly outperforms the df and partial df in some regime .furthermore , showed an upper bound by taking into account the correlation between the two relay signals , which is strictly tighter than the cutset bound .this upper bound was shown to coincide with the lower bound of , for some channel parameters . in this paper , we consider the degraded gaussian diamond - wiretap channel presented in fig .[ fig : physical ] and present lower and upper bounds on the secrecy capacity by exploiting the correlation between the two relay signals .we identify several ranges of channel parameters where these bounds coincide with useful intuitions and investigate the effect of the presence of an eavesdropper on the capacity .we note that this model is a natural first step to studying diamond - wiretap channel because the sum secrecy capacity of the multiple access - wiretap channel has been characterized only for the degraded gaussian case .a practical situation corresponding to this model is the side channel attack where the eavesdropper attacks by probing the physical signals such as timing information and power consumption leaked from the legitimate destination . in the presence of an eavesdropper , the technique of utilizing randomness is widely used to confuse the eavesdropper .we consider the following two scenarios regarding the availability of randomness : 1 ) a common randomness of rate is available at the source and the two relays and 2 ) a randomness of rate is available only at the source and there is no available randomness at the relays .see , for the related works assuming restricted randomness at encoders . for the upper bound, we generalize the upper bound on the capacity of the diamond channel and the upper bound on the sum secrecy capacity of the multiple access - wiretap channel . for the lower bound, we propose two types of coding schemes : 1 ) a decode - and - forward ( df ) scheme where the relays cooperatively transmit the message and the fictitious message and 2 ) a partial df scheme incorporated with multicoding in which each relay sends an independent partial message and the whole or partial fictitious message using dependent codewords .if there is no secrecy constraint , our partial df scheme incorporated with multicoding falls back to that in , .interestingly , in the presence of the eavesdropper , the availability of randomness at the encoders is shown to affect the optimal selection of correlation coefficient between the two relay signals in our proposed schemes .the remaining this paper is organized as follows . 
in section [ sec :model ] , we formally present the model of the degraded gaussian diamond - wiretap channel . our main results on the secrecy capacity are given in section [ sec : main ] . in section [ sec :proof ] , we derive our upper and lower bounds on the secrecy capacity .we conclude this paper in section [ sec : conclusion ] .consider the degraded gaussian diamond - wiretap channel in fig . [ fig : physical ] that consists of a source , two relays , a legitimate destination , and an eavesdropper .the source is connected to two relays through orthogonal links of capacities and and there is no direct link from the source to the legitimate destination or eavesdropper .the channel outputs and at the legitimate destination and the eavesdropper , respectively , are given as and , where , and are the channel inputs from relay 1 and relay 2 , respectively , is the gaussian noise with zero mean and unit variance at the legitimate destination , and is the gaussian noise with zero mean and variance of at the eavesdropper . and are assumed to be independent .the transmit power constraint at relay is given as , where denotes the number of channel uses . note that the channel output at the eavesdropper is a physically degraded version of the channel output at the legitimate destination .we consider the following two scenarios regarding the availability of randomness . in the first scenario , a common fictitious message of rate ,i.e. , ] for two integers and denotes the set . ]is available at the source and the two relays . in this case ,a secrecy code consists of a message ] to \times [ 1:2^{nc_2}] ] to , and a decoding function at the legitimate destination that maps to ] to . for both scenarios , the probability of erroris given as .a secrecy rate of is said to be achievable if there exists a sequence of codes such that and .the secrecy capacity is the supremum of all achievable secrecy rates .let and denote the secrecy capacity for the first scenario and for the second scenario , respectively . because the legitimate destination and the eavesdropper do not cooperate , the secrecy capacity in fig .[ fig : physical ] is the same as that of stochastically degraded case in fig .[ fig : stochastic ] , in which is given as , where has zero mean and unit variance and is independent of .in this section , we present main results of this paper on the secrecy capacity of the degraded gaussian diamond - wiretap channel described in section [ sec : model ] . for the brevity of presentation ,let us define the following functions : [ eqn : f_def ] where the domain of , , and is ] for . becomes negative infinity when . ]the following two theorems give upper and lower bounds on , respectively , whose proofs are in section [ sec : proof ] .[ thm : common_ub ] for , is upper - bounded by where for .we note that the functions s for ] and , is lower - bounded by where we note that the functions s for ] are defined in ( [ eqn : f_def ] ) , , and .[ thm : no_lb ] for ] are defined in ( [ eqn : f_def ] ). 
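Because the functions f_1 , ... above are only stated implicitly here, the following rough numerical sketch is an assumption-laden stand-in: it uses the standard Gaussian mutual-information expressions for a two-transmitter MAC with correlated inputs and a noisier (degraded) eavesdropper, combined in a DF-style form min(BC rate, MAC rate) minus a leakage term, merely to visualize how such a secrecy rate varies with the correlation coefficient. It is not the paper's exact bound, and all parameter values are illustrative.

```python
# A hedged numerical sketch (not the paper's exact bounds): standard Gaussian
# mutual-information terms for correlated relay inputs, a degraded
# eavesdropper, and a DF-style rate of the form min(C1 + C2, MAC) - leakage.

import math

def c(snr):
    return 0.5 * math.log2(1.0 + snr)

def df_style_secrecy_rate(rho, P1, P2, eve_noise_var, C1, C2):
    s = P1 + P2 + 2.0 * rho * math.sqrt(P1 * P2)   # coherent power at the destination
    mac_rate = c(s)                                 # unit-variance noise at destination
    leakage = c(s / eve_noise_var)                  # eavesdropper sees extra noise
    return max(0.0, min(C1 + C2, mac_rate) - leakage)

P1 = P2 = 5.0          # relay transmit powers (illustrative)
eve_noise_var = 4.0    # > 1, since the eavesdropper is degraded (illustrative)
C1 = C2 = 2.0          # orthogonal BC link capacities in bits (illustrative)
for rho in (0.0, 0.25, 0.5, 0.75, 1.0):
    print(rho, round(df_style_secrecy_rate(rho, P1, P2, eve_noise_var, C1, C2), 3))
```

Under these illustrative assumptions the rate grows with the correlation coefficient, matching the intuition that decode-and-forward benefits from coherent combining; the remarks below explain why the optimal correlation can differ between the two randomness scenarios.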
note that in both the upper and lower bounds for the first scenario , the term , which corresponds to the required rate of randomness to confuse the eavesdropper , appears only with , which signifies the amount of information sent through the mac .in contrast , in both the upper and lower bounds for the second scenario , because the fictitious message has to be sent through the bc , appears in common for all terms .this affects sufficient ranges of correlation coefficient for the lower bounds for large enough as remarked in the following .[ remark : negative ] for large enough , sufficient ranges of correlation coefficient for the lower bounds in theorem [ thm : common_lb ] and theorem [ thm : no_lb ] are different . for the first scenario , note that the df rate is maximized at and that it is enough to consider nonnegative for the pdf - m scheme . on the other hand , for the second scenario , because the minus term is common for all terms , considering smaller can be beneficial by decreasing and we need consider all . in the df scheme for the second scenario , the source sends to both relays the fictitious message as well as the message .hence , is obtained from by replacing and by and , respectively . for a partial df scheme incorporated with multicoding for the second scenario , a straightforward extension from that for the first scenariois to let the source send the fictitious message as well as the partial message and the relay codeword index to relay for .since each relay decodes a partial genuine message and a whole fictitious message , we call this scheme as pdf - df - m scheme . note that is obtained by replacing and by and , respectively , in . however , since the same fictitious message is sent to both relays , there exists inefficiency in the use of the bc . to resolve this inefficiency ,we let each of relay codebooks be indexed by independent partial fictitious message , i.e. , codebook for relay is constructed for each by representing as two partial fictitious messages . by using this pdf - pdf - m scheme where each relay decodes a partial genuine message and a partial fictitious message ,we show that is achievable , which has intead of in .we note that having independent fictitious message at each relay reduces the achievable rate region over the mac , which results in additional contraints and in . nevertheless , as long as , is always higher than or equal to because , which should be satisfied if , implies and . if , can be strictly higher than as illustrated in fig .[ fig : pdfdf ] .let and denote the rates of pdf - df and pdf - pdf schemes ( without multicoding ) .similarly as for the first scenario , let us consider sufficiently large and symmetric channel parameters .since , we only consider the df , pdf - pdf - m , and pdf - pdf schemes for the lower bounds .it can be easily proved that the df scheme , which achieves }\min ( c , f_4(\rho))-f_5(\rho) ] is such that . ) , ] .furthermore , for any ] as follows : note that we can rewrite the condition in ( [ eqn : cap_condition ] ) as and .since and are monotonically decreasing function and monotonically increasing function of ] such that .hence , we have .now , let us show .since both and for ] and ] and ] and ] as the partial message pair \times [ 1:2^{nr_2}] ] is uniformly distributed over ] .for each ] , generate independently according to . *encoding at the source : for message and fictitious message , the source finds an such that for ] : note that fictitious message is given at relay . 
after receiving from the source, relay sends . * decoding at the legitimate destination : the legitimate destination finds such that the legitimate destination declares that is the message .* error analysis : from the mutual covering lemma , the encoding error at the source averaged over the codebooks tends to zero as tends to infinity if for ] has zero mean and variance of and the correlation coefficient between and is ] and is from some similar steps as in the derivation of ( [ eqn : intro_u ] ) .note that we have the following lower and upper bounds on : hence , there exists ] .then , due to similar reasons as in the proof of theorem [ thm : common_ub ] , we have . then , from ( [ eqn : f4_sec])-([eqn : four ] ) , ( [ eqn : rho_def ] ) , and ( [ eqn : secrety_minus ] ) , we have . now , assume that further satisfies . ] .this concludes the proof of theorem [ thm : no_ub ] . as in the proof of theorem [ thm : common_lb ] , we first assume that the channel from the relays to the legitimate destination and the eavesdropper is a discrete memoryless channel with a conditional pmf .fix and .let for the df scheme , by letting the source send both the message and the fictitious message to the relays , an achievable secrecy rate of is obtained from ( [ eqn : dfd_1 ] ) by replacing and by and , respectively .similarly , for the pdf - df - m scheme , by letting the source send the fictitious message as well as the partial message and the relay codeword index to relay for in the pdf - m scheme for the first scenario , an achievable secrecy rate of is obtained from ( [ eqn : pdfmd_1 ] ) by replacing and by and , respectively .the pdf - pdf - m scheme is described in the following . * codebook generation : we represent the message ] as a partial message pair \times [ 1:2^{nr_2}] ] , respectively , for some nonnegative rates , and such that consider for ] and \times [ 1:2^{nr'_k } ] \times [ 1:2^{n\tilde{r}_k}] ] , the source sends to relay . * encoding at relay ] , the transmission of from the source to relay requires from the standard error analysis , the decoding error at the legitimate destination averaged over the codebooks tends to zero as tends to infinity if * secrecy analysis : we can show if ( [ eqn : ach_fictitious2 ] ) and the following inequalities are satisfied . see section [ sub : secrecy_analysis ] for the detail . therefore , there exists a sequence of codes such that tends to zero and as tends to infinity if ( [ eqn : ach_fictitious2 ] ) , ( [ eqn : ach_eq2])-([eqn : ach_sec3_no ] ) are satisfied . by performing fourier - mozkin elimination to ( [ eqn : ach_fictitious2 ] ) , ( [ eqn : ach_eq2])-([eqn : ach_sec3_no ] ) and by taking , a secrecy rate of subject to the constraints is obtained . from the standard discretization procedure , , , and obtained by evaluating ( [ eqn : dfd_2 ] ) , ( [ eqn : pdfmd_2 ] ) , ( [ eqn : pdfmd2 ] ) , and ( [ eqn : pdfmd2_constr ] ) for the degraded gaussian diamond - wiretap channel discussed in section [ sec : model ] and a jointly gaussian distribution such that for ] .let denote the random codebook . 
for message , fictitious message , and chosen relay codeword indices , we have for sufficiently large , where is because the eavesdropper who already knows and can decode and with high probability when ( [ eqn : ach_sec1])-([eqn : ach_sec3 ] ) are satisfied for the first scenario and when ( [ eqn : ach_sec1_no])-([eqn : ach_sec3_no ] ) are satisfied for the second scenario . hence , we have . in this paper , we derived nontrivial upper and lower bounds on the secrecy capacity of the degraded gaussian diamond - wiretap channel under two scenarios regarding the availability of randomness . our upper bound was obtained by taking into account the correlation between the two relay signals and the availability of randomness at each encoder , which generalizes both the upper bound on the capacity of the diamond channel without secrecy constraint and the upper bound on the sum secrecy capacity of the mac wiretap channel . for the lower bound , we proposed a df scheme and a partial df scheme incorporated with multicoding , called the pdf - m scheme for the first scenario and the pdf - df - m and pdf - pdf - m schemes for the second scenario depending on whether the relay decodes the whole or a partial fictitious message . in the first scenario , the pdf - m scheme with a strictly positive correlation coefficient was shown to outperform the df and pdf ( without multicoding ) schemes for some channel parameters . we also showed that the pdf scheme is asymptotically optimal for the first scenario when at least one of the relay power constraints tends to infinity . for the second scenario , we presented a condition on the channel parameters under which the pdf - pdf - m scheme is optimal . furthermore , because the fictitious message has to be sent through the bc for the second scenario , it was shown to be beneficial to consider negative correlation in all of the df , pdf - df - m , and pdf - pdf - m schemes when the bc cut becomes the bottleneck . we also investigated the effect of the presence of an eavesdropper on the capacity : if there is a sufficient amount of common randomness between the source and the relays , there is no decrease in capacity due to the eavesdropper for some range of . as a final remark , it seems straightforward to combine our df scheme and the partial df scheme incorporated with multicoding by using superposition coding , but the resultant rate expression would be rather complicated and yield less useful insight . d. kobayashi , h. yamamoto , and t. ogawa , `` how to attain the ordinary channel capacity securely in wiretap channels , '' in _ proc . ieee information theory workshop on theory and practice in information - theoretic security _ , 2005 , pp . 13–18 . r. a. chou and m. r. bloch , `` uniform distributed source coding for the multiple access wiretap channel , '' in _ proc . ieee conference on communications and network security ( cns ) _ , oct . 2014 , pp . 127–132 .
in this paper , we present nontrivial upper and lower bounds on the secrecy capacity of the degraded gaussian diamond - wiretap channel and identify several ranges of channel parameters where these bounds coincide with useful intuitions . furthermore , we investigate the effect of the presence of an eavesdropper on the capacity . we consider the following two scenarios regarding the availability of randomness : 1 ) a common randomness is available at the source and the two relays and 2 ) a randomness is available only at the source and there is no available randomness at the relays . we obtain the upper bound by taking into account the correlation between the two relay signals and the availability of randomness at each encoder . for the lower bound , we propose two types of coding schemes : 1 ) a decode - and - forward scheme where the relays cooperatively transmit the message and the fictitious message and 2 ) a partial df scheme incorporated with multicoding in which each relay sends an independent partial message and the whole or partial fictitious message using dependent codewords . wiretap channel , diamond channel , diamond - wiretap channel , multicoding
malicious cryptography and malicious mathematics malicious are an emerging field [ 2 , 3,4 ] that finds its origin in the initial work of young and yung on the use of asymmetric cryptography in the design of dedicated offensive functions for money extorsion ( cryptovirology ) [ 5 ] . but this initial approach is very limited and gives only a very small insight of the way mathematics and cryptography can be actually perverted by malware .malicious cryptology and malicious mathematics make in fact explode young and yung s narrow vision .this results in an unlimited , fascinating yet disturbing field of research and experimentation .this new domain covers several fields and topics ( non - exhaustive list ) : * use of cryptography and mathematics to develop `` _ super malware _ ''( _ ber - malware _ ) which evade any kind of detection by implementing : * * optimized propagation and attack techniques ( e.g. by using biased or specific random number generator ) . * * sophisticated self - protection techniques .the malware code protect itself and its own functional activity by using strong cryptography - based tools . * * code invisibility techniques using testing simulability . * * code mutation techniques ( by combining the different methods of the previous three items ) . * use of complexity theory or computability theory to design undetectable malware . * use of malware to perform applied cryptanalysis operations ( theft of secrets elements such as passwords or secret keys , static or on - the - fly modification of systems to reduce their cryptographic strength ... ) by a direct action on the cryptographic algorithm or its environment . * design and implementation of encryption systems with hidden mathematical trapdoors .the knowledge of the trap ( by the system designer only ) enables to break the system very efficiently . despite the fact that system is open and public, the trapdoor must remain undetectable .this can also apply to the keys themselves in the case of asymmetric cryptography . to summarize, we can define malicious cryptography and malicious mathematics as the interconnection and interdependence of computer virology with cryptology and mathematics for their mutual benefit .the possibilities are virtually infinite .it is worth mentioning that this vision and the underlying techniques can be translated for `` non - malicious '' use as the protection of legitimate codes ( e.g. for copyright protection purposes ) against static ( reverse engineering ) and dynamic ( sandboxing , virtualization ) analyses . if simple and classical obfuscation techniques are bound to fail as proved by barak and al . theoretically ( for a rather restricted model of computation ) , new models of computation and new malware techniques have proved that efficient code protection can be achieved practically .if the techniques arising from this new domain are conceptually attractive and potentially powerful , their operational implementation is much more difficult and tricky .this requires a very good mastery of algorithms and low - level programming . above all the prior operational thinking of the attackis unavoidable .the same code implementing the same techniques in two different contexts will have quite different outcomes .in particular , we must never forget that the code analysis is static ( disassembly / decompilation ) but also dynamic ( debugging , functional analysis ... ) .code encryption usually protects possibly against static code analysis only . 
in this paperwe will show how the techniques of malicious cryptography enable to implement total amoring of programs , thus prohibiting any reverse engineering operation .the main interest of that approach lies in the fact that transec properties are achieved at the same time . in other words ,the protected binary has the same entropy as any legitimate , unprotected code .this same technique can also achieve a certain level of polymorphism / metamorphism at the same time : a suitable 59-bit key stream cipher is sufficient to generate up to variants very simply .more interestingly , the old fashioned concept of decryptor which usually constitutes a potential signature and hence a weakness , is totally revisited .these techniques have been implemented by the author in the libthor which has been written and directed by anthony desnos .the paper is organized as follows .section [ cacm ] recalls some important aspect about code protection and discusses key points about code armouring and code mutation .then section [ cs ] presents the context we consider to apply our technique : we aim at protecting critical portions of code that are first transformed into an intermediate representation ( ir ) , then into the bytecode .this code itself is that of virtual machine used to provide efficent protection against dynamic analysis .section [ mprng ] presents the different models and techniques of malicious pseudo - random number generator ( prng ) .finally section [ imp_k ] presents implementation issue which must be considered to achieve protection against both static and dynamic analyses .we will not recall in details what these two techniques are . the reader may consult for a complete presentation with respect to them . as far as malicious cryptographyis concerned , we are just going to discuss some critical points which are bottlenecks in their effective implementation .most of the times the attempts to play with these techniques fail due to a clumsy use of cryptography .code armouring consists in writing a code so as to delay , complicate or even prevent its analysis .as for code mutation techniques they strive to limit ( polymorphism ) or to remove ( metamorphism ) any fixed component ( invariant ) from one mutated version to another . the purpose is to circumvent the notion of antiviral signature , may it be a a simple sequence of contiguous or nor contiguous bytes or meta - structures such as control flow graphs ( cfg ) and other traces describing the code functional ( behavioral ) structure . in all these cases ,the general approach is to encrypt the code to protect or to use special techniques for generating random data .but encryption and generation of randomness relates to two major practical problems : their characteristic entropy profile and the secret elements ( keys ) management .the whole problem lies in the fact that code armouring and code mutation involve random data .these must be generated on - the - fly and in the context of metamorphism , the generator itself must be too .for sake of simplicity , we shall speak of _ pseudo - random number generator _ ( prng ) to describe both a random number generator and an encryption system .the difference lies in the fact that in the latter case either random data produced from the expansion of the key are combined with the plaintext ( stream ciphers ) or they are the result of the combination of the key with the plaintext ( block ciphers ) .the whole issue lies in the generation of a so - called `` good '' randomness . 
except that in the context of malicious cryptography , the term `` good '' does not necessarily correspond to what cryptographersusually mean .in fact , it is better yet a simplified but sufficient reduction as a first approximation to use the concept of entropy . in the same way, the term of random data will indifferently describe the random data themselves or the result of encryption . consider a ( malicious ) code as an information source . when parsed , the source outputs characters taking the possible values , each with a probability $ ] .then the entropy of the source is the following sum - grams and would compute entropy when . ] : random data , by nature will exhibit a high entropy value . ] . on the contrary ,non random data exhibit a low entropy profile ( they are easier or less difficult to predict ) . from the attackers point of view the presence of random data means that something is hidden but he has to make the difference between legitimate data ( e.g. use of packers to protect code against piracy ) and illegitimate data ( e.g. malware code ) . in the nato terminology at the present time it is the most precise and accurate one as far as infosec is concerned random data relate to a comsec ( _ communication security _ ) aspect only . for the attacker ( automated software or human expert ) ,the problem is twofold : first detect random data parts inside a code and then decrypt them . in this respect ,any code area exhibiting a high entropy profile must be considered as suspicious . to prevent attention to be focused on those random parts ,is it possible to add some transec ( _ transmission security _ ) aspect .the most famous one is steganography but for malware or program protection purposes it is not directly usable ( data can not be directly executed ) and we have to find different ways .the other solution is to use malicious statistics as defined and exposed in .it is also possible to break randomness by using noisy encoding techniques like perseus . in this casethe code remains executable while being protected and exhibiting low entropy profile at the same time ( or the entropy profile of any arbitrary kind of data ) . as for an exemple , on a 175-bytes script , the unprotected code has an entropy of 3.90 .an encrypted version ( by combining simple transpositions with simple substitutions ) of that code has an entropy equal to 5.5 .when protected by perseus the entropy is about 2.57 .so it is very close to a normal program and thus evade entropy - based detection .this applies well on any data used for code mutation ( e.g. junk code insertion ) , including specific subsets of code as cfgs : randomly mutated cfg must exhibit the same profile as any normal cfg would .otherwise , considering the comsec aspect only is bound to make the code detection very easy .encrypting a code or a piece of code implies its preliminary deciphering whenever it is executed .but in all of the cases except those involving money extortion introduced young and yung the key must be accessible to the code itself to decipher .consequently in a way or another it is contained in a more or less obfuscated form inside the code .therefore is it accessible to the analyst who will always succeed in finding and accessing it . 
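as an aside , the byte - entropy measure invoked above , the shannon sum \( h = -\sum_i p_i \log_2 p_i \) whose statement was garbled in extraction , is easy to make concrete . the sketch below computes the 1 - gram entropy of a buffer in bits per byte , so that a fully random section approaches 8 while plain code or text sits much lower , in line with the 3.90 / 5.5 / 2.57 figures quoted for the script example ; the n - gram variant used for finer profiling simply histograms sliding windows of n bytes instead of single bytes .
....
#include <math.h>
#include <stdio.h>

/* shannon entropy (bits per byte) of a buffer, 1-gram case */
static double byte_entropy(const unsigned char *buf, size_t len)
{
    size_t count[256] = {0};
    double h = 0.0;
    size_t i;

    if (len == 0)
        return 0.0;
    for (i = 0; i < len; i++)
        count[buf[i]]++;
    for (i = 0; i < 256; i++)
        if (count[i]) {
            double p = (double)count[i] / (double)len;
            h -= p * log2(p);
        }
    return h;                           /* 0 <= h <= 8 */
}

int main(int argc, char **argv)
{
    static unsigned char buf[1 << 20];
    FILE *f;
    size_t n;

    if (argc < 2 || !(f = fopen(argv[1], "rb")))
        return 1;
    n = fread(buf, 1, sizeof buf, f);   /* first 1 mb is enough for a profile */
    fclose(f);
    printf("%s: %.2f bits/byte\n", argv[1], byte_entropy(buf, n));
    return 0;                           /* build with -lm */
}
....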
instead of performing cryptanalysis , a simple decoding /deciphering operation is sufficient .it is therefore necessary to consider keys that are external to the encrypted code .two cases are possible : * environmental key management introduced in and developped in .the code gathers information in its execution environment and calculates the key repeatedly .the correct key will be computed when and only when the suitable conditions will be realized in the code environment which is usually under the control of the code designer .the security model should prohibit dictionary attacks or environment reduction attacks ( enabling reduced exhaustive search ) by the code analyst .consequently the analyst must examine the code in an controlled dynamic area ( sandbox or virtual machine ) and wait until suitable conditions are met without knowing when they will be .however it is possible to build more operational scenarii for this case and to detect that the code is being analyzed and controlled . *use of -ary codes in which a program is no longer a single monolithic binary entity but a set of binaries and non executable files ( working in a serial mode or in a parallel mode ) to produce a desired final ( malicious or not ) action .then the analyst has a reduced view on the whole code only since generally he can access a limited subset of this -set . in the context of ( legitimate ) code protection , one of the files will be a kernel - land module communicating with a userland code to protect .the code without the appropriate operating environment with a user - specific configuration by the administrator will never work .this solution has the great advantage of hiding ( by outsourcing it ) , the encryption system itself .it is one particular instance with respect to this last solution that we present in this paper .in this section we are going to illustrate our approach with simple but powerful cases . without loss of generality and for sake of clarity , we consider a reduced case in which only a very few instructions are protected against any disassembly attempt . of course any real case as we did in libthor will consider far more instructions , especially all critical ones ( those defining the cfg for instance ) .the libthor library initiated and mainly developped by anthony desnos uses virtual machines ( vm ) as the core tool .virtual machines offer a powerful protection for an algorithm or anything else that we would like to protect against reverse engineering .we can have a lot of different vms piled up ( like russian dolls ) into a software .this technique must be combined with classical techniques since it is just a one more step towards code protection . in libthor ,the main goal of vms is to build a simple code which interprets another one .the idea is to take a piece of x86 assembly instructions and to have a simple , dynamic , metamorphic vms very quickly to interpret it .it is worth stressing on the fact that you can embed different vm into the same program to protect differents parts . 
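going back to the first option listed above , environmental key management , the following toy sketch shows the general shape of such a key derivation : observations of the execution environment are folded into a running digest , and only the environment foreseen by the code designer reproduces the expected key . everything here is illustrative ; the fnv - 1a digest stands in for a proper cryptographic hash , the two observables ( hostname , user ) are far too weak on their own , and the target value is a placeholder .
....
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>

/* fnv-1a 64-bit, a stand-in for a real key-derivation hash */
static uint64_t fnv1a(const char *s, uint64_t h)
{
    while (*s) {
        h ^= (unsigned char)*s++;
        h *= 1099511628211ULL;
    }
    return h;
}

int main(void)
{
    /* placeholder digest precomputed by the designer on the target host */
    const uint64_t expected = 0x1122334455667788ULL;
    uint64_t h = 14695981039346656037ULL;    /* fnv offset basis */
    char host[256] = {0};
    const char *user = getenv("USER");

    gethostname(host, sizeof host - 1);       /* observable #1 */
    h = fnv1a(host, h);
    if (user)                                 /* observable #2 */
        h = fnv1a(user, h);

    if (h == expected)
        printf("activation environment found, key = %016llx\n",
               (unsigned long long)h);
    else
        printf("wrong environment, no key material derived\n");
    return 0;
}
....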
in desnos libthor vmsare codes which interpret an intermediate representation ( ir ) derived from the reil language .so a translation between x86 assembly instructions and the chosen ir is performed .the vm is totaly independent from the libc or anything like that .the main goal of the vm is to run an algorithm into an encapsulated object ( which can be loaded anywhere ; in fact it is a simple shellcode ) , but the main feature of the vm is to have a different version of it at each generation for a same code so if we want to protect a simple instruction , we can build a new vm every time to protect the same .this new vm will be different from the previous one .moreover everything must be dynamic .therefore we must have simple working rules : * the bytecode must be dynamic and hidden by using the malicious cryptography tools ( malicious pnrg ) we are exposing hereafter ; * information must be dynamic ( area for each context , the offset for each register in the context ... ) ; * the code must be transformed with the internal libthor metamorphic package ( in which malicious pnrg can used used too ) ; * integer constants can be hidden with libthor internal junk package .this is very useful to hide opcode values , register offsets or anything which represent any invariant in a program . hereagain malicious prngs play a critical role . during the code generation ( mutation ) , we use the libthor metamorphic package a lot of times on a function .we have several steps during a generation : 1 .obfuscation of c source code .2 . compiling the c source code of the vm into a library .3 . extraction of interesting functions from the library .4 . transformation of x86 assembly instructions into ir .obfuscation of ir by using metamorphic packagetransformation of ir into bytecodecreation of dynamic functions to handle the bytecodeobfuscation of functions by using the metamorphic package .the mutated code is produced here and the malicious prng is mainly involved here .assembly of all parts to have a simple shellcode .our prng is essentially dedicated to the protection of the bytecode which is the final result of the transformation : x86 asm reil ir bytecode .here is an illustrating example : .... [ x86 asm ] mov eax , 0x3 [ b803000000 ] [ reil ir ] str ( 0x3 , b4 , 1 , 0 ) , ( eax , b4 , 0 , 0 ) [ bytecodes ] 0xf1010000 0x40004 0x3 0x0 0x6a .... we have five fields in the bytecode corresponding respectively to : * 0xf1010000 * * 0xf1 : the opcode of the instruction ( str ) , * * 0x01 : specifies that it is an integer value , * * 0x00 : useless with respect to this instruction , * * 0x00 : specifies that it is a register . * 0x40004 * * 0x04 : the size of the first operand , * * 0x00 : useless with respect this instruction , * * 0x04 : the size of the third operand , * 0x3 : direct value of the integer , * 0x0 : useless with respect to this instruction , * 0x6a : value of the register . in this setting the 0x00 values ( useless with respect this instruction ) contribute directly to the transec aspect .now that the working environment is defined , let us explain how a malicious prng can efficiently protect the opcode values while generating them dynamically .sophisticated polymorphic / metamorphic or obfuscation techniques must rely on prng ( pseudo - random number generator ) . 
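the five - field layout just described for the str example can be captured by a small decoder . the packing below ( opcode and type flags in the high bytes of the first word , operand sizes in the second word , immediate value in the third , register offset in the fifth ) is a reconstruction from the example values and should be read as an assumption , not as the exact libthor encoding .
....
#include <stdint.h>
#include <stdio.h>

/* hypothetical view of one instruction of the ir bytecode above */
struct vm_insn { uint32_t w[5]; };

static void decode(const struct vm_insn *in)
{
    uint32_t w0 = in->w[0], w1 = in->w[1];

    printf("opcode       : 0x%02x\n", (w0 >> 24) & 0xff);  /* 0xf1 = str      */
    printf("op1 type     : 0x%02x\n", (w0 >> 16) & 0xff);  /* 0x01 = integer  */
    printf("op3 type     : 0x%02x\n",  w0        & 0xff);  /* 0x00 = register */
    printf("op1 size     : %u\n",      (w1 >> 16) & 0xff);
    printf("op3 size     : %u\n",       w1        & 0xff);
    printf("immediate    : 0x%x\n",     in->w[2]);
    printf("register id  : 0x%x\n",     in->w[4]);         /* 0x6a, here eax */
}

int main(void)
{
    /* str (0x3, b4, 1, 0), (eax, b4, 0, 0) from the listing above */
    struct vm_insn i = { { 0xf1010000u, 0x00040004u, 0x3u, 0x0u, 0x6au } };
    decode(&i);
    return 0;
}
....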
in the context of this paper ,the aim is to generate sequences of random numbers ( here bytecode values ) on - the - fly while hiding the code behavior .sequences are precomputed and we have to design a generator which will afterwards output those data .the idea is that any data produced by the resulting generator will be first used by the code as a valid address , and then will itself seed the pnrg to produce the next random data .three cases are to be considered : 1 .the code is built from any arbitrary random sequence ; 2 .the sequence is given by a ( non yet protected ) instance of bytecode and we have to design an instance of pnrg accordingly ; 3 .a more interesting problem lies in producing random data that can be somehow interpreted by a prng as meaningful instructions like jump 0x89 directly .this relates to interesting problems of prng cryptanalysis .we are going to address these three cases . from a general point of viewit is necessary to recall that for both three cases the malware author needs reproducible random sequences . by reproducible ( hence the term of pseudo - random ) , we mean that the malware will replay this sequence to operate its course of actions .the reproducibility condition implies to consider a _ deterministic finite - state machine _ ( dfsm ) .the general scheme of how this dfsm is working is illustrated as follows . without the dfsm , any instruction data whenever executed produced a data used the next instruction ( e.g. an address , an operand ... ) . any analysis of the code will easily reveal to the malware analyst all the malware internals since all instructions are hardcoded .but if a few data / instructions are kept under an encrypted form , and deciphered at execution only , the analysis is likely to be far more difficult ( up to decryptor and the secret key protection issue ) .it is denied of a priori analysis capabilities .so we intend to have where for all i. upon execution , we just have to input data into the dfsm which will then output the data .a few critical points are worth stressing on 1 .no key is neither required nor used ; 2 .instructions can similarly protected as well .of course to be useful as a prevention tool against analysis , the dfsm must itself be obfuscated and protected against analysis . but this last point is supposed to be fulfilled .we will show a more powerful implementation by using -ary malware in section [ imp_k ] . in this case, the sequence is arbitrary chosen before the design of the code and hence the code is written directly from this arbitrary sequence .this case is the most simple to manage .we just have to choose carefully the dfsm we need .one of the best choice is to take a congruential generator since it implies a very reduced algorithm with simple instructions .let us consider an initial value and the corresponding equation where is the multiplier , is the increment and is the modulus . since the length of the sequence involved in the malware design is rather very short ( up to a few tens of bytes ) , the choice of those parameters is not as critical as it would be for practical cryptographic applications . 
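the congruential recurrence alluded to above , \( x_{n+1} = (a\,x_n + c) \bmod m \) , is a one - liner . the sketch below instantiates it with the classical park - miller `` minimal standard '' parameters ( a = 16807 , c = 0 , m = 2^31 - 1 ) , which is one plausible reading of the first entry in the parameter list that follows , the stripped values being unrecoverable here .
....
#include <stdint.h>
#include <stdio.h>

/* x_{n+1} = (a * x_n + c) mod m with park-miller parameters,
   one plausible instantiation of the "standard minimal generator" */
static uint64_t lcg_next(uint64_t x)
{
    const uint64_t a = 16807ULL, c = 0ULL, m = 2147483647ULL; /* 2^31 - 1 */
    return (a * x + c) % m;
}

int main(void)
{
    uint64_t x = 1;   /* seed x_0 chosen by the code designer */
    int i;

    /* replay the short driving sequence (a few tens of values at most) */
    for (i = 1; i <= 8; i++) {
        x = lcg_next(x);
        printf("x_%d = %llu\n", i, (unsigned long long)x);
    }
    return 0;
}
....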
in this respect, one can refer to knuth s seminal book to get the best sets of parameters .here are a few such examples among many others : standard minimal generator : : .vax - marsaglia generator : : .lavaux & jenssens generator : : .haynes generator : : .kuth s generator : : and .of course the choice of the modulus is directly depending on the data type used in the malware .another interesting approach is to consider hash functions and s / key .the principle is almost the same .we take a hash function which produces a -bit output from a -bit input with . in our casewe can build in the following way ....m = < data to protect><padding of random data><size of data > .... or equivalently ....m = d_i < random data > |d_i| .... then we choose a -bit initialization vector ( iv ) and we compute the random sequence as follows the iteration value can be used to get one or more required arbitrary value thus anticipating the next case .of course the nature of the hash function is also a key parameter : you can either use existing hash function ( e.g md5 , sha-1 , ripemd 160 , sha-2 ... ) and keep only a subset of the output bit ; or you can design your own hash function as explained in . in this slightly different case ,the sequence is determined by a ( non yet protected ) instance of a code .this issue is then to design or use an instance of prng accordingly .this is of course a far more difficult issue which implies cryptanalytic techniques . to formalize the problem we have a sequence which represents critical data ( addresses , asm instructions , operands ... ) of a particular instance of a ( malware ) code . as for example let us consider three series of 32-bit integers describing bytecode values as defined in section [ cs ] : .... 0x2f0100000x040004 0x3 0x0 0x89 ( 1 ) 0x3d010000 0x040004 0x3 0x0 0x50 ( 2 ) 0x5010000 0x040004 0x3 0x0 0x8d ( 3 ) .... they are just different instances of the same instruction .the aim is to have these data in the code under a non hardcoded form .then we intend to code them under an obfuscated form , e.g. we then have to find a dfsm such that the notation directly suggests that the quantity input to the dfsm is a key in a cryptographic context but these keys have to exhibit local low entropy profile at the same time .so the malicious prng must take this into account as well . in this case, we have to face a two - fold cryptanalytic issue : * either fix the output value and find out the key which outputs for an arbitrary dfsm , * or for an arbitrary set of pairs design a unique suitable dfsm for those pairs .the first case directly relates to a cryptanalytic problem while the second refers more to the problem of designing cryptographic dfsms with trapdoors . in our context of malicious cryptography ,the trapdoors here are precisely the arbitrary pairs of values while the dfsm behaves for any other pair as a strong cryptosystem .this second issue is far more complex to address and will not be exposed in this paper ( research under way ; to be published later ) .we will focus on the first case which has been partially addressed for real - life cryptosystem like _ bluetooth _e0 in the context of zero knowledge - like proof of cryptanalysis .but in our case we do not need to consider such systems and much simpler dfsm can be built conveniently for our purposes : sequences of data we use are rather short .we have designed several such dfsms and we have proceeded to their cryptanalysis to find out the keys which output the values . 
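the first of these two problems , fixing an output value and searching for the keys that produce it , can be illustrated on a toy generator . the dfsm below is a deliberately small keyed mixer , nothing to do with the 59 - bit design used later , and the exhaustive sweep shows that a short output block is reached from many different keys , which is exactly the degree of freedom exploited later for mutation .
....
#include <stdint.h>
#include <stdio.h>

/* toy keyed dfsm: two multiply / xor-shift rounds, 16-bit output block */
static uint32_t toy_dfsm(uint32_t k)
{
    uint32_t s = k * 2654435761u;   /* knuth multiplicative constant */
    s ^= s >> 13;
    s *= 0x5bd1e995u;               /* murmur2-style mixing constant */
    s ^= s >> 15;
    return s & 0xffffu;
}

int main(void)
{
    const uint32_t target = 0x0089; /* the m_i the code must rebuild */
    unsigned long hits = 0;
    uint32_t k;

    /* sweep a small 24-bit key space; every hit is a usable k_i */
    for (k = 0; k < (1u << 24); k++)
        if (toy_dfsm(k) == target) {
            if (hits < 4)
                printf("k = 0x%06x  ->  0x%04x\n", k, target);
            hits++;
        }
    printf("%lu keys map to 0x%04x\n", hits, target);
    return 0;
}
....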
as we will see , those dfsm have to exhibit additional features in order to * be used for code mutation purposes , * exhibit transec properties . in other words , if we have , then and must have the same entropy profile .replacing with a having a higher entropy profile would focus the analyst s attention . without loss of generality , let us consider the mathematical description of a 59-key bit stream cipher ( we currently work on block cipher which should offer more interesting features ; to be continued ) we have designed for that purpose .other dfsms with larger key size ( up to 256 ) have been also designed ( available upon request ) .the principle is very simple yet powerful : three registers and are filtered by a boolean function , thus outputing bits ( or bytes in a vectorized version ) that are combined to the plaintext ( figure [ combsys ] ) . the value initializes the content of registers and at time instant , and outputs bits which represent the binary version of values .the linear feedback polynomials driving the registers are the following : where denotes the exclusive or .the combination function is the _ majority function_ applied on three input bits and is given by an efficient implementation in c language is freely available by contacting the author . for our purposes , we will use it in a procedure whose prototype is given by ....void sco(unsigned long long int * x , unsigned long long int k ) { / * k obfuscated value ( input ) , x unobfuscated value ( output ) * / / * ( array of 8 unsigned char ) by sco * / ... } .... now according to the level of obfuscation you need , different ways exist to protect your critical data inside the code ( series of integers ( 1 ) , ( 2 ) and ( 3 ) above ) .we are going to detail two of them .the dfsm outputs critical data under a concatenated form to produce chunks of code corresponding to the exact entropy of the input value , thus preventing any local increase of the code entropy . for the dfsmconsidered , it means that we output series ( 1 ) , ( 2 ) and ( 3 ) under the following form ....1)-- > 0x2f010000000400040000000300000000000000892)-- > 0x3d01000000040004000000030000000000000050 3)-- > 0x050100000004000400000003000000000000008d .... let us detail the first output sequence ( 1 ) .it will be encoded as three 59-bit outputs and ....m_1 = 0x0bc04000000ll ; m_2 = 0x080008000000060ll ; m_3 = 0x000000000000089ll ; .... to transform and back into five 32-bit values and , we use the following piece of code : .... / * generate the m_i values * / sco(&m_1 , k_1 ) ; sco(&m_2 , k_2 ) ; sco(&m_3 , k_3 ) ; x_1 = m_1 > > 10 ; / * x_1 = 0x2f010000l * / x_2 = ( ( m_2 > > 37 ) | ( m_1 < < 22 ) ) & 0xffffffffl / * x_2 = 0x00040004l * / x_3 = ( m_2 > > 5 ) & 0xffffffffl ; / * x_3 = 0x3 * / x_4 = ( ( m_3 > > 32 ) | ( m_2 < < 27 ) ) & 0xffffffffl ; / * x_4 = 0x0 * / x_5 = m_3 & 0xffffffffl ; / * x_5 = 0x89 * / .... values and will be stored in the code as the values and with : .... k_1 = 0x6aa006000000099ll ; k_2 = 0x500403000015dc8ll ; k_3 = 0x0e045100001eb8all ; .... similarly we have for sequence ( 2 ) .... m_1 = 0x0f404000000ll ; k_1 = 0x7514360000053c0ll ; m_2 = 0x080008000000060ll ; k_2 = 0x4c07a200000a414ll ; m_3 = 0x000000000000050ll ; k_3 = 0x60409500001884all ; .... and for sequence ( 3 ) .... m_1 = 0x01404000000ll ; k_1 = 0x76050e00001f0b1ll ; m_2 = 0x080008000000060ll ; k_2 = 0x00000010c80c460ll ; m_3 = 0x00000000000008dll ; k_3 = 0x000000075098031ll ; .... 
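for readers who want to experiment , the shape of the generator behind sco() , three lfsrs filtered by the majority function \( \mathrm{maj}(a,b,c) = ab \oplus ac \oplus bc \) , can be sketched generically as below . the register lengths ( 19 , 20 , 20 ) and the feedback taps are placeholders chosen only so that the state adds up to 59 bits ; they are not the polynomials of the actual design , whose statement was lost above .
....
#include <stdint.h>
#include <stdio.h>

struct gen { uint32_t r1, r2, r3; };        /* three lfsr states */

/* clock one fibonacci lfsr of length len with the given tap mask,
   return the output (least significant) bit */
static unsigned step_lfsr(uint32_t *r, unsigned len, uint32_t taps)
{
    unsigned out = *r & 1u;
    unsigned fb  = __builtin_parity(*r & taps); /* xor of tapped bits (gcc/clang builtin) */
    *r = (*r >> 1) | ((uint32_t)fb << (len - 1));
    return out;
}

static unsigned keystream_bit(struct gen *g)
{
    unsigned a = step_lfsr(&g->r1, 19, 0x72001u);   /* placeholder taps */
    unsigned b = step_lfsr(&g->r2, 20, 0x90001u);   /* placeholder taps */
    unsigned c = step_lfsr(&g->r3, 20, 0xa0005u);   /* placeholder taps */
    return (a & b) | (a & c) | (b & c);             /* majority function */
}

int main(void)
{
    /* the 59-bit key is split over the three registers (non-zero states) */
    struct gen g = { 0x4c3b2u, 0x81d5fu, 0x1beefu };
    uint64_t block = 0;
    int i;

    for (i = 0; i < 59; i++)                 /* one 59-bit output block m_i */
        block = (block << 1) | keystream_bit(&g);
    printf("output block: 0x%015llx\n", (unsigned long long)block);
    return 0;
}
....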
the main interest of that method is that the interpretation of code is not straightforward .code / data alignment does not follow any logic ( that is precisely why we have chosen a 59-bit fsm which is far better that a seemingly more obvious 64-bit fsm ; any prime value is optimal ) .moreover , as we can notice , the values are themselves sparse as unobfuscated opcodes are ( structural aspect ) .additionally , their entropy profile ( quantitative aspects ) is very similar to the values ( and hence the ones ) .this implies that any detection techniques based on local entropy picks is bound to fail . due to the careful design of our 59-bit dfsm, we managed to obtain a unicity distance which is greater than 59 bits ( the unicity distance is the minimal size for a dfsm output to be produced by a single secret key ) . in our case a large number of different 59-bit keys can output an arbitrary output sequence .here are the results for the three series ( 1 ) , ( 2 ) and ( 3 ) ( table [ res ] ) : .number of possible keys for a given output value [ cols="^,^,^",options="header " , ] this implies that we can randomly select our 9 values and thus we have different possible code variants .it is approximatively equal to variants .the different files ( with ) whose name is given in right column of table [ res ] are freely available upon request .to build a variant , you just have to select data in the nine files randomly according to the following piece of code ( code extract to generate the values for the first serie ( 1 ) only ; refer to section [ imp_k ] for a secure implementation ) : .... f1 = fopen("res11","r " ) ; f2 = fopen("res12","r " ) ; f3 = fopen("res13","r " ) ; randval = ( 314.0*(rand()/(1 + rand_max ) ) ; for(i = 0 ; i < randval ; i++ ) fscanf(f1 , % lx % lx % lx\n , & y1,&y2 , & y3 ) ; k_1 = y1 | ( y2 < < 17 ) | ( y3 < < 36 ) ; / * do the same for values m_2 and m_3 of serie ( 1 ) * / .... / * repeat the same for series ( 2 ) and ( 3 ) * / .... / * generate m_1 value for series(1 ) * / sco(&m_1 , k_1 ) ; .... the different files ( with ) can be stored in a program whose size is less than 400 kb . in this second case ,the dfsm outputs 59-bit chunks of data whose only the 32 least significant bits are useful .in this case we output five 59-bit chunks of data and .for sequence ( 1 ) we have .... m_1 = 0x???????2f010000ll ; m_2 = 0x???????00040004ll ; m_3 = 0x???????00000003ll ; m_4 = 0x???????00000000ll ; m_5 = 0x???????00000089ll ; .... where ` ? ` describes any random nibble .we get the values from the values with the main interest of that method is that it naturally and very simply provides increased polymorphism compared to the previous approach . indeed about 5-tuples whenever input in our dfsm produces 5-tuples . then we can produce a huge number of different instances of the same code by randomly choosing any possible 5-tuples . by increasing size of the memory of the fsmwe even can arbitrarily increase the number of possible polymorphic instances .the obfuscation techniques we have presented before , which are based on malicous cryptography ( malicious , biased prng ) and cryptanalysis techniques ( to precompute inputs to the dfsm ) require obviously to protect the dfsm itself very strongly . indeed obtaining the values from the straightforward whenever we have the dfsm .we would then come back to the weak existing implementations and reduce the problem to a simple decoding scheme .but this obfuscation is impossible to reverse in the context of -ary malware . 
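one practical note before moving the dfsm out of the analyst s reach entirely : the variant - generation snippet quoted earlier in this section does not compile as printed ( the parentheses do not balance and the fscanf format string is not quoted ) . a cleaned - up sketch is given below , keeping the res11 / res12 / res13 file names , the 17 / 36 - bit packing and the figure of 314 candidate lines appearing in the snippet , all of which are taken at face value rather than verified .
....
#include <stdio.h>
#include <stdlib.h>

/* pick one precomputed 59-bit key from a res1x file (see table [res]);
   the packing of the three fields follows the original snippet */
static unsigned long long pick_key(const char *path)
{
    unsigned long long y1 = 0, y2 = 0, y3 = 0;
    FILE *f = fopen(path, "r");
    int i, n;

    if (!f)
        return 0;
    n = (int)(314.0 * rand() / ((double)RAND_MAX + 1.0)) + 1;
    for (i = 0; i < n; i++)
        if (fscanf(f, "%llx %llx %llx", &y1, &y2, &y3) != 3)
            break;
    fclose(f);
    return y1 | (y2 << 17) | (y3 << 36);
}

int main(void)
{
    unsigned long long k_1, k_2, k_3;

    srand(0x5eed);                       /* any per-variant seed */
    k_1 = pick_key("res11");
    k_2 = pick_key("res12");
    k_3 = pick_key("res13");
    printf("k_1=%015llx k_2=%015llx k_3=%015llx\n", k_1, k_2, k_3);
    /* the k_i are then passed to sco() to regenerate m_1, m_2, m_3 */
    return 0;
}
....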
in this case , the viral information is not contained in a single code as usual malware do , but it is split into different innocent - looking files whose combined action -serially or in parallel - results in the actual malware behavior and in the metamorphic code instance generation .just imagine a 2-ary code ( we can take of course ) made of parts and .part just embeds the dfsm and wait in memory for values coming from part .its role is to compute values and send them back to part according to the protocol summarized by figure [ fig2 ] .the interpretation of data by the reverser ( and hence the reversing of the code ) is no longer possible since the dfsm is deported in the part and the analysis of part would require to reconstruct the dfsm and to break it .this is of course impossible since the code does not contain a sufficient quantity of encrypted information . in the case of metamorphism, the part will output a random value instead during the code instance generation . from a practical point of view, we have considered different implementations .communication pipes : : due to the relationship between father and children processes , only parallel class a or c k - ary codes can be implemented .it is not the most optimal solution .named communication pipes : : in this case , k - ary parallel class b codes can be efficiently implemented ( the most powerful class since there is no reference in any part [ hence information ] to other any part ) .system v ipc : : this is the most powerful method since everything is located into shared memory. the source codes of our implementation will be soon publicly available . in the context of code protection for legitimate purpose , the part will be a kernel - land module can be user / host specific .it can also be a program located on a server outside the code analyst s scope .we have shown in this paper that it is possible to prevent code analysis from reversing while ensuring a high level of metamorphism . of course , we have to apply those techniques to all critical parts of the code ( especially those which determine the execution flow ) . in this caseit is obvious that static analysis is no longer possible . as far as dynamic analysis is concerned , the analyst has to perform debugging and sandboxing . but using delay detection technique can trigger a different code behaviour and code mutation .our current research and experimentation work is related to non deterministic fsms . in this caseany internal state of the fsm results in many possible outcome ( next state at time instant ) .even if is likely to be far beyond any av software detection capability , human analysis becomes impossible .non deterministic contexts result in untractable complexity .thanks to anthony desnos for his support with respect to virtual machines and ir and for many other reasons : geoffroy gueguen ( and his awful vanwinjgaarden formal grammars ) , eloi vanderbken ( aka _ baboon _ ) for his funny way of thinking security .50 b. barak , o. goldreich , r. impagliazzo , s. rudich , a. sahai , s. vadhan and k. yang ( 2001 ) . on the ( im)possibility of obfuscation programs . in : _ advances in cryptology - crypto2001 _ , lncs 2139 , pp . 1 - 8 , springer verlag .philippe beaucamps and eric filiol ( 2006 ) . on the possibility of practically obfuscating programs - towards a unified perspective of code protection ._ journal in computer virology _, 2(4 ) , wtcv06 special issue , g. 
bonfante editor , springer verlag .anthony desnos ( 2009 ) .implementation of -ary viruses in python .hack.lu 2009 conference .anthony desnos ( 2010 ) .dynamic , metamorphic ( and open source ) virtual machines . to appear in the _ proceedings of the hack.lu 2010 conference _ , luxembourg .thomas dullien and sebastian porst ( 2009 ) .reil : a platform independent intermediate representation of disassembled code for static code analysis .cansecwest 2009 , vancouver , canada .available on http://www.zynamics.com/downloads/csw09.pdf robert erra and christophe grenier ( 2009 ) . how to choose rsa keys ( the art of rsa : past , present and future ) ?iawacs 2009 conference .available on http://www.esiea-recherche.eu/iawacs_2009_papers.html eric filiol .strong cryptography armoured computer viruses forbidding code analysis : the bradley virus . in : _proceedings of the 14th eicar conference _ , pp.- 201217 , may 2005 , malta .eric filiol ( 2005 ) ._ computer viruses : from theory to applications _ , iris international series , springer verlag france , isbn 2 - 287 - 23939 - 1eric filiol ( 2007 ) . _techniques virales avances _ , collection iris , springer verlag france , isbn 978 - 2 - 287 - 33887 - 8 .eric filiol and sbastien josse . a statistical model for viral detection undecidability .eicar 2007 special issue , v. broucek ed ., _ journal in computer virology _ , 3 ( 2 ) .springer verlag .eric filiol , edouard franc , alessandro gubbioli , benoit moquet and guillaume roblot ( 2007 ) .combinatorial optimisation of worm propagation on an unknown network ._ international journal in computer science _ , 2 ( 2 ) , pp .124130 , 2007 .eric filiol ( 2007 ) .formalisation and implementation aspects of -ary ( malicious ) codes ._ journal in computer virology _ ,3 , issue 3 .springer verlag .eric filiol ( 2007 ) .zero knowledge - like proof of cryptanalysis of bluetooth encryption . _international journal in information theory _ , 3 ( 4 ) , pp .40ff , http://www.waset.org/journals/ijit/v3/v3-4-40.pdf eric filiol and fred raynal ( 2008)._malicious cryptography ... reloaded - and also malicious statistics_. cansecwest conference , vancouver , canada , march 2008 .available on http://cansecwest.com/csw08/csw08-raynal.pdf eric filiol .the malware of the future : when mathematics are on the bad side ._ opening keynote talk _ , hack.lu 2008 , luxembourg , october 2008 .available on the hack.lu website .eric filiol ( 2010 ) anti - forensics techniques based on malicious cryptography . in : _ proceedings of the 9th european conference in information warfare eciw 2010 _ , thessaloniki , greece .eric filiol ( 2010 ) dynamic cryptographic trapdoors . submitted .shafi goldwasser and guy n. rothblum ( 2007 ) . on best - possible obfuscation . in : _ proceedings of the 4th theory of cryptography conference - tcc07_. lncs 4392 ,194213 , springer verlag .donald e. knuth ( 1998 ) ._ the art of computer programming : seminumerical algorithms _ , volume 2 , addison - wesley .libthor website ( 2010 ) _ playing with virology for fun and profit _ http://libthor.avcave.org .available on october 2010 .perseus homepage .james riordan and bruce schneier ( 1998 ) . environmental key generation towards clueless agents . in : _ mobile agents and security _ , g. vigna ed . ,lecture notes in computer science , pp . 1524 , springer verlag .adam young and moti yung ( 1999 ) . _ malicious cryptography - exposing cryptovirology_. wiley .isbn 0 - 7645 - 4975 - 8 .
fighting against computer malware requires a mandatory step of reverse engineering . as soon as the code has been disassembled / decompiled ( including a dynamic analysis step ) , there is a hope of understanding what the malware actually does and of implementing a detection means . the same applies to the protection of legitimate software whenever one wishes to analyze it . in this paper , we show how to armour code in such a way that reverse engineering techniques ( static and dynamic ) are absolutely impossible , by combining malicious cryptography techniques developed in our laboratory with new types of programming ( k - ary codes ) . suitable encryption algorithms , combined with new cryptanalytic approaches that ease the protection of ( malicious or not ) binaries , provide both total code armouring and large - scale polymorphic features at the same time . a simple 400 kb of executable code is enough to produce a binary code and around mutated forms natively , while going far beyond the old concept of the decryptor .
while positive feedback has been overwhelmingly studied in complex networks , negative feedback remains ubiquitous in nature .there is much room for modeling network growth besides the traditional degree - based preferential attachment .a simple twist on this seminal work is to form attachments based on the clustering coefficient . doing so naturally creates a negative feedback mechanism that leads to aging , burstiness , and the formation of community structure in networks .the simplicity and robustness of this mechanism is encouraging and may serve as a starting point for investigating the origin of higher - order structures in growing networks , as well as evolving networks that are in equilibrium .the emergence of communities and highly variable temporal behavior observed in many complex networks , social networks in particular , can be investigated from a ca perspective . based on our results, it may be promising to investigate systems in which attachment propensities are determined by other centrality measures that capture a different aspect of local network properties .it is worth considering the potential practical applications of our ca model . in a poorly understood area such as complex systems , hypothetical models such as oursare useful for discovering potentially overlooked dynamical mechanisms and may serve to direct future empirical studies to explore such mechanisms . here, one can imagine many systems where nodes are drawn not towards hubs , but towards densely connected groups . for example , in a social network , individuals may not want to make friends with a very popular person but , instead , with members of a small group of very closely knit friends .such hypotheses are becoming testable thanks to the appearance of high - resolution dynamical contact networks and face - to - face proximity data .being attracted to density may also play a role in follower - followee networks for flocking or swarming animals , where individuals may wish to belong to a small but very cohesive group instead of being part of a jumbled crowd all following a single leader ( the hub animal ) .another area of interest may be the dynamical evolution of functional brain networks .indeed , positive feedback is associated with neurological conditions such as epileptic seizures .recently , it has been shown that networks derived from fmri data are better explained by a model where new connections prefer to complete triangles than by traditional preferential attachment .this model is still quite different from our work .it incorporates anatomical distances in its attachment mechanism , but it demonstrates that clustering can play a role in the evolution of real systems . 
preferential inhibition can also be used to model fads and fashions . for example , music listeners may actively seek musicians that are not well known . this corresponds to attachment probabilities that decrease with increasing degree , of which clustering attachment is one example . the prevalence of community structure in social systems is not explained by degree preferential attachment alone . likewise , social networks typically feature exponential cutoffs in the degree distribution , simply because people have limited time with which to maintain social relationships . this may imply that * both * preferential attachment and preferential inhibition ( or , equivalently , density attachment ) mechanisms are involved . mixing some inhibition into the system will both inject community structure and limit the formation of very high degree nodes . practically , this means that agents in a system are simultaneously drawn towards highly connected regions and densely connected regions . we believe that exploring these combined effects is a very intriguing direction for improving our understanding of such systems . we thank f. simini and s. redner for many useful discussions and the volkswagen foundation for support .
network models with preferential attachment , where new nodes are injected into the network and form links with existing nodes proportional to their current connectivity , have been well studied for some time . extensions have been introduced where nodes attach proportionally to arbitrary fitness functions . however , in these models , attaching to a node always increases the ability of that node to gain more links in the future . we study network growth where nodes attach proportionally to the clustering coefficients , or local densities of triangles , of existing nodes . attaching to a node typically lowers its clustering coefficient , in contrast to preferential attachment or rich - get - richer models . this simple modification naturally leads to a variety of rich phenomena , including aging , non - poissonian bursty dynamics , and community formation . this theoretical model shows that complex network structure can be generated without artificially imposing multiple dynamical mechanisms and may reveal potentially overlooked mechanisms present in complex systems . growing network models have been introduced to study the topological evolution of systems such as citations between scientific articles , protein interactions in various organisms , the world wide web , and more . meanwhile , recent interest has been drawn towards understanding not simply the topology of these systems or how the individual system elements interact , but also the temporal nature of these interactions . for example , studies of the burstiness of human dynamics , whether by letter writing or mobile phone usage , have advanced our knowledge of how information spreads through systems mediated by such dynamics . one of the most successful mechanisms to model growing networks remains preferential attachment ( pa ) . the original pa model starts from a small seed network that grows by injecting nodes one at a time , and each newly injected node connects to existing nodes . each existing node is chosen randomly from the current network with a probability proportional to its degree : , where is the degree , or number of neighbors , of node . this `` rich - get - richer '' mechanism leads to scale - free degree distributions , , where the earliest nodes will , over time , emerge as the wealthiest hubs in the network , accruing far more links than those nodes injected at later times . this strong early - mover advantage is one of the most striking features of pa . pa alone can not account for topological and statistical features observed in real networks such as dense modular structures and high clustering ( the abundance of triangles beyond what is expected by chance ) , and its most significant feature , the scale - free degree distribution , collapses in equilibrium situations ( in which node injections are balanced by node removal ) . however , the success of pa is the identification of a minimal set of mechanistic ingredients ( growth , degree - driven attachment , and thus positive feedback ) that are required to account for a universal feature abundant in many real systems . pa has thus been the basic starting point for more complex models that generalize the approach to include fitness variables and temporal correlations to account for higher clustering and community structure observed in real - world scale - free networks . 
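the two expressions stripped from the paragraph above are , in all likelihood , the standard ones , restated here for later contrast with clustering attachment ( the exponent value 3 holds for the original preferential attachment model ) :
\[
\Pi(k_i) = \frac{k_i}{\sum_j k_j} , \qquad P(k) \sim k^{-\gamma} , \quad \gamma = 3 \ \text{for the original model} .
\]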
here , inspired by the simplicity and generality of pa , we address the following general question : what are the dynamic and topological consequences if the attachment propensity of incoming nodes is determined by a target node s _ neighborhood _ instead of its pure degree . although this type of modification of the original pa model is small mechanistically , we show that the dynamic consequences are substantial . our model exhibits _ emergent _ aging and temporally correlated dynamics , and it naturally possesses negative feedback in the attachment propensity of existing nodes . numerical investigations supported by theory show that these effects are controlled entirely by the attachment process . no additional , artificially imposed rules are necessary . we adapt the original preferential attachment network growth model in the following way . instead of attaching to an existing node with probability proportional to its degree , we attach proportional to its _ clustering coefficient _ ( clustering attachment , or ca ) is the clustering coefficient of node , is the number of links between neighbors of or , equivalently , the number of triangles involving node , is a constant probability for attachment ( which may be zero ) , and the exponent is a parameter in our model . other aspects of network growth remain the same . ( we assume each new node attaches to existing nodes throughout ; the features are the same for , but calculations become more cumbersome . ) we investigate both growing and fixed - size evolving networks . for the latter , a random node is removed every time a new node is added . for the original pa mechanism , the only possible `` reaction '' upon attaching to is to increment its degree , i.e. , . for ca , however , two reactions are possible : or . while the degree always grows , the number of triangles around depends on whether a neighbor of also receives a new link . these two reactions lead to the following potential changes in the clustering coefficient of the existing node before and after the attachment : here , is the change due to connecting to and a neighbor of , while is the change due to connecting to and a non - neighbor of . even when a new triangle is formed , the clustering coefficient after an attachment is almost always less than it was before : an increase in after a new node s attachment is only possible if the existing node has degree . this means that , in contrast to pa , the ca mechanism does not feature rich - get - richer effects . instead , attaching to a node drives down s probability for further attachments . a pure ca system will not exhibit a power - law degree distribution because negative feedback prevents the emergence of hub nodes . instead , networks grown according to ca exhibit an exponential tail in the degree distribution . forming new links based on the clustering coefficient provides a particularly simple model of such negative feedback or preferential inhibition . yet , temporal effects play a role here as well , with the temporal sequence of node injections determining what happens to subsequent nodes . for example , suppose a new node is injected and happens to form a triangle . this will give that new node maximum ; it may become a _ hot spot _ for future attachments . in fig . [ fig : cartoon]a we draw a single realization of the ca model with nodes and . qualitatively , we observe that ca dynamics naturally gives rise to community structure , where the hot spot forms the seed for a new dense group to grow . 
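a minimal sketch of the clustering-attachment rule as we read it from the description above: a new node attaches to existing nodes chosen with probability proportional to the clustering coefficient raised to an exponent, plus a constant offset. the function names, the use of networkx, and the seed graph are our own choices; the exact values of the exponent and the offset used in the paper were lost in extraction, so they are left as parameters here.

    import random
    import networkx as nx

    def ca_step(G, alpha=2.0, eps=1e-3, m=2):
        """Inject one node that attaches to m existing nodes chosen with
        probability proportional to (clustering coefficient)**alpha + eps."""
        nodes = list(G.nodes())
        c = nx.clustering(G)                      # local triangle densities
        w = [c[i] ** alpha + eps for i in nodes]
        new = max(nodes) + 1
        chosen = set()
        while len(chosen) < m:
            chosen.add(random.choices(nodes, weights=w)[0])
        for t in chosen:
            G.add_edge(new, t)
        return new

    def grow_ca_network(n_nodes, alpha=2.0, eps=1e-3, m=2):
        G = nx.complete_graph(m + 1)              # small triangle-containing seed
        while G.number_of_nodes() < n_nodes:
            ca_step(G, alpha=alpha, eps=eps, m=m)
        return G

    if __name__ == "__main__":
        G = grow_ca_network(2000, alpha=2.0)
        degs = sorted(dict(G.degree()).values(), reverse=True)
        print("max degree:", degs[0], " mean clustering:", nx.average_clustering(G))

note that, exactly as described in the text, attaching to a node here typically lowers its weight for future attachments, in contrast to the preferential-attachment sketch above.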
these communities tend to form sequentially : a hot spot forms and then many nodes attach to it , driving its attractiveness down until another seed appears . this repeating process emerges naturally from the attachment mechanism ; nothing has been artificially imposed . ) . node size is proportional to clustering and node color represents the age of the node ( time since it was injected ) . communities emerge approximately sequentially in time . a measurement of the model s community structure using modularity by running a community detection algorithm while a network evolves according to ca . raw modularity scores may be problematic since sparsity alone can potentially force to high values . we instead consider relative to , the average value observed over the course of the model . we see fluctuations in modularity over time for far larger than observed for purely random attachment ( ) . this quantifies the successive emergence and dissolution of modular structure in the model . these fluctuations occur for both growing and stationary networks . the relative distributions of during the temporal evolution shown in ( b ) . the random case is sharply peaked about its average value . the clustering coefficient averaged over all nodes , whic increases significantly as increases . clustering is another hallmark of community structure . error bars denote .d . [ fig : cartoon ] ] we quantify the presence and evolution of these communities by running a community detection method as a network evolves according to ca . figure [ fig : cartoon]b depicts the optimized modularity of the communities found by the method . higher values of can be used to indicate `` better '' communities . however , raw values of should be interpreted with caution , as can become very large due only to sparsity in the network . instead , in fig . [ fig : cartoon]b we plot modularity relative to its average value over the evolution of each ca realization . we see distinct fluctuations in that are not present in the case of a random network ( ) [ fig . [ fig : cartoon]c ] . these fluctuations are due to the sequential growth and decay of communities : a dense community forms , boosting ; then it becomes sparser as more nodes attach to the community , lowering until a new community forms and the process repeats . these fluctuations are present for both growing and stationary networks . further , in fig . [ fig : cartoon]d we plot the average clustering coefficient as a function of . clustering is another hallmark of modular structure , and it increases as is increased . taken together , we find that plays a significant role in the modular nature of the model . ca thus can give rise to both correlated network structure and nontrivial temporal dynamics . an important question , however , is if this behavior is present for the entire range of exponents or if a critical parameter threshold exists . to understand this and characterize the dynamics further , we now explore ( i ) the aging dynamics of individual nodes after injection , and ( ii ) the influence that older nodes exert on newly injected ones . for the latter , we fix the size of the ca networks by removing a randomly chosen node alongside each new injection , as per fig . [ fig : cartoon]b . nodes . each matrix element ( ) represents the clustering of node at time . nodes are indexed from oldest ( ) to youngest ( ) . at each time step a new node is injected and the oldest node removed such that the time course of an individual node forms a diagonal across the matrix . 
below each matrix is a spike train denoting injections of high - clustering nodes . as increases , the clustering coefficients of individual nodes persist for longer times and the arrivals of high - clustering nodes become increasingly temporally correlated . [ fig : matrices ] ] when a new node is injected into the system , its degree and clustering will evolve with the time since injection . this new node may then exert an influence on the time course of subsequent nodes . to see this qualitatively , fig . [ fig : matrices ] depicts `` space - time '' matrices for three realizations of ca . in this matrix , each column represents the clustering coefficients of the network s nodes at that time . nodes are ordered by age . the oldest node is removed and a new node injected such that the time course of for each node forms a diagonal streak across the matrix . below each matrix a spike train is shown , highlighting the injection times of high- nodes . as increases , the injection times of high- nodes become temporally correlated and the clustering coefficients of those nodes decay more slowly : both temporal correlations and individual aging effects are affected by the exponent of the ca mechanism . more quantitatively , by averaging over many realizations , we measure the expected time courses and for nodes that are injected with , shown in fig . [ fig : expectedtimecourses ] . these time courses exhibit approximate power - law decay ( growth ) in time for ( ) . to understand the time scaling of and , consider the following simple analysis : first , and }^{-\alpha } \sim \delta^\alpha \kbar^{-2\alpha } ] . ] . we now unify the bursty time dynamics for triangle formation with the aging time courses for node clustering [ eq . ] . for an active system in equilibrium , the density of spikes at time should become approximately constant ( i.e. , independent of time ) such that the expected number of spikes emitted in a time interval is proportional to . ( this is not the same as a poisson process , as the expectation is over an ensemble of ca realizations . ) suppose a spike occurred at some past time ( without loss of generality , we shift time so that ) . then , assuming spikes are rare , a point we will return to , we approximate the spike density at by in other words , a spike occurs at , depending on the probability for the most recent preceding spike to occur at ( which is itself governed by the hazard function for the spike at ) weighted by the clustering at time . given eq . , what hazard function will give rise to a constant ? if , we have where from eq . and the second relation follows by introducing a constant to ensure the initial condition and that the integral does not diverge . when , as , and thus we expect an equilibrium system to be a poisson process for . when , however , no poisson process can be in equilibrium for our expected . instead , a time - dependent hazard function ( ) is necessary : where the latter holds when . therefore , the system will be in equilibrium when . as we mentioned , eq . is most valid at low spike densities , where the typical time between spikes is much greater than the typical time it takes for to decay . for higher densities , the probability for a new spike to occur at time will depend upon a superposition of earlier spikes . yet the contributions of the earlier spikes will _ each _ be time independent when . thus , our derivation should hold even at higher spike densities . 
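the spike trains and hazard functions discussed above can be measured along the following lines, reusing the ca_step helper from the earlier sketch. the clustering threshold that defines a "spike" and the binning of the hazard estimate are our own choices; the paper's exact definitions were lost in extraction.

    import random
    import numpy as np
    import networkx as nx

    def spike_train(G, steps, alpha=2.0, eps=1e-3, m=2, c_thresh=0.9):
        """Fixed-size CA evolution: each step removes a random node and injects
        a new one via ca_step (from the earlier sketch). Records the times at
        which the injected node arrives with clustering above c_thresh."""
        spikes = []
        for t in range(steps):
            G.remove_node(random.choice(list(G.nodes())))
            new = ca_step(G, alpha=alpha, eps=eps, m=m)
            if nx.clustering(G, new) >= c_thresh:
                spikes.append(t)
        return np.array(spikes)

    def empirical_hazard(spikes, n_bins=30):
        """Hazard estimated from inter-spike intervals: fraction of intervals
        ending in a bin among those that survived to the start of that bin."""
        gaps = np.diff(spikes)
        edges = np.linspace(0, gaps.max(), n_bins + 1)
        counts, _ = np.histogram(gaps, bins=edges)
        survivors = counts[::-1].cumsum()[::-1]   # intervals still alive at each bin
        with np.errstate(divide="ignore", invalid="ignore"):
            hazard = counts / survivors
        return 0.5 * (edges[:-1] + edges[1:]), hazard

a constant empirical hazard signals a poisson-like spike train, while a decaying hazard signals the bursty, temporally correlated injections described in the text for larger attachment exponents.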
in summary, if the above arguments hold we expect an equilibrium system to exhibit a hazard function whose exponent is fixed by the attachment exponent as derived above; indeed, there is good evidence for this relationship in the inset of fig. [fig:spike_times]b.

[figure caption, fig:spike_times: measured hazard functions for different attachment exponents. when the attachment exponent vanishes we recover a constant hazard, corresponding to a poisson process. (inset) the observed relationship between the attachment exponent and the fitted hazard exponent; the solid line is the prediction of the equation above.]
the problem of inferring interaction couplings in complex systems arises from the huge quantity of empirical data that are being made available in many fields of science and from the difficulty of making systematic measurements of interactions. in biology, for example, empirical data on neuron populations, small molecules, proteins and genetic interactions have largely outgrown the understanding of the underlying system mechanisms. in all these cases the inverse problem, whose aim is to infer some effective model from the empirical data with just partial a priori knowledge, is of course extremely relevant. statistical physics, with its set of well-understood theoretical models, has the crucial role of providing complex but clear-cut benchmarks, whose direct solution is known and which can therefore be used to develop and test new inference methods. in a nutshell, the equilibrium approach to inverse problems consists in inferring some information about a system defined through an energy function, starting from a set of sampled equilibrium configurations. suppose i.i.d. sampled configurations, generated by a boltzmann distribution of an unknown energy function, are given. the posterior distribution of (also called the likelihood) is given by , where represents the average of over the given sample configurations and the prior knowledge about . the parameter plays the role of an inverse temperature: when is very large, peaks on the maxima (with respect to ) of the log-likelihood. the problem of identifying the maximum of can be thought of as an optimization problem, normally very difficult both because the space of is large and because is very difficult to estimate on a candidate solution. several methodological advances and stimulating preliminary applications in neuroscience have been put forward in the last few years. still, the field presents several major conceptual and methodological open problems related to both the efficiency and the accuracy of the methods. one problem which we consider here is how to perform inference when data do not come from a uniform sampling over the equilibrium configurations of a system but rather are taken from a subset of all the attractor states. this case arises, for instance, when we consider systems with multiple attractors and we want to reconstruct the interaction couplings from measurements coming from a subset of the attractors. in what follows, we take as model system the hopfield model over random graphs in its memory phase (i.e.
with multiple equilibrium states) and show how the interaction couplings can be inferred from data taken from a subset of memories (states). this will be done by employing the bethe equations (normally used in the ergodic phase, where they are asymptotically exact) and by taking advantage of a certain property of their multiple fixed points in the non-ergodic phase. the method can be used to infer both couplings and external local fields. we also show how, from the inference method, one can derive a simple unsupervised learning protocol which is able to learn patterns in the presence of weak and highly fluctuating input signals, without ever reaching a spin-glass-like saturation regime in which all the memories are lost. the technique that we will discuss is based on the so-called cavity method and leads to a distributed algorithmic implementation generically known as a message-passing scheme. the paper is organized as follows. first, in section [s2] we define the problem and make connections with related works. section [s3] is concerned with the inference problem in non-ergodic regimes, for which a simple algorithmic approach is presented. in section [s4] we apply the technique to the finite connectivity hopfield model in the memory phase. section [s5] shows how the approach can be turned into an unsupervised learning protocol. conclusions and perspectives are given in section [s6]. the hopfield model is a simple neural network model with pair-wise interactions which behaves as an attractor associative memory. its phase diagram is known exactly when memories are random patterns and the model is defined over either fully connected or sparse graphs. reconstructing interactions in the hopfield model from partial data thus represents a natural benchmark problem for tools that are intended to be applied to data coming from multi-electrode measurements on large collections of neurons. the underlying idea is that a statistically consistent interacting model (like the hopfield model) inferred from the data could capture some aspects of the system which are not easy to grasp from the raw data. here we limit our analysis to artificial data. in the hopfield model the couplings are given by the covariance matrix of a set of random patterns which represent the memories to be stored in the system. we will use the model to generate data through sampling and we will aim at inferring the couplings. the structure of the phase space of the hopfield model at sufficiently low temperature and for a not too large number of patterns is divided into clusters of configurations which are highly correlated with the patterns. we will proceed by sampling configurations from a subset of clusters and try to infer the interactions. the simple observation that we want to exploit is the fact that fluctuations within single clusters are heavily influenced by the existence of other clusters and thus contain information about the total system. we consider a system of binary neurons (or ising spins) interacting over a random regular graph of degree ; that is, every node has a fixed number of neighbors which are selected randomly. the connectivity pattern is defined by the elements of the adjacency matrix . the (symmetric) interactions between two neighboring neurons are given by the hebb rule, i.e. , where are the patterns to be memorized and is their number.
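for concreteness, the setup just described, random binary patterns stored on a random regular graph through the hebb rule, can be sketched as follows. the normalization of the couplings by the degree c is our assumption, since the normalization constant was lost in extraction; the function names are ours.

    import numpy as np
    import networkx as nx

    def hebb_couplings(N=1000, c=3, P=3, seed=0):
        """Random regular graph of degree c with Hebb-rule couplings
        built from P random +-1 patterns."""
        rng = np.random.default_rng(seed)
        G = nx.random_regular_graph(c, N, seed=seed)
        xi = rng.choice([-1, 1], size=(P, N))      # patterns xi^mu_i
        J = {}
        for i, j in G.edges():
            # normalization by the degree c is our assumption
            J[(i, j)] = (xi[:, i] * xi[:, j]).sum() / c
        return G, J, xi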
at finite temperature , we simulate the system by a glauber dynamics ; starting from an initial configuration , the spins are chosen in a random sequential way and flipped with the following transition probability where is the local field experienced by spin and is an external field .we use to denote the set of neighbors interacting with .the process satisfies detailed balance and at equilibrium the probability of steady state configurations is given by the gibbs measure }e^{\beta \sum_i \theta_i \sigma_i+\beta \sum_{i < j } j_{ij}\sigma_i \sigma_j},\ ] ] where ] , which in general is a difficult task .a well known technique which can be used for not too big systems is of course the monte carlo method ( see e.g. ) .though under certain limiting assumptions , there exist good approximation techniques which are efficient , namely mean field , small - correlation and large - field expansions .in this paper we resort to the mean - field cavity method , or belief propagation ( bp ) , to compute the log - likelihood ( see e.g. ) .this technique is closely related to the thouless - anderson - palmer ( tap ) approximation in spin glass literature .the approximation is exact on tree graphs and asymptotically correct as long as the graph is locally tree - like or the correlations between variables are sufficiently weak . in spin glass jargon ,the approximation works well in the so called replica symmetric phase . in the bp approach ,the marginals of variables and their joint probability distribution ( which is assumed to take a factorized form where only pair correlations are kept ) are estimated by solving a set of self - consistency functional equations , by exchanging messages along the edge of the interaction graph ( see ref . for comprehensive review ) . a message ( typically called `` cavity belief '' ) is the probability that spin takes state ignoring the interaction with its neighbor , i.e. in a cavity graph .we call this probability distribution a cavity message . assuming a tree interaction graph we can write an equation for relating it to other cavity messages sent to : as in cavity graphs the neighboring variables are independent of each other .these are bp equations and can be used even in loopy graphs to estimate the local marginals .the equations are solved by starting from random initial values for the cavity messages and updating them in some random sequential order till a fixed point is reached . upon convergencethe cavity messages are used to obtain the local marginals or `` beliefs '' : these marginals are enough to compute the magnetizations and correlations and thus can be used for maximizing the log - likelihood by updating the parameters as with and positive . repeating this procedure for sufficient times leads to an estimate of the unknown parameters . assuming that the external fields are absent , the inference error can be written as : a more accurate estimate of the correlations can be obtained by exploiting the fluctuation - response theorem .this method , called susceptibility propagation , uses derivatives of cavity messages ( cavity susceptibilities ) .its time complexity grows as , to be compared with the complexity of bp equations . 
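the random sequential glauber dynamics defined at the start of this section can be sketched as below. we use the heat-bath acceptance rule, which satisfies detailed balance for the gibbs measure quoted above; the sampling interval between recorded configurations is an arbitrary choice of ours.

    import numpy as np

    def glauber_samples(G, J, beta, theta=None, sigma0=None,
                        n_samples=100, sweeps_between=10, seed=0):
        """Heat-bath (Glauber) sampling of the pairwise Ising / Hopfield model.
        Starting from sigma0 (e.g. a configuration close to one pattern),
        spins are updated in random sequential order; a configuration is
        recorded every `sweeps_between` sweeps."""
        rng = np.random.default_rng(seed)
        N = G.number_of_nodes()
        theta = np.zeros(N) if theta is None else theta
        sigma = rng.choice([-1, 1], size=N) if sigma0 is None else sigma0.copy()
        nbrs = {i: list(G.neighbors(i)) for i in G.nodes()}
        coupling = lambda i, j: J[(i, j)] if (i, j) in J else J[(j, i)]
        samples = []
        for s in range(n_samples * sweeps_between):
            for i in rng.permutation(N):
                h = theta[i] + sum(coupling(i, j) * sigma[j] for j in nbrs[i])
                # heat-bath rule: P(sigma_i = +1) = 1 / (1 + exp(-2*beta*h))
                sigma[i] = 1 if rng.random() < 1.0 / (1.0 + np.exp(-2 * beta * h)) else -1
            if (s + 1) % sweeps_between == 0:
                samples.append(sigma.copy())
        return np.array(samples)

initializing sigma0 close to one of the stored patterns and sampling at low temperature yields data restricted to a single state, which is exactly the situation the inference method below is designed for.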
in this paperwe will work exclusively with the bp estimate which is simple and accurate enough for our studies .actually , if one is interested only on correlations along the edges of a sparse graph , the bp estimation would be as good as the one obtained by susceptibility propagation .the reader can find more on the susceptibility propagation in .in an ergodic phase a system visits all configuration space . sampling for a long timeis well represented by the measure in ( [ gibbs ] ) . in a non - ergodic phase , as happens for the hopfield model in the memory phase , the steady state of a system is determined by the initial conditions .starting from a configuration close to pattern , the system spends most of its time ( depending on the size of system ) in that state .we indicate the probability measure which describes such a situation by , that is the gibbs measure restricted to state . if configurations are sampled from one state , then the expression for the log - likelihood in ( [ logl ] )should be corrected by replacing with , the free energy of state .still the log - likelihood is a concave function of its arguments and so there is a unique solution to this problem .it is well known that the bethe approximation is asymptotically exact in the ergodic phase ( ) . in this case , the gibbs weight can be approximately expressed in terms of one- and two- point marginals , as follows : the above equation is exact only asymptotically ( on a replica - symmetric system ) ; it can be used for inference in at least two ways : the simplest one is by replacing and in the above expression by their experimental estimation ( given as input of the inference process ) , equating ( [ eq : pbethe ] ) to ( [ gibbs ] ) and solving for and .this is known as the `` independent pairs '' approximation .a second one , often more precise but computationally more involved , is to search for a set of and a corresponding -fixed point of bp equations , such that the bethe estimation , of the two- and one - point marginals match the experimental input as accurately as possible . in a non - ergodic phase ,it is known however that bp equations typically do not converge or have multiple fixed points .this is normally attributed to the fact that the bp hypothesis of decorrelation of cavity marginals fails to be true .when a bp fixed point is attained , it is believed to approximate marginals inside a single state ( and not the full gibbs probability ) , as the decorrelation hypothesis are satisfied once statistics are restricted to this state .the fact that bp solutions correspond to restriction to subsets of the original measure may suggest that there is little hope in exploiting ( [ eq : pbethe ] ) on such systems .fortunately , this is not the case . for every finite system , and every bp fixed point the following holds , } \prod_i \pi^\alpha_i(\sigma_i ) \prod_{i < j}\frac{\pi^\alpha_{ij}(\sigma_i,\sigma_j)}{\pi^\alpha_i(\sigma_i)\pi^\alpha_j(\sigma_j)}. 
\label{eq : pbethe2}\ ] ] a proof of a more general statement will be given in appendix [ app - bp ] .as in the ergodic case , ( [ eq : pbethe2 ] ) can be exploited in at least two ways : one is by replacing and by their experimental estimation inside a state and solving for the identity between ( [ eq : pbethe2 ] ) and ( [ gibbs ] ) , exactly like in the independent pairs approximation , as if one just forgets that the samples come from a single ergodic component .a second one is by inducing bp equations to converge on fixed points corresponding to appropriate ergodic components . in this paperwe will take the latter option .please notice that the second method , as an algorithm , is more flexible with respect to the first one ; indeed , there is no reason to have a bp fixed point for any set of experimental data , especially when the number of samples is not too large .it means that matching exactly the data with those of a bp fixed point is not always possible .therefore , a better strategy would be to find a good bp solution which is close enough to the experimental data . ignoring the information that our samples come from a single ergodic component would result in a large inference error due to the maximization of the wrong likelihood .as an example , we take a tree graph with ising spins interacting through random couplings , in zero external fields .choose an arbitrary pattern and fix a fraction of the boundary spins to the values in .for the system would be in paramagnetic phase for any finite , therefore the average overlap of the internal spins with the pattern would be zero .on the other hand , for and low temperatures the overlap would be greater than zero , as expected from a localized gibbs state around pattern . in this case the observed magnetizations are nonzero and without any information about the boundary condition we may attribute these magnetizations to external fields which in turn result to a large inference error in the couplings .equivalently , we could put the boundary spins free but restrict the spin configurations to a subspace , for instance a sphere of radius centered at pattern in the configuration space .that is , the system follows the following measure : where is an indicator function which selects configurations in the subspace . by the bp approximationwe can compute the magnetizations and the correlations , see appendix [ app - bpd ] for more details. taking these as experimental data , we may perform the inference by assuming that our data represent the whole configuration space .this again would result to a large inference error ( for the same reason mentioned before ) whereas taking into account that the system is limited to , we are able to infer the right parameters by maximizing the correct likelihood ; i.e. replacing the total free energy in the log - likelihood with , the free energy associated to subspace . in figure we display the inference error obtained by ignoring the prior information in the above two cases .notice that in principle the error would be zero if we knew and the boundary condition .as it is seen in the figure , the error remains nonzero when the boundary spins are fixed ( ) even if sampling is performed over the whole space .inference error on a cayley tree when the leaves are free ( ) or fixed ( ) to random configuration .the data come from subspace ( a sphere of radius centered at ) . 
in the inference algorithmwe ignore the boundary condition and that samples are restricted to .the internal nodes have degree and size of the tree is .,width=377 ]the hopfield model can be found in three different phases . for large temperaturesthe system is in the paramagnetic phase where , in the absence of external fields , magnetizations and overlaps are zero . if the number of patterns is smaller than the critical value , for small temperatures the system enters the memory phase where the overlap between the patterns and the configurations belonging to states selected by the initial conditions can be nonzero . for , the hopfield model at low temperatureenters in a spin glass phase , where the overlaps are typically zero . in fully connected graphs and in random poissonian graphs where is the average degree .take the hopfield model with zero external fields and in the memory phase .we measure samples from a glauber dynamics which starts from a configuration close to a pattern . the system will stay for a long time in the state and is thus well described by the restricted gibbs measure . in a configuration , the local field seen by neuron is . if corresponds to the retrieved pattern , the first term ( signal ) would have the dominant contribution to . the last term ( noise )is a contribution of the other patterns to the local field . to exploit this information, we look for a set of couplings that result to a gibbs state equivalent to the observed state of the system . one way to dothis is by introducing an auxiliary external field pointing to the experimental magnetizations , i.e. , for a positive ; we may set the couplings at the beginning to zero and compute our estimate of the correlations by the bp algorithm .this can be used to update the couplings by a small amount in the direction that maximizes the likelihood , as in ( [ update - j ] ) ( we do not update the external fields which for simplicity are assumed to be zero ) .this updating is repeated iteratively , decreasing by a small amount in each step .the procedure ends when reaches the value zero .the auxiliary field is introduced only to induce convergence of the equations towards a fixed point giving statistics inside a particular state .figure [ f2 ] compares the inference error obtained with the above procedure for several values of temperature in the memory phase . for the parameters in the figure , the inferred couplings from one basin were enough to recover the other two patterns . in the figurewe also see how the error decreases by taking larger number of samples from the system .inference error versus number of samples for different temperatures .the data are extracted from one pure state of the hopfield model in the memory phase ( ) .size of the system is , each spin interacts with other randomly selected spins , and number of stored patterns is . in the inference algorithm we use .,width=377 ] in general we may have samples from different states of a system .let us assume that in the hopfield model we stored patterns by the hebb rule but the samples are from basins .the estimated correlations in any state should be as close as possible to the experimental values .a natural generalization of the previous algorithm is the following : as before we introduce external fields for each state . at fixed positive compute the estimated bp correlations for different states .each of these estimations can be used to update the couplings as in the single state case. 
specifically , this amounts to make a single additive update to the couplings by the average vector given by , where and . indeed , the addends of will be typically linearly independent , so will imply for .we then decrease and do the bp computation and update steps .again we have to repeat these steps until the external field goes to zero .figures [ f3 ] and [ f4 ] show how the inference error changes with sampling from different states .notice that if we had an algorithm that returns exact correlations , an infinite number of samples from one state would be enough to infer the right interactions in the thermodynamic limit .however , given that we are limited by the number of samples , the learning process is more efficient if this number is taken from different states instead of just one .evolution of the inference error with update iterations .the data obtained by sampling from one or several pure states of the hopfield model in the memory phase ( ) .the total number of samples in each case is .size of the system is , each spin interacts with other randomly selected spins , and number of stored patterns is . in the inference algorithm we use .,width=377 ] inference error and number of states which are stable and highly correlated with the patterns after sampling from pure states of the hopfield model in the memory phase ( ) .size of the system is , each spin interacts with other randomly selected spins , and number of stored patterns is . in the inference algorithm we use and number of samplesis .,width=377 ]hebbian learning is a stylized way of representing learning processes . among the many oversimplifications that it involvesthere is the fact that patterns are assumed to be presented to the networks through very strong biasing signals . on the contrary it is of biological interest to consider the opposite limit where only weak signals are allowed and retrieval takes place with sizable amount of errors . in the spin language we are thus interested in the case in which the system is only slightly biased toward the patterns during the learning phase . in what followswe show that one can `` invert '' the inference process discussed in the previous sections and define a local learning rule which copes efficiently with this problem . as first stepwe consider a learning protocol in which the patterns are presented sequentially and in random order to the system by applying an external field in direction of the pattern , that is a field with .we assume that initially all couplings are zero .depending on the strength of the field , the system will be forced to explore configurations at different overlaps with the presented pattern .a small corresponds to a weak or noisy learning whereas for large the system has to remain very close to the pattern .what is a small or large , of course depends on the temperature and strength of the couplings .here we assumed the couplings are initially zero , so defines the boundary between weak and strong fields .the learning algorithm should indeed force the system to follow a behavior that is suggested by the auxiliary field .therefore , it seems reasonable if we try to match the correlations in the two cases : in absence and presence of the field .notice to the similarities and differences with the first part of the study . 
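a compact sketch of the inference loop of the two preceding sections: belief propagation supplies the magnetizations and nearest-neighbour correlations, an auxiliary field proportional to the measured magnetizations pins the bp fixed point to the sampled state, and the couplings are moved up the likelihood gradient while the field is annealed to zero. the damping, the annealing schedule, and the learning rate eta are our own choices, not taken from the text; for data from several states one would average the gradient term over the per-state bp correlations, as described above. here couplings are keyed by frozenset node pairs and nbrs maps each node to its neighbour list.

    import numpy as np

    def bp_marginals(nbrs, J, beta, field, n_iter=200, damp=0.5):
        """Belief propagation for a sparse Ising model: messages are cavity
        fields h[(i, j)] (field on i in absence of j); returns magnetizations
        and nearest-neighbour correlations on the edges."""
        h = {(i, j): 0.0 for i in nbrs for j in nbrs[i]}
        def cavity_sum(i, exclude):
            return sum(np.arctanh(np.tanh(beta * J[frozenset((k, i))]) *
                                  np.tanh(beta * h[(k, i)])) / beta
                       for k in nbrs[i] if k != exclude)
        for _ in range(n_iter):
            for (i, j) in h:
                h[(i, j)] = damp * h[(i, j)] + (1 - damp) * (field[i] + cavity_sum(i, j))
        m = {i: np.tanh(beta * (field[i] + cavity_sum(i, None))) for i in nbrs}
        corr = {}
        for e in J:
            i, j = tuple(e)
            num = den = 0.0
            for si in (-1, 1):
                for sj in (-1, 1):
                    w = np.exp(beta * (J[e] * si * sj + h[(i, j)] * si + h[(j, i)] * sj))
                    num += si * sj * w
                    den += w
            corr[e] = num / den
        return m, corr

    def infer_couplings(nbrs, m_exp, corr_exp, beta, lam0=1.0, eta=0.05, n_outer=200):
        """Gradient-ascent inference of the couplings from single-state data;
        the auxiliary pinning field lam * m_exp is annealed to zero."""
        J = {e: 0.0 for e in corr_exp}
        for step in range(n_outer):
            lam = lam0 * (1.0 - step / n_outer)
            field = {i: lam * m_exp[i] for i in nbrs}
            _, corr_bp = bp_marginals(nbrs, J, beta, field)
            for e in J:
                J[e] += eta * (corr_exp[e] - corr_bp[e])   # likelihood-gradient step
        return J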
as before we are to update the couplings according to deviations in the correlations .but , here the auxiliary field is necessary for the learning ; without that the couplings would not be updated anymore .moreover , it is obvious that we can not match exactly the correlations in absence and presence of an external field .we just push the system for a while towards one of the patterns to reach a stationary state in which all the patterns are remembered . for any we can compute the correlations by either sampling from the glauber dynamics or by directly running bp , with external fields . at the same timewe can compute correlations by the bp algorithm in zero external fields and with initial messages corresponding to pattern .then we try to find couplings which match the correlations in the two cases , namely we update the couplings by a quantity .the process is repeated for all couplings and for iterations with the same pattern .next we switch to some other randomly selected pattern and the whole process is repeated for learning steps .notice that here is fixed from the beginning .the above learning protocol displays a range of interesting phenomena .firstly one notices that for ( i.e. for very large external fields ) and ( i.e. for very high temperature or isolated neurons ) the above learning results to the hebb couplings of the hopfield model . in figure [ f5 ]we compare the histogram of learned couplings for small and large with the hebbian ones . comparing the histogram of hebbian couplings with that of learned couplings for small and large values of the external field .size of the system is , each spin interacts with other randomly selected spins . herewe are to store random and uncorrelated patterns . in the learning algorithmwe use and .,width=377 ] the number of patterns which are highly correlated with stable configurations depends on the strength of external fields .we consider that pattern is `` learned '' if there is a gibbs state with nonzero overlap that is definitely larger than the other ones .figures [ f6 ] and [ f7 ] show how these quantities evolve during the learning process and by increasing the magnitude of external filed .evolution of the average overlap and fraction of successfully learned patterns in the learning algorithm .size of the system is , each spin interacts with other randomly selected spins . in the learning algorithm we set , , , and number of patterns that are to store is .,width=377 ] average number of successfully learned patterns in the learning algorithm for different values of .size of the system is , each spin interacts with other randomly selected spins .the learning algorithm works at , and number of patterns that are to store is .the average is taken over realizations of the patterns.,width=377 ] for small nearly all patterns are learned , whereas , for larger some patterns are missing .a large number of patters can thus be learned at the price of smaller overlaps and weaker states .that is , the average overlap in successfully learned patterns decreases continuously by increasing , approaching the paramagnetic limit . in figure [ f8 ]we compare this behavior with that of hebb couplings .as the figure shows , there is a main qualitative difference between hebbian learning of the hofield model and the protocol discussed here . in the former case when the number of stored patterns exceeds some critical value the systems enters in a spin glass phase where all memories are lost and the bp algorithm does not converge anymore . 
on the contrary , in our case many patterns can be stored without ever entering the spin glass phase ( for a wide range of choices of ) .the bp algorithm always converges , possibly to a wrong fixed point if the corresponding pattern is not stored .average number of successfully learned patterns in the learning algorithm and hebb rule versus , the number of patterns that are to store .the inset shows the average overlap .size of the system is , each spin interacts with other randomly selected spins .the learning algorithm works at . the average is taken over realizations of the patterns.,width=377 ] population dynamicsis usually used to obtain the asymptotic and average behavior of quantities that obey a set of deterministic or stochastic equations .for instance , to obtain the phase diagram of the hopfield model with population dynamics one introduces a population of messages representing the bp cavity messages in a reference state , e.g. pattern .then one updates the messages in the population according to the bp equations : at each time step a randomly selected cavity message is replaced with a new one computed by randomly selected ones appearing on the r.h.s . of the bp equations .in each update , one generates the random couplings by sampling the other random patterns .after a sufficiently large number of updates , one can compute the average overlap with the condensed pattern to check if the system is in a memory phase .the stability of condensed state would depend on the stability of the above dynamics with respect to small noises in the cavity messages .if is large enough , one obtains the phase diagram of hopfield model in the thermodynamic limit averaged over the ensemble of random regular graphs and patterns . we used the above population dynamics to obtain the phase diagram of the hopfield model on random regular graphs ,see figure [ f9 ] .the phase diagram of hopfield model on random regular graphs of degree obtained with population dynamics ( ) .horizontal axes is number of patterns and vertical axes is temperature .the paramagnetic , memory and spin glass phases are labeled with and , respectively.,width=377 ] in order to study the new learning protocol we need a more sophisticated population dynamics .the reason is that in contrast to hebb couplings , we do not know in advance the learned couplings . in appendix [ app - pop ]we explain in more details the population dynamics that we use to analyze the learning process studied in this paper . the algorithm is based on populations of bp messages and one population of couplings .these populations represent the probability distributions of bp messages in different states and couplings over the interaction graph . for a fixed set of patterns we update the populations according to the bp equations and the learning rule , to reach a steady state .figure [ f10 ] displays the histogram of couplings obtained in this way . in the figure we compare two cases of bounded and unbounded couplings . in the first casethe couplings should have a magnitude less than or equal to whereas in the second case they are free to take larger values .we observe a clear difference between the two cases ; when is small , the couplings are nearly clipped in the bounded case whereas the unbounded couplings go beyond . however , in both cases there is some structure in the range of small couplings . increasing the magnitude of we get more and more structured couplings .for very large fields they are similar to the hebb couplings . 
for small fieldsthe histogram of the couplings is very different from the hebb one , though the sign of the learned and the hebbian couplings is the same .there are a few comments to mention here ; in the population dynamics we do not have a fixed graph structure and to distinguish patterns from each other we have to fix them at the beginning of the algorithm .moreover , we have to modify the bp equations to ensure that populations are representing the given patterns , see appendix [ app - pop ] . andfinally the outcome would be an average over the ensemble of random regular graphs , for a fixed set of patterns .having the stationary population of couplings , one can check the stability of each state by checking the stability of the bp equations at the corresponding fixed point .the maximum capacity that we obtain in this way for the learned couplings is the same as the hebb one whereas on single instances we could store much larger number of patterns .the reason why we do not observe this phenomenon in the population dynamics resides in the way that we are checking stability ; the fixed patterns should be stable in the ensemble of random regular graphs .in other words , checking for stability in the population dynamics is stronger than checking it in a specific graph .the main result of our analysis consists in showing that the distribution of the couplings arising from the bp learning protocol is definitely different from the hebbian one .the histogram of learned couplings obtained by the population dynamics in random regular graphs of degree .number of patterns that are to store is . in the upper panelwe compare the two cases of learning with bounded and unbounded couplings for a small external field . in the lower panelwe compare the hebb rule with the learning algorithm for a large external field . in the algorithm we use , , and .,width=377 ]we studied the finite connectivity inverse hopfield problem at low temperature , where the data are sampled from a non - ergodic regime .we showed that the information contained in the fluctuations within single pure states can be used to infer the correct interactions .we also used these findings to design a simple learning protocol which is able to store patterns learned under noisy conditions .surprisingly enough it was possible to show that by demanding a small though finite overlap with the patterns it is possible to store a large number of patterns without ever reaching a spin glass phase .the learning process avoids the spin glass phase by decreasing the overlaps , as the number of patterns increases . a separate analysis which is similar to the one presented in ref . ( and not reported here ) shows that the equations can be heavily simplified without loosing their main learning capabilities . in this paper we focused on a simple model of neural networks with symmetric couplings .it would be interesting to study more realistic models like the integrate and fire model of neurons with general asymmetric couplings .moreover , instead of random unbiased patterns one may consider sparse patterns which are more relevant in the realm of neural networks .the arguments presented in this paper can also be relevant to problem of inferring a dynamical model for a system by observing its dynamics . in this case , a system is defined solely by its evolution equations and one can not rely on the boltzmann equilibrium distribution . 
still it is possible to try to infer the model by writing the likelihood for the model parameters given the data and given the underlying dynamical stochastic process .a mean - field approach has been recently described in .we actually checked this approach in our problem and observed qualitatively the same behavior as the static approach .in fact , which method is best heavily depends on the type of data which are available .the work of was partially supported by a _programma neuroscienze _ grant by the compagnia di san paolo and the ec grant 265496 .a limit version of this result ( except the determination of the value of the constant ) appeared in .this result is valid for general ( non - zero ) interactions . for a family of `` potentials '' , where we denote by the subvector of given by .we will use the shorthand or to mean . _proposition_. given a factorized probability function and a bp fixed point and plaquette marginals and single marginals for every , then _ proof_. using the fact that and , we obtain , then using the definitions : this proves that a fixed point can be interpreted as a form of reparametrization of the original potentials .in fact , a sort of converse also holds : _ proposition _ : if satisfies a bethe - type expression with for every .then there exists a bp fixed point such that ._ proof _ : choose any configuration .we will use the following notation : , and . define , normalized appropriately .afterwards , we can define . by definition of we have . similarly , but using ( [ eq : bethe3 ] ) , and noting by , we have also .then , and thus .this also implies that , proving that the first bp equation is satisfied . by definition of , where . moreover using ( [ eq : bethe3 ] ), we can conclude that also where this implies that .but we also have that , so as desired .now by hypothesis , so and this proves that the second bp equation is also satisfied .consider the ising model on a tree graph of size with couplings and external fields .suppose that we are given a reference point in the configuration space and the following measure where is a sphere of radius centered at .by distance of two configurations we mean the hamming distance , i.e. number of spins which are different in the two configurations .the aim is to compute thermodynamic quantities like average magnetizations and correlations in an efficient way .we do this by means of the bethe approximation and so bp algorithm .first we express the global constraint as a set of local constraints by introducing messages that each node sends for its neighbors . for a given configuration , denotes the distance of from in the cavity graph which includes and all nodes connected to through . with these new variables we can write bp equations as where is an indicator function to check the constrains on and . starting from random initial values for the bp messages we update them according to the above equation .after convergence the local marginals read where in we check if .these marginals will be used to compute the average magnetizations and correlations .notice that when the graph is not a tree we need to pass the messages only along the edges of a spanning tree ( or chain ) which is selected and fixed at the beginning of the algorithm .consider patterns , where and goes from to , which is equivalent to the size of system .the patterns , learning rate and parameter are fixed at the beginning of the algorithm . 
to each pattern we assign a population of messages , where ( is the node degree ). these represent the normalized bp messages that we use in the learning algorithm. besides this we also have a population of couplings . note the maximum we are taking in the last step; this ensures that the bp messages in population are related to pattern . we do these updates for iterations, where in each iteration all members of a population are updated in a random sequential way.
we discuss how inference can be performed when data are sampled from the non-ergodic phase of systems with multiple attractors. we take as model system the finite connectivity hopfield model in the memory phase and suggest a cavity method approach to reconstruct the couplings when the data are separately sampled from a few attractor states. we also show how the inference results can be converted into a learning protocol for neural networks in which patterns are presented through weak external fields. the protocol is simple and fully local, and is able to store patterns with a finite overlap with the input patterns without ever reaching a spin glass phase where all memories are lost.
ocean circulation impacts weather , climate , marine fish and mammal populations , and contaminant transport , making ocean dynamics of great industrial , military and scientific interest .a major problem in the field of ocean dynamics involves forecasting , or predicting , a variety of physical quantities , including temperature , salinity and density . by fusing recently measured data with detailed flow models ,it is possible to achieve improved prediction .therefore , forecasts occur with increased accuracy . as a result , ocean surveillance may be improved by incorporating the continuous monitoring of a region of interest .researchers have used surface drifters and submerged floats to acquire data for many years .more recently , sensing platforms such as autonomous underwater gliders have been developed .the gliders can operate in both littoral ( coastal ) and deep - ocean regimes , and may be used for data acquisition , surveillance and reconnaissance .one drawback in the use of gliders involves their limited amount of total control actuation due to energy constraints , such as short battery life . for applications such as regional surveillance , energy constraintsmay be alleviated by taking advantage of the dynamical flow field structures and their respective body forces .autonomous underwater gliders are subjected to drift due to hydrodynamic forces .this drift can be extremely complicated since the velocity fields found in the ocean are aperiodic in time and feature a complex spatial structure . instead of constantly reacting to the drift ( and thereby expending energy ), one can minimize the glider s energy expenditure by taking advantage of the underlying structure found in geophysical flows .however , in order to harness the ocean forces to minimize energy expenditure during control actuation , one needs to analyze the structures from the correct dynamical viewpoint .the potential for dynamical systems tools to shed light on complex , even turbulent , flow fields such as ocean dynamics , has been understood for decades .work in this area has intensified in the past decade with new focus placed on the improved lagrangian perspective that dynamical systems approaches may provide , especially for complex , aperiodic flows in which the traditional eulerian perspective on the flow is unhelpful or even misleading . during boundary layer separation , for example ,sheet - like flow structures coincide with fluid particles being ejected from the wall region , a technologically important feature that an eulerian framework fails to capture in unsteady cases , but which has a clear lagrangian signature that can be identified using finite - time lyapunov exponents . in liquid jet breakup ,a similar ejection process occurs during primary atomization , but a limited lagrangian perspective is provided by the liquid - gas interface , revealing liquid sheets and ligaments that precede droplet formation .the development of these critical fluid sheets and ligaments can be traced to unsteady , finite - amplitude global flow structures .dynamical systems tools thus provide a new approach to the characterization and control of these important flows .geophysical flows offer another example where an eulerian framework is ineffective in the diagnosis of large lagrangian structures and the measurement of transport . 
for prediction and control of particle dynamics in large surveillance regions of interest ,lagrangian structures of geophysical flows need to be characterized in both deterministic and stochastic settings .the field of geophysical fluid dynamics ( gfd ) involves the study of large - scale fluid flows that occur naturally .gfd flows are , by nature , aperiodic and stochastic .the data sets describing them are usually finite - time and of low resolution .established tools of dynamical systems have proven to be less effective in these cases : while providing some insight , they can not provide realistic or detailed flow field data relevant to the trajectories of tracer particles . for this , dynamical systems tools can be applied to fluid flows in an alternative way , by interpreting the eulerian flow field as a dynamical system describing the trajectories of tracer particles .the phase space and real space are identical here and due to the incompressibility condition , the resulting dynamical system is conservative .this `` chaotic advection '' approach originated with aref and demonstrated that even simple laminar two - dimensional ( 2d ) periodic and three - dimensional ( 3d ) steady flows could lead to complex , chaotic particle trajectories .figure [ fig : ocean ] illustrates an example of how particles in a periodic flow may exhibit unexpected trajectories . in this figure , a single - layer quasi - geostrophic beta plane is being driven by a bimodal wind - stress with a small amplitude periodic perturbation .details of this ocean model can be found in appendix [ sec : ocean_model ] . in the last two decades , this approach has led to new tools including the study of transport by coherent structures , lobe dynamics , distinguished trajectories , and global bifurcations .even though the transport controlling structures in gfd flows are inherently complicated and unsteady , their understanding is necessary to the design of glider controls . to overcome this obstaclewe combine a set of dynamical systems tools that have proven effective in higher dimensions and stochastic problems . in this paper, we will consider a well - known driven double - gyre flow as an example to illustrate our prediction / control framework .this model can be thought of as a simplified version of the double - gyre shown in fig .[ fig : ocean ] which is a solution to a realistic quasi - geostrophic ocean model .it should be noted that our methods are general and may be applied to any flow of interest .the goal of our approach is the production of a complete picture of particle trajectories and tracer lingering times that enables one to design a control strategy that limits tracers from switching between gyres .( color online ) streamfunction ( color ) and two particle trajectories within a single - layer quasi - geostrophic ocean model ( see appendix [ sec : ocean_model ] ) subject to low - amplitude periodic forcing whose mean gives a steady double - gyre / western boundary current flow solution .initial points on the trajectory are identified with open circles ., width=321,height=321 ] the techniques we will use to analyze the dynamical systems are based on both deterministic and stochastic analysis methods , and reveal different structures depending on the system under examination . in the deterministic case , the lagrangian coherent structures reveal much about transport and information related to the basin boundaries .they also pinpoint regions of local sensitivity in the phase space of interest . 
since basin boundaries are most sensitive to initial conditions , uncertainties in the data near the boundaries generate obstructions to predictability .therefore , highly uncertain regions in phase space may be revealed by computing local probability densities of uncertain regions in the deterministic case , but using noisy initial data .finally , in the time - dependent stochastic case , we can describe which sets contain very long term , albeit finite , trajectories .the sets are almost invariant due to the stochastic forcing on the system , which causes random switching between the almost invariant sets .the tools we use here are based on the stochastic frobenius - perron operator theory .once the full structure of almost invariant sets is identified along with regions of high uncertainty , control strategies may be designed to maintain long time trajectories within a given region with minimal actuation . in fig .[ fig : ocean ] , one can see that one particle s trajectory remains in the lower gyre for a long period of time , while the other particle escapes from the lower gyre to the upper gyre .the tools we will develop and outline in this article will enable one to know if and when a control force must be actuated to prevent the particle from escaping .the layout of the paper is as follows . in sec .[ sec : themodel ] we present the stochastic double - gyre system and examine the deterministic dynamical features . in sec .[ sec : ftle ] , we show how to use the finite - time lyapunov exponents to describe transport , and we show in sec . [ sec : unc ] how to quantify uncertainty regions .we then turn to the stochastic system , and describe in sec .[ sec : ais ] how to compute almost invariant sets .section [ sec : control ] contains a discussion of our corral control strategy , and sec .[ sec : conc ] contains the conclusions and discussion .a simple model of the wind - driven double - gyre flow is provided by & = -a(f(x , t))(y)-x + _ 1(t),[e : toygyre_a ] + & = a(f(x , t))(y)-y + _ 2(t),[e : toygyre_b ] + & f(x , t)=(t + ) x^2+(1 - 2(t + ) ) x.[e : toygyre_c ] when , the double - gyre flow is time - independent , while for , the gyres undergo a periodic expansion and contraction in the -direction . in eqs .( [ e : toygyre_a])-([e : toygyre_c ] ) , approximately determines the amplitude of the velocity vectors , gives the oscillation frequency , determines the amplitude of the left - right motion of the separatrix between the two gyres , is the phase , determines the dissipation , and describes a stochastic white noise with mean zero and standard deviation , for noise intensity .this noise is characterized by the following first and second order statistics : , and for . in the rest of the article we shall use the following parameter values : , , , and .we consider the dynamics restricted to the domain and .an unforced , undamped autonomous version of the double - gyre model was studied by rom - kedar , et.al . , and an undamped system with different forcing was studied by froyland and padberg . 
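the displayed model equations (eqs. [e:toygyre_a]-[e:toygyre_c]) were garbled during extraction. the following is our reconstruction, based on the standard stochastically forced, damped double-gyre used in this literature, with symbol names matched to the parameter list in the text (amplitude A, perturbation amplitude epsilon, frequency omega, phase psi, dissipation mu, and white noise eta); it should be checked against the original before being relied on.

    \begin{aligned}
    \dot{x} &= -\pi A \sin\!\bigl(\pi f(x,t)\bigr)\cos(\pi y) - \mu x + \eta_1(t),\\
    \dot{y} &= \pi A \cos\!\bigl(\pi f(x,t)\bigr)\sin(\pi y)\,\frac{\partial f}{\partial x} - \mu y + \eta_2(t),\\
    f(x,t) &= \epsilon \sin(\omega t + \psi)\,x^{2} + \bigl(1 - 2\epsilon\sin(\omega t + \psi)\bigr)\,x .
    \end{aligned}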
prior to defining the control of certain sets of the stochastic system, we first describe the important dynamical features of the deterministic part of eqs .( [ e : toygyre_a])-([e : toygyre_c ] ) .there are two attracting periodic orbits , which correspond to two fixed points of the poincar map defined by sampling the system at the forcing period .for each fixed point , there corresponds a left and a right basin of attraction .local analysis on the poincar section about the fixed points reveals the attractors to be spiral sinks , which generate the global double - gyre . a representative basin map at the phase is shown in fig . [ fig : basin ] .one can see the complicated basin boundary structure in which the basins of attraction are intermingled , a signature of the existence of a fractal basin boundary .there are also several unstable ( saddle ) periodic orbits corresponding to fixed points that lie along or close to the domain boundary .due to the intermingling of the basin boundaries , one can expect that small perturbations , or uncertainties , in initial conditions near the basin boundary will generate large changes in dynamical behavior .this can occur deterministically , or when noise is added to the system .therefore , we quantify regions of uncertainty in phase space in both deterministic and stochastic settings in sections iv and v respectively .( color online ) the basin poincar map for the deterministic part of eqs .( [ e : toygyre_a])-([e : toygyre_c ] ) at phase .the locations of the attracting fixed points corresponding to periodic orbits are denoted by the stars .the coloring represents the convergence rate to the attractors . as shown by the color bar ,positive values converge to the left basin and negative values converge to the right basin .the largest magnitude values converge the fastest .four saddle fixed points are located within the boundaries of the domain , and are denoted by large dots .the remaining two are located at and .,width=321,height=160 ]one method that can be used to understand transport and which quantifies localized sensitive dependence to initial conditions in a given fluid flow involves the computation of finite - time lyapunov exponents ( ftle ) . in a deterministic setting, the ftle also gives an explicit measure of phase space uncertainty .given a dynamical system , one is often interested in determining how particles that are initially infinitesimally close behave as time .it is well - known that a quantitative measure of this asymptotic behavior is provided by the classical lyapunov exponent . in a similar manner , a quantitative measure of how much nearby particlesseparate after a specific amount of time has elapsed is provided by the ftle . in the early 1990s ,pierrehumbert and pierrehumbert and yang characterized atmospheric structures using ftle fields .in particular , their work enabled the identification of both mixing regions and transport barriers .later , in a series of papers published in the early 2000s , haller introduced the idea of lagrangian coherent structures ( lcs ) in order to provide a more rigorous , quantitative framework for the identification of fluid structures .haller proposed that the lcs be defined as a ridge of the ftle field , and this idea was formalized several years later by shadden , lekien and marsden . 
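the ftle computation itself reduces to advecting a grid of tracers over a window of length t, differentiating the resulting flow map, and taking the largest eigenvalue of the cauchy - green deformation tensor. the sketch below does this for the deterministic double - gyre; the grid, the assumed [ 0 , 2 ] x [ 0 , 1 ] domain, the window length and the parameter values are illustrative choices rather than the article's.

```python
import numpy as np

# ftle field for the deterministic double - gyre : advect a grid of tracers
# with rk4 , differentiate the flow map , take the largest eigenvalue of the
# cauchy - green tensor .  domain , grid , window length T and parameters are
# illustrative assumptions .
A, omega, eps, psi, mu = 1.0, 2*np.pi, 0.25, 0.0, 0.0

def vel(x, y, t):
    s = eps*np.sin(omega*t + psi)
    f, df = s*x**2 + (1 - 2*s)*x, 2*s*x + (1 - 2*s)
    return (-np.pi*A*np.sin(np.pi*f)*np.cos(np.pi*y) - mu*x,
            np.pi*A*np.cos(np.pi*f)*np.sin(np.pi*y)*df - mu*y)

def flow_map(x, y, t0, T, nsteps=200):
    """map initial positions at t0 to positions at t0 + T (rk4)."""
    dt = T/nsteps
    t = t0
    for _ in range(nsteps):
        k1 = vel(x, y, t)
        k2 = vel(x + 0.5*dt*k1[0], y + 0.5*dt*k1[1], t + 0.5*dt)
        k3 = vel(x + 0.5*dt*k2[0], y + 0.5*dt*k2[1], t + 0.5*dt)
        k4 = vel(x + dt*k3[0], y + dt*k3[1], t + dt)
        x = x + dt*(k1[0] + 2*k2[0] + 2*k3[0] + k4[0])/6
        y = y + dt*(k1[1] + 2*k2[1] + 2*k3[1] + k4[1])/6
        t += dt
    return x, y

xs, ys = np.meshgrid(np.linspace(0, 2, 201), np.linspace(0, 1, 101))
T = 10.0
fx, fy = flow_map(xs, ys, 0.0, T)

# flow - map gradient by central differences
dxdx = np.gradient(fx, xs[0, :], axis=1)
dxdy = np.gradient(fx, ys[:, 0], axis=0)
dydx = np.gradient(fy, xs[0, :], axis=1)
dydy = np.gradient(fy, ys[:, 0], axis=0)

# largest eigenvalue of the cauchy - green tensor , then the ftle
c11 = dxdx**2 + dydx**2
c12 = dxdx*dxdy + dydx*dydy
c22 = dxdy**2 + dydy**2
lam_max = 0.5*((c11 + c22) + np.sqrt((c11 - c22)**2 + 4*c12**2))
ftle = np.log(np.maximum(lam_max, 1e-300))/(2*abs(T))
```

ridges of the resulting ftle field approximate the lagrangian coherent structures discussed above; repeating the computation in backward time picks out the attracting structures.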
when computing the ftle field of a dynamical system , these lcs , or ridges , are seen to be the structures which have a locally maximal ftle value .although the ftle / lcs theory can be extended to arbitrary dimension , in this article we consider a 2d velocity field given by the deterministic part of eqs .( [ e : toygyre_a])-([e : toygyre_c ] ) which is defined over the time interval \subset\mathbb{r} ] , is a random variable with intensity . to find the pdf , we switch to polar coordinates using the time - dependent change of variables given by and .this results in the following transformed stochastic system : here , we find that and the noise vanishes entirely from the equation . assuming the transformed noise term still has intensity , and is a normalization constant , the probability density is given by the pdf is now fully specified by the deterministic flow and noise drift , control strength and actuation region , and can be plotted for various parameters . in fig .[ fig : pdf ] one can see how an increase in the control strength decreases the probability of the trajectory moving outside the boundary of the control region .the inset figure shows that the local minimum of the pdf occurs at , which is the location of the unstable limit cycle .the pdfs branch at the point of control at .the thick solid curve represents the dynamics with no control .it follows that a trajectory escaping outside the limit cycle will diverge to infinity , and the pdf increases as one increases .( color online ) pdf as a function of the control radius for varying control force found using eq .( [ eq : radial pdf ] ) .continuous noise is used with , , and .the normalization constant , , was set in each case so that the area under the curve is one .the inset shows a close - up of the pdf near the attractor for the case of no control.,width=321 ] in addition , one can perform the one - dimensional almost invariant set analysis for this example .the transport matrix is computed using a grid of 500 intervals on the domain and gaussian noise with standard deviation of .the point of control is located at , to the left of the basin boundary at .figure [ fig : ais_control](a ) , shows the expansion of the left almost invariant set to the right as the control is increased .this information is contained in the second eigenvector of the transport matrix .the lower and upper almost constant regions of the function represent location of the sets .figure [ fig : ais_control](b ) , shows the movement of the transport region to the right as the control is increased .this information is contained in the third eigenvector of the transport matrix .the maximum value of the function represents the location of the transition region .notice that it is the complement of the almost invariant sets .both the pdf and almost invariant set analysis demonstrate the change that the control algorithm has on the natural dynamics of the stochastic system .the control algorithm decreases the probability of trajectories visiting regions where deterministic dynamics would cause them to diverge .this moves the effective basin boundary farther from the attractor , increasing its size . 
by using sufficient control radius and force , any trajectory that would naturally divergecan be redirected towards the attractor .therefore , the almost invariant set can be expanded to the desired size at a cost of the number of control actuations .( color online ) the results of the almost invariant sets analysis for varying control force in eq .( [ eq : radial lc controller ] ) with , , and gaussian noise with standard deviation of .figure [ fig : ais_control](a ) shows the almost invariant set for varying control force .the second eigenvector of the transport matrix is mapped back to the domain .notice that as increases , the right basin extends to the right .figure [ fig : ais_control](b ) shows the transition set for varying control force .the third eigenvector of the transport matrix is mapped back to the domain .notice that as increases , the transition region moves to the right . , title="fig:",width=302 ] ( color online ) the results of the almost invariant sets analysis for varying control force in eq .( [ eq : radial lc controller ] ) with , , and gaussian noise with standard deviation of .figure [ fig : ais_control](a ) shows the almost invariant set for varying control force .the second eigenvector of the transport matrix is mapped back to the domain .notice that as increases , the right basin extends to the right .figure [ fig : ais_control](b ) shows the transition set for varying control force .the third eigenvector of the transport matrix is mapped back to the domain .notice that as increases , the transition region moves to the right ., title="fig:",width=302 ] additionally , it is possible to quantify the relation between the mean escape time and the potential defined by the pdf . using the well - known kramer s escape rate , one can predict the rate at which a particle can escape over a potential barrier under brownian motion . in one dimension ,the escape time of a particle from a potential defined by is } ] on a -plane .the governing non - dimensionalized equation for the fluid streamfunction is : where is the jacobian operator , and forcing is provided by a wind stress curl , , that is prescribed as follows to form a double - gyre circulation with a weak periodic `` seasonal '' variation : where the amplitude and frequency were used to produce fig .[ fig : ocean ] .this system is characterized by the non - dimensional parameters where is the rotation parameter , is the bottom friction and and are respectively the characteristic length and velocity scales of the basin . the parameters and correspond to the relative length scales of the stommel and inertial layers , respectively . to produce fig . [ fig : ocean ] , we used and .the above model has been numerically integrated using second - order spatial differences and second - order runge - kutta time stepping for the streamfunction , and a fourth - order runge - kutta algorithm to compute the lagrangian trajectories of tracer particles . in fig .[ fig : ocean ] , a grid of resolution was used and a dimensionless time step of .coordinates of tracer particles are independent of the grid while flow velocities at the particle locations are found using bilinear interpolation from the grid values .an initially static ocean spins - up in a few hundred time steps .if the spun - up solution is stationary , while non - zero leads to a superimposed oscillatory behavior .the tracers are held in place until spin - up is complete . c. c. eriksen , t. j. osse , r. d. light , t. wen , t. w. lehman , p. l. sabin , j. w. 
ballard, and a. m. chiodi, "seaglider: a long-range autonomous underwater vehicle for oceanographic research," ieee j. oceanic eng. 26, 424 (2001).
k. ide, d. small, and s. wiggins, "distinguished hyperbolic trajectories in time-dependent fluid flows: analytical and computational approach for velocity fields defined as data sets," nonlinear proc. geoph. 9, 237 (2002).
r. meucci, d. cinotti, e. allaria, l. billings, i. triandaf, d. morgan, and i. b. schwartz, "global manifold control in a driven laser: sustaining chaos and regular dynamics," physica d 189, 70 (2004).
s. c. shadden, f. lekien, and j. e. marsden, "definition and properties of lagrangian coherent structures from finite-time lyapunov exponents in two-dimensional aperiodic flows," physica d 212, 271 (2005).
m. branicki and s. wiggins, "finite-time lagrangian transport analysis: stable and unstable manifolds of hyperbolic trajectories and finite-time lyapunov exponents," nonlinear proc. geoph. 17, 1 (2010).
w. tang, p. w. chan, and g. haller, "accurate extraction of lagrangian coherent structures over finite domains with application to flight data analysis over hong kong international airport," chaos 20, 017502 (2010).
we consider the problem of stochastic prediction and control in a time - dependent stochastic environment , such as the ocean , where escape from an almost invariant region occurs due to random fluctuations . we determine high - probability control - actuation sets by computing regions of uncertainty , almost invariant sets , and lagrangian coherent structures . the combination of geometric and probabilistic methods allows us to design regions of control that provide an increase in loitering time while minimizing the amount of control actuation . we show how the loitering time in almost invariant sets scales exponentially with respect to the control actuation , causing an exponential increase in loitering times with only small changes in actuation force . the result is that the control actuation makes almost invariant sets more invariant . * prediction and control of the motion of an object in time - dependent and stochastic environments is an important and fundamental problem in nonlinear dynamical systems . one of the main goals of control is the design of a theory that can take unstable states and render them stable . for example , small perturbations at the base of an inverted pendulum will stabilize the inverted state . noise poses a greater problem for deterministically controlled states , in that stochastic effects destabilize the states as well as their neighborhoods . therefore , control theory of stochastic dynamical systems may be addressed by examining the change in stability of certain sets . * we present a variety of geometric and probabilistic set - based methods that enable one to compute controllable sets . for a particle moving under the influence of deterministic and stochastic forces with no control , these sets determine regions which are unstable in the sense that the particle will leave the set after a sufficiently long time . controls added to particle dynamics to increase the time to escape ( or loitering time ) have a strong dependence on the probabilistic and geometric set characteristics . the determination of controls and associated sets allow for an increase in the amount of time the particle can loiter in a particular region while minimizing the amount of control actuation . our theoretical analysis shows how an increase in the strength of the control force leads to a decrease in the probability that an object will escape from the control region . in fact , we have found that small changes in the control actuation force have an exponential effect on the loitering time of the object . additionally , we show how the exponential increase in escape times from the controlled sets is related to the problem of noise - induced escape from a potential well .
a common characteristic of massive data sets whose major purpose of study is to discover the association between response and predictors is that the number of predictors is larger than the number of independent individuals .although linear regression or its generalizations are useful tools to detect associations , some computational and theoretical issues are still remained unsolved for massive data .many statistical approaches have been developed during the past two decades in many aspects . in this work ,we concentrate on the screening problem for binary response regressions . considering all predictors in one linear modelis not practical without placing restrictions on the parameter space . instead , one may design screening statistics to rule out unimportant predictors before building final models .a screening statistic is defined as a surrogate measure of the underlying association between the response and a predictor .fan and lv ( 2008 ) and fan and song ( 2010 ) propose the concept of sure screening " : a screening statistic possesses the sure screening property if that the statistic is relatively small if the true association is negligible or 0 .their sure independence screenings were designed toward this end .fan and lv ( 2008 ) proposes the sure independence screening ( sis ) for linear regression .they choose the predictors with large absolute covariances between response and predictors as important predictors and then build final regression models based on these important predictors . as we will show later , the covariance is proportional to the slope of the simple linear regression so hereafter , we use slope instead of covariance if there is no confusion .the computation of sis is extraordinarily fast because it only involves centering and inner products but no matrix inversion .note that when the term linear regression " is applied , we generally presume that the response is continuous or more restrictively , the response follows normal distribution . either way, least - square estimation can be applied to estimate the regression coefficients .one of our major interest is the consequence of applying least - square estimation to binary response data . as we will show later , under some conditions , it is useful for screening but not for estimation and prediction .for binary response regressions , it is popular to choose logit , probit or complementary log - log link function ( mccullagh and nelder , 1989 ) to formulate the likelihood .many statistical softwares perform estimation and testing tasks well . however , there are two major issues on using these models .first , the choice of link function is essential to estimation science different link functions yield different regression coefficient estimates .li and duan ( 1989 ) proves that , the maximum likelihood estimate ( mle ) is consistent to the true regression coefficient up to an unknown constant when the link function is misspecified .so , when the true link function is unknown , the regression coefficient estimates is always questionable .second , the mle of regression coefficient is sometimes unidentifiable , unique or finite mle does not exist ( albert and anderson , 1984 ) .these two reasons urge us to find a computational efficient procedure for screening rather than merely using traditional logistic or probit regressions .the rest of this article is arranged as follows . 
in section 2, we review some useful results of linear model as well as the sure independence screening in linear model ( fan and lv , 2008 ) and in generalized linear regression ( fan and song , 2010 ) .moreover , we show that , for binary response regressions , the screening statistics of both sis and the newly proposed least - square screening ( less ) converge in probability to its linear model counterpart up to a constant when the predictors follow multivariate normal and the link function belongs to a class of scale mixture of normals .simulation and data analysis are shown in section 4 followed by our concluding remarks in section 5 .we begin with matching parameters of the true model and parameters of working models .consider the linear model where , follows , and and s are independent .assume that not all of s are 0 .denote and as the element of .let ( [ eq : truemodel ] ) be the true model and call the predictors with non - zero ( zero ) regression coefficients as active ( inactive ) predictors .as taught in the first course of linear regression , the least - square estimator of the slop of the working model converges in probability to and hence where , , and is a diagonal matrix with diagonal elements . for a more general case ,let the working model be where .define and and partition the regression coefficient vector as where and are regression coefficients corresponding to and , respectively .further , define the partition of the variance covariance matrix as \ ] ] with respect to and , too. then we have which implies that the least - square estimate of the working model ( [ eq : working1 ] ) converges in probability to this suggests that if either or which is actually the partial orthogonality condition defined in huang , horowitz and ma ( 2008 ) .in other words , to successfully estimate regression coefficients ( ) without contamination ( ) , a subset of predictors , say , should be chosen so that and are uncorrelated or none of predictors in is active . note that , the multiple regression with predictors is a misspecified model for the true model ( [ eq : truemodel ] ) as long as these predictors do not include all active predictors .a well - known result in linear model literature is that if the working model is misspecified , the least - square estimator of the regression coefficient is biased .the asymptotic bias can be quantified explicitly by ( [ eq : main ] ) . 
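for concreteness, here is a minimal sketch of the screening statistic itself in the linear case: the marginal least - square slope of the response on each predictor, computed with centering and inner products only, after which the predictors with the largest absolute slopes are retained. the dimensions, active coefficients and cut - off d are illustrative choices.

```python
import numpy as np

# sure independence screening for a linear model : the screening statistic for
# predictor j is the slope of the simple linear regression of y on x_j ,
# obtained with centering and inner products only .  n , p , the active
# coefficients and the cut - off d are illustrative .
rng = np.random.default_rng(0)
n, p = 200, 2000
X = rng.standard_normal((n, p))
beta = np.zeros(p)
beta[:5] = [3.0, -2.0, 1.5, -1.5, 2.5]            # active predictors : indices 0 - 4
y = X @ beta + rng.standard_normal(n)

Xc = X - X.mean(axis=0)
yc = y - y.mean()
slopes = Xc.T @ yc / np.sum(Xc**2, axis=0)        # marginal least - square slopes

d = 20                                            # keep the d largest |slopes|
kept = np.argsort(-np.abs(slopes))[:d]            # indices 0 - 4 should appear here
```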
however , to our best knowledge , there is no such expression under binary response regressions .the score equation is applied to link the true model parameters and the maximum likelihood estimates of the parameter under a specific misspecified model .suppose the true model is and , where is the so called logit link .assume that the true link is known and a simple working model is specified .consequently , the score function converges in probability to and thus , the maximum likelihood estimator of converges to such that = e\left [ x_1 \frac{1}{1+\exp\{-\gamma_0 - \sum_{j=1}^p x_j\gamma_j\ } } \right ] - e\left[x_1 \frac{1}{1+\exp\{-\beta_0^{ml } - \beta_1^{ml } x_1\ } } \right]\ ] ] so the relationship between the true model and working model can be quantified by these two expectations .the calculation of these expectations are not trivial and their numerical evaluations had been studies by crouch and spiegelman ( 1990 ) and monahan and stefanski ( 1992 ) .one of the major contribution is providing a closed - form expression of these expectations .it is worth to emphasize that in general and we wish to express in terms of true parameter values s like ( [ eq : main ] ) .hereafter , denote and as the probability density function ( _ p.d.f . _ ) and cumulative distribution function ( _ c.d.f . _ ) of the standard normal distribution , respectively , and denote as the density function of the normal distribution with mean and variance .now , we derive the relationship between the regression coefficient of the true model and of the working model for binary response regression models .let the true model be and the working model be where a function with the subscript means that the function is unknown but one is posited to it , and the function with subscript means that the function is the underlying function .moreover , we require that as well as is a valid _ c.d.f . _ with the form of scale normal mixture and is a valid density function for either continuous or discrete .this implies that is a symmetric _ p.d.f . _such an can be the __ of gaussian , logistic , double exponential , student- ( andrews and mallows , 1974 ) , exponential power family ( box and tiao , 1973 ; west , 1987 ) and others .hereafter , s should satisfy 1 ) , 2 ) is a valid density function , and 3 ) for every and . a sufficient and necessary condition for the existence of is provided by andrews and mallows ( 1974 ) .following is one of our major conclusion and it is the result of lemma [ lm : ab00 ] and lemma [ lm : probit ] .theirs proofs are deferred to appendix a. 
theorem [ th : main ] implies that the least square estimator converges to a value proportional to the desire value ( [ eq : main ] ) and the proportion is expressed in a form of integration .[ th : main ] under the true model ( [ eq : btrue ] ) and the working model ( [ eq : bworking ] ) , the least - square estimator converges in probability to where .[ lm : ab00 ] arnold and beaver ( 2000 ) prove that 1 .linearly skewed normal is defined according to the following equation 2 .if then ^{-1}\ ] ] [ lm : probit ] under the true model ( [ eq : btrue ] ) with probit link ( ) , under the working model ( [ eq : bworking ] ) , we show that the maximum likelihood estimator of converges in probability to in theorem [ th : mle ] where the proof is rooted from the score equation \right\ } = e\left\{{\bf z}_1\left[h_t({\bf z}^t{{\boldsymbol}\gamma})-h_w({\bf z}^t_1{{\boldsymbol}\beta}^{ml}_1)\right]\right\} ] , , and is a carefully chosen constant .note that is not necessarily a probability mass function for arbitrary .a sufficient condition to make a probability mass function is that moreover , the correlation coefficient between and is given by biswas and hwang ( 2002 ) .consequently , an algorithm to simulate correlated is as follows : 1 .sample from .sample from .set .2 . for simulating , sample from and from .3 . given , if the sufficient condition ( [ eq : suf ] ) is not satisfied , let .sample from the conditional probability ( [ eq : cond ] ) with .5 . let .goto 2 . while and stop while .note that , setting makes two binomial random variables independent .so step 3 .enforces two consecutive variables to be independent with probability roughly equal to 0.1 according to simulation .moreover , the resulting correlation ranges from 0.2 to 0.8 with median 0.4 by simulation .
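the conditional probability ( [ eq : cond ] ) used in the simulation algorithm above is not reproduced in this excerpt, so the sketch below substitutes a different, plainly labelled construction for generating correlated binomial predictors: adjacent predictors share a latent ar(1) gaussian that is thresholded and summed into binomial counts, a gaussian - copula style device. the number of trials, success probability and latent correlation are illustrative placeholders.

```python
import numpy as np
from scipy.stats import norm

# stand - in generator for correlated binomial predictors : adjacent predictors
# share a latent ar(1) gaussian that is thresholded and summed into binomial
# counts .  this is a gaussian - copula style device , not the conditional pmf
# (eq . cond) of the article ; m , prob and rho are illustrative .
def correlated_binomial_design(n, p, m=10, prob=0.3, rho=0.4, seed=0):
    rng = np.random.default_rng(seed)
    cut = norm.ppf(prob)
    X = np.empty((n, p), dtype=int)
    Z = rng.standard_normal((n, m))
    X[:, 0] = np.sum(Z < cut, axis=1)
    for j in range(1, p):
        Z = rho*Z + np.sqrt(1 - rho**2)*rng.standard_normal((n, m))
        X[:, j] = np.sum(Z < cut, axis=1)
    return X

X = correlated_binomial_design(n=200, p=100)
```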
screening before model building is a reasonable strategy to reduce the dimension of regression problems . sure independence screening is an efficient approach for this purpose . it applies the slope estimate of a simple linear regression as a surrogate measure of the association between the response and a predictor , so that the final model can be built from those predictors with steep slopes . however , if the response is truly affected by a nontrivial linear combination of several predictors , then the simple linear regression model is a misspecified model . in this work , we investigate the performance of sure independence screening from the viewpoint of model misspecification for binary response regressions . both maximum likelihood screening and least - square screening are studied under the assumptions that the predictors follow a multivariate normal distribution and that the true and working link functions belong to a class of scale mixtures of normal distributions . keywords : link function , logistic model , probit model , sure independence screening
all radio telescopes can be classified as either filled or unfilled - aperture antennas .the simplest filled - aperture antennas are single - dish telescopes .the desired astronomical object to be observed and the particular scientific goals determine which type of radio telescope to use . in general ,single - dishes are considered as tools for low spatial resolution observations , while interferometers are used for high resolution observations . while compact objects are more suited for interferometric observations , extended objectsare commonly observed with single - dishes as interferometers can not faithfully recover information on the largest spatial scales .however , in many scientific cases it is essential to obtain high spatial resolution observations of large objects , and to accurately represent emission present over a wide range of spatial scales .a simple recipe you may follow in such cases is : * observe ( mosaic ) your object with an interferometer , * observe your object with a single - dish , * cross - calibrate the two data sets , and then * combine the single - dish and interferometer data .this combination of single - dish and interferometer data , when observing extended objects , is referred to as the short - spacings correction .this ` simple ' recipe may be considered as an artistic touch to the interferometric images as it makes them look much nicer but still preserves their high spatial resolution .this results from : ( a ) inclusion of more resolution elements , those seen by a single - dish ; and ( b ) reconstruction of image artifacts . at the same time , these images contain information about the total power , and can be used to measure accurate flux densities , column densities , masses , etc . from a pure historical perspective, the short - spacings correction bridges the gap between the two classes of radio telescopes , essentially obtaining the best of ` both worlds ' , that is the high spatial resolution information provided by interferometers , and the low spatial resolution , including the total power , information provided by single - dishes .this article will explain what the short - spacing problem is , how it is manifested , and how we can , both theoretically and practically , solve this problem .section 2 depicts very briefly the fundamentals of interferometry , defines the spatial frequency domain , and draws an analogy between a single dish and an interferometer .the effects of missing short spacings are demonstrated in section 3 , as well as prospects for solving the problem .section 4 considers the cross - calibration of interferometer and single dish data which is a precursor to any combination method .methods for data combination are discussed in section 5 and section 6 , and compared in section 7 .a very brief review of the basics of interferometry is necessary right at the beginning of this article , in order to define and explain some terms that will be used further on .however , we do not want to go deeply into interferometry , as there is a vast literature available on this topic , starting with `` interferometry and synthesis in radio astronomy '' ( thompson , moran , & swenson 1986 ) and `` synthesis imaging in radio astronomy '' ( taylor , carilli , & perley 1999 ) .the fundamental idea behind interferometry is that a fourier transform relation exists between the sky radio brightness distribution and the response of a radio interferometer . 
if the distance between two antennas ( the baseline ) is , then the so - called visibility function , , is given by : d\omega \;.\ ] ] here , is an antenna reception pattern , or * primary beam * , and is the vector difference between a given celestial position and the central position of the field of view .the * aperture synthesis technique * is a method of solving equation [ e : int - basic ] for by measuring at suitable values of . to simplify equation [ e : int - basic ] , a more convenient , right - hand rectilinear ,coordinate system is introduced in fugure [ f : interferometer ] .coordinates of vector in this system are , where the direction to the source center defines the direction , and and are baseline projections onto the plane perpendicular to the direction , towards the east and the north , respectively .a synthesized image in the plane represents a projection of the celestial sphere onto a tangential plane at the source center . in certain conditions , that is in the case of an earth tracking , east - west interferometer array ,with the -axis lying in the direction of the celestial pole , further simplifications of equation [ e : int - basic ] are possible : \frac{dl dm } { \sqrt{1-l^{2}-m^{2 } } } \;.\ ] ] therefore , the visibility function can be expressed as the fourier transform of a modified brightness distribution .coordinates and ( ) are measured in units of wavelength and the plane is called * the spatial frequency domain*. these are effectively projections of a terrestrial baseline onto a plane perpendicular to the source direction .the plane is referred to as * the image domain*. to obtain , from equation [ e : v ] , an inverse fourier transform of is required , meaning that a complete sampling of the spatial frequency domain is essential . in practice however , a bit more than a simple inversion is needed as only limited sampling of the plane is available. for a given configuration of antennas any interferometer array has a limited range of baselines , lying between a minimum , , and maximum , , baselines . as an example , 5 antennas of the australia telescope compact array ( atca ) form 10 baselines , with m and m for a particularly compact configuration .the final resolution ( ) is inversely related to the maximum baseline by . in the case of an earth tracking interferometer , as the earth rotates the baseline projections on the plane trace a series of ellipses .the parameters of each ellipse depend upon the declination of the source , the length and orientation of the baseline , and the latitude of the center of the baseline ( thompson et al . 1986 ) .the ellipses are concentric for a linear array ( e.g. atca ) . for a 2-dimensional array ( e.g. 
the very large array , vla ) , the ellipses are not concentric and so can intersect .as each baseline traces a different ellipse , the ensemble of ellipses indicates the spatial frequencies that can be measured by the array ( see thompson et al .at each sampling interval , the correlator measures the visibility function for each baseline , thus resulting in a number of samples being measured over elliptical tracks in the plane .hence , the resultant interferometer coverage will always be , more or less , incomplete , having a hole in the center of the plane whose diameter corresponds to the minimum baseline , gaps between measured elliptical tracks , and gaps between each adjacent samples on each elliptical track .the ensemble of ellipses ( loci ) is known as the * transfer * or * sampling function * , .an example of the sampling function obtained with the vla is shown in fugure [ f : tracks12 ] .hence , if is a true ( ideal ) visibility function , the measured ( observed ) visibilities ( ) can be expressed as : is usually representable by a set of -functions , between the lowest and the highest spatial frequency sampled by the interferometer ( corresponding to the shortest and the longest baselines , respectively ) .the fourier transform of equation [ e : int1 ] gives the observed sky brightness distribution ( so called * ` dirty ' image * ) : where is * the synthesized or ` dirty ' beam * , which is the point source response of the interferometer .as usually , asterisks ( ) are used to denote convolution .when imaging , incomplete coverage leads to severe artifacts , such as negative ` bowls ' around emission regions and negative and positive sidelobes ( cornwell , braun , & briggs 1999 ) .we return to this in section [ s : effects ] the determination of from in the deconvolution process , requires beforehand interpolation and extrapolation of for missing data due to the discontinuous nature of ( cornwell & braun 1989 ) .this process works well when a compact configuration of antennas is used and when the source is small enough , with angular size ( bajaja & van albada 1979 ) . for imaging larger objects , with angular size ,a significant improvement in filling in the coverage can be achieved by using the * ` mosaicing technique ' * , where observations of many pointing centers are obtained and ` pasted ' together ( see holdaway 1999 ) .mosaicing effectively reduces the shortest projected baseline to , where is the diameter of an individual antenna .nevertheless , the center of the plane still suffers if significant large scale structure is present .this lack of information for very low spatial frequencies ( around the center of the coverage ) in an interferometric observation is usually referred to as the * ` short - spacings problem'*. extended objects with angular size can be observed with a single - dish .let us now think of a single - dish in a slightly unusual way , imagining filled apertures consisting of a large number of small panels packed closely together. then all these panels can act as interferometer elements with their signals being combined together at the focus , making a so called phased or adding array ( see contribution by d. 
emerson in this volume ) .the distance between each two panels corresponds to a baseline , as shown in fugure [ f : dish ] .the baseline distribution then monotonically decrease from zero at the center up to the maximum baseline , determined by the single - dish diameter .this is also shown in fugure [ f : dish ] .one observation with a single - dish provides a total flux density measurement , corresponding to the zero spacing , .however , if a single - dish scans across an extended celestial object , it measures not only a single spatial frequency , but a whole range of continuous spatial frequencies all the way up to a maximum of ( ekers & rots 1979 ) .hence , a single - dish behaves as an interferometer with an almost infinite number of antennas , and therefore has a continuous range of baselines , from zero up to .the nice thing about this representation is that we can now use the same mathematical notation to describe both single - dishes and interferometers .the observed sky brightness distribution in the case of single - dish observations is then given by : with being the * single - dish beam * pattern .the fourier transform of equation [ e : sd1 ] gives the observed single - dish ` visibilities ' , : where is the fourier transform of the single - dish beam pattern which , unlike , is a continuous function between zero and the highest spatial frequency sampled by the single - dish .determination of from requires deconvolution , but no interpolation of is needed since this is a continuous function .as shown in equation [ e : int ] , the sky brightness distribution can be reconstructed , in the case of interferometric observations , by deconvolving the ` dirty ' image with the synthesized beam . as an example , fugure [ f : effect2 ] shows the result of the hi ` mosaic ' observations of the small magellanic cloud ( smc ) with the atca .more information about these observations and data reduction is available in stanimirovic et al .the two adjacent panels on the right side show right ascension ( ra ) cuts through the image .negative bowls ( shown in white on the image ) are seen around emission peaks ( shown in black ) , as well as in ra cuts .these are typical interferometric artifacts resulting from an incomplete coverage . a simple graphical explanation of why this happens , borrowed from braun & walterbos ( 1985 ) , is shown in fugure [ f : effect1 ] for the case of a point source .the solid vertical line in fugure [ f : effect1 ] distinguishes the spatial frequency ( a ) from the image domain ( b ) .the distribution of measured spatial frequencies , or what we have already defined as a transfer ( or sampling ) function , , is given on the left side , while its fourier transform , that is the synthesized beam , , is shown in the right .an exclusion of the central values from the spatial frequency domain , is equivalent to a subtraction of a broad pedestal in the image domain , resulting in the presence of a deep negative ` bowl ' around the observed object , as seen in fugure [ f : effect2 ] .this demonstrates simply how severe the effects of missing short - spacings can be .the larger the object is relative to the reciprocal of the shortest measured baseline one tries to image , the more prominent the short - spacing problem becomes .there are two main questions concerning the short - spacings problem : 1 . how to provide ( observe ) missing short spacings to interferometric data ; 2 . how to combine short - spacing data with those from an interferometer. 
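before turning to these two questions, a toy numerical illustration of the first artifact is useful: fourier transforming an extended gaussian source, zeroing the central region of the coverage, and transforming back produces exactly the negative bowl described above. the grid size, source width and annulus limits below are arbitrary demonstration values.

```python
import numpy as np

# toy demonstration of the negative bowl : fourier transform an extended
# gaussian source , keep only an annulus of spatial frequencies (central hole
# = missing short spacings) , and transform back .  all numbers are arbitrary
# demonstration values .
npix = 256
x = np.arange(npix) - npix//2
L, M = np.meshgrid(x, x)
sky = np.exp(-(L**2 + M**2)/(2*12.0**2))          # extended gaussian source

def ft2(img):
    return np.fft.fftshift(np.fft.fft2(np.fft.ifftshift(img)))

def ift2(F):
    return np.fft.fftshift(np.fft.ifft2(np.fft.ifftshift(F))).real

u = np.fft.fftshift(np.fft.fftfreq(npix))
U, V = np.meshgrid(u, u)
R = np.hypot(U, V)
S = ((R > 0.02) & (R < 0.3)).astype(float)        # annular coverage : central hole

dirty_beam = ift2(S)                              # point source response
dirty_image = ift2(S*ft2(sky))                    # negative bowl around the source
```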
in answering the first question ,all solutions can be grouped into two array schemes : * homogeneous * , having all antennas of the same size , and * heterogeneous * , based on observations obtained with different - sized antennas .there are many possibilities concerning the heterogeneous arrays , such as using smaller arrays and even a hierarchy of smaller arrays .the simplest option , though , is a single - dish telescope with a diameter ( ) larger than the interferometer s minimum baseline .we briefly touch here on some pros and cons for both array schemes .one of the difficulties in providing short - spacings with a single - dish is that it is hard to provide a large single - dish which would have the sensitivity equivalent to that of an interferometer ( holdaway 1999 ) .also , single - dish observations are complex ( they require a lot of separate pointing centers to cover a large object ) and very sensitive to systematic errors . using theoretical analysis , numerical simulations and observational tests , cornwell , holdaway , & uson ( 1993 )show that a homogeneous array in which the short - spacings are obtained from single antennas of the array , allows high quality imaging .they find that a key advantage over the large single - dish scheme is pure simplicity , which is an important factor for the complex interferometric systems . as both interferometric and total power dataare obtained with the same array elements , no cross - calibration is required in this case .note that in this case total - power and interferometric observations have to be synchronized which is not a simple task because of different observing techniques involved ( e.g. the single - dish observations require frequency or position switching modes ! ) .this turns out to be an especially difficult task for the continuum observations .however , to _ fully _ fill in the central gap in an interferometer coverage and _ preserve sensitivity _ at the same time , the heterogeneous array scheme appears more advantageous .this has been recently recognized in the planning of the future atacama large millimeter / submillimeter array ( alma ) .imaging simulations have shown that antenna pointing errors of only a few percent of the primary beam width produce large errors in the visibilities in the central plane , causing a large degradation of image quality ( see morita 2001 ) . to compensate for this problem , an additional , smaller array of 6 8 m dishes ,the so called alma compact array ( aca ) , has been proposed to provide short baselines . in answering the second question ,methods for the combination of interferometer and single - dish data can be grouped into two classes : data combination in the spatial frequency domain ( bajaja & van albada 1979 ; vogel et al .1984 ; roger et al . 1984 ; wilner & welch 1994 ; zhou , evans , & wang 1996 ) , and data combination in the image domain ( ye & turtle 1991 ; stewart et al . 1993 ; schwarz & wakker 1991 ; holdaway 1999 ) .each approach can be realized through a number of different methods .both approaches are very common and are becoming a standard data processing step . as the most common scheme of a heterogeneous array involves use of a large single - dish telescope, we proceed to consider this particular case further .before adding short - spacing data , it is necessary to be sure that both the interferometer and single - dish data sets have identical flux density scales . 
as calibration is never perfect , the calibration differences between the two sets of observations can be significant in some cases ( e.g. observations spread over a long period of time , different data quality , use of different flux density scales for calibration , quality of calibrators , etc . ) .this results in a small but appreciable difference in the measured flux densities .we define the calibration scaling factor , , as the ratio of the flux densities of an unresolved source in the single - dish and interferometer maps : in the case of perfect calibration , .however , otherwise , and needs to be determined very accurately .unfortunately , it is hard to find suitable compact sources to directly determine .hence , the best way to estimate is to compare the surface brightness of the observed object in the overlap region of the plane , see fugure [ f : uv - scheme ] .this region should correspond to angular sizes to which both telescopes are sensitive . fora source of brightness , , both the interferometer and single dish should measure within this region the same , , and calibration errors will appear as : for an extended source and are often , for convenience , expressed in units of jy beam not jy sr , and so will be different numbers because of the different beams considered ( with beam areas and , respectively ) . for this purpose ,an estimate of the resolution difference between the two data sets ( ) is also needed . to determine the following stepsare required : 1 .scale the single - dish data by to account for the difference in brightness caused _ only _ by different resolutions , 2 .fourier transform the interferometer and scaled single - dish images , 3 .deconvolve the single - dish data ( by dividing them by the fourier transform of the single - dish beam ) , and 4 .compare ` visibilities ' in the overlapping region of spatial frequencies .several important issues should be considered here : * when fourier transforming in step 2 watch for edge - effects ! to avoid nasty edge - effects in some cases apodizing of both interferometer and single - dish images may be required in order to make the image intensities smoothly decrease to zero near the edges .* step 3 requires a very good knowledge of the single - dish beam ! to make things even harder , the fwhm of the single - dish beam and the calibration scaling factor are highly coupled ( sault & killeen 1998 ) . therefore , an error in the single - dish beam model has the same effect in the overlapping region as an error in the flux density scale . if the single - dish beam is poorly known , will be a quadratic function of distance in the fourier plane , in the first approximation ( see stanimirovic 1999 ) . * a sufficient overlap in spatial frequencyis required for step 4 . assuming a gaussian - tapered illumination pattern for a single - dish , and considering a cut - off level of 0.2 for reliable data, we can estimate the minimum diameter , , of a single - dish necessary to provide all spacings shorter than for a given interferometer : in order to have a reasonable overlap of spatial frequencies so that can be derived , a slightly larger single - dish is required with .for example , for the atca shortest baseline of 31 m , the single - dish providing short spacings should have diameter of m. therefore , the 64 m parkes telescope can do a really great job . 
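as a sketch of step 4 above, the comparison in the overlap region can be done numerically as below: fourier transform both images, divide the single - dish transform by the transform of an assumed single - dish beam model, and take a robust ratio of amplitudes inside an annulus of overlapping spatial frequencies. the annulus limits, the beam model and the identical - gridding assumption are placeholders, and, as stressed above, the result is only as good as the single - dish beam model and the edge treatment.

```python
import numpy as np

# estimate the calibration scaling factor (single - dish over interferometer ,
# following the definition in the text) from the annulus of overlapping
# spatial frequencies .  both images are assumed to be gridded identically ;
# the annulus limits and single - dish beam model are placeholders .
def estimate_fcal(int_image, sd_image, sd_beam, uv_lo=0.03, uv_hi=0.06):
    Fi = np.fft.fft2(int_image)
    Fs = np.fft.fft2(sd_image)
    Fb = np.fft.fft2(np.fft.ifftshift(sd_beam))             # sd beam , centred , peak - normalised
    ok = np.abs(Fb) > 1e-3                                  # avoid dividing by ~0
    Fs_dec = np.where(ok, Fs/np.where(ok, Fb, 1.0), 0.0)    # deconvolved sd ` visibilities '
    ny, nx = int_image.shape
    u, v = np.fft.fftfreq(nx), np.fft.fftfreq(ny)
    R = np.hypot(*np.meshgrid(u, v))
    ring = (R > uv_lo) & (R < uv_hi) & ok
    return np.median(np.abs(Fs_dec[ring])/(np.abs(Fi[ring]) + 1e-12))
```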
also , while the 100-m green bank telescope will be able to provide short - spacing data for the vla c and d arrays ( with m ) , only arecibo could do so for the vla b array ( with m ) .as shown by bajaja & van albada ( 1979 ) , the true missing short - spacing visibilities can be provided from the function in equation [ e : sd2 ] , if the single - dish is large enough to cover the whole central gap in the interferometer coverage .the deconvolution of the single - dish data gives the true single - dish visibilities , where , by : function can be then substituted in equation [ e : int1 ] , after rescaling by , everywhere in the plane where equation [ e : sd3 ] holds .this would provide the resultant coverage having the inner visibilities from the single - dish data only ( rescaled to match the interferometer flux - density scale due to the calibration differences ) , and the outer visibilities from just the interferometer data .this is effectively feathering or padding the interferometer visibilities with the single - dish data .depending on the type of input images used , and/or the type of weighting applied within the region of overlapping spatial frequencies , there are several applications of this technique ( roger et al .1984 ; vogel et al . 1984 ; wilner & welch 1994 ; zhou et al . 1996 ) .however , in all cases , for a good data combination the single - dish data set should fulfill the following two conditions . * a sufficiently fine sampling of the single - dish data at the nyquist rate ( 2.4 pixels across the beamwidth )is required to avoid aliasing during deconvolution ( vogel et al .the single - dish data must also have the same coordinate system as the interferometer data .therefore , it is sometimes necessary to re - grid single - dish images . *visibilities derived from the single - dish data should have a signal - to - noise ratio comparable to those of the interferometer in the overlapping region in order not to degrade the combined map ( vogel et al . 1984 ) .this linear data combination in the spatial frequency domain is very widely used and is implemented in several packages for radio data reduction , e.g. task imerg in aips , image tool in aips++ and task immerge in miriad . 
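a minimal sketch of this substitution is given below: the single - dish image is fourier transformed, divided by the transform of an assumed single - dish beam, rescaled to the interferometer flux - density scale, and written into the central region of the gridded interferometer visibilities. the replacement radius, the beam model and the scaling factor are placeholders, and a hard cut is used here where production tools (such as immerge, discussed next) apply smooth tapers.

```python
import numpy as np

# write deconvolved , rescaled single - dish visibilities into the central hole
# of the gridded interferometer visibilities .  sd_scale is whatever factor
# puts the single - dish data on the interferometer flux - density scale ;
# u_replace and the beam handling are placeholders .
def combine_uv(int_vis, sd_image, sd_beam, sd_scale=1.0, u_replace=0.05):
    Fs = np.fft.fft2(sd_image)
    Fb = np.fft.fft2(np.fft.ifftshift(sd_beam))
    ok = np.abs(Fb) > 1e-3
    sd_vis = np.where(ok, sd_scale*Fs/np.where(ok, Fb, 1.0), 0.0)
    ny, nx = sd_image.shape
    u, v = np.fft.fftfreq(nx), np.fft.fftfreq(ny)
    R = np.hypot(*np.meshgrid(u, v))
    out = int_vis.copy()
    inner = R < u_replace
    out[inner] = sd_vis[inner]
    return out
```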
as an example we will discuss immerge here .immerge takes as input a clean ( deconvolved ) high - resolution image , and a non - deconvolved , low - resolution image .these images are fourier transformed ( labeled as and ) and combined in the fourier domain applying tapering functions , and , such that their sum is equal to the gaussian function having a fwhm of the interferometer , : function is a gaussian with the fwhm of the single dish .the low - resolution visibilities are multiplied by to account for the calibration differences .the final resolution is that of the interferometer image .the tapering functions and are shown in fugure [ f : immerge - beam ] , together with the tapering function of the merged data set , for the case of the atca interferometer and the parkes single - dish telescope .immerge can also estimate the calibration scaling factor , , by comparing single - dish and interferometer data in the region of overlapping spatial fequencies specified by the user .as an example of how immerge works in practice , fugure [ f : fourier_comb ] shows a sequence of images at various stages of data processing : before merging , after fourier transforming , and the final version .there are two distinct methods for data combination in the image domain .the first , the ` linear combination ' method ( stanimirovic et al . 1999 ) , merges data sets in a simple linear fashion before final deconvolution , while the second ( sault & killeen 1998 ) , the non - linear method , combines all data during the deconvolution process . the theoretical basis for merging before deconvolution is the linear property of the fourier transform : a fourier transform of a sum of two functions is equal to the sum of the fourier transforms of the individual functions .therefore , instead of adding two maps in the fourier domain and fourier transforming the combined map to the image domain , one can produce the same effect ( fill in missing short - spacings in an interferometer coverage ) by adding maps in the image domain .this method was first applied by ye & turtle ( 1991 ) , and stewart et al .( 1993 ) . as we have seen from equations [ e : int ] and [ e : sd1 ] ,the interferometer and single - dish data obey the convolution relationship .the dirty images and beams can be combined to form a composite dirty image ( ) and a composite beam ( ) with the following weighting : where is an estimate of the resolution difference between the two data sets .the convolution relationship , , still exists between the composite dirty image , , and the true sky brightness distribution , .deconvolving the composite dirty image with the composite beam hence solves for .there is no single existing program ( task ) that employs this method , but a linear combination of maps can be easily obtained in any package for radio data reduction , followed by a favorite choice of a deconvolution algorithm . as an example , fugure [ f : image_comb ] shows the ` linear combination ' method applied on the atca mosaic and parkes telescope hi observations of the smc at several stages of data processing . note that in the case of mosaic observations represents a whole cube of beams , one for each pointing in the mosaic .the combined dirty image was deconvolved using miriad s maximum entropy algorithm ( sault , staveley - smith , & brouw 1996 ) .the model was finally restored with a 98-arcsec gaussian function . besides the missing information in the center of the plane , an interferometer coverage suffers from spatial - frequency gaps . 
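a compact sketch of the ` linear combination ' step is given below. the exact weighting in the article's equation is not reproduced in this excerpt, so the weight w is a generic placeholder related to the calibration factor and the ratio of beam areas; the composite dirty image and composite beam are then passed jointly to a deconvolution algorithm, as described next.

```python
import numpy as np

# composite dirty image and composite beam for the ` linear combination '
# route .  the weight w is a generic placeholder (the article 's weighting ,
# involving the calibration factor and beam - area ratio , is not reproduced
# here) ; the returned pair is deconvolved jointly .
def composite(dirty_int, beam_int, image_sd, beam_sd, sd_scale=1.0, w=0.1):
    comp_image = (dirty_int + w*sd_scale*image_sd)/(1.0 + w)
    comp_beam = (beam_int + w*beam_sd)/(1.0 + w)
    return comp_image, comp_beam
```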
since the missing information can be introduced in an infinite number of ways , the convolution equation ( ) has a non - unique solution .hence , the deconvolution has the task of selecting the ` best ' image from all those possible ( cornwell 1988 ) .since deconvolution has to estimate missing information , a non - linear algorithm must be employed ( cornwell 1988 ; sault et al .cornwell ( 1988 ) and sault et al .( 1996 ) showed that the deconvolution algorithms implementing the so called ` joint ' deconvolution scheme , whereby observations of many pointing centers are combined prior to deconvolution , produce superior results in the case of mosaicing , since more information is fed to the deconvolver .we expect that the same argument might apply for the addition of the single - dish data , resulting in the merging before and during deconvolution being more successful than the merging of clean images in the spatial frequency domain .the maximum entropy method ( mem ) is one of the non - linear deconvolution algorithms .it selects the deconvolution solution so it fits the data and , at the same time , has a maximum ` entropy ' .cornwell ( 1988 ) explains this entropy as something which when maximised produces a positive image with a compressed range in pixel values .the compression criterion forces the final solution ( image ) to be smooth , while the positivity criterion forces interpolation of unmeasured fourier components .one of the commonly used definitions of entropy is : where is the brightness of pixel of the mem image and is the brightness of pixel of a ` default ' image incorporated to allow _ a priori _ knowledge to be used ( is the base of the natural logarithms ) . the requirement that the final image fits the data is usually incorporated in a constraint such that the fit of the predicted visibilities to those observed ( cornwell 1988 ) is close to the expected value : with being the number of independent pixels in the map and being the noise variance of the interferometer data .the single - dish data can be incorporated during the maximum - entropy deconvolution process in two ways . 1 .* the ` default ' image * + the easiest way is to use the single - dish data as a ` default ' image in equation [ e : mem ] since , in the absence of any other information or constraints , this forces the deconvolved image to resemble the single - dish image in the spatial frequency domain where the interferometer data contribute no information .since this method puts more weight on the interferometer data wherever it exists , the size of the overlapping region plays a very important role ( holdaway 1999 ) .as large an overlap of spatial frequencies as possible is required to provide good quality interferometer and single - dish data within this region , in order to retain the same sensitivity over the image .* the ` joint ' deconvolution * + the second way maximizes the entropy while being subject to the constraints of fitting both data sets simultaneously : the ` joint ' deconvolution method provides also an alternative way , completely performed in the image domain , for determining the calibration scaling factor . maximizing the entropy , while fitting both data sets , a ` joint ' deconvolution algorithm can iteratively solve for and simultaneously . 
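to make the quantities above concrete, the following is a bare steepest - ascent sketch that increases the entropy h while penalising the chi - square misfit of the model (convolved with the dirty beam) to the dirty image, with a positive single - dish map playing the role of the default image. this is only a schematic illustration, not the mem solver implemented in miriad or aips++; alpha, the step size and the iteration count are arbitrary.

```python
import numpy as np

# schematic maximum - entropy update : steepest ascent on the entropy
# h(m) = -sum m ln(m/(e * default)) with a chi - square penalty on the fit of
# m (convolved with the dirty beam) to the dirty image .  not a production
# mem solver ; alpha , step and n_iter are arbitrary .
def mem_sketch(dirty, dirty_beam, default, sigma, alpha=1.0,
               step=1e-3, n_iter=200):
    B = np.fft.fft2(np.fft.ifftshift(dirty_beam))           # beam assumed centred
    m = default.copy()
    for _ in range(n_iter):
        resid = dirty - np.fft.ifft2(np.fft.fft2(m)*B).real
        grad_chi2 = -2.0*np.fft.ifft2(np.fft.fft2(resid)*np.conj(B)).real/sigma**2
        grad_h = -np.log(np.maximum(m, 1e-12)/default)
        m = np.clip(m + step*(grad_h - alpha*grad_chi2), 1e-12, None)
    return m
```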
as a schematic example, fugure [ f : mem ] shows the non - linear approach for data combination performed by miriad s program mosmem .the qualitative comparison of all four methods for the short - spacings correction addressed in sections 5 and 6 , is shown in fugure [ f:4methods ] for the 169 velocity channel of the smc data .all four images have the same grey - scale range ( to 107 k ) and are remarkably similar .they all have the same resolution and show the same small and large scale features with only slightly different flux - density scales .no signs of interferometric artifacts are visible on any of the images .this demonstrates that all four methods for the short - spacings correction give satisfactory results in the first approximation , when an _ a priori _ determined calibration scaling factor is used .a similar conclusion was reached in wong & blitz ( 2000 ) where results of data combination using immerge and the ` linear combination ' method ( section [ s : linear ] ) were compared for the case of bima and kitt peak 12-m telescope co observations . .1trueinccccc method & total flux & min & max & noise + & ( jy ) & ( jy beam ) & ( jy beam ) & ( mjy beam ) + a & 5600 & & 1.97 & 30 + b & 6500 & & 2.16 & 32 + c & 6300 & & 2.00 & 28 + d & 5900 & & 2.00 & 29 + the quantification of the quality of an image depends on the scientific questions we want to address ( cornwell et al .1993 ) and is therefore case specific .something that any short - spacings correction must fulfill , though , is that the resolution of the final image should be the same as for the interferometer data alone , while the integrated flux density of the final image should be the same as measured from the single - dish data alone .table [ t : comparison ] shows measurements of the total flux density , minimum / maximum values and noise level in the four resultant images .all four images have very comparable noise levels , minimum values and maximum values .the last two images also have very comparable ( within 3% ) total flux density , relative to the parkes value alone , while the first two have lower ( by 8% ) and higher ( by 7% ) flux densities , respectively .the differences come , most likely , from the different weighting of the single - dish data employed by the different methods . while immerge slightly over - weights very short interferometric spatial frequencies , the ` linear combination ' method slightly over - weights single - dish data in the region of overlap .this results in the total flux density being slightly lower in the first case , and slightly higher in the second case .a few ( general ) remarks on all four methods : * the ` feathering ' or fourier method ( a ) is the fastest and the least computer intensive way to add short - spacings .it is also the most robust way relative to the other three methods which all require a non - linear deconvolution at the end . *the great advantage of the ` linear combination ' method is that it does not require either fourier transformation of the single - dish data , which can suffer severely from edge effects , nor deconvolution of the single - dish data which is especially uncertain and leads to amplification of errors . *the ` default ' image method shows surprisingly reliable results when a significantly large single - dish is used . 
* adding during deconvolution when fitting both data sets simultaneously provides , theoretically , the best way to do the short - spacing correction .however , this method depends heavily on a good estimate of the interferometer and single - dish noise variances .in this article the need for , and methods of combining interferometer and single - dish data have been explained and demonstrated .this combination is an important step when mapping extended objects and it is becoming a standard observing and data processing technique .after a brief introduction to interferometry in section 2 the short - spacings problem and general approaches for its solution were discussed in section 3 . to _ fully and accurately _ fill in the missing short - spacings to interferometric data , a heterogeneous array scheme seems to be preferable .a sufficiently large single - dish , with diameter at least 1.5 greater larger than the shortest interferometer baseline , provides the simplest option . in order to cross - calibrate single - dish and interferometer data sets , a significant overlap of spatial frequenciesis required .four different combination methods , two linear and two non - linear , for the short - spacings correction have been discussed and the results of applying these methods to the case of hi observations of the smc presented .linear methods are data combination in the spatial frequency domain and the ` linear combination ' method , while data combination during deconvolution provides two non - linear methods .all four techniques yield satisfactory and comparable results .many thanks to darrel emerson , chris salter and john dickey for reading the article and providing valuable suggestions for improvement .i am also grateful to matthew wyndham for his help with figures , as well as assisting with last minute crises in organizing the meeting .cornwell , t. j. , holdaway , m. a. , uson , j. m. 1993 , a&a , 271 , 697 cornwell , t. , & braun , r. 1989 , in asp conf .vol . 6 , synthesis imaging in radio astronomy , ed .r. perley , f. schwab , & a. bridle ( san francisco : asp ) , 167
while , in general , interferometers provide high spatial resolution for imaging small - scale structure ( corresponding to high spatial frequencies in the fourier plane ) , single - dishes can be used to image the largest spatial scales ( corresponding to the lowest spatial frequencies ) , including the total power ( corresponding to zero spatial frequency ) . for many astrophysical studies , it is essential to bring ` both worlds ' together by combining information over a wide range of spatial frequencies . this article demonstrates the effects of missing short - spacings , and discusses two main issues : ( a ) how to provide missing short - spacings to interferometric data , and ( b ) how to combine short - spacing single - dish data with those from an interferometer .