of the steps reconstructing events in the cms detector , tracking is the most computationally complex , most sensitive to activity in the detector , and least amenable to parallelization .the speed of online reconstruction has a direct impact on how effectively interesting data can be selected from the 40 mhz collisions rate , while the speed of the offline reconstruction limits how much data can be processed for physics analyses .the large time spent in tracking will become even more important in the hl - lhc era of the large hadron collider .the increase in event rate will increase the detector occupancy ( `` pile - up '' , pu ) , leading to the exponential gain in time taken to perform track reconstruction illustrated in figure [ fig : eff_tracking_pileup] . at the same time , due to power density and physical scaling limits , the performance of single cpus has slowed , with advances in performance increasingly relying on multi - core or highly parallel architectures . in order to sustain the higher hl - lhc processing requirements without compromising physics performance and timeliness of results , the lhc experiments must make full use of highly parallel architectures . as a potential solution, we are investigating adapting the existing cms track finding algorithms and data structures to make efficient use of highly parallel architectures , such as intel s xeon phi and nvidia gpus . in this paperwe provide an update to results most recently reported at chep2015 and connecting the dots 2016 , including our first results using intel threaded building blocks ( tbb ) instead of openmp for multi - threading support .our targets for parallel processing are track reconstruction and fitting algorithms based on the kalman filter ( kf ) .kf - based tracking algorithms are widely used to incorporate estimates of multiple scattering directly into the trajectory of the particle .other algorithms , such as hough transforms and cellular automata adapted from image processing applications , are more naturally parallelized .however , these are not the main algorithms in use at the lhc today .the lhc experiments have an extensive understanding of the physics performance of kf algorithms ; they have proven to be robust and perform well in the difficult experimental environment of the lhc .kf tracking proceeds in three main stages : seeding , building , and fitting .seeding provides the initial estimate of the track parameters based on a few hits in the innermost regions of the detector .realistic seeding is currently under development and will not be reported here .track building projects the track candidate outwards to collects additional hits , using the kf to estimate which hits represent the most likely continuation of the track candidate .track building is by far the most time consuming step of tracking , as it requires branching to explore multiple candidate tracks per seed after finding compatible hits on a given layer . when a complete track has been reconstructed , a final fit using the kf is performed to provide the best estimate of the track s parameters .to take full advantage of parallel architectures , we need to exploit two types of parallelism : vectorization and parallelization .vector operations perform a single instruction on multiple data at the same time , in lockstep . 
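To make the two kinds of parallelism concrete before they are discussed further below, the sketch that follows applies the same toy arithmetic to a fixed-width batch of track candidates in an inner loop a compiler can vectorize, and distributes batches over threads with a TBB parallel_for. It is an illustration only, not the CMS tracking code: the batch width, the placeholder propagation arithmetic and all names (CandidateBatch, propagate_batch, propagate_all) are hypothetical.

```cpp
// Minimal illustration of the two levels of parallelism discussed above.
// NOT the actual CMS tracking code: the batch width, the toy "propagation"
// arithmetic and all names here are hypothetical.
#include <tbb/parallel_for.h>
#include <tbb/blocked_range.h>
#include <vector>
#include <cstddef>

constexpr int kBatch = 16;          // one SIMD-friendly batch of candidates

struct CandidateBatch {             // structure-of-arrays: one entry per lane
  float r[kBatch];                  // toy track parameter (e.g. radius)
  float pt[kBatch];                 // toy transverse momentum
};

// Inner loop: the same instruction is applied to every lane with no branching,
// so the compiler is free to emit SIMD instructions ("vectorization").
inline void propagate_batch(CandidateBatch& b, float dr) {
  for (int i = 0; i < kBatch; ++i)
    b.r[i] += dr * (1.0f + 0.01f * b.pt[i]);   // placeholder arithmetic
}

// Outer loop: different batches are handled by different threads
// ("multi-thread parallelization"), scheduled dynamically by TBB.
void propagate_all(std::vector<CandidateBatch>& batches, float dr) {
  tbb::parallel_for(tbb::blocked_range<std::size_t>(0, batches.size()),
                    [&](const tbb::blocked_range<std::size_t>& range) {
                      for (std::size_t b = range.begin(); b != range.end(); ++b)
                        propagate_batch(batches[b], dr);
                    });
}
```

The work-stealing scheduler behind parallel_for is also what provides the dynamic workload balancing discussed later for track building.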
in tracking , branching to explore multiple candidates per seed can interfere with lock - step single - instruction synchronization .multi - thread parallelization aims to perform different instructions at the same time on different data .the challenge to tracking is workload balancing across different threads , as track occupancy in a detector is not uniformly distributed on a per event basis .past work by our group has shown progress in porting sub - stages of kf tracking to support parallelism in simplified detectors ( see , e.g. our presentations at acat2014 and chep2015 ) .as the hit collection is completely determined after track building , track fitting can repeatedly apply the kf algorithm without branching , making this a simpler case for both vectorization and parallelization ; first results in porting kf tracking to xeon and xeon phi were shown at acat2014 .the implementation of a kf - based tracking algorithm consists of a sequence of operations on matrices of sizes from up to . in order to optimize efficient vector operations on small matrices , and to decouple the computational details from the high level algorithm ,we have developed a new matrix library .the _ matriplex _memory layout uses a matrix - major representation optimized for loading vector registers for simd operations on small matrices , using the native vector - unit width on processors with small vector units or very large vector widths on gpus .matriplex includes a code generator for defining optimized matrix operations , with support for symmetric matrices and on - the - fly matrix transposition .patterns of elements that are known by construction to be zero or one can be specified , and the resulting code will be optimized to eliminate unnecessary register loads and arithmetic operations .the generated code can be either standard c++ or macros that map to architecture - specific intrinsic functions . in order to study parallelization with minimal distractions, we developed a standalone kf - based tracking code using a simplified ideal barrel geometry with a uniform longitudinal magnetic field , gaussian - smeared hit positions , a particle gun simulation with flat transverse momentum distribution between 0.5 and 10 gev , no material interaction , and no correlation between particles nor decays .this simplified configuration was used to study vector and parallel performance and to study the performance of different choices of coordinate system and handling of kf - updates .these studies led to the use of a hybrid coordinate system , using global cartesian coordinates for spatial positions and polar coordinates for the momentum vector .recently we have begun work on using a more realistic cms detector geometry .event data from the full cms simulation suite is translated into a simplified form that can be processed by our standalone tracking code . 
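The matrix-major layout at the heart of Matriplex can be pictured as a structure-of-arrays over a batch of N small matrices: the (i,j) elements of all N matrices sit next to each other in memory, so one vector load fills a register with the same element from N different tracks. The snippet below is only a sketch of that idea with a plain multiply kernel; the template name, indexing helper and loop structure are ours, and the real library's code generation, symmetric-matrix support, on-the-fly transposition and intrinsics are not shown.

```cpp
// Sketch of a "matrix-major" (structure-of-arrays) layout for N small
// matrices, in the spirit of Matriplex. Illustration only -- the real
// library is more general (code generation, symmetric storage, intrinsics).
template <typename T, int D1, int D2, int N>
struct MPlexSketch {
  // fArray[(i*D2 + j)*N + n] holds element (i,j) of matrix n:
  // for fixed (i,j) the N copies are contiguous, ideal for SIMD loads.
  alignas(64) T fArray[D1 * D2 * N];

  T&       at(int i, int j, int n)       { return fArray[(i * D2 + j) * N + n]; }
  const T& at(int i, int j, int n) const { return fArray[(i * D2 + j) * N + n]; }
};

// C = A * B for all N matrices at once. The innermost loop runs over the
// N independent matrices, so every iteration executes the same arithmetic
// on adjacent memory -- exactly the lockstep pattern vector units want.
template <typename T, int D, int N>
void multiply(const MPlexSketch<T, D, D, N>& A,
              const MPlexSketch<T, D, D, N>& B,
              MPlexSketch<T, D, D, N>& C) {
  for (int i = 0; i < D; ++i)
    for (int j = 0; j < D; ++j) {
      for (int n = 0; n < N; ++n) C.at(i, j, n) = T(0);
      for (int k = 0; k < D; ++k)
        for (int n = 0; n < N; ++n)            // contiguous in n: vectorizes
          C.at(i, j, n) += A.at(i, k, n) * B.at(k, j, n);
    }
}
```

By contrast, a conventional array-of-matrices layout strides the (i,j) element of consecutive tracks by the full matrix size, turning vector loads into gathers.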
for track building ,track propagation is performed in two steps , an initial propagation to the average radius of the barrel or average z of the endcap disk , followed by a second propagation to the exact hit position for each candidate hit .kf updates are performed on the plane tangent to the hit radius .these choices make it possible to use a simplified form of the cms geometry , avoiding the complexities of the full detector description .achieving acceptable vector and multi - thread parallel performance requires careful attention to detail .regular profiling with intel vtune amplifier and attention to the compiler optimization reports helped identify many obstacles , some relatively subtle .we found that references to unaligned locations in aligned data structures may force the compiler into unaligned accesses , reducing vector performance ; prefetching , scatter / gather instructions and other intrinsics need to be used in an organized , systematic fashion ; unwanted conversions from float to double can reduce effective vector width ; variables should be declared in the smallest scope possible , and use `` const '' wherever possible . for multi - thread parallelism ,the two most critical issues we found were memory management and workload balancing . to avoid memory stalls and cache conflicts , we reduced our data structures to the minimum necessary for the algorithm , optimized our data structures for efficient vector operations , and minimized object instantiations and dynamic memory allocations .workload balancing for track building is complicated by the uncertain distribution of track candidates and hits , which can result in large tail effects for a naive static partitioning scheme .we found that using intel threaded building blocks tasks , with smaller units of work and dynamic `` work - stealing '' , let us naturally express thread - parallelism in a way that allowed more dynamic allocation of resources and reduced the tail effects from uneven workloads .figure [ fig : tbb - vtune ] illustrates the difference in tail effects between our initial openmp implementation with static workload partitioning compared to dynamic partitioning using tbb .the primary platforms discussed are a xeon phi 7120p `` knights corner '' ( knc ) processor and xeon e5 - 2620 `` sandy bridge '' ( snb ) system , with brief discussion of preliminary results on a xeon phi 72xx `` knights landing '' ( knl ) processor and the nvidia tesla k20/k40 and pascal p100 .scaling on the more traditional xeon architecture is in most cases near optimal , so the discussion primarily focus on the more highly parallel architectures .vector and multi - thread absolute performance and scaling improved mostly from our chep2015 results , with vectorization scaling showing the most improvement , shown in figure [ fig : fit - compare ] .this result is somewhat deceptive , as there are significantly more operations performed in the latest results due to the change of momentum to polar coordinates ( from global cartesian coordinates ) .the change of coordinate system results in significantly more complicated off - diagonal terms in the propagation matrix .the cost of these additional calculations was nearly exactly canceled by improvements to the memory and cache usage , and better vectorization .on knc , we achieve a vector speedup of nearly eight times , approximately half the theoretical maximum with a vector width of 16 .multi - thread scaling is near ideal up to 61 threads , reaching nearly 100x speedup with 200 threads 
.the knc processor used has 61 physical cores , but must alternate between hyper - threads to fill all the available instruction slots , so ideal scaling would be a speedup by a factor of 122 .the `` knee '' in the scaling curve indicates that with two threads per core we are not achieving full utilization , possibly due to remaining memory bandwidth or cache effects .the change in coordinate system and other improvements significantly improved kf update stability , resulting in the much improved track parameter resolution and hit finding efficiency , shown in figure [ fig : physics - perf ] .the absolute time for single - thread track building was reduced by 90% due to the performance tuning discussed in section [ sec : tuning ] .however , effective vectorization of the track building remains a challenge , as shown in figure [ fig : build - vector ] .the combinatorial nature of the track building algorithm , which examines and adds a variable number of hit candidates in each layer , results in many branching operations that impede vectorization , as well as adding frequent repacking operations to keep the full vector width utilized .the larger and more complicated data structures used for selecting hit candidates also results in poorer data locality and higher bandwidth requirements .we are continuing to investigate possible strategies for improving the track building vector performance .compared to our previous results , as shown in figure [ fig : build - parallel ] , multi - thread track scaling has improved significantly , scaling within 15% of ideal up to 61 threads and reaching approximately 80x speedup with 200 threads .this result is very similar to the multi - thread scaling of the much simpler track fitting task .combining the improvements to the single - thread track finding efficiency and multi - thread scaling , track building with 200 threads is nearly 100 times faster than our chep2015 result .these improvements illustrate the results of careful performance tuning combined with the more flexible workload balancing enabled by the switch to tbb for multi - threading .preliminary results for track building using the simplified cms geometry discussed in section [ sec : scenarios ] are shown in figure [ fig : cms - scaling ] .these results show better vector scaling , which we believe is due to the two - step propagation , first to the average radius and then to the hit radius .the propagation routine vectorizes well , so the additional computation results in more time spent in routines with good vector register utilization .multi - thread scaling is significantly worse than our `` toy '' setup . 
For this test we use seeds from the first step of the CMS iterative tracker, which yields around 500 seed tracks per event, compared to 20,000 Monte Carlo "truth" seeds in our toy tests. Repeating our tests with the idealized scenario with 500 tracks per event shows scaling similar to the CMS geometry tests. This is not surprising, as with only one event being processed at a time we need many more seeds than that to fill the vector registers of 200 threads. We are currently working on processing multiple events at a time in a way that allows us to use seeds from multiple events in the same Matriplex, so that we can fully use the vector registers. We have run very preliminary, untuned tests of our track building on a KNL system, with the results shown in figure [fig:knl-perf]. On KNL we see better vectorization than on KNC, and similar multi-thread scaling. We find it encouraging that our track-building algorithms show decent scaling performance on SNB, KNC and KNL without significant platform-specific tuning beyond simply matching the platform's vector register width.

We have also implemented the track-fitting algorithms and a simpler (non-combinatorial) version of the track building on NVIDIA GPUs using the CUDA toolkit. The GPU version uses a templated GPlex structure that matches the interfaces and layout of the Matriplex class, allowing substantial code sharing of the KF routines, while the higher-level "steering" routines are somewhat different due to the different setup and memory-management requirements of the different platforms. The GPU implementations show good scaling of the KF routines, but the total time tends to be dominated by the setup time spent copying data structures to the GPU. Preliminary tests with a Pascal P100 GPU, shown in figure [fig:gpu-perf], show better scaling with simpler memory management.

We have made significant progress in parallelized and vectorized Kalman filter-based end-to-end tracking R&D on Xeon and Xeon Phi architectures, with some initial work on GPUs. Through the use of a variety of tools we have developed a good understanding of the bottlenecks and limitations of our implementation, which has led to further improvements. Through the use of our own Matriplex package and Intel Threaded Building Blocks (TBB), we can achieve good utilization of unconventional highly parallel vector architectures. We are currently focusing on processing fully realistic data, with encouraging preliminary results.

This work is supported by the U.S. National Science Foundation, under grants PHY-1520969, PHY-1521042, PHY-1520942 and PHY-1120138, and by the U.S. Department of Energy.

References:
Cerati G 2015 PoS VERTEX2014 037
The CMS Collaboration 2014 JINST 9 P10009
Cerati G et al. 2015 J. Phys.: Conf. Series 664 072008
Cerati G et al. 2016 EPJ Web Conf. 127 00010
Fruehwirth R 1987 Nucl. Instrum. Meth. A 262 444-450
Halyo V, LeGresley P, Lujan P, Karpusenko V and Vladimirov A 2014 JINST 9 P04005
Cerati G et al. 2014 J. Phys.: Conf. Series 608 012057
|
Limits on power dissipation have pushed CPUs to grow in parallel processing capabilities rather than clock rate, leading to the rise of "manycore" or GPU-like processors. In order to achieve the best performance, applications must be able to take full advantage of vector units across multiple cores, or some analogous arrangement on an accelerator card. Such parallel performance is becoming a critical requirement for methods to reconstruct the tracks of charged particles at the Large Hadron Collider and, in the future, at the High Luminosity LHC. This is because the steady increase in luminosity is causing an exponential growth in the overall event reconstruction time, and tracking is by far the most demanding task for both online and offline processing. Many past and present collider experiments adopted Kalman filter-based algorithms for tracking because of their robustness and their excellent physics performance, especially for solid-state detectors where material interactions play a significant role. We report on the progress of our studies towards a Kalman filter track reconstruction algorithm with optimal performance on manycore architectures. The combinatorial structure of these algorithms is not immediately compatible with an efficient SIMD (or SIMT) implementation; the challenge for us is to recast the existing software so it can readily generate hundreds of shared-memory threads that exploit the underlying instruction set of modern processors. We show how the data and associated tasks can be organized in a way that is conducive to both multithreading and vectorization. We demonstrate very good performance on Intel Xeon and Xeon Phi architectures, as well as promising first results on NVIDIA GPUs.
|
the rapid proliferation of smart mobile devices has triggered an unprecedented growth of the global mobile data traffic .recently , caching at base stations ( bss ) has been proposed as a promising approach for massive content delivery by reducing the distance between popular files and their requesters . as the cache size is limited in general , designing caching strategiesappropriately is a prerequisite for efficient content dissemination . in ,the authors consider identical caching at bss ( i.e. , all bss store the same set of the most popular files ) , and analyze the outage probability and average rate . in ,the authors consider random caching with files being stored at each bs in an i.i.d . manner . in ,the authors consider random caching and multicasting on the basis of file combinations , and analyze and optimize the joint design .note that the identical caching design in can not provide file diversity at different bss , and hence may not sufficiently exploit storage resources .the random caching designs in can provide file diversity .however , a file transmission may not fully benefit from the file diversity provided by the random caching designs in , when the serving bs is not the nearest bs of the file requester .note that the caching designs proposed in focus on storing entire files at each bs . on the other hand , in ,to further improve file diversity , files are partitioned into multiple subfiles , and each bs may store a uncoded or coded subfile of a file .references consider caching coded subfiles .for instance , in and , network coding - based caching designs are proposed and analyzed . in ,the authors propose a mds - based caching design , and consider the analysis and optimization of the backhaul rate . in ,the authors propose a partition - based uncoded caching design and employ successive interference cancelation ( sic ) at each user to enable parallel subfile transmissions .note that the network coding - based caching designs proposed in are restricted to a single file and can not be directly applied to the practical scenario with multiple files .in addition , the coded caching designs in do not consider the delivery of the cached files .hence , it is unclear how these designs affect ultimate user experiences .reference considers the delivery of the cached files .but the partition - based uncoded caching design in may not sufficiently exploit storage resources . in summary ,further studies are required to understand the fundamental impacts of communication , caching and computation ( e.g. , sic ) capabilities on network performance . in this paper, we would like to address the above issues .we consider a reasonable cache - enabled wireless network model with multiple files and random channel fading as well as stochastic geographic locations of bss .we propose a random liner network coding - based caching design with a design parameter and adopt sic to decode multiple coded subfiles for recovering a requested file . utilizing tools from stochastic geometry , we derive a tractable expression for the successful transmission probability in the general file size regime . to further obtain design insights , we derive closed - form expressions for the successful transmission probability in the small and large file size regimes , respectively , utilizing series expansion of some special functions .then , we consider the successful transmission probability maximization in the general file size regime , which is a complex discrete optimization problem . 
by exploring structural properties ,we propose a two - stage optimization framework to obtain a near optimal solution with superior performance and manageable complexity .we also obtain closed - form asymptotically optimal solutions in the small and large file size regimes , respectively .the analysis and optimization results reveal that the network coding - based caching significantly facilitates content dissemination when the file size is small or moderate , while caching the most popular files greatly helps content dissemination when the file size is large .in addition , in the small file size regime , the optimal successful transmission probability increases with the cache size and the sic capability . while , in the large file size regime , the optimal successful transmission probability increases with the cache size and is not affected by the sic capability .finally , by numerical results , we show that the proposed near optimal caching design achieves a significant performance gain over some baseline caching designs .we consider a large - scale wireless network , as shown in fig .[ fig : system ] . the locations of the base stations ( bss )are spatially distributed as a two - dimensional homogeneous poisson point process ( ppp ) with density .we focus on a typical user , which we assume without loss of generality ( w.l.o.g . ) to be located at the origin .we enumerate the bss with respect to their distances to starting with the bs nearest to .the indices of the bss from the nearest one to the farthest one are denoted as and .let denote the distance between bs and .thus , we have .we consider the downlink transmission .each bs has one transmit antenna and transmits with power over bandwidth .each user has one receive antenna .consider a discrete - time system with time being slotted .the duration of each time slot is seconds .we study one slot of the network .we consider both path loss and small - scale fading .specifically , due to path loss , transmitted signals with distance are attenuated by a factor , where is the path loss exponent . for small - scale fading , we assume rayleigh fading , i.e. , each small - scale channel .let denote the set of files in the network .for ease of illustration , we assume that all files have the same size of bits .each file is of certain popularity . requests one file , which is file with probability , where .thus , the file popularity distribution is given by , which is assumed to be known apriori .in addition , w.l.o.g . , we assume .the network consists of cache - enabled bss .in particular , each bs is equipped with a cache of size ( in files ) , i.e. , ( in bits ) .assume each bs can not store all files in due to the limited storage capacity , i.e. , .we propose a random linear network coding - based caching design parameterized by , where here , denotes the set of positive integers .we now interpret the design parameter .consider file .( i ) if , file is not stored at any bs .( ii ) if , file is stored at each bs .( iii ) if , file is partitioned into subfiles , each of bits , and each bs stores a random linear combination of all the subfiles of file , i.e. , a coded subfile of file which is of bits , using random linear network coding . represents the amount of storage ( in files ) allocated to file at each bs .we consider random linear network coding over a large field , and assume that file can be decoded from any coded subfiles of file stored in the network . 
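To make the coding step concrete, the sketch below splits a file into s subfiles and forms one random linear combination per BS over GF(2^8). It is an illustration of the general idea rather than the design analyzed here, which only assumes a field large enough that any s distinct coded subfiles are decodable with certainty; the field choice, reduction polynomial and all names are ours.

```cpp
// Sketch of the random linear network coding step described above:
// a file is split into s subfiles and each BS stores one random linear
// combination of them over GF(2^8). Illustration only -- the paper just
// assumes a "large field"; field choice, RNG and names are ours.
#include <cstdint>
#include <cstddef>
#include <vector>
#include <random>

// Multiplication in GF(2^8) with reduction polynomial x^8+x^4+x^3+x^2+1.
static uint8_t gf_mul(uint8_t a, uint8_t b) {
  uint8_t p = 0;
  while (b) {
    if (b & 1) p ^= a;
    bool carry = a & 0x80;
    a <<= 1;
    if (carry) a ^= 0x1D;
    b >>= 1;
  }
  return p;
}

// Split `file` into s equal subfiles and return one coded subfile
// (a random linear combination). A BS would store this together with the
// s coefficients, so that any s distinct coded subfiles suffice (with high
// probability over GF(2^8)) to recover the file by Gaussian elimination.
std::vector<uint8_t> encode_subfile(const std::vector<uint8_t>& file, int s,
                                    std::mt19937& rng) {
  const std::size_t len = file.size() / s;        // assume s divides the size
  std::uniform_int_distribution<int> coeff(1, 255);
  std::vector<uint8_t> coded(len, 0);
  for (int j = 0; j < s; ++j) {
    const uint8_t c = static_cast<uint8_t>(coeff(rng));
    for (std::size_t i = 0; i < len; ++i)
      coded[i] ^= gf_mul(c, file[j * len + i]);   // addition in GF(2^8) is XOR
  }
  return coded;
}
```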
for notation convenience , denote for all .note that indicates that file is not stored at any bs .the design parameter of a feasible caching design satisfies the following constraint we now introduce the file transmission strategy , as illustrated in fig .[ fig : system ] .suppose requests file .( i ) if , can not obtain file from the cache of the network . may be served through other service mechanisms . for example, bss can fetch some uncached files from the core network through backhaul links and transmit them over other reserved frequency bands .the service of uncached files may involve backhaul cost or extra delay .the investigation of service mechanisms for uncached files is beyond the scope of this paper . ]( ii ) if , the nearest bs transmits the uncoded file to over the whole bandwidth and time slot .( iii ) if , each of the nearest bss ( i.e. , each bs in ) transmits the coded subfile of file stored locally to over the whole bandwidth and time slot . in this paper, we consider an interference - limited network and neglect the background thermal noise .we assume all bss are active for serving their own users .thus , the received signal of is given by where is the distance between bs and , is the small - scale channel between bs and , is the transmit signal from bs .the first sum in represents the desired signal , and the second sum in represents the interference . to obtain the subfiles for recovering file , needs to decode the signals from .thus , we adopt sic . as in , we consider the distance - based decoding and cancelation order .in particular , when decoding the signal from bs , all signals from the nearer bss in need to be successfully decoded and canceled .the signal - to - interference ratio ( sir ) of the signal from bs after successfully decoding and canceling the signals from the nearer bss in is given by where denotes the interference in decoding the signal from bs . if , can successfully decode ( and cancel ) the signal from bs .due to the limited computational capability and the delay constraint of , as in , we assume that has limited sic capability .that is , can perform decoding and cancelation for at most times to obtain its desired signals .denote and .therefore , it is possible for to obtain file only when .requesters are mostly concerned about whether their desired files can be successfully received .therefore , in this paper , we consider the successful transmission probability of a file randomly requested by as the network performance metric . according to the file transmission and reception strategy discussed in section [ subsec : file_transmission_and_reception ] ,the successful transmission probability of file requested by is given by ,\notag\\ & \hspace{66mm}\ n\in\mathcal n.\notag\end{aligned}\ ] ] according to the total probability theorem , the successful transmission probability of a file randomly requested by is given by this section , we analyze the successful transmission probabilities for a given design parameter in the general file size regime , the small file size regime and the large file size regime , respectively . the calculation of requires the conditional joint probability density function of conditioned on the distances , which is difficult to obtain . 
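To spell out the SIR under the distance-based decoding and cancelation order just described: when the typical user decodes the signal from its i-th nearest BS, the signals from BSs 1, ..., i-1 have already been canceled, and all remaining BSs interfere. Writing r_j for the distance to BS j, h_j for its Rayleigh fading coefficient and alpha for the path-loss exponent (notation assumed here; the equal transmit powers cancel in the ratio), this reads

$$\mathrm{SIR}_i \;=\; \frac{|h_i|^{2}\, r_i^{-\alpha}}{\sum_{j>i} |h_j|^{2}\, r_j^{-\alpha}}, \qquad i = 1,\dots,s_n,$$

and decoding of the i-th coded subfile succeeds when this SIR exceeds a rate-determined threshold set by the subfile size, the slot duration T and the bandwidth W.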
As in prior work, we assume independence between the individual decoding events. Different from that setting, here the user requesting a file needs to decode multiple signals from the received signal in order to recover the file. The successful transmission probability under the proposed caching design is given below.

[thm:cp_close_form_dist_order] The successful transmission probability is given by a closed-form expression involving, for each coding parameter $s_n$, the factor

$$\left(1+\frac{2}{\alpha}\Bigl(2^{\frac{s_n S}{TW}}-1\Bigr)^{\frac{2}{\alpha}}\, B'\!\Bigl(\frac{2}{\alpha},\,1-\frac{2}{\alpha},\,2^{-\frac{s_n S}{TW}}\Bigr)\right)^{-\frac{s_n+1}{2 s_n^{2}}}, \qquad \text{[eqn:f^c]}$$

where $B'(\cdot,\cdot,\cdot)$ denotes the complementary incomplete Beta function.

In the small file size regime (lemma [lem:cp_asymp_0]), the asymptotic successful transmission probability increases linearly towards its limiting value as the file size S decreases to 0. In addition, the design parameter affects the asymptotic behavior of the successful transmission probability in the small file size regime by affecting both the limit and the coefficient of the linear term. Fig. [fig:asymp_stp] (a) plots the successful transmission probability versus file size in the small file size regime. We see from fig. [fig:asymp_stp] (a) that when the file size is small, the "general" curves, which are plotted using theorem [thm:cp_close_form_dist_order], are reasonably close to the "asymptotic" curves, which are plotted using lemma [lem:cp_asymp_0]. Thus, fig. [fig:asymp_stp] (a) verifies lemma [lem:cp_asymp_0].

In this part, we analyze the successful transmission probability in the large file size regime, i.e., as the file size S grows large. Utilizing series expansion of some special functions, from theorem [thm:cp_close_form_dist_order] we derive the asymptotic successful transmission probability in the large file size regime as follows.

[lem:cp_asymp_cp] As S grows large, the successful transmission probability is asymptotically equivalent to an expression involving the factor

$$\left(\frac{2}{\alpha}\, B\!\Bigl(\frac{2}{\alpha},\,1-\frac{2}{\alpha}\Bigr)\right)^{-\frac{s_{\max}+1}{2 s_{\max}^{2}}}, \qquad \text{[eqn:cp_coding_caching_asympt]}$$

where $B(\cdot,\cdot)$ is the Beta function. From lemma [lem:cp_asymp_cp], we know that the successful transmission probability decreases exponentially as the file size S increases. In addition, the design parameter affects the asymptotic behavior of the successful transmission probability in the large file size regime through $s_{\max}$ only. Fig. [fig:asymp_stp] (b) plots the successful transmission probability versus file size in the large file size regime. We see from fig. [fig:asymp_stp] (b) that when the file size is large, the "general" curves, which are plotted using theorem [thm:cp_close_form_dist_order], are reasonably close to the "asymptotic" curves, which are plotted using lemma [lem:cp_asymp_cp]. Thus, fig. [fig:asymp_stp] (b) verifies lemma [lem:cp_asymp_cp].

In this section, we formulate the successful transmission probability maximization problems and obtain the optimal caching designs in the general file size regime, the small file size regime and the large file size regime, respectively. The caching design affects the successful transmission probability via the design parameter. We would like to maximize the successful transmission probability by carefully optimizing this parameter in the general file size regime.

[Caching design in general file size regime][prob:opt_coding]: maximize the successful transmission probability of theorem [thm:cp_close_form_dist_order] over the feasible design parameters, and let the maximizer denote the optimal solution. Note that problem [prob:opt_coding] is a challenging discrete optimization problem with a complex objective function. The number of possible choices for the design parameter grows combinatorially, so a brute-force solution to problem [prob:opt_coding], i.e., exhaustive search, is not acceptable when the number of files and the cache size are large. In addition, a naive greedy solution, i.e.
, storing the most popular uncoded files at each bs , can not guarantee good performance in general , due to the lack of file diversity , as illustrated in .we aim to obtain a low - complexity solution with superior performance , by carefully exploiting structural properties of problem [ prob : opt_coding ] .in particular , we propose a two - stage optimization framework to obtain a near optimal solution to problem [ prob : opt_coding ] . in the first stage , we construct a feasible solution to problem 1 based on an optimal solution to its relaxed linear optimization problem . in the second stage , we use a greedy method to obtain an improved solution based on the feasible solution obtained in the first stage .the two - stage optimization framework is summarized in algorithm [ alg : two_stage ] .the details are given below .instead of using parameter , we introduce an matrix to specify the proposed caching design equivalently , where .in particular , we set if and only if , and otherwise .note that the constraints on in and are equivalent to the following constraints on note that indicates that file is stored in the wireless network , and otherwise . represents the amount of storage allocated to file at each bs .that is , file is partitioned into subfiles , and each bs stores a coded subfile of file .then , by relaxing to ,\ ; n\in \mathcal n , m\in\mathcal m,\label{eqn : interval_constraint}\end{aligned}\ ] ] we obtain the following relaxed problem of problem [ prob : opt_coding ] .[ continuous relaxation of problem [ prob : opt_coding]][prob : opt_equi_contin ] where is given by .let denote the optimal solution .problem [ prob : opt_equi_contin ] is a standard linear optimization problem . we can apply simplex method or interior point method to obtain an optimal solution to problem [ prob : opt_equi_contin ] efficiently .note that is not an integer .based on the optimal solution , we now construct a binary feasible solution to problem [ prob : opt_equi_contin ] ( corresponding to a feasible solution to problem [ prob : opt_coding ] ) , where here , denotes the smallest integer greater than or equal to .indicates that under the binary feasible solution , file takes the largest amount of storage smaller than or equal to that under the continuous optimal solution .thus , the total amount of occupied storage under the binary feasible solution is smaller than or equal to that under . for any binary feasible solution to problem [ prob : opt_equi_contin ] , define note that can be treated as the unused storage for binary feasible solution , and can be further utilized to improve the performance .( i ) if file is not stored in the wireless network ( i.e. , ) and , we can increase the amount of storage allocated to file from to by setting and keeping for all , unchanged . in this case , the unused storage of is utilized to achieve a performance increase of .( ii ) if a coded subfile of file is stored at each bs ( i.e. , there exists an integer such that ) and , we can increase the amount of storage allocated to file from to by setting , and keeping for all , unchanged . in this case , the unused storage of is utilized to achieve a performance increase of .( iii ) otherwise , we can not increase the amount of storage allocated to file .therefore , we define the increase rate of the successful transmission probability at as , which is given in , as shown at the top of the next page , by allocating some ( or all ) of the unused storage to file . 
now , we propose a greedy method to improve the successful transmission probability of obtained in stage i by gradually utilizing unused storage ) .in particular , at each step , for given binary feasible solution , we calculate the increase rate for all and obtain .if , we allocate some of the unused storage to file . if , the greedy method terminatesthe greedy method is illustrated in steps 3 - 12 of algorithm [ alg : two_stage ] .obtain the optimal solution to problem [ prob : opt_equi_contin ] using simplex method or interior point method .obtain a binary feasible solution to problem [ prob : opt_equi_contin ] using .set and .calculate unused storage by .calculate increase rate for all , and obtain .set and . [ alg : two_stage ] in this part , we consider the optimization of the asymptotic successful transmission probability in the small file size regime .[ caching design in small file size regime][prob : opt_coding_asymp_0 ] where is given by . by exploring structural properties of , we can obtain the optimal caching design in the small file size regime .[ lem : opt_asymp_0 ] suppose .there exists , such that for all , the optimal solution to problem [ prob : opt_coding_asymp_0 ] is given by and the optimal value to problem [ prob : opt_coding_asymp_0 ] is given by lemma [ lem : opt_asymp_0 ] indicates that in the small file size regime , when , it is optimal to allocate the storage of each bs equally to the most popular files .that is , each of the most popular files is partitioned into subfiles ( each of bits ) , and each bs stores a coded subfile ( of bits ) of each of the most popular files .the reason is as follows . in the small file size regime ,the probability that can decode the signal from each of the nearest bss is high , and allocating the storage of each of the nearest bss equally to files maximizes the number of files that can be successfully decoded by .thus , storing the most popular files obviously maximizes the successful transmission probability .in addition , lemma [ lem : opt_asymp_0 ] reveals that in the small file size regime , the optimal successful transmission probability increases with the product of the cache size and sic capability , i.e. , .in this part , we consider the optimization of the asymptotic successful transmission probability in the large file size regime .[ caching design in large file size regime][prob : opt_coding_asymp_infty ] where is given by . by exploring structural properties of , we can obtain the optimal caching design in the large file size regime .[ lem : opt_asymp_infty ] there exists , such that for all , the optimal solution to problem [ prob : opt_coding_asymp_infty ] is given by and the optimal value to problem [ prob : opt_coding_asymp_infty ] is given by lemma [ lem : opt_asymp_infty ] indicates that in the large file size regime , it is optimal to allocate the storage of each bs equally to the most popular files .that is , each bs stores each of the most popular ( uncoded ) files .the reason is as follows . in the large file size regime ,the probability that can decode the signal from any bs besides the nearest one is very small .allocating the storage of the nearest bs to uncoded files maximizes the number of files that can be successfully decoded by . 
storing the most popular filesobviously maximizes the successful transmission probability .in addition , lemma [ lem : opt_asymp_infty ] reveals that in the large file size regime , the optimal successful transmission probability increases with cache size and is not affected by sic capability . . , , , , and .consider zipf distribution with .,width=283 ] now , we use a numerical example to compare the optimal solution obtained by exhaustive search and the proposed near optimal solution obtained by algorithm [ alg : two_stage ] in both successful transmission probability and computational complexity .we also use this example to verify the asymptotically optimal solutions obtained in lemmas [ lem : opt_asymp_0 ] and [ lem : opt_asymp_infty ] in the asymptotic file size regimes .[ fig : compare_asymp_opt_and_opt ] plots the successful transmission probability versus file size .we can see that the successful transmission probability of the proposed near optimal solution is very close to that of the optimal solution . while , the average computation time for the optimal solution is times of that for the near optimal solution .this demonstrates the applicability and effectiveness of the near optimal solution .in addition , we can see that the successful transmission probabilities of the asymptotically optimal solutions obtained by lemmas [ lem : opt_asymp_0 ] and [ lem : opt_asymp_infty ] approach that of the optimal solution in the small and large file size regimes , respectively , verifying lemmas [ lem : opt_asymp_0 ] and [ lem : opt_asymp_infty ] .in this section , we compare the proposed near optimal network coding - based caching design with two baselines . in the simulation , we assume the popularity follows zipf distribution , i.e. , , where is the zipf exponent .we choose , , , , and bits .baseline refers to the network coding - based caching design in which the storage of each bs is equally allocated to the most popular files ( i.e. , for , and for ) .baseline refers to the uncoded caching design in which the most popular uncoded files are stored at each bs ( i.e. , for , and for ) .fig.[fig : simulation ] illustrates the successful transmission probability versus and .we can observe that the proposed near optimal caching design significantly outperforms the two baseline designs , and its performance increases much faster with the sic capability and the cache size .this is because the proposed caching design wisely exploits sic capability and storage resource .in addition , the proposed caching design and baseline have much better performance than baseline .this is because the proposed caching design and baseline provide file diversity .in this paper , we considered the analysis and optimization of a random linear network coding - based caching design in a large - scale sic - enabled wireless network . by utilizing tools from stochastic geometry, we analyzed the successful transmission probability in the general file size regime and the two asymptotic file size regimes .then , we considered the successful transmission probability maximization .we obtained a near optimal solution with superior performance and manageable complexity in the general file size regime .we also obtained closed - form asymptotically optimal solutions in the small and large file size regimes , respectively .y. cui , d. jiang , and y. wu , `` analysis and optimization of caching and multicasting in large - scale cache - enabled wireless networks , '' _ ieee trans .wireless commun ._ , vol . 15 , no . 7 , pp . 
5101-5112, Jul. 2016.
Z. Chen, J. Lee, T. Q. S. Quek, and M. Kountouris, "Cooperative caching and transmission design in cluster-centric small cell networks," CoRR, vol. abs/1601.00321, 2016. [Online]. Available: http://arxiv.org/abs/1601.00321
M. Wildemeersch, T. Q. S. Quek, M. Kountouris, A. Rabbachin, and C. H. Slump, "Successive interference cancellation in heterogeneous networks," IEEE Trans. Commun., vol. 62, no. 12, pp. 4440-4453, Dec. 2014.
|
Network coding-based caching at base stations (BSs) is a promising caching approach to support massive content delivery over wireless networks. However, existing network coding-based caching designs do not fully explore and exploit the potential advantages. In this paper, we consider the analysis and optimization of a random linear network coding-based caching design in large-scale successive interference cancelation (SIC)-enabled wireless networks. By utilizing tools from stochastic geometry, we derive a tractable expression for the successful transmission probability in the general file size regime. To further obtain design insights, we also derive closed-form expressions for the successful transmission probability in the small and large file size regimes, respectively. Then, we consider the successful transmission probability maximization by optimizing a design parameter, which is a complex discrete optimization problem. We propose a two-stage optimization framework and obtain a near optimal solution with superior performance and manageable complexity. The analysis and optimization results provide valuable design insights for practical cache and SIC enabled wireless networks. Finally, by numerical results, we show that the proposed near optimal caching design achieves a significant performance gain over some baseline caching designs.

Index terms: cache, network coding, successive interference cancelation, stochastic geometry, optimization.
|
this work considers sophisticated attempts to visualise pace and rhythm within a narrative .the key insight of these techniques is not to replace a qualitative evaluation ( the reading of the text ) with a quantitative assessment , but , by means of a rigorous deterministic process , to extract relationships from input data and display them for interpretation .in essence , one qualitative evaluation ( of the text ) is augmented with another ( of an image ) ; however , the qualitative evaluation of the image has the advantage that it is not only vastly faster , but also independent of both language and reader familiarity .fiction writing is a competitive industry , and supports several sub - sectors in the form of writing classes , manuscript consultants , and networking events .writers face challenges in getting feedback on their work , particularly in terms of rhythm and pace . not only is quality subjective ,the process is extremely time - consuming for the reader .moreover , if the writer is to iterate through drafts of their work , then the feedback of any given reader becomes less and less useful as the reader becomes familiar with the text .there are also situational difficulties , such as if the writer simply does nt accept aspects of the criticism as valid .a nave tool might split a narrative into chapters and then plot a chart showing how a measure like the flesch reading index changed between chapters . such a chart would have limited general use ; however , if a chapter had a significantly different index it would be sensible to conclude that the chapter was considerably different in style to the surrounding chapters and that the writer should be aware of this .a key point here is that the writer certainly should nt be expected to change the narrative simply because one chapter is somewhat unusual by some measure .there are many possible sensible reasons for the anomaly , but it is our position that it is to the writer s advantage that they are aware of both the result and the tool , so they can reason about why the result occurred . 
if the writer has purposely caused the effect to further the narrative , then such a result would be a validation , otherwise , if the writer has accidentally caused the effect then they can consider the worth of the effect and potentially take steps to adjust or remove it .this work uses a framework for narrative analysis proposed in and applies such techniques to two example domains , with a view to evaluating the system to see if it can provide insights of value in literary research .one domain is in the traditional agent / consultant model , whereas the other is a group process , situated much closer to writing for tv or film scripts .of course , our comparisons are not an adequate or complete way of assessing individual style ; they are nonetheless an element that can be employed usefully for our specific purpose .this paper first details work in related areas and places the techniques examined here in an insightful and innovative context .the following sections describe the operational use domains .visualisations of the narrative mapping are described .the analysis of these mappings is accompanied by examples and notes on how the use was suited , or not , to particular aspects of each domain .previous work relating narrative and computer science tends to focus on creation for example , designing systems that produce emergent narrative or by modelling an existing narrative as a sequence of actions with pre and postconditions .there are also many instances where media outlets have announced computer systems that can pick the next bestselling book , script , or music . the failure of these systems to live up to the hype has led people to be naturally cautious about any analysis system in the creative domains .the techniques examined in this paper were first used in to distinguish the style and structure of film and tv scripts .murtagh et al .focused on capturing the semantics of the data and the plausibility of taking text as a practical and useful expression of underlying story .this work can be characterised as providing a platform to construct visual representations of the semantics encoded in the data .there is an overlap with the area of _ sentiment analysis _ , which analyses user - generated content : often by determining if the author of a blog comment or tweet is in favour of , or against , a product .although visualisations have been constructed this way , such approaches are based , thus far , in examining a small set of sentiment - bearing words , and they consider the source text as a single block , rather than a set of discrete scenes comprising a narrative arc .this section presents details of domains of deployment .later sections will evaluate how different information mapping methods are used to enhance the workflow of each .our evaluation is based on our observations and testimonials that were provided .a number of interviews were conducted with experts in the publishing industry that made it clear that there was a large degree of resistance to what the industry might see as `` replacement by robots '' .the two mapping through visualisation techniques we evaluate here are of interest because they require a level of interpretation from the user , and so may be much more acceptable to the industry .the use of these techniques was evaluated in two domains , which were selected to represent the extremes of creative writing .the writer s desk is a consultancy offering a very traditional feedback mechanism to authors , whereas project toomanycooks models the deadline - 
driven high intensity creativity found in group writing for tv , film , or magazines .project toomanycooks ( tmc ) ( described briefly in ) is a creative writing project that runs camps of 8 to 10 student writers who collaboratively create a novel ( depending on the age of the students this is normally in the 30,000 to 65,000 word range ) over a period of five days .it has two core goals : to increase the contact time and feedback between students interested in fiction writing ; and to give students experience of the lifecycle of the novel from inception to printing .example outputs include . in this domain , users were particularly interested in using the analysis techniques to quickly alert them to sections that in some sense did nt follow the overall voice of the rest of the novel .the project was also interested in mapping and visualisation of overall plot arcs : allowing them to reorder sections in such a way that particular scenes do not overshadow each other within the narrative .the writer s desk ( twd ) is a commercial entity specialising in the review of manuscripts for authors .twd s role is in giving professional feedback to authors over the style and structure of their work .this study spent six months providing narrative analysis for a selection of the submissions they received .the analysis reports were either used internally for developing twd feedback or passed on to authors as an appendix .twd and their writers were particularly interested in seeing the chapter - to - chapter flow and , as an extension of this , how an author s work sits as a whole .as a commercial enterprise , twd was also interested in identifying target markets and in grooming submissions to hit an area of particular interest to the public more precisely .this work reports experiences using two mappings to express the narrative arc .firstly , quite general frequency of occurrence data is determined for word usage in context .based on all interrelationships between words and text segments , a mapping is obtained that is euclidean and hence easly visualised as a map - like representation . from that , and aided greatly by the euclidean map most often , of full inherent dimensionality and hence not suffering any loss of information a tree or hierarchical visualisation is obtained . a further innovative development is to have such a hierarchy respect a given ordering of the input text related to narrative development or chronology .each input text is automatically divided into a number of segments , with chapter headings being used to delimit segments . given these segments and a list of unique words in the input text ,a cross - tabulation is constructed which gives the count of the occurrences of a given word in a given segment . from a machine - learning perspectiveour data was semi - structured , in that it is organised into discrete chapters or segments. one can use correspondence analysis to extract from a cross - tabulation some level of structure from the text in the form of an embedding in euclidean space .details of the construction are available in .we refer to the extracted structure as mapping the semantics of the text , because each word is a weighted average of text segments , and each text segment is a weighted average of the words it contains .both the tree visualisations to be presented use euclidean space embedding as a starting point . 
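The cross-tabulation just described is a plain word-by-segment count matrix; a minimal sketch is given below. The crude tokenization, the container choice and the function name are ours, and the correspondence-analysis step that turns this table into a Euclidean embedding is not shown.

```cpp
// Minimal sketch of the word-by-segment cross-tabulation described above:
// counts[w][s] = number of occurrences of word w in segment (chapter) s.
// Tokenization here is deliberately crude (lowercased alphabetic runs).
#include <cctype>
#include <cstddef>
#include <map>
#include <string>
#include <vector>

using CrossTab = std::map<std::string, std::vector<std::size_t>>;

CrossTab cross_tabulate(const std::vector<std::string>& segments) {
  CrossTab counts;
  for (std::size_t s = 0; s < segments.size(); ++s) {
    std::string word;
    auto flush = [&]() {
      if (word.empty()) return;
      auto& row = counts[word];
      row.resize(segments.size(), 0);   // one column (count) per segment
      ++row[s];
      word.clear();
    };
    for (char ch : segments[s]) {
      if (std::isalpha(static_cast<unsigned char>(ch)))
        word.push_back(static_cast<char>(std::tolower(static_cast<unsigned char>(ch))));
      else
        flush();
    }
    flush();
  }
  return counts;
}
```

Correspondence analysis of this table then provides the Euclidean embedding from which both the planar projections and the ordered tree visualisations are built.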
for each visualisation, a description is given with examples and then a detailed analysis is reported on of the advantages and also limitations of the visualisation in each domain . the relationships in the data given by the set of all frequency of occurrence ( including 0 = no presence ) values can be projected into two dimensions to show the relative position of each chapter ( text segment ) in the projection .figure [ default ] shows such a projection from _owen noone and the marauder _ , with each segment of the text represented by a point on the projection .since the process used the relative word counts as its starting point , two segments in the novel will appear closer to each other in the projection if they have similar relative word frequencies .it is our position that when an author writes a segment in a distinctly different style or tone ( examples might be moving to a different tense or a sudden change in the tension in the storyline ) then these word frequencies will change significantly and be visible on the projection for interpretation .for example , figure [ default ] shows a tight core grouping over to the right hand side of the projection , with a number of outliers .subjectively one might say that this grouping represents the `` voice '' of the author or novel and it may be considered worthwhile to investigate the nature of those segments that did not fit in with this voice . if one examines segment 69 , which is the most extreme outlier , it can been seen that it is written as a fictional extract from the newspaper _ usa today _ , as opposed to the majority of the novel , which is written with a more conventional third - person narrator .the author very much intended to give this segment a different `` voice '' . in this particular work ,the majority of the other relative outliers are similar plot devices in the form of radio announcements , magazine articles and so on .of course , the software makes no judgement here .it simply displays the information for an expert evaluation .the example of _ owen noone _ is a static study of a published novel after a rigorous proofing and editing process .we shall shortly show how twd used the visualisation to examine a snapshot of styles to position a novel in the market , while tmc used the visualisation to track the progress of construction over time .one of the core goals within the toomanycooks process was to give the appearance of having one single author with a clear style and `` voice '' .the group originally relied on the `` wikipedia effect '' that is that if enough different authors proofread and rewrite the same section repeatedly , then differences in style become invisible to the causal reader .however the 2-dimensional projection allowed users to visualise the style and see which sections might benefit from a stylistic rewrite .it is tempting to assume that this `` core style '' was simply the average of the styles of the writers .in fact , this was the working model used in twomanycooks this visualisation was introduced in the proofreading stages as a way of applying a consistent style across the novel . during the latter two days of a toomanycooks project , the current draft of the novel is repeatedly printed out , proofread , and has changes made to it ( generally on the order of three iterations per day ) . 
in early iterations , group coordinators would identify outliers evaluate each of them to see if the outlier was an intentional outlier and , if not , paired it with another segment that was in some sense opposing the first . the group members who wrote the first drafts of each of thesewere instructed to copy - edit each other s draft with the intention that the stylistic differences would cancel out .one could imagine a similar process pairing writers and sub - editors on a magazine or a newspaper . in later iterations ,this becomes much more a process of identifying unintentional outliers and focusing the stronger writers on those chapters for rewriting , while other writers polished more minor corrections in those chapters that had nt shown as outliers .later work provided more grist for the mill of our thinking .a recent toomanycooks group was selected from students who had won a short story competition .figure [ strodesscatter ] shows a projection in which the short stories are compared with both the novel that the writers produced , and ( for context ) the popular novels _ harry potter and the half blood prince _ , represented by h , and _ pride and prejudice _ , represented by p. the short stories , represented by the i symbols , unexpectedly do not surround the novel that the authors later collaborated on ( represented by s symbols ) .this suggests that in fact the core clustering is more a result of the group of writers improving the consistency of their prose with regard to an intended style , rather than being shackled to a literary fingerprint .note also that the clustering of the toomanycooks novel is much less tight than either of the two popular authors , which is probably to be expected from a small group of 6th - form students writing over a five day period .the major use of the unordered visualisation for the toomanycooks project was in identifying sections of unusual style and being able to evaluate each for its role in the story .being able to highlight those aspects of the story that did not have the same `` voice '' as the main narrative allowed the writers to streamline the feedback process and present to readers a more consistent narrative .an attraction of the projections for twd was the ability to quickly compare with other artists within the same genre .a regular complaint of publishers and agents is that they are sent manuscripts for genres in which they do not specialise and end up rejecting the vast majority of these out of hand . at the fine - grained level, editors have regularly commented that an author does not necessarily write in the style that they believe they do and , more crucially , they do not necessarily aim at a market segment that they are best suited for . by using the projection visualisation to compare a target manuscript with a selection of commercial novels onecan compare explicitly .for example , twd had a commission to examine a particular target novel that was aimed at the style of romance novel exemplified by danellie steel .figure [ track ] shows the the target novel text ( t ) , compared with several other novels .these are : _ kaleidoscope _, by danellie steele ( s ) ; _ emma _ , by jane austen ( a ) ; and _ eclipse _ , by stephanie meyer ( m ) . 
this allowed twd to evaluate , to their own satisfaction , whether the style and word choice in this instance was closer to the steel - style romance than either the classic or teen styles of the other examples . furthermore , the overall consistency of the text is similar to what would be expected from a published novel . there is , of course , a psychological component to some of this feedback . some authors react viscerally to the idea of this sort of analysis , fearing that the approach reduces creativity , while some react very favourably , having more faith in their own interpretation of the visualisations than they necessarily have in their agents or editors ( whom they might see as sparing them hard truths ) . the ability to highlight anomalous sections was also of great interest within the twd domain as it provided a useful metric for working one - on - one with authors , and for inviting them to interpret the results in relation to their work . this allowed the conversation to be more about the guiding of the author and not about a difference in personal tastes between people . feedback from the company was universally positive , particularly in the area of how comfortable they were in interpreting the visualisation for themselves , and in helping with the more commercial aspects of the business . although the planar presentations are useful , they do not address the fact that the narrative is consumed linearly , and so , between any given successive pair of segments , they reflect only those differences that we are referring to as style or mood . to gain more insight into the actual structure of the narrative , a visualisation is used that respects the sequentiality of the segments . this section evaluates this hierarchical arrangement of the information , again starting from quite generic text / word association data with relatively minimal pre - processing . the hierarchical clustering algorithm used here is detailed in , and was used as a device to deconstruct the film _ casablanca _ in . briefly , the algorithm repeatedly merges the least dissimilar pair of adjacent scenes to form a tree - like structure that shows how segments of a narrative cluster together . this sequential ordering allows the viewer to notice how , although a chapter or set of chapters may fit within the overall ` style ' of a novel , they may not necessarily match with their immediate neighbours . once again , there can be outliers , and a human can decide if an outlier is there to intentionally shape the narrative or not . for example , figure [ potterden ] shows the ordered visualisation of _ harry potter and the half - blood prince _ by j.k . rowling , in which each segment is a chapter in the novel . viewing the structure , one can see that the cluster comprising only the first chapter is rated as being remarkably dissimilar to the cluster containing all other chapters . the opening chapter of the novel is a conversation between the prime minister of the uk and the minister for magic ; the chapter is used mainly for setting up the narrative and the mood , and neither character features significantly in the remainder of the text . a subjective reading of the novel may support the view that the first chapter is structurally separate from the rest of the text . although the comparative deconstruction of such works to a much lower level of detail is a fascinating subject in its own right , it is outside the scope of this work .
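the ordered visualisation just described can be sketched as a greedy , adjacency - constrained merging procedure . the dissimilarity measure below ( euclidean distance between size - weighted mean relative - frequency vectors ) is an assumption for illustration ; the cited algorithm's exact measure is not reproduced in this excerpt .

```python
import numpy as np

def sequential_merge_tree(vectors):
    """Ordered clustering as described above: repeatedly merge the least
    dissimilar pair of *adjacent* clusters, so the segment order is respected.
    Returns the merge list (left members, right members, merge height)."""
    clusters = [[i] for i in range(len(vectors))]
    means = [np.asarray(v, dtype=float) for v in vectors]
    sizes = [1] * len(vectors)
    merges = []
    while len(clusters) > 1:
        gaps = [np.linalg.norm(means[i] - means[i + 1])
                for i in range(len(clusters) - 1)]
        i = int(np.argmin(gaps))
        merges.append((list(clusters[i]), list(clusters[i + 1]), float(gaps[i])))
        # merge cluster i+1 into cluster i, using a size-weighted mean as the new centre
        means[i] = (sizes[i] * means[i] + sizes[i + 1] * means[i + 1]) / (sizes[i] + sizes[i + 1])
        clusters[i] = clusters[i] + clusters[i + 1]
        sizes[i] += sizes[i + 1]
        del clusters[i + 1], means[i + 1], sizes[i + 1]
    return merges

# segments that only join the tree at a large merge height are the structurally
# anomalous ones, e.g. the lone opening chapter discussed above
```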
in particular , our two target domains focus much more heavily on the use of this ordered visualisation for examining novels as works - in - progress . for more information on this clustering see , e.g. . figure [ mden2 ] shows a dendrogram using an early draft of _ the shadow hours _ , which was the test novel for the toomanycooks project . the major anomalous section in figure [ mden2 ] ( chosen by eye ) is 44 , followed by 26 , 27 , and 6 . section 44 happens to be the smallest section in the narrative in that draft and the only one that had not been expanded from a skeletal outline into a draft section , so it required attention . sections 26 and 27 were character development of one of the minor characters ; they had been drafted by one team member and had not been reviewed yet by other team members . figure [ mden3 ] shows a slightly later draft of the same novel . in this draft section 44 is still a clearly anomalous section , but to a much lesser degree , and 26 , 27 , and 6 now merge much more closely with the surrounding chapters . this is consistent with contemporaneous notes from the project , which show that 44 had by then been drafted and the other scenes were going through second drafts . in this case the dendrogram allowed an `` at a glance '' notification of areas that required particular attention and revealed that a section had been missed due to a communication error in the team . given the much greater amount of time that staff at twd had to examine a manuscript , the ability to `` immediately evaluate '' the structure of a document was less important . instead the structural diagrams were used to validate , and later guide , the reviewer s own evaluation . during the early stage of the project , staff reviewed documents as normal , and then examined the structural diagrams to see how much their interpretation of the diagram agreed with their interpretation of the text . as trust built , this progressed to reviewing documents before using the diagrams to check that no obviously anomalous sections had been missed , and then to reviewing both the text and the diagram at the same time , allowing the reviewer to re - examine text on the fly and get a much stronger impression of not only where the current section of text is going but how it slots into the overall narrative . we have developed tools that we have used effectively to augment and improve upon qualitative analyses of narrative . our findings are that these techniques can be effective , depending greatly on the situation they are applied in .
despite the reported benefits of data visualisation , the publishing sector has been slow to engage with it . in a set of interviews with 14 industry representatives that were conducted as part of the research , without exception the interviewees reported no use of software for anything other than counting words , and only a fraction of the interviewees were interested in seeing demos of any kind of supporting technology . however , some publishing staff have been very positive about the idea of at - a - glance market placement and the added value of being able to check that the section of the book that one has read is typical of the author s voice . those who have made use of the technology are positive , and provided us with testimonials . we would like to thank especially adam ganz for his guiding expertise and long association , along with the staff at twd , in particular jacqueline kibby . thanks are also due to all participants on the toomanycooks projects , and to the varied expertise provided by david wells , tony greenwood , adam roberts , meg mitchell , lucy yeomans , emm johnstone , patrick leman , yvonne skipper , peter dunsmuir , john vines , and mark dorling . t. kakkonen and g. galic kakkonen . sentiprofiler : creating comparable visual profiles of sentimental content in texts . in _ proceedings of the workshop on language technologies for digital humanities and cultural heritage _ , pages 62 - 69 , hissar , bulgaria , september 2011 . m. kriegel and r. aylett . emergent narrative as a novel framework for massively collaborative authoring . in _ proceedings of the 8th international conference on intelligent virtual agents _ , iva 08 , pages 73 - 80 , berlin , springer , 2008 . s. louchart , i. swartjes , m. kriegel , and r. aylett . purposeful authoring for emergent narrative . in _ icids 08 proceedings of the 1st joint international conference on interactive digital storytelling : interactive storytelling _ , pages 273 - 284 , berlin , springer , 2008 .
|
from the earliest days of computing , there have been tools to help shape narrative . spell - checking , word counts , and readability analysis give today s novelists tools that dickens , austen , and shakespeare could only have dreamt of . however , such tools have focused on the word or phrase level . in the last decade , research focus has shifted to support for collaborative editing of documents . this work considers more sophisticated attempts to visualise the semantics , pace and rhythm within a narrative through data mining . we describe real - life applications in two related domains . * keywords : * visualisation , narrative , human - computer interfaces , data mining .
|
( b ) to define linear response functions , consider an individual component at tension .a small additional oscillatory force applied at the left end leads to endpoint oscillations with amplitudes and which defines the self and cross response functions .( c ) two objects x and y connected in series behave as a composite object xy whose response functions can be derived through simple rules ( eq . ) from the individual x and y response functions .( d ) schematic representation of the optical tweezer setup consisting of beads ( b ) , double - stranded dna handles ( h ) and protein ( p ) , with connecting springs . ] as a representative case , in this paper we consider the double trap setup shown in fig .[ sys](a ) , which typically involves two optically - trapped polystyrene beads of radius , two double - stranded dna handles , each , attached to a protein in the center . for fixed trap positions and sufficiently soft trapping potentials, the entire system will be in equilibrium at an approximately constant tension .we are interested in a force regime ( pn in the system under consideration ) where the handles are significantly stretched in the direction parallel to the applied force ( chosen as the axis ) , and rotational fluctuations of the handle - bead contact points are small .since the experimental setup is designed to measure the separation of the beads as a function of time , we focus entirely on the dynamic response of the system along the direction. however , the methods below can be easily generalized to the transverse response as well . though we consider only a passive measurement system in our analysis , an active feedback loop that minimizes force fluctuationscan also be incorporated , as an additional component with its own characteristic dynamic response ( with the added complication that the response of the feedback mechanism would have to be independently determined ) . to set the stage for our dynamic deconvolution theory ,we first illustrate the static deconvolution for two objects x and y connected in series under constant tension , e.g. a protein and a handle .let and be the constant - force probability distributions for each of these objects having end - to - end distance .the total system end - to - end distribution is given by . in terms of the fourier - transformed distributions ,this can be stated simply through the ordinary convolution theorem , .if is derived from histograms of the experimental time series , and if can be estimated independently ( either from an experiment without the protein , or through theory ) , then we can invert the convolution relation to solve for the protein distribution and thus extract the folding free energy landscape .a similar approach works for multiple objects in series or in parallel .before we consider dynamic networks , we define the linear response of a single object under constant stretching force along the direction , as shown in fig .[ sys](b ) .imagine applying an additional small oscillatory force along the axis to the left end. the result will be small oscillations and of the two ends around their equilibrium positions .the complex amplitudes and are related to through linear response : , , defining the _ self response function _ of the left end and the _ cross response function _ . 
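before moving on to the dynamic case , here is a sketch of the static deconvolution step described above : the end - to - end distribution of two objects in series is the convolution of the individual distributions , so the protein distribution follows by dividing the fourier transforms . the histogram binning and the small regularisation constant are assumptions ; real , noisy histograms generally need smoothing or constrained fitting rather than this naive inversion .

```python
import numpy as np

def static_deconvolve(p_total, p_handles, eps=1e-6):
    """Naive Fourier-space deconvolution of end-to-end histograms:
    since the distributions convolve in series, dividing the transforms
    recovers the protein distribution.  Both histograms must share the same
    equally spaced extension grid; eps is a small regulariser (assumption)."""
    ft_total = np.fft.fft(p_total)
    ft_handles = np.fft.fft(p_handles)
    ft_protein = ft_total * np.conj(ft_handles) / (np.abs(ft_handles) ** 2 + eps)
    p_protein = np.real(np.fft.ifft(ft_protein))
    p_protein = np.clip(p_protein, 0.0, None)      # remove small negative ringing
    norm = p_protein.sum()
    return p_protein / norm if norm > 0 else p_protein

# usage: histogram the measured end-to-end time series (with and without the
# protein) on a common grid, divide in Fourier space as above and, if desired,
# take -kBT * log of the result to estimate the folding free energy landscape
```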
if the oscillatory force is applied instead at the right end , the response takes the form : , .note that since the object is in general asymmetric , and are distinct functions .however , there is only a single cross response ( in the absence of time - reversal breaking effects such as magnetic fields ) .for the purposes of dynamic deconvolution of a network , these three response functions contain the complete dynamical description of a given component and are all we need .it is convenient to define the _ end - to - end response function _ , with , where the oscillatory force is applied simultaneously to both ends of the object in opposite directions .this response turns out to be a linear combination of the other functions : . as an illustrationwe take the simplest , non - trivial example : two spheres with different mobilities and connected by a harmonic spring of stiffness . in waterwe are typically in the low reynolds number regime and an overdamped dynamical description is appropriate .if an oscillating force of amplitude is applied to the left sphere , its velocity will oscillate with the amplitude ) ] . using the above definitions of the response functions we obtain where . by symmetry is the same as with subscripts l and r interchanged .the end - to - end response has a standard lorentzian form . for more realistic force transducers , such as semiflexible polymers, will later be written as a sum of lorentzians reflecting the polymer normal modes .note that when , and the spheres no longer interact , , the standard result for a diffusing sphere , and , as expected , since there is no force transmission from one sphere to the other . though all the linear response functions are defined in terms of an external oscillatory force , in practice one does not need to actually apply such a force to determine the functions experimentally .as described in the two - step deconvolution procedure below , one can extract them from measurements that are far easier to implement in the lab , namely by calculating autocorrelation functions of equilibrium fluctuations .based on the notion of self and cross response functions , we now consider the dynamics of composites .we explicitly display the convolution formulas for combining two objects in series and in parallel ; by iteration the response of a network of arbitrary topology and complexity can thus be constructed . as shown in fig .[ sys](c ) , assume we have two objects x and y connected by a spring .x is described by response functions , , and , and we have the analogous set for y. the internal spring is added for easy evaluation of the force acting between the objects , it is eliminated at the end by sending its stiffness to infinity .we would like to know the response functions of the composite xy object , , , and , where the x and y labels correspond to left and right ends , respectively .the rules ( with full derivation in the supplementary information ( si ) ) read the rules for connecting two objects in parallel are more straightforward and read , where is any one of the function categories ( self , cross or end - to - end ) , and denote the inverse response functions .one particularly relevant realization for parallel mechanical pathways are long - range hydrodynamic coupling effects , that experimentally act between beads and polymer handles in the force clamp setup .we derive the parallel rule and show an example hydrodynamic application in the si . 
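as a numerical illustration of the two - sphere example and of the serial convolution just described , the sketch below encodes a minimal version of both . the explicit formulas are re - derived here from the overdamped equations of motion and from the response - function definitions in the rigid - connection limit , since the numbered equations are not reproduced in this excerpt ; the end - to - end combination ( self left + self right - 2 cross ) and the exact form of the serial `` feedback '' term are therefore assumptions consistent with the description above , not the paper's quoted results .

```python
import numpy as np

def two_sphere_response(mu_l, mu_r, k, omega):
    """Self and cross responses of two beads (mobilities mu_l, mu_r) joined by a
    spring of stiffness k, re-derived from the overdamped equations of motion."""
    s = -1j * omega
    denom = s * (s + k * (mu_l + mu_r))
    return {"sl": mu_l * (s + k * mu_r) / denom,
            "sr": mu_r * (s + k * mu_l) / denom,
            "c": k * mu_l * mu_r / denom}

def combine_serial(x, y):
    """Serial convolution in the rigid-connection limit.  The composite self
    response is the bare self response minus a rational 'feedback' term that
    vanishes when the cross response of that component is switched off,
    matching the structure described in the text (exact form assumed)."""
    fb = x["sr"] + y["sl"]
    return {"sl": x["sl"] - x["c"] ** 2 / fb,
            "sr": y["sr"] - y["c"] ** 2 / fb,
            "c": x["c"] * y["c"] / fb}

def end_to_end(r):
    """Assumed linear combination giving the end-to-end response."""
    return r["sl"] + r["sr"] - 2.0 * r["c"]

# usage sketch: glue two toy dumbbells in series and inspect the composite
# end-to-end response; all parameter values are placeholders
omega = np.logspace(2, 8, 200)               # rad/s
x = two_sphere_response(1.0e6, 1.0e6, 1.0e-3, omega)
y = two_sphere_response(2.0e6, 2.0e6, 1.0e-3, omega)
print(np.abs(end_to_end(combine_serial(x, y))[:3]))
```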
for simplicity, however , we will concentrate in our analysis on serial connections .to proceed , if we set x = h and y = b , we can obtain the response functions of the composite handle - bead ( hb ) object , , if we know the response functions of the bead and handle separately . our full system in fig .[ sys](d ) is just the protein sandwiched between two hb components ( oriented such that the handle ends of each hb are attached to the protein ) .the total system response functions ( denoted by `` 2hb+p '' ) in terms of the individual protein and hb functions result by iterating the pair convolution in eq . twice .in particular , the end - to - end response is given by : this is a key relation , since we show below that both and the three hb response functions , , can be derived from force clamp experimental data . hence eq .allows us to estimate the unknown protein response function .we note a striking similarity to the signal processing scenario , where the output of a linear time - invariant ( lti ) network ( e.g. an rlc electric circuit ) is characterized through a `` transfer function '' . for such networks , combination rules in terms of serial ,parallel , and feedback loop motifs exist .the result for in eq . can be seen in a similar light : the first term is the self - response of object x which is independent of the presence of object y. the rational function in the second term is the `` feedback '' due to x interacting with y. as expected , if the cross - response connecting the two ends of x is turned off , this feedback disappears . in analogy to the transfer function theory for lti systems , our convolution rules form a comprehensive basis to describe the response of an arbitrarily complicated network inside a force clamp experiment . andlike the transfer functions which arise out of lti feedback loops , the convolution of interacting components consists of a nonlinear combination of the individual response functions .the rational functions due to the feedback of mechanical perturbations across the connected elements are non - trivial , but can be exactly accounted for via iteration of the convolution rules .to illustrate our theory , we construct a two - step procedure to analyze the experimental system in fig .[ sys](a ) , with the ultimate goal of determining dynamic protein properties in the force clamp . under a constant force , the protein extension will fluctuate around a mean corresponding to a folded or unfolded state . though it has been demonstrated that under appropriately tuned forces the protein can show spontaneous transitions between folded and unfolded states , we for the moment neglect this more complex scenario .( in the si we analyze simulation results for a protein exhibiting a double - well free energy , where the two states can be analyzed independently by pooling data from every visit to a given well ; the same idea can be readily extended to analyze time series data from proteins with one or more intermediate states . )we consider the protein dynamics as diffusion of the reaction coordinate ( the protein end - to - end distance ) in a free energy landscape .the end - to - end response function reflects the shape of around the local minimum , and the internal protein friction ( i.e the local mobility ) , which is the key quantity of interest .the simplest example is a parabolic well at position , namely . 
if we assume the protein mobility is approximately constant within this state , the end - to - end response is given by which has the same lorentzian form as the harmonic two - sphere end - to - end response in eq . .depending on the resolution and quality of the experimental data , more complex fitting forms may be substituted , including anharmonic corrections and non - constant diffusivity profiles ( an example of these is given in the two - state protein case analyzed in the si ) .however for practical purposes eq . is a good starting point .an experimentalist seeking to determine would carry out the following two - step procedure : _ first step : _ make a preliminary run using a system without the protein ( just two beads and two handles , as illustrated in fig .[ dhb](a ) ) . as described in the materials and methods ( mm ) , time derivatives of autocorrelation functions calculated from the bead position time seriescan be fourier transformed to directly give and .the convolution rules in eq .relate and to the bead / handle response functions , and , which via another application of eq .are related to the response functions of a single bead and a single handle .the bead functions and depend solely on known experimental parameters ( mm ) , leaving only the handle functions and as unknowns in the convolution equations . choosing an appropriate fitting form , determined by polymer dynamical theory ( see mm ) , we can straightforwardly determine and ._ second step : _ make a production run with the protein .. relates the resulting end - to - end response , extracted from the experimental data , to the response of the protein alone . since the first step yielded the composite handle - bead functions , , which appear in eq . , the only unknown is . we can thus solve for the parameters and which appear in eq . .this two - step procedure can be repeated at different applied tensions , revealing how the protein properties ( i.e. the intramolecular interactions that contribute to the diffusivity ) depend on force .even analyzing the unfolded state of the protein might yield interesting results : certain forces might be strong enough to destroy the tertiary structure , but not completely destabilize the secondary structure , which could transiently refold and affect . to demonstrate the two - step deconvolution procedure in a realistic context , we perform brownian dynamics simulations mimicking a typical force clamp experiment : two beads that undergo rotational and translational fluctuations are trapped in 3d harmonic potentials and connected to two semiflexible polymers which are linked together via a potential function that represents the protein folding landscape ( see mm for details ) .we ignore hydrodynamic effects which can easily be accounted for through parallel coupling pathways , as mentioned above .we begin with the * first step * of the deconvolution procedure .a snapshot of the simulation system , two handles and two beads without a protein , is shown in fig .[ dhb](a ) .a representative segment of the time series is shown in fig . [ dhb](b ) . equilibrium analysis of the time series yields the end - to - end distribution , which is useful for extracting static properties of the protein like the free energy landscape : when the protein is added to the system , the total end - to - end distribution is just a convolution of the 2hb and protein distributions .( the asymmetry of seen in fig . [dhb](b ) arises from the semiflexible nature of the handles . 
)as described in mm , we use the time series to calculate self and end - to - end msd curves and [ fig .[ dhb](c ) ] whose derivatives are proportional to the time - domain response functions and .the multi - exponential fits to these functions are illustrated in fig .[ dhb](c ) , and their analytic fourier transforms plotted in the left column of fig .[ dhb](d ) .we thus have a complete dynamical picture of the 2hb system response .however , in order to use eq . to extract the protein response, we first have to determine the handle - bead response functions .although a general fit of is possible , it is useful to apply the knowledge about the bead parameters and symmetry properties of the handle response .the handle parameters ( the set in mm eq . ) are the only unknowns in the three hb response functions : , , and . note that the handle - bead object is clearly asymmetric , so the self response will be different at the handle ( h ) and bead ( b ) end .convolving two hb objects according to eq . andfitting the handle parameters to the simulation results for and leads to the excellent description shown as solid lines on the left in fig .[ dhb](d ) .the handle parameters derived from this fitting completely describe the hb response functions , shown in the right column of fig .[ dhb](d ) .the hb response curves reflect their individual components : there is a low frequency peak / plateau in the imaginary / real part of the hb self response , related to the slow relaxation of the bead in the trap .the higher frequency contributions are due to the handles , and as a result they are more prominent in the self response of the handle end than of the bead end .there is a similarly non - trivial structure in the end - to - end response , due to the complex interactions between the handle normal modes and the fluctuations of the trapped bead ( see si for more details ) . of an optical tweezer system with the protein modeled as a single parabolic potential well ( , ) .symbols are simulation results , and the solid line is the theoretical prediction , based on the convolution of the protein response with the hb response functions of fig . [ dhb](d ) according to eq . .for comparison , ( dashed line ) and ( dot - dashed line ) are also included .insets : to show the sensitivity of the theoretical fitting , zoomed - in sections of near the maxima of the real ( top ) and imaginary ( bottom ) components . both simulation ( symbols ) and theoretical ( blue / red curve ) results are plotted .the thin pink / cyan curves are theoretical results with different from the true value : from left to right , , , , . ] the double - hb end - to - end distribution in fig .[ dhb](b ) and the hb response functions in fig .[ dhb](d ) are all we need to know about the optical tweezer system : the equilibrium end - to - end distribution and linear response of any object which we now put between the handles can be reconstructed .we will illustrate this using a toy model of a protein .in our simulations for the * second step * , we use a parabolic potential with , and a fixed mobility . here nm and , where is the viscosity of water .this leads to the single - lorentzian response function of eq . .if this exact theoretical form of is convolved with the hb response functions from the first step according to eq . , we get the result in fig .[ sw ] : very close agreement with directly derived from the simulated time series data . 
for comparison we also plot the separate end - to - end responses of the protein alone ( p ) and the double - hb setup without a protein ( 2hb ) . as expected , the total response differs substantially from both of these , as correctly predicted by the convolution theory . the effect of adding handles and beads to the protein is to shift the peak in the imaginary part of the total system response to lower frequencies . additionally , we see in the contributions of the handle and bead rotational motions , which are dominant at higher frequencies . the sensitivity of the theoretical fit is shown in the insets of fig . [ sw ] : zooming in on the maxima of and , we plot the true theoretical prediction ( red / blue curves ) and results with shifted away from the true value ( thin pink / cyan curves ) . in fact if and are taken as free parameters , numerical fitting to the simulation yields accurate values of : and . examples of successful deconvolution with other values of the intrinsic protein parameters are given in the double - well free energy analysis in the si . in practice , any theoretical approach must take into consideration instrumental limitations : most significantly , there will be a minimum possible interval between data collections , related to the time resolution of the measuring equipment . the deconvolution theory can always be applied in the frequency range up to . whatever physical features of any component in the system fall within this range can be modeled and extracted , without requiring inaccessible knowledge of fluctuation modes above the frequency cutoff . in the si , we illustrate this directly on the toy protein discussed above , coarse - graining the simulation time series to 0.01 ms intervals , mimicking the equipment resolution used in ref . the characteristic frequency of the protein within the tweezer setup falls within the cutoff , and hence our two - step deconvolution procedure can still be applied to yield accurate best - fit results for the protein parameters . the si also includes a discussion of other experimental artifacts ( white noise , drift , and effective averaging of the time series on the time scale ) and shows how to adapt the procedure to correct for these effects . dynamic deconvolution theory allows us to extract the response functions of a single component from the overall response of a multicomposite network . the theory is most transparently formulated in the frequency domain , and provides the means to reverse the filtering influence of all elements that are connected to the component of interest . from the extracted single - component response function , dynamic properties such as the internal mobility or friction can be directly deduced .
at the heart of our theory stands the observation that the response of any component in the network is completely determined by three functions , namely the cross response and the two self responses , which are in general different at the two ends . the response of any network can be predicted by repeated iteration of our convolution formulas for serial and parallel connections . self - similar or more complicated network topologies , as occur in visco - elastic media , can thus be treated as well . we demonstrate the application of our deconvolution theory for a simple mechanical network that mimics a double - laser - tweezer setup , but the underlying idea is directly analogous to the signal processing rules which describe other scalar dynamic networks , such as electrical circuits or chemical reaction pathways in systems biology . we finally point out that dynamic convolution also occurs in fret experiments on proteins where polymeric linkers and conformational fluctuations of fluorophores , as well as the internal fluorescence dynamics , modify the measured dynamic fluctuation spectrum . the experimental challenge in the future will thus be to generate time - series data for single biomolecules with a sufficient frequency range in order to perform an accurate deconvolution . for this a careful matching of the relevant time and spatial scales of the biomolecule under study and the corresponding scales of the measuring device ( handles as well as beads ) is crucial , for which our theory provides the necessary guidance . a key step in the experimental analysis is to obtain the system response functions and from the raw data ( which can either be the double handle - bead system with or without a protein ) . this data consists of two time series and for the left / right bead positions from which we calculate the mean square displacement ( msd ) functions where , and we have averaged the self msd of the two endpoints because they are identical by symmetry . calculating the msd functions is equivalent to finding the autocorrelation of the time series : for example , if is the end - to - end autocorrelation , the msd is simply given by . from the fluctuation - dissipation theorem , the time - domain response functions and are related to the derivatives of the msd functions : , , where . to get the fourier - space response , the time - domain functions can be numerically fit to a multi - exponential form , for example . in our simulation examples typically 4 - 5 exponentials are needed for a reasonable fit . once the parameters and are determined , the expression can be exactly fourier - transformed to give the frequency - domain response function , . an analogous procedure is used to obtain . the cross response follows as . the power spectrum associated with a particular type of fluctuation , for example the end - to - end spectrum ( defined as the fourier transform of the autocorrelation ) , is just proportional to the imaginary part of the corresponding response function : . the response functions of the beads in the optical traps are the easiest to characterize , since they depend on quantities which are all known by the experimentalist : the trap stiffness , bead radius , mobility , and rotational mobility . here is the viscosity of water .
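before turning to the individual component forms , the pipeline just described ( msd from the bead time series , multi - exponential fit of the time - domain response , analytic fourier transform ) can be sketched as follows . the 1/(2 k_b t ) prefactor in the fluctuation - dissipation relation and the sum - of - decaying - exponentials fitting form are assumptions , since the explicit expressions are elided in this excerpt .

```python
import numpy as np
from scipy.optimize import curve_fit

def msd_end_to_end(z_left, z_right, max_lag):
    """End-to-end mean square displacement from the two bead trajectories."""
    z = z_right - z_left
    return np.array([0.0 if lag == 0 else np.mean((z[lag:] - z[:-lag]) ** 2)
                     for lag in range(max_lag)])

def multi_exp(t, *params):
    """Sum of decaying exponentials c_n * exp(-lambda_n * t): the assumed
    fitting form for the time-domain response."""
    c, lam = np.array(params[0::2]), np.array(params[1::2])
    return np.sum(c[:, None] * np.exp(-lam[:, None] * t[None, :]), axis=0)

def fit_time_domain_response(times, msd, kBT, n_exp=4):
    """chi(t) taken as d(MSD)/dt / (2 kBT); the prefactor is an assumption."""
    chi_t = np.gradient(msd, times) / (2.0 * kBT)
    p0 = [max(chi_t[1], 1e-12), 1.0 / times[-1]] * n_exp
    popt, _ = curve_fit(multi_exp, times[1:], chi_t[1:], p0=p0, maxfev=20000)
    return popt

def chi_of_omega(omega, popt):
    """Analytic one-sided Fourier transform of the multi-exponential fit;
    the imaginary part is proportional to the power spectrum."""
    c, lam = popt[0::2], popt[1::2]
    return sum(cn / (ln - 1j * omega) for cn, ln in zip(c, lam))
```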
for each bead the three response functions can be defined as described above , with the two `` endpoints '' being the handle - attachment point on the bead surface ( ) and the bead center ( ) . the latter point is significant because this position is what is directly measured by the experiment . for the case of large , where the rotational diffusion of the bead is confined to small angles away from the axis , the response functions are : the second term in describes the contribution of the bead rotational motion , which has a characteristic relaxation frequency . this term is derived in the si , though it can also be found from an earlier theory of rotational brownian diffusion in uniaxial liquid crystals . the double - stranded dna handles are semiflexible polymers whose fluctuation behavior in equilibrium can be decomposed into normal modes . we do not need the precise details of this decomposition , beyond the fact that by symmetry these modes can be grouped into even and odd functions of the polymer contour length , and that they are related to the linear response of the polymer through the fluctuation - dissipation theorem . from these assumptions , the handle response functions have the following generic form ( a fuller description can be found in the si ) : for some set of parameters . note that since the handles are symmetric objects , the self response of each endpoint is the same function . the mobilities and elastic coefficients encode the normal mode characteristics , with the mode relaxation times ordered from largest ( ) to smallest ( ) . the parameter is the center - of - mass mobility of the handle along the force direction . simple scaling expressions for the zeroth and first mode parameters in terms of physical polymer parameters , as well as the connection between the expressions in eq . and eq . , are given in the si . ( these same expressions , with a smaller , could describe a completely unfolded , non - interacting , polypeptide chain at high force . ) in practice , the high - frequency cutoff can be kept quite small ( i.e. ) to describe the system over the frequency range of interest . care must be taken in manipulating fourier - space relationships like eq . . directly inverting such equations generally leads to numerical instabilities due to noise and singularities . in our case , we can avoid direct inversion because the forms of the component functions are known beforehand ( i.e. eq . for the beads , eq . for the handles , eq . for the protein ) . thus when we model the response of the double handle - bead system with a protein , , we end up through eqs . and with some theoretical function where is the set of unknown parameters related to the components . since is known as a function of from the experimental time series , we find the unknown parameters by minimizing a goodness - of - fit function given by the sum of the squared differences between the logarithms of the theoretical and experimental response functions , evaluated over a logarithmically spaced set of frequencies up to the cutoff frequency determined by the time resolution of the measuring equipment . this is equivalent to simultaneously fitting the real and imaginary parts of our system response on a log - log scale .
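a minimal sketch of the fitting step just described : the model response is compared to the measured one on a logarithmically spaced frequency grid and the squared difference of complex logarithms is minimised , which fits magnitude and phase ( and hence , effectively , the real and imaginary parts on a log - log scale ) simultaneously . the single - lorentzian protein form , the concrete objective , and all numerical values are assumptions consistent with the description , not the paper's exact equations .

```python
import numpy as np
from scipy.optimize import minimize

def lorentzian_protein(omega, mu_p, k_p):
    """Assumed single-Lorentzian end-to-end response of a harmonic well with
    constant mobility (the paper's explicit equation is not reproduced here)."""
    return mu_p / (k_p * mu_p - 1j * omega)

def loglog_fit(omega, chi_measured, model, p0):
    """Least-squares fit of the complex logarithms over a log-spaced frequency
    grid, i.e. a simultaneous log-log fit of the response."""
    def cost(params):
        return np.sum(np.abs(np.log(model(omega, *params)) - np.log(chi_measured)) ** 2)
    return minimize(cost, p0, method="Nelder-Mead").x

# usage sketch with synthetic "data" and placeholder parameter values
omega = np.logspace(3, 7, 60)                      # rad/s, below the instrument cutoff
chi_meas = lorentzian_protein(omega, 2.0e6, 1.0e-2)
chi_meas *= 1.0 + 0.02 * np.random.randn(omega.size)
mu_fit, k_fit = loglog_fit(omega, chi_meas, lorentzian_protein, p0=[1.0e6, 2.0e-2])
print(mu_fit, k_fit)
```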
in our brownian dynamics simulationseach handle is a semiflexible bead - spring chain of 25 beads of radius nm , every bead having mobility .the handle persistence length is .the harmonic springs used to connect all components together ( including the beads making up the handles ) have stiffness .the beads have radius , and the traps have strength , which corresponds to 0.01 pn / nm .the traps are positioned such that the average force in equilibrium pn .+ to capture the essential features of protein dynamics , we construct a simple toy model .the protein is characterized by two vectors : a center - of - mass position and an end - to - end separation . both and the transverse components of standard langevin dynamics with a mobility .the internal dynamics of protein fluctuations is modeled through the longitudinal end - to - end component , subject to a potential and a mobility .the transverse components feel a harmonic potential with spring constant .+ the simulation dynamics are implemented through a discretized free - draining langevin equation with time step , where the time unit .data is collected every 1000 time steps .typical simulation times are steps , with averages collected from independent runs for each system considered .this work was supported by deutsche forschungsgemeinschaft ( dfg ) within grant sfb 863 .we thank the feza grsey institute for computational resources provided by the gilgamesh cluster .10 carrion - vazquez m , et al .( 1999 ) mechanical and chemical unfolding of a single protein : a comparison .u. s. a. _ 96:36943699 .oesterhelt f , et al .( 2000 ) unfolding pathways of individual bacteriorhodopsins ._ science _ 288:143146 .wiita ap , et al .( 2007 ) probing the chemistry of thioredoxin catalysis with force . _nature _ 450:124130 .khatri bs , kawakami m , byrne k , smith da , mcleish tcb ( 2007 ) entropy and barrier - controlled fluctuations determine conformational viscoelasticity of single biomolecules ._ biophys . j. _ 92:18251835 .greene dn , et al .( 2008 ) single - molecule force spectroscopy reveals a stepwise unfolding of caenorhabditis elegans giant protein kinase domains ._ biophys . j. _ 95:13601370 .puchner em , et al .( 2008 ) mechanoenzymatics of titin kinase .u. s. a. _ 105:1338513390 .junker jp , ziegler f , rief m ( 2009 ) ligand - dependent equilibrium fluctuations of single calmodulin molecules ._ science _ 323:633637 .cecconi c , shank ea , bustamante c , marqusee s ( 2005 ) direct observation of the three - state folding of a single protein molecule . _ science _ 309:20572060 .woodside mt , et al .( 2006 ) direct measurement of the full , sequence - dependent folding landscape of a nucleic acid ._ science _ 314:10011004 .woodside mt , et al .( 2006 ) nanomechanical measurements of the sequence - dependent folding landscapes of single nucleic acid hairpins .u. s. a. _ 103:61906195 .greenleaf wj , frieda kl , foster dan , woodside mt , block sm ( 2008 ) direct observation of hierarchical folding in single riboswitch aptamers ._ science _ 319:630633 .chen yf , blab ga , meiners jc ( 2009 ) stretching submicron biomolecules with constant - force axial optical tweezers ._ biophys . j. _ 96:47014708 .gebhardt jcm , bornschlgl t , rief m ( 2010 ) full distance - resolved folding energy landscape of one single protein molecule .u. s. a. _ 107:20132018 .hyeon c , morrison g , thirumalai d ( 2008 ) force - dependent hopping rates of rna hairpins can be estimated from accurate measurement of the folding landscapes .u. s. a. 
_ 105:96049609 .manosas m , ritort f ( 2005 ) thermodynamic and kinetic aspects of rna pulling experiments ._ biophys .j. _ 88:32243242 .dudko ok , hummer g , szabo a ( 2006 ) intrinsic rates and activation free energies from single - molecule pulling experiments ._ 96:108101 .manosas m , et al .( 2007 ) force unfolding kinetics of rna using optical tweezers .ii . modeling experiments ._ biophys . j. _ 92:30103021 .dudko ok , hummer g , szabo a ( 2008 ) theory , analysis , and interpretation of single - molecule force spectroscopy experiments .u. s. a. _ 105:1575515760 .lang mj , asbury cl , shaevitz jw , block sm ( 2002 ) an automated two - dimensional optical force clamp for single molecule studies ._ biophys .j. _ 83:491501 .nambiar r , gajraj a , meiners jc ( 2004 ) all - optical constant - force laser tweezers ._ biophys .j. _ 87:19721980 .greenleaf wj , woodside mt , abbondanzieri ea , block sm ( 2005 ) passive all - optical force clamp for high - resolution laser trapping .lett . _ 95:208102 .cellmer t , henry er , hofrichter j , eaton wa ( 2008 ) measuring internal friction of an ultrafast - folding protein .u. s. a. _ 105:1832018325 .best rb , hummer g ( 2010 ) coordinate - dependent diffusion in protein folding .u. s. a. _ 107:10881093 .hinczewski m , von hansen y , dzubiella j , netz rr ( 2010 ) how the diffusivity profile reduces the arbitrariness of protein folding free energies ._ j. chem ._ 132:245103 .landau l , lifshitz e ( 1980 ) _ statistical physics , part 1 _ ( pergamon press , oxford ) .oppenheim av , willsky as , nawab sh ( 1996 ) _ signals & systems ( 2nd ed . ) _( prentice - hall , upper saddle river , nj , usa ) .muzzey d , gomez - uribe ca , mettetal jt , van oudenaarden a ( 2009 ) a systems - level analysis of perfect adaptation in yeast osmoregulation ._ cell _ 138:160171 .gopich iv , szabo a ( 2009 ) decoding the pattern of photon colors in single - molecule fret ._ j. phys .b _ 113:1096510973 .chung hs , louis jm , eaton wa ( 2010 ) distinguishing between protein dynamics and dye photophysics in single - molecule fret experiments ._ biophys . j. _ 98:696706 .chung hs , et al .( 2011 ) extracting rate coefficients from single - molecule photon trajectories and fret efficiency histograms for a fast - folding protein ._ j. phys .a _ 115:36423656 .szabo a ( 1980 ) theory of polarized fluorescent emission in uniaxial liquid - crystals ._ j. chem ._ 72:46204626 . * supplementary material for `` deconvolution of dynamic mechanical networks '' +consider a system under tension consisting of two objects x and y connected by a spring of stiffness , as shown in fig .1(c ) in the main text .the left and right equilibrium endpoint positions are , respectively for x , and , for y. imagine applying a small additional oscillatory force at along the direction . in the long time limit, every endpoint coordinate in the system would exhibit oscillations of the form with some amplitude , , .let us also denote the instantaneous force exerted by the connecting spring as . in terms of and the four amplitudes , we can write down five equations based on the definitions of the self / cross response functions for x and y : where the dependence of the response functions is implicit . we would like to relate the x and y response functions to those of the full system . since the external force perturbation is applied at the x end of the system , by definition , . solving for and from eq . 
, we find the following expressions for the full system response functions in the limit : if the force perturbation is applied at yr instead of xl , an analogous derivation yields the remaining self response function : by iterating eqs . - , reducing each pair of components into a single composite object , we can readily obtain the convolution equations for an arbitrary number of components in series .though the systems described in the main text consist of multiple objects connected in series , one can extend the theory to cases where parallel connections are also present . the most significant experimental realization of such connections are objects coupled through long - range hydrodynamic interactions . to make our convolution theory comprehensive ,in this section we will derive the rules for parallel pathways , and then show in particular how they can be used to treat hydrodynamics .following the notation of si sec .[ series ] , consider two objects x and y in parallel ( in other words , sharing the same left and right end - points ) . for object ,let and be the forces that need to be applied on that object at the left and right ends in order that the end - points exhibit oscillations and .the relationship between the end - point force and oscillation amplitudes is given by : since the objects are connected in parallel , we know that and . hence we can write eq . as a matrix equation , where : looking at the full system of two objects together , the total forces on the left and right ends required for a particular set of oscillation amplitudes and are additive : , where .if we define the inverse response matrix , , then and we obtain : this is the main result for convolving parallel pathways : the inverse response matrices of each pathway additively combine to yield the total inverse response matrix .let us label the components of the inverse response matrix as follows : then since , we can solve for the xy response functions in terms of the components of : eqs . - constitute the complete rules for dynamic convolution of two objects in parallel . as a simple , realistic application of the above parallel convolution rules , consider the following setup , which constitutes two parallel pathways : a ) two beads of self - mobility trapped in optical tweezers of strength , interacting through long - range hydrodynamics ; b ) a dna handle which connects the two beads .( the handle can actually be any composite object in series , for example two dna strands on either side of a protein , like in the main text . )though in principle all the objects in the system are coupled hydrodynamically to each other , we will consider only a single pairwise interaction : between the two beads .since the beads are the objects with the largest drag , this is by far the most important interaction for the practical purposes of analyzing experimental data . though the rotational degrees of freedom for the beads can be incorporated in the theory, we will ignore them in this example , since they give only a small correction at large forces .thus the center - of - mass of bead , and the handle end - point attached to that bead , will have the same oscillation amplitude : , . to get the total system behavior , we start by looking at each pathway independently , and calculating the associated inverse response matrix .if we assume the handle is symmetric with end - point response functions and , the matrix for the handle pathway follows from eq . 
: for the bead pathway , we can derive starting from the equations of motion of the two trapped beads in the presence of external force amplitudes and acting on the left and right beads : here is the _ total _ force amplitude on the bead , including the contribution of the trap force and the external force : .the terms and are respectively the diagonal and off - diagonal components of the two - bead hydrodynamic mobility tensor . is the bead self - mobility , and is the cross mobility , describing the long - range hydrodynamic coupling between the beads . we assume both of these mobilities are roughly constant .( the justification for this approximation is given at the end of this section , along with full expressions for and in a typical experimental setup . )we can solve eq . for , expressing it in the form , where : the full system inverse response matrix is .starting with eq . for and eq . for , we can derive the components of the total response from eq . .to complete the description of this system , we need expressions for and . in equilibrium ,the beads have some average center - to - center separation , and for added realism , we will consider the bead centers to be a height above a surface ( i.e. the microscope slide ) . in a typical setup and be on the order of the bead radius , resulting in non - negligible modifications to each bead s self - mobility from both the presence of the wall ( assumed to be a no - slip boundary ) , and the presence of the other bead .though the mobilities will instantaneously depend on the exact positions of the beads relative to themselves and the wall , we will use the mobility values at the equilibrium positions , since the amplitude of the bead fluctuations in the traps is small compared to and .hence the mobilities will be functions of , , and .the bead self mobility can be estimated as : where is the self - mobility of a bead alone in an unbounded fluid .the first parenthesis is the correction due to the wall , and the second parenthesis is the correction due to the other bead .finally , for we use the corresponding component of the blake tensor ( at the rotne - prager level ) , which describes the cross mobility of two beads moving parallel to a no - slip boundary : .\end{split}\ ] ]in the absence of bead rotation ( where the bead only has center - of - mass translational degrees of freedom ) , the self response functions of both the bead center ( ) and and handle - attachment point on the bead surface ( ) are identical : they describe a sphere with translational mobility in an optical trap of strength , namely .( here , as in the main text , we ignore self - mobility corrections due to the microscope slide surface or the presence of the other bead ; if desired , these can be simply incorporated as described in the previous section . ) similarly , since the bead is a rigid body and any force perturbation at the bead center is directly communicated to the bead surface , is equal to .the situation gets more complicated when the rotational degrees of freedom are included , but this involves only the surface response function , which now has to include an extra term to account for the rotation of , along with the translational motion of the bead itself . to derive the rotational response ( i.e. 
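to close the parallel - pathway discussion , the rule derived above ( the inverse response matrices of the individual pathways add ) can be sketched directly with 2x2 matrices . the identification of the matrix entries with the self and cross response functions follows from the response - function definitions earlier in the text ; the complex values in the usage lines are placeholders , not data .

```python
import numpy as np

def response_matrix(chi_self_l, chi_self_r, chi_cross):
    """2x2 matrix relating the forces applied at the two end-points to their
    oscillation amplitudes, built from the self and cross response functions."""
    return np.array([[chi_self_l, chi_cross],
                     [chi_cross, chi_self_r]])

def combine_parallel(*pathways):
    """Parallel convolution rule: the inverse response matrices of the
    individual pathways add; the composite response is the inverse of the sum."""
    a_total = sum(np.linalg.inv(response_matrix(*p)) for p in pathways)
    chi = np.linalg.inv(a_total)
    return chi[0, 0], chi[1, 1], chi[0, 1]     # composite (self left, self right, cross)

# usage at a single frequency, e.g. a handle pathway in parallel with the
# hydrodynamic bead-bead pathway discussed above (illustrative values only)
handle_path = (2.0e-7 - 1.0e-7j, 2.0e-7 - 1.0e-7j, 1.0e-7 - 0.5e-7j)
bead_path = (5.0e-7 - 2.0e-7j, 5.0e-7 - 2.0e-7j, 1.0e-8 - 0.5e-8j)
print(combine_parallel(handle_path, bead_path))
```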
the second term of in eq .[ 6 ] of the main text ) we take advantage of the fluctuation - dissipation theorem .if is the vector connecting the bead center to the handle - attachment point , such that , and the corresponding unit vector , then we would like to calculate the time correlation function in the case where is subject to a constant force along the axis . here refers to the equilibrium average of . for the purposes of analyzing the optical tweezer experiment, we consider only the large force case , where , and thus is in the vicinity of 1 .the fluctuation - dissipation theorem states that the time - domain rotational response function is equal to , where , which we can fourier transform to get the frequency - domain response .the key to calculating is to note that the equilibrium fluctuations of correspond to diffusion on the surface of a unit sphere subject to the external potential .let us define the green s function as the conditional probability of ending up at at time , given an initial position at , or equivalently .this green s function satisfies the smoluchowski equation for rotational diffusion : where .the correlation can be expressed in terms of as : where is the equilibrium probability distribution of , with normalization constant . taking the time derivative of both sides of eq . , and substituting the right - hand side of eq . for , one can derive the following relation describing the time evolution of : here is the higher order correlation function , where is the second - order legendre polynomial .in fact , can itself be expressed in terms of even higher order correlation functions , and this procedure can be iterated to form an infinite set of linear differential equations . rather than formally solving this set , which is possible but tedious , we employ the effective relaxation time approximation , where is assumed to be dominated by a single exponential decay with relaxation time , namely .the coefficient , and can be evaluated using the equilibrium distribution , yielding for large .using eq . , the relaxation time can be written as : .\ ] ] like , the appearing on the right of eq .is an equilibrium average determined through : in the large limit . since in this limit the first term in the brackets in eq .is negligible compared to the second , we ultimately find .( the same approximate expression for the relaxation time can also be derived from a theory describing rotational brownian diffusion in uniaxial liquid crystals . )the rotational response is the fourier transform of : can use standard ideas from the theory of polymer dynamics to derive a simple fitting form for the handle response functions . consider a handle in isolation at equilibrium under constant tension , described by a continuous space curve with component at time and contour coordinate , where runs from 0 to . without deriving a detailed dynamical theory ,one can still make a few generic assumptions about the behavior of .the first is that it can be decomposed into a sum over normal modes in the following way : here the first term is some reference contour , and the second term represents deviations from that reference contour , where is the coefficient of the normal mode . for conveniencewe set the reference contour , the thermodynamic average over all polymer configurations in equilibrium .the dependence of incorporates the center - of - mass motion of the polymer , so that , where is the center - of - mass diffusion constant . 
with this choice of reference contour , . the second assumption is that the equilibrium time correlations of the normal mode coefficients have the form , for , where is some constant , and is the inverse relaxation time of the normal mode . this type of exponentially decaying normal mode correlation appears in dynamical theories for many types of polymers : flexible chains , nearly rigid rods , mean - field approximations for semiflexible chains . by convention , we order the normal modes such that increases with , i.e. corresponds to the largest relaxation time . the third and final assumption is that due to the symmetry of the handle ( the two end - points are equivalent ) , the normal modes can be grouped into even and odd functions of around the center , such that . putting all these properties together , we can derive expressions for the msd of a handle end - point , , and the cross msd , : from the fluctuation - dissipation theorem , the time - domain response functions are related to the msds through : , , . taking the fourier transforms of yields the handle fitting forms shown in eq . [ 7 ] of the main text : where , , , . in calculating the fourier transforms , we note that all terms in eq . are implicitly multiplied by the unit step function , since the msd functions are defined only for . as a demonstration of the handle fitting forms , consider the simplest case , where a handle consists of two spheres , like in the response function example preceding eq . [ 1 ] in the main text . the functions in eq . [ 1 ] can be rewritten in terms of a center - of - mass and one normal mode contribution , as expected for a single - spring system : in the symmetric case where , these response functions have exactly the same form as eq . [ eqh4 ] , with , . for polymer handles used in experiments there will be contributions from a spectrum of normal modes . however the response functions are still dominated by the center - of - mass and lowest - frequency normal mode terms . hence for the purpose of estimating the linear response characteristics of a given setup , we provide simple scaling expressions for the parameters , , and in the case of a handle which is a semiflexible polymer of contour length and persistence length ( i.e. double - stranded dna ) . by analogy to the two - sphere example above , is just the center - of - mass mobility of the handle along the stretching direction , and , where is the effective spring constant of the handle . for large forces , where the handle is near maximum extension , we can approximate it as a thin rod , for which the mobility along the long axis is given by to leading order . here is the viscosity of water and the diameter of the handle . hence . to get the effective spring constant , the starting point is the approximate marko - siggia interpolation formula relating the tension felt by the handle to its average end - to - end extension along the axis : \[ f = \frac{k_b t}{l_p}\left[\frac{1}{4\left(1 - z_\text{ee}/l\right)^{2}} - \frac{1}{4} + \frac{z_\text{ee}}{l}\right] . \] since the force magnitude is related to the polymer free energy through , and the effective handle spring constant , we can estimate from eq . : \[ \frac{\partial f}{\partial z_\text{ee}} = \frac{k_b t}{l_p l}\left[1 + 4\left(\frac{l_p f}{k_b t} - \frac{z_\text{ee}}{l} + \frac{1}{4}\right)^{3/2}\right] \approx \frac{4 k_b t}{l_p l}\left(\frac{l_p f}{k_b t}\right)^{3/2} , \] where in the last step we have used the fact that for the cases we consider , i.e. pn for double - stranded dna , where nm .
since , we can derive the following expression for the longest relaxation time of the handle end - to - end fluctuations along the force direction ( in other words the relaxation time of the first normal mode ) : up to a constant prefactor , this is the large force limiting case of an earlier scaling expression for the longitudinal relaxation time that has been verified through optical tweezer experiments on single dna molecules . with the parameter values nm , k , and mpa , we can estimate a typical relaxation time for dna handles where nm and : . for the specific case of the brownian dynamics simulations , where the handles are somewhat shorter at , the scaling argument predicts ns , comparable to the numerically fitted value ns , where ns . in fig .2(d ) in the main text , the frequency scale s is where we see the clear divergence between the handle - end and bead - end hb self response functions and .this is related to the fact that the handle fluctuations become the dominant contribution for the hb object above this frequency scale .( given the bead radius and trap strength , the characteristic bead frequency scale s is much lower . ) as a general design principle , experiments will have the best signal - to - noise characteristics when there is a good separation of scales between the handle , protein , and bead characteristic frequencies .the example in the main text satisfies this idea , since the protein characteristic frequency s is distinct from the ranges of the beads and handles .as described in the main text , the hb response functions reflect the contributions of their handle and bead components . to elucidate this , we plot in fig .[ hb ] the hb functions from fig .2 , together with the individual h and b functions .as expected , the hb self response at the h end is very similar to the individual h response , and at the b end it is close to the individual b response . for the hb end - to - end function ,only comparison with the handle component is possible ( since the bead end - to - end response is zero ) .again the two functions are similar , but clearly adding the bead onto the handle perturbs its end - to - end response , shifting it to lower frequencies and changing its shape .the way in which the components contribute to the response of the total object is not merely additive , and requires the convolution rules in order to be to accurately predicted .as a function of end - to - end distance , from eq . .the potential as shown is tilted by the external stretching force .( b ) the protein mobility profile .( c ) formal linear response result for based on a numerical fokker - planck solution using the full profiles in ( a ) and ( b ) .inset : ( red curve ) , with analytical estimates for the three contributions to the protein response ( orange ) .the sum of these analytical estimates is shown as a dashed black curve .( d ) fragment of time series for , with the portions assigned to either the folded or unfolded wells colored red and blue respectively .the remainder is shaded gray .( e ) the full system free energy profile calculated from the simulation ( dark purple ) , together with the protein - only result ( light purple ) obtained by static deconvolution using the 2hb distribution in fig .2(b ) of the main text .this agrees with the original theoretical form , eq . ,shown as a dashed line .( f ) for the time series in each well , the corresponding full system potential ( blue / red lines ) and the deconvolved result ( cyan / orange lines ) , . 
] instead of the single - well protein of the main text , we start with a double - well potential shown in fig .[ dw](a ) .this potential arises from the force : where the parameters , , , , , .the energy profile plotted in fig .[ dw](a ) has been additionally tilted by a contribution representing the equilibrium stretching at constant force . to mimic the effects of internal friction on the protein dynamics , we allow for a coordinate - dependent mobility , which is plotted in fig .[ dw](b ) . in the leftwell ( folded state ) , while in the right well we have a higher value ( unfolded state ) , with a linear transition of width around the energy barrier .for real proteins , larger internal friction in the folded state may occur due to a greater proportion of intact native bonds compared to unfolded configurations .formally , the protein end - to - end response may be derived for this double - well potential using a numerical solution of the corresponding fokker - planck equation , based on the method of bicout and szabo .the numerical procedure consists of the following steps .we discretize our potential over values , , in some range to .the bin size .define local equilibrium probabilities , where .transition frequencies of going from bin to bin are given by : reflective boundary conditions at the ends of our range are imposed by defining .the symmetric rate matrix has the nonzero elements : the eigenvalues and eigenvectors of this matrix satisfy an equation of the form : , where the index and we arrange the eigenvalues in increasing order . with reflective boundary conditions at either end ,the smallest eigenvalue is always .the correlation function for the protein is then given by , where the coefficients are : here is the component of the eigenvector .finally , from the fluctuation - dissipation theorem , the time - domain response function is related to through .after a fourier transform , we get the response function : where all the parameters are known numerically .this response function is shown in fig .[ dw](c ) .its structure can actually be decomposed into three contributions , as shown in the inset : two correspond to fluctuations of the protein about the local minima ; the third , at lower frequencies , corresponds to transitions between the wells .the orange curves in the inset are simple analytical predictions for these contributions : the two intrawell peaks at higher frequencies are just single lorentzians with the appropriate values of and for each well , and the interwell transition peak is a based on a discrete two - state description .though approximate , the analytical sum ( dashed line ) captures well the exact numerical result ( red line ) . however if we go beyond this and try to naively apply the convolution theory on the of fig .[ dw](c ) , the results for the total system will deviate from the actual behavior as seen in simulations .this is not surprising , since the ( large amplitude , low frequency ) transitions between wells are a fundamentally nonlinear process , and thus violate the linear response assumption on which our theory is based . 
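the rate-matrix construction described above condenses into a few lines of numpy; the particular detailed-balance splitting of the hopping rates used below is a standard choice that we assume, since the explicit rate expressions are not shown in the text, and all names are ours rather than the paper's.

```python
import numpy as np

def relaxation_modes(x, U, D, beta=1.0):
    """Discretized 1-D Smoluchowski operator in the spirit of Bicout and Szabo.
    x : uniformly spaced bin centres; U, D : potential and diffusivity at x.
    Returns rates lam_k > 0 and amplitudes a_k such that the end-to-end
    autocorrelation is approximately sum_k a_k * exp(-lam_k * t)."""
    dx = x[1] - x[0]
    n = len(x)
    peq = np.exp(-beta * (U - U.min()))
    peq /= peq.sum()                                      # local equilibrium weights

    Dmid = 0.5 * (D[1:] + D[:-1])                         # mid-bin diffusivities
    w_up = (Dmid / dx**2) * np.sqrt(peq[1:] / peq[:-1])   # hop rate i -> i+1
    w_dn = (Dmid / dx**2) * np.sqrt(peq[:-1] / peq[1:])   # hop rate i+1 -> i

    S = np.zeros((n, n))                                  # symmetrized rate matrix
    i = np.arange(n - 1)
    S[i, i + 1] = S[i + 1, i] = np.sqrt(w_up * w_dn)
    S[i, i] -= w_up                                       # loss to the right neighbour
    S[i + 1, i + 1] -= w_dn                               # loss to the left neighbour
    # reflecting boundaries: the first and last bin simply lack one loss term

    lam, psi = np.linalg.eigh(-S)                         # lam[0] ~ 0 (equilibrium mode)
    proj = psi.T @ (x * np.sqrt(peq))                     # overlap of x with each mode
    return lam[1:], proj[1:] ** 2
```

for the double-well example one would tabulate the tilted potential and the two-level mobility profile on a fine grid spanning both wells; a fourier transform of the resulting sum of decaying exponentials (multiplied by the unit step function) then yields the response function through the fluctuation-dissipation theorem, as described in the text.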
to correctly apply our analysis in this situation, we should focus on the linear response of the system within each state , where the protein fluctuates about the local minimum : in essence , we break the problem into two single - well systems .to implement this approach , we need to extract `` single - well '' time series from the full simulation data , a portion of which is shown in fig .[ dw](d ) .since what we measure is the end - to - end behavior of the total system , we choose the separation line between wells to lie at the barrier position in the full system free energy profile [ fig .[ dw](e ) ] .segments of the time series falling on either side of the separation line will be assigned to well 1 ( folded ) or well 2 ( unfolded ) , with the following exceptions : points that are within a certain time window of a transition ( i.e. a crossing of the separation line ) are excluded .setting to , longer than the relaxation times within the wells , this exclusion ensures that the resulting single - well time series are not significantly perturbed by memory effects from the transitions . note that times both before and after each transition are excluded in order to preserve the time - reversal symmetry of the equilibrium time series . for the time series in fig .[ dw](d ) , the parts assigned to well 1 and 2 are colored red and blue respectively , with the remainder shaded gray .the same method can be generalized to a more complicated free energy landscape , for example in a protein with intermediate states , to get a single - well time series corresponding to every state .the end - to - end distributions from the single - well series yield corresponding energy profiles and , shown in fig .[ dw](f ) . after static deconvolution with the double - hb distribution from fig .2(b ) , we get profiles in terms of the protein end - to - end distance : and . near the minima theseare identical to the original , but have anharmonic corrections as they approach infinity at the separation line .the dynamic deconvolution analysis for each well is analogous to the simple parabolic case described in the main text , except instead of the single lorentzian we will use the corrected based on the fokker - planck numerical solution with and .this yields a generalized form in each well : , with the multiple lorentzians accounting for the anharmonic corrections . for the parabolic case , , , and there were no other terms .given a for the well , the parameters are known from the fokker - planck solution .( in fact only the depend on the diffusion constant ; note that is an equilibrium average , so the must be independent of ) . using the double - well protein model .the ( a ) folded and ( b ) unfolded states are analyzed separately , with the simulation results plotted as symbols and the theory as solid lines .the latter uses the true values of in the folded state and in the unfolded state .blue / red results denote the re / im parts respectively . for comparison ,the protein response ( dashed line ) is also shown for each well , based on the local fokker - planck numerical solution .inset to ( b ) : for the unfolded state , a zoomed - in section of the peak , with simulation ( symbols ) and theoretical ( red curve ) results .the cyan curves are theoretical results with different from the true value : from left to right , , , , .,width=302 ] as in the single - well example , applying the convolution of with the hb response functions , and comparing with the simulation results , shows excellent agreement [ fig . 
[ dw2 ] ] . here the theoretical curves ( red lines ) for each well are based on setting to the values at the minima . for the unfolded state ,the inset in fig .[ dw2 ] shows how this theoretical result shifts if is displaced from the true value of .the sensitivity of the theoretical fitting allows one to numerically extract good estimates for in each well : ( folded ) and ( unfolded ) , where the exact values are and respectively .given the complications associated with excluding interwell hopping ( and the nonconstant diffusion profile across the barrier ) , it is remarkable that we can still fit to within of the true values .if one was interested in getting just an estimate of in each well , without necessarily getting a perfect fit for the total system response , the anharmonic corrections could be ignored and the single lorentzian form used instead of the exact in the fitting .the resulting values for have a comparable accuracy .practical implementation of the deconvolution theory on experimental time series must take into account possible errors due to instrumental limitations .we will focus on the optical tweezer apparatus in particular , since this is the example analyzed in the main text .however the techniques discussed below are applicable in a generic experimental setup . for optical tweezers , effects like drift and noise which enter into the measured outputare typically divided into `` experimental '' and `` brownian '' categories : the former arises from the instrumental components , while the latter is related to thermal fluctuations of the beads and the objects under study .equilibrium brownian noise is completely captured within the deconvolution theory : the linear response formalism provides a quantitative prediction of exactly how each thermally fluctuating part of the system will contribute to the total signal .we will thus concentrate on experimental effects , and outline how they enter into the theoretical deconvolution procedure . for a perfect instrument, the raw data collected by the experimentalist would be a `` true '' time series at discrete time steps , where refers to either one of the bead positions ( or ) , or the bead - bead separation ( ) .the sampling time is the time resolution of the apparatus . in any realistic scenario ,the actual measured time series is , where is an extraneous signal due to the instrument .we will consider two possible contributions to : a ) white noise , or any high - frequency random signal that is uncorrelated at time scales above ; b ) a low - frequency signal related to instrumental drift over an extended measurement period .the latter can result from environmental factors like slow temperature changes and air currents that affect the laser beam position . as a side note ,one of the advantages of the dual - trap setup illustrated in the text is a dramatic reduction of such drift effects , relative to earlier designs involving a single trap and biomolecules tethered to a surface .whereas the beam position of a single trap will move over time relative to the surface , the dual traps are created from a single laser , and hence any overall drift in the beam position will not affect the trap separation .( though smaller issues may exist , like the effective mobility of the beads shifting as the distance from the sample chamber surface varies . ) in any case , the low drift of the dual - trap setup allows for long , reliable data collection periods ( i.e. on the order of a minute in the leucine zipper study of ref . 
) .the final experimental effect we will describe in this section relates to the time resolution : the sampling period can not be made smaller than a certain value , dependent on the instrumentation .not only is the frequency of data collection limited , but because the electronics have a finite response time , there will also effectively be some kind of averaging of the true signal across the time window between measurements .both of these effects can be incorporated into the deconvolution analysis , and we will illustrate this concretely through the toy model protein simulation data discussed in the main text .these simulations had data collection at intervals of ns , in order to clearly illustrate that the theory is valid over a wide frequency regime encompassing the characteristic fluctuations of all the system components .however we can mimic the experimental apparatus by resampling and averaging the data over windows of size ms ( the resolution of the gebhardt _ et . setup ) .the deconvolution procedure works for the reduced frequency range ( up to ) , and the main quantity of interest ( the protein diffusivity ) can still be extracted with good accuracy .the reason for this is that the protein characteristic frequency in our example ( after being slowed down by the handles and beads ) falls under the cutoff . in general ,any component fluctuation modes with frequencies can be deconvolved using the theory , even though we lack information about higher frequency modes above the cutoff . though we may not be able to directly observe the modes of certain objects like the short dna handles whose characteristic frequencies are does not impede us from exploiting the full physical content of the time series in the frequency range below .consider the situation where our time series is contaminated by a gaussian white noise signal , with zero mean and correlation .we will also assume that the white noise is to a good approximation not correlated with the true component signal : , .from the measured time series the main quantity which enters the deconvolution theory is the msd function : where is the true msd value .thus the white noise induces a uniform upward shift of the msd , which is irrelevant to the theoretical analysis , since we are interested only in the msd slope , which determines the linear response function .however this assumes that we can collect an infinite time series , reducing the statistical uncertainty of our expectation values to zero . in reality ,our sampling is carried out only times , over a finite time interval .our calculated for this time interval will differ from the result by some random amount characterized by a standard deviation .the standard error analysis for correlation functions ( which assumes gaussian - distributed fluctuations ) yields an approximate magnitude for , up to lowest order in : }.\ ] ] here is a numerical prefactor ( which may depend on ) , , and is an effective correlation time .since can be modeled as a sum of terms exponentially converging toward the long - time limit , the value of is on the order of the longest decay time .. 
allows us to see how to correct the effects of white noise : if in the absence of noise ( ) we could achieve a certain statistical uncertainty by taking steps , the same uncertainty could be obtained in the presence of white noise by taking a larger number of steps , .generally the larger the correlation time for the specific msd of interest , the longer the sampling time necessary to achieve a certain level of precision ( i.e. a self msd , involving the motion of the individual trapped beads , will typically have a slower relaxation time than an end - to - end msd ) .if the largest is ms ( an upper bound estimate for the dual - trap setup discussed in the main text ) , ms , and nm is the characteristic scale of the msd function , then a 1% precision , , requires a sampling time of s ( with no white noise ). the same precision with a noise strength of nm would need more sampling time .in contrast to the high frequency noise modeled above , imagine that we have a low frequency contamination induced by slow drift : for some constants and .the measured msd becomes : thus instead of reaching a plateau at large times , the msd continues to increase .if any deviation of this kind is observed in the experimental time series , the simplest solution is to fit the long - time msd , extract the functional form ( i.e. the slope ) , and subtract the drift contribution from to recover .alternatively , since the drift time scale is typically larger than any characteristic relaxation time in the system , , one could collect data from many short runs of length , where .in addition to the possibilities of noise and drift modifying the signal , we have to consider that the measuring apparatus will have some finite time resolution , defined as the interval between consecutive recordings of the data .the measured value , , even in the absence of noise contributions , only approximately corresponds to the instantaneous true value . because of the finite response time of the equipment, it is more realistic to model as some weighted average from the previous interval : for some weighting function . in the present discussionwe will ignore any contributions from white noise and drift in the time - averaged signal , since these can be corrected for in the same manner as described above. the measured autocorrelation function between two time points and is related to the true function as follows : where we introduce the combined weighting function : while the precise weighting function may vary depending on the apparatus ,a reasonable approximation is that , or that the signal is directly an average over the interval . plugging this into eq . ,with , we find : the autocorrelation enters the deconvolution analysis through the msd , and its derivative .if is expressed as a sum of decaying exponentials , , then this corresponds to with the coefficients . assuming this underlying exponential form , the relationship between the measured and actual can be calculated using eqs . 
and : thus the only modification is in the coefficient of each exponential term , , which gets a prefactor dependent on .the restriction to times comes from the fact that can not be measured for times smaller than the experimental resolution .the prefactor is only appreciably larger than 1 for relaxation times comparable to or smaller than .however , even for , which is roughly the smallest relaxation time we can realistically fit from the measured data , , so the averaging effect is modest .thus when we fit a sum of exponentials to the functions derived from the experimental time series , we are effectively calculating and . to recover the actual , we just divide out the prefactor : . in this way we correct for the distortion due to time averaging .( circles ) and ( squares ) calculated from coarse - grained simulation time series of the 2hb and 2hb+p systems respectively . the coarse - graining involved averaging over intervals of ms , and using the time series of mean values .the solid lines are the results of single - exponential fitting to the data .the original fit results based on the fine - grained time series , and , are shown as dashed lines for comparison .( b , c ) the thin dashed and solid lines are the theoretical predictions for the fourier - domain end - to - end responses and respectively ; panel ( b ) shows the real part , panel ( c ) the imaginary part .these are taken from fig . 3 in the main text .the thick solid lines are re and i m as determined from the time - domain best - fit in panel ( a ) , after correcting for the averaging effect .the vertical dotted line marks the frequency cutoff for the coarse - grained data . ] to illustrate the effects of the experimental time resolution , and how the above corrections would work in practice , we will redo the deconvolution analysis for the toy protein example described in the main text .however , instead of using the original simulation time series , whose data collection interval was 1.2 ns , we will average the data over windows of width ms , and use the time series of these mean values , spaced at intervals of ( with a total sampling time s ) .we thus roughly mimic the apparatus of ref .[ samp](a ) shows time - domain response functions derived from the coarse - grained data : , from the first deconvolution step , involving the beads and handles alone ; , from the second step , where the toy protein is included in the system .the fits , drawn as solid lines , involve only single exponentials , since these are sufficient to capture the behaviors in the restricted time range . from the fouriertransform of the fit results for ( with the correction ) we can get the true frequency space linear response of the 2hb+p system , .the real and imaginary parts of this response are drawn as thick solid lines in fig .[ samp](b ) and ( c ) respectively .as expected , we can calculate this response only up to the cutoff frequency of the measuring equipment .however , it coincides very well with the theoretical prediction ( thin solid lines , taken from fig . 3 of the main text ) .even though we are limited to the range , the convolution rules still work : a relation like eq .[ 3 ] in the main text is valid at all individual values of .though we see only a restricted portion of the total system end - to - end response , it does exhibit non - trivial structure for , i.e. 
the downturn in the real part and leveling off of the imaginary part at .deconvolution allows us to extract the physical content of this structure : when numerical fitting is carried out over the range , we find toy protein parameters : , , within 6% of the exact values ( and ) . what is the main reason behind the relative success of the fitting in extracting the protein parameters , despite the limited time resolution ?[ samp](b , c ) also shows the theoretical response for the protein alone , which has a characteristic frequency , above the equipment cutoff .( graphically , the peak in falls to the right of the vertical dotted line marking . )however , the protein dynamics is modified by the presence of the handles and beads in the experimental system , in a manner quantitatively described by our convolution theory . as a result ,the characteristic frequency of the protein within the experimental apparatus ( i.e. the peak position in the imaginary part ) is shifted to , just within the cutoff .hence we are able to fit for the protein properties .the relative accuracy of the fitting is notable , since we are essentially at the very limit of resolvability .this example illustrates a general principle : whatever dynamical features a protein ( or any other component ) exhibits at within the full experimental setup , our deconvolution theory should be able to extract . for this purpose ,knowledge of the system behavior for is not necessary . in the current case , the theoretical 2hb+p end - to - end response has a contribution at due to the dna handle fluctuation modes , i.e. the shoulder seen in fig .[ samp](b , c ) .these modes are experimentally inaccessible at our time resolution , so the coarse - grained time series reveals nothing about their properties .however , this does not stop us from applying the theory at .
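as a compact summary of the instrumental corrections discussed in this section, the sketch below undoes the distortion of fitted exponential coefficients caused by averaging over the sampling window; the prefactor is our own evaluation for a strictly uniform (boxcar) weighting, offered as an independent re-derivation rather than a quotation of the paper's expression.

```python
import numpy as np

def undo_boxcar_averaging(g_meas, tau, dt):
    """For chi(t) = sum_k g_k exp(-t/tau_k), averaging the signal uniformly over
    each sampling window dt multiplies g_k by [(2 tau_k/dt) sinh(dt/(2 tau_k))]^2
    (our evaluation of the uniform-weighting case). Dividing this factor back out
    recovers the true coefficients from the fitted ones."""
    tau = np.asarray(tau, dtype=float)
    g_meas = np.asarray(g_meas, dtype=float)
    prefactor = ((2.0 * tau / dt) * np.sinh(dt / (2.0 * tau))) ** 2
    return g_meas / prefactor

# example: a relaxation time equal to the 0.1 ms resolution is only distorted
# at the ten-percent level, consistent with the "modest" effect noted above
print(undo_boxcar_averaging(g_meas=[1.0], tau=[1e-4], dt=1e-4))   # ~0.92
```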
|
time - resolved single - molecule biophysical experiments yield data that contain a wealth of dynamic information , in addition to the equilibrium distributions derived from histograms of the time series . in typical force spectroscopic setups the molecule is connected via linkers to a read - out device , forming a mechanically coupled dynamic network . deconvolution of equilibrium distributions , filtering out the influence of the linkers , is a straightforward and common practice . we have developed an analogous dynamic deconvolution theory for the more challenging task of extracting kinetic properties of individual components in networks of arbitrary complexity and topology . our method determines the intrinsic linear response functions of a given molecule in the network , describing the power spectrum of conformational fluctuations . the practicality of our approach is demonstrated for the particular case of a protein linked via dna handles to two optically trapped beads at constant stretching force , which we mimic through brownian dynamics simulations . each well in the protein free energy landscape ( corresponding to folded , unfolded , or possibly intermediate states ) will have its own characteristic equilibrium fluctuations . the associated linear response function is rich in physical content , since it depends both on the shape of the well and its diffusivity a measure of the internal friction arising from such processes like the transient breaking and reformation of bonds in the protein structure . starting from the autocorrelation functions of the equilibrium bead fluctuations measured in this force clamp setup , we show how an experimentalist can accurately extract the state - dependent protein diffusivity using a straightforward two - step procedure . force spectroscopy of single biomolecules relies most commonly on atomic force microscope ( afm ) or optical tweezer techniques . by recording distance fluctuations under applied tension , these experiments serve as sensitive probes of free energy landscapes , and structural transformations associated with ligand binding or enzymatic activity . all such studies share an unavoidable complication : the signal of interest is the molecule extension as a function of time , but the experimental output signal is an indirect measure like the deflection of the afm cantilever or the positions of beads in optical traps . the signal is distorted through all elements in the system , which in addition typically include polymeric handles such as protein domains or double - stranded dna which connect the biomolecule to the cantilever or bead . as shown in the case of an rna hairpin in an optical tweezer , handle fluctuations lead to nontrivial distortions in equilibrium properties like the energy landscape as well as dynamic quantities like folding / unfolding rates . if an accurate estimate of the biomolecule properties is the goal , then one needs a systematic procedure to subtract the extraneous effects and recover the original signal from experimental time series data . _ static deconvolution _ , which operates on the equilibrium distribution functions of connected objects , is a well - known statistical mechanics procedure and has been successfully applied to recover the free energy landscape of dna hairpins and more recently of a leucine zipper domain . in contrast , for dynamic properties of the biomolecule , no comprehensive deconvolution method exists . 
handles and beads have their own dissipative characteristics and will tend to suppress the high - frequency fluctuations of the biomolecule and as a result distort the measured power spectrum . in the context of single - molecule pulling experiments , theoretical progress has been made in accounting for handle / bead effects on the observed unfolding transition rates . however , the full intrinsic fluctuation spectrum , as encoded in the time - dependent linear response function , has remained out of reach . the current work presents a systematic _ dynamic deconvolution _ procedure that fills this gap , providing a way to recover the linear response of a biomolecule integrated into a mechanical dissipative network . we work in the constant force ensemble as appropriate for optical force clamp setups with active feedback mechanisms or passive means . while our theory is general and applies to mechanical networks of arbitrary topology , we illustrate our approach for the specific experimental setup of ref . : a protein attached to optically trapped beads through dsdna handles . the only inputs required by our theory are the autocorrelation functions of the bead fluctuations in equilibrium . we demonstrate how the results from two different experimental runs one with the protein , and one without can be combined to yield the protein linear response functions . we apply this two - step procedure on a force clamp setup simulated through brownian dynamics , and verify the accuracy of our dynamic deconvolution method . knowledge of mechanical linear response functions forms the basis of understanding viscoelastic material properties ; the protein case is particularly interesting because every folding state , i.e. each well in the free energy landscape , will have its own spectrum of equilibrium fluctuations , and hence a distinct linear response function . two key properties determine this function : the shape of the free energy around the minimum , and the local diffusivity . the latter has contributions both from solvent drag and the effective roughness of the energy landscape internal friction due to molecular torsional transitions and the formation and rupture of bonds between different parts of the peptide chain . the diffusivity profile is crucial for getting a comprehensive picture of protein folding kinetics and arguably it is just as important as the free energy landscape itself for very fast folding proteins . our dynamic deconvolution theory provides a promising route to extract this important protein characteristic from future force clamp studies .
|
there is an increasing desire amongst industrial practitioners of computational fluid dynamics (cfd) to perform scale-resolving simulations of unsteady compressible flows in the vicinity of complex geometries. however, current generation industry-standard cfd software, predominantly based on first- or second-order accurate reynolds averaged navier-stokes (rans) technology, is not well suited to the task. hence, over the past decade there has been significant interest in the application of high-order methods for mixed unstructured grids to such problems. popular examples of high-order schemes for mixed unstructured grids include the discontinuous galerkin (dg) method, first introduced by reed and hill, and the spectral difference (sd) methods, originally proposed under the moniker `staggered-grid chebyshev multidomain methods' by kopriva and kolias in 1996 and later popularised by sun et al. in 2007 huynh proposed the flux reconstruction (fr) approach; a unifying framework for high-order schemes on unstructured grids that incorporates both the nodal dg schemes of and, at least for a linear flux function, any sd scheme. in addition to offering high-order accuracy on unstructured mixed grids, fr schemes are also compact in space, and thus when combined with explicit time marching offer a significant degree of element locality. this locality makes such schemes extremely good candidates for acceleration using either the vector units of modern cpus or graphics processing units (gpus). there exist a variety of approaches for writing accelerated codes. these include directive-based methodologies such as openmp 4.0 and openacc, and language frameworks such as opencl and cuda. unfortunately, at the time of writing, there exists no single approach which is _ performance portable _ across all major hardware platforms. codes desiring cross-platform portability must therefore incorporate support for multiple approaches. further, there is also a growing interest from the scientific community in _ heterogeneous computing _ whereby multiple platforms are employed simultaneously to solve a problem. the promise of heterogeneous computing is improved resource utilisation on systems with a mix of hardware. such systems are becoming increasingly common. pyfr is a high-order fr code for solving the euler and compressible navier-stokes equations on mixed unstructured grids. written in the python programming language, pyfr incorporates backends for c/openmp, cuda, and opencl. it is therefore capable of running on conventional cpus, gpus from both nvidia and amd, and heterogeneous mixtures of the aforementioned. the objective of this paper is to demonstrate the ability of pyfr to perform high-order accurate unsteady simulations of flow on mixed unstructured meshes using a heterogeneous hardware platform, demonstrating the concept of `heterogeneous computing from a homogeneous codebase'. specifically, we will undertake simulations of unsteady flow over a cylinder at reynolds number using a mixed unstructured grid of prismatic and tetrahedral elements on a desktop workstation containing an intel xeon e5-2697 v2 cpu, an nvidia tesla k40c gpu, and an amd firepro w9100 gpu.
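as a purely illustrative aside (this is not pyfr's actual interface, and the throughput figures are invented), the essence of heterogeneous operation is to hand each mpi rank a share of the mesh proportional to the sustained throughput benchmarked on its device, so that a weighted graph partitioner such as scotch can balance the load:

```python
# hypothetical pre-processing step: turn benchmarked per-device throughput
# into integer partition weights for a weighted graph partitioner.
# device labels and the relative-throughput numbers are placeholders only.
measured = {
    "xeon-e5-2697v2 / openmp": 1.0,
    "tesla-k40c / cuda": 3.0,
    "firepro-w9100 / opencl": 3.2,
}

total = sum(measured.values())
weights = {dev: rate / total for dev, rate in measured.items()}
# many partitioners expect integer weights, e.g. parts per thousand
int_weights = {dev: max(1, round(1000 * w)) for dev, w in weights.items()}
print(int_weights)
```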
at the time of writing these three devices represent the high-end workstation offerings from the three major hardware vendors. all are designed for workstations, with support for error-correcting code (ecc) memory and double precision floating point arithmetic. the paper is structured as follows. in we provide a brief overview of the pyfr codebase. in we provide details of the cylinder test case. in single-node performance is discussed. in multi-node heterogeneous performance is discussed, and finally in conclusions are drawn. for a detailed overview of pyfr the reader is referred to witherden et al. key functionality of pyfr v0.2.2 is summarised in . we note that pyfr achieves platform portability via use of an innovative `backend' infrastructure. [table tab:pyfr-func: key functionality of pyfr v0.2.2; tabulated entries not recovered] plots of averaged stream-wise velocity at , , and are shown in . the results are shown alongside the experimental results of parnaudeau et al. and the mode-l dns results of lehmkuhl et al. with the profiles deviate significantly from the previous studies. on increasing the order to the results are improved. we observe a u-shape profile at , with strong gradients near the mixing layer between the wake and the free stream. the results agree well with those of the previous studies. the only exception is the reduced magnitude of the averaged velocity deficit near the centre of the wake at . plots of averaged cross-wise velocity at , , and are shown in . the results are also shown alongside the experimental results of parnaudeau et al. and the mode-l dns results of lehmkuhl et al. the results agree well with those of the previous studies. [figure: time-span-average stream-wise velocity compared with the experimental results of parnaudeau et al. and the dns results of lehmkuhl et al.] [figure: time-span-average cross-stream velocity compared with the experimental results of parnaudeau et al. and the dns results of lehmkuhl et al.] in this paper we have demonstrated the ability of pyfr to perform high-order accurate unsteady simulations of flow on mixed unstructured grids using _ heterogeneous _ multi-node hardware. specifically, after benchmarking single-node performance for various platforms, pyfr v0.2.2 was used to undertake simulations of unsteady flow over a circular cylinder at reynolds number using a mixed unstructured grid of prismatic and tetrahedral elements on a desktop workstation containing an intel xeon e5-2697 v2 cpu, an nvidia tesla k40c gpu, and an amd firepro w9100 gpu. results demonstrate that pyfr achieves performance portability across various hardware platforms. in particular, the ability of pyfr to target individual platforms with their `native' language leads to significantly enhanced performance compared with targeting each platform with opencl alone. pyfr was also found to be performant on the heterogeneous multi-node system, achieving a significant fraction of the available flop/s. further, the numerical results obtained using a mixed unstructured grid of prismatic and tetrahedral elements were found to be in good agreement with previous experimental and numerical data. the authors would like to thank the engineering and physical sciences research council for their support via a doctoral training grant and an early career fellowship (ep/k027379/1). the authors would also like to thank amd, intel, and nvidia for hardware donations. patrice castonguay, david m. williams, peter e. vincent, manuel lopez, and antony jameson.
on the development of a high-order, multi-gpu enabled, compressible viscous flow solver for mixed unstructured grids, 3229:2011, 2011. f.d. witherden, a.m. farrington, and p.e. vincent. pyfr: an open source framework for solving advection-diffusion type problems on streaming architectures using the flux reconstruction approach. computer physics communications, 185(11):3028-3040, 2014. françois pellegrini and jean roman. scotch: a software package for static mapping by dual recursive bipartitioning of process and architecture graphs. in _ high-performance computing and networking _, pages 493-498. springer, 1996.
|
pyfr is an open - source high - order accurate computational fluid dynamics solver for mixed unstructured grids that can target a range of hardware platforms from a single codebase . in this paper we demonstrate the ability of pyfr to perform high - order accurate unsteady simulations of flow on mixed unstructured grids using _ heterogeneous _ multi - node hardware . specifically , after benchmarking single - node performance for various platforms , pyfr v0.2.2 is used to undertake simulations of unsteady flow over a circular cylinder at reynolds number using a mixed unstructured grid of prismatic and tetrahedral elements on a desktop workstation containing an intel xeon e5 - 2697 v2 cpu , an nvidia tesla k40c gpu , and an amd firepro w9100 gpu . both the performance and accuracy of pyfr are assessed . pyfr v0.2.2 is freely available under a 3-clause new style bsd license ( see www.pyfr.org ) .
|
theoretical and computational physical chemists have long sought reliable and accurate means of performing quantum dynamics calculations for molecular systems , as quantum effects such as tunneling and interference often play an important role in such systems .if `` exact '' methods are required i.e . , numerical techniques for which the error bars may ( in principle ) be reduced arbitrarily the traditional approach has been to represent the system hamiltonian using a finite , direct - product basis set .however , this approach suffers from the drawback that the scaling of computational effort is necessarily exponential with system dimensionality. recently , a number of promising new methods have emerged that may spell an end to exponential scaling or at the very least , drastically reduce the exponent .the latter category includes various basis set optimization methods, which have nearly doubled the number of degrees of freedom ( dofs ) that may be tackled on present - day computers , from about 6 to 10 dofs .very recently , the first basis - set method to defeat exponential scaling ( at least in principle ) was introduced. this method , which uses wavelets in conjunction with a phase space truncation scheme , has been applied to model problems up to 15 dofs , and is easily extendible to higher dimensionalities although some technical issues vis - a - vis applicability to real molecular systems must still be resolved .a completely different approach to the exact quantum dynamics problem may be found in time - dependent trajectory methods .although trajectory methods are extremely common in molecular dynamics applications , they are almost always classical , quasiclassical , or semiclassical i.e . , not exact , in the sense described above .however , it is possible to perform exact quantum dynamical propagation using trajectory - based methods .these so - called `` quantum trajectory methods'' ( qtms ) are based on the hydrodynamical picture of quantum mechanics , developed over half a century ago by bohm and takabayasi, using even earlier ideas of madelung and van vleck. trajectory methods of all kinds are appealing , because they offer an intuitive , classical - like understanding of the underlying dynamics .qtms are especially appealing , however not only because they ultimately yield exact results , but also because they offer a pedagogical understanding of quantum effects such as tunneling. curiously , qtms thus far have not fared so well as semiclassical methods , vis - a - vis their treatment of another fundamental quantum effect interference .this issue is discussed in more detail below , as it is of central concern for the present paper .an in - depth comparison of interference phenomena is provided in an intriguing article by zhao and makri. garashchuk and rassolov discuss the interesting connection between qtm and semiclassical propagators, in the context of herman - kluk initial value representations. perhaps the greatest attraction of qtms , however , has been the promise that they may be able to defeat exponential scaling . 
in any event ,qtms have undergone tremendous development since their introduction in 1999most notably within the last year or two .much of the early development centered around accurate evaluation of spatial derivatives of the wavefunction, but with the introduction of local least - squares fit adaptive and unstructured grid techniques, this difficulty is now essentially resolved .this has paved the way for a number of interesting applications of qtms , including barrier transmission, non - adiabatic dynamics, and mode relaxation. several intriguing phase space generalizations have also emerged, of particular relevance for dissipative systems. on the other hand , qtms still suffer from one major drawback they are numerically highly unstable in the vicinity of nodes .this `` node problem '' manifests in several different ways: ( 1 ) infinite forces , giving rise to kinky , erratic trajectories ; ( 2 ) compression / inflation of trajectories near wavefunction local extrema / nodes , leading to ; ( 3 ) insufficient sampling for accurate derivative evaluations . in the best case , this can result in substantially more trajectories and time steps than the corresponding classical calculation ; in the worst case , the calculation may fail altogether , beyond a certain point in time . for many molecular applications ( though certainly not all ), the initial wavepacket is nodeless ; however , it may develop nodes at some later point in time .moreover , from a practical standpoint , nodes per se are not the only source of numerical difficulty ; in general , any large or rapid oscillations in the wavefunction termed `` quasinodes'' can be sufficient to cause the problems described above . such oscillations are very easily formed in molecular systems , particularly during barrier reflection . accordingly, several numerical techniques are being developed to address the node problem , including the arbitrary lagrangian - eulerian frame method, and the artifical viscosity method. in this paper , we take a different approach to the node problem , based on a thorough understanding of the differences and similarities between bohmian and semiclassical mechanics .formally , these two theories share many similarities , as was known from the earliest days yet in practical terms , semiclassical and quantum trajectories often behave very differently .for instance , the former may cross in position space , but not in phase space ; the latter do exactly the opposite . for the special case of stationary eigenstates in 1 dof ( the focus of the present paper ) , semiclassical trajectories evolve in phase space along the contours of the classical hamiltonian , whereas quantum trajectories _ do not move at all_. for well - behaved potentials , classical trajectories are always smooth and well - behaved , but quantum trajectories may be kinky and erratic . as noted by zhao and makri, nowhereare the differences between the two methods more profound than in the treatment of nodes and interference phenomena which is not even _ qualitatively _ similar . 
in the semiclassical treatment ,the approximate wavefunction is expressed in terms of simple functions that are generally as smooth and well - behaved as the potential itself .oscillations and nodes are obtained when different lobes or `` sheets '' of the semiclassical wavefunction come to occupy the same region of space thus giving rise naturally to interference .in contrast , the bohmian representation of the wavefunction is single - valued , and therefore incapable of self - interference .consequently , all of the undesirable even `` unphysical''aspects of quantum trajectories , as described in the previous paragraph , are necessary in order to represent nodes and quasinodes explicitly . from a dynamical perspective, the only difference between semiclassical and quantum trajectories is the quantum potential , , and it is a remarkable fact that alone is responsible for all of the very fundamental differences described above . on the other hand, this situation is also cause for alarm , for turns out to be the order correction that is ignored in semiclassical treatments being regarded as `` insignificant '' in the large action limit , in accord with the correspondence principle . from the previous discussion , it is clearly incorrect to regard as insignificant , which seems to place the semiclassical approximation and indeed , the correspondence principle itself in jeopardy .moreover , there are certain unappealing aspects of the semiclassical approach multivaluedness , caustics , phase discontinuities , etc. that simply do not arise in a bohmian treatment . on the other hand ,the semiclassical approximation is known to be valid in the large action limit which together with the undesirable features of the bohmian approach as discussed in the previous paragraph , seems to call into question the physical correctness of the latter .this paradox has been a source of concern for some researchers , notably einstein. the primary goal of the present paper is to reconcile the semiclassical and bohmian theories , in a manner that preserves the best features of both , and also satisfies the correspondence principle . at least within the context of stationary eigenstates in 1 dof, the above paradox turns out to be remarkably easy to resolve .it can be shown that the disturbing dissimilarities described above stem not from the theoretical methodologies themselves , but from the simple fact that each method uses a different ansatz to represent the wavefunction thus giving rise to a rather unfair comparison .in particular , since the semiclassical functions are double - valued in the classically allowed region of space , the stationary wavefunction is represented as the sum of two terms essentially a pair of `` traveling waves , '' headed in opposite directions . in contrast, the standard bohmian approach uses a single term to represent the wavefunction .virtually all of the disparities described above arise from this simple fact .it is therefore natural to consider what would happen if the bohmian formalism were applied to a _ two - term _ wavefunction thus placing it on a proper par with the semiclassical method . as will be shown in this paper , this results in everything `` falling into place . 
'' in particular , the quantum potential far from being singular in the vicinity of nodes is well - behaved everywhere , and in fact , becomes vanishingly small in the large action limit , exactly in accord with the correspondence principle .the same can be said for the quantum trajectories , which are no longer stationary , and approach the corresponding semiclassical trajectories in the large action limit ( within the classically allowed region of space ) .this implies the somewhat counterintuitive result that quantum trajectories must be well - behaved when the number of nodes is _ large _ , for this signifies the large - action limit . in any event ,the two - term bohmian approach provides us with the `` best of both worlds , '' i.e. the well - behaved trajectories of semiclassical mechanics , together with the singlevaluedness and lack of caustics and phase discontinuities that characterize bohmian mechanics .more generally than for just the stationary states in 1 dof considered here , it is anticipated that a multi - term bohmian implementation will go a long way towards alleviating the node problem .let be any normalized wavefunction in the single dof , .being a complex function , can be decomposed into two real functions , and , representing the amplitude and phase , respectively , as follows : ( x ) = r_b(x ) e^i s_b(x)/ [ onelmb ] equation ( [ onelmb ] ) is the celebrated madelung - bohm ansatz, which we term the `` unipolar ansatz,'' as it consists of a single term only .the function has units and interpretation of action . if time evolution is considered , then plays the role of hamilton s principle function in classical mechanics, which therefore properly depends on as well as on .however , for stationary states in a time - independent context , is analogous to hamilton s characteristic function , , which is time - independent .both interpretations will be found to be important for the present approach .the above decomposition is essentially unique if is nonnegative throughout ; however , it leads to discontinuities in , and cusps in , if has nodes . despite these drawbacks ,is the decomposition generally utilized in standard bohmian treatments, thus motivating use of the `` '' subscript .it will be shown in this paper evidently for the first time that this convention in and of itself gives rise to certain node - related numerical difficulties that would otherwise not arise ( sec . [ nodeissues ] ) .accordingly , for the present work , we presume amplitude functions that change sign when passing through nodes . to avoid confusion with the standard bohmian decomposition, we use unsubscripted quantities to represent the present unipolar ansatz decomposition , i.e. ( x ) = r(x ) e^i s(x)/ , [ onelm ] where , , and the new decomposition functions , and , are smooth and continuous throughout the entire coordinate range ; for the latter reason , is deemed a better analog for hamilton s functions than is . in any event , throughout this work , when discussing the unipolar ansatz , we are referring to , unless explicitly stated otherwise .although the function is unique modulo , physically , it is well - defined only up to the addition of an arbitrary constant , which introduces an arbitrary but immaterial phase factor into .equation ( [ onelm ] ) is the starting point of both quantum trajectory _ and _ semiclassical methods . 
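to make the two amplitude conventions concrete, the following grid-based sketch (our own construction, not code from the paper) extracts both the standard bohm pair and the signed-amplitude variant adopted here from a wavefunction sampled on a one-dimensional grid; the units and names are ours.

```python
import numpy as np

hbar = 1.0   # illustrative units

def madelung_decompose(psi, x):
    """Unipolar decomposition psi = R * exp(i S / hbar) on a 1-D grid.
    Returns the standard Bohm pair (R_b = |psi|, S_b) and, for a real
    stationary state, the signed amplitude that changes sign through nodes."""
    psi = np.asarray(psi, dtype=complex)
    R_b = np.abs(psi)
    S_b = hbar * np.unwrap(np.angle(psi))     # phase made continuous in x
    if np.allclose(psi.imag, 0.0):            # real (standing-wave) state
        R_signed = psi.real                   # smooth through the nodes
        S = np.zeros_like(x)                  # constant phase, taken as zero
    else:
        R_signed, S = R_b, S_b                # general case: fall back to Bohm
    p = np.gradient(S_b, x)                   # Bohm momentum field p = dS/dx
    return R_b, S_b, R_signed, S, p
```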
presuming a quantum hamiltonian of the form = ^2 2 m + v ( ) , [ ham ] the general time evolution equations for the unipolar decomposition functions are where , , etc . , andthe coordinate dependences have been suppressed to save space .equation ( [ rdotuni ] ) is the continuity equation , essentially stating that probability is conserved .note that this equation is independent of the particular system potential , .equation ( [ sdotuni ] ) is the quantum analog of the hamilton - jacobi equation, which does depend on the particular .it is in the treatment of this equation that the semiclassical and bohmian theories part company .the former ignores the first term in the square brackets , giving rise to the standard classical hamilton - jacobi equation .the latter regards the first term as the `` quantum potential , '' q(x ) = - ^2 2 m , [ qex ] which is added to the true potential , to form the modified potential , .apart from the substitution , the dynamical laws for the two approaches are identical . in particular , in both cases ,the momentum is related to the action via p(x ) = s(x ) [ peeeq ] the set of points constitute a one - dimensional subspace of the 1 dof phase space known as a `` lagrangian manifold '' ( lm). this terminology is generally used in a classical or semiclassical context only ; however , we find it convenient to apply it in the exact quantum case as well . if is presumed to be a stationary state , an eigenstate of the hamiltonian , , with energy , and .the first result , together with , is consistent with the well - known property that for standing waves , the phase is constant over . without loss of generality , we may take to be real , so that and . the second result , together with , yields the quantum stationary hamilton - jacobi equation ( qshje ) . by rearranging , and making use of , we obtain : an important connection between the semiclassical and bohmian theories is suggested by namely , _ the semiclassical approximation is accurate when the quantum potential is small_. let us now consider the semiclassical approximation proper i.e . , with no term .the resultant algebraic equation has two solutions for , i.e. }$]where lower case ` ' is now used , for reasons that will be explained shortly .these two solutions correspond to the positive and negative momentum `` sheets '' of the lm in phase space .the two sheets are joined together at the classical turning points , and , to form a single lm in phase space , along the classical hamiltonian contour , .( turning points are the caustics for stationary states ) .thus , the function is double - valued over the classically allowed region , , and zero - valued everywhere else .it is illuminating to compare the semiclassical situation , as described above , with the bohm prescription . from the correspondence principle ,one expects to be small in the large action limit i.e . 
,the limit in which the excitation numbers , , become large .this in turn would imply , via , that the semiclassical and exact quantum lms would resemble each other in the large action limit .the actual situation is completely different , however .firstly , imply that the quantum is single - valued everywhere rather than zero- or double - valued , like the semiclassical .secondly , for stationary states , , implying that the quantum lm is the real axis of the phase space plane , which in no way resembles the semiclassical , hamiltonian - contour lm .thirdly , quantum trajectories are stationary over time , whereas semiclassical trajectories evolve around their lms ( sec . [ trajectories ] ) .the origin of these seemingly profound qualitative differences is deceptively simple : it is due to the fact that the semiclassical approximation does not actually incorporate the unipolar ansatz of . instead ,a _ bipolar _ ansatz is used for the total wavefunction , consisting of two terms rather than one : in above , the constant specifies the relative phase between the two components , .it has been singled out from in order to clarify certain issues pertaining to square - integrability ( sec .[ quantization ] ) .although itself is real , the are both complex , and conjugate to each other .the stationary `` standing wave '' is therefore obtained in practice as the linear superposition of two `` traveling waves , '' moving in opposite directions .the standard bohmian prescription , being unipolar , completely misses out on this elegant and useful aspect of the semiclassical approach , which gives rise to a qualitatively very different kind of amplitude / action decomposition . on the other hand ,if the bipolar ansatz of is incorporated into the bohmian theory , rather than , then it is indeed possible to reconcile these two approaches , in a manner consistent with the correspondence principle , as will be shown in sec .[ decomposition ] .note , however , that an important difference between semiclassical and bipolar quantum lms is already evident in ; namely that the bipolar momentum function is double - valued throughout the entire coordinate range .thus , the two exact quantum lm sheets never join , but extend into the classically forbidden regions all the way to . throughout this paper , we use lower case to denote the bipolar ansatz functions , so as to distinguish these from the unipolar ansatz functions , for which upper case is used . for convenience , all bipolar functions are hereafter defined to be single - valued everywhere , by referring to the positive - momentum lm sheet only , .the increased flexibility of the bipolar ansatz is extremely useful vis - a - vis the treatment of nodes , for it allows for the direct representation of nodes as _ interference fringes arising naturally between the two traveling waves_. 
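a primitive-wkb sketch (the textbook construction, with our own names and an illustrative harmonic potential) shows how two smooth, nodeless traveling waves recombine into an oscillatory standing wave inside the classically allowed region; turning-point and maslov corrections are deliberately left out.

```python
import numpy as np
from scipy.integrate import cumulative_trapezoid

hbar = m = omega = 1.0
E = 12.5                                    # roughly the n = 12 level of V = x^2/2
x = np.linspace(-4.95, 4.95, 4001)          # just inside the turning points at +/-5
V = 0.5 * m * omega**2 * x**2

p_sc = np.sqrt(2.0 * m * (E - V))           # classical momentum, positive sheet
s = cumulative_trapezoid(p_sc, x, initial=0.0)   # action s(x) = int p dx
r = 1.0 / np.sqrt(p_sc)                     # WKB amplitude: smooth and nodeless

psi_plus = r * np.exp(+1j * s / hbar)       # right-moving component
psi_minus = r * np.exp(-1j * s / hbar)      # left-moving component
psi = psi_plus + psi_minus                  # = 2 r cos(s/hbar): the many nodes of
                                            # psi are pure two-wave interference
```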
this possibility is exploited to great effect in semiclassical methods , which manage to contrive ( approximate ) bipolar amplitude functions that are completely _ nodeless_no matter how many nodes are present in itself .thus , apart from discontinuities near turning points ( associated with maslov indices ) , the semiclassical is smooth and positive , and the semiclassical is smooth and monotonically increasing .moreover , these decomposition functions tend to be very slowly varying , in relation to itself , particularly when the latter has many nodes .the above properties would of course also be beneficial for exact qtms which from a practical standpoint , is a primary reason why the bipolar ansatz ought to be considered within a bohmian context .one difficulty is that the exact quantum bipolar decomposition of is _ not unique _ , in the sense that the qshje of has a two - parameter family of solutions. in particular , one trivial solution is , which simply reproduces the unipolar result .clearly , this is not the solution that we want , i.e. , one that exhibits semiclassical correspondence in the large action limit ; obtaining the latter will be the focus of sec .[ decomposition ] . for the unipolar ansatz, it has been stated that nodes always give rise to infinities in , owing to the singular denominator in . however , this is only strictly true if the standard bohmian ansatz of is used , for which the convention is employed .if instead , one adopts the convention , so that smoothly changes sign as a node is traversed , then need not always be singular at a node .in particular , is _ never _ singular when is a stationary eigenstate of , provided is well - behaved everywhere .this is because the time - independent equation guarantees that the nodes of also happen to be inflection points .using moreover , it can be shown that .thus , for the stationary states considered in this paper , even the unipolar ansatz is not singular , contrary to what previously has been widely considered to be the case .even if the standard bohmian ansatz is used , at a node does not exhibit a singularity per se , but is rather ill - defined , owing to the cusp in ; away from the nodal point , . in any event , we find it useful and convenient to distinguish between two types of nodes , depending on whether is formally well - behaved ( `` type one '' nodes ) or singular ( `` type two '' nodes ) . 
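the finiteness of the quantum potential at type one nodes is easy to verify numerically: for a real stationary eigenstate with the signed-amplitude convention, the time-independent schrodinger equation gives q(x) = e - v(x) everywhere, so the finite-difference ratio below should track e - v smoothly straight across the nodes. the harmonic-oscillator example and all names are our own illustration.

```python
import numpy as np
from numpy.polynomial.hermite import hermval

hbar = m = omega = 1.0
n = 2                                            # second excited state: two type-one nodes
E = (n + 0.5) * hbar * omega

x = np.linspace(-4.0, 4.0, 4001)
c = np.zeros(n + 1); c[n] = 1.0
R = hermval(x, c) * np.exp(-0.5 * x**2)          # signed amplitude (unnormalised)

d2R = np.gradient(np.gradient(R, x), x)
Q = -hbar**2 * d2R / (2.0 * m * R)               # quantum potential with signed R
V = 0.5 * m * omega**2 * x**2

ok = (np.abs(x) < 3.0) & (np.abs(R) > 1e-9)      # exclude edges and any grid point
                                                 # that lands exactly on a node
print(np.max(np.abs(Q[ok] - (E - V[ok]))))       # small (discretisation error only),
                                                 # with no divergence at the nodes
```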
from a numerical perspective , even type one nodes will cause difficulties for standard quantum trajectory calculations performed using the unipolar bohmian ansatz .this is because the slightest numerical error in the evaluation of the ratio will result in instability near the nodes even though formally , does not diverge .in contrast , due to the smoothness and lack of nodes of the functions that arise in the bipolar decomposition , numerical evaluation of the corresponding bipolar quantum potentials , , causes no such instabilities for type one nodes .more general nodal implications of the bipolar ansatz will be discussed in greater detail in future publications .in this section , we derive a unique bipolar decomposition of the form , for any given stationary wavepacket , , which satisfies semiclassical correspondence in the large action limit .in general , the decomposition is nonunique .the semiclassical solution , however , is essentially unique ( sec , [ stationarybipolar ] ) .we will therefore use the latter as a guide , for selecting the particular quantum bipolar decomposition which most closely resembles the semiclassical solution .note that certain assumptions have already entered into the form of , which is clearly more constrained than a completely general bipolar decomposition of into two arbitrary components . in particular , we have presumed to be a superposition of equal and opposite traveling waves a natural assumption , completely analogous to the semiclassical situation .be that as it may , there is still an enormous number of ways in which may be be realized for a given real wavepacket , and so the decomposition is still far from being unique . to help narrow the field ,we first summarize some of the additional properties of semiclassical eigenstates in 1 dof , which we will attempt to emulate in the exact quantum decomposition : 1 .the lm itself _ does not change _ over timethe classical probability distribution _ does not change _ over time .the classical flow for either lm sheet maintains _ invariant flux _ over all , with the flux value for the two sheets being equal and opposite .the area enclosed within the lm , i.e. the enclosed action , , is given by , where is the number of nodes .5 . for a normalized distribution, the absolute value of the invariant flux equals the inverse of the period of the trajectory , i.e. .6 . the median of the enclosed action , , satisfies .all trajectories move _ along _ the lm . by `` translating '' these properties appropriately into the exact quantum context, we will be able to define an essentially unique bipolar decomposition of the form .properties ( 1 ) and ( 2 ) above are the most fundamental , and will be considered first . for a particular decomposition , the corresponding positive - momentum lm sheet is given by .property ( 1 ) states that , which in turn implies .property ( 2 ) implies .together , these properties imply that the _ components _ of must be stationary eigenstates of in their own right .equation ( [ twolm ] ) then implies that the eigenvalues for the two components must both be equal to .since the are stationary , the quantum mechanical flux associated with each of these components , i.e. is independent of , with equal and opposite constant values , .note that demonstrates that the quantum analog of property ( 3 ) is also satisfied a necessary consequence of properties ( 1 ) and ( 2 ) .we call this the `` invariant flux '' property . for , , which reproduces the unipolar ansatz. 
we shall therefore hereafter restrict consideration to the case , for which the invariant flux property , and , provide a specification for , in terms of , and the constant , : r(x ) = [ rfeq ] note that if for all , then implies that for all desirable property for the positive momentum solution , also satisfied by the semiclassical solution .this would also imply that is monotonically increasing .accordingly , the condition is adopted . for ,the two components are linearly independent .this implies that at least one of the two must be non- .in fact , being complex conjugates of each other , _ both _solutions must be non- .moreover , it can be shown that diverges as . this is due to the fact that as in order that the enclosed action be finite ( sec .[ quantization ] ) ; but this implies via that diverges .it is instructive to rederive the invariant flux property in another manner . by the superposition principle, the time evolution of can be obtained by propagating each of the components separately in time . sincethe two components are stationary eigenstates of in their own right , the time - evolving s must each independently satisfy and .the former , applied to a lower case version of , is equivalent to the spatial derivative of .we now address the exact quantum analog of property ( 4 ) , the quantization condition . in a proper semiclassical treatment ,this half - integer condition on the enclosed action, , must be supplemented by the discontinuous jumps in phase that occur as one traverses a turning point , from one lm sheet to another . in a certain sense , these jumps account for the fact that the wkb solutions do not incorporate the portion of the true wavefunction that tunnels into the forbidden region which contribute an additional one half quanta of action , over the course of one complete circuit around the lm. when this discontinuous contribution is properly added to the usual enclosed action contribution , one obtains an _ integer _ quantization condition for the total action , , even within a purely semiclassical context . in the quantum case, there is no distinction between classically allowed and forbidden regions ; one travels smoothly from one to the other , over the entire position space . since the two lm sheets are symmetrically placed in phase space about the real axis ( fig .[ hogroundfig ] ) , the area enclosed between them is clearly twice the change in the action function , , as one travels from to . from the above description, we expect this change in action to be , where is the number of nodes .an integer quantization condition is therefore expected to hold for the exact bipolar quantum decomposition .this is indeed correct , as has been shown previously. bipolar lagrangian manifolds ( lms ) for the harmonic oscillator ground state , , for three different flux values , ( in each case ) : ( a ) , the semiclassical value ; ( b ) ; ( c ) .solid curves indicate quantum lms ; dotted curves indicate semiclassical lms .the former enclose an area ; the latter . in ( a ), the small open / filled circles represent semiclassical / quantum trajectory locations at times , for integer . ] a sufficient , though certainly not necessary ( see below ) , condition for achieving integer quantization of the quantum action is that be , which requires that . 
for convenience ,we adopt the convention that in .the condition then determines a value for .e ., for even , and for odd .this yields the following : ( x ) = [ twolmsin ] the somewhat awkward distinction between the even and odd cases is due to our choice of boundary condition for ; the reasons for this seemingly perverse choice will be made clear shortly . since must also be zero if is , implies that , where is an integer . since everywhere, all nodes of , for even / odd , must occur at values for which is an odd / even multiple of .this , in turn , requires , from which is obtained j = 2 s = 2 s(+ ) - 2 s(- ) = 2 ( n+1 ) , [ quant ] i.e. , the integer quantization condition .note that is independent of the normalization of .however , the amplitude function is not ; thus , if is presumed normalized to unity , then defines the normalization convention for , and for . the specific value of the constant , given above for even / odd not arbitrary , but is required in order to ensure that the resultant be . if a different value were chosen , then a different , _non_- solution to the equation would be obtained .this situation is in stark contrast to the unipolar case ( for which the constant simply changes the overall phase ) , and is due to the fact that the bipolar represents a _ relative _ phase shift . in any event, we can regard as a parameter that is used to specify a particular solution of the equation at the energy .in fact , _ all _ such real - valued solutions ( apart from an immaterial scaling factor ) may be obtained by varying the value of in .this is due to the fact that the are linearly independent .in any event , an important consequence is that all solutions are described by the exact same action function , , which is unaffected by the value of . among other things, this implies that the integer quantization rule applies to all _ non_- solutions , _ as well as _ to the solution .in this section , we continue the approach introduced at the end of sec .[ invariant ] , where the time evolution equations are applied to the two components separately .the equation was seen to yield the same invariant flux relation , for each component the equation , as applied separately to the two components , also yields identical results .one of the important ramifications of this is that the ansatz is preserved over time , at least for stationary eigenstates . in any event , since are solutions to the equation , the equation must result in a lower - case version of the qshje [ ] , for which the bipolar quantum potential , , is defined via lower - case .we shall rewrite this qshje by expressing directly in terms of , using lower - case , and : p^2 = 2m(e - v ) - ^2 [ p2eq ] whereas the semiclassical approximation [ obtained by ignoring the term in brackets ] has a unique solution , is a second - order differential equation in , with a two - parameter _ family _ of different solutions to choose from .note that applies to _ all _ equation solutions , i.e. the and the non- solutions , both .since we are interested only in the former , and since there is a one - parameter family of solutions in all , one might expect that the specification of the solution would determine the value of one of the two parameters .this is not correct however , as demonstrated earlier ( sec .[ quantization ] ) .thus , even for the solution alone , bipolar decomposition gives rise to two variable parameters via .what are the two parameters , and how should their values be chosen ? 
to determine what the two parameters are , it is convenient to combine together , to obtain a formula for directly in terms of .the result is : ( x ) = [ seq ] equation ( [ seq ] ) is a first - order differential equation for ; the general solution is easily found to be for nodeless wavepackets ( ) , everywhere , and the integrand of has no singularities . when , is still correct , but requires careful branch selection , to ensure that the final curve is continuous throughout the coordinate range .note that is presumed to be the solution .equation ( [ tans ] ) provides an explicit recipe for obtaining the decomposition .the two parameters can thus be taken as : ( 1 ) the flux parameter , ; ( 2 ) the integration limit parameter , .note that ; consequently , may also be interpreted as the median of the action , as per sec . [ semiprops ] .by varying the two parameters and in , different bipolar decompositions may be achieved .these correspond to different _ affine transformations _ of each other , in the sense that varying is equivalent to _ rescaling _ the right - hand - side ( rhs ) of , whereas varying is equivalent to _ adding a constant _ to the rhs . for a given , the various bipolar lms that can be constructed via vary significantly with respect to and ( sec .[ horesults ] ) , and so a general procedure for obtaining reasonable parameter values must be provided . at present , the best approach seems to be to touch base once again with the semiclassical properties in particular , property ( 5 ) for determining the appropriate value of , and property ( 6 ) for determining the appropriate value of .semiclassically , the flux for a normalized distribution is given by , where is the classical angular frequency for the appropriate semiclassical trajectory , the ( uniform ) rate at which the angle variable of the action / angle pair changes . the corresponding quantum trajectory is not that of a bound state , and so it is not possible to assign an value to it ( sec . [ trajectories ] ) . on the other hand ,the normalization convention allows us to determine a unique flux value for the quantum trajectory , which is all that is required . by setting the quantum flux value equal to the semiclassical value, it is anticipated that the resultant quantum lms will closely resemble the semiclassical lms , as desired .as for , the median of the enclosed action : this can be can be regarded as the exact middle of the wavepacket in a certain sense ; semiclassically , is the classically allowed configuration that is furthest from both of the turning points , vis - a - vis the action .consequently , we expect the greatest agreement of semiclassical and quantum lms i.e . the smallest values in the vicinity of the semiclassical .this can be achieved by allowing the quantum to coincide with the semiclassical value , the latter is chosen to be the location where . in the quantum bipolar decomposition scheme even for fixed is otherwise free to place the action median , , essentially anywhere along the position axis .the ramifications are particularly illuminating when is even .for such potentials , _ only _ the choice gives rise to quantum bipolar decomposition functions that are even or odd in , thereby respecting the physical symmetry of the system , and of itself .this choice for is also consistent with the median action criterion . 
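The recipe is easy to implement numerically. The sketch below is our own rendering of it for a nodeless wavepacket (so no branch bookkeeping is required), under the conventions psi = 2 r cos(s/hbar + delta) and per-component flux F = r^2 s'/m, for which the integrated form of the first-order equation reads tan(s/hbar + delta) = (4 m F/hbar) times the integral of psi^{-2} from x_m to x. It is exercised on the harmonic-oscillator ground state with the semiclassical parameter values (F equal to the inverse classical period, x_m at the origin).

```python
import numpy as np

# Sketch (our conventions): psi = 2 r cos(s/hbar + delta), per-component flux
# F = r^2 s'/m, hence tan(s/hbar + delta) = (4 m F / hbar) * Int_{x_m}^{x} psi^{-2} dx'.
# Test case: harmonic-oscillator ground state, units m = omega = hbar = 1,
# semiclassical parameters F = 1/(2 pi) (inverse period) and x_m = 0.

m = omega = hbar = 1.0
x = np.linspace(-6.0, 6.0, 4001)
psi = (m * omega / (np.pi * hbar)) ** 0.25 * np.exp(-m * omega * x**2 / (2.0 * hbar))

F, x_m = omega / (2.0 * np.pi), 0.0
C = 4.0 * m * F / hbar

integrand = psi ** (-2)                                   # smooth here: psi has no nodes
I = np.concatenate(([0.0],
    np.cumsum(0.5 * (integrand[1:] + integrand[:-1]) * np.diff(x))))
I -= np.interp(x_m, x, I)                                 # enforce I(x_m) = 0

theta = np.arctan(C * I)                                  # s/hbar + delta, + momentum sheet
r = 0.5 * np.abs(psi) * np.sqrt(1.0 + (C * I) ** 2)       # bipolar amplitude (never zero)
p = m * F / r**2                                          # Lagrangian manifold p(x) = s'(x)

# the quantum manifold should lie slightly outside the semiclassical one,
# with the closest approach near x_m
E, V = 0.5 * hbar * omega, 0.5 * m * omega**2 * x**2
p_sc = np.sqrt(np.clip(2.0 * m * (E - V), 0.0, None))
for xi in (0.0, 0.5, 0.9):
    i = int(np.argmin(np.abs(x - xi)))
    print(f"x = {xi:3.1f}:   p = {p[i]:.3f}   p_semiclassical = {p_sc[i]:.3f}")
```

In this form the two-parameter freedom is explicit: varying F rescales the right-hand side of the tangent relation, while shifting x_m adds a constant to it, which is the affine freedom described above.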
presumably , it would be unphysical to consider any of the asymmetrical decompositions ; nevertheless , it is interesting to note that one can generate _ asymmetrical _ bipolar decompositions that give rise to the _ symmetrical _ , simply by shifting away from the origin .this has been verified via explicit construction for the harmonic oscillator ground state .we now address the issue of the quantum trajectories themselves , related to property ( 7 ) .semiclassically , over the course of time , the bound state trajectories simply move around and around the hamiltonian contour lms , which do not themselves change [ property ( 1 ) ] . in a conventional unipolar quantum treatment ,the initial lm specifying the initial conditions for the ensemble of trajectories is just the real axis , i.e. the `` curve '' , since is real .the quantum trajectories evolve under the unipolar modified potential , , which by , must be the constant function [ even if there are nodes ( sec .[ nodeissues ] ) ] .consequently , , and so the unipolar quantum trajectories _ do not move at all _ over time .in contrast , the bipolar quantum trajectories are _ not _ stationary , but move along the positive and negative momentum lm sheets .this is true because , and because the bipolar lms themselves do not change over time , thus verifying property ( 7 ) .moreover , provided the bipolar quantum potential is small in the classically allowed region , then the bipolar quantum trajectories must resemble the semiclassical trajectories within this region , since the lms are similar , and .of course , the bipolar quantum trajectories do _ not _ change their direction at the classical turning points , moving between positive and negative - momentum lm sheets , like classical trajectories .instead , all quantum trajectories on say , the positive momentum lm sheet , continue to head to the right for all time .once these trajectories enter the classically forbidden region , however , their speed decreases very suddenly .it is worth discussing the very different role played by the quantum potential in the unipolar ansatz , versus that of the bipolar ansatz with the specific decomposition suggested here ( i.e. , parameter choices of sec .[ paramchoice ] ) . 
in the unipolar case, serves to counteract the true potential everywhere ; thus , is not generally small .in contrast , the bipolar quantum potential , , can be regarded as the correction to the semiclassical approximation in the truest correspondence - principle sense of lower - case .the value of is therefore small in the appropriate semiclassical limits , in the classically allowed region far from turning points , and in the limit of large action , when becomes large .near the turning points , increases substantially , so as to ensure that all trajectories keep moving past the classical turning point without changing direction .this increase continues well into the classically forbidden region , where curiously , approaches .e .it effectively cancels out the true potential .consequently , the bipolar modified potential , , resembles the true potential in the classically allowed region , and the unipolar modified potential , , in the asymptotic regions .the above discussion hinges on the assumption that the semiclassical and bipolar quantum lms become arbitrarily close in the appropriate semiclassical limits described above .we can justify this expectation as follows .first , the semiclassical approximation is known to become arbitrarily accurate in these limits ; each of the two semiclassical traveling wave components must therefore approach some particular corresponding pair of exact quantum solutions , , arbitrarily closely .the latter must therefore have the same characteristics , vis - a - vis action , trajectories , and flux , as do the semiclassical approximations , in the appropriate limits .therefore , by choosing the available parameters for the quantum solutions ( i.e. and ) so as to match the semiclassical approximations , the correspondence principle must be satisfied .although the primary interest of this paper is bound , stationary eigenstates of the hamiltonian of , our ultimate interest is wavepackets that evolve dynamically over time . as a first step in this direction , we generalize the previous discussion to include wavepackets that are only momentarily `` stationary . '' in other words , the initial wavepacket is real , but otherwise arbitrary , i.e. , not presumed to be an eigenstate of .this results in , but only instantaneously , at the initial time .we shall call such a wavepacket a `` stationary non - eigenstate . ''to what extent can the bipolar decomposition scheme be applied to stationary non - eigenstate wavepackets ?the question is relevant , because it is only necessary to specify the bipolar decomposition at a single point in time , in order to propagate the two components independently , over all time .our approach shall be to regard as the eigenstate of some hermitian , hamiltonian - like operator , , which without loss of generality , may be taken to be of the form = ^2 2 m + v_0 ( ) . [ hz ]if is known , it is a trivial matter to obtain by solving the equation in reverse , i.e. = ^22 m .[ inverse ] the so obtained can then be used to generate the appropriate and values semiclassically .as a completely general procedure , this approach has one unavoidable flaw .if has type two nodes , then will have singularities at the nodes , which is undesirable .in such cases , since the precise values of the and parameters may not matter all that much in numerical practice , one should simply choose `` reasonable '' values by comparison with known cases e.g ., for , one could use the parameters of a gaussian with the same center and standard deviation as . 
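For concreteness, the "reverse" relation is just the time-independent Schroedinger equation solved for the potential, and for a Gaussian initial wavepacket it returns a harmonic potential. This is a sketch in generic notation (psi_0 the real initial wavepacket, x_0 its center, beta the standard deviation of |psi_0|^2):

\[
V_0(x) - E \;=\; \frac{\hbar^{2}}{2m}\,\frac{\psi_0''(x)}{\psi_0(x)},
\qquad
\psi_0(x)\propto e^{-(x-x_0)^{2}/4\beta^{2}}
\;\Longrightarrow\;
V_0(x) - E \;=\; \frac{\hbar^{2}}{2m}\left[\frac{(x-x_0)^{2}}{4\beta^{4}} - \frac{1}{2\beta^{2}}\right],
\]

i.e. a harmonic potential of frequency omega = hbar/(2 m beta^2) centered at x_0 — which is why the harmonic-oscillator ground state worked out next also supplies the proper decomposition for Gaussian wavepackets.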
on the other hand ,almost all initial wavepackets used in chemical physics applications correspond [ via ] to potentials that are well - behaved .as a classic benchmark example , we now work out analytic solutions for the harmonic oscillator ( ho ) eigenstates , i.e. . this example is particularly important , as the ground state provides the proper decomposition for gaussian wavepackets , which are used very frequently in time - dependent studies. we shall also find the excited harmonic oscillator states to be quite useful , particular with respect to investigations regarding nodes and interference .the normalized harmonic oscillator eigenstate is given by where is the hermite polynomial , and .we start with the ground state , .application of yields \right\}\end{aligned}\ ] ] as the generic , - and -dependent solution .this gives rise via to ^ 2\right\}^{1/2}.\nonumber\end{aligned}\ ] ] as per sec .[ paramchoice ] , the appropriate value of is clearly . the appropriate value of , whether from symmetry considerations , or the more general median action criterion , is clearly . with these choices for the parameter values , we obtain the simpler result for convenience , we choose units such that . in these units , , and . in these units , and for these parameter values ,all of the relevant bipolar decomposition functions are as follows : all of the functions in are smooth , slowly varying , and monotonic in .the lm is an elegant `` eye - shaped '' curve [ specified by the equation above ] that deviates smoothly , and positively , from the circular semiclassical lm , with the point of closest approach being .all of these features are as predicted in sec .[ decomposition ] , and would not have been satisfied if substantially different parameter values were used . a plot of the semiclassical and bipolar quantum lms is presented in fig .[ hogroundfig ] , for and other values ( but all with ) . whereas some of these other plots have the qualitatively correct behavior , it is very clear that the curve is the smoothest , most `` correct '' choice especially vis - a - vis comparison with the corresponding semiclassical lm .figure [ horfig ] is a comparison between the unipolar and bipolar amplitude functions and , respectively .as is clear from the figure , these two types of amplitude behave completely differently . in particular , whereas decreases quickly as , increases as one moves away from the origin , and actually diverges in the limits , as predicted in sec .[ invariant ] .clearly , the are non- solutions .amplitude functions for the harmonic oscillator ground state , . dashed curve :unipolar amplitude , .solid curve : bipolar amplitude , , for semiclassical parameter values , and . ] in fig .[ hopotfig](a ) , we present a comparison of the actual and bipolar modified potentials i.e . , , and .the two potentials resemble each other in the classically allowed region , away from the turning points at . as one approaches the turning points , the difference increases markedly . in the classically forbidden region, ceases to emulate the true potential , and in the asymptotic limits , approaches the unipolar constant value of .all of this is as predicted in sec .[ trajectories ] .bipolar modified potentials , , and true potential , , for two different harmonic oscillator eigenstates : ( a ) ground state , ; ( b ) tenth excited state , .solid curves indicate ; dotted curves indicate . in ( a ) , approaches as , but resembles in the classically allowed region , [ inset shows vs. ] . 
in ( b ) , and are virtually indistinguishable over the ( classically allowed ) coordinate range indicated . ]we also performed trajectory calculations .in particular , a single trajectory was propagated over a very long period of time , using what wyatt has called the `` analytical approach.'' in this scheme , the modified force [ i.e. is computed analytically , but the trajectory itself is propagated numerically .we found first of all that this numerical propagation scheme was extremely stable , as demonstrated by the fact that the numerical trajectory did not deviate appreciably from the lm at any point in time . phase space values for the trajectory at various times are indicated as small circles in fig .[ hogroundfig](a ) , from which it is also clear that quantum trajectories correspond fairly well to the semiclassical trajectories in the classically allowed region . in the classically forbidden regions , trajectories slow down very quickly , as predicted .this is evidenced by the pile - up of trajectory points that ensues in these regions [ fig .[ hogroundfig](a ) ] .formally , however , the trajectories do not actually reach zero momentum until .they are thus analogous to classical trajectories for a system that has just enough energy for dissociation .this fact is also reflected in the asymptotic behavior of as discussed above .we now briefly address the issue of trajectory `` pile - up '' in the classically forbidden regions , which is an important concern for numerical calculations .although the bipolar ansatz has the effect of placing more trajectories in regions of space where the actual probability is small , this situation is numerically agreeable for two reasons : ( 1 ) more accuracy is needed in these regions , because itself is effectively obtained as the difference between two large numbers ; ( 2 ) a simple `` recycling '' scheme can be introduced to reduce the number of trajectories to a minimum .these issues will be discussed in great detail in future publications .the correspondence between the semiclassical and quantum lms for the harmonic oscillator ground state is only fairly good , but one ought to recall that the action is minimal in this case . a real test of the correspondence principle requires a detailed investigation of the lm behavior in the large action limit .this in turn , requires that the bipolar decomposition be performed for the excited harmonic oscillator states . using , with , , and , we have obtained analytical solutions for all up to .the general form of the bipolar action for the eigenstate [ denoted is as follows : s_n(x ) = [ simpleex ] in above , is an - order odd / even polynomial , and is an - order even / odd polynomial , for even / odd .explicit coefficient values for and are listed in tables [ ftab ] and [ gtab ] , respectively ( coefficients for larger can be provided on request ) . [ cols="^,^,^,^,^,^,^,^,^,^,^ " , ] the bipolar solutions for the excited states behave exactly as predicted . 
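Tying back to the ground-state discussion above: with the semiclassical parameter values, and in units m = omega = hbar = 1, the integral in the decomposition recipe can be done in closed form — under our conventions tan(s/hbar) = erfi(x) — and a single trajectory can then be propagated by the "analytical approach" just described, via dx/dt = p/m = F/r(x)^2. A minimal sketch:

```python
import numpy as np
from scipy.special import erfi

# Ground-state bipolar functions in closed form (our conventions; m = omega = hbar = 1):
# psi0 = pi^(-1/4) exp(-x^2/2),  tan(s/hbar) = erfi(x),  r^2 = (psi0^2/4)(1 + erfi(x)^2).
F = 1.0 / (2.0 * np.pi)                       # semiclassical flux = inverse classical period

def r_squared(x):
    psi0_sq = np.pi ** -0.5 * np.exp(-x * x)
    return 0.25 * psi0_sq * (1.0 + erfi(x) ** 2)

def xdot(x):
    return F / r_squared(x)                   # dx/dt = p/m = F / r^2 on the + momentum sheet

x, t, dt = -2.0, 0.0, 1.0e-3                  # start in the left classically forbidden region
for step in range(1, 20001):                  # integrate up to t = 20 with RK4
    k1 = xdot(x); k2 = xdot(x + 0.5 * dt * k1)
    k3 = xdot(x + 0.5 * dt * k2); k4 = xdot(x + dt * k3)
    x += dt * (k1 + 2.0 * k2 + 2.0 * k3 + k4) / 6.0
    t += dt
    if step % 4000 == 0:
        print(f"t = {t:5.1f}   x = {x:7.3f}   p = {F / r_squared(x):6.3f}")
```

The printed positions show the trajectory crossing the classically allowed region |x| < 1 at roughly the classical pace and then creeping once it enters the right-hand forbidden region, never turning around — the pile-up behavior described above.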
in particular everywhere , and both and are as smooth , slowly varying , and monotonic as for the ground state .in fact , all of the bipolar functions qualitatively resemble those for , except on a larger scale as is also true of the semiclassical functions .this is in sharp contrast to the behavior of the wavefunctions , , themselves , which gain nodes and rapid oscillations as is increased .one very encouraging aspect of the excited state bipolar functions is that nodal features are not evident anywhere .however , this requires choosing the correct branch of the solution at every point in position space , such that the resultant curve is not discontinuous across a node .this issue is discussed in more detail in sec .[ numericalissues ] .in any case , the basic goal of the bipolar ansatz has been achieved , to obtain a decomposition which , like the corresponding semiclassical approximation , treats all nodes as interference between a superposition of left and right traveling waves , , which are themselves nodeless .moreover , beyond achieving just this basic goal , we find that the _ correspondence principle is satisfied in the large limit_. this is exemplified in fig .[ hoexcitedfig ] , wherein the semiclassical and bipolar quantum lms are presented for several harmonic oscillator states over the range considered .bipolar lagrangian manifolds ( lms ) for three different harmonic oscillator eigenstates ; moving concentrically outward from the origin , these are , , and , respectively .solid curves indicate quantum lms ; dotted curves indicate semiclassical lms .the correspondence principle is clearly satisfied with increasing . ] from the figure , the quantum lms are seen to enclose one - half quanta of area more than the semiclassical lms , which manifests primarily in the forbidden regions near the turning points , as expected . in a relative sense , this discrepancy becomes decreasingly significant in the large limit .note that the quantum lms completely enclose the semiclassical lms , which was not anticipated earlier , but is certainly a desirable property . from lower - case , this will only be satisfied if the bipolar quantum potential is negative everywhere .this has been observed for all examples considered thus far , using the appropriate semiclassical values for and ; however , most other parameter choices would not satisfy this property .even more so than in the ground state case , the bipolar quantum potential for is found to be very small throughout most of the classically allowed region .the magnitude of decreases with increasing , so that whereas for the ground state , by we have . of course ,the extent of the classical region is also larger with increasing ; thus by , we find to be practically indistinguishable from the over the range .the situation , depicted in fig .[ hopotfig](b ) , can be regarded as another manifestation of the correspondence principle .the correspondence principle also has important ramifications for trajectory calculations .in particular , not only are the bipolar quantum trajectories for large smooth and well - behaved throughout , but in the classically allowed region , they are virtually indistinguishable from classical trajectories .this has once again been verified by performing analytical trajectory calculations for , which were found to be just as numerically stable as for the ground state despite the fact that itself has _ten nodes_. 
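One way to organize the branch choice just mentioned (the interval-by-interval construction is discussed in more detail below) is sketched here for an excited harmonic-oscillator eigenstate, with the same hypothetical conventions as before; the branch constant is advanced by pi across each node so that the phase is continuous and monotonically increasing. As a check, the total phase gained across the grid should be close to (n+1) pi, in line with the integer quantization condition.

```python
import math
import numpy as np
from numpy.polynomial.hermite import hermval

# Interval-by-interval branch construction (our conventions, as in the earlier sketches)
# for the n-th harmonic-oscillator eigenstate; units m = omega = hbar = 1.  The same
# semiclassical flux F = 1/(2 pi) is used for every n, since the classical period of the
# oscillator is energy independent.  Points within a grid spacing of a node are noisy
# and would be interpolated over in a production code.

n = 4                                                    # even n, so x_m = 0 is not a node
F = 1.0 / (2.0 * np.pi)
C = 4.0 * F

x = np.linspace(-7.0, 7.0, 8001)
coeff = np.zeros(n + 1); coeff[n] = 1.0
norm = np.pi ** -0.25 / math.sqrt(2.0 ** n * math.factorial(n))
psi = norm * hermval(x, coeff) * np.exp(-0.5 * x**2)

nodes = x[:-1][np.sign(psi[:-1]) * np.sign(psi[1:]) < 0]     # approximate node locations
edges = np.concatenate(([x[0]], nodes, [x[-1]]))
j0 = np.searchsorted(edges, 0.0) - 1                         # interval containing x_m = 0

theta = np.empty_like(x)
for j in range(len(edges) - 1):
    sel = (x >= edges[j]) & (x <= edges[j + 1])
    xs, integ = x[sel], psi[sel] ** (-2)
    I = np.concatenate(([0.0],
        np.cumsum(0.5 * (integ[1:] + integ[:-1]) * np.diff(xs))))
    x_ref = 0.0 if j == j0 else 0.5 * (edges[j] + edges[j + 1])
    I -= np.interp(x_ref, xs, I)
    theta[sel] = np.arctan(C * I) + (j - j0) * np.pi         # branch constant: + pi per node

r = 0.5 * np.abs(psi) / np.maximum(np.abs(np.cos(theta)), 1e-12)   # nodeless amplitude
p = F / r**2                                                       # + momentum sheet
print("total phase gain / pi =", round((theta[-1] - theta[0]) / np.pi, 3),
      "  (integer quantization predicts about", n + 1, ")")
```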
this bodes very well for obviating the node problem in general .although the bipolar decomposition functions once obtained exhibit no special behavior in the vicinity of nodes , it turns out that nodes complicate the determination of these functions somewhat , vis - a - vis implementation of . to begin with ,let us imagine that as in the current harmonic oscillator case an analytical expression for the integral is available . for the moment, we also take to be even .note that the left and right sides of must be infinite at the nodes .thus , whereas the exact analytical expression can be used across the entire coordinate range , a new branch is encountered each time a node is traversed .the specific branch of interest is specified by the condition of continuity for , and by .a superficial difficulty is encountered for the odd states , for which there is necessarily a node at .strictly speaking , this implies that the integration must be singular . to circumvent this difficulty, we express the rhs of as an indefinite analytical integral , plus an arbitrary constant , .note that since must lie at a node for odd , can not serve as the second parameter , for singling out the particular solution of interest for .we can , however , use for this purpose . in particular ,if is even , then only one value of gives rise to the requisite odd solution .more generally , i.e. for arbitrary , we can still apply the oddness criterion locally .in other words , it is easy to show that should in general be chosen such that in the limit of small .this technique bears a resemblance to the cauchy principal value method. the above discussion presumes that an analytical integral is available for the rhs of . generally speaking, this will not be the case , and we must consider how to apply the above procedures when the integrations are performed numerically .fortunately , this is straightforward . the general procedure is to pick an arbitrary integration limit , , lying in between each adjacent pair of nodes ( where are treated as `` nodes '' in this context ) .equation ( [ tans ] ) is then applied to each interval separately , generating a smooth , numerically integrated function over the entire interval , that is correct to within an additive constant , .the individual values are then obtained , using the constraint , and applying the cauchy - like condition described above across each of the nodes separately .the numerical procedure described above has been applied succesfully to the morse oscillator system , for which : ( 1 ) is not symmetrical , and ; ( 2 ) the integrations must be performed numerically .the results will be presented in a future paper .we mention the morse investigation in this paper simply to emphasize the fact the present method is in fact applicable in a much broader context than the analytical harmonic oscillator system considered here .moreover , all of the conclusions drawn for the harmonic oscillator , regarding the correspondence principle and the like , evidently apply to more general systems .this is demonstrated in fig .[ morsefig ] , which depicts the semiclassical and bipolar quantum lms for the state of the morse oscillator .bipolar lagrangian manifolds ( lms ) for the fourth excited eigenstate of the morse oscillator system with twenty bound states total .solid curves indicate quantum lms ; dotted curves indicate semiclassical lms .semiclassical values for and were used to specify the quantum solution , as per sec .[ numericalissues ] . 
]the present work does not constitute the first application of the bipolar ansatz in a bohmian - like dynamical context . for two decades or so, a very interesting bipolar approach has been developed and advocated by e. r. floyd. more recently , essentially the same technique was derived by faraggi and matone ( fm), within a much broader context , and using a very different physical picture .it is somewhat remarkable that these two approaches give rise to exactly the same dynamical equations ( an illuminating comparison is presented in ref . and in ref . ) . perhaps even more remarkable , however , is that the dynamical law used in floydian / fm trajectory propagation is _ not _ equivalent to that of bohmian mechanics . in this section ,we compare and contrast the methods of floyd , fm , and the present work .the various approaches were developed independently , and so a brief discussion of the different philosophies is presented , as well as the mathematical similarities and differences .the comparison is particularly apt for the present paper , in that both the floyd and fm theories are restricted to _ stationary states only_. the starting point of the floydian approach is the qshje in 1 dof ( multidimensional systems may be considered , but only if there is separation of variables ) . as per sec .[ background ] , this is natural enough , for a bohmian - like theory applied to stationary states ; however , there are some subtleties regarding the manner in which energy is treated , that give rise to a non - bohmian dynamical law .the intriguing approach of fm begins not with the qshje , but with the fundamental postulate that all systems are equivalent under coordinate transformations. this is termed the `` equivalence principle , '' in obvious analogy with general relativity , with which the fm approach shares many parallels .indeed , one of the goals of fm is to reconcile quantum mechanics and general relativity , to which end it is natural to focus on the action , which plays a key role in both physical theories . in any event, the fm approach relies on the existence of a particular coordinate transformation that reduces the system to that of a free particle. this approach implicitly relies on the fact that the wigner - weyl correspondence is not preserved under canonical / unitary transformations a feature that has been exploited to great effect in improvements to semiclassical theories , such as the langer modification, although not always with full comprehension that this is what was taking place. starting from the equivalence principle , fm _ derive _ the qshje , and demonstrate that energy quantization arises naturally from the condition that this postulate be satisfied .they find , moreover , that a unipolar ansatz is insufficient to achieve this , but rather , a bipolar ansatz of the form is required .in contrast , floyd begins with the qshje , and then presumes a bipolar ansatz for which each of the two terms is a solution .floyd s motivations are evidently similar to the author s , in that he introduces the bipolar ansatz in order to: ( 1 ) obtain a bohm - like formalism that is well - behaved at nodes ; ( 2 ) obtain dynamical trajectories that are _ not _ stationary , thus obviating einstein s concern , and allowing for the possibility of satisfying the correspondence principle . for purposes of this discussion ,the approaches of floyd and of fm may from this point forward be regarded as identical . 
since both employ the qshje ,as does the present approach , it is clear that _ all three utilize essentially the same bipolar functions _, , , , and . however , there are some important differences .floyd uses a different convention , for which both the normalization of , and the flux , change simultaneously , so that his is proportional to ours .neither of the other methods identifies the two parameters associated with in the way that we have done , and they certainly do not provide a means of selecting preferred values for these parameters , as per sec .[ paramchoice ] .indeed , floyd considers each of the two - parameter family of solutions to , which he terms `` microstates , '' to constitute an equally valid decomposition of .he further asserts that since the equation per se provides no means of distinguishing microstates , that the qshje must be regarded as the more fundamental equation , in some sense .floyd also acknowledges the fundamental differences between and , and the advantages of the former vis - a - vis nodes , as discussed in sec .[ nodeissues ] ( although he incorrectly claims that the latter is singular at the origin for the first excited harmonic oscillator eigenstate ) . on the other hand, the lack of a criterion for selecting a prefered microstate implies that the quantum trajectories associated with his approach termed `` floydian trajectories''will not in general approach classical trajectories in the correspondence principle limits .actually , there is another , more fundamental reason why floydian trajectories do not satisfy the correspondence principle .this is because the underlying dynamical law governing their evolution is radically different from that of the bohmian quantum trajectories utilized here .the trajectories are different , _ even though _ the bipolar decompositions are identical a very curious situation that bears further analysis . in particular , the period of floydian trajectories is finite , necessitating a speed that approaches infinity as .in contrast , bohmian trajectories _rapidly slow down _ in the forbidden regions , never reaching the coordinate asymptotes , as discussed in sec .[ hoground ] .whereas floyd finds implications for hidden variables and the copenhagen interpretation , we adopt a decidedly less philosophically ambitious perspective. the floydian dynamical law is obtained from the qshje by applying a classical procedure, wherein trajectory evolution is related to the quantity to yield x = ^-1 1 m s(x ) , [ floyd ] where is the bipolar modified potential .various interpretations may be provided for the factor in .fm adopt a relativistic interpretation which lumps it together with to form the `` effective quantum mass , '' whereas brown ( in essence ) uses it to define an `` effective time . '' in the bohmian approach , is regarded as independent of , thus giving rise to the usual relation . in the floydian approachhowever , is considered to depend on , giving rise to the more complicated expression above , for which the canonically conjugate momentum , , is not necessarily equal to mechanical momentum , . 
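Side by side, in generic notation (U = V + q the bipolar modified potential), the two dynamical laws applied to the same decomposition are, schematically — the placement of the energy derivative is our reading of the expression labelled [floyd] above, so it should be treated as indicative rather than exact:

\[
\text{Bohmian:}\quad \dot{x} \;=\; \frac{1}{m}\,\frac{\partial s}{\partial x},
\qquad\qquad
\text{Floydian / FM:}\quad \dot{x} \;=\; \left[\,1-\frac{\partial U}{\partial E}\,\right]^{-1}\frac{1}{m}\,\frac{\partial s}{\partial x},
\]

so the two trajectory families differ precisely by the energy-dependence factor — the same factor that the FM reading absorbs into an "effective quantum mass."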
how can the same be regarded as energy - independent in one theory and energy - dependent in another ?it has to do with different interpretations of the energy .one can make a distinction between the quantum energy , , and the classical energy , .the quantum energy is that of the eigenstate , which determines , , and .the contours of define the classical energies , with corresponding to the bipolar quantum lm .the function clearly depends on , but does not depend on .the meaning of therefore depends on whether ( floydian approach ) , or ( bohmian approach ) .floydian dynamics , therefore , represents a different philosophical outlook than that of bohmian dynamics .both approaches are correct , and ultimately yield mathematically identical results . the main pointwe wish to make here is simply that this difference does not appear to have anything to do with the bipolar ansatz per se .indeed , one could easily apply the floydian dynamical law to the _ unipolar _ ansatz although for stationary eigenstates , this would yield the same results as bohmian propagation .we close this section with a few final comparisons between the floyd / fm and present approaches .first , we comment that quantum energies are discrete for bound states , and so any derivative with respect to requires careful consideration , as has been previously noted. this is relevant for floydian trajectories , but is essentially a non - issue for bohmian trajectories .second , although floyd has certainly considered the 1 dof harmonic oscillator system , he appears not to have derived analytic expressions for the relevant bipolar functions , as we have done in sec .[ horesults ] .we have , moreover , verified that floyd s closed form expression for the bipolar modified potential, u(x ) = e - 1/^2 , [ floydclosed ] is consistent with , by substituting , , , , and into .finally , we comment that the floyd / fm approach , being based on the qshje , does not appear to generalize to the non - stationary - case , whereas the present approach does at least in certain situations .the equation is linear , yet the equivalent obtained via substitution of the ansatz into the equation are not . quite apart from the philosophically intriguing issues which this raises ,a primary conclusion of the present work is that this situation may also be exploited for practical purposes . in particular ,the superposition principle allows us to divide up the initial wavepacket into pieces , evolve each of these separately over time , and then recombine them to construct the time - evolved itself .so far as the equation is concerned , the division into pieces is arbitrary .however , if the evolution of the is performed using , nonlinearity implies that the division is _ not _ arbitrary , but has a large impact on the time - dependent behavior of the resultant and . 
in principle ,therefore , one may improve the numerical performance of quantum trajectory calculations simply by judiciously dividing up the initial wavepacket into several pieces .it remains to be seen the extent to which such a procedure will prove beneficial for actual numerical calculations of real molecular systems to be sure , much depends on the manner in which is decomposed .nevertheless , the particular bipolar scheme explored in this paper , already appears to exhibit much promise at least with regard to ameliorating the infamous node problem , which has thus far severely limited the effectiveness of qtms in the molecular arena .the basic idea is to decompose a wavepacket , , that _ has _ nodes , into a linear combination of two components , , that do not . in practice , it is not only nodes per se that cause problems for numerical qtms , but more generally , any large or rapid oscillations in . thus , should ideally be not only nodeless , but also smooth and slowly varying .for the special case where is a stationary hamiltonian eigenstate , the semiclassical method is well - known to yield approximate functions with the requisite properties even when itself is highly oscillatory .it is for this reason that the semiclassical solutions were used as a guide for determining the corresponding exact quantum s .not only have we provided an explicit recipe for obtaining the latter , we have also shown that these satisfy the correspondence principle in the appropriate semiclassical limits .thus , the bipolar quantum potential , , obtained here now understood to represent the quantum correction to the semiclassical approximation is not only well - behaved in the vicinity of nodes , but actually approaches zero , in the large action limit .the bipolar quantum trajectories also behave very much like classical / semiclassical trajectories ; indeed , the two are nearly identical in the classically allowed region , in the large action limit .this may seem paradoxical , as both bipolar and unipolar quantum trajectories conform to the same dynamical law , and the latter are known to behave very non - classically .unipolar trajectories do not cross in position space , for instance which can cause kinky trajectories and other node - related difficulties , especially when the wavepacket undergoes reflection .the bipolar trajectories get around this difficulty as follows : whereas the trajectories _ on a single lm sheet _never cross each other , they are all headed in the same direction anyway , and so they do nt get in each other s way . on the other hand , trajectories on one lm sheetare free to cross those on the other sheet just like the corresponding semiclassical trajectories . from a philosophical standpoint, one might thus regard the present bipolar decomposition to be more compelling than the standard unipolar approach although curiously , this stance would require one to abandon bohm s original pilot wave interpretation .the above discussion anticipates future application of the present ideas to arbitrary time - evolving wavepackets ; but it must be borne in mind that thus far , only stationary wavepackets have been considered .it is encouraging that the bipolar decomposition scheme outlined here was found to be preserved over time for stationary states ( sec .[ qshjeintro ] ) . 
on the other hand , for the more general time - evolving case, the lms themselves will change over time , and the bipolar decomposition scheme itself need not be preserved .it is therefore possible that the initially nodeless s may develop nodes over the course of time .this need not cause difficulties in practice however , because at any desired time , one is free to redecompose into new s that _ are _ nodeless .one of the most appealing aspects of the excited harmonic oscillator results of sec .[ hoexcited ] is the fact that the bipolar functions remain smooth and slowly varying for all values of .indeed , the lms for all values resemble each other , apart from a change of scale .this is very advantageous from a numerical perspective , as it suggests that very few trajectories would be needed to accurately compute derivatives , if a completely numerical propagation scheme were adopted .more to the point : the number of trajectories required should be essentially _ independent _ of the number of nodes . for sufficiently large , it should even be possible to perform an accurate calculation with _ fewer than trajectories_a prospect that would be virtually unheard of in a unipolar context .note that since the bipolar trajectories themselves are also much smoother than the unipolar trajectories , far fewer time steps should be required in the bipolar case .we conclude with a brief discussion of the prospects for multidimensional systems . at present , it is not entirely clear how best to apply bipolar decomposition to an arbitrary , real , multidimensional wavepacket , . for regular systems ,semiclassical theory suggests an essentially direct - product decomposition , via pairs of action - angle coordinates .generally speaking , however , it is difficult to find these coordinates unless the wavepacket is initially separable , which is very often the case in molecular applications . on the other hand, the factor - of - two bifurcation applies to each degree of freedom separately , resulting in components total , where is the number of dofs .this is clearly undesirable for large .a more effective strategy may be to bifurcate along the _ reaction coordinate only_. these and other ideas will be explored in future publications .this work was supported by awards from the welch foundation ( d-1523 ) and research corporation .the author would like to acknowledge robert e. wyatt for many stimulating discussions , and for introducing him to the work of floyd , and of faraggi and matone .david j. tannor is also acknowledged .jason mcafee is also acknowledged for his aid in converting this manuscript to an electronic format suitable for the arxiv preprint server .
|
the semiclassical method is characterized by finite forces and smooth , well - behaved trajectories , but also by multivalued representational functions that are ill - behaved at caustics . in contrast , quantum trajectory methods based on bohmian mechanics ( quantum hydrodynamics)are characterized by divergent forces and erratic trajectories near nodes , but also well - behaved , single - valued representational functions . in this paper , we unify these two approaches into a single method that captures the best features of both , and in addition , satisfies the correspondence principle . stationary eigenstates in one degree of freedom are the primary focus , but more general applications are also anticipated .
|
stochastic optimization for insurance business with investment control has been studied extensively in recent years . under the classical cramer - lundberg model ( compound poisson risk process ) , the problem of ruin minimization was first considered in hipp and plum , where the investment control is unconstrained and the investment amount in the risky asset can be at any level .more recently , azcue and muler studied the same model with a borrowing constraint that restricts the ratio of borrowed amount to the surplus level to invest in the risky asset .in belkina et al . , the authors considered the problem with a restriction that the investment ( purchase or short - sell ) in the risky asset is allowed only within a limited proportion of the surplus . in gaier ,grandits and schachermayer , and hipp and schmidli , for the case with zero interest and light tailed claims , asymptotic behavior of the ruin probability was investigated and convergence of the optimal investment level was proved as the surplus tends to infinity . in frolova et . , for the case with exponential claims , a power function approximation of the ruin probability was provided under the assumption that all surplus is invested in the risky asset . in gaier and grandits , , for claims with regularly varying tails , and in grandits , schmidli and eisenberg , for sub - exponential claims , certain asymptotic properties of the ruin probability and the optimal investment amount were obtained .other related research articles worked on more complex control forms with reinsurance and investment .for example , schmidli considered an unconstrained optimal reinsurance - investment control problem under the classical model .taksar and markussen , luo and luo et al . studied the problem under the diffusion approximation model with various investment restrictions .one common extension of the compound poisson model considers brownian perturbation , which is known as the jump - diffusion model ( see , e.g. , , , and ) .the surplus process is the sum of the classical risk process and a brownian motion . under this model , in zhang and yang , a general objective function was studied with unconstrained investment control and numerical methods to compute the optimal investment strategy were discussed . in gerber and yang ,absolute ruin probability was considered .laubis and lin showed a limiting expression for the ruin probability function which is a power function under the assumptions that investment amount is a fixed fraction of the surplus and that the insurance claims are exponential . in lin , an exponential upper bound for the minimal ruin probabilitywas obtained and numerical calculations revealing the relationships between the adjustment coefficient and model parameters were conducted . in a surplus model with stochastic interest rate , paulsen and gjessing studied the probability of eventual ruin and the time of ruin without investment control. this paper studies the problem of ruin probability minimization with investment control .we consider the perturbed compound poisson surplus model and assume positive interest rate as in and .the surplus can be invested into a risky asset ( stock ) and a risk - free asset , where the risky asset price follows a geometric brownian motion .we consider two cases of investment . in the first case, we assume that there is no restriction on the investment ( see , e.g. 
and ) .that is , the investment amount in the risky asset can be at any level .note that short - selling of the risky asset is allowed at any level in this case . in the second case ,we assume that the investment amount in the risky asset is no more than a fixed level and short - selling the stock is not allowed .this restriction is set to reduce leveraging level of the insurer which was considered in e.g. . in both cases ,the goal is to minimize the probability of ruin .the minimal ruin probability function is characterized by an integro - differential hjb equation as in , , , and .the hjb equation has a classical solution and it can be shown that the minimal ruin probability function is proportional to the solution by a verification result .we summarize three main contributions of this paper .first , we define novel operators to prove the existence of a classical solution to the hjb equation in the two investment cases .these operators provide an alternative method to compute the optimal investment strategy and the minimal ruin probability ( see numerical examples in the last section ) .second , we give asymptotic results on the optimal investment strategy and the minimal ruin function for low surplus levels . in the unconstrained case , we find that when the surplus level approaches to , the optimal investment amount tends to a fixed non - zero value , in contrast to the model without perturbation in and , where the optimal investment level tends to zero as surplus tends to .in addition , we find the rate at which the optimal investment amount converges to the non - zero level .asymptotic results near for the minimal ruin function are also studied . in the constrained case, we find close interplay between the model parameters and the optimal investment control .we give parameter conditions under which the optimal investment amount takes values , , or a certain level in between respectively when the surplus is low .note that all these asymptotic results are obtained for an arbitrary claim size distribution .third , in the special case with an exponential claim distribution , we prove some new asymptotic results for large surplus levels .we show that the optimal investment amount has a finite limit when surplus tends to infinity .we also show that the minimal ruin probability function has a limiting expression that is the product of an exponential function and a power function as surplus tends to infinity .we note that these new limiting results ( of the optimal investment problem with exponential claims ) hold also in the classical model without perturbation .the rest of this paper is organized as follows . in section 2, we formulate the optimization problem . in section 3 , we prove existence of the classical optimal solution and give asymptotic results in the constrained case . in section 4 , we study the unconstrained case . in section 5 , we investigate the model when the claim size is exponential . numerical examples and concluding remarks are given in section 6 .we assume that without investment the surplus of the insurance company is governed by the cramer - lundberg model : where is the initial surplus , is the premium rate , is a poisson process with constant intensity , s are positive i.i.d . 
random claims .suppose that at time , the insurance company invests an amount of to a risky asset whose price follows a geometric brownian motion where is the stock return rate , is the volatility , and is a standard brownian motion independent of and s .the rest of the surplus of amount is invested in a risk free asset which evolves as where is the interest rate .we also assume that the surplus process is perturbed by a brownian noise term . with perturbation and dynamic investment control , denoted by ,the surplus process is governed by +\sigma \int_0^t a_sdb_s + \sigma_1\int_0^t db^1_s -\underset{i=1}{\overset{n(t)}{\sum}}y_i , \end{split}\ ] ] where is a standard brownian motion with for some correlation and is the perturbation volatility .we assume that all the random variables are defined in a complete probability space endowed with a filtration generated by processes and .a control policy is said to be _ admissible _ if satisfies the following conditions : ( i ) is predictable , ( ii ) , and ( iii ) is square integrable over any finite time interval almost surely , where ] .we assume in this section and omit the other case which can be treated similarly .suppose is a twice continuously differentiable function and solves the hjb equation .write when . then is the unconstrained maximizer of the right side of hjb equation when and if .the constrained maximizer when ] for any , the functions and are continuous in .function is continuous in uniformly for ] .use the supremum norm for any ] , suppose and , then we have [w_1(x)-w_2(x)]\ } } { \sigma^2(a_2^*)^2 + 2\rho\sigma\sigma_1a_2^*+\sigma_1 ^ 2}\\ \leq&c(k)||w_1-w_2|| , \end{split}\ ] ] where /\sigma_1 ^ 2.\ ] ] thus we see the operator is lipschitz with respect to the supremum norm on ]. then it holds if we select a small such that .this shows the operator is a contraction on ] such that the solution can be extended to ] if ; if ; and } = -\frac{\mu - r}{\sigma^2[\rho\frac{\sigma_1}{\sigma } -\frac{c}{\mu - r}+\sqrt{\frac{c^2}{(\mu - r)^2 } + \frac{2\sigma_1c}{\sigma(\mu - r)}(\rho_1-\rho)}]} ] for any , the functions , and are continuous in .consider two continuous functions in ] with constant : the rest of the proof is similar to that of lemma [ lemmavtv ] and we skip it .we then conclude .write where is the solution of .we prove the following lemma : is positive and is negative on .we prove by contradiction .define and suppose .then it holds on , and ( due to continuity of and definition of ) .it also holds for .notice that satisfies on .passing in , we obtain wherefrom we see is finite and hence choose and close to with such that for . thus passing in the above , contradiction !thus it must hold and we conclude is positive on . immediately , is negative . to this end, we see that the function given by solves the hjb equation with respect to conditions , and . by a verification result ( similar to lemma [ ver ] ) , we see that the maximal survival function ( value function ) is given by .next we study the asymptotic behavior of the hjb solution for low initial surplus .write and then satisfies equation where is defined in , wit initial condition multiplying both sides of the last equation by , we get equation we find representations of the solution of equation with condition and its derivative in such forms as : where and are some constants . 
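Before the low-surplus expansion is carried out, a brief numerical aside: the controlled surplus defined above is straightforward to simulate, which gives an independent check on any candidate strategy. The sketch below is purely illustrative: it uses an Euler scheme with the drift taken to be c + rX_s + (mu - r)a_s (our reading of the investment mechanism: premium, plus interest on the un-invested surplus, plus the excess stock return), exponential claims, correlated Brownian drivers, a constant investment amount, and a finite horizon T as a stand-in for the infinite-horizon ruin probability. All numerical values are arbitrary.

```python
import numpy as np

# Illustrative Monte Carlo estimate of the (finite-horizon) ruin probability for the
# perturbed surplus under a constant investment amount `a`; parameter values arbitrary.
rng = np.random.default_rng(0)

c, lam, k = 2.0, 1.0, 1.0             # premium rate, claim intensity, 1/mean claim size
mu, r, sigma = 0.08, 0.03, 0.30       # stock return, risk-free rate, stock volatility
sigma1, rho = 0.50, 0.20              # perturbation volatility, correlation of the drivers
x0, a = 5.0, 1.0                      # initial surplus, constant investment amount
T, dt, n_paths = 50.0, 0.01, 20000

X = np.full(n_paths, x0)
alive = np.ones(n_paths, dtype=bool)
sqdt = np.sqrt(dt)
for _ in range(int(T / dt)):
    z1 = rng.standard_normal(n_paths)
    z2 = rng.standard_normal(n_paths)
    dB = sqdt * z1                                            # driver of the stock
    dB1 = sqdt * (rho * z1 + np.sqrt(1.0 - rho**2) * z2)      # correlated perturbation driver
    claims = rng.exponential(1.0 / k, n_paths) * (rng.random(n_paths) < lam * dt)
    X = np.where(alive,
                 X + (c + r * X + (mu - r) * a) * dt + sigma * a * dB + sigma1 * dB1 - claims,
                 X)                                           # freeze paths that are already ruined
    alive &= X >= 0.0                                         # ruin checked on the time grid only
print("estimated ruin probability by time T:", 1.0 - alive.mean())
```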
taking into account that , , we have from on principal terms of expansion : \left({1 + o(1 ) } \right)\\ = & \left [ { c_\rho\alpha^2 \beta x^{2\beta-1 } + c_\rho\alpha\beta x^{\beta - 1 } + \left ( { r - \frac{\lambda } { { \beta + 1 } } } \right)\alpha^2 \beta x^{2\beta } } \right . \\ & + \left . { ( r - \lambda ) \alpha\beta x^\beta + \frac{1}{2}\sigma_\rho^2\alpha^2 \beta ^2 x^{2\beta-2}}\right ] \left ( { 1 + o(1)}\right ) , \,\,\,x \to 0 . \end{split}\ ] ] from this relation it is easy to see that then therefore }. \end{split}\ ] ] recall that in view of .we have , and where note that },\ ] ] where is given by .hence , we get asymptotic representation of : in view of the value function , we have [ thm41 ] it holds , \,\,\,x \to 0,\ ] ] where is given in and is a positive constant .next , we find more exact asymptotic representation of to obtain an asymptotic representation of the optimal strategy . for this , we introduce the change of variables where is defined in .then we characterize in the following form : where and are some constants . fromwe have and then and for the optimal strategy , in view of , and , we obtain for : }{\sigma^2b[1 + 2\eta x(1+o(1))]}-\rho\frac{\sigma_1}{\sigma}\\ & = -\frac{c}{\mu - r}+\sqrt{\frac{c^2}{(\mu - r)^2}+\frac{2\sigma_1c}{\sigma(\mu - r)}(\rho_1-\rho ) } -\frac{\mu -r}{\sigma ^2}\left(1+\frac{2\eta}{b}\right)x(1 + o(1 ) ) .\end{split}\ ] ] finally , we have [ thm42 ] for the optimal investment strategy , it holds \left[a_v^*(0+)+\rho\frac{\sigma_1}{\sigma}\right]+c_\rho \frac{{\left ( { \mu - r } \right ) } } { { \sigma ^2 } } } { \sqrt{c_\rho ^2+\frac{{\left ( { \mu - r } \right)^2 } } { { \sigma ^2 } } \sigma _ \rho ^2}}\right)x(1 + o(1)),\ ] ] when , where is given in , and are given in .we note that the results in theorems [ thm41 ] and [ thm42 ] hold also in the constrained case under the parameter condition .these results are obtained for any claim size distribution with the property when .[ rmk - mr ] the results in theorems [ thm41 ] and [ thm42 ] hold for the parameter case under which the optimal investment amount is a constant strategy with and . in the case with unconstrained investment ,when and , we have , which indicates that it is optimal to short - sell the high return stock to earn interest at low surplus levels .we note that this counter - intuitive investment strategy , which never occurs in the classical model without perturbation in the unconstrained case , shows a special feature of the perturbed model that investment ( buying / short - selling the stock ) is not only for the stock return but also for neutralizing the perturbation risk .we also note that this strategy ( short - selling the high return stock to earn interest ) can occur when a strong investment constraint on borrowing ( money ) and buying ( stock ) is imposed in the model without perturbation ( see , e.g. ) .we now analyze the case when the claim size has an exponential distribution with mean . in this case , we show some new results on asymptotic behaviors for large surplus levels . for the special case , from , we see that the optimal investment amount is a constant and this case is addressed in remark [ rmk-54 ] . in the following ,we assume . as in ( the case ), we first derive an equation for the optimal strategy . 
from equation , for the case with exponential claim distribution function , where , we have where , is given in , and are given in .let it holds .then equation can be rewritten as and hence differentiating this equation with respect to , we get . \end{split}\ ] ] divide both sides by , and write we have the following equation for : , \end{split}\ ] ] and finally , \tilde{a}_v'(x ) = & -\frac{\sigma ^2}{m } \tilde{a}_v^3(x)-2\left[r-\lambda+ \frac{c_\rho}{m } -\frac{{(\mu - r)^2 } } { 2\sigma ^2 } + \frac{r}{m}x \right ] \frac{\sigma ^2}{\mu - r}\tilde{a}_v^2(x)\\ & + 2\left(c_{\rho } + rx + \frac{1}{2m}\sigma _\rho^2\right)\tilde{a}_v(x ) - \sigma _\rho^2 \frac{{\mu - r}}{{\sigma ^2 } } .\end{split}\ ] ] in the sequel , we find asymptotic representations of the optimal strategy and the value function at infinity .it can be shown that equation has a family of bounded solutions , each of which is representable in principal in the form of the following asymptotic series for large : where a short justification of , for the function defined in , is given in the appendix .we then obtain the following property of the optimal strategy : [ th51 ] it holds for .noticing '=-\frac{\mu - r}{\sigma^2\tilde{a}_v(x)} ] . in these models ,investment amount in the risky asset tends to infinity as surplus tends to infinity .consequently , when the surplus level is large , the stock volatility and stock growth are major parameters that affect the ruin probability , but not the exponential mean , claim occurrence intensity and premium rate . in theorem 4.1 of , it is shown where solves the equation we note that the surplus process there is a special case of the jump - diffusion process in this paper with and , and that he bound is obtained by using a constant investment policy with amount under which the process is a martingale . in theorem 4.2 of with brownian perturbation , it is shown , where solves the equation the discounted surplus process of is a jump - diffusion process in a slight different form of ours with . the exponential bound can be obtained using a constant investment strategy of amount in our model with , and the process is a martingale .first assume that function satisfies equation for large .this equation has the form '(x ) - m(v)(x ) = 0.\ ] ] in the case of exponential claims , recall , and equation can be rewritten as ' ' ( x)+\left [ { c + ( \mu - r)a + rx } \right]v ' ( x)\\ + & k\lambda\int\limits_0^x { v ( x - y)\exp ( - k y)dy } - \lambda v ( x ) = 0 .\end{split}\ ] ] denote it is easy to see that if satisfies , then it satisfies the following equation : where ' ' ( x)+\left [ { c + ( \mu - r)a + rx } \right]v ' ( x)\\ + & k\lambda\int\limits_0^x { v ( x - y)\exp ( - k y)dy } - \lambda v ( x ) , \end{split}\ ] ] which is the left - hand side of equation . 
then in view of, equation can be rewritten as an ordinary differential equation ( ode ) of the 3-rd order : ' ' ( x ) \\ & + \left [ { \frac{{2((r - \lambda ) + k c + k a(\mu - r))}}{{a_{\rho}^2 } } + \frac{{2rk } } { { a_{\rho}^2 } } x }\right]v ' ( x ) , \end{split}\ ] ] where put : and }}{{a_{\rho}^2 } } , \quad a_4 = \frac{{2rk } } { { a_{\rho}^2}},\ ] ] then the ode takes the form where we set then therefore , we obtain the equation in the following matrix form : where , and rewrite equation in the form this system has an irregular singular point at infinity of the 2-nd rang ( see ) .since the matrix has the eigenvalue zero , then to obtain a principal term of asymptotic behavior of the solution at infinity , we must find the correction to the zero eigenvalue by perturbation theory up to to do this , we use the method of asymptotic diagonalization for systems of linear ode ( see and the references therein ) .first , we find a diagonalizator of matrix , i.e. a matrix such that where is a diagonal matrix .it is easy to show that next introduce a change of variables where , is the identity matrix , and , are some matrices to be determined below . differentiating equation in , we have and we get from the equation ,\ ] ] where choose matrices , in such a way that the equation assumes the form where and are some diagonal matrices .we let ( diagonal elements of are the same as those in the matrix ) and we determine below . equating the right sides of and we have equating the coefficients of we obtain which yields }}/{{a_2 } } } & 0 \\\end{array } } \right).\ ] ] equating now the coefficients of we get and }}\left ( { a_1 - 2k } \right)/{{a_2 ^ 2 } } } & 0 \\\end{array } } \right),\\ & \tilde a_2 = \left ( { \begin{array}{*{20}c } { { \lambda } /{r } - 1 } & 0 \\ 0 & { 1 - { \lambda } /{r } } \\ \end{array } } \right ) .\end{split}\ ] ] the system is asymptotically equivalent to the following system ( see ) : where , which is separated into two independent equations : for , the solutions of these equations have the following form : for some constants and . the same representations are true for the solution of .notice , \end{split}\ ] ] where , are the elements of matrix : therefore , considering the above notation for nondecreasing function satisfying ( [ hjba ] ) for large , we conclude and where .thus note that the same conclusions are true for the case , if we consider the corresponding values of , , from , and from ( in particular , in this case ) .write if , or , we have and for large , thus and solves . if ( i.e. , ) , from the asymptotic representation for optimal strategy at large values of the surplus in the unconstrained case , we see and for large values of the surplus ; then the optimizer is , where solves .if ( ) , we have and for large , thus and solves .if ( i.e. , ) , then , and for large if is positive , or and if is negative . then solves or respectively .if , one of these cases takes place depending on others parameters and we omit further discussions . if ( ) , we have , and for large if is positive , or and if is negative. then solves or respectively . 
if , one of these cases takes place depending on others parameters .we have the following theorems on the optimal strategy and the value function : [ th53 ] for large , it holds (1+o(\frac{1}{x } ) ) , \ ; & \\frac{(\mu - r)m}{\sigma ^2}-\rho\frac{\sigma_1}{\sigma}\geq 0 ; \\\qquad \qquad 0 , & \\frac{(\mu - r)m}{\sigma ^2}-\rho\frac{\sigma_1}{\sigma}<0;\\ \qquad \qquad a , & \a<\frac{(\mu - r)m}{\sigma ^2}-\rho\frac{\sigma_1}{\sigma}. \end{cases}\end{aligned}\ ] ] moreover , if in the first case , more exact relation is fulfilled .[ th54 ] it holds for some constant .[ rmk-54 ] from the analysis in this section , we see that under any investment strategy with a fixed amount invested in the risky asset , the survival probability function has a limiting expression with the same principal term ( product of exponential and power functions ) as in theorem [ th54 ] .we note that the results in theorems [ th53 ] and [ th54 ] remain valid in the model without perturbation ( ) .we also note that in the case without risky investment and perturbation ( and ) , the ruin probability function has the following form ( see , e.g. and ) : which implies , , for some .in this section , we give two numerical examples and a few concluding remarks . in the examples , we consider the case of unconstrained investment .computations using the asymptotic results and the operator are conducted for various claim distributions . in this example , the parameters are given by the following : , , , , , , , and .we give calculations for cases with exponential , half - normal and log - normal claim distributions .for all cases of different distributions , we have the following asymptotic result for low surplus levels using and theorem [ thm42 ] .the exponential claim distribution has mean with tail probability function .the half - normal claim distribution has density and tail - probability functions given below : ,\ \x>0,\ ] ] where is the standard normal distribution function and is a parameter .we set and then the mean of the distribution is .the log - normal distribution has density and tail probability functions with parameters and .we set and and hence the mean is .for these claim distributions , the optimal investment controls calculated using and the operator numerically are given in figures [ eg1 - 0 ] and [ eg1-infty ] . with the given exponential claim distribution , it holds the following asymptotic result for large surplus levels : using . in this example , the parameters are given by the following : , , , , , , , and .we give calculations for cases with exponential , weibull and pareto claim distributions .for all cases of different distributions , we have the following asymptotic result for low surplus levels using and theorem [ thm42 ] .the exponential distribution has mean and .the weibull distribution has density and tail - probability functions where and are parameters .we set , .so the mean of the distribution is .the pareto claim distribution has density and tail probability functions where and are parameters .we set , ; so the mean is .for these claim distributions , the optimal investment controls calculated using and the operator numerically are given in figures [ eg2 - 0 ] and [ eg2-infty ] ( we note that the optimal investment strategies in this example barely show a difference in figure [ eg2 - 0 ] in the two cases with the exponential and pareto claim distributions ) .in the case with the exponential claim distribution , we have the following asymptotic result for large surplus levels using . 
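the asymptotic expressions and the numerical examples above can be cross-checked by direct simulation of the surplus process. the sketch below is a minimal monte carlo estimate of finite-horizon ruin probabilities for the perturbed jump-diffusion surplus under a constant investment amount, together with the classical closed-form ruin probability for exponential claims in the model without investment, interest or perturbation. all parameter values, the time horizon and the use of a constant (rather than optimal) strategy are illustrative assumptions; the operator-based computation of the optimal strategies used for the figures is not reproduced here.

```python
# monte carlo ruin probabilities for a perturbed cramer-lundberg surplus with
# a constant amount `a` held in the risky asset; all numbers are illustrative.
import numpy as np

rng = np.random.default_rng(0)

def ruin_prob_mc(x0, a, c=1.5, lam=1.0, m=1.0, mu=0.08, r=0.02,
                 sigma=0.3, sigma1=0.2, rho=0.0,
                 T=100.0, dt=0.02, n_paths=5000):
    """Fraction of paths ruined before T (a lower bound on the
    infinite-horizon ruin probability)."""
    n_steps = int(T / dt)
    x = np.full(n_paths, float(x0))
    alive = np.ones(n_paths, dtype=bool)
    sq = np.sqrt(dt)
    for _ in range(n_steps):
        z1 = rng.standard_normal(n_paths)
        z2 = rng.standard_normal(n_paths)
        dB = sq * z1                                      # drives the risky asset
        dB1 = sq * (rho * z1 + np.sqrt(1 - rho**2) * z2)  # perturbation noise
        drift = c + r * (x - a) + mu * a                  # premium + interest + stock return
        # one claim per step with probability lam*dt approximates Poisson arrivals
        jump = np.where(rng.random(n_paths) < lam * dt,
                        rng.exponential(m, n_paths), 0.0)
        x = np.where(alive,
                     x + drift * dt + sigma * a * dB + sigma1 * dB1 - jump, x)
        alive &= x >= 0.0
    return 1.0 - alive.mean()

def classical_ruin(x0, c=1.5, lam=1.0, m=1.0):
    """Closed form for the classical model (no investment, no interest,
    no perturbation) with exponential claims of mean m, assuming c > lam*m."""
    return (lam * m / c) * np.exp(-(1.0 / m - lam / c) * x0)

if __name__ == "__main__":
    for x0 in (1.0, 3.0, 5.0):
        print(x0,
              ruin_prob_mc(x0, a=0.5),
              ruin_prob_mc(x0, a=0.0, mu=0.0, r=0.0, sigma=0.0, sigma1=0.0),
              classical_ruin(x0))
```

the second monte carlo column should approach the closed form as the horizon and the number of paths grow, which gives a simple sanity check before the investment and perturbation terms are switched back on.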
in this paper, we study the optimal investment control problem under the scenario of ruin minimization .the surplus is modeled by a perturbed cramer - lundberg process .investment control with a black - scholes stock and a risk - free asset is considered .we prove the existence of a classical solution to the hjb equation in both cases of investment using operators . in the constrained investment case , for low surplus levels, we find parameter conditions under which the optimal investment amount takes values ( no investment in the risky asset ) or ( maximal level of risky investment ) , or it tends to a fixed level . in the unconstrained case ,we show that the optimal investment amount approaches to a fixed level at a rate of order as the surplus level goes to .we also show that the maximal survival probability tends to at a rate of order . in the case with exponential claims ,we give new asymptotic results for large surplus values .we prove that the optimal investment amount tends to a fixed level at a rate of as the surplus level tends to infinity .we also prove that the minimal ruin probability function has a limit expression of .in general , the optimal investment control and the maximal survival probability function are not analytically tractable under the jump - diffusion model , i.e. , it is usually unable to give explicit expressions for them .thus the asymptotic results in this paper provide convenient and insightful calculations when finding the optimal investment control and the maximal survival probability .the first author of this research was supported by the russian fund of basic research , grants rfbr 13 - 01 - 00784 and rfbr 11 - 01 - 00219 , and the international laboratory of quantitative finance , nru hse , rf government grant , ag . 14.a12.31.0007 ( tb ) .this paper was also written partially during a visit ( tb ) to the hausdorff research institute for mathematics at the university of bonn in the framework of the trimester program stochastic dynamics in economics and finance " ( tb ) . the second author ( sl )acknowledges the support of a professional development assignment and a summer research fellowship from the university of northern iowa .belkina , t.a . and norshteyn , m.v . : structure of optimal investment strategy in a dynamic model for risks with diffusion disturbances , _ analysis and modeling of economic processes , the collection of articles , ed .v.z.belenky_ , * 9 * , moscow , cemi ras ( 2012 ) ( in russian ) ; e - print : www.cemi.rssi.ru/publication/books laubis , l. and lin , j.e . :optimal investment allocation in a jump diffusion risk model with investment : a numerical analysis of several examples , _ proceedings ( electronic ) of 43rd actuarial research conference _ , ( 2008 ) note that the function is a solution to equation , which is a nonlinear ode copied below ( in terms of ) : \phi ' ( x ) = & -\frac{\sigma ^2}{m } \phi^3(x)-2\left[r-\lambda+ \frac{c_\rho}{m } -\frac{{(\mu - r)^2 } } { 2\sigma ^2 } + \frac{r}{m}x \right ] \frac{\sigma ^2}{\mu - r}\phi^2(x)\\ & + 2\left(c_{\rho } + rx + \frac{1}{2m}\sigma _\rho^2\right)\phi(x ) - \sigma_\rho^2 \frac{{\mu - r}}{{\sigma ^2 } } .\end{split}\ ] ] we see that the equation is asymptotically autonomous by a change of variables and letting .this autonomous equation has two finite stationary points .one of these is a stable point equal to and the other is an unstable point . 
as a result , a solution of must have a finite limit equal to the stable point or tend to ( + or - ) infinity as ( see , and ) .we characterize at first one finite limit solution to given by a series where the coefficients are given by ( using ): suppose is another finite - limit - solution to .define .the function solves the ode ^\prime\\ = & -\frac{\sigma^2}{m}\left(b^3 + 3b^2w+3bw^2\right)-2(a+bx)(b^2 + 2bw)+2(c+rx)b , \end{split}\ ] ] where \frac{\sigma ^2}{\mu - r},\quad b= \frac{r\sigma ^2}{m(\mu - r ) } , \quad c = c_{\rho } + \frac{\sigma _ \rho^2}{2m}.\ ] ] further let us linearize the ode on with , . taking into account the principal linear terms of the expansion in powers of , we obtain here <0 ] and letting , we then have \frac{1}{\mu - r } -2x\frac{\frac{r\sigma^2}{m(\mu - r)}\tilde{a}_v^2-r\tilde{a}_v}{\sigma ^2 \tilde{a}_v^2(x)+\sigma _ \rho^2}\right\ } = -\infty,\ ] ] wherefrom it holds , which contradicts to the assumption ! for the case , if we assume , then similarly from we have which implies , leading to contradiction !so we conclude that is a finite - limit solution of which has the form of .we then obtain .
|
we study an optimal investment control problem for an insurance company. the surplus process follows a cramer-lundberg process perturbed by a brownian motion. the company can invest its surplus in a risk-free asset and a black-scholes risky asset, and the objective is to minimize the probability of ruin. two cases of investment control are considered: unconstrained investment, and investment with the amount limited to a fixed level. using newly defined operators, we show that the minimal ruin probability function is a classical solution to the corresponding hjb equation. asymptotic behaviour of the optimal investment policy and of the minimal ruin probability function is studied for low surplus levels under a general claim size distribution, and some new asymptotic results for large surplus levels are obtained in the case of exponential claim distributions.
|
theory and computation about complex networks such as the bacterial colonies , interacting ecological species , and the spreading of computer virus over the internet are becoming very promising and they may have important applications in a wide range of areas .the proper modelling of these networks is a challenging task and the studies in this area are still at very early stage .however , various techniques and applications have been investigated , especially in the area of computational logic , the internet network , and application of bio - inspired algorithms [ 1 - 7 ] . since the pioneer work of watts and strogatz on small - world networks , a lot of interesting studies on the theory and application of small - world networks [ 7 - 9,12 - 18 ] have been initiated .more recently , the automata networks have been developed by tomassini and his colleagues to study the automata network in noise environment .their study shows that small - world automata networks are less affected by random noise . the properties of complex networks such as population interactions , internet servers , forest fires , ecological species and financial transactions are mainly determined by the way of connections between the vertices or occupied sites .network modelling and formulations are essentially discrete in the sense that they deal with discrete interactions among discrete nodes of networks because almost all the formulations are in terms of the degrees of clustering , connectivity , average nodal distance and other countable degrees of freedom .therefore , they do not work well for interactions over continuous networks and media . in the later case , modelling and computationsare usually carried out in terms partial differential equations ( pdes ) , however , almost all pdes ( except those with integral boundary conditions ) are local equations because the derivatives and the dependent variable are all evaluated at concurrent locations .for example , the 3-d reaction - diffusion equation describes the variation of such as temperature and concentration with spatial coordinates and time . while the diffusion coefficient can be constant , but the reaction rate may depends on and location as well as time .this equation is local because , , and are all evaluated at the same point at any given time . nowif we introduce some long - distance shortcuts ( e.g. , a computer virus can spread from one computer to another computer over a long - distance , not necessary local computers ) , then the reaction rate can have a nonlocal influence in a similar manner .we can now modify the above equation as where depends on the local point and another point far away .obviously , can be any function form . as a simple example in the 1-d case : +\beta u(x - s , t) ] is a constant .this equation is nonlocal since the reaction rate depends the values of at both and at .this simple extension makes it difficult to find analytical solutions .even numerical solutions are not easy to find because the standard numerical methods do not necessarily converge due to the extra nonlocal term .this paper will investigate this aspect in detail using unconventional solution methods such as small - world cellular automata networks .the present work aims to develop a new type of small - world cellular automata by combining local updating rules with a probability of long - range shortcuts to simulate the interactions and behaviour of a complex system . 
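the difficulty created by the nonlocal term can be made concrete with a few lines of code. the sketch below integrates a one-dimensional equation of the assumed form u_t = D u_xx + alpha u(x,t) + beta u(x-s,t) with periodic boundaries by a plain explicit finite-difference scheme; the coefficient values, the shift s and the random initial condition are illustrative assumptions, and the scheme is exactly the kind of standard method whose convergence, as noted above, is no longer guaranteed in general once the shifted term couples distant grid points.

```python
# explicit finite differences for an assumed nonlocal 1-d reaction-diffusion
# equation u_t = D*u_xx + alpha*u(x,t) + beta*u(x-s,t), periodic boundaries.
import numpy as np

def integrate(nx=200, L=1.0, D=1e-3, alpha=1.0, beta=0.5, s=0.25,
              dt=1e-4, n_steps=5000, seed=0):
    dx = L / nx
    shift = int(round(s / dx))            # grid offset realizing u(x - s, t)
    rng = np.random.default_rng(seed)
    u = 0.01 * rng.standard_normal(nx)    # small random initial state
    for _ in range(n_steps):
        lap = (np.roll(u, 1) - 2.0 * u + np.roll(u, -1)) / dx**2
        u = u + dt * (D * lap + alpha * u + beta * np.roll(u, shift))
    return u

u = integrate()
print("min/max after integration:", float(u.min()), float(u.max()))
```

even in this linear toy case the shifted term (implemented with np.roll) couples each grid point to another one a fixed distance away, which is precisely the feature that breaks the locality assumptions behind the usual stability and convergence arguments.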
by using a small fraction of sparse long - range shortcut interactions together with the local interactions, we can simulate the evolution of complex networks .self - organized criticality will be tested based on the results from the cellular automata .the important implications in the modelling and applications will also be discussed .small - world networks are a special class of networks with a high degree of local clustering as well as a small average distance , and this small - world phenomenon can be achieved by adding randomly only a small fraction of the long - range connections , and some common networks such as power grids , financial networks and neural networks behave like small - world networks .the application of small - world networks into the modelling of infection occurring locally and at a distance was first carried out by boots and sasaki with some interesting results .the dynamic features such as spreading and response of an influence over a network have also been investigated in recent studies by using shortest paths in system with sparse long - range connections in the frame work of small - world models .the influence propagates from the infected site to all uninfected sites connected to it via a link at each time step , whenever a long - range connection or shortcut is met ; the influence is newly activated at the other end of the shortcut so as to simulate long - range sparkling effect .these phenomena have successfully been studied by newman and watts model and moukarzel .their models are linear in the sense that the governing equation is linear and the response is immediate as there is no time delay in their models .more recently , one of the most interesting studies has been carried out by de arcaneglis and herrmann using the classic height model on a lattice , which implied the self - organized criticality in the small - world system concerned . 
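the combination of high local clustering and short average path length can be checked directly on a rewired ring lattice. the sketch below uses the watts-strogatz construction available in networkx; the graph size, the degree and the set of rewiring probabilities are illustrative choices.

```python
# clustering and average path length of watts-strogatz graphs as the
# rewiring probability grows (illustrative sizes).
import networkx as nx

n, k = 1000, 10                      # nodes and nearest-neighbour degree
for p in (0.0, 0.001, 0.01, 0.1, 1.0):
    G = nx.connected_watts_strogatz_graph(n, k, p, tries=200, seed=1)
    C = nx.average_clustering(G)
    ell = nx.average_shortest_path_length(G)
    print(f"p={p:<6} clustering={C:.3f} avg path length={ell:.2f}")
```

already at a rewiring probability of the order of one percent the average path length collapses towards its random-graph value while the clustering stays close to that of the regular lattice; this is the small-world regime exploited in the rest of the paper.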
on the other hand ,cellular automata have been used to simulate many processes such as lattice gas , fluid flow , reaction - diffusion and complex systems in terms of interaction rules rather than the conventional partial differential equations .compared to the equation - based models , simulations in term of cellular automata are more stable due to their finite states and local interacting rules .in fact , in most cases , the pde models are equivalent to rule - based cellular automata if the local rules can be derived from the corresponding pde models , and thus both pde models and ca rules can simulate the same process .however , we will show that cellular automata networks are a better approach for solving nonlocal equation - based models .the rest of the present paper will focus on : 1 ) to formulate a cellular automaton network on a 2-d lattice grid with sparse long - range shortcuts ; 2 ) to simulate the transition and complexity concerning small - world nonlocal interactions ; 3 ) to test the self - organized criticality of the constructed network systems ; 4 ) to find the characteristics of any possible transition .earlier studies on cellular automata use local rules updating the state of each cell and the influence is local .that is to say , the state at the next time step is determined by the states of the present cell concerned and those of its immediate surrounding neighbour cells .even the simple rules can produce complex patterns .the rule and its locality determine the characteristics of the cellular automata .in fact , we do not have to restrict that the rules must be local , and in general the influence can be either local or nonlocal .thus , we can assume the rules of cellular automata can be either local or nonlocal or even global .the state of a cell can be determined by cells consisting of immediate neighbour cells and other cells at longer distance . in the case of local rules only , and . if , then the rules are nonlocal. if is the same order of the total cells of the cellular automaton , then rules are global .nonlocal interactions rule for lattice - gas system was first developed by appert and zaleski in the discussion of a new momentum - conserving lattice - gas model allowing the particles exchange momentum between distant sites .some properties of local and nonlocal site exchange deterministic cellular automata were investigated by researchers .as the nonlocal rules are different from the local rules , it is naturally expected that the nonlocal rules may lead to different behavior from conventional local rule - based cellular automata .furthermore , self - organized criticality has been found in many systems in nature pioneered by bak and his colleagues .one can expect that there may be cases when self - organized criticality , cellular automata , and small - world phenomena can occur at the same time . more specifically ,if a finite - state cellular automaton with a small - fraction of long - range shortcuts is formulated , a natural question is : do the self - organized criticality exist in the small - world cellular automaton ?is there any transition in the system ?a cellular automaton is a finite - state machine defined on a regular lattice in -dimensional case , and the state of a cell is determined by the current state of the cell and the states of its neighbour cells .for simplicity , we use 2-d in our discussions .a state of a cell at time step can be written in terms of the previous states where summation is over the moore neighbourhood cells . 
in the -dimensional case ,there are moore neighbourhood cells . is the size of the 2-d automaton , and are the coefficients . for the simplest and well - known 2-d conway s game of life for 8 neighbour cells ( ) .now let us introduce some nonlocal influence from some sparse long - range cells ( see fig .1 ) by combining small - world long - range shortcuts and conventional cellular automata to form a new type of cellular automata networks . for simplicity, we define a small - world cellular automaton network as a local cellular automaton with an additional fraction or probability of sparse long - range nonlocal shortcuts ( see fig .1 ) . for immediate neumann neighbours and nonlocal cells ,the updating rule for a cell becomes where is a control parameter that can turn the long - range cells on ( ) or off ( ) depending on the probability .the probability is the fractions of long - range shortcuts in the total of every possible combinations . for cells ,there are possible connections .the simplest form of can be written as where is a heaviside function . is a critical probability , and can be taken as to be zero in most simulations in this paper .the updating rules are additive and thus form a subclass of special rules .we can extend the above updating rules to a generalized form , but we are only interested in additive rules here because they may have interesting properties and can easily be transformed to differential equations . in addition , the neighourhood can be either extended moore neighourhood or neumann neighbourhood . for moore neighbourhood , and .our numerical experiments seem to indicate that moore neighbourhood is more sensitive for avalanche and neumann neighbourhood is more stable for pattern formation .a simple case for a small - world cellular automaton in 2-d case is and ( or any ) so that it has 5 immediate neumann neighbour cells and 2 shortcuts .the distance between the nonlocal cells to cell can be defined as the nonlocality requires the nonlocal influence can also be introduced in other ways .alternatively , we can use the conventional local rule - based cellular automaton and adding the long - distance shortcuts between some cells in a random manner .the probability of the long - range shortcuts in all the possible connections is usually very small . under certain conditions ,these two formulations are equivalent .more generally , a finite - state cellular automaton with a transition rule \;\;(i , j=1,2, ... ,n) ] at time level to a new state \;\;(i , j=1,2, ... ,n)$ ] at time level can be written as where takes the same form as equation ( [ equ - rule ] ) for small - world cellular automata .of long - range shortcuts , width=336,height=192 ] the state of each cell can be taken to be discrete or continuous . from simplicity, we use -valued discrete system and for most of the simulations in the rest of the paper , we use ( thus , each cell can only be or ) for self - criticality testing , and for pattern formation. 
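one concrete reading of the updating rule above is sketched below: a two-state automaton on an n x n torus whose local part is the game-of-life counting rule over the moore neighbourhood (the rule used in the avalanche experiments described in the next section), supplemented by a sparse random set of long-range links whose far ends simply copy the state of their near ends after each local update. the lattice size, the shortcut probability and this particular way of enforcing a common state along a shortcut are illustrative assumptions rather than the exact construction used for the figures.

```python
# a two-state small-world cellular automaton on a torus: conway-style local
# counts over the moore neighbourhood plus sparse random long-range links.
import numpy as np

rng = np.random.default_rng(0)

def make_shortcuts(n, p):
    """Roughly p*n*n random ordered pairs of distinct cells (flat indices)."""
    n_links = rng.binomial(n * n, p)
    a = rng.integers(0, n * n, n_links)
    b = rng.integers(0, n * n, n_links)
    keep = a != b
    return a[keep], b[keep]

def step(state, shortcuts):
    """One update: local game-of-life rule, then copy states along shortcuts."""
    nbrs = sum(np.roll(np.roll(state, dy, 0), dx, 1)
               for dy in (-1, 0, 1) for dx in (-1, 0, 1) if (dy, dx) != (0, 0))
    new = np.where(state == 1, (nbrs == 2) | (nbrs == 3), nbrs == 3).astype(np.int8)
    flat = new.ravel()
    a, b = shortcuts
    flat[b] = flat[a]          # far end of each link copies its near end
    return flat.reshape(state.shape)

n, p = 64, 0.01
state = rng.integers(0, 2, (n, n)).astype(np.int8)
links = make_shortcuts(n, p)
for _ in range(100):
    state = step(state, links)
print("live cells after 100 steps:", int(state.sum()))
```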
other numbers of states can be used to meet the need of higher accuracies .by using the small - world cellular automaton formulated in the previous section , a large number of computer simulations have been carried out in order to find the statistic characteristics of the complex patterns and behaviour arising from cellular automata networks with different probabilities of long - range shortcuts .numerical simulations are carried out on an lattice in 2-d setting , and usually , , or up to .different simulations with different lattice size are compared to ensure the simulated results are independent of the lattice size and time steps . in the rest of the paper , we present some results concerning the features of transition and self - organized criticality of small - world cellular automata . for a lattice size of with a fixed , a single cell is randomly selected and perturbed by flipping its state in order to simulate an event of avalanche in 2-d automata networks with the standard moore neighbourhood and game - of - life updating rules , but a probability is used to add long - range shortcuts to the cellular automaton .a shortcut forces the two connecting cells having the same state .figure 2 shows the avalanche size distribution for two different values of and , respectively .the avalanche size is defined as the number of cells affected by any single flipping perturbation . in the double logarithmic plot ,the data follows two straight lines .it is clearly seen that there exists a power law in the distribution , and the gradient of the straight line is the exponent of the power - law distribution .a least - square fitting of , leads to the exponents of for and for .although a power - law distribution does not necessarily mean the self - organized criticality .self - organized criticality has been observed in other systems [ 1 - 4,26 ] .the pattern formed in the system is quasi - stable and a little perturbation to the equilibrium state usually causes avalanche - like readjustment in the system imply the self - organized criticality in the evolution of complex patterns of the cellular automaton .this is the first time of its kind by using computer simulations to demonstrate the feature of self - organized criticality on a _ cellular automaton network_. we can also see in figure 2 that different probabilities will lead to different values of exponents .the higher the probability , the steeper the slope . for a fixed grid of cells, we can vary the probability to see what can happen . for a single event of flipping state ,the fraction of population affected is plotted versus in a semi - logarithmic plot as shown in figure 3 where the fraction of population is defined as the number ( ) of cells affected among the whole population , that is .the sharp increase of the fraction versus the probability indicates a transition in the properties of cellular automata networks . for a very small probability , the influence of the event mainly behaves in a similar way as the conventional local cellular automata . as the probability increases ,a transition occurs at about . for , any event will affect the whole population .this feature of transition is consistent with the typical small - world networks .of long - range shortcuts.,width=288,height=240 ] comparing with the local rule - based cellular automata , the transition in small - world cellular automata is an interesting feature . 
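the avalanche measurement and the power-law fit described above can be reproduced schematically as follows: the automaton is relaxed to a quasi-stable pattern, one cell of a copy is flipped, both copies are evolved with the same deterministic rule for a fixed number of steps, and the number of cells on which they differ is recorded as the avalanche size; a straight line is then fitted to the histogram in log-log coordinates. the relaxation time, the horizon, the lattice size and the single-flip protocol are assumptions made for illustration, so the fitted exponent should not be read as a reproduction of the values quoted above. the step function repeats the one from the previous sketch so that this block runs on its own.

```python
# damage-spreading estimate of avalanche sizes in the shortcut automaton and
# a least-squares slope fit of the size histogram in log-log coordinates.
# the protocol and all parameters are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(1)

def step(state, links):
    nbrs = sum(np.roll(np.roll(state, dy, 0), dx, 1)
               for dy in (-1, 0, 1) for dx in (-1, 0, 1) if (dy, dx) != (0, 0))
    new = np.where(state == 1, (nbrs == 2) | (nbrs == 3), nbrs == 3).astype(np.int8)
    flat = new.ravel()
    a, b = links
    flat[b] = flat[a]
    return flat.reshape(state.shape)

def avalanche_sizes(n=64, p=0.01, n_events=400, relax=200, horizon=50):
    n_links = max(1, int(p * n * n))
    links = (rng.integers(0, n * n, n_links), rng.integers(0, n * n, n_links))
    base = rng.integers(0, 2, (n, n)).astype(np.int8)
    for _ in range(relax):                    # settle into a quasi-stable pattern
        base = step(base, links)
    sizes = []
    for _ in range(n_events):
        pert = base.copy()
        i, j = rng.integers(0, n, 2)
        pert[i, j] ^= 1                       # flip a single cell
        a, b = base.copy(), pert
        for _ in range(horizon):              # evolve both copies identically
            a, b = step(a, links), step(b, links)
        sizes.append(int((a != b).sum()))
    return np.array([s for s in sizes if s > 0])

sizes = avalanche_sizes()
assert sizes.size > 10, "too few spreading events; adjust the parameters"
edges = np.logspace(0, np.log10(sizes.max() + 1), 15)
counts, edges = np.histogram(sizes, bins=edges)
centers = np.sqrt(edges[:-1] * edges[1:])
mask = counts > 0
slope, _ = np.polyfit(np.log10(centers[mask]), np.log10(counts[mask]), 1)
print("events:", sizes.size, "fitted log-log slope:", round(float(slope), 2))
```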
without such shortcuts , there was no transition observed in the simulations .however , self - organized criticality was still observed in finite - state cellular automata without transition . in the present case , both self - organized criticality and transition emerge naturally .thus , the transition in cellular automata networks suggests that this transition may be the result of nonlocal interactions by long - range shortcuts .this feature of transition may have important implications when applied to the modelling of real - world phenomena such as the internet and social networks . for a system with few or no long - range interactions, there is no noticeable change in its behavior in transition .however , as the long - range shortcuts or interacting components increase a little bit more , say to , then a transition may occur and thus any event can affect a large fraction of the whole population .for example , to increase the speed of finding information on the internet , a small fraction of long - range shortcuts in terms of website portals and search engine ( e.g. , google ) and high - capacity / bandwidth connections could significantly increase the performance of the system concerned .in addition , the self - organized criticality can also imply some interesting properties of the internet and other small - world networks , and these could serve as some topics for further research .the evolution of a system can usually be described by two major ways : rule - based systems and equation - based systems .the rule - based systems are typically discrete and use local rules such as cellular automata or finite difference system . as discussed bymany researchers , the finite difference systems are equivalent to cellular automata if the updating rules for the cellular automata are derived directly from their equation - based counterpart . on the other hand ,the equation - based systems are typically continuous and they are often written as partial differential equations .sometimes , the same system can described using these two different ways . however , there is no universal relationship between a rule - based system and an equation - based system .given differential equation , it is possible to construct a rule - based cellular automaton by discretizing the differential equations , but it is far more complicated to formulate a system of partial differential equations for a given cellular automaton .for example , the following 2-d partial differential equation for nonlinear pattern formation for can always be written as an equivalent cellular automaton if the local rules of cellular automaton are obtained from the pde .conversely , a local cellular automaton can lead to a local system of partial differential equation ( pde ) , if the construction is possible .a local pde can generally be written as a nonlocal pde can be written as where is the the averaged distance of long - range shortcuts and is the probability of nonlocal long - range shortcuts . 
in order to show what a nonlocal equation means , we modify the above equation for pattern formation as .\ ] ] this nonlocal equation is far more complicated than equation ( 8) .for the proposed cellular automaton networks , a system of nonlocal partial differential equations will be derived , though the explicit form of a generic form is very difficult to obtain and this requires further research .grid at ; b ) 2-d pattern formation and distribution displayed at ., title="fig:",width=288,height=192 ] grid at ; b ) 2-d pattern formation and distribution displayed at . ,title="fig:",width=288,height=192 ] even for simple nonlinear partial differential equations , complex pattern formation can arise naturally from initially random states .for example , the following nonlinear partial differential equation for pattern formation can be discretized using central finite difference scheme in space with .then , it is equivalent to where , .it is a cellular automaton with the standard neumann neighbourhood for this pde . the formed patterns and their distribution resulting from the system on a grid are shown in fig . 4 where , , and .we can see that stable patterns can be formed from initial random states .grid as the initial condition at ; b ) 2-d pattern distribution with a short - cut probability and ( or ) displayed at ; c ) 2-d pattern with a short - cut probability and ( or ) at . , title="fig:",width=144,height=144 ] grid as the initial condition at ; b ) 2-d pattern distribution with a short - cut probability and ( or ) displayed at ; c ) 2-d pattern with a short - cut probability and ( or ) at ., title="fig:",width=144,height=144 ] grid as the initial condition at ; b ) 2-d pattern distribution with a short - cut probability and ( or ) displayed at ; c ) 2-d pattern with a short - cut probability and ( or ) at ., title="fig:",width=144,height=144 ] \(a ) ( b ) ( c ) the formed patterns using the neumann neighbourhood ( ) are very stable and are almost independent of the initial conditions .in fact , the initial state does not matter and the only requirement for the initial state is some degree of randomness .if we run the same program using a photograph or the uc2007 conference logo , similar patterns can also form naturally as shown in figure 5 . in our simulations ,we have used and .other parameters and can vary .for the case shown in fig .5b , and , while and are used in fig .this means that the initial state does not affect the characteristics of pattern formation and this is consistent with the stability analysis .small - world cellular automata networks have been formulated to simulate the interactions and behaviour of multi - agent systems and small - world complex networks with long - range shortcuts .simulations show that power - law distribution emerges for a fixed probability of long - range shortcuts , which implies self - organized criticality in the avalanche and evolving complex patterns . for a given size of cellular grid, the increase of the probability of long - range shortcuts leads to a transition , and in this case , a single even can affect a large fraction of the whole population . 
in this sense ,the characteristics of small - world cellular automota are very different from the conventional locally interacting cellular automata .the nonlocal rule - based network systems in terms of cellular automata can have other complicated features such as its classifications compared with the conventional automata and its relationship its partial differential equations .in addition , cellular automota networks could provide a new avenue for efficient unconventional computing for simulating complex systems with many open questions such as the relationship between cellular automata networks and nonlocal pdes , and the potential implication on the parallelism of these algorithms .these are open problems to be investigated in the future research .yang x. s. and young y. , cellular automata , pdes , and pattern formation , in : handbooks of bioinspired algorithms and applications ( eds . olarius s. and zomaya a. y.),chapman & hall /crc press , 271 - 282 ( 2005 ) .
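as a numerical companion to the pattern-formation comparison discussed above, the block below integrates a reaction-diffusion equation of the generic form u_t = D*laplacian(u) + gamma*u*(1 - u^2) on a periodic grid starting from small random noise; the cubic reaction term, the coefficients, the grid size and the time step are assumptions chosen to give stable domain patterns, since the exact equation and parameter values used for the figures are not reproduced here.

```python
# explicit finite-difference integration of an assumed reaction-diffusion
# model u_t = D*laplacian(u) + gamma*u*(1 - u^2) on a periodic grid.
import numpy as np

def evolve(n=200, D=0.2, gamma=1.0, dt=0.05, n_steps=4000, seed=3):
    rng = np.random.default_rng(seed)
    u = 0.05 * rng.standard_normal((n, n))      # small random initial state
    for _ in range(n_steps):
        lap = (np.roll(u, 1, 0) + np.roll(u, -1, 0) +
               np.roll(u, 1, 1) + np.roll(u, -1, 1) - 4.0 * u)   # dx = 1
        u = u + dt * (D * lap + gamma * u * (1.0 - u * u))
    return u

u = evolve()
print("pattern range:", float(u.min()), float(u.max()))
# visualize with matplotlib.pyplot.imshow(u) to see the emerging domains
```

the five-point laplacian used here corresponds to the neumann neighbourhood, and different random seeds give different but statistically similar patterns, consistent with the observation in the text that the initial state does not affect the character of the formed patterns.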
|
in this paper, a small-world cellular automaton network is formulated to simulate the long-range interactions of complex networks using unconventional computing methods. whereas conventional cellular automata use purely local updating rules, the new type of cellular automata network combines local rules with a fraction of long-range shortcuts derived from the properties of small-world networks. simulations show that self-organized criticality emerges naturally in the system for a given probability of shortcuts, and that a transition occurs as the probability increases to a critical value, indicating the small-world behaviour of the complex automata networks. pattern formation in cellular automata networks and a comparison with equation-based reaction-diffusion systems are also discussed. + *keywords:* automata networks, cellular automata, nonlocal pde, small-world networks, self-organized criticality. + *citation detail:* x. s. yang and y. z. l. yang, cellular automata networks, _proceedings of unconventional computing 2007_ (eds. a. adamatzky, l. bull, b. de lacy costello, s. stepney, c. teuscher), luniver press, pp. 280-302 (2007).
|
sparse channel estimation is an important topic found in many different applications ( see e.g and the references therein ) .in fact , in many real - world channels of practical interest , e.g. underwater acoustic channels , digital television channels , and residential ultrawideband channels , the associated impulse response tends to be sparse . to obtain an accurate channel impulse response is crucial since it is used in the decoding stage .sparsity helps one can obtain better channel estimates .in addition , the most common technique for promoting sparsity is by an regularization , commonly termed as lasso .however , sparsity can be promoted in different ways .for example , in , sparsity is promoted by generating a pool of possible models , and then performing model selection .a special characteristic of ofdm systems is its sensitivity to frequency synchronizations errors ( see e.g. ) , which is produced ( among other causes ) by carrier frequency offset ( cfo ) .this adds an extra difficulty to the channel estimation problem , since the cfo must be estimated as well as other channel parameters . to estimate the cfo , maximum likelihood ( ml )estimation has been successfully utilized ( see e.g. ) . in this work ,we combine the following problems : ( i ) estimation of a sparse channel impulse response ( cir ) in ofdm systems , ( ii ) estimation of cfo , ( iii ) estimation of the noise variance , ( iv ) estimation of the transmitted symbol , and ( v ) estimation of the ( hyper ) parameter defining the prior probability density function ( pdf ) of the sparse channel .the estimation problem is solved by utilizing a generalization of the em algorithm ( see e.g. and the references therein ) for map estimation , based on the of the cir . in particular , the same methodology has been applied in for the identificaton of a sparse finite impulse response filter with quantized data .our work generalizes previous work on joint cfo and cir estimation , see and the generalization .the problem of estimating a sparse channel and the transmitted symbol has been previously addresses in the literature . in , it is also considered bit interleaved coded modulation ( bicm ) in ofdm systems .the approach in corresponds to the utilization of the _ generalized approximate message passing _ ( gamp ) algorithm , which allows for solving the bicm problem .gamp corresponds to a generalization of the _ approximate message passing _ ( amp ) algorithm , although it does not allow for unknown parameters other than the channel .the amp and gamp algorithms are based on belief propagation . when the system is linear ( with respect to the channel response ) , gamp and amp are the same algorithm .in addition , for sparsity problems , the amp algorithm corresponds to an efficient implementation of the lasso estimator , see and the references therein .hence , under the same setup , the map - em algorithm we propose and the gamp algorithm utilized in yield the same results .we consider the following ofdm system model ( see e.g. 
and the references therein ) , depiected in fig .[ fig : signal ] : 0 cm the channel is modelled as a finite impulse response ( fir ) filter ^t \in \mathbb{c}^{l} ] , * x * is the transmitted signal ( after the inverse discrete fourier transform ) , is a permutation matrix that shuffles the transmitted symbol samples in any desired fashion , , and is the identity matrix of dimension .notice that the time - domain representation of the multicarrier signals in resembles a single - carrier system .however , the main difference corresponds to the utilization of the cyclic prefix , which yields a circulant channel matrix at the receiver after the cyclic prefix removal .[ fig : signal ] the transmitted signal is assumed to have a deterministic part ( comprising known training data ) and a stochastic part ( comprising the unknown data ) .thus , the transmitted signal corresponds , after the application of the idft , to the time domain multiplexing of a training sequence and data coming from the data terminal equipment .we also need to express the transmitted signal in terms of the known ( training ) component , , and the unknown component , .thus , the real representation of the transmitted signal is given by ^t \, \in \mathbb{r}^{2n_c } , \vspace{-1mm}\ ] ] where , , and represent the real part , imaginary part , training part , and unknown part , respectively . for estimation purposes , it is possible to express the model in as a real - valued state - space model with sample index : { \bar{\textbf{x}}}+ \bar{{\boldsymbol{\eta}}}_k = \bar{\textbf{m}}_k { \bar{\textbf{x}}}+ \bar{{\boldsymbol{\eta}}}_k , \label{eq : sss}\ ] ] where ^t ] , is the time sample index of the ofdm symbol , and denote the real and imaginary parts , respectively, and is the column of the identity matrix .this state - space representation is equivalent to , but it is more convenient for the identification approach used in this work .in addition , and as it will be shown in section [ section : q_ml ] , the estimation procedure is based upon expressions in the form of ] , amongst other quantities .the attainment of these two expectations can be achieved , for instance , by applying bayes rule for the _ posterior _ pdf for any given _ prior _ pdf .it is possible to extend the state - space model in by including a constant state vector that corresponds to the whole unknown transmitted signal ( see e.g ( * ? ? ?9 ) ) . that is ,notice that the subindex for in indicates that remains unchanged for every sample index .this extension allows for the utilization of _ filtering techniques _ for the attainment of ( and consequently and ) .we consider a general state - space model that can be utilized for proper and improper signals . in this sense, our approach can be applied to all common modulation schemes , such as binary phase shift keying ( bpsk ) and gaussian minimum shift keying ( gmsk ) , which are improper ( see e.g. ) . + regarding the received signal , the conditional pdf of ^t ] , and , \vspace{-2mm}\ ] ] with ] .then , we can write . 
replacing this equality in , and taking the derivative of with respect to , and , we obtain : - \mathbb{e } [ { \boldsymbol{{\mathcal{m}}}}^t{\boldsymbol{{\mathcal{m } } } } \vert \textbf{y } , \hat{\boldsymbol{\gamma}}^{(i ) } ] \textbf{\textscg } \right),\\ \frac{\partial { \mathcal{q}}_{\text{ml}}}{\partial \rho } & = \frac{2 n_c}{\rho } - 2\left ( \rho \textbf{y}^t \textbf{y } - \textbf{y}^t \mathbb{e}\left[({\boldsymbol{{\mathcal{m}}}})\vert \textbf{y } , \hat{\boldsymbol{\gamma}}^{(i)}\right ] \textbf{\textscg}\right ) .\label{eq : dq_dsigma2}\end{aligned}\ ] ] we express as a variance - mean gaussian mixture ( vmgm ) . when expressed in terms , a vmgm for the parameters is given by ( see e.g ) where . in this sense , the random variable ( ) can be considered as a _hidden variable _ in the em algorithm .hence , given , the auxiliary function , can be expressed as p(\lambda_j|\hat{{\text{\textscg}}}_j^{(i)})d(\lambda_j ) \vspace{-3mm}\ ] ] +\log [ p(\lambda_j ) ] \right ) p(\lambda_j|\hat{{\text{\textscg}}}_j^{(i)})d(\lambda_j),\vspace{-2 mm } \label{eq : q_prior1}\ ] ] since . in the case that is given by ,then its derivative is given by where is the expectation obtained from see . + from the m - step , at the iteration , an estimate ( ) of is obtained .this estimate is , then , inserted into , in order to obtain an estimate of , which in turn is utilized in the maximization of . once the new estimate has been obtained , it is inserted into and the iteration continues until convergence has been reached . in our particular case , we only want to promote sparsity in ( consequently in the cir ) .thus , our chosen penalty function is = -\tau \text{sign}(\hat{{\text{\textscg}}}^{(i)}_j)/ \hat{{\text{\textscg}}}^{(i)}_j ] , take derivative with respect to , and then set the result equal to zero . + in general , the computation of ] . ] in this section , we present a numerical example using our approach for an ofdm system with cfo .we assume that the unknown part of the time - domain transmitted signal is approximately gaussian distributed ( a consequence of the central limit theorem ) .thus , , where , ] , and ] and ] , in fig [ fig : ber ] , the average ber for ml estimation is 0.0195 , and for map estimation is 0.0132 .in this work , we have proposed an algorithm to estimate sparse channels in ofdm systems , the cfo , the variance of the noise , the symbol , and the parameter defining the a priori distribution of the sparse channel .this is achieved in the framework of map estimation , using the em algorithm .sparsity has been promoted by using an -norm regularization , in the form of a prior distribution for the cir .for that , the em algorithm has been modified to include this case .in addition , we have concentrated the cost function in the m - step to numerically optimize one single variable ( ) .the numerical examples illustrate the effectiveness of this approach for the partial training case , obtaining , in most cases studied , a lower value for nmse using regularization compared to the value for nmse using no regularization . for the full training case, there is no noticeable difference between the estimates obtained with ml and map .this confirms that prior knowledge is useful when the amount of data is limited .s. zhou , j.c .preisig , and p. willett sparse channel estimation for muticarrier underwater acoustic communication : from subspace methods to compressed sensing , _ ieee trans . signal proc . , _58(3 ) , pp . 17081721 , 2010 .r. mo , y. h. chew , t. t. tjhung , and c. 
c. ko , `` an em - based semiblind joint channel and frequency offset estimator for ofdm systems over frequency selective fading channels , '' _ ieee trans .57(5 ) , pp.32753282 , 2008 .b. godoy , j.c .agero , r. carvajal , g.c .goodwin , and j.i .yuz , `` identification of sparse fir systems using a general quantization scheme , '' _ international journal of control _ , accepted for publication .
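the l1-regularized (lasso) route to sparsity mentioned in the introduction can be illustrated independently of the map-em machinery developed above. in the sketch below a sparse impulse response is observed through a random pilot matrix in additive noise and recovered by iterative soft thresholding (ista); the dimensions, the noise level, the penalty weight and the use of a real-valued random matrix in place of an actual ofdm pilot structure are illustrative assumptions, and no cfo, symbol or noise-variance estimation is attempted here.

```python
# sparse channel estimation with an l1 penalty, solved by ista
# (iterative soft thresholding).  dimensions and weights are illustrative.
import numpy as np

rng = np.random.default_rng(0)

L_taps, N_obs, K = 64, 128, 5                 # channel length, observations, active taps
h = np.zeros(L_taps)
support = rng.choice(L_taps, K, replace=False)
h[support] = rng.choice([-1.0, 1.0], K) * (0.5 + rng.random(K))

A = rng.standard_normal((N_obs, L_taps)) / np.sqrt(N_obs)   # stand-in pilot matrix
y = A @ h + 0.01 * rng.standard_normal(N_obs)               # noisy observations

def ista(A, y, tau=0.02, n_iter=500):
    """Minimize 0.5*||y - A h||^2 + tau*||h||_1 by proximal gradient steps."""
    step = 1.0 / np.linalg.norm(A, 2) ** 2     # 1 / Lipschitz constant of the gradient
    h_hat = np.zeros(A.shape[1])
    for _ in range(n_iter):
        z = h_hat - step * (A.T @ (A @ h_hat - y))
        h_hat = np.sign(z) * np.maximum(np.abs(z) - step * tau, 0.0)  # soft threshold
    return h_hat

h_hat = ista(A, y)
print("true support     :", np.sort(support))
print("recovered support:", np.nonzero(np.abs(h_hat) > 1e-2)[0])
print("nmse:", float(np.sum((h - h_hat) ** 2) / np.sum(h ** 2)))
```

the familiar equivalence between an l1 penalty and a laplacian prior is what links this generic estimator to the map formulation of the channel estimation problem discussed above.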
|
|
option pricing has become an important field of theoretical and applied research both in probability theory and finance , focusing the attention of many mathematicians , financial economists and physicists .meanwhile , the rapid expansion of the options market has made option pricing and hedging an important issue for market practitioners .the interest for option pricing theory culminated in the attribution of the 1997 nobel prize in economics to myron scholes and robert merton for their pioneering work on this subject .while option pricing theory has traditionally focused on obtaining methods for pricing and hedging of derivative securities based on parameters of the underlying assets , recent approaches tend to consider the market prices of options as given and view them as a source of information on the market .while there are many excellent textbooks and monographs on the former approach , the latter has only been developed in the recent literature and is less well known .it is this approach on which we will focus here : we will try to show why market prices of options can be considered as a source of information , describe different theoretical tools and procedures for extracting their information content and show how this information can be interpreted in economic terms and used in applications .the following text is divided into four sections .section [ general ] is a general review of option pricing theory and introduces notations used throughout the text .section [ inverse ] discusses the informational content of option prices and defines the notions of implied volatility and state price density .section [ methods ] presents various methods which have been proposed to extract the information content of option prices .section [ results ] discusses how these results may be interpreted in economic terms and used in applications .section [ summary ] highlights the salient features of the results obtained in various empirical studies and the important points to keep in mind when interpreting and using them .a _ derivative security _ or _ contingent claim _ is a financial asset whose ( future ) payoff is defined to be a function of the future price(s ) of another ( or several other ) assets , called the _ underlying assets_. option pricing theory focuses on the problem of pricing and hedging derivative securities in a consistent way given a market in which the underlying assets are represented as stochastic processes .consider an investor participating in a stock market , where stock prices fluctuate according to a random process .one of the simplest types of derivative securities is a contract which entitles its bearer to buy , if she wishes , one share of stock at a specified date in the future for a price specified in advance .such a contract is called a _european call option _ on the stock , with exercise price or _ strike _ and maturity .the stock is said to be the _underlying asset_. let be the price of the underlying asset at time .if at the expiration date of the option the stock price is below the exercise price i.e. the holder will not exercise his option to buy : the option will then be worthless .if the stock price at expiration is above the exercise price i.e. then bearer can _ exercise _ the option i.e. 
use it to buy one share of stock at the strike price and sell it at the current price , making a profit of .a european call option is thus equivalent to a ticket entitling the bearer to a payment of at the expiration date of the option .the function is called the _ payoff _ of the option .a european call option has therefore a non - negative payoff in all cases : the stock price may rise or fall but in either case the bearer of the option will not lose money .an option may be viewed as an insurance against the rise of the stock price above a specified level which is precisely the exercise price . like any insurance contract, an option must therefore have a certain value .options are financial assets themselves and may be bought or sold in a market , like stocks . since 1975 , when the first options exchange floor was opened in chicago , options have been traded in organized markets . the question for the buyer or the seller of an option is then : what is the value of such a contract ? how much should an investor be willing to a pay for an option ?a related question is : once an option has been sold , what strategy should the seller ( underwriter ) of the option follow in order to minimize his / her risk of having to pay off a large sum in the case the option is exercised ? the first two questions are concerned with _ pricing _ while the last one is concerned with _the response to these questions has stimulated a vast literature , initiated by the seminal work of black and scholes , and has led to the development of a sophisticated theoretical framework known as _ option valuation theory_. there are a great variety of derivative securities with more complicated payoff structures .the payoff may depend in a complicated fashion not only on the final price of the underlying asset but also on its trajectory ( path - dependent options ) .the option may also have early exercise features ( american options ) or depend on the prices of more than one underlying asset ( spread options ) .we will consider here only the simplest type of option , namely the european call option defined above . in fact , contrarily to what is suggested by many popular textbooks , even the pricing and hedging of such a simple option is non - trivial under realistic assumptions for the price process of the underlying asset . a naive approach to the pricing of an option would be to state that the present value of an uncertain future cash flow is simple equal to the discounted expected value of the cash flow : where is the probability density function of the random variable representing the stock price at a future date .the exponential is a discounting factor taking into account the effect of a constant interest rate . 
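to make the discounted-expectation rule of eq. (1) concrete, the sketch below prices a european call by monte carlo under a lognormal (geometric brownian motion) model for the terminal price and compares the result with the standard black-scholes closed form (the formula referred to further down in this section); the parameter values are illustrative. running the expectation once with an assumed 'historical' drift mu and once with the risk-free rate r in its place illustrates the point made in the text: under geometric brownian motion the naive rule reproduces the arbitrage-free price only when the drift used in the expectation equals the risk-free rate.

```python
# discounted-expectation pricing of a european call under geometric brownian
# motion, by monte carlo, next to the black-scholes closed form.
# parameter values are illustrative.
import numpy as np
from math import log, sqrt, exp, erf

def norm_cdf(x):
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def black_scholes_call(S0, K, T, r, sigma):
    d1 = (log(S0 / K) + (r + 0.5 * sigma**2) * T) / (sigma * sqrt(T))
    d2 = d1 - sigma * sqrt(T)
    return S0 * norm_cdf(d1) - K * exp(-r * T) * norm_cdf(d2)

def mc_call(S0, K, T, r, sigma, drift, n=400_000, seed=0):
    """exp(-r T) * E[(S_T - K)^+] with S_T lognormal under the given drift."""
    z = np.random.default_rng(seed).standard_normal(n)
    ST = S0 * np.exp((drift - 0.5 * sigma**2) * T + sigma * np.sqrt(T) * z)
    return exp(-r * T) * float(np.maximum(ST - K, 0.0).mean())

S0, K, T, r, sigma, mu = 100.0, 105.0, 0.5, 0.03, 0.25, 0.10
print("black-scholes                :", black_scholes_call(S0, K, T, r, sigma))
print("mc with drift r              :", mc_call(S0, K, T, r, sigma, drift=r))
print("mc with 'historical' drift mu:", mc_call(S0, K, T, r, sigma, drift=mu))
```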
under some stationarity hypothesis on the increments of the price process, the density may be obtained by an appropriate statistical analysis of the historical evolution of prices .for this reason we will allude to it as the _ historical _ density .we will refer to such a pricing rule as expectation pricing " .however , nothing guarantees that such a pricing rule is _ consistent _ in the sense that one can not find a riskless strategy for making a profit by trading at these prices .such a strategy is called an arbitrage opportunity .the consistency of prices requires that if two dynamics trading strategies have the same final payoff ( with probability one ) then they must have the same initial cost otherwise this will create an arbitrage opportunity for any investor aware of this inconsistency .this is precisely the cornerstone of the mathematical approach to option pricing , which postulates that in a liquid market there should be no arbitrage opportunities : the market is efficient enough to make price inconsistencies disappear almost as soon as they appear .the first example of this approach was given by black & scholes who remarked that when the price of the underlying asset is described by a geometric brownian motion process : where is a brownian motion ( wiener ) process , then the expectation pricing rule gives inconsistent prices : pricing european call options according to eq .( 1 ) can create arbitrage opportunities .furthermore they showed that requiring the absence of arbitrage opportunities is sufficient to define a _ unique _price for a european call option , independently of the preferences of market agents .this price is given by the black - scholes formula : where is the cumulative distribution function of a standard gaussian random variable : however the method initially used by black & scholes and merton relies in an essential way on the hypothesis that the underlying asset follows geometric brownian motion ( eq.[bs1 ] ) , which does not adequately describe the real dynamics of asset prices .the methodology of black & scholes was subsequently generalized to diffusion processes defined as solutions of stochastic differential equations where is gaussian white noise ( increment of a wiener process ) and deterministic functions of the price . a good introduction to arbitrage pricing techniquesis given in . even though naive, the representation eq.[expec ] of the price of an option as its expected future payoff is appealing to economic intuition : the present value of an uncertain cash flow should be somehow related to its expected value .harrison & kreps have shown that that even in the arbitrage pricing framework it is still possible express prices of contingent claims as expectations of their payoff , but at a certain price ( ! ) : these expectations are no longer calculated with the density of the underlying asset but with another density , different from . more precisely , harrison & pliska show that in a market where asset prices are described by stochastic processes verifying certain regularity conditions , the absence of arbitrage opportunities is equivalent to the existence of a probability measure equivalent and are said to be equivalent if for any event , iff i.e. if they define the same set of impossible events . in the case of a single asset considered herethis is a rather mild restriction . 
] to , called an _ equivalent martingale measure _, such that all ( discounted ) asset prices are -martingales : that is , if one denotes by the conditional density of the stock price at maturity under the measure given the past history up to time , then the price of any derivative asset with payoff verifies which does not give any additional information here . ] in particular the ( discounted ) stock price itself is a -martingale : this does not imply that real asset prices are martingales or even driftless processes : in fact there is a positive drift in most asset prices and also some degree of predictability .eq.([martingale ] ) should be considered as a property defining and not as a property of the price process whose probabilistic properties are related to the historical density .the density is merely a mathematical intermediary expressing the relation between the prices of different options with the same maturity .it should not be confused with the historical density .the martingale property ( eq . [ martingale ] ) then implies that the price of any european option can be calculated as the expectation of its payoff under the probability measure .in particular then the price of any call option is therefore given by : under the assumption of stationarity , will only depend on but this assumption does not necessarily hold in real markets . the density has been given several names in the literature : risk - neutral probability " , state price deflator " , state price density , equivalent martingale measure . while these different notions coincide in the case of the black - scholes model , they correspond to different objects in the general case of an incomplete market ( see below ) .the term risk - neutral density " refers precisely to the case where , as in the black - scholes model , all contingent payoffs can not be replicated by a self - financing portfolio strategy .this is not true in general , neither theoretically nor empirically so we will refrain from using the term risk - neutral " density .the term martingale measure " refers to the property that asset prices are expected to be -martingales : again , this property does not define uniquely in the case of an incomplete market .we will use the term state price density to refer to a density such that the market prices of options can be expressed by eq.([spd ] ) : the state price density should not be viewed as a mathematical property of the underlying assets stochastic process but as a way of characterizing the prices of _ options _ on this asset .from the point of view of economic theory , one can consider the formalism introduced by harrison & pliska as an extension of arrow - debreu theory to a continuous time / continuous state space framework .the state price density is thus the continuum equivalent of the arrow - debreu state prices .however , while the emphasis of arrow - debreu theory is on the notion of value , the emphasis of is on the notions of dynamic hedging and arbitrage , which are important concerns for market operators .the situation can thus be summarized as follows . 
in the framework of an arbitrage - free market , each assetis characterized by two different probability densities : the historical density which describes the random variations of the asset price between and and the state price density which is used for pricing options on the underlying asset .these two densities are different a priori and , except in very special cases such as the black - scholes model arbitrage arguments do not enable us to calculate one of them given the other .the main results of the arbitrage approach are existence theorems which state that the absence of arbitrage opportunities leads to the existence of a density such that all option prices are expectation of their payoffs with respect to but do not say anything about the uniqueness of such a measure .indeed , except in very special cases like the black - scholes or the binomial tree model where is determined uniquely by arbitrage conditions there are in general infinitely many densities which satisfy no - arbitrage requirements . in this casethe market is said to be _incomplete_. one could argue however that market prices are not unique either : there are always two prices- a bid price and an ask price- quoted for each option .this has led to theoretical efforts to express the bid and ask prices as the supremum/ infimum of arbitrage - free prices , the supremum / infimum being taken either over all martingale measures or over a set of dominating strategies .elegant as they may seem , these approaches give disappointing results .for example eberlein & jacod have shown that in the case of a purely discontinuous price process taking the supremum / infimum over all martingale measures leads to trivial bounds on the option prices which give no information whatsoever : for a derivative asset with payoff , arbitrage constraints impose that the price should lie in the interval ] .the lower bound is the price of a futures contract of exercise price : arbitrage arguments simply tell us that the price of an option lies between the price of the underlying asset and the price of a futures contract , a result which can be retrieved by elementary arguments .more importantly , the price interval predicted by such an approach is way too large compared to real bid - ask spreads .these results show that arbitrage constraints alone are not sufficient for determining the price of a simple option such as a european call as soon as the underlying stochastic process has a more complex behavior than ( geometric ) brownian motion , which is the case for real asset prices .one therefore needs to use constraints other than those imposed by arbitrage in order to determine the market price of the option .one can represent the situation as if the market had chosen among all the possible arbitrage - free pricing systems a particular one which could be represented by a particular martingale measure , the _ market measure_. the situation may be compared to that encountered in the ergodic theory of dynamical systems . 
for a given dynamical system there may be several invariant measures .however , a given trajectory of the dynamical system will reach a stationary state described by a probability measure called the physical measure " of the system .the procedure by which the physical measure is selected among all possible invariant measures involves other physical mechanisms is not described by the probabilistic formulation .the first approach is to choose , among all state price densities , one which verifies a certain optimization criterion .the price of the option is then determined by eq .[ spd ] using the spd thus chosen .the optimization criterion can either correspond to the minimization of hedging risk or to a certain trade - off between the cost and accuracy of hedging .fllmer & schweizer propose to choose among all martingales measures the one which is the closest to the historical probability in terms of relative entropy ( see below ) . in any casethe minimization of the criterion over all martingale densities leads to the selection of a unique density which is then assumed to be the state price density .another approach to option pricing in incomplete markets , proposed by el karoui _ et al ._ , is based on dynamic optimization techniques : it leads to lower and upper bounds on the price of options . a different approach proposed by bouchaud _is to abandon arbitrage arguments and define the price of the option as the cost of the best hedging strategy i.e. the hedging strategy which minimizes hedging risk in a quadratic sense .this approach , which is further developed in is not based on arbitrage pricing and although the prices obtained coincide with the arbitrage - free ones in the case where arbitrage arguments define a unique price , they may not be arbitrage - free a priori in the mathematical sense of the term .in particular they are not necessarily the same as the ones obtained by the quadratic risk minimization approaches of fllmer & schweizer and schl .the options market has drastically changed since black & scholes published their famous article in 1973 ; today , many options are liquid assets and their price is determined by the interplay between market supply and demand . pricing " such options may therefore not be the priority of market operators since their market price is an observation and not a quantity to be fixed by a mathematical approach .this has led in the recent years to the emergence of a new direction in research : what can the observed market prices of options tell us about the statistical properties of the underlying asset ? or , in the terms defined above : what can one infer for the densities and from the observation of market prices of options ? in the black - scholes lognormal model , all option prices are described by a single parameter : the volatility of the underlying asset .therefore the knowledge of either the price or the volatility enables to calculate the other parameter . in practice ,the volatility is not an observable variable whereas the market price of the option is ; one can therefore invert the black - scholes formula to determine the value of the volatility parameter which would give a black - scholes price corresponding to the observed market price : this value is called the ( black - scholes ) _ implied volatility _ . 
can be obtained through a numerical resolution of the above equation .actually this is how the black - scholes formula is used by options traders : not so much as a pricing tool but as a means for switching back and forth between market prices of options and their associated implied volatilities .the implied volatility is the simplest example of a statistical parameter implicit in option prices .note that the implied volatility is not necessarily equal to the variance of the underlying asset s return : it is extracted from option prices and not from historical data from the underlying asset . in generalthe two values are different .it has been conjectured that the implied volatility is a good predictor of the future volatility of the underlying asset but the results highly depend on the type of data and averaging period used to calculate the volatility .[ fig1 ] in a black - scholes universe , the implied volatility would in fact be a constant equal to the true volatility of the underlying asset .the non - dependence of implied volatility on the strike price can be viewed as a specification test for the black - scholes model . however , empirical studies of implied volatilities show a systematic dependence of implied volatilities on the exercise price and on maturity . in many casesthe implied volatility presents a minimum at - the - money ( when ) and has a convex , parabolic shape called the smile " an example of which is given in figure [ fig1 ] .this is not always the case however : the implied volatility plotted as a function of the strike price may take various forms .some of these alternative patterns , well known to options traders , are documented in . on many markets , the convex parabolic smile " pattern observed frequently after the 1987 crash has been replaced in the recent years by a still convex but monotonically decreasing profile .the empirical evidence alluded to above points out to the misspecification of the black - scholes model and call for a satisfying explanation .if the spd is not a lognormal then there is no reason that a single parameter , the implied volatility , should adequately summarize the information content of option prices .on the other hand , the availability of large data sets of option prices from organized markets such as the cboe ( chicago board of options exchange ) add a complementary dimension to the data sets available for empirical research in finance : whereas time series data give one observation per date , options prices contain a whole cross section of prices for each maturity date and thus enable comparison between cross sectional and time series information , giving a richer view of market variables . in theory ,the information content of option prices is fully reflected by the knowledge of the entire density : this has led to developments of methods which , starting from a set of option prices search for a density such that eq.([spd ] ) holds .such a distribution is called an _ implied distribution _ , by analogy with implied volatility is a lognormal , the variance of the implied distribution does _ not _ coincide with the black - scholes implied volatility ] .if one adheres to the assumption of absence of arbitrage opportunities , the notion of implied distribution coincides with the concept of state price density defined above . buteven if one does not adopt this point of view , the implied distribution still contains important information on the market . 
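to make the inversion described above concrete, here is a minimal sketch (an illustration, not code from the paper) that prices a european call with the black-scholes formula and then recovers the implied volatility by a one-dimensional root search; all numerical values are assumptions chosen only for the example.

```python
import numpy as np
from scipy.stats import norm
from scipy.optimize import brentq

def bs_call(s0, k, t, r, sigma):
    """black-scholes price of a european call."""
    d1 = (np.log(s0 / k) + (r + 0.5 * sigma**2) * t) / (sigma * np.sqrt(t))
    d2 = d1 - sigma * np.sqrt(t)
    return s0 * norm.cdf(d1) - k * np.exp(-r * t) * norm.cdf(d2)

def implied_vol(price, s0, k, t, r, lo=1e-6, hi=5.0):
    """invert the black-scholes formula numerically for the implied volatility."""
    return brentq(lambda s: bs_call(s0, k, t, r, s) - price, lo, hi)

market_price = bs_call(100, 110, 0.25, 0.02, 0.30)      # stand-in for an observed quote
print(implied_vol(market_price, 100, 110, 0.25, 0.02))  # recovers ~0.30
```

repeating the inversion across strikes and maturities yields the implied volatility surface whose non-constant shape (the smile) is discussed above.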
in the followingwe will use indifferently the terms implied distribution " and state price density " for .let us now describe various methods for extracting information about the state price density from option prices .given that all options prices can be expressed in terms of a single function , the state price density , one can imagine statistical procedures to extract from a sufficiently large set of option prices .different methods have been proposed to reach this objective , among which we distinguish three different approaches .expansion methods use a series expansion of the spd which is then truncated to give a parametric approximation , the parameters of which can be calibrated to observed option prices .non parametric methods do not make any specific assumption on the form of the spd but require a lot of data .parametric methods postulate a particular form for the spd and fit the parameters to observed option prices .we regroup in this section various methods which have in common the use of a series expansion for the state price density .the general methodology can be stated as follows .one starts with an expansion formula for the state price density considered as a general probability distribution : the first term of the expansion corresponding either to the lognormal or the normal distribution .the following terms can be therefore considered as successive corrections to the lognormal or normal approximations .the series is then truncated at a finite order , which gives a parametric approximation to the spd which , if analytically tractable , enables explicit expressions to be obtained for prices of options .these expressions are then used to estimate the parameters of the model from market prices for options .resubstituting in the expansion enables to retrieve an approximate expression for the spd .a general feature of these methods is that even when the infinite sum in the expansion represents a probability distribution , finite order approximations of it may become negative which leads to negative probabilities far enough in the tails .this drawback should not be viewed as prohibitive however : it only means that these methods should not be used to price options too far from the money .we will review here three expansion methods : lognormal edgeworth expansions , cumulant expansions and hermite polynomials .all these methods are based on a series expansion of the fourier transform of a probability distribution ( here , the state price density ) defined by : the cumulants of the probability density are then defined as the coefficients of the taylor expansion : the cumulants are related to the central moments by the relations one can normalize the cumulants to obtain dimensionless quantities : is called the skewness of the distribution , the kurtosis .the skewness is a measure of asymmetry of the distribution : for a distribution symmetric around its mean , while indicates more weight on the right side of the distribution .the kurtosis measures the fatness of the tails : for a normal distribution , a positive value of indicated a slowly decaying tail while distributions with a compact support often have negative kurtosis .a distribution with is said to be leptokurtic .an edgeworth expansion is an expansion of the difference between two probability densities and in terms of their cumulants : since the density of reference used for evaluating payoffs in the black - scholes model is the lognormal density , jarrow & rudd suggested the use of the expansion above , taking as the state 
price density and as the lognormal density .the price of a call option , expressed by eq .[ spd ] , is given by : where is the black - scholes price , the implied variance of the spd and and are respectively the skewness and the kurtosis of the spd and of the lognormal distribution .given a set of option prices for maturity , eq .[ jarro ] can then be used to determine the implied variance and the implied cumulants and .this method has been applied by corrado & su to s&p options : they extract the implied cumulants and for various maturities from option prices and show evidence of significant kurtosis and skewness in the implied distribution . using the representation above, they propose to correct the black - scholes pricing formula for skewness and kurtosis by adding the first two terms in eq.[jarro ] .no comparison is made however between implied and historical parameters ( cumulants of ) .another method , proposed by potters , cont & bouchaud , is based on an expansion of the state price density starting from a normal distribution . the first two terms are given by where is the skewness and the kurtosis of the density .although the mathematical starting point here is quite similar to the hermite or edgeworth expansion , the procedure used by potters _et al _ is very different : instead of directly matching the parameters to option prices , they focus on reproducing correctly the shape of the volatility smile .their procedure is the following : starting from the expansion ( [ cumu ] ) an analytic expression for the option price can be obtained in the form of series expansion containing the cumulants .the series is then truncated at a finite order and the expression for the option price inverted to give an analytical approximation for the volatility smile in terms of the cumulants up to order , expression as a polynomial of degree in .this expression is then fitted to the observed volatility smile ( for example using a least squares method ) to yield the implied cumulants .an advantage of this formulation is that it corresponds more closely to market habits : indeed , option traders do not work with prices but with implied volatilities which they rightly considered to be more stable in time than option prices .this analysis can be repeated for different maturities to yield the implied cumulants as a function of maturity : the resulting term structure of the cumulants then ( shown in fig.[fig2 ] ) gives an insight into the evolution of the state price density under time - aggregation . 
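the following sketch illustrates the fitting step just described in a simplified form: the smile is truncated to a quadratic polynomial in a standardized moneyness variable and fitted by least squares, the slope and curvature playing the role of skewness- and kurtosis-type corrections. it is only an illustration of the general procedure, not the exact formulas of the papers cited, and the quoted strikes and implied volatilities are invented.

```python
import numpy as np

# hypothetical smile for one maturity: strikes and black-scholes implied vols
strikes = np.array([80, 90, 95, 100, 105, 110, 120], dtype=float)
ivols = np.array([0.280, 0.250, 0.235, 0.225, 0.222, 0.224, 0.235])
s0, t = 100.0, 0.25

# standardized moneyness and a least-squares quadratic fit of the smile
atm_vol = ivols[np.argmin(np.abs(strikes - s0))]
m = np.log(strikes / s0) / (atm_vol * np.sqrt(t))
c2, c1, c0 = np.polyfit(m, ivols, deg=2)    # numpy returns highest degree first

print(f"level     c0 = {c0:.4f}")   # at-the-money volatility level
print(f"slope     c1 = {c1:.4f}")   # skewness-type correction
print(f"curvature c2 = {c2:.4f}")   # kurtosis-type correction
```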
by applying this method to options on bund contracts on the liffe market ,the authors show that the term structure of the implied kurtosis matches closely that of the historical kurtosis , at least for short maturities for which kurtosis effects are very important .this observation shows that the densities and have similar time - aggregation properties ( dependence on ) a fact which is not easily explained in the arbitrage pricing framework where the relation between and is unknown in incomplete markets .[ fig2 ] although the expansion given in uses only the skewness and kurtosis , one could in principle move further in the expansion and use higher cumulants , which would lead to a polynomial expression for the implied volatility smile .however empirical estimates of higher order cumulants are unreliable because of their high standard deviations .the -th hermite polynomial is defined as : the method recently proposed by abken _et al _ uses a hermite polynomial expansion for both and the payoff function .although the starting point is similar to the approach of , the method is different : it is based on the properties of hermite polynomials which form an orthonormal basis for the scalar product : the state price density can be expanded on this basis : madan & milne also use a representation of the payoff function in the hermite polynomial basis : therefore , in contrast with the cumulant expansion method , not only the spd is approximated but also the payoff .the coefficients can be calculated analytically for a given payoff function . in the case of a europeancall the coefficients are given in .the price of an option with payoff is then given by : the price of any option can therefore be expressed as a linear combination of the coefficients , which correspond to the market price of hermite polynomial risk " . in order to retrieve these coefficients from option prices , one can truncate the expansion in eq.[hermit ] at a certain order and , knowing the coefficients , calculate so as to reproduce as closely as possible a set of option prices , for example using a least - squares method .the state price density can then be reconstructed using eq .[ hermite ] . an empirical exampleis given in with : the empirical results show that both the historical and state price densities have significant kurtosis ; however , the tails of the state price density are found to be fatter than those of the historical density , especially the left tail : the interpretation is that the market fears large negative jumps in the price that have not ( yet ! ) been observed in the recent price history . 
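as a concrete, simplified illustration of such expansions (a gram-charlier variant of the hermite idea, not the exact scheme of the papers cited), the sketch below represents the spd of the standardized log-return as a gaussian times a truncated probabilists'-hermite series, prices calls by numerical integration, and fits the two correction coefficients to a set of invented market quotes by least squares; the martingale constraint on the forward is ignored for brevity.

```python
import numpy as np
from numpy.polynomial.hermite_e import hermeval
from scipy.optimize import least_squares

s0, r, t, sigma = 100.0, 0.02, 0.25, 0.22
strikes = np.array([85, 90, 95, 100, 105, 110, 115], dtype=float)
market = np.array([15.7, 11.3, 7.6, 4.7, 2.7, 1.5, 0.8])   # hypothetical call quotes

z = np.linspace(-8, 8, 4001)                     # standardized log-return grid
dz = z[1] - z[0]
phi = np.exp(-0.5 * z**2) / np.sqrt(2 * np.pi)   # gaussian reference density

def call_prices(params):
    c3, c4 = params
    # truncated hermite correction: phi(z) * (1 + c3/6 He3(z) + c4/24 He4(z))
    dens = phi * hermeval(z, [1.0, 0.0, 0.0, c3 / 6.0, c4 / 24.0])
    dens = np.clip(dens, 0.0, None)              # truncation can go negative in the tails
    dens /= dens.sum() * dz
    s_t = s0 * np.exp((r - 0.5 * sigma**2) * t + sigma * np.sqrt(t) * z)
    payoff = np.maximum(s_t[None, :] - strikes[:, None], 0.0)
    return np.exp(-r * t) * (payoff * dens).sum(axis=1) * dz

fit = least_squares(lambda p: call_prices(p) - market, x0=[0.0, 0.0])
print("skewness-type coefficient:", fit.x[0])
print("kurtosis-type coefficient:", fit.x[1])
```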
such results have also been reported in several other studies of option prices .one of the drawbacks of the black - scholes model is that it is based on a strong assumption for the form of the distribution of the underlying assets fluctuations , namely their lognormality .although everybody agrees on the weaknesses of the lognormal model , it is not an easy task to propose a stochastic process reproducing in a satisfying manner the dynamics of asset prices .non - parametric methods enable to avoid this problem by using model - free statistical methods based on very few assumptions about the process generating the data .two types of non - parametric methods have been proposed in the context of the study of option prices : kernel regression and maximum entropy techniques .ait sahalia & lo have introduced another method based on the following observation by breeden & litzenberger : if denotes the price of call option then the state price density can be obtained by taking the second derivative of with respect to the exercise price : if one observed a sufficient range of exercise prices then eq . [ bibi] could be used in discrete form to estimate .however it is well known that the discrete derivative of an empirically estimated curve need not necessarily yield a good estimator of the theoretical derivative , let alone the second - order derivative .ait - sahalia & lo propose to avoid this difficulty by using non - parametric kernel regression : kernel methods yield a smooth estimator of the function and under certain regularity conditions it can be shown that the second derivative of the estimator converges to for large samples .however the convergence is slowed down both because of differentiation and because of the curse of dimensionality " i.e. the large number of parameters in the function . 
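the second-derivative relation quoted above can be illustrated directly with a finite-difference (butterfly) approximation on an evenly spaced strike grid; in the sketch below the "market" call prices are generated from a black-scholes model purely as a stand-in for smoothed quotes, so the recovered density is only an example.

```python
import numpy as np
from scipy.stats import norm

def bs_call(s0, k, t, r, sigma):
    d1 = (np.log(s0 / k) + (r + 0.5 * sigma**2) * t) / (sigma * np.sqrt(t))
    return s0 * norm.cdf(d1) - k * np.exp(-r * t) * norm.cdf(d1 - sigma * np.sqrt(t))

def spd_from_calls(strikes, calls, r, t):
    """breeden-litzenberger: spd ~ exp(r t) * d^2 C / dK^2, by central differences."""
    dk = strikes[1] - strikes[0]                      # assumes an even strike grid
    curvature = (calls[2:] - 2 * calls[1:-1] + calls[:-2]) / dk**2
    return strikes[1:-1], np.exp(r * t) * curvature

k_grid = np.arange(60.0, 141.0, 2.0)
c_grid = bs_call(100.0, k_grid, 0.5, 0.02, 0.25)      # stand-in for smoothed market quotes
k_mid, q = spd_from_calls(k_grid, c_grid, r=0.02, t=0.5)
print(q.sum() * (k_mid[1] - k_mid[0]))                # integrates to roughly 1
```

with real quotes the differentiation amplifies noise, which is exactly why the kernel smoothing described above (or some other smoothing of the call price function) is applied before taking the second derivative.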
applying this method to s&p futures options , ait - sahalia & lo obtain an estimator of the state price density for various maturities varying between 21 days and 9 months .the densities obtained are systematically different from a lognormal density and present significant skewness and kurtosis .but the interesting feature of this approach is that it yields the entire distribution and not only the moments or cumulants .one can then plot the spd and compare it with the historical density or with various analytical distributions .another important feature is that the method used by ait - sahalia & lo also estimated the dependence on the maturity of the option prices , yielding , as in the cumulant expansion method , the term - structure ( scaling behavior ) of various statistical parameters as a by - product .however it is numerically intensive and difficult to use in real - time applications .the non - parametric methods described above minimize the distance between observed option prices and theoretical option prices obtained from a certain state price density .however such a problem has in principle an infinite number of solutions since the density has an infinite number of degrees of freedom constrained by a finite number of option prices .indeed , different non - parametric procedures will not lead in general to the same estimated densities .this leads to the need for a criterion to choose between the numerous densities reproducing correctly the observed prices .two recent papers have proposed a method for estimating the state price density based on a statistical mechanics/ information theoretic approach , namely the maximum entropy method .the entropy of a probability density is defined as : is a measure of the information content the idea is to choose , between all densities which price correctly an observed set of options , the one which has the maximum entropy i.e. maximizes under the constraint : and subject to the constraint that a certain set of observed option prices are correctly reproduced : this approach is interesting in several aspects .first , it is based on the minimization of an information criterion which seems less arbitrary than other penalty functions such as those used in other non - parametric methods .second , one can generalize this method to minimize the kullback - leibler distance between and the historical density , defined as : minimizing this distance gives the state price density which is the closest " to the historical density in an information - theoretic sense .this density should be related to the minimal martingale measure proposed by .the value of can then give a straightforward answer to the question : how different is the spd from the historical distribution ? or : how different are market prices of options from those obtained by naive expectation pricing ( see section [ expect ] ) ?however the absence of smoothness constraints has its drawbacks .one of the characteristics of this method is that it typically gives bumpy " i.e. 
multimodal estimates of the state price density .this is due to the fact that , contrarily to the there is no constraint on the smoothness of the density .this may seem a bit strange because it is not the type of feature one expects to observe : for example , the historical pdfs of stock returns are always unimodal .this has to contrasted with the high degree of smoothness required in kernel regression methods .some authors have argued that these bumps may be intrinsic properties " of market data and should not be dismissed as aberrations but no economic explanation has been proposed .jackwerth & rubinstein solve this problem by imposing smoothness constraints on the density : this can be done by subtracting from the optimization criterion a term penalizing large variations in the derivative .however the relative weight of smoothness vs.entropy terms may modify the results .apart from the black - scholes model , the other widely used option pricing model is the discrete - time binomial tree model . in the same way that continuous - time models can be used to extract continuous state price densities from market prices of options ,the binomial tree model can be used to extract from option prices an implied tree " the parameters of which are conditioned to reproduce correctly a set of observed option prices .rubinstein proposes an algorithm which , starting from a set of option prices at a given maturity , constructs an implied binomial tree which reproduces them exactly .the implied tree contains the same type of information as the state price density presented above .the tree can then be used to price other options .although discrete by definition , binomial trees can approximate as closely as one wishes any continuous state price density provided the number of nodes is large enough .rubinstein s approach is easier to implement from a practical point of view than kernel methods and can perfectly fit a given set of option prices for any single maturity .however the large number of parameters may be a drawback when it comes to parameter stability : in practice the nodes of the binomial tree have to be recalculated every day and , as in the case of the black - scholes implied volatility , the implied transition probabilities will in general change with time .the habit of working with the lognormal distribution by reference to the black - scholes model has led to parametric models representing the state price density as a mixture of lognormals of different variances : where is a lognormal distribution with unit variance and mean , the ( risk - free ) interest rate .the advantage of such a procedure is that the price of an option is simply obtained as the average of black - scholes prices for the different volatilities weighted by the respective weights of each distribution in the mixture . 
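the pricing rule just mentioned is easy to state in code: with a mixture-of-lognormals spd, a call is priced as the weight-average of black-scholes prices, one term per component. in the minimal sketch below every component is given the same (risk-neutral) mean; the papers cited also let the component means differ, which adds parameters. the weights and volatilities shown are invented.

```python
import numpy as np
from scipy.stats import norm

def bs_call(s0, k, t, r, sigma):
    d1 = (np.log(s0 / k) + (r + 0.5 * sigma**2) * t) / (sigma * np.sqrt(t))
    return s0 * norm.cdf(d1) - k * np.exp(-r * t) * norm.cdf(d1 - sigma * np.sqrt(t))

def mixture_call(s0, k, t, r, weights, sigmas):
    """call price under a mixture-of-lognormals spd: weighted average of
    black-scholes prices, all components sharing the same forward."""
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()
    return sum(wi * bs_call(s0, k, t, r, si) for wi, si in zip(w, sigmas))

print(mixture_call(100, 105, 0.5, 0.02, weights=[0.7, 0.3], sigmas=[0.18, 0.45]))
```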
in principle one could interpret such a mixture as the outcome of a switching procedure between regimes of different volatility, the conditional spd being lognormal in each case. such models have been fitted to option prices in various markets by . their results are not surprising: by construction, a mixture of lognormals has thin tails unless one allows high values of the variance. but the major drawback of such a parametric form is probably its absence of theoretical or economic justification. remember that the density which is modeled as a mixture of lognormals is not the historical density but the state price density: even under the hypothesis of market completeness it is not clear what sort of stochastic process for the underlying asset would give rise to such a state price density. [table: advantages and drawbacks of various methods for extracting information from option prices.] if one considers a simple exchange economy with a representative investor, then from the knowledge of any two of the three following ingredients it is theoretically possible to deduce the third one: 1. the preferences of the representative investor. 2. the stochastic process of the underlying asset. 3. the prices of derivative assets. therefore, at least in theory, knowing the prices of a sufficient number of options and using time series data to obtain information about the price process of the underlying asset, one can draw conclusions about the characteristics of the representative agent's preferences. such an approach has been proposed by jackwerth to extract the degree of risk aversion of investors implied by option prices. exciting as it may seem, such an approach is limited as it stands, for several reasons. first, while a representative investor approach may be justified in a normative context (which is the one adopted implicitly in option pricing theory), it does not make sense in a _positive_ approach to the study of market prices. the limits of the concept of a representative agent have already been pointed out by many authors. taking seriously the idea of a representative investor would imply all sorts of paradoxes, the absence of trade not being the least of them. furthermore, even if the representative agent model were qualitatively correct, in order to obtain quantitative information on her preferences one must choose a parametric representation for the decision criterion adopted by the representative investor. typically, this amounts to postulating that the representative investor maximizes the expectation of a certain utility function of her wealth; depending on the choice for the form of the function, one may obtain different results from the procedure described above.
given that utility functions are not empirically observable objects , the choice of a parametric family for is often ad - hoc thus reducing the interest of such an approach from an empirical point of view .all options traded on a given underlying asset do not have the same liquidity : there are typically a few strikes and maturities for which the market activity is intense and the further one moves away from the money and towards longer maturities the less liquid the options become .it is therefore reasonable to consider that some options prices are more accurate " than others in the sense their prices are more carefully arbitraged .these considerations must be taken into account when choosing the data to base the estimations on ; for further discussion of this issue see .given this fact , one can then use the information contained in the market prices of liquid options -considered to be priced more efficiently"- to price less liquid options in a coherent , arbitrage - free fashion .the idea is simple : first , the state price density is estimated by one of the methods explained above based only on market prices of liquid options ; the estimated spd is then used to calculate values of other , less liquid options .this method may be used for example to interpolate between existing maturities or exercice prices .if one has an efficient method for pricing illiquid options better than the market then such a method can potentially be used for obtaining profits by systematically buying underpriced options and selling overpriced ones .these strategies are not arbitrage strategies in a textbook sense i.e. riskless strategies with positive payoff but they are statistical arbitrage strategies : they are supposed to give consistently positive returns in the long run .the first such test was conducted by who used the implied volatility as a predictor of future price volatility of the underlying asset .more recently ait - sahalia _ et al _ have proposed an arbitrage strategy based on non - parametric kernel estimators of the spd .the idea is the following : one starts with a diffusion model for the stock price : where the instantaneous volatility is considered to be a deterministic function of the price level .the function is then estimated from the historical price series of the underlying assets using a non - parametric approach . 
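as one possible reading of that nonparametric step (an assumption on my part, not the estimator of the paper), the sketch below estimates the diffusion coefficient as a function of the price level by a nadaraya-watson kernel regression of annualized squared log-returns on the price at the start of each return interval; the bandwidth and the simulated data are placeholders.

```python
import numpy as np

def local_vol_kernel(prices, dt, grid, h):
    """kernel-regression estimate of sigma(s): gaussian-weighted average of
    annualized squared log-returns, conditioned on the starting price level."""
    s = np.asarray(prices, dtype=float)
    r2 = np.diff(np.log(s)) ** 2 / dt          # annualized squared log-returns
    levels = s[:-1]                            # price level at the start of each interval
    est = np.empty(len(grid))
    for i, x in enumerate(grid):
        w = np.exp(-0.5 * ((levels - x) / h) ** 2)
        est[i] = np.sqrt(np.sum(w * r2) / np.sum(w))
    return est

rng = np.random.default_rng(1)                 # simulated stand-in for a price history
prices = 100 * np.exp(np.cumsum(0.2 * np.sqrt(1 / 252) * rng.standard_normal(2000)))
grid = np.linspace(prices.min(), prices.max(), 25)
print(local_vol_kernel(prices, dt=1 / 252, grid=grid, h=5.0)[:5])
```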
under the assumption of a complete market, the state price density may be calculated from , yielding an estimator .another estimator may be obtained by a kernel method as explained above .if options were priced according to the theory based on the assumption in eq.[diff ] then one would observe , a hypothesis which is rejected by the data .the authors then propose to exploit the difference between the two distributions to implement a simple trading strategy , which boils down to buying options for which the theoretical price calculated with is lower than the market price ( given by ) and selling in the opposite case .they show that their strategy yields a steady profit ( annualized with a sharpe ratio around 1.0 ) when tested on historical data .such results have yet to be confirmed on other markets and data sets and it should be noted that large data sets are needed to implement them .we have described different methods for extracting the statistical information contained in market prices of options .there are several points which , in our opinion , should be kept in mind when using the results of such methods either in a theoretical context or in applications : 1 .all methods point out to the existence of fat tails , excess kurtosis and skewness in the state price density and clearly show that the state price density is different from a lognormal , as assumed in the black - scholes model .the black - scholes formula is simply used as a tool for translating prices into implied volatilities and not as a pricing method .the study of the evolution of the state price density under time aggregation shows a nontrivial term structure of the implied cumulants , resembling the terms structure of the historical cumulants .for example the term structure of the implied kurtosis shows a slow decrease with maturity which bears a striking similarity with that of historical kurtosis . in the terms used in the mathematical finance literature ,the risk - neutral " dynamics is not well described by a random walk / ( geometric ) brownian motion model .3 . one should _ not _ confuse the state price densities estimated by the approaches discussed above with the historical densities obtained from the historical evolution of the underlying asset .this confusion can be seen in many of the articles cited above : it amounts to implicitly assuming an expectation pricing rule .the two densities reflect two different types of information : while the historical densities reflects the fluctuations in the market price of the underlying asset , option prices and therefore the state price density reflects the anticipations and preferences of market participants rather than the actual ( past or future ) evolution of prices .this distinction is clearly emphasized in and more explicitly in the maximum entropy method where even by minimizing the distance of the spd with the historical distribution one finds two different distributions .another way of stating this result is that option prices are not simply given by historical averages of their payoffs .4 . 
more specifically , accessing the state price densityempirically enables a direct comparison with the historical density which provides a tool for studying a central question in option pricing theory : the relation between the historical and the so - called risk - neutral " density .the results show that the two distributions not only differ in their mean but may also differ in higher moments such as skewness or kurtosis .+ in particular the intuition " conveyed by the black - scholes model that the pricing density is simply a centered ( zero - mean ) version of the historical density is not correct . in this senseone sees that the black - scholes model is a singularity " and its properties should not be considered as generic .although this is implicitly assumed by many authors , it is not obvious that the state price densities estimated from options data do actually correspond to the risk - neutral probabilities " or martingale measures " used in the mathematical finance literature . although constraint of the absence of arbitrage opportunities theoretically imposes that all option prices be expressed as expectations of their payoff with respect to the _ same _ density , the introduction of transaction costs and other market imperfections ( limited liquidity for example ) can allow for the simultaneous existence of several spds compatible with the observed prices .indeed the presence of market imperfections may drastically modify the conclusions of arbitrage - based pricing theories . in statistical terms, it is not clear whether a set of option prices determine the spd uniquely in the presence of market imperfections _ even _ from a theoretical point of view ( e.g. if one could observe an infinite number of strikes ) .this issue has yet to be investigated both from a theoretical and empirical point of view .the methods described above are becoming increasingly common in applications and will lead to an enhancement of arbitrage activities between the spot and option markets .given the rapid development of options markets , the volume of such arbitrage trades is not negligible compared to the initial volume of the spot market , giving rise to non - negligible feedback effects .the existence of feedback implies that derivative asset such as call options can not be priced in a framework where the underlying asset is considered as a totally exogenous stochastic process : the distinction between underlying and derivative asset becomes less clear - cut than what text - book definitions tend to make us think .the development of such integrated " approaches to asset pricing should certainly be on the agenda of future research .i would like to thank yacine ait - sahalia , jean - pierre aguilar , jean - philippe bouchaud , nicole el karoui , jeff miller and marc potters for helpful discussions , _ science & finance sa _ for their hospitality and the organizers of the budapest workshop , janos kertesz and imre kondor , for their invitation .the figures are taken from .ait - sahalia , y. & lo , a.h .( 1996 ) `` nonparametric estimation of state price dens ities implicit in financial asset prices '' nber working paper 5351 .ait - sahalia y. , wang y. & yared , f. ( 1997 ) do option markets correctly assess the probabilities of movement of the underlying asset ? " , communication presented at aarhus university center for analytical finance , sept .1997 .eberlein , e. & jacod , j. ( 1997 ) on the range of options prices `` _ finance & stochastics _ , * 1 * ( 2 ) , 131 - 140 .el karoui , n. 
& m.c. quenez (1991) "dynamic programming and pricing of contingent claims in incomplete markets", _siam journal of control and optimization_. föllmer, h. & schweizer, m. (1990) "hedging of contingent claims under incomplete information", in davis, m.h.a. & elliott, r.j. (eds.), _applied stochastic analysis_, stochastic monographs *5*, london: gordon & breach, pp. 389-414. föllmer, h. & sondermann, d. (1986) "hedging of non-redundant contingent claims", in hildenbrand, w. & mas-colell, a. (eds.), _contributions to mathematical economics in honor of gérard debreu_, 205-223, north holland, amsterdam. melick, w.r. & thomas, c.p. (1997) "recovering an asset's implied pdf from option prices: an application to crude oil during the gulf crisis", _journal of financial and quantitative analysis_, *32*, 91-116.
after a brief review of option pricing theory, we introduce various methods proposed for extracting the statistical information implicit in option prices. we discuss the advantages and drawbacks of each method, the interpretation of their results in economic terms, their theoretical consequences and their relevance for applications.
the use of copulas has become popular for modeling multivariate data .initially , the marginal distributions are fitted , using the vast range of univariate models available , and the dependence between variables is then modeled using a copula .this approach is sometimes easier than seeking a ` natural ' multivariate distribution derived from a probabilistic model , because there may be no suitable multivariate distribution with the required marginals .baker gave a class of bivariate and multivariate copulas based on order - statistics , and this work seeks to ` dig deeper ' .the ordering and some other properties of these copulas are derived here , and the copulas are generalized into further copulas in the bivariate case .further experience is also gained in fitting distributions derived from the bivariate and multivariate copulas to data .first , we briefly recapitulate the essential concept of the earlier work .many topics covered there , such as the connection with bernstein polynomials and the farlie - gumbel - morgenstern ( fgm ) distribution , are not repeated here .the derivation of the class of distributions of interest is most easily done by considering the generation of correlated random variables from independent random variables and with distribution functions and pdfs ( where defined ) .if sets of random variables and are sorted into order statistics and , they can be paired off as , and one such pair randomly selected .this scheme yields a pair of dependent ( positively correlated ) random variables , the spearman ( grade ) correlation between them being .the marginal distributions of and are still respectively , because a randomly chosen order - statistic from a distribution is just a random variable from that distribution .we term the resulting bivariate distribution the ` bivariate distribution of order ' .the procedure also works in the general multivariate case , when sets of random variables can be similarly grouped .in addition to random variable pairs being selected as described , to be ` in phase ' , they can be chosen to be in antiphase , by pairing with , so giving a negative grade correlation of .this is not discussed further ; to model negative correlations one simply replaces by in the formulas .baker obtained a one - parameter family of bivariate copulas for a given by randomly choosing a pair from with probability , and a pair from with probability ; these random variables have grade correlation .equivalently , with probability we can choose and randomly and independently from their order statistics etc ; we could then say that and are chosen from independent cycles .the resulting distributions are a mixture of the distribution of order and the independent distribution .we term them ` mixture distributions of order ' .the device of taking mixtures of copulas will be used later to derive new copulas .some mathematical preliminaries are necessary : the distribution function of the of order statistics is given by ( stuart and ord ) .the corresponding pdf if it exists is and the bivariate distribution function of a random order - statistic pair is the mixture distribution of order then has distribution function and where applicable , pdf note that the copula in fact has one continuous and one discrete parameter .there is no need to pair corresponding order statistics : in the most general case the pair can be chosen with probability , so that where for the correct marginal distributions , we must have the matrix is doubly stochastic , and has independent 
elements . equation ( [ eq : plus ] ) corresponds to the choice .the birkhoff - von neumann theorem states that the set of doubly stochastic matrices of order is the convex hull of the set of permutation matrices of order , and that the extreme points of the set are the permutation matrices . here, we can view ( [ eq : gen ] ) as a mixture of the possible pairings of the and order statistics .the pairing chosen in ( [ eq : plus ] ) can achieve the largest grade correlation for a given , so the corresponding copula is of special interest . when considering copulas rather than bivariate distribution functions , it is convenient to define analogously to ( [ eq : fkn ] ) the distribution functions of order statistics of as where is a bernstein polynomial ; see lorenz for their mathematical description . then the copula corresponding to ( [ eq : hn ] ) ( the copula of order ) is and for completeness , the mixture copula is the material presented so far , except for the nomenclature and the remarks on the birkhoff - von neumann theorem , was given in .the essence of the earlier paper was the derivation of the bivariate distribution of order ( [ eq : hn ] ) and its mixture with a distribution of order 1 to obtain the mixture distribution of order ( [ eq : plus ] ) .this distribution allows arbitrary correlations , and the corresponding copula ( [ eq : mixcop ] ) has the unusual feature of possessing one continuous and one discrete parameter . from this point on ,new results are presented .the mixture copula ( [ eq : mixcop ] ) is of interest in itself , and some further properties of it are derived , such as its ordering properties .it is however convenient to start from the more basic copula of order ( [ eq : oldcop ] ) both when deriving properties of ( [ eq : mixcop ] ) and when deriving further copulas . in the next section , some further properties of the bivariate copulas ( [ eq : oldcop ] ) and ( [ eq : mixcop ] ) are given .the strongest ordering property is positive likelihood ratio dependence ( lrd ) , where when .the lrd property implies all other quadrant dependence properties ( nelsen , 2006 ) .the fgm distribution is known to be lrd ( eg drouet - mari and kotz , 2001 ) and it is proved here that ( [ eq : hn ] ) is lrd .the general distribution ( [ eq : gen ] ) is not .writing for brevity etc , from ( [ eq : hn ] ) we have that where since , the right - hand side can be rewritten as .this can be factored into since , then .as , each bracket of ( [ eq : brack ] ) is positive , and the lrd property follows .it follows straightforwardly that mixture distributions derived from ( [ eq : hn ] ) such as ( [ eq : plus ] ) are also lrd .the calculation of kendall s tau for ( [ eq : oldcop ] ) and ( [ eq : mixcop ] ) is given in baker .blomqvist s medial coefficient or beta is another widely used measure of association , given by . from ( [ eq : hn ] ) , this does not simplify much ; it can also be written there is an additional factor of for the copula ( [ eq : mixcop ] ) .note that at the median , , and the pdf from ( [ eq : hn ] ) takes a simple form because the series can then be summed , to give since , we have that . gini s gamma is a coefficient of association that can be expressed as ( nelsen , ) . 
for ( [ eq : oldcop ] ) this gives after some algebra there is again an additional factor of for the copula ( [ eq : mixcop ] ) .note that the schweizer - wolff sigma defined as is numerically identical to spearman s rho for ( [ eq : plus ] ) , because it possesses the pqd ( positive quadrant dependence ) property as a consequence of the lrd property .these dependence measures are shown in figure 1 plotted against .dependence increases with and the frchet bound is attained as .the coefficient of tail dependence ( e.g. joe ) is defined in general as , where , . from its definition , , and .the distribution ( [ eq : hn ] ) can be shown after some algebra to yield , so that the random variables are asymptotically independent .this property also holds for all finite mixture distributions . the copula ( [ eq : oldcop ] ) and its mixtures possess the radial or reflective symmetry , also seen for example in the frank and plackett copulas .all existing copulas seem to have the simpler symmetry property .asymmetry between and is usually handled by using different marginal distributions and , but there is no reason why the copula itself should be symmetric .asymmetry can not occur for archimedean copulas , for which since , where is a ( decreasing ) function . yet , as order statistics of can be paired with any permutation of order statistics of , and still give marginal distributions , it is easy to construct copulas which do not have this symmetry .for example , when , the copula which can be written is asymmetric .it would be interesting to devise tests of this symmetry and to see whether such asymmetric copulas are ever needed in practice .this short section covers four small points for completeness .the hazard function takes simple forms in the tails .when , since the survival function , we have that . only the term of the pdf from ( [ eq : plus ] ) survives , and this gives .the correlation between the random variables inflates the hazard . in the right - hand tail , where , , only the term survives from ( [ eq : fkn ] ) , for all .it follows that for , we have as the denominator can be much larger than , the correlation between the random variables can decrease the hazard in the tail .the pdf from ( [ eq : hn ] ) can be written as a hypergeometric function : median regression is the curve , where .this does not take any simple form for these distributions . finally , the pdf from ( [ eq : hn ] ) is proportional to the probability that a random walk in the plane returns to its start point after steps .given probabilities of moving left or right , and probabilities of moving up or down , so that , we have that so that where , , e.g. . at the median where , the random walk is symmetric , with probability of moving in any direction .having derived new properties of the copulas ( [ eq : oldcop ] ) and ( [ eq : mixcop ] ) introduced earlier , we now seek to generalize them into bivariate models that could be useful for fitting to data .the most general form of the bivariate model ( [ eq : gen ] ) could be fitted directly to data for low , and has parameters . from the definition of the grade correlation , we have ( nelsen , 2006 ) generalizing the proof in baker , this gives in the most general case , this model can reproduce any copula with increasing accuracy as , and all models of lower order can be written as ( [ eq : gen ] ) for some choice of . 
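to make the construction concrete, the sketch below evaluates the copula of order n using the fact that the distribution function of the k-th order statistic of n uniforms is the beta(k, n-k+1) distribution function, and then checks the grade correlation numerically through the usual double-integral formula; the closed-form value (n-1)/(n+1) used for comparison is my own reading of the construction, which the numerical check supports.

```python
import numpy as np
from scipy.stats import beta
from scipy.integrate import dblquad

def copula_order_n(u, v, n):
    """order-n copula: (1/n) * sum_k F_{k:n}(u) F_{k:n}(v), with F_{k:n} the
    beta(k, n-k+1) cdf, i.e. the cdf of the k-th order statistic of n uniforms."""
    k = np.arange(1, n + 1)
    return np.mean(beta.cdf(u, k, n - k + 1) * beta.cdf(v, k, n - k + 1))

n = 4
# spearman's rho = 12 * (integral of C(u,v) over the unit square) - 3
integral, _ = dblquad(lambda v, u: copula_order_n(u, v, n),
                      0, 1, lambda u: 0.0, lambda u: 1.0)
print(12 * integral - 3, (n - 1) / (n + 1))   # the two numbers agree
```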
however, this does not help in the construction of simple models with few parameters , which is our aim .we first discuss several possible approaches , before introducing what seems the most useful new copula , the ` bessel function copula ' .one possibility for reducing the number of model parameters from would be to modify the scheme for generating correlated random numbers given in the introduction , by first generating a uniformly - distributed random number . then a random variable is chosen as the order statistic number of from , and number of from , where is the ` floor ' function .this allows the two random variables to be chosen from different orders of order statistic .the resulting model is a special case of ( [ eq : gen ] ) where .unfortunately , the resulting distributions , characterised by two discrete parameters , are not mathematically tractable .we therefore seek instead to obtain models with only a few parameters by generalizing ( [ eq : hn ] ) .a more general model arises from pairing order statistics only within some range or ranges ; for example , suppose only the 1st to and to order statistics pair , and the remainder associate randomly .then from ( [ eq : rhos ] ) it follows that this allows a distribution with three discrete parameters where the random variables correlate strongly only in one or both tails .another way to generate models that are more general than ( [ eq : hn ] ) is to form a finite mixture distribution where .this of course can be expressed as a special case of ( [ eq : gen ] ) .the spearman correlation is simply a new distribution can be derived by taking an infinite mixture of models .this gives copulas indexed by one parameter , if the mixing distribution is a 1-parameter distribution .in general the pdf is rearranging , an interesting distribution arises on taking where , , and denotes the bessel function of imaginary argument .this is a special case of 2-parameter discrete bessel function distribution first described by pitman and yor and later by yuan and kalbfleisch .then from the series expansion of the bessel function , we have that the copula is figures [ scatfig1 ] and [ scatfig2 ] illustrate the copula as scatterplots , for and respectively . herea randomly generated sample of size 1000 was generated from the joint distribution with copula and uniform marginals .this copula is the first one known to the author that requires special functions ; all others require only exponentials , logarithms , and powers .the spearman correlation is calculated from ( [ eq : spear ] ) and ( [ eq : wt ] ) as to obtain negative correlations , one sets . as , ( [ eq : besspdf ] ) gives . as , since , we have that as where .this shows that if .hence as the distribution attains the frchet bound . from ( [ eq: rhos1 ] ) as , we have that , so the grade correlation approaches unity , as it must . 
as this copula is not a finite mixture of the copula ( [ eq : oldcop ] ) , the coefficient of tail dependence could be nonzero .however , using the reflection symmetry of the copula , we have that the coefficient of right ( and left ) tail dependence is , where the double integral in ( [ eq : besscop ] ) is .the coefficient of tail dependence is thus still zero , except of course in the limit as the frchet - hoeffding bound is approached as .random variables from this copula can be derived by generating from the discrete bessel distribution , as described by devroye , and then randomly selecting one of the order - statistic pairs .this is how figures [ scatfig1 ] and [ scatfig2 ] were generated . here, was generated using the inverse probability method .this general strategy would be efficient if many random numbers were required , when unused order statistic pairs could be stored and used in preference to generating fresh ones .it also requires only generation of random numbers from the marginal distributions , and does not require the use of the inverse probability transformation on these distributions .the alternative method , of generating and then generating from the conditional distribution is not recommended as it is computationally more time consuming .one can also take the weight another special case of the bessel function distribution .after summing the series , this yields the more complex form where , .the spearman correlation from ( [ eq : spear ] ) is the copula is in general similar to ( [ eq : besscop ] ) but slightly more complex .still other choices can be made for , but these lead to pdfs that are much less tractable , being infinite sums of hypergeometric functions .the spearman correlations however are more tractable ; for example taking a displaced poisson distribution for the weights , the spearman correlation may be shown to be dataset from the australian institute of sport is used as an example .this is given in cook and weisberg and has been used as a testbed for new distributions by azzalini and others . here , percentage body fat and weight of 102 male athletes were used .figure [ fig2 ] shows the skew distribution of percentage body fat .the distribution of weight ( not shown ) was also slightly skew .a suitable univariate model for the marginal distributions was chosen as the lagged normal distribution , where the random variable , where is gaussian , and is exponential .in fact , taking gives a distribution that can be skew in either direction and long - tailed to either or both left and right .taking the normal mean as and standard deviation , and the exponential means as and , the pdf is ,\label{eq : pdflag}\end{gathered}\ ] ] where is the normal distribution function .the distribution function is .\label{eq : cdflag}\end{gathered}\ ] ] the mean , variance , skewness , and kurtosis . 
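The lagged normal variable X = Z + E is the exponentially modified Gaussian, so its density, distribution function and maximum-likelihood fit are available through scipy.stats.exponnorm. The sketch below assumes scipy's parameterization (shape K = tau/sigma, loc = mu, scale = sigma), which differs from the (mu, sigma, tau) notation used here; the parameter values are illustrative, not the fitted values reported below.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)

# direct construction: X = Z + E with Z ~ N(mu, sigma) and E ~ Exp(mean tau)
mu, sigma, tau = 10.0, 2.0, 5.0
x = rng.normal(mu, sigma, 5000) + rng.exponential(tau, 5000)

# scipy's exponnorm uses shape K = tau / sigma, loc = mu, scale = sigma
K_true = tau / sigma
print("closed-form pdf at the sample mean:",
      stats.exponnorm.pdf(x.mean(), K_true, loc=mu, scale=sigma))

# maximum-likelihood fit of the three parameters from the sample
K_hat, loc_hat, scale_hat = stats.exponnorm.fit(x)
print("fitted (mu, sigma, tau) ~", (loc_hat, scale_hat, K_hat * scale_hat))

# the fitted distribution function is cheap to evaluate, which is what makes
# this marginal convenient inside a copula-based likelihood
print("P(X <= 20) ~", stats.exponnorm.cdf(20.0, K_hat, loc=loc_hat, scale=scale_hat))
```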
since is a well - known special function , and the distribution function can be written as a function of , and also the moments can be written down , this distribution is quite an attractive choice for fitting data that depart from normality , and are not heavy tailed .the easy computation of the distribution function makes it particularly attractive for use in fitting multivariate distributions via copulas .care is needed in computing the pdf and distribution function when or are small .one can then use the asymptotic expansion for , which avoids the rounding errors implicit in taking the product of very large and very small quantities .the symmetric form of this distribution , with , is described in johnson , kotz and balakrishnan , vol .2 , chap . 24 . the percentage body fat could be fitted by maximum likelihood to a lagged normal distribution , where only the right tail was needed , so that .figure [ fig2 ] shows the fitted curve , with azzalini s skew normal distribution also fitted .both distributions fitted satisfactorily , according to the kolmogorov test , although better fits can be achieved at the expense of using more parameters ; there is even a suggestion of bimodality in the data .this is possible , as the sample comprises athletes from a variety of different sports .weight and height look normal , but weight has a lower aic ( akaike information criterion ) if fitted to a lagged normal , and this was done . the bivariate pdf ( [ eq : pdfp ] ) with fitted the data with a log - likelihood of and a weight .the observed spearman and pearson correlations were 0.613 and 0.581 , and the predictions from the model were 0.640 and 0.576 .the bessel function pdf ( [ eq : besspdf ] ) also fitted satisfactorily , with , and predicted spearman correlation of 0.65 , pearson correlation 0.565 .the fitted value of was . for comparison , the azzalini bivariate distribution fitted with , with the same number ( 7 ) of parameters .the point here is that the bessel function copula ( [ eq : besscop ] ) performs satisfactorily , as does the whole copula - based methodology of modeling the marginal distributions individually , and gluing them together with a copula .one can obtain good fits to the data , without forcing both the marginal distributions to be of the same form .there is then the freedom to vary the marginal modeling , for example by fitting a bimodal distribution in figure [ fig2 ] , which option is not available on fitting a standard multivariate distribution .consider the multivariate generalization of the models presented so far .this topic was only briefly touched on in , and the results here are new . denote the of random variables by , and denote the corresponding distribution functions , pdfs , and distribution functions of the of order statistics by and respectively .the most general multivariate model of order would be where and for all .this would have parameters . to reduce this number, one could consider only models in which the random variables are in phase ( or in antiphase , for negative correlations ) in cycles of length .variables could all be in the same cycle , or some could be in independent cycles .for example , with 5 variables , two could be paired in one cycle , two in another independent cycle , and the fifth variable be in a cycle of its own . 
to generate random numbers from such a distribution, one could compute the order statistics for the 5 variables , and then one random number would decide which order statistic was to be taken for variables 1 and 2 , another independent random number would select the third and fourth random variable pair , and a third independent random choice would select the fifth random variable from among its order statistics . clearly , random numbers for variables in such single cycles could be more efficiently generated by simply choosing a random variable from the appropriate marginal distribution .the number of models is the number of ways distinguishable objects ( random variables ) fit into or fewer identical boxes ( cycles ) .this is given by the recursion relation with ( tucker , ) .this may be derived by considering the addition of the object .it must occur in a box containing other objects , where the other objects can be chosen in ways , and the remaining other objects in the other boxes can be arranged in ways .the recursion relation follows , and the number of mixing parameters for a mixture model is .table 1 shows the number of models resulting ; the number grows faster than exponentially with .the table also shows the number of parameters for the subset of models obtained by simply including or excluding random variables from one common cycle .this simple scheme gives distributions whose marginals allow differing spearman correlations , and is feasible up to dimensions of 5 or 6 , beyond which the number of model parameters becomes excessive .the multivariate distribution function can be written where the sets run through all possible subsets of the random variables , and where .the model parameters can be estimated in the same way as described earlier for bivariate mixture models .with the notation etc , the trivariate case of ( [ eq : bigh ] ) can be written where .this model was fitted to the percentage of body fat , weight , and height for the australian institute of sport data from cook and weisberg . after fitting the three marginal distributions by maximum likelihood , using the lagged normal distribution ,the trivariate distribution was fitted by maximum likelihood to find the four parameters , keeping the marginal distributions fixed .subsequently , allowing the parameters of the marginal distributions to float increased the log - likelihood by only a very small amount .fitting weights that sum to unity poses a computational problem .the simple solution adopted was to use parameters to , fixing one parameter , and then the sum to unity , while the 4 free parameters can take any value on the real line .any of the can be set to 0 , but the choice is best altered if the term chosen fits to very small weight , as then all the other become huge .the observed and predicted pearson and spearman correlations are given in table [ t : two ] , showing fairly good agreement .a value of was used , but the results are not very sensitive to this , as long as is large enough to allow the highest correlation .the five fitted weights in ( [ eq:3 ] ) were respectively .interestingly , the independence ( first ) term is not needed .a common spearman correlation of derives from the last term in ( [ eq:3 ] ) , and the correlation between weight and height is then boosted by the second term , while that between body fat and weight is boosted by the fourth term . 
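Two computational points from the preceding paragraphs can be made concrete. The counting recursion is the Bell-number recursion, and one standard way of letting the mixture weights take any value on the real line while remaining positive and summing to unity is a log-weight (softmax-style) parameterization with one parameter fixed at zero; the latter is my reading of the description above, not necessarily the exact parameterization used for the fits.

```python
import numpy as np
from math import comb

def n_models(p):
    """Number of ways to place p distinguishable random variables into p or
    fewer identical cycles: B(p) = sum_k C(p-1, k) * B(k), with B(0) = 1
    (the Bell numbers)."""
    B = [1] * (p + 1)
    for m in range(1, p + 1):
        B[m] = sum(comb(m - 1, k) * B[k] for k in range(m))
    return B[p]

for p in range(2, 7):
    print(f"p = {p}: {n_models(p)} models")

def weights_from_free_params(beta_free):
    """Map unconstrained real parameters (with one weight pinned via beta_0 = 0)
    to positive weights that sum to one."""
    beta = np.concatenate(([0.0], np.asarray(beta_free, dtype=float)))
    w = np.exp(beta - beta.max())
    return w / w.sum()

print(weights_from_free_params([-1.2, 0.3, 2.0, 0.0]))   # five weights, four free
```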
the trivariate fit by this copula fits just slightly worse than the 3-dimensional azzalini model , in terms of log - likelihood , even although the marginal fits are slightly better .the azzalini model gave , while this model gave .unfortunately , this is the ` little rift within the lute ' that limits the usefulness of these multivariate models .although they can accommodate large and variable correlations , they can not fit an arbitrary correlation matrix .this was seen much more clearly on moving to a quadrivariate example , taken from a study by penrose and available online via statlib , in which percentage body fat , weight , height and abdominal circumference were fitted , for a sample of 252 men .the quadrivariate model in a terse notation , writing e.g. , was although the lagged normal distribution gave satisfactory marginal fits , the quadrivariate model fitted with 14 parameters gave , compared with the quadrivariate azzalini distribution fit of , and most of the weights fitted as zero .it was clear that the fitted correlations were in general too small .hence the usefulness of these multivariate distributions seems limited .following the introduction of a new copula in baker , it became clear to the author that its properties had not been fully enumerated , and also that it was possible to derive further copulas by generalizing it .further , only a few bivariate distributions had been fitted to data , and there was no practical experience at all with fitting multivariate distributions . in this paper , several ways of extending the class of copulas have been given .perhaps the most promising one is to make the order a random variable from the discrete bessel distribution .this leads to the ` bessel function ' copula ( [ eq : besscop ] ) , the only copula in the author s experience that requires a special function for its expression .this copula is indexed by one parameter .like the frank , clayton and plackett copulas , it contains the independence case , and can attain the frchet bound as .negative correlations are dealt with by e.g. setting .the use of this copula has been illustrated by fitting it to the australian institute of sport dataset .the fact that the copula must be written either as a double integral , or as a series expansion is a drawback , but in fitting to data by likelihood - based methods , the crucial requirement is that the pdf must be easily computable .this pdf is easy to compute , given the widespread existence of routines to compute the special functions and .it is also not difficult to generate random variables .this copula is by the way not archimedean ; archimedean copulas must be associative , but computations showed a difference between and ( lack of associativity ) of up to about 2% .the properties of the bivariate copulas have been further explored .the most significant is probably that the original copula in ( [ eq : plus ] ) and its mixtures possess the lrd ( likelihood ratio dominance ) ordering property .this property is therefore also possessed by the bessel function copula .the properties of the analogous multivariate copulas have also been studied , but here results are less positive .they do have some flexibility ; marginal distributions need not have identical parameters , and high correlations can be accommodated . 
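The lack of associativity mentioned above (the quoted difference of about 2% refers to the Bessel-function copula) can be checked numerically for any member of the family whose copula is available in closed form. The sketch below uses the order-n copula, for which C_n(u, v) = (1/n) sum_k I_u(k, n-k+1) I_v(k, n-k+1) follows directly from the order-statistic construction, with I the regularized incomplete beta function; the value of n and the number of test triples are arbitrary.

```python
import numpy as np
from scipy.special import betainc

def C_n(u, v, n=4):
    """Order-n copula of the order-statistic construction:
    C(u, v) = (1/n) * sum_k I_u(k, n-k+1) * I_v(k, n-k+1)."""
    k = np.arange(1, n + 1)
    return np.sum(betainc(k, n - k + 1, u) * betainc(k, n - k + 1, v)) / n

rng = np.random.default_rng(3)
worst = 0.0
for u, v, w in rng.uniform(0.05, 0.95, size=(2000, 3)):
    worst = max(worst, abs(C_n(C_n(u, v), w) - C_n(u, C_n(v, w))))
print("largest |C(C(u,v),w) - C(u,C(v,w))| found:", worst)
```

A nonzero value is what rules out an Archimedean representation, since Archimedean copulas are necessarily associative.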
although the hitherto untried process of fitting trivariate and quadrivariate models to data by maximum - likelihood estimation proved entirely feasible, it seems that despite their many parameters these distributions can not reproduce an arbitrary correlation matrix .the use of these distributions for is therefore problematical .they may however prove to be a starting point for the development of more useful distributions ..numbers of parameters to be estimated for two classes of multivariate model , and the number of correlations .the models , from left to right , are the single cycle model , and the multicycle model . [ cols="<,<,<,<",options="header " , ] 99 a. azzalini , a capitanio , distributions generated by perturbation of symmetry with emphasis on a multivariate skew t - distribution , journal of the royal statistical society series b , 65 ( 2003 ) 367 - 389 .a. azzalini , a capitanio , statistical applications of the multivariate skew normal distribution , journal of the royal statistical society series b , 61 ( 1999 ) 579 - 602 .r. d. baker , constructing multivariate distributions with fixed marginals , journal of multivariate analysis ( 2008 ) doi : 10.1016/j.jmva.2008.02.019 g. birkhoff , tres observaciones sobre el algebra lineal , univ .tucumn rev , ser .( 1946 ) 147 - 151 .r. d. cook and s. weisberg , an introduction to regression graphics , wiley , new york , 1994 .g. c. davis , jr . andm. h. kutner , the lagged normal family of probability density functions applied to indicator - dilution curves , biometrics , 32 , ( 1976 ) 669 - 675 .l. devroye , simulating bessel random variables , statistics & probability letters , 57 , ( 2002 ) 249 - 257 .d. drouet - mari , s. kotz , correlation and dependence , imperial college press , london , 2001 .h. joe , multivariate models and dependence concepts , chapman and hall , new york 1997 . n. l. johnson , a. w. kemp and s. kotz , univariate discrete distributions , 3rd .wiley , new york 2005 .n. l. johnson , s. kotz and n. balakrishnan , continuous univariate distributions , wiley , new york 1995 .r. b. nelsen , an introduction to copulas , 2nd ed . ,springer , new york , 2006 .g. g. lorenz , bernstein polynomials , 2nd .ed . , ams / chelsea , new york , 1986 .penrose , a.g .nelson , a.g .fisher , generalized body composition prediction equation for men using simple measurement techniques , medicine and science in sports and exercise , 17 ( 1985 ) 189 .j. pitman and m. yor , a decomposition of bessel bridges .z. fr wahrscheinlichkeitstheorie und verwandte gebiete 59 ( 1982 ) 425 - 457 .a. stuart , k. ord , kendall s advanced theory of statistics , vol 1 , 5th ed , p546 , charles griffin , london , 1987 .a. tucker , applied combinatorics , 3rd ed . , wiley , new york , 1995 .l. yuan and j. d. kalbfleisch , on the bessel distribution and related problems , annals of the institute of statistical mathematics , tokyo ( 2000 ) 52 , 438 - 447 .
|
a new class of copulas based on order statistics was introduced by baker (2008). here, further properties of the bivariate and multivariate copulas are described, such as that of likelihood ratio dominance (lrd), and further bivariate copulas are introduced that generalize the earlier work. one of the new copulas is an integral of a product of bessel functions of imaginary argument, and can attain the fréchet bound. the use of these copulas for fitting data is described, and illustrated with examples. it was found empirically that the multivariate copulas previously proposed are not flexible enough to be generally useful in data fitting, and further development is needed in this area. copulas; order statistics; bessel function; random numbers.
|
in some of the problems we may face with complicated functions of the z - transform variable , , whose inverse transform either can not be calculated analytically or the required calculations are quite tedious .clearly , numerical methods should be used in dealing with such problems to find the inverse z - transform . at this time ,as far as the author knows , the main contribution in the field of numerical inversion of z - transform is , which may seem complicated for software implementation .moreover , so far no application of this method is reported in the literature .the aim of this brief paper is to represent two very simple and effective methods for numerical inversion of the z - transform , which can be easily implemented by software . as a well - known classical fact , ], can be considered equal to unity which leads to =\frac{1}{2\pi}\int_0^{2\pi } x(e^{j\omega } ) e^{j\omega n } d\omega.\ ] ] the above equation simply represents the inverse fourier transform of .the main problem with this approach ( as well as all of the others based on the numerical evaluation of an integral ) is that sometimes it does not lead to accurate results .more precisely , according to the residue theorem the ] ( in the numerical examples presented at the end of this paper it is observed that the most accurate results are obtained when the radius of circle is considered equal to unity ) . especially , the numerical integration algorithms may lead to less accurate results for larger values of ( note that the frequency of the kernel of ( [ int_four ] ) is increased by increasing ) .the other problem with this technique is that for any value of we have to evaluate the integral again , which may be time consuming . by the way , calculation of the inverse z - transform by numerical evaluation of an integralis a well - known method widely used by researchers .the aim of this paper is to discuss on two less famous methods . in the following discussions it is assumed that the signal ] is not absolutely summable . for this purposewe can simply use the equation \right\ } = x(a^{-1}z) ] and \} ] for ) .clearly , eq . ( [ sec1 ] ) holds for any in the roc . for example , for we have z_1^{-n}.\ ] ] considering the fact that ] tends to zero as tends to infinity .moreover , the weights in ( [ sec2 ] ) also rapidly tend to zero by increasing the value of provided that ( note that according to the above discussion the region outside the unit circle necessarily belongs to roc ) .it concludes that the sigma in the righthand side of ( [ sec2 ] ) can safely be approximated by its first few terms as the following : z_1^{-n},\ ] ] where is chosen sufficiently large . assuming that are ( random ) points chosen from roc , eq .( [ sec3 ] ) leads to the following approximate linear algebraic equation : \\x[1 ] \\ \vdots\\ x[n ] \\\end{array}% \right),\end{gathered}\ ] ] where is the number of ( random ) points chosen from roc .assuming ^t ] , and eq . ( [ eq1 ] ) can be written as which contains variables and equations .assuming this equation ( theoretically ) has a unique solution provided that the points ( ) are suitably chosen such that the given in ( [ a ] ) is full rank . in the simulations of this paper the value of considered slightly larger than ( more precisely , ) and the points ( ) are randomly chosen from the region defied by .then the solution of ( [ eq1 ] ) is obtained by finding the which minimizes ( such a solution can easily be obtained by using the matlab command , i.e. , ) . + * remark 2 . 
* in practice it is observed that using the such that makes the problem ill - conditioned for larger values of ( even if belongs to roc ) .hence , points inside the unit circle should be avoided .as mentioned earlier , the second proposed method is based on the discrete fourier transform ( dft ) , which can effectively be calculated through the fast fourier transform ( fft ) ( see the and commands in matlab ) . before explaining our method , we need to briefly review the main result of dft .the following discussion can be found in with more details .consider the discrete - time signal ] , , and \}=x(e^{j\omega}) ] ; that is = x(e^{j\omega})|_{\omega=2\pi k / n} ] is also periodic with period .now , one can prove that if ] , then ] are related through the following equation : =\sum_{r=-\infty } ^\infty x[n - rn].\ ] ] considering the fact that =0 ] ( i.e. , the idft of samples of \} ] as given in ( [ dft2 ] ) .but , ] provided that =0 ] is assumed to be absolutely summable ) and is chosen sufficiently large .it concludes that in order to find the inverse z - transform of , whose all poles are located inside the unit circle and has no poles at infinity , first we calculate ( which is valid since the unit circle belongs to roc according to our previous assumption ). then we calculate the samples of at frequencies , where and is a sufficiently big number .denote the sample of as ] ( using , e.g. , the matlab command ) we arrive at ] for ) are approximately equal to ] ( more detailed discussion can be found in the following ) .the following is a brief explanation of the second method for calculation of the inverse z - transform of whose all poles are located inside the unit circle and has no poles at infinity : 1 .calculate .2 . calculate =x(e^{j\omega})|_{\omega=2\pi k / n} ] ( using the matlab command ) to arrive an approximation like ] . in the following we discuss on the error caused by the above method in a simple case .according to ( [ dft3 ] ) we have =x[n]+\sum^\infty_{r=1 } x[n+rn].\ ] ] the second term in the righthand side of ( [ dft4 ] ) indicates the error caused by the proposed method , i.e. , if =0 ] and the inversion is errorless . assuming that \} ] ( assuming sufficiently large values for ) is obtained as the following : = \sum^\infty_{r=1 } x[n+rn]\approx \sum^\infty_{r=1 } a a^{n+rn},\ ] ] which leads to : \approx\frac{aa^{n+n}}{1-a^n } , \quad n=0,\ldots , n-1.\ ] ] ( note that according to ( [ dft4 ] ) the error in , e.g. , ] . on the other hand , for sufficiently large values of approximation given in ( [ dft5 ] ) is valid which leads to the error calculated above for the sample of ] .moreover , it is observed that increasing the value of is highly effective for decreasing the error in ] is approximately proportional to , which means that duplication of decreases the error in ] ( i.e. , smaller values of ) .a similar discussion can be presented for rational with repeated poles , which is not discussed here . + * remark 3 . *note that a function in non - integer ( fractional ) powers of can not be obtained by taking the z - transform from any real - world discrete - time signal .for example , there is no ] ( however , there is a such that , e.g. 
, , where stands for the laplace transform ) .the reason is that is necessarily single - valued by definition which can not be equal to any multi - valued ( fractional - order ) function of .that is why the fractional calculus does not have a discrete - time dual .two numerical examples are presented in this section in support of the proposed methods . in each case, the numerical inversion is performed directly by using ( [ int_four ] ) as well as the proposed algorithms , and then the results are compared . in all of the following simulations ,the integrals in ( [ int_z ] ) and ( [ int_four ] ) are evaluated numerically using the matlab r2009a command .+ * example 1 . *consider whose roc is equal to . the inverse z - transform of this function can be obtained through the laurent series expansion as the following which concludes that =0,\quad x[1]=x[2]=1 , \quad x[3]=\frac{1}{3}\ ] ] =0,\quad x[5]=\frac{1}{30 } , \ldots\ ] ] and =0 ] depends on the especial shape of the contour [ fig_ex13](a)-(c ) show the ] .however , comparing fig .[ fig_ex13 ] with fig .[ fig_ex11 ] concludes that the most accurate result corresponds to .+ * example 2 . *consider the z - transform given in the following expression which can further be written as as it is observed , according to the complexity of the resulted expression it is really difficult to extract the complete inverse z - transform from it ( however , it can be easily verified that =x[1]=e ] ) .[ fig_ex21](a)-(c ) show the inverse z - transform of when the first method , second method , and eq .( [ int_four ] ) is applied , respectively . figs .[ fig_ex22](a)-(c ) show the absolute error caused by each of these methods . figs .[ fig_ex23](a)-(c ) show the $ ] obtained through ( [ int_z ] ) when is considered as a circle with radius , respectively .this figure clearly shows that the most accurate result corresponds to .
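A minimal sketch of both procedures on simple test transforms may be useful here. The transforms 1/(1 - 0.5/z) and 1/(1 - 0.9/z), with exact inverses 0.5^n and 0.9^n, are my own choices for illustration — they are not the functions of examples 1 and 2 — and the sampling radii, the number of points and the truncation length are likewise illustrative.

```python
import numpy as np

# --- linear-system method: X(z_i) ~ sum_{n<N} x[n] * z_i**(-n) --------------
# test transform X(z) = 1/(1 - 0.5/z); its inverse 0.5**n decays fast, so
# truncating the series at N = 32 terms introduces a negligible error
X1 = lambda z: 1.0 / (1.0 - 0.5 / z)
N = 32
n = np.arange(N)
rng = np.random.default_rng(0)
M = 4 * N                                           # more equations than unknowns
z = rng.uniform(1.0, 1.25, M) * np.exp(2j * np.pi * rng.uniform(size=M))
A = z[:, None] ** (-n[None, :])                     # M x N coefficient matrix
x_ls, *_ = np.linalg.lstsq(A, X1(z), rcond=None)
print("linear-system method, max |x_hat - 0.5**n| =",
      np.max(np.abs(x_ls.real - 0.5 ** n)))

# --- DFT method: sample X on the unit circle and apply the inverse FFT ------
# a slower pole (0.9) is used so that the aliasing term, of order 0.9**Nfft,
# is visible and shrinks as the number of samples grows
X2 = lambda z: 1.0 / (1.0 - 0.9 / z)
for Nfft in (64, 256, 1024):
    xk = X2(np.exp(2j * np.pi * np.arange(Nfft) / Nfft))
    x_hat = np.fft.ifft(xk).real
    print(f"DFT method, Nfft = {Nfft:4d}: max error over x[0..31] =",
          f"{np.max(np.abs(x_hat[:N] - 0.9 ** n)):.1e}")
```

Keeping the sample points on or just outside the unit circle, as recommended above, keeps the least-squares system well conditioned.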
|
in some problems, complicated functions of the z-transform variable appear which either cannot be inverted analytically or whose inversion requires quite tedious calculations. in such cases numerical methods should be used to find the inverse z-transform. the aim of this paper is to propose two simple and effective methods for this purpose. the only restriction on the signal (whose z-transform is given) is that it must be absolutely summable (of course, this limitation can be removed by a suitable scaling). the first proposed method is based on solving a linear system of algebraic equations, obtained after truncating the signal whose z-transform is known, and the second one is based on the discrete fourier transform (dft). numerical examples are also presented to confirm the efficiency of the proposed methods. functions in non-integer powers of the transform variable are also briefly discussed, and it is shown that such functions cannot be obtained by taking the z-transform of any discrete-time signal. numerical inverse z-transform, discrete fourier transform, irrational, fractional, non-integer power of
|
we assume that an observed trajectory , , is an element of the ensemble of trajectories or an element of function space in the construction of statistical mathematical model .if is assumed to be continuous , then this space can be considered a set , which are continuous functions in . in other words , where is a realization of some random process with known characteristics , is a reversible conversion in . is called a model of observed data .process is basic in model for . for discrete observation ( time series ) , and in assumption about automodeling , for the highly oscillating trajectory , basic process with unlimited variation is selected . in particular , , where is a fractional brownian motion ( fbm ) , which was first introduced by b. mandelbrot in and is defined as a gaussian random process with zero mean and covariance function : the -dimensional density distribution of fractional brownian motion looks as follows : parameter is called the hurst exponent of fbm , and the transformation consists of actions that transform fbm realization of the observed trajectory . using of fractional brownian motion as a basic process in the model ( [ eq:1 ] ) is justified by non - markovian . the first motivation for studies of this process and its applications are considered in .the results of studies of the properties of fractional brownian motion and its application in models of natural and economic processes are covered in .let s note the reviews . research statistics of fractional brownian motion are quoted below .let s choose the model of fractional brownian motion for observed time series : and transformation is defined .let s calculate hurst exponent of observed time series as of the basic process .note that this value depends on transformation .the criteria for adequacy of the representation ( [ eq:2 ] ) are shown in . from empirical considerationsfollows that the model ( [ eq:2 ] ) is suitable for describing the random time data and apriority is nt satisfactory for the approximation of deterministic chaotic sequences . as a rule, the deterministic and stochastic components can be present in observed data . in present work , a new method of estimation parameters and justified , and the quality of the forecast is investigated for the observed realization of fractional brownian motion . for the real time series , a model is proposed , which uses fractional brownian motion as a basic process .the criteria of adequacy of this model are developed and short - term forecasting is constructed .let s consider the increments , which form the gaussian stationary sequence with zero mean and the correlation matrix , and elements of the matrix that look as follows : in particular , the coefficient of correlation between neighbor increments is the limit theorems for sequence were first proved by peltier : for statistics with probability 1 , . from the last equation , consistency estimates of parameters and follow : let s propose new estimation method of fractional brownian motion , by observed data , two unknown parameters , .let s assume : where matrix is defined by ( [ eq:3 ] ) , is a vector of increments .* statement * statistic is a consistent estimator of the parameter . * proof * is the canonical gaussian vector with the following characteristics : then , , therefore and consequently the statistic and here statistic is an unbiased estimate of the parameter .the dispersion of estimate is calculated using the formula of integration by parts ( ) . 
from ( [ eq:4 ] ) and ( [ eq:6 ] ), it follows that and consistency of estimates means that , where is a hurst exponent of observed fractional brownian motion . the implementation of the corresponding algorithm is to choose such a value of argument in , where .the efficiency of the algorithm is confirmed by numerical experiment .the statistical values are shown in table [ table:1 ] . where is a generated vector of increments fbm with hurst exponent , is the normalized correlation matrix , corresponding to the index fbm with hurst exponent . for every ,values are calculated with the selection of parameter with step .generation is performed with the following parameters : .efficiency of evaluation method [ cols="^,^,^,^,^,^,^,^,^,^ " , ] [ table:6 ] table [ table:6 ] confirms the satisfactory quality of forecast .the proposed model of real time series with fractional brownian motion as a basic process is effective , if the increments of the observed data have the property of stationarity .considered examples of physical and financial nature allow an approximation by a persistent process , which is confirmed by checking the adequacy of model .constructed short - term forecast is satisfactory .bezborodov v. , mishura y. , luca di persio .option pricing with fractional stochastic volatility and discontinuous payoff function of polynomial growt .arxiv : 16.07.07392[math.pr ] ( submitted on 25 jul 2016 ) breton j. c. , nourdin i. error bounds on the non - normal approximation of hermite power variations of fractional brownian motion // electronic communications in probability , 2008 . p. 482493 .coeurjolly j .- f . simulation and identification of the fractional brownian motion : a bibliographical and comparative study / coeurjolly j .- f .// journal of statistical software . v. 5 . p. 152 .coeurjolly j .- f . estimating the parameters of a fractional brownian motion by discrete variations of its sample paths / j .- f . coeurjolly //statistical inference for stochastic processes , 2001 . p. 199227 .kubilius k. , mishura y. , ralchenko k. , seleznjev o. consistency of the drift parameter estimator for the discretized fractional ornstein - uhlenbeck process with hurst index .j. stat . 9p. 17991825 .mandelbrot b. b. une classe de processus stochastiques homothetiques a soi : application a la loi climatologique de h. e. hurst // comptes rendus de lacademie des sciences .1965 . v. 240 .p. 32743277 .nourdin i. , rveillac a. asymptotic behavior of weighted quadratic variations of fractional brownian motion : the critical case // the annals of probability . v. 37 , issue 6 , p. 22002230 .nourdin i. central and non - central limit theorems for weighted power variations of fractional brownian motion / i. nourdin , d. nualart , c. tudor . ann .inst h. poincar probab statist . p. 10551079 .nualart d. , saussereau b. malliavin calculus for stochastic differential equations driven by a fractional brownian motion . stochastic processes and their applications ( 2009 ) v. 119 , issue 2 , p. 391409 .
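As a complement to the experiments reported above, the following end-to-end sketch simulates the increments from their correlation matrix and recovers the Hurst exponent by a grid search with step 0.01. The exact quadratic-form statistic of this paper is not reproduced; the sketch maximizes the exact Gaussian log-likelihood instead, which is one reasonable stand-in, and the series length and parameter values are illustrative.

```python
import numpy as np

def fgn_corr(n, H):
    """Correlation matrix of n unit-step increments of fBm with Hurst index H:
    gamma(k) = 0.5 * (|k+1|**(2H) - 2*|k|**(2H) + |k-1|**(2H))."""
    k = np.abs(np.subtract.outer(np.arange(n), np.arange(n))).astype(float)
    return 0.5 * ((k + 1) ** (2 * H) - 2 * k ** (2 * H) + np.abs(k - 1) ** (2 * H))

def simulate_fgn(n, H, sigma, rng):
    """Generate increments by multiplying white noise with a Cholesky factor."""
    return sigma * (np.linalg.cholesky(fgn_corr(n, H)) @ rng.standard_normal(n))

def gauss_loglik(x, H):
    """Gaussian log-likelihood of the increments with the variance profiled out."""
    R = fgn_corr(len(x), H)
    _, logdet = np.linalg.slogdet(R)
    q = x @ np.linalg.solve(R, x)
    return -0.5 * (logdet + len(x) * np.log(q / len(x)))

rng = np.random.default_rng(4)
n, H_true, sigma = 500, 0.7, 1.0
x = simulate_fgn(n, H_true, sigma, rng)

grid = np.arange(0.05, 1.0, 0.01)
H_hat = grid[np.argmax([gauss_loglik(x, H) for H in grid])]
print("true H =", H_true, "  estimated H =", round(float(H_hat), 2))
```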
|
we investigated the quality of forecasting of fractional brownian motion, and a new method for estimating the hurst exponent is validated. a stochastic model of the time series in the form of a converted fractional brownian motion is proposed. a method for checking the adequacy of the proposed model is developed and short-term forecasting for the time series data is constructed. the research results are implemented in software tools for the analysis and modeling of time series. *keywords:* stochastic model, optimal forecast, fractional brownian motion.
|
a dominant mode of temperature variability over the last sixty thousand years is connected with dansgaard oeschger events ( dos ) .these are fast warming episodes ( in the north atlantic region in a few decades ) , followed by a gradual cooling that lasts from hundreds to thousands of years , often with a final jump back to stadial condition .the spectral power of their time series shows a peak at approximately 1,500 years and integer multiples of this value , suggesting the presence of such a periodicity in the glacial climate .the relation between dos and large changes in the atlantic meridional overturning circulation is generally considered well established , despite the limitations of the paleoclimatic records .different low dimensional models have been proposed to explain these rapid climate fluctuations , linking dos to ocean and ice sheet dynamics .no clear evidence in favour of one of them has emerged from observational data . here , we use high resolution ice core isotope data to investigate the statistical properties of the climate fluctuations in the period before the onset of the abrupt change .we analyse isotope data from the ngrip ice core .the data spans the time interval from 59,420 to 14,777 years before 2,000 a.d .( years b2k ) with an average resolution of 2.7 years ( more than 80% of the data has a resolution better than 3.5 years ) . can be interpreted as a proxy for atmospheric temperature over greenland .we use the dating for the onset of the interstadials given in .the dataset then spans the events numbered 2 to 16 . in the first part of the paper, we discuss the bimodality of the time series , already demonstrated in other works .in particular , we establish that bimodality is a robust feature of the climate system that produced it , and not an artifact of the projection of complex dynamics onto a scalar quantity .this is done using a phase embedding technique well known in the non linear dynamics community , but not often used in climate sciences . in the second part of the paper , we show that the statistical properties of the paleoclimatic time series can be used to distinguish between the various models that have been proposed to explain dos .we suggest that considering the properties of an ensemble of events may uncover signals otherwise hidden , and we show that this seems to be the case for the dos in the time series considered here . within the limitations of the available data and of the techniques used , we find that the statistics are most compatible with a system that switches between two different climate equilibrium states in response to a changing external forcing , or with stochastic resonance . in both cases ,the external forcing controls the distance of the system from a bifurcation point .it must be clarified here that we use `` external forcing '' in a purely mathematical sense : the presence of an external forcing means that the climate system can be described by a non autonomous system of equations , i.e. the evolution of the system explicitely depends on time , as a forcing term is present ( see e.g. * ? ? ?no assumption on the nature of this forcing is made .other hypotheses ( noise - induced transitions and autonomous oscillations , suggested by * ? ? ?* ; * ? ? ?* ; * ? ? 
?* ) are less compatible with the data .the bimodality hypothesis for this climate proxy has been tested in previous works , but these make the strong assumption that the bimodality of the recorded time series can be directly ascribed to the climate system that produced it .this is not necessarily the case if the evolution of a system with many degrees of freedom , as the real climate system , is projected on a scalar record . exploiting takens embedding theorem for reconstructing phase space ( the space that includes all the states of a system ) ,we assess bimodality without making any assumption on the underlying model beforehand .this approach has the advantage of avoiding artifacts that may arise from the projection of a higher dimensional system onto a scalar time series .the technique that we apply is outlined in , and has some points of contact with the singular spectrum analysis described e.g. in .first , the irregular time series is linearly interpolated to a regular time step , 4 years in the results shown here .data after the last do is discarded for producing the phase space reconstruction .after detrending , an optimal time lag has to be empirically chosen , guided by the first minimum of the average mutual information function ( 20 years in our case ) . with copies of the regular timeseries lagged by multiples of the optimal time lag as coordinates , a phase space reconstruction is obtained .the dimension of the embedding space is , in general , greater than or equal to the one of the physical system which produced the signal , and the dynamics of the system are thus completely unfolded .the number of dimensions is chosen as the minimum one that brings the fraction of empirical global false neighbours to the background value . in our reconstructions ,four dimensions are used .we define as global false neighbours in dimension those couples of points whose distance increases more than 15 times , or more than two times the standard deviation of the data , when the embedding dimension is increased by one .the scientific tools for python ( scipy ) implementation of the kd tree algorithm is used for computing distances in the _ n_dimensional space .the 4dimensional phase space reconstruction is presented in figure [ fig : bimodality ] , performed separately for the data before and after year 22,000 b2k .the results are converted to a polar coordinate system and the anomaly from the average is considered , following .subsequently , the pdfs for the coordinates distributions are computed through a gaussian kernel estimation ( blue line in fig . [ fig : bimodality ] ) .the error of the pdfs is defined as the standard deviation of an ensemble of 50,000 synthetic datasets ( shaded region in the figure ) , obtained by bootstrapping ( see e.g. * ? ? ?the theoretical distributions ( black lines ) refer to a multidimensional normal distribution ( the simplest null hypothesis , following ) .the isotope data possess a distinct trend that marks the drift of the glacial climate towards the last glacial maximum ( approximately 22,000 years ago ) , see figure [ fig : timeseries ] ( top ) .also , associated with the drift is a gradual increase in variance that makes up an early warning signal ( ews ) for the abrupt change towards the last glacial termination ( approximately 18,000 years ago , * ? ? ?* ; * ? ? 
?it turns out that the trend enhances the bimodality of the data but detrending is required to unequivocally attribute this bimodality to the dos , as the trend is connected with a longer time scale process . in the four left panels of fig .[ fig : bimodality ] ( between the beginning of the time series , year 59,000b2k , and year 22,000 b2k ) , the normal distribution clearly lies outside the error bars for all four dimensions , and has to be rejected .the bimodality of the data is especially evident from the pdfs of the angular coordinates .the radial distribution is well below the normal one only at intermediate distances from the baricenter ( ) , while it seems to be systematically above the normal distribution close to zero and in the tail , but the error bars include the normal distribution here .these features are very robust , and they can be unambiguously detected also in phase space reconstructions using different interpolation steps , time lags and number of dimensions . in the four right panels ( after year 22,000 b2k until the end of the time series , year 15,000b2k ) ,the deviations from normality are much smaller , and become indistinguishable from a normal distribution when the errors are taken into account .the only statistically non insignificant deviations from normality after year 22,000 b2k are seen in panel h , which represents the longitude coordinate of the polar system . however , no bimodality is observed , but only a non spherically symmetrical distribution of the data . on this basis, we can confirm the claim of ; around year 22,000 b2k the climate undergoes a transition from a bimodal to a unimodal behaviour .the climate system remains unimodal until the end of the time series ( 15,000 b2k ) ; the behaviour after this time can not be assessed using this time series .the cause of the transition at year 22,000 b2k may be a shift of the bifurcation diagram of the system towards a more stable regime ( see fig .[ fig : sketches]a ) , either externally forced or due to internal dynamics , but may as well be caused by a reduction of the strength of stochastic perturbations .in several recent works , an abrupt transition is treated as the jump between different steady states in a system that can be decomposed in a low dimensional non linear component and a small additive white noise component .the jump is considered to be due to the approach to a bifurcation point , usually a fold bifurcation ( see e.g. * ? ? ? * for a discussion of fold bifurcations ) , in which a steady state loses its stability in response to a change in the boundary conditions . if a system perturbed by a small amount of additive white noise approaches a fold bifurcation point , an increase in the variance ( ) is observed , as well as in its lag1 autocorrelation .in addition , detrended fluctuation analysis ( dfa ) can be used to detect the approach to a bifurcation point .this tool , equivalent to others that measure the decay rate of fluctuations in a nonlinear system , has the clear advantage over autocorrelation and variance of inherently distinguishing between the actual fluctuations of the system and trends that may be part of the signal .this follows from the fact that the detrending is part of the dfa procedure itself , and different orders of detrending can be tested , provided that a sufficient amount of data is available to sustain the higher order fit . 
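The three indicators just introduced can be computed with a few lines of numpy; the DFA implementation below uses linear detrending within boxes and a least-squares fit of log F(s) against log s. This is a generic sketch — box sizes, series lengths and the white-noise / random-walk test signals are illustrative — and not the exact settings used later for the ice-core record.

```python
import numpy as np

def lag1_autocorr(x):
    """Lag-1 autocorrelation of a (detrended) window."""
    x = x - x.mean()
    return np.dot(x[:-1], x[1:]) / np.dot(x, x)

def dfa_exponent(x, scales):
    """Detrended fluctuation analysis with linear detrending in each box;
    returns the slope of log F(s) versus log s."""
    y = np.cumsum(x - np.mean(x))                  # integrated profile
    F = []
    for s in scales:
        f2 = []
        for i in range(len(y) // s):
            seg = y[i * s:(i + 1) * s]
            t = np.arange(s)
            coef = np.polyfit(t, seg, 1)           # local linear trend
            f2.append(np.mean((seg - np.polyval(coef, t)) ** 2))
        F.append(np.sqrt(np.mean(f2)))
    slope, _ = np.polyfit(np.log(scales), np.log(F), 1)
    return slope

# sanity check: white noise should give alpha ~ 0.5, a random walk alpha ~ 1.5
rng = np.random.default_rng(5)
wn = rng.standard_normal(2000)
rw = np.cumsum(wn)
scales = np.array([4, 8, 16, 32, 64, 128])
for name, x in (("white noise", wn), ("random walk", rw)):
    print(f"{name:11s}  var = {np.var(x):8.2f}  rho1 = {lag1_autocorr(x):5.2f}"
          f"  DFA alpha = {dfa_exponent(x, scales):4.2f}")
```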
approaching a bifurcation point , the lag1 autocorrelation coefficient ( ) and the dfa exponent ( ) both tend to increase , ideally towards and respectively .this reflects the fact that the fluctuations in the system are increasingly correlated as the bifurcation point is approached : they become more similar to a brownian motion as the restoring force of the system approaches zero .it must be reminded that all the results obtained using these techniques are based on strong assumptions on the nature of the noise .the noise in the signal is generally assumed to be of additive nature , to be colorless and to be recorded in the time series through the same processes that give rise to the slower dynamics .a clear separation of time scales between the noise and the low dimensional dynamics is also assumed .these assumptions may not be satisfied by the climate system , and this important caveat has to be considered when evaluating the results .our aim is to analyse the statistical properties of the shorter time scale fluctuations in the ice core record that occur before the onset of dos .the quality of the paleoclimatic time series and the intrinsic limits of the ews technique are a serious constraint to the conclusions that can be drawn .still , the analysis identifies ewss before the onset of dos .this suggests that the onset of dos is connected with the approach to a point of reduced stability in the system .this finding is compatible only with some of the models proposed as an explanation of dos .such a selection is possible since the different prototype models have different , peculiar , `` noise signatures '' before an abrupt transition takes place .it must be stressed that the problem is an inverse one ; multiple mathematical models qualitatively consistent with the `` noise signatures '' can be developed , and only physical mechanisms can guide the final choice .we will focus on the three characteristics of the data mentioned above : the correlation coefficient of the time series at lag1 , the variance , and the dfa exponent .they are plotted , together with paleoclimatic time series used , in fig .[ fig : timeseries ] .a similar type of analysis has been used before in the search of ewss for abrupt climate transitions ( see e.g. * ? ? ?* ; * ? ? ?* ; * ? ? ?* ; * ? ? ?we use here ewss to put observational constraints to the various prototype models for dos , considering only the properties of the ensemble of events .in contrast to earlier studies we consider an ews not as a precursor of a single event , but analyse instead the average characteristics of the ews found in the ensemble of dos . it must be stressed that did consider the ensemble of events , but did not compute any average quantity .given the low to signal to noise ratio , computing the mean of the ensemble and an error estimate of it may uncover signals otherwise too weak to be detected .this approach is more robust than the one which considers each realisation ( each do ) separately , since ewss have a predictable behaviour only when computed as an average over an ensemble of events , as demonstrated by .we argue that only considering the mean of the events we can hope to draw general conclusions on the nature of these climate signals .the reason why we can discriminate between various prototype models is that , in the case of a system that crosses , or comes close to a bifurcation point , , and all increase before abrupt change occurs , while for other prototype models this is not the case . 
for each prototype model that we considered , the relevant equation , or set of equations ,is integrated with an euler maruyama method .the associated time series is then analysed with respect to lag1 autocorrelation , variance , and the dfa exponent ( details of the analyses are given in section [ sec : ewsmethods ] ) .ewss are visible if the dos are caused by a slowly changing external forcing , in which a critical parameter , like northward freshwater transport , varies over a large enough range for the meridional overturning circulation of the ocean to cross the two bifurcation points that mark the regime of multiple equilibria .this idea , in the context of paleoclimate , dates back to who suggested it for explaining heinrich events , and is often invoked more in general as a paradigm for understanding the meridional overturning circulation stability ( see e.g. * ? ? ?* ) . in physical terms, this model implies that a changing external forcing is causing a previously stable state to disappear . in figure[ fig : sketches]a this situation is sketched . as a prototype for a system crossing a bifurcation point , the same equation used in used : where is the state variable ( time dependent ) , is its time derivative , is the only system parameter and represents a white noise ( normal distribution ) , with standard deviation given by . in this case, the evolution of the system can be thought of as the motion of a sphere on a potential surface that has either one or two stable equilibrium states , divided by a potential ridge .this potential surface can change in time , due to slow changes in the parameter . in this case, will be time dependent , and will represent a forcing external to the dynamics of the system , represented only by the scalar function .when crosses one of the two bifurcation points ( ) , the system changes from having two possible equilibrium states to having only one . at these points ,one of the two states loses its stability and the system can jump to the second equilibrium ( figure [ fig : doublewell ] , shown with and going from to during the integration ) .if is instead constant in time ( and the system is thus an autonomous one ) , abrupt transitions can still occur if more than one equilibrium state is available , and noise is sufficiently strong to trigger the transition .an example of this case , in the simple case of a fixed double well with constant white noise , is shown in fig .[ fig : noise ] ( in this case , and ) . in this model ,the system is normally in the cold stadial state but can jump due to the noise in the forcing , say atmospheric weather , to a warmer , interstadial state . in this caseno trend is visible in the statistical properties of the time series , thus no ews can be detected .the two cases described are those considered by , to which we refer for a more detailed discussion . a slightly different case , again based on the model of eq .[ eq : doublewell ] , is that discussed by .they suggest that dos may be connected to stochastic resonance , where the parameter in eq .[ eq : doublewell ] becomes a periodic function of time .in this case , transitions from one state to the other take place preferentially when is closer to the bifurcation point , but are still forced by the noise term rather than by actually reaching its bifurcation point . in fig .[ fig : stocres ] this case is explored , using with and , the noise level is . in this case ,ewss are clearly present before some of the events ( e.g. 
the second and the fourth ) , while for others no clear ews is detected .. ] ewss are present in some cases , while absent in others , since transitions take place preferably when is closer to the bifuration point , but not only in this phase .transitions can still take place purely due to the presence of noise , and in those cases no clear ews will be detected .if the average from a sufficiently large ensemble of events is considered ewss can be detected unambiguously .a very similar situation will be found in the paleoclimatic data , and the presence of ewss will be shown there in the ensemble average .we will show that other models proposed in the literature for explaining dos do not posses any ews .no ews can be detected if dos are due to an unforced ( ocean ) mode of oscillation since , for an autonomous oscillation , there are in general no changes in the stability properties of the system while it evolves in time ( see e.g. * ? ? ?dos have been linked to oscillations close to a `` homoclinic orbit '' by .a homoclinic orbit is a periodic solution to a system of ordinary differential equations that has infinite period , but in the presence of a small amount of noise it features finite return times . a sketch of this prototype model is shown in figure [ fig : sketches]b . to describe this model the set of ordinary differential equations given in minimal mathematical model has been suggested before in the context of atmosphere regime behaviour , but it is mathematically equivalent to the mechanism for dos suggested in . to investigate ewss for a periodic cycle close to a homoclinic orbit , the following system of ordinary differential equations is integrated ( see figure [ fig : charneydevore ] ) : the system has six dimensions ( ) with parameters as in , chosen as they produce the abrupt , quasi periodic transitions observed in fig . [ fig : charneydevore ] , with , , , , , , , , , , , , .white noise is added to each component , the noise has standard deviation of .this model is chosen as it provides a simple description of the results of , having a very similar bifurcation diagram ( the fundamental element is the presence of a fold hopf bifurcation ) .as the time series is strongly non stationary , linear dfa is not a good measure of the fluctuation decay in this case , and higher order detrending is needed ( orders higher than 2 give results consistent with the case of quadratic detrending ) . after integrating these equations , again no precursor for abrupt changescan be detected ( fig .[ fig : charneydevore ] ) .it must be stressed that , in order to correctly compute the statistical properties of the time series , it is of fundamental importance that the data are detrended ( linear or higher order detrending ) , to ensure that the findings can be ascribed to random fluctuations and not to the non stationarity of the data . 
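A minimal Euler-Maruyama sketch of the two noisy double-well scenarios discussed above, assuming the standard normal form dx = (x - x^3 + mu(t)) dt + sigma dW, for which the folds sit at mu = +/- 2/(3*sqrt(3)) ~ 0.385; the ramp rate, forcing amplitude, period and noise levels are illustrative choices, not the values used for the figures.

```python
import numpy as np

def euler_maruyama(mu_of_t, sigma, T=1000.0, dt=0.01, x0=-1.0, seed=0):
    """Integrate dx = (x - x**3 + mu(t)) dt + sigma dW with Euler-Maruyama."""
    rng = np.random.default_rng(seed)
    nsteps = int(T / dt)
    t = np.arange(nsteps) * dt
    dW = np.sqrt(dt) * rng.standard_normal(nsteps)
    x = np.empty(nsteps)
    x[0] = x0
    for i in range(1, nsteps):
        xp = x[i - 1]
        x[i] = xp + (xp - xp ** 3 + mu_of_t(t[i - 1])) * dt + sigma * dW[i]
    return t, x

# scenario 1: the forcing is ramped through the fold, so the initial state
# disappears and the jump is forced by the changing parameter
t1, x1 = euler_maruyama(lambda t: -1.0 + 2.0 * t / 1000.0, sigma=0.1)
print("ramped forcing: first time with x > 0 at t ~", round(t1[np.argmax(x1 > 0)], 1))

# scenario 2: stochastic resonance - a weak periodic forcing that never reaches
# the fold, with jumps triggered by the noise preferentially near its maxima
t2, x2 = euler_maruyama(lambda t: 0.25 * np.sin(2 * np.pi * t / 250.0), sigma=0.25)
print("periodic forcing: number of sign changes =",
      int(np.sum(np.diff(np.sign(x2)) != 0)))
```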
in particular, in the case of the dfa exponent computation for the latter prototype model , the strong non stationarity of the data requires a quadratic detrending .the results from other models that have been suggested as a prototype for oscillations close to a homoclinic orbit consistently show no ews before abrupt transitions ( not shown ) .we now have seen that ewss are characteristic of abrupt changes connected with the approach to a bifurcation point .the bifurcation point is either crossed while a parameter of the system is changed ( sec .[ sec : bifurcation ] ) or is only approached , without actually crossing it , as in the case of stochasitc resonance ( sec . [sec : stochasticres ] ) .if the ice core record also features ewss , the other prototype models are falsified on these bases .the ice core data are shown in figure [ fig : timeseries ] .the ewss of the time series are computed in a sliding window 250 years wide , and the results are plotted at the right end of the sliding window to avoid contamination from points after the do onset .this window size corresponds on average to 100 data points for the variance computation . for and ,the time series is first linearly interpolated to a step of 4 years and approximately 60 points are used in the window .the time step is chosen in order to avoid overfitting the data even in the parts of the time series with lower resolution .the window width choice is a compromise between the need to minimise noise ( larger window ) and to maximise the number of independent points for the computation of trends ( smaller window ) .the data is linearly detrended in the window before the computation , to compensate for potential nonstationarities in the time series . in the computation of the dfa exponent , 10 box lengths were used , ranging from 16 to 52 years .the number of time lengths is limited for computational reasons .different time steps for the interpolation and boxes lengths were tested .the results show small sensitivity to these choices , as long as the time step for the interpolation is kept below approximately 10 years .if one considers the successive dos one by one , it is clear that for some of the dos a marked increase is found ( e.g. the one at approximately 45,000 years b2k ) , implying an ews , but for other dos no ewss can be detected ( e.g. the last one ) .this has to be expected in general from a stochastic system for which only the average behaviour is predictable , and may also be the fingerprint of stochastic resonance , as discussed in sec .[ sec : stochasticres ] . as discussed by ,analysing the properties of an ensemble of events , instead of a single realisation , is a more robust approach and better justified on theoretical grounds .we thus use ewss to characterise the properties of the `` mean do '' instead of trying to predict the onset of the following transition . for this reason , we consider the whole ensemble of dos , instead of each single do . with this aim, the time series is cut into slices that end 100 years after the transition onset and start either 100 years after the previous do onset or , if the time span between two dos is sufficiently long , 2,900 years before the following do onset ( figure [ fig : doensemble]a ) .the onset of the transitions then are translated to year -100 . 
in this way ,an ensemble of dos is defined , being composed of slices of the original time series of variable length , synchronised at the onset of each do .if the quantities used for ews detection are then averaged for the whole ensemble , a moderate but significant increase is observed in all three fluctuation properties , starting approximately from year -1,800 until year -250 ( figure [ fig : doensemble]b d ) .the standard deviation of the ensemble is large , in particular at times far from the do onset because the ensemble has fewer members there ( at times smaller than -1000 years ) , but the trend can not be discarded as a random fluctuation for all three cases . to test this , a linear least square fit of the data in the interval -1,800 to -250 is performed , only using independent points from the ensemble ( thus at a distance of 250 years from each other , given the window width of 250 years ) , and obtaining an error interval from a bootstrapped ensemble of 50,000 members .the results of this fitting procedure are reported in table [ tab : linfit ] . in all three casesthe linear trends are well above their standard deviation , providing a strong indication of the robustness of the signal . in order to check the robustness of the findings on the order of detrending for dfa, the computation has been repeated with a quadratic detrending , actually giving a slightly stronger trend ( see table [ tab : linfit ] ) .these results are consistent with a scenario that connects do onset with either the crossing or the approach of a bifurcation point . in other words , these findings are consistent with either a model where the system is forced to shift from a steady state to a different one , or with the stochastic resonance mechanism , where transitions take place preferentially closer to the bifurcation point , even if this one is not actually reached , and transitions are due to the noise . the fact that and do not reach their theoretical values for a bifurcation point , respectively and , can easily be explained : the noise in the system triggers the transition before the actual bifurcation point is reached .this clearly must be the case for stochastic resonance , but is relevant also if the bifurcation point is crossed . also , the three quantities , and decrease again in the last few hundred years before the do onset .this may be a bias in the relatively small ensemble considered here , but a similar decrease has been shown by for various bifurcations in fast slow systems . for several idealised systems containing different forms of bifurcation points ( included the fold bifurcation considered here ) , a decrease in variance in the immediate vicinity of the bifurcation point is found , while the well known increase is observed when the bifurcation point is farther away in the parameter space . 
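returning for a moment to the fitting procedure summarised in table [tab:linfit], a minimal sketch of such an ensemble trend estimate is given below. the array layout (one row per do event, with the indicator already computed in the sliding windows and aligned so that the onset sits at year -100), the event-wise resampling with replacement and the function name are assumptions of this example rather than a description of the original analysis code.

```python
import numpy as np

def trend_with_bootstrap(time, indicator_by_event, t_min=-1800.0, t_max=-250.0,
                         spacing=250.0, n_boot=50000, seed=1):
    """Linear trend of the ensemble-mean indicator before the DO onset.

    `indicator_by_event` is a 2-d array (event, time) holding one early-warning
    indicator (variance, lag-1 autocorrelation or DFA exponent) for every event;
    NaNs mark times where a given slice has no data.  Only points `spacing`
    years apart are used, so that successive values are effectively independent
    given the 250-year window width.
    """
    time = np.asarray(time, float)
    data = np.asarray(indicator_by_event, float)
    # indices of (approximately) independent points inside the fitting interval
    targets = np.arange(t_min, t_max + 1e-9, spacing)
    idx = np.array([np.argmin(np.abs(time - t)) for t in targets])

    def fit(sample):
        mean = np.nanmean(sample[:, idx], axis=0)   # ensemble mean of the indicator
        return np.polyfit(time[idx], mean, 1)[0]    # slope only

    slope = fit(data)
    rng = np.random.default_rng(seed)
    boot = np.empty(n_boot)
    for b in range(n_boot):
        # resample the events with replacement and refit
        boot[b] = fit(data[rng.integers(0, len(data), len(data))])
    return slope, boot.std()
```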
to confirm that the decrease observed in the data is consistent with the one discussed by , the time scale of the decrease should be linked to the distance of the bifurcation parameter from the bifurcation point. unfortunately, this is not possible, as we have no information on the parameter range that may be spanned by the real system. the variance in particular starts to decrease quite far from the do onset (approximately 700 years). this may indicate a weakness of the results (even if, as discussed above, the increase is still statistically significant), but it has been shown that variance may not be the most robust indicator of an ews, and autocorrelation may be given more emphasis (see, e.g., ). the observation of a decrease in variance just before the do onset, after a longer period of increase, is an important lesson for the practical application of these techniques in the detection of approaching bifurcation points. our findings are in contrast with the previous suggestions of , where all the events were considered together, without considering either the ensemble mean or the individual events. that approach may prevent one from uncovering a weak mean signal, as well as the clear ews visible before some of the events. we have seen in particular that in the case of stochastic resonance the presence of an ews is not guaranteed before each abrupt transition. furthermore, strong noise may hide the increase in the indicators for some of the events even if a bifurcation is actually crossed. we do not think that our different results can be ascribed to a different choice of the parameters of the analysis, as several different parameter sets have been used and the results are robust. still, an important caveat must be recalled, relevant for most of the works dealing with ewss. the signal detected as an ews is described by the simple models discussed here, but other models may give a similar signal as well; we only try to distinguish among the models that have been proposed to describe dos. furthermore, the effects of types of noise other than white and additive are not studied. this is the most common approach in the field (one of the few exceptions, to our knowledge, is ). a closer investigation of the effects of red and multiplicative noise is an interesting topic, but outside the scope of this paper. from a broader perspective, the motivation for classifying dos into three groups may seem unclear, but this is in fact a very important point. if dos are associated with the presence (or the crossing) of a bifurcation point, as our results suggest, this would mean that the climate can potentially show hysteresis behaviour, i.e. abrupt transitions in opposite directions take place under different forcing conditions, and abrupt transitions are thus to some extent irreversible, with obvious implications e.g.
for global warming .in this work , we performed two sets of analysis on a well - known paleoclimatic record , the isotope data from the ngrip ice core , believed to be representative of temperature over the north atlantic sector .we assessed bimodality of the system using a phase space embedding technique , that guarantees that the bimodality is not an artifact of projection of the complex dynamics of the climate system on a scalar time series .we confirm with this technique the claim of , that a switch from bimodal to unimodal behaviour is observed in the time series around year 22,000 before present .secondly , we analysed the statistical properties of the fluctuations in the paleoclimatic time series , before the onset of dos .in particular , we focused on the average properties of the events considered as an ensemble instead of each separately . despite the high level of noise in the data, ewss can be detected in the ensemble average , consistently with the hypothesis that dos take place preferentially close to a bifurcation point in the climate system .in particular , our findings seem to be particularly close to the stochastic resonance scenario proposed by .other prototype models that have been proposed are less consistent with the data , as their mechanisms do not involve any transition from which ewss can , at least in general , be expected . came to opposite conclusions , but they did not consider the average behaviour of the ensemble , while we think that this may be a step of fundamental importance .a disclaimer has to be made : our conclusions hold for the ensemble of the events , and are a probabilistic statement : we can only claim that a scenario that does not include an approach to , or the crossing of , a bifurcation point with ews is unlikely .the trends of table [ tab : linfit ] are between two and three standard deviations apart from zero .this means that the probability that the real trend in each indicator is zero is low but non zero , in the order of , assuming a normal distribution .given the rather complex techniques used , we can not rule out the possibility that the error estimates given in the table may be underestimated . apart from the general limitations of the techniques used , we also want to remind that we considered only models already discussed in the literature in the context of dos .other models , giving similar signals , may be developed , but given the inverse nature of the problem faced here , the models considered must be limited in number .a connection with the meridional overturning circulation instability remains in the domain of speculation , but is a plausible hypothesis considering the large evidence linking dos signals to rapid changes in the meridional overturning circulation .further investigation is needed to confirm this hypothesis and , more importantly , to address the fundamental question that remains open : if dos are due to an external forcing , what is the nature of this forcing ?furthermore , the relation between the variability recorded in the time series and the amoc variability is still uncertain , and the link between the two may be far from direct , involving e.g. atmospheric or sea ice variability . 
looking beyond the results of the particular time series used here, we suggest that ewss may provide a useful guide for discriminating between different models meant to describe a climate signal. when data are scarce, the analysis of the average properties of the fluctuations can give important hints on the nature of the signal which produced them.

the authors acknowledge peter ditlevsen (university of copenhagen) for discussion and for providing the data, anders svensson (university of copenhagen) for making the data available, henk dijkstra (university of utrecht) for suggesting the use of the charney–devore model as a prototype for homoclinic orbits and for pointing out some inconsistencies in an earlier version of the manuscript, and timothy lenton (university of exeter) for valuable comments. a.a.c. acknowledges the netherlands organization for scientific research for funding in the alw program. v.l. acknowledges nerc and the axa research fund for funding. the authors would also like to thank the reviewers for their comments and precious suggestions.

bathiany, s., claussen, m., and fraedrich, k.: detecting hotspots of atmosphere–vegetation interaction via slowing down; part 1: a stochastic approach, earth system dynamics discussions, 3, 643–682, 2012.

ghil, m., allen, m. r., dettinger, m. d., ide, k., kondrashov, d., mann, m. e., robertson, a. w., saunders, a., tian, y., varadi, f., and yiou, p.: advanced spectral methods for climatic time series, reviews of geophysics, 40, 1003, 2002.

table [tab:linfit] (*linear trends of ews*): results of the linear fit of the trends in the ensemble mean for autocorrelation, variance and dfa exponent in the time interval from -1800 to -250 years before the do onset. the last line refers to the result of the fit from the quadratic dfa (see text). the error values are computed from a bootstrapped ensemble of 50,000 members.

figure [fig:doublewell] (caption): top panel, autocorrelation (green circles) and variance (black circles); bottom panel, dfa exponent for the linear (red circles) and quadratic (green triangles) detrending cases. the grey vertical line marks the abrupt transition onset, the red line a critical parameter value.

figure [fig:noise] (caption): as [fig:doublewell], but for a noise-induced transition. the external forcing is kept constant, but the noise can still trigger abrupt transitions between the two available states. no trend in the computed statistical properties is detected, thus no ews is observed.

figure [fig:stocres] (caption): as [fig:doublewell], but for the case of stochastic resonance. the forcing oscillates, and transitions take place preferentially when the system is closer to the bifurcation point. the deterministic part of the forcing (see text) is shown as a grey line in the top panel.

figure [fig:charneydevore] (caption): as [fig:doublewell], for the charney and devore model. in this case the abrupt transitions are not connected with a changing external forcing, but are an autonomous mode of the equations. dimension number 5 of the 6-dimensional ode system is considered. it is evident that, for the dfa (bottom panel), a detrending of order higher than one is needed to remove the effect of non-stationarities in the time series. after proper detrending, no ewss are observed. red circles (green triangles) refer to the linear (quadratic) dfa.
|
dansgaard oeschger events are a prominent mode of variability in the records of the last glacial cycle . various prototype models have been proposed to explain these rapid climate fluctuations , and no agreement has emerged on which may be the more correct for describing the paleoclimatic signal . in this work , we assess the bimodality of the system reconstructing the topology of the multi dimensional attractor over which the climate system evolves . we use high resolution ice core isotope data to investigate the statistical properties of the climate fluctuations in the period before the onset of the abrupt change . we show that dansgaard oeschger events have weak early warning signals if the ensemble of events is considered . we find that the statistics are consistent with the switches between two different climate equilibrium states in response to a changing external forcing ( e.g. solar , ice sheets ) , either forcing directly the transition or pacing it through stochastic resonance . these findings are most consistent with a model that associates dansgaard oeschger with changing boundary conditions , and with the presence of a bifurcation point .
|
in the early 1950 s , brookhaven national laboratory began compiling and archiving nuclear reaction experimental data in the scisrs database . over the years, this project have grown and evolved into the exfor project .exfor is an international experimental nuclear data collection and dissemination project now led by the iaea nuclear data section , coordinating the experimental nuclear data compilation and archival work of several nations .the exfor nuclear experimental database provides the data which underpins nearly all evaluated neutron and charged particle evaluations in endf - formatted nuclear data libraries ( e.g. endf / b , jeff , jendl , ... ) .therefore , exfor is in many ways the `` mother library '' which leads to the nuclear data used in all applications in nuclear power , security , nuclear medicine , etc .the exfor database includes a complete compilation of experimental neutron - induced , a selected compilation of charged - particle - induced , a selected compilation of photon - induced , and assorted high - energy and heavy - ion reaction data .the exfor library is the most comprehensive collection of experimental nuclear data available .therefore , it is the best place to look for an overview of what the applied and basic experimental nuclear science communities feel are valuable experimental reactions and quantities. the basic unit of exfor is an entry .an entry corresponds to one experiment and contains the numerical data , the related bibliographic information and a brief description of the experimental method . what an exfor entry really represents is the results of work that was performed at a given laboratory at a given time .an entry does not necessarily correspond to one particular publication , but very often corresponds to several publications .the exfor compiler takes the essential information from all available sources , and if needed , contacts the author for more information .an entry is typically divided in several subents containing the data tables resulting from the experiment .each subent contains a reaction field which encodes what reaction was studied ( e.g. (,el ) ) and what quantity was measured ( e.g. 
cross - section or angular distribution ) .a subent may also contain a monitor field which encodes one or more well characterized reactions and quantities used to reduce or eliminate systematic experimental errors .thus , reaction monitors are an important part of experimental data reduction .often the measured data encoded in the reaction field is measured relative to the reaction / quantity encoded in the monitor field .there is usually a straightforward mapping between the reactions / quantities measured in exfor and the evaluated reactions / quantities stored in the endf libraries .several specific reaction / quantities are important enough , usually because of one or more specific applications , that the nuclear data community has elevated them to the level of an international reference standard .many experimenters use these reaction / quantities as monitors in their experiments .references provide details of well known neutron - induced , charged - particle and photonuclear standard reaction / quantities .we divided these references into two different classes .we define tier 1 observables as the product of sustained evaluation efforts , with periodic refinement .our tier 1 standards include the evaluations from the endf / b neutron standards project and the _ atlas of neutron resonances _ .our second tier encompasses standards that are of very high quality but are not performed as part of a sustained project .there may be follow ups or limited refinements .this second tier includes medical and dosimitry evaluations in ref . and the results of the irdff project .there is also a new tier 1 standards - level effort just beginning known as cielo pilot project .cielo promises to generate entire standards - level evaluations including all reactions / quantities needed for the endf - formatted libraries for neutron - induced reactions on , , , , and .when undertaking this project we specifically set out with the goal of answering some important questions .* what are the most connected targets ?what are the most connected reactions / quantities ? * are there reactions / quantities that are so connected that they should be a standard ? * what is the connection number distribution for targets and reactions ? what is the link number distribution between any two targets ? * can we use this information to inform new measurements that decrease the distance between important targets and standards ? * are there `` bottlenecks '' along the pathways from a given reaction to reaction standards that are not well measured ? * what elements and isotopes of reactions are not linked ?are any of them important for specific applications ? in order to attempt to resolve these questions we utilized graph theory as a tool to examine the connections between measurements in exfor . in this work, we take an abstract view of the exfor database and generate an undirected graph describing all the connections between reactions / quantities in the exfor database . 
from just these connections, we can infer what reactions/quantities the nuclear science community collectively (and somewhat unconsciously) views as important. this set of reactions/quantities does not exactly match our previous expectations. we will provide a series of recommendations for reactions/quantities that should be considered for elevation to the level of the standards in references or possibly included in a follow-on cielo project. we also find that our graph is disconnected, implying that there are large numbers of reactions/quantities that are not pinned to any monitor. in many cases, this is due to poor coding of the exfor entries. although this is a serious problem for our study, there is no easy fix. even if additional information is given, it is often given in one of the free-text fields of exfor, which are difficult, if not impossible, to parse. we used the x4i code to read the exfor database and parse the reaction and monitor strings. we then built up the undirected graph within x4i and stored the resulting graph in a graphml-formatted file. in this section we detail how we construct the graph.

table [table:nodes]: types of nodes in our graph.
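a minimal sketch of this construction step is given below, assuming the (reaction, monitor) pairs have already been extracted from the exfor entries (the parsing itself is the job of x4i and is not reproduced here); the toy reaction strings and the choice to store the number of supporting datasets as an edge weight are illustrative.

```python
import networkx as nx

def build_reaction_graph(entries):
    """Build the undirected EXFOR connection graph.

    `entries` is an iterable of already-parsed records of the form
    (entry_id, reaction_string, [monitor_string, ...]).  Nodes are
    reaction/quantity strings; an edge is added whenever a dataset quotes a
    monitor, i.e. measures one observable relative to another.
    """
    g = nx.Graph()
    for entry_id, reaction, monitors in entries:
        g.add_node(reaction)
        for monitor in monitors:
            g.add_node(monitor)
            if g.has_edge(reaction, monitor):
                # count how many datasets support this link
                g[reaction][monitor]["weight"] += 1
            else:
                g.add_edge(reaction, monitor, weight=1)
    return g

# toy input: two datasets measured relative to the same standard
toy = [
    ("E0001", "27-AL-27(N,A)11-NA-24,,SIG", ["92-U-235(N,F),,SIG"]),
    ("E0002", "42-MO-96(P,X)43-TC-96,,SIG", ["92-U-235(N,F),,SIG"]),
]
g = build_reaction_graph(toy)
print(g.number_of_nodes(), g.number_of_edges())
nx.write_graphml(g, "exfor_connections.graphml")
```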
adding the nodes corresponding to our proposed list of nodes tightens the distribution up further , enhancing the connectivity to standards level nodes .we comment that there is a noticeable improvement from adding our seven proposed nodes , a surprisingly large improvement given the small number of added nodes .plot of the minimum distance to a standard .we show a line at the average path length for the main cluster .this is the cluster where all of the standards nodes reside.,scaledwidth=50.0% ]in this project , we created an undirected graph from the reaction and monitor strings from datasets in the exfor database .this graph is a large , nearly scale - free network composed of disconnected clusters .the largest clusters have a `` small - world '' character .our graph is in many ways typical for real world graphs . with our graph, we identify what reactions and quantities the nuclear science community views as important enough to directly measure or measure relative to .we do this in a relatively objective fashion . clearly the various standards projects in refs . have a good handle on what is important .also , the clustering coefficients in table [ table : topbyclustercoeff ] demonstrate how connected the cielo nodes are .however , it is clear from the analysis of our graph that the following reaction / quantities have out - sized importance and are not considered in any standards effort : * aluminum reaction / quantities : * * n+ : the ( mg ) cross section * * p+ : the ( ) cross section and the and production cross sections * * + : the production cross section * molybdinum also a very important structural material : * * p+ : the production cross section * * + : the production cross section we recommend that at the very least that and all of the mo isotopes be considered as a target material in either a follow - on cielo or irdff project .in addition , a standards level study of fission product yields of the major actinides as suggested in the discussions at the recent working party on evaluation cooperation subgroup 37 meeting would improve the connectivity of all fission product yield data .we want to thank m. herman ( bnl ) and j. fritz ( st .joseph s college ) for their support of this project and acknowledge the useful discussions with n. otsuka ( iaea ) , a. carlson ( nist ) , a. plompen ( irmm ) and r. capote ( iaea ) .the work at brookhaven national laboratory was sponsored by the office of nuclear physics , office of science of the u.s .department of energy under contract no .de - ac02 - 98ch10886 with brookhaven science associates , llc .this project was supported in part by the u.s .department of energy , office of science , office of workforce development for teachers and scientists ( wdts ) under the science undergraduate laboratory internships program ( suli ) . 1 holden , norman . `` a short history of csisrs '' , national nuclear data center , brookhaven national laboratory , bnl report bnl-75288 - 2005-ir ( 2005 ) . 
international network of nuclear reaction data centres ( nrdc ) , `` compilation of experimental nuclear reaction data ( exfor / csisrs ) '' , http://www-nds.iaea.org/exfor/ and http://www.nndc.bnl.gov/exfor/ ( 2012 ) .carlson , _ et al ._ `` international evaluation of neutron cross section standards '' , nucl .data sheets , 110.12 ( 2009 ) 3215 - 3324 .mughabghab , _ atlas of neutron resonances _ , elsevier science , ( 2006 ) .p.oblozinsky , `` charged - particle cross section database for medical radioisotope production ; diagnostic radioisotopes and monitor reactions '' , international atomic energy agency iaea , iaea - tecdoc-1211 http://www-nds.iaea.org/medical/ ( 2003 ) .zsolnay , r. capote noy , h.j .nolthenius , a. trkov , `` summary description of the new international reactor dosimetry and fusion file ( irdff release 1.0 ) '' international atomic energy agency iaea , indc(nds)-0616 , https://www-nds.iaea.org/irdff/ ( 2012 ) .m. chadwick , _ et al ._ `` cielo : a future collaborative international evaluated library '' , proceedings of the international conference of nuclear data for science and technology ( nd2013 ) ( 2013 ) .x4i : the exfor interface , version 1.0 _ , https://ndclx4.bnl.gov/gf/project/x4i/ ( 2011 ) .iaea technical meeting , july ( 2013 ) .o. schwerer , `` exfor formats description for users ( exfor basics ) '' , documentation series for the iaea nuclear data section , vienna ( 2008 ) .the cross section evaluation working group ( csewg ) , `` a csewg retrospective , '' bnl report bnl-52675 ( 2001 ) .a.gurbich , `` ion beam analysis nuclear data library , '' https://www-nds.iaea.org/exfor/ibandl.htm ( 2011 ) graphml file format , graphml.graphdrawing.org d.a .brown , `` visualizing the connections in the exfor database '' , proceedings of the international conference of nuclear data for science and technology ( nd2013 ) ( 2013 ) ; d.a .brown , j. hirdt , m. herman `` data mining the exfor database , '' nemea-7 workshop , geel , belgium , bnl report bnl-103473 - 2013-ir ( 2013 ) .a. hagberg , d. schult , p. swart , _ networkx version 1.8.1 _ ( 2012 ) t.p .peixoto , _ graph - tool version 2.2.27 _ , http://graph-tool.skewed.de/ ( 2013 ) .r. albert , a .-barabsi , rev .74 ( 2002 ) 47 - 97. l. page , s. brin , r. motwani , t. winograd `` the pagerank citation ranking : bringing order to the web '' , stanford university ( 1999 ) .wpec - sg37 meeting , `` improved fission product yield evaluation methodologies '' , nea headquarters , paris , france 22 may ( 2013 ) ; https://www.oecd - nea.org / science / wpec / sg37/meetings/2013_may/.
|
the exfor database contains the largest collection of experimental nuclear reaction data available as well as the data s bibliographic information and experimental details . we created an undirected graph from the exfor datasets with graph nodes representing single observables and graph links representing the various types of connections between these observables . this graph is an abstract representation of the connections in exfor , similar to graphs of social networks , authorship networks , etc . by analyzing this abstract graph , we are able to address very specific questions such as 1 ) what observables are being used as reference measurements by the experimental nuclear science community ? 2 ) are these observables given the attention needed by various nuclear data evaluation projects ? 3 ) are there classes of observables that are not connected to these reference measurements ? in addressing these questions , we propose several ( mostly cross section ) observables that should be evaluated and made into reaction reference standards .
|
physiological information about neural structure and activity was employed from the very beginning to construct effective mathematical models of brain functions .typically , neural networks were introduced as assemblies of elementary dynamical units , that interact with each other through a graph of connections . under the stimulus of experimental investigations ,these models have been including finer and finer details .for instance , the combination of complex single neuron dynamics , delay and plasticity in synaptic evolution , endogenous noise and specific network topologies revealed quite crucial for reproducing experimental observations , like the spontaneous emergence of synchronized neural activity , both _ in vitro _( see , e.g. , ) and _ in vivo _ , and the appearance of peculiar fluctuations , the so called up down " states , in cortical sensory areas .since the brain activity is a dynamical process , its statistical description needs to take into account time as an intrinsic variable .accordingly , non equilibrium statistical mechanics should be the proper conceptual frame , where effective models of collective brain activity should be casted in .moreover , the large number of units and the redundancy of connections suggest that a mean field approach can be the right mathematical tool for understanding the large scale dynamics of neural network models .several analytical and numerical investigations have been devoted to mean field approaches to neural dynamics .in particular , stability analysis of asynchronous states in globally coupled networks and collective observables in highly connected sparse network can be deduced in relatively simple neural network models through mean field techniques . in this paperwe provide a detailed account of a mean field approach , that has been inspired by the heterogeneous mean field " ( hmf ) formulation , recently introduced for general interacting networks .the overall method is applied here to the simple case of random networks of leaky integrate and fire ( lif ) excitatory neurons in the presence of synaptic plasticity .on the other hand , it can be applied to a much wider class of neural network models , based on a similar mathematical structure .the main advantages of the hmf method are the following : ( _ i _ ) it can identify the relation between the dynamical properties of the global ( _ synaptic _ ) activity field and the network topology , ( _ ii _ ) it allows one to establish under which conditions partially synchronized or irregular firing events may appear , ( _ iii _ ) it provides a solution to the inverse problem of recovering the network structure from the features of the global activity field . in section [ sec2 ] , we describe the network model of excitatory lif neurons with short term plasticity .the dynamical properties of the model are discussed at the beginning of section [ sec3 ] . in particular , we recall that the random structure of the network is responsible for the spontaneous organization of neurons in two families of _ locked _ and _ unlocked _ ones . in the rest of this sectionwe summarize how to define a _ heterogeneous thermodynamic limit _ , that preserves the effects of the network randomness and allows one to transform the original dynamical model into its hmf representation ) .the hmf equations provide a relevant computational advantage with respect to the original system .actually , they describe the dynamics of classes of equal in degree neurons , rather than that of individual neurons . 
in practice, one can take advantage of a suitable sampling , according to its probability distribution , of the continuous in degree parameter present in the hmf formulation .for instance , by properly `` sampling '' the hmf model into 300 equations one can obtain an effective description of the dynamics engendered by a random erds renyi network made of neurons . in section [ sec4 ]we show that the hmf formulation allows also for a clear interpretation of the presence of classes of _ locked _ and _ unlocked _ neurons in qse : they correspond to the presence of a _ fixed point _ or of an _ intermittent - like _ map of the return time of firing events , respectively .moreover , we analyze in details the stability properties of the model and we find that any finite sampling of the hmf dynamics is chaotic , i.e. it is characterized by a positive maximum lyapunov exponent , .its value depends indeed on the finite sampling of the in degree parameter . on the other hand, chaos is found to be relatively weak and , when the number of samples , , is increased , vanishes with a power law decay , , with .this is consistent with the mean field like nature of the hmf equations : in fact , it can be argued that , in the thermodynamic limit , any chaotic component of the dynamics should eventually disappear , as it happens for the original lif model , when a naive thermodynamic limit is performed . in section [ sec5 ]we analyze the hmf dynamics for networks with different topologies ( e.g. , erds renyi and in particular scale free ) .we find that the dynamical phase characterized by qse is robust with respect to the network topology and it can be observed only if the variance of the considered in degree distributions is sufficiently small .in fact , quasi - synchronous events are suppressed for too broad in degree distributions , thus yielding a transition between a fully asynchronous dynamical phase and a quasi - synchronous one , controlled by the variance of the in degree distribution .in all the cases analyzed in this section , we find that the global synaptic activity field characterizes completely the dynamics in any network topology .accordingly , the hmf formulation appears as an effective algorithmic tool for solving the following _ inverse problem _ : given a global synaptic activity field , which kind of network topology has generated it ? in section [ sec6 ] , after a summary of the numerical procedure used to solve such an inverse problem , we analyze the robustness of the method in two circumstances : when a noise is added to the average synaptic activity field , and when there are noise and disorder in the external currents .such robustness studies are particularly relevant in view of applying this strategy to real data obtained from experiments .finally , in section [ sec7 ] we show that a hmf formulation can be straightforwardly extended to non massive networks , i.e. random networks , where the in degree does not increase proportionally to the number of neurons . 
in this case the relevant quantity in the hmf - like formulation is the average value of the in degree distribution , and the hmf equations are expected to reproduce confidently the dynamics of non massive networks , provided this average is sufficiently large .conclusions and perspectives are contained in section [ sec8 ] .we consider a network of excitatory lif neurons interacting via a synaptic current and regulated by short term plasticity , according to a model introduced in .the membrane potential of each neuron evolves in time following the differential equation where is the membrane time constant , is the membrane resistance , is the synaptic current received by neuron from all its presynaptic neurons ( see below for its mathematical definition ) and is the contribution of an external current ( properly multiplied by a unit resistance ) . whenever the potential reaches the threshold value , it is reset to , and a spike is sent towards the postsynaptic neurons . for the sake of simplicitythe spike is assumed to be a function of time .accordingly , the spike train produced by neuron , is defined as , where is the time when neuron fires its -th spike . the transmission of the spike train is mediated by the synaptic dynamics .we assume that all efferent synapses of a given neuron follow the same evolution ( this is justified in so far as no inhibitory coupling is supposed to be present ) .the state of the -th synapse is characterized by three variables , , , and , which represent the fractions of synaptic transmitters in the recovered , active , and inactive state , respectively ( ) . the evolution equations are only the active transmitters react to the incoming spikes : the parameter tunes their effectiveness .moreover , is the characteristic decay time of the postsynaptic current , while is the recovery time from synaptic depression . for the sake of simplicity , we assume also that all parameters appearing in the above equations are independent of the neuron indices .the model equations are finally closed , by representing the synaptic current as the sum of all the active transmitters delivered to neuron where is the strength of the synaptic coupling ( that we assume independent of both and ) , while is the directed connectivity matrix whose entries are set equal to 1 or 0 if the presynaptic neuron is connected or disconnected with the postsynaptic neuron , respectively . since we suppose the input resistance independent of , it can be included into . in this paperwe study the case of excitatory coupling between neurons , i.e. .we assume that each neuron is connected to a macroscopic number , , of pre - synaptic neurons : this is the reason why the sum is divided by the factor .typical values of the parameters contained in the model have phenomenological origin . unless otherwise stated , we adopt the following set of values : ms , ms , ms , mv , mv , mv , mv and .numerical simulations can be performed much more effectively by introducing dimensionless quantities , and by rescaling time , together with all the other temporal parameters , in units of the membrane time constant ( for simplicity , we leave the notation unchanged after rescaling ) .the values of the rescaled parameters are : , , , , , and .as to the normalized external current , its value for the first part of our analysis corresponds to the firing regime for neurons . 
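for concreteness, a minimal integration sketch of a network of this type is given below, anticipating the rescaled formulation discussed next (membrane time, threshold and reset set to 1, 1 and 0) and using a standard three-state depressing synapse. the parameter values and the explicit euler scheme are illustrative placeholders: they do not reproduce the event-driven implementation nor the exact parameter set quoted above.

```python
import numpy as np

# Illustrative (not the paper's) rescaled parameters.
TAU_IN, TAU_R, U, G, A = 0.2, 26.0, 0.5, 30.0, 1.3

def simulate(adj, t_end=50.0, dt=1e-3, seed=0):
    """Euler integration of excitatory LIF neurons with a three-state
    (recovered x, active y, inactive z = 1 - x - y) depressing synapse.
    `adj[i, j] = 1` if presynaptic j projects to postsynaptic i; the drive on
    neuron i is (G / N) * sum_j adj[i, j] * y_j, as in the massive-network case.
    """
    n = adj.shape[0]
    rng = np.random.default_rng(seed)
    v = rng.uniform(0.0, 1.0, n)          # membrane potentials
    x = np.ones(n)                        # recovered synaptic resources
    y = np.zeros(n)                       # active synaptic resources
    spikes = []
    for step in range(int(t_end / dt)):
        z = 1.0 - x - y
        drive = (G / n) * adj.dot(y)
        v += dt * (A - v + drive)
        y += dt * (-y / TAU_IN)           # active resources decay
        x += dt * (z / TAU_R)             # inactive resources recover
        fired = v >= 1.0
        if fired.any():
            v[fired] = 0.0                # reset after the spike
            # a spike moves a fraction U of the recovered resources
            # of the firing neuron into the active state
            transfer = U * x[fired]
            y[fired] += transfer
            x[fired] -= transfer
            spikes.extend((step * dt, i) for i in np.where(fired)[0])
    return spikes

# usage on a small random directed graph (20% connection probability)
n = 200
adj = (np.random.default_rng(1).random((n, n)) < 0.2).astype(float)
print(len(simulate(adj, t_end=20.0)), "spikes")
```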
while the rescaled eqs .( [ dynsyn ] ) and ( [ contz ] ) keep the same form , eq .( [ eq1 ] ) changes to , a major advantage for numerical simulations comes from the possibility of transforming the set of differential equations ( [ dynsyn])([input ] ) and ( [ eq1n ] ) into an event driven map ( for details see and also ) .the dynamics of the fully coupled neural network ( i.e. , ) , described by eq.s ( [ eq1n ] ) and ( [ eq2])([input ] ) , converges to a periodic synchronous state , where all neurons fire simultaneously and the period depends on the model parameters .a more interesting dynamical regime appears when some disorder is introduced in the network structure .for instance , this can be obtained by maintaining each link between neurons with probability , so that the in - degree of a neuron ( i.e. the number of presynaptic connections acting on it ) takes the average value , and the standard deviation of the corresponding in - degree distribution is given by the relation .in such an erds - renyi random network one typically observes quasi synchronous events ( qse ) , where a large fraction of neurons fire in a short time interval of a few milliseconds , separated by an irregular firing activity lasting over some tens of ms ( e.g. , see ) .this dynamical regime emerges as a collective phenomenon , where neurons separate spontaneously into two different families : the _ locked _ and the _ unlocked _ ones .locked neurons determine the qse and exhibit a periodic behavior , with a common period but different phases .degree ranges over a finite interval below the average value .the unlocked ones participate to the irregular firing activity and exhibit a sort of intermittent evolution .their in - degree is either very small or higher than .as the dynamics is very sensitive to the different values of of , in a recent publication we have shown that one can design a _ heterogeneous mean - field _ ( hmf ) approach by a suitable thermodynamic limit preserving , for increasing values of , the main features associated with topological disorder .the basic step of this approach is the introduction of a probability distribution , , for the normalized in - degree variable , where the average and the variance are fixed independently of .a realization of the random network containing nodes ( neurons ) is obtained by extracting for each neuron ( ) a value from , and by connecting the neuron with randomly chosen neurons ( i.e. , , ) .for instance , one can consider a suitably normalized gaussian like distribution defined on the compact support , ] of by values , in such a way that is constant ( importance sampling ) .notice that the integration of the discretized hmf equations is much less time consuming than the simulations performed on a random network .for instance , numerical tests indicate that the dynamics of a network with neurons can be confidently reproduced by an importance sampling with . the effect of the discretization of on the hmf dynamics can be analyzed by considering the distance between the global activity fields and ( see eq.([meanfield ] ) ) obtained for two different values and of the sampling , i.e. : in general exhibits a quasi periodic behavior and is evaluated over a time interval equal to its period . 
in order to avoid an overestimation of due to different initial conditions , the field is suitably translated in time in order to make its first maximum coincide with the first maximum of in the time interval ] : the values of and mainly depend on and on its standard deviation ( more details are reported in ) . for what concerns _ unlocked _ neurons, exhibits the features of an intermittent - like dynamics .in fact , unlocked neurons with close to and spend a long time in an almost periodic firing activity , contributing to a qse , then they depart from it , firing irregularly before possibly coming back again close to a qse .the duration of the irregular firing activity of unlocked neurons typically increases for values of far from the interval . using the deterministic map ( [ mappa ] ), one can tackle in full rigor the stability problem of the hmf model .the existence of stable fixed points for the locked neurons implies that they yield a negative lyapunov exponent associated with their periodic evolution . as for the unlocked neurons , their lyapunov exponent , ,can be calculated numerically by the time - averaged expansion rate of nearby orbits of map ( [ mappa ] ) : ,\ ] ] where is the initial distance between nearby orbits and is their distance at the iterate , so that if this limit exists . the lyapunov exponents for the unlocked component vanish as . according to these results, one expects that the maximum lyapunov exponent goes to zero in the limit .in fact , at each finite , can be evaluated by using the standard algorithm by benettin et al . . in fig.[lyup_max ] we plot as a function of the discretization parameter .thus , is positive , behaving approximately as , with ( actually , we find ) .the scenario in any discretized version of the hmf dynamics is the following : _( i ) _ all _ unlocked neurons _exhibit positive lyapunov exponents , i.e. they represent the chaotic component of the dynamics ; _ ( ii ) _ is typically quite small , and its value depends on the discretization parameter and on ; _ ( iii ) _ in the limit and all s of unlocked neurons vanish , thus converging to a quasi periodic dynamics , while the _ locked neurons _ persist in their periodic behavior .the same scenario is observed in the dynamics of random networks built with the hmf strategy , where the variance of the distribution is kept independent of the system size , so that the fraction of locked neurons is constant .for the lif dynamics in an erdos renyi random network with neurons , it was found that in the limit . according to the argument proposed in , the value of the power - law exponent is associated to the scaling of the number of unlocked neurons , with the system size , namely .the same argument applied to hmf dynamics indicates that the exponent , ruling the vanishing of in the limit , stems from the fact that the hmf dynamics keeps the fraction of unlocked neurons constant . as a function of the sampling parameter : has been averaged also over ten different realizations of the network ( the error bars refer to the maximum deviation from the average ) .the dashed red line is the powerlaw , with . ]when the distribution is sufficiently broad , the system becomes asynchronous and locked neurons disappear .the global field exhibits fluctuations due to finite size effects and in the thermodynamic limit it tends to a constant value . 
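the benettin-type estimate used for the curve above can be sketched as follows for a generic one-step update of the discretised hmf state; the step counts, the renormalisation interval, the initial separation and the logistic-map check are illustrative, and the update function itself is assumed to be supplied by the user.

```python
import numpy as np

def max_lyapunov(step, state0, n_steps=200000, n_discard=1000,
                 d0=1e-8, renorm_every=5, seed=0):
    """Benettin-style estimate of the maximum Lyapunov exponent.

    `step(state)` advances the full state by one step of unit duration; a
    reference trajectory and a perturbed copy are evolved together, and the
    logarithmic growth of their separation is accumulated at every
    renormalisation (divide the result by the real time step if it is not 1).
    """
    rng = np.random.default_rng(seed)
    ref = np.array(state0, float)
    direction = rng.normal(size=ref.shape)
    pert = ref + d0 * direction / np.linalg.norm(direction)
    total, count = 0.0, 0
    for k in range(n_steps):
        ref = step(ref)
        pert = step(pert)
        if (k + 1) % renorm_every == 0:
            d = np.linalg.norm(pert - ref)
            if k >= n_discard:                   # skip the transient
                total += np.log(d / d0)
                count += 1
            # rescale the perturbation back to the reference separation d0
            pert = ref + (d0 / d) * (pert - ref)
    return total / (count * renorm_every)

# quick check on a map with a known exponent: logistic map at r = 4 (lambda = ln 2)
print(max_lyapunov(lambda x: 4.0 * x * (1.0 - x), np.array([0.3])))
```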
from eq.s ( [ vk])([zk ] ) , one obtains that in this regime each neuron with in degree fires periodically with a period ~,\ ] ] while its phase depends on the initial conditions . in this caseall the lyapunov exponents are negative .for a given in degree probability distribution , the fraction of locked neurons ( i.e. , ) decreases by increasing . in particular , there is a critical value at which vanishes .this signals a very interesting dynamical transition between the quasi - synchronous phase ( ) to a multi - periodic phase ( ) , where all neurons are periodic with different periods .here we focus on the different collective dynamics that may emerge for choices of other than the gaussian case , discussed in the previous section .first , we consider a power law distribution where the constant is given by the normalization condition .the lower bound is introduced in order to maintain finite . for simplicity, we fix the parameter and analyze the dynamics by varying . notice that the standard deviation of distribution ( [ eqplaw ] ) decreases for increasing values of .the dynamics for relatively high is very similar to the quasi synchronous regime observed for in the gaussian case ( see fig .[ rp1 ] ) . by decreasing can observe again a transition to the asynchronous phase observed for in the gaussian case .accordingly , also for the power law distribution ( [ eqplaw ] ) a phase with locked neurons may set in only when there is a sufficiently large group of neurons sharing close values of . in fact , the group of locked neurons is concentrated at values of quite close to the lower bound , while in the gaussian case they concentrate at values smaller than .another distribution , generating an interesting dynamical phase , is i.e. the sum of two gaussians peaked around different values , and , of , with the same variance . is the normalization constant such that .we fix and vary both the variance , , and the distance between the peaks , .if is very large ( ) , the situation is the same observed for a single gaussian with large variance , yielding a multi periodic asynchronous dynamical phase . for intermediate values of .e. , the dynamics of the network can exhibit a quasi synchronous phase or a multi periodic asynchronous phase , depending on the value of .in fact , one can easily realize that this parameter tunes the standard deviation of the overall distribution : small separations amount to broad distributions . finally , when , a new dynamical phase appears .vs. for the probability distribution defined in eq.([dgauss ] ) , with , and .we have obtained the global field simulating the hmf dynamics with a discretization with classes of neurons .we have then used to calculate the of neurons evolving eq .( [ vk ] ) . in the insetwe show the raster plot of the dynamics : as in fig.1 , neurons are ordered along the vertical axis according to their in degree . ] for small values of ( e.g. ) , we observe the usual qse scenario with one family of locked neurons ( data not shown ) .however , when is sufficiently large ( e.g. ) , each peak of the distribution generates its own group of locked neurons .more precisely , neurons separate into three different sets : two locked groups , that evolve with different periods , and , and the unlocked group . in fig.[rpdue ] we show the dependence of on and the raster plot of the dynamics ( see the inset ) for .notice that the plateaus of locked neurons extend over values of on the left of and . 
in the inset of fig .[ fourier ] we plot the global activity field : the peaks signal the quasi - synchronous firing events of the two groups of locked neurons .one can also observe that very long oscillations are present over a time scale much larger than and .they are the effect of the _ firing synchrony _ of the of two locked families .in fact , the two frequencies and are in general not commensurate , and the resulting global field is a quasi periodic function. this can be better appreciated by looking at fig.[fourier ] , where we report the frequency spectrum of the signal ( red curve ) .we observe peaks at frequencies , for integer values of and . for comparison ,we report also the spectrum of a periodic , generated by the hmf with power law probability distribution ( [ eqplaw ] ) , with ( black curve ) : in this case the peaks are located at frequencies multiples of the frequency of the locked group of neurons . for different in degree probability distributions .the black spectrum has been obtained for the hmf dynamics with , generated by the power law probability distribution ( see eq.([eqplaw ] ) ) , with : in this case there is a unique family of locked neurons generating a periodic global activity field .the red spectrum has been obtained for a random network of neurons generated by the double gaussian distribution ( see eq.([dgauss ] ) ) described in fig.s 6 and 7 : in this case two families of locked neurons are present while , as reported in the inset , exhibits a quasi periodic evolution . ] on the basis of this analysis , we can conclude that slow oscillations of the global activity field may signal the presence of more than one group of topologically homogeneous ( i.e. locked ) neurons .moreover , we have also learnt that one can generate a large variety of global synaptic activity fields by selecting suitable in - degree distributions , thus unveiling unexpected perspectives for exploiting a sort of _ topological engineering _ of the neural signals .for instance , one could investigate which kind of could give rise to an almost resonant dynamics , where is close to a multiple of .the hmf formulation allows one to define and solve the following global inverse problem : how to recover the in degree distribution from the knowledge of the global synaptic activity field .here we just sketch the basic steps of the procedure . given , each class of neurons of in - degree evolves according to the hmf equations : the different fonts used here , with respect to eq.s ( [ vk])([meanfield ] ) , point out that in this framework the choice of the initial conditions is arbitrary and the dynamical variables , , in general may take different values from those assumed by , , , i.e. the variables generating in ( [ vk])([meanfield ] ) .however , one can exploit the self consistent relation for the global field : if and are known , this is a fredholm equation of the first kind for the unknown . if is a periodic signal , eq .( [ global ] ) can be easily solved by a functional montecarlo minimization procedure , yielding a faithful reconstruction of .this method applies successfully also when is a quasi - periodic signal , like the one generated by in degree distribution ( [ dgauss ] ) . 
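a sketch of such a montecarlo inversion is given below. it exploits the fact that, once each in-degree class is driven by the fixed measured field, its contribution can be computed once and for all, so the minimisation only involves the weights of the discretised in-degree distribution; the greedy acceptance rule, the move size, the function names and the synthetic check are assumptions of this sketch and need not coincide with the procedure actually used.

```python
import numpy as np

def recover_degree_distribution(y_target, class_fields, n_iter=200000,
                                step=0.02, seed=0):
    """Monte Carlo solution of the discretised Fredholm equation.

    `y_target` is the measured global activity field on a time grid, and
    `class_fields` is an (M, T) array whose rows are the single-class fields
    obtained by driving the class equations with `y_target` (the class
    integrator is assumed to be available and is not shown).  The routine
    random-walks over the weights p_k >= 0 with sum 1, accepting moves that
    reduce the squared mismatch of the reconstructed field.
    """
    rng = np.random.default_rng(seed)
    m = class_fields.shape[0]
    p = np.full(m, 1.0 / m)                     # start from a flat distribution

    def cost(weights):
        return np.mean((weights @ class_fields - y_target) ** 2)

    best = cost(p)
    for _ in range(n_iter):
        trial = p.copy()
        i, j = rng.integers(0, m, 2)
        delta = step * rng.random() * trial[i]   # move probability mass i -> j
        trial[i] -= delta
        trial[j] += delta
        c = cost(trial)
        if c < best:                             # greedy acceptance
            p, best = trial, c
    return p

# synthetic check: recover known weights from noiseless synthetic class fields
rng = np.random.default_rng(3)
fields = rng.random((30, 400))
true_p = rng.random(30); true_p /= true_p.sum()
estimate = recover_degree_distribution(true_p @ fields, fields, n_iter=50000)
print(np.abs(estimate - true_p).max())
```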
in this sectionwe want to study the robustness of the hmf equations and of the corresponding inverse problem procedure in the presence of noise .this is quite an important test for the reliability of the overall hmf approach .in fact , a real neural structure is always affected by some level of noise , that , for instance , may emerge in the form of fluctuations of ionic or synaptic currents .moreover , it has been observed that noise is crucial for reproducing dynamical phases , that exhibit some peculiar synchronization patterns observed in _ in vitro _experiments . for the sake of simplicity, here we introduce noise by turning the external current , in eq .( [ vk ] ) , from a constant to a time and neuron dependent stochastic processes .precisely , the are assumed to be i.i.d .stochastic variables , that evolve in time as a random walk with boundaries , and ( the same rule adopted in ) .accordingly , the average value , of is given by the expression , while the amplitude of fluctuations is .at each step of the walk , the values of are independently updated by adding or subtracting , with equal probability , a fixed increment .whenever the value of crosses one of the boundaries , it is reset to the boundary value . since the dynamics has lost its deterministic character , its numerical integration can not exploit an event driven algorithm , and one has to integrate eq.s ( [ vk ] ) ([zk ] ) by a scheme based on explicit time discretization .the results reported hereafter refer to an integration time step , that guarantees an effective sampling of the dynamics over the whole range of parameter values that we have explored .we have assumed that is also the time step of the stochastic evolution of . herewe consider the case of uncorrelated noise , that can be obtained by a suitable choice of . in our simulations , that yields a value of the correlation time of the random walk with boundaries .this value , much smaller than the value typical of the isi of neurons , makes the stochastic evolution of the external currents , , an effectively uncorrelated process with respect to the typical time scales of the neural dynamics . of the hmf dynamics , sampled by classes of neurons , for a gaussian probability distribution , with and .lines of different colors correspond to different values of the noise amplitude , , added to the external currents : ( black line ) , ( red line ) , ( green line ) , ( blue line ) and ( orange line ) . ] in fig .[ camponoise ] we show , produced by the discretized hmf dynamics with and for a gaussian distribution , with and . curves of different colors correspond to different values of .we have found that up to , i.e. also for non negligible noise amplitudes ( ) , the hmf dynamics is practically unaffected by noise . by further increasing , the amplitude of decreases , as a result of the desynchronization of the network induced by large amplitude noise .also the inversion procedure exhibits the same robustness with respect to noise . as a crucial test, we have solved the inverse problem to recover by injecting the noisy signal in the noiseless equations ( [ vktil])([zktil ] ) , where ( see fig.[camponoise ] ) .the reconstructed distributions , for different , are shown in fig .[ noise_invert_1 ] .for relatively small noise amplitudes ( ) the recovered form of is quite close to the original one , as expected because the noisy does not differ significantly from the noiseless one . 
on the contrary, for relatively large noise amplitudes ( ), the recovered distribution is broader than the original one and centered around a shifted average value. the dynamics exhibits much weaker synchrony effects, the same indeed that one would observe for the noiseless dynamics on the lattice built up with this broader distribution given by the inversion method.

figure [noise_invert_1] (caption): comparison, for different values of the noise amplitude, of the reconstructed probability distribution (red circles) with the original gaussian distribution (black line); the upper left panel corresponds to the noiseless case, while the upper right, lower left and lower right panels correspond to increasing noise amplitudes, respectively.

as a matter of fact, the global neural activity fields obtained by experimental measurements are unavoidably affected by some level of noise. accordingly, it is worth investigating the robustness of the inversion method also in the case of noise acting directly on the global field. in order to tackle this problem, we have considered a simple noisy version of the global synaptic activity field, obtained by adding, at each integration time step, a random number uniformly extracted in a fixed interval.

figure (caption): comparison, for different values of the noise amplitude, of the reconstructed probability distribution (red circles) with the original gaussian distribution (black line); the upper left, upper right, lower left and lower right panels correspond to increasing noise amplitudes, respectively.

in this section we analyze the effectiveness of the hmf approach for sparse networks, i.e. networks where the neurons' in-degree does not scale linearly with the system size and where, in particular, the average in-degree is independent of the system size. in this context, the membrane potential of a generic neuron in a network of neurons evolves according to the following equation: while the dynamics of the synaptic variables is the same as in eqs. ([dynsyn])([contz]). the coupling term is now independent of the system size, and the normalization factor has been introduced in order to compare models with different average connectivity. the structure of the adjacency matrix is determined by choosing for each neuron its in-degree from a probability distribution (with support over the positive integers) independent of the system size.

figure [campi_sparsa] (caption): comparison of the global field from sparse random networks with the same quantity generated by the corresponding hmf dynamics, for sparse random networks with neurons. in the upper panel we consider gaussian probability distributions with different averages and variances, corresponding to the violet, orange, red and blue lines, respectively; the black line represents the field from the hmf dynamics with a gaussian in-degree distribution. in the lower panel we consider the scale-free case with fixed power exponent and different cutoffs, corresponding to the orange, red and blue lines, respectively; the black line represents the field from the hmf dynamics with a power-law distribution and cutoff.

on sparse networks the hmf model is not recovered in the thermodynamic limit, as the fluctuations of the field received by each neuron of a given in-degree do not vanish in that limit. nevertheless, for large enough in-degrees, one can expect that the fluctuations become negligible, i.e. the synaptic activity field received by different neurons with the same in-degree is approximately the same. eq. ([vsparsa]) can then be turned into a mean-field-like form in which the global field is averaged over all neurons in the network. this implies that the equation is the same for all neurons with the same in-degree, depending only on the ratio between the in-degree and the average connectivity. consequently, also in this case one can read eq. ([vsparsa_mean]) as a hmf formulation of eq. ([vsparsa]), where each class of neurons evolves according to eqs. ([vk])([zk]), with the in-degree rescaled by the average connectivity, while the global activity field is given by the corresponding average. in order to analyze the validity of the hmf as an approximation of models defined on sparse networks, we consider two main cases: (_i_) a truncated gaussian in-degree distribution with given average and standard deviation; (_ii_) a power-law (i.e., scale-free) distribution with a lower cutoff. the gaussian case (_i_) is an approximation of any sparse model whose in-degree distribution is a discretized gaussian with matching parameters. the scale-free case (_ii_) approximates any sparse model whose in-degree distribution is a power law with the same exponent and a generic cutoff. such an approximation is expected to provide better results the larger the cutoff of the scale-free distribution is. in fig. [campi_sparsa] we plot the global field emerging from the hmf model, superposing those coming from a large finite-size realization of the sparse network, with different parameter values for the gaussian case (upper panel) and for the scale-free case (lower panel). the hmf equations exhibit a remarkable agreement with the models on sparse networks, even for relatively small values of the average connectivity and of the cutoff. this analysis indicates that the hmf approach works also for non-massive topologies, provided the typical connectivities in the network are large enough, as, e.g., in a gaussian random network with neurons (see fig. ([campi_sparsa])).

for systems with a very large number of components, the effectiveness of a statistical approach, paying the price of some necessary approximation, has been extensively proven, and mean-field methods are typical in this sense.
in this paper we discuss how such a method , in the specific form of heterogeneous mean field , can be defined in order to fit an effective description of neural dynamics on random networks .the relative simplicity of the model studied here , excitatory leaky integrate and fire neurons with short term synaptic plasticity , is also a way of providing a pedagogical description of the hmf and of its potential interest in similar contexts .we have reported a detailed study of the hmf approach including investigations on _ ( i ) _ its stability properties , _ ( ii ) _ its effectiveness in describing the dynamics and in solving the associated inverse problem for different network topologies , _ ( iii ) _ its robustness with respect to noise , and _( iv ) _ its adaptability to different formulations of the model at hand . in the light of _ ( ii )_ and _ ( iii ) _ , the hmf approach appears quite a promising tool to match experimental situations , such as the identification of topological features of real neural structures , through the inverse analysis of signals extracted as time series from small , but not microscopic , domains . on a mathematical ground ,the hmf approach is a simple and effective mean field formulation , that can be extended to other neural network models and also to a wider class of dynamical models on random graphs .the first step in this direction could be the extension of the hmf method to the more interesting case , where the random network contains excitatory and inhibitory neurons , according to distributions of interest for neurophysiology . this will be the subject of our future work .99 d. de santos - sierra , i. sendia - nadal , i. leyva , j. a. almendral , s. anava , a. ayali , d. papo , and s. boccaletti , plos one 9(1 ) : e85828 ( 2014 ) . v. volman , i. baruchi , e. persi and e. ben - jacob , physica a 335 , 249 ( 2004 ) .hidalgo j , seoane lf , corts jm , mu oz ma , plos one 7(8 ) , e40710 ( 2012 ) .j. f. mejias , h. j. kappen and j. j. torres , plos one 5(11 ) , e13651 ( 2010 ) n. brunel , journal of computational neuroscience , 8(3 ) , 183 - 208 ( 2000 ) .l. calamai , a. politi , a. torcini , phys .e 80 , 036209 ( 2009 ) d. millman , s. mihalas , a. kirkwood & e. niebur , nature physics , 6(10 ) , 801 - 805 ( 2010 ) .b. cessac , b. doyon , m. quoy m. & samuelides , physica d : nonlinear phenomena , 74(1 ) , 24 - 44 ( 1994 ) .bressloff , phys .e , 60(2 ) , 2160 ( 1999 ) .a. barrat , m. barthelemy , and a. vespignani , _ dynamical processes on complex networks _, cambridge university press , cambridge , uk ( 2008 ) .s. n. dorogovtsev , a.v .goltsev , and j. f. f. mendes , rev .mod . phys . 80 , 1275 ( 2008 ) .m. di volo , r. livi , s. luccioli , a. politi and a. torcini , phys .e 87 , 032801 ( 2013 ) .r. burioni , m. di volo , m. casartelli , r. livi and a. vezzani , scientific reports 4 , 4336 ( 2014 ) . m. tsodyks and h. markram , proc .usa 94 , 719 ( 1997 ) .m. tsodyks , k. pawelzik and h. markram , neural comput .10 , 821 , ( 1998 ) .m. tsodyks , a. uziel and h. markram , the journal of neuroscience 20 , rc1 ( 1 - 5 ) ( 2000 ) .r. brette , neural comput . 18 , 2004 ( 2006 ) .r. zillmer , r. livi , a. politi , and a. torcini , phys .e 76 , 046102 ( 2007 ) .m. di volo and r. livi , j. of chaos solitons and fractals 57 , 5461 ( 2013 ) . m. tsodyks , i. mitkov , and h. sompolinsky , phys .71 , 1280 ( 1993 ) .g. benettin , l. galgani , a. giorgilli and j .- m .strelcyn , meccanica 15 , 21 ( 1980 ) .r. 
kress , _ linear integral equations _applied numerical sciences , v.82 , springer - verlag , new york , ( 1999 ) .corticonics_. new york : cambridge up ( 1991 ) .p. bonifazi , m. goldin , m. a. picardo , i. jorquera , a. cattani , g. bianconi & r. cossart , science , 326(5958 ) , 1419 - 1424 ( 2009 ) .
|
we report on the main dynamical features of a model of leaky integrate-and-fire excitatory neurons with short-term plasticity defined on random massive networks. we investigate the dynamics through a heterogeneous mean-field formulation of the model, which is able to reproduce dynamical phases characterized by the presence of quasi-synchronous events. this formulation also allows one to solve the inverse problem of reconstructing the in-degree distribution for different network topologies from the knowledge of the global activity field. we study the robustness of this inversion procedure by providing numerical evidence that the in-degree distribution can be recovered even in the presence of noise and disorder in the external currents. finally, we discuss the validity of the heterogeneous mean-field approach for sparse networks with a sufficiently large average in-degree.
|
fluid models have received much recent attention in the literature .they have been used to model statistical multiplexers in atm ( asynchronous transfer mode ) networks , , , packet speech multiplexers , buffer storage in manufacturing models , buffer memory in store - and - forward systems and high - speed digital communication networks . in these models the queue length is considered a continuous ( or fluid ) process , rather than a discrete random process that measures the number of customers .these models tend to be somewhat easier to analyze , as they allow for less randomness than more traditional queueing models .the following is a description of a fairly general fluid model , of which many variants and special cases have been considered .let denote the amount of fluid at time in the buffer .furthermore , let be a continuous - time markov process .the content of the buffer is regulated ( or driven ) by in such a way that the _ net input rate _ into the buffer ( i.e. , the rate of change of its content ) is ] given that is in state we shall assume throughout that for all states .we shall also assume that for at least one , since otherwise , in the steady state , the buffer is always empty .we let where an empty product should be interpreted as unity .the stationary probabilities of the birth - death process can then be represented as when the capacity of the buffer is infinitely large , in order that a stationary distribution for exists , the mean drift should be negative or , equivalently , the following _ stability condition _ should be satisfied we let and since we assume that the drift in each state is nonzero , we have setting ; \quadt,\ x\geq0,\quad k\in\mathcal{n,}\ ] ] the kolmogorov forward equations for the markov process ] and expanding in powers of we obtain the _ eikonal equation _ for and the _ transport equation _ for where to solve ( [ eik ] ) and ( [ trans ] ) we use the method of characteristics , which we briefly review below . given the first order partial differential equation where we search for a solution the technique is to solve the system of characteristic equations given by where we now consider to all be functions of the variables and here measures how far we are along a particular characteristic curve or ray and indexes them . for the eikonal equation ( [ eik ] ) , the characteristic equations are [ strip] the particular solution is determined by the initial conditions at . we shall show that for this problem two different types of solutions are needed ; these correspond to two distinct families of rays . setting and solving ( [ eqc])-([eqd ] ) , yields so that is constant along a ray .we now consider the family of rays emanating from the point evaluating ( [ eik ] ) at we get so that from ( [ eqb ] ) and ( [ pq ] ) , with the initial condition and using ( [ u0 ] ) , we obtain + 1 .\label{z(u)}\ ] ] from ( [ eqa ] ) , we have and from ( [ u0 ] ) we have{c}\lambda-\mu<0,\quad b=0\\ \mu-\lambda>0,\quad b=\ln\left ( \rho\right ) \end{array } \right . .\] ] using the initial condition and expanding in powers of , we get and in order to have for ( i.e. , for the rays to enter the domain ) we need to choose with since . 
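as a small numerical companion to the model just described, the sketch below computes the stationary distribution of a finite birth-death chain from the product formula quoted above (with the empty product interpreted as unity) and evaluates the mean drift whose negativity is the stability condition. the rates and net input rates used here are illustrative placeholders, since the paper's symbols were lost in extraction.

    import numpy as np

    def bd_stationary(birth, death):
        # birth[k]: rate k -> k+1, death[k]: rate k -> k-1 (death[0] unused);
        # pi_k is proportional to the product over i < k of birth[i] / death[i+1]
        w = np.ones(len(birth))
        for k in range(1, len(birth)):
            w[k] = w[k - 1] * birth[k - 1] / death[k]
        return w / w.sum()

    def mean_drift(birth, death, net_rates):
        # stationary average of the net input rate; stability requires a negative value
        return float(bd_stationary(birth, death) @ net_rates)

    # hypothetical example: an m/m/1-like driver truncated at n states,
    # with net input rate r_k = k - c when k sources are active
    lam, mu, c, n = 0.8, 1.0, 5.0, 30
    birth, death = np.full(n, lam), np.full(n, mu)
    r = np.arange(n) - c
    print(mean_drift(birth, death, r))   # negative, so the buffer is stable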
integrating ( [ eqa ] ) and using ( [ z(u ) ] ) and ( [ u0 ] ) , we conclude that \label{y(s , t)}\]] + 1 .\label{z(s , t)}\ ] ] this yields the rays that emanate from in parametric form .several rays are sketched in figure [ raycor ] .for and each value of , ( [ y(s , t ) ] ) and ( [ z(s , t ) ] ) determine a ray in the plane , which starts from at .we discuss a particular ray which can be obtained in an explicit form . for we can eliminate from ( [ z(s , t ) ] ) and obtain and along this ray , and are related by for , we have both and increasing for . for the rays reach a maximum value in at , where and we have \label{ytmax}\]] + 1 .\label{ztmax}\ ] ] from ( [ eqa ] ) we see that the maximum value in is achieved at the same time that , and that occurs at with and .\ ] ] inverting the equations ( [ y(s , t)])-([z(s , t ) ] ) we can write and , \quad \mathbb{k(}y , z\mathbb{)}=k\left [ s\left ( y , z\right ) , t(y , z)\right ] .\ ] ] we will use this notation in the rest of the article . from ( [ eqe ] )we have + \lambda e^{-st}\left [ 1-\ln\left ( \rho\right ) + ts\right ] , \ ] ] which we can integrate to get -\frac{\lambda}{s}e^{-st}\left [ 2-\ln\left ( \rho\right ) + ts\right ] \label{psi1}\\ & + \psi(s,0)-\frac{\mu}{s}\left [ 2+\ln\left ( \rho\right ) \right ] + \frac{\lambda}{s}\left [ 2-\ln\left ( \rho\right ) \right ] .\nonumber\end{aligned}\ ] ] obviously , is a constant , since all rays start at the same point . setting in ( [ psi1 ] ) and using ( [ t0 ] ) , we obtain and therefore ,taking the limit as we get on the other hand , from ( [ ginfinity ] ) we have and we conclude that solving for in ( [ y(s , t)])-([z(s , t ) ] ) , we get , \quad e^{-st}=1+\frac{s}{2\lambda}\left [ z-1-ys - t\left ( \lambda+\mu\right ) \right ] .\label{expa}\ ] ] replacing ( [ expa ] ) in ( [ psi1 ] ) , we obtain \left ( z-1\right ) + \ln\left ( \rho\right ) . \label{psi2}\ ] ] we shall now solve the transport equation ( [ trans ] ) , which we rewrite as using ( [ y(s , t ) ] ) and ( [ z(s , t ) ] ) in ( [ transp1 ] ) , we have to solve ( [ transp2 ] ) , we need to compute as a function of and use of the chain rule gives and hence, where the jacobian is defined by .\label{j}\ ] ] using ( [ inversion ] ) we can show after some algebra that while ( [ expa ] ) gives thus , the transport equation ( [ transp2 ] ) becomes .\label{transp3}\ ] ] using ( [ y(s , t ) ] ) and ( [ z(s , t ) ] ) in ( [ j ] ) , we have .\label{transp4}\ ] ] combining ( [ transp3 ] ) and ( [ transp4 ] ) , we obtain whose solution is where is a function to be determined . from ( [ j ] ) ,we have since the jacobian vanishes as the ray expansion eases to be valid near the point where a separate analysis is needed .so far we have determined the exponent and the leading amplitude except for the function in ( [ k ] ) and the power in ( [ gray ] ) . in section 4 we will determine them by matching ( [ gray ] ) to a corner layer solution valid in a neighborhood of the point . 
denoting the domain in the plane by must determine what part of the rays from infinity fill .the expansion corresponding to these rays must satisfy the boundary condition ( [ ginfinity ] ) .thus , we have while ( [ eqa])-([eqb ] ) yield equations for the rays or , eliminating from the system ( [ yzinf ] ) and writing we get solving ( [ yzinf ] ) subject to the initial condition where we get .\label{yinf}\ ] ] from ( [ y(z ) ] ) , it follows that the minimum value in occurs when hence , for to be positive , we must have where was defined in ( [ y0 ] ) .therefore , the rays from infinity fill the region given by the complementary region is a _ shadow _ of the rays from infinity . in , is given by ( [ gray ] ) as only the rays from are present ( see figure [ rayinf ] ) .in the region both the rays coming from and the rays coming from infinity must be taken into account .we add ( [ gray ] ) and ( [ ginfinity ] ) to represent in the asymptotic form + \varepsilon^{\nu}\exp\left [ \frac{1}{\varepsilon}\psi(y , z)\right ] \mathbb{k}(y , z),\quad(y , z)\in r. \label{gr}\ ] ] we can show that in the interior of so that however , in we can write ( [ gr ] ) as \mathbb{k}(y , z). ] is well defined . for and have by ( [ fbc ] ) .we now examine how this boundary condition is satisfied by considering the scale and .note that this part of the boundary is in the region from ( [ z(s , t ) ] ) we have ^{2}-4\lambda\mu}}{2\mu},\quad z>1 . \label{st1}\ ] ] using ( [ st1 ] ) in ( [ y(s , t ) ] ) we get + 2\lambda\right\ } + o\left ( y\ln^{2}y\right ) , \label{s0}\ ] ] and using ( [ s0 ] ) in ( [ st1 ] ) y+o\left ( y^{2}\ln^{2}y\right ) , \label{t00}\ ] ] for and using ( [ s0 ] ) and ( [ t00 ] ) in ( [ psi3 ] ) and ( [ k2 ] ) , we find that + 2\right\ } \label{psi5}\\ + \frac{1}{z-1}\left\ { \left ( \lambda+\mu\right ) \ln\left [ \frac{\mu y}{\left ( z-1\right ) ^{2}}\right ] + \lambda-\mu\right\ } y,\quad y\rightarrow0.\nonumber\end{gathered}\ ] ] hence , we shall consider asymptotic solutions of the form \widetilde { k}(u,\varepsilon k ) , \label{f26}\ ] ] where and are to be determined . 
using ( [ f26 ] ) in ( [ eqg ] ) we get , to leading order, the most general solution to ( [ k4eq ] ) is hence, \frac{1}{u}\widetilde{k}\left ( \xi\right ) .\label{f27}\ ] ] to find and we will match ( [ f27 ] ) with the corner layer solution ( [ cornerint ] ) .recalling that and using the asymptotic formula ( [ asybessel ] ) we get , as with fixed -\frac{\left ( \lambda+\mu\right ) } { \theta}\right\ } .\label{j1}\ ] ] using ( [ j1 ] ) and writing ( [ cornerint ] ) in terms of and we have \sqrt{\frac{\varepsilon}{2\pi\left ( z-1\right ) } } \nonumber\\ \times\frac{1}{2\pi\mathrm{i}}{\displaystyle\int\limits_{\mathrm{br } } } \left\ { \frac{1}{\theta}\exp\left [ \frac{u\theta}{\varepsilon}+\left ( \frac{z-1}{\varepsilon}+\frac{\lambda+\mu}{\theta}\right ) \ln\left ( \frac{\sqrt{\mu\lambda}e\varepsilon}{\theta\left ( z-1\right ) } \right ) \right ] \right .\label{f17}\\ \left .\times\gamma\left ( \frac{\lambda+\mu}{\theta}+1-\alpha\right ) \exp\left [ \frac{\lambda-\mu}{\theta}-\left ( \frac{\lambda+\mu}{\theta } -\alpha\right ) \ln\left ( \sqrt{\rho}\ \frac{\lambda+\mu}{\theta}\right ) \right ] \right\ } d\theta,\nonumber\end{gathered}\ ] ] to evaluate ( [ f17 ] ) asymptotically as we shall use the saddle point method .we find that the integrand has a saddle point at so that \frac{1}{2\pi u}\frac{1}{\xi}\\ \times\gamma\left ( \frac{\lambda+\mu}{\xi}+1-\alpha\right ) \exp\left [ \frac{u\xi}{\varepsilon}+\left ( \frac{z-1}{\varepsilon}+\frac{\lambda+\mu } { \xi}\right ) \ln\left ( \frac{\sqrt{\mu\lambda}e\varepsilon}{\xi\left ( z-1\right ) } \right ) \right ] \\ \times\exp\left [ \frac{\lambda-\mu}{\xi}-\left ( \frac{\lambda+\mu}{\xi } -\alpha\right ) \ln\left ( \sqrt{\rho}\ \frac{\lambda+\mu}{\xi}\right ) \right ] , \end{gathered}\ ] ] or \label{f188}\\ \times\exp\left\ { \frac{\lambda+\mu}{\xi}\ln\left [ \frac{\varepsilon}{\xi u\left ( \rho+1\right ) } \right ] + \frac{2\lambda}{\xi}\right\ } .\nonumber\end{gathered}\ ] ] writing ( [ f27 ] ) in terms of , we obtain \label{f19}\\ & \times\exp\left [ \frac{\left ( \lambda+\mu\right ) } { \xi}\ln\left ( \frac{\mu\varepsilon}{u\xi^{2}}\right ) + \frac{\left ( \lambda-\mu\right ) } { \xi}\right ] \frac{1}{u}\widetilde{k}\left ( \xi\right ) .\nonumber\end{aligned}\ ] ] matching ( [ f188 ] ) with ( [ f19 ] ) , we have \end{aligned}\ ] ] and therefore , for \\ \times\exp\left\ { \frac{\lambda+\mu}{\xi}\ln\left [ \frac{\varepsilon}{\xi u\left ( \rho+1\right ) } \right ] + \frac{2\lambda}{\xi}\right\ } , \end{gathered}\ ] ] or \nonumber\\ \times\exp\left\ { \frac{\ln\left ( \rho\right ) } { \varepsilon}+\frac { z-1}{\varepsilon}\ln\left [ \frac{\lambda e^{2}\varepsilon u}{\left ( z-1\right ) ^{2}}\right ] + \alpha\ln\ \left [ \frac{\left ( \lambda + \mu\right ) u}{z-1}\right ] \right\ } \label{f20}\\ \times\exp\left\ { \frac{\left ( \lambda+\mu\right ) u}{z-1}\ln\left [ \frac{\varepsilon}{\left ( \rho+1\right ) \left ( z-1\right ) } \right ] + \frac{2\lambda u}{z-1}\right\ } , \nonumber\end{gathered}\ ] ] note that from ( [ f20 ] ) we have as will now find the equilibrium probability that the buffer content exceeds = 1-{\displaystyle\sum\limits_{k=0}^{\infty } } f_{k}(x ) \label{m}\ ] ] for various ranges of in this region we shall use the spectral representation of the corner layer solution . 
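before turning to the spectral representation, it may help to make the saddle point (laplace-type) evaluation used above concrete on a toy integral. the sketch below compares a direct quadrature of the integral of g(t) exp(lam*phi(t)), with an interior maximum of phi at t0, against the leading-order approximation g(t0) exp(lam*phi(t0)) sqrt(2*pi/(lam*|phi''(t0)|)); it is only a generic illustration of the technique, not the paper's specific contour integral, whose parameters were garbled in extraction.

    import numpy as np
    from scipy.integrate import quad

    def laplace_leading(phi, d2phi, g, t0, lam):
        # leading-order laplace approximation around an interior maximum t0 of phi
        return g(t0) * np.exp(lam * phi(t0)) * np.sqrt(2 * np.pi / (lam * abs(d2phi(t0))))

    phi = lambda t: -(t - 1.0) ** 2        # maximum at t0 = 1, phi''(t0) = -2
    d2phi = lambda t: -2.0
    g = lambda t: np.cos(t)

    for lam in (10.0, 100.0, 1000.0):
        exact, _ = quad(lambda t: g(t) * np.exp(lam * phi(t)), -10.0, 10.0)
        print(lam, exact, laplace_leading(phi, d2phi, g, 1.0, lam))
    # the two values agree to relative order 1/lam as lam grows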
using the generating function , \ ] ] in the form = \left ( \sqrt{\rho } \right ) ^{-(j+1)}{\displaystyle\sum\limits_{l=-\infty}^{\infty } } j_{l-(j+1)}\left [ -\frac{2\sqrt{\rho}}{\rho+1}(j+1-\alpha)\right ] \left ( \sqrt{\rho}\right ) ^{l},\ ] ] we obtain from ( [ cornerspec ] ) .\nonumber\end{aligned}\ ] ] we shall now use the asymptotic solution in the region as given by ( [ g3 ] ) .we have \mathbb{k(}y , z)dz .\label{mar1}\ ] ] to evaluate ( [ mar1 ] ) as we use the laplace method . from ( [ pq ] )we get and therefore the main contribution to ( [ mar1 ] ) comes from and we obtain \mathbb{k(}y,1).\ ] ] using ( [ st1 ] ) in ( [ j])-([qz ] ) and ( [ psi3])-([k2 ] ) , we obtain while ( [ y(s , t ) ] ) gives with thus, .\label{m2}\ ] ] most of the domain the asymptotic expansion of is given by \mathbb{k}(y , z)\text { \ in } r^{c } \label{rc1}\ ] ] or \mathbb{k}(y , z)\text { \ in } r. \label{r1}\ ] ] if we consider the continuous part of the density , given by the transition between and disappears , and we have \mathbb{k}(y , z)=\varepsilon^{\frac{3}{2}}\exp\left [ \frac{1}{\varepsilon}\psi(s , t)\right ] sk(s , t ) , \label{density}\ ] ] everywhere in the interior of note that becomes infinite along ( i.e. , but the product remains finite . the asymptotic expansion of the boundary probabilities can be obtained by setting in ( [ r1 ] ) .this expression can be used to estimate the difference \ ] ] which is exponentially small for also , for a fixed is maximal at ( see figure [ y=0 ] ) . in other words , if the buffer will most likely be empty . for a fixed peeked along the curve ( see figure [ y = y0 ] ) . to see this better , we can use ( [ t0 ] ) , ( [ inversion ] ) and ( [ j ] ) in ( [ density ] ) , obtaining \right\ } , \quad z>1\ ] ] with or equivalently ^{2}\right\ } , \quad z>1.\ ] ] this means that given active sources , the most likely value of the buffer will be for a fixed achieves its maximum around ( see figure [ z=1 ] ) . to find an expression for is close to we use ( [ gauss ] ) and obtain , for fixed \right\ } , \ ] ] or .\ ] ] below we summarize the various boundary , corner and transition layer corrections to the results in ( [ g3 ] ) and ( [ g4 ] ) : 1 . d\theta,\end{aligned}\ ] ] where denotes the bessel function , the gamma function , is a vertical contour in the complex plane with and .\ ] ] 2 . , \ ] ] with 3 . \\ \times\left [ \frac{\xi(y)-1}{1-\rho\xi^{-1}(y)}\xi^{k}(y)+\rho^{k}\xi ^{-k}(y)\right ] \sqrt{\frac{\mu-\lambda}{2\pi\mathbf{j}_{0}(y)s(y,0)}}\left ( 1-\rho\right ) , \end{gathered}\]] < 0,\quad\mathbf{j}_{0}(y)=2\left [ \mu\xi(y)-\frac{\lambda}{\xi(y)}\right ] y-1<0,\\ \quad s(y,0)=\left ( \lambda+\mu\right ) -\mu\xi(y)-\lambda\xi^{-1}(y)<0,\quad\\ \left ( 1-\xi^{-1}\right ) \rho-\left ( 1-\xi\right ) -\left ( \rho+1\right ) \ln\left ( \xi\right ) = \mu\left [ \left ( 1-\xi^{-1}\right ) \rho+\left ( 1-\xi\right ) \right ] ^{2}y.\end{gathered}\ ] ] 4 . \\ \times\exp\left\ { \frac{\ln\left ( \rho\right ) } { \varepsilon}+\frac { z-1}{\varepsilon}\ln\left [ \frac{\lambda e^{2}\varepsilon u}{\left ( z-1\right ) ^{2}}\right ] + \alpha\ln\ \left [ \frac{\left ( \lambda + \mu\right ) u}{z-1}\right ] \right\ }\\ \times\exp\left\ { \frac{\left ( \lambda+\mu\right ) u}{z-1}\ln\left [ \frac{\varepsilon}{\left ( \rho+1\right ) \left ( z-1\right ) } \right ] + \frac{2\lambda u}{z-1}\right\ } .\end{gathered}\ ] ]this work was completed while d. 
dominici was visiting technische universitt berlin and supported in part by a sofja kovalevskaja award from the humboldt foundation , provided by professor olga holtz .he wishes to thank olga for her generous sponsorship and his colleagues at tu berlin for their continuous help .o. hashida and m. fujika . queueing models for buffer memory in store - and - forward systems . in _ proceedings of the seventh international teletraffic congress ( stockholm , sweden , june 1973 ) _ , pages 323/1323/7 .e. a. van doorn and w. r. w. scheinhardt . a fluid queue driven by an infinite - state birth - death process . in v.ramaswami and p. wirth , editors , _ teletraffic contributions for the information age _ , pages 465475 .elsevier , amsterdam , 1997 .proceedings of the 15th international teletraffic congress ( itc15 ) , washington , dc , usa , june 2227 , 1997 .
|
we analyze asymptotically a differential-difference equation that arises in a markov-modulated fluid model. we use singular perturbation methods to analyze the problem, with appropriate scalings of the two state variables; in particular, the ray method and asymptotic matching are used. keywords: fluid models, m/m/1 queue, differential-difference equations, ray method, asymptotics.
|
project meetings , such as big room design and construction project sessions ( khanzode & lamb , 2012 ) , are communicative events where participants discuss , negotiate , present , and create material jointly .participants express varying mental states in these group meetings due to individual factors and predispositions , the specific meeting agenda , and general team dynamics . to improve meeting quality and productivityit is important to understand the individual user states and the aggregated group state and through this the group dynamics .one characteristic associated with meeting quality , productivity , and effectiveness is engagement .this paper focuses on a multi - modal approach to detect engagement .engagement is defined as : `` the value that a participant in an interaction attributes to the goal of being together with the other participant(s ) and continuing interaction '' ( salam , chetouani 2015 , p. 3 ) `` the process by which two ( or more ) participants establish , maintain , and end their perceived connection .it is directly related to attention '' ( salam , chetouani 2015 , p. 3 ) .it can furthermore be defined as a meaningful involvement ( fletcher 2005 ) , that is enabled through vigor , dedication , and absorption ( schaufeli 2006 ) . for the present studywe define engagement as an attentive state of listening , observing , and giving feedback , leading into protagonistic action in group interaction .individual and group engagement level influences productivity of group interaction .hence , it is of interest to create transparency about engagement states of participants and group .an important aspect is the intention to become protagonist and the identification of the threshold between active and passive participation .this moment is crucial because it indicates a significant change in mental state of an individual .intention is understood as instructions that people give to themselves to behave in certain ways ( triandis 1980 ) , and one of the most important predictor of a person s behavior ( sheeran 2002 ) .it is an internal commitment to perform an action while in a certain mental state ( levesque et al .1990 ) and it predicts actual behavior .this paper defines intention as a mental state characterized by an internal commitment to perform a specific action in the immediate future .it is a specific form of engagement that is indicating future active meeting participation .we build on an early study and preliminary prototype , called ering ( engagement ring ) , at the pbl lab at stanford that focused on detecting and building awareness of learners degrees of engagement during globally distributed project team meetings ( ma & fruchter , 2015 ) .collocation offers a wealth of tacit cues about learners and team members .body position and gestures , as well as physical movement provide indicators of participant s intentions ( pease & pease , 2004 ) .these are cues that enable us to intuitively interpret and evaluate the state of learners and team members .integrated delivery process ( ipd ) is one of the disruptive forces that the design and construction industry is experiencing .this process leverages big room weekly or bi - weekly collocated project stakeholder meetings . 
during big room sessions a lot of timeis spent talking in large groups about issues , displaying side by side on smartboards 3d bim models of the future facility and the navisworks clashes that need to be resolved ( khanzode & lamb , 2012 ) .big room ipd increases the number of iterations and time - to - market ( fruchter & ivanov , 2011 ) . in order to achieve thisincreased productivity participant s need to engage and get feedback on their engagement state .this paper presents a more comprehensive engagement framework that builds on and expands the ering engagement concept and provides guidelines and instructions for the implementation of a multi - modal sensor - based work environment to detect engagement of participants in a collocated setting . in the preliminary ering prototypethe initial focus was on body motion and posture .the proposed engagement framework presented in this paper expands this approach to include other information streams such as facial expression , voice , and other biometric information .we demonstrate the accuracy of the engagement framework with a simple interaction scenario , and provide a direction for future implementation .the analysis of engagement has a long history in education and work .work engagement ( schaufeli 2006 ) influences project outcome .qualitative observation - based evaluation of an individual s state has shed light on the importance of engagement for productivity .the human - computer - interaction community started focusing on the automation of engagement detection for learning and games . to combine both methods of engagement detection , qualitative and automated ,enables us to provide new insights into automated work engagement detection in the work environment .a study by frank and fruchter ( 2015 ) uses cognitive flexibility as an internal evaluation of engagement during meetings .however , this approach as well as self - reports require active involvement of meeting participants .this paper focuses on external expressions of engagement that allow a non - intrusive evaluation of the participant without disrupting the activity .research in this field focuses on the human body and its features to generate indicators of mental states . non - verbal behavior and body language expresses emotions ( schindler et al .2008 , mead 2011 ) ; it conveys information about the speaker s internal state , and attitude toward the addressees ( gutwin & penner 2002 ) . both body posture and gesture ( ekman 1999 , ekman and friesen 1972 )have been identified as important social signals that indicate mental states and thereby also engagement .we use the preexisting qualitative research to develop an engagement framework offering quantitative measureable indicators .various approaches have been investigated to use body language to improve interactions with digital agents such as assistant robots and displays ( vaufrydaz 2015 , schwarz 2014 ) .these studies use varying aspects of non - verbal language , such as upper body pose ( mead 2011 , schwarz 2014 ) , proxemics ( vaufreydaz 2015 ) , facial expression and head posture ( vaufreydaz 2015 ) , or the absence of movement ( witchel 2014 ) and sometimes the combination of a small set of features from facial expression and posture in multi - modal approaches .weight and weight distribution on a seat has been used to measure engagement ( mota & picard 2003 , demello et al . 
2007 , balaban 2004 ) .weight is an indicator of body posture that can be measured without affecting the participant .the aforementioned ering prototype ( ma & fruchter 2015 focuses on globally distributed learning settings which strip away the physical cues .the hypothesis of the ering study was that these cues can be reintroduced in virtual interactions , allowing learners to self - regulate and build an awareness of the overall state of the team .the pbl lab researchers developed a cloud service and application called ering ( engagement ring ) to detect and provide real - time feedback of learner s degree of engagement during a virtual interaction .ering collects body motion data using microsoft kinect sensor , analyzes and interprets a small set of the body motions and body positions including head , shoulder , and hand joints .ering provides real - time feedback of the degree of engagement based on three states disengaged , neutral listen , and engaged .the body motion analysis is performed for two units of analysis individual learner and team .ering runs on a microsoft azure cloud platform .ering was used in the architecture , engineering , construction ( aec ) global teamwork course testbed in 2014 and 2015 . however , the afore mentioned studies discuss limited sets of potential engagement indicators and do not make full usage of the amount of qualitative research conducted on non - verbal body language and its indication of mental states .hence , they all provide limited insight and accuracy in evaluating the intention to engage with an agent .they do not discuss the evaluation of engagement between humans in normal interaction or insights into potentially different levels of engagement and thereby provide only a binary classification .this paper discusses a more comprehensive analysis of engagement .building on previous studies , we developed an engagement framework in the form of an engagement scale for interaction .engagement is the level of participation in a meeting interaction .schwarz et al . 
( 2014) define two states of disengaged and intention to interact .the ering prototype defines three states , disengaged , neutral listen , and engaged .the proposed engagement framework consists of six states to understand engagement in more detail .figure [ fig : engagement - framework ] presents the engagement framework consisting of six states .the first state is disengagement ; similar to schwarz et al .( 2014) and ma and fruchter ( 2015) disengagement is understood as a state of no participation , distraction and no attention to the meeting .the second state is relaxed engagement , understood as attention to the meeting , listening , observing , but no participation .this is most similar to the neutral listen state of ering .the third state is involved engagement it signifies attention and non - verbal feedback like nodding , small verbal confirmations , or specific facial expressions such as an open mouth .this state is the first that can be classified as engaged in the ering prototype , but counts as disengaged in the schwarz et al .the present framework differentiates between relaxed and involved engagement because they signify differing mental states and different focus and involvement to the meeting activity and thereby potential for productivity .the fourth state is the intention to act which is similar to the schwarz model .it is the preparation for active participation in protagonist role indicated through and increase in activity .it shows an involved participation but not yet a contribution .the fifth state is action and the process of speaking and/ or interacting with participants or content on displays .it is a calm form of protagonistic activity .the sixth and final state is involved action and a highly engaged and involved interaction with intense gesture and voice .this state is an active form of excitement and arousal in a meeting that fosters productivity .the six states help understand different team dynamics and user states that affect the meeting interaction , productivity and effectiveness .using multi - modal information our framework identifies specific combination of engagement classifiers associated with the six engagement states .literature identified specific observable patterns that indicate mental states such as engagement . in order to categorize the engagement of an individual into one of the six engagement states of the engagement framework we need these specific observable patterns .they elucidate which state a participant is in at any given type .the engagement framework draws on a variety of these identified observable patterns understood as indicators for engagement ( sanghvi et al .2011 , vaufrydaz et al .2015 , mead et al . 2011 ,bassetti 2015 , changing minds 2015 , schwarz 2014 , scherer 2011 ) .as humans we draw on a multitude of indicates to evaluate conversation partner s engagement . 
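the six states of the framework map naturally onto a small ordered data structure; the sketch below is a hypothetical python encoding of the scale, from no participation up to highly involved protagonistic activity, that a classifier could use as its target labels.

    from enum import IntEnum

    class EngagementState(IntEnum):
        # ordered from passive to highly active participation
        DISENGAGEMENT = 0        # no participation, distraction, no attention
        RELAXED_ENGAGEMENT = 1   # attentive listening and observing, no participation
        INVOLVED_ENGAGEMENT = 2  # attention plus non-verbal feedback (nodding, short confirmations)
        INTENTION_TO_ACT = 3     # increased activity preparing for a protagonist role
        ACTION = 4               # calm speaking or interacting with participants or displays
        INVOLVED_ACTION = 5      # highly engaged interaction with intense gesture and voice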
for this reason , the engagement framework seeks to implement many identified indicators of engagement to make a more accurate prediction of the engagement state .the indicators are multi - modal , based on body posture and motion , voice , and facial expression to create a more comprehensive perspective .the multi - modal indicators can be captured through ubiquitous sensors such as 3d cameras like kinect , 2d imaging through rgb cameras , and microphones .the input of each sensor and each mode ( 3d , 2d , sound ) is treated as a module .the module conceptualization allows for future expansion of sensors and indicators to be integrated into the classification of the engagement framework .the indicators are abstracted into classifiers .classifiers are specific components of body posture or motion , facial expression , voice , and other multi - modal information that can characterize engagement .the classifiers are based on heuristics . to date , we identified 25 indicators , such as volume of speech , smiling , and shaking the head , producing a range of classifiers .participants in meetings are normally seated behind a table .hence , the indicators are focused on the upper body with a differentiation between head , face , arms , body , and voice .table [ tab : params ] shows an example of indicators and their classifiers for the body .it names the indicator , its definition , the module , and the classifier in each of the six engagement states .classifiers are understood as being exhibited or not exhibited during an observation period and are expressed as a binary value .for example , the body lean angle is an indicator of engagement that contains six classifiers .the first classifier is a backwards lean angle that can indicate disengagement .the second classifier is a sideways lean angle that indicates relaxed engagement .the third classifier is no lean angle or sitting straight which shows an involved engagement .the fourth classifier is the process of leaning forward indicating the intention to act .the fifth classifier is a slight forward lean indication action . andthe sixth classifier is a rapid change in lean angles over the observation time indicating involved action ..example of classifier selection and definition for the body [ cols="<,^ , > " , ] [ tab : classifiers ] we captured and labeled 2,321 frames from 6 different subjects for intention to act and disengagement .five hundred frames ( about 20% of the frames ) are used for training and the remaining frames are used for testing our classifiers .each frame is classified as disengagement or intention to act .the action state occurs when a frame label is intention to act and the speed of one hand is not zero .the initial implementation reached a classification accuracy of 83.36% .the processing time for each frame is less than 10ms which indicates real - time usability of our algorithm since we receive 30 frames per second and should label them to one of the states of engagement .automated engagement detection can assist to increase our understanding of group dynamics , and participation habits .the presented engagement framework was implemented using research based observable engagement indicators and classifiers show a high accuracy in predicting participation behavior .this provides the basis for future comprehensive team engagement analyses .our proposed approach to engagement classification promises a more comprehensive observation of individual engagement and team engagement for various applications . 
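to make the classification step concrete, the following is a minimal sketch of how per-frame binary classifier outputs could be fed to a support vector machine to separate disengagement from intention to act, using the split reported above (about 20% of the roughly 2,300 labeled frames for training, the rest for testing) and the rule that a frame counts as action when it is labeled intention to act and one hand is moving. the feature layout, the kernel choice and the random stand-in data are assumptions for illustration, not the authors' implementation, so the printed accuracy is meaningless here.

    import numpy as np
    from sklearn.svm import SVC
    from sklearn.metrics import accuracy_score

    # x: one row per frame, one binary column per classifier; y: 0 = disengagement,
    # 1 = intention to act.  random stand-ins replace the real labeled frames.
    rng = np.random.default_rng(0)
    n_frames, n_features = 2321, 25
    x = rng.integers(0, 2, size=(n_frames, n_features)).astype(float)
    y = rng.integers(0, 2, size=n_frames)

    n_train = 500                                  # about 20% of the frames
    clf = SVC(kernel="linear").fit(x[:n_train], y[:n_train])
    pred = clf.predict(x[n_train:])
    print("test accuracy:", accuracy_score(y[n_train:], pred))

    def final_state(label, hand_speed):
        # 'action' requires the intention-to-act label and a non-zero hand speed
        if label == 1:
            return "action" if hand_speed > 0 else "intention_to_act"
        return "disengagement"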
throughout the meeting the participation level of participant changes .calculating the engagement state of individual and group in real time allows for feedback of group activity and enables managerial and environmental adjustment to create a more engaging and thereby productive work environment .[ [ engagement - feedback - for - meeting - participant - and - infrastructure - management ] ] engagement feedback for meeting participant and infrastructure management ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ a dynamic meeting feedback can trigger a signal at individual or team level ; either targeting the disengaged individual to encourage reengagement or the group to change the meeting focus or activity .groups would receive a joint message such as `` the group is disengaged '' or a motion signal is triggered like ering ( ma & fruchter 2015 ) , which means that the svm classified more than 40% of the individuals as disengaged .the public feedback is used as an encouragement to change the meeting activity or take a break to refocus .detailed insight is helpful to evaluate meeting effectiveness and efficiency .records of engagement levels will provide insight to bigger trends in meeting dynamic , such as overall engagement at specific meeting times , participation , and dominance in meetings .these insights allow for strategic adjustments of meeting times , necessary participants , necessary changes in interaction protocols , and others aspects related to meeting productivity .the engagement level can be used to identify the user intent to act with various responsive objects in a room by identifying directed engagement levels .it will give additional information to responsive objects as to which user wants to interact with which object in a room , based on directional body posture , voice , or motion .it will use an engagement table to track each participants potential engagement level with each object .the presented paper is an initial step in developing a complex responsive engagement system .the accuracy of the algorithm can be increased . in this initial stage of the project a limited number of classifiers have been implemented .currently the study focuses on body posture and motion .the classifiers are designed as modules that allow for expansion through different sensors , integrating for example voice data , and facial expression , up to visions of the smart office future including biometric information .the expansion of the classifier list will allow for a more complex understanding of engagement and user state .furthermore , on - going efforts focus on the implementation of a machine learning algorithm that will allow for a faster and more accurate classification of the six engagement states .the collection of more training data will further assist is making the framework implementation more robust .the test game presented in the paper is an abstraction and simplification of real interactions and hence future experimental scenarios will consider more complex future steps .we will present the teams with increasingly complex interaction tasks up to the implementation of the algorithm in realistic meetings .the final goal for the implementation of the engagement framework is to continuously classify the engagement of multiple meeting participants that are within the operating range of the sensors for all six engagement states . 
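the group-level trigger mentioned above (a signal once the classifier marks more than 40% of the individuals as disengaged) reduces to a one-line rule; a hedged sketch, with hypothetical label strings:

    def group_disengagement_alert(states, threshold=0.40):
        # states: one label per participant for the current time window
        if not states:
            return False
        frac = sum(s == "disengagement" for s in states) / len(states)
        return frac > threshold

    # example: 3 of 6 participants disengaged -> 50% > 40%, the group signal fires
    print(group_disengagement_alert(["disengagement"] * 3 + ["action"] * 3))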
throughout a meeting participants listen ,observe , talk , or interact with a display .the behaviors are evaluated through multi - modal modules of engagement classifiers to categorize engagement into six stages .such a system will assist to detect and evaluate engagement states , and foster engagement in scenarios ranging from small meeting groups to big room project stakeholder meetings .hanan salam and mohamed chetouani .a multi - level context - based modeling of engagement in human - robot interaction . in _automatic face and gesture recognition ( fg ) , 2015 11th ieee international conference and workshops on _ , volume 3 , pages 16 .ieee , 2015 .renate fruchter and plamen ventsislavov ivanov .agile ipd production plans as an engine of process change . in _ proc .asce international workshop on computing in civil engineering _ ,pages 776784 , 2011 .maria frank , renate fruchter , and marianne leinikka .high engagement decreases fatigue effect in global learners . in _savi symposium on new ways to teach and learn for student engagement_. stanford university , 2015 .ross mead , amin atrash , and maja j matari .proxemic feature recognition for interactive robots : automating metrics from the social sciences . in _international conference on social robotics _ , pages 5261 .springer , 2011 .carl gutwin and reagan penner .improving interpretation of remote gestures with telepointer traces . in _ proceedings of the 2002 acm conference on computer supported cooperative work _ , pages 4957 .acm , 2002 .julia schwarz , charles claudius marais , tommer leyvand , scott e hudson , and jennifer mankoff . combining body pose , gaze , and gesture to determine intention to interact in vision - based interfaces . in _ proceedings of the 32nd annual acm conference on human factors in computing systems _ , pages 34433452 .acm , 2014 .harry j witchel , carina ei westling , julian tee , aoife healy , robert needham , and nachiappan chockalingam .what does not happen : quantifying embodied engagement using nimi and self - adaptors ., 11(1):304331 , 2014 .selene mota and rosalind w picard .automated posture analysis for detecting learner s interest level . in _ computer vision and pattern recognition workshop , 2003 .conference on _ , volume 5 , pages 4949 .ieee , 2003 .sidney dmello , patrick chipman , and art graesser .posture as a predictor of learner s affective engagement . in _ proceedings of the 29th annual cognitive science society _ , volume 1 , pages 905910 .citeseer , 2007 .carey d balaban , joseph cohn , mark s redfern , jarad prinkey , roy stripling , and michael hoffer .postural control as a probe for cognitive state : exploiting human information processing to enhance performance ., 17(2):275286 , 2004 .jyotirmay sanghvi , ginevra castellano , iolanda leite , andr pereira , peter w mcowan , and ana paiva .automatic analysis of affective postures and body motion to detect engagement with a game companion . in _2011 6th acm / ieee international conference on human - robot interaction ( hri ) _, pages 305311 .ieee , 2011 .stefan scherer , michael glodek , georg layher , martin schels , miriam schmidt , tobias brosch , stephan tschechne , friedhelm schwenker , heiko neumann , and gnther palm . a generic framework for the inference of user states in human computer interaction ., 6(3 - 4):117141 , 2012 .
|
group meetings are frequent business events aimed at developing and conducting project work, such as big room design and construction project meetings. to be effective in these meetings, participants need to be in an engaged mental state. the mental state of participants, however, is hidden from other participants and therefore difficult to evaluate. mental state is understood here as an inner process of thinking and feeling, formed of a conglomerate of mental representations and propositional attitudes. there is a need to create transparency about these hidden states in order to understand, evaluate and influence them. facilitators need to evaluate the meeting situation and adjust it for higher engagement and productivity. this paper presents a framework that defines a spectrum of engagement states and an array of classifiers aimed at detecting the engagement state of participants in real time. the engagement framework integrates multi-modal information from 2d and 3d imaging and sound. engagement is detected and evaluated at the individual participant level and aggregated at the group level. we use empirical data collected at the lab of konica minolta, inc. to test initial applications of this framework. the paper presents examples of the tested engagement classifiers, which are based on research in psychology, communication, and human-computer interaction. their accuracy is illustrated on dyadic interaction for engagement detection. in closing we discuss the potential extension to complex group collaboration settings and future feedback implementations. + + * keywords * : _ collaboration _ , _ engagement _ , _ feedback _ , _ meeting management _
|
ultracold neutrons ( ucn ) are defined via their unique property of being reflected under any angle of incidence from the surface of suitable materials with high material optical potential ( fermi potential ) .this is only occurring for neutrons with very low kinetic energies in the nev range , corresponding to velocities below 7 m/s or temperatures below 3mk . hence their name ultracold .well - suited materials are e.g. stainless steel , be , ni , nimo alloys or diamond - like carbon which display total ucn reflection up to their respective of 190 , 252 , 220 or 210 - 290nev , respectively . closed containers of such materials can confine ucn and serve as ucn storage vessels .evacuated tubes or rectangular shaped guides made of or coated with materials of high can be used to transport ucn over distances of several meters .the ultracold neutron source at psi is now in normal operation .about ten meters of neutron guides are necessary to transport ucn from the intermediate storage vessel to one of the three beam ports , traversing the several meter thick biological shield .the ucn guides , made from coated glass or coated stainless steel , are housed inside a stainless steel vacuum system .high vacuum conditions are required in order not to affect the ucn storage and transport properties .the main thrust for the construction and operation of high intensity ucn sources comes from the needs of high precision experiments like the search for a permanent electric dipole moment of the neutron .efficient transport of ucn from production to the experiment is a necessity .in addition , installation of the guides is a complex and lengthy procedure .moreover , the replacement of a guide would cause a long shutdown .therefore , it was decided to install only ucn guides tested with ucn and with known good ucn transmission .we have developed the prestorage method , described in section [ sec : prestorage ] , which allowed us to perform a quality check on ucn guides and a quantification of the ucn transmission properties before final installation .ignatovich dedicates a long chapter in his book to `` transporting ucn '' and gives the definition of ucn transmission as `` the transmission of a neutron guide is the ratio of ucn flux at the output to the flux at the input '' .this is applicable when regarding continuous ucn sources and continuous experiment operation . in storage type experiments ,when an experimental chamber has to be filled within a given time period and using a ucn source with ucn intensity decreasing with time , the integrated number of ucn and the ucn passage time is relevant . in our methodthe transmission of the time - integrated counts will be studied instead of the flux transmission .independent of the materials used , ucn transmission increases with guide diameter and decreases with guide length l. for comparison between different types of guides the normalized transmission per meter ( t ) is used .the total guide transmission t then follows as : }.\ ] ] fig .[ transmission - simulation ] shows the calculated behavior of the ucn transmission with increasing guide length for normalized ucn transmissions in a realistic range for good ucn guides .a length of about 10 m is necessary at the psi source to pass the biological shielding .[ transmission - simulation ] demonstrates the importance of even small improvements in ucn transmission for such long installations . 
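equation (1) itself was garbled in extraction, but the surrounding text defines t as the normalized transmission per meter, so the total transmission of a straight guide of length l meters follows by raising t to the power l. a short numeric sketch under that assumption reproduces the point made about fig. [transmission-simulation]: over the roughly 10 m needed to traverse the biological shield, small per-meter improvements change the delivered ucn intensity substantially.

    per_meter = [0.95, 0.97, 0.99]   # illustrative per-meter transmissions for good guides
    length_m = 10.0                  # guide length needed to pass the shielding

    for t in per_meter:
        total = t ** length_m        # assumed form of eq. (1): total transmission = t^L
        print("t = %.2f per meter -> total over %g m: %.2f" % (t, length_m, total))
    # 0.95 -> ~0.60, 0.97 -> ~0.74, 0.99 -> ~0.90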
in general , the properties of the materials and ucn exposed surfaces , are decisively separating the good and the bad guides . in the past, various measurements have been made to define and measure the properties of ucn guides with early attempts summarized in .the topic is also treated in recent publications , but experiments have notoriously been difficult with results not - necessarily transferable to other measurements .problematic issues were necessary assumptions on neutron flux , neutron energy distribution , detector efficiencies , and reproducibility of installations concerning e.g. small gaps in the setup causing ucn losses . assuming a single number for ucn transmission is a simplification as the transmission probability depends on the kinetic energy and angular distribution of the neutrons .furthermore , it is important to know how fast ucn traverse the guide , i.e. how fast one can fill an experiment on the exit side , which directly correlates to specular reflectivity and integral transmission . in principle it would be possible to repeat our measurements with monoenergetic ucn with a more complicated setup .the main parameters which define the neutron transmission of a ucn guide can be summarized as follows : * surface roughness : the total reflections of neutrons from surfaces can be classified in specular reflections , where the angle of incidence is co - planar and equals the reflection angle and diffuse reflections , where the reflection angle is independent of the incident angle and follows a cosine distribution with respect to the perpendicular direction .this simple view is valid for roughness values much larger than the neutron wavelength .for very low roughness , e.g. for highly polished copper or coated glass surfaces , diffraction effects become important and the probability of diffuse reflections will depend on the incident angle and neutron elocity ( see also ) .the influence of surface roughness on ucn reflection has been studied recently in detail using flat plates as reflector .high ucn transmission is obtained with negligible diffuse reflections . which results in a short passage times for the ucn through the guide .hence , low surface roughness is a main quality criterion with glass being the preferential material .* material optical potential : the coherent neutron scattering length and material density defines the absolute value of which determines the energy range where total ucn reflection under any angle of incidence occurs . a high value of therefore allows to transmit ucn with higher energies and hence increases the ucn intensity . * neutron losses via material interaction : ucn reflect from surfaces at any angle of incidence in case their kinetic energy is below .as described by an imaginary part of the potential , there is a small probability that the ucn undergoes nuclear capture during reflection due to the neutron wave - function which slightly penetrates the surface barrier .the ucn losses are therefore energy dependent .+ in addition , the ucn can also inelastically scatter from surface atoms or impurity atoms sticking to the surface . wall temperatures always exceed ucn temperatures , causing ucn acceleration out of the ucn regime via phonon scattering . the overall loss due to these surface effectscan be parametrized by a `` loss - per - bounce '' coefficient as a ratio of the imaginary and real parts of the optical potential .this coefficient is independent on kinetic energy . 
from this and the kinetic energy of the ucnone can calculate the loss per bounce probability by using eq .( 2.68 ) in .an energy - averaged loss per bounce probability can be estimated by transmission measurements . for the extraction of the loss - perbounce coefficient one needs to know the energy spectrum and the angular distribution of the ucn . *gaps : the passage of ucn through a guide can be regarded similar to the propagation of an ideal gas .gaps and holes , necessary e.g. for vacuum pumping , represent direct loss channels during ucn transport according to their relative surface area .in addition , areas of low which are directly visible to ucn also represent leaks , e.g. at positions where surface coating is missing . avoiding gaps and holesis therefore of great importance in order to achieve a high ucn transmission .the setup for the prestorage measurement is sketched in fig.[setup ] .a prestorage vessel is filled with a defined and known number of ucn in a vessel prior to their release into a sample guide or directly into the detector .these stored ucn can then be directly measured with a detector mounted onto the vessel and hence be used as calibration .then , an additional ucn guide - the one to be tested - is mounted between the prestorage vessel and the detector and the measurement is repeated .the comparison of the integrated ucn counts in the two measurements is defined as the ucn transmission through the test guide for the given ucn energy spectrum .this measurement setup resembles a small version of the psi ucn source setup , where the neutrons may also be stored in an intermediate storage vessel .all ucn with kinetic energies above the material optical potential will be rapidly lost during storage , defining a reproducible energy spectrum .the prestorage vessel also influences the momentum direction distribution due to diffuse reflections .ucn passing long ucn guides have momenta peaked along the direction of the guide axis .after a sufficiently long storage period of tens of seconds this peak is largely reduced .a similar approach to determine guide transmission was followed by for a longer and geometrically more complicated ucn guide .this measurement and analysis was criticized by as an invalid method .the criticism is based on the fact that the prestorage vessel in the experiment was emptied through a small orifice , thus , the experiment was mainly comparing the storage time of the vessel , the outflow time and the measurement time . the fact that in our case the geometry is much simpler , i.e. the diameter of the ucn guideis not reduced in comparison to the storage vessel and there are no bends in the setup , makes the method primarily sensitive to the ucn transmission of the guide .besides a simple analysis , our measurements can be used to tune monte carlo simulations in order to describe realistic guide performances .our experiment was carried out at the pf2 facility of the institut laue - langevin ( ill ) using the edm beamline of the ucn - turbine .ucn are typically guided from the turbine port towards the experiment using electro - polished stainless steel tubes manufactured by nocado .the filling line included two 90 bends , which allow for an accurate setup alignment and significantly decrease the amount of ucn with kinetic energies above the material optical potential of steel .the nocado tubes used have an inner diameter of 66 mm . 
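the prestorage measurement described above reduces to a ratio of time-integrated counts: one calibration run without the test guide and one run with it inserted. a minimal sketch of the bookkeeping follows, with poisson counting uncertainties and, optionally, the per-meter value obtained by inverting the length scaling discussed in the introduction; the count numbers are invented purely for illustration.

    import math

    def guide_transmission(counts_with_guide, counts_calibration, length_m=None):
        # transmission = ratio of time-integrated ucn counts with / without the test guide
        t_total = counts_with_guide / counts_calibration
        # independent poisson counting statistics for the two runs
        sigma = t_total * math.sqrt(1.0 / counts_with_guide + 1.0 / counts_calibration)
        t_per_meter = t_total ** (1.0 / length_m) if length_m else None
        return t_total, sigma, t_per_meter

    # invented numbers, only to show the arithmetic
    t_total, sigma, t_m = guide_transmission(82_000, 100_000, length_m=2.0)
    print("T = %.3f +/- %.3f, per-meter transmission ~ %.3f" % (t_total, sigma, t_m))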
at the end of the guides ,the ucn enter the prestorage vessel through a stainless steel adapter flange mounted on customized vacuum shutters , special din-200 shutters from vat with inside parts coated with diamond - like carbon ( see sec.[vat : section ] ) .these shutters were later installed as beam ports of the psi ucn source .the measurement setup consists of a prestorage unit and a detection unit , which both remain unchanged during the measurements . in the calibration measurementa stainless steel adapter connects these two units ( fig.[setup]a ) .an additional test guide is mounted between these units in a transmission measurement .( fig.[setup]b ) .the prestorage unit is confined by shutter 1 and shutter 2 .the storage vessel is a tube made from duran , a borosilicate glass , with 180 mm inside diameter , 5 mm wall thickness .the tubes are sputter - coated on the inside with about 400 nm of nickel - molybdenum ( nimo ) , at a weight ratio of 85 to 15 , an alloy with a curie temperature well below room temperature .the use of the same surface coating in the prestorage vessel as in the guides shapes the ucn energy spectrum in a suitable way .the detector unit consists of a similar vat shutter ( no . 3 ) which is only used as a connector unit to the 2d-200 cascade - u detector via a 150 mm long nimo coated glass guide contained in a vacuum housing .the cascade - u detector is a gas electron multiplier ( gem)-based ucn detector using a 200 nm thin - film of deposited on the inside of the 0.1 mm almg3 entrance window of the detector to convert ucn to two charged particles ( and ) .these particles ionize the detector gas and 82% ar . ] .the charge is amplified by the gem foils and detected by a pixelated readout structure .the sensitive area of the detector covers the inner diameter of the glass guides .our standard sequence for transmission measurements has the following scheme:- wait for the ucn turbine signal ( shutter 1 open , shutter 2 closed);- fill the storage vessel for an optimized filling time of 30s;- close shutter 1;- store ucn for a preset storage time of 5s;- open shutter 2;- count ucn as a function of arrival time in the detector;- close shutter 2 after the measurement time;- open shutter 1 and wait again for ucn.shutter 3 stays permanently open during the entire measurement sequence and functions in transmission measurements only as connector piece .however , it is used in storage measurements .the large vat shutters used have opening / closing times of about 1s .the prestorage method is sensitive to the precise timing of shutter operations .we therefore measured the timing properties and the shutter opening function which influences the path of the ucn and their arrival at the detector .the shutter body is made of aluminum containing a moving part with a round opening and closing disc .all parts seen by ucn in the open and closed position , or during movement are coated with dlc .when the closing disc is retracted , the 25 mm gap at the center of the shutter is covered by an expanding ring to close the gap .[ vat - image - moving ] shows the vat opening displaying the frame of the moving part in an intermediate position .the shutter is air - actuated and controlled with electric valves .we have measured the timing between the closing of shutter 1 and opening of shutter 2 .this time defines the ucn storage time and hence the number of neutrons released into the sample .its accuracy is crucial for the reproducibility of the measurements . 
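the measurement cycle listed above is fixed by the slow control; the small sketch below ( my own encoding of the sequence, with the counting window left as a placeholder since the text only fixes the 30 s filling and 5 s storage times ) makes the timing of one prestorage cycle explicit.

```python
# one prestorage cycle as listed above; durations in seconds
# (filling and storage times from the text, counting window is a placeholder)
CYCLE = [
    ("wait for UCN turbine signal (shutter 1 open, shutter 2 closed)", 0.0),
    ("fill storage vessel", 30.0),
    ("close shutter 1", 0.0),
    ("store UCN", 5.0),
    ("open shutter 2", 0.0),
    ("count UCN arrival times in the detector", 100.0),   # placeholder
    ("close shutter 2", 0.0),
    ("open shutter 1 and wait for the next filling", 0.0),
]

def run_cycle(cycle=CYCLE):
    """Print the nominal start time of each step and return the cycle length."""
    t = 0.0
    for step, duration in cycle:
        print(f"t = {t:6.1f} s : {step}")
        t += duration
    return t

run_cycle()
```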
in a separate measurement with similar conditions concerning actuating pressure andenvironmental temperature we determined without neutrons the relative timing stability of shutter 1 and 2 .the time of the shutter end - switch signals with respect to the slow control start signal was measured .the resulting time differences were filled into a histogram with 1ms bins , shown in fig [ vat : jitter ] .note that the experiment was set to have exactly 5s time difference which was accurate on the 10 level .the standard deviation of a gaussian fit is 3.2ms , reflecting an excellent reproducibility of the opening and closing times .the opening of the shutter also partially obstructs the path of the ucn during the movement , which is reflected in the opening function .we used a bright lamp and a camera for the measurement of the light transmission passing or deflecting on the intercepting shutter parts , supposing a comparable opening function for light and ucn .fig.[vat - opening - frames ] shows a selected sequence of pictures showing the shutter during opening .fig.[vat - opening ] shows the resulting opening function measured with three different actuator pressure settings of 3 , 5 , and 7bar .no significant difference was observed for the different pressures .the standard calibration setup is used to determine the ucn transmission through the setup without a test guide .the detection unit is connected to the prestorage vessel using a special adapter , a 60 mm short stainless steel piece . in order to optimize the transmission properties for the ucn and to minimize the influence of this adapter on the measurement , it was made as short as possible and the inner surface was hand - polished .in addition , the standard setup was modified in such a way that shutter 3 and the stainless steel adapter were removed . by comparing the results from the standard calibration measurement and the modified calibration measurement one obtains the influence of the stainless steel flange on the total count rate . in section [ calibration : section ] we show these measurements agree within statistical uncertainties . 
hence , the influence of the adapter can safely be neglected .a photo of the setup installed at the ill edm beamline for the transmission measurements is shown in fig.[ill - setup - foto ] .the test guides were mounted in a custom vacuum housing between shutters 2 and 3 .as all the guides were designed to be installed at the psi ucn source short adapter pieces had to be manufactured in order to connect the guides to the vat shutters in the transmission measurements .all adapter pieces were made of stainless steel with a maximal length of 40 mm .the inside surfaces which act as neutron guides were hand polished to have negligible influence on the transmission measurements .care was taken to minimize any possible gaps between guides and adapters .two flexible bellows at both ends of the vacuum housing setup allowed to adapt the vacuum housing length to the total guide and adapter length .table [ guide : lengths ] states names , lengths and materials of the measured guides .their names refer to the subsequent placement at the psi ucn source which is shown in fig.[fig : guide - names ] .the given guide lengths include the stainless steel end flanges which are permanently glued to the glass guide to allow for a stable connection and minimal gap widths between various parts in the final installation at psi .in addition , the total length includes the additional stainless steel adapter .all guides have inner diameters of 180 mm , only the guides 2w1 and ta - w2 have inner diameters of 160 mm .all guides , glass and stainless steel , were coated on the inside with the same nimo coating with a weight ratio of 85 to 15 percent which is non - magnetic at room temperature .the small section with the ucn butterfly valve shown in fig .[ butterfly - valve ] is coated with diamond - like carbon ( dlc ) .guides 1s1 and 1w3 are similar guides with identical dimensions and properties . .guide names and corresponding lengths .names refer to locations for final mounting in the ucn source setup .the given guide lengths include the nimo coated stainless steel flanges glued onto the glass guides .the total lengths include also the stainless steel adapter pieces necessary to mount the guides in the setup .the material column defines the material of the tube , namely glass or stainless steel .the stainless steel guides include the part with the neutron valve . 1s1 and 1w3 are similar guides with identical dimensions and properties . [ cols="^,^,^,^,^,^",options="header " , ]we have developed and used a prestorage method to determine ucn transmission of tubular ucn guides .most importantly , the measurements provided a quality control for the ucn guides prior to their installation at the psi ucn source .the results show excellent ucn transmission of the investigated guides .the measurements have shown that all guide tubes have transmission values above 95 per meter .the glass guides , which are dominantly used in the psi ucn source , have transmissions above 98 per meter .this work is part of the ph.d .thesis of l. gltl .we would like to thank all people which contributed to design and construction of our experiment .anghel , p. bucher , u. bugman , m. mhr and the ami shop , f. burri , m. dubs , j. ehrat , m. horisberger , r. knecht , m. meier , m. mller and his shop , t. rauber , p. rttimann , r. schelldorfer , t. stapf , j. welte ( all psi ) ; t. brenner ( ill ) ; f. lang , h. hse ( s - dh ) ; j. stdler ( glasform gossau ) ; m. 
klein ( c - dt ) .tu munich ( e18 ) allowed the use of their table installed at pf2 .support by the swiss national science foundation projects 200020_137664 and 200020_149813 is gratefully acknowledged .= 10000 a. anghel , f. atchison , b. blau , b. van den brandt , m. daum , r. doelling , m. dubs , p .- a .duperrex , a. fuchs , d. george , l. gltl , p. hautle , g. heidenreich , f. heinrich , r. henneck , s. heule , t. hofmann , s. joray , m. kasprzak , k. kirch , a. knecht , j. konter , t. korhonen , m. kuzniak , b. lauss , a. mezger , a. mtchedlishvili , g. petzoldt , a. pichlmaier , d. reggiani , r. reiser , u. rohrer , m. seidel , h. spitzer , k. thomsen , w. wagner , m. wohlmuther , g. zsigmond , j. zuellig , k. bodek , s. kistryn , j. zejma , p. geltenbort , c. plonka , s. grigoriev , the psi ultra - cold neutron source , nuclear instruments and methods in physics research section a 611 ( 2009 ) 272275 . c. baker ,g. ban , k. bodek , m. burghoff , z. chowdhuri , m. daum , m. fertl , b. franke , p. geltenbort , k. green , m. van der grinten , e. gutsmiedl , p. harris , r. henneck , p. iaydjiev , s. ivanov , n. khomutov , m. kasprzak , k. kirch , s. kistryn , s. knappe - gruneberg , a. knecht , p. knowles , a. kozela , b. lauss , t. lefort , y. lemiere , o. naviliat - cuncic , j. pendlebury , e. pierre , f. piegsa , g. pignol , g. quemener , s. roccia , p. schmidt - wellenburg , d. shiers , k. smith , a. schnabel , l. trahms , a. weis , j. zejma , j. zenner , g. zsigmond , the search for the neutron electric dipole moment at the paul scherrer institute , physics procedia 17 ( 0 ) ( 2011 ) 159 167 , 2nd international workshop on the physics of fundamental symmetries and interactions - psi2010 .s. afach , c. baker , g. ban , g. bison , k. bodek , m. burghoff , z. chowdhuri , m. daum , m. fertl , b. franke , p. geltenbort , k. green , m. van der grinten , z. grujic , p. harris , w. heil , v. helaine , r. henneck , m. horras , p. iaydjiev , s. ivanov , m. kasprzak , y. kermaidic , k. kirch , a. knecht , h .- c .koch , j. krempel , m. kuzniak , b. lauss , t. lefort , y. lemiere , a. mtchedlishvili , o. naviliat - cuncic , j. pendlebury , m. perkowski , e. pierre , f. piegsa , g. pignol , p. prashanth , g. quemener , d. rebreyend , d. ries , s. roccia , p. schmidt - wellenburg , a. schnabel , n. severijns , d. shiers , k. smith , j. voigt , a. weis , g. wyszynski , j. zejma , j. zenner , g. zsigmond , a measurement of the neutron to 199hg magnetic moment ratio , physics letters b 739 ( 2014 ) 128132 .s. afach , g. ban , g. bison , k. bodek , m. burghoff , m. daum , m. fertl , b. franke , z. grujic , v. helaine , m. kasprzak , y. kermaidic , k. kirch , p. knowles , h .- c .koch , s. komposch , a. kozela , j. krempel , b. lauss , t. lefort , y. lemiere , a. mtchedlishvili , o. naviliat - cuncic , f. piegsa , g. pignol , p. prashanth , g. quemener , d. rebreyend , d. ries , s. roccia , p. schmidt - wellenburg , a. schnabel , n. severijns , j. voigt , a. weis , g. wyszynski , j. zejma , j. zenner , g. zsigmond , constraining interactions mediated by axion - like particles with ultracold neutrons , physics letters b 745 ( 2015 ) 5863 .v. nesvizhevsky , polished sapphire for ultracold - neutron guides , nuclear instruments and methods in physics research section a : accelerators , spectrometers , detectors and associated equipment 557 ( 2 ) ( 2006 ) 576579 . c. plonka , p. geltenbort , t. soldner , h. 
haese , replika mirrors - nearly loss - free guides for ultracold neutrons - measurement technique and first preliminary results , nuclear instruments and methods in physics research section a 578 ( 2 ) ( 2007 ) 450452 .i. altarev , a. frei , p. geltenbort , e. gutsmiedl , f. hartmann , a. mueller , s. paul , c. plonka , d. tortorella , a method for evaluating the transmission properties of ultracold - neutron guides , nuclear instruments and methods in physics research section a 570 ( 1 ) ( 2007 ) 101106 .a. frei , k. schreckenbach , b. franke , f. hartmann , t. huber , r. picker , s. paul , p. geltenbort , transmission measurements of guides for ultra - cold neutrons using ucn capture activation analysis of vanadium , nuclear instruments and methods in physics research section a 612 ( 2 ) ( 2010 ) 349353 .m. daum , b. franke , p. geltenbort , e. gutsmiedl , s. ivanov , j. karch , m. kasprzak , k. kirch , a. kraft , t. lauer , b. lauss , a. mueller , s. paul , p. schmidt - wellenburg , t. zechlau , g. zsigmond , transmission of ultra - cold neutrons through guides coated with materials of high optical potential , nuclear instruments and methods in physics research section a 741 ( 0 ) ( 2014 ) 7177 .f. atchison , m. daum , r. henneck , s. heule , m. horisberger , m. kasprzak , k. kirch , a. knecht , m. kuzniak , b. lauss , a. mtchedlishvili , m. meier , g. petzoldt , c. plonka - spehr , r. schelldorfer , u. straumann , g. zsigmond , diffuse reflection of ultracold neutrons from low - roughness surfaces , the european physical journal a 44 ( 1 ) ( 2010 ) 2329 .f. atchison , t. brys , m. daum , p. fierlinger , p. geltenbort , r. henneck , s. heule , m. kasprzak , k. kirch , a. pichlmaier , c. plonka , u. straumann , c. wermelinger , g. zsigmond , loss and spinflip probabilities for ultracold neutrons interacting with diamondlike carbon and beryllium surfaces , phys .c 76 ( 4 ) ( 2007 ) 044001 .a. steyerl , h. nagel , f .- x .schreiber , k .- a .steinhauser , r. gaehler , w. glaeser , p. ageron , j. astruc , w. drexel , g. gervais , w. mampe , a new source of cold and ultracold neutrons , physics letters a 116 ( 7 ) ( 1986 ) 347 352 .f. atchison , t. brys m. daum , p. fierlinger , p. geltenbort , r. henneck , s. heule , m. kasprzak , k. kirch , a. pichlmaier , c. plonka , u. straumann , c. wermelinger , first storage of ultracold neutrons using foils coated with diamond - like carbon , physics letters b 625 ( 2005 ) 1925 .f. atchison , b. blau , m. daum , p. fierlinger , a. foelske , p. geltenbort , m. gupta , r. henneck , s. heule , m. kasprzak , m. kuzniak , k. kirch , m. meier , a. pichlmaier , c. plonka , r. reiser , b. theiler , o. zimmer , g. zsigmond , diamond - like carbon can replace beryllium in physics with ultracold neutrons , phys .b 642 ( 2006 ) 2427 .f. atchison , a. bergmaier , m. daum , m. doebeli , g. dollinger , p. fierlinger , a. foelske , r. henneck , s. heule , m. kasprzak , k. kirch , a. knecht , m. kuzniak , a. pichlmaier , r. schelldorfer , g. zsigmond , surface characterization of diamond - like carbon for ultracold neutron storage , nuclear instruments and methods in physics research section a 587 ( 1 ) ( 2008 ) 8288 .j. bertsch , l. gltl , k. kirch , b. lauss , r. zubler , neutron radiation hardness of vacuum compatible two - component adhesives , nuclear instruments and methods in physics research section a 602 ( 2 ) ( 2009 ) 552556 .
|
there are worldwide efforts to search for physics beyond the standard model of particle physics. precision experiments using ultracold neutrons ( ucn ) require very high intensities of ucn. efficient transport of ucn from the production volume to the experiment is therefore of great importance. we have developed a method using prestored ucn in order to quantify ucn transmission in tubular guides. this method simulates the final installation at the paul scherrer institute's ucn source, where neutrons are stored in an intermediate storage vessel serving three experimental ports. this method allowed us to qualify ucn guides for their intended use and compare their properties.
keywords: ultracold neutron, neutron transmission, neutron transport, neutron guide, ultracold neutron source
pacs: 28.20.gd, 28.20.-v, 29.25.dz, 61.80.hg
|
the motivation for this work comes from our attempts to create novel metrics for quantifying , comparing and cataloging large sets of complicated varying geometric patterns .random fields ( for a general background , see , as well as the references therein ) provide a framework in which to approach these problems and have , over the last few decades , emerged as an important tool for studying spatial phenomena which involve an element of randomness . for the types of applications , we have in mind , we are often satisfied with a topological classification of sub- or super - level sets of a scalar function .algebraic topology , and in particular homology , can be used in a computationally efficient manner to coarsely quantify these geometric properties . in past work , we developed a probabilistic framework for assessing the correctness of homology computations for random fields via uniform discretizations .the approach considers the homology of nodal domains of random fields which are given by classical fourier series in one and two space dimensions , and it provides explicit and sharp error bounds as a function of the discretization size and averaged sobolev norms of the random field . while we do not claim it is trivial there are complicated combinatorial questions that need to be resolved we believe that it is possible to extend the methods and hence the results of to higher - dimensional domains . where are independent standard gaussian random variables . in the left diagram , we consider random periodic functions , that is , the basis functions are given by and , in the right diagram they are the chebyshev polynomials . in each case , we choose . ]the more serious restriction in is the use of periodic random fields , which due to the fact that the associated spatial correlation function is homogeneous , simplifies many of the estimates . in general, however , one expects to encounter nonhomogeneous random fields . in such cases, it seems unreasonable to expect that uniform sampling provides the optimal choice .for example , in figure [ figsample ] , three sample functions each are shown for a random sum involving periodic basis functions and chebyshev polynomials .as one would expect , the zeros of the random chebyshev sum are more closely spaced at the boundary , and therefore small uniform discretization are most likely not optimal for determining the topology of the nodal domains . 
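to make the clustering of the zeros concrete, the following python sketch — my own illustration, not code from the paper — draws one random chebyshev sum with independent standard gaussian coefficients and locates its real zeros in [ -1 , 1 ]; the gaps between neighboring zeros shrink towards the endpoints of the interval, which is exactly why a uniform discretization is wasteful there.

```python
import numpy as np
from numpy.polynomial import chebyshev as C

rng = np.random.default_rng(0)
n_terms = 50                               # arbitrary number of basis functions
coeffs = rng.standard_normal(n_terms)      # i.i.d. standard Gaussian coefficients

roots = C.chebroots(coeffs)                # zeros of sum_k g_k T_k(x)
zeros = np.sort(roots[np.abs(roots.imag) < 1e-9].real)
zeros = zeros[(zeros >= -1.0) & (zeros <= 1.0)]

gaps = np.diff(zeros)                      # spacing between consecutive zeros
print(len(zeros), "real zeros in [-1, 1];",
      "smallest gap", gaps.min(), "largest gap", gaps.max())
```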
with this as motivation , we allow for a more general sampling technique .we remark that because of the subtlety of some of the necessary estimates we restrict our attention in this paper to one - dimensional domains .[ defcubappr ] consider a compact interval \subset\mathbb{r} ] , and a function \to\mathbb { r} ] _ is a collection of grid points and we define in the following .the _ cubical approximations _ of the _ generalized nodal domains of _ are defined as the sets \dvtx \pm\bigl ( ( u - \mu)(x_{k } ) \bigr ) \ge0 , k = 0,\ldots , m \bigr\ } .\ ] ] given a subset ] over the probability space .we are interested in optimally characterizing the topology , that is , determining the number of components , of the nodal domains in terms of the cubical approximations .in other words , our goal is to choose the -discretization of ] , we have .the random field is such that \ } = 0 ] and with ] as well as a constant such that for all ] we have in section [ seclocal ] , we prove the following result .[ thmtopsamp ] consider a probability space , a continuous threshold function \to\mathbb{r} ] over such that for -almost all the function \to\mathbb{r} ] over a probability space , we define its _ spatial correlation function _ ^ 2 \to\mathbb{r} ] over a probability space such that \to\mathbb{r} ] the expected value of satisfies 2 .the spatial correlation function is three times continuously differentiable in a neighborhood of the diagonal and the matrix is positive definite for all ] satisfying and , and a threshold function \to \mathbb{r } ] .finding the density of the zeros of random fields has been studied in a variety of settings , see , for example , , as well as the references therein .the following theorem can be found in , ( 13.2.1 ) , page 285 .[ thmdenszero ] consider a gaussian random field \times\omega\to\mathbb{r} ] the expected number of zeros of in is given by . while theorem [ thmdenszero ] has been known for quite some time ,its implications are surprising .as is demonstrated through examples in section [ secappl ] there is no simple discernible relationship between the function of theorem [ thmcorrsamp ] and the density function .as is made clear at the beginning of this introduction , our motivation is to develop optimal sampling methods for the analysis of complicated time - dependent patterns .thus , before turning to the proofs of the above - mentioned results , we begin , in section [ secappl ] , with demonstrations of possible applications and implications of theorem [ thmcorrsamp ] . in particular , we consider several random generalized fourier series \times\omega\to\mathbb{r} ] , , denotes a family of smooth functions and we assume that the gaussian random variables , , are defined over a common probability space with mean .we conclude the paper with a general discussion of future work concerning natural generalizations to higher dimensions .to demonstrate the applicability and implications of theorem [ thmcorrsamp ] , we consider in this section several random generalized fourier series \times\omega\to\mathbb{r} ] , , denote a family of smooth functions and we assume that the random variables , , are gaussian with vanishing mean , and defined over a common probability space .we would like to point out that these random variables do not need to be independent , and we define then one can easily show that if in addition the random variables are pairwise independent , then we have where for all . 
one can show that this diagonalization can always be achieved for gaussian random fields , provided the basis functions are chosen appropriately . for more details ,we refer the reader to , theorems 3.1.1 and 3.1.2 , lemma 3.1.4 . within the above framework of random generalized fourier series, we specifically consider several classes : * _ random chebyshev polynomials _ \times\omega \to\mathbb{r} ] of the form * _ random -periodic functions _ of the form \\[-8pt ] \eqntext{\mbox{with } \mathbb{e } ( g_k g_\ell ) = \delta_{k,\ell},}\end{aligned}\ ] ] with real constants .* _ random polynomials _ \times\omega\to\mathbb{r} ] _ with gaussian coefficients of unit variance _ of the form as is indicated in section [ secintro ] , we assume that all the random coefficients are centered gaussian random variables over a common probability space .we begin our applications by thresholding sample random sums at their expected value , that is , we use the threshold function . in this particular case, the function defined by ( [ thmcorrsamp1 ] ) in theorem [ thmcorrsamp ] simplifies to since both and vanish .for the case of random chebyshev polynomials ( [ exche ] ) , the left diagram in figure [ figcheb ] shows three normalized sample functions for .the right diagram shows the expected number of zeros of the random chebyshev polynomials as a function of ( red curve ) , which grows proportional to .thus , in order to sample the random field sufficiently fine , we expect to use significantly more than discretization points .the blue curve in the right diagram of figure [ figcheb ] shows the values of for which the bound in ( [ thmcorrsamp2 ] ) of theorem [ thmcorrsamp ] implies a correctness probability of , and a least squares fit of this curve furnishes . for comparison ,the green curve in the same diagram shows the values of for which the bound in our previous result ( , theorem 1.4 ) implies a correctness probability of , provided we apply this theorem with given as the } \mathcal{c}_0(x) ] and a continuous threshold \to \mathbb{r} ] , for .this is accomplished using the following framework which goes back to dunnage .[ defdco ] a continuous function \to\mathbb{r} ] _ , if for one choice of the sign .[ defadmint ] let \to\mathbb{r} ] are defined as the _ dyadic subintervals _ of ] for all and . *the interval \subset[a , b] ] .it was shown in that the concept of admissibility implies the suitability of our nodal domain approximations .more precisely , the following is a slight rewording of , proposition 2.5 .[ propvc ] let \to\mathbb{r} ] be a continuous threshold function .let denote the generalized nodal domains of , and let denote their cubical approximations as in definition [ defcubappr ] .furthermore , assume that the following hold : a. the function is nonzero at all grid points , for . b. the function has no double zero in , that is , if is a zero of , then attains both positive and negative function values in every neighborhood of .c. 
for every , the interval ] , and a random field \times\omega\to\mathbb{r} ], then \mbox { is not admissible for } u-\mu ) \nonumber\\[-8pt]\\[-8pt ] & & \qquad \leq \frac{4 \mathcal{c}_0(x)}{3 } \cdot\delta^3 + \biggl ( \frac{4l}{3 } + \frac{8\mathcal{c}_1}{7}\biggr ) \cdot\delta ^4,\nonumber\end{aligned}\ ] ] where \} ] is not admissible , then the function has a double crossover on one of its dyadic subintervals .if we now denote the dyadic points in by as in definition [ defadmint ] , then together with ( a3 ) one obtains the estimate since is continuously differentiable , we can define \} ] . the motivation for theorem [ thmtopsamp ] is now clear : condition ( [ heur3 ] ) suggests that for , the optimal estimate can be achieved by choosing the sampling points in an equi--area fashion , since the term approximates the intergral of over ] . furthermore , let } |d\mathcal{c}_0^{1/3 } / dx| ] satisfying and .for and sufficiently small values of , define the random vector via then is a centered gaussian random variable with positive definite covariance matrix .moreover , if we denote the eigenvalues of by , then where we use the notation introduced in , and .in addition , we can choose the normalized eigenvectors , and corresponding to these eigenvalues in such a way that finally , for a -function \to\mathbb{r} ] due to ( g2 ) .this immediately implies ( a1 ) .furthermore , ( a2 ) follows readily from , theorem 3.2.1 .thus , in order to apply theorem [ thmtopsamp ] we only have to verify ( a3 ) . for this, we apply corollary [ corptoola ] with and sign vector .fix and consider the -dependent three - dimensional random vector defined in ( [ lemcorrsamp1 ] ) .then according to lemma [ lemcorrsamp ] , this random vector satisfies all of the assumptions of proposition [ propptoola ] and corollary [ corptoola ] with as well as applying corollary [ corptoola ] , we then obtain where we used the formula for given in ( [ corptoola2 ] ) . in combination with the above expansions for and , this limit furnishes thus , assumption ( a3 ) is satisfied with , and theorem [ thmcorrsamp ] follows now immediately from theorem [ thmtopsamp ] .at first glance , the title of this paper may appear somewhat misleading or more ambitious than the results delivered .after all , the techniques of proof are based on classical probabilistic arguments . however , the results are new and the examples of section [ secappl ] demonstrate that they have interesting nonintuitive implications .a reasonable question is why were these results not discovered sooner .we believe that the answer comes from the fact that we are approaching the problem of optimal sampling from the point of view of trying to obtain topological information .this point of view had been taken previously in the work of adler and taylor .their main focus , however , was the estimation of excursion probabilities , that is , the likelihood that a given random function exceeds a certain threshold . in , it is shown that such excursion probabilities can be well - approximated by studying the geometry of random sub- or super - level sets of random fields . 
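as noted above, condition ( [ heur3 ] ) suggests choosing the grid so that the integral of c_0^{1/3} over each subinterval is the same. the sketch below — an illustration under simplifying assumptions of my own ( a constant zero threshold, a placeholder density, and a random chebyshev sum standing in for the random field ) — builds such an equi-area grid and then counts the components of the cubical approximations of the two generalized nodal domains from the signs at the grid points.

```python
import numpy as np
from numpy.polynomial import chebyshev as C

rng = np.random.default_rng(1)

def equi_area_grid(density, a, b, m, n_fine=20_000):
    """Grid a = x_0 < ... < x_m = b with equal integral of `density`
    (standing in for C_0(x)**(1/3)) over every subinterval."""
    x = np.linspace(a, b, n_fine)
    w = density(x)
    cdf = np.concatenate(([0.0], np.cumsum(0.5 * (w[1:] + w[:-1]) * np.diff(x))))
    cdf /= cdf[-1]
    # invert the normalized cumulative integral at equally spaced levels
    return np.interp(np.linspace(0.0, 1.0, m + 1), cdf, x)

def count_runs(mask):
    """Number of maximal runs of True, i.e. of components of one cubical approximation."""
    m = np.asarray(mask, dtype=int)
    return int(m[0] + np.count_nonzero(np.diff(m) == 1))

# placeholder density: zeros of random Chebyshev sums cluster roughly like 1/sqrt(1 - x^2)
grid = equi_area_grid(lambda x: (1.0 - x**2 + 1e-3) ** -0.5, -1.0, 1.0, m=200)

u = C.chebval(grid, rng.standard_normal(50))   # random Chebyshev sum at the grid points
pos_components = count_runs(u >= 0.0)          # components of the approximate set {u >= 0}
neg_components = count_runs(u <= 0.0)          # components of the approximate set {u <= 0}
print(pos_components, neg_components)
```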
more precisely , it is shown that the expected value of the euler characteristic of super - level sets approximates excursion probabilities for large values of the threshold , and that it is possible to derive explicit formulas for the expected values of the euler characteristic and other intrinsic volumes of nodal domains of random fields .all of the above results concern the intrinsic volumes of the nodal domains which are additive set functionals , and therefore computable via local considerations alone .in contrast , in previous work we have demonstrated that the homological analysis of patterns of nodal sets can uncover phenomena that can not be captured using for example only the euler characteristic .the more detailed information on the geometry of patterns encoded in homology is an inherently global quantity and can not be computed through local considerations alone . on the other hand ,recent computational advances allow for the fast computation of homological information based on discretized nodal domains .for this reason , we focus on the interface between the discretization and the underlying nodal domain , rather than the homology of the nodal domain directly , and then quantify the likelihood of error in the probabilistic setting . in this sense ,our approach complements the above - mentioned results on the geometry of random fields by adler and taylor .given the current activity surrounding the ideas of using topological methods for data analysis and remote sensing , we believe the importance of this perspective will grow .thus , the title of our paper is chosen in part to encourage the interested reader to consider the natural generalizations of this work to higher - dimensional domains where the question becomes one of optimizing the homology of the generalized nodal sets in terms of homology computed using a complex derived from a nonuniform sampling of space .
|
topological measurements are increasingly being accepted as an important tool for quantifying complex structures . in many applications , these structures can be expressed as nodal domains of real - valued functions and are obtained only through experimental observation or numerical simulations . in both cases , the data on which the topological measurements are based are derived via some form of finite sampling or discretization . in this paper , we present a probabilistic approach to quantifying the number of components of generalized nodal domains of nonhomogeneous random processes on the real line via finite discretizations , that is , we consider excursion sets of a random process relative to a nonconstant deterministic threshold function . our results furnish explicit probabilistic a priori bounds for the suitability of certain discretization sizes and also provide information for the choice of location of the sampling points in order to minimize the error probability . we illustrate our results for a variety of random processes , demonstrate how they can be used to sample the classical nodal domains of deterministic functions perturbed by additive noise and discuss their relation to the density of zeros . .
|
decoherence is the process by which quantum systems lose their coherence information by coupling to the environment .the quantum system entangles to the states of the environment and the system density matrix can be diagonalized in a preferred basis states for the environment , dictated by the model of interaction hamiltonian .decoherence is now the biggest stumbling block towards exploitation of quantum speedup using finite quantum systems in information processing .many authors have addressed the control and suppression of decoherence in _ open - quantum systems _ by employing a variety of open loop and feedback strategies .effect of decoherence suppression under arbitrarily fast open loop control was studied by viola et al .another method along similar lines for control of decoherence by open - loop multipulses was studied by uchiyama et .al. . a very illustrating example of decoherence of single qubit system used in quantum information processing and its effective control using pulse methodwas worked out by protopopescu et al .shor and calderbank also came up with interesting error - correction schemes for detecting and reducing effects of decoherence on finite quantum registers . recentlymany authors have also studied the application of feedback methods in control of decoherence, .technological advances enabling manipulation , control of quantum systems and recent advances in quantum measurements using weak coupling , non - demolition principles etc , has opened up avenues for employing feedback based control strategies for quantum systems ,, . in this workwe analyze the effectiveness of feedback method in eliminating decoherence . a wave function approach as opposed to density matrices for the schrdinger equationis adopted which represents the system in an input - affine form and greatly enables one to exploit methodologies from systems theory .we first analyze what it means for a complex scalar function to be invariant of certain parameters .the generality of the treatment adopted here makes all types of quantum systems amenable to the results .it is also shown here that analysis of invariance of quadratic forms also lead to decoherence free subspaces ( dfs ) for the open quantum systems but from a different and general perspective .dfs was first shown to exist by lidar et al by analysis of markovian master equation for open quantum systems that naturally gives rise to subspaces that are immune to the effects of decoherence namely dissipation and loss of coherence .we explore the conditions for a scalar function represented by a quadratic form of a time varying quantum control system to be invariant of perturbation or interaction hamiltonian when coupled to a quantum environment .let \xi(t , x ) \label{opqusys}\end{aligned}\ ] ] be the governing schrodinger equation for a quantum system interacting with the environment .+ be the system s hilbert space .+ be the environment s hilbert space .+ could be finite or infinite dimensional and is generally infinite dimensional .+ be the wave function of the system and environment .+ and are respectively the drift hamiltonian of the system and environment while s are the control hamiltonian of the system . 
governs the interaction between the system and the environment .the above hamiltonian are assumed to be time varying and dependent on the spatial variable .consider a scalar function ( typically the expected value of an observable ) of the form , where again is assumed to be time - varying operator acting on system hilbert space .the above is the general form of a time dependent quantum system and we wish to study the invariance properties of the function with respect to the system dynamics .let be a complex scalar map of the system as a function of the control functions and interaction hamiltonian over a time interval .the function is said to be invariant of the interaction hamiltonian if for all admissible control functions and a given interaction hamiltonian .let be the manifold contained in the hilbert space on which the dynamics of the system is described .it could be a finite or infinite dimensional submanifold of , the unit sphere on the collective hilbert space .the quantum system is assumed to be governed by time varying hamiltonian and it is known that the system evolves on an analytic manifold , which is dense in and a submanifold of the unit sphere by nelson s theorem .recent analysis of controllability criteria and reachability properties of states as studied by schirmer et.al provides insight into behavior of quantum control systems on finite dimensional manifolds in .the controllability under various realistic potentials was also studied by dong et.al .however the analysis of time - varying systems carried out here assumes in general that the component hamiltonian operators carry explicit time dependence which is not under the control of an external agent .and we do so by introducing a time invariant system in the augmented state space domain .a similar scheme was also used by lan et.al to study controllability properties of such time - varying quantum systems .let , the new equation governing the evolution of the system can be written as , with , the vector fields and corresponding to drift , control and interaction can be identified to contribute to the dynamical evolution .[ opinvlem ] consider the quantum control system ( [ augsys ] ) and suppose that the corresponding output given by equation ( [ ndo ] ) is invariant under given .then for all integers and any choice of vector fields in the set we have before proving the above lemma it is useful to consider a simple extension .consider for a fixed number vector fields , with fixed and from the previous condition , combining the above conditions we get where . by finite mathematical induction over all the variables we can replace the vector fields with vector fields in .hence one can show that the previous condition is equivalent to the requirement that for all and any choice of vector fields of the form where , stand for the set of admissible control functions .+ _ proof _ now let be invariant under .then for small by equation ( [ cond9 ] ) where are of the form ( [ zform ] ) and are given by , and , the one parameter group of flow of the vector field .the left hand side of equation ( [ cond5 ] ) is the output for while the right hand side is for an arbitrary . differentiating both sides of ( [ cond5 ] ) with respect to at respectively yields , for all .now for the above equation yields , since , and using the above equation we can conclude , which is same as the equation ( [ cond3 ] ) for .again in general by induction we obtain , and using equation([ztildeform ] ) this yields , for all of the form ( [ zform ] ) . 
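at first order, the necessary conditions above reduce to the vanishing of expectation values of commutators of the output operator with the drift, control and interaction hamiltonians, of the type ⟨ξ|[c , h]|ξ⟩ computed above and below. the following numpy sketch checks that commutator condition for a toy dephasing model of my own choosing ( one system qubit coupled to one environment qubit through a σ_z ⊗ σ_z interaction ) and is not code from the paper: an output operator built from σ_z commutes with the coupling, so its first-order lie derivative vanishes for every state, while one built from σ_x does not.

```python
import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
I2 = np.eye(2, dtype=complex)

def comm(a, b):
    """Matrix commutator [a, b]."""
    return a @ b - b @ a

# toy system-environment coupling: pure dephasing of the system qubit
H_sb = np.kron(sz, sz)

# candidate output operators C acting on the system factor only
candidates = {"sz (x) I": np.kron(sz, I2), "sx (x) I": np.kron(sx, I2)}

for name, Cop in candidates.items():
    # ||[C, H_sb]|| = 0 implies <xi|[C, H_sb]|xi> = 0 for every state |xi>
    print(name, "-> ||[C, H_sb]|| =", round(np.linalg.norm(comm(Cop, H_sb)), 6))
```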
the sufficient condition for output invariancehowever requires a stronger condition of analyticity of the system .lemma [ opinvlem ] implies that the necessary conditions for output invariance are , for and , where are the vector fields of the augmented system and , the interaction vector field .the previous condition can also be restated thus , where span .the above restatement might be helpful in simplifying calculations for lie derivatives .suppose the system ( [ augsys ] ) is analytic , then is invariant under given if and only if ( [ cond6 ] ) is satisfied ._ proof _ consider a sequence of arbitrary control functions in .let and two time instances satisfying .we can then write , for some index variables such that and some such that and .let be the state of the system in the augmented manifold and let and be the state map of the quantum control system in the absence and presence of respectively , where is the initial state at time . define a smooth function on the augmented manifold .making use of following relation , without loss of generality , considering a piecewise constant control set the term inside the integral can be written as , where s are of the form ( [ zform ] ) . since the system was assumed to be analytic we can write , for some small such that the summation converges .the remaining terms can be expanded in the same way for any , since equation ( [ int ] ) is zero for any and any given sequence of control functions it follows that the individual terms in the summation vanish yielding condition ( [ cond7 ] ) and hence as a consequence condition ( [ cond6 ] ) has to hold ._ calculation of lie derivatives _ the lie derivatives in the above cases can be calculated for the special case when . for instructional purposes we present here two ways for calculating lie derivatives of the output with respect to the vector fields of augmented system , where is the co - vector field corresponding to the vector field , skew hermitian , , are conjugate variables and assumed to be independent of and in calculations .therefore , hence | \xi \rangle \nonumber\end{aligned}\ ] ] now consider , | \xi \rangle \nonumber\end{aligned}\ ] ] the variable is replaced with as it was only a dummy variable used for calculations .another approach follows directly from the geometrical interpretation of lie derivatives of scalar functions , with only the vector field turned on for ( i.e ) from straight forward calculations one obtains , | \xi \rangle\end{aligned}\ ] ] and similarly | \xi \rangle ] in general for decoupling . since the condition is true for any and any and since the vector space of bounded linear operators is complete we have =\sum_{i=0}^\infty \alpha_i[h_{sb},t_i]=0 ] which is true only when = 0 ] with vanishing higher order commutators .hence and since + { \partial c}/{\partial t } = 0 ] now translates to or nontrivially , , or that the two words have equal number of . 
the above calculations are valid for any finite , a specific example for is .of particular interest are terms like and as the corresponding which is a function of the coherence between the basis states and is predicted to be invariant under the interaction .it is worth noting that the operator acting on system hilbert space here need not necessarily be hermitian and only describes preserved information in a loose sense ._ decoherence in the presence of control : _ in the presence of the external controls , the invariance condition is no longer satisfied for the operator as ,\sigma_3^{(j)}]\neq 0 ] and the last two lines belong to .the above calculation can be extended to any number of terms to encompass the result . in generalone finds that , in the presence of feedback terms the condition for decouplability is relaxed to \subset\tilde{\mathcal{c}}(t ) \label{condfb}\ ] ] in order to solve eq.([egeq ] ) and consequently ( [ condfb ] ) for the feedback parameters , it has to be noted that the first two lines and last two lines of eq.([egeq ] ) denote operators acting on different hilbert spaces , namely the system - environment and just the system respectively and the two terms can not be reconciled unless they vanish individually which leads us back to original conditions for open loop invariance .in other words , in order for the feedback to be an effective tool in solving the decoherence problem , the control hamiltonians have to act non - trivially on both the hilbert spaces which would enable all the operators in ( [ egeq ] ) act on system - environment hilbert space .as stated above the fundamental conditions for invariance were , where .we now explore a larger class of vector fields containing that also satisfy the above conditions , i.e , set of such vector fields form a vector space or a distribution and constitute a invariant distribution in the sense described by the following theorems .+ _ definition _ the vector field satisfying equations ( [ invsp ] ) is said to be in the orthogonal subspace of the observation space spanned by the co - vector fields for all and .denoted by the distribution is invariant with respect to the vector fields under the lie bracket operation .( i.e ) if , then \in \mathcal{o}^\perp ] for can be computed as follows , = \left [ \begin{array}{cc } 0 & 0 \\ \dot{h}_i |\xi\rangle & h_i \end{array } \right ] \left(\begin{array}{c } 0 \\ h_\tau |\xi\rangle\end{array}\right ) \\ & - \left[\begin{array}{cc } 0 & 0 \\ \dot{h}_\tau |\xi\rangle & h_\tau \end{array } \right ] \left(\begin{array}{c } 0 \\ h_i|\xi\rangle \end{array } \right ) = \left ( \begin{array}{c } 0 \\ % need dummy parenthesis before lie bracket .. gives argument error otherwise .. 
{ }[ h_\tau , h_i ] |\xi \rangle \end{array } \right ) \end{aligned}\ ] ] now using jacobi identity , }y(t ) & = \langle\xi|[c,[h_\tau , h_i]]|\xi\rangle\\ & = -\langle\xi|[h_\tau,[h_i , c]]|\xi\rangle -\langle\xi|[h_i,[c , h_\tau]]|\xi\rangle\\ & = -l_{k_\tau}l_{k_i}y(t)-l_{k_i}l_{k_\tau}y(t)\\ & = 0\end{aligned}\ ] ] now for and we have , \\ & = \left[\begin{array}{cc } 0 & 0 \\ 0 & h_0 \end{array } \right ] \left(\begin{array}{c } 0 \\ h_\tau |\xi\rangle \end{array}\right ) - \left[\begin{array}{cc } 0 & 0 \\ \dot{h}_\tau | \xi \rangle & h_\tau \end{array } \right ] \left(\begin{array}{c } 1 \\ h_0|\xi\rangle \end{array } \right ) \nonumber \\ & = \left ( \begin{array } { c } 0 \\ ( [ h_\tau , h_0 ] -\dot{h}_\tau ) | \xi \rangle \end{array } \right ) \nonumber \\ & l_{[k_\tau , k_0]}y(t ) = \langle\xi|[c,[h_\tau , h_0 ] ] - [ c,\dot{h}_\tau]|\xi\rangle \label{inveq}\end{aligned}\ ] ] we already have , +[[c , h_0],h_\tau]|\xi\rangle=0\\ l_{k_0}l_{k_\tau}y(t,\xi ) & = & \langle\xi|\frac{d}{dt}[c , h_\tau]+[[c , h_\tau],h_0]|\xi\rangle=0\end{aligned}\ ] ] adding the above equations and using jacobi identity we conclude that \in \mathcal{o}^\perp$ ] .we analyzed the conditions for eliminating the effects of decoherence on quantum system whose coherence can be monitored in the form of a scalar output equation .the results hold globally on the analytic manifold .the invariant distributions possess many desirable qualities and helps in control of decoherence .we wish to construct an algorithm to determine the invariant distribution for a given quantum system and its interactions .design and study of feedback and analysis of the resulting stability for quantum control system will help us solve the decoherence problem for practical quantum systems .the results can be extended and conditions can be derived for different types of measurements and information extraction schemes .this research was supported in part by the u. s. army research office under grant w911nf-04 - 1 - 0386 .t. j. tarn would also like to acknowledge partial support from the china natural science foundation under grant number 60433050 and 60274025 .the authors would also like to thank the reviewers for their invaluable comments and suggestions .c m caves , k s thorne , r w p drewer , v d sandberg and m zimmerman , on the measurement of a weak classical force coupled to a quantum - mechanical oscillator " , _ rev . of mod ._ , * 52(2 ) * , part i , 341 , 1980 .
|
quantum feedback is assuming increasingly important role in quantum control and quantum information processing . in this work we analyze the application of such feedback techniques in eliminating decoherence in open quantum systems . in order to apply such system theoretic methods we first analyze the invariance properties of quadratic forms which corresponds to expected value of a measurement and present conditions for decouplability of measurement outputs of such time - varying open quantum systems from environmental effects .
|
understanding materials response to irradiation is one of the key issues for the design and operation of nuclear fission and fusion power systems . lifetime and operational restrictions are typically influenced by limitations associated with the degradation of structural materials properties under radiation .the ability to accurately predict and understand behavior of structural materials under short term and long term radiation damage is thus important for designing improved materials for nuclear power applications .extensive computational research has been performed in order to better understand radiation damage in widely used structural materials , including ferritic alloys based on body - centered cubic ( bcc ) iron ( fe ) .common areas of interest with regards to radiation damage simulations have been general damage behavior , variations in primary knock - on atom ( pka ) energy , variations in simulation temperature during irradiation and the behavior and effect of extrinsic particles .however , one topic that has received much less attention is the effect of strain on the generation of radiation damage .although minimally studied , it is of critical importance to understand the effect of strains on damage generation and accumulation .structural materials in nuclear fission and fusion systems are often exposed to applied stresses , and they are subject to a dynamic strain environment due to a variety of phenomena such as void swelling , solute precipitation , solute segregation , etc . , there is a distinct need for expanded information related to radiation damage behavior under applied strain .three previous computational studies have been performed to examine strain effects on damage generation .miyashiro , _ et al . _ investigated the effects of strain on damage generation in face - centered cubic ( fcc ) copper .strain types analyzed included uniaxial tension and compression along the [ 111 ] axis , hydrostatic strain , and isometric strain ( tetragonal shear ) with maximum strain of up to 1 . for the pka ,a random set of directions was chosen , each with an energy of 10 kev .it was found that defect production increased with both uniaxial tension and compression .it was also found that the largest increase in defect production was due to isometric strain .another study was conducted by di , _ et al . _ on hexagonal close - packed ( hcp ) zirconium .they enforced tensile and compressive strains along the _ a _ and _ c _ axes with magnitudes up to 1 strain .various pka directions were investigated with energies of 10 kev . in this study , it was concluded that the main effect of strain is on the size of the defect clusters .however , there was no significant effect of applied strain on the total number of defects generated .finally , gao , _ et al . _ investigated bcc fe .the only strain type analyzed was tension along the [ 111 ] axis with a maximum magnitude of 1 strain .the pka direction was restricted to the [ 135 ] direction with an energy of 10 kev .it was found that there was a slight reduction in the point defect generation for 0.1 strain .however , higher strains seemed to yield negligible changes versus the unstrained system .these previous works have provided important insights into the effects of elastic strain on radiation damage production . however , particularly for bcc fe , the computational investigations are incomplete .gao , _ et al . _ investigated one type of strain and one direction for the pka . 
to develop a more detailed picture of the effects of strain on radiation damage ,various types of applied strain are analyzed for various pka directions in the current work .specifically , we apply molecular dynamics ( md ) simulations of pure bcc fe under several types of applied strain , to analyze the resulting effects on damage generation .molecular dynamics simulations are performed utilizing the lammps software package and the embedded - atom method ( eam ) interatomic potential developed for bcc fe by mendelev _a bcc supercell containing 250,000 atoms is relaxed for 100 ps at 300 k in an npt ensemble .a strain is applied to the supercell and the strained lattice is allowed to relax for 100 ps in an nvt ensemble .an atom near the center of the supercell is then given extra kinetic energy , with the velocity directed in varying prescribed directions .the time step is set to 0.2 fs and the simulation is run for 75000 steps .we utilize the gjf thermostat ( fix langevin gjf ) due to its robust configurational sampling properties .the damping parameter ( analogous to relaxation time ) is set to 1 ps .the number of stable frenkel pairs is determined after 15 ps via a wigner - seitz cell based algorithm .thus , this analysis does not take into account long - time thermal diffusion .all pka energies are set to 5 kev .the pka directions analyzed include [ 135 ] , [ 100 ] , [ 001 ] , [ 110 ] , and finally a set of 32 randomly selected directions . for each strain state and pka direction ,32 independent simulations are performed with a unique distribution of initial velocities .we have chosen the pka directions in order to provide a representative set of data for the bcc crystal .high symmetry directions parallel to and perpendicular to the applied strain could potentially see different effects based on the distortion of the local environment .also , the majority of pka collisions will not be along high symmetry directions . in the literature involving radiation damage simulations ,the [ 135 ] direction is commonly utilized as a direction representing average behavior for the bcc crystal system .this direction also has the added benefit of reducing channelling .thus , we analyze the [ 135 ] direction , as well as a set of 32 randomly chosen directions , allowing us to compare a direction commonly accepted as representative of random average behavior , to a set of truly random directions .the random directions were created by varying the tilt and azimuthal angles displayed in figure 1 . values were selected in the range 0 to /2 , values were selected in the range 0 to /4 . these ranges of and encompass a complete sampling of directions in a bcc supercell . ) was varied from 0 to /2 .the angle phi ( ) was varied from 0 to /4 . 
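the random pka directions are generated by varying the tilt angle over [ 0 , π/2 ] and the azimuthal angle over [ 0 , π/4 ] as in figure 1. the sketch below — my own illustration, not the authors' script — reproduces that sampling ( taken uniform here, which the text does not specify ) and converts the 5 kev pka energy into a velocity magnitude for an fe atom; the unit conversion assumes lammps `metal` units ( ev, g/mol, Å/ps ).

```python
import numpy as np

rng = np.random.default_rng(1)

E_PKA_EV = 5000.0        # 5 keV primary knock-on atom
M_FE_AMU = 55.845        # atomic mass of Fe
# sqrt(eV / amu) expressed in angstrom/ps (for LAMMPS "metal" units)
EV_AMU_TO_APS = 98.2269

def random_pka_velocity():
    """Unit direction from a tilt angle in [0, pi/2] and an azimuthal angle in
    [0, pi/4] (sampled uniformly here for simplicity), scaled to 5 keV for Fe."""
    theta = rng.uniform(0.0, np.pi / 2)
    phi = rng.uniform(0.0, np.pi / 4)
    direction = np.array([np.sin(theta) * np.cos(phi),
                          np.sin(theta) * np.sin(phi),
                          np.cos(theta)])
    speed = np.sqrt(2.0 * E_PKA_EV / M_FE_AMU) * EV_AMU_TO_APS   # ~1.3e3 A/ps
    return speed * direction

print(random_pka_velocity())
```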
the number of defects created under applied strain for isotropic expansion / compression ( hydrostatic strain ), uniaxial strain, monoclinic shear and tetragonal shear strains are displayed in figure 2. details of each deformation type are shown in table 1 for a representative 1% applied strain. for hydrostatic and uniaxial strains, results for compressive ( negative ) and tensile ( positive ) strains of up to 2% are displayed in figure 2. monoclinic shear strains are investigated up to 2% and tetragonal shear strains are investigated up to 5%. error bars included in figure 2 denote twice the standard error of the mean, which is defined as the standard deviation divided by the square root of the sample size. figure 2a depicts results for hydrostatic strain, with pka directions along [ 135 ], [ 100 ], [ 110 ] and average behavior generated by averaging results over 32 random orientations ( denoted henceforth simply as random directions ). for all directions analyzed, there appears to be significant variance with applied expansion and compression. as the system expands, there is a consistent trend towards increased defect generation. as the system is compressed, there is a consistent trend towards decreased defect generation. [ table 1: details of each deformation for a representative 1% applied strain. ] in figure 2b for uniaxial strain along the [ 001 ] direction, results for pkas in the [ 135 ], [ 100 ], [ 001 ] and random directions are displayed. in figure 2b, it is shown that with varying strain, there is very little variation in the production of frenkel pairs. some variation does exist, but all data points fall within the statistical uncertainties of the results for an unstrained system, as will be discussed below. therefore, application of uniaxial strain in the [ 001 ] direction, with magnitudes ranging between -2% and 2%, has no statistically significant effect on the number of stable frenkel pairs created. in figure 2c for monoclinic shear strain, the [ 135 ], [ 100 ], [ 110 ] and random directions are displayed. for all directions analyzed, the simulation results show no clear effect of monoclinic shear strain on damage generation. there exists minimal variation from the unstrained system and all results fall within the statistical uncertainties of the results for an unstrained system. in figure 2d, results exploring the effect of tetragonal shear strain on defect production for pkas along the [ 135 ], [ 100 ], [ 110 ] and random directions are displayed. as described in table 1, this strain state is characterized by strain with one sign along the z direction, and opposite signs along x and y, such that the volumetric strain is zero. as the elongation in the z axis increases, the bcc unit cell is moving along the bain path towards an fcc unit cell. results are shown for elongation along this path before the first inflection point in the energy versus strain curve from bcc to fcc. from figure 2d, it is very difficult to determine if applied tetragonal shear strain creates a non-negligible effect on the number of stable frenkel pairs produced. [ figure 2: number of defects created under applied strain for pkas along the [ 100 ], [ 110 ], [ 001 ] and a random set of directions. error bars denote twice the standard error of the mean.
] for all types of strain , it is shown that the high symmetry directions of [ 100 ] , [ 001 ] and [ 110 ] display the most variance .this is due to the fact that a pka moving along a high symmetry direction is highly likely to give rise to a near direct impact , potentially transporting a defect over a relatively large distance via rapid crowdion diffusion .thus , for a better summary of average behavior in these systems , it is most useful to look at only the results from our set of random pka directions .these results are displayed in figure 3 .the error bars included are twice the standard error of the mean of the data set ; i.e. , they represent 95 confidence intervals in the mean value . in figure3__a _ _ , for hydrostatic strain , a trend is clearly visible , with a steady increase in defects generated as the volume of the system is increased . for small applied expansion or compression ( below 1 ) , there is not a statistically significant effect present .the variance is approximately equal to the reported magnitude of the error bars .however , as the expansion or compression is increased up to 2 , a dramatic effect is observed . at 2 expansion, there is a 50 increase in the number of defects generated with respect to an unstrained system . at 2 compression ,the effect is not quite as pronounced , with a 25 decrease in the number of defects generated with respect to an unstrained system .thus , for isotropic expansion and compression , small amounts of volume change ( less than 1 ) lead to statistically insignificant changes in the number of defects generated .however , larger expansion / compression leads to significant changes in the defect production . in figure 3__b _ _ for uniaxial strain , it is shown that very minimal changes occur to the number of defects generated by a 5 kev pka with applied strain .any changes that do occur are not statistically significant for strains with magnitudes in the range of -2 to 2 . in figure 3__c _ _ for monoclinic shear strain , minimal variation is observed as a function of applied strain . the number of defects generated remains approximately constant as a function of applied monoclinic shear strain , leading to the same conclusion as uniaxial strains , in that there is no statistically significant impact on the number of defects generated with applied strain up to 2 . in figure 3__d __ for tetragonal shear strain , it is shown that there is a minimal amount of variation in the number of defects generated as a function of applied strain , and thus the average behavior is not impacted by applied tetragonal shear strain .therefore , applied tetragonal shear strain exhibits the same behavior as uniaxial and monoclinic shear strains , in that the number of defects generated remains approximately constant as a function of applied strain .the results from isotropic expansion and compression investigations lead to the question of what causes the variance as a function of applied strain . due to the fact that the largest volume changes occurred in expansion / compression ,perhaps simply volumetric changes can be the driving force for increases or decreases in defect generation .to investigate this hypothesis , very large uniaxial strains were imposed to create volume changes approaching 2 isotropic expansion and compression .the results of this investigation are shown in figure 4 , for uniaxial tension and compression up to 5 and 4 , respectively . 
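To make the distinction between volume-changing and volume-conserving deformations explicit, the sketch below builds a deformation gradient for each strain state of table 1 and reports the associated relative volume change; it also compares a large uniaxial strain with a 2% hydrostatic expansion, which is the comparison made above. The specific tetragonal convention (+eps along z, -eps/2 along x and y) and the monoclinic shear component are plausible choices rather than a reproduction of table 1.

```python
import numpy as np

def deformation_gradient(kind, eps):
    """Deformation gradient for the strain states of table 1 (one plausible
    convention; the paper's exact definitions may differ in detail)."""
    F = np.eye(3)
    if kind == "hydrostatic":
        F *= 1.0 + eps
    elif kind == "uniaxial":          # strain along [001]
        F[2, 2] += eps
    elif kind == "monoclinic":        # simple shear in the x-y plane
        F[0, 1] += eps
    elif kind == "tetragonal":        # +eps along z, -eps/2 along x and y
        F[0, 0] -= 0.5 * eps
        F[1, 1] -= 0.5 * eps
        F[2, 2] += eps
    return F

def relative_volume_change(kind, eps):
    return np.linalg.det(deformation_gradient(kind, eps)) - 1.0

for kind in ("hydrostatic", "uniaxial", "monoclinic", "tetragonal"):
    print(f"{kind:12s} at 1% strain: dV/V = {relative_volume_change(kind, 0.01):+.4%}")

# 5% uniaxial tension approaches (but does not quite reach) the volume change
# produced by 2% hydrostatic expansion
print(relative_volume_change("uniaxial", 0.05), relative_volume_change("hydrostatic", 0.02))
```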
in figure 4, it is shown that from 2 compression to 2 tension , there are negligible effects of applied strain on the number of defects generated .however , when the magnitude of the applied strain becomes more extreme , significant increases ( for tension ) and decreases ( for compression ) in the number of defects generated are observed .for 5 applied tension , there is approximately a 40 increase in the number of defects generated . for 4 applied compression, there is approximately a 20 decrease in the number of defects generated .thus , these results are comparable to the effects from isotropic expansion and compression shown in figure 3__a _ _ , when the applied uniaxial strains lead to comparable levels of volumetric expansion or compression. thus , substantial increases and decreases in volume can create significant effects on the number of defects generated from a cascade in bcc fe , whereas volume - conserving shear strains are not observed to create significant effects on the number of created defects . applied tension and 4 compression .the results displayed are only for a set of random directions .error bars denote twice the standard error of the sample . ]with the observation that volume changes can affect the number of defects generated , the specific nature of how these effects take place is now investigated .specifically , we focus on how volume changes affect point defect formation energies , the peak number of defects generated in cascades as well as survival fractions .this work is also performed for volume conserving strains as a comparison in order to specifically determine the effects of volume altering strains on the radiation damage behavior of bcc fe .systems with one frenkel pair are investigated at 300 k in order to provide a direct comparison with the results on defect generation in strained systems above .no interstitial diffusion occurs during these simulations .frenkel pair formation energies are calculated from equation 1 where e is the energy of the system with a non - interacting interstitial / vacancy pair and e is the energy of the system with no defects .we note that in the simulations used to compute e no interstital diffusion occurred , such that the excess energy defined in equation 1 is that of a single frenkel defect .frenkel pair formation energies are displayed in figure 5 as a function of applied _ a _ hydrostatic , _ b _ uniaxial , _ c _ monoclinic shear and _ d _ tetragonal shear strains .sixteen unique simulations were performed for each applied strain ( eight without defects , eight with defects ) to gain statistics in these systems .error bars in figure 5 denote twice the standard error of the mean . in figure 5__a _ _ and _ b _ , we observe significant variance in the frenkel pair formation energy as a function of applied strain . asapplied strain becomes more positive ( volume increases in the case of _ a _ and _ b _ ) , the frenkel pair formation energy decreases .thus , frenkel pair formation leads to a smaller energy change as volume increases .the converse is true for negative strain in figure 5__a _ _ and _ b _ , in that a volume decrease is associated with an increase in frenkel pair formation energy . for the volume - conserving strains in figure 5__c _ _ and _ d _ , no significant changes in the frenkel pair formation energy are observed with applied strain .all data points possess overlapping error bars with the unstrained system . 
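Equation 1 amounts to differencing two mean total energies, one set of runs containing a single non-interacting Frenkel pair and one defect-free set. A minimal sketch is given below; combining the two standard errors in quadrature for the 2x error bar is an assumption (the text only states that sixteen simulations, eight of each kind, were used), and all energies in the usage lines are invented for illustration.

```python
import numpy as np

def frenkel_pair_formation_energy(e_with_defect, e_perfect):
    """Equation 1: E_f = <E_FP> - <E_0>, with an error bar of twice the
    combined standard error of the two sample means."""
    d = np.asarray(e_with_defect, dtype=float)
    p = np.asarray(e_perfect, dtype=float)
    e_f = d.mean() - p.mean()
    err = 2.0 * np.sqrt(d.var(ddof=1) / d.size + p.var(ddof=1) / p.size)
    return e_f, err

# hypothetical total energies (eV) from eight runs with one Frenkel pair and
# eight defect-free runs of the same strained supercell at 300 K
e_fp = [-1020394.2, -1020394.6, -1020393.9, -1020394.4,
        -1020394.1, -1020394.3, -1020394.0, -1020394.5]
e_0 = [e - 7.6 for e in e_fp]   # illustrative offset only, not a measured value
print(frenkel_pair_formation_energy(e_fp, e_0))
```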
to determine what role the variations in frenkel pair formation energy directly play on the cascade behavior in bcc fe , the peak number of defects created in a cascade is determined and plotted in figure 6 as a function of applied hydrostatic , uniaxial , monoclinic shear and tetragonal shear strains .this data is taken from the simulations of random pka directions displayed in figure 3 . in figure 6__a _ _ and _ b _ , there is a direct correlation between an increase in the volume of the system and the peak number of defects created in a displacement cascade .for both hydrostatic and uniaxial strains , a near linear dependence is observed for the peak number of defects created in this system as a function of applied strain . for the volume - conserving shear strains in figure 6__c _ _ and _ d _ , no significant changes in the peak number of defects created in a displacement cascade are observed . from the peak number of defects in figure 6 and the stable number of defects from figure 3, we calculate survival fractions across all applied strains for all systems .all survival fractions for strained systems are within 1 of the survival fraction for an unstrained system. therefore , applied strain produces no statistically significant variance in the defect survival fraction .the survival fraction for all applied strains is displayed in figure 7 . from these results, we can state that variations in the volume of bcc fe create variations in the frenkel pair formation energy , with increases in volume yielding decreases in formation energy and increases / decreases in the volume yielding conditions where it is easier / more difficult to create frenkel pairs .thus , under irradiation , for a cascade of a given energy , systems with a lower frenkel pair formation energy will exhibit a higher maximum number of created defects .systems with a high frenkel pair energy will exhibit a lower maximum number of created defects . since volume changes produce no statistically significant changes on the survival fraction of point defects during a cascade , systems with a higher maximum number of defects created during a cascade will also exhibit a higher number of stable defects .this is consistent with prior investigations that show that the number of defects increases with increasing pka energy , even though the fraction of surviving defects decreases .volume - conserving strains and transformations yield no such variations in frenkel pair formation energy and thus no such variations in the stable number of defects created via cascades in bcc fe .this work suggests a direct link between formation energy and threshold displacement energy .this is reasonable , since at 0 k the displacement energy is a combination of the formation energy of a frenkel pair and the migration barrier to form this frenkel pair .thus , it can reasonably be expected that an increase in the frenkel pair formation energy can lead to an increase in the threshold displacement energy and a subsequent decrease in defect generation . 
to verify this interpretation , a basic investigation was undertaken to analyze the effect of strain on the displacement energy .a bcc supercell containing 16,000 atoms is relaxed for 100 ps at 300 k in an npt ensemble .a strain is applied to the supercell and the strained lattice is allowed to relax for 100 ps in an nvt ensemble .an atom near the center of the supercell is then given extra kinetic energy , with the velocity directed in a prescribed direction .the pka direction analyzed is a single direction from the set of 32 randomly selected directions used in the previous simulations .the time step is set to 0.2 fs and the simulation is run for 30000 steps .we utilize the gjf thermostat and set the damping parameter to 1 ps .the existence of a stable frenkel pair is determined after 6 ps via a wigner - seitz cell based algorithm .a total of 64 independent simulations are performed on non - strained systems and 100 independent simulations are performed on strained systems in order to gain statistics .it is important to note that since only a single direction is investigated , the results do not represent the average displacement energy , but only the displacement energy in a given direction .although we focused on just one randomly selected direction , we believe the results suffice to establish the overall trend related to the effect of strain on displacement energy . in figure 8 , the probability of frenkel pair formation as a function of pka energy is displayed .red squares denote the unstrained system and a dashed trend line is fit to this data .the blue diamonds denote the system with 2 isotropic expansion and a solid trend line is fit to this data .a finely dashed horizontal line is overlaid at a probability of 0.5 .this is to illustrate that when the probability curve becomes greater than 0.5 , it is probable to form a frenkel pair from a given pka energy , and thus , this value of the pka energy is the displacement energy . from this data ,it is clear that with applied strain at a given pka energy , the probability of frenkel pair formation is greater .there is also a decrease in the displacement energy with applied strain of approximately 10 ev .this supports the previous interpretation in that applied strain affects the formation energy of frenkel pairs and in turn affects the displacement energy . in other words ,expansion of a system will result in a decrease in the displacement energy . isotropic expansion .the energy at which the probability curve becomes greater than 0.5 is the displacement energy . ]in this study , molecular dynamics simulations were performed on pure bcc fe to investigate the effects of applied strain on the generation of radiation damage .the pka directions analyzed included [ 135 ] , [ 100 ] , [ 001 ] , [ 110 ] , and finally a set of 32 randomly selected directions with a pka energy of 5 kev at 300 k. it was found that volume - conserving tetragonal shear and monoclinic shear strains yield no statistically significant variations in the stable number of defects created via cascades in bcc fe . however , isotropic expansion or compression greater than 1 or uniaxial strain greater than 2 produces a statistically significant effect on defect generation .an increase ( decrease ) in the volume of the system is found to yield an increase ( decrease ) in the number of stable defects generated via cascades in bcc fe .this work was supported by the us department of energy , project - ne0000536000 .w. phythian , r. stoller , a. foreman , a. calder , d. 
bacon, a comparison of displacement cascades in copper and iron by molecular dynamics and its application to microstructural evolution, j. nucl. 223 (1995) 245. s. miyashiro, s. fujita, t. okita, md simulations to evaluate the influence of applied normal stress or deformation on defect production rate and size distribution of clusters in cascade process for pure cu, j. nucl. 415 (2011) 14.
|
radiation damage in body-centered cubic (bcc) fe has been extensively studied by computer simulation to quantify the effects of temperature, impinging particle energy, and the presence of extrinsic particles. however, comparatively little attention has been paid to the effects of mechanical stress and strain. in a reactor environment, structural materials are often mechanically strained, and an expanded understanding of how this strain affects defect generation may be important for predicting microstructural evolution and damage accumulation under such conditions. in this study, we have performed molecular dynamics simulations in which various types of homogeneous strain are applied to bcc fe and the effect on defect generation is examined. it is found that volume-conserving shear strains yield no statistically significant variation in the stable number of defects created via cascades in bcc fe, whereas strains that produce volume changes have a significant effect on defect generation.
|
the past decade has seen a growing interest in the research of stochastic resonance ( sr ) phenomena in interdisciplinary fields , involving physics , biology , neuroscience , and information processing .conventional sr has usually been defined in terms of a metric such as the output signal - to - noise ratio ( snr ) being a non - monotonic function of the background noise intensity , in a nonlinear ( static or dynamic ) system driven by a subthreshold periodic input . for more general inputs , such as non - stationary , stochastic , and broadband signals , adequate sr quantifiersare information - theoretic measures .furthermore , aperiodic sr represents a new form of sr dealing with aperiodic inputs .the coupled array of dynamic elements and spatially extended systems have been investigated not only for optimal noise intensity but also for optimal coupling strength , leading to the global nonlinear effect of spatiotemporal sr . by contrast , the parallel uncoupled array of nonlinear systems gives rise to the significant feature that the overall response of the system depends on both subthreshold and suprathreshold inputs . in this way ,a novel form of sr , termed suprathreshold sr , attracted much attention in the area of noise - induced information transmissions , where the input signals are suprathreshold for the threshold of static systems or the potential barrier of dynamic systems .in addition , for a single bistable system , residual sr ( or aperiodic sr ) effects are observed in the presence of slightly suprathreshold periodic ( or aperiodic ) inputs .so far , the measure most frequently employed for conventional ( periodic ) sr is the snr .the snr gain defined as the ratio of the output snr over the input snr , also attracts much interest in exploring situations where it can exceed unity . within the regime of validity of linear response theory ,it has been repeatedly pointed out that the gain can not exceed unity for a nonlinear system driven by a sinusoidal signal and gaussian white noise . however , beyond the regime where linear response theory applies , it has been demonstrated that the gain can indeed exceed unity in non - dynamical systems , such as a level - crossing detector , a static two - threshold nonlinearity , and parallel arrays of threshold comparators or sensors , and also in dynamical systems , for instance , a single bistable oscillator , a non - hysteretic rf superconducting quantum interference device ( squid ) loop , and a global coupled network .archetypal over - damped bistable oscillators .each oscillator is subject to the same noisy signal but independent array noise . in this paper, we call the input noise and the array noise .[ fig : one ] ] a pioneering study of a parallel uncoupled array of bistable oscillators has been performed with a general theory based on linear response theory , wherein the snr gain is below unity .recently , casado _ et al _ reported that the snr gain is larger than unity for a mean - field coupled set of noisy bistable subunits driven by subthreshold sinusoids .however , each bistable subunit is subject to a _ net _sinusoidal signal without input noise .the conditions yielding a snr gain exceeding unity have not been touched upon in a parallel uncoupled array of bistable oscillators , in the presence of either a subthreshold or suprathreshold sinusoid and gaussian white noise . 
in practice , an initially given noisy input is often met , and a signal processor operating under this condition , with the feature of the snr gain exceeding unity , will be of interest .the snr gain has been studied earlier in the less stringent condition of narrowband noise . in the present paper , we address the more stringent condition of broadband white noise and the snr gain achievable by summing the array output , wherein extra array noise can be tuned to maximize the array snr gain . asthe array size is equal to or larger than two , the array snr gain follows a sr - type function of the array noise intensity .more interestingly , the regions where the array snr gain can exceed unity for a moderate array size , are demonstrated numerically for both subthreshold and suprathreshold sinusoids . since the array snr gain is amplified as the array size increases from two to infinity, we can immediately conclude that an infinite parallel array of bistable oscillators has a global maximum array snr gain for a fixed noisy sinusoid .for an infinite parallel array , a tractable approach is proposed using an array of two bistable oscillators , in view of the functional limit of the autocovariance function .we note that , for obtaining the maximum array snr gain , the control of this new class of array sr effect focuses on the addition of array noise , rather than the input noise .this approach can also overcome a difficult case confronted by the conventional sr method of adding noise .when the initial input noise intensity is beyond the optimal point corresponding to the sr region of the nonlinear system , the addition of more noise will only worsen the performance of system .finally , the optimization of the array snr gain in an infinite array is touched upon by tuning both an array parameter and array noise , and an optimal array parameter is expected to obtain the global maximum array snr gain .these significant results indicate a series of promising applications in array signal processing in the context of array sr effects .the parallel uncoupled array of archetypal over - damped bistable oscillators is considered as a model , as shown in fig .[ fig : one ] .each bistable oscillator is subject to the same signal - plus - noise mixture , where is a deterministic sinusoid with period and amplitude , and is zero - mean gaussian white noise , independent of , with autocorrelation and noise intensity . at the same time, zero - mean gaussian white noise , together with and independent of , is applied to each element of the parallel array of size .the array noise terms are mutually independent and have autocorrelation with a same noise intensity .the internal state of each dynamic bistable oscillator is governed by for .their outputs , as shown in fig .[ fig : one ] , are averaged and the response of the array is given as here , the real tunable array parameters and are in the dimensions of time and amplitude , respectively .we now rescale the variables according to where each arrow points to a dimensionless variable .equation ( [ eq : one ] ) is then recast in dimensionless form as , note that is subthreshold if the dimensionless amplitude , otherwise it is suprathreshold . in general , the summed output response of arrays is a random signal .however , since is periodic , will in general be a cyclostationary random signal with the same period .a generalized theory has been proposed for calculating the output snr . 
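A minimal numerical sketch of this setup is given below: an Euler-Maruyama integration of N over-damped double-well units that share one noisy sinusoid and receive independent array noise, followed by a crude periodogram-based SNR estimate of the collective response. The drift x - x^3 is the archetypal dimensionless bistable form assumed here (the paper's own rescaled equation is not reproduced in full above), and the parametrization of the two noises by intensities rather than rms amplitudes, as well as the background-averaging convention in the SNR estimate, are choices made only for illustration.

```python
import numpy as np

def simulate_array(n_units=10, amp=0.3, period=100.0, d_input=0.05, d_array=0.05,
                   dt=0.01, n_periods=20, seed=0):
    """Euler-Maruyama integration of a parallel, uncoupled array of over-damped
    bistable units: common noisy sinusoid, independent array noise, and the
    collective response y(t) = mean_i x_i(t)."""
    rng = np.random.default_rng(seed)
    n_steps = int(round(n_periods * period / dt))
    x = np.zeros(n_units)
    y = np.empty(n_steps)
    for k in range(n_steps):
        s = amp * np.sin(2.0 * np.pi * k * dt / period)
        common = np.sqrt(2.0 * d_input * dt) * rng.standard_normal()        # shared input noise
        local = np.sqrt(2.0 * d_array * dt) * rng.standard_normal(n_units)  # independent array noise
        x += (x - x**3 + s) * dt + common + local
        y[k] = x.mean()
    return y

def snr_estimate(y, dt, f0, n_bins=50):
    """Periodogram power in the bin at the forcing frequency divided by the mean
    background power in neighbouring bins (one common convention)."""
    spec = np.abs(np.fft.rfft(y - y.mean()))**2 / y.size
    freqs = np.fft.rfftfreq(y.size, d=dt)
    k0 = int(np.argmin(np.abs(freqs - f0)))
    background = np.r_[spec[max(k0 - n_bins, 1):k0], spec[k0 + 1:k0 + 1 + n_bins]]
    return spec[k0] / background.mean()

y = simulate_array()
print(snr_estimate(y, dt=0.01, f0=1.0 / 100.0))
```

Sweeping the array-noise intensity at fixed input noise, and repeating for several array sizes, is in outline how the resonance-like dependence of the array SNR gain reported below could be reproduced.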
according to the theory in , the summing response of arrays , at any time ,can be expressed as the sum of its nonstationary mean ] , as .\label{eq : five}\ ] ] the nonstationary mean =(1/n)\sum_{i=1}^ne[x_i(t)] ] is given by =e[\widetilde{y}(t)\widetilde{y}(t+\tau)]+e[y(t)]e[y(t+\tau ) ] .\label{eq : seven}\ ] ] then , the stationary autocorrelation function for can be calculated by averaging ] represents the nonstationary variance of , which , after time averaging over a period , leads to \right\rangle ] . the power spectral density of eq .( [ eq : nine ] ) can then be rewritten as \right\rangle h(\nu)+ \sum_{n=-\infty}^{+\infty}\overline{y}_n\overline{y}_n^*\delta(\nu-\frac{n}{t_s } ) .\label{eq : eleven}\ ] ] the output snr is defined as the ratio of the power contained in the output spectral line at the fundamental frequency and the power contained in the noise background in a small frequency bin around , i.e. \right\rangle h(1/t_s)\delta b}. \label{eq : twelve}\ ] ] in addition , the output noise is a lorentz - like colored noise with the correlation time defined by at four representative rms amplitudes .( a ) the nonstationary mean ] in the context of bilateral power spectral density . here , the signal - plus - noise mixture of is initially given , and the theoretical expression of input snr can be computed as in the discrete - time implementation of the white noise , the sampling time and .the incoherent statistical fluctuations in the input , which controls the continuous noise background in the power spectral density , are measured by the variance . here , is the rms amplitude of input noise .thus , the array snr gain , viz .the ratio of the output snr of array to the input snr for the coherent component at frequency , follows as \rangle h(1/t_s ) } \frac{\sigma^2_{\xi}\delta t}{a^2/4}. \label{eq : fifteen}\end{aligned}\ ] ] equations ( [ eq : twelve])([eq : fifteen ] ) can at best provide a generic theory of evaluating snr of dynamical systems .if the array snr gain exceeds unity , the interactions of dynamic array of bistable oscillators and controllable array noise provide a specific potentiality for array signal processing . this possibility will be established in the next sections .we have carried out the simulation of parallel arrays of eq .( [ eq : one ] ) and evaluated the array snr gain of eq .( [ eq : fifteen ] ) , as shown in appendix a , based on the theoretical derivations contained in . here , we mainly present numerical result as follows .( ) , output snr ( ) at the left axes and array snr gain ( ) at the right axes , as a function of , in a single bistable oscillator for ( a ) , ( b ) , ( c ) and ( d ) .here , , and .[ fig : three ] ] if the array size and the response , this is the case of a single bistable oscillator displaying the conventional sr or residual sr phenomena . in figs .[ fig : two ] ( a)(c ) , we show the evolutions of ] has a same frequency , as shown in fig .[ fig : two ] ( a ) , and the largest amplitude of ] presents a non - monotonic behavior .as increases from to , and , the correlation time decreases from to , and , whereas equals to , , and , respectively .thus , these nonlinear characteristics of ] of for at , and ( from the top down ) . ] are same for , , , , at fixed , since =e[\sum_{i=1}^nx_i(t)/n]=\sum_{i=1}^ne[x_i(t)]/n = e[x_i(t ) ] .\label{eq : sixteen}\ ] ] however , we note that the amplitude of ] . and . 
( a ) at for array sizes , , and ( from the top down ) .( b ) with array size as varies from zero to and ( from the top down ) .( c ) at for array sizes , , and ( from the top down ) .( d ) with array size as changes from zero to and ( from the top down ) . here , , and other parameters are the same as in fig .[ fig : four ] .[ fig : six ] ] figures [ fig : six ] ( a ) and ( c ) show that , at , and weaken as the array size increases . on the other hand , for a fixed array size such as , figs .[ fig : six ] ( b ) and ( d ) suggest that the output behaviors of and also weaken as increases from to and . correspondingly , the stationary variance , and , and the correlation time , and .an association of the time evolutions of ] , on the other hand , counteract the negative role of input noise and ` whiten ' the output statistical fluctuations . in other words , the stationary autocovariance function has a decreasing stationary variance and correlation time , as shown in figs .[ fig : six ] and [ fig : seven ] .for a given input noisy signal and a fixed array size , there is a local maximal snr gain , i.e. the maximum value of at the sr point of rms amplitude of array noise , as shown in fig .[ fig : four ]. clearly , this local maximal snr gain increases as array size increases , and arrives at its global maximum as .note that is obtained only via adding array noise .it is interesting to know if can be improved further by tuning both array noise and the array parameter . in eq .( [ eq : three ] ) , the signal amplitude is dimensionless , and the discrete implementation of noise results in the dimensionless rms amplitude of or ( where each arrow points to a dimensionless variable ) . the dimensionless ratio of , as , determines the input snr of eq .( [ eq : fourteen ] ) . in fig .[ fig : eight ] , we adopt two given input snrs and , this is , and .when the array parameter varies , but keeps , line comes into being , and is divided into subthreshold region ( ) and suprathreshold regime ( ) by line of , as shown in figs .[ fig : eight ] ( a ) and ( c ) .we select different points on line , being located in subthreshold region or suprathreshold region , for computing via increasing , as illustrated in figs .[ fig : eight ] ( b ) and ( d ) . figure [ fig : eight ] ( b ) shows that , at the given input snr , the global maximum snr gain increases from low amplitude , i.e. point , reaches its maximum around for , i.e. point , then gradually decreases as the amplitude increases to ( point ) .the same effect occurs for the given input snr , as shown in fig .[ fig : eight ] ( d ) , and ( point ) corresponds to the maximum around .these results indicate that , for a given input snr , we can tune the array parameter to an optimal value , corresponding to an optimized global maximum snr gain .versus input noise rms amplitude ( dimensionless variables ) .line is , and the corresponding input snr .line of divides line into subthreshold region ( below ) and suprathreshold section ( over ). points ( ) correspond to , , , , and , respectively .( b ) the global maximum snr gain , at fixed input snr , as a function of rms amplitude of array noise for points ( different amplitudes ) .( c ) plots of versus .line is , and .line is . points ( ) correspond to , , , and , respectively .( d ) , at , as a function of for points . 
here , , and .[ fig : eight ] ] however , we do not consider the other array parameter , which is associated with the time scale of temporal variables .then , the location of optimal array parameters in subthreshold or suprathreshold regions , associated with optimal , is pending . immediately ,an open problem , optimizing the global maximum snr gain via tuning array parameters ( and ) and adding array noise ( increasing ) , is very interesting but time - consuming .this paper mainly focuses on the demonstration of a situation of array signal processing where the parallel array of dynamical systems can achieve a maximum snr gain above unity via the addition of array noise .thus , the optimization of the maximum snr gain of infinite array is touched upon , and this interesting open problem will be considered in future studies .in the present work we concentrated on the snr gain in parallel uncoupled array of bistable oscillators . for a mixture of sinusoidal signal and gaussian white noise , we observe that the array snr gain does exceed unity for both subthreshold and suprathreshold signals via the addition of mutually independent array noise .this frequently confronted case of a given noisy input and controllable fact of array noise make the above observation interesting in array signal processing .we also observe that , in the configuration of the present parallel array , the array snr gain displays a sr - type behavior for array size larger than one , and increases as the array size rises for a fixed input snr .this sr - type effect of the array snr gain , i.e. array sr , is distinct from other sr phenomena , in the view of occurring for both subthreshold and suprathreshold signals via the addition of array noise .the mechanism of array sr and the possibility of array snr gain above unity were schematically shown by the nonstationary mean and the stationary autocovariance function of array collective responses .since the global maximum snr gain is always achieved by an infinite parallel array at non - zero added array noise levels , we propose a theoretical approximation of an infinite parallel array as an array of two bistable oscillators , in view of the functional limit of the autocovariance function .combined with controllable array noise , this nonlinear collective characteristic of parallel dynamical arrays provides an efficient strategy for processing periodic signals .we argue that , for a given input snr , tuning one array parameter can optimize the global maximum snr gain at an optimal array noise intensity .however , another array parameter , associated with the time scale of temporal variables is not involved .an open problem , optimizing the global maximum snr gain via tuning two array parameters and array noise , is interesting and remains open for future research .funding from the australian research council ( arc ) is gratefully acknowledged .this work is also sponsored by `` taishan scholar '' cpsp , nsfc ( no .70571041 ) , the srf for rocs , sem and phd pfme of china ( no .20051065002 ) .the corresponding measured power spectra of the collective response are computed in a numerical iterated process in the following way that is based on the theoretical derivations contained in : the total evolution time of eq .( [ eq : one ] ) is , while the first period of data is discarded to skip the start - up transient . 
in each period ,the time scale is discretized with a sampling time such that .the white noise is with a correlation duration much smaller than and .we choose a frequency bin , and we shall stick to , , and for the rest of the paper . in succession , we follow : ( * a * ) the estimation of the mean ] ( ) shall be tracked correctly in each periodic evolution of eq .( [ eq : one ] ) , i.e. , for .( * b * ) for a fixed time of ( ) , the products are calculated for .the estimation of the expectation ] and the stationary autocovariance function again . if the differences between ] , ] and stationary autocovariance function , the corresponding fourier coefficient and the power }h(1/t_s)\delta b$ ] of eq .( [ eq : eight ] ) contained in the noise background around can be numerically developed .the ratio of above numerical values leads to the array snr .the correlation time as .the numerical input snr can be also calculated by following steps ( * a*)(*d * ) , and compared with the theoretical value of of eq .( [ eq : thirteen ] ) .the snr gain will be finally figured out by eq .( [ eq : fifteen ] ) .mcdonnell , d. abbott , and c.e.m .pearce , a charactrization of suprathreshold stochastic resonance in an array of comparators by correlation coefficient , fluctuation and noise lett ., * 2 * , pp . l213-l228 ( 2002 ) . m.idykman , r. mannella , p.v.e .mcclintock , and n.g .stocks , fluctuation - induced transitions between periodic attractors : observation of supernarrow spectral peaks near a kinetic phase transition , phys ., * 65 * , pp . 48 - 51 ( 1990 ) . j. casado - pascual , c. denk , j. gmez - ordez , and m. morillo , gain in stochastic resonance : precise numerics versus linear response theory beyond two - mode approximation , phys . rev. e , * 67 * , 036109 ( 2003 ) .
|
we report the regions where a signal-to-noise ratio (snr) gain exceeding unity exists in a parallel uncoupled array of identical bistable systems, for both subthreshold and suprathreshold sinusoids buried in broadband gaussian white input noise. owing to the independent noise added in each element of the parallel array, the snr gain of the collective array response passes through a local maximum, exhibiting a stochastic resonant behavior. moreover, this local maximum snr gain, attained at a non-zero optimal array noise intensity, increases with the array size, leading to the conclusion that the global maximum snr gain is obtained by an infinite array. we suggest that the performance of an infinite array can be closely approached by an array of _two_ bistable oscillators operating in different noise conditions, which indicates a simple but effective array realization for improving the snr gain. for a given input snr, the optimization of the maximum snr gain in infinite arrays is touched upon by tuning both the array noise level and an array parameter. the nonlinear collective phenomenon of snr gain amplification in parallel uncoupled dynamical arrays, i.e. array stochastic resonance, together with the possibility of the snr gain exceeding unity, represents a promising application in array signal processing.
|
a hidden markov model ( hmm ) for a sequence of observations , where , is a discrete - time stochastic process with dynamics depicted in figure [ hmm ] .it is defined in terms of a hidden markov chain , the so - called _ signal _ , which in this paper will be taken to be the discrete - time sampling of a time - homogeneous continuous - time markov process , with state - space , transition kernel , and initial distribution .the observations relate to the signal by means of conditional distributions , assumed to be given by the kernel .we will assume that for some measure , in which case the corresponding densities are known as the _ observation _ or _ emission _ densities .the optimal filtering problem is the derivation of the conditional distributions of the unobserved signal given the observations collected up to time , henceforth denoted .these filtering distributions are the backbone of all statistical estimation problems in this framework , such as the prediction of future observations , the derivation of smoothing distributions ( i.e. , the conditional distribution of given past and future observations ) and the calculation of the likelihood function , that is , the marginal density of the observations when the emission distributions are dominated .see for details and applications . throughout the paper, we will assume that the signal is stationary and reversible with respect to a probability measure .section [ sec : disc ] shows how to extend our result to non - stationary signals .it is also appealing , from a modeling point of view , to assume that the signal evolves in continuous time , since there is a rich family of such models with a prespecified stationary measure .in addition , this assumption will give us a powerful tool to study optimal filtering by using the generator of the process , as we show in section [ sec : dual ] .in the examples of section [ sec : examples ] , the state space of the signal will either be a subset of or the -dimensional simplex .mathematically , optimal filtering is the solution of the recursion which involves the following two operators acting on probability measures : \mbox{prediction : } & \psi_t({{\xi } } ) \bigl({\mathrm{d}}x'\bigr)= { { \xi}}p_t\bigl({\mathrm{d}}x'\bigr)=\displaystyle\int _ { { \mathcal{x}}}{{\xi}}({\mathrm{d}}x)p_t\bigl(x,{\mathrm{d}}x ' \bigr ) . \end{array } \ ] ] the `` update '' is the application of bayes theorem , and the `` prediction '' gives the distribution of the next step of the markov chain initiated from .these operators have the following property when applied to _ finite mixtures of distributions _ : this implies that when is a finite set , there is a simple algorithm for the sequential computation of the filtering probabilities . to see this , note that we can think of a distribution on a finite set , specified in terms of probabilities , as a finite mixture of point masses , ; it is easy to compute and then use the above result to obtain the probabilities associated with the distributions and .this yields a popular algorithm for inference in hmms , commonly known as the baum welch filter , whose complexity is easily seen to be , where is the cardinality of . outside the finite state - space case , the iteration of these two operators typically leads to analytically intractable distributions . 
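For the finite state-space case just described, the recursion can be written in a few lines; the sketch below alternates the Bayes update with the one-step prediction and stores each filtering distribution. The function and argument names are mine, and the toy emission density in the usage lines is only for illustration; the per-observation cost of this implementation is dominated by the d x d prediction step.

```python
import numpy as np

def forward_filter(prior, transition, emission_density, observations):
    """Exact optimal filtering on a finite signal space: Bayes update followed
    by propagation through the transition matrix."""
    nu = np.asarray(prior, dtype=float)                  # law of X_0
    P = np.asarray(transition, dtype=float)
    filters = []
    for y in observations:
        lik = np.array([emission_density(x, y) for x in range(nu.size)])
        nu = nu * lik                                    # update (Bayes theorem)
        nu /= nu.sum()
        filters.append(nu.copy())                        # P(X_n = . | Y_0, ..., Y_n)
        nu = nu @ P                                      # prediction
    return np.array(filters)

# toy example: two hidden states emitting Gaussian observations with different means
dens = lambda x, y: np.exp(-0.5 * (y - [0.0, 3.0][x]) ** 2)
P = np.array([[0.9, 0.1], [0.2, 0.8]])
print(forward_filter([0.5, 0.5], P, dens, [0.1, 2.8, 3.2]))
```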
however , there are notable exceptions to this rule .the classic example is the linear gaussian state - space model , for which the filtering distributions are gaussian with mean and covariance that can be iteratively computed using the so - called kalman filter , at cost that grows linearly with .recent work by genon - catalot and collaborators uncovered that there exist interesting non - gaussian models for which the filtering distributions are _ finite mixtures _ of parametric distributions .see , where the authors show how to compute the corresponding parameters sequentially in these models .we revisit their findings in section [ sec : examples ] .however , the number of mixture components increases with in a way such that the cost of computing the filters grows polynomially with ( see section [ sec : dual ] for details ) .borrowing and adapting the terminology from , we will refer to filters with such computational cost as _computable _ , whereas filters whose cost grows linearly with as _ finite - dimensional_. the work by genon - catalot and collaborators raises four important questions , which we address in this paper : are there more models which admit computable filters ; do they share some basic structure ; is there a general methodology to identify such models and to obtain the algorithm which computes the sequence of parameters ; what is the computational complexity of such schemes and how can we obtain faster approximate filtering algorithms ?we show that the answer to all these questions relates to an important probabilistic object : the _ dual process_. duality methods have a long history in probability , dating back to the work of p. lvy ( see for a recent review ) .these have been widely applied to the study of interacting particle systems and proven to be a powerful method which provides alternative , and often simpler , tools for investigating the sample path properties of the process at hand .for example , the existence of a dual for a certain markov process ( and for a sufficiently large class of functions ) implies that the associated martingale problem is well defined , hence that the process is unique ; see section 4.4 of .see also and for applications of duality to population genetics . in this paper, we illustrate that dual processes play a central role in optimal filtering and to a great extent can be used to settle the four questions posed above .we also uncover their potential as auxiliary variables in monte carlo schemes for stochastic processes ( and , hence , as a variance reduction scheme ) . in our framework , the dual will in general be given by two components : a _ deterministic process _ , driven by an ordinary differential equation , and a ( multidimensional ) _ death process _ with countable state - space .we show how to derive an explicit , recursive filtering scheme once the dual is identified , and apply this methodology to three cases of fundamental interest . in doing so, we identify what , to the best of our knowledge , is a new _ gamma - type _ duality .the rest of the paper is organized as follows . 
in section [ sec : dual ] , we link optimal filtering to a specific type of duality , we show how to identify the dual in terms of the _ generator _ of , and study the complexity of the resulting filtering algorithm .section [ sec : examples ] analyzes three interesting models for which the dual process is derived : the cox ingersoll ross model , the ornstein uhlenbeck process and the -dimensional wright fisher diffusion .these models are reversible with respect to the gamma , gaussian and dirichlet distribution , respectively , and for the gaussian case the computable filter reduces to the kalman filter .section [ sec : disc ] discusses certain aspects of the methodology , including the extension to infinite - dimensional signals modeled as fleming viot processes .before presenting the main results , we introduce three fundamental assumptions which provide the general framework under which the results are derived .first , we will assume that is reversible with respect to a probability measure : * .section [ sec : disc ] discusses how this assumption can be relaxed to accommodate non - stationary signals . in order to state the second assumption, we need to introduce a certain amount of notation .define , for , the space of multi - indices we will use the symbol to denote the vector of zeros , for the vector in whose only non - zero element is found at the coordinate and equals , and let .furthermore , we will use the product order on , according to which for , if an only if for all . then , for , is the vector with element .additionally , if , define the notation for does not reflect its dependence on the dimension , but we will reserve boldface for elements of when ( or unspecified ) , whereas normal typeface will be used for elements of .finally , the following notations will be used to denote conditional expectations = { \mathbb{e}}\bigl[f(x_{t})\vert x_{0}=x\bigr ] = \int _ { \mathcal{x}}f\bigl(x'\bigr ) p_t\bigl(x , { \mathrm{d}}x'\bigr).\ ] ] the first denotes the action on of the semigroup operator associated to the transition kernel , where with some abuse of notation the same symbol is used both for the semigroup and the kernel . the second assumption is concerned with models where is _ conjugate _ to the emission density : * for , let be such that for all , and for some . then is assumed to be a family of probability measures such that there exist functions and with increasing and such that hence here with conjugacy , we intend the fact that the family of measures , which includes , is closed under the update operation . the assumption that is bounded in be discussed after the statement of assumption a4 . for as in ( [ update - predict ] ) , it is easy to check that in the context of a2 , we have which , despite its appearance , does not depend on .note that our definitions of and allow the possibility that or , in which case in a2 is function only of the variables with non - zero dimension , whereas the case is not of interest here . 
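As a concrete, if stylized, instance of the structure required by A2, consider a gamma family paired with Poisson observations: the Bayes update moves only the integer component and the continuous parameter, and the family is closed. The parametrization below (shape alpha + m, rate theta) and all names are illustrative assumptions and are not claimed to match the examples treated later in the paper.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class GammaMeasure:
    """A member nu_{m, theta} of a conjugate family: here Gamma(alpha + m, rate theta)."""
    alpha: float   # shape of the centring measure
    m: int         # lattice component (an element of M)
    theta: float   # continuous component (an element of Theta)

def update(nu: GammaMeasure, y: int) -> GammaMeasure:
    """Bayes update against a Poisson(x) observation y: the family is closed and
    only (m, theta) change, which is the content of assumption A2."""
    return GammaMeasure(nu.alpha, nu.m + y, nu.theta + 1.0)

nu = GammaMeasure(alpha=2.0, m=0, theta=1.0)
for y in (3, 1, 4):
    nu = update(nu, y)
print(nu)   # GammaMeasure(alpha=2.0, m=8, theta=4.0)
```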
in the setting of assumption a2 and for the trivial markov dynamics , with , the filtering problem collapses to conjugate bayesian inference for the unknown parameter of the sampling density .see section 5.2 and appendix a.2 of for an exposition of conjugate bayesian inference and stylized conjugate bayesian models , and section [ sec : examples ] in this paper for examples within our framework .the third main assumption for our results concerns the existence of a certain type of dual process for the signal .* we assume that is such that the differential equation has a unique solution for all .let be an increasing function , be a continuous function , and consider a two - component markov process with state - space , where evolves autonomously according to ( [ eq : ode ] ) , and when at , the process jumps down to state with instantaneous rate we assume is _ dual _ to with respect to the family of functions defined in a2 , in the sense that = { \mathbb{e}}^{({\mathbf{m}},\theta)}\bigl[h(x , m_t,\theta_t)\bigr]\qquad \forall x \in{\mathcal{x } } , { \mathbf{m}}\in{\mathcal{m } } , \theta\in{\vartheta } , t\geq0.\ ] ] when or in a2 , the dual process is just or , respectively , and we adopt the convention that note that can only jump to `` smaller '' states according to the partial order on , and that ( [ eq : dual - rate ] ) implies that 0 is an absorbing state for each coordinate of , so that the vector of zeros is a global absorbing state . as mentioned in section [sec : intro ] , the notion of duality for markov processes with respect to a given function is well known .see , for example , section ii.4 in . among the most common type of duality relations we mention _ moment duality_ , that is duality with respect to functions of type , and _laplace duality _ , that is with respect to functions of type . see , for example , . in our framework , the duality functions are radon nikodym derivatives between measures that are conjugate to the emission density , and this setup is perfectly tailored to optimal filtering .furthermore , a3 specifies that we are interested in dual processes which can be decomposed into two parts : one purely _deterministic _ and the other given by a -dimensional _ pure death process _ , whose death rates are subordinated by the deterministic process .the transition probabilities of the death process , conditional on the initial state , will be denoted by ,\qquad { \mathbf{n } } , { \mathbf{m}}\in{\mathcal{m } } , { \mathbf{n}}\leq { \mathbf{m}}.\ ] ] it is worth mentioning that the requirements on the structure of the dual processes prescribed by assumption a3 , with particular reference to the intensity ( [ eq : dual - rate ] ) , are justified by the three main reasons .the first is that , as shown in section [ sec : examples ] , they define a framework general enough to identify duals of processes of interest , the incorporation of a deterministic component being necessary in this respect .the second reason is that the transition probabilities ( [ eq : trans - dual ] ) are analytically available , as provided by the following result , whose proof can be found in the .[ prop : trans probab ] let be as in 3 , with , and let .then the transition probabilities for are and , for any , where and is the multivariate hypergeometric probability mass function with parameters evaluated at . 
this result can be interpreted as follows .the probability that a one - dimensional death process with inhomogeneous rates decreases from to in the interval ] , and define for } \prod _ { k=1}^{i-1}\theta_{t_{k } } \mathrm{e}^{-\lambda_{m - k}\theta [ t_{k},t_{k+1}]}\,{\mathrm{d}}t_{k}\,\theta_{t_{i } } \mathrm{e}^{-\lambda_{m - i}\theta [ t_{i},t]}\,{\mathrm{d}}t_{i } , \\i_{1,\ldots , j-1,j,\ldots , i } & = & \int_{0}^{t}\cdots\int _ { t_{i-1}}^{t}\mathrm{e}^{-\lambda_{m}\theta[0,t_{1 } ] } \prod _ { k=1,k\ne j}^{i-1}\theta_{t_{k } } \mathrm{e}^{-\lambda_{m - k}\theta [ t_{k},t_{k+1}]}\,{\mathrm{d}}t_{k } \,\theta_{t_{i } } \mathrm{e}^{-\lambda_{m - i}\theta[t_{i},t]}\,{\mathrm{d}}t_{i},\end{aligned}\ ] ] where in .it can be easily seen that }-\mathrm{e}^{-\lambda_{m}\theta [ 0,t]}}{\lambda_{m}-\lambda_{m - i}}}.\ ] ] then we have where is the transition probability associated to the one - dimensional death process . by integrating twice , we obtain .\end{aligned}\ ] ] the iteration of the successive integrations can be represented as a binary tree with root , whose node branches into and , with both branches weighed , determined by the parent node s indices .the leaves correspond to nodes where the left coordinate touches zero if the right coordinate is already zero , or where the left crosses zero if the right coordinate is positive .the term associated to the leaf will be }$ ] weighed by some appropriate coefficient .the level before the leaves can be seen as the sequence where every sequence of terms is repeated with the last index augmented by one , and each produces the leaves and . hence given , there are terms , terms , , terms , terms and .note also that has paths in common with , paths in common with , paths in common with , , paths in common with .the correct coefficient for is computed by collecting some constants related to the paths that have the same last coefficient and simplifying . in particular ,given , the paths to be grouped for are those whose constants for indices greater than change , since according to the rule above , when is the rightmost index in , there is only one path down to .hence , given , term has coefficient by taking moduli and applying lemma [ lemma cg09 ] to the sum above , we obtain the result now follows from ( [ eq : i - i ] ) and ( [ eq : rescale - p - i ] ) , and from the fact that in the -dimensional case , the probability of going from to , conditional on , is second author is supported by the european research council ( erc ) through stg `` n - bnp '' 306406 .the authors would like to thank valentine genon - catalot and aleksandar mijatovi for helpful discussions .
|
we link optimal filtering for hidden markov models to the notion of duality for markov processes . we show that when the signal is dual to a process that has two components , one deterministic and one a pure death process , and with respect to functions that define changes of measure conjugate to the emission density , the filtering distributions evolve in the family of finite mixtures of such measures and the filter can be computed at a cost that is polynomial in the number of observations . special cases of our framework include the kalman filter , and computable filters for the cox ingersoll ross process and the one - dimensional wright fisher process , which have been investigated before . the dual we obtain for the cox ingersoll ross process appears to be new in the literature .
|
the -index has been introduced by hirsch ( 2005 ) as a research performance indicator for individual scholars .the -index is designed as a single score , balancing the two most important dimensions of research activity , _i.e. _ the productivity of a scholar and the corresponding impact on the scientific community . indeed , according to the original definition of the empirical -index provided by hirsch ( 2005 ) , `` a scientist has index , if of his or her papers have at least citations each , whereas the other papers have no more than citations each '' .notwithstanding that the -index has been only recently proposed , it is increasingly being adopted for evaluation and comparison purposes to provide information for funding and tenure decisions , since it is considered an appropriate tool for identifying `` good '' scientists ( ball , 2007 ) . as a matter of fact, several reasons explain its popularity and diffusion ( costas and bordons , 2007 ) .as it is apparent from its definition , the -index displays a simple structure allowing easy computation , using data from web of science , scopus or google scholar , while it is robust to publications with a large or small number of citations .in addition , the -index may be adopted for assessing the research performance of more complex structures , such as journals ( setting up as a competitor of the impact factor , see _e.g. _ braun _ et al ._ , 2006 ) , groups of scholars , departments and institutions ( molinari and molinari , 2008 ) and even countries ( nejati and hosseini jenab , 2010 ) . quite obviously, the -index has received considerable attention by researchers in the fields of scientometrics and information science ( van noorden , 2010 ) .even if the hirsch index was originally introduced in a descriptive framework , scientometricians often aim to assume a statistical model for citation distribution and the interest is focused on the empirical -index ( see _ e.g. _ glnzel , 2006 ) . in a proper statistical perspective , beirlant and einmahl ( 2010 ) have managed the empirical -index as the estimator for a suitable statistical functional of the citation - count distribution .accordingly , these authors have proven the consistency of the empirical -index and they have given the conditions for its large - sample normality .in addition , beirlant and einmahl ( 2010 ) have provided an expression for the large - sample variance of the empirical -index and a simplified formula for the same quantity when the underlying citation - count distribution displays pareto - type or weibull - type tails .these two special families have central importance in bibliometrics , since heavy - tailed citation - count distributions are commonly assumed ( see _ e.g. _ glnzel , 2006 , and barcza and telcs , 2009 ) .beirlant and einmahl ( 2010 ) have developed the asymptotic theory for the empirical -index by assuming a continuous citation - count distribution , even if the citation number is obviously an integer .hence , scientometricians may demand results on the empirical hirsch index under a more general approach . 
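For reference, Hirsch's empirical index as quoted above can be computed directly from a scholar's citation counts; the short sketch below does so by ranking the counts in decreasing order. The example counts are invented.

```python
def h_index(citations):
    """Largest h such that at least h of the papers have at least h citations each."""
    h = 0
    for rank, c in enumerate(sorted(citations, reverse=True), start=1):
        if c >= rank:
            h = rank
        else:
            break
    return h

print(h_index([10, 8, 5, 4, 3, 0]))   # 4: four papers have at least 4 citations each
```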
thus , on the basis of a suitable reformulation of the empirical -index , we provide distributional properties , as well as exact expressions for the mean and variance , of the empirical -index when citation counts follow a distribution supported by the integers .moreover , the general large - sample properties of the empirical -index are obtained and the link between the `` integer '' and the `` continuous '' cases is fully analyzed .in addition , a simple and consistent nonparametric estimator for the variance of the empirical -index is also introduced under very mild conditions .accordingly , the achieved theoretical results are assessed in a small - sample study by assuming some specific heavy - tailed distributions for the citation counts .finally , an application to the `` top - ten '' researchers for the web of science archive in the field of statistics and probability during the period 1985 - 2010 is carried out .let be a positive random variable ( r.v . ) and let be the corresponding survival function ( s.f . ) , _ i.e. _ even if is a discrete r.v . in the common bibliometric applications( since it represents the citation number for a paper of a given scholar ) , we actually provide a more general approach . similarly to beirlant and einmahl ( 2010 ) , it is assumed that for each since an unbounded support for the r.v . is usually required in scientometrics ( egghe , 2005 ) .if the left - hand limit of is denoted by on the basis of the hirsch definition of the empirical -index reported in the introduction , for each integer , a `` natural '' expression for the theoretical hirsch index , corresponding to the law of , is given by obviously , it turns out that since is a strictly positive function .it is at once apparent that ( 2.1 ) encompasses the definition of the theoretical -index given by beirlant and einmahl ( 2010 ) when the r.v . is assumed to be continuous .moreover , when the r.v . is integer - valued - the most interesting situation in bibliometrics - the theoretical hirsch index ( 2.1 ) reduces to the integer number defined by h_n&=\max\{j\in { { \mathop{{\rm i}\mskip-4.0mu{\rm n}}\nolimits}}:ns(j-1)\geq j\}\\ & = \sum_{j=1}^ni_{[j / n,1]}(s(j-1 ) ) , \end{aligned}\ ] ] where represents the usual indicator function of a set .it should be remarked that and as , as immediately follows from the definition ( 2.1 ) .since it holds that where denotes the function giving the greatest integer less than or equal to the function argument , it is worth noticing that turns out to be the -index corresponding to the law of .indeed , from the definition ( 2.1 ) we have and rephrasing the previous statement in its dual setting , if is an integer - valued r.v . , the -index corresponding to the law of turns out to be the integer part of the -index corresponding to the absolutely continuous law , where is a uniform r.v . on ] and \nearrow\infty ] as . the behavior of ] as will be considered at length in sections 3 and 4 .by means of expression ( 2.5 ) and considering the discussion following expression ( 2.2 ) , in order to explore the large - sample behavior of the empirical -index as , laws defined on a continuous support may be managed by considering laws concentrated on integers and _ vice versa_. moreover , if is an infinitesimal sequence and converges in distribution to , it follows that in addition , by noting that means asymptotic equivalence of the sequences and , _i.e. 
_ as , if ] , we have }{{\rm var}[\widetilde{h}_n]}=1\ ] ] and where is a consistent estimator of ] is of central importance in the most interesting case for the scientometricians , _i.e. _ when it holds that \rightarrow\infty ] and for the weibull - type family of laws satisfying the condition with ,1/2[ ] does not approach as .hence , it is convenient to introduce a further condition which solely involves the behavior of on and which implies condition ( 4.2 ) .more precisely , we consider the condition where obviously , when the r.v . is integer - valued , represents the probability function corresponding to .it should be noticed that condition ( 4.4 ) may also be reformulated as where is a positive infinitesimal sequence , and hence for each , it holds since 1&\leq\frac{s(n - m\sqrt{n})}{s(n+m\sqrt{n})}\sim\prod_{j=\lfloor - m\sqrt{n}\rfloor+1}^{\lfloor m\sqrt{n}\rfloor+1}\left(1+\frac{\gamma_{n+j}}{\sqrt{n+j}}\right)\\ & \leq\exp\left(\frac{2(m+1)\sqrt{n}\delta_{n+\lfloor - m\sqrt{n}\rfloor}}{\sqrt{n - m\sqrt{n}-1}}\right)\sim 1 \end{aligned}\ ] ] where .as proven in the following result , condition ( 4.4 ) ensures the unboundedness of ( 2.6 ) as .if the law of satisfies condition ( 4.4 ) , it holds that =\infty,\ ] ] in order to achieve consistent estimation of ( 2.6 ) , it is necessary to introduce a further condition , which is slightly more restrictive than condition ( 4.4 ) .more precisely , this condition assumes that for each it holds that where \cap{\mathop{{\rm i}\mskip-4.0mu{\rm n}}\nolimits} ] represents the function giving the integer closest to the argument .in addition , in order to assess the homogeneity of the theoretical -indexes for two scholars , a suitable test statistic is given by where and represent the empirical -indexes corresponding to the scholars , while and are the respective variance estimators as given by ( 4.7 ) .it is at once apparent that as , when and approaches normality .the test statistic is defined in a nonparametric setting , in contrast to the test statistic proposed in a semiparametric approach by beirlant and einmahl ( 2010 ) , which requires consistent estimation of the two paretian indexes of the scholar citation distributions . finally ,when the analysis of the theoretical -indexes corresponding to scholars ( ) is considered , simultaneous set estimation and homogeneity hypothesis testing could be managed by means of techniques similar to those suggested in marcheselli ( 2003 ) .these issues will be pursued in future research .in order to assess in practice the properties of the empirical -index achieved in the previous sections , a study was carried out for two heavy - tailed distributions .first , a discrete stable distribution for the r.v . was considered .this distribution may be specified via the probability generating function =\exp(-\lambda(1-s)^\alpha)\text { , } s\in[0,1],\ ] ] where ,1] ] ( steutel and van harn , 2004 , p.265 ) .the discrete stable distribution is paretian of order for ,1[ ] .obviously , it turns out that by assuming that , the values of , ] and of the large - sample variance approximation were computed for the discrete stable distribution with parameter vectors , , as well as for the discretized weibull distribution with parameters .these choices were made in order to fit , as close as possible , the real productivity and the real citation distributions of scholars with different scientific ages and belonging to different research areas and with more or less pronounced impact on research . 
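before turning to the actual study, a minimal sketch of this kind of small-sample experiment is given below. it draws citation counts from a discretized weibull law, here read as the integer part of a weibull variate (our assumption), computes monte carlo estimates of the mean and variance of the empirical h-index, and includes the two-scholar homogeneity statistic discussed above as the standardised difference of the empirical h-indexes; all parameter values are illustrative only and are not those used in the paper.

```python
import math
import random
import statistics
from statistics import NormalDist

def discretized_weibull(shape, scale):
    """One citation count, taken as floor(W) with W Weibull-distributed
    (our reading of the 'discretized Weibull' law)."""
    return int(scale * (-math.log(1.0 - random.random())) ** (1.0 / shape))

def h_index(citations):
    """Empirical h-index (same rule as the earlier sketch, written inline)."""
    return sum(1 for rank, c in
               enumerate(sorted(citations, reverse=True), 1) if c >= rank)

def monte_carlo_h(n_papers, n_rep, shape=0.7, scale=8.0):
    """Monte Carlo estimates of the mean and variance of the empirical
    h-index for a scholar with n_papers papers under the chosen law."""
    hs = [h_index([discretized_weibull(shape, scale) for _ in range(n_papers)])
          for _ in range(n_rep)]
    return statistics.mean(hs), statistics.variance(hs)

def homogeneity_test(h1, v1, h2, v2):
    """Large-sample test of equality of two theoretical h-indexes:
    standardised difference of the empirical h-indexes, taking consistent
    variance estimates (e.g. the paper's eq. (4.7)) as inputs."""
    t = (h1 - h2) / math.sqrt(v1 + v2)
    return t, 2.0 * (1.0 - NormalDist().cdf(abs(t)))

print(monte_carlo_h(n_papers=100, n_rep=2000))
print(homogeneity_test(h1=32, v1=9.0, h2=27, v2=7.5))   # hypothetical numbers
```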
in the study , variates were generated for each choice and for each considered distribution in order to achieve the monte carlo estimation of ] are similar even for small values and turns out to be nearly unbiased .in addition , it can be verified that the actual coverage of the large - sample confidence set ( 4.9 ) is almost equivalent to the nominal coverage even for small values .unfortunately , it can be assessed that the large - sample variance approximation ( 4.8 ) may be quite dissimilar from ]&]&coverage + &&&&&&&& + & & &&&&&& + & & &&&&&& + & & &&&&&& + & & &&&&&& + & & & & & & & & + &&&&&&&& + & & &&&&&& + & & &&&&&& + & & &&&&&& + & & &&&&&& + & & & & & & & & + &&&&&&&& + & & &&&&&& + & & &&&&&& + & & &&&&&& + & & &&&&&& + cccccccc + &&&]&& ] independent from , while let be the empirical -index based on independent copies of . since and ,the convergence in probability to of is in turn obtained from the continuous - setting result .moreover , the uniform integrability of follows by considering inequality ( 3.2 ) since e\left[\left(\sum_{j=\lfloor 2h_n\rfloor+1}^ny_{j , n}\right)^2\right]&={\rm var}\left[\sum_{j=\lfloor 2h_n\rfloor+1}^ny_{j , n}\right]+e\left[\sum_{j=\lfloor 2h_n\rfloor+1}^ny_{j , n}\right]^2\\ & \leq\sum_{j=\lfloor 2h_n\rfloor+1}^nr_{j , n}+\left(\sum_{j=\lfloor 2h_n\rfloor+1}^nr_{j , n}\right)^2 \end{aligned}\ ] ] { \rm var}[\widehat{h}_n]&\geq\sum_{j=1}^{\lfloor 2h_n\rfloor}p_{j , n}(1-p_{j , n})\geq\sum_{j\in d_n}p_{j , n}(1-p_{j , n})\\ & \geq cg(1)(1-g(1))-a\,\sum_{j\in d_n}\frac{v_{j , n}^3 + 1}{v_{j , n}^4(1+|x_{j , n}|)^6 } , \end{aligned}\ ] ] for a fixed , from condition ( 4.6 ) and from ( 4.5 ) it follows that thus , by means of expression ( a.4 ) , from theorem 4.2 it follows that { \rm var}[\widehat{h}_n]&\sim{\rm var}[\widetilde{h}_n]\sim\sum_{j=1}^{\lfloor 2h_n\rfloor}p_{j , n}(1-p_{j , n})+2\,\sum_{l=2}^{\lfloor 2h_n\rfloor}\,p_{l , n}\,\sum_{j=1}^{l-1}(1-p_{j , n})\\ & \sim 2\,\sum_{l=2}^{\lfloor 2h_n\rfloor}\,p_{l , n}\,\sum_{j=1}^{l-1}(1-p_{j , n})\\ & \sim\frac{2h_n}{(1+n\psi(\lfloor h_n\rfloor))^2}\,\int_{-\infty}^{2h_n}g(x)\,\text{d}x\,\int_{-\infty}^x(1-g(u))\,\text{d}u\\ & \sim\frac{h_n}{(1+n\psi(\lfloor h_n\rfloor))^2 } , \end{aligned}\ ] ] since
|
the hirsch index ( commonly referred to as the h-index ) is a bibliometric indicator which is widely recognized as effective for measuring the scientific production of a scholar, since it summarizes the size and the impact of the research output. in a formal setting, the h-index is actually an empirical functional of the distribution of the citation counts received by the scholar. under this approach, the asymptotic theory for the empirical h-index has recently been developed for citation counts following a continuous distribution and, in particular, variance estimation has been considered for the pareto-type and the weibull-type distribution families. however, in bibliometric applications, citation counts display a distribution supported by the integers. thus, we provide general properties of the empirical h-index under both small- and large-sample settings. in addition, we introduce a consistent nonparametric variance estimator, which allows for the implementation of large-sample set estimation for the theoretical h-index.
|
the amount of multimedia traffic transfered over the internet is steadily growing , mainly driven by video traffic .a large fraction of this traffic is already carried by dedicated content delivery networks ( cdns ) and this trend is expected to continue .the largest cdns are distributed , with a herd of storage points and data centers spread over many networks . with the cooperation of the internet service providers, cdns can deploy storage points anywhere in the network . however , these cooperations require a lot of negociations and are therefore only slowly getting established , and it also seems necessary to consider alternatives to a global cache - network planning .an easily deployable solution is to make use of the ressources ( storage , upload bandwidth ) already present at the edge of the network .such an architecture offers a lot of potential for cost reduction and quality of service upgrade , but it also comes with new design issues as the operation of a very distributed overlay of small caches with limited capabilities is harder than that of a more centralized system composed of only a few large data centers . in this paper , we consider a hybrid cdn consisting of both a data center and an edge - assistance in the form of many small servers located at the edge of the network ; we call such a system an edge - assisted cdn .the small servers are here to reduce the cost of operation of the cdn , while the data center helps provide reliability and quality of service guarantees .we stress the fact that the small servers have a limited storage and service capacity as well as limited coordination capabilities , which makes cheap devices eligible for that role . as their respective roles hint , we assume the service of a request by the data center is much more costly than by a small server , thus requests are served from the small servers in priority .in fact , we only send requests to the data center if no idle server storing the corresponding content is present .we assume the data center is dimensioned to absorb any such remaining flow of requests .we restrict our attention here to contents which require immediate service , which is why we do not allow requests to be delayed or queued .stringent delay constraints are indeed often the case for video contents for example in the context of a video - on - demand ( vod ) service , which represent already more than half of the internet traffic . in that context , we address the problem of determining which contents to store in the small servers .the goal is to find content placement strategies which offload as much as possible the data center . in this optic , it is obvious that popular contents should be more replicated , but there is still a lot of freedom in the replication strategy . 
in order to compare different policies , we model the system constituted by the small servers alone as a loss network , where each loss corresponds to a request to the data center .the relevant performance metric is then simply the expected number of losses induced by each policy .we make three main contributions : first , using a mean - field heuristic for our system with a large number of servers with limited coordination , we compute accurately the loss rate of each content based on its replication ; building on this analysis , we derive an optimized static replication policy and show that it outperforms significantly standard replications in the literature ; finally , we propose adaptive algorithms attaining the optimal replication .contrary to most caching schemes , our eviction rules are based on loss statistics instead of access statistics ; in particular our algorithms do not know or learn the popularities of the contents and will adapt automatically .we also propose a simple mechanism relying heavily on our mean - field analysis that allows us to speed up significantly the rate of adaptation of our algorithms . at each step , extensive simulations support andillustrate our main points .the rest of the paper is organized as follows : we review related work in section [ sec : related work ] and describe more in detail our system model in section [ sec : model ] . in section [ sec : analysis by approximation ] , we analyse our model under a mean - field approximation and obtain an asymptotic expression for the optimal replication . then , in section [ sec : replication policy ] , leveraging the insights from the analysis we propose new adaptive schemes minimizing the average loss rate , as well as ways to further enhance their adaptation speed .our edge - assisted cdn model is related to a few other models .a major difference though is our focus on modeling the basic constraints of a realistic system , in particular regarding the limited capacity of servers ( in terms of storage , service and coordination ) and the way requests are handled , i.e. 
the _ matching _ policy , which role is to determine which server an incoming request should be directed to .we indeed consider the most basic matching policy : an idle server storing the content is chosen uniformly at random ; and no queueing of requests , no re - direction during service and no splitting are allowed .the reason for modeling such stringent constraints and refusing simplifying assumptions in this domain is that this should lead to fundamental intuitions and qualitative results that should apply to most practical systems .our edge - assisted cdn model is very similar to the distributed server system model of ; we merely use a different name to emphasize the presence in the core of the network of a data center , which makes it clear what performance metric to consider , while otherwise availability of contents for example may become a relevant metric .the edge - assisted cdn model is also related to peer - to - peer ( p2p ) vod systems , which however have to deal with the behavior of peers , and to cache networks , where caches are deployed in a potentially complex overlay structure in which the flow of requests entering a cache is the `` miss '' flow of another one .previous work on p2p vod typically does not model all the fundamental constraints we mentioned , and the cache networks literature tends to completely obliterate the fact servers become unavailable while they are serving requests and to focus on alternative performance metrics such as search time or network distance to a copy of requested contents . in distributed server networks , a maximum matching policy ( with re - packing of the requests at any time butno splitting or queueing ) has been considered in . in thissetting , in it was shown that replicating contents proportionally to their popularity leads to full efficiency in large system and large storage limits , and in it was further proved that there is actually a wide range of policies having such an asymptotic property and an exact characterisation of the speed at which efficiency decreases for a given policy was obtained . however , that formula for the efficiency is too complicated to derive any practical scheme from it , and anyway maximum matching at any time is probably unrealistic for most practical systems .finding efficient content replications in cache networks is a very active area of research .the optimal replication is not yet understood , however adaptive strategies are studied ; they create replicas for contents when incoming requests arrive and evict contents based on either random , least frequently used ( lfu ) or least recently used ( lru ) rules , or a variation of those .there is an extensive literature on the subject and we only mention the most relevant papers to the present work .an approximation for lru cache performance is proposed in and further supported in ; it allows to study cache networks with a two - levels hierarchy of caches under a mix of traffic with various types of contents .based on the differences in their respective popularity distributions , advocates that vod contents are cached very close to the network edge while user - generated contents , web and file sharing are only stored in the network core . in a loose sense, we also study such type of two - layer systems , with the small servers at the edge of the network and the data center in the core , and address the question of determining whether to store contents at the edge ( and in what proportion ) or to rely on the core to serve them . 
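returning to the matching policy defined at the beginning of this passage, it amounts to the following small helper (names are ours), which the simulation sketch given further below also reuses: `caches` maps a content to the set of servers holding a copy of it, and `busy` is the set of servers currently serving a request.

```python
import random

def match(content, caches, busy):
    """Baseline matching rule: among the idle servers caching `content`,
    pick one uniformly at random; return None when every replica is busy,
    in which case the request goes to the data center and counts as a loss."""
    idle = [s for s in caches[content] if s not in busy]
    return random.choice(idle) if idle else None
```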
concerning p2p vod models ,content pre - fetching strategies have been investigated in many papers ( e.g. ) , where optimal replication strategies and/or adaptive schemes were derived .they work under a maximum matching assumption and most of them assume infinite divisibility of the requests , i.e. that a request can be served in parallel by all the servers storing the corresponding content ; we want to take into account more realistic constraints in that respect .one can also explore the direction of optimizing distributed systems with a special care for the geographical aspects as in .these papers solve content placement and matching problems between many regions , while not modeling in a detailed way what happens inside a region .finally , most of the work in these related domains focuses on different performance metrics : in hierarchical cache networks , addresses the problem of joint dimensioning of caches and content placement in order to reduce overall bandwidth consumed ; optimizes replication in order to minimize search time and devises elegant adaptive strategies towards that end .surprisingly , for various alternative objectives and network models , the proportional ( to popularity ) replication exhibits good properties : it minimizes the download time in a fully connected network with infinite divisibility of requests and convex server efficiency , as well as the average distance to the closest cache holding a copy of the requested content in a random network with exponential expansion ; and recall that also advocated using this replication policy .therefore , due to its ubiquity in the literature , proportional replication is the natural and most relevant scheme to which we should compare any proposed policy ; we will obtain substantial improvements over it in our model .in this section , we describe in detail our edge - assisted cdn model .as already mentioned , the basic components of an edge - assisted cdn are a data center and a large number of small servers .the cdn offers access to a catalog of contents , which all have the same size for simplicity ( otherwise , they could be fragmented into constant - size segments ) .the data center stores the entire catalog of contents and can serve all the requests directed towards it , whereas each small server can only store a fixed number of contents and can provide service for at most one request at a time .we can represent the information of which server stores which content in a bipartite graph , where is the set of servers , the set of contents , and there is an edge in between a content and a server if stores a copy of ; an edge therefore indicates that a server is able to serve requests for the content with which it is linked .given numbers of replicas for the contents , we assume that the graph is a uniform random bipartite graph with fixed degrees on the servers side and fixed degree sequence on the contents side .this models the lack of coordination between servers and would result from them doing independent caching choices .we do not allow requests to be queued , delayed or re - allocated during service , and as a result a server which begins serving a request will then remain unavailable until it completes its service .splitting of requests is not considered either , in the sense that only one server is providing service for a particular request . 
with these constraints , at any time , the subset of edges of over which some service is being provided form a generalized matching of the graph ( the analysis in relies heavily on this ) .the contents of the catalog may have different popularities , leading to different requests arrival rates .we let be the average request rate , i.e. . according to the independent reference model ( irm ) , the requests for the various contents arrive as independent poisson processes with rates .the service times of all the requests are independent exponential random variables with mean , which is consistent with all the servers having the same service capacity ( upload bandwidth ) and all the contents having the same size .we let denote the expected load of the system , i.e. .the distribution of popularities of the contents is left arbitrary , although in practice zipf distributions are often encountered ( see for a page - long survey of many studies ) and it seems therefore mandatory to evaluate the replication schemes proposed under a zipf popularity distribution . when a request arrives , we first look for an idle server storing the corresponding content ; if none is found then the request is sent to the data center to be served at a higher cost . as in the end all the requestsare served without delay , be it by the data center or by a small server , the performance of the system is mainly described by the cost associated with the service .this cost is mostly tied to the number of requests served by the data center , therefore the most relevant performance metric here is the fraction of the load left - over to the data center .then , it makes sense to view the requests that the small servers can not handle as `` lost '' .in fact , the system consisting of the small servers alone with fixed caches is a loss network in the sense of .this implies that the system corresponds to a reversible stochastic process , with a product form stationary distribution .however , the exact expression of that stationary distribution is too complex to be exploited in our case ( as opposed to where the maximum matching assumption yields a much simpler expression ) .we call the rate at which requests for content are lost , and the average loss rate among contents , i.e. . the main goal is to make as small as possible .we refer to the fraction of lost requests as the inefficiency of the system .finally , as in many real cases cdns are quite large , with often thousands of servers and similar numbers of contents , it seems reasonable to pay a particular attention to the asymptotics of the performance of the system as . to keep things simple, we can assume that the number of servers and the number of contents grow to infinity together , i.e. for some fixed .in addition , as storage is cheap nowadays compared to access bandwidth , it also makes sense to focus on a regime with large ( but still small compared to ) . 
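the model just described is simple enough to simulate directly. the sketch below is a toy discrete-event simulation (ours, not the authors' code): caches are filled uniformly at random given the replica counts, requests arrive as a poisson process, they are routed with the matching helper introduced earlier, services are exponential, and requests finding no idle replica are counted as losses.

```python
import collections
import heapq
import random

def simulate(popularity, replicas, m, d, horizon, mu=1.0):
    """popularity[c]: Poisson arrival rate of content c; replicas[c]: number
    of cached copies; m servers with d cache slots each; exponential services
    of rate mu.  Returns per-content loss counts and the number served locally."""
    assert sum(replicas) <= m * d, "not enough cache slots for these replicas"
    # independent, uncoordinated placement: shuffle the m*d cache slots
    slots = [s for s in range(m) for _ in range(d)]
    random.shuffle(slots)
    it = iter(slots)
    caches = collections.defaultdict(set)
    for c, n_c in enumerate(replicas):
        for _ in range(n_c):
            caches[c].add(next(it))

    busy, completions = set(), []            # completions: (time, server) heap
    losses, served = collections.Counter(), 0
    lam_tot, t = sum(popularity), 0.0
    while t < horizon:
        t += random.expovariate(lam_tot)                # next arrival
        while completions and completions[0][0] <= t:   # free finished servers
            _, s = heapq.heappop(completions)
            busy.discard(s)
        c = random.choices(range(len(popularity)), weights=popularity)[0]
        s = match(c, caches, busy)                      # matching rule above
        if s is not None:
            busy.add(s)
            heapq.heappush(completions, (t + random.expovariate(mu), s))
            served += 1
        else:
            losses[c] += 1                              # data-center hit
    return losses, served
```

feeding this simulator replica counts proportional to popularity, or the optimized counts derived later, gives a direct empirical comparison of replication policies.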
in these regimes, the inefficiency will tend to zero as the system size increases under most reasonable replications (as was shown previously under the maximum matching assumption). however, as the cost of operation is only tied to losses, further optimizing an already small inefficiency is still important and can lead to order-of-magnitude improvements in the overall system cost. in this section, we propose an approximation to understand in a precise manner the relation between any fixed replication of contents and the loss rates in the system. this analytical step has many advantages: it allows us to formally demonstrate that to optimize the system one needs to make the loss rates equal for all the contents; as a consequence we obtain an explicit expression for the optimal replication (subsection [ sec : optimized replication ]); finally, in subsection [ sec : virtual losses ], we leverage our analytical expression for the full distribution of the number of available replicas of a content to propose a mechanism enhancing the speed of adaptive algorithms. we validate our approximation and show that our optimized replication strategy largely outperforms proportional replication through extensive simulations. for a given fixed constitution of the caches (i.e. a fixed graph), the system as described in the previous section is markovian, the minimum state space indicating which servers are busy (it is not necessary to remember which content they serve). we want to understand how the loss rate of a particular content relates to the graph, but using information about the graph that is too detailed, such as the exact constitution of the caches containing the content, would have the drawback of not leading to a simple analysis when considering adaptive replication policies (as the graph would then keep changing). therefore, we need a simple enough but still accurate approximate model tying the loss rate of a content to its number of replicas. the expected loss rate of a content is equal to its request arrival rate multiplied by the steady-state probability that it has no replica available. for each content, we consider its total number of replicas and its number of _available_ replicas, i.e. those stored on a currently idle server. we thus want to compute the steady-state probability that no replica is available in order to get access to the loss rate. however, the markov chain describing the full system is too complicated to say much about its steady state. in order to simplify the system, one can remark that in a large system the states of any fixed number of servers (i.e. their current caches and whether they are busy or idle) are only weakly dependent, and similarly the numbers of available replicas of any fixed number of contents are only weakly dependent. therefore, it is natural to approximate the system by decoupling the different contents and the different servers (similar phenomena are explained rigorously elsewhere). in other words, this amounts to forgetting the exact constitution of the caches; as a result, the correlation between contents which are stored together is lost. then, the number of available replicas of a content evolves as a one-dimensional markov chain, independent of the corresponding quantities for the other contents, and we can try to compute its steady-state distribution. for any state, the rate of transition to the state with one more available replica is simply the rate at which one of the occupied servers storing the content completes its current service; we do not need to distinguish whether such a server is actually serving a request for this content or for another one, as we assume the service times are independent and identically distributed across contents.
for any , the transitions from to are more complicated .they happen in the two following cases : * either a request arrives for content and as it is assigned an idle servers storing ; * or a request arrives for another content and it is served by a server which also stores content . the first event occurs at rate and the second one at expected rate ,\ ] ] where indicates that the content is stored on the server . at any time and for any , is equal to .the term is equal in expectation to the number of idle servers storing ( i.e. ) times the probability that such a server also stores . as we forget about the correlations in the caching , this last probability is approximated as the probability to pick one of the remaining memory slots in an idle servers storing when we dispatch at random the available replicas of content between all the remaining slots in idle servers .thus , \approx\frac{z_{c'}(d-1)}{(1-\rhoeff)md}z_c,\ ] ] where is the average load effectively absorbed by the system , so that the total number of memory slots on the idle servers is .we also neglected the memory slots occupied by in these idle servers when computing the total number of idle slots .we obtain the following approximation for the expected rate at which the second event occurs : where we neglected the rate at which requests arrive for content compared to the aggregate arrival rate of requests , and we let . note that the interesting regime , with reasonably high effective load , corresponds to large values of , as implies .the markov chain obtained satisfies the local balance equations : for , this yields the following steady - state probability : where the binomial coefficient does not really have a combinatorial interpretation as one of its arguments is not an integer , but should only be understood as , for and .we now have an approximation for the full distribution of the number of available replicas of content , which yields an approximation for the loss rate .we can also compute the mode of this distribution : , which can be used as an approximation for the expectation $ ] ( in fact , simulations show the two are nearly indistinguishable ) .we further simplify the expression obtained by providing a good approximation for the normalization factor in equation ( [ eqn : approximation z_c ] ) : to that end , we use the fact that we aim at small loss rates , and thus most of the time at small probabilities of unavailability .therefore , the bulk of the mass of the distribution of should be away from , which means that the terms for close to should be fairly small in the previous expression .we thus extend artificially the range of the summation in this expression and approximate as follows : we obtain the following approximation for : finally , as we are interested in large systems and large storage / replication , we can leverage the following asymptotic behavior : for large enough and where is the gamma function : ; is the -th harmonic number : ; and is the euler - mascheroni constant : . for large number of replicas , we thus obtain the approximation : where we let and the term is an approximation of . 
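under the decoupling approximation, the number of available replicas of a content is therefore a one-dimensional birth-death chain, and its stationary law follows from the local-balance equations. the sketch below solves them numerically; the parameter `kappa` stands for the per-available-replica rate at which requests for other co-cached contents occupy the server (the paper derives a closed-form expression for this rate in terms of the effective load, the cache size and the aggregate arrival rate; here we keep it as an explicit input).

```python
def availability_distribution(n_c, lam_c, kappa, mu=1.0):
    """Stationary law of the number of available replicas of a content under
    the decoupled birth-death approximation:
      i -> i+1 at rate (n_c - i) * mu       (a busy replica finishes serving)
      i -> i-1 at rate lam_c + i * kappa    (a replica becomes busy)
    Local balance: pi(i) * down(i) = pi(i-1) * up(i-1)."""
    weights = [1.0]
    for i in range(1, n_c + 1):
        up = (n_c - i + 1) * mu
        down = lam_c + i * kappa
        weights.append(weights[-1] * up / down)
    total = sum(weights)
    return [w / total for w in weights]

def loss_rate(n_c, lam_c, kappa, mu=1.0):
    """Approximate loss rate: arrival rate times P(no replica available)."""
    return lam_c * availability_distribution(n_c, lam_c, kappa, mu)[0]
```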
the approximation for yields an approximation for the loss rate : note that the expression of involves the average effective load , which itself depends on the average loss rate .we thus have a fixed point equation in ( just as in ) , which we can easily solve numerically .indeed , the output value of our approximation is a decreasing function of the input value used in , which implies simple iterative methods converge exponentially fast to a fixed - point value for . in this subsection, we exploit the approximation obtained in equation ( [ eqn : loss rate ] ) to understand which replication strategy minimizes the total loss rate in the system .in other words , we approximately solve the optimization problem : note that the approximation from equation ( [ eqn : loss rate ] ) is consistent with the intuition that is a decreasing , convex function of .indeed , letting since we have in the regime of interest , we compute the loss rate is a convex function of as shown by as a consequence , the optimization problem ( [ eqn : optimization problem ] ) is approximately convex , and we thus obtain an approximate solution by making the first derivatives of the loss rates equal . keeping only the dominant orders in , we have in the first order , equalizing the first derivatives in ( [ eqn : first derivative ] ) using the expression in ( [ eqn : loss rate ] ) leads to equalizing the number of replicas for every content , i.e. setting where is the average number of replicas of contents . going after the second order term, we get we therefore obtain that the asymptotically optimal replication is uniform with an adjustment due to popularity of the order of the log of the average number of replicas .this agrees with the results in and goes beyond . finally , inserting back this expression for into equation ( [ eqn : loss rate ] ) yields which shows that the average loss rate under optimized replication behaves as the loss rate of an imaginary average content ( one with popularity and replication ) . in the process of approximating the loss rate of a content , we performed many approximations based on asymptotic behaviors for large systems and large storage / replication .it is therefore necessary to check whether the formula of equation ( [ eqn : loss rate ] ) is not too far off , which we do in this section via simulation .the systems we simulate are of quite reasonable size ( with a few hundred contents and servers ) . however , the accuracy of the approximation should only improve as the system grows .we use two scenarios for the popularity distribution of the contents : a class model ( similar to that of ) , which allows us to compare between many contents with similar characteristics , and a zipf popularity model , which is deemed more realistic .we evaluate the accuracy of the approximation using proportional replication ( other replications were also simulated and yield similar results ) . 
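because the loss rates depend on the effective load, which in turn depends on the loss rates, the approximation is closed by a fixed-point iteration, as noted above. the sketch below implements one; the expression mapping the effective load to `kappa` is a simplified stand-in of ours (kept in a single line so it can be swapped for the paper's exact formula), and `loss_rate` is the helper from the previous sketch.

```python
def solve_effective_load(popularity, replicas, m, d, mu=1.0, iters=100):
    """Iterate: effective load -> kappa -> per-content loss rates -> load
    actually absorbed by the small servers -> new effective load."""
    lam_tot = sum(popularity)
    assert lam_tot < m * mu, "offered load assumed below total server capacity"
    rho_eff = lam_tot / (m * mu)              # start from the offered load
    for _ in range(iters):
        # assumed form: idle slots are hit in proportion to the carried load
        kappa = rho_eff * mu * (d - 1) / ((1.0 - rho_eff) * d)
        gammas = [loss_rate(n, lam, kappa, mu)
                  for lam, n in zip(popularity, replicas)]
        rho_eff = (lam_tot - sum(gammas)) / (m * mu)
    return rho_eff, gammas
```

summing the returned loss rates gives the approximate inefficiency of any candidate replication, which is what the optimization below seeks to minimize.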
to compute the approximate expressions ,we solve numerically the fixed point equation in .we simulate systems under a reasonably high load of .as often , it is mainly interesting to know how the system behaves when the load is high , as otherwise its performance is almost always good .however , as requests arrive and are served stochastically , if the average load were too close to then the instantaneous load would exceed quite often , which would automatically induce massive losses and mask the influence of the replication .in fact , it is easy to see that we need to work with systems with a number of servers large compared to in order to mitigate this effect ..properties of the content classes , and numerical results on the accuracy of the approximation . [ cols="^,^,^,^,^,^,^",options="header " , ] the setup with the class model is the following : there are contents divided into classes ; the characteristics of the classes are given in the first part of table [ table : class properties ] .the popularities in table [ table : class properties ] together with and result in servers .each server can store contents , which is of the catalog of contents .we let the system run for units of time , i.e. contents of class 3 should have received close to requests each .{available_replicas_class_model.eps } & \includegraphics[width=1.7in , keepaspectratio]{loss_rates_class_model.eps } \end{array}\ ] ] figure [ fig : simulation vs approximation , class model ] and the second part of table [ table : class properties ] show the results for the class model : the left part of figure [ fig : simulation vs approximation , class model ] shows the distributions of the numbers of available replicas for all the contents against the approximation from equation ( [ eqn : approximation z_c ] ) for each class ; the right part shows the loss rates for all the contents against the approximation from equation ( [ eqn : loss rate ] ) for each class . although it is not apparent at first sight , the left part of figure [ fig : simulation vs approximation , class model ] actually displays a plot for each of the contents , but the graphs for contents of the same class overlap almost perfectly , which supports our approximation hypothesis that the stationary distribution of the number of available replicas is not very dependent on the specific servers on which a content is cached or on the other contents with which it is cached .this behavior is also apparent on the right part of figure [ fig : simulation vs approximation , class model ] , as the loss rates for the contents of the same class are quite similar . from figure[ fig : simulation vs approximation , class model ] and table [ table : class properties ] , it appears that the approximations from equations ( [ eqn : approximation z_c ] ) and ( [ eqn : loss rate ] ) are quite accurate , with for example a relative error of around in the inefficiency of the system ( vs ) .we consider such an accuracy is quite good given that some of the approximations done are based on a large storage / replication asymptotic , while the simulation setup is with a storage capacity of contents only and contents of class ( responsible for the bulk of losses ) have only replicas each .we now turn to the zipf popularity model . 
in this model ,the contents are ranked from the most popular to the least ; for a given exponent parameter , the popularity of the content of rank is given by .we use two different values for the zipf exponent , and , as proposed in .the exponent is meant to represent correctly the popularity distribution for web , file sharing and user generated contents , while the exponent is more fit for video - on - demand , which has a more accentuated popularity distribution .we simulate networks of contents and servers of storage capacity under a load .this yields an average content popularity .note that under proportional replication the numbers of replicas of the most popular contents are actually larger than the number of servers for ; we thus enforce that no content is replicated in more than of the servers . as expected and confirmed by the simulations , this is anyway benefical to the system .each setup is simulated for at least units of time .{log - log_available_zipf.eps } & \includegraphics[width=1.7in , keepaspectratio]{loss_rates_approximation_zipf.eps } \end{array}\ ] ] we show the results for the zipf model with both exponent values in figure [ fig : simulation vs approximation , zipf model ] and in the third part of table [ table : class properties ] : the left part of figure [ fig : simulation vs approximation , zipf model ] shows the expected number of available replicas for all the contents against the approximation from equation ( [ eqn : approximation z_c ] ) ; the right part shows the loss rates for all the contents against the approximation from equation ( [ eqn : loss rate ] ) .again , the results from both figure [ fig : simulation vs approximation , zipf model ] and table [ table : class properties ] confirm the accuracy of the approximations .we now turn to evaluating the performance of the optimized replication from equation ( [ eqn : optimal replication ] ) .figure [ fig : optimized replication ] shows the resulting loss rates under the optimized replication , with both zipf exponents .this figure is to be compared with the right part of figure [ fig : simulation vs approximation , zipf model ] , which showed the same results for proportional replication .it is clear that the optimized replication succeeds at reducing the overall inefficiency compared to proportional replication ( from to for , and from to for ) .note that in the case of zipf exponent the popularity distribution is more `` cacheable '' as it is more accentuated , and the optimized replication achieves extremely small loss rates .however , the loss rates for all the contents are not trully equalized , as popular contents are still too much favored ( as can be seen for zipf exponent ) . this may be because the expression for optimized replication is asymptotic in the system size and storage capacity .hence , there is still room for additional finite - size corrections to the optimized replication . as an outcome of this section ,we conclude that the approximations proposed are accurate , even at reasonable system size .in addition , the optimized scheme largely outperforms proportional replication . 
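the comparison reported above can be reproduced qualitatively with the earlier helpers. the sketch below builds a zipf popularity profile and allocates cache slots greedily, always giving the next copy to the content whose approximate loss rate decreases the most; this is a numerical counterpart of equalizing the derivatives in the convex problem, not the paper's closed-form rule, and using a fixed `kappa` is again a simplifying assumption of ours.

```python
def zipf_popularity(n_contents, exponent, lam_total):
    """Arrival rates proportional to 1 / rank**exponent, summing to lam_total."""
    weights = [1.0 / (rank + 1) ** exponent for rank in range(n_contents)]
    scale = lam_total / sum(weights)
    return [w * scale for w in weights]

def greedy_replication(popularity, total_slots, kappa, mu=1.0):
    """Start with one copy per content, then add the remaining copies one by
    one to the content with the largest marginal drop in approximate loss
    rate (uses loss_rate from the earlier sketch; slow but transparent)."""
    assert total_slots >= len(popularity)
    reps = [1] * len(popularity)
    for _ in range(total_slots - len(popularity)):
        gains = [loss_rate(n, lam, kappa, mu) - loss_rate(n + 1, lam, kappa, mu)
                 for lam, n in zip(popularity, reps)]
        reps[max(range(len(gains)), key=gains.__getitem__)] += 1
    return reps
```

feeding the resulting counts and a proportional allocation of the same total number of slots to the simulation sketch reproduces, qualitatively, the gap observed above between the two policies.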
in the next section, we derive adaptive schemes equalizing the loss rates of all the contents and achieving similar performances as the optimized replication .in a practical system , we would want to adapt the replication in an online fashion as much as possible , as it provides more reactivity to variations in the popularity of contents .such variations are naturally expected for reasons such as the introduction of new contents in the catalog or a loss of interest for old contents .thus , blindly enforcing the replication of equation ( [ eqn : optimal replication ] ) is not always desirable in practice . instead, we can extract general principles from the analysis performed in section [ sec : analysis by approximation ] to guide the design of adaptive algorithms . in this section ,we first show how basic adaptive rules from cache networks can be translated into our context based on the insights from section [ sec : analysis by approximation ] to yield adaptive algorithms minimizing the overall loss rate .then , we show how the more detailed information contained in equation ( [ eqn : approximation z_c ] ) allows the design of faster schemes attaining the same target replication .finally , we validate the path followed in this paper by evaluating through simulations the adaptive schemes proposed .the analysis in section [ sec : optimized replication ] shows that the problem of minimizing the average loss rate is approximately a convex problem , and that therefore one should aim at equalizing the derivatives of the stationary loss rates of all the contents .in addition , equation ( [ eqn : first derivative ] ) points out that these derivatives are proportional to in the limit of large replication / storage , and thus equalizing the loss rates should provide an approximate solution for the optimization problem .an immediate remark at this point is that it is unnecessary to store contents with very low popularity if the loss rate of the other contents is already larger than .an adaptive replication mechanism is characterized by two rules : a rule for creating new replicas and another one for evicting contents to make space for the new replicas . in order to figure out how to set these rules, we analyze the system in the fluid limit regime , with a separation of timescales such that the dynamics of the system with fixed replication have enough time to converge to their steady - state between every modification of the replication . achieving this separation of timescales in practicewould require slowing down enough the adaptation mechanism , which reduces the capacity of the system to react to changes .therefore , we keep in mind such a separation of timescales as a justification for our theoretical analysis but we do not slow down our adaptive algorithms .when trying to equalize the loss rates , it is natural to use the loss events to trigger the creation of new replicas .then , new replicas for content are created at rate , and we let be the rate at which replicas of are deleted from the system . in the fluid limit regime under the separation of timescale assumption , the number of replicas of evolves according to , where all the quantities refer to expectations in the steady - state of the system with fixed replication . 
at equilibrium , we have and for all the contents , thus we need for all to equalize the loss rates .this would be achieved for example if we picked a content for eviction uniformly at random among all contents .however , the contents eligible for eviction are only those which are available ( although some systems may allow modifying the caches of busy servers , as serving requests consumes upload bandwidth while updating caches requires only download bandwidth ) .therefore , the most natural and practical eviction rule is to pick an _ available _ content uniformly at random ( hereafter , we refer to this policy simply as the random policy ) .then , the eviction rate for content is given by .so , at equilibrium , we can expect to have , . in a large system , with large storage / replication and loss rates tending to zero ,the difference with a replication trully equalizing the loss rates is negligeable .if we are confronted to a system with a large number of very unpopular contents though , we can compensate for this effect at the cost of maintaining additional counters for the number of evictions of each content .once the rule for creating replicas if fixed , we immediately obtain a family of adaptive algorithms by modifying the eviction rules from the cache network context as we did above for the random policy .instead of focusing on `` usage '' and the incoming requests process as in cache networks with the lfu and lru policies , we react here to the loss process .this yields the lfl ( least frequently lost ) and lrl ( least recently lost ) policies .random , lrl , and lfl are only three variants of a generic adaptive algorithm which performs the following actions at every loss for content : 1 .create an empty slot on an available server , using the eviction rule ( random / lrl / lfl ) ; 2 .add a copy of the content into the empty slot .the three eviction rules considered here require a very elementary centralized coordinating entity .for the random rule , this coordinator simply checks which contents are available , picks one uniformly at random , say , and then chooses a random idle server storing .the server then picks a random memory slot and clears it , to store instead a copy of . in the case of lrl ,the coordinator needs in addition to maintain an ordered list of the least recently lost content ( we call such a list an lrl list ) .whenever a loss occurs for , the coordinator picks the least recently lost available content based on the lrl list ( possibly restricting to contents less recently lost than ) and then updates the position of in the lrl list .it then picks a random idle server storing , which proceeds as for random . finally , for lfl , the coordinator would need to maintain estimates of the loss rates of each content ( by whatever means , e.g. 
exponentially weighted moving averages ) ; when a loss happens , the coordinator picks the available content with the smallest loss rate estimate and then proceeds as the other two rules .this last rule is more complicated as it involves a timescale adjustement for the estimation of the loss rates , therefore we will focus on the first two options in this paper .note that the lfl policy does not suffer from the drawback that the eviction rate is biased towards popular contents due to their small unavailability , and this effect is also attenuated under the lrl policy .we point out that it is possible to operate the system in a fully distributed way ( finding a random idle server first and then picking a content on it by whatever rule ) , but this approach is biased into selecting for eviction contents with a large number of available replicas ( i.e. popular contents ) , which will lead to a replication with reduced efficency .it is of interest for future work to find a way to unbias such a distributed mechanism . in the algorithms proposed in the previous subsection, we use the losses of the system as the only trigger for creating new replicas .it is convenient as these events are directly tied to the quantity we wish to control ( the loss rates ) , however it also has drawbacks .firstly , it implies that we want to generate a new replica for a content precisely when we have no available server for uploading it , and thus either we send two requests at a time to the data center instead of one ( one for the user which we could not serve and one to generate a new copy ) or we must delay the creation of the new replica and tolerate an inedaquate replication in - between , which also hinders the reactivity of the system . secondly ,unless in practice we intend to slow down the adaptation enough , there will always be oscillations in the replication . if losses happenalmost as a poisson process , then only a bit of slow - down is necessary to moderate the oscillations , but if they happen in batches ( as it may very well be for the most popular contents ) then we will create many replicas in a row for the same contents .if in addition we use the lrl or lfl rules for evicting replicas , then the same contents will suffer many evictions successively , fuelling again the oscillations in the replication .finally , it is very likely that popularities are constantly changing . if the system is slow to react , then the replication may never be in phase with the current popularities but always lagging behind .for all these reasons , it is important to find a way to decouple the adaptation mechanisms from the losses in the system to some extent , and at least to find other ways to trigger adaptation .we propose a solution , relying on the analysis in section [ sec : analysis by approximation ] .a loss for content occurs when , upon arrival of a request for , its number of available replicas is equal to . in the same way ,whenever a request arrives , we can use the current value of to estimate the loss rate of .indeed , equation ( [ eqn : approximation z_c ] ) tells us how to relate the probability of to the loss rate , for any .of course , successive samples of are very correlated ( note that losses may also be correlated though ) and we must be careful not to be too confident in the estimate they provide . 
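before turning to virtual losses, here is a minimal sketch of the loss-driven adaptation with the lrl eviction rule described above; `caches` and `busy` are the structures of the simulation sketch, and the coordinator keeps nothing but the least-recently-lost order. the random rule is obtained by scanning a shuffled list of available contents instead of the lrl order.

```python
import random

class LRLReplicator:
    """On every (real or virtual) loss for content c: evict one cached copy
    of the least recently lost content that is available on an idle server
    not already caching c, and store a new copy of c there."""
    def __init__(self, contents):
        self.lrl = list(contents)            # front = least recently lost

    def on_loss(self, c, caches, busy):
        self.lrl.remove(c)
        self.lrl.append(c)                   # c is now the most recently lost
        for victim in self.lrl:              # least recently lost first
            if victim == c:
                continue
            idle = [s for s in caches[victim]
                    if s not in busy and s not in caches[c]]
            if idle:
                server = random.choice(idle)
                caches[victim].discard(server)   # evict one copy of victim
                caches[c].add(server)            # replicate c instead
                return server
        return None                              # nothing evictable right now
```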
a simple way to use those estimates to improvethe adaptation scheme is to generate _ virtual _ loss events , to which any standard adaptive scheme such as those introduced in the previous subsection may then react .to that end , whenever a request for arrives , we generate a virtual loss with a certain probability depending on the current number of available replicas .the objective is to define so that the rates of generation of virtual losses satisfy for all ( so that the target replication still equalizes loss rates ) and is as high as possible ( to get fast adaptation ) . as a first step towards setting the probability , we write equation ( [ eqn : approximation z_c ] ) as follows : this shows that , for any fixed value with , we can generate events at rate by subsampling at the time of a request arrival with with a first probability if on the contrary is such that , then the value of given above is larger than as we can not generate events at rate by subsampling even more unlikely events .if we generated virtual losses at rate as above for each value of , then the total rate of virtual losses for content would be , which clearly still depends on .we thus proceed in two additional steps towards setting : we first restrict the range of admissible values of , for which we may generate virtual losses , by excluding the values such that . in the regime , this can be done in a coarse way by letting and rejecting all the values .indeed , and the distribution of is unimodal with the mode at a smaller value .now , subsampling with probability when a request arrives for content and would generate events at a total rate .thus , it suffices to subsample again with probability to obtain the same rate of virtual losses for all the contents ( another approach would be to restrict again the range of admissible values of , e.g. to values around the mode ) . to sum up , our approach is to generate a virtual loss for content at each arrival of a request for with probability the rate at which virtual losses are generated for content is then given by , which is independent of as planned .whenever a virtual loss occurs , we can use whatever algorithm we wanted to use in the first place with real losses ; there is no need to distinguish between the real losses and the virtual ones .for example , if we use the lrl policy , we update the position of in the lrl list and create a new replica for by evicting a least recently lost available content ( from a server which does not already store ) .if we choose to test for virtual losses at ends of service for ( which yields the same outcome in distribution , as the system is reversible ) , the new replica can simply be uploaded by the server which just completed a service for . furthermore , in practice , we advocate estimating the values of and on the fly rather than learning these values for each content : can be computed from , which is naturally approximated by the ratio of the current number of busy servers to the total number of servers ; similarly , we can approximate by the current number of requests for being served . from these, we can compute and ; it is only necessary to maintain an estimate for , which can be for example an average over the few least popular contents updated whenever a service ends for one of them . in the next subsection ,we evaluate the performance of the adaptative schemes proposed and the virtual losses mechanism . 
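the function below is one concrete reading of the subsampling scheme just described, not necessarily the paper's exact formula: only occupancy levels below the mode of the approximate availability distribution are treated as admissible, each admissible level is thinned so that it contributes the same event rate, and the total virtual-loss rate is the same constant for every content.

```python
def virtual_loss_prob(z, dist, lam_c, target_rate):
    """Probability of flagging a virtual loss when a request for a content
    with arrival rate lam_c finds z available replicas; `dist` is the
    approximate stationary law of the availability (earlier sketch)."""
    mode = max(range(len(dist)), key=dist.__getitem__)
    admissible = [i for i in range(mode) if lam_c * dist[i] >= target_rate]
    if z not in admissible:
        return 0.0
    # arrivals seeing exactly z replicas occur at rate lam_c * dist[z];
    # thin them so each admissible level contributes target_rate / #levels
    return target_rate / (len(admissible) * lam_c * dist[z])
```

on each arrival (or, equivalently by reversibility, each end of service) one draws a bernoulli variable with this probability and, on success, calls the same on_loss handler as for a real loss.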
before getting to the simulation results ,let us just mention that the complexity of simulating the adaptive algorithms grows very fast with the system size ( with , and ) .indeed , it is easy to see that simulating the system for a fixed duration requires operations .furthermore , the time needed for the random algorithm to converge , when started at proportional replication , is roughly of the order of , where is the average loss rate for the limit replication , which decreases exponentially fast in as seen in equation ( [ eqn : optimal gamma ] ) .therefore , if we want to compare all the adaptive schemes , we can not simulate very large networks .anyway , our intention is to show that our schemes work even for networks of reasonable size .as in section [ sec : simulation approximation ] , we used zipf popularity distributions with exponents and and a class model to evaluate the performance of the various schemes .the results are qualitatively identical under all these models , so we only show the results for zipf popularity with exponent .we compare the various schemes in terms of the replication achieved and its associated loss rates , as well as the speed at which the target replication is reached .we do not slow down the dynamics of the adaptive schemes even though this necessarily induces some oscillations in the replication obtained .nonetheless , this setup is already sufficient to demonstrate the performance of the adaptive schemes .it would be interesting to quantify the potential improvement if one reduces the oscillations of the replication obtained ( e.g. by quantifying the variance of the stationary distribution for the number of replicas for each content ) ; we leave this out for future work . also, we did not account for the load put on the data center to create new copies of the contents ; one can simply double the loss rates for the adaptive schemes to capture this effect .note that if adaptation speed is not a priority , one can trade it off to almost cancel this factor of .finally , concerning the virtual loss mechanism , we estimate all the quantities involved on the fly , as recommended in the previous section .{loss_rates_zipf.eps } & \includegraphics[width=1.55in , keepaspectratio]{log - log_replication_zipf.eps } \end{array}\ ] ] in figure [ fig : loss rates and replication ] , we show results for the various adaptive schemes . on the left part of the figure , we show the stationary loss rates of all the contents ; on the right part we show in log - log scale the stationary expectation of the numbers of replicas for each content .this plot shows firstly that all the adaptive schemes converge to the same replication and secondly that this replication equalizes the loss rates , as intended .in addition , the adaptive schemes perform even better than the optimized static replication , which suffers from finite network / finite storage effects , as they manage to find the right balance between popular and unpopular contents .we compare the adaptation speed of the various schemes on figure [ fig : time evolution ] , where we plot both the evolution of the average number of replicas of the most popular contents and that of the least popular ones , starting from proportional replication .as expected , the lrl schemes are faster than the random ones , but more importantly this plot clearly demonstrates the speed enhancement offered by the virtual loss method of section [ sec : virtual losses ] . 
regarding the benefits of such an enhanced reaction capability, there is an interesting property which we did not point out nor illustrate with the simulations : the virtual loss scheme has the potential to follow a constantly evolving popularity profile at no cost , as the required creations of replicas to adapt to the changing popularities can be done without requesting copies of the contents to the data center .we addressed the problem of content replication in edge - assisted cdns , with a special attention to capturing the most important constraints on server capabilities and matching policy .based on large system and large storage asymptotics , we derived an accurate approximation for the performance of any given replication , thereby allowing offline optimization of the replication .in addition , levaraging the insights gathered in the analysis , we designed adaptive schemes converging to the optimal replication .our basic adaptive algorithms react to losses , but we proposed further mechanisms to move away from losses and adapt even faster than they occur . , ``cisco visual networking index : forecast and methodology , 2012 - 2017 , '' may 2013 .[ online ] .available : http://www.cisco.com / en / us / solutions / collateral / ns341/ns525/ns537/ns705% /ns827/white_paper_c11 - 481360.pdf[http://www.cisco.com / en / us / solutions / collateral / ns341/ns525/ns537/ns705% /ns827/white_paper_c11 - 481360.pdf ] m. leconte , m. lelarge , and l. massouli , `` bipartite graph structures for efficient balancing of heterogeneous loads , '' _ acm sigmetrics performance evaluation review _40 , no . 1 ,pp . 4152 , 2012 .c. fricker , p. robert , j. roberts , and n. sbihi , `` impact of traffic mix on caching performance in a content - centric network , '' in _ ieee conference on computer communications workshops ( infocom wkshps ) _ , 2012 ,pp . 310315 .w. jiang , s. ioannidis , l. massouli , and f. picconi , `` orchestrating massively distributed cdns , '' in _ proceedings of the 8th international conference on emerging networking experiments and technologies_.1em plus 0.5em minus 0.4emacm , 2012 , pp .133144 . , `` max percentile replication for optimal performance in multi - regional p2p vod systems , '' in _ ninth international conference on quantitative evaluation of systems ( qest)_.1em plus 0.5em minus 0.4emieee , 2012 , pp .238248 .s. tewari and l. kleinrock , `` on fairness , optimal download performance and proportional replication in peer - to - peer networks , '' in _ networking 2005 .networking technologies , services , and protocols ; performance of computer and communication networks ; mobile and wireless communications systems_.1em plus 0.5em minus 0.4emspringer , 2005 , pp . 709717 .m. leconte , m. lelarge , and l. massouli , `` convergence of multivariate belief propagation , with applications to cuckoo hashing and load balancing , '' in _ proceedings of the twenty - fourth annual acm - siam symposium on discrete algorithms ( soda ) _, s. khanna , ed.1em plus 0.5em minus 0.4emsiam , 2013 , pp .
|
we address the problem of content replication in large distributed content delivery networks, composed of a data center assisted by many small servers with limited capabilities located at the edge of the network. the objective is to optimize the placement of contents on the servers so as to offload the data center as much as possible. we model the system constituted by the small servers as a loss network, each loss corresponding to a request that must be served by the data center. based on large-system and large-storage asymptotics, we obtain an asymptotic formula for the optimal replication of contents and propose adaptive schemes related to those encountered in cache networks, but reacting here to loss events, as well as faster algorithms that generate virtual events at a higher rate while keeping the same target replication. we show through simulations that our adaptive schemes significantly outperform standard replication strategies, both in terms of loss rates and in terms of adaptation speed.
|
one important function provided by social network is friend discovery .the problem of finding people of the same attribute/ interest/ community has long been studied in the context of social network .for example , profile - based friend discovery can recommend people who have similar attributes/ interests ; topology - based friend discovery can recommend people from the same community .one special requirement of algorithms operating on social network is that it must be privacy - preserving .for example , social network nodes may be willing to share their attributes/ interests with people having similar profile ; or they may be willing to share their raw connections with people in the same community .however , it is unfavourable to leak those private data to arbitrary strangers . towards this end ,the friend discovery routine should only expose minimal necessary information to involved parties . in the current model of large - scale osns ,service providers like facebook play a role of trusted - third - party ( ttp ) .the friend discovery is accomplished as follows : 1 ) every node ( user ) give his / her profile and friend list to ttp ; 2 ) ttp runs any sophisticated social network mining algorithm ( e.g. link prediction , community detection ) and returns the friend recommendations to only related users .the mining algorithm can be a complex one involving node - level attributes , netweork topology , or both .since ttp has all the data , the result can be very accurate .this model is commercially viable and successfully deployed in large - scale .however , recent arise of privacy concern motivates both researchers and developers to pursue other solutions .decentralized social network ( dsn ) like diaspora has recently been proposed and implemented .since it is very difficult to design , implement and deploy a dsn , much research attention was focused on system issues .we envision that the dsn movement will gradually grow with user s increasing awareness of privacy .in fact , diaspora , the largest dsn up - to - date , has already accumulated 1 million users . with the decentralized infrastructure established ,next question is : can we support accurate friend discovery under the constraint that each node only observes partial information of the whole social network ?note that the whole motivation of dsn is that single service provider can not be fully trusted , so the ttp approach can not be re - used . towards this end ,the computation procedure must be decentralized .one common approach in literature to achieve decentralized and privacy - preserving friend discovery is to transform it into a set matching problem .for the first type , it is natural to represent one s attributes/ interests/ social activities in form of a set . for the second type ,one straightforward way is to represent one s friend ( neighbour ) list in form of a set . in this way , both profile matching and common friend detection become a set intersection problem. there exists one useful crypto primitive called private set intersection ( psi ) .briefly and roughly speaking , given two sets and held by two node and , psi protocol can compute without letting either or know other party s raw input .resaerchers have proposed psi schemes based on commutative encryption , oblivious polynomial evaluation oblivious psudorandom function , index - hiding message encoding , hardware or generic construction using garbled circuit . 
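To make the psi primitive concrete, here is a minimal sketch of the commutative-encryption (Diffie-Hellman-style) variant mentioned above, in which one party learns the intersection of the two sets (the psi-set flavour) while, in a proper instantiation, nothing else is revealed beyond the set sizes. Everything about it is illustrative: the modulus, the hash-to-group step and the exponent sampling are toy choices and not a secure instantiation.

```python
import hashlib
import secrets

P = 2**127 - 1          # toy prime modulus; far too small for real security

def h2g(item: str) -> int:
    """Toy hash-to-group: hash the item and reduce modulo P (not a secure choice)."""
    return int(hashlib.sha256(item.encode()).hexdigest(), 16) % P

def dh_psi(set_a, set_b):
    """Return the elements of set_a that also appear in set_b (learned by party A)."""
    a = secrets.randbelow(P - 3) + 2      # A's secret exponent
    b = secrets.randbelow(P - 3) + 2      # B's secret exponent

    items_a = sorted(set_a)               # fixed order so A can map results back
    # round 1, A -> B: masked items H(x)^a
    masked_a = [pow(h2g(x), a, P) for x in items_a]
    # round 2, B -> A: the same values raised to b, i.e. H(x)^(a*b),
    # together with B's own masked items H(y)^b
    double_a = [pow(v, b, P) for v in masked_a]
    masked_b = [pow(h2g(y), b, P) for y in set_b]
    # A raises B's masked items to a: H(y)^(b*a) matches H(x)^(a*b) iff x == y
    double_b = {pow(v, a, P) for v in masked_b}
    return {x for x, d in zip(items_a, double_a) if d in double_b}

# usage: two nodes comparing friend lists
print(dh_psi({"alice", "bob", "carol"}, {"bob", "dave", "carol"}))  # {'bob', 'carol'}
```

Real constructions from the literature cited above add blinding, shuffling and formal security arguments on top of this basic idea, and other variants hide the intersection itself and reveal only its cardinality or a threshold predicate.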
the aforementioned privacy - preserving profile matching/ common friend detection protocols are variants of psi protocols in terms of output , adversary model , security requirement and efficiency .one major drawback of all the above works is that they can not fully utilize the topology of a social network .firstly , profile is just node - level information and not always available on every social network . on the contrary , topology ( connections/ friendship relations ) is the fundamental data available on social networks .secondly , common friend is just one topology - based approach and it only works for nodes within 2-hops .in fact , our previous investigation showed that common friend heuristic has a moderate precision and low recall for discovering community - based friendship .this result is unsurprising because a community can easily span multiple hops . towards this end , we focus on extending traditional secure friend discovery beyond 2-hops via community detection .note that topology - only community detection is a classical problem under centralized and non privacy - preserving setting , i.e. a single - party possesses the complete social graph and does arbitrary computation .although one can translate those algorithms into a privacy - preserving and decentralized protocol using generic garbled circuit construction , the computation and communication cost renders it impractical in the real world . to design an efficient scheme , we need to consider community detection accuracy and privacy preservation as a whole .a tradeoff among accuracy , privacy and efficiency can also be made when necessary .to summarize , this paper made the following contributions : * we proposed and formulated the first _ privacy - preserving _ and _ decentralized _ community detection problem , which largely improves the recall of topology - based friend discovery on decentralized social networks . * we designed the first protocol to solve this problem .the protocol transforms the community detection problem to a series of private set intersection ( psi ) instances via truncated random walk ( trw ) .preliminary results show that the protocol can uncover communities with overwhelming probability and preserve privacy .* we propose open problems and discuss future works , extensions and variations in the end .first type of related work is private set intersection ( psi ) as they are already widely used for secure friend discovery .second type of related work is topology - based graph mining .although our problem is termed `` community detection '' , the most closely related works are actually topology - based sybil defense .this is because previous community detection problems are mainly considered under the centralized scenario . on the contrary, sybil defense scheme sees wide application in p2p system , so one of the root concern is decentralized execution .note , there exist some distributed community detection works but they can not be directly used because nodes exchange too much information .for example allow nodes to exchange adjacency lists and intermediate community detection results , which directly breaks the privacy constraint that we will formulate in following sections .due to space limit , a detailed survey of related work is omitted .interested readers can see community detection surveys and sybil detection surveys .the notion of community is that intra - community is dense and inter - community linkage is sparse . 
in this section ,we first review classical community detection formulations under centralized scenario and our previous formulation under decentralized scenario .then we formulate the privacy - preserving version . to make the problem amenable to theoretical analysis, we consider a community - based random graph ( cbrg ) model in the last part .classical community detection is formulated as a clustering problem .that is , given the full graph , partition the vertex set into subsets ( a partitioning ) , such that and .a quality metric is defined over the partitions and a community detection algorithm will try to find a partitioning that maximize or minimize depending on its nature .this is for non - overlapping community detection and one can simply remove the constraint to get the overlapping version . note that is only an artificial surrogate to the axiomatic notion of community .the maximum does not necessarily corresponds to the best community .however , the community detection problem becomes tractable via well - studied optimization frameworks by assuming a form of e.g. modularity , conductance .most classical works are along this line mainly due to the lack of ground - truth data at early years .now consider the decentralized scenario .one node ( observer ) is limited to its local view of the whole graph .it is unreasonable to ask for a global partitioning in terms of sets of nodes .the tractable question to ask is : whether one node is in the same community as the observer or not ?this gives a binary classification formulation of community detection .the result of community detection with respect to a single observer can be represented as a length- vector .stacking all those vectors together , we can get a community encoding matrix : this matrix representation is subsumed by partitioning representation in general case . if restricted to non - overlapping case , the two representations are equivalent . since encodes all pair - wise outcome , it is immediately useful for friend discovery application . in what follows ,we will define accuracy and privacy in terms of how well can be learned by nodes or adversary .in this initial study , we focus on non collusive passive adversary .that is , dsn nodes all execute our protocol faithfully but they are curious to infer further information from observed protocol sequence .we use a single non - collusive sniff - only adversary to capture this notion .the system components are as follows :* graph : .the connection matrix is denoted as , where if ; otherwise , .the ground - truth community encoding matrix is denoted as , which is unknown to all parties at the beginning . for simplicity of discussion , we assume the nodes identifiers , i.e. , is public information .* nodes : .a node s initial knowledge is its own direct connections , i.e. .nodes are fully honest . their objective is to maximize the accuracy of detecting .eventually , a node can get full row ( column ) in denoted by ( ) .depending on the protocol choice , relevant cells in can be made available immediately or on - demand .* adversary : .it can passively sniff on one node . will observe all protocol sequence related with , including initial knowledge and the community detection result . s objective is to maximize successful rate in guessing and , using any probabilistic polynomial algorithms ( ppa ) .note , the full separation of nodes and adversary is for ease of discussion . 
in real dsn , this passive attacker can be a curious user who wants to infer more information of the network . as protocol designer ,our objectives are : * accurately detect community after execution of the protocol , i.e. making and as close as possible . * limit the successful rate of adversary s guessing of and , under the condition that gets the protocol sequence on node and makes best guess via ppa .one can see that our problem is multi - objective in nature .the accuracy part is a maximization problem and the privacy part is a is min - max problem .formal definition is given in eq .[ eq : model - pcd - multi - objective ] . in this formulation , `` protocol '' is an abstract notation of the protocol specification , not protocol execution sequence . is the information observed by adversary , which is dependent on protocol . is the measure of successful rate with symbols defined as follows : * are two matrix in the same size as and . * is the challenge relations . * to measure how close are the two matrix over the challenge set , we use the successful rate : that is , how likely a randomly selected pair of nodes from will have the same value in and .for the accuracy part , we define the challenge relation as because we want the result to be accurate for all nodes . for the privacy part, we define the challenge relation as , where denotes the set of nodes in the same community as .the reason to exclude nodes from the same community is obvious .since adversary will get after protocol execution , it already knows the community membership of . given the knowledge of community , one can make more intelligent guess of the connections .this is made clear in later discussions .before proceed , we remark that the problem defined in eq .[ eq : model - pcd - multi - objective ] is hard even without the privacy - preserving objective . in other words , the community detection problem ( accuracy )has not been fully solved even under the ttp scenario . to improve the accuracy ,researchers have already used heavy mathematical programming tools , try to incorporate more side information , develop problem - specific heuristics , or perform heavy - duty parameter tuning . to make our problem amenable to theoretical analysis, we consider a community - based random graph ( cbrg ) model in this paper .let be the ground - truth community encoding matrix .we generate the random connection matrix as follows : 1 ) if ( and are in the same community ) ; otherwise. there are communities and each of size , so the total number of vertices is .we denote such a random graph as .one example ground - truth community encoding matrix and the expected connection matrix are illustrated in fig .[ fig : cbrg - illustration ] . , \mathbbm{e } [ c ] = \left [ \begin{array}{llll } p & & & \\ p & p & & \\q & q & p & \\q & q & p & p \end{array } \right]\ ] ]in this section , we present our protocol and main results .our protocol involves the two stages : * pre - processing is done via truncated random walk .every node send out random walkers , , with time - to - live ( ttl ) values initially set to . upon receiving a random walker ( rw ) ,the node records the i d of , deducts its ttl , and sends it to a random neighbour if . at the end of this stage , each node accumulated a set of random walker ids . 
with proper parameters and , the truncated random walker issued by will more likely reach other nodes in the same community as .so by inspecting the intersection size of and , we can answer whether and are in the same community .this essentially transforms the community detection problem to a set intersection problem . * to uncover the relevant cells in pairwise community encoding matrix , we only need to perform privacy set intersection ( psi ) on two sets .psi schemes differ in their flavours : 1 ) reveal intersection set ( psi - set ) ; 2 ) reveal intersection size ( psi - cardinality ) ; 3 ) reveal whether intersection size is greater than a threshold ( psi - threshold ) .we use the 3rd type psi in our construction , which can be implemented by adapting . in what follows , we just assume existence of such a crypto primitive : it computes ] ) , this small advantage is averaged out over a large challenge relation set . the detailed proof is omitted and the main results are summarized in the following theorem .our protocol guarantees : * false positive rate : * false negative rate : ( ) * adversary s advantage : in the theorem , . denotes the probability to make successful guess based on mere prior information of . for example, suppose contains as majority , i.e. .the best guess is to let .one can show that the success probability is and this strategy is optimal if no other information is available . due to the specifics of our problem , adversary can make more intelligent guesses than random bit . towards this end ,the advantage is defined with respsect to successful rate of this priori - based strategy . due to the specifics of the problem, both accuracy and privacy guarantees are parameterized . to give an intuitive view of what can be achieved ,consider one instantiation of cbrg : ( # of communities ) , ( # of nodes in one community ) , ( intra - community edge generation probability ) , , ( inter - community edge generation probability ) .we can set protocol parameters as follows : ( # of rws issued by one node ) , ( length of rw ) and ( threshold of intersection size ) .this gives us following accuracy and privacy guarantees : * false negative rate : * false positive rate : * advantage for guessing : * advantage for guessing : one can see that our proposed protocol can accurately detect community and preserve privacy given proper parameters .note first that above and are casually selected by heuristics , which have not been jointly optimized .note second that the fpr and fnr can be exponentially reduced by repeated experiments , which only maps to a linear increase in .the example in this section is only to demonstrate the effectiveness of our protocol and a full exploration of design space is left for future work .we formulated the privacy - preserving community detection problem in this paper as a multi - objective optimization .we proposed a protocol based on truncated random walk ( trw ) and private set intersection ( psi ) .we have proven that our protocol detects community with overwhelming probability and preserves privacy .exploration of the design space and thorough experimentation on synthesized/ real graphs are left for future work . in following parts of this early report, we discuss several simpler candidate protocols and how they fail to meet our objective . 
this help to demonstrate the rationale of our formulation and protocol design .suppose we change the protocol such that and first exchange and and then run any intersection algorithm separately .after uncovering all related cells in , adversary knows . can directly calculate .this allows adversary to guess perfectly .from the community membership , can further infer links because intra - community edge generation probability and inter - community generation probability are different .this already allows better guess than using global prior of .furthermore , inferring links from measurements is a classical well - studied topic called network tomography . can actually re - organize s into a list of size- sets , each representing the nodes traversed by a rw .researchers have shown that links can be inferred from this co - occurrence data with good accuracy , e.g. nico .another natural thought to protect non - common set elements is via hashing .suppose there exists a cryptographic hash .we define .now , two nodes just compare and in the community uncover stage .this can protect true identities of the rws if their i d space is large enough .however , it does not prevent adversary from intelligent guess of and .methods noted in previous paragraph can also be used in this case . in our protocol , we used the psi - threshold version .that is , given and , the two parties know nothing except for the indicator $ ] .two weaker and widely studied variations are : psi - cardinality and psi - set .consider psi - set .the adversary now only knows elements in the intersection . based on his own and psi - set protocol sequence, he can get . can calculate the probability that a rw tranverses both and conditioned on tranverses . based on this information , can adjust threshold and to accurately detect communities .the derivation is similar to our protocol in this paper but more technically involved , which is also left as future work .the bottom line is that psi - set leaks enough information for more intelligent guesses . as for psi - cardinality , we are not sure at present what an adversary can do with .since the two variants leak more information and might be potentially exploited , we use psi - threshold in our protocol . *if we allow a small fraction of nodes to collude , how to define a reasonable security game ?what privacy - preserving result can we achieve ? * current scheme requires all nodes to re - run the protocol , if there is any change in the topology , e.g. new node joins or new friendship ( connection ) is formed .is it possible to find a privacy - preserving community detection scheme that can be incrementally updated ? *the privacy preservation of our proposed protocol is dependent on graph size .one root cause is that we only leveraged crypto primitives in the private set intersection ( psi ) part .the simulation of truncated random walk ( trw ) is done in a normal way .since random walk is a basic construct in many graph algorithms , it is of interest know how ( whether or not ) nodes can simulate random walk in a decentralized and privacy preserving fashion .
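As a reference point for these open problems, the following is a plaintext, non-private sketch of the two-stage pipeline analysed in this section: generate a community-based random graph, run truncated random walks, and decide co-membership by thresholding the intersection of recorded walker ids. In the actual protocol the final comparison would be a psi-threshold instance so that neither node sees the other's walker ids; here the sets are compared in the clear purely to illustrate the accuracy side, and the parameter values (number of walkers, ttl, threshold, graph sizes) are illustrative rather than the optimized choices discussed above.

```python
import random
from collections import defaultdict

def cbrg(n_communities, community_size, p_in, q_out, seed=0):
    """Community-based random graph: edge probability p_in inside a community,
    q_out across communities."""
    rng = random.Random(seed)
    n = n_communities * community_size
    comm = [v // community_size for v in range(n)]
    adj = defaultdict(set)
    for u in range(n):
        for v in range(u + 1, n):
            p = p_in if comm[u] == comm[v] else q_out
            if rng.random() < p:
                adj[u].add(v); adj[v].add(u)
    return adj, comm

def truncated_random_walks(adj, k, ttl, seed=1):
    """Each node issues k walkers with time-to-live ttl; every node that receives
    a walker records its id. Returns node -> set of recorded walker ids."""
    rng = random.Random(seed)
    seen = defaultdict(set)
    for u in list(adj):
        for j in range(k):
            wid = (u, j)                   # walker id
            cur, left = u, ttl
            while left > 0 and adj[cur]:
                cur = rng.choice(sorted(adj[cur]))
                seen[cur].add(wid)
                left -= 1
    return seen

def same_community(seen, u, v, threshold):
    return len(seen[u] & seen[v]) >= threshold

if __name__ == "__main__":
    adj, comm = cbrg(n_communities=4, community_size=50, p_in=0.3, q_out=0.005)
    seen = truncated_random_walks(adj, k=20, ttl=5)
    u, v, w = 0, 1, 60                     # u and v share a community; w does not
    t = 5                                  # illustrative threshold
    print("shared walker ids (same community):", len(seen[u] & seen[v]))
    print("shared walker ids (different community):", len(seen[u] & seen[w]))
    print("verdicts:", same_community(seen, u, v, t), same_community(seen, u, w, t))
```

With parameters in this ballpark, intra-community pairs typically share many more walker ids than inter-community pairs, which is exactly the gap the threshold exploits.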
|
the problem of secure friend discovery on a social network has long been proposed and studied. the requirement is that a pair of nodes can make befriending decisions while exposing a minimum of information to the other party. in this paper, we propose to use community detection to tackle the problem of secure friend discovery. we formulate the first privacy-preserving and decentralized community detection problem as a multi-objective optimization. we design the first protocol to solve this problem, which transforms community detection into a series of private set intersection (psi) instances via truncated random walks (trw). preliminary theoretical results show that our protocol can uncover communities with overwhelming probability while preserving privacy. we also discuss future work, potential extensions and variations.
|
the equation of state is one of the most important ingredients in solar and stellar modeling .assessing the quality of the equation of state is not easy , though . because of the relevant high temperatures and densities , there are no sufficiently accurate laboratory measurements of thermodynamic properties that could assist the solar modeler .however , the observational quality of helioseismology has become so high that high - precision thermodynamic quantities must be part of state - of - the - art solar models .so , strictly speaking , at the moment only theoretical studies of the equation of state can be pursued .but this is a too pessimistic view .helioseismology puts significant constraints on the thermodynamic quantities and has already delivered powerful tools to test the validity and accuracy of theoretical models of the thermodynamics of hot and dense plasmas .there are two important reasons why the thermodynamics can be tested with solar data .first , the accurate solar oscillation frequencies obtained from space and ground - based networks observation have led to a much better understanding of the solar interior with a very high level of accuracy .second , thanks to the existence of the solar convection zone , the effect of the equation of state can be disentangled from the other two important input - physics ingredients of solar models , opacity and nuclear reaction rates .this simplification is due to the fact that in large parts of the convection zone , the convective motion is very close to adiabatic , which leads to a stratification that is equally very close to adiabatic .thus , except in the small superadiabatic zone close to the surface , the uncertainty arising from our ignorance of the details of the convective motion does not matter . as a consequence , inversions for the thermodynamic quantities of the deeper layers of the convection zone promise to be sensitive to the small non - ideal effects employed in different treatments of the equation of state . before the 1980s, simple models of the equation of state were used in solar modeling .their physics was based on the one hand on ionization processes modeled by the saha equation and on the other hand on electron degeneracy .fairly good results had been obtained in this way , for instance , by the equation of state of .however , it turned out soon that the observational progress of helioseismology had reached such levels of accuracy that further small non - ideal terms in the equation of state became observable .inclusion of a ( negative ) coulomb pressure correction became imperative . with this coulomb correction being the main non - ideal term , the subsequent upgrade of the eff formalism to include the coulomb term became quite successful ; its realization is the so - called ceff equation of state .since then , other non - ideal effects , such as pressure ionization and detailed internal partition functions of bound states were also considered .examples of such more advanced equation of state are the mhd equation of state and the opal equation of state , as well as the sireff and its sibling , the eos-1 equation of state .these efforts in basic physics have paid off well , because a significantly better agreement between theoretical models and observational data has been achieved when using such improved equations of state . despite this progress , the theoretical models are not yet sufficient ( see section 3.4 ) . 
andit turns out that complicated many - body calculations are necessary even if the solar plasma is only slightly non - ideal . in recent efforts for a better equation of state , realistic microfield distributions and relativistic electrons have been introduced in solar models , but even these most refined solar models have discrepancies with respect to the observed solar structure that are much larger than the observational errors themselves .therefore , further investigations are still necessary .we note in passing that while the aforementioned non - ideal theories have all at least some theoretical footing , sometimes pure _ ad - hoc _ formalisms can mimic reality even better ; an example is the recent pressure - ionization parameterization of .the thermodynamic quantities of solar plasma do not only depend on temperature and density , but also on the characteristic properties of all atoms , ions , molecules , nuclei and electrons .a major complication is that the chemical composition changes through the lifetime of the sun .abundance changes are a consequence of the several major mixing processes that have been considered in solar models .another change is caused by the nuclear reactions in the solar center , which convert four hydrogen atoms into one helium atom , and produce , as by - products , several other species , such as , carbon , nitrogen and oxygen , during the p - p and cno chain reactions . and finally , even in the solar convective zone the composition can change in time .although at any given moment , all elements should be homogeneously distributed , due to the thorough stirring of the convective motion on a very short time scale ( on the order of a month and less ) , over longer time scales , the chemical composition can change nonetheless .the major effect is the so - called gravitational settling , which involves the depletion of heavier species in the stable layers immediately below the convection zone .this depletion is then propagated upward into the convection zone by convective overshoot that dips into the depleted regions . since the 1990s , helioseismology has been successfully put constraints on solar models with various kinds of mixing , but it is clear that a new , thermodynamics - based determination of the local abundance of heavy elements inside the sun would deliver a powerful additional constraint . in a first part ,this paper consists of a systematic study of the contribution of various heavy elements in a representative set of thermodynamic quantities .intuitively , one might think that unlike in the opacity , in the equation of state , the influence of details in the heavy elements is not as critical , because the leading ideal - gas effect of the heavy elements is proportional to their total particle number .their influence should therefore be severely limited , since all heavy elements together only contribute to about 2% in mass . however , as shown in and in section 3.4 , helioseismology has become so accurate that even details of the contribution of heavy elements beyond the leading ideal - gas term are in principle observable . 
in a further part of this paper, we show that one the one hand , there are element - dependent , but physics - independent , features of the heavy elements .this result has promising diagnostic possibilities for the helioseismic heavy - element abundance determination .on the other hand , we have also found features in the same thermodynamical quantities which do depend on the detailed physical formalism for the individual particles .such features will lend to a diagnosis of the physical foundation of the equation of state .there are two basic approaches : the so - called _ chemical _ and _ physical _ pictures . in the chemical picture, one assumes that the notion of atoms and ions still makes sense , that is , ionization and recombination is treated like a chemical reaction .one of the more recent realizations of an equation of state in the physical picture is the mhd equation of state .it is based on the minimization of a model free energy .the free energy models the modifications of atomic states by the surrounding plasma in a heuristic and intuitive way , using occupation probabilities .the resulting internal partition functions of species in mhd are \ , \ ] ] here , label states of species . are their energies , and the coefficients are the occupation probabilities that take into account charged and neutral surrounding particles . in physical terms , gives the fraction of all particles of species that can exist in state with an electron bound to the atom or ion , and gives the fraction of those that are so heavily perturbed by nearby neighbors that their states are effectively destroyed .perturbations by neutral particles are based on an excluded - volume treatment and perturbations by charges are calculated from a fit to a quantum - mechanical stark - ionization theory .hummer & mihalas s ( 1988 ) choice had been ^ 3 \sum_{\alpha \neq e}n_\alpha z_\alpha ^{3/2}\right\ } \ , \ ] ] here , the index runs over neutral particles , the index runs over charged ions ( except electrons ) , is the radius assigned to a particle in state of species , is the ( positive ) binding energy of such a particle , is a quantum - mechanical correction , and is the net charge of a particle of species . note that for large principal quantum numbers ( of state ) , and hence provides a density - dependent cut - off for .the physical picture provides a systematic method to include nonideal effects .an example is the opal equation of state , which starts out from the grand canonical ensemble of a system of the basic constituents ( electrons and nuclei ) , interacting through the coulomb potential .configurations corresponding to bound combinations of electrons and nuclei , such as ions , atoms , and molecules , arise in this ensemble naturally as terms in cluster expansions .effects of the plasma environment on the internal states , such as pressure ionization , are obtained directly from statistical - mechanical analysis , rather than by assertion as in the chemical picture .although the stellar plasma we deal with is assumed to be electrically neutral in a large volume , it is a mixture of charged ions and electrons inside .the coulomb force between these charged particles is long range . 
by doing the first order approximation , the so - called potential results .it is the approximation of the static - screen coulomb potential ( sscp ) , which describes the interaction between charged particles as where is the net charge of the particle , is electron charge , is the distance to the center of the target particle , and is the debye length . the corresponding free - energy , which is sometimes called the debye - hckel free energy , can then be obtained .it has become widely used . furthermore , in order to eliminate the short - range divergence in the debye - hckel potential , a cutoff function =3x^{-3}\,\int_0^x\frac{y^2}{1+y } \rm{d } y \, \label{fo : tau}\ ] ] was introduced , where , with being the distance of closest approach ( the minimum distance that the center of two particles can reach ) .this factor is essential to get rid of the possibility of a negative total pressure when density of the plasma is becoming high .however , it has become known for some time that this factor ( eq . [ fo : tau ] ) can produce some unforeseen and unjustified effects on some thermodynamic quantities in solar equation of state , which we will discuss in more detail in sect .4 . in order to clearly disentangle the effects of the heavy elements and those of the factor , in this paper we are using the mhd and ceff equations of state with a debye - hckel theory _ but without a correction _ unless explicitly stated . in helioseismic inversions for equation - of - state effects ,the resulting natural second - order thermodynamic quantity is the adiabatic gradient , which is itself closely related to adiabatic sound speed , which can also be the result of inversions .such inversions are called `` primary '' . however , in the diagnosis of physical effects , other second - order thermodynamic quantities can in principle be more revealing than the adiabatic gradient . unfortunately , the helioseismic inversion for these other thermodynamic quantities is less direct , because additional physical assumptions must be made .such inversions are therefore called `` secondary '' . in view of future primary and secondary inversions , we study here not only the adiabatic gradient , but more systematically the complete set of the following three second - order thermodynamic quantities where is pressure , is density , is temperature , and is specific entropy .all other second - order quantities can be derived as functions of these three quantities .it is well known that the heavy - element abundance of stars is crucial for opacity , where a certain single heavy element often leads to the dominant contribution .however , one would expects a much less dramatic effect in the equation of state .quantitatively , in the sun , the abundance of heavy - elements is small ( about 2 percent in mass ) and their influence in the equation of state is roughly proportional to their number abundance , which is about an order of magnitude smaller .nevertheless , helioseismology has already revealed heavy - element effects . 
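Before turning to the heavy elements, a small computational aside may help fix ideas about the quantities just introduced. The sketch below evaluates the Debye length and the cutoff factor of equation ([fo: tau]) in its closed form, 3(x^2/2 - x + ln(1+x))/x^3, which tends to 1 (no correction) for small x. The Gaussian-cgs prefactor of the Debye length, the classical treatment of all species in the charge sum, and the illustrative temperature and density values are conventions assumed for this sketch; implementations such as mhd or ceff may differ in these details.

```python
import math

# cgs constants
K_B = 1.380649e-16        # erg / K
E_CHARGE = 4.8032e-10     # statcoulomb
M_U = 1.66053906660e-24   # g

def debye_length(T, number_densities, charges):
    """Debye length in cm (Gaussian-cgs): lambda_D = sqrt(kT / (4 pi e^2 sum n_s Z_s^2)).
    How electrons and partial degeneracy enter the sum differs between
    equation-of-state implementations; here all species are summed classically."""
    s = sum(n * z * z for n, z in zip(number_densities, charges))
    return math.sqrt(K_B * T / (4.0 * math.pi * E_CHARGE**2 * s))

def tau(x):
    """Cutoff factor tau(x) = 3 x^-3 * integral_0^x y^2/(1+y) dy
    = 3 (x^2/2 - x + ln(1+x)) / x^3; tau -> 1 as x -> 0 (no correction)."""
    if x < 1e-6:
        return 1.0 - 0.75 * x             # small-x series expansion
    return 3.0 * (0.5 * x * x - x + math.log1p(x)) / x**3

# example: a fully ionized H-He plasma at conditions roughly like the deeper
# convection zone (illustrative values only)
T, rho = 2.0e6, 0.2                       # K, g/cm^3
X, Y = 0.70, 0.30
n_H = X * rho / M_U
n_He = Y * rho / (4.0 * M_U)
n_e = n_H + 2.0 * n_He
lam = debye_length(T, [n_e, n_H, n_He], [1, 1, 2])
print(f"lambda_D ~ {lam:.3e} cm, tau(0.3) = {tau(0.3):.4f}")
```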
in this paper, we study the effect of the heavy elements with the help of the mhd equation of state .we choose the mhd equation of state , because it contains relatively detailed physics , has been widely used in solar modeling ( thus giving a benchmark for our analysis ) , and most importantly , because among all the non - trivial equations of state it is the only one with which a systematic study of the influence of individual elements and their detailed physical treatment can be carried out .the other non - trivial equations of state are only available in pre - computed tabular form , with fixed heavy - element abundance .the physical conditions for which the different equations of state have been calculated are from a solar model [ model s of ] , restricted to the convection zone , that is , the range relevant for a helioseismological equation - of - state diagnosis . for our conceptual and qualitative study we do not need the most up - to - date values for the chemical composition .thus , for simplicity , we have chosen a typical chemical composition of mass fractions , and .since we are also comparing our result with the opal equation of state [ which is so far best for solar models ] , for the sake of consistency , we have chosen a distribution of heavy elements in exactly the same way as in the opal tables . for convenience, we here list this choice in table [ tb1 ] . in figs .[ fig1 ] and [ fig2 ] we show pressure , , and in a solar model with the 6-element mixture of table [ tb1 ] for several popularly used equations of state .both absolute values , and for finer details , relative differences with respect to the mhd equation of state are displayed .the equations of state are : + mhd - standard mhd with the usual occupation probabilities .+ mhd - standard mhd but with internal partition functions of heavy element truncated to the ground state term ( however , the internal partition function of h and he are not truncated , which is different from [ nd98 hereafter ] and the h - he mixture models of section 4 ) .+ opal - interpolated from the opal tables [ except for the h - he - c mixture models , which were directly calculated by one of us ( an ) ] . + ceff - + sireff - + as found by nd98 , for a pure hydrogen and a hydrogen - helium mixture under solar conditions , among the possible thermodynamic quantities ( that derive from the second derivatives of the free energy ) , it is the quantity [ fig .[ fig1 ] ( b ) ] which reveals most physical effects , and this already in the absolute values .of course , other thermodynamic quantities contain signatures of the same physical effects as well , but often they would show up only in the _ de - facto _ amplification of relative differences .the sensitivity of is mainly due to the fact that in ionization zones it varies considerably less than the other thermodynamic quantities .finer effects are therefore overlaid on a smaller global variation and show therefore up already in the absolute plots . specifically , the main result of nd98 was a `` wiggle '' in , resulting from the density dependent occupation probabilities for the excited states of hydrogen in mhd .our study deals with the combined effect of several different heavy elements and , importantly , several different physical mechanisms that describe the interaction between the ground and excited states of their atoms and ions with the surrounding plasma . 
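The element-by-element comparisons that follow all rest on the fact that each species ionizes at its own characteristic temperature. As a rough, equation-of-state-independent illustration of that principle only, the sketch below solves a two-stage (neutral/singly ionized) Saha equation with order-unity partition functions and a fixed, externally supplied electron density; it uses first ionization potentials and temperatures far below the convection-zone conditions relevant for the features discussed later, and contains none of the non-ideal or excited-state physics that distinguishes mhd, opal, ceff and sireff.

```python
import math

K_B_EV = 8.617333262e-5    # Boltzmann constant, eV / K
SAHA_CONST = 4.83e15       # 2 (2 pi m_e k_B / h^2)^(3/2), cm^-3 K^-3/2 (approx.)

def ion_fraction(T, n_e, chi_ev, u_ratio=1.0):
    """Fraction of an element in the singly ionized stage, for a fixed electron
    density n_e [cm^-3] treated here as an external input (a real solver would
    determine n_e self-consistently)."""
    rhs = u_ratio * SAHA_CONST * T**1.5 * math.exp(-chi_ev / (K_B_EV * T)) / n_e
    return rhs / (1.0 + rhs)

# first ionization potentials (eV)
elements = {"C": 11.26, "N": 14.53, "O": 13.62, "Ne": 21.56}
n_e = 1.0e17               # illustrative electron density
for T in (6e3, 1e4, 2e4, 5e4):
    row = ", ".join(f"{el}: {ion_fraction(T, n_e, chi):.2f}"
                    for el, chi in elements.items())
    print(f"T = {T:8.0f} K -> {row}")
```

Even in this crude form, the output shows carbon, oxygen, nitrogen and neon reaching a given ionization fraction at noticeably different temperatures, which is the element-specific signature exploited below.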
in order to disentangle effects from individual elements ,we have studied the heavy - element effects separately for each element . before we interpret the heavy - element features in fig .[ fig1 ] , first we should like to mention an effect not related to heavy elements , which nonetheless demonstrates the power of our analysis .[ fig1 ] ( d ) shows , at very low temperature ( ) , a feature .for instance , in a dip appears in most equations of state except in ceff .it is obviously the signature of h molecular dissociation , a process not included in the ceff equation of state .next , we have compared the contribution of heavy elements in the various equations of state ( fig .[ fig3 ] ) .for each particular equation of state , we have computed the relative difference between the regular 6-element mixture and a hydrogen - helium mixture ( mass fractions being , ) .this difference is much smaller than the one given in fig .[ fig2 ] , suggesting that the biggest difference among different models of equation of state is related to the treatment of hydrogen and helium .however , as shown below , this does not mean that for second - order thermodynamic quantities in solar models of helioseismic precision the contribution of heavy elements would be negligible ( see section 3.4 ) . in order to reveal the contribution of each individual heavy element, we have calculated solar models with particular mixtures . in each case ,hydrogen and helium abundances were fixed with mass fractions and ; the remaining 2% heavy - element contribution was topped off by only one element , carbon , nitrogen , oxygen and neon , respectively .we have then compared these models with the hydrogen - helium mixture model in fig .[ fig4 ] ( this mixture is obtained by filling the two percent reserved for heavy elements with additional helium ) .the expectation is that the biggest deviations between these special models , and the one with the complete heavy - element mixture , will result ( i ) from the change in the total number of particles per unit volume and ( ii ) from the different ionization potentials of the respective elements .because the solar plasma is only slightly non - ideal , the leading pressure term is given by the ideal - gas equation with standing for pressure , the number of particles , and the boltzmann constant .because of their higher mass , the number of heavy - element atoms is obviously smaller than that of helium atoms representing the same mass fraction .however , against this reduction in total number there is an offset due to the ionization of their larger number of electrons which becomes stronger at higher temperatures .the net change of the total number of particles is therefore a combination of these two effects . in fig .[ fig5 ] we show the difference between the total number of particles in these models , and we see that indeed it explains the difference in total pressure well . since the second - order thermodynamic quantities ( eq . 
5 - 7 ) [ see fig .[ fig4 ] ( b - d ) ] are independent of the total number of particles , they can reveal signatures of the internal structure of each element more directly .therefore , at low temperatures ( ) , the difference with respect to a hydrogen - helium mixture plasma is similar for all the heavy elements , because the difference in the second - order quantities mainly comes from the fact that the 2% helium in the h - he mixture are already partly ionized , while the replacing heavier elements are still mainly neutral .however , as temperature rises , a signature of the ionization of the individual heavy element appears .this selective modulation of second - order thermodynamic quantities , as well as another property discussed further below , allow us in principle to identify single heavy elements in the solar convection zone , in analogy to optical spectroscopy .our next comparison has the purpose to disentangle even further the contribution of the mere presence of each heavy element from the more subtle influence of different physical formalisms . in fig .[ fig6 ] the h - he - c models ( with mass fractions 0.70:0.28:0.02 ) are compared with the h - he models ( 0.70:0.30 ) for our set of different equations of state .this procedure eliminates most of the difference due to the treatment of hydrogen and helium in each individual equation of state and therefore isolates the behavior of carbon in each of our equations of state .we note that here , roughness appears in the opal equation of state , because the sensitivity of our analysis has almost reached the reasonable limit of accuracy that can be obtained by interpolation in the opal tables . from fig .[ fig6 ] ( a ) , we can see that among the thermodynamic quantities , reveals the biggest differences between the equations of state in the temperature range of ( named region `` a '' in the figure ) .common features can be identified when and .such features are also visible in the graph between h - he - c and h - he shown in fig .[ fig6 ] ( b ) , and between h - he - c and the regular 6-element mixture shown in fig .[ fig7 ] . in the region`` a '' of fig .[ fig7 ] the mhd and opal results together differ significantly from the other three formalisms ; they appear closer to the reference 6-element mixture model than to the other models .one obvious reason is that neither mhd nor ceff nor sireff include excited states of heavy elements ( in this figure carbon ) , while both mhd and opal do , although opal does it quite differently .our figure would then suggest that the difference between mhd and opal on the one hand , and the other three formalisms on the other hand , appears most likely due to the contribution of the excited states of carbon .in addition , it also appears that that it matters less that excited states are treated differently than that they are not neglected .it will be a challenge for helioseismology to distinguish between such small differences .the bump in in the region `` b '' of either fig .[ fig6 ] ( b ) or fig .[ fig7 ] is totally independent of the equation of state used . 
by comparing with fig .[ fig4 ] ( d ) , it follows that the feature is likely due to the ionization of carbon at that temperature in general , quite independent of details in the equation of state .the strength and robustness of this profile and its relative independence on the equation of state qualify it ideally for a helioseismic heavy - element abundance determination .this is true because the profile does not only appear for carbon , but also for all other heavy elements which exhibit a similar feature independent of the equation of state .as an example , the analogous phenomenon for nitrogen at its higher ionization temperature is revealed by the h - he - n model in fig .it is clear from fig .[ fig4 ] ( d ) that each heavy element has its own profile , and in analogy to the helioseismic helium - abundance determination , these profiles promise to be used in future inversions of solar oscillation frequencies to determine the abundance of the heavy - elements .another important issue in the study of heavy - element effects in the solar equation of state is how many elements should be considered , that is , what is the error if not enough elements are included .we address this question by adding species one by one until we reach the 15 element mix considered in the original mhd equation of state to see if there is subset that is adequate for helioseismic accuracy .to be more specific , we have again set the total abundance of all heavy elements to be fixed at , but this time we follow the abundance , which slightly differs from the simplified values of table [ tb1 ] .we label the mixtures by the number of species including h and he ( thus a 3-element mixture means h - he - c ; a 4-element h - he- c - n , _ etc . _ ) . to reconcile the absolute mass fractions of the mixture with our specification of a fixed mass fraction ,some form of topping off is necessary . 
in the case of the first 6 elements , we have chosen to readjust the last element to top off to , with the other heavy elements being set to the abundance .the same convention holds for the 7th element , which is iron .however , from this case onward to the full 15-element mixture , we have always used iron to top off to , with the other heavy elements always having their abundance .the results of this systematic procedure to add heavy elements one by one are shown in fig .we distinguish two cases .first , regarding pressure , the larger the number of heavy elements considered is , the closer pressure approaches that of the full mixture .this is easy to understand from the role of the effect of individual heavy elements discussed in sec .second , and more interesting , regarding the second - order thermodynamic quantities , the larger the number of heavy elements considered is , the smoother the curves overall become .the reason is that when more species are included the less weight each individual species obtains and thus its contribution with its specific features becomes reduced .in addition , different species have their own profile which sometimes leads to partial cancellations .overall , then , the effect of a mixture with a larger number of heavy elements is manifested by the relatively flat profiles of fig .[ fig9 ] ( b - d ) .third , regarding the minimum number of heavy elements necessary for accurate helioseismic studies , we conclude that the inclusion of new species of heavy elements becomes less and less critical if at least ten of the most abundant species are included .inclusion of more elements will lead to such small differences in the equation of state that they appear to be undetectable by current helioseismological studies ( see the following section ) .fourth , however , the currently popular 6-element mixture used in opal is still inadequate in as far as it leads to a deviation with respect to the full mixture , attaining up to in at the base on solar convection zone . andsince the opal data are so far only available in tabular form , the inevitable interpolation error , which is typically of the same order , only aggravates this situation .the observational resolution of helioseismic inversions is demonstrated in fig .[ fig10 ] .it shows the result of an inversion for the intrinsic difference between the sun and a solar model .the intrinsic difference is the part of the difference due to the difference in the equation of state itself [ and not the additional part implied by the change in solar model due to that of the equation of state , which induces a further difference ] .the main result of fig .[ fig10 ] is that present - day helioseismology has reached an accuracy of almost for .the effects discussed in this paper are therefore within reach of observational diagnosis .more specifically , studies such as the one shown in fig .[ fig10 ] reveal the influence of the equation of state by an analysis of the difference between the solar values obtained from inversions and the ones computed in reference models ( standard solar models ) . in the case of fig .[ fig10 ] , two sets of reference models are compared to the solar data , one set based on the mhd equation of state , the other set on opal .because of the differential nature of these inversions , they become more reliable when the reference model is close to the real solar structure . in the last years, the solar models were significantly improved , thanks to the constraints of helioseismology . 
in particular ,diffusion has now become part of the standard solar model and it was included in the calibrated reference models of ( models m1m8 of table [ tb2 ] ) .their composition profiles ( in particular the surface helium abundance ) were those obtained from helioseismological inversions by .all models had . in the inversions , tested the robustness of the inferred results against uncertainties in the solar - model inputs using a number of different solar models ( see their paper for details of the tests made ) .all of the models m1m8 had either the mhd or the opal equation of state , and they were all using opal opacities , supplemented by the low temperature opacities of . since one of the most uncertain aspects of solar modeling is always the formulation of the convective flux , two different formalisms were used standard mixing length theory ( mlt ) and the formalism ( cm ) .the two formalisms give fairly different stratifications in the outer regions of the sun .[ fig10 ] clearly shows that helioseismology has the potential to address the small effects from heavy elements such as discussed in fig .[ fig7 ] and fig .[ fig8 ] . while fig .[ fig10 ] is based on numerical inversions , asymptotic inversion techniques can shed light from a different angle .in particular , they are well suited for heavy - element abundance determinations , as demonstrated by fig .[ fig11 ] from .[ fig11 ] shows the result of an _inversion introduced by for the helioseismic helium abundance determination .the inversion is for the so - called quantity , which is a function of solar structure here , is the local gravitational acceleration at position in the sun .the quantity is useful because of its equality with a purely thermodynamic quantity if the stratification is assumed to be perfectly adiabatic ( otherwise , the equality is violated by the amount of the non - adiabaticity ) . for adiabatic stratification , the following relation holds with the derivatives while the reader is referred to for more detail , here we merely mention that similarly to fig .[ fig10 ] , fig .[ fig11 ] shows in the sun and in two artificial solar models , but in contrast to fig .[ fig10 ] , the inversions of fig .[ fig11 ] are absolute and not relative to a reference model .the comparison models of fig .[ fig11 ] are based on the h - he - c composition of section 3.2 ( where pure carbon is representing the total heavy - element abundance ) , and the analogous h - he - o composition , respectively .it is no surprise that since the quantity involves derivatives of , it is even more sensitive to equation - of - state effects than itself . that this is indeed the caseis clearly reflected in fig .[ fig11 ] , where the two models , which are , after all , very close to each other , lead to values of which differ by an amount far larger than the accuracy of the inversion ( indicated by error bars ) . since the models of fig .[ fig11 ] exhibit exactly the same variation of the heavy - element composition as the models of the present paper , it is clear that fig . 
[ fig11 ] has convincingly demonstrated that the effects found and studied here are already well within the reach of present - day observational accuracy .as we have mentioned in section 1 , the effect of the treatment of excited states of atomic and ionic species can show up under certain circumstances .nd98 pointed out that the existence of the wiggle in the diagram in the mhd equation of state [ see fig .[ fig1 ] ( b ) ] is a genuine effect of neutral hydrogen even if it occurs in a region where most of hydrogen is already ionized .the wiggle is caused by the specific form of the density - dependent occupation probabilities of excited states in mhd .however , nd98 , did not examine the influence of the ground state .thus , to see if the excited states do , or do not , behave in the same way as the ground states , we have carried out a numerical experiment in which we have switched on and off the mhd - type occupation probabilities [ see eq .( [ fo : w ] ) for definition ] of _ all ground states of all species_. to switch off the mhd - type occupation probability of a ground state , we have simply set , where stands for a given species ( atom or ion of an element ) , with all of the other remaining the same as usual . obviously , this is a purely academic exercise , because such a choice of occupation probabilities is physically inconsistent .more specifically , by robbing the ground - state occupation probability of the possibility to become less than , one disables the capacity of the formalism to model pressure ionization .however , here we merely intend to see what kind of role the occupation probability plays on the ground states and on the excited states .a motivation for such a question is given by the yet unexplained fact that the mhd equation of state is not as good as the opal equation of state for temperatures between and inside the sun ( see fig .[ fig10 ] ) . for this testwe have used a pure hydrogen - helium mixture . for consistency with the work by nd98, we have taken their run of temperature and density , which corresponds roughly to the solar convection zone ( here we refer to these conditions as `` solar track '' , to distinguish it from the more precise concept of a solar model used in other figures ) .our results are shown in fig .[ fig12 ] .besides the previously defined labels opal , ceff , and mhd , here the label mhd for the standard mhd internal partition function of hydrogen and helium truncated to the ground state term ( see also nd98 ) .mhd and mhd are the same as mhd and mhd , respectively , except that in them the occupation probabilities of all the ground states are set to be .the label mhd refers to a truncation to the ground state term of the helium internal partition function only .similarly , mhd refers to ground - states occupation probabilities in mhd that are set to .we again stress that the absence of the possibility to model pressure ionization makes the mhd model quite unphysical .indeed , it is found to deviate significantly from all other models at sufficiently high densities , where pressure ionization matter . in fig .[ fig13 ] , we see that the nd98-wiggle between and shows up more distinctly when the occupation probability of the ground state is set to .this clearly confirms the conclusion by nd98 that the wiggle must be a pure excited - states effect .an occupation probability of the ground states different from actually happens to reduce the wiggle . 
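To make the switch concrete, the sketch below shows the kind of internal-partition-function evaluation that this experiment toggles: each level enters with a statistical weight, a Boltzmann factor and an occupation probability, and a flag forces the ground-state occupation probability to 1 while leaving the excited states untouched. The occupation-probability law itself is left as a placeholder (a simple decaying function, not the Hummer-Mihalas expression), and the hydrogen-like level list is only meant to exercise the code.

```python
import math

K_B_EV = 8.617333262e-5    # eV / K

def partition_function(levels, T, w, ground_w_one=False):
    """levels: list of (g_i, E_i_in_eV) with the ground state first.
    w(index, level) returns the occupation probability of that level.
    If ground_w_one is True, the ground state gets w = 1 regardless of w()."""
    z = 0.0
    for i, (g, e) in enumerate(levels):
        wi = 1.0 if (ground_w_one and i == 0) else w(i, (g, e))
        z += wi * g * math.exp(-e / (K_B_EV * T))
    return z

# toy level list for a hydrogen-like species: g = 2 n^2, E_n = 13.6 (1 - 1/n^2) eV
levels = [(2 * n * n, 13.6 * (1.0 - 1.0 / (n * n))) for n in range(1, 11)]

# placeholder occupation probabilities that merely decay with excitation;
# NOT the Hummer-Mihalas expression
def w_toy(i, lvl):
    return 0.9 * math.exp(-0.3 * i)

T = 2.0e4
print(partition_function(levels, T, w_toy),
      partition_function(levels, T, w_toy, ground_w_one=True))
```

Running it with and without the flag changes only the ground-state term of the sum, which is precisely the difference probed by the mhd-variant models discussed above.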
from fig .[ fig12 ] ( a - c ) we can see that mhd model is very close to mhd model , and mhd is very close to mhd . the presence of helium does not change very much , which is another confirmation of the conclusion by nd98 that the wiggle is an excited - states effect of pure hydrogen .furthermore , it can be seen from fig .[ fig12 ] ( a - c ) that the effect of the ground - state occupation probability shows up at temperatures , and it is most significant for .although none of our results appears to fit the opal equation of state closely ( not surprising with our academic exercise ) , nonetheless it follows that setting ground - state occupation probabilities to does bring the results somewhat closer to opal .it could well be that in the mhd occupation probability formalism with of [ see eq.([fo : w ] ) ] , the ground states might be too strongly perturbed .another interesting feature is shown in fig .[ fig12 ] ( d ) in the temperature range to . by comparison with fig .[ fig10 ] , we see that the difference in between mhd and mhd suspiciously mimics the difference between mhd and opal in that region . now , this is precisely the region where helioseismic inversions [ see , _ e.g. _ ] have given evidence that the mhd equation of state is not as good as opal .in both cases , the intersection of both mhd versions with opal happens at about the same temperature ( ) , and their slopes are about the same .this is an indication that , even if the extreme case of in the ground states can not be an overall improvement of the mhd the equation of state , the specific ground - state occupation numbers of [ see eq.([fo : w ] ) ] should be improved , most likely so that they will be closer to .[ fig14 ] also tells us that it is the choice of occupation probability of the ground states of hydrogen that is responsible for the difference between mhd and opal from to .this result shows another direction in which the mhd equation of state can be improved , namely by inclusion of a more realistic pressure - ionization mechanism , likely based on a hard sphere model [ see , for instance , ] .such a procedure could then assure physical consistency even if .special attention is also in order for the cases mhd and mhd , in the temperature range of , shown in fig .[ fig12 ] ( d ) .both these cases are close to opal .first , one can realize that it is the contribution of the excited states of helium that causes the behavior of the line at the low - temperature end .then , in this figure the wiggly feature from is once again the contribution of the excited states of hydrogen .if we assume that opal is the better equation of state around the temperature of then the occupation probability expression used by might indeed have altered the partition function of the ground states too strongly .another numerical experiment was dedicated to the validity of the correction to the debye - hckel term , such as employed in the mhd equation of state , but also in ceff . as mentioned in section 1 , the correctionis conventionally added to remedy the effects of the divergence of the debye potential at .the net effect of the correction is to prevent the negative debye - hckel pressure correction from exceeding the ideal - gas pressure which would , at very high densities , cause a negative total pressure . in fig .[ fig15 ] , we have plotted the value of for our solar model . because the debye - hckel correction reaches its maximum in the middle of the solar convection zone , is there also bigger than elsewhere . 
in parallel , in that region the affected thermodynamic quantities are correspondingly enhanced or reduced with respect to the other equations of state that have no correction ( fig . [ fig16 ] ) . a comparison of fig . [ fig17 ] ( a - c ) with fig . [ fig15 ] makes one realize that the contribution of the correction shows up clearly in pressure , and , revealing how these thermodynamic quantities differ from those of equations of state without a correction . the correction leads to a significant change of the thermodynamic quantities in the solar convection zone . if one assumes that opal ( which does not contain a correction ) is in all respects quite close to the true equation of state in this temperature range , one can conclude that a correction would lead to inconsistent quantities and . however , in [ fig . [ fig17 ] ( d ) ] , the behavior of the correction is more complicated and concealed . that is certainly one of the reasons why helioseismic studies so far have not had problems with the correction . for instance , very successful solar models have been constructed with ceff and mhd . as a word of caution we mention , however , that the recipe of improving the mhd equation of state by simply removing its correction would not work in all stellar applications . for the sun it does work , because nowhere inside does the debye - hückel term come even close to causing negative pressure . in contrast , an application of the mhd equation of state to the physical conditions of low - mass stars would be confronted with this pathology , and adding the correction is a must , already for formal reasons , independent of physical merit . incidentally , the mhd equation of state with a correction has turned out to be a useful working tool for low - mass stars , but this might have been due to fortuitous circumstances . for a more realistic physical description , higher - order contributions in the coulomb interaction beyond the debye - hückel theory will be needed . the first part of the paper has been dedicated to the influence of heavy elements on thermodynamic quantities . to isolate the contribution of a selected heavy element separately , we have compared the results from a h - he - c mixture with those of a h - he mixture . it has emerged that for temperatures between ( region `` a '' in fig . [ fig6 ] and fig . [ fig7 ] ) , the thermodynamic quantities are very sensitive to the detailed physical treatment of heavy elements , in particular regarding the excited states of all atoms and ions of the heavy elements . these findings carry an important diagnostic potential to use the sun as a laboratory to constrain physical theories of atoms and ions immersed in hot dense plasmas . however , we have also found that not all physical effects of heavy elements can be detected with helioseismic studies . the reason is that in a realistic mixture with many heavy elements , at certain places in the solar convection zone , the profile of the relevant adiabatic gradient is significantly smoother than for less realistic artificial mixtures which contain a smaller number of heavy elements . as a consequence , in some locations , the adiabatic gradient can become quite independent of details of the physical treatment of the species . such a property is welcome news for solar modelers , because it reduces the uncertainty due to the equation of state . a prime beneficiary of this enhanced precision will be the helioseismic helium and heavy - element abundance determination .
for this purpose , we have identified in the adiabatic gradient a useful device in the form of a prominent , largely model - independent feature of heavy elements . this feature is found at temperatures around ( region `` b '' in fig . [ fig6 ] and fig . [ fig7 ] ) , corresponding roughly to the base of the solar convection zone , where each heavy element exhibits its own ionization profile . we verified that these profiles are indeed quite independent of the details in the physics . they can serve as tracers for heavy elements , and they harbor the potential for a helioseismic determination of the relative abundance of heavy elements in the solar convection zone . the second part of the paper deals with related issues . it is no surprise that the major contribution of the heavy elements to pressure is given by the total number of particles involved . these are the nuclei and the electrons released by ionization , which is mainly determined by temperature and the relevant ionization energies . somewhat less expected is the result that with a larger number of heavy elements , the profile of the thermodynamic quantities becomes smoother than in the case of a low number ( one or two ) of representative heavy elements . in a quantitative study , we found that 6-element mixtures , which are still widely used in solar modeling , may contain errors of up to due to the insufficient number of heavy elements . such a discrepancy does matter in present helioseismic studies ( section 3.4 ) . we conclude that in order to avoid this error , the element mixture must contain at least 10 of the most abundant elements . in a third part , rather as a by - product of the present study , we discovered one important reason why , in helioseismic studies , the mhd equation of state does not produce as good results as the opal equation of state . we developed an unphysical diagnostic formalism , where the occupation probability of the ground states of all species was left untouched at unity . with this simple tool , we have on the one hand confirmed the conjectures of nd98 about the importance of the ground - state contribution compared to that of the excited states . on the other hand , we have also shown that the difference between the mhd and opal equations of state around appears to be due to mhd 's specific choice of the occupation probability of the ground state of hydrogen . we have realized that the ground - state contribution clearly moves the mhd model away both from the helioseismologically determined values and from opal . this result suggests that the specific occupation probability adopted in the mhd formalism might perturb the ground states ( and perhaps also the low - lying excited states ) too strongly .
in a final part , and as another by - product , we have obtained quantitative results about the effect of the correction term , which is sometimes added to debye - hückel theory . we confirm earlier conjectures that the correction causes a significant spurious effect in the solar equation of state and is therefore inadmissible . the correction should therefore be taken out of the mhd equation of state ( and ceff for that matter ) . such a remedy will be acceptable for solar applications , because of the overall smallness of the debye - hückel correction in the sun . however , some form of a correction is still required for low - mass stellar modeling with the mhd and ceff equations of state , because it has to prevent the total pressure from becoming negative at high densities and relatively low temperatures . we note in passing that the opal equation of state does not need a correction because it contains genuine higher - order coulomb correction terms . the major discrepancy between mhd and opal other than the aforementioned ground - state effect is not an absence of higher - order coulomb terms in mhd , but the presence of incorrect ones in the form of the correction . the mhd equation of state should be upgraded to include higher - order coulomb contributions .
we thank forrest rogers for stimulating discussions and for most of the actex equation of state data used in this study , jørgen christensen - dalsgaard for the solar model used in this study , and alan irwin and joyce guzik for the sireff program from which some of the comparison data are generated . this work was supported by the grants ast-9618549 and ast-9987391 of the national science foundation and the soho guest investigator grants nag5 - 7352 and nag5 - 7902 of nasa . soho is a project of international cooperation between esa and nasa .
antia , h. m. and chitre , s. m. 1998 , a&a , 339 , 239 - 251
bahcall , j. n. and pinsonneault , m. h. 1992 , reviews of modern physics , 64 , 885
basu , s. and christensen - dalsgaard , j. 1997 , a&a , 322 , l5 - l8
basu , s. , däppen , w. and nayfonov , a. 1999 , , 518 , 985
baturin , v. a. , däppen , w. , gough , d. o. and vorontsov , s. v. 2000 , m.n.r.a.s . , 316 , 71
baturin , v. a. , däppen , w. , wang , x. and yang , f. 1995 , stellar evolution : what should be done , 32nd liège international astrophysical colloq . , ed . a. noels , d. fraipont - caro , m. gabriel , n. grevesse and p. demarque , ( université de liège : liège , belgium ) , 33
brun , a. s. , turck - chièze , s. and zahn , j. p. 1999 , , 525 , 1032
charbonnel , c. , däppen , w. , bernasconi , p. , maeder , a. , meynet , g. , schaerer , d. and mowlavi , n. 1999 , a&as , 135 , 405
canuto , v. m. and mazzitelli , i. 1991 , , 370 , 295
christensen - dalsgaard , j. and däppen , w. 1992 , , 4 , 267
christensen - dalsgaard , j. , däppen , w. and lebreton , y. 1988 , nature , 336 , 634
christensen - dalsgaard , j. , däppen , w. , and the gong team 1996 , science , 272 , 1286
cox , a. n. , guzik , j. a. and kidman , r. b. 1989 , , 342 , 1187
däppen , w. 1996 , bull . astron . soc . india , 24 , 151
däppen , w. , gough , d. o. , kosovichev , a. g. and rhodes , e. j. jr . 1993 , inside the stars , ed . w. w. weiss and a. baglin , proc . iau colloq . 137 , ( asp : san francisco ) , 304
däppen , w. , mihalas , d. and hummer , d. g. 1988 , , 332 , 261
debye , p. and hückel , e. 1923 , phys . z. , 24 , 185
eggleton , p. p. , faulkner , j. and flannery , b. p. 1973 , a&a , 23 , 325
elliot , j. r. and kosovichev , a. g. 1998 , , 500 , l199
gong , z. g. and däppen , w. 2000 , the impact of large - scale surveys on pulsating star research , ed . l. szabados and d. kurtz , proc . iau colloq . 176 , ( asp : san francisco ) , 388
gong , z. g. , däppen , w. and zejda , l. 2001 , , 546 , 1178
gough , d. o. 1984 , mem . , 55 , 13
graboske , h. c. jr . , harwood , d. j. and rogers , f. j. 1969 , phys . rev . , 186 , 210
grevesse , n. and noels , a. 1993 , in origin and evolution of the elements , eds . n. prantzos , e. vangioni - flam and m. cassé ( cambridge : cambridge univ . press ) , 15 - 25
grevesse , n. and sauval , a. j. 1998 , space science reviews , 85 , 161
guzik , j. a. and swenson , f. j. 1997 , , 491 , 967
hummer , d. g. and mihalas , d. 1988 , , 331 , 794
iglesias , c. a. and rogers , f. j. 1996 , , 464 , 943 - 953
irwin , a. , swenson , f. j. , vandenberg , d. and rogers , f. j. 2001 , in preparation
kippenhahn , r. and weigert , a. 1990 , stellar structure and evolution , springer - verlag : berlin
kurucz , r. l. 1991 , in stellar atmospheres : beyond classical models , eds . crivellari , l. , hubeny , i. and hummer , d. g. , ( nato asi series , kluwer , dordrecht ) , 441 - 448
lebreton , y. and däppen , w. 1988 , seismology of the sun and sun - like stars , ed . v. domingo and e. j. rolfe , esa sp-286 , ( esa publications division : noordwijk , the netherlands ) , 661
michaud , g. and vauclair , s. 1991 , solar interior and atmosphere , ed . a. n. cox , w. c. livingston and m. s. matthews , ( tucson : university of arizona press ) , 304
mihalas , d. , däppen , w. and hummer , d. g. 1988 , , 331 , 815
mihalas , d. , hummer , d. g. , mihalas , b. w. and däppen , w. 1990 , , 350 , 300
nayfonov , a. and däppen , w. 1998 , , 499 , 489
nayfonov , a. , däppen , w. , mihalas , d. and hummer , d. g. 1999 , , 526 , 451
proffitt , c. r. 1994 , , 425 , 849
richard , o. , vauclair , s. , charbonnel , c. and dziembowski , w. a. 1996 , a&a , 312 , 1000
rogers , f. j. 1986 , , 310 , 723
rogers , f. j. , swenson , f. j. and iglesias , c. a. 1996 , , 456 , 902
saumon , d. and chabrier , g. 1992 , phys . rev . a , 46 , 2084
shibahashi , h. , noels , a. and gabriel , m. 1983 , a&a , 123 , 283
thoul , a. a. , bahcall , j. n. and loeb , a. 1994 , , 421 , 828
ulrich , r. k. 1982 , , 258 , 404
vorontsov , s. v. , baturin , v. a. , gough , d. o. and däppen , w. , the equation of state in astrophysics , ed . g. chabrier and e. schatzmann , proc . iau colloq . 147 , ( cambridge : cambridge university press ) , 545
|
although 98% of the solar material consists of hydrogen and helium , the remaining chemical elements contribute in a discernible way to the thermodynamic quantities . an adequate treatment of the heavy elements and their excited states is important for solar models that are subject to the stringent requirements of helioseismology . the contribution of various heavy elements to a set of thermodynamic quantities has been examined . characteristic features that can trace individual heavy elements in the adiabatic exponent ( s being the specific entropy ) , and hence in the adiabatic sound speed , were searched for . it has emerged that prominent signatures of individual elements exist , and that these effects are greatest in the ionization zones , typically located near the bottom of the convection zone . the main result is that some of the features found here depend strongly both on the given species ( atom or ion ) and on its detailed internal partition function , whereas other features only depend on the presence of the species itself , not on details such as the internal partition function . the latter features are obviously well suited for a helioseismic abundance determination , while the former features present a unique opportunity to use the sun as a laboratory to test the validity of physical theories of partial ionization in a relatively dense and hot plasma . this domain of plasma physics has so far had no competition from terrestrial laboratories . another , quite general , finding of this work is that the inclusion of a relatively large number of heavy elements has a tendency to smear out individual features . this affects both the features that determine the abundance of elements and the ones that identify physical effects . this property alleviates the task of solar modelers , because it helps to construct a good working equation of state which is relatively free of the uncertainties from basic physics . by the same token , it makes the reverse task , namely constraining physical theories with the help of solar data , more difficult .
|
efforts are currently underway in the u.s . air force to utilize a heterogeneous set of physical links ( rf , optical / laser and satcom ) to interconnect a set of terrestrial , space and highly mobile airborne platforms ( satellites , aircrafts and unmanned aerial vehicles ) , referred to as airborne networking platforms ( anps ) , to form an airborne network ( an ) . the design , development , deployment and management of a network where the nodes are mobile are considerably more complex and challenging than for a network of static nodes . this is evident from the elusive promise of the mobile ad - hoc network ( manet ) technology , where despite intense research activity over the last fifteen years , mature solutions are yet to emerge . one major challenge in the manet environment is the unpredictable movement pattern of the mobile nodes and its impact on the network structure . in the case of an airborne network ( an ) , there exists considerable control over the movement pattern of the mobile platforms . a senior air force official can specify the controlling parameters , such as the _ location , flight path and speed _ of the anps , to realize an an with desired functionalities . such control provides the designer with an opportunity to develop a topologically stable network , even when the nodes of the network are highly mobile . we view the an as an infrastructure ( a wireless mesh network ) in the sky formed by mobile platforms such as aircrafts , satellites and uavs to provide communication support to its clients , such as combat aircrafts on a mission . just as an airborne warning and control system ( awacs ) aircraft plays a role in a mission by providing communication support to fighter aircrafts directly engaged in combat , we believe that the aircrafts and anps forming the an will provide similar support to the combat aircrafts over a much larger area . as shown in fig . [ fig : aircorridor1 ] , the combat aircrafts on a mission fly through a zone referred to as an _ air corridor _ . in addition to forming a connected backbone network , the anps are also required to provide complete _ radio coverage _ in the air corridor so that the combat aircrafts , irrespective of their locations within the air corridor , have access to at least one backbone node ( i.e. , an anp ) and , through it , the entire network . accordingly , the an is required to have two distinct properties : ( 1 ) the backbone network formed by the anps must remain _ connected at all times _ , even though the topology of the network changes with the movement of the anps , and ( 2 ) the entire three dimensional space of the air corridor is _ covered at all times _ by the continuously moving anps . to the best of our knowledge this is the first paper that proposes an architecture for an an and provides solutions for the _ all time connected - coverage problem of a three - dimensional space with mobile nodes _ . one of the pioneering results on the three dimensional coverage problem for sensor networks was presented by haas _ et al . _ , in which they concluded that the truncated octahedron has the highest volumetric quotient ( the ratio of the volume of a polyhedron to the volume of its circumsphere ) among all the space - filling polyhedrons and utilized this to develop placement strategies for three dimensional underwater sensor networks . their scheme is a centralized one . a distributed protocol for achieving three dimensional space coverage is found in the research of tezcan _ et al .
subsequent work introduced a graph - based notion and used it to obtain three dimensional sensor coverage . similar research aiming at the coverage problem in 3d has also been presented by other authors . however , none of these researchers put any emphasis on the problem of obtaining coverage while the constituent nodes are mobile in a three dimensional space . the mobile nature of the anps in airborne networks adds yet another dimension of difficulty to the 3d coverage problem . in this paper we first propose an architecture for an an where airborne networking platforms ( anps - aircrafts , uavs and satellites ) form the backbone or mobile base stations of the an , and the combat aircrafts on a mission function as mobile clients . we then proceed to determine the number and initial location of the anps , their velocity and transmission range , so that the dynamically changing network retains properties ( 1 ) and ( 2 ) mentioned in the previous paragraph . the rest of the paper is organized as follows . in section [ sec : sysmodel ] , we provide the system model and an architecture of an airborne network . section [ sec : probformulation ] formally states the connectivity problem for an an . in section [ sec : connectivity ] , we provide an algorithm that finds the velocity and the transmission range of the anps so that the dynamically changing network remains connected at all times . section [ sec : routing ] presents a routing algorithm that ensures a connection between the source - destination node pair with the fewest number of path switchings . given the dimensions of the air corridor and the radius of the coverage sphere associated with an anp , section [ sec : coverageprobformulation ] formulates the coverage problem for the air corridor . section [ sec : coveragesolution ] presents an algorithm that finds the fewest number of anps required to provide complete coverage of the air corridor at all times . section [ sec : conncoverage ] combines the results of sections [ sec : connectivity ] and [ sec : coveragesolution ] and presents an algorithm to provide connected - coverage to the air corridor at all times . in section [ sec : visualization ] we briefly describe a visualization tool that we developed to demonstrate the movement patterns of the anps and their impact on the resulting dynamic graph and the coverage volume of the backbone network . the results of the experimental evaluations of our algorithms and the related discussion are presented in section [ sec : experiments ] . section [ sec : conclusion ] concludes the paper . a schematic diagram of our view of an an is shown in fig . [ fig : airborne ] . in the diagram , the black aircrafts are the airborne networking platforms ( anps ) , the aircrafts that form the infrastructure of the an ( although in fig . [ fig : airborne ] only aircrafts are shown as anps , the uavs and satellites can also be considered as anps ) . we assume that the anps follow a circular flight path . the circular flight paths of the anps and their coverage areas ( shaded spheres with anps at the center ) are also shown in fig . [ fig : airborne ] . thick dashed lines indicate the communication links between the anps . the figure also shows three fighter aircrafts on a mission passing through the space known as the _ air corridor _ , where network coverage is provided by anps 1 through 5 . when the fighter aircrafts are at point p1 on their flight path , they are connected to anp4 because point p1 is covered by anp4 only .
as the fighter aircrafts move along their flight trajectories , they pass through the coverage area of multiple anps and there is a smooth hand - off from one anp to another when the fighter aircrafts leave the coverage area of one anp and enter the coverage area of another . the fighter aircrafts are connected to an anp as long as they are within the coverage area of that anp . at points p1 , p2 , p3 , p4 , p5 and p6 on their flight path in fig . [ fig : airborne ] , the fighter aircrafts are connected to the anps ( 4 ) , ( 2 , 4 ) , ( 2 , 3 , 4 ) , ( 3 ) , ( 1 , 3 ) and ( 1 ) , respectively . one major difference between the wireless mesh networks deployed in many u.s . cities and the ans is the fact that , while the nodes of the wireless mesh networks deployed in the u.s . cities are static , the nodes of an an are highly mobile . however , as noted earlier , the an designer has considerable control over the movements of the mobile platforms forming the an . she can decide on the _ locality _ where the aircraft / anps should fly , their _ altitude , flight path _ and _ speed of movement _ . control over these four important parameters , together with the knowledge of the _ transmission range _ of the transceivers on the flying platforms , provides the designer with an opportunity for creating a fairly stable network , even with highly mobile nodes . in this paper , we make a simplifying assumption that two anps can communicate with each other whenever the distance between them does not exceed the specified threshold ( the transmission range of the onboard transmitter ) . we are well aware of the fact that successful communication between two airborne platforms depends not only on the distance between them , but also on various other factors such as ( i ) the line of sight between the platforms , ( ii ) changes in the atmospheric channel conditions due to turbulence , clouds and scattering , ( iii ) the banking angle , the wing obstruction and the dead zone produced by the wake vortex of the aircraft and ( iv ) the doppler effect . moreover , the transmission range of a link is not a constant and is impacted by various factors , such as transmission power , receiver sensitivity , scattering loss over altitude and range , path loss over propagation range , loss due to turbulence and the transmission aperture size . however , the distance between the anps remains a very important parameter in determining whether communication between the anps can take place , and as the goal of this research is to understand the basic and fundamental issues of designing an an with the twin invariant properties of coverage and connectivity , we feel such simplifying assumptions are necessary and justified . once the fundamental issues of the problem are well understood , factors ( i ) through ( iv ) can be incorporated into the model to obtain a more accurate solution . it is conceivable that even if the network topology changes due to movement of the nodes , some underlying structural properties of the network may still remain invariant . a structural property of prime interest in this context is the _ connectivity _ of the dynamic graph formed by the anps . we want the anps to fly in such a way that even though the links between them are established and disestablished over time , the underlying graph remains connected at all times . although we give connectedness of the graph as an example of a structural property , many other graph theoretic properties can be specified as design requirements for the network . the problem can be
described formally in the following way . consider nodes ( flying platforms ) in an -dimensional space ( for the anp network scenario ) . we denote by the coordinates of the node at time , where by convention the coordinate vector is considered a column vector , and by the corresponding stacked vectors collecting these quantities over all nodes . the network of flying platforms described by system ( 1 ) gives rise to a _ dynamic graph _ . it is a dynamic graph consisting of * a set of nodes indexed by the set of flying platforms , and * a set of edges , where is the euclidean distance between the platforms and , and is a constant . since we have control over the node dynamics , the question that naturally arises is whether we can control the motion of the anps so that the graph retains graph - theoretic properties of interest for all time . a graph is connected if there exists a path between any two nodes of the graph . oftentimes the property will correspond to the requirement that the graph remains connected at all times . formally the problem can be stated as follows . suppose that is the set of all graphs on nodes with property . is it possible to find a control law such that if then for all ? although a few researchers have studied problems in this domain , many important questions still remain unanswered . for example , in our study of the movement pattern of the anps to create a connected network , we assume that the flight paths of the mobile platforms are already known and we want to find out the speed at which these platforms should move , so that the resulting dynamic graph remains connected at all times . the studies undertaken so far do not address such issues . although the movement of the airborne platforms will be in a three dimensional space , in a simplified version of the problem in two dimensions ( i.e. , when all the aircrafts are flying at the same altitude ) the problem can be stated as follows :
_ mobility pattern for connected dynamic graph ( mpcdg ) _ : this problem has five controlling parameters :
( i ) a set of points in a two ( or three ) dimensional space ( representing the centers of the circular flight paths of the platforms ) ,
( ii ) a set of radii representing the radii of the circular flight paths ,
( iii ) a set of points representing the initial locations ( i.e. , locations at time ) of the platforms on the circular flight paths ,
( iv ) a set of velocities representing the speeds of the platforms , and
( v ) the transmission range of the transceivers on the airborne platforms .
in the mpcdg problem scenario , any structural property * p * of the resulting dynamic graph will be determined by the problem parameters ( i ) through ( v ) . the problems that arise in this formulation are as follows : given any four of the five problem parameters , how to determine the fifth one , so that the resulting dynamic graph retains property * p * at all times ? most often we would like to know , given ( i ) , ( ii ) , ( iii ) and ( v ) , at what speed the anps should fly so that the resulting graph is _ connected _ at all times . alternately , we may want to determine the minimum transmission range of the anps to ensure connectivity . in this case , the problem will be specified in the following way . given ( i ) , ( ii ) , ( iii ) and ( iv ) , what is the minimum transmission range of the anps so that the resulting graph is _ connected _ at all times ? in order to answer these questions , we first need to be able to answer a simpler question .
given all five problem parameters , including the speed of the anps , how do we determine if the resulting dynamic graph is connected at all times ? we discuss this problem next . in this subsection we describe our technique to find an answer to the question posed in the previous paragraph . suppose that two anps , represented by two points and ( either in two or in three dimensional space ; the two dimensional case corresponds to the scenario where the anps are flying at the same altitude ) , are moving along two circular orbits with centers at and , with orbit radii and , as shown in fig . [ fig : pavelpolar ] , with velocities and ( with corresponding angular velocities and ) , respectively .
[ fig : pavelpolar : initial positions of two points moving along two circular orbits ]
[ fig : betafig : phase angle of a point on its orbit at time ]
a moving node is specified by the radius vector directed from some origin point , and similarly for the other point . therefore the distance between the nodes at time is given by the law of cosines applied to the two radius vectors . as mentioned earlier , we have assumed that communication between the anps is possible if and only if the euclidean distance between them does not exceed the communication threshold distance . this implies that the link between the nodes and is alive ( or active ) when this distance is at most the threshold . in the analysis that follows , we have assumed that the anps are flying at the same altitude , i.e. , we focus our attention on the two dimensional scenario . however , this analysis can easily be extended to the three dimensional case to model the scenario where the anps are flying at different altitudes . in this case we can view the anps as points on a two - dimensional plane moving along two circular orbits , as shown in fig . [ fig : pavelpolar ] . in fig . [ fig : pavelpolar ] , the vectors from the origin to the centers of the orbits and are given , and the cartesian co - ordinates of the centers can be readily obtained . accordingly , the position of each node can be expressed in polar coordinates with respect to the origin point , as shown in fig . [ fig : pavelpolar ] . the initial locations of the points and are given . from fig . [ fig : betafig ] , the phase angle of a node with respect to the center of its orbit can be calculated by taking projections on the axes . since , from fig . [ fig : pavelpolar ] , the angle made by each node at time with respect to its orbit center is known , the angle between the two radius vectors follows . taking the projections on the and axes , recalling the definitions above and simplifying , we obtain the distance as a function of time . combining equation [ eq : vs ] with equations [ eq : r ] and [ eq : cosij ] , we arrive at equation [ eq : finals1 ] . in equation [ eq : finals1 ] , all parameters on the right hand side are known from the initial state of the system , and thus the distance between the nodes at any time can be obtained . if the anps move at the same velocity and the radii of the circular orbits are identical , the above expression simplifies considerably . if the problem parameters ( i ) through ( v ) are specified , we can check if the dynamic graph is connected at all times following these two steps . in the first step , we determine the lifetime ( active / inactive ) of a link between a pair of nodes and in the following way .
* algorithm 1 : link lifetime computation *
1 . _ begin _
2 . using equation ( [ eq : finals1 ] ) , compute and plot the distance between a pair of nodes and as a function of time ( see fig . [ fig : waveform1 ] ) .
3 . draw a horizontal line in the distance versus time plot at the communication threshold , i.e. , the distance below which communication between and is possible and above which it is impossible . call this line the _ communication threshold line _ ( ctl ) .
4 . the ctl is divided into segments corresponding to the parts where the distance is below the threshold and where it is above .
5 . the projections of the ctl segments on the -axis ( i.e. , the time line ) indicate the times when the link between and is alive and when it is not ( see fig . [ fig : waveform2 ] ) .
6 . _ end _
using algorithm 1 , we can compute the lifetime of every link ( i.e. , every pair of nodes ) in the network . in the second step , using algorithm 2 ( given below ) we divide the time line into smaller intervals and determine exactly the links that are active during each of these intervals . for each of the intervals we check if the an graph is connected during that interval using the connectivity checking algorithm in . the algorithm is described in detail next .
* algorithm 2 : checking connectivity of the airborne network between time and *
1 . _ begin _
2 . using the algorithm for link lifetime computation , compute the lifetimes of the links between all node pairs and plot them over the time line ( see fig . [ fig : interval ] ) .
3 . draw a vertical line through the start and finish time of each interval associated with a link on the -axis ( time line ) .
4 . repeat step 3 for each link of the network .
5 . the -axis ( time line ) is now divided into a number of smaller intervals ( see fig . [ fig : interval ] , where the intervals are numbered from 1 through 17 ) . from the figure , we can identify all the links that are alive during any one interval .
6 . check if the an graph is connected with the set of live links during one interval . this can be done with the connectivity testing algorithm in .
7 . repeat step 6 for all the intervals between and .
8 . if the an graph remains connected for all intervals , conclude that the an remains connected during the entire duration between and ; otherwise conclude that the specified problem parameters do not ensure an an that remains connected during the entire time interval between and .
9 . _ end _
an example of a plot of equation ( [ eq : finals1 ] ) ( generated using matlab ) is shown in fig . [ fig : waveform1 ] with the communication threshold distance . this implies that the link between the nodes and exists when the distance between them is at most the threshold , and the link does not exist otherwise . this is shown in fig . [ fig : waveform2 ] . the red part indicates the time intervals when the link is _ inactive _ ( or dead ) and the blue part indicates when it is _ active _ ( or live ) . thus , using equation ( [ eq : finals1 ] ) and comparing the distance between any two nodes with the threshold distance , we can determine the active / inactive times of all links . this can be represented as intervals on a time line as shown in fig . [ fig : interval ] . by drawing projections from the end - points of the active / inactive times of each link on the time line , we can find out all the links that are active during any interval on the time line .
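as an illustration , a minimal c++ sketch of these two steps is given below . it is not the authors' implementation : the orbit and parameter names are ours , it samples the time line at a fixed resolution instead of computing the exact interval boundaries of fig . [ fig : interval ] , and it checks the connectivity of the live links in each sample slot with a simple depth - first search .

```cpp
// minimal sketch of the two-step connectivity check (not the authors' code);
// orbit parameters, the sampling step dt and the helper names are ours.
#include <cmath>
#include <vector>

struct Orbit {          // circular flight path of one anp (coplanar case)
    double cx, cy;      // center of the orbit
    double r;           // orbit radius
    double omega;       // angular velocity in rad/s (linear speed / radius)
    double phase0;      // angular position at time t = 0
};

// position of the anp on its orbit at time t
static void position(const Orbit& o, double t, double& x, double& y) {
    double a = o.phase0 + o.omega * t;
    x = o.cx + o.r * std::cos(a);
    y = o.cy + o.r * std::sin(a);
}

// distance between two anps at time t; this evaluates the same quantity as
// equation [eq:finals1], directly from the orbit geometry
static double distanceAt(const Orbit& a, const Orbit& b, double t) {
    double ax, ay, bx, by;
    position(a, t, ax, ay);
    position(b, t, bx, by);
    return std::hypot(ax - bx, ay - by);
}

// step 1 (algorithm 1): in a given sample slot, a link is alive iff the
// distance does not exceed the transmission range.
// step 2 (algorithm 2): for every slot, check that the graph of live links
// is connected; a depth-first search from node 0 is used here.
static bool connectedAtAllTimes(const std::vector<Orbit>& anp, double range,
                                double t1, double t2, double dt) {
    int n = (int)anp.size();
    if (n <= 1) return true;
    for (double t = t1; t <= t2; t += dt) {
        std::vector<std::vector<int>> adj(n);       // live links in this slot
        for (int i = 0; i < n; ++i)
            for (int j = i + 1; j < n; ++j)
                if (distanceAt(anp[i], anp[j], t) <= range) {
                    adj[i].push_back(j);
                    adj[j].push_back(i);
                }
        std::vector<char> seen(n, 0);
        std::vector<int> stack = {0};
        seen[0] = 1;
        int reached = 1;
        while (!stack.empty()) {
            int u = stack.back();
            stack.pop_back();
            for (int v : adj[u])
                if (!seen[v]) { seen[v] = 1; ++reached; stack.push_back(v); }
        }
        if (reached != n) return false;             // the an is disconnected here
    }
    return true;
}
```

the same check can be wrapped in a binary search over the transmission range or over the velocity in order to locate the minimum values discussed in the following subsections .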
as shown in fig . [ fig : interval ] , links 1 , 2 and 3 are active in interval 1 ; links 1 and 3 are active in interval 2 ; links 1 , 2 and 3 are active in interval 3 ; and so on . once we know all the links that are active during a time interval , we can determine if the graph is connected during that interval using any algorithm for computing graph connectivity . by checking if the graph is connected in all intervals , we can determine if the graph is connected at all times when the anps are moving at the specified velocities . in the previous subsection , we have described a technique to determine if the an remains connected during the entire operational time between and , when all problem parameters ( i ) - ( v ) are specified . in this subsection we try to determine the problem parameter ( iv ) ( i.e. , the velocity of the anps ) so that we ensure a connected an during the entire operational time between and , when all other problem parameters have already been specified . the minimum and maximum operating velocities of the anps are known . by conducting a binary search on this range , we can compute the minimum velocity at which the anps should fly so that the an remains connected during the entire operational time . alternately , we can also try to determine the velocity at which the anps should fly so that the an remains connected during the entire operational time and _ fuel consumption by the anps is minimized _ . if it is known that the fuel consumption is minimized when the anps fly with a certain velocity , we can find the velocity that is closest to it and also ensures connectivity of the an during the entire operational time by a targeted search within the corresponding range . in this subsection we try to determine the problem parameter ( v ) ( i.e. , the transmission range of the anps ) so that we ensure a connected an during the entire operational time between and , when all other problem parameters have already been specified . the maximum transmission range of an anp is known in advance . by conducting a binary search within this range , we can determine the smallest transmission range that will ensure a connected an during the entire operational time when all other problem parameters have already been determined . in the previous section we described a procedure to determine the velocity of the anps so that the resulting dynamic graph is connected at all times . although the graph remains connected at all times , as the links come and go ( alive or dead ) a path between a source - destination pair may not exist for the entire duration of communication . suppose that a node has to communicate with another node from time to . since the graph is connected at all times , at least one path , say , exists from to at the start . however , this path may not exist till the end . suppose that , as one of its links dies , the path breaks at some time . clearly it can not be used for communication between and after that . since the graph is connected at all times , there must exist at least one other path between and at that time . therefore data can be transferred from to using it . however , this path can break at a later time , in which case yet another path ( which is guaranteed to exist because the graph is connected at all times ) can be used for communication between and from then on . in such a scenario , a sequence of paths is used for communication between and in the time interval to .
in this scenario the path has to be _ switched _ two times , once from the first path to the second and the other time from the second to the third . however , it is possible that communication between and in the time interval to could have been achieved with only _ one _ path switching , using one path from to and another path from to . since path switching involves a certain amount of overhead , it is undesirable , and as such we would like to accomplish routing for the duration of communication with _ as few path switchings as possible _ . in fig . [ fig : interval ] we showed how the `` lifetime '' of a link ( i.e. , alive / dead ) can be computed . since paths are composed of links , a path between a source - destination node pair will also be alive / dead at different points of time . from the lifetimes of the links , we can compute the lifetimes of the paths in the following way . if the number of nodes ( i.e. , anps ) in the network is , there exist links , each having an individual lifetime . if a path is made up of links , the path is `` alive '' when all the links through are alive . therefore , similar to fig . [ fig : interval ] that shows the `` lifetime '' of a link , we can construct a figure for the `` lifetime '' of a path ( fig . [ fig : pathlife ] ) .
[ fig : pathlife : paths between a source - destination pair and the corresponding time intervals when they are alive ; the source and destination need to communicate between time and ]
once we have knowledge of the lifetimes of the paths , we can construct a route from the source node to the destination node with the fewest number of path switchings in the following way . the lifetimes of the paths between a source - destination node pair and are shown in fig . [ fig : pathlife ] . the time intervals during which a path is alive are shown by solid lines in fig . [ fig : pathlife ] . we use the notation to indicate that the path is alive during the corresponding time intervals , as shown in fig . [ fig : pathlife ] . in the application scenario that we are considering , we want a communication channel to be open between the source node and the destination node for the entire duration of time from to . since it is possible that no single path between and remains alive for the entire duration from to , a set of paths may constitute a communication channel from to for the duration , where each path in the set is alive only for a fraction of the time interval from to . next we focus on the number of paths between and that we need to consider . since the graph has nodes and links , there could be a very large number of paths corresponding to 1-hop , 2-hop , , -hop paths between and . it may also be noted that each of these paths will have a lifetime associated with it . since examining all paths and their lifetimes will be too time consuming , we restrict our attention to only those paths between and whose number of hops is at most , for some specified value of . by restricting the number of hops in a path to at most , we reduce the computational complexity accordingly . suppose the set of paths of at most hops is denoted by . the _ minimum path switch routing _ algorithm given below finds a subset so that the paths in it maintain a communication channel between and for the entire time duration from to with the fewest number of path switchings .
_ minimum path - switch routing algorithm _
_ input : _ the set of paths between and with length at most hops , and the associated lifetime of each path , in the form of live intervals .
if a path has several live intervals in the time interval between and , each of them is recorded .
_ output : _ a subset of the paths so that the paths in it maintain a communication channel between and for the entire time duration from to with the fewest number of path switchings .
_ comments : _ the algorithm uses a greedy ( locally optimum ) approach to find the paths needed to have one live path from to during the entire time interval between and . in theorem 1 , we prove that this locally optimum greedy approach indeed finds the globally optimal solution .
1 . _ begin _
2 . initialize the set of selected paths to be empty .
3 . set the current time to the start of the communication interval .
4 . _ while _ the current time is less than the end of the communication interval _ do _
( i ) _ for all _ paths in the input set , consider the live intervals that contain the current time ;
( ii ) among these , select the path whose live interval has the largest finish time , add it to the selected set , and advance the current time to that finish time .
5 . _ end _
_ theorem 1 : _ the minimum path - switch routing algorithm finds a set of paths so that a communication channel is open during the entire duration from to with the fewest number of path switches .
_ proof : _ suppose that an optimal algorithm selected a certain set of paths and the minimum path - switch routing algorithm selected another set . the live intervals of the paths that were selected by the optimal algorithm and by the minimum path - switch routing algorithm , together with the finish times of the corresponding intervals , are shown in fig . [ fig : proofdiagram ] . since the minimum path - switch routing algorithm chooses the path that is live at the starting time and whose live interval has the largest finish time among the intervals associated with all the paths , its first interval finishes no earlier than that of the optimal algorithm . therefore , replacing the first path of the optimal solution by the first path chosen by the minimum path - switch routing algorithm , we obtain a new optimal solution . because of the nature of the path selection criterion of the minimum path - switch routing algorithm , the same argument applies to the second path . therefore , replacing the second path of the new optimal solution by the second path chosen by the algorithm , we obtain yet another optimal solution . continuing this process , we can construct an optimal solution that consists exactly of the paths chosen by the algorithm . this implies that the minimum path - switch routing algorithm selects no more paths than the optimal algorithm in order to have an open communication channel for the entire duration from to , and since that is the optimal number of paths needed for this purpose , the minimum path - switch routing algorithm produces an optimal solution .
in this section , we discuss the coverage model of the network formed by the anps . as shown in fig . [ fig : aircorridor1 ] , an air corridor through which the combat aircrafts fly towards their destination can be modeled as a collection of rectangular parallelepipeds . as the combat aircrafts must have access to the an as they fly through the air corridor , all points inside the air corridor must have radio coverage at all times .
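returning briefly to the routing algorithm above , a minimal c++ sketch of its greedy selection rule ( the rule proved optimal in theorem 1 ) is given below . the data layout is ours , and the live intervals of each candidate path are assumed to have been precomputed as in fig . [ fig : pathlife ] .

```cpp
// minimal sketch of the greedy minimum path-switch selection (our data layout)
#include <vector>

struct Interval { double start, finish; };

struct CandidatePath {
    int id;                          // index into the set of <= k-hop paths
    std::vector<Interval> live;      // live intervals within [t1, t2]
};

// returns the ids of the chosen paths in the order they are used, or an
// empty vector if no continuous channel exists over [t1, t2]
std::vector<int> minPathSwitchRouting(const std::vector<CandidatePath>& paths,
                                      double t1, double t2) {
    std::vector<int> chosen;
    double now = t1;
    while (now < t2) {
        double bestFinish = now;
        int bestId = -1;
        // among all paths alive at 'now', pick the one whose live interval
        // extends furthest to the right (largest finish time)
        for (const CandidatePath& p : paths)
            for (const Interval& iv : p.live)
                if (iv.start <= now && iv.finish > bestFinish) {
                    bestFinish = iv.finish;
                    bestId = p.id;
                }
        if (bestId < 0) return {};   // coverage gap: no live path at time 'now'
        chosen.push_back(bestId);
        now = bestFinish;            // switch paths only when forced to
    }
    return chosen;
}
```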
as the shape of an air corridor can be quite complex , we assume that the complex shape can be approximated with rectangular parallelepipeds as shown in fig . [ fig : aircorridor2 ] . the length , width and height of this section of the air corridor are denoted as , and , respectively . the radius of the circular orbit of the anps is denoted by and the number of anps in each such orbit is denoted by . we assume that the anps move around in their orbits with uniform velocities . the coverage volume of each anp is defined as a spherical volume of radius with the anp being at the center of the sphere . we assume that the orbits of the anps are located at the top surface of the air corridor so that they do not cause any hindrance to the flight paths of the combat aircrafts . this is shown in fig . [ fig : aircorridor4 ] , where two circular orbits , each of them containing anps , are located at the top surface of the air corridor . the number of orbits located at the top surface of the air corridor section is denoted as , and accordingly the total number of anps in the network is given by the product of the number of orbits and the number of anps per orbit . the goal of the coverage problem is to provide complete coverage of the entire air corridor at all times with the fewest number of anps . in this problem , complete coverage must be provided irrespective of the locations of the anps as they move continuously in their respective orbits . in figs . [ fig : fig123 ] , [ fig : fig456 ] and [ fig : fig789 ] , we can see the 3d view , top view and front view of anps in circular orbits and the volume covered by them for three different cases . in case i ( fig . [ fig : fig123 ] ) , the orbit radius is greater than the radius of the spherical coverage volume of each anp . it can be clearly seen from fig . [ fig : fig2 ] that there is an open space inside the orbit that is not covered by any of the anps as they move in the orbit . to increase the intersection volume , the orbit radius can be at most equal to the coverage radius ( case ii ) . in fig . [ fig : fig456 ] the two radii are equal , and we can see from fig . [ fig : fig5 ] that the spheres meet at a single point inside the orbit . we call the intersection of adjacent spheres a _ leaf _ , the top view of which is visible in the figure . to increase the intersection of the spheres further , we need to decrease the orbit radius even more , thus bringing the anps even closer to each other . this is case iii ( shown in fig . [ fig : fig789 ] ) , where the orbit radius is smaller than the coverage radius . the coverage problem can be stated as follows : given the rectangular parallelepiped in terms of its length , width and height , and the radius of the coverage sphere associated with an anp , find the radius of the orbit of the anps and the number of anps in each orbit such that the entire volume of the air corridor is covered at all times with the fewest number of anps . the intersection of the coverage spheres of the anps creates a coverage volume . for two intersecting spheres , the intersection volume is shown in fig . [ fig : int2s ] . as the anps move in their orbits , the associated coverage spheres move with them and consequently the volume that is covered by the moving anps also changes . as a consequence some volume will be covered only a part of the time . however , a part of the intersection volume will be covered at all times irrespective of the positions of the anps as they move in their orbits . this is defined as the _ invariant coverage volume _ . we would like to use this _ invariant coverage volume _ as a building block in order to fill up the air corridor modeled in the form of a rectangular parallelepiped .
since the _ invariant coverage volume _ is irregularly shaped , it is difficult to use it as a building block . for ease of coverage using a building block with a regular shape , we extract a cylindrical volume out of this invariant volume and use it to fill up the rectangular parallelepiped . such a cylinder is shown in fig . [ fig : int2s ] . different views of such a cylindrical section for intersecting anps in a circular orbit are shown in fig . [ fig : fig101112 ] . as we have decided to use a cylinder as the building block to cover the air corridor , we need to know the height and the radius of the circular surface of such cylindrical blocks , denoted by and , respectively . the height and radius of the invariant coverage cylinder are determined by ( i ) the orbit radius , ( ii ) the number of anps per orbit and ( iii ) the radius of the spherical coverage volume of each anp . as mentioned earlier , in this design the anps and their orbits are placed on the top surface of the air corridor . as a consequence , the top half of the invariant coverage cylinder can not be utilized and only the bottom half of the cylindrical volume ( of half the cylinder height ) will be used for the coverage of the rectangular parallelepiped . therefore , in order to cover the height of the air corridor , one must satisfy the constraint that half the cylinder height is at least the height of the air corridor ( fig . [ fig : aircorridor3 ] ) . once this constraint is satisfied , the problem reduces to covering the plane defined by the length and width of the corridor with circles of the cylinder radius , with the goal of minimizing the total number of anps required . we now investigate the structure of the coverage volume . in fig . [ fig : top ] , we have shown the top view through the center of the orbit of three consecutive spheres ( out of the total in the orbit ) intersecting with each other and moving around in the circular orbit with center at . the centers of the spheres are denoted as , and . the radius of the circular orbit is . all the spheres are of uniform radius and move with a uniform velocity . the term _ leaf _ is used to refer to the intersection between two adjacent spheres . in the top view , it can be seen as the intersection of two circular arcs . the length of the _ leaf _ and the distance from the orbit center to the end point of the _ leaf _ are denoted separately , and the width of the _ leaf _ follows from these two quantities . the angle between two adjacent _ leaves _ is determined by the number of spheres moving in the orbit . now , in fig . [ fig : side ] , the side view of two intersecting spheres and their leaves is shown . the intersecting spheres are shown using dashed lines , whereas the intersecting _ leaves _ are shown using solid lines . we cut the largest cylindrical volume from the intersecting region such that it is covered by at least one sphere at all times as the spheres move around in the orbit .
from fig . [ fig : top ] and equations [ eq : hlwl ] , [ eq : rswl ] and [ eq : rsro ] , it is clear that all the geometric variables involved can be represented through the orbit radius , the number of spheres per orbit and the sphere radius only . now from fig . [ fig : side ] , it is clear that the cylindrical volume with the largest height that remains covered during the movement of the spheres has a circular surface whose radius equals the length of the _ leaf _ , and the height of the cylinder follows from the same construction . therefore , given the radius of the orbit , the number of spheres moving in each orbit and the radius of each sphere , the cylinder can be determined by using the above equations . using equations [ eq : hlwl ] , [ eq : rswl ] , [ eq : rsro ] and [ eq : hcy ] , the cylinder height can be expressed in terms of these quantities , and the volume of the cylinder follows . the change of the radius , height and volume of the cylindrical region with the orbit radius and the number of spheres per orbit , for a fixed sphere radius , can be seen in the plots in figs . [ fig : radius ] , [ fig : height ] and [ fig : volume ] . the coverage problem for an airborne network can be formally defined as follows . given :
* the length , width and height of the air corridor , and
* the radius of each coverage sphere associated with each anp ,
find ( i ) the orbit radius , ( ii ) the number of anps in each orbit , ( iii ) the number of orbits required to cover the air corridor , and ( iv ) the placement of the centers of the orbits , such that :
1 . the orbits are placed only on the top surface of the air corridor .
2 . all points in the rectangular area defined by the length and width of the corridor are covered by at least one circle of the cylinder radius centered at an orbit center .
3 . half the height of the invariant coverage cylinder is at least the height of the air corridor .
4 . the total number of spheres required , equal to the total number of orbits times the number of spheres per orbit , is minimized .
in order to minimize the objective function subject to the constraints , we need to find the orbit radius and the number of anps per orbit . this will determine the values of the cylinder radius and height . it can be seen from equations ( [ eq : rc ] ) and ( [ eq : hc ] ) that decreasing the orbit radius increases the cylinder height but decreases the cylinder radius . intuitively , with a smaller cylinder radius we will need more orbits to cover the rectangular parallelepiped , which will eventually increase the number of anps required . therefore , for a given number of anps per orbit we need to set the orbit radius at the highest possible value that still satisfies the height constraint . for the corresponding cylinder radius , the number of orbits will be determined by the placement strategy of the orbits on the top surface of the air corridor . the overall objective of the coverage problem is to minimize the total number of anps subject to the height constraint . we use the following two strategies for the placement of the circular orbits on the top surface of the air corridor :
* strategy 1 : * the largest square that can be inscribed in the circular surface of the cylinder is used as the building block to cover the rectangular region defined by the length and width of the corridor . with the cylinder radius given , the length of each side of the square follows , and the total number of orbits can be calculated from it . hence , the optimization problem following strategy 1 can be formally stated accordingly .
* strategy 2 : * the largest rectangle that can be inscribed in the circular surface of the cylinder , and that has the same length - to - width ratio as the rectangular region defined by the length and width of the corridor , is used as the building block for covering the entire region . let and be the length and width of such a building block .
with the radius of the cylinder given , we obtain the length and width of the inscribed rectangle from the above two relations , and the total number of orbits ( cylinders ) required to cover the entire region can be calculated from them . accordingly , the optimization problem following placement strategy 2 can be formally stated in the same manner as for strategy 1 . the diagrams for the placement of the orbits , and hence of the cylindrical regions following the above two strategies , are shown in fig . [ fig : illstrat1 ] and fig . [ fig : illstrat2 ] , respectively . hence , the locations of the centers of the orbits can be easily determined . it may be observed that both placement strategies 1 and 2 formulate the coverage problem as a non - linear optimization problem . we used the non - linear constrained program solver nimbus to solve the optimization problems following strategies 1 and 2 . the results obtained from nimbus are discussed in section [ sec : experiments ] . in section [ sec : coverageprobformulation ] , we discussed the three dimensional air corridor coverage problem with the anps . as the anps are mobile , the coverage volume associated with one anp is continuously changing with time . in section [ sec : coveragesolution ] we described techniques to find a least - cost solution to the air corridor coverage problem with mobile nodes . earlier , in section [ sec : connectivity ] , we discussed how to determine the velocity and subsequently the transmission range of the anps , so that the resulting backbone network formed by the anps remains connected at all times . in this section , we discuss how to design a network of anps so that ( i ) it remains connected at all times and ( ii ) it provides 100% coverage to the air corridor at all times . we provide a two - phase solution to the connected - coverage problem . in the first phase , using the techniques described in section [ sec : coveragesolution ] , we determine the number and the orbits of the anps that will provide 100% coverage to the air corridor at all times . once that is accomplished , in the second phase , using the techniques described in section [ sec : connectivity ] , we determine the velocity and the transmission range of the anps so that the backbone network formed by the anps remains connected at all times . the visualization tool was designed for observing the movement of objects along circular orbits in a 3d space . the euclidean distance between every pair of objects keeps changing due to their movement . each pair of objects has a threshold value specified . if the euclidean distance between a pair of objects is within the threshold , then they are connected by a link . as soon as the pairwise distance goes beyond the threshold , the link is broken . the tool was designed using opengl and c++ . opengl is a 3d graphics api that works with c++ and provides dynamic interaction with the user . mostly it is used for game programming and creating 3d scenes . for this program , we utilized some of the basic features of the api to create an interactive application to control the variables of each particular orbiting object . one of the features of opengl is the ability to alter the camera view . this allows us to see every angle of the orbiting objects and rotate around the scene .
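the core update step behind such a tool can be sketched as follows . this is a simplified c++ illustration in our own notation : the opengl rendering calls are omitted , and a single common threshold is used for every pair , whereas the tool allows a separate threshold per pair .

```cpp
// simplified per-frame update of the visualization (rendering calls omitted)
#include <cmath>
#include <utility>
#include <vector>

struct Object {
    double cx, cy, cz;   // orbit center (cz is the height above the floor)
    double radius;       // orbit radius
    double angle;        // current angular position on the orbit
    double speed;        // angular increment applied each frame
    double x, y, z;      // current cartesian position
};

// advance every object along its orbit and return the pairs to be linked
std::vector<std::pair<int, int>> updateFrame(std::vector<Object>& objs,
                                             double threshold) {
    for (Object& o : objs) {
        o.angle += o.speed;                       // paced by the system clock in the tool
        o.x = o.cx + o.radius * std::cos(o.angle);
        o.y = o.cy + o.radius * std::sin(o.angle);
        o.z = o.cz;                               // orbits are horizontal circles
    }
    std::vector<std::pair<int, int>> links;
    for (size_t i = 0; i < objs.size(); ++i)
        for (size_t j = i + 1; j < objs.size(); ++j) {
            double dx = objs[i].x - objs[j].x;
            double dy = objs[i].y - objs[j].y;
            double dz = objs[i].z - objs[j].z;
            if (std::sqrt(dx * dx + dy * dy + dz * dz) <= threshold)
                links.emplace_back((int)i, (int)j);   // a line is drawn for this pair
        }
    return links;                                  // consumed by the rendering code
}
```

each frame advances every object along its orbit and recomputes which links should be drawn , which is the same distance - threshold test used throughout the paper .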
a snapshot of the visualization tool with three moving points is shown in fig. [ fig : viz ]. the floor is based on a grid. the center of each orbit can be moved along the axis , , and the axis. the -axis, which represents the height above the floor, allows the object to increase in height in the range . the radius of each circular orbit can be modified in the range . each pair of objects has a connectivity threshold. each threshold , , , and has the following range . using the standard distance equation between two points in space, a link appears to indicate that the objects are within the given threshold limit. the speed of each object is based on the system clock, which varies from computer to computer. the speed of each object can be increased up to units. prior to being set in motion, each object can be positioned strategically on its own orbit. the motion of all objects can be paused to analyze a specific situation.

in this section we present the experimental evaluation results of the two strategies proposed in section [ sec : coveragesolution ]. the goal of these experiments was to find the impact of a change of (i) the radius of the coverage sphere ( ) and (ii) the height of the air corridor ( ) on (a) the radius of the circular orbit of the flying anps ( ), (b) the number of anps in each orbit ( ), and (c) the total number of anps ( ) needed to provide complete coverage of the air corridor, specified by its length, width and height parameters, respectively. fig. [ fig : rs ] shows the impact of changing on and for different sets of values of and for the two strategies 1 and 2. fig. [ fig : h ] shows the impact of changing on , and for different sets of values of , and for the two strategies 1 and 2. the parameter values for and used for the experimentation are indicated in the figures. since the optimal coverage problem turned out to be a non-linear optimization problem (equations ( [ eq : s1obj ] ) to ( [ eq : s2cons ] )), we used the non-linear constrained program solver nimbus to solve it using the two different anp orbit placement strategies 1 and 2. in the following we discuss some experimental results, some of which are intuitive, while others are not. _ observation 1 : _ from fig. [ fig : rs ], it can be seen that an increase in results in an increase in and a decrease in for both strategies 1 and 2. this is somewhat intuitive, as it is only natural to expect that as the radius of the coverage sphere increases, the radius of the circular orbit of the anps will increase and the total number of anps needed to cover the entire air corridor will decrease. it may also be noted that when is too small compared to , there may not be a feasible solution.
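to make the difference between the two placement strategies concrete, the following sketch counts the orbits each strategy needs to tile the top surface of the air corridor, taking the covered-cylinder radius as an input (its dependence on the orbit radius, the number of spheres per orbit and the sphere radius is given by the equations derived earlier and is not repeated here). the building-block geometry follows the definitions in the text: the largest square inscribed in the circle (side r_c * sqrt(2)) for strategy 1, and the largest inscribed rectangle with the corridor's length-to-width ratio for strategy 2. rounding the number of blocks per side up to an integer is an assumption, and the numerical values are hypothetical.

```python
import math

def orbits_strategy1(length, width, r_c):
    """strategy 1: tile with the largest square inscribed in a circle of radius r_c."""
    side = r_c * math.sqrt(2.0)
    return math.ceil(length / side) * math.ceil(width / side)

def orbits_strategy2(length, width, r_c):
    """strategy 2: tile with the largest inscribed rectangle whose aspect ratio is length:width."""
    diag = 2.0 * r_c                          # the rectangle's diagonal equals the circle diameter
    l_b = diag * length / math.hypot(length, width)
    w_b = diag * width / math.hypot(length, width)
    return math.ceil(length / l_b) * math.ceil(width / w_b)

length, width = 100.0, 60.0                   # hypothetical top-surface dimensions
for r_c in (5.0, 10.0, 20.0):
    print(r_c, orbits_strategy1(length, width, r_c), orbits_strategy2(length, width, r_c))
```

for a square surface (length equal to width) the two building blocks coincide and the counts are identical, while otherwise strategy 2 uses a smaller-area block and tends to need at least as many orbits, which is consistent with observation 4 reported below.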
_ observation 2 : _ from fig. [ fig : h ], it can be seen that an increase in results in a decrease in and an increase in for both strategies 1 and 2. this is also somewhat intuitive: as the height of the air corridor increases, the radius of the circular orbit of the anps has to decrease (please see the discussion in section [ sec : coveragesolution ]) and the total number of anps needed to cover the entire air corridor must increase. _ observation 3 : _ from figs. [ fig : rs ] and [ fig : h ], it can be seen that , the number of anps in an orbit, remains constant irrespective of changes in and . this result is not at all obvious. however, on closer examination of the objective functions in equations ( [ eq : s1obj ] ) and ( [ eq : s2obj ] ), one can find an explanation for this phenomenon. the plot of versus is shown in fig. [ fig : cos ]. this factor is present in the objective function for both strategies. from fig. [ fig : cos ], reaches its minimum value when . therefore the objective functions in equations ( [ eq : s1obj ] ) and ( [ eq : s2obj ] ) are minimized when . this behavior of also explains the fact noted in observation 1, where decreases when decreases (i.e., when increases). similarly, it also explains the fact noted in observation 2, where increases when increases (i.e., when decreases). _ observation 4 : _ from figs. [ fig : rs1 ], [ fig : rs2 ], [ fig : h1 ] and [ fig : h2 ], it can be seen that the cost of the solution (i.e., the number of anps needed to provide complete coverage of the air corridor) using strategy 1 is less than that of strategy 2. although the reason for this phenomenon may not be obvious at first glance, on closer examination we can explain it. given the fact that , and the presence of these two terms in the objective functions of strategies 1 and 2 (equations ( [ eq : s1obj ] ) and ( [ eq : s2obj ] )), it is not surprising that the cost of the solution of strategy 1 is less than that of strategy 2. from our experiments we learn that (i) strategy 1 performs better than strategy 2 in all cases, except where , for which both strategies are identical, (ii) the number of anps in an orbit remains a constant ( ) irrespective of the values of and , when the objective function is specified by equations ( [ eq : s1obj ] ) or ( [ eq : s2obj ] ), and (iii) to optimize the objective function, the radius of the circular orbit of the anps should be made as large as possible, subject to the constraint that the height of the corresponding invariant coverage cylinder is at least as large as the height of the air corridor.

the existence of sufficient control over the movement pattern of the mobile platforms in airborne networks opens the avenue for designing topologically stable hybrid networks. in this paper, we discussed the system model and architecture for airborne networks (an). we studied the problem of maintaining connectivity in the underlying dynamic graphs of airborne networks with control over the mobility parameters, and developed an algorithm to solve the problem.
|
the u.s . air force currently is in the process of developing an airborne network ( an ) to provide support to its combat aircrafts on a mission . the reliability needed for continuous operation of an an is difficult to achieve through completely infrastructure - less mobile ad hoc networks . in this paper we first propose an architecture for an an where airborne networking platforms ( anps - aircrafts , uavs and satellites ) form the backbone of the an . in this architecture , the anps can be viewed as mobile base stations and the combat aircrafts on a mission as mobile clients . availability of sufficient control over the movement pattern of the anps , enables the designer to develop a topologically stable backbone network . the combat aircrafts on a mission move through a space called _ air corridor_. the goal of the an design is to form a backbone network with the anps with two properties : ( i ) the backbone network remains _ connected at all times _ , even though the topology of the network changes with the movement of the anps , and ( ii ) the entire three dimensional space of the air corridor is under _ radio coverage at all times _ by the continuously moving anps . in addition to proposing an architecture for an an , the contributions of the paper include , ( i ) development of an algorithm that finds the velocity and transmission range of the anps so that the dynamically changing backbone network remains connected at all times , ( ii ) development of a routing algorithm that ensures a connection between the source - destination node pair with the fewest number of path switching , ( iii ) given the dimensions of the air corridor and the radius of the _ coverage sphere _ associated with an anp , development of an algorithm that finds the fewest number of anps required to provide complete coverage of the air corridor at all times , ( iv ) development of an algorithm that provides connected - coverage to the air corridor at all times , and ( v ) results of experimental evaluations of our algorithms , ( vi ) development of a visualization tool that depicts the movement patterns of the anps and the resulting dynamic graph and the coverage volume of the backbone network .
|
recently , adaptive beamforming has attracted considerable attentions and been widely used in wireless communications , radar , sonar , medical imaging and other areas .many existing methods have been presented in different communication systems - .blind adaptive beamforming , which is intended to form the array direction response without knowing users information beforehand , is an encouraging topic that deals with interference cancellation , tracking improvement and complexity reduction .the linearly constrained minimum variance ( lcmv ) method , with multiple linear constraints , is a common approach to minimize the beamformer output power while keeping the signal of interest ( soi ) from a given direction .however , because of the required input data covariance matrix , the lcmv beamformer can not avoid complicated computations , especially for large input data and/or large sensor elements .also , this method suffers from slow convergence due to the correlated nature of the input signal .choi and shim proposed another computationally efficient algorithm based on the stochastic gradient ( sg ) method for finding the optimal weight vector and avoiding the estimation of the input covariance matrix .as shown in , a cost function is optimized according to the minimum variance subject to a constraint that avoids the cancellation of the soi , i.e. the so - called constrained minimum variance ( cmv ) .nevertheless , this algorithm still can not avoid slow convergence rate .furthermore , another noticeable problem is how to define the range of step size values .the small value of step size will lead to slow convergence rate , whereas a large one will lead to high misadjustment or even instability . in sg algorithms using the constant modulus ( cm ) cost functionare reviewed by johnson __ for blind parameter estimation in equalization applications. similarly , the cm approach exploits the low modulus fluctuation exhibited by communications signals using constant modulus constellations to extract them from the array output .although it adapts the array weights efficiently regardless of the array geometry , knowledge of the array manifold or the noise covariance matrix , the cm - based beamformer is quite sensitive to the step size . in addition, the cm cost function may have local minima , and so does nt have closed - form solutions .xu and liu developed a sg algorithm based on constrained constant modulus ( ccm ) technique to sort out the local minimum problem and obtain the global minima .but they still can not find a satisfied solution in terms of the slow convergence . to accelerate convergence , xu and tsatsanis employed cmv with the recursive least squares ( rls ) optimization technique and derived the cmv - rls algorithm .it turns out that this method exhibits improved performance and enjoys fast convergence rate .however , it is known to experience performance degradation if some of the underlying assumptions are not verified due to environmental effects .the signature mismatch phenomenon is one of these problems . 
in this work, we propose a constrained constant modulus recursive least squares (ccm-rls) algorithm for blind adaptive beamforming. the scheme optimizes a cost function based on the constant modulus criterion for the array weight adaptation. we then derive an rls-type algorithm that achieves better performance than previous methods. we carry out a comparative analysis of existing techniques and consider two practical scenarios for assessment. specifically, we compare the proposed method with the existing cmv-sg, cmv-rls and ccm-sg. the remainder of this paper is organized as follows. in the next section, we present a system model for smart antennas. based on this model, in section iii we describe the existing sg algorithms relying on the constrained optimization of the minimum variance and constant modulus cost functions. in section iv, the rls-type algorithms, including the proposed algorithm, are derived. simulation results are provided in section v, and some conclusions are drawn in section vi.

in order to describe the system model, let us make two simplifying assumptions for the transmitter and receiver models. first, the propagating signals are assumed to be produced by point sources; that is, the size of the source is small with respect to the distance between the source and the sensors that measure the signal. second, the sources are assumed to be in the `` far field , '' namely at a large distance from the sensor array, so that the spherically propagating wave can be reasonably approximated by a plane wave. besides, we assume a lossless, nondispersive propagation medium, i.e., a medium that does not further attenuate the propagating signal and in which the propagation speed is uniform, so that the wave travels smoothly. let us consider the adaptive beamforming scheme in fig. [ fig : model4 ] and suppose that narrowband signals impinge on the uniform linear array (ula) of ( ) sensor elements from sources with unknown directions of arrival (doas) , , . the snapshot vector of sensor array outputs can be modeled as , where the received vector is a column vector in $\mathcal{c}^{m\times 1}$ .

ls estimation, like the method of least squares in general, is an ill-posed inverse problem. to make it `` well posed , '' we regularize the correlation matrix, redefining it as $\sum_{l=1}^{i}\alpha^{i-l}\,\boldsymbol{x}(l)\boldsymbol{x}^{h}(l)+\delta\alpha^{i}\boldsymbol{i}$ ( 14 ), where $\delta$ is a positive real number called the regularization parameter and $\boldsymbol{i}$ is the identity matrix. the second term on the right side of ( 14 ) is included to stabilize the solution of the algorithm by smoothing it, and it has the effect of making the correlation matrix nonsingular at all stages of the computation. if we let and follow the recursion for updating the correlation matrix, ( 14 ) can be expressed as . by using the matrix inversion lemma in ( 15 ), we can obtain the inverse of . here, we still define . for convenience of computation, we also introduce a vector as . using these definitions, we may write ( 16 ) as . so far, we have developed a recursive equation that updates the matrix by incrementing its old value. finally, using the fact that equals , we get the proposed rls solution $\boldsymbol{w}(i)=[\boldsymbol{a}^{h}(\theta_{0})\boldsymbol{p}(i)\boldsymbol{a}(\theta_{0})]^{-1}\boldsymbol{p}(i)\boldsymbol{a}(\theta_{0})$ ( 19 ). equations ( 17 )-( 19 ), collectively and in that order, constitute the derived ccm-rls algorithm, as summarized in table [ tab : ccm - rls ].
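as a rough numerical companion to the recursion above, the sketch below implements a generic rls update of the inverse correlation matrix via the matrix inversion lemma, together with a constrained weight vector of the form w(i) = [a^h(theta0) p(i) a(theta0)]^{-1} p(i) a(theta0). it is a simplified illustration of the recursion structure rather than the exact ccm-rls of table [ tab : ccm - rls ] (in particular, the constant modulus shaping of the input is omitted); the forgetting factor, regularization constant, array size, noise level and steering direction are placeholder values.

```python
import numpy as np

def ula_steering(m, theta_deg, spacing=0.5):
    """steering vector of an m-element ula with half-wavelength element spacing."""
    theta = np.deg2rad(theta_deg)
    return np.exp(-2j * np.pi * spacing * np.arange(m) * np.sin(theta))

def rls_constrained_step(P, x, a, alpha):
    """one rls step: update P = R^{-1} with the matrix inversion lemma and
    recompute the constrained weight w = (a^H P a)^{-1} P a."""
    Px = P @ x
    k = Px / (alpha + np.conj(x) @ Px)            # gain vector
    P_new = (P - np.outer(k, np.conj(x) @ P)) / alpha
    w = (P_new @ a) / (np.conj(a) @ P_new @ a)
    return P_new, w

m, alpha, delta = 16, 0.998, 1e-2
a0 = ula_steering(m, 30.0)                        # assumed direction of the soi
P = np.eye(m, dtype=complex) / delta              # initialization P(0) = delta^{-1} I
rng = np.random.default_rng(0)
for i in range(200):                              # snapshots: soi plus noise, for illustration only
    s = np.sign(rng.standard_normal())            # bpsk symbol
    noise = (rng.standard_normal(m) + 1j * rng.standard_normal(m)) / np.sqrt(2)
    x = s * a0 + 0.1 * noise
    P, w = rls_constrained_step(P, x, a0, alpha)
print(np.round(np.abs(np.conj(w) @ a0), 3))       # the constraint keeps |w^H a0| close to 1
```

the replacement of a fixed step size by the recursively updated inverse correlation matrix is what gives the rls-type methods their faster convergence, at the cost of the extra matrix bookkeeping handled above.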
( table [ tab : ccm - rls ] : summary of the proposed ccm-rls algorithm. ) we note that ( 17 ) and ( 18 ) enable us to update the value of the vector itself. an important feature of the algorithm described by these equations is that the inversion of the correlation matrix is replaced at each step by a simple scalar division. also, in the summary presented in table [ tab : ccm - rls ], the calculation of the vector proceeds in two stages: * first, an intermediate quantity, denoted by , is computed. * second, is used to compute . this two-stage computation of is preferred over its direct computation using ( 16 ) from a finite-precision arithmetic point of view. to initialize the ccm-rls method, we need to specify two quantities: * the initial weight vector . the customary practice is to set . * the initial correlation matrix . setting in ( 14 ), we find that . in terms of complexity, the ls requires ( ) arithmetic operations, whereas the rls requires ( ). furthermore, notice that the step size of the sg algorithms is replaced by . this modification has a significant impact on improving the convergence behavior.

the performance of the proposed ccm-rls algorithm is compared with three existing algorithms, namely cmv-sg, ccm-sg and cmv-rls, in terms of the output signal-to-interference-plus-noise ratio (sinr), which is defined as , where is the autocorrelation matrix of the desired signal and is the cross-correlation matrix of the interference and noise in the environment. a ula containing sensor elements with half-wavelength spacing is considered. the noise is spatially and temporally white gaussian noise with power . for each scenario, iterations are used to obtain each simulated point. in all simulations, the doa of the soi is and the desired signal power is . the interference-to-noise ratio (inr) is equal to in all examples. the bpsk scheme is employed to modulate the signals. the value of was set equal to in order to optimize the performance of the rls-type algorithms. fig. [ fig : cmvccmsvefinal ] includes two experiments. fig. [ fig : cmvccmsvefinal ] (a) shows the output sinr of each method versus the number of snapshots, for a total of samples. there are two interferers with doas and . in this environment, the actual spatial signature of the signal is known exactly. the result shows that the rls-type algorithms converge faster and achieve better performance than the sg algorithms. the steering vector mismatch scenario is shown in fig. [ fig : cmvccmsvefinal ] (b). we assume that this steering vector error is caused by a look direction mismatch. the assumed doa of the soi is a random value located around the actual direction, with the mismatch limited to a range of . compared with fig. [ fig : cmvccmsvefinal ] (a), fig. [ fig : cmvccmsvefinal ] (b) indicates that the mismatch problem leads to a worse performance for all the solutions. the cmv-rls method is more sensitive to this environment, whereas the proposed ccm-rls algorithm is more robust to this mismatch. in fig. [ fig : moreusers3final ], the scenario is the same as that in fig. [ fig : cmvccmsvefinal ] (a) for the first samples. two more users, whose doas are and , enter the system in the second samples.
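the output sinr used in all of these comparisons can be evaluated directly once a weight vector and the signal and interference-plus-noise covariance matrices are available. a minimal sketch follows, with the covariances built analytically for a ula rather than estimated from snapshots; the doas, powers and array size are hypothetical, and the reference weights are a simple mvdr-style solution, not one of the algorithms compared in the figures.

```python
import numpy as np

def ula_steering(m, theta_deg, spacing=0.5):
    theta = np.deg2rad(theta_deg)
    return np.exp(-2j * np.pi * spacing * np.arange(m) * np.sin(theta))

def output_sinr(w, R_s, R_in):
    """output sinr = (w^H R_s w) / (w^H R_{i+n} w)."""
    return np.real(np.conj(w) @ R_s @ w) / np.real(np.conj(w) @ R_in @ w)

m = 16
a0 = ula_steering(m, 0.0)                                   # signal of interest
a1, a2 = ula_steering(m, 50.0), ula_steering(m, -60.0)      # two interferers (hypothetical doas)
p_s, p_i, p_n = 1.0, 10.0, 1.0                              # powers (inr = 10 db, illustrative)

R_s  = p_s * np.outer(a0, np.conj(a0))
R_in = p_i * (np.outer(a1, np.conj(a1)) + np.outer(a2, np.conj(a2))) + p_n * np.eye(m)

w = np.linalg.solve(R_in + R_s, a0)                         # mvdr-style reference beamformer
w = w / (np.conj(a0) @ w)
print("output sinr [db]:", round(10 * np.log10(output_sinr(w, R_s, R_in)), 2))
```

in the experiments reported here the covariances are of course unknown and the weights come from the adaptive algorithms, so the sinr curves are obtained by averaging this figure of merit over independent runs.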
as can be seen from the figure, the sinrs of both the sg and the rls-type algorithms decrease at the same time. it is clear that the performance degradation of the proposed ccm-rls is much less significant than that of the other methods. in addition, the rls-type methods quickly track the change and recover to a steady state. at samples, two interferers with doas and enter the system, whereas one interferer with doa leaves it. the simulation shows nearly the same performance as in the second stage. it is evident that the output sinr of our proposed algorithm is superior to that of the existing techniques. this figure illustrates that the ccm-rls remains better after an abrupt change, in a non-stationary environment where the number of users / interferers in the system suddenly changes.

in this paper, a new algorithm enabling blind adaptive beamforming has been presented to enhance the performance and improve the convergence properties of previously proposed adaptive methods. following the ccm criterion, an rls-type optimization algorithm is derived. in place of a step size, we employ the inverse of the correlation matrix, which increases the convergence rate. the matrix inversion lemma was then used to solve this inversion problem with reduced complexity. we considered different scenarios to compare the ccm-rls algorithm with several existing algorithms. comparative simulation experiments were conducted to investigate the output sinr. the performance of our new method was shown to be superior to that of the others, both in terms of convergence rate and of performance under sudden changes in the signal environment.
|
in this paper, we study the performance of blind adaptive beamforming algorithms for smart antennas in realistic environments. a constrained constant modulus (ccm) design criterion is described and used to derive a recursive least squares (rls) type optimization algorithm. furthermore, two kinds of scenarios are considered in the paper for analyzing its performance. simulations are performed to compare the performance of the proposed method with other well-known methods for blind adaptive beamforming. results indicate that the proposed method has a significantly faster convergence rate, better robustness to changing environments and better tracking capability.
|
after the spread of the financial crisis in 2008 , the term systemic risk could be well regarded as the buzzword of these years .although there is no consensus on a formal definition of systemic risk , it usually denotes the risk that a whole system , consisting of many interacting agents , fails .these agents , in an economic context , could be firms , banks , funds , or other institutions . only very recently, financial economics is accepting the idea that the relation between robustness of individual institutions and systemic risk is not necessarily straightforward . the debate on systemic risk , how it originates and how it is affected by the structure of the networks of financial contracts among institutions worldwide , is only at the beginning .from the point of view of economic networks , systemic risk can even be conceived as an undesired externality arising from the strategic interaction of the agents .however , systemic risk is not only a financial or economic issue , it also appears in other social and technical systems .the spread of infectious diseases , the blackout of a power network , or the rupture of a fiber bundle are just some examples .systemic risk in our perspective is a macroscopic property of a system which emerges due to the nonlinear interactions of agents on a microscopic level . as in many other problems in statistical physics ,the question is how such a macroscopic property may emerge from local interactions , given some specific boundary conditions of the system .the main research question is then to predict the fraction of failed nodes in a system , either as a time dependent quantity or in equilibrium . here , we regard as a measure of systemic risk .in this paper we investigate systemic risk from a complex network perspective .thus , agents are represented by nodes and interactions by directed and weighted links of a network .each of the nodes is characterized by two discrete states , which can be interpreted as a susceptible and an infected state or , equivalently , as a healthy and a failed state . in most situations considered here ,the failure ( infection ) of a node exerts some form of stress on the neighbouring nodes which can possibly cause the failure ( infection ) of the neighbours , this way triggering a cascade , which means that node after node fails .this may happen via a redistribution mechanism , in which part of the stress acting on a node is transferred to neighboring nodes , which assumes that the total stress is conserved .there is another mechanism , however , where no such conserved quantity exist , for example in infection processes where the disease can be transferred to an unlimited number of nodes . in both mechanisms , the likelihood that a node fails increases with the number of failures in the proximity of the node .this is the essence of a contagion process .the specific dynamics may vary across applications , nevertheless there are common features which should be pointed out and systematically investigated .our paper contributes to this task by developing a general framework which encompass most of the existing models and allows to classify cascade models in three different categories .a number of works have investigated processes of this type , sometimes referred to as cascades or contagion. these were mostly dealing with interacting units with random mixing or , more recently , with fixed interaction structures corresponding to complex networks . 
on the one hand, there are models in which the failure dynamics is deterministic but the threshold at which such a failure happens is heterogeneous across nodes. for simplicity, we refer to these as _ cascade models _ even though, according to the discussion above, they also involve contagion. to this class belong some early works on electrical breakdown in random networks and more recent ones on the fiber bundle model (fbm), on fractures, cascades in power grids, or cascades in sand piles (the bak-tang-wiesenfeld model, btw). further work refers to congestion dynamics in networks, cascades in financial systems and in social interactions, and overload distribution (in abstract terms). the self-organized criticality properties of some of these models are well understood. the presence of rare but large avalanches is of course relevant to systemic risk. on the other hand, there are models in which the failure of a given node is stochastic but the threshold at which contagion takes place is homogeneous across nodes. for simplicity, we refer to this class as _ contagion models _, even though they can lead to cascades as well. the best known example is epidemic spreading (sis). the properties of these models have been investigated in great detail on various network topologies, e.g. in the presence of correlations or bipartite structure. however, as we will see later, we can also include the voter model (vm) and its variants in this class. it is interesting to note that, while the macroscopic behaviour of the fbm and of the btw model on a scale-free topology is qualitatively similar to that on regular and random graphs, the properties of the sis model are severely affected by the topology. the relation between cascading models and contagion models has not been investigated in depth, although some models interpolating between the two classes have been proposed.

to relate these two model classes of cascades and contagion, in the following we develop a general model of cascades on networks where nodes are characterized by two continuous variables, _ fragility _ and _ threshold_. nodes fail if their fragility exceeds their individual heterogeneous threshold. the key variable is the net fragility, i.e. the difference between fragility and threshold. this variable is related to the notion of distance to default used in financial economics. by specifying the fragility of a node in terms of other nodes' fragility and/or other nodes' failure states, we are able to recover various existing cascade models. in particular, we identify three classes of cascade models, referred to as ` constant load ', ` load redistribution ' and ` overload redistribution '. the three classes differ, given that a node fails, in how the increase in fragility (called here the ` load ') of the connected nodes is specified. we discuss the differences and similarities among these classes also with respect to models from financial economics and sociology. for all three classes we derive mean-field recursive equations for the asymptotic fraction of failed nodes, . clearly, this variable depends on the initial distributions of both fragility and threshold across nodes. for instance, if no node is fragile enough to fail in the beginning, then no cascade is triggered. we thus compare how the different models behave depending on the mean and variance of the initial distribution of across nodes. as a further contribution, we extend the general framework to encompass models of stochastic contagion.
in such a framework, the failure of a given node is a stochastic event depending both on the state of neighbourhood and on the individual threshold .we derive a general equation for the expected change of the fraction of failed nodes , from which one can recover the usual mean - field equations of the sis model , but interestingly also of the vm , as special cases .our work wishes to contribute to a better understanding of the relations between cascading models , contagion models and herding models on networks , from the point of view of systemic risk .in this section we develop a general framework to describe cascading processes on a network . this framework will be extended in sec . [sec : simple - cont - models ] to encompass also stochastic contagion models . on the microscopic side, we characterize each node of the network at time by a dynamic variable characterizing the failure state .the state is if the node has failed and otherwise .other metaphors apply equally well to our model , e.g. ` infected / healthy ' , ` immune / susceptible ' , or ` broken / in function ' . on the macroscopic side , the system state at time is encoded in the state vector , with being the number of nodes .the macrodynamic variable of interest for systemic risk is the total _ fraction of failed nodes _ in the system if values of close to one are reached the system is prone to systemic risk . when trajectories always stay close to zero the system is free of systemic risk .for simplicity , in the following , we will consider models which converge in to stationary states .so , the _ final fraction of failed nodes _ is our proxy for the systemic risk of the system . in order to describe various existing models in a single framework, we assume that the failure state of each node is , in turn , determined by a continuous variable , representing the _ fragility _ of the node .a node remains healthy as long as , where the constant parameter represents the _ threshold _ above which the fragility determines the failure .conversely , the node fails if . in other words , where is the heaviside function ( here meant to be if and if )the variable is called _net fragility_. as it is defined as the difference between fragility and failing threshold its absolute value has the same meaning of _ distance to default _ in finance , for .notice that in the equation above time runs in discrete steps , consistently with failure being a discrete event . this general framework can be applied to different models by specifying the functional form of fragility . as we will see , depending on the case under consideration, can be a function of the failure state vector and some static parameters , such as the network structure and the initial distribution of stress on the nodes .it can also be a function of the vector of fragility at previous times .the latter constitutes a coupled system with the vectors and as state variables . in any case , fragility depends on the current failure state and determines the new failures at the next time step .thus , cascades are triggered by the fact that failures induce other failures .specific models will be described in sec .[ sec : specific - cascade - models ] the interaction among nodes is specified by the ( possibly weighted ) adjacency matrix of the network , with .for specific models some restrictions to the adjacency matrix may apply , e.g. one may consider undirected links , no self - links or some condition on the weights . 
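the framework just described can be condensed into a short simulation loop: a plug-in rule maps the current failure state (and the static network) to a new fragility vector, nodes whose net fragility becomes non-negative fail, and the iteration stops when no new failure occurs. the sketch below is a minimal illustration in which failures are treated as absorbing (as in the cascade examples discussed next), the adjacency convention is a[i, j] = 1 for a link from node i to node j, and the example rule and parameter values are purely illustrative.

```python
import numpy as np

def run_cascade(theta, phi_rule, A, max_steps=1000):
    """iterate s_i(t+1) = 1 if phi_i(t) - theta_i >= 0, starting from all-healthy nodes.
    phi_rule(s, A) returns the fragility vector for the current state s and
    encodes the model class (constant load, load redistribution, ...)."""
    n = len(theta)
    s = np.zeros(n, dtype=int)                     # 0 = healthy, 1 = failed
    for _ in range(max_steps):
        phi = phi_rule(s, A)
        s_new = np.where(phi - theta >= 0, 1, s)   # failures are kept (absorbing)
        if np.array_equal(s_new, s):
            break
        s = s_new
    return s, s.mean()                             # final state and fraction of failed nodes

def phi_constant_load_in(s, A):
    """inward 'constant load' rule: fragility = fraction of failed in-neighbours."""
    k_in = A.sum(axis=0)                           # in-degree of each node (column sums)
    return (A.T @ s) / np.where(k_in > 0, k_in, 1)

rng = np.random.default_rng(1)
A = (rng.random((50, 50)) < 0.1).astype(int)
np.fill_diagonal(A, 0)
theta = rng.normal(0.4, 0.2, size=50)
theta[0] = -0.1                                    # one negative threshold seeds the cascade
print(run_cascade(theta, phi_constant_load_in, A)[1])
```

the fragility rule is the only model-specific ingredient; everything else, including the final fraction of failed nodes used as the systemic risk measure, is common to the three classes introduced below.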
in this frameworkthe adjacency matrix of the network influences the dynamics only as a static parameter , i.e. , we do not consider feedbacks from the state of a node on the link structure as in .if we assume a large number of nodes , it makes sense to look at the distribution of the net fragility , in terms of its density function . then from eqn .[ eq : xt ] and [ eq : unify ] it follows that the fraction of failed nodes at the next time step is given by in the cascading process new failures modify over time the values of fragility of other nodes .we can also formulate the dynamics in the space of density functions : if we know both the density function of the fragility at time and the density function of the failing threshold , we can write with ` ' denoting the convolution .the expression above assumes that fragility and threshold are stochastically independent across nodes .depending on the specific model , the functional operator , in eqn .( [ eq : f ] ) , may also include dependencies on other static parameters .the general idea is to find a density that is an attractive fix point of , so that the asymptotic fraction of failed nodes is obtained via eqn . .in many cascading processes on networks , the failure of a node causes a redistribution of load , stress or damage to the neighbouring nodes . in our framework , such redistribution of load can be seen as if a failure causes an increase of fragility in the neighbours . in the following ,we distinguish three different classes of models , denoted as ( i ) ` constant load ' , ( ii ) ` load redistribution ' , and ( iii ) ` overload redistribution ' .we keep the term ` load ' because it is more intuitive .we will show how these model classes are described in our unifying framework in terms of fragility and threshold , and how some models known in the literature fit into these classes .the differences in the cascading process across the models will be illustrated by taking the small undirected network of figure [ fig : all : init ] as an example .is represented by the shape of the node .a healthy node has , a failed one .a failing node is a node with but , so it will switch to the failed state in the next time step .nodes are labeled with capital letters . the level of fragility ( which changes over time ) is indicated inside each node .the failing threshold ( constant over time ) is indicated as superscript to the node .the color code specified in the colorbar refers to the value of net fragility . ] for each model , we consider the same initial configuration with respect to the net fragility in which all nodes are healthy ( i.e. with negative ) . during the first time step , the value of node * c * is perturbed so that it fails .the subsequent time steps reveal how the propagation of failure occurs in the different models . model class ( i ) ( ` constant load ' ) assumes that the failure of a node causes a predetermined increase of fragility to its neighbours . 
the term ` constant ' does not imply that the increase is uniform for all nodes ( on the contrary , some nodes may receive more load than others ) .it means that the increase in the fragility of node , when its neighbor fails , is the same regardless of the fragility of and of the situation in the rest of the system .we can now distinguish two cases .in the first case , the increase in fragility of a node is proportional to the fraction of neighbors that fail .this is a reasonable assumption if the ties in the network represent for instance financial dependencies or social influence . in the second case ,the increase in fragility of a node , when neighbor fails , is inversely proportional to the number of neighbors of node . in other words , the load of shared equally among the neighbours and thus the more are its neighbours , the smaller is the additional load that each one , including , has to carry .we will refer to the first case as the _ inward variant _ of the model because the increase in fragility caused by the failure of one neighbour depends only on the in - degree of the node receiving the load .in contrast , we will refer to the second case as the _ outward variant _ , because the increase in fragility depends only on the out - degree of the failing node .we now start by casting in our framework the well known threshold model of collective behavior by granovetter .the model was developed in the context of social unrest , with people going on riot when the fraction of the population which is already on riot exceeds a given individual activation threshold .this model has been more recently reproposed as generic model of cascades on networks .we assume an initial vector of failing thresholds , and initial failing states for all . we define fragility as simply the fraction of failed neighbors , with being the set of all in - neighbors of in the network and being the cardinality of the set ( i.e. the in - degree of ) .this means that a node fails when the fraction of its failed neighbors exceeds its failing threshold .consequently , the initial fragility across nodes is zero for all and the dynamical equation ( [ eq : unify ] ) implies .thus , nodes with negative threshold correspond to initial failures at time step .interestingly , we can map our inward cascading model with constant load also to an economic model of bankruptcy cascades introduced in . inthat model firms are connected in a network of credit and supply relations .each firm is characterised by a financial robustness which is a real number , where the condition determines the default of the firm .given a vector of initial values of robustness across firms and a vector of failure states , the robustness of firm at the next time step is computed as with being the set of in - neighbors of , the in - degree of , and a parameter measuring the intensity of the damage caused by the failure .new vectors of failing state vectors and robustness are then computed iteratively until no new failures occur .mathematically , this process is equivalent to our inward variant model specified by eqn . .the equivalence is obtained by defining fragility as in eqn . 
andby setting we note that the model specified in also includes a dynamics on the robustness inbetween two cascades of failures , which is not part of our framework .let us now turn to the outward variant of the constant load model .it can be described within our framework by defining fragility as with being the out - degree of node .if the network is undirected and regular , i.e. , all nodes have the same degree , the inward and the outward model variants , are equivalent and lead to identical dynamics .however , if the degree is heterogeneous , then the number and the identity of the nodes involved in the cascade differ , as shown in the example of figure [ fig : fdfail3 ] . .initially , node * c * is forced to failure by setting its failure threshold to zero .subsequent time steps in the evolution of the cascade are represented downward in the figure . ]notice that the influence of high and low out - degree nodes interchange in the two variants , as well as the vulnerability of high and low in - degree nodes . in the inward variant ,high in - degree nodes are more protected from contagion as they only fail when many neighbours have failed . in turn ,when a high out - degree node fails , it causes a big damage if it has many neighbors with low in - degree .in contrast , in the outward variant , a failing low out - degree node generates a larger impact on its neighbours since the load is distributed among fewer nodes .thus , a high in - degree node is more exposed to contagion if it is connected to low out - degree nodes . on the other hand ,a failing high out - degree node does not cause much damage to its neighbors because the damage gets divided between many nodes . in the examples reported in the figures ,the network is undirected and in - degree and out - degree coincide .still the roles of high - degree and low - degree nodes interchange as discussed above . as another important difference between the two variants ,the maximal fragility is bounded by the value one in the inward variant , while it is bounded by the number of nodes in the outward variant , which is realized in a star network .further , both variants strongly differ regarding the impact of the position of the initial failure .figure [ fig : fdfail9 ] ( in appendix [ sec : further - examples ] ) shows an example , where node * i * initially fails ( instead of node * c * in figure [ fig : fdfail3 ] ) . the cascade triggered by that event is larger in the outward variant than in the inward variant , in contrast to what seen in figure [ fig : fdfail3 ] .eventually , figure [ fig : fdfail5 ] illustrates the dynamics of a cascade triggered by the failure of node * e * , which has the highest degree .this results in a full cascade in the inward variant , while there is no cascade at all in the outward variant .this observation illustrates the different influence of nodes with high degree in the inward and the outward variant , as explained above . model class ( ii ) ` load redistribution ' is our second class of cascading models . 
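before detailing this second class, the two constant-load variants just discussed can be written as two alternative fragility rules for the run_cascade loop sketched after the framework definition above. the outward rule below divides the load shed by a failing node equally among its out-neighbours; taking the load per failure equal to one is an assumption, since the model only fixes it up to a constant.

```python
import numpy as np

def phi_constant_load_out(s, A):
    """outward 'constant load' rule: a failing node spreads a unit load equally over its
    out-neighbours, so phi_i = sum_j A[j, i] * s_j / k_out_j (A[j, i] = 1 for a link j -> i)."""
    k_out = A.sum(axis=1)
    return A.T @ (s / np.where(k_out > 0, k_out, 1))

# on the same graph and thresholds, the two variants generally give different cascade sizes:
# x_in  = run_cascade(theta, phi_constant_load_in,  A)[1]
# x_out = run_cascade(theta, phi_constant_load_out, A)[1]
```

comparing the two rules on the same graph illustrates the role reversal described above: in the inward variant high in-degree nodes are harder to topple but damaging once they fail, while in the outward variant a failing hub spreads its load thinly and low-degree spreaders become the dangerous ones.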
in this classall nodes are initially subject to a certain amount of load .actually , in this model class fragility coincides with load .when a node fails , all of its load is redistributed among the first neighbours .this mechanism differs from class ( i ) because in class ( ii ) the increase in fragility among the neighbours of depends on the actual value of s fragility and not just on the fact that it exceeds the threshold .the damage caused by one failure can thus not be specified a priori .models belonging to this class include the fiber bundle model ( fbm ) and models of cascades in power grids . in some cases it is possible to define the total load of the system , which, additionally , but not necessarily , may be a conserved quantity .for instance , in the fbm a constant force is applied to a bundle of fibers each of which is characterized by a breaking threshold . when a fiber breaks , the load it carries is redistributed equally to all the remaining fibers , so the total load is conserved by definition . in the context of networks ,a node represents a fiber and if the node fails the load is transferred locally to the first neighbours in the network .an analogy to power grids is also possible , with nodes representing power plants , links representing transmission lines , fragility representing demand and threshold representing capacity , respectively .there are , several ways to specify the mechanism of local load transfer .a first variant is the fbm with local load sharing ( lls ) and load conservation , investigated in .we refer to this variant as llsc . despite the fact that load sharing is local , total loadis strictly conserved at any time , due to the condition that links to failed nodes remain able to transfer load ( in other words , links do not fail ) .a second variant implies load shedding instead , and we refer to it as llss . in this variant ,all links to failed nodes are removed and the load of a failing node is transferred only to the first neighbours that are not about to fail .these are the nodes that are healthy and below the threshold and thus will be still alive at the next time ( although they may reach the threshold meanwhile ) .however , if there are no surviving neighbours , the load is eventually lost ( or shed ) . in the first variant we can cast the fbm - lls and extend it to the case of heterogeneous load and directed networks . from now on ,we interpret ` load ' as ` fragility ' , and ` capacity ' as failing threshold. let be the vector of initial fragility ( corresponding to the initial load carried by each node ) , and the vector of failing thresholds ( or maximal capacity ) .( for comparison : in the threshold for node is denoted by with values taken from a uniform distribution between zero and one .the load of each node is the same and called , with being the total load . 
)we define as the set of healthy nodes which are reachable from node following directed paths consisting only of failed nodes ( except ) .let be the cardinality of such set .moreover , we define to be the set of nodes from which node can be reached along directed paths consisting of failed nodes ( except ) .both sets of nodes defined above have to be computed dynamically based on the current vector of failing states and the network .finally , given the initial fragility vector , the failure state vector , and the network , we define the fragility of node at time in the llsc variant as we add that for an undirected network and uniform initial load , such a definition becomes equivalent to the _ load concentration _ factor of node , as defined in .the assumption that links do not break and remain able to transfer load is not always satisfactory .some models have thus investigated the llss variant of the model in which the load is transferred only to the surviving first neighbours . in this casethe load transfer is truly local and there is no transmission along a chain of failed nodes .this implies that during a cascade of failures , at some point in time the network might split into disconnected components which can not transfer load to each other .in particular , if one of these subnetworks fails entirely , all the load carried by this subnetwork is shed . as a consequence of the llss assumption ,fragility now is not just a function of the current state vector and some static parameters ( such as the network matrix and the initial fragility ) .in contrast , it has to be defined through a dynamic process as a function of the fragility vector at previous time , according to the following equation : with being the set of in - neighbors of which fail at time ( but have not already failed ! ) , and the set of out - neighbors of which remain healthy at time thus , eqn .is well defined unless is empty . in this case, there is no healthy neighbour of to which the load can be transferred , thus the load has to be shed .the remaining healthy nodes remain unaffected .figure [ fig : fbfail3 ] illustrates , as an example , the different outcomes of the dynamics in the llsc and llss variants .the initial load is set to one for all nodes , thus the total load on the system is nine .the values of the threshold are set in order to have the same values of at each node as in the example of figure [ fig : all : init ] . as in figure[ fig : fdfail3 ] , we set the failing threshold of node * c * to one in order to trigger an initial failure .on one hand , we could expect that cascades triggered by the failure of one node are systematically wider in the llsc variant than in the llss variant because in the first one the total load is conserved . on the other hand , in the llsc , the fragility is redistributed also to indirect neighbours thus leading to a smaller increase of fragility per node and therefore possibly to smaller cascades .in fact there seems to be no apparent systematic result , the outcome being dependent on the network structure and the position of the initial failure . 
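the llss mechanism can also be written as an event-driven update: at each step every newly failing node splits its current load equally among those neighbours that are neither failed nor about to fail, and the load is shed if no such neighbour exists. the following is a minimal sketch for an undirected network with uniform initial load; the ring topology and the threshold values are illustrative choices, not the configuration of figure [ fig : fbfail3 ].

```python
import numpy as np

def llss_cascade(A, phi0, theta, max_steps=1000):
    """local load sharing with shedding (llss) on an undirected graph.
    A[i, j] = 1 if i and j are neighbours; phi0 = initial load; theta = failing threshold."""
    n = len(theta)
    failed = np.zeros(n, dtype=bool)
    phi = phi0.astype(float).copy()
    for _ in range(max_steps):
        failing = (~failed) & (phi >= theta)          # nodes failing at this step
        if not failing.any():
            break
        alive_next = (~failed) & (~failing)           # neighbours that will still be alive
        for j in np.where(failing)[0]:
            nbrs = np.where((A[j] > 0) & alive_next)[0]
            if len(nbrs) > 0:
                phi[nbrs] += phi[j] / len(nbrs)       # transfer the whole load of j
            # else: the load of j is shed (lost)
        failed |= failing
        phi[failed] = 0.0                             # failed nodes carry no load in class (ii)
    return failed.mean()

n = 9                                                 # a ring of nine nodes with unit load
A = np.zeros((n, n), dtype=int)
for i in range(n):
    A[i, (i + 1) % n] = A[(i + 1) % n, i] = 1
phi0, theta = np.ones(n), np.full(n, 1.4)
theta[0] = 1.0                                        # node 0 is pushed over its threshold
print("fraction of failed nodes:", llss_cascade(A, phi0, theta))  # the front sweeps the whole ring
```

in this toy configuration the transferred load grows along the failure front, so the cascade propagates around the entire ring, a simple illustration of how local load transfer can amplify a single initial failure.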
in the example shown in figure [ fig : fbfail3 ]the cascade stops sooner in the llsc variant than in the llss one , due to the rebalancing of load across the network .in other cases , however , if for instance node * e * initially fails , we find that the load shedding has a stronger impact and the cascade is smaller in the llss case .we conclude our classification with class ( iii ) ` overload redistribution ' .when a node fails in these models , only the difference between the load and the capacity is redistributed among the first neighbours . actually , the overload of a node is its net fragility .this class is more realistic in applications , where a failed node can still hold its maximum load and only has to redistribute its overload .the eisenberg - noe model is an important example of an economic model in which firms are connected via a network of liabilities .when the total liabilities of a firm exceed its expected total cash flow ( consisting of the operating cash flow from external sources and the liabilities of the other firms towards ) , the firm goes bankrupt . when a new bankruptcy is recognized the expected payments from others decline , but they do not vanish entirely . thus the loss spreading to the creditors is mitigated .with respect to our framework , we can identify the total liability minus the currently expected payments ( from the liabilities of others ) with fragility .similarly , operating cash flow corresponds to the failing threshold .the relation between the eisenberg - noe model and the overload redistribution class is discussed more in detail in appendix [ sec : eisenbergnoe ] .therefore , we adapt the two variants of load redistribution defined as llsc and llss in section [ sec : models - with - redistr - of - load ] to the case of overload redistribution by subtracting the threshold value in the nominator of eqns .[ eq : phi : fiberllsc ] and [ eq : phi : fiberllss ] .we have as definition of fragility in the llsc version when links remain , and as dynamical equation of fragility in the llss version when links break . using our small example of figure [ fig : all : init ] ,the cascading dynamics for the model class ( iii ) is presented in figure [ fig : fbofail3 ] . in general , as we will see in section [ sec : macr - reform ] this class of models leads to much smaller cascades , compared to class ( ii ) . in this example , we have set the initial fragility of node * c * high enough so that a large cascade is triggered . a very high initial overload is needed to trigger a cascade of failures because this overload is the only amount which is transferred through the whole system . on a failure nothing newis added to the total amount , because the node stays with its maximum capacity .notice that the models of overload redistribution are invariant to joint shifts in the initial fragility and in the failing threshold .in other words , a system with and leads to the same trajectory of failure state and fragility .thus , it is enough to study the model with without loss of generality .in the previous section we have seen that the different classes of cascading models lead to a diverse behaviour , at least in small scale examples , even if initial conditions for net fragility are the same . in this section , by studying simple mean - field approximations of the processes we find that there are significant differences also at the macroscopic level . 
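before turning to the mean-field treatment, note that the overload-redistribution (class iii) version of the llss update differs from the sketch above in a single step: only the net fragility phi - theta of a failing node is passed on, while the node itself keeps carrying a load equal to its capacity. a minimal variant of the previous function, with the same illustrative conventions, is given below.

```python
import numpy as np

def llss_overload_cascade(A, phi0, theta, max_steps=1000):
    """class (iii): on failure, only the overload phi_j - theta_j is redistributed;
    the failed node keeps carrying its maximum load theta_j."""
    n = len(theta)
    failed = np.zeros(n, dtype=bool)
    phi = phi0.astype(float).copy()
    for _ in range(max_steps):
        failing = (~failed) & (phi >= theta)
        if not failing.any():
            break
        alive_next = (~failed) & (~failing)
        for j in np.where(failing)[0]:
            nbrs = np.where((A[j] > 0) & alive_next)[0]
            if len(nbrs) > 0:
                phi[nbrs] += (phi[j] - theta[j]) / len(nbrs)   # pass on the overload only
        failed |= failing
        phi[failed] = theta[failed]                            # failed nodes stay at capacity
    return failed.mean()
```

since only the overload circulates, a much larger initial shock is needed to sustain a cascade, in line with the comparison between figures [ fig : fbfail3 ] and [ fig : fbofail3 ] and with the statement above that this class leads to much smaller cascades than class (ii).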
in order to compare the different model classes under the same conditions , we have set the probability density functions of initial values of the net fragility to be equal for all models . for the cases ( i ) constant load , and ( iii ) overload redistribution we set .notice , that we can set in case ( iii ) without loss of generality . for case ( ii )load redistribution , instead , it is necessary to have ( otherwise there is no load to redistribute ) and we have .we further assume that the initial fragility is uniform across nodes in model class ( ii ) .even a basic mean - field approach allows for an interesting comparison of the three model classes .to do so , we replace the distribution of fragility at time , with the delta function centered on the mean fragility .this is equivalent to assuming a fully connected network since in such a case eqns .( [ eq : phi : granovetter]-[eq : phi : fiberllss ] ) yield the same fragility for every node .if the two distributions are independent , from we get convolution with a delta corresponds to a shift in the variable , so that , and from eqn .we obtain where is the cumulative distribution function of .this is equivalent to a change of variable in the probability distribution and in the integral .however , the procedure with convolution can be carried out also if is not assumed to be a delta function . at this point, we have to express the mean fragility in terms of the current fraction of failed nodes , .for case ( i ) constant load , in a fully connected network , eqns .( [ eq : phi : granovetter ] ) and ( [ eq : phi : damagetransfer2 ] ) yield both the following mean fragility : for case ( ii ) load redistribution , assuming that the surviving nodes equally share the initial load , we can write for the mean fragility : this is obtained from eqn .[ eq : phi : fiberllsc ] at microscopic level by taking the mean over all on both sides now , assuming for all nodes and the network as fully connected , we have that : coincides with ; the sum over the set ( which now coincides with the set of failed nodes ) equals ; and equals , because we count all healthy nodes .thus , we obtain for case ( iii ) overload redistribution , we can proceed similarly starting from eqn .[ eq : phi : overloadllsc ] . setting , without loss of generality , and taking the mean over on both sides yields again, we can replace the sum over by , and by . however , now the average of the threshold values across all failed nodes ( as indicated by the sum ) is not simply .it is instead the mean of that part of the distribution where failed nodes are located .these are the nodes with and their probability mass has to sum up to . for a given distribution and a given fraction of failed nodes ,the mean threshold of failed nodes is defined as denotes the -quantile of the distribution , i.e. a fraction of the probability mass lies below : thus , is the first moment of below the value , normalized by the probability mass of the distribution in the same interval . replacing this into eqn .( [ eq : deriveoverloadmacro ] ) yields as mean fragility for case ( iii ) overload redistribution : notice , that the mean of the threshold of the failed nodes is negative , thus the minus in front of ensures that fragility is positive . 
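combining eqn. ( [ eq : xvsmeanphi ] ) with the case (i) mean fragility derived above gives a one-dimensional recursion that is easy to iterate to its fixed point. the sketch below assumes a gaussian distribution for the initial net fragility with mean mu and standard deviation sigma, which is only an illustrative choice; as stated in the introduction, the models are compared as a function of the mean and variance of this initial distribution.

```python
import math

def normal_cdf(x, mu, sigma):
    return 0.5 * (1.0 + math.erf((x - mu) / (sigma * math.sqrt(2.0))))

def xfinal_constant_load(mu, sigma, tol=1e-12, max_iter=100000):
    """fixed point of x(t+1) = P(z0 + x(t) >= 0) with z0 ~ N(mu, sigma^2):
    the fully connected mean-field recursion for the 'constant load' class (mf1)."""
    x = 0.0
    for _ in range(max_iter):
        x_new = 1.0 - normal_cdf(-x, mu, sigma)
        if abs(x_new - x) < tol:
            break
        x = x_new
    return x_new

for sigma in (0.1, 0.3, 0.5):
    row = [round(xfinal_constant_load(mu, sigma), 3) for mu in (-0.6, -0.4, -0.2, 0.0)]
    print("sigma =", sigma, row)
```

for small sigma the fixed point jumps abruptly from values near zero to values near one as mu is increased, while for larger sigma the transition is smooth, consistent with the discontinuity line discussed for the mean-field results below.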
by replacing the expressions of in terms of in eqn .( [ eq : xvsmeanphi ] ) we obtain simple recursive equations in for the different cases : for case ( i ) constant damage for case ( ii ) load redistribution and for case ( iii ) overloadredistribution as the functions on the right hand sides of eqns .( [ eq : recur : fd])-([eq : recur : or ] ) are monotonic non - decreasing and bounded within ] , where } ] and zero elsewhere .the integral of this function over the whole real axis gives the fraction of healthy nodes , while the fraction of failed nodes is given by notice that the total mass of the function is in general smaller than one and decreases over time ( therefore , strictly speaking is not a pdf ) . because only counts the healthy nodes , the _ fraction of currently failing _( and not already failed ! ) nodes , , is defined as we can then write the recursive equation } p^h_{z(t ) } ) \nonumber \\ & = \sum_{i=0}^k b(j , k , x_f(t ) ) { \mathbb{\bf 1}}_{[-\infty,\frac{j}{k } ] } p^h_{z(t)+\frac{j}{k}}.\label{eq : fdrecusionk2}\end{aligned}\ ] ] summarizing , from eqn .( [ eq : fdrecusionk2 ] ) we can solve for the limit distribution or compute it numerically ( after binning the ) .this last method is denoted as mf3 .methods mf2 and mf3 can be understood as conceptually different by focussing on the net fragility of a single node coming for the distribution . for mf2we compute the probability of a node to have a certain net fragiity by its possibilities to have failed neigbors , thus the maximum increase in fragility is over all time steps .this fits to the inward agent - based dynamics , because we focus on the receiving node which has in - neighbors . in mf3 instead , after each time step the whole distribution of the net fragility is reshaped .thus , there is a nonzero probabiity that one node gets more than increases in net fragility in successive time steps .we compute how the fraction of currently failing nodes reshapes the distribution of net fragility .thus , we focus on the spreading node here and there is a nonzero probability that one node can receive more than increases of in two successive time steps , as it is also possible in a network where in - degrees vary slightly. thus , mf3 fits to the outwards agent - based dynamics .figures [ fig : failfracfd12 ] , [ fig : difffailfracfd12 ] , and [ fig : difffailfracfd12_2 ] plot the limit fraction of failed nodes in the plane obtained from the recursive equations ( mf2 ) and ( mf3 ) , as well as a comparison with the case of fully connected network ( mf1 ) and a comparison between each other .notice that , similar to mf1 , we still observe in both mf2 and mf3 a discontinuity line which vanishes as and increase .the shape of the line varies in the three analyses . in the third approach , mf3 ,the values of are systematically smaller than in mf2. moreover , in mf1 the region of high systemic risk is less extended than in the other approaches , although for intermediate values of , values of in mf1 are larger than in mf2 , mf3 ( blue regions in fig .[ fig : difffailfracfd12_2 ] ) .this is due to the fact that when many links are present , nodes are spreading the fragility more evenly and so less failures take place , given the same initial fragility .after the critical point the avalanche is larger .on the other hand , approach mf2 always yields larger systemic risk than mf3 which takes into account the whole distribution ( see fig . 
[fig : difffailfracfd12_2 ] ) . thus , the inwards version of the ` constant load ' model is more prone to systemic risk than the outwards version ; this is especially relevant in the region of very low , where mf2 shows full cascades up to , while mf3 is already free from full cascades . a full cascade in mf3 is triggered only for slightly higher .

[ figure fig : failfracfd12 : on a regular graph with degree . plots are constructed in analogy to figure [ fig : failfrac ] . left : mean field solution from eqn . right : mean field solution from eqn . ]

[ figure fig : difffailfracfd12 : on a regular graph with degree ( shown in figure [ fig : failfracfd12 ] ) and on a fully connected network ( shown in figure [ fig : failfrac ] ) , based on different mean field approaches . left : plot of the difference . right : . color code as in fig . [ fig : comparefailfrac ] . ]

[ figure fig : difffailfracfd12_2 : difference between the final fraction of failed nodes obtained with approaches mf2 and mf3 shown in figure [ fig : failfracfd12 ] . color code as in fig . [ fig : comparefailfrac ] . ]

in section [ sec : form - model - cont ] we have introduced a general model of cascades based on a deterministic dynamics of the state of a node , eqn . ( [ eq : unify ] ) , with a sharp transition from healthy to failed state at exactly . in this section we propose a generalization of such a process to a stochastic setting . interestingly , it will be possible to derive the voter model as well as the stochastic contagion model sis as particular cases . this exercise will shed some new light on the connections between cascade models and contagion models . we assume that the failure of a node is a stochastic event occurring with some probability dependent on the net fragility , , but possibly also conditional on the current state , . we have in mind a situation in which the probability to fail increases monotonically as becomes positive . conversely , nodes can switch from the failed state back to the healthy state , and this is more likely if becomes negative . notice that we introduce an asymmetry , as in general . compared to equation ( [ eq : unify ] ) , the dynamics is now defined as here , denotes the probability of finding node in state 1 at time , conditional on it having been in state 0 at time , etc . in the following , we abbreviate the relevant conditional probabilities as , and denote them as transition probabilities ( per unit of time ) .
under markov assumptions the chapman - kolmogorov equation holds for the probability to find node in state 1 at time : with and eqn .( [ eq : hold ] ) , this results in the dynamic equation \end{aligned}\ ] ] stationarity , i.e. , implies the so - called detailed balance condition : a very common assumption for is the logit function : the parameters , measure the impact of stochastic influences on the transition into the failed state and back into the healthy state , accordingly . by varying , the deterministic case ( ) as well as the random case ( ) can be covered .figure [ fig:2 ] shows the dependency of the probability with respect to for the symmetric case , , . ,( [ eq : logit ] ) , dependent on for several values of , to indicate the crossover from a random to a deterministic transition : ( blue ) , ( green ) , ( red ) , ( black ) , width=264 ] the transition probabilities can be chosen in accordance with eqs .( [ eq : detailed ] ) , ( [ eq : logit ] ) as follows : the parameters , set the range of the functions and should be equal only if the detailed balance condition holds .the different thresholds , shift the position of the transition from one state to the other .the transition probabilities thus depend on two sets of parameters , , , characterizing the transition into the failed state , and , , for the transition into the healthy state .these sets differ in principle , but they play the same role in the transitions . in analogy to section [ sec : macr - reform ] , we want to derive a dynamics at the macro level for the expected fraction of failed nodes at time . to this end , we start from the micro dynamics given by eqn .( [ eq : chap2 ] ) . as a first mean - field assumption ,we neglect correlations between fragility and thresholds across nodes in the network . in other words, we assume that the values of and are drawn from the same probability distribution , regardless of the identity of the node .the expected change in the probability for node is obtained by integration : =\int_\mathbb{r } p_z(z(t ) ) p(1|0;z ) p_{i}(0,z , t ) dz \nonumber\\ & - & \int_\mathbb{r } p_z(z'(t ) ) p(0|1;z'_i ) \,p_{i}(1,z',t ) ] dz'.\end{aligned}\ ] ] to avoid any confusion with the notation , we recall that is the density function of the net fragility , while is the probability that a node with net fragility switches from state to state , and finally is the probability that node with net fragility is in state at time .we now average both sides of the equation above across nodes .in particular , the average of the r.h.s . yields noticing that , for large we get equation ( [ eq : master4 ] ) describes the dynamics of the expected fraction of failures in a system with both heterogeneity of threshold , , or fragility , , and with stochasticity in the cascading mechanism .we can now obtain mean - field equations for various existing models , by specifying ( 1 ) the transition probabilities and , and ( 2 ) the distribution for the thresholds . in order to recover the deterministic models of section [ sec : specific - cascade - models ] ,we first notice that in those cases the transition from state to is not really conditional to the state at previous time .actually , in these models a node changes to a certain state with a probability which is independent of its current state .we emphasize , however , that our framework is general enough to cover cases in which failure is really conditional on . 
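to make the stochastic micro - dynamics more tangible , the following python sketch simulates one possible agent - based realization of it . the fragility is taken here as the fraction of failed neighbours ( the case ( i ) ` constant load ' choice ) , and the logit transition probabilities are written with assumed parameter names ( beta , gamma for the failing transition and primed analogues for the recovery ) ; these choices stand in for symbols not reproduced above and are not meant as the authors' exact specification .

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_stochastic_cascade(adj, theta, beta=10.0, gamma=0.5,
                                beta_prime=10.0, gamma_prime=0.5,
                                x0=0.1, steps=200):
    """Agent-based sketch: fragility = fraction of failed neighbours,
    logit transition probabilities for failing and for recovering."""
    n = adj.shape[0]
    degree = adj.sum(axis=1)
    s = (rng.random(n) < x0).astype(int)                   # initial failed nodes
    history = [s.mean()]
    for _ in range(steps):
        frac_failed_nb = adj @ s / np.maximum(degree, 1)   # local fragility
        z = frac_failed_nb - theta                         # net fragility
        p_fail = gamma / (1.0 + np.exp(-beta * z))         # transition 0 -> 1
        p_heal = gamma_prime / (1.0 + np.exp(beta_prime * z))  # transition 1 -> 0
        u = rng.random(n)
        s = np.where(s == 0,
                     (u < p_fail).astype(int),             # healthy node may fail
                     1 - (u < p_heal).astype(int))         # failed node may recover
        history.append(s.mean())
    return np.array(history)
```

increasing the steepness parameters pushes the simulation towards the deterministic limit discussed above , while small values make the transitions essentially random .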
for the models discussed in section [ sec : specific - cascade - models ] , we can assume and thus , and further , .we have then we now set , which implies that the transition probability in equation ( [ eq : transition - prob ] ) tends to the heaviside function : since , for any real function holds we obtain because of , this finally yields eqn .( [ eq : master8 ] ) coincides with eqn .( [ eq : xmacro ] ) .in order to recover models of herding and stochastic contagion , we instead keep the stochastic nature of the failure but we assume that the failure threshold is the same across nodes , , . in a mean field approximation , we replace the individual fragility with the average one , so that also is constant across the nodes .then the probability density of in equation ( [ eq : master4 ] ) is equivalent to a delta function and the integral over drops .the macroscopic mean - field equation then reads eqs .( [ eq : master5 ] ) will be the starting point for discussing specific contagion models in sections [ sec : voter ] , [ sec : sis ] .the linear voter model ( lvm ) is a very simple model of herding behavior .the dynamics is given by the following update rule : a voter , i.e. a node of the network , is selected at random and adopts the state of a randomly chosen nearest neighbor .after such update events , time is increased by 1 . the probability to choose a node in state 1 from the neighborhood of node is proportional to the frequency of nodes with state 1 in that neighborhood , ( and conversely for state 0 ) .consequently , the transition probability towards the opposite state is proportional to the local frequency of the opposite state .it is also independent of the current state of the node . in order to match this dynamics within our framework , we consider values of of the order of 1 . from equation ( [ eq : transition - prob ] ) ,we obtain in linear approximation : \nonumber \\p(0|1,z_{i } ) & = & \frac{\gamma'}{2}\left[1-\beta ' z'_i\right].\end{aligned}\ ] ] with , , , this matches the transition probabilities for the lvm provided that : & = & 2f_{i } \nonumber \\ \gamma \left[1-\beta ( \phi-\theta_{i})\right ] & = & 2\left(1-f_{i}\right ) \end{aligned}\ ] ] this is realized by choosing we note that the threshold coincide with the unstable equilibrium point of the lvm , , that distinguishes between minorities and majorities in the neighborhood .the fragility equals the local frequencies of infected nodes , and does not depend on the node itself .if a majority of nodes in the neighborhood has failed , this more likely leads to a failed state of node ; if the failed nodes are the minority , this can lead to a transition into the healthy state . since the fragility coincides with fraction of neighbours in state 1 ( or 0 ), vm fits in the first model class described in sec .[ sec : models - with - constant - damage ] , with the specificity that the failure process is stochastic and the threshold is homogeneous across nodes . as a consistency check ,if we assume for one moment the failure process to be deterministic , one could directly apply eqn .( [ eq : recur : fd ] ) .since the probability distribution of the threshold would be trivially a delta function , its cumulative distribution would be the heaviside function .this would imply that the dynamics reaches as stable fix point as soon as and viceversa for .coming back to the usual stochastic vm , in order to obtain the mean - field dynamics , we now approximate with , i.e. 
we replace , in equation ( [ eq : master5 ] ) .this recovers the well known mean - field dynamics of the lvm , , i.e. the expected asymptotic fraction of failures ( which differs from the individual realizations ) coincide with the initial fraction of failed nodes . with a similar procedure we can also account for nonlinear vm , in which the probability to switch to a failed or healthy stateis a non - linear function of the fraction of failed nodes in the neighborhood : and are frequency dependent functions which describe the non - linear response of a node on the fraction of failed nodes in the neighborhood .if we again replace with the global frequency of failures , we arrive at the macroscopic dynamics in mean - field limit : \end{aligned}\ ] ] in the linear case , the prediction for the expected value of does not give sufficient information about individual realizations of the voter dynamics .in fact , it is well known that the global outcome of the lvm leads to global failure with a probability equal to the initial fraction of infected nodes , . in other words , if we run a simulation with e.g. for 100 times , then in 30 cases we will reach a state of global failure , whereas in 70 cases , no failure at all will prevail .this differs from the case of the cascading models described in sect .[ sec : specific - cascade - models ] , in which the mean field dynamics gives us some more information about individual realizations . in the non - linear case ( , ) , different scenarios arise depending on the nonlinearity . in was shown that even a small non - linearity may lead to either states where global failure is always reached , or to states with a coexistence of failed and healthy nodes .it is worth noticing that both of these scenarios are obtained with _ positive frequency dependence _ , i.e. a transition probability to fail that increases monotonically with the local frequency .thus , small deviations in the nonlinear response can either enhance systemic risk , or completely prevent it .the sis model is the most known model of epidemic spreading . on the microlevel, healthy nodes get infected with probability if they are connected to one or more infected nodes . in other words ,the parameter measures the infectiousness of the disease in case of contact with an infected node .this means that the effective transition probability of node from healthy to infected state is proportional to the probability that a neighbour is infected times the degree of the node .indeed , the larger the number of contacts , the more likely it is to be in contact with an infected node . 
on the other hand ,failed nodes recover spontaneously with probability .the transition probabilities are then as follows : with .we do not redefine , as usually done , the infection rate as with , because we want to cover the case , as we will see below .we interpret of course infection state as failure state .matching the transition probabilities of sis with the ones in our framework , we obtain : & = & 2 \nu \ , k_{i } \ , q \nonumber \\\gamma ' \left[1-\beta ' ( \phi-\theta'_{i})\right ] & = & 2 \delta\end{aligned}\ ] ] this implies that our framework recovers the transition probabilities of the sis model , provided that : in order to understand the relation of sis with the other models , we can approximate the probability that a node fails with the fraction of failed neighbours .the resulting expression for the fragility , , is proportional to the fraction of failed nodes as in model class ( i ) of sec .[ sec : models - with - constant - damage ] .however , the term implies that the infection probability grows with the number of connections in the network .this feature makes the biggest difference between the sis model and the cascade models studied in the previous sections , apart of course from the fact that the contagion process is stochastic and the threshold homogeneous .another important feature that emerges is the asymmetry in the transition probabilities between healthy and failed state and backwards . in order to derive a macroscopic dynamics, we apply the mean - field approximation and we assume a homogeneous network with for all nodes . starting from equation ( [ eq : master5 ] ) , we obtain the last negative term in the r.h.s . of eqn .( [ eq : sis - growth ] ) implies that there is no global spreading of infection if and the only stable fix point is .for there is a unique stable fix point with .as it is well known , the existence of a critical infection rate does not hold , however , if , instead of the mean - field limit with homogeneous degree , a heterogeneous degree distribution of the nodes is assumed .the implications of degree heterogeneity and degree - degree correlation in epidemic spreading have been investigated in a number of works .the si model , in which no transition into the healthy state is possible , is recovered setting additionally .we then obtain the logistic growth equation : where is the only stable fix point of the dynamics .any initial disturbance of a healthy state eventually leads to complete infection .we conclude by noting that , despite its simplicity , the si model has been used to describe a number of real contagious processes , such as the spread of innovations or herding behavior in donating money . in the latter case ,the mean - field interaction was provided by the mass media . in other words , because of the constant and homogeneous information about other people s donations , the individual transition depends on the global ( averaged ) frequency of donations instead of the local one .interestingly , it could be shown that in the particular example of epidemic donations , the time scale depends itself on time , indicating a slowing down of the dynamics due to a decrease in public interest .in this paper , we wish to clarify the meaning and the emergence of systemic risk in networks with respect to several existing models . 
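for completeness , a minimal numerical sketch of the homogeneous mean - field sis dynamics discussed in this section is given below . the equation integrated here , with assumed symbols nu ( infectiousness ) , k ( degree ) and delta ( recovery probability ) , is the standard form the macroscopic dynamics reduces to under the stated assumptions ; setting delta to zero recovers the si / logistic - growth case .

```python
import numpy as np

def sis_mean_field(x0=0.01, nu=0.2, k=4, delta=0.5, dt=0.01, t_max=100.0):
    """Euler integration of dx/dt = nu*k*x*(1-x) - delta*x (homogeneous mean field).
    With delta = 0 this is the SI / logistic-growth equation."""
    steps = int(t_max / dt)
    x = x0
    traj = np.empty(steps + 1)
    traj[0] = x
    for i in range(1, steps + 1):
        x += dt * (nu * k * x * (1.0 - x) - delta * x)
        traj[i] = x
    return traj

# With these assumed symbols the epidemic threshold sits at nu*k/delta = 1:
# below it the trajectory decays to 0, above it it settles near 1 - delta/(nu*k).
```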
to unify their description ,we propose a framework in which nodes are characterized ( 1 ) by a discrete failure state ( healthy or failed ) and ( 2 ) by a continuous variable , the _ net fragility _ , , capturing the difference between fragility of the node and its failing threshold . by choosing an appropriate definition of fragility in terms of the failure state and/or the fragility of neighbouring nodes , we are able to recover , as special cases , several cascade models as well as contagion models previously studied in the literature .our paper contributes to the investigation of these models in several ways .first of all , we have provided a novel framework to cover both cascade and contagion models in a deterministic approach , which is further suitable to be generalized also to the stochastic case .secondly , our framework allows us to unify a number of existing , but seemingly unrelated models , pointing out to their commonalities and differences .thirdly , we are able to identify three different model classes , which are each characterized by a specific mechanism of transferring fragility between different nodes .these are ( i ) constant load , ( ii ) load redistribution , and ( iii ) overload redistribution. systemic risk , within our framework , is defined as the stable fraction of failed nodes in the system . as denotes the complete breakdown of a system , we are interested in trajectories of the system where is much below one . in order to determine these trajectories , in this paper we derive a _ macroscopic dynamics _ for based on the microscopic dynamics . as a major contribution of this paper ,we are able to find , for each of the three classes , macroscopic equations for the final fraction of failed nodes in the mean - field limit . in order to compare the systemic risk between the three classes ,we have studied the macroscopic dynamics of each of them with the same initial conditions .most importantly , we found that the differences on the microscopic level translate into important differences on the macroscopic level , which are visualized in a phase diagram of systemic risk .this indicates , for each of the model classes , which given initial conditions result into what total fraction of failure .this way we could verify that , for instance , in class ( ii ) there is a first - order transition between regions of high systemic risk and regions with low systemic risk .in contrast , class ( i ) displays such a sharp transition only in some smaller part of the phase space , while class ( iii ) does not display any abrupt transition at all .such an insight helps us to understand whether and for which parameters small variations of initial conditions may lead to an abrupt collapse of the whole system , in contrast to a gradual increase of systemic risk .in addition to the macro dynamics , we have also investigated the different model classes on the microscopic level .a number of network examples made clear how the different transfer mechanisms affect the microdynamics of cascades .as an interesting insight , we could demonstrate that the role of nodes with high degree change depending on type of load transfer . 
in the inwards variant of the first model class , high - degree nodes are more protected from contagion , whereas in the outwards variant of the same class , they become more exposed to contagion if they are connected to many low - degree nodes ( which holds for disassortative networks ) . furthermore , we could point out that the results strongly depend on the position of the initially failing node . a systematic analysis identifying the crucial nodes from a systemic risk point of view is left for future work . finally , we have extended our general framework so as to encompass models of stochastic contagion , as known from vm and sis . both of these models belong to the first class , but differently from the models studied in section [ sec : models - with - constant - damage ] , the threshold is homogeneous and the failure is stochastic . hence , it becomes clearer how these established models of herding behavior and epidemic spreading are linked to the cascade models discussed in the literature . our work could be extended in several ways . first , one could apply techniques to deal with heterogeneous degree distributions to the three classes of cascading models introduced in sec . [ sec : specific - cascade - models ] . this could be carried out also in the presence of degree - degree correlation , as recently discussed for contagion models . furthermore , one could investigate in more detail the case of both heterogeneous thresholds and stochastic failures . compared to the simple sis , it seems more realistic to assume that the probability of contagion depends on an intrinsic heterogeneous property of the nodes ( the threshold ) . such heterogeneity could also play a crucial role , as has been found for heterogeneity in the degree . a last remark is devoted to the discussion of systemic risk . in our paper , we have provided mean - field equations to calculate the total fraction of failed nodes in a system , which we regard as a measure of systemic risk . this implies that systemic risk is associated with a system state of global failure , i.e. there is no risk anymore , as almost all nodes have already failed . in contrast , it could also be appropriate to define systemic risk as a situation where the system has not failed yet , but small changes in the initial conditions or fluctuations during the evolution may lead to its complete collapse . our general framework has already contributed insight into this problem , by identifying those areas in the ( mean - field ) phase diagram where we can observe a sharp transition between a globally healthy and a globally failed system . this is related to precursors of a crisis , as it identifies parameter constellations that make an apparently healthy system vulnerable . on the other hand , using our approach we were able to assess that for certain transfer mechanisms such an abrupt change in the global state is not observed at all , which means that systems operating under such conditions are less vulnerable to small changes . this work is part of a project within the cost action p10 `` physics of risk '' . j.l . and f.s . acknowledge financial support from the swiss state secretariat for education and research ser under the contract number c05.0148 . s.b . and f.s . acknowledge financial support from the eth competence center coping with crises in complex socio - economic systems ( ccss ) through eth research grant ch1 - 01 - 08 - 2 . avellaneda , m. ; zhu , j. ( 2001 ) .
_ risk _ * 14(12 ) * , 125129 .battiston , s. ; gatti , d. d. ; gallegati , m. ; greenwald , b. c. n. ; stiglitz , j. e. ( 2007 ) .credit chains and bankruptcies avalanches in production networks ._ journal of economic dynamics and control _ * 31(6 ) * , 20612084 .battiston , s. ; gatti , d. d. ; gallegati , m. ; greenwald , b. c. n. ; stiglitz , j. e. ( 2009 ) .liaisons dangereuses : increasing connectivity , risk sharing and systemic risk . _ forthcoming _ .bianconi , g. ; marsili , m. ( 2004 ) . ._ physical review e _* 70(3 ) * , 35105 .bogun , m. ; pastor - satorras , r. ; vespignani , a. ( 2003 ) . ._ physical review letters _* 90(2 ) * , 28701 .brunnermeier , m. ( 2008 ) .. _ journal of economic perspectives _ .carreras , b. ; lynch , v. ; dobson , i. ; newman , d. ( 2004 ) . ._ chaos : an interdisciplinary journal of nonlinear science _ * 14 * , 643 .caruso , f. ; latora , v. ; pluchino , a. ; rapisarda , a. ; tadi , b. ( 2006 ) . ._ the european physical journal b - condensed matter _* 50(1 ) * , 243247 .crucitti , p. ; latora , v. ; marchiori , m. ( 2004 ) . model for cascading failures in complex networks ._ physical review e _ * 69 * , 045104 .dodds , p. ; payne , j. ( 2009 ) . ._ phys rev e _ * 79 * , 066115 .dodds , p. ; watts , d. ( 2004 ) .universal behavior in a generalized model of contagion ._ physical review letters _* 92(21 ) * , 218701 .eisenberg , l. ; noe , t. ( 2001 ) .systemic risk in financial systems . _ management science _* 47(2 ) * , 236249 .goh , k. ; lee , d. ; kahng , b. ; kim , d. ( 2003 ) . ._ physical review letters _* 91(14 ) * , 148701 .gmez - gardees , j. ; latora , v. ; moreno , y. ; profumo , e. ( 2008 ) . ._ proceedings of the national academy of sciences _ * 105(5 ) * , 1399 .granovetter , m. ( 1978 ) .. _ american journal of sociology _ * 83(6 ) * , 1420 .jackson , m. ; rogers , b. ( 2007 ) . ._ the be journal of theoretical economics _* 7(1 ) * , 113 .kahng , b. ; batrouni , g. ; redner , s. ; de arcangelis , l. ; herrmann , h. ( 1988 ) . ._ physical review b _* 37(13 ) * , 76257637 .kim , d. ; kim , b. ; jeong , h. ( 2005 ) . ._ physical review letters _* 94(2 ) * , 25501 .kinney , r. ; crucitti , p. ; albert , r. ; latora , v. ( 2005 ) . ._ the european physical journal b - condensed matter and complex systems _ * 46(1 ) * , 101107 .knig , m. d. ; battiston , s. ; napoletano , m. ; schweitzer , f. ( 2008 ) . on algebraic graph theory and the dynamics of innovation networks . _ networks and heterogeneous media _ * 3(2 ) * , 201219 .kun , f. ; zapperi , s. ; herrmann , h. ( 2000 ) . ._ the european physical journal b - condensed matter and complex systems _ * 17(2 ) * , 269279 .lorenz , j. ; battiston , s. ( 2008 ) . systemic risk in a network fragility model analyzed with probability density evolution of persistent random walks .. media _ * 3(2 ) * , 185 .moreno , y. ; gomez , j. ; pacheco , a. ( 2002 ) . ._ europhysics letters _* 58(4 ) * , 630636 .morris , s. ; shin , h. ( 2008 ) . financial regulation in a system context ._ brookings panel on economic activity , september _ .motter , a. ( 2004 ) . ._ physical review letters _* 93(9 ) * , 98701 .pastor - satorras , r. ; vespignani , a. ( 2001 ) . ._ physical review letters _* 86(14 ) * , 32003203 .schweitzer , f. ; behera , l. ( 2009 ) . ._ the european physical journal b _ * 67(3 ) * , 301318 .schweitzer , f. ; fagiolo , g. ; sornette , d. ; vega - redondo , f. ; vespignani , a. ; white , d. r. ( 2009 ) .economic networks : the new challenges . 
_ science _ * 325(5939 ) * , 422425 .schweitzer , f. ; mach , r. ( 2008 ) . ._ plos one _ * 3(1)*. sornette , d. ( 2009 ) . ._ forthcoming on international journal of terraspace science and engineering , arxiv:0907.4290 _ .sornette , d. ; andersen , j. ( 1998 ) . ._ the european physical journal b - condensed matter and complex systems _ * 1(3 ) * , 353357 .stark , h. ; tessone , c. ; schweitzer , f. ( 2008 ) . ._ physical review letters _* 101(1 ) * , 18701 .vespignani , a. ; pastor - satorras , r. ( 2002 ) . epidemic spreading on scale - free networks . _ physical review e _ * 65 * , 035108 .vespignani , a. ; zapperi , s. ( 1998 ) .how self - organized criticality works : a unified mean - field picture . _ phys .e _ * 57(6 ) * , 63456362 .watts , d. ( 2002 ) . ._ proceedings of the national academy of sciences _ * 99(9 ) * , 5766 .an interesting model of contagion which has not been investigated in the econophysics literature is the one developed by eisenberg and noe .it introduces a so called fictitious default algorithm as a clearing mechanism in a financial system of liabilities .when some agents in the system can not meet fully their obligations , the task of computing how much each one owes to the other becomes nontrivial in presence of cycles in the network of liabilities .the basic assumptions of the clearing mechanism are ( i ) limited liability ( a firm need not spend more than it has ) , ( ii ) absolute priority of debt over cash ( a firm has to spend all available cash to satisfy debt claims first ) , ( iii ) no seniority ( all claims have the same priority ) .a financial system of firms is described by a vector of total obligations , a matrix of relative liabilities , and a vector of operating cash flows . is the total amount of liabilities firm has towards other firms , specifies what fraction of its own total obligations firm owes to firm , and determines the liquid amount of money of firm .thus , is the nominal liability has to .the matrix is row - stochastic , which means all entries are non - negative and rows sum up to one .this condition ensures that individual obligations sum up to the total obligations .the expected payments to firm from its debtors is thus : if it happens that the total cash flow , i.e. , the expected repayments of others plus operating cash - flow , is less than the total obligations , i.e. firm can not meet its obligation in full and defaults .this implies a reduction of the expected payments to its creditors , which might in turn default as a second - order effect , and so on .this makes this model close to the class of overload redistribution because the expected payments of a node do not vanish entirely when it fails .the fictitious default algorithm defined in consists of finding a clearing vector of total payments which fulfills the equation for all . as shown in , using mild assumptions , such a clearing vector exists and is unique and the fictitious default algorithm , with , is well defined .the sequence represents a decreasing sequence of clearing vector candidates which terminates in at most steps at the clearing vector . 
the new clearing vector candidate is computed from a given candidate taking into account the first order defaults given clearing vector candidate , but not the second order defaults .these are checked in the successive time steps .following our general framework presented in section [ sec : form - model - cont ] , we can define fragility as which is the amount of debt which has to be covered by the operating cash flow , given the current candidate for the clearing vector . from , , , and we can determine the failing state as in equation ( [ eq : unify ] ) as given a clearing vector candidate , the _ value of the equity _ of firm is given by which is the operating cash flow plus the expected amount of payments received by others minus the payment to others which are possible , given the currently expected payments from the other .a new clearing vector candidate is computed from by determining the failing state .this leads to a simple fix point equation ( see ) , which usually has a unique fix point .that means , the fictitious default algorithm is constituted in such a way that it solves a system of linear equations .if successful , the algorithm runs until a clearing vector is found which gives that the value of the equity is zero ( and not negative ) for a firm in default , and positive for a non - defaulting firm .at least one non - defaulting firm should be found by the fictitious default algorithm , which means that there is at least one firm for which it holds holds . if not , then the clearing vector candidates diverge toward , and the algorithm fails .this represents a full break down of the financial system .the relation between this model and our third model class is not straightforward because the new clearing vector candidate is not necessarily always uniquely defined by the current fragility as given in .therefore , we chose to study the simpler models in sec .[ sec : models - with - redistr - overload ] . investigating the macro - perspective as in section [ sec : macr - reform ], one finds that the eisenberg - noe model can be approximated by the macroscopic equation for the overload redistribution .the approximations would be fairly good when the system is close to a fully connected network ( everybody borrows equally from everybody else ) and uncorrelated operating cash flows .section [ sec : specific - cascade - models ] has pointed out that the propagation of cascades , in addition to the mechanism of transfer , strongly depends on the initial condition , in particular on the position of the first failing node . in order to further illustrate this important point , we present additional examples with a different initially failing node .all these examples start from the setup shown in figure [ fig : all : init ] .their outcome should be compared to the respective examples discussed in figures [ fig : fdfail3 ] , [ fig : fbfail3 ] , [ fig : fbofail3 ] but with highest degree node * e * failing initially .the example clearly shows a difference in the spreading properties of hubs : in the ` inwards variant ' the hub spreads failures to low - degree nodes ; the opposite for the ` outwards variant ' . ] . here, node * d * fails initially .w.r.t to the previous case , this leads to a propagation of failures in the opposite direction , in the llsc variant ( links fail ) , while nothing changes in the llss variant ( links remain ) . ]
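to complement the fictitious default algorithm sketched in the first appendix , the following python fragment iterates a simplified clearing - vector map : starting from the vector of total obligations , each firm repeatedly pays the smaller of what it owes and what it can afford given the payments currently expected from its debtors . this is only a sketch under the standard eisenberg - noe formulation ; the full algorithm additionally tracks the default set explicitly at each round .

```python
import numpy as np

def clearing_vector(Pi, p_bar, e, tol=1e-12, max_iter=10000):
    """Simplified clearing-vector iteration (a sketch, not the full algorithm).

    Pi    : (n, n) relative-liability matrix, row-stochastic; Pi[i, j] is the
            fraction of firm i's total obligations owed to firm j.
    p_bar : (n,) total nominal obligations of each firm.
    e     : (n,) operating cash flows.
    """
    p = p_bar.astype(float).copy()
    for _ in range(max_iter):
        # each firm pays min(what it owes, cash flow + expected payments received)
        p_new = np.minimum(p_bar, e + Pi.T @ p)
        if np.max(np.abs(p_new - p)) < tol:
            break
        p = p_new
    defaulted = p < p_bar - tol          # firms unable to meet obligations in full
    return p, defaulted
```

starting from the vector of total obligations , the iterates decrease monotonically towards the greatest clearing vector under the assumptions stated in the appendix .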
|
we introduce a general framework for models of cascade and contagion processes on networks , to identify their commonalities and differences . in particular , models of social and financial cascades , as well as the fiber bundle model , the voter model , and models of epidemic spreading are recovered as special cases . to unify their description , we define the net fragility of a node , which is the difference between its fragility and the threshold that determines its failure . nodes fail if their net fragility grows above zero , and their failure increases the fragility of neighbouring nodes , thus possibly triggering a cascade . in this framework , we identify three classes depending on the way the fragility of a node is increased by the failure of a neighbour . at the microscopic level , we illustrate with specific examples how the failure spreading pattern varies with the node triggering the cascade , depending on its position in the network and its degree . at the macroscopic level , systemic risk is measured as the final fraction of failed nodes , , and for each of the three classes we derive a recursive equation to compute its value . the phase diagram of as a function of the initial conditions thus allows for a prediction of the systemic risk as well as a comparison of the three different model classes . we could identify which model classes lead to a first - order phase transition in systemic risk , i.e. situations where small changes in the initial conditions may lead to a global failure . finally , we generalize our framework to encompass stochastic contagion models . this indicates the potential for further generalizations . * pacs : * 64.60.aq networks , 89.65.gh economics ; econophysics , financial markets , business and management , 87.23.ge dynamics of social systems , 62.20.m- structural failure of materials
|
during the last decade there has been a large interest in the study of large complex networks ; see e.g. dorogovtsev and mendes ( 2003 ) and newman et al .( 2006 ) and the references therein .due to the rapid increase in computer power , it has become possible to investigate various types of real networks such as social contact structures , telephone networks , power grids , the internet and the world wide web .the empirical observations reveal that many of these networks have similar properties . for instance , they typically have power law degree sequences , that is , the fraction of vertices with degree is proportional to for some exponent . furthermore , many networks are highly clustered , meaning roughly that there is a large number of triangles and other short cycles . in a social network , this is explained by the fact that two people who have a common friend often meet and become friends , creating a triangle in the network .a related explanation is that human populations are typically divided into various subgroups working places , schools , associations etc which gives rise to high clustering in the social network , since members of a given group typically know each other ; see palla et al .( 2005 ) for some empirical observations .real - life networks are generally very large , implying that it is a time - consuming task to collect data to delineate their structure in detail .this makes it desirable to develop models that capture essential features of the real networks . a natural candidate to modela network is a random graph , and , to fit with the empirical observations , such a graph should have a heavy - tailed degree distribution and considerable clustering .we will quantify the clustering in a random graph by the conditional probability that three given vertices constitute a triangle , given that two of the three possible links between them exist .other ( empirical ) definitions occur in the literature see e.g. newman ( 2003 ) but they all capture essentially the same thing .obviously , the classical erds - rnyi graph will not do a good job as a network model , since the degrees are asymptotically poisson distributed . moreover , existing models for generating graphs with a given degree distribution see e.g. molloy and reed ( 1995 , 1998 ) typically have zero clustering in the limit . in this paper, we propose a model , based on the so - called random intersection graph , where both the degree distribution and the clustering can be controlled .more precisely , the model makes it possible to obtain arbitrary prescribed values for the clustering and to control the mean and the tail behavior of the degree distribution .the random intersection graph was introduced in singer ( 1995 ) and karoski et al .( 1999 ) , and has been further studied and generalized in fill et al .( 2000 ) , godehardt and jaworski ( 2002 ) , stark ( 2004 ) and jaworksi et al .newman ( 2003 ) and newman and park ( 2003 ) discuss a similar model . in its simplest formthe model is defined as followslet be a set of vertices and a set of elements . for ] .stark ( 2004 ; theorem 2 ) shows that in a random intersection graph with the above choice of , the distribution of the degree of a given vertex converges to a point mass at 0 , a compound poisson distribution or a poisson distribution depending on whether , or .this means that the current model can not account for the power law degree distributions typically observed in real networks . 
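as a quick way to experiment with the basic model just described , the following python sketch generates a small random intersection graph . the default inclusion probability written here ( proportional to one over the square root of nm ) is only an illustrative stand - in for the scaling discussed above , whose exact expression is not reproduced in this text .

```python
import numpy as np
from itertools import combinations

def random_intersection_graph(n, beta=1.0, alpha=1.0, p=None, seed=0):
    """Plain random intersection graph: n vertices, m = floor(beta * n**alpha)
    elements; each vertex includes each element independently with probability p;
    two vertices are joined iff their element sets intersect."""
    rng = np.random.default_rng(seed)
    m = int(np.floor(beta * n ** alpha))
    if p is None:
        p = 1.0 / np.sqrt(n * m)         # assumed illustrative scaling, not the paper's exact choice
    membership = rng.random((n, m)) < p  # membership[i, a]: vertex i holds element a
    edges = [(i, j) for i, j in combinations(range(n), 2)
             if np.any(membership[i] & membership[j])]
    return membership, edges
```

from the returned edge list one can , for instance , tabulate the empirical degree distribution and count triangles to estimate the clustering discussed below .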
in the above formulation of the model , the number of groups that a given individual belongs to is binomially distributed with parameters and .a generalization of the model , allowing for an arbitrary group distribution , is described in godehardt and jaworski ( 2002 ) .the degree of a given vertex in such a graph is analyzed in jaworski et al .( 2006 ) , where conditions on the group distribution are specified under which the degree is asymptotically poisson distributed . in the current paper , we are interested in obtaining graphs where non - poissonian degree distributions can be identified . to this end, we propose a generalization of the original random intersection graph where the edge probability is random and depends on weights associated with the vertices . other work in this spirit include for instance chung and lu ( 2002:1,2 ) , yao et al .( 2005 ) , britton et al .( 2006 ) , bollobs et al.(2007 ) and deijfen et al .the model is defined as follows : 1 .let be a positive integer , and define with .as before , take to be a set of vertices and a set of elements .also , let be an i.i.d .sequence of positive random variables with distribution , where is assumed to have mean 1 if the mean is finite .finally , for some constant , set now construct a bipartite graph with vertex sets and by adding edges to the elements of for each vertex independently with probability .the random intersection graph is obtained as before by drawing an edge between two distinct vertices if and only if they have a common adjacent vertex in . in the social network setting , the weights can be interpreted as a measure of the social activity of the individuals .indeed , vertices with large weights are more likely to join many groups and thereby acquire many social contacts .there are several other examples of real networks where the success of a vertex ( measured by its degree ) depends on some specific feature of the vertex ; see e.g. palla et al .( 2005 ) for an example in the context of protein interaction networks .furthermore , an advantage of the model is that it has an explicit and straightforward construction which , as we will see , makes it possible to exactly characterize the degree distribution and the clustering in the resulting graph .our results concern the degree distribution and the clustering in the graph as .more precisely , we will take the parameters , , and the weight distribution to be fixed ( independent of ) and then analyze the degree of a given vertex and the clustering in the graph as .it turns out that the behavior of these quantities will be different in the three regimes , and respectively .the interesting case is , in the sense that this is when both the degree distribution and the clustering can be controlled .the cases and are included for completeness . as for the degree, we begin by observing that , if has finite mean , then the asymptotic mean degree of vertex , conditional on , is given by for all values of . [ prop : vv ] let be the degree of vertex in a random intersection graph with and as in .if has finite mean , then , for all values of , we have that \to\beta\gamma^2w_i ] as , we observe that summing the expectation of the right - hand side over , keeping fixed , gives ( recall the truncation at 1 in ( [ p_i ] ) ) \leq \beta\gamma n^{(1+\alpha)/2}\e[p_k'']\leq \beta\gamma\left(\gamma\e[w_k'']+n^{(1+\alpha)/2}\p\big(\gamma w_k\geq n^{(1+\alpha)/2}\big)\right),\ ] ] where both terms on the right hand side converge to 0 as since has finite mean . 
as for , we have the sum over of the expectation of the first term equals ] ( since has finite mean ) and the sum of the expectation of the second term converges to 0 ( since ) . since , this proves the proposition. the following theorem , which is a generalization of theorem 2 in stark ( 2004 ) , gives a full characterization of the degree distribution for different values of .[ th : deg_distr ] consider the degree of vertex in a random intersection graph with and as in , and assume that has finite mean .* if , then converges in distribution to a point mass at 0 as .* if , then converges in distribution to a sum of a poisson( ) distributed number of poisson( ) variables , where all variables are independent . *if , then is asymptotically poisson( ) distributed . to understand theorem [ th : deg_distr ] , note that the expected number of groups that individual belongs to is roughly .if and has finite mean , this converges to 0 in probability , so that the degree distribution converges to a point mass at 0 , as stated in ( a ) ( the group size however goes to infinity , explaining why the expected degree is still positive in the limit ) . for ,the number of groups that individual is a member of is poisson( ) distributed as , and the number of other individuals in each of these groups is approximately poisson( ) distributed , which explains ( b ) .finally , for , individual belongs to infinitely many groups as .this means that the edges indicators will be asymptotically independent , giving rise to the poisson distribution specified in ( c ) . moving on to the clustering ,write for the event that individuals have a common group in the bipartite graph that is , is equivalent to the event that there is an edge between vertices and in and let be the probability measure of conditional on the weights . for distinct vertices , define that is , is the edge probability between and in given that they are both connected to , conditional on the weights . to quantify the asymptotic clustering in the graphwe will use ,\ ] ] where the expectation is taken over the weights , that is , is the limiting probability that three given vertices constitute a triangle conditional on that two of the three possible edges between them exist ( the vertices are indistinguishable , so indeed does not depend on the particular choice of and ) .this should be closely related to the limiting quotient of the number of triangles and the number of triples with at least two edges present , which is one of the empirical measures of clustering that occur in the literature ; see e.g. newman ( 2003 ) . establishing this connection rigorously however requires additional arguments .the asymptotic behavior of is specified in the following theorem . by bounded convergenceit follows that is obtained as the mean of the in - probability - limits .[ th : clustering ] let be three distinct vertices in a random intersection graph with and as in ( [ p_i ] ) . if has finite mean , then * in probability for ; * in probability for ; * in probability for . to understand theorem [ th : clustering ] ,assume that and share a group and that and share a group .the probability that and also have a common group then depends on the number of groups that the common neighbor belongs to .indeed , the fewer groups belongs to , the more likely it is that and in fact share the same group with .recall that the expected number of groups that belongs to is roughly .if , this goes to 0 as . 
since it is then very unlikely that belongs to more than one group when is large , two given edges and are most likely generated by the same group , meaning that and are connected as well . on the other hand , if , the number of groups that belongs to is asymptotically infinite .hence , that and each belong to one of these groups , does not automatically make it likely that they actually belong to the same group . if , individual belongs to groups on average , explaining the expression in part ( b ) of the theorem . from theorem [ th : clustering ]it follows that , to get a nontrivial tunable clustering , we should choose . indeed, then we have ] .this proves ( a ) , since clearly if individual 1 is not a member of any group . to prove ( b ) and ( c ) , first recall the definition of the weights and truncated from above and below respectively at and the corresponding degree variables and from the proof of proposition [ prop : vv ] .we have already showed ( in proving proposition [ prop : vv ] ) that \to 0 ] ) .hence it suffices to show that the generating function of converges to the generating function of the claimed limiting distribution . to this end, we condition on the weight , which is thus assumed to be fixed in what follows , and let ( ) denote the number of common groups of individual and individual when the truncated weights are used for . since two individuals are connected if and only if they have at least one group in common , we can write .furthermore , conditional on and , the random variables , , are independent and binomially distributed with parameters and .hence , with denoting the probability measure of the bipartite graph conditional on both and , the generating function of can be written as = \e\left[\prod_{i=2}^n\e\left[t^{\mathbf{1}\{x_i'\geq 1\}}\big|\{w_i'\},n\right]\right ] = \e\left[\prod_{i=2}^n\left(1+(t-1)\bar{\bar{\p}}_n(x_i'\geq 1)\right)\right]\ ] ] where ] , it takes values between 0 and 1 and , since ] .furthermore , recalling that , we have for that implying that in probability and thus , by bounded convergence , \to 0 ] if ; * \to e^{\beta\gamma^2 w_1(t-1)} ] , it follows that almost surely .hence , and it follows from bounded convergence that the expectation converges to the same limit , proving ( i ) . for , define . with and , we get after some rewriting , that by the law of large numbers , almost surely , and , since as , it follows that the right hand side above converges to almost surely as . by ( [ ( i , ii)gen_fct ] ) and bounded convergence , this proves ( ii). this section , we prove theorem [ th : clustering ] .first recall that denotes the event that the individuals share at least one group. it will be convenient to extend this notation .to this end , for , denote by the event that there is at least one group to which all three individuals , and belong , and write for the event that there are at least three _ distinct _ groups to which and , and , and and respectively belong . similarly , the event that there are two distinct groups to which individuals and , and and respectively belong is denoted by .the proof of theorem [ th : clustering ] relies on the following lemma .[ lemma : clustering ] consider a random intersection graph with and defined as in ( [ p_i ] ) .for any three distinct vertices , we have that * ; * ; * ; * . * proof .* as for ( a ) , the probability that three given individuals , and do not share any group at all is . 
using the definitions of and the edge probabilities , it follows that to prove ( b ) , note that the probability that there is exactly one group to which both and belong is .given that and share one group , the probability that and share exactly one of the _ other _ groups is .finally , the conditional probability that there is a third group to which both and belong given that the pairs and share one group each is . combining these estimates , and noting that scenarios in which and or and share more than one group have negligible probability in comparison , we get that part ( c ) is derived analogously .as for ( d ) , note that the event occurs when there is at least one group that is shared by all three vertices , and and a second group shared by either and or and .denote by the probability that individual and at least one of the individuals and belong to a fixed group .then , and , conditional on that there is exactly one group to which all three individuals , and belong ( the probability of this is ) , the probability that there is at least one other group that is shared either by and or by and is .it follows that using lemma [ lemma : clustering ] , it is not hard to prove theorem [ th : clustering ] .* proof of theorem [ th : clustering ] .* recall the definition ( [ c_n ] ) of and note that as for ( a ) , applying the estimates of lemma [ lemma : clustering ] and merging the error terms yields }.\label{lower_bound}\end{aligned}\ ] ] by markov s inequality and the fact that , and are independent and have finite mean , it follows that goes to 0 in probability when .similarly , in probability .hence , the quotient in ( [ lower_bound ] ) converges to 1 in probability for , as claimed . to prove part ( b ) , note that , for , the lower bound ( [ lower_bound ] ) for converges in probability to . to obtain an upper bound ,we apply lemma [ lemma : clustering ] with to get that }.\nonumber\end{aligned}\ ] ] here converges to 0 in probability by markov s inequaliy , and ( b ) follows . as for ( c ) , combining the bound in ( [ cl_bd ] ) with the estimates in lemma [ lemma : clustering ] yields }.\ ] ] since in probability , this bound converges to 0 in probability for , as desired. , the clustering is given by $ ] . herewe investigate this expression in more detail for the important case that is a power law .more precisely , we take to be a pareto distribution with density when , this distribution has mean 1 , as desired .the asymptotic clustering is given by the integral defining , we obtain where is the hypergeometric function . for , a series expansion of the integrand yields that where is the lerch transcedent .furthermore , when is an integer , we get .\end{aligned}\ ] ] figure [ graph_clust ] ( a ) and ( b ) show how the clustering depends on and respectively . for any and a given tail exponent , we can find a value of such that the clustering is equal to . 
combining this with a condition on , induced by fixing the mean degree in the graph , the parameters and can be specified . apart from the degree distribution and the clustering , an important feature of real networks is that there is typically significant correlation between the degrees of neighboring nodes , that is , either high ( low ) degree vertices tend to be connected to other vertices with high ( low ) degree ( positive correlation ) , or high ( low ) degree vertices tend to be connected to low ( high ) degree vertices ( negative correlation ) . a next step is thus to quantify the degree correlations in the current model . the fact that individuals share groups should indeed induce positive degree correlation , which agrees with empirical observations from social networks ; see newman ( 2003 ) and newman and park ( 2003 ) . other features of the model are also worth investigating . for instance , many real networks are `` small worlds '' , meaning roughly that the distances between vertices remain small even in very large networks . it would be interesting to study the relation between the distances between vertices , the degree distribution and the clustering in the current model . finally , dynamic processes behave differently on clustered networks as compared to more tree - like networks . most work to date has focused on the latter class . in britton et al . ( 2008 ) , however , epidemics on random intersection graphs without random weights are studied and it is investigated how the epidemic spread is affected by the clustering in the graph . it would be interesting to extend this work to incorporate weights on the vertices , allowing one also to tune the ( tail of the ) degree distribution and to study its impact on the epidemic process . * acknowledgement . * we thank remco van der hofstad and wouter kager for valuable suggestions that have improved the manuscript . bollobás , b. , janson , s. and riordan , o. ( 2007 ) : the phase transition in inhomogeneous random graphs , _ random structures algorithms _ * 31 * , 3 - 122 . fill , j. , scheinerman , e. and singer - cohen , k. ( 2000 ) : random intersection graphs when : an equivalence theorem relating the evolution of the and models , _ random structures algorithms _ * 16 * , 156 - 176 . godehardt , e. and jaworski , j. ( 2002 ) : two models of random intersection graphs for classification , in _ exploratory data analysis in empirical research _ , eds . schwaiger , m. and opitz , o. , springer , 67 - 81 .
|
a random intersection graph is constructed by assigning independently to each vertex a subset of a given set and drawing an edge between two vertices if and only if their respective subsets intersect . in this paper a model is developed in which each vertex is given a random weight , and vertices with larger weights are more likely to be assigned large subsets . the distribution of the degree of a given vertex is characterized and is shown to depend on the weight of the vertex . in particular , if the weight distribution is a power law , the degree distribution will be so as well . furthermore , an asymptotic expression for the clustering in the graph is derived . by tuning the parameters of the model , it is possible to generate a graph with arbitrary clustering , expected degree and in the power law case tail exponent . _ keywords : _ random intersection graphs , degree distribution , power law distribution , clustering , social networks . ams 2000 subject classification : 05c80 , 91d30 .
|
fractals are interesting mathematical structures and are gaining increasing attention . a _ fractal _ is a set that possesses a non - ending repeating pattern . a set may be formally defined as a fractal if its dimension is fractional . in addition to this , fractals hold many interesting properties . most fractals are self - similar , and can be defined either iteratively or recursively . the fractal dimension is mostly calculated using either the hausdorff dimension or the box - counting dimension . the former is more stable , but the latter is easier to calculate . these and other definitions of dimension can be seen in detail in . for a bounded subset of , let denote the smallest number of boxes of side length covering . then , by the lower box - counting dimension of is defined as similarly , the upper box - counting dimension of is given as the box - counting dimension of denoted as ( also known as the box dimension or capacity dimension ) is then defined as there exist structures , possessing rotation of the portions being removed , for which the existing definition of the box dimension using -mesh cubes is hard to calculate . the aim of this paper is to deal with this problem by presenting an alternate definition of the box dimension , initially proposed by . this article extends the work of by providing a curve fitting comparison . in the following section we provide an alternate definition of the box dimension which will be used for a particular structure in section 3 . recall from that for a bounded subset of , ( as mentioned in ) can be any of the following : * closed balls of radius that cover , * cubes of side that cover , * -mesh cubes that intersect , * sets of diameter at most that cover , * disjoint balls of radius with centres in . note that in ( i)-(iv ) , is the smallest number and in ( v ) the largest number of such sets . for two - dimensional objects , the box dimension is usually calculated using -mesh squares . the following theorem , as mentioned in , proposes a new variant of the box - counting dimension using -mesh triangles . for this , let denote the number of right - angled triangles of side length having non - empty intersection with . let be a bounded subset of . the lower and upper box - counting dimensions of in terms of are given by and moreover , if the limit exists , we have the box - counting dimension as the proof is similar to the proof for as for ( iii ) in . consider the -mesh squares of the form [ m \delta , ( m + 1 ) \delta ] \times [ n \delta , ( n + 1 ) \delta ] , where m and n are integers . clearly each square gives rise to two right - angled ( isosceles ) triangles , each of shorter side and longer side . let and be the number of squares and triangles , respectively , covering . then , since the collection gives rise to -squares of diameter , it can be readily seen that the collection of sets covering gives the relation assuming that is sufficiently small we have thus , taking limits as , we obtain and now , any set of diameter at most is contained in -mesh cubes . thus , taking logarithms , we get letting , we obtain the other side , i.e. and hence , the box - counting dimension can equivalently be defined using a collection of -mesh triangles . in this section we consider a spiral as a fractal . the construction of the spiral is taken from and is assigned the name bradley spiral .
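before turning to the construction of this spiral , a brief numerical sketch of the triangle - mesh count introduced above may be helpful . the routine below is an illustrative python version ( not the matlab implementation referred to later in the paper ) : it splits every mesh square along its diagonal into two right - angled triangles , counts the triangles hit by a finite sample of points of the set , and estimates the dimension as the least - squares slope of the log of the count against the log of the reciprocal mesh size .

```python
import numpy as np

def triangle_box_dimension(points, ks=range(2, 9)):
    """Estimate the box-counting dimension of a planar point set using
    delta-mesh triangles (delta = 2**-k).  `points` is an (N, 2) array of
    samples assumed to lie in the unit square."""
    points = np.asarray(points, dtype=float)
    logs_inv_delta, logs_count = [], []
    for k in ks:
        delta = 2.0 ** (-k)
        cell = np.floor(points / delta).astype(int)     # which delta-square each point is in
        frac = points / delta - cell                    # position inside that square
        tri = (frac[:, 0] > frac[:, 1]).astype(int)     # which half-triangle of the square
        occupied = {(i, j, t) for (i, j), t in zip(map(tuple, cell), tri)}
        logs_inv_delta.append(np.log(1.0 / delta))
        logs_count.append(np.log(len(occupied)))
    slope, _ = np.polyfit(logs_inv_delta, logs_count, 1)
    return slope
```

applied to a sufficiently dense sample of a set , the returned slope approximates the box dimension defined above via the triangle count .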
the portions being removed from the initial set follow a rotation, resulting in the spiral shape. the construction is as follows: consider a closed unit square. construct a square inscribed in it by joining the midpoints of each of its sides. this produces four right-angled isosceles triangles with shorter side of length $1/2$. remove the upper right triangle, keeping the boundary of its longer side (i.e. the boundary intersecting with the inscribed square). similarly to the previous step, form a square inscribed in this first inscribed square, yielding four right-angled isosceles triangles with shorter side of length $\sqrt{2}/4$. remove the right triangle and remove the boundary intersecting with the new inscribed square (keeping the other two sides). in the third stage, a square inscribed in the previous one is formed by joining the midpoints of each of its sides. this leaves four right-angled isosceles triangles with shorter side of length $1/4$. remove the lower right triangle and the boundary intersecting with the inscribed square. continuing iteratively, at the $k$-th stage a square inscribed in the previous one is formed, yielding four right-angled isosceles triangles with shorter side of length $(1/\sqrt{2})^{k+1}$. one of these triangles is removed along with the part of its boundary intersecting with the inscribed square. thus, the triangles removed form a spiral, and the resulting set is named the bradley spiral and is denoted by $s$. figure 1 shows the first three steps and the pattern obtained at the $k$-th step of the bradley spiral. now, we calculate the box dimension of the bradley spiral in terms of triangles. the bradley spiral has box dimension $2$. at the $k$-th stage, $s$ is covered by $N^T_{\delta_k}(s)$ mesh triangles of shorter side $\delta_k$, where $\delta_k \to 0$ as $k \to \infty$. then, by the theorem of the previous section,
\[ \dim_B s = \lim_{k \to \infty} \frac{\log N^T_{\delta_k}(s)}{-\log \delta_k}. \]
observe that for large $k$ the count $N^T_{\delta_k}(s)$ grows like $\delta_k^{-2}$, since the spiral retains a positive area. thus
\[ \dim_B s = \lim_{k \to \infty} \frac{\log \delta_k^{-2}}{-\log \delta_k} = 2. \]
in this section, we present an algorithm for calculating the box dimension of the bradley spiral using the triangle mesh, and we also show a curve-fitting comparison of the box dimension for the triangle mesh and the square mesh. for the algorithm, we choose equal spacing for the triangle meshes. the implementation is straightforward in matlab using built-in commands. we have also verified the results numerically, and the corresponding algorithm is given below. we now discuss the curve fitting of the box dimension for both the triangle mesh and the square mesh. the curve fitting here is computed from the linear least-squares regression obtained for the triangle and square meshes and is presented in figure 2. we observe that the counted value increases linearly on the log-log scale, in good agreement with the linear least-squares regression curve. further, it is to be noted that, in the case of the square mesh, the change in this quantity is greater than for the triangle mesh. in conclusion, we observe that, for structures involving rotation, this alternate definition of the box dimension is easier to calculate and gives more accurate results than the box dimension using the square mesh. it would be interesting to investigate the box dimension of other two-dimensional fractals using the proposed definition. further, the generalization of this idea to higher dimensions is still an open problem.
g. l. bradley and k. smith, _calculus_, prentice hall, 1940.
k. falconer, _fractal geometry: mathematical foundations and applications_, john wiley and sons, 1990.
n. khalid, _a topological treatment of 2-d fractals_, bachelor's project report, comsats institute of information technology, islamabad, pakistan, 2013.
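as a companion to the matlab procedure and the least-squares comparison discussed above, the following rough python sketch estimates the box dimension of a point sample by fitting $\log N$ against $-\log\delta$ for both the square and the triangle counts; the test set, the range of $\delta$ and the helper names are illustrative choices of mine rather than the paper's actual implementation.

    import numpy as np

    def counts(points, delta):
        # number of delta-mesh squares / right triangles containing a sample point
        # (points exactly on a diagonal are assigned to one triangle for simplicity)
        sq, tr = set(), set()
        for x, y in points:
            i, j = int(np.floor(x / delta)), int(np.floor(y / delta))
            sq.add((i, j))
            u, v = x / delta - i, y / delta - j
            tr.add((i, j, 0) if v <= u else (i, j, 1))
        return len(sq), len(tr)

    def fitted_dimensions(points, deltas):
        # least-squares slope of log N versus -log delta, for both meshes
        logs = np.array([np.log(counts(points, d)) for d in deltas])
        x = -np.log(deltas)
        return np.polyfit(x, logs[:, 0], 1)[0], np.polyfit(x, logs[:, 1], 1)[0]

    # toy example: a sample filling the unit square, expected slope close to 2
    rng = np.random.default_rng(1)
    pts = rng.random((50000, 2))
    deltas = np.array([0.2, 0.1, 0.05, 0.025])
    print(fitted_dimensions(pts, deltas))

a sample of points drawn from the bradley spiral construction can be substituted for the random sample to obtain a comparison of the kind reported in figure 2.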
|
an alternate definition of the box-counting dimension is proposed, to provide a better approximation for fractals involving rotation, such as the bradley spiral structure. a curve-fitting comparison of this definition with the standard box-counting dimension is also presented.
|
multi - boson interference based on correlated measurements is at the heart of many fundamental phenomena in quantum optics and of numerous applications in quantum information .recent works have demonstrated the feasibility of multi - boson experiments based on higher - order correlation measurements well beyond the first two - boson interference experiments . in particular , a lot of attention in the research community was drawn to the so - called boson sampling problem , formulated as the task to sample from the probability distribution of finding single input bosons at the output of a passive linear interferometer .this probability distribution depends on permanents of random complex matrices .the matrix permanent is defined similarly to the determinant aside from the minus signs appearing in the determinant . however , differently from the determinant , the permanent can not be calculated or even approximated in polynomial time by a classical computer ( more precisely these computational problems are in the complexity class # p ) .it has been argued that this also implies that the boson sampling problem can not be solved efficiently by a classical computer .this result has triggered several multi - boson interference experiments as well as studies of its characterization . in its current formulation ,the boson sampling problem relies only on sampling over all possible subsets of detected output ports regardless of the time and the polarization associated with each detection .however , thanks to the modern fast detectors and the possibility of producing single photons with arbitrary temporal and spectral properties time - resolved correlation measurements are today at hand experimentally .this has motivated us to introduce the novel problem of multi - boson correlation sampling ( mbcs ) , which considers the sampling process from the interferometer output probability distribution depending on the output ports where the photons are detected and the corresponding detection times and polarizations .the paper is organized as follows : in section [ sec : mbcsp ] we give a general description of the mbcs problem for correlation measurements of arbitrary order in a -port interferometer including the case of photon bunching at the detectors . in section [ sec : mbcsrates ] , we then analyze the multi - photon indistinguishability at the detectors in the context of time- and polarization - resolved detections . in the limit where no bunching occurs, we discuss the degree of multi - photon correlation interference for different scenarios of multi - photon distinguishability in section [ sec : mbcp11 ] . in section [ sec : integrated ], we describe the case of detectors averaging over the detection times and polarizations .we consider again the limit in section [ sec : integrated1 ] and show in section [ sec : bsp ] that , for identical input photons , this scenario corresponds to the description of the well known boson sampling problem .let us describe the physical problem of multi - boson correlation sampling ( mbcs ) .first , a random linear interferometer with ports is implemented ( see fig . [fig : interferometersetup](a ) ) .this requires only a polynomial number ( in ) of passive linear optical elements . single photons are then prepared at input ports of the interferometer , where ( with ) is the chosen set of occupied input ports . 
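the random interferometer and the choice of occupied input ports (and of the detected output ports used later in the section) can be mimicked numerically; the sketch below is my own illustration, using the standard qr-based recipe for haar-distributed unitaries with toy port numbers, and is not the authors' code.

    import numpy as np

    def haar_unitary(m, rng):
        # qr decomposition of a complex gaussian matrix, with the phases of the
        # diagonal of r fixed so that the resulting unitary is haar distributed
        z = (rng.standard_normal((m, m)) + 1j * rng.standard_normal((m, m))) / np.sqrt(2.0)
        q, r = np.linalg.qr(z)
        d = np.diag(r)
        return q * (d / np.abs(d))

    rng = np.random.default_rng(0)
    m, n = 12, 3                                   # toy numbers of ports and photons
    u = haar_unitary(m, rng)
    inputs = np.sort(rng.choice(m, size=n, replace=False))    # occupied input ports
    outputs = np.sort(rng.choice(m, size=n, replace=False))   # ports with a detection
    sub = u[np.ix_(outputs, inputs)]               # n x n submatrix entering the amplitudes
    print(np.allclose(u.conj().T @ u, np.eye(m)))  # unitarity check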
in the following, we will label the operators at the occupied input ports only with the index to simplify the notation .the -photon input state \rangle}_{s_i } \bigotimes_{s \notin \mathcal{s } } { \vert 0 \rangle}_{s } , \label{eqn : statedefinition}\end{aligned}\ ] ] is then defined , in a given polarization basis , by the single - photon multimode states \rangle}_{s_i } { \mathrel{\vcenter{\baselineskip0.5ex \lineskiplimit0pt \hbox{\scriptsize.}\hbox{\scriptsize . } } } = } \sum_{\lambda=1,2 } \int_{0}^{\infty } { d \omega\ , } \left ( { \mathbold{e}}_{\lambda}\cdot { \mathbold{\xi}}_{i}(\omega ) \right ) \hat{a}^{\dagger}_{i,\lambda } ( \omega ) { \vert 0 \rangle}_{s_i } , \label{eqn : singlephotonstate } \noeqref{eqn : singlephotonstate}\end{aligned}\ ] ] with the creation operator associated with the port , the frequency mode , and the polarization . the complex spectral distribution is defined by the polarization , the spectral shape with normalization condition the color or central frequency , and the time of emission of the photon injected in the port . by using detectors at the output ports of the interferometer , we consider the sampling process from all possible correlated detections of the input photons depending on the detection times and polarizations .in particular , the input photons can be detected in the output ports , defining the port sample where a port is contained times if photons are detected in that port .thereby , each possible measurement outcome is defined by the port sample together with the respective detection times and polarizations , with .we will now determine the -photon probability rates for each possible multi - boson correlation sample defined by the port sample and the respective detection times and polarizations .we consider input photons with frequency spectra satisfying the narrow bandwidth approximation and a polarization - independent interferometric evolution with approximately equal propagation time for each possible path .moreover , we define the matrix {\substack{j=1,\dots , n \\i=1,\dots , n } } { \mathrel{\vcenter{\baselineskip0.5ex \lineskiplimit0pt \hbox{\scriptsize.}\hbox{\scriptsize . } } } = } [ \mathcal{u}_{d_j , s_i } ] _ { \substack{j=1,\dots , n \\i=1,\dots , n}}\end{aligned}\ ] ] obtained from the unitary matrix describing the interferometer evolution . here , the elements of different rows can be the same if multiple photons are detected at the same port .the field operators at the detected ports can then be written in terms of the operators at the input ports as we calculate now the rate of an -fold detection event for ideal photodetectors given by the - order correlation function where is the component of the electric field operator in eq . in the detected polarization . from an experimental point of view, we can assume that for any sample the integration time of the detectors is short enough that the probability rate in eq. remains constant during . by using eq . 
and defining the operator matrices {\substack{j=1,\dots , n\\i=1,\dots , n } } , \label{eqn : operatormatrixdef}\end{aligned}\ ] ] eq .reduces to ( see app .[ app : operatorpermanents ] ) where is the symmetric group of order and we used the definition of matrix permanents in eq .( [ eqn : correlationoperatormatrix ] ) the permanent structure of the correlation function emerges already in terms of the operators contributing to the expectation value for each given sample .further , by using the fourier transforms (t-{\deltadefault\!}t ) = { \mathbold{v}}_i \ , \chi_i(t - t_{0i}-{\deltadefault\!}t ) { \operatorname{e}^{{\mathrm{i}}\omega_{0i}(t - t_{0i}-{\deltadefault\!}t ) } } \label{eqn : fourier}\end{aligned}\ ] ] of the frequency distributions , where is the fourier transform of in eq ., we define the matrices {\begin{array}{l}\scriptstyle j=1,\dots , n \\[-0.2em]\scriptstyle i=1,\dots , n \label{eqn : corrmatrixdefintion } \end{array}}.\end{aligned}\ ] ] here , for each multi - boson correlation sample , each matrix entry describes the probability amplitude for the quantum path from the source port to the port where a single - photon detection occurs at time with polarization .each possible product of entries in distinct rows and columns , associated with a permutation , describes a probability amplitude for an -photon detection .each -photon amplitude depends on the interaction with the passive optical elements in the interferometer through the term as well as on the contribution depending on the photonic spectral distributions , the propagation times , the detection times and polarizations .the interference of all possible detection amplitudes finally leads to the -photon probability rate uniquely defined by the permanent of the matrix in eq . .we refer to app .[ app : correlationfunctions ] for a detailed derivation of this result .the superposition of multi - photon amplitudes in the permanent in eq . will be fundamental in achieving computational hardness , as we will show in the next section . in the limit where the number of input photons is much less than the number of interferometer ports ,the detection events corresponding to boson bunching can be neglected and the port samples in eq .reduce to all the possible sets of distinct values of the indices ( see fig .[ fig : interferometersetup](b ) ) . in this case, we can directly use the indices and as labels for the detectors and the input photons , respectively , and write with the matrices {\substack{d\in \mathcal{d } \\s\in \mathcal{s}}}. \label{eqn : corrmatrixdefinition2}\end{aligned}\ ] ] when does -photon interference occur ?this only happens when all the interfering -photon detection amplitudes in eq .are non - vanishing .to establish if there is any time - polarization sample where multi - photon interference occurs for a general interferometer transformation , we define the -photon interference matrix with elements with , depending on the pairwise overlaps of the moduli of the temporal single - photon detection amplitudes and on the pairwise overlaps of the polarizations .the elements are always independent of the central frequencies of the photons . for equally polarized input photons , eq .reduces to where is the modulus of the vector . as an example , for input photons with gaussian spectral shape \end{aligned}\ ] ] with bandwidth in eq . , corresponding to \end{aligned}\ ] ] in eq ., each interference - matrix element takes the form ,\end{aligned}\ ] ] which has been plotted in fig .[ fig : overlapmodulus ] for . 
of the moduli of the temporal distributions and of the two photons coming from the ports and in the case of gaussian spectral shapes with equal bandwidth and equal polarization . this overlap does not depend on the central frequencies but only decays exponentially with the difference in the initial times of the photons .this is the reason why interference between photons of different colors contributes to time - resolved correlation measurements . ] in general , for non - vanishing elements a finite temporal overlap of the moduli of all single - photon detection amplitudes in at least one common polarization component is ensured .this means that a time interval and at least a polarization exist such that is non - vanishing for all times in .therefore , for the corresponding detection samples with and , all corresponding -photon quantum - path amplitudes characterizing the permanent in eq .are non - vanishing and contribute to the interference .thereby , the interference matrix represents a signature of the number of time - polarization samples for which -photon interference occurs . in ref . , we have demonstrated that mbcs is intractable for a classical computer with a polynomial number of resources in the condition , assuming that the intractability of boson sampling with identical photons is correct .further , all -photon quantum paths are always indistinguishable by the detection times or polarizations if in eq . .then , all terms of the permanent in eq .contribute to the correlation function for all possible samples .hereafter , we obviously exclude all the trivial samples with probability rates that vanish independently of the interferometer transformation .complete indistinguishability by detection times or polarizations can arise in two cases . either, all the input photons are completely identical or they differ only by their color , i.e. their central frequency . in ref . , we have shown how approximate mbcs with photons of different colors , even if distinguishable from each other ( ) , is at least of the same complexity class as the standard boson sampling problem with identical photons . if all input photons can be distinguished at the detectors for any possible time - polarization sample , in eq ., each photon can at most contribute to the detection at one specific detector .thus , for all possible samples , only a single -photon quantum path ( with and a bijective map from to ) contributes to the correlation function thereby , in such a case , mbcs becomes trivial from a computational point of view and , of course , this would be the same for the standard boson sampling problem .we now address the intermediate case where only the input photons in certain disjoint subsets of input ports , with , are always indistinguishable at the detectors , corresponding to then , for any possible time - polarization sample , multi - photon interference only occurs between photons in the same subset with elements . in this case, all the possible -photon detection events correspond to detector samples which can be divided into subsamples such that therefore , eq .reduces to where the matrices are now of order , differently from the matrices in eq . .we now consider the case of correlation measurements which do not resolve the detection times and polarizations , resulting in an average over these degrees of freedom . in this case, we obtain the probability ( see app . 
[app : nonresolving : bunching ] ) to detect the photons injected in the input ports at the output ports , where the factor arises from the symmetry of the correlation function under the exchange of arguments corresponding to a detection in the same output port .in order to calculate the probability in eq .( [ eqn : integratednotexplicit ] ) , it is useful to introduce the two - photon distinguishability factors with , and the _ -photon amplitude overlaps _ with a permutation from the symmetric group . in the case of input photons with gaussian spectral distributions in eq . , we find that ( see app . [app : overlapgaussianpulses ] ) \\ & \hspace{3cm}\times\exp\left [ -\frac{({\deltadefault\!}\omega_i)^2 ( { \deltadefault\!}\omega_{i'})^2}{({\deltadefault\!}\omega_i)^2+({\deltadefault\!}\omega_{i'})^2 } ( t_{0i}-t_{0i'})^2\right ] \\ & \hspace{3 cm } \times \exp\left [ -{\mathrm{i}}\frac{\omega_{0i}({\deltadefault\!}\omega_{i'})^2 + \omega_{0i'}({\deltadefault\!}\omega_i)^2}{({\deltadefault\!}\omega_i)^2 + ( { \deltadefault\!}\omega_{i'})^2 } ( t_{0i}- t_{0i ' } ) \right ]. \label{eqn : gaussiantwophotonoverlap}\end{aligned}\ ] ] the absolute value is plotted in fig . [ fig : overlap ] for equal bandwidths and equal polarizations . by defining the matrices {\substack{j=1,\dots ,i=1,\dots , n } } \label{eqn : amatrix},\end{aligned}\ ] ] the probability of an -fold detection in the sample can be expressed , in the narrow - bandwidth approximation , as ( see app . [ app : nonresolving : integration ] ) .\label{eqn : integratedgeneral } \end{aligned}\ ] ] in eq . for two photons coming from the ports and in the case of gaussian spectra in eq . with equal central frequencies and equal polarizations :the overlap is maximal ( ) if the spectra are identical and decays exponentially with the difference of the emission times of the photons and with the difference of their central frequencies . ]the time- and polarization - averaged probability in eq . associated with the detection of photons in the -port sample comprises contributions for each permutation .each contribution contains all cross terms arising from the interference of the interferometer - dependent multi - photon amplitudes , with , in the condition that the photon pairs for each cross term are fixed by a given permutation .moreover , each factor describes the degree of pairwise indistinguishability for the set of source pairs .( , ) in the two possible cases ( indistinguishability factor ) and ( for incomplete overlap of the photonic spectral distributions ) .in both cases , pairs of sources , each indicated by a separate color , are coupled in the possible interference terms defined by the two ways of connecting the two pairs to the two detectors and .these possibilities sum up to the permanents .thereby , eq . becomes ., title="fig : " ] ( , ) in the two possible cases ( indistinguishability factor ) and ( for incomplete overlap of the photonic spectral distributions ) . in both cases , pairs of sources , each indicated by a separate color , are coupled in the possible interference terms defined by the two ways of connecting the two pairs to the two detectors and .these possibilities sum up to the permanents .thereby , eq . becomes ., title="fig : " ] as pointed out in section [ sec : mbcp11 ] , in this limit the port samples in eq .reduce to all the possible sets of distinct values of the index .thereby , since and , eq . becomes alternate expressions for the detection probabilities in eq . can also be found in . 
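since all the detection probabilities above are built from matrix permanents, a brute-force reference implementation may help to fix ideas; the following is a hedged sketch of the textbook definition (factorially slow and only usable for a handful of photons), not an efficient algorithm.

    import itertools
    import numpy as np

    def permanent(a):
        # textbook definition: sum over all permutations of products of entries,
        # i.e. the determinant expansion with the alternating signs removed
        n = a.shape[0]
        return sum(np.prod([a[i, p[i]] for i in range(n)])
                   for p in itertools.permutations(range(n)))

    # sanity check on the 2 x 2 case: perm([[a, b], [c, d]]) = a*d + b*c
    print(permanent(np.array([[1.0, 2.0], [3.0, 4.0]])))      # prints 10.0

    # a random complex 3 x 3 block, as would arise from a submatrix of the
    # interferometer unitary: |perm|**2 versus the permanent of squared moduli
    rng = np.random.default_rng(0)
    sub = (rng.standard_normal((3, 3)) + 1j * rng.standard_normal((3, 3))) / np.sqrt(6.0)
    print(abs(permanent(sub)) ** 2, permanent(np.abs(sub) ** 2))

the first quantity is of the kind controlling the identical-photon probabilities discussed below, while the second, a permanent of a non-negative matrix, is the object that remains when no multi-photon interference survives.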
in fig. [fig:interferenceterms] we illustrate as an example the case of $n=2$ input photons. here, for a unitary matrix describing a 50/50 beam splitter transformation, eq. fully describes the famous two-photon interference dip. interestingly, for partially distinguishable photons, the complexity of the problem of sampling from the probability distribution in eq. is still an open question. here, we will address the two limiting cases of identical and fully distinguishable input photons. in the case of identical input photons, all photonic spectral distributions fully overlap pairwise, corresponding to complete overlaps of the interfering $n$-photon amplitudes. accordingly, eq. reduces to a probability proportional to the squared modulus of the permanent of the corresponding submatrix, $|\operatorname{perm}([\mathcal{U}_{d,s}])|^2$. this dependence of all the probabilities on permanents of random complex matrices, given by submatrices of the random unitary matrix, is at the heart of the computational complexity of the boson sampling problem. the opposite case of fully distinguishable input photons corresponds to vanishing $n$-photon amplitude overlaps, leading to no multi-photon interference. the probability in eq. is then given by the completely incoherent superposition of the single quantum-path contributions, i.e. by the permanent of the non-negative matrix $[\,|\mathcal{U}_{d,s}|^2\,]_{d\in\mathcal{D},\, s\in\mathcal{S}}$ replacing the complex submatrix.
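for the two-photon, balanced beam-splitter example just mentioned, the interference dip can be reproduced in a few lines; the gaussian-overlap parametrization below (spectral width sigma, relative delay tau, coincidence probability (1 - |overlap|^2)/2) is a standard textbook convention chosen for illustration and is not guaranteed to match the exact normalizations of the paper's equations.

    import numpy as np

    def coincidence_probability(tau, sigma):
        # two single photons, one per input port of a 50/50 beam splitter.
        # for gaussian spectral amplitudes of width sigma and relative delay tau
        # the modulus of the single-photon overlap is exp(-sigma**2 * tau**2 / 2),
        # so the coincidence probability is (1 - |overlap|**2) / 2: it vanishes
        # for identical, simultaneous photons and tends to 1/2 for large delays.
        overlap = np.exp(-0.5 * (sigma * tau) ** 2)
        return 0.5 * (1.0 - overlap ** 2)

    sigma = 1.0
    for tau in np.linspace(-3.0, 3.0, 7):
        print(f"tau = {tau:+.1f}   p_coincidence = {coincidence_probability(tau, sigma):.3f}")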
|
we give a full description of the problem of _multi-boson correlation sampling_ (mbcs) at the output of a random interferometer for single input photons in arbitrary multimode pure states. the mbcs problem is the task of sampling at the interferometer output from the probability distribution associated with polarization- and time-resolved detections. we discuss the richness of the physics and the complexity of the mbcs problem for nonidentical input photons. we also compare the mbcs problem with the standard boson sampling problem, where the input photons are assumed to be identical and the system ``classically'' averages over the detection times and polarizations. in memory of dr. howard brandt.
|
eukaryotic genes have a split nature , in which the exons , that encode the information for the final product of a messanger rna ( mrna ) , are interrupted by non - coding introns in the dna and in the precursor mrna ( pre - mrna ) transcript . the intron excision and the concomitant joining of exons , which basically represent the splicing process , are a necessity in order to obtain a mature mrna that can be exported in the cytoplasm and for example correctly translated into a protein .this process is carried out by the spliceosome , a macromolecular ribonucleoprotein complex , that assembles on pre - mrna in a stepwise manner .the first requirement is the correct recognition of the intron / exon boundaries by small nuclear ribonucleoproteins ( snrnps ) and some auxiliary splicing factors by binding to sequences located at the ends of introns .subsequently the splice - site pairing takes place , bringing the two exons near to each other and looping the intron that have to be cut away .although the molecular players and the key steps of spliceosome assembly are remarkably conserved through different species , there are two alternative pathways of splice - site recognition : _ intron definition _ and _ exon definition _ .+ intron definition ( see figure [ fig1]a ) begins with the direct interaction of the u1 snrnp with the splice - site in the upstream end of the intron ( 5 splice - site ) .the splice - site in the downstream end ( 3 splice - site ) is then recognized by the u2 snrnp and associated auxiliary factors such as u2af and sf1 . when the two complexes are constructed on the intron / exon boundaries they can be juxtaposed , closing an intron loop which is then spliced away in order to correctly glue the exons .the interaction of the splicing factors bound at the splice - sites occurs in this case across the intron .the exon definition ( see figure [ fig1]b ) requires instead that the initial interaction between the factors bound at the splice - sites occurs across the exon : the u1 and u2 snrnp and associated splicing factors bind to the 3 and 5 ends of an exon and a complex is built across it ( usually with the participation of serine / arginine - rich ( sr ) proteins ) ; then complexes on different exons join together so as to allow intron removal .previuos studies have shown that the length of the intron that has to be removed has a key role in the choice of the splice - site recognition modality .short introns are spliced away preferentially through intron definition , while longer introns seem to require an exon definition process .in particular the analysis of suggests the presence of a threshold in intron length - between 200 and 250 nucleotides ( nt ) long- above which intron - defined splicing ceases almost completely .lower eukaryotes present typically short introns , below the threshold , so it is expected that intron removal proceeds through intron definition .higher eukaryotes instead have an intron length distribution presenting two pronounced peaks , with the threshold in between ( see figure[intron - distribution ] ) , so even if the vast majority of introns are above the threshold ( data in table [ tabella ] ) , the first peak contains introns suitable for intron definition .this agrees with several studies which have shown that both ways of splice - site recognition are present in higher eukaryotes , even if the exon definition pathway seems to be the prevalent one .as it can be seen in figure [ intron - distribution ] , not only the shape of the distribution is 
quite conserved through different species , but also the position of the peaks .the first goal of our paper is to propose a simple physical model of early steps of spliceosome assembly on a pre - mrna , taking into account possible entropic contributions to the splicing process .subsequently we will show that , despite its simplicity , the model is able to produce quantitative predictions which are in rather good agreement with experimental and bioinformatical observations . +our starting point is the assumption that the splicing complexes , which are immersed in the crowded nuclear environment ( and reference therein ) , feel the so called `` depletion attraction '' .this interaction is essentially an entropic effect due to the fact that when two large complexes ( like the splicing ones ) approach each other , they reduce the volume between them excluded to the depleting particles . if the complexes are immersed in an environment crowded of macromolecules of smaller ( but comparable ) size , then this excluded volume effect induces an attractive interaction between the two complexes .this simple geometrical reasoning forms the basis for the asakura - oosawa ( ao ) theory . in more recent years, a more sophisticated hypernetted - chain - based theory describing depletion forces in fluids has been developed and tested in monte carlo simulations . however , as discussed in , the ao theory is an approximation that remains quite accurate up to , with representing the fraction of volume occupied by the crowding molecules . as far as the value inside a living cell has been estimated between 0.2 - 0.3 can safely use in the following the simpler ao description of depletion effects .since the two splicing complexes are joined by a freely fluctuating rna chain the depletion - based interaction becomes effectively long range , with a logarithmic dependence on the chain length .we suggest that this depletion attraction is the driving force which allows the splicing complexes to meet and join one another , in order to start up the splicing process . as we shall see this assumption naturally leads to a smooth cross - over from an intron defined to an exon defined splicing pathway as the chain length increases .let us model , as a first approximation , the pre - mrna as a freely jointed chain ( fjc ) , i.e a succession of infinitely penetrable segments , each of length equal to the kuhn length of the single strand rna ( ssrna ) .the estimated kuhn length of ssrna is approximately in the range 2 - 4 nm , i.e 3 - 6 nt .we chose to neglect the self avoidance in order to use the analytical tractable fjc and moreover the diameter of ssrna , approximately 2 nm , is not so relevant with respect to long chains : as reported in the fjc modelization is suitable for ssrna chains with a length greater than 5 - 6 kuhn segment , as will always be the case in the following .the two complexes , composed by u1 , u2 and splicing factors , that bind to the exon / intron boundary in the intron definition process , will be modeled as spheres with a diameter ( the dimensions of the major components u1 and u2 are quite similar , both of the order of nm , see and for details ) . 
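the freely jointed chain just introduced is straightforward to simulate; the short python sketch below is my own illustration with assumed parameter values (a kuhn length of about 3 nm, roughly 5 nt, and an intron near the threshold quoted above), not the authors' code.

    import numpy as np

    def fjc_end_to_end(n_segments, kuhn_length, n_chains, seed=0):
        # freely jointed chain: independent, uniformly oriented segments of fixed
        # length; returns the end-to-end distances of n_chains sampled chains
        rng = np.random.default_rng(seed)
        steps = rng.normal(size=(n_chains, n_segments, 3))
        steps /= np.linalg.norm(steps, axis=2, keepdims=True)   # random unit vectors
        ends = kuhn_length * steps.sum(axis=1)
        return np.linalg.norm(ends, axis=1)

    kuhn_nt, kuhn_nm = 5.0, 3.0                      # assumed kuhn length
    intron_nt = 250                                  # intron length near the threshold
    r = fjc_end_to_end(int(intron_nt / kuhn_nt), kuhn_nm, n_chains=20000)
    # the root-mean-square end-to-end distance should approach sqrt(n) * b
    print(np.sqrt(np.mean(r ** 2)), np.sqrt(intron_nt / kuhn_nt) * kuhn_nm)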
the same geometrical approximation will be done for complexes constructed across exons in exon definition .they will be considered as spheres of diameter , with since they are composed by both the u1 and the u2 subcomplexes plus the exon in between , usually with sr proteins bound to it .+ the simple fjc model allows the analytical calculation of the radial probability distribution of the end - to - end distance : where , is the number of indipendent segments in the fjc and is the length of a segment ( in our case the kuhn length of mrna ) .following , in order to include the depletion attraction contribution , we weighed the radial probability distribution of the end - to - end distance ( we assume that the ends of the intron can be considered as the center of the beads ) with a boltzmann factor , which takes into account the depletion attraction potential and which is non - zero in the range .this potential is easy to evaluate in this `` hard sphere '' approximation ( see for instance ) and takes a particularly simple expression in the limit .we can therefore define a new function as the weighed fjc radial probability distribution : where denotes the volumetric concentration of the small molecules and their typical size . with the typical values of these quantities for the nuclear environment : and finds for the problem at hand a potential energy of the order of one hydrogen bond , which is exactly in the range of energies needed to join together the two ends of an intron of length of about 10 kuhn length ( equivalent to 50 nucleotides ). passing to the variabile ( distance from the surfaces of the two spheres ) , we construct our probability distribution as : which can be simply normalized as : it s now straigthforward to define the looping probability as the probability of finding the surfaces of the two beads at the end of the chain within a sufficiently short distance ( choosen as 5 nm in the following , in line with ) : we reported the equations for the case for the sake of simplicity , but in the numerical estimates reported in the following sections we used the full effective potential of depletion attraction taken from .the appealing feature of this model is that it introduces in a natural way a logarithmic relation between the intron length and the dimensions of spliceosome subcomplexes attached to its ends , if we constrain the system to keep a fixed looping probability . with ) for different intron lengths as a function of the diameter of the spheres attached to the ends .following the kuhn length of the chain was fixed to 5 nt ( about 3 nm ) .however it is well known that many regulatory proteins can be bound to the pre - mrnas and that the latter may fold into rather complex secondary structures .both these factors have the effect of increasing the stiffness of the pre - mrna thus increasing its kuhn length .unfortunately so far there are no experimental estimates of the kuhn length in these conditions , so the value derived for ssrna should be better considered as a lower bound .the diameter of the small crowding molecules is assumed as 5 nm ( see and references therein ) . 
]this can be seen by looking at figure [ loop - prob - distribution ] where we plotted the looping probability for different intron lengths as a function of the diameter of the spheres attached to its ends .if we increase the intron length of an order of magnitude the beads diameter must be enlarged by a ( roughly constant ) multiplicative factor in order to obtain the same looping probability .this observation may be used to explain the switch from intron to exon definition as the intron length increases . when the intron length becomes too large the dimensions of merely u1 and u2subcomplexes is not sufficient to ensure a reasonable looping probability .this does not mean that such a process is forbidden but simply that it would require much longer times .for large enough introns it becomes more probable that the two complexes instead join across the exon ( a process mediated again by the depletion attraction ) , if it is sufficiently short .the complexes constructed across exons can actually result large enough to maintain a suitable looping probability , even in the case of long introns .looking at figure [ loop - prob - distribution ] we see that while the model works nicely from a qualitative point of view it predicts intron lengths which are slightly smaller than those actually observed .in fact , in order to make the model more realistic and to be able to obtain also a quantitative agreement with the data , we must take into account two other ingredients .the first one is that pre - mrnas can be bound to various regulatory proteins which have the effect of increasing their kuhn length .unfortunately no direct estimate of the kuhn length in this conditions exists , thus to obtain the curves reported in figure [ loop - prob - distribution ] we were compelled to use the kuhn length of pure ssrnas .hence the intron length reported in the figure should be better considered as lower bounds .the second one is that the splicing ( sub)complexes are rather far from the hard sphere approximation . if the irregular shape of the molecules allows a snugly fit or if parts of the two subcomplexes can intermingle , the free energy gain will be larger .again this suggests that our results should be better considered as lower bounds . in this casehowever we can slightly improve our model and obtain also a reliable upper bound for our looping probability .the maximal relaxation of the hard hypothesis can be achieved considering that the two spheres can fuse with volume conservation ( _ soft hypothesis _ ) . while we ca nt actually write the analytical expression of the potential in this `` soft beads '' case , it s undemanding to calculate the free energy gain obtained by the complete fusion of the two spheres .it s directly related to the portion of volume that becomes available to the crowding molecules : following again we may at this point assume that the functional dependence on of the potential is the same as in the hard - hypothesis scenario and that the free energy gain reported in equation [ soft gain ] can be a good estimate of the variation of the potential from zero at to its maximal absolute value at ( i.e. 
when the beads are in contact ) .starting from these resonable assumptions we may write the weighted radial probability distribution as in equation [ boltzman - factor ] , by simply substituting the maximal free energy gain of the hard beads scenario ( which is proportional to ) with that of equation [ soft gain ] : from this expression it is straightforward to obtain the probability distribution of the end - to - end distance , i.e the corrisponding of equations [ probability - distribution ] and [ probability - normalized ] , and obtain curves analogous to those reported in figure [ loop - prob - distribution ] .if the depletion attraction plays a role in exon juxtaposition , the typical length of introns with different splicing fate should be in a range suitable to obtain an high looping probability , given the diameter of the beads attached to their ends . in figure[ cumulative - diameter ] we report the diameter of the beads needed to have a looping probability of 99% , in the hard sphere hypothesis ( blue line ) and soft sphere hypothesis ( yellow line ) .to be more precise , the two colored regions represent the d values , obtained by numerical integrations for different intron lengths , for which ( see equation [ loop - prob ] ) , with the radial probability distributions ( described by equation [ probability - normalized ] ) , derived starting from equation [ boltzman - factor ] ( hard - sphere ) or from equation [ boltzman - factor - soft ] ( soft - sphere ) . in figure [ cumulative - diameter ] we also plot two vertical lines corresponding to the intron lengths of the left and right peak of the distribution in figure [ intron - distribution ] as typical values for the introns devoted to intron definition and exon definition respectively . remarkably enough in both casesthe actual dimensions of the splicing complexes ( the black dots along the vertical lines in the figure ) lie exactly in between the two bounds . moreover looking at the curves it is easy to see that moving from the first to the second peak , the subcomplexes size must increase roughly of the amount actually observed in the transition from intron definition to exon definition in order to keep the same looping probability . ) .black squares represent the estimated diameter of spliceosome ( sub)complexes for the two corresponding ways of splice - site recognition . while for the intron - definition caseestimates for the dimensions of the involved snrnp can be found in literature ( ) , less information is known for the typical size of the complex contructed across exons in exon definition . in the figure we made the ( rather conservative ) assumption that the diameter of this complex is twice that of the subcomplexes involved in intron - defined splicing . ]obviously many other types of specific and elaborate regulation of the splicing dynamic are present in the cell , but the atp - free depletion attraction could explain the widespread importance of the aspecific intron length variable and the necessity of exon definition when the intron length is increased. another interesting extension of the model that we propose occurs if the u1 and u2 subcomplexes can form intermolecular bonding . 
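before discussing that extension, it may help to see the looping probability of this section evaluated numerically; the sketch below is a rough illustration that combines the gaussian approximation to the fjc end-to-end density with the standard asakura-oosawa overlap-volume potential for two hard spheres, so the explicit formulas, the 5 nm cutoff and the assumed bead diameters and crowding parameters are textbook or order-of-magnitude choices of mine and need not coincide with the paper's exact implementation.

    import numpy as np

    kt = 1.0          # energies measured in units of k_b t
    phi = 0.25        # crowder volume fraction (the text quotes 0.2 - 0.3)
    d_small = 5.0     # crowder diameter in nm (as in the figure caption)
    cutoff = 5.0      # the chain counts as "looped" if the surfaces are within 5 nm

    def fjc_pdf(r, n, b):
        # gaussian approximation to the freely jointed chain end-to-end density
        pref = 4.0 * np.pi * r ** 2 * (3.0 / (2.0 * np.pi * n * b ** 2)) ** 1.5
        return pref * np.exp(-3.0 * r ** 2 / (2.0 * n * b ** 2))

    def ao_potential(h, d_big):
        # asakura-oosawa depletion potential between two hard spheres of diameter
        # d_big whose surfaces are a distance h apart, in a bath of spheres of
        # diameter d_small at volume fraction phi (lens-shaped overlap volume)
        a = 0.5 * (d_big + d_small)                   # radius of the exclusion spheres
        s = d_big + h                                 # centre-to-centre distance
        n_dens = phi / (np.pi * d_small ** 3 / 6.0)   # number density of crowders
        v_overlap = np.where(s < 2.0 * a,
                             np.pi * (2.0 * a - s) ** 2 * (4.0 * a + s) / 12.0, 0.0)
        return -kt * n_dens * v_overlap

    def looping_probability(intron_nt, d_big, kuhn_nt=5.0, kuhn_nm=3.0):
        n = intron_nt / kuhn_nt                       # number of kuhn segments
        h = np.linspace(1e-3, 60.0 * kuhn_nm, 20000)  # surface-to-surface separations (nm)
        dh = h[1] - h[0]
        w = fjc_pdf(h + d_big, n, kuhn_nm) * np.exp(-ao_potential(h, d_big) / kt)
        w /= w.sum() * dh                             # normalized weighted distribution
        return float(w[h <= cutoff].sum() * dh)

    # assumed bead diameters: of order 10 nm for the u1/u2 subcomplexes and a few
    # times larger for a complex built across an exon (illustrative values only)
    for intron_nt, d_big in [(150, 12.0), (1000, 12.0), (1000, 40.0)]:
        print(intron_nt, d_big, round(looping_probability(intron_nt, d_big), 3))

the gaussian form of the chain statistics is used here only to keep the sketch short; the full fjc density or the soft-sphere free energy gain of equation [soft gain] could be substituted without changing the structure of the calculation.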
in this casethere would be an additional force driving the intron looping , besides depletion attraction .unfortunately , even if it is likely that such an interaction is present , there is yet no definitive experimental evidence supporting it and , what is more important for our purposes , the nature and form of its potential is still unclear .in particular even the occurrence of a direct interaction is still under debate : while evidences of such an interaction were proposed in some early paper , more recent works suggest instead that intermediate protein(s ) are needed to mediate the interaction .for instance the need of the protein prp5 acting as a bridge between u1 and u2 was recently discussed in . in any case , once the interaction potential will be known , it will be rather straigthforward to generalize our model keeping it into account by suitably modifying the boltzmann factor in equation [ probability - distribution ] .generally speaking , protein - protein interactions are usually short - range ( for example an hydrogen bond is formed at distances of the order of 0.1 - 1 nm ) and in a range of energy compatible with the energy gain due to depletion attraction ( see section [ presentation ] ) .thus we may safely predict that an additional short - range attraction would only lead to an overall increasing of the looping probability .qualitatively the effect would be a left translation of the curves in figure [ loop - prob - distribution ] and a lowering of the curves in figure [ cumulative - diameter ] , but this would not change the main results of this paper . as a matter of fact only a contribution of the depletion attraction type , introducing a dependence of the looping probability on the diameter of subcomplexes ,could explain the switch from intron definition to exon definition .following the idea that the choice of exon or intron definition is related to the looping probability , it is expected that organisms which prevalently use intron definition present a strict constraint on their intron length but not on their exon length , while the opposite is expected for organisms that prevalently use exon definition . as reported in many previously published studies ( and reference therein ) lower eukaryotes ,that prevalently choose intron definition , present a genomic architecture typified by small introns with flanking exons of variable length .higher eukaryotes have the intron length distribution shown in figure [ intron - distribution ] , with the vast majority of introns devoted to exon definition ( see table [ tabella ] ) , but a strictly conserved distribution of exon length , with a single peack around 100 nt . as shown in the upper right panel of figure [ exon - intron ] , the position of the typical exon length is approximately the same of the length of introns devoted to intron definition .these values , as discussed above , ensure an high probability of juxtapose the two u1 and u2 subcomplexes .+ in the case of lower eukaryotes ( three examples in figure [ exon - intron ] ) the intron length distribution presents a single narrow peak in a range compatible with high probability of looping . 
at the same time no constraint on exons are necessary and indeed the distribution of exons length is quite broad with a long tail towards large lengths .+ if the dimensions of merely the u1 and u2 subcomplexes are not enough to ensure an high looping probability across the intron , the exon length is constrained to values that give a sufficient looping probability across the exon , allowing the construction of a larger subcomplex that can then lead to the looping of long introns , as discussed in the previous section . with the exon length distribution of the human genome ( butthis distribution is again well conserved through different higher eukaryotes ) . in the other three panelthe superpositions of intron and exon length distributions for three different organisms ( d.melanogaster,a.gambiae,c.elegans ) that according to prevalently use intron definition ] so far we completely neglected the cooperative effects that could arise from the presence of more than two beads on the mrna string .as discussed in , the pairing of more than two beads moves the energetic balance towards the free energy gain .for example , clustering three beads implies three excluded volumes that overlap , but only two loops that have to be closed ; four beads give a sixfold free energy gain at the cost of closing only three loops , and so forth .however self avoidance can not be neglected in this case , as the increasing number of intron chains progressively makes the looping more energetically expensive . as observed in ( and reference therein ) in three dimensions the entanglement constraints become important when more than eight beads cluster together . above this threshold the free energygain / loss ratio starts to decrease , setting the optimal number of beads around eight . in the framework of exon - defined splicing, each bead corresponds to a complex constructed across an exon .remarkably enough the median value of the number of exons per gene is strongly conserved in higher eukaryotes ( which make an extensive use of exon - defined splicing ) and almost coincides with the optimal number of beads in the depletion attraction model ( see table [ tabella ] and figure [ exon - number ] ) . the same is not true for lower eukaryotes that prevalently use intron definition as shown in table [ tabella2 ] for three model organisms ..for each species we report : the median ( chosen instead of the mean because of the skewness of the distribution ) of the overall distribution of the number of exons per gene ( first column ) ; the mean of the gaussian fit made over the same distribution , discarding the intronless genes ( second column ) ; the percentage of introns which undergo exon - defined splicing according to ( third column ) . [ cols="<,<,<,<",options="header " , ] many more refined and energetically costing mechanisms of splicing are surely present in the cell , and many genes present a huge number of exons ( up to about 490 in human ) , but the fact that the typical value is mantained in different organism around , or just below , eight , as predicted by the model , seems to suggests an evolutionary attempt to mantain the number of beads that maximise the depletion attraction effect in exon juxtaposition .our simple modelization does not ensure the joining of exons in the specific order given by the pre - mrna transcript , allowing the possibility of scrumbled exons in the mature mrna . 
despite the fact that there are several cases of this scrambling of exons reported in the literature, the spliced mrna usually reproduces the original sequence of exons in the dna gene, possibly with exon skipping or other splicing variations which, however, do not affect the exon order. this is probably due to the coupling of splicing to transcription by rna polymerase, which naturally introduces a polarity in the transcript and makes the exons available to the splicing machinery in a sequential manner. we presented a model that highlights the possible role of depletion attraction in the splicing process and we showed that this entropic contribution can also explain quantitatively some empirical and bioinformatical observations. spliceosomal introns can perform various functions, and the resulting selective forces to maintain or introduce introns during evolution can explain the genome architecture of higher eukaryotes, characterized by many introns of typically large size. the necessity to attain a high regulatory capacity within introns can, for example, explain the average increase of intron size in the mammalian branch of the tree of life. at the same time another splice-site recognition modality has been introduced in higher eukaryotes: the exon definition. in the perspective of our model the exon definition pathway was selected by evolution as the simplest way to maintain a balance between the free energy gain due to depletion attraction and the free energy loss caused by looping longer introns. as shown in section [intron-length] the relation between the dimensions of spliceosome subcomplexes and typical intron lengths is in good agreement with our model predictions. with similar arguments we are able to explain the constraints on exon length: if the length of introns increases, decreasing their looping probability, the system is compelled to maintain an exon length suitable for looping, which is essential to pass to exon definition and to obtain diameters of subcomplexes sufficiently large to accomplish the exon juxtaposition. + on the other hand several selective forces can also favour short introns; for example, the high fitness of short introns can be due to a reduction of the time and energy cost of transcription and splicing, if the conditions favour economy over complexity, as in the case of highly expressed genes. despite the possible selective forces behind this (extensively discussed in the case of drosophila melanogaster), the introns of lower eukaryotes have usually been maintained short by evolution. at the same time, there is no evidence of constraints on exon length, a behaviour again perfectly compatible with our model: the complexes on intron boundaries have a dimension which is sufficient to loop the short introns and proceed with the splicing, so no constraint on exon length is required. moreover, evolution led to a proliferation of the number of introns in higher eukaryotes, leading to the genomes with the highest density of introns per gene. this contributes significantly to their proteome complexity: a gene with many exons can be spliced in many alternative ways to produce different protein products from a single gene.
notwithstanding this, the typical number of exons per gene seems constrained around eight in those species that make an extensive use of exon definition. this coincides precisely with the number that allows an optimal exploitation of the depletion attraction in exon juxtaposition. this result may suggest a trade-off between the advantages of a high number of exons, in terms of complexity, and the usage of the energetically cost-free entropic effect of depletion in the splicing process. + this is an author-created, un-copyedited version of an article accepted for publication in physical biology. we thank u. pozzoli and m. cereda for very useful discussions and i. molineris and g. sales for technical support. this work was partially supported by the fund for investments of basic research (firb) from the italian ministry of the university and scientific research, no.
m. j. schellenberg et al., trends biochem. sci. 33 (2008) 243.
|
it has recently been argued that the depletion attraction may play an important role in different aspects of cellular organization, ranging from the organization of transcriptional activity in transcription factories to the formation of nuclear bodies. in this paper we suggest a new application of these ideas in the context of the splicing process, a crucial step of messenger rna maturation in eukaryotes. we shall show that entropy effects and the resulting depletion attraction may explain the relevance of the aspecific intron length variable in the choice of the splice-site recognition modality. on top of that, some qualitative features of the genome architecture of higher eukaryotes can find an evolutionarily realistic motivation in the light of our model. _keywords_: splicing, depletion attraction, introns, macromolecular crowding
|
in this work we study the evolution of the interface between two different incompressible fluids with the same viscosity coefficient in a porous medium with two different permeabilities .this problem is of practical importance because it is used as a model for a geothermal reservoir ( see and references therein ) .the velocity of a fluid flowing in a porous medium satisfies darcy s law ( see ) where is the dynamic viscosity , is the permeability of the medium , is the acceleration due to gravity , is the density of the fluid , is the pressure of the fluid and is the incompressible velocity field . in our favourite units, we can assume the spatial domains considered in this work are ( infinite depth ) and ( finite depth ) .we have two immiscible and incompressible fluids with the same viscosity and different densities ; fill in the upper domain and fill in the lower domain .the curve is the interface between the fluids .in particular we are making the ansatz that and are a partition of and they are separated by a curve .the system is in the stable regime if the denser fluid is below the lighter one , _i.e. _ .this is known in the literature as the rayleigh - taylor condition .the function that measures this condition is defined as in the case with , the motion of a fluid in a two - dimensional porous medium is analogous to the hele - shaw cell problem ( see and the references therein ) and if the fluids fill the whole plane ( in the case with the same viscosity but different densities ) the contour equation satisfies ( see ) they show the existence of classical solution locally in time ( see and also ) in the rayleigh - taylor stable regime which means that , and maximum principles for and ( see ) .moreover , in the authors show that there exists initial data in such that blows up in finite time .furthermore , in the authors prove that there exist analytic initial data in the stable regime for the muskat problem such that the solution turns to the unstable regime and later no longer belongs to . in authors show an energy balance for and that if initially , then there is global lipschitz solution and if the initial datum has then there is global classical solution . in authors study the case with different viscosities . in authors study the case where the interface reach the boundary in a moving point with a constant ( non - zero ) angle .the case where the fluid domain is the strip , with , has been studied in . in this regimethe equation for the interface is }d\eta . \label{iieq0.1}\end{gathered}\ ] ] for equation the authors in obtain the existence of classical solution locally in time in the stable regime case where the initial interface does not reach the boundaries , and the existence of finite time singularities .these singularities mean that the curve is initially a graph in the stable regime , and in finite time , the curve can not be parametrized as a graph and the interface turns to the unstable regime .also the authors study the effect of the boundaries on the evolution of the interface , obtaining the maximum principle and a decay estimate for and the maximum principle for for initial datum satisfying smallness conditions on and on .so , not only the slope must be small , also amplitude of the curve plays a role .both result differs from the results corresponding to the infinite depth case .we note that the case with boundaries can also be understood as a problem with different permeabilities where the permeability outside vanishes . 
in the forthcoming work authors compare the different models , and from the point of view of the existence of turning waves .in this work we study the case where permeability is a step function , more precisely , we have a curve separating two regions with different values for the permeability ( see figure [ iischeme ] ) .we study the regime with infinite depth , for periodic and for `` flat at infinity '' initial datum , but also the case where the depth is finite and equal to . in the region above the curve the permeability is , while in the region below the curve the permeability is .note that the curve is known and fixed .then it follows from darcy s law that the vorticity is where corresponds to the difference of the densities , corresponding to the difference of permeabilities and is the usual dirac s distribution .in fact both amplitudes for the vorticity are quite different , while is a derivative , the amplitude has a nonlocal character ( see , and section [ iisec2 ] ) .the equation for the interface , when and the fluid fill the whole plane , is with if the fluids fill the whole space but the initial curve is periodic the equation reduces to where the second vorticity amplitude can be written as if we consider the regime where the amplitude of the wave and the depth of the medium are of the same order then the equation for the interface , when the depth is chosen to be , is where with a schwartz function . for notational simplicity, we denote and we drop the dependence .the plan of the paper is as follows : in section [ iisec2 ] we derive the contour equations , and . in section [ iisec3 ]we show the local in time solvability and an energy balance for the norm . in section [ iisec5 ]we perform numerics and in section [ iisec4 ] we obtain finite time singularities for equations and when the physical parameters are in some region and numerical evidence showing that , in fact , every value is valid for the physical parameters .in this section we derive the contour equations , and , _ i.e. _ the equations for the interface .first we obtain the equation in the infinite depth case , both , flat at infinity and periodic . given a scalar , curves , and a spatial domain or , we denote the birkhoff - rott integral as where denotes the kernel of ( which depends on the domain ) .if the domain is we have for we have and for the kernel is ( see ) using the kernel , we obtain where we have and we observe that is the limit inside ( the upper subdomain ) and is the limit inside ( the lower subdomain ) .the curve does nt touch the curve , so , the limit for the curve are in the same domain . using darcy s law and assuming that the initial interface is in the region with permeability , we obtain where in the last equality we have used the continuity of the pressure along the interface ( see ) . using we conclude we need to determine .we consider &=&\left(\frac{v^-(h(\alpha))}{\kappa^2}-\frac{v^+(h(\alpha))}{\kappa^1}\right)\cdot{\partial_\alpha}h(\alpha)\\ & = & -{\partial_\alpha}(p^-(h(\alpha))-p^+(h(\alpha)))\\ & = & 0,\end{aligned}\ ] ] where the first equality is due to darcy s law .using the expression we have =\left(\frac{1}{\kappa^2}-\frac{1}{\kappa^1}\right)\left(br(\varpi_1,z)h+br(\varpi_2,h)h\right)\cdot{\partial_\alpha}h(\alpha)+\left(\frac{1}{2\kappa^2}+\frac{1}{2\kappa^1}\right)\varpi_2.\ ] ] we take , with a fixed constant . then where denotes the hilbert transform . 
finally , we have ( see remark 1 for the definition of ) .the identity gives us and thus , and due to the conservation of mass the curve is advected by the flow , but we can add any tangential term in the equation for the evolution of the interface without changing the shape of the resulting curve ( see ) , _ i.e. _ we consider that the equation for the curve is taking , we conclude by choosing this tangential term , if our initial datum can be parametrized as a graph , we have therefore the parametrization as a graph propagates .finally we conclude as the evolution equation for the interface ( which initially is a graph above the line ) .we remark that the second vorticity can be written in equivalent ways notice that in the case with different viscosities the expression for the amplitude of the vorticity located at the interface ( see equation ) is no longer valid . instead , we have to this integral equation , we add the equation or .thus , one needs to invert an operator .this is a rather delicate issue that is beyond the scope of this paper ( see for further details in the case ) .we have that is still valid , but now are periodic functions and . using complex variables notationwe have changing variables and using the identity we obtain equivalently , recall that and are still valid if for a fixed constant .we have thus , the velocity in the curve when the correct tangential terms are added is we can do the same in order to write as an integral on the torus . the initial datum can be parametrized as a graph the equation for the interface reduces to , where the second vorticity amplitude can be written as now we consider the bounded porous medium ( see figure [ iischeme ] ) .this regime is equivalent to the case with more than two because the boundaries can be understood as regions with .as before , we assume that with .we have that is given by . the main difference between the finite depth and the infinite depth is at the level of . as in the infinite depth casewe have where now has the usual definition in terms of in expression .in the unbounded case we have an explicit expression for in terms of and , but now we have a fredholm integral equation of second kind : after taking the fourier transform , denoted by , and using some of its basic properties , we have we can solve the equation for for any with we obtain now we observe that if is a function in the schwartz class , , such that we have that and we obtain recall here that in order to obtain we invert an integral operator . in generalthis is a delicate issue ( compare with ) , but with our choice of this point can be addressed in a simpler way . using and adding the correct tangential term, we obtain if the initial curve can be parametrized as a graph the equation reduces to where is defined in .if by an explicit computation we obtain , thus , any is valid . moreover , we have tested numerically that the same remains valid for any , so would be correct for any .here we obtain an energy balance inequality for the norm of the solution of equation .we define , and . for every smooth solutions of in the stable regime , _i.e. _ , case verifies we define the potentials we have in each subdomain . since the velocity is incompressible we have moreover , the normal component of the velocity is continuous through the interface and the line where permeability changes . 
using the impermeable boundary conditions, we only need to integrate over the curve and .indeed , we have inserting in we get thus , summing and together and using the continuity of the pressure and the velocity in the normal direction , we obtain integrating in time we get the desired result .let be the spatial domain considered , _i.e. _ or . in this sectionwe prove the short time existence of classical solution for both spatial domains .we have the following result : consider a fixed constant and the initial datum , , such that .then , if the rayleigh - taylor condition is satisfied , _ i.e. _ , there exists an unique classical solution of ,h^k(\omega)) ] [ iiteo1 ] we prove the result in the case , being the case similar .let us consider the usual sobolev space endowed with the norm where .define the energy :=\|f\|_{h^3}+\|d^h[f]\|_{l^\infty},\ ] ] with (x,\beta)=\frac{1}{(x-\beta)^2+(f(x)+h_2)^2}.\ ] ] to use the classical energy method we need _ a priori _ estimates . to simplify notationwe drop the physical parameters present in the problem by considering and . the sign of the difference between the permeabilities will not be important to obtain local existence .we denote a constant that can changes from one line to another .* estimates on : * given such that <\infty ] for some constants .we proceed now to prove this claim .we start with the norm . changing variables in we have the inner term , can be bounded as follows \|_{l^\infty}^2(1+\|f\|_{l^\infty})^2\|{\partial_x}f\|_{l^2}^2.\end{gathered}\ ] ] in the last inequality we have used cauchy - schwartz inequality and tonelli s theorem . for the outer partwe have where we have used that and cauchy - schwartz inequality .we change variables in to obtain now it is clear that is at the level of in terms of regularity and the inequality follows using the same techniques . using sobolev embedding we conclude this step . * estimates on \|_{l^\infty} ] .from here we obtain ,h^s(\omega))\cap l^\infty([0,t],h^3(\omega)) ] where .moreover , we have ,c({\mathbb r}))\cap c([0,t],c^2({\mathbb r})). ] represents the distance between and and ] and consider as defined in .then we have that + 1)^k ] .using young inequality we conclude + 1)^k.\ ] ] * estimates on \|_{l^\infty} ] : * the integrals corresponding to in can be bounded ( see ) as + 1)^k.\ ] ] the new terms are the integrals and , those involving in .we have , when splitted accordingly to the decay at infinity , where \|_{l^\infty}+1\right),\end{gathered}\ ] ] and \|_{l^\infty}+1\right).\end{gathered}\ ] ] we conclude the following useful estimate + 1)^k.\ ] ] we have =-\frac{\sin(f(x)+h_2){\partial_t}f(x)}{(\cosh(x-\beta)-\cos(f(x)+h_2))^2}\leq d^h[f]\|d^h[f]\|_{l^\infty}\|{\partial_t}f\|_{l^\infty}.\ ] ] thus , using and integrating in time , we obtain the desired bound for ] we proceed in the same way and we use ( see for the details ) * estimates on : * as before , see for the details concerning the terms coming from in .it only remains the terms coming from : we split the lower order terms ( l.o.t . 
) can be obtained in a similar way , so we only study the terms .we have + 1\|),\end{gathered}\ ] ] + 1\|).\end{gathered}\ ] ] the term is given by integrating by parts \|^2_{l^\infty}+1)\|\varpi_2\|_{l^\infty}(1+\|{\partial_x}f\|_{l^\infty})\end{gathered}\ ] ] * regularization and uniqueness : * these steps follow the same lines as in theorem [ iiteo1 ] .this concludes the result .in this section we perform numerical simulations to better understand the role of .we consider equation where , and .for each initial datum we approximate the solution of corresponding to different .indeed , we take different to get and . to perform the simulations we follow the ideas in .the interface is approximated using cubic splines with spatial nodes .the spatial operator is approximated with lobatto quadrature ( using the function _quadl _ in matlab ) .then , three different integrals appear for a fixed node . the integral between and , the integral between and and the nonsingular ones . in the two first integrals we use taylor theorem to remove the zeros present in the integrand . in the nonsingular integralsthe integrand is made explicit using the splines .we use a classical explicit runge - kutta method of order 4 to integrate in time . in the simulationswe take and .the case 1 ( see figure [ iicase1 ] and [ iicase1b ] ) approximates the solution corresponding to the initial datum for different in case 1 . ] for different in case 1 . ]the case 2 ( see figure [ iicase2 ] and [ iicase2b ] ) approximates the solution corresponding to the initial datum for different in case 2 . ] for different in case 2 . ] the case 3 ( see figure [ iicase3 ] and [ iicase3b ] ) approximates the solution corresponding to the initial datum for different in case 3 . ] for different in case 3 . ] in these simulations we observe that decays but rather differently depending on . if the decay of is faster when compared with the case . in the case where the term corresponding to slows down the decay of but we observe still a decay .particularly , we observe that if ( ) the decay is initially almost zero and then slowly increases .when the evolution of is considered the situation is reversed .now the simulations corresponding to have the faster decay . with these resultwe can not define a _regime for in which the evolution would be _ smoother_. recall that we know that there is not any hypothesis on the sign or size of to ensure the existence ( see theorem [ iiteo1 ] and [ iiteo3 ] ) .in this section we prove finite time singularities for equations , and .these singularities mean that the curve turns over or , equivalently , in finite time they can not be parametrized as graphs .the proof of turning waves follows the steps and ideas in for the homogeneus infinitely deep case where here we have to deal with the difficulties coming from the boundaries and the delta coming from the jump in the permeabilities .let us suppose that the rayleigh - taylor condition is satisfied , _ i.e. _ . then there exists , an admissible ( see theorem [ iiteo1 ] ) initial datum , such that , for any possible choice of and , there exists a solution of and for which in finite time . for short time the solution can be continued but it is not a graph .[ iiteo2 ] to simplify notation we drop the physical parameters present in the problem by considering .the proof has three steps .first we consider solutions which are arbitrary curves ( not necesary graphs ) and we _ translate _ the singularity formation to the fact . 
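The simulations described in the numerics section above (cubic-spline interface, quadrature of the desingularized spatial operator, explicit fourth-order Runge-Kutta in time) can be illustrated with a much-reduced sketch. The code below is not the authors' scheme: as a stand-in for the finite-depth, two-permeability contour equation (whose kernel was lost in extraction) it evolves the classical infinite-depth, equal-viscosity Muskat equation for a graph, with the physical constant absorbed into the time scale, a symmetric midpoint rule in the integration variable instead of Lobatto quadrature, and a compactly supported initial datum on a truncated line. All grid sizes and parameter values are illustrative.

```python
# Reduced sketch: RK4 time-stepping of a Muskat-type contour equation for a
# graph f(x, t).  Stand-in right-hand side (infinite depth, equal viscosities):
#   f_t(x) = (1/2pi) PV int [f'(x) - f'(x-b)] b / (b^2 + (f(x) - f(x-b))^2) db.
# The principal value and the removable singularity at b = 0 are handled simply
# by using a symmetric midpoint grid in b that avoids b = 0.
import numpy as np
from scipy.interpolate import interp1d

L_beta, M = 20.0, 1024                       # truncation and nodes for the b-integral
beta = (np.arange(M) + 0.5) * (2 * L_beta / M) - L_beta   # symmetric grid, avoids b = 0
dbeta = 2 * L_beta / M
xs = np.linspace(-10.0, 10.0, 200)           # interface nodes

def as_curve(fvals):
    # cubic interpolation of the nodal values; the graph is taken to vanish far away
    return interp1d(xs, fvals, kind='cubic', bounds_error=False, fill_value=0.0)

def rhs(fvals):
    f = as_curve(fvals)
    h = 1e-4
    dfx = (f(xs + h) - f(xs - h)) / (2 * h)                   # f'(x) at the nodes
    out = np.empty_like(xs)
    for i, x in enumerate(xs):
        dfs = (f(x - beta + h) - f(x - beta - h)) / (2 * h)   # f'(x - b)
        num = (dfx[i] - dfs) * beta
        den = beta**2 + (f(x) - f(x - beta))**2
        out[i] = dbeta * np.sum(num / den) / (2 * np.pi)
    return out

def rk4_step(fvals, dt):
    k1 = rhs(fvals)
    k2 = rhs(fvals + 0.5 * dt * k1)
    k3 = rhs(fvals + 0.5 * dt * k2)
    k4 = rhs(fvals + dt * k3)
    return fvals + dt * (k1 + 2 * k2 + 2 * k3 + k4) / 6.0

f = np.exp(-xs**2)                           # illustrative initial graph (stable regime)
dt = 0.01
for step in range(5):
    f = rk4_step(f, dt)
    print(step, np.max(np.abs(f)))           # the maximum norm should decay slowly
```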
the second step is to construct a family of curves such that this expression is negative .thus , we have that _ if there exists , forward and backward in time , a solution in the rayleigh - taylor stable case corresponding to initial data which are arbitrary curves _ then , we have proved that there is a singularity in finite time .the last step is to prove , using a cauchy - kovalevsky theorem , that there exists local in time solutions in this unstable case .the previous hypotheses mean that is a curve satisfying the arc - chord condition and only has vertical component . due to these conditions on we have and is odd ( and then the second derivative at zero is zero ) and we get that .for we get we integrate by parts and we obtain , after some lengthy computations , for the term with the second vorticity we have and , after an integration by parts we obtain putting all together we obtain that in the flat at infinity case the important quantity for the singularity is where , due to , is defined as we apply the same procedure to equation and we get the importat quantity in the periodic setting ( recall the superscript in the notation denoting that we are in the periodic setting ) : and , due to , * taking the appropriate curve : * to clarify the proof , let us consider first the periodic setting . given , we consider constants such that and let us define and due to the definition of , we have and using , we get inserting this curve in we obtain with and is the integral involving the second vorticity .we remark that does not depend on .the sign of is the same as the sign of , thus we get and this is independent of the choice of and .now we fix and we take sufficiently large such that we can do that because or , equivalently , if is large enough . the integral is well defined and positive , but goes to zero as grows .then , fixed and in such a way , we take sufficiently large such that .we are done with the periodic case .we proceed with the flat at infinity case .we take as before and and define and we have and we assume . inserting the curve and in and changing variables ,we obtain and we have and using that is such that , we get the remaining integral is and using that is such that , we get putting all together we get and using this bound in we get then , as before , where are the integrals on the intervals and , respectively .we have thus , to ensure that the decay of is faster than the decay of we take .now , fixing , we can obtain and such that and .taking we obtain a curve such that . in order to conclude the argumentit is enough to approximate these curves and by analytic functions .we are done with this step of the proof . *showing the forward and backward solvability : * at this point , we need to prove that there is a solution forward and backward in time corresponding to these curves and . indeed ,if this solution exists then , due to the previous step , we obtain that , for a short time , the solution is a graph with finite energy ( in fact , it is analytic ) .this graph at time has a blow up for and , for a short time , the solution can not be parametrized as a graph .we show the result corresponding to the flat at infinity case , being the periodic one analogous .we consider curves satisfying the arc - chord condition and such that we define the complex strip and the spaces with norm where denotes the hardy - sobolev space on the strip with the norm ( see ) .these spaces form a banach scale . 
for notational conveniencewe write , .recall that , for , we consider the complex extension of and , which is given by with recall the fact that in the case of a real variable graph has the same regularity as , but in the case of an arbitrary curve is , roughly speaking , at the level of the first derivative of the interface .this fact will be used below .we define (\gamma,\beta)=\frac{\beta^2}{(z_1(\gamma)-z_1(\gamma-\beta))^2+(z_2(\gamma)-z_2(\gamma-\beta))^2},\ ] ] (\gamma,\beta)=\frac{1+\beta^2}{(z_1(\gamma)-(\gamma-\beta))^2+(z_2(\gamma)+h_2)^2}.\ ] ] the function is the complex extension of the _ arc chord condition _ and we need it to bound the terms with .the function comes from the different permeabilities and we use it to bound the terms with .we observe that both are bounded functions for the considered curves .consider and the set \|_{l^\infty({\mathbb b}_r)}<r , \|d^h[z]\|_{l^\infty({\mathbb b}_r)}<r\},\ ] ] where ] are defined in and .then we claim that , for , the righthand side of , is continuous and the following inequalities holds : \|_{h^3({\mathbb b}_{r'})}\leq\frac{c_r}{r - r'}\|z\|_r,\\ \label{iieq21}&&\|f[z]-f[w]\|_{h^3({\mathbb b}_{r'})}\leq\frac{c_r}{r - r'}\|z - w\|_{h^3({\mathbb b}_r)},\\ \label{iieq22}&&\sup_{\gamma\in { \mathbb b}_r,\beta\in{\mathbb r}}|f[z](\gamma)-f[z](\gamma-\beta)|\leq c_r|\beta|.\end{aligned}\ ] ] the claim for the spatial operator corresponding to has been studied in , thus , we only deal with the new terms containing . for the sake of brevity we only bound some terms , being the other analogous . using tonelli s theorem and cauchy - schwartz inequality we have that \|_{l^\infty}(1+\|z_2\|_{l^\infty({\mathbb b}_{r'})})\|{\partial_\alpha}z_2\|_{l^2({\mathbb b}_{r'})}.\ ] ] moreover ,we get for the procedure is similar but we lose one derivative . using and sobolev embedding we conclude from here inequality follows . inequality , for the terms involving , can be obtained using the properties of the hilbert transform as in .let s change slightly the notation and write (\gamma) ] .we conclude the proof of the theorem .we observe that in the periodic case the curve is of the same order as , so , even if , this result is not some kind of _ linearization_. the same result is valid if for any ( see theorem [ iiteo4 ] ) .moreover , we have numerical evidence showing that for every and ( and not ) there are curves showing turning effect .let us consider first the periodic setting .recall the fact that and let us define inserting this curve in we obtain that for any possible , in particular let us introduce the algorithm we use . we need to compute where means the in .recall that is two times differentiable , so , we can use the sharp error bound for the trapezoidal rule .we denote the mesh size when we compute the first integral .we approximate the integral of using the trapezoidal rule between .we neglect the integral in the interval , paying with an error denoted by .the trapezoidal rule gives us an error .as we know the curve , we can bound .we obtain we take .putting all together we obtain then , we can ensure that we need to control analytically the error in the integral involving .this second integral has the error coming form the numerical integration , and a new error coming from the fact that is known with some error .we denote this new error as .let us write the mesh size for the second integral . 
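The certified quadrature used above (trapezoidal rule together with its sharp error bound for twice-differentiable integrands, plus an explicit bound for the neglected tail) can be sketched as follows. The toy integrand is a placeholder chosen only because its second derivative and tail are easy to bound by hand; the paper's integrands, mesh sizes and analytic bounds are specific to the curves constructed above and are not reproduced here.

```python
# Composite trapezoidal rule with a rigorous error certificate:
#   |I - T_h| <= (b - a) * h^2 * max|g''| / 12  +  (bound on the neglected tail).
import numpy as np

def trapezoid_with_bound(g, a, b, n, M2, tail_bound=0.0):
    """Return (approximation, rigorous error bound) for int_a^b g, where
    M2 >= max_{[a,b]} |g''| and tail_bound covers the neglected tail."""
    x = np.linspace(a, b, n + 1)
    h = (b - a) / n
    T = h * (0.5 * g(x[0]) + g(x[1:-1]).sum() + 0.5 * g(x[-1]))
    return T, (b - a) * h**2 * M2 / 12.0 + tail_bound

# placeholder example: g(t) = exp(-t^2), with |g''| <= 2 on the real line and
# a tail int_{|t| > 6} exp(-t^2) dt below 1e-15 (bounded by hand)
approx, err = trapezoid_with_bound(lambda t: np.exp(-t**2), -6.0, 6.0,
                                   n=2000, M2=2.0, tail_bound=1e-15)
print(approx, "+/-", err, "   exact:", np.sqrt(np.pi))
```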
then , using the smoothness of , we have we take it remains the error coming from .the second vorticity , , is given by the integral .we compute the integral using the same mesh size as for , .thus , the errors are putting all together we have and we conclude now , using and , we obtain , and we are done with the periodic case .we proceed with the flat at infinity case .we have to deal with the unboundedness of the domain so we define inserting this curve in we obtain that for any possible , then , as before , the function is lipschitz , so the same for , where now are the expressions in and the second integral is over an unbounded interval .to avoid these problems we compute the numerical aproximation of recall that is given by and then , due to the definition of , we can approximate it by an integral over .the lack of analyticity of and the truncation of introduces two new sources of error .we denote them by and .we take and . using the bounds , and we obtain we have and we get for . using this inequalitywe get the desired bound for the second error as follows : the other errors can be bounded as before , obtaining , we conclude and putting together and we conclude . in order to complete a rigorous enclosure of the integral , we are left with the bounding of the errors coming from the floating point representation and the computer operations and their propagation . in a forthcoming paper ( see ) we will deal with this matter . by using interval arithmetics we will give a computer assisted proof of this result .in this section we show the existence of finite time singularities for some curves and physical parameters in an explicit range ( see ) .this result is a consequence of theorem 4 in by means of a continuous dependence on the physical parameters . as a consequence the range of physical parameters plays a roleindeed , we have let us suppose that the rayleigh - taylor condition is satisfied , _ i.e. _ , and take . there are , an admissible ( see theorem [ iiteo3 ] ) initial datum , such that , for any small enough , there exists solutions of such that for . for short time the solution can be continued but it is not a graph .[ iiteo4 ] the proof is similar to the proof in theorem [ iiteo2 ] .first , using the result in we obtain a curve , , such that the integrals in coming from have a negative contribution .the second step is to take small enough , when compared with some quantities depending on the curve , such that the contribution of the terms involving is small enough to ensure the singularity .now , the third step is to prove , using a cauchy - kovalevsky theorem , that there exists local in time solutions corresponding to the initial datum . to simplify notationwe take . then the parameters present in the problem are and . * taking the appropriate curve and : * from theorem 4 in we know that there are initial curves such that .we take one of this curves and we denote this smooth , fixed curve as .we need to obtain such that . 
as inwe define (\gamma,\beta)=\frac{\cosh^2(\beta/2)}{\cosh(z_1(\gamma)-(\gamma-\beta))-\cos(z_2(\gamma)+h_2)},\ ] ] and (\gamma,\beta)=\frac{\cosh^2(\beta/2)}{\cosh(z_1(\gamma)-(\gamma-\beta))+\cos(z_2(\gamma)-h_2)}.\ ] ] from the definition of it is easy to obtain where from the definition of for curves ( which follows from in a straightforward way ) we obtain \|_{l^\infty}+\|d_2^h[z]\|_{l^\infty}\right)\left(1+\frac{\mathcal{k}}{\sqrt{2\pi}}\|g_{h_2,\mathcal{k}}\|_{l^1}\right).\ ] ] fixing and collecting all the estimates we obtain \|_{l^\infty}+\|d_2^h[z]\|_{l^\infty}\right)\left(1+\frac{\mathcal{k}}{\sqrt{2\pi}}\sup_{|\mathcal{k}|<1}\|g_{h_2,\mathcal{k}}\|_{l^1}\right).\ ] ] now it is enough to take latexmath:[\[\label{iieqk1 } * showing the forward and backward solvability : * we define (\gamma,\beta)=\frac{\sinh^2(\beta/2)}{\cosh(z_1(\gamma)-z_1(\gamma-\beta))-\cos(z_2(\gamma)-z_2(\gamma-\beta))},\ ] ] and (\gamma,\beta)=\frac{\cosh^2(\beta/2)}{\cosh(z_1(\gamma)-z_1(\gamma-\beta))+\cos(z_2(\gamma)-z_2(\gamma-\beta))}.\ ] ] using the equations , , and , the proof of this step mimics the proof in theorem [ iiteo2 ] and the proof in and so we only sketch it . as before , we consider curves satisfying the arc - chord condition and such that we define the complex strip and the spaces with norm ( see ) .we define the set \|_{l^\infty({\mathbb b}_r)}<r , \|d^+[z]\|_{l^\infty({\mathbb b}_r)}<r,\\ \|d_1^h[z]\|_{l^\infty({\mathbb b}_r)}<r,\|d_2^h[z]\|_{l^\infty({\mathbb b}_r)}<r\ } , \end{gathered}\ ] ] where ] are defined in , , and , respectively . as before , we have that , for , complex extension of , is continuous and the following inequalities holds : \|_{h^3({\mathbb b}_{r'})}\leq\frac{c_r}{r - r'}\|z\|_r,\\ & & \|f[z]-f[w]\|_{h^3({\mathbb b}_{r'})}\leq\frac{c_r}{r - r'}\|z - w\|_{h^3({\mathbb b}_r)},\\ & & \sup_{\gamma\in { \mathbb b}_r,\beta\in{\mathbb r}}|f[z](\gamma)-f[z](\gamma-\beta)|\leq c_r|\beta|.\end{aligned}\ ] ] we consider , \ ; z_0=z(0).\ ] ] using the previous properties of we obtain that , for small enough , for all .the rest of the proof follows in the same way as in .
|
in this work we study the evolution of the free boundary between two different fluids in a porous medium where the permeability is a two - dimensional step function . the medium can fill the whole plane or a bounded strip . the system is in the stable regime if the denser fluid is below the lighter one . first , we show local existence in sobolev spaces by means of an energy method when the system is in the stable regime . then we prove the existence of curves such that they start in the stable regime and in finite time they reach the unstable one . this change of regime ( turning ) was first proven in for the homogeneous muskat problem with infinite depth . * keywords * : darcy's law , inhomogeneous muskat problem , well - posedness , blow - up , maximum principle . * acknowledgments * : the authors are supported by the grants mtm2011 - 26696 and sev-2011 - 0087 from ministerio de ciencia e innovación ( micinn ) . diego córdoba was partially supported by stg-203138cdsif of the erc . rafael granero - belinchón is grateful to the department of applied mathematics `` ulisse dini '' of the pisa university for the hospitality during may - july 2012 . we are grateful to instituto de ciencias matemáticas ( madrid ) and to the dipartimento di ingegneria aerospaziale ( pisa ) for computing facilities .
|
constructing a confidence interval for a proportion based on a binomial sample is a basic but important problem in statistics . due to the discreteness of the binomial distribution ,it is not possible to construct confidence intervals with exact coverage .thus an interval based on normal approximation , known as the wald interval , is taught in virtually every introductory statistics course .the interval is , where is the sample proportion , and is the percentile of the standard normal distribution .numerous authors have remarked on the surprisingly poor performance of the wald interval .errors in the approximation due to discreteness and skewness ( for small ) can have significant impact on the coverage of the interval even for large . in recent years, its weaknesses have been thoroughly investigated in comparisons of confidence intervals for . gave examples of the erradic behaviour of the wald interval , compared several intervals in terms of coverage and expected length and obtained general asymptotic results using edgeworth expansions . compared twenty methods using different criteria . for recent developments and discussions ,see for instance .a natural alternative to the wald interval is the clopper - pearson interval .it is based on the inversion of the equal - tailed binomial test and hence the interval contains all values of that are nt rejected by the test at confidence level .the lower limit is thus given by the value of such that and the upper limit is given by the such that the computation of and is simplified by the following equality from : where is the density function of a random variable .consequently , the endpoints of the clopper - pearson interval are beta quantiles : is exact in the sense that the minimum coverage over all is at least . for most values of however , especially values close to 0 or 1 , it is far too conservative , giving a coverage that is much larger than the nominal coverage .as several authors have pointed out it is often more natural to study the _mean _ coverage rather than the minimum coverage . in this paperwe construct coverage - adjusted clopper - pearson intervals with the mean coverage in mind , combining bayesian and frequentist reasoning .the intervals are adjusted to have mean coverage with respect to either a prior or a posterior distribution of .the corrected intervals are seen to have several desirable properties in the frequentist setting .a class of coverage - adjusted clopper - pearson intervals is introduced in section [ coverage ] . in section [ comparisons ] these intervals are compared to other popular intervals and new heatmap - style plots for comparing confidence intervals are introduced .the text concludes with a discussion in section [ discussion ] and an appendix with proofs , tables and several figures .as has already been mentioned , is often unnecessarily conservative .this is illustrated in figure [ cpcov ] .it is clear from the figure that if we are willing to accept an interval which has a coverage less than for _ some _ values of , the performance of can be improved by choosing a larger , in which case the actual coverage would be closer to the desired coverage for _ most _ values of .the question , then , is how to choose the new .we propose that should be chosen to satisfy a mean coverage criterion .let be a density function on .a mean coverage corrected clopper - pearson interval is given by the unique solution to where satisifies .e. 
is such that the mean coverage of with respect to is .note that this simply is the ordinary clopper - pearson interval , with chosen so that the mean coverage is .what differs is that needs to be determined before the endpoints are computed .it should be pointed out that the adjusted intervals inherit important properties from .they are fully boundary - respecting , so that , and equivariant in the sense of , meaning that the corresponding interval for is .furthermore , they have very favourable location properties in terms of the box - cox index of symmetry and balance of mesial and distal non - coverage , as described by .finally , the minimum coverage over all is guaranteed to be at least .the choice of affects the performance of greatly . can be thought of as a weight function on , used to put more weight on the performance for certain parts of the parameter space . in the following, we will refer to as being either a prior or posterior density , to show the connection between this weight function and bayesian ideas .the use of a prior distribution for coverage - adjustments can be motivated by the fact that in virtually all investigations , the experimenters will have some prior idea about how large is .in particular , it is often clear beforehand if is close to or far away from . is symmetric in in the sense that the interval has the same properties for and .for this reason , it is reasonable to use a symmetric prior for . priors , being conjugate priors of the binomial distribution , are a natural choice here .we divide the parameter space into three cases : _ close to 0 or 1 ._ when is small , a prior with should be used , as such priors put more weight on the tails of the distribution .we will use the prior in the following , but smaller can certainly be used .the coverage - adjustments will generally be larger for small , as the overcoverage of is largest in this part of the parameter space . _ close to 1/4 or 3/4 ._ for medium - sized , we wish to put approximately the same weight on the tails and the centre of the distribution .the uniform prior is ideal for this .the resulting interval will however give a slight undercoverage for closer to 1/2 , so if there is some worry that that may be above 0.40 , say , a prior with slightly greater than 1 could be used . the interval constructed using the uniform prior seems to coincide with a corrected interval that was described informally by . _ close to 1/2 ._ if is believed to be closer to , a prior with is recommendable .we will use the prior .the coverage - adjustments will be smaller in this part of the parameter space , as comes closest to attaining its nominal coverage around . .] having used priors for coverage correction , it seems natural to consider using a posterior distribution of for coverage - adjustments , in order to get closer to the nominal coverage in areas of the parameters space that given the data are more likely to contain . with a for , the posterior distribution is with density function where is the beta function .thus the posterior coverage corrected clopper - pearson interval is , given , determined by the condition ( [ condition ] ) with the function in the comparison later in the text , we will use the , and priors , with the same reasoning as in the previous section .conditioning the coverage - adjustments on the data may seem hazardous in a frequentist setting , but as we will demonstrate in section [ comparisons ] , this approach leads to short confidence intervals with good coverage properties . 
while can be approximated by using an asymptotic expansion for the coverage to solve the equation ( [ condition ] ) approximately , it is more convenient to use a numerical method with exact coverages .the following lemma ensures that is continuous and decreasing in .this guarantees that easily can be found numerically by using for instance bisection to solve the equation .the proof of the lemma is given in the appendix , along with a table of for different choices of and for .[ app1 ] let be the clopper - pearson interval and let , , be the density of the distribution .the mean coverage of with respect to the density , is continuous and strictly decreasing in .the algorithm for finding using bisection is as follows .given a tolerance , , and a density : 1 .start with an initial lower bound and an upper bound .the initial guess is .2 . set .3 . while : * if then , and .* else , and . * .4 . .for the algorithm to converge , two conditions must be satisfied .first , , i.e. must not exceed the upper bound .second , must be computed with sufficient precision ( determines what is sufficient ) .implementations of the above algorithm in r and ms excel are available from the author .we illustrate the use of the coverage - adjustments with clinical data from an influenza vaccine study performed by . fully vaccinated children younger than 2 years were included in the study . of these contracted influenza during the 2007 - 08 influenza season .the clopper - pearson interval for the proportion of vaccinated children younger than 2 years that will contract influenza is . using a prior correction, we get . letting the quantile function of the distribution , the coverage - adjusted confidence interval is using a posterior correction , and the interval is .following the comparison performed by , two confidence intervals for have emerged as being the intervals to which all other intervals should be compared .these are the wilson and jeffreys prior intervals , presented next . _ the wilson interval ._ like the wald interval , the score interval is based on an inversion of the large sample normal test where is the standard error of . unlike the wald interval , however , the inversion is obtained using the null standard error instead of the sample standard error .the solution of the resulting quadratic equation leads to the confidence interval typically has coverage close to the nominal coverage and comparatively short expected length .indeed , it can be shown that has some near - optimal length properties among intervals with nominal coverage . is therefore the natural benchmark for new confidence intervals .the main drawback of is that its coverages oscillates too much for fixed and close to 0 or 1 .recently , proposed a small modification of the interval that solves this problem .the improved coverage comes at the cost of a slightly wider interval .as the results of our comparison would not change qualitatively if the guan interval were to be used instead of , we stick to the more familiar unmodified version . _the jeffreys prior interval ._ let and let have prior distribution .then the posterior distribution is and letting denote the -quantile of the distribution , a bayesian interval is used the uniform prior in their comparison , whereas used the jeffreys prior .the difference between the two intervals is small .we use the latter and denote it . has performance close to that of the , and is often prefered when is believed to be close to 0 or 1 . 
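To make the construction above concrete, here is a small sketch of the whole pipeline: Clopper-Pearson endpoints as beta quantiles, mean coverage with respect to a Beta(a, b) weight evaluated as a finite sum of incomplete beta integrals (the same rewriting used in the appendix lemma), and bisection on the adjusted significance level. Function names, the bisection bracket and the numbers in the usage lines are our own choices, not the authors' R / MS Excel implementation; the influenza counts in the worked example above were lost in extraction, so the usage lines use made-up data.

```python
# Coverage-adjusted Clopper-Pearson interval: bisection for the adjusted level.
import numpy as np
from scipy.stats import beta
from scipy.special import betaln, gammaln

def cp_interval(x, n, alpha):
    """Clopper-Pearson limits at nominal level 1 - alpha (beta quantiles)."""
    lo = 0.0 if x == 0 else beta.ppf(alpha / 2, x, n - x + 1)
    hi = 1.0 if x == n else beta.ppf(1 - alpha / 2, x + 1, n - x)
    return lo, hi

def mean_coverage(alpha, n, a, b):
    """Mean coverage of the level-(1 - alpha) CP interval under a Beta(a, b) weight."""
    total = 0.0
    for x in range(n + 1):
        lo, hi = cp_interval(x, n, alpha)
        logw = (gammaln(n + 1) - gammaln(x + 1) - gammaln(n - x + 1)
                + betaln(x + a, n - x + b) - betaln(a, b))
        total += np.exp(logw) * (beta.cdf(hi, x + a, n - x + b)
                                 - beta.cdf(lo, x + a, n - x + b))
    return total

def adjusted_alpha(n, alpha, a, b, tol=1e-10, upper=0.5):
    """Bisection for alpha* with mean coverage 1 - alpha; assumes the solution
    lies in (alpha, upper) and uses the monotonicity shown in the lemma."""
    lo, hi = alpha, upper
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if mean_coverage(mid, n, a, b) > 1 - alpha:
            lo = mid          # coverage still above target: alpha can be enlarged
        else:
            hi = mid
    return 0.5 * (lo + hi)

# usage with made-up data: n = 100 trials, x = 3 successes, 95% interval,
# Beta(0.5, 0.5) weight as suggested for proportions close to 0 or 1
n, x, alpha = 100, 3, 0.05
a_star = adjusted_alpha(n, alpha, a=0.5, b=0.5)
print("adjusted alpha  :", a_star)
print("clopper-pearson :", cp_interval(x, n, alpha))
print("coverage-adjusted:", cp_interval(x, n, a_star))
```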
in figure [ figcov1 ]the actual coverages of some confidence intervals with nominal coverage are shown for when .all intervals are equivariant in the sense that the coverage is the same for and .note that the coverages have been computed exactly ( up to machine epsilon ) and thus havent been obtained by simulation .the wilson interval has fairly good coverage properties when is nt close to 0 , in which case it oscillates wildly .the jeffreys prior interval has similar coverage , except for one big dip for a moderately sized .the coverage - adjusted intervals tend to have good coverage properties in the areas dictated by the prior distribution used for the adjustment .some of the prior corrected intervals suffer from either undercoverage or overcoverage for the parts of the parameter space that put low weight on .the expected length of the intervals are shown in figure [ figlen1 ] . in many cases ,the corrected intervals have shorter expected length than the wilson and jeffreys prior intervals . for some intervals , this is due to undercoverage caused by putting more weight on a different part of the parameter space , but in some cases it is a consequence of a succesful correction . in order to compare intervals over the entire parameter space for different values of , we use heatmap - type plots in figures [ figexp1]-[figexp6 ] . studying the plots for coverage andexpected length at the same time gives us a good way of comparing the intervals .the heatmap - type plots , combined with more traditional plots such as those in figures [ figcov1]-[figlen1 ] , give a more complete comparison of different intervals than what has previously been possible .the wilson interval is compared to the prior corrected clopper - pearson interval in figures [ figexp1]-[figexp2 ] and to the posterior corrected clopper - pearson interval in figures [ figexp3]-[figexp4 ] .the corrected intervals simultaneously offer greater coverage and shorter expected length for small .the difference is larger for small and is particularly noticeable at the 99 % confidence level . in the comparison between the jeffreys prior interval andthe posterior corrected clopper - pearson interval in figures [ figexp5]-[figexp6 ] , the corrected interval is found to offer at least as short intervals with the same actual coverage as the jeffreys prior interval .we introduced coverage - adjusted clopper - pearson intervals , where the intervals are adjusted to give mean coverage with respect to either a prior or posterior distribution of .we investigated the properties of several such intervals .the numerical results were presented graphically , partially with new heatmap - type plots . in the comparison with the benchmark wilson and jeffreys prior intervals , we found the coverage - adjusted clopper - pearson intervals to be preferable if is believed to be close to 0 or 1 , as these intervals have both better coverage and shorter expected length in this setting .we have thus seen that it is possible to improve upon the wilson and jeffreys prior intervals for close to 0 or 1 , if we are willing to accept that we use intervals that may have bad coverage properties in regions of the parameter space that are far from where our prior information indicates that is . in conclusion ,the coverage - adjusted clopper - pearson intervals seem to be strong competitors against other methods for constructing confidence intervals for small binomial proportions . 
for to 0.5 , the wilson interval seems to be preferable .the extension of the ideas presented here to one - sided intervals and to other distributions , such as the poisson and negative binomial distributions is straightforward .likewise , it should be possible to apply such corrections to tests about the difference of two binomial proportions .it remains to be seen whether the corrections yield intervals with interesting properties in these cases as well . apart from mean coverage , several other conditions can be used to ensure that the coverage is close to on average .examples include median coverage conditions , minimum mean squared coverage error conditions and minimum absolute coverage error conditions .the author wishes to thank an anonymous reviewer and silvelyn zwanzig for several helpful comments and sven erick alm , who proposed the idea of a posterior correction .all figures were produced using r.changing the order of summation of integration , the mean coverage can be rewritten as since sums of continuous functions are continuous , it suffices to show that is continuous for fixed , , and . seeing as ,this definite integral is a polynomial in and .since the quantile functions of the beta distributions , and thus the limits of integration , are continuous in , the continuity of in follows .similarly , as is strictly increasing in and is strictly decreasing in , and since for all , the definite integral ( [ cont1 ] ) is strictly decreasing in . is the sum of strictly decreasing functions and thus also strictly decreasing . for a given prior distribution , is easily computed numerically given , and , in the case of a posterior correction , .we give a table of for the prior corrected clopper - pearson interval as an example .intervals for different and . in the black pointsthe prior corrected interval has greater coverage , in the white points the wilson interval has greater coverage and in the grey points the intervals have equal coverage ( when rounded to 3 decimal places ) . ]intervals for different and . in the black pointsthe prior corrected interval has greater coverage , in the white points the wilson interval has greater coverage and in the grey points the intervals have equal coverage ( when rounded to 3 decimal places ) . ]intervals for different and . in the black pointsthe posterior corrected interval has greater coverage , in the white points the wilson interval has greater coverage and in the grey points the intervals have equal coverage ( when rounded to 3 decimal places ) . ]intervals for different and . in the black pointsthe posterior corrected interval has greater coverage , in the white points the wilson interval has greater coverage and in the grey points the intervals have equal coverage ( when rounded to 3 decimal places ) . ]intervals for different and . in the black pointsthe posterior corrected interval has greater coverage , in the white points the bayesian jeffreys prior interval has greater coverage and in the grey points the intervals have equal coverage ( when rounded to 3 decimal places ) . ]intervals for different and . in the black pointsthe posterior corrected interval has greater coverage , in the white points the bayesian jeffreys prior interval has greater coverage and in the grey points the intervals have equal coverage ( when rounded to 3 decimal places ) . ]heinonen , s. , silvennoinen , h. 
, lehtinen , p , et al .( 2011 ) , effectiveness of inactivated influenza vaccine in children aged 9 months to 3 years : an observational cohort study , _ the lancet infectious diseases _ , vol .23 - 29 krishnamoorthy , k. , peng , j. ( 2007 ) , some properties of the exact and score methods for binomial proportion and sample size calculation , _ communications in statistics - simulation and computation _ , vol .1171 - 1186
|
we consider the classic problem of interval estimation of a proportion based on binomial sampling . the exact clopper - pearson confidence interval for is known to be unnecessarily conservative . we propose coverage - adjustments of the clopper - pearson interval using prior and posterior distributions of . the adjusted intervals have improved coverage and are often shorter than competing intervals found in the literature . using new heatmap - type plots for comparing confidence intervals , we find that the coverage - adjusted intervals are particularly suitable for close to 0 or 1 . + * keywords : * binomial distribution ; confidence interval ; proportion .
|
key component of a multiple - input - multiple - output ( mimo ) communication system in terms of performance and complexity is the mimo detector , which is used for separating independent data streams at the receiver .the maximum likelihood ( ml ) detectors achieve the optimal error rate performance . however , these types of detectors , including the near - optimal sphere decoder and its variants , are usually not suitable for practical systems due to their high complexity .linear detectors , such as zero - forcing ( zf ) and mmse , achieve suboptimal performance , however , they are widely used in practical systems due to their low complexity implementations . among linear receivers , mmse is the optimal solution and seems to be the mainstream implementation choice due to its superior performance over zf detectors .perfect channel state information ( csi ) is usually assumed in the literature when simulating or analyzing the performance of linear detectors .however , in practice the channel estimates are inherently noisy .important work , has characterized the error rate performance of zf receivers in the presence of channel estimation error .nevertheless , less is known for the case of mmse detectors in practical scenarios . for zf and mmse receivers , the joint effect of phase noise and channel estimation erroris considered in and the performance is analyzed in terms of the degradation in signal - to - noise - plus - interference - ratio ( sinr ) without expressing the closed form performance indicators or error rate analysis .the sinr derivations for the mmse case in are done only for low snr region . in both and , channel estimation error varianceis assumed to be constant for all snrs .this is not realistic approach for packet based or bursty communication systems as the channel estimation error is in fact a function of the snr . in this letter, we analyze the mmse receivers in the presence of channel estimation error , and derive a closed form post - processing snr expression , which provides an accurate estimate of the error rate performance .the error rate performance is investigated for both the constant channel estimation error variance case and the case with a realistic channel estimation algorithm where the estimation error variance is clearly dependent on the channel snr .we believe that it is a very useful tool for throughput prediction in link adaptation protocols and for error rate analysis in general .accuracy of the analytical results is verified through simulations .we consider a mimo system where the transmitter is equipped with antennas , and the receiver uses antennas .the received signal vector can be expressed as where is the transmitted signal vector , is the channel matrix , and is the additive gaussian noise vector with zero mean and covariance matrix =n_0\mathbf{i} ] .the second terms is =n_0\mathbf{w}\mathbf{w}^h ] .below , we calculate the first and last terms in by plugging the error matrix into .rcl [ term1incov ] + & & e + & & + e + & & -e + & & + e + & & + e + & & -e + & & -e + & & -e + & & + e it can be proven that =e\left[\delta\mathbf{h}^h\mathbf{a}\delta\mathbf{h}^h\right]=0 ] , and obtain rcl + & & _e^2(^h^h^h)^h^h + & & + _e^2(^h^h^h^h)^h + & & -_e^2(^h^h)^h + & & -_e^2(^h^h^h)^h + _e^2(^h)^h similarly , the last term in , ] into and obtain the ppsnr in the presence of channel estimation error as . 
& -e_s\sigma_e^{2}{\mbox{tr}}\!\left(\mathbf{h}\mathbf{k}\mathbf{h}^h\mathbf{h}\mathbf{h}^h\right)\!\mathbf{k}\mathbf{k}^h\!\!-\!e_s\sigma_e^{2}{\mbox{tr}}\!\left(\mathbf{h}\mathbf{h}^h\mathbf{h}\mathbf{k}^h\mathbf{h}^h\right)\!\mathbf{k}\mathbf{k}^h\!\!+\!e_s\sigma_e^{2}{\mbox{tr}}\!\left(\mathbf{h}\mathbf{h}^h\right)\!\mathbf{k}\mathbf{k}^h \\[-3pt ] & + n_0\mathbf{w}\mathbf{w}^h+n_0\sigma_e^{2}{\mbox{tr}}\!\left(\mathbf{k}\mathbf{h}^h\mathbf{h}\mathbf{k}^h\right)\mathbf{k}\mathbf{h}^h\mathbf{h}\mathbf{k}^h+n_0\sigma_e^{2}{\mbox{tr}}\!\left(\mathbf{h}\mathbf{k}\mathbf{h}^h\mathbf{h}\mathbf{k}^h\mathbf{h}^h\right)\mathbf{k}\mathbf{k}^h \\[-3pt ] & -n_0\sigma_e^{2}{\mbox{tr}}\!\left(\mathbf{h}\mathbf{k}\mathbf{h}^h\right)\mathbf{k}\mathbf{k}^h - n_0\sigma_e^{2}{\mbox{tr}}\!\left(\mathbf{h}\mathbf{k}^h\mathbf{h}^h\right)\mathbf{k}\mathbf{k}^h+n_0\sigma_e^{2}n_r\mathbf{k}\mathbf{k}^h \end{split } \right)_{\!k , k}}\ ] ] the ber of the system in the presence of channel estimation error can be found simply by plugging as the symbol snr into the awgn ber formulas .for example , the ber of stream for bpsk is , and for gray - coded 16qam .in order to test the performance of the analysis , we simulated transmission of thousands of packets through uncorrelated rayleigh flat fading channels . for each snr point on the ber plots , we randomly generate 1000 i.i.d .realizations of the channel matrix . for each specific realization of the channel ,we transmit 500 packets each of which carries 2000 information symbols .we perform channel estimation for each packet as explained below in _ case 1_. _ case 1 : _ in our simulations , we employed the maximum likelihood ( ml ) channel estimation ( ce ) algorithm , in which the channel estimate is obtained via training symbols that are known to the receiver . during the training phase ,the training matrix is transmitted where is the number of training symbols .the received signal is where is the noise matrix .then , the ml estimate of the channel is given as it was shown that the optimal training signal has the property of .when this orthogonal training signal is employed , the entries of are i.i.d . with , andthe channel estimation noise variance can also be defined as depending on snr definition . ]is .the estimation error in this case is caused by the awgn in this case .the following training signal , which is taken from 802.11n standard , was employed in the simulations . where is the submatrix formed by first rows and first columns of the bigger matrix matrix here is for maximum of 4 spatial streams since the standard supports up to 4 streams . for .] , i.e. $ ]. \ ] ] it should be noted that with this choice of the training matrix , the ml channel estimation at the receiver becomes a very simple operation since the matrix inversion , , is now a trivial operation .we present the simulation results for bpsk in fig .[ bpsk_all_latex3 ] and 16qam in fig .[ qam16_all_latex3 ] with , , mimo configurations . is used in all the simulations .the case of , i.e. perfect channel estimation , is also included in the results . for each channel instance , analytical ber results are obtained by using the ppsnr derived in the previous section .then , these bers are averaged over all realizations of the channel . 
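A compact Monte Carlo sketch of the "case 1" set-up described above is given below: i.i.d. Rayleigh channels, least-squares (ML) channel estimation from an orthogonal training block, and an MMSE filter built from the noisy estimate, followed by BPSK detection. The training matrix here is a scaled DFT block chosen only because it satisfies the orthogonality property P P^H proportional to the identity; it is not the 802.11n matrix quoted in the paper, and the packet counts, antenna configuration and SNR convention (taken as E_s/N_0 with E_s = 1) are illustrative. The closed-form PPSNR expression derived earlier is not reproduced, since its displayed form was lost in extraction.

```python
# Monte Carlo BER of MMSE detection with ML (least-squares) channel estimation.
import numpy as np

rng = np.random.default_rng(0)
nt = nr = 2                       # 2x2 MIMO, BPSK, Es = 1

def crandn(*shape):
    return (rng.standard_normal(shape) + 1j * rng.standard_normal(shape)) / np.sqrt(2)

def ber_mmse(snr_db, n_chan=200, n_sym=500, n_train=4, perfect_csi=False):
    n0 = 10 ** (-snr_db / 10)
    errs = bits = 0
    # orthogonal training: a DFT block satisfies P @ P.conj().T = n_train * I
    P = np.exp(-2j * np.pi * np.outer(np.arange(nt), np.arange(n_train)) / n_train)
    for _ in range(n_chan):
        H = crandn(nr, nt)
        Yt = H @ P + np.sqrt(n0) * crandn(nr, n_train)        # training phase
        Hhat = H if perfect_csi else Yt @ P.conj().T / n_train  # ML estimate
        W = np.linalg.solve(Hhat.conj().T @ Hhat + n0 * np.eye(nt),
                            Hhat.conj().T)                    # MMSE filter (Es = 1)
        s = 2.0 * rng.integers(0, 2, (nt, n_sym)) - 1.0       # BPSK symbols
        y = H @ s + np.sqrt(n0) * crandn(nr, n_sym)
        shat = np.sign((W @ y).real)
        errs += np.count_nonzero(shat != s)
        bits += s.size
    return errs / bits

for snr in (0, 5, 10, 15):
    print(snr, ber_mmse(snr, perfect_csi=True), ber_mmse(snr))
```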
first thing to notice in fig .[ bpsk_all_latex3 ] and fig .[ qam16_all_latex3 ] is that for , simulation and analysis curves exactly match .performance is significantly degraded for the systems experiencing channel estimation errors .this is particularly evident for the configurations . as it can be seen in fig .[ bpsk_all_latex3 ] and fig .[ qam16_all_latex3 ] , our analysis gives a very tight approximation of the real performance .for bpsk , and all of the 16qam configurations the analysis results exactly match the simulated performances .for bpsk , and configurations the analysis results are upper - bounds to the real performance at high snr , however , they are still very close to the real performances .the analysis results become tighter for higher order modulations and higher order mimo configurations .this is because of the fact that the gaussian assumption , which is made for the post - detection noise , is more valid at higher order modulations and mimo configurations . at low snrs ,the total post detection noise is dominated by the additive white gaussian noise component therefore the assumption is valid even for lower configurations .however , at high snrs the residual interference components from other spatial streams becomes dominant and is loosely approximated as gaussian for lower order constellations and mimo configurations .it is interesting to note that in contrast to the results obtained for zf detector by , we do not observe any error floor on the performance .this is due to the fact that the channel estimation error variance for ml estimation gets smaller as snr increases .this is the situation that occurs in practical packet based or bursty communication systems where the channel estimation is performed for every packet prior to data detection , and hence experiences the same noise variance as the data transmission .therefore the channel estimation quality is dependent on the snr . on the other hand ,error floors are observed in because of the assumption that remains constant independent of the snr .this case is investigated below in _ case 2_. _ case 2 : _ in addition to ml channel estimation results , we also performed simulations with constant . unlike the first case ,the channel estimation quality is independent of the snr .this situation might arise either when there is a ready channel estimate to be used by the receiver formed elsewhere with a different additive noise variance , or the channel estimation is outdated and the major error in the channel estimation comes from the mobility changes in the channel . in fig .[ qpsk ] , the ber performance of a qpsk system is investigated for using the estimation error model in ( 4 ) .each packet observes a different realization of the random matrix with the designated variance . 
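The "case 2" error model described above can be sketched in a few lines: the estimate is the true channel plus an i.i.d. complex Gaussian perturbation whose variance is held fixed, independent of the operating SNR. The variance value below is illustrative; this H_hat would simply replace the training-based estimate in the Monte Carlo sketch given after the "case 1" discussion.

```python
# "Case 2" channel estimation error model with a constant error variance.
import numpy as np

rng = np.random.default_rng(1)
nt = nr = 2
sigma_e2 = 0.01                               # fixed, SNR-independent (illustrative)

H = (rng.standard_normal((nr, nt)) + 1j * rng.standard_normal((nr, nt))) / np.sqrt(2)
dH = np.sqrt(sigma_e2 / 2) * (rng.standard_normal((nr, nt))
                              + 1j * rng.standard_normal((nr, nt)))
H_hat = H + dH                                # imperfect estimate fed to the detector
print(np.mean(np.abs(dH) ** 2), sigma_e2)
```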
as expected , we observe an error floor in the performance due to the constant estimation error variance , as in the zf detector case studied in . more importantly , these error floors are the same as the ones observed for the zf detector , because the mmse and zf detectors exhibit the same behaviour at asymptotically high snr . the simulation results in this case also agree with the analysis . in this letter , we presented the analysis of post - processing snr for practical mimo mmse receivers which experience imperfect channel estimation . the performance of mmse receivers in the presence of channel estimation error is investigated and shown to be accurately estimated via the analytical results . we verified the tightness of the analytical results via simulations . besides the theoretical contributions , we believe that our closed - form ppsnr expression can be useful for link adaptation purposes in real mimo systems . there exist link adaptation algorithms based on ppsnr ; however , perfect csi is always assumed , which might lead to incorrect prediction of the throughput . more accurate prediction can be achieved using the results presented in this paper .
|
we present the derivation of post - processing snr for minimum - mean - squared - error ( mmse ) receivers with imperfect channel estimates , and show that it is an accurate indicator of the error rate performance of mimo systems in the presence of channel estimation error . simulation results show the tightness of the analysis . * keywords * : mimo , mmse receiver , post - processing snr .
|
slave clocks method is proposed to overcome the limit of conventional ieee 1588 clock synchronization approaches causing clock synchronization errors in asymmetric links by way of using only one - way exchange of timing messages , where the authors claim that multiple unknown parameters in clock synchronization i.e. , clock offset , clock skew , and master - to - slave delay can be estimated simultaneously using more equations than usual resulting from the use of dual slave clocks . the claim ,however , is contrary to the well - known fact that the clock offset and the delay can not be differentiated with only one - way message dissemination . to investigate the issues in the formulation of multi - parameter estimation in , we first model the master clock and the dual slave clocks generated by a common signal in terms of an _ ideal , global reference time _ . for simplicity ,we consider continuous clock models and ignore clock jitters in modeling ; in this case , the master clock and the dual slave clocks at are given by where and are the frequencies of the master clock and the common clock driving the dual slave clocks , and , , and are the phase differences of the master clock and the dual slave clocks ( i.e. , clock 1 and clock 2 shown in fig. of ) with respect to the ideal reference clock , respectively .given these models , we can describe the two slave clocks in terms of the master clock ( i.e. , ) as follows : where note that at the slave , ( i.e. , normalized clock skew ) and are unknown because these are parameters related with the master clock , while and though their true values are also unknown are controllable and their ratio ( i.e. , ) can be set to the frequency ratio between the slave clocks due to the dual clock generation described in ( i.e. , 2 when both clocks begin at the same time ; see fig.[fg : phase_ratio ] for illustration ) .now consider the equations ( 5 ) and ( 6 ) of describing a relationship between the times of slave clocks when the -th sync message is received : where is the master - to - slave delay , is a common offset of the slave clocks , and and are random jitters of slave clock 1 and 2 within a period , respectively .we can see that , except for the noise components ( i.e. , and representing clock jitters ) , ( [ eq : master_slave_clock1 ] ) and ( [ eq : master_slave_clock2 ] ) are generalized expressions ( i.e. , at any time instant ) for time relationship between the master and the slave clocks .chin and chen argue that , because a single signal drives both the clocks as shown in fig . 3 of , they have the same offset ( i.e. , ) with respect to the master clock .this , however , is not the case in general ; from ( [ eq : slave_clock1_phase ] ) and ( [ eq : slave_clock2_phase ] ) , we obtain the condition for the slave clocks to have a common offset with respect to the master clock ( i.e. , ) as follows : as we discussed with ( [ eq : clock_skew])([eq : slave_clock2_phase ] ) , the number of parameters of the left - hand side of ( [ eq : phase_condition ] ) can be reduced to one ( e.g. , when ) , but at the the right - hand side , both and are unknown at the slave and parameters to be estimated . in other words , for the equations ( 5 ) and ( 6 ) of to be valid , not only the value of ( or ) but also the values of and should be known at the slave , which verifies that chin and chen s formulation of multi - parameter estimation is invalid .note that in case of ( i.e. 
, both the slave clocks begin ( or reset ) at the same time ) , the times of the slave clocks should meet the following condition ( again , ignoring random noise components for simplicity ) : in such a case the equations ( 5 ) and ( 6 ) of should be rewritten as unlike the original equations of , ( [ eq : new_slave_clock1 ] ) and ( [ eq : new_slave_clock2 ] ) cannot be manipulated to separate the delay ( ) from the clock offset ( ) or vice versa , which invalidates the resulting maximum likelihood ( ml ) estimates in . note that , in practical applications like clock synchronization in wireless sensor networks ( wsns ) , the best one can do with these equations is the estimation of the clock skew and the clock offset assuming that is negligible and that as suggested in .
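A small numerical illustration of the ambiguity discussed in this comment is sketched below: with one-way sync messages, the slave's receive timestamps depend on the master-to-slave delay and the clock phases only through a single lumped intercept, so the delay and the clock offset cannot be identified separately, even when both slave clocks are derived from the same oscillator and started together. All parameter values are made up for illustration.

```python
# One-way sync: delay and offset enter the observations only as one lumped term.
import numpy as np

f_m, theta_m = 1.0, 0.30          # master clock: C_m(t) = f_m * t + theta_m
f_s1 = 1.0001                     # slave clock 1 frequency
r = 2.0                           # dual-clock frequency ratio (clock 2 = r * clock 1)
period = 0.1                      # sync interval in master time (seconds)
T1 = theta_m + period * np.arange(100)        # master timestamps in the sync messages

def slave_timestamps(d, theta_s1):
    """Receive timestamps of the two slave clocks for delay d and phase theta_s1.
    Both slave clocks are driven by one signal and start (reset) together."""
    t_send = (T1 - theta_m) / f_m             # global send times
    t_recv = t_send + d                       # global receive times
    T2_clk1 = f_s1 * t_recv + theta_s1
    T2_clk2 = r * (f_s1 * t_recv + theta_s1)  # common start => clock 2 = r * clock 1
    return T2_clk1, T2_clk2

# two very different (delay, phase) pairs with the same lumped intercept
a1, a2 = slave_timestamps(d=0.010, theta_s1=0.500)
b1, b2 = slave_timestamps(d=0.060, theta_s1=0.500 - f_s1 * 0.050)
print(np.max(np.abs(a1 - b1)), np.max(np.abs(a2 - b2)))   # ~0: indistinguishable

# what IS identifiable from one-way data: the slope (skew) and the lumped intercept
slope, intercept = np.polyfit(T1, a1, 1)
print(slope, f_s1 / f_m, intercept)
```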
|
in the above letter , chin and chen proposed an ieee 1588 clock synchronization method based on dual slave clocks , where they claim that multiple unknown parameters i.e. , clock offset , clock skew , and master - to - slave delay can be estimated with only one - way time transfers using more equations than usual . this comment investigates chin and chen s dual clock scheme with detailed models for a master and dual slave clocks and shows that the formulation of multi - parameter estimation is invalid , which affirms that it is impossible to distinguish the effect of delay from that of clock offset at a slave even with dual slave clocks . clock synchronization , clock offset , clock skew , path delay .
|
we are currently witnessing the beginning of a quantum engineering revolution , marking a shift from `` classical devices '' which are macroscopic systems described by deterministic or stochastic equations , to `` quantum devices '' which exploit fundamental properties of quantum mechanics , with applications ranging from computation to secure communication and metrology . while control theory developed from the need for predictability in the behavior of `` classical '' dynamical systems , quantum filtering and quantum feedback control theory deals with similar questions in the mathematical framework of quantum dynamical systems .system identification is an essential component of control theory , which deals with the estimation of unknown dynamical parameters of input - output systems .a similar task arises in the quantum set - up , and various aspects of the _ quantum system identification _ problem have been considered in the recent literature , cf . for a shortlist of recent results . in this paper, we focus on the class of _ passive linear quantum system _ which have been extensively studied in recent years , and can be used to describe or approximate a large number of physical systems .the system consists of a number of quantum variables ( e.g. the electromagnetic field inside an optical cavity ) , and is coupled with the quantum stochastic input consisting of non - commuting noise processes ( e.g. a laser impinging onto the cavity mirror ) . as a result of the quantum mechanical interaction between system and input , the latter is transformed into an output quantum signal which can be measured to produce a classical stochastic measurement process . in this context , we address the problem of identifying the linear system by appropriately choosing the state of its input and performing measurements on the output ( see figure [ sys i d setup ] ) . in this `` time - dependent input '' scenariowe characterize the equivalence class of systems with the same input - output relation ( transfer function ) , find a canonical parametrization of the space of equivalence classes in terms of physical parameters , and investigate special parametric models in which the system is fully identifiable .these results are the first steps of an ongoing project aimed at designing statistically and computationally efficient input states , output measurements , and estimators , as well as characterising the class of equivalent systems in the `` stationary input '' scenario , which involves the analysis of the power spectrum .although our investigations are guided and motivated by classical results going back to , the quantum setting imposes specific constraints and the goal is to identify quantum features related with e.g. unitarity , entanglement and uncertainty principles .setup of system identification for linear quantum systems .the experimenter can prepare a time - dependent input state , and perform a continuous - time measurement on the output , from which the unknown system parameters are estimated .the input - output relation is encoded in the transfer function . ]the paper is structured as follows . in sectionii we introduce the set - up of linear quantum systems , and formulate the system identification problem . 
in section iiiwe show that two minimal systems have the same transfer function if and only if their associated hamiltonian and coupling matrices are related by a unitary transformation ( cf .theorem [ th.equivalence.class ] ) , amounting to a change of basis in the space of continuous variables modes .this yields a necessary and sufficient condition for the identifiability of a passive linear system , which is then applied to several examples with different hamiltonian parametrizations . in sectioniv we provide a concrete characterisation of the identifiable set of parameters with direct interpretation in terms of the coupling and hamiltonian matrices . finally , in section v we characterize a wide class of identifiable quantum linear networks , by employing the concept of `` infection '' introduced in .we use the following notations : for a matrix , the symbols , represent its hermitian conjugate and transpose of , i.e. , and , respectively . for a matrix of operators , , we use the same notation , in which case denotes the adjoint to .in this section we briefly review the framework of linear classical and quantum dynamical systems and then formulate the quantum identifiability problem .a classical linear system is described by the set of differential equations where is the state of the system , is an input signal , and is the output signal .the observer can control the input signal and observe the output , but does not have access to the internal state of the system .the input signal can be deterministic , in which case we deal with a set of odes or stochastic , in which case the equations should be interpreted as sdes . apart from the input and the initial state of the system , the dynamics is determined by the ( real ) matrices . to find the relation between input and output it is convenient to work in the laplace domain .the laplace transform of is defined by (s ) : = \int_0^\infty e^{-st}x(t)dt,\ ] ] where .then , the following input - output relation holds : (s ) = \xi(s){\cal l}[u](s),\ ] ] where is the _ transfer function matrix_. system identification deals with the problem of estimating the matrices or certain parameters on which they depend , from the knowledge of the input and output processes . from it is clear that the observer can at most determine the transfer function by preparing appropriate inputs and observing the output .the identifiability problem is closely related to the fundamental system theory concepts of _ controllability _ and _ observability_. the system is controllable if for any states and times there exists a ( piece - wise continuous ) input such that the initial and final states are given by and , respectively .this turns out to be equivalent to the fact that the controllability matrix ] .this is in turn equivalent to the fact that the observability matrix ^{t} ] and annihilation operators ^{t} ] whose algebraic properties are characterised by the commutation relations = \min\{s ,t\ } \delta_{ij } \hat{1},\ ] ] or alternatively by = \delta(t - s ) \delta_{ij}\hat{1}.\ ] ] where ^t ] of is defined as in , for . as we will be assuming that the system is stable , the initial state of the system is irrelevant in the long time limit , and we can set its mean to zero . 
in the laplace domainthe input - output relation is a simple multiplication (s ) = \xi(s){\cal l}[\hat{{\mbox{\boldmath }}}](s),\ ] ] where is the transfer function matrix : broadly speaking , by system identification we mean the estimation of the parameters and which completely characterize the linear quantum system described by and .this task can be analysed in various scenarios , depending on the experimenter s ability to prepare the state of the input and the initial state of the system , and the type of measurements used for extracting information about the dynamics . herewe restrict to a scenario which is most suitable for quantum control applications , and is a close extension of the classical set - up .more precisely , we assume that the experimenter can prepare input field in a coherent state whose mean vector has a desired time dependence and can perform standard ( e.g. homodyne and heterodyne ) measurements on the output , but does not have direct access to the system . by equation ,the frequency domain output modes are related to their input correspondents , through the linear transformations .note that since must satisfy canonical commutation relations similar to , the matrix must be unitary for all , as it can also be verified by direct computation .it is clear that the experimenter can at most determine the transfer function , and this can be done by preparing appropriate inputs ( e.g. sinusoids with a certain frequency ) and observing the outputs .the transfer function matrix can be determined from the relation between input and output fields (s ) = \xi(s ) { \cal l}[\beta](s).\ ] ] more generally , we can consider that the system matrices are not completely unknown but depend on an unknown parameter such that and correspondingly .the task is then to estimate the unknown parameter using the input and output relations ( see fig .[ sys i d setup ] ) , and we define the identifiability of the system as follows .[ def of identifiability ] the system parameter is identifiable if for all implies .in this section , we provide some basic necessary and sufficient conditions for the passive linear system and to be identifiable , in the sense of definition [ def of identifiability ] .the concepts of controllability and observability have a straightforward , though arguably non unique , extension to the quantum domain .the system defined by and is controllable if for any times and any means there exists a coherent input which drives the initial coherent state into the final state over the time interval ] .this is equivalent to the fact that the _ observability matrix _ ^{t}\ ] ] has full row rank . as in the classical case ,if the system is not controllable or observable then there exists a lower dimensional system with the same transfer function as the original one .therefore , we focus on minimal quantum systems .the following lemma shows that in the case of passive quantum linear systems we need to check only one of the controllability and observability conditions to verify that the system is minimal .for the quantum linear system and , the controllability and the observability conditions are equivalent . 
from the result of system theory , controllability is equivalent to the following condition : this implies that the following condition holds : to prove this , suppose that there exists a vector satisfying and .this implies that and , thus we have .but now is contained in the kernel of , this is contradiction to .the condition is the iff condition for controllability , thus equivalently observability .we here provide an additional result for later use .[ stability lemma ] the quantum linear system and having the minimality property is stable .because of the minimality , the system satisfies the condition ; hence is an eigenvector of and is the corresponding eigenvalue .then the relation readily leads to , which is strictly negative due to .therefore is a hurwitz matrix . as noted above, by appropriately choosing the input signal , the observer can effectively identify the transfer function , while other independent parameters in the system matrices are not identifiable .the following theorem gives a precise characterization of systems which are equivalent in the sense that they can not be distinguished based on the input - output relation .[ equivalent class ] let and be two linear systems as defined in and , and assume that both systems are minimal. then they have the same transfer function if and only if there exists a unitary matrix such that it is well known in the system control theory that two minimal systems have equal transfer functions ( we here omit the trivial constant term ) if and only if there exists an invertible matrix satisfying note that is not assumed to be unitary . using the second and third conditions we have , which further gives =0 ] holds . combining these two resultswe obtain =0 ] with , where means the symmetrization , e.g. , for the relations can be proved by induction with respect to , noting that for we have and respectively , and the step can be obtained from the expansion of , by using all the previous steps .examples of quantum linear networks ; ( a ) chain , ( b ) tree , and ( c ) ring . in each case , only the first node is coupled to the fields ( wavy lines ) .the straight lines represent interactions between nodes . ]we will apply now the identifiable conditions to some quantum linear networks . for simplicity, we will assume that the coupling matrix is known .consider a chain with three nodes , as depicted in fig .[ simplenetworks ] ( a ) .the system couples with a single input field ( wavy lines ) through the first node , such that and the coupling matrix is .\ ] ] the system hamiltonian has two terms ( straight lines ) describing the interaction between neighboring nodes , and is given by hence , \qquad \theta= ( \theta_{1 } , \theta_{2}),\ ] ] where and are unknown coupling constants which need to be estimated . to test the identifiabilitywe apply lemma [ simple test ] and compute therefore implies , and implies .the two conditions together give , if .one can verify that no additional relations arise from with , and therefore the parameters are identifiable up to a sign . for this system , we immediately see that the controllability condition ( thus the observability condition as well ) is satisfied. 
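The conclusions of Example 3.1 can be confirmed numerically. The sketch below assumes the standard state-space form of a passive linear quantum system, A = -i*Omega - (1/2) C^dag C with transfer function Xi(s) = I - C (sI - A)^{-1} C^dag; this convention is an assumption here, since the explicit expressions are not reproduced above, but the conclusion does not depend on the choice of a consistent convention. The values of kappa, theta1 and theta2 are arbitrary test numbers. The code checks that the controllability matrix has full rank and that flipping the signs of both couplings leaves the transfer function unchanged.

```python
import numpy as np

def chain_system(theta1, theta2, kappa=1.0):
    # three-node chain: nearest-neighbour Hamiltonian, field coupled to node 1 only
    Omega = np.array([[0.0, theta1, 0.0],
                      [theta1, 0.0, theta2],
                      [0.0, theta2, 0.0]], dtype=complex)
    C = np.array([[np.sqrt(kappa), 0.0, 0.0]], dtype=complex)
    A = -1j * Omega - 0.5 * C.conj().T @ C      # assumed passive form
    return A, C

def transfer(A, C, s):
    n = A.shape[0]
    return np.eye(C.shape[0]) - C @ np.linalg.solve(s * np.eye(n) - A, C.conj().T)

def controllability_rank(A, B):
    # rank of [B, AB, ..., A^{n-1}B]; the sign/constant in B does not affect it
    n = A.shape[0]
    blocks = [np.linalg.matrix_power(A, k) @ B for k in range(n)]
    return np.linalg.matrix_rank(np.hstack(blocks))

theta1, theta2 = 0.7, 1.3
A, C = chain_system(theta1, theta2)
print("controllability rank:", controllability_rank(A, C.conj().T), "of", A.shape[0])

A_flip, C_flip = chain_system(-theta1, -theta2)
for s in (0.3 + 1.0j, 2.0 - 0.5j, 0.1 + 3.0j):
    diff = np.abs(transfer(A, C, s) - transfer(A_flip, C_flip, s)).max()
    print(f"s = {s}: max |Xi(theta) - Xi(-theta)| = {diff:.2e}")
```

With nonzero couplings the rank is full, and the reported differences are at machine precision, consistent with the statement that only the magnitudes of the couplings can be recovered from the transfer function.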
thus we can apply corollary [ similarity transformation ] and find that the unitary matrices , , and satisfy and transform the system with parameter into that with the system with parameter and , respectively .this gives an alternative proof of the fact that the equivalence class consists of the parameters .a third route is to look directly at the transfer function given by and note that the poles give us enough information to determine both and .note when ( i.e. , there is no connection between nodes 1 and 2 ) , , hence clearly can not be estimated .the second example is a simple tree shown in fig . [ simplenetworks ]node 1 is coupled to the input field and is connected to both nodes 2 and 3 , thus the system hamiltonian is of the form hence we have .\ ] ] the matrix is the same as in . the additional hamiltonian is necessary for the parameters and to be identifiable , due to the following reason .if , we obtain when is an even number while it takes otherwise .therefore , we can identify but not the individual components .this is due to the fact that when we can not distinguish the nodes 2 and 3 . to break this symmetry we introduce the term and we get implies that all parameters are identifiable up to the signs of and .the last example is a ring - type network shown in fig .[ simplenetworks ] ( c ) , where only the first node is accessible like the previous two cases . for this systemthe hamiltonian is given by thus we have ,\ ] ] while the matrix is given by ] , we have ,~~ c=[\sqrt{2a_1},~0],\ ] ] which have exactly the same forms as those in example 3.1 with specifically taken .note that the condition yields ; indeed this relation is satisfied for the two - node chain network , as easily seen by again setting in example 3.1 . in section [ reconstruction from classical method ]we have shown that the system matrices can be reconstructed through typical realization methods employed in classical system theory .we here present another procedure that directly reconstructs the equivalent class of system matrices , using the specific constraints on the quantum systems .we begin with the simple siso model where the coupling matrix is of the form with is an unknown parameter ; that is , we assume that only a single node is accessible , as in the previous examples .however , we do not assume a specific structure on and write it as ,\ ] ] where is a hermitian matrix with dimension .in this case , the transfer function is given by note again that we are assuming that is identifiable ; that is , is a known complex rational function .the parameters are then reconstructed as follows .first , through a straightforward calculation we have which thus leads to next , with the use of the knowledge about obtained above , we can identify using the following equation : .\ ] ] now , and have been obtained in addition to .this means that the function is known .we diagonalize as with .then , is of the form where is the -th element of .this implies that can be detected by examining the function ; that is , is the value on the imaginary axis such that diverges .then , ( assuming that has non - degenerate spectrum ) we can further determine from , let us express as with real phases and define .then , can be written \left [ \begin{array}{cc } \omega_{11 } & |e'| \\ |e'|^\top & \tilde\lambda \\\end{array } \right ] \left [ \begin{array}{cc } 1 & 0 \\ 0 & e^{i\phi } v^\dagger \\ \end{array } \right],\ ] ] where ] is the basis vector having zeros except the element .we further assume that the coupling between the 
system and the field is known and specified by the matrix whose support is spanned by a set of basis vectors for some set of vertices , the restriction of to this subspace being strictly positive . infection property .the colored node indicates that it is infected , and the arrow indicates that the infection occurs along that edge . through the steps from (a ) to ( e ) , the network becomes infected . ] the crucial property we will require of is that it is _ infecting _ for the graph , which can be defined sequentially by the following conditions ( see fig .[ infection graph ] ) : * at the beginning the vertices in are infected ; * if an infected vertex has only one non - infected neighbour , the neighbour gets infected ; * after some interactions all nodes end up infected .roughly speaking , this infection property means that the network is similar to a chain " , where the neighboring nodes are coupled to each other .such a chain structure often appears in practical situations , and as shown in , it can be fully controlled by only accessing to its local subsystem .also it is notable that in general a chain structure realizes fast spread of quantum information and is thus suitable for e.g. distributing quantum entanglement .the result we present here is that such a useful network is always identifiable under a mild condition .[ infection theorem ] let be given by eq . , and assume that the support of is spanned by with having the infecting property .then , if the system is minimal , is identifiable .recall that by theorem [ similarity transformation ] two parameters are in the same equivalence class if and only if there exists an unitary matrix such that and the latter condition implies = 0 $ ] and in particular commutes with projection onto the support of so that with unitary on the orthogonal complement of the support of .let us write the hamiltonian in the block form according to the partition : then eq . implies that the identity means that furthermore , since is infecting , there exists at least one vertex which is connected to exactly one vertex , so that can be written as eq. then implies which means that is an eigenvector of and for some phase .but since the coefficients of are assumed to be real , this implies that additionally , since , a decomposition of the form holds with the identity block supported by the index set .the same argument can now be repeated for the set , and by using the infecting property , all vertices will be eventually included in the growing set of indices , so that at the end we have .consequently , from corollary [ similarity transformation ] , the system is identifiable . from this result , we now readily see that the chain system studied in example 3.1 in section iii c is identifiable , since clearly this system is infecting . 
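The infecting property is purely combinatorial and can be checked by iterating the rule directly. The sketch below encodes the three-node chain, tree and ring of Fig. [simplenetworks], with node 1 as the only vertex coupled to the field, and reports whether the whole network ends up infected.

```python
# Checking the "infecting" property of a seed set in a graph: start with the seed
# infected; whenever an infected vertex has exactly one non-infected neighbour,
# that neighbour becomes infected; the seed is infecting if eventually every
# vertex is infected.
def is_infecting(adjacency, seed):
    infected = set(seed)
    changed = True
    while changed:
        changed = False
        for v in list(infected):
            outside = [w for w in adjacency[v] if w not in infected]
            if len(outside) == 1:          # exactly one non-infected neighbour
                infected.add(outside[0])
                changed = True
    return len(infected) == len(adjacency)

# three-node networks of Fig. [simplenetworks], field coupled to node 1 only
chain = {1: [2], 2: [1, 3], 3: [2]}
tree  = {1: [2, 3], 2: [1], 3: [1]}
ring  = {1: [2, 3], 2: [1, 3], 3: [1, 2]}

for name, graph in (("chain", chain), ("tree", tree), ("ring", ring)):
    print(f"{name:5s}: infecting = {is_infecting(graph, {1})}")
```

For these graphs only the chain is infecting, in agreement with the discussion that follows.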
on the other hand ,the tree and the ring structural systems examined in the same section are not infecting , hence theorem [ infection theorem ] states nothing about the identifiability of these systems ; in fact , as shown there , the tree system is identifiable , while the ring one is not .in theorem [ equivalent class ] we have shown that minimal passive linear systems with the same transfer function are related by unitary transformations .this is different from classical systems where the equivalence class is given by similarity transformations , the reason for that being the more rigid structure of the system equations and in the case of passive quantum systems .the characterisation of the equivalence class for general `` active '' linear systems remains an open problem .another important issue which was addressed only briefly here is how to actually estimate the system parameters .more precisely , what kind of input state should be chosen and what measurement should be performed on the output .the performance of such a design of experiment can be measured by using statistical tools such as fisher information and asymptotic normality .finally , the system identification problem can be considered in a different setting , where the input fields are stationary ( quantum noise ) but have a non - trivial covariance matrix ( squeezing ) . in this case the characterisation of the equivalence classes boils down to finding the systems with the same power spectral density , a problem which is well understood in the classical setting but not yet addressed in the quantum domain .
|
system identification is a key enabling component for the implementation of new quantum technologies , including quantum control . in this paper we consider a large class of input - output systems , namely linear passive quantum systems , and study the following identifiability question : if the system s hamiltonian and coupling matrices are unknown , which of these dynamical parameters can be estimated by preparing appropriate input states and performing measurements on the output ? the input - output mapping is explicitly given by the transfer function , which contains the maximum information about the system . we show that two minimal systems are indistinguishable ( have the same transfer function ) if and only if their hamiltonians and the coupling to the input fields are related by a unitary transformation . furthermore , we provide a canonical parametrization of the equivalence classes of indistinguishable systems . for models depending on ( possibly lower dimensional ) unknown parameters , we give a practical identifiability condition which is illustrated on several examples . in particular , we show that systems satisfying a certain hamiltonian connectivity condition called `` infecting '' , are completely identifiable .
|
recently , internet trading has become very popular .obviously , the rate ( or price ) change of the trading behaves according to some unknown stochastic processes , and numerous studies have been conducted to reveal the statistical properties of its nonlinear dynamics .in fact , several authors have analysed tick - by - tick data of price changes including the currency exchange rate in financial markets .some of these studies are restricted to the stochastic variables of price changes ( returns ) and most of them are specified by terms such as the _ fat _ or _ heavy tails _ of distributions .however , fluctuation in time intervals , namely , the duration in the point process might also contain important market information , and it is worthwhile to investigate these properties .such fluctuations in the time intervals between events are not unique to price changes in financial markets but are also very common in the real world . in fact, it is wellknown that the spike train of a single neuron in the human brain is regarded as a time series , wherein the difference between two successive spikes is not constant but fluctuates .the stochastic process specified by the so - called inter - spike intervals ( isi ) is one such example .the average isi is of the order of a few milli - second and the distribution of the intervals is well - described by a _gamma distribution _ . on the other hand , in financial markets , for instance , the time interval between two consecutive transactions of bund futures ( bund is the german word for bond ) and btp futures ( btps are middle- and long - term italian government bonds with fixed interest rates ) traded at london international financial futures and options exchange ( liffe ) is seconds and is well - fitted by the _ mittag - leffler function _ .the mittag - leffler function behaves as a stretched exponential distribution for short time - interval regimes , whereas for the long time - interval regimes , the function has a power - law tail .thus , the behaviour of the distribution described by the mittag - leffler function changes from the stretched exponential to the power - law at some critical point .however , it is nontrivial to determine whether the mittag - leffler function supports any other kind of market data , e.g. the market data filtered by a rate window .similar to the stochastic processes of price change in financial markets , the us dollar / japanese yen ( usd / jpy ) exchange rate of sony bank , which is an internet - based bank , reproduces its rate by using a _ rate window _ with a width of yen for its individual customers in japan .that is , if the usd / jpy market rate changes by more than yen , the usd / jpy sony bank rate is updated to the market rate . in this sense , it is possible for us to say that the procedure of determination of the sony bank usd / jpy exchange rate is essentially a first - passage process . in this paper , we analyse the average time interval that a customer must wait until the next price ( rate ) change after they log in to their computer systems .empirical data analysis has shown that the average time interval between rate changes is one of the most important statistics for understanding market behaviour . however , as internet trading becomes popular , customers would be more interested in the average waiting time defined as the average time that customers have to wait between any instant and the next price change when they want to observe the rate , e.g. 
when they log in to their computer systems rather than the average time interval between rate changes . to evaluate the average waiting time, we use the so - called renewal - reward theorem which is wellknown in the field of queueing theory . in addition , we provide a simple formula to evaluate the expected reward for customers . in particular , we investigate these important quantities for the sony bank usd / jpy exchange rate by analysing a simple probabilistic model and computer simulations that are based on the _ arch ( autoregressive conditional heteroscedasticity ) _ and _ garch ( generalised arch ) _ stochastic models with the assistance of empirical data analysis of the sony bank rate .this paper is organised as follows . in the next section ,we explain the method being used by the sony bank and introduce several studies concerning empirical data about the rate . in sec .iii , we introduce a general formula to evaluate average waiting time using the renewal - reward theorem and calculate it with regard to sony bank customers .recently , one of the authors provided evidence implying that the first - passage time ( fpt ) distribution of the sony bank rate obeys the _ weibull distribution _this conjecture is regarded as a counter part of studies that suggest that the fpt should follow an exponential distribution ( see example in ) . in the same section, we evaluate the average waiting time while assuming that fpt obeys an exponential distribution .next , we compare it with the result for the weibull distribution and perform empirical data analyses .we find that the assumption of exponential distributions on the first - passage process should be rejected and that a weibull distribution seems more suitable for explaining the first - passage processes of the sony bank rate .thus , we can predict how long customers wait and how many returns they obtain until the next rate change after they log in to their computer systems . in sec .iv , to investigate the effects of the rate window of the sony bank , we introduce the arch and garch models to reproduce the raw data before filtering it through the rate window . in sec .v , we evaluate the reward that customers can expect to obtain after they log in to their computer systems .the last section provides a summary and discussion .the sony bank rate is the foreign exchange rate that the sony bank offers with reference to the market rate and not their customers orders . in fig .[ fig : sazuka3000 ] , we show a typical update of the sony bank rate . if the usd / jpy market rate changes by greater than or equal to yen , the sony bank usd / jpy rate is updated to the market rate . in fig .[ fig : fg_window ] , we show the method of generating the sony bank rate from the market rate . in time ( shaded area ) from the market rate ( solid line ) .the units of the horizontal and the vertical axes are ticks and yen , respectively.,width=377 ] in this sense , the sony bank rate can be regarded as a kind of first - passage processes . in table [tab : table1 ] , we show data concerning the sony bank usd / jpy rate vs. tick - by - tick data for the usd / jpy rate from bloomberg l.p ..[tab : table1 ] the sony bank usd / jpy rate vs. tick - by - tick data for the usd / jpy rate . 
[ cols="<,^,^",options="header " , ] from these tables and figures , an important question might arise .namely ; how long should sony bank customers wait between observing the price and the next price change ?this type of question never arises in the case of isi or bund futures , because the average time intervals are too short to evaluate such measures .we would like to stress that in this paper , we do not discuss the market data underlying the sony bank rate ; however , as we will see in the following sections , the main result obtained in this study , i.e. how long a trader must wait for the next price change of the sony bank rate from the time she or he logs on to the internet , is not affected by this lack of information .from table [ tab : table1 ] , we find that the number of data per day is remarkably reduced from to because of an effect of the rate window of -yen width . as a result , the average interval of exchange rate updates is extended to min .this quantity is one of the most important measures for the market , however , customers might seek information about the _ average waiting time _ , which is defined as the average time interval that customers have to wait until the next change of the sony bank usd / jpy rate after they log in to their computer systems . to evaluate the average waiting time, we use the _ renewal - reward theorem _ , which is wellknown in the field of queueing theory .we briefly explain the theorem below .let us define as the number of rate changes within the interval ] [ min ] from the inset of this figure . from reference , information about the scaling parameter not available ; however , we can easily obtain the value as discussed next . as mentioned above, several empirical data analyses revealed that parameter for the sony bank rate .in fact , it is possible for us to estimate by using the fact that the average interval of the rate change is [ min ] ( table [ tab : table1 ] ) .next , we obtain the following simple relation : that is .therefore , substituting and [ s ] , we obtain the parameter for the sony bank rate as .it should be noted that the average waiting time is also evaluated by a simple sampling from the empirical data of the sony bank rates .the first two moments of the first - passage time distribution are easily calculated from the sampling , and then , the average waiting time is given by .we find that [ min ] which is not much different from that obtained by evaluation ( [ min ] ) by means of the renewal - reward theorem ( [ eq : rr ] ) .thus , the renewal - reward theorem introduced here determines the average waiting time of the financial system with adequate accuracy , and our assumption of the weibull distribution for the first - passage time of the sony bank rates seems to be reasonable .we would particularly like to stress that the data from the asymptotic regime does not contribute much to the average waiting time .a detailed account of this fine agreement between and , more precisely , the factors to which we can attribute the small difference between and , will be reported in our forthcoming paper .we would now like to discuss the average waiting time for the case in which the first - passage process can be regarded as a poisson process . in the poisson process, the number of events occurring in the interval ] are cut off : where is a heaviside step function and is defined by .the variable can not take any values lying in the intervals |y_{n}|= . 
in fig .[ fig : fg7 ] , we show the skew - normal distribution ( [ eq : skew - normal ] ) for and .note that becomes a normal distribution for the limits and . for this skew - normal distribution ( [ eq : skew - normal ] ) , we easily obtain the average as the second and the third moments lead to the skewness of the distribution is written in terms of these moments as follows : where is the standard deviation . in following ,we evaluate the reward rate as a function of the skewness for the parameter values and . in fig .[ fig : fg8 ] , we plot them for the cases of and . from this figure , we find that the reward rate increases dramatically as the skewness of the skew - normal distribution ( [ eq : skew - normal ] ) becomes positive . as the skewness increases ,the reward rate saturates at [ yen / s ] for our model system , wherein the time interval of the rate change obeys the weibull distribution with , and the difference of the rate follows the skew - normal distribution ( [ eq : skew - normal ] ) . in this figure, we also find that if we increase the width of the rate window from to , the reward rate decreases . for companies or internet banks , this kind of information might be useful because they can control the reward rate ( this takes both positive and negative values ) for their customers by tuning the width of the rate window in their computer simulations .moreover , we should note that we can also evaluate the expected reward , which is the return that the customers can expect to encounter after they log in to their computer systems . by combining the result obtained in this section [ yen / s ] and the average waiting time [ s ] ,we conclude that the expected reward should be smaller than [ yen ] .this result seems to be important and useful for both the customers and the bank , e.g. in setting the transaction cost .of course , the probabilistic model considered here for or is just an example for explaining the stochastic process of the real or empirical rate change .therefore , several modifications are needed to conduct a much deeper investigation from theoretical viewpoint .nevertheless , our formulation might be particularly useful in dealing with price changes in real financial markets .in this paper , we introduced the concept of queueing theory to analyse price changes in a financial market , for which , we focus on the usd / jpy exchange rate of sony bank , which is an internet - based bank . 
using the renewal - reward theorem and on the assumption that the sony bank rate is described by a first - passage processwhose fpt distribution follows a weibull distribution , we evaluated the average waiting time that sony bank customers have to wait until the next rate change after they log in to their computer systems .the theoretical prediction and the result from the empirical data analysis are in good agreement on the value of the average waiting time .moreover , our analysis revealed that if we assume that the sony bank rate is described by a poisson arrival process with an exponential fpt distribution , the average waiting time predicted by the renewal - reward theorem is half the result predicted by the empirical data analysis .this result justifies the non - exponential time intervals of the sony bank usd / jpy exchange rate .we also evaluated the reward that a customer could be expected to encounter by the next price change after they log in to their computer systems .we assumed that the return and fpt follow skew - normal and weibull distributions , respectively , and found that the expected return for sony bank customers is smaller than yen .this kind of information about statistical properties might be useful for both the costumers and bank s system engineers .as mentioned earlier , in this paper , we applied queueing theoretical analysis to the sony bank rate , which is generated as a first - passage process with a rate window of width .we did not mention the high - frequency raw data underlying behind the sony bank rate because sony bank does not record raw data and the data itself is not available to us .although our results did not suffer from this lack of information , the raw data seems to be attractive and interesting as a material for financial analysis . in particular ,the effect of the rate window on high - frequency raw data should be investigated empirically .however , it is impossible for us to conduct such an investigation for the sony bank rate . to compensate for this lack of information, we performed the garch simulations , wherein the duration between price changes in the raw data obeys a weibull distribution with parameter .next , we investigated the effect of the rate window through the first - passage time distribution under the assumption that it might follow a weibull distribution with parameter .the - plot was thus obtained .nevertheless , an empirical data analysis to investigate the effect might be important .such a study is beyond the scope of this paper ; however , it is possible for us to generate the first - passage process from other high - frequency raw data such as btp futures .this analysis is currently under way and the results will be reported in our forthcoming article .as mentioned in table i , the amount of data per day for the sony bank rate is about points , which is less than the tick - by - tick high - frequency data .this is because the sony bank rate is generated as a first - passage process of the raw data .this means that the number of data is too few to confirm whether time - varying behaviour is actually observed . in our investigation of the limited data, we found that the first - passage time distribution in a specific time regime ( e.g. 
monday of each week ) obeys a weibull distribution , but the parameter is slightly different from .however , this result has not yet confirmed because of the low number of data points .therefore , we used the entire data ( about points from september 2002 to may 2004 ) to determine the weibull distribution . in this paper , we focused on the evaluation of the average waiting time and achieved the level of accuracy mentioned above . in addition , if sony bank records more data in future , we might resolve this issue , namely , whether the sony bank rate exhibits time - varying behaviour .this is an important area for future investigation .although we dealt with the sony bank usd / jpy exchange rate in this paper , our approach is general and applicable to other stochastic processes in financial markets .we hope that it is widely used to evaluate various useful statistics in real markets .we thank enrico scalas for fruitful discussion and useful comments .was financially supported by the _ grant - in - aid for young scientists ( b ) of the ministry of education , culture , sports , science and technology ( mext ) _ no .15740229 . and the _ grant - in - aid scientific research on priority areas deepening and expansion of statistical mechanical informatics ( dex - smi ) " of the ministry of education , culture , sports , science and technology ( mext ) _ no . 18079001. n.s . would like to acknowledge useful discussion with shigeru ishi , president of sony bank .r. gorenflo and f. mainardi , _ the asymptotic universality of the mittag - leffler waiting time law in continuous random walks _ , lecture note at we - heraeus - seminar on physikzentrum bad - honnef ( germany ) , 12 - 16 july ( 2006 ) .
|
we propose a useful approach for investigating the statistical properties of foreign currency exchange rates . our approach is based on queueing theory , particularly , the so - called renewal - reward theorem . for the first passage processes of the sony bank us dollar / japanese yen ( usd / jpy ) exchange rate , we evaluate the average waiting time which is defined as the average time that customers have to wait between any instant when they want to observe the rate ( e.g. when they log in to their computer systems ) and the next rate change . we find that the assumption of exponential distribution for the first - passage process should be rejected and that a weibull distribution seems more suitable for explaining the stochastic process of the sony bank rate . our approach also enables us to evaluate the expected reward for customers , i.e. one can predict how long customers must wait and how much reward they will obtain by the next price change after they log in to their computer systems . we check the validity of our prediction by comparing it with empirical data analysis .
|
let us suppose that is a general lvy process with law and lvy measure . that is to say , is a markov process with paths that are right continuous with left limits such that the increments are stationary and independent and whose characteristic function at each time is given by the lvy - khinchine representation [ eq1 ] [ e^ix_t]=e^-t ( ) , , where we have , and is a measure supported on with and .starting with the early work of madan and seneta , lvy processes have played a central role in the theory of financial mathematics and statistics ( see for example the books ) .more recently they have been extensively used in modern insurance risk theory ( see for example klppelberg et al . , song and vondraek ) .the basic idea in financial mathematics and statistics is that the log of a stock price or risky asset follows the dynamics of a lvy process whilst in insurance mathematics , it is the lvy process itself which models the surplus wealth of an insurance company until ruin .there are also extensive applications of lvy processes in queuing theory , genetics and mathematical biology as well as through their appearance in the theory of stochastic differential equations . in both financial and insurance settings ,a key quantity of generic interest is the joint law of the current position and the running maximum of a lvy process at a fixed time if not the individual marginals associated with the latter bivarite law .for example , if we define then the pricing of barrier options boil down to evaluating expectations of the form ] at the boundary point .it is now well understood that the issue of regularity of the upper and lower half line for the underlying lvy process ( see chapter 6 of for a definition ) is responsible the appearance of a discontinuity at in such functions ( cf .the nature of our wiener - hopf method naturally builds the distributional atom which is responsible for this discontinuity into the simulations .additional advantages to the method we propose include its simplicity with regard to numerical implementation .moreover , as we shall also see in section [ sect : extensions ] of this paper , the natural probabilistic structure pertaining to lvy processes that lies behind our so - called wiener - hopf monte - carlo method also allows for additional creativity when addressing some of the deficiencies of the method itself .the basis of the algorithm is the following simple observation which was pioneered by carr and subsequently used in several contexts within mathematical finance for producing approximate solutions to free boundary value problems that appear as a result of optimal stopping problems that characterise the value of an american - type option .suppose that are a sequence of i.i.d .exponentially distributed random variables with unit mean .suppose they are all defined on a common product space with product law which is orthogonal to the probability space on which the lvy process is defined . for all , we know from the strong law of large numbers that -almost surely .the random variable on the left hand side above is equal in law to a gamma random variable with parameters and .henceforth we write it . 
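As a quick numerical illustration of this concentration (a sketch with arbitrary sample sizes), the gamma random variable g(n, n/t) has mean t and standard deviation t/sqrt(n), so it collapses onto the fixed time t as n grows:

```python
import numpy as np

rng = np.random.default_rng(2)
t = 1.0
for n in (4, 16, 64, 256, 1024):
    g = rng.gamma(shape=n, scale=t / n, size=100_000)   # Gamma(n, rate n/t)
    print(f"n = {n:4d}: mean = {g.mean():.4f}, std = {g.std():.4f} "
          f"(theory: mean = {t:.4f}, std = {t/np.sqrt(n):.4f})")
```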
recall that is our notation for a general lvy process .then writing we argue the case that , for sufficiently large , a suitable approximation to is .this approximation gains practical value in the context of monte - carlo simulation when we take advantage of the fundamental path decomposition that applies to all lvy processes over exponential time periods known as the wiener - hopf factorisation .[ thm_main ] for all and define . then [ distr_identity ] ( x_(n , ) , _( n , ) ) ( v(n , ) , j(n , ) ) where and are defined iteratively for as v(n , ) & = & v(n-1,)+s^(n)_+ i^(n ) _ + j(n , ) & = & ( j(n-1 , ) , v(n-1,)+ s^(n ) _ ) and . here , , are an i.i.d .sequence of random variables with common distribution equal to that of and are another i.i.d . sequence of random variables with common distribution equal to that of .suppose we define .then it is trivial to note that where .next we prove by induction that for each note first that the above equality is trivially true when on account of the wiener - hopf factorisation .indeed the latter tells us that and are independent and the second of the pair is equal in distribution to .now suppose that ( [ induction ] ) is true for .then stationary and independent increments of together with the wiener - hopf factorisation imply that where is an independent copy of , and .the induction hypothesis thus holds for . for , stationary and independent increments of allows us to write hence for > from ( [ max ] ) and ( [ induction ] ) the result now follows .note that the idea of embedding a random walk into the path of a lvy process with two types of step distribution determined by the wiener - hopf factorisation has been used in a different , and more theoretical context , by doney . given ( [ slln ] ) it is clear that the pair converges in distribution to .this suggests that we need only to be able to simulate i.i.d .copies of the distributions of and and then by a simple functional transformation we may produce a realisation of random variables . given a suitably nice function , using standard monte - carlo methods one estimates for large \simeq \frac{1}{k}\sum_{m=1}^k f(v^{(m)}(n , n / t ) , j^{(m)}(n , n / t ) ) \label{wh - mc}\ ] ] where are i.i.d .copies of .indeed the strong law of large numbers implies that the right hand side above converges almost surely as to which in turn converges as to .the central limit theorem indicates that the right hand side of ( [ wh - mc ] ) converges to ] to the desired expectation ] .the fact that belongs to the domain of the infinitesimal generator guarantees that function satisfies kolmogorov equation , in particular function is differentiable in the variable . using ( [ distr_identity ] ) andthe fact that is independent of we find &=&{\bbb{e}}\left [ f(x_{\mathbf{g}(n , n / t ) } , \overline{x}_{\mathbf{g}(n , n / t ) } ) \right]\\ & = & \int\limits_{\r^+ } { \bbb{e}}\left[f(x_s , \overline{x}_s ) \right ] { \bbb{p}}(\mathbf{g}(n , n / t ) \in\d s)\\ & = & \frac{n^n}{t^n ( n-1)!}\int\limits_{\r^+ } h(s ) s^{n-1 } e^{-\frac{ns}{t } } \d s.\end{aligned}\ ] ] the remainder of the proof is a classical application of the stationary point method ( see ) . 
the right side of the above equation is equal to [ proof_asympt1 ] & & + & & = ( + o(1/n ) ) _-^ h(t+ ) e^-2 u where we have changed the variable of integration and have used stirling s formula .next , using power series expansion one can check that ( 2+n(1 + ) -u ) = ( - + ) = 1 + + o(1/n ) and combining this with ( [ proof_asympt1 ] ) and the fact that we obtain ( [ asymptotic_speed ] ) .the algorithm described in the previous section only has practical value if one is able to sample from the distributions of and .it would seem that this , in itself , is not that much different from the problem that it purports to solve .however , it turns out that there are many tractable examples and in all cases this is due to the tractability of their wiener - hopf factorisations . whilst several concrete cases can be handled from the class of spectrally one - sided lvy processesthanks to recent development in the theory of scale functions which can be used to described the laws of and ( cf . ) , we give here two large families of two sided jumping lvy processes that have pertinence to mathematical finance to show how the algorithm may be implemented .the -class of lvy processes , introduced in , is a 10-parameter lvy process which has characteristic exponent with parameter range and . here is the beta function ( see ) .the density of the lvy measure is given by although takes a seemingly complicated form , this particular family of lvy processes has a number of very beneficial virtues from the point of view of mathematical finance which are discussed in .moreover , the large number of parameters also allows one to choose lvy processes within the -class that have paths that are both of unbounded variation ( when at least one of the conditions , or holds ) and bounded variation ( when all of the conditions , and hold ) as well as having infinite and finite activity in the jumps component ( accordingly as both or not ) .what is special about the -class is that all the roots of the equation are completly identifiable which leads to semi - explicit identities for the laws of and as the following result lifted from shows .[ kuz1 ] for , all the roots of the equation are simple and occur on the imaginary axis .they can be enumerated by on the positive imaginary axis and on the negative imaginary axis in order of increasing absolute magnitude where moreover , for , where a similar expression holds for with the role of being played by and replaced by .note that when is irregular for the distribution of will have an atom at which can be computed from ( [ max - density ] ) and is equal to .alternatively , from remark 6 in this can equivalently be written as .a similar statement can be made concerning an atom at for the distribution of when is irregular for .conditions for irregularity are easy to check thanks to bertoin ; see also the summary in kyprianou and loeffen for other types of ly processes that are popular in mathematical finance . by making a suitable truncation of the series ( [ max - density ] )one may easily perform independent sampling from the distributions and as required for our monte - carlo methods .the forthcoming discussion will assume familiarity with classical excursion theory of lvy processes for which the reader is referred to chapter vi of or chapter 6 of . according to vigon s theory of philanthropy , a ( killed ) subordinatoris called a _ philanthropist _ if its lvy measure has a decreasing density on . 
moreover , given any two subordinators and which are philanthropists , providing that at least one of them is not killed , there exist a lvy process such that and have the same law as the ascending and descending ladder height processes of , respectively .suppose we denote the killing rate , drift coefficient and lvy measures of and by the respective triples and .then shows that the lvy measure of satisfies the following identity where is the density of . by symmetry, an obvious analogue of ( [ phillm ] ) holds for the negative tail , .a particular family of subordinators which will be of interest to us is the class of subordinators which is found within the definition of kuznetsov s -class of lvy processes .these processes have characteristics where , and .the lvy measure of such subordinators is of the type > from proposition 9 in , the laplace exponent of a -class subordinator satisfies [ kuz ] ( ) = + + \{ b(1-+ , - ) - b ( 1-++/ , - ) } for where is the drift coefficient and is the killing rate .let and be two independent subordinators from the -class where for with respective drift coefficients , killing rates and lvy measure parameters .their respective laplace exponents are denoted by , . in vigons theory of philanthropy it is required that . under this assumption ,let us denote by the lvy process whose ascending and descending ladder height processes have the same law as and , respectively .in other words , the lvy process whose characteristic exponent is given by it is important to note that the gaussian component of the process is given by , see .> from ( [ phillm ] ) , the lvy measure of is such that assume first that , taking derivative in and computing the resulting integrals with the help of we find that for the density of the lvy measure is given by ( x)&=&-(,-_2 ) e^-x ( 1+_1-_1 ) _2f_1(1+_1,;-_2;e^-x ) + & + & c_1 ( k_2 + ( 1+_2-_2,-_2 ) ) - _ 2 c_1 where .the validity of this formula is extended for by analytical continuation . the corresponding expression for be obtained by symmetry considerations .we define a general hypergeometric process to be the 13 parameter lvy process with characteristic exponent given in compact form where . the inclusion of the two additional parameters is largely with applications in mathematical finance in view . without these two additional parametersit is difficult to disentangle the gaussian coefficient and the drift coefficients from parameters appearing in the jump measure .note that the gaussian coefficient in ( [ compact ] ) is now .the definition of general hypergeometric lvy processes includes previously defined hypergeometric lvy processes in kyprianou et al . , caballero et al . and lamperti - stable lvy processes in caballero et al . . 
just as with the case of the -family of lvy processes , because can be written as a linear combination of a quadratic form and beta functions , it turns out that one can identify all the roots of the equation which is again sufficient to describe the laws of and .[ hg ] for , all the roots of the equation are simple and occur on the imaginary axis .moreover , they can be enumerated by on the positive imaginary axis and on the negative imaginary axis in order of increasing absolute magnitude where moreover , for , where moreover , a similar expression holds for with the role of replaced by and replaced by .the proof is very similar to the proof of theorem 10 in .formula ( [ compact ] ) and reflection formula for the beta function ( see ) [ eq_beta_reflection ] b(-z;-)=b(1+z+;- ) tell us that as , and since we conclude that has a solution on the interval .other intervals can be checked in a similar way ( note that are laplace exponents of subordinators , therefore they are positive for ) .next we assume that . using formulas ( [ compact ] ) , ( [ eq_beta_reflection ] ) andan asymptotic result = z^a+o(z^a-1 ) , z+ which can be found in , we conclude that has the following asymptotics as : ( i)&=&-12 ( ^2 + 2_1 _ 2 ) ^2+o(^1+_2 ) + & - & using the above asymptotic expansion and the same technique as in the proof of theorem 5 in we find that as there exists a constant such that _ n^+ = ( n+1+_2-_2 ) + c_1 n^_2 - 1 + o(n^_2 - 1- ) with a similar expression for . thus we use lemma 6 from ( and the same argument as in the proofs of theorems 5 and 10 in ) to show that first there exist no other roots of meromorphic function except for , and secondly that we have a factorisation = _n1_n1 , the wiener - hopf factoris are identified from the above equation with the help of analytical uniqueness result , lemma 2 in . formula ( [ hypsup ] ) is obtained from the infinite product representation for using residue calculus .this ends the proof in the case , in all other cases the proof is almost identical , except that one has to do more work to obtain asymptotics for the roots of .we summarise all the possible asymptotics of the roots below ^+_n= ( n-_2+_2)+c n^_2+o(n^_2-)n . where the coefficients and are presented in table [ table2 ] .corresponding results for can be obtained by symmetry considerations .[ ht ] .coefficients for the asymptotic expansion of . [ cols="^,^,^,^ " , ] similar remarks to those made after theorem [ kuz1 ] regarding the existence of atoms in the distribution of and also apply here .it is important to note that the hypergeometric lvy process is but one of many examples of lvy processes which may be constructed using vigon s theory of philanthropy . 
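Once independent draws of S at an independent exponential time and of the corresponding infimum are available — from truncations of the series in Theorems [kuz1] and [hg], or from any process engineered via Vigon's theory of philanthropy — the recursion of Theorem [thm_main] and the estimator (wh-mc) amount to a few lines of code. In the sketch below the two samplers are user-supplied callables, so nothing is assumed about the particular family; to keep the sketch self-contained it is demonstrated on a standard Brownian motion, for which both factors at an independent Exp(q) time are exponentially distributed with rate sqrt(2q) (a textbook fact used here only as a stand-in for the series-based samplers).

```python
import numpy as np

def wh_draw(n, lam, sample_S, sample_I, rng):
    """One draw of (V(n, lam), J(n, lam)) via the recursion of Theorem [thm_main]."""
    V = J = 0.0
    for _ in range(n):
        S = sample_S(lam, rng)          # supremum piece over one Exp(lam) period
        I = sample_I(lam, rng)          # infimum piece (a non-positive draw)
        J = max(J, V + S)               # running maximum sees the pre-update position plus S
        V = V + S + I                   # position after the full period
    return V, J

def wh_estimate(f, t, n, k, sample_S, sample_I, seed=0):
    """Monte Carlo estimate of E[f(X_t, sup X)] with a CLT standard error."""
    rng = np.random.default_rng(seed)
    vals = np.array([f(*wh_draw(n, n / t, sample_S, sample_I, rng)) for _ in range(k)])
    return vals.mean(), vals.std(ddof=1) / np.sqrt(k)

# demo samplers: standard Brownian motion, where S_{e(q)} ~ Exp(sqrt(2q)) and
# I_{e(q)} ~ -Exp(sqrt(2q)); for the beta or hypergeometric families these two
# callables would instead sample from the truncated-series representations.
sample_S = lambda q, rng: rng.exponential(1.0 / np.sqrt(2.0 * q))
sample_I = lambda q, rng: -rng.exponential(1.0 / np.sqrt(2.0 * q))

est, err = wh_estimate(lambda x, s: s, t=1.0, n=50, k=10_000,
                       sample_S=sample_S, sample_I=sample_I)
print(f"WH-MC estimate of E[sup B_s, s <= 1]: {est:.4f} +/- {err:.4f} "
      f"(exact value sqrt(2/pi) = {np.sqrt(2.0/np.pi):.4f})")
```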
with the current monte - carlo algorithm in mind, it should be possible to engineer other favourable lvy processes in this way .the starting point for the wiener - hopf monte - carlo algorithm is the distribution of and , and in section [ sec_impl ] we have presented two large families of lvy processes for which one can compute these distributions quite efficiently .we have also argued the case that one might engineer other fit - for - purpose wiener - hopf factorisations using vigon s theory of philanthropy .however , below , we present another alternative for extending the the application of the wiener - hopf monte - carlo technique to a much larger class of lvy processes than those for which sufficient knowledge of the wiener - hopf factorisation is known .indeed the importance of theorem [ thm_cmpnd_poisson ] below is that we may now work with any lvy processes whose lvy measure can be written as a sum of a lvy measure from the -family or hypergeometric family plus * any * other measure with finite mass .this is a very general class as a little thought reveals that many lvy processes necessarily take this form .however there are obvious exclusions from this class , for example , cases of lvy processes with bounded jumps .[ thm_cmpnd_poisson ] let be a sum of a lvy process and a compound poisson process such that for all , y_t = x_t+_i=1^n_t _ i where is a poisson process with intensity and is a sequence of i.i.d .random variables .define iteratively for v(n , ) & = & v(n-1,)+s^(n)_++ i^(n)_++ _ n ( 1-_n ) + j(n , ) & = & ( v(n , ) , j(n-1 , ) , v(n-1,)+ s^(n)_+ ) where , sequences and are defined in theorem [ thm_main ] , and are an i.i.d . sequence of bernoulli random variables such that . then [ distr_identity2 ] ( y_(n , ) , _( n , ) ) ( v(t_n , ) , j(t_n , ) ) where .consider a poisson process with arrival rate such that points are independently marked with probability .then recall that the poisson thinning theorem tells us that the process of marked points is a poisson process with arrival rate .in particular , the arrival time having index is exponentially distributed with rate .suppose that is the first time that an arrival occurs in the process , in particular is exponentially distributed with rate .let be another independent and exponentially distributed random variable , and fix and . then making use of the wiener - hopf decomposition , if we momentarily set then by the poisson thinning theorem it follows that is equal in distribution to .moreover , again by the poisson thinning theorem , is equal in distribution to .this proves the theorem for the case . in the spirit of the proof of theorem [ thm_main ] , the proof for can be established by an inductive argument . indeed ,if the result is true for then it is true for by taking then appealing to the lack of memory property , stationary and independent increments of and the above analysis for the case that .the details are left to the reader . a particular example where the use of the above theorem is of pertinence is a linear brownian motion plus an independent compound poisson process .this would include for example the so - called kou model from mathematical finance in which the jumps of the compound poisson process have a two - sided exponential distribution . 
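to make the recursion of theorem [ thm_cmpnd_poisson ] concrete , the following sketch iterates the pair ( position , running maximum ) over exponential stages and adds a thinned compound poisson jump at the end of each stage . the callables `sample_sup` and `sample_inf` are placeholders for draws of the supremum and infimum of the base process at an independent exponential time ( the quantities delivered by its wiener - hopf factors ) , and `jump_sampler` draws one jump ; the stage rate n / t , the marking probability lam / ( rate + lam ) and the way the jump enters the update are my reading of the garbled display and should be checked against the original statement .

```python
import numpy as np

def whmc_with_jumps(n_steps, horizon, lam, jump_sampler,
                    sample_sup, sample_inf, rng=None):
    """One draw of (position, running maximum) for Y = X + compound Poisson jumps.

    sample_sup(rate, rng) / sample_inf(rate, rng) are assumed to return draws of
    the supremum / infimum of the base process X over an independent exponential
    time of the given rate; jump_sampler(rng) returns one jump xi_i.
    """
    rng = np.random.default_rng() if rng is None else rng
    rate = n_steps / horizon                 # n stages of rate n/t stack up to a time ~ t
    v, j = 0.0, 0.0                          # current position and running maximum
    for _ in range(n_steps):
        s = sample_sup(rate + lam, rng)      # supremum piece of X over the stage
        i = sample_inf(rate + lam, rng)      # infimum piece of X over the stage
        marked = rng.random() < lam / (rate + lam)   # Poisson thinning: jump at stage end?
        xi = jump_sampler(rng) if marked else 0.0
        v_new = v + s + i + xi
        j = max(v_new, j, v + s)             # the three candidates in the theorem's recursion
        v = v_new
    return v, j
```

with samplers supplied for one of the families of section [ sec_impl ] , the returned pairs feed directly into the pricing examples below .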
in the case that is a linear brownian motion the quantities and are both exponentially distributed with easily computed rates .next we consider the problem of sampling from the distribution of the three random variables , which is also an important problem for applications , in particular with regard to the double - sided exit problem and , in particular , for pricing double barrier options .the following slight modification of the wiener - hopf monte - carlo technique allows us to obtain two estimates for this triple of random variables , which in many cases can be used to provide upper and lower bounds for certain functionals of .[ thm_3d_distr ] given two sequences and introduced in theorem [ thm_main ] we define iteratively for [ def_5_rand_vrls ] v(n , ) & = & v(n-1,)+s^(n)_+ i^(n ) _+ j(n , ) & = & ( j(n-1 , ) , v(n-1,)+ s^(n ) _ ) + k(n , ) & = & ( k(n-1 , ) , v(n , ) ) + j(n , ) & = & ( j ( n-1 , ) , v(n , ) ) + k(n , ) & = & ( k(n-1 , ) , v(n-1,)+ i^(n ) _ ) where .then for any bounded function which is increasing in -variable we have [ f(v(n , ) , j(n , ) , k(n , ) ) ] & & [ f(x_(n,),_(n,),_(n , ) ] [ bias-1 ] + [ f(v(n , ) , k(n , ) , j(n , ) ) ] & & [ f(x_(n,),_(n , ) , _( n , ) ] [ bias-2 ] from theorem [ thm_main ] we know that has the same distribution as , and , for each , . the inequality in ( [ bias-1 ] ) now follows .the equality in ( [ bias-2 ] ) is the result of a similar argument where now , for each , and .theorem [ thm_3d_distr ] can be understood in the following sense .both triples of random variables and can be considered as estimates for , where in the first case has a positive bias and in the second case has a negative bias .an example of this is handled in the next section .in this section we present numerical results .we perform computations for a process in the -family with parameters ( a , , _ 1,_1,_1 , c_1 , _ 2,_2,_2 , c_2 ) = ( a , , 1 , 1.5 , 1.5 , 1 , 1 , 1.5 , 1.5 , 1 ) where the linear drift is chosen such that with , for no other reason that this is a risk neutral setting which makes the process a martingale .we are interested in two parameter sets .set 1 has and set 2 has .note that both parameter sets give us proceses with jumps of infinite activity but of bounded variation , but due to the presence of gaussian component the process has unbounded variation in the case of parameter set 1 . as the first examplewe compare computations of the joint density of for the parameter set 1 .our first method is based on the following fourier inversion technique . as in the proof of theorem [ thm_main ], we use the fact that and are independent , and the latter is equal in distribution to , to write ( _ _ 1/ x ) ( -__1/ y ) & = & ( _ _ 1/ x , _ _1/-x__1/ y ) + & = & _ ^+ e^-t ( _ t x,_t - x_t y ) t writing down the inverse laplace transform we obtain [ eq_fourier ] ( _ t x,_t - x_t y)= _ _ 0+i ( _ _ 1/ x ) ( -__1/ y ) ^-1 e^t where is any positive number .the values of analytical continuation of for complex values of can be computed efficiently using technique described in .our numerical results indicate that the integral in ( [ eq_fourier ] ) can be computed very precisely , provided that we use a large number of discretization points in space coupled with filon - type method to compute this fourier type integral .thus first we compute the joint density of using ( [ eq_fourier ] ) and take it as a benchmark , which we use later to compare the wiener - hopf monte - carlo method and the classical monte - carlo approach . 
for both of these methods we fix the number of simulations and the number of time steps . for fair comparisonwe use time steps for the classical monte - carlo , as wiener - hopf monte - carlo method with time steps requires simulation of random variables .all the code was written in fortran and the computations were performed on a standard laptop ( intel core 2 duo 2.5 ghz processor and 3 gb of ram ) .figure [ fig_2d_density ] presents the results of our computations . in figure [ fig_2d_density_a ]we show our benchmark , a surface plot of the joint probability density function of produced using fourier method ( [ eq_fourier ] ) , which takes around 40 - 60 seconds to compute .figures [ fig_2d_density_b ] , [ fig_2d_density_c ] and [ fig_2d_density_d ] show the difference between the benchmark and the wiener - hopf monte - carlo result as the number of time steps increases from 20 to 50 to 100 .computations take around 7 seconds for , and 99% of this time is actually spent performing the monte - carlo algorithm , as the precomputations of the roots and the law of take less than one tenth of a second .figure [ fig_2d_density_e ] shows the result produced by the classical monte - carlo method with ( which translates into 200 random walk steps according to our previous convention ) ; this computation takes around 10 - 15 seconds since here we also need to compute the law of , which is done using inverse fourier transform of the characteristic function of given in ( [ eq1 ] ) .finally , figure [ fig_2d_density_f ] shows the difference between the monte - carlo result and our benchmark .the results illustrate that in this particular example the wiener - hopf monte - carlo technique is superior to the classical monte - carlo approach .it gives a much more precise result , it requires less computational time , is more straightforward to programme and does not suffer from some the issues that plague the monte - carlo approach , such as the atom in distribution of at zero , which is clearly visible in figure [ fig_2d_density_e ] .next we consider the problem of pricing up - and - out barrier call option with maturity equal to one , which is equivalent to computing the following expectation : .\ ] ] here $ ] is the initial stock price .we fix the strike price , the barrier level .the numerical results for parameter set 1 are presented in figure [ fig_barrier1 ] .figure [ fig_barrier1_a ] shows the graph of as a function of produced with fourier method similar to ( [ eq_fourier ] ) , which we again use as a benchmark .figures [ fig_barrier1_b ] , [ fig_barrier1_c ] and [ fig_barrier1_d ] show the difference between the benchmark and results produced by wiener - hopf monte - carlo ( blue solid line ) and classical monte - carlo ( red line with circles ) for .again we see that wiener - hopf monte - carlo method gives a better accuracy , especially when the initial stock price level is close to the barrier , as in this case monte - carlo approach produces the atom in the distribution of at zero which creates a large error .figure [ fig_barrier2 ] shows corresponding numerical results for parameter set 2 . 
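on the wiener - hopf monte - carlo side , the up - and - out prices compared in figures [ fig_barrier1 ] and [ fig_barrier2 ] reduce to an average over the simulated pairs ( x_1 , sup x ) . the sketch below is hedged : the exponential - lévy payoff e^{-r} ( s_0 e^{x_1} - k )^+ 1\{ s_0 e^{\sup x} < b \} is my reconstruction of the garbled expectation , and s_0 , k , b and r are placeholders .

```python
import numpy as np

def up_and_out_call(samples, s0, strike, barrier, r=0.0):
    """samples: array of shape (n, 2) with draws of (X_1, sup_{t<=1} X_t)."""
    x_t, x_sup = samples[:, 0], samples[:, 1]
    payoff = np.exp(-r) * np.maximum(s0 * np.exp(x_t) - strike, 0.0)
    payoff[s0 * np.exp(x_sup) >= barrier] = 0.0      # knocked out if the barrier was touched
    return payoff.mean(), payoff.std(ddof=1) / np.sqrt(len(payoff))
```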
in this casewe have an interesting phenomenon of a discontinuity in at the boundary .the discontinuity should be there and occurs due to the fact that , for those particular parameter choices , there is irregularity of the upper half line .irregularity of the upper half line is equivalent to there being an atom at zero in the distribution of for any ( also at independent and exponentially distributed random times ) .we see from the results presented in figures [ fig_barrier1 ] and [ fig_barrier2 ] that wiener - hopf monte - carlo method correctly captures this phenomenon ; the atom at zero is produced if and only if the upper half line is irregular , while the classical monte - carlo approach always generates an atom .also , analyzing figures [ fig_barrier2_b ] , [ fig_barrier2_c ] and [ fig_barrier2_d ] we see that in this case classical monte - carlo algorithm is also doing a good job and it is hard to find a winner .this is not surprising , as in the case of parameter set 2 the process has bounded variation , thus the bias produced in monitoring for supremum only at discrete times is smaller than in the case of process of unbounded variation . finally , we give an example of how one can use theorem [ thm_3d_distr ] to produce upper / lower bounds for the price of the double no - touch barrier call option .\ ] ] first , we use identity and obtain ^dnt(s ) = ^uo(s ) - e^-r .function is increasing in both variables and , thus using theorem [ thm_3d_distr ] we find that _ 1^dnt(s ) & = & ^uo(s ) - e^-r + _ 2^dnt(s ) & = & ^uo(s ) - e^-r are the lower / upper bounds for .figure [ barrier_dnt1 ] illustrates this algorithm for parameter set 1 , the other parameters being fixed at , , and the number of time steps ( 400 for the classical monte - carlo ) .we see that monte - carlo approach gives a price which is almost always larger than the upper bound produced by the wiener - hopf monte - carlo algorithm .this is not surprising , as in the case of monte - carlo approach we would have positive ( negative ) bias in the estimate of infimum ( supremum ) , and given that the payoff of the double no - touch barrier option is increasing in infimum and decreasing in supremum this amplifies the bias .aek and kvs would also like to thank alex cox for useful discussions .aek and jcp are grateful for support from epsrc grant number ep / d045460/1 . aek and kvsgratefully acknowledge support form the axa research fund , ak s research is supported by the natural sciences and engineering research council of canada .hubalek , f. and kyprianou , a.e .( 2010 ) old and new examples of scale functions for spectrally negative lvy processes . in _ sixth seminar on stochastic analysis , random fields and applications , eds r. dalang , m. dozzi , f. russo . _progress in probability , birkhuser .kyprianou , a.e . and loeffen , r. ( 2005 ) lvy processes in finance distinguished by their coarse and fine path properties . inexotic option pricing and advanced lvy models eds .a. kyprianou , w. schoutens and p. wilmott .
|
we develop a completely new and straightforward method for simulating the joint law of the position and running maximum at a fixed time of a general lévy process with a view to application in insurance and financial mathematics . although different , our method takes lessons from carr s so - called ` canadization ' technique as well as doney s method of stochastic bounds for lévy processes ; see carr and doney . we rely fundamentally on the wiener - hopf decomposition for lévy processes as well as taking advantage of recent developments in factorisation techniques of the latter theory due to vigon and kuznetsov . we illustrate our wiener - hopf monte - carlo method on a number of different processes , including a new family of lévy processes called hypergeometric lévy processes . moreover , we illustrate the robustness of working with a wiener - hopf decomposition with two extensions . the first extension shows that if one can successfully simulate for a given lévy process then one can successfully simulate for any independent sum of the latter process and a compound poisson process . the second extension illustrates how one may produce a straightforward approximation for simulating the two - sided exit problem . key words and phrases : lévy processes , exotic option pricing , wiener - hopf factorisation . msc 2000 subject classifications : 65c05 , 68u20 . a. kuznetsov , a. e. kyprianou , j. c. pardo , k. van schaik
|
protein crystals contain between around 30% and 70% of solvent , most of which is disordered in the solvent channels among the protein molecules of the crystal lattice .thus the electron densities of the protein molecules , with typical values of 0.43 e / , are surrounded by a continuous disordered solvent electron density ranging between 0.33 e / for pure water and 0.41 e / for 4 m ammonium sulphate .if we do not account for any model for this continuous disordered solvent electron density , atomic protein models are thought as if it were placed in vacuum .the electron density itself is overestimated , the calculated structure factor amplitudes are systematically much larger than the observed ones , and it is commonly believed that the latter condition especially occurs at low resolution .the higher is the discrepancy among the calculated structure factors and the observed ones , the more difficult is the data scaling , the least - square refinement and the electron density map rendering .cutting low resolution data has been a widely adopted method to step over the problem , although it was rough and , somehow , intrinsically wrong : indeed , it introduced distortions of the local electron density ( an optical example is discussed in ) .a great effort has been recently devoted to devise a reliable method accounting for the disordered solvent effects in the protein region .two of them deserve a brief description . * the exponential scaling model . +this model is obtained by the direct application of babinet s principle to the calculated structure factors .the solvent structure factors moduli are assumed to be proportional to the protein ones , whereas the phases are opposite .the lower is the resolution the more satisfactory is the agreement between the observed structure factor and the calculated one . due to its simplicity , this model has been implemented in most of the crystallographic refinement programs .the weakness of this model is strictly related to the resolution at which it is expected to work properly .indeed the approximation embodied in this method is true at resolutions below 15 although it can be stretched up to 5 by downscaling the structure factors . * the mask model . + this model is an improvement of the previous one , since it aims to sum up the protein structure factor and the solvent one vectorially ,_ i.e. _ by accounting for both the modulus and the phase of the two structure factors . 
in the mask modelthe protein molecules are placed on a grid in the unit cell whereas the grid points outside the protein region are filled with the disordered solvent electron density .the protein boundary is mainly determined by the van der waals radii .the disordered solvent electron density is to fill in the empty space and the calculation of the solvent structure factor is straightforward .although the mask model works rather well , there are three major drawbacks of it : too many parameters have to be fitted and the relatively large ratio weakens the model at high resolution ( overfitting ) ; finally , the disordered solvent electron density is unrealistically assumed to be step shaped and flat .some strategies have been already devised to improve the latter ones .the aim of this paper is to focus on a recently developed statistical method and its application to disentangle the protein and the solvent contributions out of crystallographic data ; we show its major advantages and drawbacks .up to our knowledge this method has never been applied to this field .a comparison of the protein fraction in the unit cell , calculated by this method , with the same quantity computed by the most popular method used nowadays is satisfactory .the plan of the paper is as follows : the _ theory _ section provides the reader with the basic concepts of the independent component analysis ; the _ results and discussion _ section applies the theory to the specific case of a 2-dimensional problem ( _ i.e. _ solvent / protein system ) we are interested in ; it concludes with the calculation of the protein fraction for several protein structures and with a comparison of this quantity with the analogous one calculated by the matthews model accounting for the protein content only ._ conclusions _ section summarizes the paper s content and suggests further investigations .several techniques have been devised so far to deal with protein crystallography . among themwe quote the isomorphous derivative ( sir , mir ) and the anomalous dispersion ( sad , mad ) ones ( we address the reader to a number of review papers for details on these techniques ; see , for instance , and references therein ) .the theory described hereafter can be applied to protein crystallography regardless of the specific technique we are using and without any substantial modification ; therefore , for the sake of simplicity , we shall focus on the isomorphous derivative one . anywaythe method will be finally applied to several proteins : among them some are anomalous dispersion structures and some others refer to the isomorphous derivative technique . a protein and its isomorphous derivatives crystallize in a solvent .imagine that you are measuring the diffraction intensities out of a crystallized protein sample and one of its isomorphous derivatives .each of these recorded signals is a weighted sum of the signals emitted by the two main sources ( _ i.e. _ protein / derivative and solvent ) , which we denote by and , _ i.e. _ the protein / derivative and solvent structure factors , respectively .we can express each of them as a linear combination actually if we knew the parameters we would solve the problem at a once by classical methods .unfortunately this is not the case and the problem turns out to be much more difficult . under the hypothesis of statistical independence of the structure factor phase differences , _i.e. 
_ , we can write where is the resolution shell averaged intensity and the approximation in the last equation written above is justified by the isomorphism among the protein and its derivatives . are some parameters that depend on the hidden variables of the problem .of course we are interested in spotting the two original sources and by using only the recorded signals and . using some information about the statistical properties of the original signals and is a possible approach to estimate the parameters . the _ statistical independence _ of the two sources is not surprising whereas the fact that the above condition is not only necessary but also sufficient is .the independent component analysis is a technique recently developed to estimate the parameters based on the information of the statistical independence of the original sources .it allows to separate the latter ones from their mixtures and .several applications of ica have been recently devised and , therefore , a unified mathematical framework is required . to begin with ,we rigorously define ica by referring to a statistical variables model , _i.e. _ where j runs over the number of linear mixtures we observe and n is the number of hidden sources . the statistical model defined in eq.([ica - def ] ) is called independent component analysis .it describes the generation of observed data as a result of an unknown mixture of unknown sources .finding out both the mixing matrix and the hidden sources is the aim of this method . in order to do so, ica assumes that * the components are statistically independent , * the components are random variables and their distribution is gaussian , * the mixing matrix is square , although this hypothesis can be sometimes relaxed . for a detailed discussionsee .let us suppose that the mixing matrix has been computed ; the inverse mixing matrix is achievable and the problem is readily solved : for each hidden source .adding some noise terms in the measurements is certainly a more realistic approach although it turns out to be more tricky : for the time being , we shall skip this aspect in order to focus on a free - noise ica model .of course extending the conclusions to more complicated models is straightforward .without loss of generality we shall assume that are standardized random variables , _i.e. _ , .the latter choice is always possible since both and are known for the starting data samples .indeed , we can always replace the starting set of random variables with the new one as follows moreover ica aims to disentangle the hidden sources ( ) and , therefore , looking at preprocessing techniques to uncorrelate the _ would - be _ sources before applying any ica algorithm is a major advantage .therefore this procedure , named data whitening , is certainly a useful preprocessing strategy in ica .the eigenvalue decomposition ( evd ) is the most popular way to whiten data : the starting set of variables is linearly transformed according to the following rule where and are , respectively , the eigenvalues and the eigenvectors of the n - rank covariance matrix for the starting set of statistical variables .the covariance matrix for the new set of variables is diagonal .of course the data whitening modifies the mixing matrix ; infact , by applying the ica definition of the eq.([ica - def ] ) to both sets of variables ( and ) in the eq.([whitened - vars ] ) , we get where i runs over 1, ... 
,n .it turns out that data whitening has considerably simplified the initial problem since the new mixing matrix is orthogonal and , therefore , the n components of have been reduced to . after having standardized and whitened the starting set of statistical variables , we are ready to implement the ica algorithmactually we are looking for a unique matrix that combines with the variables in order to get the hidden sources satisfying the ica prescriptions . indeed the conditions of ica are readily achieved as soon as we note that the product of the whitened set of standardized random variables by any n - dimensional orthogonal matrix leaves the variables uncorrelated , whitened , standardized and , moreover , it leaves the mixing matrix orthogonal .therefore we shall limit ourselves to an n - dimensional orthogonal matrix and we shall fix its degrees of freedom by assuming the nongaussianity of the probability distribution functions of the hidden sources .there are several measures of nongaussianity and a full discussion is beyond the scope of this paper ( for more details see ) ; instead we briefly introduce the measure of nongaussianity we shall adopt : the negentropy .entropy is a fundamental concept of information theory .the entropy of a random variable is its coding length ( for details see ) . for a discrete random variable, entropy is defined as follows where are the possible values of .one of the main result of the information theory is that a gaussian variable has the largest entropy among the random variables with the same .therefore we argue that the less structured is a random variable the more gaussian is its distribution . in order to geta nonnegative measure of a random variable nongaussianity , whose value is zero for a gaussian variable , it is worth to introduce the following quantity where is the entropy of a gaussian random variable .hereafter we shall refer to the eq.([negentropy ] ) as to the negentropy of a random variable .the negentropy of a random variable , as defined in the eq.([negentropy ] ) , is well defined by the statistical theory and , moreover , it can be easily generalized to a system of random variables : infact the additivity of the entropy is immediately extended to the negentropy .moreover negentropy is invariant under an invertible linear transformation .the major drawback of the negentropy , as defined in the eq.([negentropy ] ) , is the computation itself since the precise evaluation of it requires the nonparametric estimation of the probability distribution function for the random variable we are dealing with .several simplifications of negentropy have been devised and we shall focus on two of them . *the kurtosis .+ the kurtosis is the 4 order momentum of a random variable probability distribution function , _i.e. _ , where is a random variable . for a gaussian random variable kurtosis equals 0 .the negentropy of eq.([negentropy ] ) is readily simplified : .+ the major drawback of the kurtosis approximation of negentropy is the lack of robustness , since its computation out of a data sample can be very sensitive to the outliers . 
*maximum entropy .+ in order to step over the unrobustness of the negentropy approximation described above , it is useful to introduce a conceptually simple and fast to be computed approximation of the negentropy based on maximum entropy principle .we write the negentropy according to the following formula ^ 2 \;\ ; , \ ] ] where are suitable coefficients , are nonquadratic functions , is a unit variance random variable and is a unit variance gaussian random variable .the approximation of eq.([negentropy - approx ] ) generalizes the kurtosis one ; infact for a single function ( _ i.e. _ n=1 ) the choice exactly leads to the kurtosis approximation described above .the slower is the growing of the functions the more robust is the approximation of the negentropy .both of the approximations described above satisfy the main features of the negentropy , _i.e. _ the nonnegativity , the zero value for a gaussian random variable and the additivity . before showing the details of the ica application to the protein crystallography , we spot some intrinsic ambiguities of the ica procedure . *the of the independent components can not be determined since the hidden sources and the mixing matrix are unknown and they can be simultanously scaled by the same quantity without modifying any conclusion . the choice leaves the ambiguity of the sign .* the order of the independent components can not be determined .indeed any permutation of the hidden sources leads to a similarity transformation on the mixing matrix and since both of them are unknown the permutation does not affect the algorithm itself .the ambiguities described above can be solved by means of physical contraints featuring the ica solutions of the specific problem .we will discuss how to overcome this problem in the next section .we have focused on the 2-dimensional problem described in the introduction and briefly formalized at the beginning of the theory section . after having recalled the definitions of the eq.([lin - comb ] ) , we proceed with the standardizing and the withening procedures of the starting set of random variables ^{p/ d+s } \;\ ; , \label{std - whitening}\ ] ] where is the whitening matrix defined by the eigenvalue decomposition ( evd ) of the matrix . at this stagewe apply the ica algorithm to . in two dimensionsan orthogonal matrix is determined by a single angle parameter ; we get where the last formula of the eq.([ica - matrix ] ) defines the -dependent solutions of ica , _i.e. _ the standardized , whitened random variables depending on the single parameter that has to be fixed by maximizing the total negentropy as follows where we use a single function ( _ i.e. _ n=1 in the eq.([negentropy - approx ] ) ) and , according to , we adopt . in the eq.([max - negentropy ] ) the negentropy additivity has been applied .other choices for are possible and we have checked that neither the solutions nor the algorithm are sensitive to them .we denote with the angle where the total negentropy attains its maximum . therefore we can conclude in that respect are the standardized , whitened and maximally nongaussian random variables corresponding to the hidden sources of the initial problem . as to the ambiguities of this technique , mentioned at the end of the previous section , we solve the first one by taking the absolute value of ,_ i.e. _ we introduce the quantities . 
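a compact numerical sketch of the two - dimensional procedure described above ( standardise , whiten through the eigenvalue decomposition of the covariance matrix , then rotate by the angle that maximises an approximate negentropy ) is the following . the nonquadratic function g ( u ) = log cosh ( u ) and the omission of the proportionality constants in eq . ( [ negentropy - approx ] ) are assumptions on my part , since the corresponding symbols were lost in extraction .

```python
import numpy as np

def ica_2d(x1, x2, n_angles=360, rng=None):
    rng = np.random.default_rng(0) if rng is None else rng
    X = np.vstack([x1, x2]).astype(float)
    X = (X - X.mean(axis=1, keepdims=True)) / X.std(axis=1, keepdims=True)   # standardise
    evals, evecs = np.linalg.eigh(np.cov(X))                                 # EVD of the covariance
    Z = np.diag(evals ** -0.5) @ evecs.T @ X                                 # whitened variables

    g_gauss = np.mean(np.log(np.cosh(rng.normal(size=100_000))))             # E G(nu), nu ~ N(0, 1)

    def negentropy(u):                       # [E G(u) - E G(nu)]^2, constants dropped
        return (np.mean(np.log(np.cosh(u))) - g_gauss) ** 2

    best_phi, best_j = 0.0, -np.inf
    for phi in np.linspace(0.0, np.pi, n_angles):        # one angle fixes the 2x2 rotation
        c, s = np.cos(phi), np.sin(phi)
        S = np.array([[c, -s], [s, c]]) @ Z
        j = negentropy(S[0]) + negentropy(S[1])          # additivity of the negentropy
        if j > best_j:
            best_phi, best_j = phi, j
    c, s = np.cos(best_phi), np.sin(best_phi)
    return np.abs(np.array([[c, -s], [s, c]]) @ Z)       # |.| resolves the sign ambiguity
```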
at this stagewe define the protein / solvent fraction as follows } \;\ ; , \label{prot - frac}\ ] ] where runs over the number of resolution shells according to eq .( [ lin - comb ] ) and , being the shell averaged resolution . the protein fraction definition of eq.([prot - frac ] )is justified by the kinematic theory stating that the diffraction intensity is expected to depend on the crystal volume and on the unit cell volume v according to the ratio . according to relevant information of the protein structures is contained in three resolution ranges , , and .the first range information is dominated by the protein structure at atomic level while in the third one the solvent content is overwhelming ( the density modification procedures aim to re - scale the structure factors moduli in the low resolution range to account for the bulk disordered solvent ) .the scattering powers of protein and solvent are of the same order in the range . hence the expression ( [ prot - frac ] )is evaluated in this range .the comparison between the protein fraction value obtained by ica with the one by matthews method , computed as in , finds out the correct order of the independent components .the results of this comparison are shown in table [ tab : fp ] for several proteins .the agreement between the protein fractions obtained by the two methods is quite satisfactory .the proteins reported in table [ tab : fp ] are named according to their codes . for each of themwe have the crystallographic data of the native and of one derivative for the isomorphous derivative technique . for the anomalous dispersion technique, we use the crystallographic data of the native collected at one wavelength . on the last row in the table [ tab : fp ] the errors are the protein fraction for the two different methods .the average values as well as the are comparable . in fig.[fig :crys - vol1 ] we report the protein fraction distribution for the proteic structures listed in table [ tab : fp ] and computed by ica . figure [ fig : crys - vol2 ] shows the corresponding distribution of the crystal volume per unit of protein molecular weight calculated according to the formula in ref. ( for a full comparison see fig.2 in the reference ) .according to our analysis the most probable value for the crystal volume per unit of protein molecular weight falls into the range 1.85 - 2.25 /dalton .in this paper we have applied a new technique , the independent component analysis , to calculate the protein fraction out of crystallographic data . the analysis here presented aims to disentangle the protein and the disordered solvent contributions . provided a sufficient number of crystallographic data ( at least as many as the supposed hidden sources ) ,this method has given convincing results , as compared to available ones in the literature .it is a promising tool to investigate some features of protein structures even if its applicability as a robust guideline at the future protein crystallography refinement programs deserves a deeper investigation .* phasing procedures . indeed weighting the protein structure factors according to the resolution shells of the crystallographic data could provide a crucial improvement of the relevant formulas for the protein phasing procedure implemented in the most popular crystallography refinement programs . 
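for reference , the matthews - style protein fraction used as the comparison baseline in table [ tab : fp ] can be computed directly from the unit cell volume , the molecular weight and the number of molecules per cell . the sketch below uses the conventional partial specific volume of 0.74 cm^3 / g , which is an assumption on my part rather than a value stated in the text .

```python
AVOGADRO = 6.02214076e23          # 1/mol
SPECIFIC_VOLUME = 0.74            # cm^3/g, conventional protein partial specific volume

def matthews_protein_fraction(cell_volume_A3, mol_weight_da, n_molecules):
    """Fraction of the unit cell occupied by protein, via the Matthews coefficient."""
    v_m = cell_volume_A3 / (mol_weight_da * n_molecules)     # A^3 per dalton
    protein_A3_per_da = SPECIFIC_VOLUME * 1e24 / AVOGADRO    # ~1.23 A^3 per dalton
    return protein_A3_per_da / v_m

# e.g. V_M = 2.2 A^3/Da, inside the most probable 1.85-2.25 range quoted above
print(matthews_protein_fraction(2.2 * 20000 * 4, 20000, 4))  # ~0.56
```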
*disentangling crystallized and disordered solvent contributions out of the crystal forms of proteins .infact the larger is the number of the independent crystallographic data referring to the same protein structure , the larger is the number of hidden sources this method can account for , the more precise is the determination of the single hidden source out of the recorded signals .* model independence of ica results in protein crystallography .moore mh , gulbis jm , dodson ej , demple b , moody pce .crystal - structure of a suicidal dna - repair protein - the ada o6-methylguanine- dna methyltransferase from escherichia - coli .embo journal 1994;13:14951501 .toro i , basquin j , teo dreher h , suck d. archaeal sm proteins form heptameric and hexameric complexes : crystal structures of the sm1 and sm2 proteins from the hyperthermophile archaeoglobus fulgidus . j mol biol 2002;320:129 .hofmann b , budde h , bruns k , guerrero sa , kalisz hm , menge u , montemartini m , nogoceke e , steinert p , wissing jb , flohe l , hecht hj .structure of tryparedoxins revealing interaction with trypanothione .biol chem 2001;382:459 .glover i , haneef i , pitts j , woods s , moss d , tickle i , blundell tl .conformational flexibility in a small globular hormone : x- ray analysis of avian pancreatic polypeptide at 0.98resolution .biopolymers 1983;22:293304 .dauter z , wilson ks , sieker lc , meyer j , moulis jm . atomic resolution ( 0.94 ) structure of clostridium acidurici ferredoxin . detailed geometry of [ 4fe-4s ] clusters in a protein .biochemistry 1997;36:1606516073 .matak vinkovic d , vinkovic m , saldanha sa , ashurst ja , von delft f , inque t , miguel rn , smith ag , blundell tl , abell c. crystal structure of escherichia coli ketopantoate reductase at 1.7 resolution and insight into the enzyme mechanism .biochemistry 2001;40:14493 ...numerical values for protein fractions .the third column refers to our method , the fourth column refers to matthews method .the protein fraction calculated by ica is averaged on the whole crystallographic data resolution range .sir , mir , sad and mad refer to the diffraction technique adopted to collect data . on the last column we report the error estimate between the two methods .[ cols="^,^,^,^,^ " , ]
|
an analysis of the protein content of several crystal forms of proteins has been performed . we apply a new numerical technique , the independent component analysis ( ica ) , to determine the volume fraction of the asymmetric unit occupied by the protein . this technique requires only the crystallographic data of structure factors as input . corresponding author , e - mail : massimo.ladisa.cnr.it , phone : +39 0805442419 , fax : +39 0805442591 . short title : ica in protein crystallography . keywords : matthews coefficient , protein crystallography , structure factors , solvent fraction , ica algorithm
|
we consider the reinforcement learning ( rl ) problem of optimizing rewards in an unknown markov decision process ( mdp ) . in thissetting an agent makes sequential decisions within its enironment to maximize its cumulative rewards through time .we model the environment as an mdp , however , unlike the standard mdp planning problem the agent is unsure of the underlying reward and transition functions . through exploring poorly - understood policies ,an agent may improve its understanding of its environment but it may improve its short term rewards by exploiting its existing knowledge .the focus of the literature in this area has been to develop algorithms whose performance will be close to optimal in some sense . there are numerous criteria for statistical and computational efficiency that might be considered .some of the most common include pac ( probably approximately correct ) , mb ( mistake bound ) , kwik ( knows what it knows ) and regret .we will focus our attention upon regret , or the shortfall in the agent s expected rewards compared to that of the optimal policy .we believe this is a natural criteria for performance during learning , although these concepts are closely linked . a good overview of various efficiency guarantees is given in section 3 of li et al . .broadly , algorithms for rl can be separated as either model - based , which build a generative model of the environment , or model - free which do not .algorithms of both type have been developed to provide pac - mdp bounds polynomial in the number of states and actions .however , model - free approaches can struggle to plan efficient exploration .the only near - optimal regret bounds to time of have only been attained by model - based algorithms .but even these bounds grow with the cardinality of the state and action spaces , which may be extremely large or even infinite .worse still , there is a lower bound for the expected regret in an arbitrary mdp . in special cases , where the reward or transition function is known to belong to a certain functional family, existing algorithms can exploit the structure to move beyond this ` tabula rasa' ( where nothing is assumed beyond and ) lower bound .the most widely - studied parameterization is the degenerate mdp with no transitions , the mutli - armed bandit .another common assumption is that the transition function is linear in states and actions .papers here establigh regret bounds for linear quadratic control , but with constants that grow exponentially with dimension .later works remove this exponential dependence , but only under significant sparsity assumptions .the most general previous analysis considers rewards and transitions that are -hlder in a -dimensional space to establish regret bounds .however , the proposed algorithm uccrl is not computationally tractable and the bounds approach linearity in many settings . in this paper we analyse the simple and intuitive algorithm _ posterior sampling for reinforcement learning _ ( psrl ) .psrl was initially introduced as a heuristic method , but has since been shown to satisfy state of the art regret bounds in finite mdps and also exploit the structure of factored mdps .we show that this same algorithm satisfies general regret bounds that depends upon the dimensionality , rather than the cardinality , of the underlying reward and transition function classes . 
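for orientation , psrl itself is a short episodic loop : at the start of each episode an mdp is drawn from the current posterior , the optimal policy for that sample is followed for the episode , and the posterior is then conditioned on the observed transitions . the sketch below is schematic only ; the posterior family and the planner are placeholders that this paper does not fix .

```python
def psrl(env, posterior, solve, n_episodes, horizon):
    """Schematic PSRL loop; `posterior` and `solve` are placeholders."""
    history = []
    for _ in range(n_episodes):
        sampled_mdp = posterior.sample()          # one draw from the current posterior
        policy = solve(sampled_mdp, horizon)      # plan as if the sample were the truth
        state = env.reset()
        for t in range(horizon):
            action = policy(state, t)
            next_state, reward = env.step(action)
            history.append((state, action, reward, next_state))
            state = next_state
        posterior.update(history)                 # condition on everything observed so far
    return history
```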
to characterize the complexity of this learning problem we extend the definition of the eluder dimension , previously introduced for bandits , to capture the complexity of the reinforcement learning problem .our results provide a unified analysis of model - based reinforcement learning in general and provide new state of the art bounds in several important problem settings .we consider the problem of learning to optimize a random finite horizon mdp in repeated finite episodes of interaction . is the state space , is the action space , is the reward distribution over and is the transition distribution over when selecting action in state , is the time horizon , and the initial state distribution .all random variables we will consider are on a probability space . a policy is a function mapping each state and to an action . for each mdp and policy , we define a value function : \ ] ] where ] with a single active component equal to 1 and 0 otherwise .in fact , the notational convention that should not impose a great restriction for most practical settings . for any distribution over , we define the one step future value function to be the expected value of the optimal policy with the next state distributed according to ..\ ] ] one natural regularity condition for learning is that the future values of similar distributions should be similar .we examine this idea through the lipschitz constant on the means of these state distributions .we write \in { \mathcal{s}} ] and additive -sub - gaussian noise .we let be the -covering number of with respect to the -norm and write for brevity .finally we write for the eluder dimension of at precision , a notion of dimension specialized to sequential measurements described in section [ sec : eluder ] . our main result , theorem [ thm : main regret ] , bounds the expected regret of psrl at any time . [ thm : main regret ] fix a state space , action space , function families and for any .let be an mdp with state space , action space , rewards and transitions .if is the distribution of and is a global lipschitz constant for the future value function as per then : \le \big [ c_{\mathcal{r}}+ c_{\mathcal{p}}\big ] + \tilde{d}({\mathcal{r } } ) + + { \mathds{e}}[k^*]\left(1+\frac{1}{t-1}\right ) \tilde{d}({\mathcal{p}})\end{aligned}\ ] ] where for equal to either or we will use the shorthand : + .theorem [ thm : main regret ] is a general result that applies to almost all rl settings of interest . in particular, we note that any bounded function is sub - gaussian .to clarify the assymptotics if this bound we use another classical measure of dimensionality .[ def : kol ] the kolmogorov dimension of a function class is given by : using definition [ def : kol ] in theorem [ thm : main regret ] we can obtain our corollary .[ cor : ass regret ] under the assumptions of theorem [ thm : main regret ] and writing : = \tilde{o } \left ( \ \sigma_{\mathcal{r}}\sqrt{d_k({\mathcal{r } } ) d_e({\mathcal{r } } ) t } + { \mathds{e}}[k^ * ] \sigma_{\mathcal{p}}\sqrt{d_k({\mathcal{p } } ) d_e({\mathcal{p } } ) t } \\right)\ ] ] where ignores terms logarithmic in .in section [ sec : eluder ] we provide bounds on the eluder dimension of several function classes . these lead to explicit regret bounds in a number of important domains such as discrete mdps , linear - quadratic control and even generalized linear systems . in all of these casesthe eluder dimension scales comparably with more traditional notions of dimensionality . 
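for a finite function class represented by its mean - function values on a finite input grid , the ε - dependence test underlying definition [ def : eluder ] can be brute - forced . the sketch below follows the standard formulation of russo and van roy , which i take to be the intended ( but garbled ) definition .

```python
import numpy as np

def eps_dependent(f_values, past_idx, new_idx, eps):
    """Brute-force epsilon-dependence for a finite class of mean functions.

    f_values: array (num_functions, num_inputs, output_dim) of function values
    on a finite input grid; past_idx indexes x_1..x_n, new_idx the candidate x.
    """
    for f in f_values:
        for g in f_values:
            past_gap = np.sqrt(((f[past_idx] - g[past_idx]) ** 2).sum())
            if past_gap <= eps and np.linalg.norm(f[new_idx] - g[new_idx]) > eps:
                return False     # agreement on the past does not pin down the value at x
    return True
```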
for clarity ,we present bounds in the case of linear - quadratic control .[ cor : lqr ] let be an -dimensional linear - quadratic system with -sub - gaussian noise .if the state is -bounded by and is the distribution of , then : = \tilde{o } \left ( \sigma c \lambda_1 n^2 \sqrt{t } \ \right).\ ] ] here is the largest eigenvalue of the matrix given as the solution of the ricatti equations for the unconstrained optimal value function .we simply apply the results of for eluder dimension in section [ sec : eluder ] to corollary [ cor : ass regret ] and upper bound the lipschitz constant of the constrained lqr by , see appendix [ app : bounded lqr ] .algorithms based upon posterior sampling are intimately linked to those based upon optimism . in appendix[ app : ucrl - eluder ] we outline an optimistic variant that would attain similar regret bounds but with high probility in a frequentist sense .unfortunately this algorithm remains computationally intractable even when presented with an approximate mdp planner .further , we believe that psrl will generally be more statistically efficient than an optimistic variant with similar regret bounds since the algorithm is not affected by loose analysis .to quantify the complexity of learning in a potentially infinite mdp , we extend the existing notion of eluder dimension for real - valued functions to vector - valued functions . for any we define the set of mean functions : = \{f | f={\mathds{e}}[g ] \text { for } g \in { \mathcal{g}}\} ] and zero mean noise . intuitively , the eluder dimension of is the length of the longest possible sequence such that for all , knowing the function values of will not reveal .we will say that is -dependent on is -independent of iff it does not satisfy the definition for dependence .[ def : eluder ] the eluder dimension is the length of the longest possible sequence of elements in such that for some every element is -independent of its predecessors . traditional notions from supervised learning , such as the vc dimension , are not sufficient to characterize the complexity of reinforcement learning . in fact , a family learnable in constant time for supervised learning may require arbitrarily long to learn to control well .the eluder dimension mirrors the linear dimension for vector spaces , which is the length of the longest sequence such that each element is linearly independent of its predecessors , but allows for nonlinear and approximate dependencies .we overload our notation for and write ,\epsilon) ] with . define to be the condition number . if then for any : \right ) + 1 = \tilde{o}(r^2 np)\ ] ]we now follow the standard argument that relates the regret of an optimistic or posterior sampling algorithm to the construction of confidence sets .we will use the eluder dimension build confidence sets for the reward and transition which contain the true functions with high probability and then bound the regret of our algorithm by the maximum deviation within the confidence sets . for observations from will center the sets around the least squares estimate where is the cumulative squared prediciton error .the confidence sets are defined where controls the growth of the confidence set and the empirical 2-norm is defined . 
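returning to the linear - quadratic corollary above , the constant λ_1 is the largest eigenvalue of the riccati solution for the unconstrained value function and is easy to obtain numerically . the sketch below uses the stationary discrete - time riccati equation from scipy with illustrative placeholder matrices ; whether the finite - horizon recursion or the stationary equation is the appropriate object here is left open .

```python
import numpy as np
from scipy.linalg import solve_discrete_are

# Illustrative placeholder system matrices (not taken from the paper).
A = np.array([[1.0, 0.1], [0.0, 1.0]])     # state transition
B = np.array([[0.0], [0.1]])               # control input
Q = np.eye(2)                              # quadratic state cost
R = np.array([[1.0]])                      # quadratic control cost

P = solve_discrete_are(A, B, Q, R)         # unconstrained value function is quadratic in the state
lambda_1 = np.linalg.eigvalsh(P).max()     # the constant entering the regret bound
print(lambda_1)
```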
for , we define the distinguished control parameter : this leads to confidence sets which contain the true function with high probability .[ prop : conf sets ] for all and and the confidence sets for all then : we combine standard martingale concentrations with a discretization scheme .the argument is essentially the same as proposition 6 in , but extends statements about to vector - valued functions .a full derivation is available in the appendix [ app : conf sets ] .we now bound the deviation from by the maximum deviation within the confidence set .for any set of functions we define the width of the set at to be the maximum l2 deviation between any two members of evaluated at . we can bound for the number of large widths in terms of the eluder dimension . [ lem : big widths ] if is a nondecreasing sequence with then this result follows from proposition 8 in but with a small adjustment to account for episodes . a full proof is given in appendix [ app : large widths ] .we now use lemma [ lem : big widths ] to control the cumulative deviation through time .[ prop : widths ] if is nondecreasing with and for all then : once again we follow the analysis of russo and strealine notation by letting abd . reordering the sequence such that we have that : . by the reordering we knowthat means that . from lemma [ lem : big widths ] , . so that if then . therefore , will now show reproduce the decomposition of expected regret in terms of the bellman error . from here, we will apply the confidence set results from section [ sec : conf ] to obtain our regret bounds .we streamline our discussion of and by simply writing in place of or and in place of or where appropriate ; for example . the first step in our ananlysis breaks down the regret by adding and subtracting the _ imagined _ optimal reward of under the mdp . here is a distinguished initial state , but moving to general poses no real challenge .algorithms based upon optimism bound with high probability .for psrl we use lemma [ lem : ps ] and the tower property to see that this is zero in expectation .[ lem : ps ] if is the distribution of then , for any -measurable function , = { \mathds{e}}[g(m_k ) | h_{t_k } ] \ ] ] we introduce the bellman operator , which for any mdp , stationary policy and value function , is defined by this returns the expected value of state where we follow the policy under the laws of , for one time step .the following lemma gives a concise form for the dynamic programming paradigm in terms of the bellman operator .[ lem : dpl ] for any mdp and policy , the value functions satisfy for , with . through repeated application of the dynamic programming operator and taking expectation of martingale differences we can mirror earlier analysis to equate expectedregret with the cumulative bellman error : = \sum_{i=1}^\tau ( { \mathcal{t}}^k_{k , i}-{\mathcal{t}}^*_{k , i})v^k_{k , i+1}(s_{t_k+i})\ ] ] efficient regret bounds for mdps with an infinite number of states and actions require some regularity assumption .one natural notion is that nearby states might have similar optimal values , or that the optimal value function function might be lipschitz .unfortunately , any discontinuous reward function will usually lead to discontious values functions so that this assumption is violated in many settings of interest .however , we only require that the _ future _ value is lipschitz in the sense of equation. 
this will will be satisfied whenever the underlying value function is lipschitz , but is a strictly weaker requirement since the system noise helps to smooth future values . since has -sub - gaussian noise we write in the natural way .we now use equation to reduce regret to a sum of set widths . to reduce clutter andmore closely follow the notation of section [ sec : eluder ] we will write . & \le & { \mathds{e}}\left [ \sum_{i=1}^\tau \left\ { \overline{r}^k(x_{k , i } ) - \overline{r}^*(x_{k , i } ) + u^k_{i}(p^k(x_{k , i } ) ) - u^k_{i}(p^*(x_{k , i } ) ) \right\ } \right ] \nonumber \\ & \le & { \mathds{e}}\left [ \sum_{i=1}^\tau \left\ { | \overline{r}^k(x_{k , i } ) - \overline{r}^*(x_{k , i})| + k^k \|\overline{p}^k(x_{k , i } ) - \overline{p}^*(x_{k , i } ) \|_2 \right\}\right]\end{aligned}\ ] ] where is a global lipschitz constant for the future value function of as per .we now use the results from sections [ sec : eluder ] and [ sec : conf ] to form the corresponding confidence sets and for the reward and transition functions respectively .let and and condition upon these events to give : & \le & { \mathds{e}}\left [ \sum_{k=1}^m \sum_{i=1}^\tau \left\ { | \overline{r}^k(x_{k , i } ) - \overline{r}^*(x_{k , i})| + k^k \|\overline{p}^k(x_{k , i } ) - \overline{p}^*(x_{k , i } ) \|_2 \right\}\right ] \nonumber \\ & & \hspace{-10 mm } \le \sum_{k=1}^m \sum_{i=1}^\tau \left\ { w_{{\mathcal{r}}_k}(x_{k , i } ) + { \mathds{e}}[k^k|a , b ] w_{{\mathcal{p}}_k}(x_{k , i } ) + 8 \delta ( c_{\mathcal{r}}+ c_{\mathcal{p } } ) \right\}\end{aligned}\ ] ] the posterior sampling lemma ensures that = { \mathds{e}}[k^*] ] by a union bound on .we fix to see that : \le ( c_{\mathcal{r}}+ c_{\mathcal{p } } ) + \sum_{k=1}^m \sum_{i=1}^\tau w_{{\mathcal{r}}_k}(x_{k , i } ) + { \mathds{e}}[k^ * ] \left(1+\frac{1}{t-1 } \right)\sum_{k=1}^m \sum_{i=1}^\tau w_{{\mathcal{p}}_t}(x_{k , i})\ ] ] we now use equation together with proposition [ prop : widths ] to obtain our regret bounds . for ease of notation we will write and . & \le & 2 + ( c_{\mathcal{r}}+ c_{\mathcal{p } } ) + \tau(c_{\mathcal{r}}d_e({\mathcal{r}})+ c_{\mathcal{p}}d_e({\mathcal{p } } ) ) + \nonumber \\ & & \ 4 \sqrt{\beta_t^*({\mathcal{r}},1/8t,\alpha ) d_e({\mathcal{r } } ) t } + 4 \sqrt{\beta_t^*({\mathcal{p}},1/8t,\alpha ) d_e({\mathcal{p } } ) t}\end{aligned}\ ] ] we let and write for and to complete our proof of theorem [ thm : main regret ] : \le \big [ c_{\mathcal{r}}+ c_{\mathcal{p}}\big ] + \tilde{d}({\mathcal{r } } ) + { \mathds{e}}[k^*]\left(1+\frac{1}{t-1}\right ) \tilde{d}({\mathcal{p}})\end{aligned}\ ] ] where is shorthand for .the first term ] and conditional cumulant generating function ] .[ lem : mg conc ] for adapted real random variables adapted to .we define the conditional mean ] . both of these lemmas are available in earlier discussion for real - valued variables .we now specialize our discussion to the vector space where the inner product . to simplify notationwe will write and for arbitrary .we now define so that clearly . now since we have said that the noise is -sub - gaussian , \le \exp \left ( \frac { \| \phi \|_2 ^ 2 \sigma^2}{2 } \right ) \text { } \forall \phi \in \mathcal{y} ] with .define to be the condition number . if then for any : \right ) + 1 = \tilde{o}(r^2 np)\ ] ] this proof follows exactly as per the linear case , but first using a simple reduction on the form of equation . 
to which we can now apply lemma [ lem : norms trace ] with the rescaled by .following the same arguments as for linear functions now completes our proof .we imagine a standard linear quadratic controller with rewards with the state - action vector .the rewards and transitions are given by : where is positive semi - definite and projects x onto the -ball at radius .in the case of unbounded states and actions the ricatti equations give the form of the optimal value function for . in this casewe can see that the difference in values of two states : where is the largest eigenvalue of and is an upper bound on the -norm of both and .we note that works as an effective lipshcitz constant when we know what can bound .we observe that for any projection for $ ] and that for all positive semi - definite matrices , .using this observation together with reward and transition functions we can see that the value function of the bounded lqr system is always greater than or equal to that of the unconstrained value function .the effect of excluding the low - reward outer region , but maintaining the higher - reward inner region means that the value function becomes more flat in the bounded case , and so works as an effective lipschitz constant for this problem too .for completeness , we explicitly outline an optimistic algorithm which uses the confidence sets in our analysis of psrl to guarantee similar regret bounds with high probability over all mdp .the algorithm follows the style of ucrl2 so that at the start of the episode the algorithm form and then solves for the optimistic policy that attains the highest reward over any in .
|
we consider the problem of learning to optimize an unknown markov decision process ( mdp ) . we show that , if the mdp can be parameterized within some known function class , we can obtain regret bounds that scale with the dimensionality , rather than cardinality , of the system . we characterize this dependence explicitly as \tilde{o}(\sqrt{d_k d_e t}) where t is time elapsed , d_k is the kolmogorov dimension and d_e is the _ eluder dimension _ . these represent the first unified regret bounds for model - based reinforcement learning and provide state of the art guarantees in several important settings . moreover , we present a simple and computationally efficient algorithm , _ posterior sampling for reinforcement learning _ ( psrl ) , that satisfies these bounds .
|
maxwell s equations may be written in differential form as follows : the fields ( magnetic flux density ) and ( electric field strength ) determine the force on a particle of charge travelling with velocity ( the lorentz force equation ) : the electric displacement and magnetic intensity are related to the electric field and magnetic flux density by : the electric permittivity and magnetic permeability depend on the medium within which the fields exist .the values of these quantities in vacuum are fundamental physical constants . in si units : where is the speed of light in vacuum .the permittivity and permeability of a material characterize the response of that material to electric and magnetic fields . in simplified models ,they are often regarded as constants for a given material ; however , in reality the permittivity and permeability can have a complicated dependence on the fields that are present .note that the _ relative permittivity _ and the _ relative permeability _ are frequently used . these are dimensionless quantities , defined by : that is , the relative permittivity is the permittivity of a material relative to the permittivity of free space , and similarly for the relative permeability .the quantities and are respectively the electric charge density ( charge per unit volume ) and electric current density ( is the charge crossing unit area perpendicular to unit vector per unit time ) . equations ( [ eq : maxwell2 ] ) and ( [ eq : maxwell3 ] ) are independent of and , and are generally referred to as the `` homogeneous '' equations ; the other two equations , ( [ eq : maxwell1 ] ) and ( [ eq : maxwell4 ] ) are dependent on and , and are generally referred to as the `` inhomogeneous '' equations .the charge density and current density may be regarded as _sources _ of electromagnetic fields . when the charge density and current density are specified ( as functions of space , and , generally , time ), one can integrate maxwell s equations ( [ eq : maxwell1])([eq : maxwell4 ] ) to find possible electric and magnetic fields in the system .usually , however , the solution one finds by integration is not unique : for example , the field within an accelerator dipole magnet may be modified by propagating an electromagnetic wave through the magnet .however , by imposing certain constraints ( for example , that the fields within a magnet are independent of time ) it is possible to obtain a unique solution for the fields in a given system of electric charges and currents .most realistic situations are sufficiently complicated that solutions to maxwell s equations can not be obtained analytically .a variety of computer codes exist to provide solutions numerically , once the charges , currents , and properties of the materials present are all specified , see for example references . 
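as a quick check of the relations above , the sketch below evaluates c = 1 / sqrt ( ε_0 μ_0 ) and the lorentz force using the si values shipped with scipy .

```python
import numpy as np
from scipy import constants

# c = 1/sqrt(eps0 * mu0), all in SI units
print(1.0 / np.sqrt(constants.epsilon_0 * constants.mu_0), constants.c)

def lorentz_force(q, E, v, B):
    """F = q (E + v x B) for 3-vectors E, v, B."""
    return q * (np.asarray(E, float) + np.cross(v, B))

# e.g. a proton moving at 1e6 m/s along x through a 1 T field along y
print(lorentz_force(constants.e, [0.0, 0.0, 0.0], [1e6, 0.0, 0.0], [0.0, 1.0, 0.0]))
```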
solving for the fields in realistic ( three - dimensional ) systems often requires a reasonable amount of computing power ; some sophisticated techniques have been developed for solving maxwell s equations numerically with good efficiency .we do not consider such techniques here , but focus instead on the analytical solutions that may be obtained in idealized situations .although the solutions in such cases may not be sufficiently accurate to complete the design of a real accelerator magnet , the analytical solutions do provide a useful basis for describing the fields in real magnets , and provide also some important connections with the beam dynamics in an accelerator .an important feature of maxwell s equations is that , for systems containing materials with constant permittivity and permeability ( i.e. permittivity and permeability that are independent of the fields present ) , the equations are _ linear _ in the fields and sources .that is , each term in the equations involves a field or a source to ( at most ) the first power , and products of fields or sources do not appear . as a consequence ,the _ principle of superposition _ applies : if and are solutions of maxwell s equations with the current densities and , then the field will be a solution of maxwell s equations , with the source given by the total current density .this means that it is possible to represent complicated fields as superpositions of simpler fields .an important and widely used analysis technique for accelerator magnets is to decompose the field ( determined from either a magnetic model , or from measurements of the field in an actual magnet ) into a set of multipoles .while it is often the ideal to produce a field consisting of a single multipole component , this is never perfectly achieved in practice : the multipole decomposition indicates the extent to which components other than the `` desired '' multipole are present .multipole decompositions also produce useful information for modelling the beam dynamics .although the principle of superposition strictly only applies in systems where the permittivity and permeability are independent of the fields , it is always possible to perform a multipole decomposition of the fields in free space ( e.g. in the interior of a vacuum chamber ) , since in that region the permittivity and permeability are constants . however , it should be remembered that for nonlinear materials ( where the permeability , for example , depends on the magnetic field strength ) , the field inside the material comprising the magnet will not necessarily be that expected if one were simply to add together the fields corresponding to the multipole components .solutions to maxwell s equations lead to a rich diversity of phenomena , including the fields around charges and currents in certain simple configurations , and the generation , transmission and absorption of electromagnetic radiation .many existing texts cover these phenomena in detail ; see , for example , the authoritative text by jackson . therefore, we consider only briefly the electric field around a point charge and the magnetic field around a long straight wire carrying a uniform current : our main purpose here is to remind the reader of two important integral theorems ( gauss theorem , and stokes theorem ) , of which we shall make use later . 
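the linearity argument in the preceding lines can be stated compactly; the following is a reconstruction in standard notation rather than the article's original display.

```latex
% principle of superposition for media with constant permittivity and
% permeability: solutions of maxwell's equations may be added, with
% the sources adding correspondingly, so a complicated field can be
% written as a sum of simpler (e.g. multipole) fields.
\begin{equation}
  (\vec{E}_1, \vec{B}_1) \leftrightarrow (\rho_1, \vec{J}_1), \quad
  (\vec{E}_2, \vec{B}_2) \leftrightarrow (\rho_2, \vec{J}_2)
  \;\;\Rightarrow\;\;
  (\vec{E}_1 + \vec{E}_2,\; \vec{B}_1 + \vec{B}_2)
  \leftrightarrow (\rho_1 + \rho_2,\; \vec{J}_1 + \vec{J}_2).
\end{equation}
```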
in the following sections , we will discuss analytical solutions to maxwell s equations for situations relevant to some of the types of magnets commonly used in accelerators .these include multipoles ( dipoles , quadrupoles , sextupoles , and so on ) , solenoids , and insertion devices ( undulators and wigglers ) .we consider only static fields .we begin with two - dimensional fields , that is fields that are independent of one coordinate ( generally , the coordinate representing the direction of motion of the beam ) .we will show that multipole fields are indeed solutions of maxwell s equations , and we will derive the current distributions needed to generate `` pure '' multipole fields .we then discuss multipole decompositions , and compare techniques for determining the multipole components present in a given field from numerical field data ( from a model , or from measurements ) .finally , we consider how the two - dimensional multipole decomposition may be extended to three - dimensional fields , to include ( for example ) insertion devices , and fringe fields in multipole magnets .guass theorem states that for any smooth vector field : where is a volume bounded by the closed surface .note that the area element is oriented to point _ out _ of .gauss theorem is helpful for obtaining physical interpretations of two of maxwell s equations , ( [ eq : maxwell1 ] ) and ( [ eq : maxwell2 ] ) .first , applying gauss theorem to ( [ eq : maxwell1 ] ) gives : where is the total charge enclosed by .suppose that we have a single isolated point charge in an homogeneous , isotropic medium with constant permittivity .in this case , it is interesting to take to be a sphere of radius . by symmetry, the magnitude of the electric field must be the same at all points on , and must be normal to the surface at each point .then , we can perform the surface integral in ( [ eq : coulomb1 ] ) : this is illustrated in fig .[ fig : coulombslaw ] : the outer circle represents a cross - section of a sphere ( ) enclosing volume , with the charge at its centre . the black arrows in fig .[ fig : coulombslaw ] represent the electric field lines , which are everywhere perpendicular to the surface . since , we find coulomb s law for the magnitude of the electric field around a point charge : .the field lines are everywhere perpendicular to a spherical surface centered on the charge .[ fig : coulombslaw ] ] applied to maxwell s equation ( [ eq : maxwell2 ] ) , gauss theorem leads to : in other words , the magnetic flux integrated over any closed surface must equal zero at least , until we discover magnetic monopoles . lines of magnetic flux occur in closed loops ; whereas lines of electric field can start ( and end ) on electric charges .stokes theorem states that for any smooth vector field : where the loop bounds the surface . applied to maxwell s equation ( [ eq : maxwell4 ] ), stokes theorem leads to : which is ampre s law . 
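the two integral theorems and the point-charge result quoted above can be reconstructed as follows; the symbols are assumed here and are not copied from the lost displays.

```latex
% gauss' theorem and stokes' theorem for a smooth vector field F, with
% the applications discussed in the text: coulomb's law for a point
% charge q, the absence of magnetic monopoles, and ampere's law.
\begin{align}
  \int_V \nabla\cdot\vec{F}\,dV &= \oint_{\partial V}\vec{F}\cdot d\vec{S} ,
  &
  \oint_{\partial V}\vec{D}\cdot d\vec{S} = q
  \;\;&\Rightarrow\;\;
  E = \frac{q}{4\pi\varepsilon r^{2}} , \\
  \oint_{\partial V}\vec{B}\cdot d\vec{S} &= 0 ,
  &
  \int_S (\nabla\times\vec{F})\cdot d\vec{S} &= \oint_{\partial S}\vec{F}\cdot d\vec{l} , \\
  \oint_{\partial S}\vec{H}\cdot d\vec{l} &=
  \int_S \left(\vec{J} + \frac{\partial\vec{D}}{\partial t}\right)\cdot d\vec{S}
  & &\text{(ampere's law).}
\end{align}
```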
from ampre s law, we can derive an expression for the strength of the magnetic field around a long , straight wire carrying current .the magnetic field must have rotational symmetry around the wire .there are two possibilities : a radial field , or a field consisting of closed concentric loops centred on the wire ( or some superposition of these fields ) .a radial field would violate maxwell s equation ( [ eq : maxwell2 ] ) .therefore , the field must consist of closed concentric loops ; and by considering a circular loop of radius , we can perform the integral in eq .( [ eq : ampere1 ] ) : where is the total current carried by the wire . in this case, the line integral is performed around a loop centered on the wire , and in a plane perpendicular to the wire : essentially , this corresponds to one of the magnetic field lines , see fig .[ fig : ampereslaw ] .the total current passing through the surface bounded by the loop is simply the total current . .[ fig : ampereslaw ] ] in an homogeneous , isotropic medium with constant permeability , , and we obtain the expression for the magnetic flux density at distance from the wire : this result will be useful when we come to consider how to generate specified multipole fields from current distributions . finally ,applying stokes theorem to the homogeneous maxwell s equation ( [ eq : maxwell3 ] ) , we find : defining the electromotive force as the integral of the electric field around a closed loop , and the magnetic flux as the integral of the magnetic flux density over the surface bounded by the loop , eq .( [ eq : faraday1 ] ) gives : which is faraday s law of electromagnetic induction .faraday s law is significant for magnets with time - dependent fields , such as pulsed magnets ( used for injection and extraction ) , and magnets that are `` ramped '' ( for example , when changing the beam energy in a storage ring ) . the change in magnetic field will induce a voltage across the coil of the magnet , that must be taken into account when designing the power supply .also , the induced voltages can induce eddy currents in the core of the magnet , or in the coils themselves , leading to heating .this is an issue for superconducting magnets , which must be ramped slowly to avoid quenching .gauss theorem and stokes theorem can be applied to maxwell s equations to derive constraints on the behaviour of electromagnetic fields at boundaries between different materials . here, we shall focus on the boundary conditions on the magnetic field : these conditions will be useful when we consider multipole fields in iron - dominated magnets ., title="fig : " ] , title="fig : " ] consider first a short cylinder or `` pill box '' that crosses the boundary between two media , with the flat ends of the cylinder parallel to the boundary , see fig .[ fig : boundaryconditionb](a ) . 
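the wire-field result and faraday's law referred to above take the following standard forms; this is a hedged reconstruction, not the original display.

```latex
% field of a long straight wire carrying current I, obtained from
% ampere's law on a circular loop of radius r, and faraday's law of
% induction relating the electromotive force to the rate of change of
% the magnetic flux through the loop.
\begin{equation}
  \oint \vec{H}\cdot d\vec{l} = I
  \;\;\Rightarrow\;\;
  B(r) = \frac{\mu I}{2\pi r} ,
  \qquad
  \mathcal{E} = \oint \vec{E}\cdot d\vec{l}
              = -\frac{\partial}{\partial t}\int_S \vec{B}\cdot d\vec{S}
              = -\frac{\partial\Phi}{\partial t} .
\end{equation}
```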
applying gauss theorem to maxwell s equation ( [ eq : maxwell2 ] ) gives : where the boundary encloses the volume within the cylinder .if we take the limit where the length of the cylinder ( ) approaches zero , then the only contributions to the surface integral come from the flat ends ; if these have infinitesimal area , then since the orientations of these surfaces are in opposite directions on opposite sides of the boundary , and parallel to the normal component of the magnetic field , we find : where and are the normal components of the magnetic flux density on either side of the boundary .hence : in other words , the normal component of the magnetic flux density is continuous across a boundary .a second boundary condition , this time on the component of the magnetic field parallel to a boundary , can be obtained by applying stokes theorem to maxwell s equation ( [ eq : maxwell4 ] ) . in particular , we consider a surface bounded by a loop that crosses the boundary of the material , see fig .[ fig : boundaryconditionb](b ) . if we integrate both sides of eq .( [ eq : maxwell4 ] ) over that surface , and apply stokes theorem ( [ eq : stokestheorem ] ) , we find : where is the total current flowing through the surface . now, let the surface take the form of a thin strip , with the short ends perpendicular to the boundary , and the long ends parallel to the boundary .in the limit that the length of the short ends goes to zero , the area of goes to zero : both the current flowing through the surface , and the electric displacement integrated over become zero . however , there are still contributions to the integral of around from the long sides of the strip .thus , we find that : where is the component of the magnetic intensity parallel to the boundary at a point on one side of the boundary , and is the component of the magnetic intensity parallel to the boundary at a nearby point on the other side of the boundary . in other words ,the _ tangential _ component of the magnetic intensity is continuous across a boundary .we can derive a stronger contraint on the magnetic field at a boundary in the case that the material on one side of the boundary has infinite permeability ( which can provide a reasonable model for some ferromagnetic materials ) .since , it follows from ( [ eq : boundaryparallelh ] ) that : and in the limit , while remains finite , we must have : in other words , the magnetic flux density at the surface of a material of infinite permeability must be perpendicular to that surface .of course , the permeability of a material characterizes its response to an applied external magnetic field : in the case that the permeability is infinite , a material placed in an external magnetic field acquires a magnetization that exactly cancels any component of the external field at the surface of the material .consider a region of space free of charges and currents ; for example , the interior of an accelerator vacuum chamber ( at least , in an ideal case , and when the beam is not present ) . 
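the boundary conditions derived above can be summarized in one line; the subscripts 1 and 2 labelling the two media are an assumed notation.

```latex
% magnetic boundary conditions at an interface (no surface currents),
% and the limiting case of an infinitely permeable material, at whose
% surface the flux density must be purely perpendicular.
\begin{equation}
  B_{1\perp} = B_{2\perp} , \qquad
  H_{1\parallel} = H_{2\parallel} , \qquad
  \mu \to \infty \;\;\Rightarrow\;\; B_{\parallel} \to 0
  \;\;\text{at the surface}.
\end{equation}
```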
if we further exclude propagating electromagnetic waves , then any magnetic field generated by steady currents outside the vacuum chamber must satisfy : eq .( [ eq : magnetostatic1 ] ) is just maxwell s equation ( [ eq : maxwell2 ] ) , and eq .( [ eq : magnetostatic2 ] ) follows from maxwell s equation ( [ eq : maxwell4 ] ) given that , , and derivatives with respect to time vanish .we shall show that a magnetic field with constant , and , given by : where and is a ( complex ) constant , satisfies eqs .( [ eq : magnetostatic1 ] ) and ( [ eq : magnetostatic2 ] ) .note that the field components and are real , and are obtained from the imaginary and real parts of the right hand side of eq .( [ eq : multipolefield ] ) . to show that the above field satisfies eqs .( [ eq : magnetostatic1 ] ) and ( [ eq : magnetostatic2 ] ) , we apply the differential operator to each side of eq .( [ eq : multipolefield ] ) . applied to the left hand side, we find : applied to the right hand side of eq .( [ eq : multipolefield ] ) , the differential operator ( [ eq : diffop ] ) gives : combining eqs .( [ eq : multipolefield ] ) , ( [ eq : diffmultipolelhs ] ) and ( [ eq : diffmultipolerhs ] ) , we find : finally , we note that is constant , so any derivatives of vanish ; furthermore , and are independent of , so any derivatives of these coordinates with respect to vanish .thus , we conclude that for the field ( [ eq : multipolefield ] ) : and that this field is therefore a solution to maxwell s equations within the vacuum chamber .of course , this analysis tells us only that the field is a _possible _ physical field : it does not tell us how to generate such a field . the problem of generating a field of the form eq .( [ eq : multipolefield ] ) we shall consider in section [ sec : generatingmultipoles ] .fields of the form ( [ eq : multipolefield ] ) are known as _ multipole fields_. the index ( an integer ) indicates the _ order _ of the multipole : is a dipole field , is a quadrupole field , is a sextupole field , and so on .a solenoid field has for all , and non - zero ; usually , a solenoid field is not considered a multipole field , and we assume ( unless stated otherwise ) that in a multipole magnet . note that we can apply the principle of superposition to deduce that a more general magnetic field can be constructed by adding together a set of multipole fields : a `` pure '' multipole field of order has for only that one value of .the coefficients in eq .[ eq : multipolesum ] characterise the strength and orientation of each multipole component in a two - dimensional magnetic field .it is sometimes more convenient to express the field using polar coordinates , rather than cartesian coordinates .writing and , we see that eq .( [ eq : multipolesum ] ) becomes : by writing the multipole expansion in this form , we see immediately that the strength of the field in a pure multipole of order varies as with distance from the magnetic axis .we can go a stage further , and express the field in terms of polar components : thus : by writing the field in this form , we see that for a pure multipole of order , rotation of the magnet through around the axis simply changes the sign of the field .we also see that if we write : then the value of ( the phase of ) determines the orientation of the field .conventionally , a pure multipole with is known as a `` normal '' multipole , while a pure multipole with is known as a `` skew '' multipole . 
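the multipole field described above is commonly written in the complex form $B_y + iB_x = C_n (x+iy)^{n-1}$, consistent with the labelling $n=1$ dipole, $n=2$ quadrupole, and so on; the article's exact normalization was lost in extraction, so the coefficients below are illustrative. the short sketch evaluates such a field and verifies by finite differences that it is divergence-free and curl-free, as a static field in a current-free region must be.

```python
import numpy as np

def multipole_field(x, y, coeffs):
    """Two-dimensional multipole field: By + i*Bx = sum_n C_n (x + i y)**(n-1).

    coeffs: dict mapping multipole order n (1=dipole, 2=quadrupole, ...) to a
            complex coefficient C_n.  Returns (Bx, By).
    """
    z = x + 1j * y
    F = sum(C * z ** (n - 1) for n, C in coeffs.items())
    return F.imag, F.real            # Bx = Im(F), By = Re(F)

# Example: a normal quadrupole (n=2, real C_2) plus a small skew sextupole.
coeffs = {2: 0.5 + 0.0j, 3: 0.0 + 0.02j}

h = 1e-4                              # finite-difference step
x0, y0 = 0.013, -0.007                # arbitrary test point inside the aperture

def Bx(x, y): return multipole_field(x, y, coeffs)[0]
def By(x, y): return multipole_field(x, y, coeffs)[1]

div_B = ((Bx(x0 + h, y0) - Bx(x0 - h, y0)) +
         (By(x0, y0 + h) - By(x0, y0 - h))) / (2 * h)
curl_Bz = ((By(x0 + h, y0) - By(x0 - h, y0)) -
           (Bx(x0, y0 + h) - Bx(x0, y0 - h))) / (2 * h)

print(f"div B  ~ {div_B:.2e}")        # both vanish to rounding error, confirming
print(f"curl B ~ {curl_Bz:.2e}")      # the field satisfies maxwell's equations
                                      # in the current-free region
```

the cancellation follows from the cauchy-riemann relations: any analytic function of $x+iy$ interpreted this way automatically gives a divergence-free, curl-free planar field.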
[ cols="^,^ " , ] it is worth making a few final remarks about mode decompositions for three - dimensional fields .first , as already mentioned , in many cases a full three - dimensional mode decomposition will not be necessary . while this does provide a detailed description of the field in a form suitable for beam dynamics studies , three - dimensional decompositionsdo rely on a large number of accurate and detailed field measurements .while such `` measurements '' may be conveniently obtained from a model , it may be difficult or impractical to make such measurements on a real magnet .fortunately , in many cases , a two - dimensional field description in terms of multipoles is sufficient .generally , a three - dimensional analysis only need be undertaken where there are grounds to believe that the three - dimensional nature of the field is likely to have a significant impact on the beam dynamics .second , we have already emphasised that to obtain an accurate description of the field within some region in terms of a mode decomposition , the mode amplitudes should be determined by a fit on a surface enclosing the region of interest . outside the region bounded by the surface of the fit , the fitted field can be expected to diverge exponentially from the real field . however , in choosing the surface for the fit , the geometry of the magnet will impose some constraints .a magnet with a wide rectangular aperture may lend itself to a description using a cartesian basis ( fitting on the surface of a rectangular box ) ; a circular aperture , however , is more likely to require use of a polar basis ( fitting on the surface of a cylinder with circular cross - section ) .both cases have been described above .it may be appropriate in other cases to perform a fit on the surface of a cylinder with elliptical cross - section .the basis functions in this case involve mathieu functions .for further details , the reader is referred to work by dragt and by dragt and mitchell .our analysis of iron - dominated multipole magnets in section [ sec : irondominatedmultipoles ] was based on the magnetic scalar potential , .the magnetic flux density can be derived from a scalar potential : in the case that the flux density has vanishing divergence and curl : more generally ( in particular , where the flux density has non - vanishing curl ) one derives the magnetic flux density from a vector potential , using : although we have not required the vector potential in our discussion of maxwell s equations for accelerator magnets , it is sometimes used in analysis of beam dynamics . 
in particular , descriptions of the dynamics based on hamiltonian mechanics generally use the vector potential rather than the magnetic flux density or the magnetic scalar potential .we therefore include here a brief discussion of the vector potential , paying attention to aspects relevant to the descriptions we have developed for two - dimensional and three - dimensional magnet fields .first , we note that the divergence of any curl is identically zero : for any differentiable vector field .thus , if we write , then maxwell s equation ( [ eq : maxwell2 ] ) : is automatically satisfied .maxwell s equation ( [ eq : maxwell4 ] ) in uniform media ( constant permeability ) , with zero current and static electric fields gives : where is the current density .this leads to the equation for the vector potential : eq .( [ eq : curlcurlaequalsmuj ] ) is a second - order differential equation for the vector potential in a medium with permeability , where the current density is .this appears harder to solve than the first - order differential equation for the magnetic flux density , eq .( [ eq : curlbequalsmuj ] ) .however , eq . ( [ eq : curlcurlaequalsmuj ] ) may be simplified significantly , if we apply an appropriate _gauge condition_. to understand what this means , recall that the magnetic flux density is given by the curl of the vector potential , and that the curl of the gradient of any scalar field is identically zero .thus , we can add the gradient of a scalar field to a vector potential , and obtain a new vector potential that gives the same flux density as the old one . that is , if : and : for an arbitrary differentiable scalar field , then : in other words , the vector potential leads to exactly the same flux density as the vector potential .since the dynamics of a given system are determined by the fields rather than the potentials , either or is a valid choice for the description of the system .( [ eq : gaugetransformation ] ) is known as a _ gauge transformation_. the consequence of having the freedom to make a gauge transformation means that the vector potential for any given system is not uniquely defined : given some particular vector potential , it is always possible to make a gauge tranformation without any change in the physical observables of a system . the analogue in the case of electric fields , of course ,is that the `` zero '' of the electric scalar potential can be chosen arbitrarily : only _ changes _ in potential ( i.e. energy ) are observable , so given some particular scalar potential field , it is possible to add a constant ( that is , a quantity independent of position ) and obtain a new scalar potential that gives the same physical observables as the original scalar potential . for magnetostatic fields , we can use a gauge transformation to simplify eq .( [ eq : curlcurlaequalsmuj ] ) .suppose we have obtained a vector potential for some particular physical system .define a scalar field , which satisfies : then define : since and are related by a gauge transformation , they lead to the same magnetic flux density , and the same physical observables for the system . however , the divergence of vanishes : where we have used eq .( [ eq : psidef ] ) .thus , given any vector potential , we can make a gauge transformation to find a new vector potential that gives the same magnetic flux density , but has vanishing divergence . the _ gauge condition _ : is known as the _ coulomb gauge_. 
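the gauge argument above can be summarized in a few lines; the following reconstruction uses standard notation and is not copied from the article's lost displays.

```latex
% gauge transformation and the coulomb gauge, as described in the text:
% adding the gradient of any scalar field psi leaves B unchanged, and
% psi can always be chosen so that the new potential is divergence-free.
\begin{align}
  \vec{B} &= \nabla\times\vec{A} = \nabla\times\vec{A}' , &
  \vec{A}' &= \vec{A} + \nabla\psi , \\
  \nabla^{2}\psi &= -\nabla\cdot\vec{A}
  \;\;\Rightarrow\;\;
  \nabla\cdot\vec{A}' = 0 & &\text{(coulomb gauge).}
\end{align}
```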
it is possible to work with other gauge conditions ( for example , for time - dependent electromagnetic fields the lorenz gauge condition is often more appropriate ) ; however , for our present purposes , the coulomb gauge leads to a simplification of eq .( [ eq : curlcurlaequalsmuj ] ) , which now becomes : eq .( [ eq : delsquaredaequalsminusmuj ] ) is poisson s equation for a vector field .note that despite being a second - order differential equation , it is in a sense simpler than maxwell s equation ( [ eq : maxwell4 ] ) , since we have `` decoupled '' the components of the vectors ; that is , we have a set of three uncoupled second - order differential equations , where each equation relates a component of the vector potential to the corresponding component of the current density .( [ eq : delsquaredaequalsminusmuj ] ) has the solution : in this form , we see that the potential at a point in space is inversely proportional to the distance from the source .now , consider the potential given by : taking derivatives , we find that : then , since and are zero , we have : hence : which is just the multipole field .thus , eq . ( [ eq : multipolevectorpotential ] ) is a potential that gives a multipole field .note also that , since is independent of , this potential satisfies the coulomb gauge condition ( [ eq : coulombgaugecondition ] ) : an advantage of working with the vector potential in the coulomb gauge is that , for multipole fields , the transverse components of the vector potential are both zero .this simplifies , to some extent , the hamiltonian equations of motion for a particle moving through a multipole field .however , note that the longitudinal component of the magnetic flux density is zero in this case .to generate a solenoidal field , with equal to a non - zero constant , we need to introduce non - zero components for , or , or both . for example , a solenoid field with flux density may be derived from the vector potential : let us return for a moment to the case of multipole fields .if we work in a gauge in which the transverse components of the vector potential are both zero , then the field components are given by : from these expressions , we see that if we take any two points with the same coordinate , then the difference in the vector potential between these two points is given by the `` flux '' passing through a line between these points : similarly for any two points with the same coordinate : in general , for a field that is independent of , and working in a gauge where , we can write : where is the change in the vector potential between two points and in a given plane ; and is the magnetic flux through a rectangular `` loop '' with vertices , , and : see fig .[ fig : vectorpotentialinterp ] . and are points obtained by transporting and a distance parallel to the axis .( [ eq : twodflux ] ) can also be obtained by applying stokes theorem to the loop , with the relationship ( [ eq : bequalcurla ] ) between and : hence : finally , we give the vector potentials corresponding to three - dimensional fields . in the cartesian basis , with the field given by eqs .( [ eq : threedcartnbx])([eq : threedcartnbz ] ) , a possible vector potential ( in the coulomb gauge ) is : in the polar basis , with the field given by eqs .( [ eq : threedmodepolarbr])([eq : threedmodepolarbz ] ) , a possible vector potential is : however , note that this potential does not satisfy the coulomb gauge condition .
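to make the preceding discussion concrete, the following hedged reconstruction collects the coulomb-gauge equations, a potential that generates a pure multipole field, and a potential for a uniform solenoid field; the signs are chosen to match the complex multipole form $B_y + iB_x = C_n(x+iy)^{n-1}$ used earlier and may differ from the article's conventions.

```latex
% coulomb-gauge poisson equation and its integral solution, a vector
% potential with vanishing transverse components that reproduces a pure
% multipole field, and a potential for a uniform solenoid field.
\begin{gather}
  \nabla^{2}\vec{A} = -\mu\vec{J} ,
  \qquad
  \vec{A}(\vec{r}) = \frac{\mu}{4\pi}\int
      \frac{\vec{J}(\vec{r}\,')}{|\vec{r}-\vec{r}\,'|}\,d^{3}r' , \\
  A_x = A_y = 0 ,
  \qquad
  A_z = -\frac{1}{n}\,\mathrm{Re}\!\left[C_n\,(x+iy)^{n}\right]
  \;\;\Rightarrow\;\;
  B_y + iB_x = C_n\,(x+iy)^{n-1} , \\
  \vec{A}_{\rm sol} = \tfrac{1}{2}B_0\,(-y,\,x,\,0)
  \;\;\Rightarrow\;\;
  \vec{B} = (0,\,0,\,B_0) .
\end{gather}
```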
|
magnetostatic fields in accelerators are conventionally described in terms of multipoles . we show that in two dimensions , multipole fields do provide solutions of maxwell s equations , and we consider the distributions of electric currents and geometries of ferromagnetic materials required ( in idealized situations ) to generate specified multipole fields . then , we consider how to determine the multipole components in a given field . finally , we show how the two - dimensional multipole description may be extended to three dimensions ; this allows fringe fields , or the main fields in such devices as undulators and wigglers , to be expressed in terms of a set of modes , where each mode provides a solution to maxwell s equations .
|
at first glance it may seem surprising that many chaotic dynamical systems have explicit analytical solutions. but many examples are readily at hand. the shift map, for instance, is `` bernoulli '', the strongest form of chaos. nevertheless the shift map is readily solved explicitly. another example is provided by the logistic mapping $x_{n+1} = \mu x_n (1 - x_n)$, $0 \leq \mu \leq 4$, $n = 0, 1, 2, \ldots$, widely used in population dynamics. for $\mu = 4$ this mapping is equivalent with the shift map and therefore again completely chaotic. yet an explicit solution, valid at $\mu = 4$, is given by $x_n = \sin^2\!\left(2^n \arcsin\sqrt{x_0}\right)$. therefore, as far as classical chaos is concerned, there is no basis for the belief that classically chaotic systems do not allow for explicit analytical solutions. but what about the quantized versions of classically chaotic systems, commonly known as quantum chaotic systems? here, too, the answer is affirmative. it was shown recently that regular quantum graphs provide a large class of explicitly solvable quantum chaotic systems. in order to strengthen the first analytical and numerical results presented in , it is the purpose of this paper to show that the explicit periodic orbit expansions obtained in are more than formal identities. we will prove below that ( i ) the spectrum of regular quantum graphs is computable explicitly and analytically, state by state, via convergent periodic orbit expansions and ( ii ) the periodic orbit series converge to the correct spectral eigenvalues. the main body of this paper is organized as follows. in sec. ii we summarize the basics of quantum graph theory and derive the general form of the spectral equation. in sec. iii we define regular quantum graphs. in sec. iv we present the explicit periodic orbit expansions of individual eigenvalues of regular quantum graphs. we also specify a summation scheme that guarantees convergence to the correct results. the derivations presented in sec. iv are mathematically rigorous except for one step where we interchange integration and summation to arrive at our final results. this is a `` dangerous operation '', which is not allowed without further investigation. this point is resolved in sec. v, where we present the analytical proofs that justify the interchange of integration and summation performed in sec. iv. this result establishes that the periodic orbit expansions investigated in this paper converge in the usual sense of elementary analysis. in sec. vi we investigate the convergence properties of the periodic orbit series obtained in sec. iv. we prove analytically that there exists at least one quantum graph for which the convergence is not absolute, but conditional. according to riemann's well-known reordering theorem ( see, e.g., volume ii, page 33 ), it is possible to reorder the terms of a conditionally convergent sum in such a way that it converges to any prescribed number. this is the reason why in sec. iv we place so much emphasis on specifying a summation scheme that guarantees convergence of the periodic orbit sums to the correct spectral eigenvalues. in sec. vii we present an alternative way of solving the spectral equations of regular quantum graphs : lagrange's inversion formula. although a perfectly sound and fast-converging method for solving the spectral equation, it lacks the physical appeal of periodic orbit expansions, which are based on concrete physical objects such as periodic orbits and their geometrical and dynamical properties.
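the explicit solution of the logistic map quoted above is, at $\mu = 4$, the standard closed form $x_n = \sin^2\!\big(2^n \arcsin\sqrt{x_0}\big)$; the short check below compares it against direct iteration. it is a sketch of the standard result, not code from the article.

```python
import math

def logistic_iterate(x0, n, mu=4.0):
    """Iterate the logistic map x_{k+1} = mu * x_k * (1 - x_k) n times."""
    x = x0
    for _ in range(n):
        x = mu * x * (1.0 - x)
    return x

def logistic_closed_form(x0, n):
    """Explicit solution at mu = 4: x_n = sin^2(2^n * arcsin(sqrt(x0)))."""
    theta = math.asin(math.sqrt(x0))
    return math.sin((2 ** n) * theta) ** 2

x0 = 0.2
for n in range(8):                       # keep n modest: chaos amplifies
    a = logistic_iterate(x0, n)          # floating-point error for large n,
    b = logistic_closed_form(x0, n)      # even though the formula is exact
    print(f"n={n}: iterate={a:.12f}  closed form={b:.12f}")
```

the identity follows from $4\sin^2\theta\cos^2\theta = \sin^2(2\theta)$, so each iteration simply doubles the angle, which is exactly the shift-map dynamics mentioned in the text.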
in sec .viii we discuss our results , point out promising research directions and conclude the paper .since many scientists will find the existence of convergent periodic orbit expansions surprising , we found it necessary to establish this result with an extra degree of rigor .this necessarily requires strong , but somewhat technical proofs .however , in order not to break the flow of the paper , we adopt a hierarchical approach presenting the material in three stages .the most pertinent aspects of our results are presented in the main text as outlined above .supporting higher - level , but formal , material is presented in appendix a. lower level material , such as formulae , definitions and lemmas , is relegated to appendix b. in order for the proofs to be convincing , and to be accessible to a wide readership , we used only concepts of elementary undergraduate analysis in our proofs , altogether avoiding advanced mathematical concepts , such as distributions .this is an important point since our paper seeks mathematical rigor on a common basis acceptable to all readers . in this spiritthe alert reader will notice that we completely avoid the use of dirac s delta `` function '' .this is necessary since the delta `` function '' is a distribution , a concept we found to be highly confusing for the general reader .although there is nothing `` wrong '' with the delta `` function '' , if treated properly as a distribution or linear functional , the confusion surrounding the delta `` function '' started with von neumann s critique at a time when the modern tools of distribution theory were not yet available . therefore , although ultimately completely unjustified , von neumann s critique tainted the delta `` function '' to such an extent that it still ca nt be used in a rigorous context without causing heated debatethus , for the sake of simplicity and clarity of our arguments , we prefer to avoid it . as a consequence ,the presentation of the material in this paper and all of our proofs are conducted without using the concept of level densities , which are usually defined with the help of dirac s delta `` function '' .we emphasize that there is a fundamental difference between an analytical solution and an _ explicit _ analytical solution .while the spectral equation for quantum graphs is known in great detail , and periodic orbit expansions for the spectral density and the staircase function of quantum graphs are known , these results are all implicit .this means that they do not yield the spectral eigenvalues in the form `` '' .it is the novelty of this paper to obtain explicit analytical expressions of this type for a wide class of quantum chaotic systems , and prove their validity and convergence with mathematical rigor .the properties of the laplacian operator on graphs have been studied in great detail in the mathematical literature and the study of quantum mechanics on graphs has attracted considerable attention among chemists and physicists ( see , e.g. , and references therein ) , especially in the quantum chaos community . the purpose of this section is to acquaint the reader with the main ideas of quantum graph theory .since many excellent publications on the theory of quantum graphs are available ( see , e.g. 
, ) , we will present only those ideas and facts that are of direct relevance to the subject of this paper .a quantum graph consists of a network of bonds and vertices with a quantum particle travelling on it .an example of a graph with ten bonds and six vertices is shown in fig . 1 . the number of bonds is denoted by , the number of vertices by . in this paperwe focus entirely on finite quantum graphs , i.e. , .we define directed bonds on the graph such that the bond connecting vertex number with vertex number is different from the bond connecting the vertices in the opposite direction .there are directed bonds .it is useful to define the linearized bond index : , , and connected .the index labels sequentially all directed bonds of the graph with . for the directed bonds with we define . this way the sign of the counting index reflects the directionality of the bond .the network of bonds and vertices defines the graph s topology .the topology of a graph alone does not completely specify the quantum graph .this is so because , for instance , the bonds of the graph may be dressed with potentials . since the quantum graphs we study in this paper are finite , bounded systems , their spectra are discrete .the spectrum of a quantum graph is obtained by solving the one - dimensional schrdinger equation on the graph subjected to the usual boundary conditions of continuity and quantum flux conservation . a particularly useful way of obtaining the spectral equation for a given quantum graph is the scattering quantization approach which yields the spectral equation in the form =0 , \label{specdet}\ ] ] where is the quantum scattering matrix of the graph and is the wave number , related to the energy via . for our purposesit is sufficient to know that the matrix is of dimension and can be decomposed into where is a unitary matrix and is a diagonal matrix of the form \ \delta_{\lambda,\lambda ' } , \ \\lambda,\lambda'=\pm 1,\ldots,\pm n_b \label{dd}\ ] ] with , .the ordering of the matrices and in ( [ decomp1 ] ) is neither unique nor important in the present context .it depends on the details of how `` in '' and `` out '' channels are assigned to the indices of the matrix .since the computation of the energy spectrum involves only traces and determinants , the precise ordering of and in ( [ decomp1 ] ) does not affect our final results .physically the quantities are the time - reversal invariant parts of the bond actions . a possible time - reversal breaking part of the bond actionsis understood to be absorbed in the matrix . for simplicitywe will in the following refer to as the bond action of the bond .in this paper we focus exclusively on scaling quantum graphs . in this casethe matrix is a constant matrix , independent of , and the actions in ( [ dd ] ) split into the product , where is a constant , the reduced bond action of the bond .physically the scaling case is an important case since it describes systems free from phase - space metamorphoses .scaling systems of this type arise in many physical contexts , for instance in microwave cavities partially filled with a dielectric , or rydberg atoms in external fields .it is possible to write ( [ specdet ] ) as a linear combination of trigonometric functions whose frequencies are directly related to the bond actions . in order to derive this representation we start by noting that =\det\left(s^{1/2}\right)\ , ( -4)^{n_b}\ , \prod_{l=1}^{2n_b}\ , \sin(\sigma_l/2 ) , \label{dstruc}\ ] ] where are the eigenphases of the unitary matrix . 
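the secular equation of the scattering quantization approach referred to above is commonly written as follows; the phase conventions and the ordering of the factors are assumptions here, consistent with the decomposition described in the text for scaling graphs.

```latex
% spectral (secular) equation of a quantum graph in the scattering
% quantization approach, with the 2N_B x 2N_B graph scattering matrix
% split into a constant unitary part U and a diagonal phase matrix D(k);
% for scaling graphs the bond actions are linear in the wave number k.
\begin{equation}
  \det\!\left[\,\mathbf{1} - S(k)\,\right] = 0 ,
  \qquad
  S(k) = D(k)\,U ,
  \qquad
  D_{\lambda\lambda'}(k) =
    \exp\!\big(i\,L^{(0)}_{\lambda}\,k\big)\,\delta_{\lambda\lambda'} ,
  \quad \lambda,\lambda' = \pm 1,\ldots,\pm N_B .
\end{equation}
```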
> from ( [ dstruc ] )we obtain the important result that \ , \in\ , { \rm{\bf r}}. \label{real1}\ ] ] according to ( [ decomp1 ] ) we have where is the eigenphase of the unitary matrix , i.e. . using the decomposition ( [ decomp1 ] ) of the matrix we write the spectral determinant ( [ specdet ] ) in the form =0 , \label{specdet1}\ ] ] so that we can directly apply the results in ref .according to ref . , p. 87, the determinant ( [ specdet1 ] ) is a polynomial in the variables , , whose coefficients are the principal sub - determinants of . using together with the fact that is the principal sub - determinant of of order zero , we obtain that ( [ specdet1 ] ) is of the form = \det(u^{-1 } ) + \sum_{n=1}^{n_b}\sum_{i_1,\ldots , i_n}\sum_{\alpha^{(n)}}\ , a_{n;i_1,\ldots , i_n;\alpha^{(n)}}\ , \exp\left[i\sum_{\lambda\in\{i_1,\ldots , i_n\ } } \alpha^{(n)}_{\lambda}l_{\lambda}^{(0)}k\right ] , \label{xyz}\ ] ] where , is an integer array of length containing a `` 1,2-pattern '' , i.e. , and are complex coefficients that can be computed from the principal sub - determinants of .because of ( [ real1 ] ) , we have defining and using ( [ det1 ] ) and ( [ xyz ] ) , we obtain ( [ real2 ] ) in the form + \sum_{n=1}^{n_b}\sum_{i_1,\ldots , i_n}\sum_{\alpha^{(n)}}\ , a_{n;i_1,\ldots , i_n;\alpha^{(n)}}\ , \exp\left\{-i[\beta_{n;i_1,\ldots , i_n;\alpha^{(n)}}k -\tau/2]\right\ } \ \ \in\ { \rm{\bf r } } , \label{real3}\ ] ] where we define the frequencies because of for all and the structure of , the largest frequency in ( [ omdef ] ) is defined in ( [ omega0 ] ) .we now scan the frequencies defined in ( [ omdef ] ) and collect the pairwise different ones into a set , where is the number of pairwise different frequencies and . since the derivation of ( [ real3 ] ) involved only factoring nonzero terms out of the left - hand side of ( [ specdet ] ) , the zeros of ( [ real3 ] ) and the zeros of ( [ specdet ] ) are identical . takingthe real part of the real quantity ( [ real3 ] ) shows that ( [ specdet ] ) can be written in the form where and , are real constants .in general it is difficult to obtain an explicit analytical result for the zeros of ( [ specequ ] ) .there is , however , a subset of quantum graphs defined in the following section , that allows us to compute an explicit analytical solution of ( [ specequ ] ) .a subset of quantum graphs are regular quantum graphs .they fulfil the condition where the constants are the coefficients of the trigonometric functions in ( [ phi ] ) .although regular quantum graphs are a restricted sub - set of all quantum graphs , they are still quantum chaotic with positive topological entropy . because of ( [ regul ] ) we have for all , and the zeros of ( [ specequ ] ) are given by : where \label{kbar}\ ] ] and -{\pi\over 2}\right ] \label{ktilde}\ ] ] may be interpreted as the average and fluctuating parts of the zeros of ( [ specequ ] ) , respectively . since ( [ specequ ] )is the spectral equation of a physics problem , we only need to study the positive solutions of ( [ specequ ] ) .therefore we introduced the constant in ( [ kbar ] ) which allows us to adjust the counting scheme of zeros in such a way that is the first nonnegative solution of ( [ specequ ] ) .this is merely a matter of convenience and certainly not a restriction of generality .because of ( [ regul ] ) , the boundedness of the trigonometric functions in ( [ phi ] ) and the properties of the function , the fluctuating part of the zeros is bounded .we have <{\pi\over 2\omega_0}. 
\label{bound}\ ] ] therefore , roots of ( [ specequ ] ) can only be found in the intervals ] . for we have . for root intervals grow in size towards , but for any the end points and of are not roots of ( [ specequ ] ) .the boundedness of also implies the existence of two root - free intervals in .they are given by ] .thus , roots can not be found in the union of these two intervals , the root - free zone .we also have .for an illustration of the various intervals defined above , and their relation to each other , see fig . 2 .the intervals together with their limiting points provide a natural organization of the axis into a periodic structure of root cells .we now define , which transforms ( [ specequ ] ) into where and .since , as discussed in sec .ii , is the largest frequency in ( [ specequ ] ) , we have , , and theorem t2 ( appendix a ) is applicable .it states that there is exactly one zero of ( [ specequ ] ) in every open interval , .consulting fig .3 this fact is intuitively clear since the function in ( [ specequ ] ) is `` fast '' , and , containing only frequencies smaller than 1 , is a `` slow '' function .thus , as illustrated in fig . 3 , and proved rigorously by t2 ( appendix a ) , there is one and only one intersection between the fast function and the slow function in every interval of length . transformed back to the variable this implies that there is exactly one zero of ( [ specequ ] ) in every interval . since this zerocan not be found in the root - free zone , it has to be located in .thus there is exactly one root of ( [ specequ ] ) in every root - interval .this fact is the key for obtaining explicit analytical solutions of ( [ specequ ] ) as discussed in the following section .for the zeros of ( [ specequ ] ) we define the spectral staircase where is heavyside s function . based on the scattering quantization approach it was shown elsewhere that where and is the unitary scattering matrix ( [ decomp1 ] ) of the quantum graph . since , according to our assumptions , is a finite , unitary matrix , existence and convergence of ( [ n1 ] ) is guaranteed according to l17 , l18 and l19 ( appendix b ) .therefore , is well - defined for all .since can easily be constructed for any given quantum graph , ( [ n1 ] ) provides an explicit formula for the staircase function ( [ stair ] ) .combined with the spectral properties of regular quantum graphs discussed in sec .iii , this expression now enables us to explicitly compute the zeros of ( [ specequ ] ) . in sec .iii we proved that exactly one zero of ( [ specequ ] ) is located in . integrating from to and taking into account that jumps by one unit at ( see illustration in fig .4 ) , we obtain +n(\hat k_n)[\hat k_n - k_n ] . \label{nint}\ ] ] solving for and using and ( see fig . 4 ) , we obtain since we know explicitly , ( [ explic ] ) allows us to compute every zero of ( [ specequ ] ) explicitly and individually for any choice of . 
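the organization of the spectrum into root cells, one root per cell, is easy to see numerically. the sketch below uses a toy regular spectral equation $\sin(\omega_0 k) = a_1\cos(\omega_1 k + \alpha_1) + a_2\cos(\omega_2 k + \alpha_2)$ with $\omega_1, \omega_2 < \omega_0$ and $|a_1| + |a_2| < 1$, chosen purely for illustration ( it is not the spectral equation of any graph treated in the article ), and brackets one root in each cell of length $\pi/\omega_0$ by bisection.

```python
import numpy as np

# toy "regular" spectral equation: sin(w0*k) = phi(k), where phi is a slow
# trigonometric sum whose coefficients satisfy |a1| + |a2| < 1 (regularity)
# and whose frequencies are strictly smaller than w0.
w0 = 1.0
amps, freqs, phases = [0.35, 0.25], [0.41, 0.17], [0.30, 1.10]

def f(k):
    phi = sum(a * np.cos(w * k + p) for a, w, p in zip(amps, freqs, phases))
    return np.sin(w0 * k) - phi

def bisect(func, lo, hi, tol=1e-12):
    """Plain bisection; assumes func changes sign on [lo, hi]."""
    flo = func(lo)
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if func(mid) * flo > 0:
            lo, flo = mid, func(mid)
        else:
            hi = mid
    return 0.5 * (lo + hi)

# consecutive extrema of sin(w0*k) sit at k = (n + 1/2)*pi/w0; since
# |phi| < 1, f has opposite signs at these points, and the regularity
# condition guarantees exactly one root per such cell (cf. theorem t2),
# so bisection in each cell enumerates the spectrum root by root.
for n in range(10):
    lo, hi = (n - 0.5) * np.pi / w0, (n + 0.5) * np.pi / w0
    k = bisect(f, lo, hi)
    print(f"root {n}: k = {k:.8f}   residual = {f(k):.1e}")
```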
the representation ( [ explic ] ) requires no further proof since , as mentioned above , is well - defined everywhere , and is riemann - integrable over any finite interval of .another useful representation of is obtained by substituting ( [ n1 ] ) with ( [ nbar ] ) into ( [ explic ] ) and using ( [ kbar ] ) : according to theorem t3 ( appendix a ) presented in sec .v , it is possible to interchange integration and summation in ( [ explic ] ) and we arrive at in many cases the integral over can be performed explicitly , which yields explicit representations for .finally we discuss explicit representations of in terms of periodic orbits .based on the product form ( [ decomp1 ] ) of the matrix and the explicit representation ( [ dd ] ) of the matrix elements of , the trace of is of the form }\ , a_m[l ] \ , \exp\left\{il_m^{(0)}[l ] k\right\ } , \label{po1}\ ] ] where ] is the weight of orbit number of length , computable from the matrix elements of , and ] in ( [ explic ] ) .if we denote by the amplitude of the prime periodic orbit , then =l_{\cal p}\ , a_{m_{\cal p}}^{\nu}. \label{ppo2}\ ] ] this is so , because the prime periodic orbit is repeated times , which by itself results in the amplitude .the factor is explained in the following way : because of the trace in ( [ explic ] ) , every vertex visited by the prime periodic orbit contributes an amplitude to the total amplitude ] that does not contain a point mod .define let } + i{\pi-\sigma\,{\rm mod}\ , 2\pi\over 2}.\ ] ] then , according to formulas f2 and f3 ( appendix b ) , in ] , converges uniformly in ] for which mod ] and ] and /n\ , dx ] is a non - singular , smooth function at , there is no problem with taking for the first two integrals on the right - hand side of ( [ fpf ] ) .therefore , integration and summation on the left - hand side of ( [ fpf ] ) can be interchanged if this is guaranteed according to t3 ( appendix a ) .assuming that has only a finite number of zeros mod in , we can break into sub - intervals containing a single zero only , in which the interchange of summation and integration is allowed .this proves ( [ inter ] ) .returning to the crucial step from ( [ explic ] ) to ( [ explic ] ) we have to show that since is unitary , it is diagonalizable , i.e. there exists a matrix such that where are the eigenphases of .because of the structure ( [ decomp1 ] ) of the matrix in conjunction with the smoothly varying phases ( [ dd ] ) , the eigenphases of the matrix have only a finite number of zeros mod in any finite interval of .this is important for later use of ( [ inter ] ) which was only proved for this case .we now make essential use of our focus on finite quantum graphs , which entails a finite - dimensional matrix , and therefore a finite - dimensional matrix . in this case matrix multiplication with leads only to finite sums . since for finite sums integration and summation is always interchangeablewe have this equation justifies the step from ( [ explic ] ) to ( [ explic ] ) , which proves the validity of ( [ explic ] ) and ( [ explic ] ) .in this section we prove rigorously that ( [ explic ] ) contains conditionally convergent as well as absolutely convergent cases . we accomplish this by investigating the convergence properties of ( [ explic ] ) in the case of the dressed three - vertex linear graph shown in fig .the potential on the bond is zero ; the potential on the bond is a scaling potential explicitly given by where is the energy of the quantum particle and is a real constant with . 
the quantum graph shown in fig .5 was studied in detail in . denoting by the geometric length of the bond and by the geometric length of the bond ,its spectral equation is given by where with the spectral equation ( [ 3specequ ] ) is precisely of the form ( [ specequ ] ) . since according to ( [ omegas ] ) , the regularity condition ( [ regul ] ) is fulfilled and ( [ 3specequ ] ) is the spectral equation of a regular quantum graph .this means that we can apply ( [ explic ] ) for the computation of the solutions of ( [ 3specequ ] ) . in order to do so ,we need a scheme for enumerating the periodic orbits of the three - vertex linear graph .it was shown in that a one - to - one correspondence exists between the periodic orbits of the three - vertex linear graph and the set of binary plya necklaces .a binary necklace is a string of two symbols arranged in a circle such that two necklaces are different if ( a ) they are of different lengths or ( b ) they are of the same length but can not be made to coincide even after applying cyclic shifts of the symbols of one of the necklaces . for the graph of fig . 5 it is convenient to introduce the two symbols and , which can be interpreted physically as the reflection of a graph particle from the left ( ) or the right ( ) dead - end vertices , respectively . since strings of symbolsare frequently referred to as words , we adopt the symbol to denote a particular necklace . for a given necklace is convenient to define the following functions : , which counts the number of in , , which counts the number of in , the pair function , which counts all occurrences of -pairs or -pairs in and the function , which counts all occurrences of or symbol combinations in .we also define the function , which returns the total binary length of the word , and the phase function , defined as the sum of and the number of -pairs in .we note the identity in evaluating the functions defined above , we have to be very careful to take note of the cyclic nature of binary necklaces .therefore , for example , , , and , which also checks ( [ ident ] ) .in addition we define the set of all binary necklaces of length .let us look at .this set contains three necklaces , , ( cyclic rotation of symbols ) and .the necklace is not a primitive necklace , since it consists of a repetition of the primitive symbol .the same holds for the necklace , which is a repetition of the primitive symbol .the necklace is primitive , since it can not be written as a repetition of a shorter string of symbols .this motivates the definition of the set of all primitive binary necklaces and the set containing all primitive binary necklaces of length .an important question arises : how many primitive necklaces are there in ? in other words , how many members are there in ?the following formula gives the answer : where the symbol `` '' denotes `` is a divisor of '' , and is euler s totient function defined as the number of positive integers smaller than and relatively prime to with as a useful convention . it is given explicitly by where is a prime number .thus the first four totients are given by , , and .a special case of ( [ nneck ] ) is the case in which is a prime number . in this casewe have explicitly this is immediately obvious from the following combinatorial argument . 
by virtue of being a prime numbera necklace of length can not contain an integer repetition of shorter substrings , except for strings of length 1 or length .length is trivial .it corresponds to the word itself .length 1 leads to the two cases and , where the symbols and , respectively , are repeated times .so , except for these two special necklaces , any necklace of prime length is automatically primitive .thus there are different necklaces with symbols and symbols , where the factor takes care of avoiding double counting of cyclically equivalent necklaces . in total , therefore , we have primitive necklaces of length , in agreement with ( [ nneck ] ) .the sum in ( [ nneck ] ) ranges from 1 to since would correspond to the composite , non - primitive necklace and would correspond to the composite , non - primitive necklace .we are now ready to apply ( [ explic ] ) to the three - vertex linear graph . in `` necklace notation ''it is given by \ , \sin\left[{\nu\pi\over 2\omega_0}\ , l_w^{(0)}\right ] , \label{explic(1)}\ ] ] where is the reduced action of the primitive necklace , given by \label{redact}\ ] ] and the amplitude of the primitive necklace is given by where and are defined in ( [ omegas ] ) .the notation refers to a necklace of binary length that consists of concatenated substrings .note that the summations in ( [ explic(1 ) ] ) are ordered in such a way that for fixed we sum over all possible primitive words and their repetitions such that the total length of the resulting binary necklace amounts to , and only then do we sum over the binary length of the necklaces .this summation scheme , explicitly specified in ( [ explic(1 ) ] ) , complies completely with the summation scheme defined in sec .iv . since we proved in sec .iv that ( [ explic ] ) converges , provided we adhere to the correct summation scheme , so does ( [ explic(1 ) ] ) .a numerical example of the computation of the spectrum of ( [ 3specequ ] ) via ( [ explic(1 ) ] ) was presented in where we chose , and . for , 10 , 100 we computed the exact roots of ( [ 3specequ ] ) numerically by using a simple numerical root - finding algorithmwe obtained 4.107149 , 39.305209 and 394.964713 .next we computed these roots using the explicit formula ( [ explic(1 ) ] ) . including all binary necklaces up to , which amounts to including a total of approximately primitive periodic necklaces, we obtained 4.105130 , 39.305212 and 394.964555 .given the fact that in sec .iv we proved exactness and convergence of ( [ explic ] ) ( ( [ explic(1 ) ] ) , respectively ) , the good agreement between and , , is not surprising .nevertheless we found it important to present this simple example here , since it illustrates the abstract procedures and results obtained in sec .iv , checks our algebra and instills confidence in our methods .we now investigate the convergence properties of ( [ explic(1 ) ] ) for two special cases of dressed linear three - vertex quantum graphs ( see fig .5 ) defined by in this case the reduced actions ( [ redact ] ) reduce to \label{redact'}\ ] ] and is given by using ( [ ident ] ) , the amplitudes ( [ ampl ] ) are we now show that for the first -term in ( [ explic(1 ) ] ) is always zero , and thus ( [ explic(1 ) ] ) converges ( trivially ) absolutely in this case . 
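the counting argument above is easy to verify by brute force for small word lengths; the sketch below enumerates binary necklaces directly, keeps only the primitive ones, and checks the stated result that for prime word length $n$ there are $(2^n - 2)/n$ primitive necklaces in addition to the two constant (non-primitive) ones. the enumeration strategy is illustrative and not taken from the article.

```python
from itertools import product

def canonical(word):
    """Lexicographically smallest rotation: a canonical necklace representative."""
    return min(word[i:] + word[:i] for i in range(len(word)))

def is_primitive(word):
    """A word is primitive if it is not a repetition of a shorter sub-word."""
    n = len(word)
    return all(word != word[:d] * (n // d) for d in range(1, n) if n % d == 0)

def primitive_necklaces(n, alphabet="lr"):
    """All primitive binary necklaces of length n, as canonical representatives."""
    reps = {canonical("".join(w)) for w in product(alphabet, repeat=n)}
    return sorted(w for w in reps if is_primitive(w))

for n in (2, 3, 5, 7, 11):                  # prime lengths
    count = len(primitive_necklaces(n))
    print(n, count, (2 ** n - 2) // n)      # brute force vs (2^n - 2)/n
```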
for ( [ redact ] )becomes =2a\ell(w ) .\label{redact''}\ ] ] also , according to ( [ kbar ] ) and ( [ params ] ) is given by .\label{kbar'}\ ] ] thus , for the argument of the first -term in ( [ explic(1 ) ] ) is given by this is an integer multiple of , and thus all terms in the periodic - orbit sum of ( [ explic(1 ) ] ) vanish identically .therefore we proved that there exists at least one case in which the periodic - orbit sum in ( [ explic(1 ) ] ) is ( trivially ) absolutely convergent .we now prove rigorously that there exists at least one non - trivial case in which ( [ explic(1 ) ] ) converges only conditionally . since we already proved in sec .iv that ( [ explic(1 ) ] ) always converges , all we have to prove is that there exists a case in which the sum of the absolute values of the terms in ( [ explic(1 ) ] ) diverges . in order to accomplish this ,let us focus on the case and estimate the sum \ , \sin\left[{\nu\pi\over 2\omega_0}\ , l_w^{(0)}\right ] \right| .\label{explic(2)}\ ] ] we now restrict the summation over all integers to the summation over prime numbers only .moreover , we discard all non - primitive necklaces of length , which is equivalent to keeping terms with only . observing that trivially for all necklaces in , we obtain : \ , \sin\left[{\pi\over 2\omega_0}\ , l_w^{(0)}\right ] \right| , \label{explic(3)}\ ] ] where the sum is over all prime numbers . for the reduced actionsare given by \label{redact'''}\ ] ] and we use these relations to evaluate the arguments of the two -functions in ( [ explic(3 ) ] ) .we obtain \ , ( n+\mu+1 ) \label{psi1}\ ] ] and , \label{psi2}\ ] ] respectively .we see immediately that all terms in ( [ explic(3 ) ] ) are zero if is divisible by 3 .this provides additional examples of ( trivially ) absolutely convergent cases of ( [ explic(1 ) ] ) . in case not divisible by 3 , only those terms contribute to ( [ explic(3 ) ] ) for which is not divisible by 3 . following the reasoning that led to ( [ nneck ] ) , ranges from 1 to for .then , ranges from to in steps of 1 .since is never divisible by 3 for prime and , the number of primitive necklaces of length with the property that is not divisible by 3 is at least , \label{subsm}\ ] ] where the sum over the binomial coefficients was evaluated with the help of formula 0.1521 in .therefore , with ( [ redact ] ) , ( [ psi1 ] ) , ( [ psi2 ] ) , ( [ subsm ] ) , for all and for , we obtain , \label{explic(4)}\ ] ] which obviously diverges exponentially .the physical reason is that the quantum amplitudes , which contribute the factor in ( [ explic(4 ) ] ) are not able to counteract the proliferation of primitive periodic orbits ( primitive binary necklaces ) in ( [ explic(4 ) ] ) .analogous results can easily be obtained for graphs with in ( [ fampar ] ) .in summary we established in this section that the convergence properties of ( [ explic ] ) depend on the details of the quantum graph under investigation .we proved rigorously that both conditionally convergent and absolutely convergent cases can be found .we emphasize that the degree of convergence does not change the fact , proved in sec .iv , that ( [ explic ] ) always converges , and always converges to the exact spectral eigenvalues .the periodic orbit expansions presented in sec . iv are not the only way to obtain the spectrum of regular quantum graphs explicitly .lagrange s inversion formula offers an alternative route . 
given an implicit equation of the form s inversion formula determines a root of ( [ lagr1 ] ) according to the explicit series expansion provided is analytic in an open interval containing and since ( [ zero1 ] ) is of the form ( [ lagr1 ] ) , and the regularity condition ( [ regul ] ) ensures that ( [ lagr3 ] ) is satisfied , we can use lagrange s inversion formula ( [ lagr2 ] ) to compute explicit solutions of ( [ specequ ] ) . in order to illustrate lagrange s inversion formulawe will now apply it to the solution of ( [ 3specequ ] ) . defining , the root of ( [ 3specequ ] ) satisfies the implicit equation , \label{impl}\ ] ] where and . for the same parameter values as specified in and already used above in sec .vi we obtain , and .we now re - compute these values using the first two terms in the expansion ( [ lagr2 ] ) .for our example they are given by \left\{(-1)^n + { r\rho\cos(\rho\pi n)\over \sqrt{1-r^2\sin^2(\rho\pi n)}}\right\}. \label{lagexpl}\ ] ] we obtain , and , in very good agreement with , and .although both , ( [ explic ] ) and ( [ lagr2 ] ) are exact , and , judging from our example , ( [ lagr2 ] ) appears to converge very quickly , the main difference between ( [ explic ] ) and ( [ lagr2 ] ) is that no physical insight can be obtained from ( [ lagr2 ] ) , whereas ( [ explic ] ) is tightly connected to the classical mechanics of the graph system providing , in the spirit of feynman s path integrals , an intuitively clear picture of the physical processes in terms of a superposition of amplitudes associated with classical periodic orbits .there are only very few exact results in quantum chaos theory .in particular not much is known about the convergence properties of periodic orbit expansions . sincequantum graphs are an important model for quantum chaos , which in fact have already been called `` paradigms of quantum chaos '' , it seems natural that they provide a logical starting point for the mathematical investigation of quantum chaos .the regular quantum graphs defined in this paper are important because they provide the first example of an explicitly solvable quantum chaotic system .moreover regular quantum graphs allow us to prove two important results : ( a ) not all periodic orbit expansions diverge .there exist nontrivial , convergent , periodic orbit expansions .( b ) there exist explicit periodic orbit expansions that converge to the exact values of individual spectral points .the main result of this paper is an analytical proof of the validity and the convergence of the explicit spectral formulas ( [ explic ] ) and ( [ explic ] ) , respectively .this result is novel in two respects .( i ) while periodic orbit expansions of the spectral density and the spectral staircase of a quantum system are basic tools of quantum chaos , the very concept of a periodic orbit expansion for individual spectral eigenvalues is new .( ii ) due to the exponential proliferation of the number of periodic orbits with their ( action ) lengths , it is frequently assumed in the quantum chaos community that periodic orbit expansions are formal tools at best , but do not converge .we proved in this paper that , at least as far as regular quantum graphs are concerned , and despite the exponential proliferation of periodic orbits in this case , the periodic orbit expansion ( [ explic ] ) converges in the usual sense of elementary analysis .this result is also new .the main ingredient in the proof of ( [ explic ] ) is theorem t2 ( appendix a ) , i.e. 
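the displayed form of lagrange's inversion formula was lost in extraction; a standard statement consistent with the surrounding discussion is the following, with the notation ($a$, $w$, $f$) chosen here rather than taken from the article.

```latex
% standard lagrange inversion: for an implicit equation x = a + w f(x),
% with f analytic in a neighbourhood of a and the perturbation small
% enough, the root near a is given by the explicit series
\begin{equation}
  x = a + \sum_{n=1}^{\infty} \frac{w^{n}}{n!}\,
        \frac{d^{\,n-1}}{da^{\,n-1}}\!\left[f(a)^{n}\right] .
\end{equation}
```

as described in the text, truncating this series after its first two terms already reproduces the quoted roots of the three-vertex example to several digits.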
an analytical proof that there is exactly one spectral point in every root cell . in discussions with our colleagues we found that while many pointed out the necessity of justifying the interchange of integration and summation in ( [ inter ] ) ( now established in sec .v with t3 ( appendix a ) ) , many were initially puzzled by the existence of root intervals and the organization of the spectral points into root cells , now guaranteed by t2 ( appendix a ) .this is so because regular quantum graphs have a positive topological entropy and are in this sense quantum chaotic systems .hence the spectrum of regular quantum graphs is expected to be `` wild '' , in complete contrast to the fact , proved in this paper , that the spectrum of regular quantum graphs can actually be organized into regular root cells . in this senseregular quantum graphs are closely related to other quantum chaotic systems that also show marked deviations from the expected universal behavior . as a specific example we mention chaotic billiards on the hyperbolic plane generated by arithmetic groups .we hope that the pedagogical presentation of the proofs in appendices a and b , with their hierarchical structure and the use of only elementary analysis concepts will help to establish theorems t2 and t3 ( appendix a ) , and their consequence , the existence of explicit , convergent periodic orbit expansions .we mention that the spectral equation ( [ specequ ] ) of a finite quantum graph is an example of an almost periodic function .more information on the analytical structure of the zeros of almost periodic functions can be found in .there are many basic quantum mechanical problems that lead to transcendental equations of the type ( [ specequ ] ) .so far the recommended method is to solve them graphically or numerically ( see , e.g. , ) .based on the results presented in this paper , a third method is now available for presentation in text books on quantum mechanics : explicit analytical solutions .when the regularity condition ( [ regul ] ) is satisfied , either the lagrangian inversion method or the periodic orbit expansion ( [ explic ] ) may be employed .since the lagrangian method is a purely mathematical tool without immediate physical meaning , the periodic orbit expansion may be preferred due to its direct physical relevance in terms of concrete classical physics . having been established with mathematical rigor in this paper , formula ( [ explic ] ) may serve as the starting point for many further investigations .we mention one : since according to ( [ explic ] ) is known explicitly , so is the level spacing .this may give us an important handle on investigating analytically and exactly the nearest - neighbor spacing statistics of regular quantum graphs .whatever the precise properties of will be , one result is clear already : due to the existence of the root - free zones , established in sec .iii , is not wignerian . thus , regular quantum graphs will join the growing class of classically chaotic systems which do not show the generic properties of typical quantum chaotic systems .a corollary of some significance is the following . since we proved that for regular quantum graphs there is exactly one root of ( [ specequ ] ) in , this proves rigorously that for regular quantum graphs the number of roots of ( [ specequ ] ) smallerthan grows like ( weyl s law ) .an open question is the generalization of our results to the case of infinite quantum graphs . 
in case , it seems straightforward to generalize the regularity condition ( [ regul ] ) to the case of infinite quantum graphs . in summarywe proved a rigorous theorem on the existence and convergence of explicit periodic orbit expansions of the spectral points of regular quantum graphs .we hope that this paper will lay the foundation for further rigorous research in quantum graph theory .y.d . and r.b .gratefully acknowledge financial support by nsf grant phy-9900730 and phy-9984075 ; y.d .by nsf grant phy-9900746 .let , , , , , , and . then : ^ 2\over 1-\left[\sum_{i=1}^n a_i\cos(\omega_i x+\alpha_i)\right]^2 } < 1 \x\in { \rm { \bf r}}.\ ] ] define , .then : ^ 2\over 1-\left[\sum_{i=1}^n |a_i|\ , |\cos(\theta_i)|\right]^2 } < { \left[\sum_{i=1}^n \right]^2\over 1-\left[\sum_{i=1}^n |a_i|\ , |\cos(\theta_i)|\right]^2 } .\ ] ] define the three functions : where . since there is always an such that , , we prove t1 by showing that . because of , the function is well - defined and singularity - free in . since is differentiable in we prove by looking for the extrema of : let be a solution of ( [ t1 ] ) .there are three different cases .( i ) . in this casewe have .( ii ) and .in this case we have .( iii ) and . in this case( [ t1 ] ) reduces to for evaluated at of ( [ t2 ] ) we obtain : ^ 2\ { \left[\sum_{i=1}^n |a_i|\cos(x_i^*)\right]^2 \over 1-c^2(\vec x^ * ) } \ = \ { 1-c^2(\vec x^ * ) \over s^2(\vec x^ * ) } \ = \ { 1\over g(\vec x^ * ) } .\ ] ] this implies , or , since in , .since there are no boundaries to consider where absolute maxima of might be located , the local extrema of encompass all the maxima of in and we have in .this proves t1 .consider the spectral equation where with , , , , , , and .then there is exactly one zero of ( [ s1 ] ) in every open interval , , .\(i ) first we observe that are not roots of ( [ s1 ] ) : .this means that roots of ( [ s1 ] ) are indeed found only in the _ open _ intervals .\(iii ) we define the closures ] .therefore , roots of are fixed points of .\(vi ) in : ^ 2 = { \left [ \sum_{i=1}^n a_i \omega_i \sin(\omega_i\xi+\alpha_i+n\pi\omega_i)\right]^2 \over 1-\left [ \sum_{i=1}^n a_i \cos(\omega_i\xi+\alpha_i+n\pi\omega_i)\right]^2 }\ { \buildrel t1\over < } \ 1\ \ \rightarrow \beta_n'(\xi)<1.\ ] ] ( vii ) because of ( vi ) the conditions for l16 are fulfilled and has at most one fixed point in .this means that has at most one root in .since according to ( iv ) there is at least one root of in , it follows that has exactly one root in . where , mod and is continuous and has a continuous first derivative .the two limits in ( [ t31 ] ) are independent .therefore , splitting the integration range into two pieces ( allowed with l11 ) , one from to , and the other from to , it is enough to prove where .the case , covering the other integral in ( [ t31 ] ) is treated in complete analogy . the first equality in ( [ t32 ] )follows immediately from f2 , f3 and the fact that according to l20 the real part has a riemann - integrable log singularity at and the imaginary part has a riemann - integrable jump - singularity at .the second equality is more difficult to prove . 
for the following considerations we assume .we will comment on the case below .according to l10 , for .let : \over n}\ , dx\right| \{ \buildrel \rm l10 \over < }\left| \int_{x^*}^{x^*+\epsilon}\ , { \exp[in\sigma(x)]\over n}\ , 2{\sigma'(x)\over\sigma'(x^ * ) } \ , dx\right| = \ ] ] - 1 \right|\leq { 4\over n^2|\sigma'(x^*)| } , \label{t3a}\ ] ] where the last estimate is a simple consequence of the fact that the exponential function is unimodular .while this simple estimate will be useful later on for the case of large , we need a better estimate for small : - 1 \right|= \left|\exp\left\{in\epsilon { [ \sigma(x^*+\epsilon)-\sigma(x^*)]\over \epsilon } \right\}\ -1\ , \right| .\label{t3b}\ ] ] now because is differentiable , we have according to the intermediate value theorem of differential calculus : /\epsilon=\sigma'(\xi) ] . therefore : - 1 \right|= \left|\exp[in\sigma'(\xi)\epsilon]-1\right| \ { \buildrel \rm l6 \over < } 2|n\sigma'(\xi)\epsilon|=2n\epsilon|\sigma'(\xi)| \label{t3c}\ ] ] for , or , .let where is the floor function ( : largest integer smaller than ) . then : \over n}\ , dx\right|\leq { 4\epsilon\over n}\ , \left|{\sigma'(\xi(\epsilon))\over \sigma'(x^*)}\right|\ { \buildrel \rm l10 \over < } \ { 6\epsilon\over n}\ \ { \rm for } \n\leq n(\epsilon ) .\label{t3e}\ ] ] so now we have \over n } dx \right| \leq\ ] ] \over n } dx \right|+ \sum_{n = n(\epsilon)+1}^{\infty}\ , \left| \int_{x^*}^{x^*+\epsilon } { \exp[in\sigma(x)]\over n } dx \right|\leq\ ] ] both sums vanish in the limit of .we show this in the following way . for the first sumwe obtain : = 6\epsilon\left(1+\ln\left\lfloor{1\over \epsilon -\ln\left| { \sigma'(\xi(\epsilon))\over \sigma'(x^*)}\right|-\ln|\sigma'(x^*)|\right\ } < \ ] ] for the second sum we obtain : where we used latexmath:[\[n(\epsilon)>{2\over\epsilon |\sigma'(x^*)|}\ , \left|{\sigma'(x^*)\over\sigma'(\xi(\epsilon))}\right|\ -1\ { \buildrel { \rm l10}\over > } \ { 4\over 3\epsilon for the above proof we assumed .but our proof still works for the case if we use instead of in ( [ t3a ] ) . after a partial integration and noting that ( i ) and ( ii ) ^ 2\leq c(\epsilon)|\sigma'(x)| ] . this inequality can be used in two different ways : ( i ) .this is the first inequality in ( # ) .( ii ) - 1\leq let be continuous in ] with . because of l13 there exists with .because of l14 there exists with . define . then and . since is continuous , so is .then , because of the intermediate value theorem of calculus , , with , i.e. .assume that has exactly fixed points in ] .then , according to weierstra , there exists an accumulation point in ] .now assume that has a continuum of fixed points in ] .therefore , in summary , there can not be any finite number of fixed points , nor can there be infinitely many fixed points of in $ ] .the only alternatives are zero or one fixed point , i.e. at most one fixed point , as stated in l16 .let be a unitary matrix of finite dimension , .denote by , , , its eigenvalues where and mod .define .then : , .since is unitary , there exists a unitary matrix with , , .also : .define the series . according to l12 these series are convergent .then , according to l11 , is convergent , and therefore finite , because only a finite sum over is involved . again with l11 : .this means that since is finite , so is , i.e. . r. l. devaney , _ a first course in chaotic dynamical systems _ ( addison - wesley publishing company , inc . , reading , massachusetts , 1992 ) .e. 
ott , _ chaos in dynamical systems _( cambridge university press , cambridge , 1993 ) .r. m. may , in _ dynamical chaos _ , m. v. berry , i. c. percival , and n. o. weiss , editors ( princeton university press , princeton , new jersey , 1987 ) , p. 27 . s. m. ulam and j. von neumann , bull .soc . * 53 * , 1120 ( 1947 ) .r. blmel and w. p. reinhardt , _ chaos in atomic physics _( cambridge university press , cambridge , 1997 ) .m. gutzwiller , _ chaos in classical and quantum mechanics _( springer , new york , 1990 ) .stckmann , _ quantum chaos _ ( cambridge university press , cambridge , 1999 ) .r. blmel , yu . dabaghian , and r. v. jensen , phys .lett . * 88 * , 044101 ( 2002 ) .y. dabaghian , r. v. jensen , and r. blmel , phys .e * 63 * , 066201 ( 2001 ) .r. blmel , yu . dabaghian , and r. v. jensen , phys .e , in press ( 2002 ) .k. endl and w. luh , _ analysis _ , three volumes ( akademische verlagsgesellschaft , frankfurt am main , 1972 ) .a. s. demidov , _ generalized functions in mathematical physics _( nova science publishers , huntington , 2001 ) .w. walter , _ einfhrung in die theorie der distributionen _( bibliographisches institut , mannheim , 1974 ) .a. m. dirac , _ the principles of quantum mechanics _ , fourth edition ( oxford university press , oxford , 1958 ) .j. von neumann , _ mathematical foundations of quantum mechanics _ ( princeton university press , princeton , 1955 ) . t. kottos and u. smilansky , phys .lett . * 79 * , 4794 ( 1997 ) ; ann .( n.y . ) * 274 * , 76 ( 1999 ) .k. rudenberg and c. scherr , j. chem . phys .* 21 * , 1565 ( 1953 ) .e. akkermans , a. comtet , j. desbois , g. montambaux , and c. texier , ann .( n.y . ) * 284 * , 10 ( 2000 ). proceedings of the international conference on _ mesoscopic and strongly correlated electron systems `` chernogolovka 97 '' _ , chapter 4 : _ quasi-1d systems , networks and arrays _ , edited by v. f. gantmakher and m. v. feigelman , usp . fiz .nauk * 168 * , pp . 167189 ( 1998 ) .h. schanz and u. smilansky , phys .lett . * 84 * , 1427 ( 2000 ) ; philos . mag .b * 80 * , 1999 ( 2000 ) .u. smilansky , j. phys . a ( mathematical and general ) * 33 * , 2299 ( 2000 ) .f. barra and p. gaspard , j. stat .* 101 * , 283 ( 2000 ) ; phys .e * 65 * , 016205 ( 2002 ) .p. pakonski , k. zyczkowski , and m. kus , j. phys . a ( mathematical and general ) * 34 * , 9303 ( 2001 ) .u. smilansky , in _ mesoscopic quantum physics _ , les houches , session lxi , 1994 , edited by e. akkermans , g. montambaux , j .- l . pichard and j. zinn - justin ( elsevier science publishers , amsterdam , 1995 ) , pp . 373433 .r. blmel and yu .dabaghian , j. math . phys . *42 * , 5832 ( 2001 )c . lai , c. grebogi , r. blmel , and m. ding , phys .a * 45 * , 8284 ( 1992 ) .l. sirko , p.m. koch , and r. blmel , phys .lett . * 78 * , 2940 ( 1997 ) .a. c. aitken , _ determinanten und matrizen _ ( bibliographisches institut ag , mannheim , 1969 ) .r. courant , _ differential and integral calculus _ , second edition , two volumes ( interscience publishers , new york , 1937 ) .r. courant and d. hilbert , _ methods of mathematical physics _ , first english edition , two volumes ( interscience publishers , new york , 1953 ). j. h. van lint and r. m. wilson , _ a course in combinatorics _ ( cambridge university press , cambridge , 1992 ) . , edited by m. abramowitz and i. a. stegun ( national bureau of standards , washington d.c . , 1964 ) .i. s. gradshteyn and i. m. ryzhik , _ table of integrals , series , and products _ , 6th edition , a. jeffrey , editor , d. 
zwillinger , assoc .editor ( academic press , san diego , 2000 ) .g. sansone and h. gerretsen , _ lectures on the theory of functions of a complex variable _ ( noordhoff , groningen , 1960 ) .b. j. lewin , _ nullstellenverteilung ganzer funktionen _( akademie - verlag , berlin , 1962 ) .l. i. schiff , _ quantum mechanics _ , second edition ( mcgraw - hill , new york , 1955 ) , p. 37 .a. messiah , _ quantenmechanik _ , volume i ( walter de gruyter , berlin , 1976 ) , p. 89 . * fig . 2 : * structure of root cell . to the left and tothe right of are the root - free intervals and , respectively .together they form the root - free zone .roots of the spectral equation are found in the interval .the delimiters of are and .the average location ( star ) of the root is given by .* fig . 3 : * graphical solution of the spectral equation . since is `` faster '' than , one and only one solution exists in every interval .this fact is proved rigorously in sec .* fig . 4 : * detail of the spectral staircase illustrating the computation of the integral ( [ nint ] ) over the spectral staircase from to for the purpose of obtaining an explicit expression for .* fig . 5 : * three - vertex linear quantum graph .the vertices are denoted by , and , respectively . and are `` dead - end '' vertices .the bonds are denoted by and , respectively . while there is no potential on the bond ( indicated by the thin line representing the potential - free bond ) ,the bond is `` dressed '' with the energy - dependent scaling potential ( indicated by the heavy line representing the `` dressed '' bond ) .
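as a purely numerical companion to the root - cell picture of fig . 2 , fig . 3 and theorem t2 , the sketch below brackets one root per cell for a toy spectral equation of the ` regular ' type : a leading sine term plus small , slowly oscillating corrections whose amplitudes sum to less than one . the coefficients are invented for the illustration and are not the parameters of the three - vertex graph of fig . 5 .

import numpy as np
from scipy.optimize import brentq

# toy "regular" spectral equation: sin(x) + a1*cos(w1*x + p1) + a2*cos(w2*x + p2) = 0
# with |a1| + |a2| < 1 and slowly varying corrections (0 < w1, w2 < 1)
a, w, ph = [0.3, 0.2], [0.37, 0.61], [0.4, 1.1]

def f(x):
    return np.sin(x) + sum(ai * np.cos(wi * x + pi) for ai, wi, pi in zip(a, w, ph))

for n in range(1, 11):
    lo, hi = (n - 0.5) * np.pi, (n + 0.5) * np.pi
    # since |a1| + |a2| < 1 , f has the sign of sin(x) at the bracket ends , so each
    # bracket contains a root ; the regularity condition of the text makes it unique
    root = brentq(f, lo, hi)
    print(f"cell {n}: root at x = {root:.6f}")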
|
we define a class of quantum systems called regular quantum graphs . although their dynamics is chaotic in the classical limit with positive topological entropy , the spectrum of regular quantum graphs is explicitly computable analytically and exactly , state by state , by means of periodic orbit expansions . we prove analytically that the periodic orbit series exist and converge to the correct spectral eigenvalues . we investigate the convergence properties of the periodic orbit series and prove rigorously that both conditionally convergent and absolutely convergent cases can be found . we compare the periodic orbit expansion technique with lagrange s inversion formula . while both methods work and yield exact results , the periodic orbit expansion technique has conceptual value since all the terms in the expansion have direct physical meaning and higher order corrections are obtained according to physically obvious rules . in addition our periodic orbit expansions provide explicit analytical solutions for many classic text - book examples of quantum mechanics that previously could only be solved using graphical or numerical techniques .
|
extreme value theory ( evt ) can be seen as a branch of probability theory which studies the stochastic behaviour of extremes associated to a set of random variables with a common probability distribution . in recent years, several statistical techniques capable of better quantifying the probability of occurence of rare events have grown in popularity , especially in areas such as finance , actuaries and environmental sciences ( see for example , , ) . for a good review of both theory and interesting applications of evtthe main reference is still .natural phenomena like river flows , wind speed and rain are subject to extreme values that can imply in great material and financial losses . financial markets where large amounts of money invested can have an impact in the economy of a country need to have their risks of large losses and gains quantified . in risk analysis , estimating future losses by modelling events associated to default is of fundamental importance . in insurance , the potencial risk of high value claims needs to be quantified and associated to possible catastrofic events due to the large amount of money involved in payments .the usual approach for the analysis of extreme data is based on the generalized extreme value ( gev ) distribution which distribution function is given by , where , and are location , scale and shape parameters respectively .the sign denotes the positive part of the argument .we use the notation .the value of the shape parameter defines the tail behaviour of the distribution . if the distribution is defined for and is called a gumbel distribution ( exponentially decaying tail ) .if the distribution is defined for values , has a lower bound and is called a frchet distribution ( slowly decaying tail ) .if the distribution is defined for values , has an upper bound and is called a negative weibull distribution ( upper bounded tail ) .the density function of the gev distribution is given by , which is illustrated in figure [ fig1 ] for , and .+ figure [ fig1 ] about here .+ now suppose that we have observed data and assume that they are realizations from independent and identically distributed random variables with .we wish to make inferences about the unknown parameters , and .the likelihood function is given by , ^{-1/\xi-1 } \exp\left\{-\sum_{i=1}^n\left(1+\xi~\dfrac{y_i-\mu}{\sigma}\right)^{-1/\xi}\right\}\ ] ] for when and for when .otherwise the likelihood function is undefined . a bayesian analysisis then carried out by assigning prior distributions on , and .simulation methods , in particular markov chain monte carlo ( mcmc ) methods , are now routinely employed to produce a sample of simulated values from the posterior distribution which can in turn be used to make inferences about the parameters . in gev models ,the random walk metropolis algorithm is usually employed where a proposal distribution must be chosen and tuned , for which a poor choice will considerably delay convergence towards the posterior distribution .our main motivation to investigate alternative algorithms is computational and we hope that our findings are useful for the applied user of this class of models . in the next sectionwe describe an alternative algorithm to generate these posterior samples in a much more efficient way .this is compared with the traditional mcmc methods in section [ sec : app ] in terms of computational efficiency through a real dataset and a simulation study . 
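as a concrete counterpart to the likelihood function written above , the following sketch evaluates the gev log - likelihood of an i.i.d . sample , returning minus infinity whenever the support constraint 1 + xi ( y - mu ) / sigma > 0 is violated , and falling back to the gumbel form for xi close to zero . the simulated data and parameter values are illustrative only .

import numpy as np

def gev_loglik(params, y):
    """log-likelihood of an i.i.d. sample under the gev(mu, sigma, xi) density."""
    mu, sigma, xi = params
    if sigma <= 0:
        return -np.inf
    z = (y - mu) / sigma
    if abs(xi) < 1e-8:                       # gumbel limit xi -> 0
        return -len(y) * np.log(sigma) - np.sum(z) - np.sum(np.exp(-z))
    t = 1.0 + xi * z
    if np.any(t <= 0):                       # outside the support: likelihood undefined
        return -np.inf
    return (-len(y) * np.log(sigma)
            - (1.0 / xi + 1.0) * np.sum(np.log(t))
            - np.sum(t ** (-1.0 / xi)))

# illustrative use with data simulated by inverse-cdf sampling (xi = 0.1)
rng = np.random.default_rng(0)
u = rng.uniform(size=500)
y = 10.0 + 2.0 * ((-np.log(u)) ** (-0.1) - 1.0) / 0.1
print(gev_loglik((10.0, 2.0, 0.1), y))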
in section[ sec : ar ] a time series ingredient is included in the model to analyse time series of extreme values .some final comments are given in section [ sec : conclusion ] .hamiltonian monte carlo ( hmc ) was originaly proposed by for simulating molecular dynamics under the name of hybrid monte carlo . in what followswe present the hmc method in a compact form which will be used in the context of gev models .the reader is referred to for an up to date review of theoretical and practical aspects of hamiltonian monte carlo methods .let denote a -dimensional vector of parameters , denote the posterior density of and denote a vector of auxiliary parameters independent of and distributed as .if is interpreted as the position of a particle and describes its potential energy while is the momentum with kinetic energy then the total energy of a closed system is the hamiltonian function , where .+ the ( unormalized ) joint density of is then given by , .\end{aligned}\ ] ] for continuous time , the deterministic evolution of a particle that keeps the total energy constant is given by the hamiltonian dynamics equations , where is the gradient of with respect to .so , the idea is that introducing the auxiliary variables and using the gradients will lead to a more efficient exploration of the parameter space .+ however these differential equations can not be solved analytically and numerical methods are required .one such method is the strmer - verlet ( or leapfrog ) numerical integrator ( ) which discretizes the hamiltonian dynamics as the following steps , for some user specified small step - size . after a given number of time steps this results in a proposal . in appendix[ appendix ] we provide details on the required expressions of partial derivatives for hmc . a metropolis acceptance probabilitymust then be employed to correct the error introduced by this discretization and ensure convergence to the invariant distribution . since the joint distribution of is our target distribution , the transition to a new proposed value accepted with probability , & = & \min\left[\frac{f({\hbox{\boldmath}}^*,{\hbox{\boldmath}}^*)}{f({\hbox{\boldmath}},{\hbox{\boldmath}})},1\right]\\\\ & = & \min\left[\exp[h({\hbox{\boldmath}},{\hbox{\boldmath } } ) - h({\hbox{\boldmath}}^*,{\hbox{\boldmath}}^*)],1\right].\end{aligned}\ ] ] in the distribution of the auxiliary parameters , is a symmetric positive definite mass matrix which is typically diagonal with constant elements , i.e. .the hmc algorithm in its simplest form taking is given by , 1 .give an initial position and set , 2 .[ step2 ] draw and , 3 .set and , 4 .repeat the strmer - verlag solution times,.2 cm * * * + .2 cm 5 .set and , 6 .compute ] , 7 .set if > upg\thetag\thetap ] and ] , , where is the sample covariance matrix .we set and repeated the strmer - verlet solution 13 times .the simulation results are reported in table [ ressim2 ] as bias and mean square errors as defined in expressions ( [ eq1 ] ) and ( [ eq2 ] ) . for models of orders 1 and 2 andthe three sample sizes considered the performances in terms of bias are barely similar but these are in general smaller for the rmhmc algorithm .this is also true for the model of order 3 and sample sizes 60 and 150 , but for samples of size 300 the hmc algorithm underestimates and more severely and , except for , the biases are smaller for the rmhmc algorithm .when we look at the mean square errors , the comparison is in general more favorable to the rmhmc specially for larger sample sizes . 
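a minimal sketch of the hmc transition just listed ( momentum refreshment , stoermer - verlet integration , metropolis correction on the total energy ) is given below for a generic log - posterior and its gradient ; for the gev model these would be built from the log - likelihood of the previous section plus the log - prior . the identity mass matrix , the step size and the number of leapfrog steps are placeholders standing in for the tuning discussed in the text .

import numpy as np

def hmc_step(theta, log_post, grad_log_post, eps=0.01, n_leapfrog=15, rng=None):
    """one hamiltonian monte carlo transition with an identity mass matrix."""
    rng = rng or np.random.default_rng()
    p0 = rng.standard_normal(theta.shape)            # draw momenta p ~ N(0, I)
    theta_new, p = theta.copy(), p0.copy()

    # stoermer-verlet (leapfrog) integration of hamilton's equations
    p = p + 0.5 * eps * grad_log_post(theta_new)
    for _ in range(n_leapfrog - 1):
        theta_new = theta_new + eps * p
        p = p + eps * grad_log_post(theta_new)
    theta_new = theta_new + eps * p
    p = p + 0.5 * eps * grad_log_post(theta_new)

    # metropolis correction with H = -log_post(theta) + |p|^2 / 2
    h_old = -log_post(theta) + 0.5 * np.dot(p0, p0)
    h_new = -log_post(theta_new) + 0.5 * np.dot(p, p)
    if np.log(rng.uniform()) < h_old - h_new:
        return theta_new, True                       # proposal accepted
    return theta, False                              # proposal rejected

a chain is produced by iterating hmc_step from an initial value , exactly as in the numbered steps above .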
in particular , for the - model the mean square error tends to decrease ( sometimes dramatically ) for all sample sizes . at this point ,an explanation for the large values of mse for and in the - model is in order .recall that we comparing the performances of the two algorithms based on relatively few mcmc iterations .so , for samples of size 300 the initial values where probably far from regions of higher posterior probabilities and the hmc would require more iterations while for the rmhmc these initial values were much less influencial .all in all , we consider that this simulation study provides empirical evidence of a better performance of the rmhmc algorithm and we would recommend this approach to the applied user dealing with time series of extreme values .table [ ressim2 ] about here . in this application, each observation represents the maximum annual level of lake michigan , which is obtained as the highest mean monthly level , 1860 to 1955 ( observations ) .the time series data can be obtained from the time series data library repository at https://datamarket.com/data/set/22p3/ based on the autocorrelation and partial autocorrelation functions of the data we propose a - model for this dataset . to assess the quality of predictions, we removed the last three observations from estimation .the predictions are then compared with the actual data .the rmhmc algorithm was applied with a fixed metric evaluated at the map estimate to simulate values from the posterior distribution of .after a short pilot tunning a step - size was taken and the strmer - verlet solution was repeated 11 times at each iteration . a total of 21000 values were simulated discarding the first 1000 as burn - in .table [ tablear1 ] shows the approximations for the marginal posterior mean , standard deviation , mode , median and credible interval for the model parameters . from table[ tablear1 ] we note that the estimated model is stationary with high probability and the point estimate of is about with a small standard deviation thus characterizing a distribution with moderate asymetry .convergence of the markov chains was assessed by visual inspection of trace and autocorrelation plots ( not shown ) and all indicated that the chains reached stationarity relatively fast with low autocorrelations . 
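the posterior summaries reported in the table , and the statement that the estimated model is stationary with high probability , can both be read off the retained draws . the sketch below uses an equal - tail 95% interval and checks stationarity through the roots of the ar characteristic polynomial ; these are common choices , not necessarily the exact estimators used by the authors .

import numpy as np

def summarize(x):
    """posterior mean, sd, median and equal-tail 95% interval for one parameter."""
    return dict(mean=x.mean(), sd=x.std(ddof=1), median=np.median(x),
                ci95=(np.quantile(x, 0.025), np.quantile(x, 0.975)))

def prob_stationary(theta_draws):
    """fraction of draws whose ar polynomial 1 - theta_1 z - ... - theta_p z^p
    has all roots outside the unit circle."""
    ok = 0
    for theta in theta_draws:
        roots = np.roots(np.r_[-np.asarray(theta)[::-1], 1.0])
        ok += np.all(np.abs(roots) > 1.0)
    return ok / len(theta_draws)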
in the bayesian approach , given ,the -steps ahead predictions are obtained from the predictive density of which is given by , .\end{aligned}\ ] ] here we propose to compute a point prediction of as a monte carlo approximation of the predictive expectation , = e[e[y_{_{t+j}}|\mu,\theta,\sigma,\xi,{\hbox{\boldmath}}]] ] .the nonzero elements are given by , = ( t - p)\dfrac{a}{\sigma^{2 } } \\-e \left ( \dfrac{\partial^2 \ell}{\partial \mu \partial \theta_j } \right ) & = & ( t - p)\dfrac{a}{\sigma^{2 } } e[y_{t - j } ] = \mu_{y_t}(t - p)\dfrac{a}{\sigma^{2}}\\ -e \left(\dfrac{\partial^2 \ell}{\partial \mu \partial \sigma } \right ) & = & - e \left [ e\left ( \dfrac{\partial^2 \ell}{\partial \sigma \partial \mu_t } \middle| d_{t-1 } \right ) \right ] = -(t - p ) \dfrac{1}{\sigma^{2}\xi}[a - \gamma(2+\xi ) ] \\-e \left ( \dfrac{\partial^2 \ell}{\partial \mu \partial \xi } \right ) & = & - e \left [ e\left ( \dfrac{\partial^2 \ell}{\partial \xi \partial \mu_t } \middle| d_{t-1 } \right ) \right ] = - ( t - p)\dfrac{1}{\sigma\xi}\left(b - \dfrac{a}{\xi } \right)\\ -e \left ( \dfrac{\partial^2 \ell}{\partial \theta_i \partial \theta_j } \right ) & = & - e \left [ e\left ( \dfrac{\partial^2\ell}{\partial \mu_t^2 } y_{t - i } y_{t - j } \middle| d_{t-1}\right)\right ] = ( t - p)\dfrac{a}{\sigma^{2 } } e[y_{t - i}y_{t - j } ] \\ -e \left ( \dfrac{\partial^2 \ell}{\partial \sigma \partial \theta_j } \right ) & = & - e \left [ e\left ( \dfrac{\partial^2\ell}{\partial \sigma \partial \mu_t } y_{t - j }\middle| d_{t-1 } \right ) \right]\\ & = & -(t - p ) \dfrac{1}{\sigma^{2}\xi}[a - \gamma(2+\xi ) ] e[y_{t - j}]\\ & = & -(t - p ) \dfrac{1}{\sigma^{2}\xi}[a - \gamma(2+\xi ) ] \mu_{y_t}\\ -e \left ( \dfrac{\partial^2 \ell}{\partial \xi \partial \theta_j } \right ) & = & - e \left [ e\left ( \dfrac{\partial^2\ell}{\partial \xi \partial \mu_t } y_{t - j } \middle| d_{t-1 } \right ) \right]\\ & = & -(t - p ) \dfrac{1}{\sigma\xi}\left(b - \dfrac{a}{\xi } \right ) e[y_{t - j } ] \\ & = & -(t - p ) \dfrac{1}{\sigma\xi}\left(b - \dfrac{a}{\xi } \right)\mu_{y_t}\\ -e\left ( \dfrac{\partial^2 \ell}{\partial \xi \partial \sigma } \right ) & = & - ( t - p ) \dfrac{1}{\sigma\xi^{2}}\left[1 - \gamma + \dfrac{1 - \gamma(2+\xi)}{\xi } - b + \dfrac{a}{\xi } \right]\end{aligned}\ ] ] where , $ ] , is the gamma function , is the digamma function and is the euler s constant ( ) .
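returning to the prediction step described at the beginning of this section , the monte carlo approximation of the predictive expectation can be sketched as follows , assuming a location of the form mu_t = mu + theta_1 y_{t-1} + ... + theta_p y_{t-p} ( the parameterization adopted here for illustration ; details may differ from the authors ' implementation ) . each retained posterior draw generates a simulated future path from the gev , and the paths are averaged .

import numpy as np

def gev_sample(mu, sigma, xi, rng):
    u = rng.uniform()
    if abs(xi) < 1e-8:
        return mu - sigma * np.log(-np.log(u))      # gumbel limit
    return mu + sigma * ((-np.log(u)) ** (-xi) - 1.0) / xi

def predict(draws, y_hist, horizon, rng=None):
    """draws: iterable of (mu, theta, sigma, xi) posterior samples; theta has length p."""
    rng = rng or np.random.default_rng()
    paths = []
    for mu, theta, sigma, xi in draws:
        y = list(y_hist)
        for _ in range(horizon):
            mu_t = mu + np.dot(theta, y[-len(theta):][::-1])   # mu + sum_j theta_j * y_{t-j}
            y.append(gev_sample(mu_t, sigma, xi, rng))
        paths.append(y[-horizon:])
    return np.mean(paths, axis=0)          # point predictions for t+1 .. t+horizon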
|
in this paper we propose to evaluate and compare markov chain monte carlo ( mcmc ) methods to estimate the parameters in a generalized extreme value model . we employ the bayesian approach using traditional metropolis - hastings methods , hamiltonian monte carlo ( hmc ) and riemann manifold hmc ( rmhmc ) methods to obtain the approximations to the posterior marginal distributions of interest . applications to real datasets of maxima illustrate how hmc can be much more efficient computationally than traditional mcmc , and simulation studies are conducted to compare the algorithms in terms of how fast they get close enough to the stationary distribution so as to provide good estimates with a smaller number of iterations . key words : extreme value ; bayesian approach ; hamiltonian monte carlo ; markov chain monte carlo .
|
rooted in a host organism or patch of habitat such as a dead log , tens of thousands of species of filamentous fungi rely on spores shed from mushrooms and passively carried by the wind to disperse to new hosts or habitat patches .a single mushroom is capable of releasing over a billion spores per day , but it is thought that the probability of any single spore establishing a new individual is very small . nevertheless in the sister phylum of the mushroom - forming fungi , the ascomycota , fungi face similarly low likelihoods of dispersing successfully , but spore ejection apparatuses are highly optimized to maximize spore range , suggestive of strong selection for adaptations that increase the potential for spore dispersal .spores disperse from mushrooms in two phases : a powered phase , in which an initial impulse delivered to the spore by a surface tension catapult carries it clear of the gill or pore surface , followed by a passive phase in which the spore drops below the pileus and is carried away by whatever winds are present in the surrounding environment .the powered phase requires feats of engineering both in the mechanism of ejection and in the spacing and orientation of the gills or pores . however , spore size is the only attribute whose influence on the passive phase of dispersal has been studied .spores are typically less than 10 m in size , so can be borne aloft by an upward wind of only 1 cm / s .buller claimed that such wind speeds are usually attained beneath fruiting bodies in nature : indeed peak upward wind velocities under grass canopies are of order 0.1 - 1 cm / s .however even if the peak wind velocity in the mushroom environment is large enough to lift spores aloft : 1 .the average vertical wind velocity is zero , with intervals of downward as well as upward flow and 2 . mushrooms frequently grow in obstructed environments , such as close to the ground or with pilei crowded close together ( fig 1 a - b ). the pileus traps a thin boundary layer of nearly still air , with typical thickness , where is the horizontal wind velocity , the size of the pileus , and the viscosity of air ; no external airflow can penetrate into gaps narrower than .for typically sized mushrooms under a grass canopy ( with cm , cm / s , m ) , we find that mm .if the gap thickness between pileus and ground is smaller than then no external wind will penetrate into the gap .* spores can disperse from thin gaps beneath pilei without external winds . *we analyzed spore deposition beneath cultured mushroom ( shiitake ; _ lentinula edodes _ , and oyster ; _ pleurotus ostreatus _, sourced from ccd mushroom , fallbrook , ca ) as well as wild - collected _ agaricus californicus_. pilei were placed on supports to create controllable gap heights beneath the mushroom , and placed within boxes to isolate them from external airflows .we measured spore dispersal patterns by allowing spores to fall onto sheets of transparency film , and photographing the spore deposit ( fig .1c ) . in all experiments , spore deposits extended far beyond the gap beneath the pileus .spores were deposited in asymmetric patterns , both for cultured and wild - collected mushrooms ( fig .1d ) . using a laser light sheet and high speed camera (see materials and methods ) , we directly visualized the flow of spores leaving the narrow gap beneath a single mushroom ( fig . 
1e and movie s1 ) : our videos show that spores continuously flow out from thin gaps , even in the absence of external winds .what drives the flow of spores from beneath the pileus ?some ascomycete fungi and ferns create dispersive winds by direct transfer of momentum from the fruiting body to the surrounding air .for example , some ascomycete fungi release all of their spores in a single puff ; the momentum of the spores passing through the air sets the air into motion .fern sporangia form over - pressured capsules that rupture to create jets of air . however , the flux of spores from a basidiomycete pileus is thousands of times smaller than for synchronized ejection by a ascomycete fungus , and pilei have no known mechanism for storing or releasing pressurized air .the only mechanism that we are aware of for creating airflows without momentum transfer is by the manipulation of buoyancy an effect that underpins many geophysical flows and has recently been tapped to create novel locomotory strategies .* mushrooms evaporatively cool the surrounding air . *many mushrooms are both cold and wet to the touch ; the expanding soft tissues of the fruit body are hydraulically inflated , but also lose water quickly ( fig .we made a comparative measurement of water loss rates from living mushrooms and plants .the rate of water loss from mushrooms greatly exceeds water loss rates for plants , which use stomata and cuticles to limit evaporation ( fig .for both plants and pilei , rates of evaporation were larger when tissues were able to actively take in water via their root - system / mycelium , than for cut leaves or pilei .however , the pilei lost water more quickly than all species of plants surveyed under both experimental conditions .in fact , evaporation rates from cut mushrooms were comparable to a sample of water agar hydrogel ( 1.5% wt / vol agar ) , while evaporation rates from mushrooms with intact mycelia , were twofold larger ( fig 2b ) . taken together , these data suggest that pilei are not adapted to conserve water as effectively as the plant species analyzed. the high rates of evaporation lead to cooling of the air near the mushroom , and may have adaptive advantage to the fungus .previous observations have shown that evaporation cools both the pileus itself and the surrounding air by several degrees celsius . specifically , since latent heat is required for the change of phase from liquid to vapor , heat must continually be transferred to mushroom from the surrounding air .we compared the ambient temperature of the air between 20 cm and 1 m away from _p. ostreatus _ pilei with intact mycelia , with temperatures in the narrow gaps between and beneath pilei , using a traceable liquid / gas probe ( control company , forestwood , tx ) .we found that gap temperatures were consistently 1 - 2 cooler than ambient ( fig . 2c )the surface temperature of the pileus , measured with a dermatemp infra - red thermometer ( exergen , watertown , ma ) was up to 4 cooler than ambient , consistent with previous observations ( and fig . 2c ) .evaporation alone can account for these temperature differences : in a typical experimental run , a pileus loses water at a rate of kg / m ( comparable with the data of ) . evaporating this quantity of water requires w / m of vaporization enthalpy . 
at steady state the heat flux to the mushroommust equal the enthalpy of vaporization ; newton s law of cooling gives that the heat flux ( energy / area ) will be proportional to the temperature difference , between the surface of the mushroom and the ambient air : where w / m is a heat transfer coefficient . from this formulawe predict that , in line with our observations .* increasing air density by cooling produces dispersive currents*. cooling air from the laboratory ambient ( ) down to increases the density of the air by kgm , where is the coefficient of expansion of air . cold dense air will tend to spread as a gravity current , and an order of magnitude estimate for the spreading velocity of this gravity current from a gap of height cm is given by von karman s law : / s .although the air beneath the pileus is laden with spores , spore weight contributes negligibly to the creation of dispersive winds : in typical experiments , spores were released from the pileus at a rate of 490 spores / cm s ( data from _ p. ostreatus _ mushrooms ) . if the mass of a single spore is kg and its sedimentation speed is m / s ( see discussion preceding equation ( 1 ) ) , then the contribution of spores to the density of air beneath the pileus is kgm , less than half of the density increase produced by cooling .indeed , water evaporation , rather than spores and the water droplets that propel them , constitute most of the mass lost by a mushroom .to prevent spore ejection , we applied a thin layer of petroleum jelly to the gill surfaces of cut mushrooms .treated mushrooms lost mass at a statistically indistinguishable rate to cut mushrooms that were allowed to shed spores ( fig .although our experiments were performed in closed containers to exclude external airflows it is still possible that spore deposit patterns were the result of convective currents created by temperature gradients in the lab , rather than airflows created by the mushroom itself . to confirm that spores were truly dispersed by airflows created by the mushroom we rotated the mushroom either 90 or 180 halfway through the experiment and replaced the transparency sheet . 
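the order - of - magnitude estimates built up in the preceding paragraphs ( boundary - layer thickness , evaporative cooling , density excess , gravity - current speed and spore sedimentation speed ) can be collected in a few lines of arithmetic . the numerical inputs below are representative values or standard material constants inserted by us for illustration ; they are not measurements reported in the paper , but they reproduce the scales quoted in the text : a cooling of a few degrees , a spreading speed of centimetres per second and a sedimentation speed of the order of a millimetre per second .

import numpy as np

# representative inputs -- assumptions for illustration , not values taken from the paper
nu_air   = 1.5e-5      # kinematic viscosity of air, m^2/s
mu_air   = 1.8e-5      # dynamic viscosity of air, Pa s
rho_air  = 1.2         # density of air, kg/m^3
g        = 9.81        # m/s^2
L_vap    = 2.45e6      # latent heat of vaporization of water, J/kg
E        = 1.5e-5      # assumed evaporation rate, kg/(m^2 s)
h_coef   = 10.0        # assumed free-convection heat transfer coefficient, W/(m^2 K)
T_amb    = 293.0       # ambient temperature, K
cap_size = 0.04        # pileus diameter, m
wind     = 0.01        # ambient wind speed, m/s
gap      = 0.01        # gap height beneath the pileus, m
a_spore  = 3.0e-6      # assumed spore radius, m
rho_sp   = 1.2e3       # assumed spore density, kg/m^3

delta   = np.sqrt(nu_air * cap_size / wind)            # boundary-layer thickness
dT      = L_vap * E / h_coef                           # newton's-law-of-cooling estimate
drho    = rho_air * dT / T_amb                         # density excess of the cooled air
u_front = np.sqrt(2 * g * (drho / rho_air) * gap)      # von karman gravity-current speed
v_sed   = 2 * rho_sp * g * a_spore**2 / (9 * mu_air)   # stokes sedimentation speed

print(f"boundary layer  ~ {delta*1e3:.1f} mm")
print(f"cooling         ~ {dT:.1f} K")
print(f"density excess  ~ {drho*1e3:.1f} g/m^3")
print(f"spreading speed ~ {u_front*1e2:.1f} cm/s")
print(f"sedimentation   ~ {v_sed*1e3:.1f} mm/s")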
since the box remained in the same orientation and position in the lab, we would expect that if spores are dispersed by external airflows , the dispersal pattern would remain the same relative to the lab .in fact we found consistently that the direction of the dispersal current rotated along with the mushroom ( see figure s1 ) , indicating that mushroom generated air - flows are dispersing spores .we explored how the distance dispersed by spores depended on factors under the control of the parent fungus .the distance spores dispersed from the pileus increased in proportion to the square of the thickness of the gap beneath the pileus ( , fig .however , we found no correlation between spore dispersal distance and the diameter of the pileus or the rate at which spores were produced ( and respectively , figure s2 ) .spores were typically deposited around mushrooms in asymmetric patterns , suggesting that one or two tongues of spore laden air emerge from under the pileus , and spores do not disperse symmetrically in all directions ( fig .these tongues of deposition were seen in wild - collected as well as cultured mushrooms ( fig .we dissected the dynamics of one of these tongues by building a two dimensional simulation of the coupled temperature and flow fields around the pileus .although real dispersal patterns are three dimensional , these simulations approximate the 2d dynamics along the symmetry plane of a spreading tongue . in our simulationswe used a boussinesq approximation for the equations of fluid motion and model spores as passive tracers , since their mass contributes negligibly to the density of the gravity current .initially we modeled the pileus by a perfect half - ellipse whose diameter ( 4 cm ) and height ( 0.8 cm ) matched the dimensions of a _l. edodes _ pileus used in our experiments . however ,if cooling was applied uniformly over the pileus surface then spores dispersed weakly ( fig .weak symmetric dispersal can be explained by conservation of mass : cold outward flow of spore - laden air must be continually replenished with fresh air drawn in from outside of the gap . in a symmetric pileus , the cool air spreads along the ground and inflowing air travels along the under - surface of the pileus .so initially on leaving the gills of the mushroom , spores are drawn inward with the layer of inflowing warm air ; and only after spores have sedimented through this layer into the cold outflow beneath it do they start to travel outward ( fig 4a , upper panel ) .* asymmetric airflows are necessary for dispersal . * to understand how mushrooms can overcome the constraints associated with needing to maintain both inflow and outflow , we performed a scaling analysis of our experimental and numerical data .although the buoyancy force associated with the weight of the cooled air draws air downward , warm air must be pulled into the gap by viscous stresses . for fluid entering a gap of thickness , at speed , the gradient of viscous stress can be estimated as : , where is the viscosity of air .we estimate the velocity by balancing the viscous stress gradient with the buoyancy force ; i.e. : .we then adopt the notation that if is a quantity of interest ( e.g. 
temperature or gap height ) that can vary over the pileus , then we write for the value of on the right edge of the pileus and for its value on the left edge of the pileus .if then there is the same inflow on the left and right edges of the pileus , and dispersal is symmetric and weak .if there are different inflows on the left or right side of the pileus , then there can be net unidirectional flow beneath the pileus , carrying spores further .assuming , without loss of generality , that the net dispersal of spores is rightward , the spreading velocity of the gravity current can be estimated from the difference : ( right - moving inflow minus left - moving inflow ) . the furthest traveling spores originate near the rightward edge of the pileus and fall a distance ( the gap width on the rightward edge ) before reaching the ground . since the gravity current spreads predominantly horizontally , the vertical trajectories of spores are the same as in still air , namely they sediment with velocity and take a time to be deposited . by balancing the weight of a spore against its stokes drag , we obtain where kg / m is the density of the spore , and m is the radius of a sphere of equivalent volume .the sedimentation velocity , , can vary between species , ( typically / s ) , but does not depend on the flow created by the pileus .the maximum spore dispersal distance is then : ^\ell \frac{h_r}{v_s } \label{eq : scaling}\ ] ] where we use the notation ^l_r ] ( and ^\ell_r h_r^2 ] ( and ^\ell_r h_r ] , ^l_r ] ( gray lines ) .( e , f ) we test the scalings for real mushrooms using digital piv to measure spore velocities . in ( e ) colorsgive spore velocity in m / s , scale bar : 1 cm ) . in ( f ) , the mean spore velocity at the beginning of the gravity current is proportional to gap width ( black line ) , consistent with equation ( 1 ) for shape - induced asymmetry.,scaledwidth=50.0% ] * demonstration that spore deposition patterns rotate with the mushroom .* spore dispersal distance does not depend on the pileus diameter or on the rate of spore production . *numerical simulations show that gravity currents can carry spores up barriers . or 180 .in all cases we saw that the pattern of spore deposition rotated with the mushroom .two representative experiments are shown here .the orange shapes show the approximate shape of the pileus , and arrows show orientation of arbitrary reference points on the pileus .( a - b ) deposition from a _p. ostreatus _ mushroom .( a ) deposition after 2 hours .( b ) the mushroom was rotated by 180 , a new transparency added , and the experiment continued for another 2 hours .( c - d ) deposition from a _l. edodes _ mushroom .( c ) deposition after 2 hours .( d ) the mushroom was rotated by 90 , a new transparency was added , and the experiment continued for another 2 hours . ]
|
thousands of fungal species rely on mushroom spores to spread across landscapes . it has long been thought that spores depend on favorable airflows for dispersal , and that active control of spore dispersal by the parent fungus is limited to an impulse delivered to the spores to carry them clear of the gill surface . here we show that evaporative cooling of the air surrounding the mushroom pileus creates convective airflows capable of carrying spores at speeds of centimeters per second . convective cells can transport spores from gaps that may be only a centimeter high , and lift spores ten centimeters or more into the air . the work reveals how mushrooms tolerate and even benefit from crowding , and provides a new explanation for their high water needs .
|
the term _ causality _ is frequently used in a way which suggests that an intrinsic causal structure underlies the universe . in relativitythis is reinforced by the assumption of a metric tensor with a lorentzian signature .this gives the traditional light cone structure associated with spacelike and timelike intervals , and imposes conditions on the possible trajectories of particles and quantum field theory operator commutation relations .we shall discuss the idea that _ causality is a convenient account designed to satisfy and conform to the patterns of classical logic that the human theorist wishes to believe underlies the dynamics of space , time and matter ._ in this approach causality need not be associated with any _ a priori _ concept of metric tensor .this view of causality has been suggested by various philosophers and scientists .hume argued that causality is a fiction of the mind .he said that causal reasoning is due to the mind s expectation that the future is like the past and because people have always associated an effect with a cause .kant believed that causality is a category used to classify experience .lentzen said that causality is a relation within the realm of conceptual objects .lurchin said that causality is a personal way of thought that is not among our immediate sensual data , but is rather our basic way to organize that data . for maxwellthe principle of causality expresses the general objective of theoretical sciences to achieve deterministic explanations and according to heisenberg , causality only applies to mathematical representations of reality and not to individual mechanical systems .to avoid confusion we distinguish three sorts of time : _ process time _ is the hypothesized time of physical reality .although there is geological and astrophysical evidence for some sort of temporal ordering in reality , process time need not exist in any real sense and may just be a convenient way for humans to think about the universe . _ physiotime _ is the subjective time that humans sense and which they believe runs monotonically forwards for them .it is the end product of complex bio - dynamical processes occurring in process time .its origins are not understood currently .many physicists believe that this feeling is an illusion .what matters here is the undeniable _ existence _ of this feeling , because humans are driven by this sensation of an ever increasing time to believe that descriptions of reality must involve such a concept . _ mathematical times _ are conceptual inventions of human theorists designed to model process time .examples are newtonian absolute time , relativistic coordinate times , proper time and cosmic time .mathematical times usually involve subsets of the real line , which has an ordering property .this ordering is used to model the notions of _ earlier _ and _ later . _this presupposes something about the nature of process time that may be unwarranted . in the euclidean formulation of field theory for examplethere is no dynamical ordering parameter .we shall use the term _ theorist _ to denote the human mind operating at its clearest and most rational in physiotime .the theorist has the status of an observer or deity overseeing the mathematically consistent development of chosen mathematical models that are used to represent phenomena in process time ._ free will_ enters into the discussion here as the freedom of the theorist to choose boundary conditions in these models . 
whether free will is an illusion or not is regarded here as irrelevantthe need to seek causal explanations stems from the peculiarities of human consciousness .humans generally want to _ explain _ phenomena .when they do this they invariably try to invoke what may be called _ classical logic ._ this is the everyday logic that postulates that statements are _ either true or else false _ and that conclusions can be drawn from given premises .it is also the logic of vision , which generally informs the brain that an object either is in a place or else is not in that place .the rational conscious mind tends to believe that the external universe follows this logic , and this is the basis for the construction of cm ( classical mechanics ) and all the belief structures which it encodes into its view of reality .it is also the logic of jurisprudence and common sense .this logic served humanity extremely well for millennia , until technological advances in the early years of the twentieth century revealed that quantum phenomena did not obey this logic in detail .a cm theorist is anyone who believes in a classical view of reality . in the mindset of a cm theorist , realityis assumed to be strictly single valued at each and every time even in the absence of observation .philosophers say that reality is determinate__. _ _ the cm theorist attempts to make unique predictions wherever possible , such as where a planet will be at a future time .the assumption is made that the planet will be somewhere at that time and not nowhere , and that it will not be in two or more places at that time . in general, quantum theory requires a pre - existing classical conceptual framework for a sensible interpretation .for example , relativistic quantum field theory assumes a classical lorentzian metric over spacetime , and only the fields are quantised .for this reason , we shall focus our attention on a classical formulation of causality .to set up our framework for causality , it will be useful to review the definition of a _ function _ : [ [ definition-1 ] ] * definition 1 : * a _ _ function _ _ is an ordered triple where and are sets which satisfy the following : 1 . is a subset of the cartesian product ; 2 . for each element in is exactly one element in such that the ordered pair is an element of .physicists tend to write . is called the _ domain _ of ( definition of ) and is the _ _ range _ _ of _ _ _ _ the _ _ image _ _ of _ _ _ _ is the subset of such that for each element in there is at least one element in such that without further information it can not be assumed that but in our work this must be assumed to hold .otherwise , there arises the possibility of having a cm where something could happen without a cause , i.e. an element of could exist for which there is no in such that .the range and domain of a function do not have a symmetrical relationship and the ordering of the sets in _ is crucial .usually , no pair of component sets in the definition of a function can be interchanged without changing the function .this asymmetry forms the basis of the time concept discussed in this article and defines what we call a _ mathematical arrow of time_. assuming that is single valued , then we may employ the language of dynamics here , though this may seem unusual .we could say that is _ determined _ by the process _ _ _ _ , _ _ or that _ _ causes __ .then is a _ cause _ , is its _ effect _ and is the _ mechanism of causation_. 
although _ definition _ carefully excludes the concept of a many - valued explicit function , such a possibility arises when we discuss implicit functions .[ [ graphical - notation ] ] graphical notation : we shall use the following graphical notation : the process of mapping elements of into via will be represented by the lhs ( left hand side ) of , where the large circles denote domain and image sets , the small circle denotes the function , and arrows indicate the direction of the mapping .an alternative representation is given by the rhs ( right hand side ) of , where the labels , and are understood . herethe integers and represent the ordering of the function _ from _ _ _ _ _ _ to _ , so that the arrows are not needed . with domain and range ; the rhs is an equivalent representation of the same with , and understood .the ordering of the digits , represents the direction of the mapping .( b ) a representation of an implicit function of two variables.,width=381,height=246 ] _ definition _ raises the question : _ which particular _ _ in _ _ caused a given _ _ in _ as given , nothing in _ definition _ rules out many - to - one mappings , so without further information about the function there may be more than one such the assumption of a unique pre - image is equivalent to the belief that , given the present state of the universe , there was a unique past which gave rise to it . t hooft has recently discussed this in the context of gravitation using equivalence classes of causes .this is bound up with the notion of irreversibility .if however it is the case that is one - to - one and onto , then its inverse function exists and so any element in can be mapped back to a unique cause in via in such a case there would be no inherent difference in principle between the roles of and . in the language of dynamicsthe mechanics would be _ reversible . _suppose now that the relationship between and is implicit rather than explicit .for example , let and each be a copy of the real line with elements , belonging to and respectively and suppose that the only dynamical information given was an implicit equation of the form then our graphical notation for this is given by , where the small circle now denotes an implicit equation or _ relating elements of and .without further information no arrows are permitted at this stage .given an implicit equation in two variables such as , suppose now that it could proved that there was always a unique solution for in given any in ( the solution of course depending on the value of .then our convention is that now arrows pointing from into via the link may be added to indicate this possibility , giving the lhs of with replaced by .but now the relationship between and is formally equivalent in principle to having an explicit function of ( even if could not be obtained analytically ) . 
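whether an implicit link between two real variables can be resolved in the sense just introduced can also be probed numerically : for a sample of values of one variable one counts how many solutions the link admits for the other . the two relations below are toy examples chosen for illustration , a cubic link that always resolves and a circle that does not .

import numpy as np

def count_solutions(link, x, grid):
    """count sign changes of link(x, y) over a grid of y values."""
    vals = np.array([link(x, y) for y in grid])
    return int(np.sum(np.sign(vals[:-1]) != np.sign(vals[1:])))

cubic  = lambda x, y: y**3 + y - x          # resolvable: exactly one y for every x
circle = lambda x, y: x**2 + y**2 - 1.0     # not resolvable: zero or two y values

grid = np.linspace(-5, 5, 20001)
for x in [-2.0, 0.0, 0.5, 2.0]:
    print(x, count_solutions(cubic, x, grid), count_solutions(circle, x, grid))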
in such a casewe will also say that any given element of _ causes _ or _ determines _ a corresponding element of and that the link between and may be _ resolved _ from _ to _ ( or in favour of ) .our concept of resolution depends only on the _ existence _ of a unique solution and does not imply that a solution could actually be computed by the theorist in practice .computability is an attribute associated with _physiotime _ , and is not here regarded as an essential ingredient of our version of causality .a more severe definition of causality however might be to impose the restriction of computability .we do not do this here because we wish to avoid anthropomorphism .what is important in classical mechanics is the existence of a unique resolution ; the universe does not actually `` compute '' anything when this resolution occurs .it is possible to consider links which are not equations but more general relations such as inequalities .for example , suppose and are copies of the real line with elements denoted by , respectively and consider a link defined by given there is an infinity of solutions for , so in this case one possible interpretation would be that the link is equivalent to a many valued function of , although this way of putting things might seem unusual .on the other hand , given a there is also an infinity of solutions for .this leads to an alternative interpretation of the link as a many valued function of .such examples do not generate a classical tra ( temporal resolution of alternatives ) and so would not occur normally in our classical spacetime dynamics .now suppose we were given an implicit equation for and such that we could prove the existence of a unique solution for either or given the other variable .then the arrows could point either way and this would correspond to a choice of causation . because cause and effect can be interchanged in such a case, it then becomes meaningful to talk about this dynamics being _ reversible ._ clearly this is formally equivalent to having an invertible explicit function .when discussed in this way it becomes clear that in general , spacetime dynamics will be irreversible .reversibility will occur only under very special conditions , which of course is the experience of experimentalists .in general the sets and need not be restricted to the reals .they could be any sort of sets , such as vector spaces , tensor product spaces , quaternions , operator rings , group manifolds , etc .whatever their nature , they will always be referred to as _ events _ for convenience . in the sorts of dynamics we have in mind eventswill often be sets such as group manifolds ._ links _ are defined as specific relationships between events .they may be more complicated than those in the above examples , and may have several or many components relating different components of different events .the specification of a set of events and corresponding links will be said to specify a ( classical ) _ spacetime dynamics . 
_an important generalization is that a link might involve more than two events .given for example five events , , , and with elements and respectively then a classical dynamics involving these events would give some set of equations or link of the generic form this will be represented by .then arrows are added as on the lhs .an equivalent diagram is given on the rhs with the ordering of the integers indicating the direction of the resolution.,width=422,height=511 ] suppose now , given such an , that it could be proven that there is always a unique solution given the other elements and .in such a case this will be indicated by arrows pointing from , , and into and an arrow pointing into as in the lhs of .then it will be said that _ can be ( causally ) resolved in _ _( favour of ) _ , and will be called the _ resolved event_. by definition , classical resolution is always in favour of a single resolved event , given initial data about the other events associated with the link .this does not imply anything about the possibility of resolving _ _ _ _ in favour of any of these other events. it may be or not be possible to do this .suppose the theorist could in principle resolve if they were given and , and also resolve if they were given and .then the theorist has to make a choice of resolution and choose one possible resolution and exclude the other .it would not be meaningful classically to resolve in favour of and at the same moment of physiotime , the reason being that these alternative resolutions employ different and inconsistent initial data sets ( boundary conditions ) .initial data sets are equivalent to information , and it is a self - evident premise that a theorist can have at most one initial data set from a collection of possible and mutually inconsistent initial data sets at a given moment of physiotime .there remains one additional exotic possibility .if might be the case that given say and , both and were implied by a knowledge of the link .for example , suppose the link was equivalent to the equation where .assuming that and were always required to be real , then we could always find and from a given and simply by equating real and imaginary parts .a situation where a given link can be resolved in two or more events given just one initial data set at that link will be called a _ fluctuation process . _fluctuation processes will be excluded from our notion of classical causality .they may have a role in the qm ( quantum mechanics ) version of causality , which is beyond the scope of this article .one reason for excluding fluctuation processes is that this guarantees that information flows ( in physiotime ) from a link into a _single _ event .this is related to the concept of _ cosmic time _discussed below and to the idea that classical mathematical time has one dimension .when theorists discuss models with more than one parameter called a time , all but one of these has to be hidden or eliminated at the end of the day if a classical picture is to emerge .it may be argued therefore that _ the mechanism of classical resolution is the origin of the concept of time , and that time , like causality , is a no more than a convenient theoretical construct designed by the human mind to provide a coherent description of physical reality . _this carries no implication that what we called process time is really a linear time . as we said before, process time is just a convenient label for something which may be quite different to what we believe it to be . 
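the equation behind the fluctuation-process example above is not written out here, so the following is only an assumed concrete instance of the mechanism described (the variable names are illustrative). take four real event states linked by the single complex relation

\[ x + iy = (u + iv)^2 , \qquad i^2 = -1 , \qquad x, y, u, v \in \mathbb{R} . \]

given the one initial data set consisting of and , equating real and imaginary parts yields both and at once ( and ), so a single link is resolved in favour of two events from one initial data set, which is precisely what is meant above by a fluctuation process.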
when a choice of resolution exists and is made , then as an additional simplification and provided there is no confusion with other relations to which may be a party ( not shown ) , the diagram on the lhs of may be replaced by the rhs of .here the numbers zero and one indicate the ordering of the resolution . because may be regarded as _ caused _ by the other events , it can be regarded as later and so has a greater associated discrete time .such times will be called _dates_. in general a link may be a whole collection of relations and equations .if there is just one small part of these equations which does not determine a unique solution fully , i.e. , prevents a resolution , then arrows or dates are in principle not permitted .however , under some circumstances it may be reasonable to ignore some part of a dynamical relation in such a way that arrows could be justified as far as the remaining parts were concerned .for example , the microscopic laws of mechanics appear to be resolvable forwards and backwards in time ( i.e. , are reversible ) provided the `` small '' matter of neutral kaon decays and the thermodynamic arrow of time ( which could involve the gravitational field ) are ignored . in , is the _ complete cause _ of ; in , is a _ partial cause _ of .the _ complete cause _ of is the collection of events , , and , but only for this choice of resolution .the theorist could decide to alter boundary conditions so that was no longer regarded as the resolved event .having outlined our ideas on functions and links , we shall apply them now to discrete spacetime .fourier s _ principle of similitude _ states that a system similar to but smaller than another system should behave like it is the physicist s analogue of continuity in mathematics , and is of course an erroneous principle when applied to matter , as evidenced by the observation of atoms and molecules .it is generally supposed that this principle will also break down in the microscopic description of space and time .classical gr ( general relativity ) may therefore be an approximation , albeit a remarkably good approximation , to some model of space and time which is not intrinsically a four - dimensional pseudo - riemannian manifold .there have been numerous suggestions concerning the fundamental nature and meaning of space and time , such as twistor theory , point set theory , etc . , and each of these suggestions makes a specific set of mathematical assumptions about spacetime .likewise , in this article a specific view of space time and dynamics is proposed and its consequences explored .of course , there is an important question concerning the use of classical or quantum physics here . in this paperthe proposals are based on classical ideas and the ramifications of quantum physics are explored elsewhere . from before the time of newton, physicists took the view that material objects have definite spatial positions at definite times . in the theorists went further and developed to the extreme the wellsian or block universe view that space and time exist in some physically meaningful sense , even in the absence of matter .whatever that sense is , be it a physical one or simply an approximate relationship of sorts between more complex attributes of reality , most physicists agree on the prime status of spacetime as the arena in which or over which physical objects exist .this is certainly the case for classical physicists and to various degrees for quantum physicists . 
in this paperthe focus is on a classical description of a _ discrete _ spacetime structure .discreteness is considered here for several reasons .first , as mentioned above , it would be too much to hope that fourier s principle of similitude should apply to spacetime and not to matter .second , there is a strong feeling in the subject of quantum gravity that the planck scales are significant .third , discreteness has the advantage over continuity in being less mathematically restrictive .theories based on discrete principles can usually encompass the properties of those based on continuous principles via appropriate limit processes , yet retain features which can not occur in the continuum .discrete spacetime structure and its relationship to causality has been discussed by a number of authors , notably by sorkin et al and finkelstein et al .a basic difference between those approaches and that taken here is that no _ a priori _ underlying spacetime manifold is assumed here .[ [ proposition-1 ] ] * proposition* : classical spacetime may be modelled by some discrete set .elements of will be denoted by capital letters such as . will be called a _spacetime _ and its elements referred to as _ events _ even if subsequently turns out to have only an indirect relationship with the usual four - dimensional spacetime of physics .what these events mean physically depends on the model .it is simplest to think of events as labels for mathematical structures representing the deeper physical reality associated with process time .events are meaningful only in relationship with each other and it is meaningless to talk about a single event in spacetime without a discussion of how the theorist relates it to other events in spacetime .this is done by specifying the _ links _ or relationships between the events .links are as important as the events themselves and it is the totality of links and events which makes up our spacetime dynamics .this should include all attributes relating to matter and gravitation , and it is in principle not possible to discuss one without the other .the structure of our spacetime dynamics is really all there is ; links and events .no preordained notion of metrical causality involving spacelike and timelike intervals is assumed from the outset .all of that should emerge as part of the implications of the theory . in classical continuous spacetime theories , on the other hand, the metric is usually assumed to exist independently of any matter , even before it is found via the equations of gr . as we said before, this metric carries with it lightcone structure and other pre - ordained attributes of causality .discrete spacetime carries with it the astonishing possibility of providing a natural explanation for length , area and volume . according to this idea , attributed to riemann , these are simply _numerical counts _ of how many events lie in certain subsets of spacetime .discreteness may also provide a natural scale for the elimination of the divergences of field theory , and permits all sorts of novelties to occur which are difficult if not impossible to build into a manifold .recently , the study of spin networks in quantum gravity has revealed that quantization of length , area and volume can occur .in cm the aim is usually to describe the temporal evolution of chosen dynamical degrees of freedom .these take on many possible forms , such as position coordinates or various fields variables such as scalar , vector and spinor fields . 
in our approach ,we associate with each event in a given spacetime an internal space of dynamical degrees of freedom called _ event state space_. this space could be whatever the theorist requires to model the situation. moreover , it could be different in nature at each event in this spacetime .elements of will be denoted by lower case greek letters , such as etc . anda chosen element of will be called a _ state _ _ of the event _ .elementary examples of event state spaces are : [ [ i - scalar - fields ] ] * scalar fields * : a real valued scalar field on a spacetime is simply a rule which assigns at each event some real number this may be readily generalized to complex valued functions .if no other structure is involved then obviously where is a copy at of the real line a classical configuration of spacetime in this model would then be some set .this configuration `` exists '' at some moment of the theorist s physiotime but the model itself would not necessarily have any causal ordering . that would have to be determined by the theorist in the manner discussed below .[ [ ii - vector - fields ] ] * vector fields :* suppose at each event there is a copy of some finite dimensional vector space elements etc .then a vector field is simply a rule which assigns at each event some element [ [ iii - group - manifolds ] ] * group manifolds : * at each event p we choose a copy of some chosen abstract group , such as or to be our event space . [[ iv - spin - networks ] ] * spin networks : * a spin network is a graph with edges labelled by representations of a lie group and vertices labelled by intertwining operators .spin networks were originally invented by penrose in an attempt to formulate spacetime physics in terms of combinatorial techniques but they may also defined as graphs embedded in a pre - existing manifold . the state event space at each event is a copy of in this particular model the sort of events we are thinking of in spacetime would be associated with the geometrical links of a triangulation and the links ( the dynamical relationship between our events ) would be associated with the geometrical vertices of the triangulation , that is , with the intertwining operators . for physically realistic models the number of events in the corresponding spacetime will be vast , possibly infinite .sorkin et al give a figure of the order per cubic centimetre - second , assuming planck scales for the discrete spacetime structure .we shall find it useful to discuss some examples with a finite number of events for illustrative purposes . in our spacetime diagrams we will follow the convention established for functions in the previous section ; large circles denote events and small circles denote links , with lines connecting events and links . before any temporal resolutionis attempted , no arrows can be drawn . shows a finite spacetime with 14 events and 9 links .[ [ definition-2 ] ] * definition * * * : * * the ( _ local _ ) _ environment _ _ of an event _ is the subset of links which involves , that is , all those links to which is party , and the _ degree of an event _ is the number of elements in its local environment . 
for example , from , the local environment of the event labelled is the set of links , and so is a third degree event .[ [ definition-3 ] ] * definition * * * : * * the _ neighbourhood _ _ of an event _ is the set of events linked to via its environment .for example , from the neighbourhood of event is the set of events and and are the _ neighbours _ of [ [ definition-4 ] ] * definition * * * : * * the _ domain _ _ of a link _ is the set of events involved in that link , and the _ order of a link _ is the number of elements in its domain . for example , from , and so is a fourth order link .the local environment of an event will be determined by the underlying dynamics of the spacetime , i.e. the assumed fundamental laws of physics . currently these laws are still being formulated and discussed , so only a more general ( and hence vague ) discussion can be given here with some simple examples .a spacetime and its associated structure of neighbourhoods and local environments will be called a _ spacetime dynamics ._ is the local environment of and is its neighbourhood . is the domain of .( b ) a chosen initial data set is shaded grey . its full implication ( or future ) is .its ( absolute ) past is inaccessible from this initial data set and can not be influenced by any changes on the initial data set.,width=455,height=649 ]there are two interrelated aspects of any spacetime dynamics : the nature of the event state space associated with each event and the nature of the links connecting these events .the way in which events are related to each other structurally via the links will be called the _ local discrete topology ._ if this discrete topology has a regularity holding for all links and events , such as that of some regular lattice network , then we shall call this a _ homogeneous _ discrete topology .otherwise it will be called _ inhomogeneous ._ we envisage three classes of classical spacetime dynamics : 1 .* type * * * : * * spacetime dynamics with a homogeneous and fixed discrete topology with a variable event state configuration which does not affect the discrete topology .this corresponds to ( say ) field theory over minkowski spacetime ; 2 . *type * * * : * * spacetime dynamics with inhomogeneous but fixed discrete topology with a variable event state configuration which does not affect the discrete topology .this corresponds to field theory over a fixed curved background spacetime , such as in the vicinity of a black hole ; 3 . *type * * * : * * spacetime dynamics with discrete topology determined by the event state configuration ; this corresponds to gr with matter .type and spacetime dynamics are relatively easy to discuss .once a fixed discrete topology is given this provides the template or matrix for the theorist to `` slot in '' the causal patterns associated with initial event state configurations ( initial data sets ) . in this sense , types and are not genuinely background free , but they are independent in a sense of any preordained lorentzian metric structure__. _ _type presents an altogether more interesting scenario to discuss and demonstrates the basic issue in classical gr which is that the spacetime dynamics should determine its own structure , including its topology .gr without matter of any sort does not make sense in our approach , because we need to specify event state spaces in order to define the links .spins are needed to specify spin networks , for example . 
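since the environment, neighbourhood, degree and order defined above are purely combinatorial, they are easy to mechanize. the sketch below uses an invented toy spacetime, not the one shown in the figure; a link is represented simply by its domain, i.e. the set of events it involves.

    links = {
        "L1": {"P", "Q", "R", "S"},   # a fourth-order link
        "L2": {"P", "T"},             # a second-order link
        "L3": {"P", "U", "V"},
    }

    def environment(event, links):
        # links to which the event is party; its size is the degree of the event
        return {name for name, dom in links.items() if event in dom}

    def neighbourhood(event, links):
        # events linked to the given event via its environment
        nbrs = set()
        for name in environment(event, links):
            nbrs |= links[name]
        nbrs.discard(event)
        return nbrs

    def order(link_name, links):
        # number of events in the domain of the link
        return len(links[link_name])

    print(sorted(environment("P", links)))    # ['L1', 'L2', 'L3'] -> P is a third-degree event
    print(sorted(neighbourhood("P", links)))  # ['Q', 'R', 'S', 'T', 'U', 'V']
    print(order("L1", links))                 # 4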
in our approach, gravitation is intimately bound up with discrete spacetime topology .a spacetime diagram need not be planar .indeed , there need not be any concept of spacetime dimension at this stage .sorkin et al suggest that at different scales , a given discrete spacetime structure might appear to be approximated by different continuous spacetime dimensions , such as , , or depending on the scale .the ppm model was introduced as an approach to the modelling of process time . working in physiotime ,the theorist first decides on a spacetime dynamics and from an initial data set then determines a consistent event state at each event in the spacetime . because of the existence of the links , however , these event states can not all be independent , and this interdependence induces our notion of causality , as we now explain . first , restrict the discussion to type and spacetimes and suppose that each link is _fully resolvable_. by this is meant that if the order of a link is and the event states are specified for any of the events in the domain of the link , then the remaining event space in the domain is uniquely resolved .[ [ example-1 ] ] * example * * * : * * an example of a fully resolvable link is the following : let be a link of order , with domain the event space at is a copy of some group such as or and suppose the link is defined by where and is the group identity ]. then clearly , and similarly for any of the other events states .in other words , we can always resolve any one of the events in uniquely in terms of the others .now suppose the theorist chooses one link , such as in , which is a fourth order link .then its domain can be identified immediately from the spacetime dynamics to be .the theorist is free to specify the states at three of these without any constraints , and this represents an initial data set called .suppose these are events and .if now the structure of the link is such that there is only one possible solution in , then we have a classical resolution of alternatives ] and the emergence of a causal structure .we can use the language of dynamics and say that events and _ cause _ and denote it by a diagram such as .but it should be kept in mind that the theorist has _ decided on which three sets to use as an initial data set . _our interpretation of causality is that it is dictated partly by the spacetime dynamics and partly by the choices made by the theorist . in general ,classical resolution involves using information about event states at an order link to determine the state of the remaining event in the domain of that link .an initial data set may involve more than one link , such as shown in . in that diagram, events are labelled by integers representing the discrete times at which their states may be fixed .events in the chosen initial data set are labelled with a time and shaded grey .these are the events given an initial data set , the theorist can then use the links to deduce the _ first( or primary ) implication _ .this is the set labelled by integer time in , and consists of events the event state at each of the events in the primary implication is determined from a knowledge of the event states on the initial data set , assuming the links do indeed permit a classical tra. 
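before following the chain of implications any further, here is a hedged sketch of the fully resolvable link of example 1. the article leaves the defining equation implicit, so the code assumes the usual choice that the product of the event states in the domain equals the group identity, and takes the group to be the integers modulo 5 under addition (identity 0); the event labels are illustrative.

    M = 5   # the group Z_5 under addition; the identity element is 0

    def resolve(known, missing):
        # given the states of every event in the link's domain except `missing`,
        # return the unique state that makes  sum of all states = 0 (mod M)  hold
        partial = sum(v for k, v in known.items() if k != missing) % M
        return (-partial) % M

    initial_data = {"A": 2, "B": 4, "C": 1}        # states chosen freely at three events
    print(resolve(initial_data, "D"))              # 3, since 2 + 4 + 1 + 3 = 0 (mod 5)

    # the same link can instead be resolved in favour of any other event:
    print(resolve({"B": 4, "C": 1, "D": 3}, "A"))  # 2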
it could be the case that the primary implication is the empty set .given a knowledge of the event states on and its primary implication then the _ second _ ( or _ secondary _ ) _ implication _ can now be found and its events labelled by the discrete time .this process is then repeated until the _ full implication _ of is determined . in is the set in this example it is assumed that each of the links is fully resolvable .if any link is not fully resolvable , it may still be possible to construct a non - empty full implication for certain initial data sets .several important concepts can be discussed with this example .the full implication of an initial data set consists of those events whose event states follow _ from _ given initial conditions , and so it is not unreasonable to call the full implication of any initial data set the _ future _ of that set . whilst the term _ from _ in the preceding sentence refers to a process of inference or implication carried out in physiotime by the theorist, the result is that diagram for example now carries _ dates _ ( or equivalently _ arrows _ ) _ _ _ _ as a consequence .this ordering may now be regarded as an attribute of the mathematical model rather than of physiotime , and this is the _ mathematical arrow of time _ referred to previously .the resulting structure can then be used to represent phenomena in process time .however , some caution should be taken here .we may encounter spacetime dynamics which are equivalent to _ reversible dynamics _ in continuous time mechanics .the full implication of some initial data sets for such spacetime mechanics may propagate both into what we would normally think of as the conventional future _ and _ into the conventional past .this is in accordance with the general problem encountered with any reversible dynamics : we can never be sure which direction is the real future and which is the real past unless external information is supplied to tell us .an example of a reversible discrete topology is given in .this is the topology of the discrete time harmonic oscillator . assuming the links are fully resolvable, we see that the initial data set shown ( shaded and labelled ) has a full implication which extends to the right and to the left of the diagram , i.e. into the conventional past and future . in set is inaccessible from the given initial data set , that is , its intersection with both the initial data set and its full implication is the empty set .such an inaccessible set may be interpreted in a number of ways .it could be thought of as the _ absolute past _ of the initial data set because in this particular example , the initial data set implies nothing about but specifying the states at and would fix . 
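the construction of the primary, secondary and full implications is mechanical once the links are given. the sketch below assumes every link is fully resolvable, so a link fixes its one remaining event as soon as all its other events carry states; the toy links are invented for the example, and the events that are never reached in this way form the inaccessible set of the chosen initial data.

    links = [                       # each link is represented by its domain
        {"A", "B", "C"},
        {"C", "D"},
        {"D", "E", "F"},
    ]

    def full_implication(initial, links):
        dated = {e: 0 for e in initial}          # events of the initial data set carry date 0
        t = 0
        while True:
            t += 1
            newly = set()
            for dom in links:
                unknown = dom - set(dated)
                if len(unknown) == 1:            # this link can now be resolved
                    newly |= unknown
            newly -= set(dated)
            if not newly:
                return dated
            for e in newly:
                dated[e] = t                     # events of the t-th implication

    print(full_implication({"A", "B"}, links))
    # {'A': 0, 'B': 0, 'C': 1, 'D': 2}; E and F remain inaccessible from this initial data set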
in other examplessuch inaccessible events could be interpreted as beyond an event horizon of some sort .the general feature of inaccessible events is that they can not be affected by any changes to the event states in an initial data set ._ archaeology _ is our term for the process of reconstructing portions of an absolute past from new and limited information added to some initial data set .it means the process whereby specifying one or more event states in an absolute past has the immediate consequence _ _ _ _ that the theorist can deduce even more information about that past .[ [ example-2 ] ] * example * * * : * * consider a spacetime dynamics based on the infinite regular triangular topology shown in , with an initial data set shaded and labelled by its first , second and third implications are labelled by and respectively .the full implication of the initial data set in fact extends to infinity on the right .suppose the theorist now determines in some way or decides on the state at the event shaded and dated in .then the events in the past of the initial data set dated by and unshaded can now be resolved , thereby extending the original full implication one layer to the left of the initial data set .just one extra piece of information can trigger an implication with an infinite number of elements .this process of retrodiction is called _ archaeology _ for obvious reasons , and for this spacetime could be continued indefinitely into the past .the theorist can , by providing new initial data in this way , eventually cover the entire spacetime as the union of an extended initial data set and its full implication . , , and respectively .the past of this initial data set is the set of all events to the left of the zeros . by providing just one extra data event ( shaded and labelled )the full implication can be extended to include the events unshaded and labelled by .similarly , by providing another extra data set ( shaded and labelled by ) , more evidence about what the past must have been can be deduced.,width=384,height=373 ]suppose we have been given an initial data set and have worked out its full implication .now pick any event in and change its state .the consequence of this is to change the states in some events in , but not necessarily in all events in subset of consisting of all those events changed by the change in will be called the _ causal propagator _ associated with _ _ _ _ and . will be called the _ vertex of the propagator . _ _ _ _ _ we note the following : 1 . for any event in an initial data set the causal propagator entirely within the full implication of 2 .a causal propagator depends on a vertex _ and _ on an associated initial data set ; 3 .a causal propagator divides spacetime into three sets : the vertex , those events which can not be affected by any change at the vertex , and those events which could be changed .this structure is rather like the lightcone structure in special relativity which separates events into those which are timelike , lightlike , or spacelike relative to the vertex of the lightcone . 
in our context , we could in some sense talk about the _ speed of causality _ , analogous to the speed of light in relativity , as the limiting speed with which disturbances could propagate over our spacetime .[ [ example-3 ] ] * example * as an example we give a spacetime lattice of type labelled by two integers running from to .the state space at each event is the set and a state at will be denoted by .the links are given by the equations now choose an initial data set this corresponds to selecting the index as a spatial coordinate and the index as a timelike coordinate .the initial data set is then equivalent to specifying the initial values and initial time derivatives of a scalar field on a hyperplane of simultaneity in a two dimensional spacetime . because this spacetime dynamics is reversible, the full implication of this initial data set is the entire spacetime minus the initial data set . .in this diagram time appears superficially to run left to right or right to left , but actually the causal flow is from the initial data set outwards to the left and to the right.,width=446,height=452 ] now consider changing the state at from to in we show a bitmap plot of all those events whose state is changed by the change in the event .the structure looks just like a lightcone , with complex fractal - like patterns developing inside the retarded and advanced parts of the lightcone .the speed of causality in this example is evidently unity if we interpret the indices in the manner discussed above .for some spacetime dynamics a global temporal ordering _ _ _ _ can constructed by assigning an integer to each event as follows . if is earlier then ( i.e. _ p _ is a partial or complete cause of then some integer is assigned to and some integer to such that integers are called _ dates _ above .if it is possible to find a consistent ordering over the whole of based on the above rule then we may say that a _ cosmic time_ exists for that spacetime .a cosmic time can not be constructed for a finite spacetime dynamics if there are no events of degree there are two situations where a cosmic time may be possible : either the spacetime is finite with one or more events of degree one , or the spacetime is infinite . however , these are not sufficient properties to guarantee a cosmic time can be constructed .it is possible to find spacetime dynamics for which more than one cosmic time pattern can be established . shows part of a spacetime dynamics containing a closed _ causal _ ( _ timelike ) loop . _no cosmic time can be found for such a spacetime .this corresponds to the situation in gr where the existence of a closed timelike loop in a spacetime precludes the possibility of finding a global cosmic time coordinate for that spacetime .embedded in it then no global cosmic time ordering is possible.,width=125,height=136 ] in the spacetime depicted in the concept of metric was not introduced . nevertheless , it is possible to give a definition of a _ spacelike ( hyper ) _ surface in this and other spacetime__. _ _ whether this corresponds to anything useful depends on the details of the spacetime dynamics .[ [ definition-5 ] ] * definition 5 : * a _ spacelike hypersurface _ of a spacetime is any subset of which would have a consistent full implication if it were used as an initial data set .we have already used the term _ initial data set _ for such subsets . however , not all initial data sets need be consistent .a spacelike hypersurface is just a consistent initial data set. 
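returning briefly to example 3: the defining equations of that lattice are not reproduced here, so the sketch below substitutes an assumed stand-in rule, with states taken mod 2 and each link imposing the discrete wave-equation-like relation phi(x, t+1) = (phi(x-1, t) + phi(x+1, t) - phi(x, t-1)) mod 2. the qualitative point survives the substitution: flipping the state of a single event of the initial data changes states only inside a lightcone-shaped causal propagator, and the speed of causality is one lattice site per step.

    N, T = 41, 18                    # number of spatial sites (periodic) and resolved rows

    def evolve(row0, row1, steps):
        # resolve the lattice row by row to one side of the two-row initial data set
        rows = [row0[:], row1[:]]
        for _ in range(steps):
            prev, cur = rows[-2], rows[-1]
            rows.append([(cur[(x - 1) % N] + cur[(x + 1) % N] - prev[x]) % 2 for x in range(N)])
        return rows

    quiet = evolve([0] * N, [0] * N, T)

    flipped = [0] * N
    flipped[N // 2] = 1              # change the state of a single event of the initial data
    noisy = evolve([0] * N, flipped, T)

    for qa, qb in zip(quiet, noisy): # mark every event whose state was changed
        print("".join("*" if a != b else "." for a, b in zip(qa, qb)))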
there may be many spacelike hypersurfaces associated with a given spacetime and they need not all be disjoint .the definition of spacelike hypersurface involves a choice of causation by the theorist . in generalthere may be more than one spacelike hypersurface passing through a given event , and it is the theorist s choice which one to use .the possible non - uniqueness of spacelike hypersurface associated with a given event is desirable , because this is precisely what occurs in minkowski spacetime , where there is an infinite number of spacelike hypersurfaces passing through any given event , corresponding to hyperplanes of simultaneity in different inertial frames .causal sets are sets with some concept of ordering relationship , which makes them suitable for discussions concerning causality .this presupposes some pre - existing temporal structure independent of any dynamical input .this is not a feature of our dynamics , where causal structure emerges only after the theorist has chosen the initial data set .the ppm model leaves certain fundamental questions unanswered , such as the origin of physiotime and whether process time is a meaningful concept .however , once these are accepted as given and the model used in the right way , then causality and time itself emerge as observer ( theorist ) oriented concepts . many if not all of the phenomena associated with metric theories of spacetime can be recovered , which suggests that further investigation into this approach to spacetime dynamics may prove fruitful . a good question to ask is : where would lorentzian causality come from in our approach ?the answer is that it is embedded or encoded in the definition of the links .only when full implications are worked out from given initial data sets would it be noticed that the dynamics itself naturally forces certain patterns to emerge and not others . only at that stagewould the theorist would recognize an underlying bias in the dynamics in favour of certain more familiar interpretations .for example , given the continuous time equation the notation suggests that a time , so we could attempt to solve it with initial data on the hyperplane however , it would soon emerge that evolution in gave runaway solutions . in other words , the equation itself would carry the information that it would be wiser to define initial data on the hyperplane .we would not need to invoke the spurious concept of lorentzian signature metric embedded in spacetime to discover this . 
at this stagewe might suddenly realise that we had by some chance interchanged the symbols for time and space in an otherwise ordinary klein - gordon equation with a real mass .so rather than deal with which behaves like a klein - gordon equation with an imaginary mass , we would simply interchange the symbols and then define initial data on the hyperplane now defined by .the above is an expanded version of the author s talk at the first international interdisciplinary workshop on _ `` studies on the structure of time : from physics to psycho(patho)logy '' _ , 23 - 24 november 1999 , consiglio nazionale delle richerche , area della ricerca di palermo , palermo , sicily , italy .the proceedings of this conference , including this article , will be published presently by kluwer academic ( new york ) .the author is grateful to the university of nottingham for support , to the organisers of the conference for their assistance , to the other participants for their thoughts , and to kluwer academic ( new york ) for permission to place this article in these electronic archives .d. meyer l. bombelli , j. lee and r. sorkin , _ space - time as a causal set _lett . , 59(5 ) , pp521524 ( 1987 ) ;d. p. rideout and r.d .sorkin , _ a classical sequential growth dynamics for causal sets , _ gr - qc/9904062
|
a mathematical definition of classical causality over discrete spacetime dynamics is formulated . the approach is background free and yields a precise notion of causality whenever the spacetime dynamics permits one . it gives a natural meaning to the concepts of cosmic time , spacelike hypersurfaces and timelike or lightlike flows without assuming the notion of a background metric . the concepts of causal propagators and the speed of causality are discussed . in this approach the concepts of spacetime and dynamics are linked into an essential and inseparable whole , with neither having meaning on its own .
|
the reflexive game theory ( rgt ) has been entirely developed by lefebvre and is based on the principles of _ anti - selfishness or egoism forbiddeness _ and human _ reflexion processes _ . therefore rgt is based on the human - like decision - making processes .the main goal of the theory is to model behavior of individuals in the groups .it is possible to predict choices , which are likely to be made by each individual in the group , and influence each individual s decision - making due to make this individual to make a certain choice . in particular ,the rgt can be used to predict terrorists behavior . in general ,the rgt is a simple tool to predict behavoir of invididuals and influence individuals choices .therefore it makes possible to control the individuals in the groups by guiding their behavoir ( decision - making , choices ) by means of the corresponding influences . on the other hand ,now days robots have become an essential part of our life .one of the purposes robots serve to is to substitute human beings in dangerous situations and environments , like defuse a bomb or radioactive zones etc .in contrast , human nature shows strong inclinations towards the risky behavior , which can cause not only injuries , but even threaten the human life .the list of these reasons includes a wide range starting from irresponsible kids behavior to necessity to find solution in a critical situation .in such a situation , a robot should full - fill a function of refraining humans from doing risky actions and perform the risky action itself , if needed .however , robots are forbidden and should not physically force people , but must convince people on the mental level to refrain from doing a risky action .this method is more effective rather than a simple physical compulsion , because humans make the decisions ( choices ) themselves and treat these decisions as their own .such technique is called a _ reflexive control _ .the task of finding appropriate reflexive control is closely related with the inverse task , when we need to find suitable influence of one subject on another one or on a group of subject on the subject of interest .therefore , it is needed to develop the framework of how to solve the inverse task .this is the primary goal of this study .however , for better understanding of the gist of the inverse task and its intrinsic relationships with other issues of the rgt , we introduce the entire spectrum of the tasks , which can be solved by the rgt .this forms the scope of inference algorithms used in the rgt .we present the rgt algorithms in the form of the _ schemas of control systems _ that can be instantly applied for developement of soft- or / and hardware solutions .we develop a hierarchy of control systems for abstract individual ( including human subject ) and robotic agent ( robot ) based on these control schemas .finally , we illustrate application of the inverse task together with other rgt inference algorithms to model robot s behavior in the mixed groups of humans and robots .the rgt deals with groups of abstract subjects ( individuals , humans , autonomous agents etc ) .each subject is assigned a unique variable ( _ subject variable _ ) .any group of subjects is represented in the shape of _ fully connected graph _ , which is called a _relationship graph_. 
each vertex of the graph corresponds to a single subject .therefore the number of vertices of the graph is in one - to - one correspondence with overall number of subjects in the groups .each vertex is named after the corresponding subject variable .the rgt uses the set theory and the boolean algebra as the basis for calculus .therefore the values of subject variables are elements of boolean algebra .all the subjects in the group can have either alliance or conflict relationship .the relationships are identified as a result of group macroanalysis .it is suggested that the installed relationships can be changed .the relationships are illustrated with graph ribs .the solid - line ribs correspond to alliance , while dashed ones are considered as conflict . for mathematical analysis allianceis considered to be conjunction ( multiplication ) operation ( ) , and conflict is defined as disjunction ( summation ) operation ( + ) . the graph presented in fig .[ fig : fig1]a or any graph containing any sub - graph isomorphic to this graph are not decomposable . in this case , the subjects are excluded from the group one by one , until the graph becomes decomposable .the exclusion is done according to the importance of the other subjects for a particular one .any other fully connected graphs are decomposable . any decomposable graph can be presented in an analytical form of a corresponding _polynomial_. any relationship graph of three subjects is decomposable ( see ) .consider three subjects and .let subject is in alliance with other subjects , while subjects and are in conflict ( fig .[ fig : fig1]b ) .the polynomial corresponding to this graph is . , [ b] ] are elementary polynomials.,height=75 ] regarding a certain relationship , the polynomial can be stratified ( decomposed ) into _ sub - polynomials _each sub - polynomial belongs to a particular level of stratification .if the stratification regarding alliance was first built , then the stratification regarding the conflict is implemented on the next step .the stratification procedure finalizes , when the _ elementary polynomials _ , containing a single variable , are obtained after a certain stratification step .the result of stratification is the _ polynomial stratification tree ( pst)_. it has been proved that each non - elementary polynomial can be stratified in an unique way , i.e. , each non - elementary polynomial has only one corresponding pst ( see considering one - to - one correspondence between graphs and polynomials ) .each higher level of the tree contains polynomials simpler than the ones on the lower level . for the purpose of stratificationthe polynomials are written in square brackets .the pst for polynomial is presented in fig.[fig : fig2 ] .next , we omit the branches of the pst and from each non - elementary polynomial write in top right corner its sub - polynomials . the resulting tree - like structure is called a _ _ diagonal form__ .consider the diagonal form corresponding to the pst in fig .[ fig : fig2 ] : + [ c ] } \\ { } & { [ a][b + c ] } & { } \\ { [ a(b + c ) ] } & { } & { \;\;\;\;\;\;\;\;\;\;\;\;\ ; . } \\ \end{array}\ ] ] hereafter , the diagonal form is considered as a function defined on the set of all subsets of the _universal set_. the universal set contains the _elementary actions_. for example , these actions are actions and . 
by definition ,boolean algebra _ of the universal set includes four elements : , , and the empty set 0 = .these elements are all the possible subsets of universal set and considered as alternatives that each subject can choose .the alternative is interpreted as an inactive or idle state . in general , boolean algebra consists of alternatives , if universal set contains actions .accroding to definition given by lefebvre , we present here exponential operation defined by formula where stands for negation of .this exponential operation is used to fold the diagonal form . during the folding , round and square bracketsare considered to be interchangeable .the following equalities are also considered to be true : and .next we implement folding of diagonal form of polynomial : + [ c ] } & { } & { } & { } & { } \\ { } & { [ a][b + c ] } & { } & { } & { } & { [ a]([b + c ] + \overline{[b ] + [ c ] } ) } & { } \\ { [ a(b + c ) ] } & { } & { } & = & { [ a(b + c ) ] } & { } & { = a(b + c ) + \overline a \ ; . } \\ \end{array}\ ] ] it is considered that the levels of the pst represent different processing levels of natural or artificial cognitive system .each level is considered as an images .the root of the tree is the input into the cognitive system and , therefore can be considered as the image of the world ( environment including self and others ) , perceived by the subject .as it follows from the pst , there is a hierarchy of images , corresponding to a particular cognitive level . during processing along this hierarchy in the _ bottom - up _ manner , the image on the lower level undergoes an extensive process of simplification by the means of decomposition into simpler parts on the higher level .these parts are considered to be the images of the image on the previous level .therefore , the images on the second level are different representions of the original image of the world .this procedure repeats until we obtain elementary part ( elementary polynomials ) . on the other hand ,the pst folding procedure can be referred as _ top - down _ intergration process of simpler images from the higher levels .therefore , the stratification procedure of original polynomial together with the folding procedure of the diagonal form illustrate the interplay of _ bottom - up _ and _ top - down _ information processes , which are widely imployed in biological and artificial information processing systems .the idea of hierarchical structure is highly coherent with hierarchical organization of majority of natural ( inanimate objects ) and biological ( living creatures ) entities .furthermore , it has been shown that hierarchical structure is intrinsic for the relationships in societies of insects , animals and human beings .therefore hierarchical representation of the groups in the form of pst correspond to extraction of the hierarchical structure of the given group , while fusion of the pst and its diagonal form with diagonal form folding procedure closely resembles the way of information processing within a single independent congnitive system as discussed above .thus , rgt imploys the fundamental principles of hierarchical organization on both group ( reflects structure of the groups ) and individual ( illustrates information processing within independent cognitive system of a single unit ) levels .this makes rgt universal tools that mildly bridges the gap between representation and analysis .the goal of each subject in a group is to choose an alternative from the set of alternatives under consideration . 
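the boolean algebra and the folded diagonal form are straightforward to compute with sets. in the sketch below the universal set has the two actions written 'a1' and 'a2', conjunction is intersection, disjunction is union, negation is the complement, and the folded form of the polynomial a(b+c) obtained above, a(b+c) + not(a), is evaluated directly; the influence values used at the end are purely illustrative.

    from itertools import combinations

    U = frozenset({"a1", "a2"})                      # universal set of elementary actions

    def algebra(universe):
        # all subsets of the universal set, i.e. all alternatives a subject may choose
        items = sorted(universe)
        return [frozenset(c) for r in range(len(items) + 1) for c in combinations(items, r)]

    def neg(x):                                      # negation: the complement in U
        return U - x

    def folded(a, b, c):
        # folded diagonal form of the group a(b+c):  a(b+c) + not(a)
        return (a & (b | c)) | neg(a)

    b, c = frozenset({"a1"}), frozenset()            # illustrative influences of b and c
    for a in algebra(U):
        print(sorted(a), "->", sorted(folded(a, b, c)))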
to obtain choice of each subject , we consider the _ decision equations _ , which contain subject variable in the left - hand side and the result of diagonal form folding in the right - hand side : to find solution of the decision equations , we consider the following equation : where is the subject variable , and and are some sets . eq.([canfrom ] ) represents _ the canonical form of decision equation_. this equation has solution if and only if the set is contained in set : .if this requirement is satisfied , then eq.([canfrom ] ) has at least one solution from the interval . otherwise , the decision equation has no solution , and it is considered that subject can not make a decision .in such situation , the subject is in frustration state . therefore , to find solutions of decision equation , one should first transform it into the _canonical form_. out of three presented equations only the decision equation for subject is in the canonical form , while other two should be transformed .we consider explicit transformation only of decision equation for subject : + + .+ therefore , the transformation of equation for subject be can be easily derived by analogy : .next we consider two tasks , which can be formulated regarding the decision equation in the canonical form and provide methods to solve each task .the variable in the left - hand side of the decision equation in canonical form is the variable of the equation , while other variables are considered as influences on the subject from the other subjects .the _ forward task _ is formulated as a task to find the possible choices of a subject of interest , when the influences on him from other subjects are given .after transformation of arbitral decision equation into its canonical form , the sets and are functions of other subjects influences .for example , if we consider group of subjects , , , etc .togehter with the abstract representation of decision equation in canonical form for subject , the sets and will be the functions of subject variables , , etc . : in the case of only three subjects , and , and .all the influences are presented in influence matrix ( table [ infmat ] ) .the main diagonal of influence matrix contains the subject variables .the rows of the matrix represent influences of the given subject on other subjects , while columns represent the influences of other subjects on the given one .the influence values are used in decision equations . ' '' '' a&a&& + ' '' '' b&&b& + ' '' '' c&&&c + [ table : tab1 ] for subject : . for subject : . for subject : .equation for subject does not have any solutions , since set is contained in set : . thus ,subject can not make any decision .therefore he is considered to be in frustration state .equation for subject has at least one solution , since .the solution belongs to the interval .therefore subject can choose any alternative from boolean algebra , which contains alternative .these alternatives are and .equation for subject turns into equality .this is possible only in the case , when . here .in contrast to the forward task , the _ inverse task _ is formulated as a task to find all the simultaneous ( or joint ) influences of all the subjects together on the subject of interest that result in choice of a particular alternative or subset of alternatives .we call the subject of interest to be a _ controlled subject_. let subject be a controlled subject and is a fixed value , representing an alternative or subset of alternatives , which subjects , , etc. 
want subject to choose .we call value to be a _ target choice_. by substituting subject variable with fixed value , we obtain the _ influence equation_. if we substitute the subject variable with fixed value in the canonical form of the decision equation ( eq .( [ ftcnf ] ) ) , we obtain _ the canonical form of the influence equation _ : for only three subjects , and , and .in contrast to the decision equation , which is equation of a single variable , the influence equation is the equation of multiple variables . however , the number of variables of influence equation is not trivial question .in fact , the number of variables in influence equation can be less then , where is the total number of subjects in the group .there are groups , in which sets and are functions of less than variables ( see appendix [ appen1 ] ) .therefore the variables that present in influence equation are called _ effective variables_. the inverse task is by definition itself only guaratee that the original decision equation turns into true equality , but it is not guaranteed that these solutions are the only ones that turn decision equation into true equality . ] formalized as to find all the joint solutions of all subjects in the group , except for the controlled one , when the target choice is represented by interval , where and are some sets and .in such a case , to solve the inverse task , one should solve the system of influence equations : align a(b , c , ... ) = _ 1 [ sys11 g ] + b(b , c , ... ) = _ 2 [ sys21 g ] if the target choice is a single alternative , then ._ the solutions of the system ( [ sys11g]-[sys21 g ] ) are considered as reflexive control strategies .the solution of the inverse task in particular is characterized from two points .the first point is whether it is required to find the influence of a particular single subject or joint influences of a group of subjects .the second one is whether the target choice is represented as a single alternative or as an interval of alternatives . to illustrate these points , we introduce a particular group of subjects .let subjects and are in alliance with each other and in conflict with subject .the polynomial corresponding to this graph is . the diagonal form corresponding to this polynomial and its folding is [b ] } & { } & { } \\ { } & { [ ab ] } & { } & { + [ c ] } & { } \\ { [ ab + c ] } & { } & { } & { } & { = ab + c } \\ \end{array}\ ] ] therefore the decision equation for all the subjects in the group is where can be any subject variable , or ._ influence of a single subject vs joint influences of a group . _first we consider example , when the influence of a single subject is required .let subject makes influence and .then we need to find influences of a single subject , which result in solution of decision equation .the canonical form of this influence equation is .since , , we obtain a system of equations : align \ { } + c = \ { } [ sys11 ] + c = \ { } [ sys21 ] therefore , the straight forward solution of this system is .this simple example illustrates the very gist of the _ inverse task _ - to find the appropriate influences , which result in target choice .next , we consider that influence of subject is not known .therefore , we obtain system align b + c = \ { } [ sysinf11 ] + c = \ { } [ sysinf12 ] in this case , we need to find the values of variable , which together with , result in solution . 
in other words , we need to find all the pairs , resulting in solution .these pairs are solutions of the system ( [ sysinf11]-[sysinf12 ] ) .therefore , we run all the possible values of variable and check if the first equation of the system ( [ sysinf11]-[sysinf12 ] ) turns into true equality : + ; + ; + ; + .therefore , out of four possible values of variable , only two values and are appropriate .thus , we obtain two pairs : and . _ a single target alternative vs interval of alternatives ._ in the previous examples we considered a target choice to be only a single alternative . herewe illustrate the case , when a target choice is an interval .let , and .to find corresponding influences of subject , we solve the system of equations : align \ { } + c = 1 [ sysinf21 ] + c = \ { } [ sysinf22 ] again , we instantly obtain the solution of this system : . in this section ,we have formulated the inverse task in general and considered its particular formalization depending on the number of influences and what is the target choice .however , we do not have a method to solve arbitral influence equation .therefore , we solve this problem in the next section .as an introduction for this section , we consider the fundamental proposition , which will be the conner stone to solve the influence equations . [ lem1 ] let p and q be some abstract sets . then . _ necessity ._ let , then therefore if , then ._ sufficiency ._ let , then . now let us consider the new type of equation : this equation has solution if and only if .there are three operations defined on the boolean algebra .they are conjunction ( or multiplication ) , disjunction ( + or summation ) and negation ( , where is subject variable ) .the negation operation is unary operation , while other two operations are binary .using combination of these three operations , we can compose any influence equation . since , it is obvious how to solve the equation including only unary operation , we discuss how to solve influence equations including a single binary operation . for this perpose ,we consider two abstract subject variables and and abstract alternative .[ lem ] the solution of equation regarding variable , where , is given by the interval , where .[ lemma1 ] according to proposition 1 , , , and . therefore , .consequently , we obtain eq.([diseq ] ) : we solve eq.([diseq ] ) regarding variable .first , we transform eq.([diseq ] ) into canonical form : therefore , the solution of eq.([diseqcn ] ) is given by the interval since variables and are interchangable and it is possible to solve eq.([diseq ] ) regarding variable as well , the general form of solution of eq.([dise ] ) is the interval where and [ lem3 ] the solution of equation regarding variable , where , is given by the interval , where .[ lemma2 ] according to proposition 1 , , , and . 
therefore , + .thus , we obtain eq.([coneq ] ) : we solve eq.([coneq ] ) regarding variable .first , we transform eq.([coneq ] ) into canonical form : since , the solution of eq.([coneqcn ] ) is given by the interval since variables and are interchangable and it is possible to solve eq.([coneq ] ) regarding variable as well , the general form of solution of eq.([cone ] ) is the interval where and since one bound of the solution intervals for eqs.([dise ] ) and ( [ cone ] ) are functions of the second variable , we need to run all the possible values of the second variable in order to obtain all possible solutions of these equations in the form of pairs .next we consider several examples , illustrating application of lemmas [ lemma1 ] and [ lemma2 ] . _example 1_. for illustration , we solve equation .consider , and , we obtain the solution interval for variable : .after simplfication , we get interval ( [ inter1 ] ) : next we consider examples with particular alternatives. let it be alternative .the solution interval is then .since the lower bound of this interval is a function of variable , to find all solutions of equation , we calculate value of expression for all possible values of variable ( table [ solns1 ] ) .to reesure that solutions are correct , we check that decision equation turns into true equality for the obained pairs : + : is true ; : is true ; : is true ; : is true ; : is true ; : is true . + so far , we have illustrated how to solve the influence equation .we as well showed that the pairs obtained by solving equation in accordance with proposition 1 and lemmas 1 and 2 are indeed solutions of this equation .c c c c c ' '' '' values of &&&1 & 0 + ' '' '' & & & & + ' '' '' & & & & + [ solns1 ] [ solns1 ] _ example 2_. we consider influence equation for subject obtained from eq.([canonb ] ) . first , we transform the left - hand side of eq.([ex2eq ] ) : + . + therefore ,eq.([ex2eq ] ) can be rewritten as follows : considering , and , we instantly obtain the solution interval of eq.([ex2eq1 ] ) : .finally , _ example 3_. next , we consider influence equation considering , and , we instantly obtain the solution interval or therefore , in order to find all solutions of eq.([ex3eq ] ) , we need to solve the equations where is any sub - set of set ( ). each equation can be solved according to lemma [ lemma2 ] . _example 4_. as a final example , we again consider influence equation and show how application of lemma [ lemma1 ] essentially simplifies its solution .we get the system of influence equations : align b + c = \ { } ; [ sys2a ] + c = \ { } .[ sys2b ] from this system we obtain a single equation : according to lemma [ lemma1 ] , we instantly obtain the solution interval of eq.([deqs ] ) : thus , eq.([deqs ] ) has two solutions : and . therefore the solution of system ( [ sys2a]-[sys2b ] ) consists of two pairs and . to conclude this section, we provide its brief summary .we have shown how to solve the inverse task by means of influence equations .we have proved two fundamental lemmas , which allow to solve any influence equation regardless of the number of variables .finally , we have illustrated several examples of how apply these lemmas . 
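the forward and inverse tasks of the preceding sections, as well as the two lemmas, can all be checked by brute force over the small boolean algebra, reusing U and algebra from the sketch above. the decision function below is the folded form ab + c of the second example group; the target alternative and the influence values are illustrative, and the inverse-task check only confirms that the target is a solution of the influence equation, not that it is the unique one (that can be verified with the forward-task solver).

    def decision(a, b, c):                 # folded form of the group ab + c
        return (a & b) | c

    def forward(b, c):
        # forward task: alternatives a that turn the decision equation a = ab + c
        # into an equality for the given influences; an empty list means frustration
        return [a for a in algebra(U) if a == decision(a, b, c)]

    def inverse(target):
        # inverse task: all joint influences (b, c) for which the target alternative
        # satisfies the influence equation
        return [(b, c) for b in algebra(U) for c in algebra(U)
                if decision(target, b, c) == target]

    print([sorted(a) for a in forward(frozenset({"a1"}), frozenset())])
    print([(sorted(b), sorted(c)) for b, c in inverse(frozenset({"a1"}))])

    # the lemmas can be checked the same way: for a fixed q and target t the
    # solutions of  x + q = t  and of  x q = t  fill an interval of the algebra
    q, t = frozenset({"a2"}), U
    print([sorted(x) for x in algebra(U) if (x | q) == t])   # the interval from {a1} to {a1, a2}
    q, t = frozenset({"a1"}), frozenset({"a1"})
    print([sorted(x) for x in algebra(U) if (x & q) == t])   # the interval from {a1} to {a1, a2}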
in this sectionwe analyze the situation , when subject can appear in frustration state , from the point of view of the inverse task .let us consider the polynomial discussed in the section [ repres ] .the decision equation that corresponds to this polynomial is , where can be any subject variable .next we try to find all the pairs such that result in selection of a particular alternative by subject . the decision equation for subject is .the solution interval of this decision equation is .we need to check which alternative subject can be convinced to choose . to do this , we consider the system of equation for each alternative .alternative : align b + c = \ { } [ sys11 ] + 1 = \ { } [ sys21 ] alternative : align b + c = \ { } [ sys12 ] + 1 = \ { } [ sys22 ] alternative : align b + c = 0 [ sys13 ] + 1 = 0 [ sys23 ] in these systems the second equation is incorrect equality . therefore these systems have no solution .alternative : align b + c = 1 [ sys14 ] + 1 = 1 [ sys24 ] the second equation is correct equality .therefore this system has solution .thus , out of four possible alternatives , subject actually can choose only alternative .to find solutions , resulting in selection of the alternative , we need to solve only eq.([sys14 ] ) , since eq.([sys24 ] ) turns into the true equality . according to lemma [ lemma1 ] ,we instantly obtain the solution interval for eq.([sys14 ] ) : we calculate the pairs for all possible values of variable ( table [ tab3 ] ) .c c c c c ' '' '' values of &&&1 & 0 + ' '' '' & & & & + ' '' '' & & & & + ' '' '' & & & & + ' '' '' & & & & + [ tab3 ] therefore , the influence analysis of the decision equation shows that the only alternative that subject can choose is alternative .the influence analysis provides us with the set ( exhaustive list ) of pairs of joint influences resulting in selection of alternative .therefore , if the pair of influences does not match any pair from this list , the decision equation has no solution and this results in frustration state . summarizing , this section we note that in general there are two sets .the set contains alternatives that a controlled subject can choose .the set is the set of altertanives of the target choice .therefore , the need to put subject into frustration state emerges , if the target choice of a controlled subject can not be made by this subject . in other words, we need to put a subject into frustration state , if . among all the possible groups ,there are groups , in which subjects will always choose only the alternative regardless of the influence of other subjects .such groups are called _ super - active groups_. next we consider one special case of super active groups - the groups .the group is called , if all the subjects in the group are connected with the same relationship .here we provide proof of the lemma about homogenous groups originally formulated by lefebvre .[ lem4 ] any homogenous group is the super - active group .[ lemma3 ] we consider the homogenous groups , where all the subjects are connected with alliance ( alliance groups ) and conflict ( conflict groups ) relationship , separately . without loss of generallity, we suggest that there are subjects ._ alliance groups_. the polynomial corresponding to the alliance group of subject is .next we construct the diagonal form and apply folding procedure : [{a_2}] ... [{a_n } ] } & { } \\ { [ { a_1}{a_2} ... {a_n } ] } & { } & { = [ { a_1}{a_2} ... {a_n } ] + \overline { [ { a_1}][{a_2}] ... [{a_n } ] } = 1 \ . 
} \\ \end{array}\ ] ] therefore the alliance groups are always super - active ._ conflict groups_. the polynomial corresponding to the conflict group of subject is .next we construct the diagonal form and apply folding procedure : + [ { a_2 } ] + ... + [ { a_n } ] } & { } \\ { [ { a_1 } + { a_2 } + ... + { a_n } ] } & { } & { = } \\ { [ { a_1 } + { a_2 } + ... + { a_n } ] + } & { \overline { [ { a_1 } ] + [ { a_2 } ] + ... + [ { a_n } ] } = 1 \ . } & { } \end{array}\ ] ] therefore the conflict groups are always super - active . since both the alliance and the conflict groups are super - active , this lemma is proved . however , there are non - homogenous super - active groups as well ( see appendix [ appen3 ] ) . summarizing this section, we note that subjects in the super - active groups can not be controlled in their choices and the entire groups is uncontrolable .therefore , once the super - active groups emerges , the only way to make it controllable is to change the relationships in the group .we have presented the detailed description of the rgt including solution of the forward and inverse tasks .we have also considered the extream cases of decisions like putting a subject into frustration state or changing structure of a super - active group . as a final stroke ,we summarize all the presented material in the form of _ basic control schema of an abstract subject ( bcsas ) in the rgt_. and .,height=302 ] the input comes from the environment and is formalized in the form of external influences on the subject , the boolean algebra of alternatives and structure of a group .information about the influences , boolean algebra and group structure is propagated into the _ decision module_. the decision module implements solution of the forward task .therefore the output set of the decision module is the set of possible alternatives , which subject can choose under the given conditions .the information about boolean algebra and group structure is propagated into the _ influence module_. the influence module solves the inverse task .the output set of the influence module is the set of the pairs , where is the target alternative , the set is the set of all the joint influences , resulting in selection of the target choice ; and represents a subject variable .each represents a reflexive control strategy .therefore , the decision to put a subject into state is justified if it is impossible to make subject choose the target alternative , i.e. , if for pair set , and subject should not choose any other alternative except for the target one .the alternatives with corresponding non - empty sets are included into the set . herewe introduce set to store the non - empty sets .the schema of the algorithm for extracting sets and is presented in fig .[ fig4 ] . first the sets and are empty : and .the algorithm reads the set of pairs and stores it in array , where is a counting variable , is the total number of pairs. then it is checked for each pairs from array whether set is empty : . if yes , the algorithm increments counting variable and proceeds to the next pair from array pairs . if no , then alternative is included into the set ( ) , set is saved , the set is included into set ( ) and set is saved .the process is run while . 
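The extraction step just described is a simple filter over the list of (alternative, influence set) pairs; the following sketch is one way to write it, with illustrative names (pairs, alt, infl) that are not taken from the text.

# Keep the alternatives whose set of joint influences is non-empty, together
# with those influence sets.  'pairs' is a list of list(alt = ..., infl = ...)
# entries; the field names are assumptions of this sketch.
extract_sets <- function(pairs) {
  omega <- c()          # alternatives that can actually be enforced
  W     <- list()       # the corresponding non-empty sets of joint influences
  for (p in pairs) {
    if (length(p$infl) > 0) {
      omega <- c(omega, p$alt)
      W[[length(W) + 1]] <- p$infl
    }
  }
  list(omega = omega, W = W)
}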
in this iterative algorithm, we separately store the alternatives , which can be chosen by a certian subject , in the set and the joint influences , which result in selection of alternative , in the set .therefore , we should modify the schema of influence module in bcsas as follows .we present elaborated schema , where sub - module `` solution : '' is accompanied with sub - module `` solution : '' .together these sub - modules are included into the `` solutions '' sub - module .bcsas is the fundamental schema of an abstract subject , which is used through out the rgt .the bcsas is presented in fig.[fig311 ] .this concludes the overview of rgt and description of tasks within the scope of the general theory .therefore , we continue with application of the rgt to the mixed groups of humans and robots .as we have noted in the introduction section , the goal of the robots in mixed groups of humans and robots is to refrain human subject from choosing risky actions , which might result in injuries or even threaten live .it is considered by default that robot follows the program of behavior .such program consists of at least three modules .the module 1 implements robot s ability of human - like decision - making based on the rgt .the module 2 contains the rules , which refrain robot from making a harm to human beings .the module 3 predicts the choice of each human subject and suggests the possible reflexive control strategies .the modules 1 and 3 are inhereted from the bcsas of an abstract individual .they correspond to decision module and influence module of the bcsas ( fig .[ fig311 ] ) , respectively .therefore all the properties and meaning of outputs of the modules 1 and 3 are the same as the ones for decision and influence modules , respectively .the module 2 is the new module , which is intrinsic for robotic agents studied in the context of mixed groups of humans and robots .this module is responsible for extraction of only harmless or non - risky alternatives for human subject .we suggest to apply asimov s three laws of robotics , which formulate the basics of the module 2 : + 1 ) a robot may not injure a human being or , through inaction , allow a human being to come to harm ; + 2 ) a robot must obey any orders given to it by human beings , except where such orders would conflict with the first law ; + 3 ) a robot must protect its own existence as long as such protection does not conflict with the first or second law .we consider that these laws are intrinsic part of robot s `` mind '' , which can not be erased or corrupted by any means .the interaction of modules 1 and 2 is performed in the interaction module 1 .the interaction of modules 3 and 2 is implements in the interaction module 2 .the boolean algebra is filtered according to asimov s laws in module 2 .the output of module 2 is set of approved alternatives .this data is then propagated into interaction modules .the output of the module 1 is set of alternatives , which robot has to choose under the given joint influences . 
in the interaction module 1, the conjunction of sets and is performed : .if set is not empty set , this means that there are aproved alternatives among the alternatives that robot should choose in accordance with the joint influences .therefore , robot can implement any alternative from the set .if set is empty , this means that under given joint influences robot can not choose any approved alternative , therefore robot will choose an alternative from set .this is how the interaction module 1 works .the output of the module 3 contains sets and .the goal of the robot is to refrain human subjects from choosing risky alternative .this can be done by convincing human subjects to choose alternatives from the set .first , we check whether contains any approved alternative .we do so by performing conjunction of sets and : .if set is not empty , then it means that it is possible to make a human subject to choose some non - risky alternative .therefore , we should choose the corresponding reflexive control strategy from the set . however , if set is empty , we have to find the reflexive control strategy that will make human subject to select approved alternative from set . for this purpose ,we construct set by including all the joint influences for approved alternatives : .next we check whether set is empty .if set is empty this means it is impossible to convince a human subject to choose non - risky alternative .therefore , the only option of reflexive control in this case is to put this subject into frustration state . however, if set is not empty , this means that there exist at least one reflexive control strategy that results in selection of alternative from the set of the approved ( non - risky ) ones .therefore , the bcsra inherits the entire structure of the bcsas and augments it with module 2 of asimov s laws together with interaction modules 1 and 2 .the original schema of robot s control system has been recently presented in .the bcsra is extended version of the original schema .the bcsra provides comprehensive approach of how forward and inverse tasks are solved in the robot s `` mind '' .thus , in this section we have presented the formalization of robotic agent in the rgt .we outlined the specific features of robotic agents , which distinguish them from other subjects .furthermore , we provided detailed explanation of how the forward and inverse tasks are solved in the framrework of control system ( bcsra ) of robots .next , we proceed with consideration of sample sutiations of interactions between humans and robots .here we elaborate two examples , presented in the previous study , of how robots in the mixed groups can make humans refrain from risky actions .we discuss the application of the extended schema of robot s control system and provide explicit derivation of reflexive control strategies , which has been applied in these examples in the prevous study .suppose robots have to play a part of baby - sitters by looking after the kids .we consider a mixed group of two kids and two robots .each robot is looking after a particular kid .having finished the game , kids are considering what to do next .they choose between `` to compete climbing the high tree '' ( action ) and `` to play with a ball '' ( action ) .together actions and represent the active state 1= .therefore the boolean algebra of alternatives consists of four elements : 1 ) the alternative is to climb the tree ; 2 ) the alternative is to play with a ball ; 3 ) the alternative means that a kid is hesitating what to do ; 
and 4 ) the alternative means to take a rest .we consider that each kid considers his robot as ally and another kid and his robot as the competitors .the kids are subjects and , while robots are subjects and .the relationship graph is presented in fig .[ fig : fig4 ] .next we calculate the diagonal form and fold it in order to obtain decision equation for each subject : [b ] } & { } & { [ c][d ] } & { } \\ { } & { [ ab ] } & { } & { + [ cd ] } & { } & { } \\ { [ ab + cd ] } & { } & { } & { } & { } & { = ab + cd \ ; . } \\ \end{array}\ ] ] from two actions and , action is a risky action , since a kid can fall from the tree and this is real threat for his health or even life . therefore according to asimov s laws, robots can not allow kids to start the competition .thus , robots have to convince kids not to choose alternative . in terms of alternatives ,the asimov s laws serve like filters which filter out the risky alternatives .the remaining alternatives are included into set . in this case , .next we solve the inverse taks , regarding alternatives and .we conduct the analysis regarding kid .this analysis can be further extended for kid in the similar manner ._ solution of the inverse task for kid with approved alternatives as target choice ._ the decision equation for kid is .first , we transform it into canonical form : .next we consider system of influence equations : align b+cd = [ sysbs1a ] + cd = , [ sysbs1b ] where alternative . regarding eq.([sysbs1b ] ) , eq.([sysbs1a ] )is transformed into equation the solution of eq.([rbeq1 ] ) directly follows from lemma [ lemma1 ] : .therefore for and the solutions are and , respectively .the eq.([sysbs1b ] ) can be instantly solved according to lemma [ lemma2 ] : ._ consider first_. then . by varying values of variable , we obtain all the pairs : d = 1 : .therefore the solution is pair ; d = 0 : . since , there is no solution ; d = : .since , there is no solution ; d = : . therefore there are two solutions and . therefore equation has three solutions , and .thus , we have solved both equations from system ( [ sysbs1a]-[sysbs1b ] ) .the solutions of this system are the triplets of joint influences , which are all possible combinations of solutions of both equations . since there are two solution of eq.([sysbs1a ] ) and three solutions of eq.([sysbs1b ] ) , there are six triplets in total : and ; and ; and . _ now we consider the case , when . then .we obtain pairs for all values of variable : : .thus , there is only one solution ( 0,1 ) ; : .thus , there are four solutions and ; : . thus , there are four solutions and ; : . thus , there are four solutions and . in total , equation has 9 solutions .therefore system ( [ sysbs3a]-[sysbs3b ] ) also has 9 solutions as triplets : , , , , , , , and .we have considered two cases , when both upper and lower bounds of the interval of decision equation equal to the same alternative .now we discuss a new situation , when variable should take not a single value , but several values . in this case , we should find the joint influences that result in selection of either alternative or . 
since , , we need to find all the triplets , resulting in the solution of decision equation as interval .thus , .therefore , we need to solve the following system of equations : align b+cd = \ { } [ sysbs3a ] + cd = 0 .[ sysbs3b ] the eq.([sysbs3a ] ) turns into equality , and we need to solve eq.([sysbs3b ] ) .however , this equation has been already solved in the previous example .therefore we obtian the solutions of the system ( [ sysbs3a]-[sysbs3b ] ) : , , and + . comparing solutions of all three system of influence equation, we can see that there are four remarkable solutions and ; and . the first pair of solution results in choice of only alternative , while second pair of solutions results in selection of eighter alternative or alternative .these four solutions together illustrate that if , it is guaranteed that regardless of influence of kid , kid will choose either of approved alternatives . by analogy , we can see that among solutions of system ( [ sysbs1a]-[sysbs1b ] ) with , there are four solutions , , and . therefore , if , kid will choose alternative regardless of influence of kid .these two examples of binding variables and were considered in _ scenario 1 _ and _ scenario 2 _ of sample situation with robot baby - sitters , originally presented in .summarizing the results of this section , we have shown that robots can successfully control kids behavior by refraining them from doing risky actions .the basic of this control is entirely based on the proposed schema of robot s control system .we have analyzed all the possible reflexive control strategies by solving three systems of influence equation : two systems regarding a single alternative and one system regarding the interval of alternatives .therefore , we have shown how the inverse task can be effectively solved by our proposed algorithm in situation similar to the real conditions .we consider that there are two climbers in the mountain and rescue robot .the climbers and robot are communicating via radio .one of the climbers ( subject ) got into difficult situation and needs help .suggest , he fell into the rift because the edge of the rift was covered with ice .the rift is not too deep and there is a thick layer of snow on the bottom , therefore climber is not hurt , but he can not get out of the rift himself . the second climber ( subject ) wants to rescue his friend himself ( action ) , which is risky action .the second option is that robot will perform rescue mission ( action ) .since inaction is inappropriate solution according to the first law , the set of approved alternatives for robot includes only alternative .the goal of the robot is to refrain the climber from choosing alernative and perform rescue mission itself .we suggest that from the beginning all subjects are in alliance .the corresponding graph is presented in fig .[ fig : fig1]c and its polynomial is .therefore by definition it is homogenous group and , consequently , it is super - active group according to lemma [ lemma3 ] .thus , any subject in the group is in active state .therefore , group is uncontrollable ( see section [ supac ] ) . in this case, robot makes decision to change his relationship with the climber from alliance to conflict .robot can do that , for instance , by not responding to climber s orders . 
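The case analyses that follow, both the frustration check for the climber and the sweep over all sixteen joint influences on the robot, amount to exhaustive enumeration over the four-element algebra and can be reproduced with a small helper along the following lines. The representation of alternatives as bitmasks 0..3 and the function names are assumptions of this sketch; the commented usage line applies it, as one plausible reading, to the group polynomial ab + cd from the baby-sitter example above.

# For every joint influence of the other subjects, list the alternatives x
# that satisfy the controlled subject's decision equation x = P(x, others).
enforceable_choices <- function(P, n_others) {
  alts   <- 0:3
  others <- as.matrix(do.call(expand.grid, rep(list(alts), n_others)))
  lapply(seq_len(nrow(others)), function(i) {
    v <- as.integer(others[i, ])
    list(influences = v,
         choices    = alts[sapply(alts, function(x) P(x, v) == x)])
  })
}

# e.g. the decision equation a = ab + cd for one kid, with the other three
# subjects' influences in v (an illustrative reading of the stripped formula):
# res <- enforceable_choices(function(x, v)
#          bitwOr(bitwAnd(x, v[1]), bitwAnd(v[2], v[3])), 3)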
_which reflexive control leads to frustration state ?_ then the polynomial corresponding to the new group is .this polynomial has been already broadly discussed in the section [ frust ] .therefore , we know decision equation for subject : .we have shown as well that subject can choose only alternative , if appropriate joint influences are applied ( see section [ frust ] ) , overwise subject is in frustration state and can not make any choice . therefore , in order to put subject into frustration state , the reflexive control strategy should be selected from the list of solutions ( section [ frust ] ) : ; ; ; ; ; ; ; and . herewe provide two examples of such joint influences : and ._ whether robot can complete mission regardless of joint influences of other subjects ? _ the decision equation for robot is .the corresponding solution interval is .here we analyze all 16 possible reflexive control strategies that climbers can apply to robot . for , there will be the same situation regardless of value of variable : . for , there will be the same situation regardless of value of variable : . for : . for : . therefore in these cases set .next we consider other pairs . : .here set . : .here set . : .therefore , set .since , for all the cases considered above , robot will choose alternative from the set .consider the following pairs : : .therefore , set . : .thus , set . : .thus , set . : .thus , set . : .thus , set . since , for all the cases considered above , robot will choose alternative from the set .+ thus , we have shown that under all 16 reflexive control strategies , robot can choose the alternative , which is to perform the rescue mission itself .therefore robot will choose alternative regardless of the joint influences of the climbers .the discussed example illustrates how robot can transform uncontrollable group into controllable one by manipulating the relationships in the group . in the controllable group by its influence on the human subjects ,robot can refrain the climber from risky action to rescue climber .robot achieves its goal by putting climber into frustration state , in which climber can not make any decision . 
onthe other hand , set of approved alternatives guarantees that robot itself will choose the option with no risk for humans and implement it regardless of climber s influence .therefore , in this section we have illustrated robot s ability to refrain human being from risky actions and to perform these risky actions itself .this proves that our approach achieves both goals of robotic agent : 1 ) to refrain people from risky actions and 2 ) to perform risky actions itself regardless of human s influences .summarizing , the results of this paper , we outline the most important of them .first of all , we have introduced the inverse task and developed the ultimate methods to solve it .we have provided a comprehensive tutorial to the * brand new reflexive game theory * recently formulated and proposed by vladimir lefebvre .the tutoral contains the detailed description of the forward and inverse tasks together with methods to solve them .we propose control schemas for both abstract subject ( bcsas ) and robotic agent ( bcsra ) .these schemas were specially designed to incorporate solution of the forward and inverse tasks , thus providing us with autonomous units ( individuals , subjects , agents ) capable of making decisions in the human - like manner .we have shown that robotic agents based on bcsra can be easily included into the mixed groups of humans and robots and effectively serve their fundamental goals ( refraining humans from risky actions and , if needed , perform the risky acions itself ) .therefore , we consider that present study provides the comprehensive overview of the classic rgt proposed by vladimir lefebvre and newly developed self - consistent framework for analysis of different kinds of groups and societies , including human social groups and mixed groups of humans and robots together with application tutorial of this new framework . this framework is entirely based on the principles of the rgt and brings together all its elements .the solution of the inverse task , presented in this paper , plays a crutial role in formation of this framework .therefore , by having the inverse task as one of its fundamentals , this framework illustrates the role of the inverse task and its relationship with other issues considered in the rgt .00 lefebvre , v.a . : lectures on reflexive game theory .leaf & oaks , los angeles ( 2010 ) .lefebvre , v.a . : lectures on reflexive game theory .cogito - center , moscow ( 2009 ) [ in russian ] .lefebvre , v.a . : the basic ideas of reflexive game s logic . problems of research of systems and structures .73 - 79 ( 1965 ) [ in russian ] .lefebvre , v.a . : reflexive analysis of groups . in : argamon ,s. and howard , n. ( eds . ) computational models for counterterrorism .173 - 210 .springer , heidelberg ( 2009 ) .lefebvre , v.a . : algebra of conscience .d. reidel , holland ( 1982 ) .lefebvre , v.a . : algebra of conscience .2nd edition .holland : kluwer ( 2001 ) .batchelder , w.h . ,lefebvre , v.a . : a mathematical analysis of a natural class of partitions of a graph .124 - 148 ( 1982 ) .kobatake , e. , and tanaka , k. : neuronal selectivities to complex object features in the ventral pathway of the macaque monkey .journal of neurophysiology , 71 , 3 , pp .856 - 867 ( 1994 ) .koerner , e. , gewaltig , m .- o ., koerner , u. , richter , a. , and rodemann , t. : a model of computation in neocortical architecture .neural networks , 12 , pp .989 - 1005 ( 1999 ) .lcke , j. , and von der malsburg , c. 
: rapid processing and unsupervised learning in a model of the cortical macrocolumn .neural computation , 16 , pp .501 - 533 ( 2003 ) .schrander , s. , gewaltig , m .- o . ,krner , u. and krner , e. : cortext : a columnarmodel of bottom - up and top - down processing in the neocortex .neural networks , 22 , pp .1055 - 1070 ( 2009 ) .fukushima , k. : neocognitron : a self - organizing neural network model for a mechanism of pattern recognitition unaffected by shift and position , biological cybernatics , 36 , pp .193 - 201 ( 1980 ) .riesenhuber , m. and poggio , t. : hierarchical models of object recognition in cortex .nature neuroscience , 2 , 11 , pp .109 - 125 ( 1999 ) .t. serre , l. wolf , s. bileschi , m. riesenhuber , and t. poggio . : robust object recognition with cortex - like mechanisms , ieee transactions on pattern analysis and machine intelligence , 29 , 3 , pp .411 - 426 ( 2007 ) .hienze , j. : hierarchy length in orphaned colonies of the ant temnothorax nylanderi naturwissenschaften , 95 , 8 , pp .757 - 760 ( 2008 ) .chase , i. , d. : models of hierarchy formation in animal societies .behavioral science , 19 , 6 , pp .374 - 382 ( 2007 ) .chase i. , tovey c. , spangler - martin d. , manfredonia m. : individual differences versus social dynamics in the formation of animal dominance hierarchies .pnas , 99 , 9 , pp .5744 - 5749 ( 2002 ) .buston p. : social hierarchies : size and growth modification in clownfish .nature , 424 , pp .145 - 146 ( 2003 ) .asimov , i. : runaround. astounding science fiction , march , pp .94 - 103 ( 1942 ) .tarasenko , s. : modeling mixed groups of humans and robots with reflexive game theory . in lamers ,m.h . , and verbeek , f.j .( eds . ) : hrpr 2010 , lincst 59 , pp . 108 - 117 ( 2011 ) .consider groups of four subjects and .suggest the polynomial corresponding to this group is .next we construct diagonal form and perform folding operation : + [ d ] } & { } & { } \\ { } & { } & { [ b][a + d ] } & { } & { } & { } \\ { } & { [ b(a + d ) ] } & { } & { } & { + [ c ] } & { } \\ { [ b(a + d ) + c ] } & { } & { } & { } & { } & = \\ \end{array}\ ] ] ([a + d ] + \overline { [ a ] + [ d ] } ) } & { } & { } \\ { } & { [ b(a + d ) ] } & { } & { + [ c ] } & { } \\ { [ b(a + d ) + c ] } & { } & { } & { } & = \\ \end{array}\ ] ] } & { } & { } \\ { } & { [ b(a + d ) ] } & { } & { + [ c ] } & { } \\ { [ b(a + d ) + c ] } & { } & { } & { } & = \\ \end{array}\ ] ] next we simplify the resultant expression of diagonal form folding : consequently , + \overline { [ b ] } + [ c ] } & { } \\ { [ b(a + d ) + c ] } & { } & { = b + c}\\ \end{array}\ ] ] therefore , the decision equation includes only two subject variables instead of four .consequenly , for subjects and the decision equations in canonical forms are thus , the sets and for subjects and are equal . the sets and are functions of only variables and : and .the canonical forms of decision equations for subjects and are : therefore , set for both subjects . setb is a functions of a single variable : and for subjects and , respectively .here we provide an example of non - homogenous super - active group .
|
The Reflexive Game Theory (RGT) has recently been proposed by Vladimir Lefebvre to model the behavior of individuals in groups. The goal of this study is to introduce the inverse task: we consider methods of solution together with practical applications. We present a brief overview of the RGT for easy understanding of the problem, and we develop a schematic representation of the RGT inference algorithms to create the basis for software and hardware solutions of the RGT tasks. We propose a unified hierarchy of schemas to represent humans and robots; this hierarchy is considered as a unified framework to solve the entire spectrum of the RGT tasks. We conclude by illustrating how this framework can be applied to the modeling of mixed groups of humans and robots. Altogether, this provides an exhaustive solution of the inverse task and clearly illustrates its role and its relationships with the other issues considered in the RGT.
|
` big data ' often refers to a large collection of observations and the associated computational issues in processing the data .some of the new challenges from a statistical perspective include : 1 .the analysis has to be computationally efficient while retaining statistical efficiency ( * ? ? ?2 . the data are ` dirty ' : they contain outliers , shifting distributions , unbalanced designs , to mention a few .there is also often the problem of dealing with data in real - time , which we add to the ( broadly interpreted ) first challenge of computational efficiency ( * ? ? ?* cf . ) .we believe that many large - scale data are inherently inhomogeneous : that is , they are neither i.i.d . nor stationary observations from a distribution .standard statistical models ( e.g. linear or generalized linear models for regression or classification , gaussian graphical models ) fail to capture the inhomogeneity structure in the data . by ignoring it , prediction performance can become very poor and interpretation of model parameters might be completely wrong .statistical approaches for dealing with inhomogeneous data include mixed effect models , mixture models and clusterwise regression models : while they are certainly valuable in their own right , they are typically computationally very cumbersome for large - scale data .we present here a framework and methodology which addresses the issue of inhomogeneous data while still being vastly more efficient to compute than fitting much more complicated models such as the ones mentioned above . [[ subsampling - and - aggregation . ] ] subsampling and aggregation .+ + + + + + + + + + + + + + + + + + + + + + + + + + + + if we ignore the inhomogeneous part of the data for a moment , a simple approach to address the computational burden with large - scale data is based on ( random ) subsampling : construct groups with , where denotes the sample size and is the index set for the samples .the groups might be overlapping ( i.e. , for ) and do not necessarily cover the index space of samples . for every group , we compute an estimator ( the output of an algorithm ) and these estimates are then aggregated to a single `` overall '' estimate , which can be achieved in different ways .if we divide the data into groups of approximately equal size and the computational complexity of the estimator scales for samples like for some , then the subsampling - based approach above will typically yield a computational complexity which is a factor faster than computing the estimator on all data , while often just incurring an insubstantial increase in statistical error .in addition , and importantly , effective parallel distributed computing is very easy to do and such subsampling - based algorithms are well - suited for computation with large - scale data .subsampling and aggregation can thus partially address the first challenge about feasible computation but fails for the second challenge about proper estimation and inference in presence of inhomogeneous data .we will show that a tweak to the aggregation step , which we call `` maximin aggregation '' , can often deal also with the second challenge by focusing on effects that are common to all data ( and not just mere outliers or time - varying effects ) .[ [ bagging - aggregation - by - averaging . ] ] bagging : aggregation by averaging . 
+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + in the context of homogeneous data , showed good prediction performance in connection with mean or majority voting aggregation and tree algorithms for regression or classification , respectively .bagging simply averages the individual estimators or predictions .[ [ stacking - and - convex - aggregation . ] ] stacking and convex aggregation .+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + again in the context of homogeneous data , the following approaches have been advocated . instead of assigning a uniform weight to each individual estimator as in bagging , and proposed to learn the optimal weights by optimizing on a new set of data .convex aggregation for regression has been studied in and has been proved to lead to to approximately equally good performance as the best member of the initial ensemble of estimators .but in fact , in practice , bagging and stacking can exceed the best single estimator in the ensemble if the data are homogeneous .[ [ magging - convex - maximin - aggregation . ] ] magging : convex maximin aggregation .+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + with inhomogeneous data , and in contrast to data being i.i.d . or stationary realizations from a distribution ,the above schemes can be misleading as they give all data - points equal weight and can easily be misled by strong effects which are present in only small parts of the data and absent for all other data .we show that a different type of aggregation can still lead to consistent estimation of the effects which are common in all heterogeneous data , the so - called maximin effects . the maximin aggregation , which we call magging , is very simple and general andcan easily be implemented for large - scale data .we now give some more details for the various aggregation schemes in the context of linear regression models with an predictor ( design ) matrix , whose rows correspond to samples of the -dimensional predictor variable , and with the -dimensional response vector ; at this point , we do not assume a true p - dimensional regression parameter , see also the model in ( [ mod1 ] ) .suppose we have an ensemble of regression coefficient estimates , where each estimate has been obtained from the data in group , possibly in a computationally distributed fashion .the goal is to aggregate these estimators into a single estimator . bagging simply averages the ensemble members with equal weight to get the aggregated estimator one could equally average the predictions to obtain the predictions .the advantage of bagging is the simplicity of the procedure , its variance reduction property , and the fact that it is not making use of the data , which allows simple evaluation of its performance .the term `` bagging '' stands for * * b**ootstrap * * agg**regat*ing * ( mean aggregation ) where the ensemble members are fitted on bootstrap samples of the data , that is , the groups are sampled with replacement from the whole data . 
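As a concrete illustration of subsampling followed by mean aggregation, the sketch below splits the samples into groups, fits ordinary least squares in each group and averages the coefficient vectors; the data are made up and least squares is only one possible choice of group-wise estimator.

# Subsampling plus mean (bagging-style) aggregation on simulated data.
set.seed(1)
n <- 1000; p <- 3; G <- 10
X <- matrix(rnorm(n * p), n, p)
y <- as.vector(X %*% c(1, 2, -1) + rnorm(n))

groups    <- split(sample(seq_len(n)), rep(seq_len(G), length.out = n))
estimates <- sapply(groups, function(idx) coef(lm(y[idx] ~ X[idx, ] - 1)))
rowMeans(estimates)     # the mean-aggregated coefficient estimate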
and propose the idea of `` stacking '' estimators .the general idea is in our context as follows .let be the prediction of the -th member in the ensemble .then the stacked estimator is found as where the space of possible weight vectors is typically of one of the following forms : if the ensemble of initial estimators is derived from an independent dataset , the framework of stacked regression has also been analyzed in .typically , though , the groups on which the ensemble members are derived use the same underlying dataset as the aggregation .then , the predictions are for each sample point defined as being generated with , which is the same estimator as with observation left out of group ( and consequently if ) . instead of a leave - one - out procedure, one could also use other leave - out schemes , such as e.g. the out - of - bag method . to this end, we just average for a given sample over all estimators that did not use this sample point in their construction , effectively setting if .the idea of `` stacking '' is thus to find the optimal linear or convex combination of all ensemble members .the optimization is -dimensional and is a quadratic programming problem with linear inequality constraints , which can be solved efficiently with a general - purpose quadratic programming solver .note that only the inner products and for are necessary for the optimization .whether stacking or simple mean averaging as in bagging provides superior performance depends on a range of factors .mean averaging , as in bagging , certainly has an advantage in terms of simplicity .both schemes are , however , questionable when the data are inhomogeneous .it is then not evident why the estimators should carry equal aggregation weight ( as in bagging ) or why the fit should be assessed by weighing each observation identically in the squared error loss sense ( as in stacked aggregation ) .we propose here * * m**aximin * * agg**regat*ing * , called magging , for heterogeneous data : the concept of maximin estimation has been proposed by , and we present a connection in section [ sec.maximin ] .the differences and similarities to mean and stacked aggregation are : 1 .the aggregation is a weighted average of the ensemble members ( as in both stacked aggregation and bagging ) .2 . the weights are non - uniform in general ( as in stacked aggregation ) .3 . the weights do not depend on the response ( as in bagging ) .the last property makes the scheme almost as simple as mean aggregation as we do not have to develop elaborate leave - out schemes for estimation ( as in e.g. stacked regression ) .magging is choosing the weights as a convex combination to minimize the -norm of the fitted values : if the solution is not unique , we take the solution with lowest -norm of the weight vector among all solutions .the optimization and computation can be implemented in a very efficient way .the estimators are computed in each group of data separately , and this task can be easily performed in parallel . in the end , the estimators only need to be combined by calculating optimal convex weights in -dimensional space ( where typically and ) with quadratic programming ; some pseudocode in ` r ` for these convex weights is presented in the appendix .computation of magging is thus computationally often massively faster and simpler than a related direct estimation estimation scheme proposed in .furthermore , magging is very generic ( e.g. 
one can choose its own favored regression estimator for the -th group ) and also straightforward to use in more general settings beyond linear models .the magging scheme will be motivated in the following section [ sec.maximin ] with a model for inhomogeneous data and it will be shown that it corresponds to maximizing the minimally `` explained variance '' among all data groups .the main idea is that if an effect is common across all groups , then we can not `` average it away '' by searching for a specific convex combination of the weights .the common effects will be present in all groups and will thus be retained even after the minimization of the aggregation scheme .the construction of the groups for magging in presence of inhomogeneous data is rather specific and described in section [ subsec.genunknown ] for various scenarios .there , examples 1 and 2 represent the setting where the data within each group is ( approximately ) homogeneous , whereas example 3 is a case with randomly subsampled groups , despite the fact of inhomogeneity in the data .we motivate in the following why magging ( maximin aggregation ) can be useful for inhomogeneous data when the interest is on effects that are present in all groups of data . in the linear model setting , we consider the framework of a mixture model where is a univariate response variable , is a -dimensional covariable , is a -dimensional regression parameter , and is a stochastic noise term with mean zero and which is independent of the ( fixed or random ) covariable .every sample point is allowed to have its own and different regression parameter : hence , the inhomogeneity occurs because of changing parameter vectors , and we have a mixture model where , in principle , every sample arises from a different mixture component .the model in ( [ mod1 ] ) is often too general : we make the assumption that the regression parameters are realizations from a distribution : where the s do not need to be independent of each other .however , we assume that the s are independent from the s and s ._ example 1 : known groups ._ consider the case where there are known groups with for all .thus , this is a clusterwise regression problem ( with _ known _ clusters ) where every group has the same ( unknown ) regression parameter vector .we note that the groups are the ones for constructing the magging estimator described in the previous section ._ example 2 : smoothness structure ._ consider the situation where there is a smoothly changing behavior of the s with respect to the sample indices : this can be achieved by positive correlation among the s . in practice, the sample index often corresponds to time .there are no true ( unknown ) groups in this setting . _ example 3 : unknown groups ._ this is the same setting as in example 1 but the groups are unknown . from an estimation point of view , there is a substantial difference to example 1 . in model ( [ mod1 ] ) and in the examples13 mentioned above , we have a `` multitude '' of regression parameters .we aim for a single -dimensional parameter , which contains the common components among all s ( and essentially sets the non - common components to the value zero ) .this can be done by the idea of so - called maximin effects which we explain next . 
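For concreteness, here is one way to simulate data from the mixture model above in the known-groups setting of example 1; all numbers (group sizes, the shared coefficient, the size of the group-specific perturbations) are made up for illustration.

# Each group has its own coefficient vector: a component shared by all
# groups plus a group-specific part.
set.seed(2)
n_per_group <- 200; G <- 5; p <- 4
b_common <- c(2, 0, 0, 0)                       # the effect common to every group
B <- sapply(seq_len(G), function(g)
  b_common + c(0, rnorm(p - 1, sd = 2)))        # p x G matrix of group coefficients
grp <- rep(seq_len(G), each = n_per_group)
X <- matrix(rnorm(length(grp) * p), ncol = p)
y <- rowSums(X * t(B)[grp, ]) + rnorm(length(grp))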
consider a linear model with the fixed -dimensional regression parameter which can take values in the support of from ( [ mod2 ] ) : where and are as in ( [ mod1 ] ) and assumed to be i.i.d .we will connect the random variables in ( [ mod1 ] ) to the values via a worst - case analysis as described below : for that purpose , the parameter is assumed to not depend on the sample index .the variance which is explained by choosing a parameter vector in the linear model ( [ mod.b ] ) is where denotes the covariance matrix of .we aim for maximizing the explained variance in the worst ( most adversarial ) scenario : this is the definition of the maximin effects ._ definition . _the maximin effects parameter is and note that the definition uses the negative explained variance .the maximin effects can be interpreted as an aggregation among the support points of to a single parameter vector , i.e. , among all the s ( e.g. in example 2 ) or among all the clustered values ( e.g. in examples 1 and 3 ) , see also fact [ fact1 ] below .the maximin effects parameter is different from the pooled effects ] .assume that is in the convex hull of .( a2 ) : : we assume random design with a mean - zero random predictor variable with covariance matrix and let be the empirical gram matrices .let be the estimates in each group .assume that there exists some such that where is the minimal sample size across all groups .( a3 ) : : the optimal and estimated vectors are sparse in the sense that there exists some such that assumption ( a1 ) is fulfilled for known groups , where the convex hull of is equal to the convex hull of the support of and the maximin - vector is hence contained in the former .example 1 is fulfilling the requirement , and we will discuss generalizations to the settings in examples 2 and 3 below in section [ subsec.genunknown ] .assumptions ( a2 ) and ( a3 ) are relatively mild : the first part of ( a3 ) is an assumption that the underlying model is sufficiently sparse .if we consider standard lasso estimation with sparse optimal coefficient vectors and assuming bounded predictor variables , then ( a2 ) is fulfilled with high probability for of the order ( faster rates are possible under a compatibility assumption ) and of order , where denotes the minimal sample size across all groups ; see for see for example .define for , the norm and let be the magging estimator ( [ eq : maximin ] ) .[ theo : main ] assume ( a1)-(a3 ) . then a proof is given in the appendix .the result implies that the maximin effects parameter can be estimated with good accuracy by magging ( maximin aggregation ) if the individual effects in each group can be estimated accurately with standard methodology ( e.g. penalized regression methods ) .theorem [ theo : main ] hinges mainly on assumption ( a1 ) .we discuss the validity of the assumption for the three discussed settings under appropriate ( and setting - specific ) sampling of the data - groups ._ example 1 : known groups ( continued ) ._ obviously , the groups are chosen to be the true known groups .assumption ( a1 ) is then trivially fulfilled with known groups and constant regression parameter within groups ( clusterwise regression ) ._ example 2 : smoothness structure ( continued ) ._ we construct groups of non - overlapping consecutive observations . for simplicity, we would typically use equal group size so that . 
when taking sufficiently many groups and for a certain model of smoothness structure , condition ( a1 ) will be fulfilled with high probability : it is shown there that it is rather likely to get some groups of consecutive observations where the optimal vector is approximately constant and the convex hull of these `` pure '' groups will be equal to the convex hull of the support of ._ example 3 : unknown groups ( continued ) ._ we construct groups of equal size by random subsampling : sample without replacement within a group and with replacement between groups .this random subsampling strategy can be shown to fulfill condition ( a1 ) when assuming an additional so - called pareto condition . as an example ,a model with a fraction of outliers fulfills ( a1 ) and one obtains an important robustness property of magging which is closely connected to section [ subsec.robustness ] . .the second column shows the realizations of for the first groups , while the third shows the least - squares estimates of the signal when projecting onto the space of periodic signals in a certain frequency - range .the last column shows from top to bottom : ( a ) the pooled estimate one obtains when adding all groups into one large dataset and estimating the signal on all data simultaneously ( the estimate does not match closely the common effects shown in red ) ; ( b ) the mean aggregated data obtained by averaging the individual estimates ( here identical to pooled estimation ) ; ( c ) the ( less generic ) maximin effects estimator from , and ( d ) magging : maximin aggregated estimators ( [ eq : maximin ] ) , both of which match the common effects quite closely . , scaledwidth=99.0% ]we illustrate the difference between mean aggregation and maximin aggregation ( magging ) with a simple example .we are recording , several times , data in a time - domain .each recording ( or group of observations ) contains a common signal , a combination of two frequency components , shown in the top left of figure [ fig.example ] .on top of the common signal , seven out of a total of 100 possible frequencies ( bottom left in figure [ fig.example ] ) add to the recording in each group with a random phase .the 100 possible frequencies are the first frequencies , for periodic signal with periodicity defined by the length of the recordings .they form the dictionary used for estimation of the signal . in total recordingsare made , of which the first 11 are shown in the second column of figure [ fig.example ] .the estimated signals are shown in the third column , removing most of the noise but leaving the random contribution from the non - common signal in place .averaging over all estimates in the mean sense yields little resemblance with the common effects .the same holds true if we estimate the coefficients by pooling all data into a single group ( first two panels in the rightmost column of figure [ fig.example ] ) .magging ( maximin aggregation ) and the closely related but less generic maximin estimation , on the other hand , approximate the common signal in all groups quite well ( bottom two panels in the rightmost column of figure [ fig.example ] ) . 
provide other real data results where maximin effects estimation leads to better out - of - sample predictions in two financial applications .large - scale and ` big ' data poses many challenges from a statistical perspective .one of them is to develop algorithms and methods that retain optimal or reasonably good statistical properties while being computationally cheap to compute .another is to deal with inhomogeneous data which might contain outliers , shifts in distributions and other effects that do not fall into the classical framework of identically distributed or stationary observations .here we have shown how magging ( `` maximin aggregation '' ) can be a useful approach addressing both of the two challenges .the whole task is split into several smaller datasets ( groups ) , which can be processed trivially in parallel .the standard solution is then to average the results from all tasks , which we call `` mean aggregation '' here .in contrast , we show that finding a certain convex combination , we can detect the signals which are common in all subgroups of the data . while `` mean aggregation '' is easily confused by signals that shift over time or which are not present in all groups , magging ( `` maximin aggregation '' ) eliminates as much as possible these inhomogeneous effects and just retains the common signals which is an interesting feature in its own right and often improves out - of - sample prediction performance .breiman , l. ( 1996a ) . bagging predictors . , 24:123140 .breiman , l. ( 1996b ) . ., 24:4964 .breiman , l. ( 2001 ) . ., 45:532 .bhlmann , p. and yu , b. ( 2002 ) . analyzing bagging . ,30:927961 .bunea , b. , tsybakov , a. , and wegkamp , m. ( 2007 ) .aggregation for gaussian regression ., 35:16741697 .chandrasekaran , v. and jordan , m. i. ( 2013 ) .computational and statistical tradeoffs via convex relaxation . , 110:e1181e1190 .desarbo , w. and cron , w. ( 1988 ) . a maximum likelihood methodology for clusterwise linear regression ., 5:249282 .mahoney , m. w. ( 2011 ) .randomized algorithms for matrices and data ., 3:123224 .mclachlan , g. and peel , d. ( 2004 ) . .john wiley & sons .meinshausen , n. and bhlmann , p. ( 2014 ) .maximin effects in inhomogeneous large - scale data .preprint arxiv:1406.0596 .pinheiro , j. and bates , d. ( 2000 ) . .springer .( 2014 ) . .r foundation for statistical computing , vienna , austria .wolpert , d. ( 1992 ) . . ,_ proof of theorem [ theo : main ] : _ define for ( where is as defined in ( [ eq : maximin ] ) the set of positive vectors that sum to one ) , and let for , then and and and .now , using ( a3 ) hence , as and , for , where follows by the definition of the maximin vector . combining the last inequality with ( [ eq:1 ] ) , furthermore , by ( a3 ) , using the equality for , combining ( [ eq:2 ] ) and ( [ eq:3 ] ) , which completes the proof. hats< - t(x ) % * % x /n # empirical covariance matrix of x h < - t(theta ) % * % hats % * % theta # assume that it is positive definite # ( use h + xi * i , xi > 0 small , otherwise ) a < - rbind(rep(1,g),diag(1,g ) ) # constraints b < - c(1,rep(0,g ) ) d < - rep(0,g ) # linear term is zero w < - solve.qp(h,d,t(a),b , meq = 1 ) # quadratic programming solution to # argmin(x^t h x ) such that ax > = b and # first inequality is an equality ....
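The appendix pseudocode can be turned into a self-contained helper as follows. This is a sketch under assumptions: it relies on the CRAN package quadprog (function solve.QP), adds a small ridge in case the Gram matrix of the fitted group estimates is only positive semi-definite, and the toy data at the end are made up purely to show the call.

library(quadprog)

magging_weights <- function(X, Theta) {          # X: n x p design, Theta: p x G group estimates
  G    <- ncol(Theta)
  hatS <- crossprod(X) / nrow(X)                 # empirical covariance matrix of X
  H    <- t(Theta) %*% hatS %*% Theta
  H    <- H + 1e-8 * diag(G)                     # ensure positive definiteness
  A    <- cbind(rep(1, G), diag(G))              # constraints: sum(w) = 1 and w >= 0
  b    <- c(1, rep(0, G))
  solve.QP(Dmat = H, dvec = rep(0, G), Amat = A, bvec = b, meq = 1)$solution
}

# toy usage: three group estimates sharing only their first component
set.seed(3)
X     <- matrix(rnorm(100 * 2), 100, 2)
Theta <- cbind(c(1, 2), c(1, -2), c(1, 0))
w     <- magging_weights(X, Theta)
Theta %*% w                                      # approximately c(1, 0): the common effect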
|
Large-scale data analysis poses both statistical and computational problems which need to be addressed simultaneously. A solution is often straightforward if the data are homogeneous: one can use classical ideas of subsampling and mean aggregation to get a computationally efficient solution with acceptable statistical accuracy, where the aggregation step simply averages the results obtained on distinct subsets of the data. However, if the data exhibit inhomogeneities (and typically they do), the same approach will be inadequate, as it will be unduly influenced by effects that are not persistent across all the data, due to, for example, outliers or time-varying effects. We show that a tweak to the aggregation step can produce an estimator of effects which are common to all the data, and hence interesting for interpretation and often leading to better prediction than pooled effects.
|
this article describes a computationally efficient method for constructing symmetric factorization of large dense matrices .the symmetric factorization of large dense matrices is important in several fields , including , among others , data analysis , geostatistics , and hydrodynamics .for instance , several schemes for multi - dimensional monte carlo simulations require drawing covariant realizations of multi - dimensional random variables . in particular , in the case where the marginal distribution of each random variable is normal ,the covariant samples can be obtained by applying the _ symmetric factor _ of the corresponding covariance matrix to independent normal random variates .the symmetric factorization of a symmetric positive definite matrix can be computed as the factor in .one of the major computational issues in dealing with these large covariance matrices is that they are often dense .conventional methods of obtaining a symmetric factorization based on the cholesky decomposition are expensive , since the computational cost scales as for a matrix .relatively recently , however , it has been observed that large dense ( full - rank ) covariance matrices can be efficiently represented using hierarchical decompositions .taking advantage of this underlying structure , we derive a novel symmetric factorization for large dense hierarchical off - diagonal low - rank ( hodlr ) matrices that scales as .i.e. , for a given matrix , we decompose it as . a major difference of our scheme versus the cholesky decomposition is the fact that the matrix is no longer a triangular matrix . in fact , the matrix is a product of matrices that are block low - rank updates of the identity matrix , and the cost of applying the factor to a vector scales as .hierarchical matrices were first introduced in the context of integral equations arising out of elliptic partial differential equations and potential theory . since then , it has been observed that a large class of dense matrices arising out of boundary integral equations , dense fill - ins in finite element matrices , radial basis function interpolation , kernel density estimation in machine learning , covariance structure in statistic models , bayesian inversion , kalman filtering , and gaussian processes can be efficiently represented as data - sparse hierarchical matrices . after a suitable ordering of columns and rows , these matrices can be recursively sub - divided using a tree structure and certain sub - matrices at each level in the tree can be well - represented by low - rank matrices .we refer the readers to for more details on these matrices . depending on the tree structure and low - rank approximation technique ,different hierarchical decompositions exist .for example , the original fast multipole method accelerates the calculation of long - range gravitational forces for -body problems by hierarchically compressing ( via a quad- or oct - tree ) certain interactions in the associated matrix operator using analytical low - rank considerations .the low - rank sparsity structure of these hierarchical matrices can be exploited to construct fast dense linear algebra schemes , including direct inversion , determinant computation , symmetric factorization , etc .most of the existing results relevant to symmetric factorization of low - rank modifications to the identity are based on rank or rank modifications to the cholesky factorization , which are computationally expensive , i.e. 
, their scaling is at least .we do not seek to review the entire literature here , except to direct the readers to a few references . to our knowledge ,the scheme presented in this paper is the first symmetric factorization for hierarchical matrices that scales nearly linearly .in fact , replacing the hodlr structure with other ( more stringent ) hierarchical structures would yield linear schemes .this extension is currently under investigation .it is worth pointing out that xia and gu discuss a cholesky factorization for hierarchically semi - separable ( hss ) matrices that scales as .the paper is organized as follows : section [ section_symfactor_lowrank ] contains the key idea behind the algorithm discussed in this paper : a fast , symmetric factorization for low - rank updates to the identity .section [ section_hierarchical ] extends the formula of section [ section_symfactor_lowrank ] to a nested product of block low - rank updates to the identity .the details of the compatibility of this structure with hodlr matrices is discussed .section [ section_numerical ] contains numerical results , accuracy and complexity scaling , of applying the factorization algorithms to matrices relevant to problems in statistics , interpolation and hydrodynamics .section [ section_conclusion ] summarizes the previous results and discusses further extensions and areas of ongoing research .almost all of the hierarchical factorizations are typically based on incorporating low - rank perturbations in a hierarchical manner . in this section ,we briefly discuss some well - known identities which allow for the rapid inversion and determinant computation of low - rank updates to the identity matrix .if the inverse of a matrix is already known , then the inverse of subsequent low - rank updates , for and , can be calculated as where we should point out that the quantity is only a matrix .this formula is known as the sherman - morrison - woodbury ( smw ) formula . further simplifying , in the case where , we have note that the smw formula shows that the inverse of a low - rank perturbation to the identity matrix is also a low - rank perturbation to the identity matrix . furthermore , the row - space and column - space of the low - rank perturbation and its inverse are the same .the main advantage of equation is that if , we can obtain the inverse ( or equivalently solve a linear system ) of a rank perturbation of an identity matrix at a computational cost of . 
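The Sherman–Morrison–Woodbury identity quoted above is easy to verify numerically; the sketch below (Python/NumPy, with arbitrary random factors) checks that inverting a rank-p update to the identity only requires the solution of a p×p system.

```python
# Numerical check of the Sherman-Morrison-Woodbury formula for a rank-p update
# to the identity:  (I + U V^T)^{-1} = I - U (I_p + V^T U)^{-1} V^T,
# so only a p x p system has to be solved.
import numpy as np

rng = np.random.default_rng(1)
n, p = 500, 3
U = rng.normal(size=(n, p))
V = rng.normal(size=(n, p))

A = np.eye(n) + U @ V.T
A_inv = np.eye(n) - U @ np.linalg.solve(np.eye(p) + V.T @ U, V.T)

print(np.allclose(A @ A_inv, np.eye(n)))      # True
```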
in general ,if is a low - rank perturbation of , then the inverse of is also a low - rank perturbation of the inverse of .it is also worth noting that if and are well - conditioned , then the sherman - morrison - woodbury formula is numerically stable .the smw formula has found applications in , to name a few , kalman filters , recursive least - squares , and fast direct solvers for hierarchical matrices .calculating the determinant of an matrix , classically , using a cofactor expansions requires operations .however , if the or eigenvalue decomposition is obtained , this cost is reduced to .recently , it was shown that the determinant of hodlr matrices could be calculated in time using sylvester s determinant theorem , a formula relating the determinant of a low - rank update of the identity to the determinant of a smaller matrix .determinants of matrices are very important in probability and statistics , in particular in bayesian inference , as they often serve as the normalizing factor in likelihood calculations and in the evaluation of the conditional evidence .sylvester s determinant theorem states that for , where the determinant on the right hand side is only of a matrix .hence , the determinant of a rank perturbation to an identity matrix , where , can be computed at a computational cost of .this formula has recently found applications in bayesian statistics for computing precise values of gaussian likelihood functions ( which depend on the determinant of the corresponding covariance matrix ) and computing the determinant of large matrices in random matrix theory . in the spirit of the sherman - morrison - woodbury formula and sylvester s determinant theorem, we obtain a formula that enables the symmetric factorization of a rank perturbation to the identity at a computational cost of . in particular ,for a symmetric positive definite ( from now on abbreviated as spd ) matrix of the form , where is an identity matrix , , , and , we obtain the factorization we now state this as the following theorem .[ thm_main ] for rank matrices and , if the matrix is spd then it can be symmetrically factored as where is obtained as the matrix is the symmetric factor of , and is the symmetric factor of , i.e. , we first prove two lemmas related to the construction of in equation , which directly lead to the proof of theorem [ thm_main ] . in the subsequent discussion , we will assume the following unless otherwise stated : 1 . is the identity matrix .2 . . is of rank .4 . is of rank . is spd .it is easy to show that the last item implies that the matrix is symmetric .the first lemma we prove relates the positivity of the smaller matrix to the positivity of the larger matrix , .[ prop_2 ] let denote a symmetric factorization of , where .if the matrix is spd ( semi - definite ) , then is also spd ( semi - definite ) . to prove that is spd , it suffices to prove that given any non - zero , we have . note that since is full rank , the matrix is invertible .we now show that given any , there exists an such that .this will enable us to conclude that is positive definite since is positive definite .in fact , we will directly construct such that .let us begin by choosing .then , the following two criteria are met : 1 . 2 . expanding the norm of we have : this proves criteria ( i ) .furthermore , by our choice of , we also have that . therefore, this proves criteria ( ii ) . 
from the above, we can now conclude that hence , if is spd , so is .an identical calculation proves the positive semi - definite case .we now state and prove a lemma required for solving a quadratic matrix equation that arises in the subsequent factorization scheme .[ prop_3 ] a solution to the quadratic matrix equation with and a full rank matrix is given by where is a symmetric factorization of , that is , .first note that from lemma [ prop_2 ] , since is positive definite , the symmetric factorization exists .now the easiest way to check if equation satisfies equation is to plug in the value of from equation in equation .this yields : further simplifying the expression , we have : therefore , we have that we are now ready to prove the main result , theorem [ thm_main ] .( proof of theorem [ thm_main ] ) the proof follows immediately from the previous two lemmas . with and previously defined as in equation , we have since and , from lemma [ prop_3 ] we have that . substituting in the previous equation, we get this proves the symmetric factorization .a slightly more numerically stable variant of factorization is : where is a unitary matrix such that .even though the previous theorem only addresses the symmetric factorization problem with no restrictions on the symmetric of the factors , we can also easily obtain a _ square - root factorization _ in a similar manner . by thiswe mean that for a given symmetric positive definite matrix , one can obtain a symmetric matrix such that .the key ingredient is obtaining a square - root factorization of a low - rank update to the identity : where is a symmetric matrix and satisfies the solution to equation is given by where and are symmetric square - roots of and : these factorizations can easily be obtained via a singular value or eigenvalue decomposition .this can then be combined with the recursive divide - and - conquer strategy discussed in the next section to yield an algorithm for computing square - roots of hodlr matrices .theorem [ thm_main ] has the two following useful corollaries . [ cor_1 ] if , i.e. , the perturbation to the identity in equation is of rank , then where .this result can also be found in .corollary [ cor_2 ] extends low - rank updates to spd matrices _ other _ than the identity .[ cor_2 ] given a symmetric factorization of the form , where the inverse of can be applied fast ( i.e. , the linear system can be solved fast ) , then a symmetric factorization of a spd matrix of the form , where and , can also be obtained fast . for instance , if the linear system can be solved at a computational cost of , then the symmetric factorization can also be obtained at a computational cost of . 
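As a concrete illustration of the factorization of a rank-p update to the identity, the sketch below (Python/NumPy) builds a symmetric factor of an SPD matrix A = I + U C U^T while touching only p×p matrices. The particular formula used — Cholesky factors of U^T U and of I_p + M^T C M, combined as X = M^{-T}(L − I_p)M^{-1} — is one solution of the quadratic matrix equation in the lemma above; it is meant as an illustration and is not claimed to be letter-for-letter the factor derived in the paper.

```python
# One concrete symmetric factorization W W^T = I + U C U^T that only requires
# p x p factorizations (illustrative; not necessarily the exact factor derived
# in the paper).  It solves
#     X + X^T + X (U^T U) X^T = C     via     X = M^{-T} (L - I_p) M^{-1},
# where U^T U = M M^T and I_p + M^T C M = L L^T are Cholesky factorizations.
import numpy as np

rng = np.random.default_rng(2)
n, p = 400, 4
U = rng.normal(size=(n, p))
C = rng.normal(size=(p, p))
C = C @ C.T                                   # SPD, so A = I + U C U^T is SPD

A = np.eye(n) + U @ C @ U.T

M = np.linalg.cholesky(U.T @ U)               # p x p
L = np.linalg.cholesky(np.eye(p) + M.T @ C @ M)   # p x p
B = L - np.eye(p)
X = np.linalg.solve(M.T, np.linalg.solve(M.T, B).T).T   # M^{-T} B M^{-1}

W = np.eye(n) + U @ X @ U.T
print(np.allclose(W @ W.T, A))                # True
```

Once U and X are stored, applying W or W^T to a vector costs only O(np) operations, which is the property exploited by the hierarchical divide-and-conquer strategy of the next section.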
a numerical example demonstrating corollary [ cor_2 ]is contained in section [ sec - fast ] .note that the factorizations in equations and are similar to the sherman - morrison - woodbury formula ; in each case , the symmetric factor is a low - rank perturbation to the identity .furthermore , the row - space and column - space of the perturbed matrix are the same as the row - space and column - space of the symmetric factors .another advantage of the factorization in equations and is that the storage cost and the computational cost of applying the factor to a vector , which is of significant interest as indicated in the introduction , scales as .we now describe a computational algorithm for finding the symmetric factorization described in theorem [ thm_main ] .algorithm [ algorithm_main ] lists the individual steps in computing the symmetric factorization and their associated computational cost .the only computational cost is the in the computation of the matrix in equation .note that the dominant cost is the matrix - matrix product of an matrix with a matrix .the rest of the steps are performed on a lower -dimensional space . [ cols="^,<,^",options="header " , ] note that since the rpy tensor is singular , on and manifolds the ranks of the off - diagonal blocks would grow as and , respectively . since the computational cost of the symmetric factorization scales as , the computational cost for the symmetric factorization to scale as and on and manifolds , respectively .the numerical benchmarks also validate this scaling of our algorithm in all three configurations .the article discusses a fast symmetric factorization for a class of symmetric positive definite hierarchically structured matrices .our symmetric factorization algorithm is based on two ingredients : a novel formula for the symmetric factorization of a low - rank update to the identity , and a recursive divide - and - conquer strategy compatible with hierarchically structures matrices . in the case where the hierarchical structure present is that of hierarchically off - diagonal low - rank matrices , the algorithm scales as .the numerical benchmarks for dense covariance matrix examples validate the scaling .furthermore , we also applied the algorithm to the mobility matrix encountered in brownian - hydrodynamics , elements of which are computed from the rotne - prager - yamakawa tensor . in this case , since the ranks of off - diagonal blocks scale as , when the particles are on a three - dimensional manifold , the algorithm scales as .obtaining an symmetric factorization for the mobility matrix is a subject of ongoing research within our group .it is also worth noting that with nested low - rank basis of the off - diagonal blocks , i.e. , if the hodlr matrices are assumed to have an hierarchical semi - separable structure instead , then the computational cost of the algorithm would scale as .extension to this case is relatively straightforward .sivaram ambikasaran , arvind krishna saibaba , eric f darve , and peter k kitanidis .ast algorithms for bayesian inversion . in_ computational challenges in the geosciences _ , pages 101142 .springer , 2013 .matthieu geist and olivier pietquin .statistically linearized recursive least squares . in _machine learning for signal processing ( mlsp ) , 2010 ieee international workshop on _, pages 272276 .ieee , 2010 . dingding wang , tao li , shenghuo zhu , and chris ding .multi - document summarization via sentence - level semantic analysis and symmetric matrix factorization . 
in _proceedings of the 31st annual international acm sigir conference on research and development in information retrieval_, pages 307-314. acm, 2008.
|
we present a fast direct algorithm for computing symmetric factorizations, i.e., of symmetric positive-definite hierarchical matrices with weak-admissibility conditions. the computational cost for the symmetric factorization scales as for hierarchically off-diagonal low-rank matrices. once this factorization is obtained, the cost for inversion, application, and determinant computation scales as. in particular, this allows for the near optimal generation of correlated random variates in the case where is a covariance matrix. this symmetric factorization algorithm depends on two key ingredients. first, we present a novel symmetric factorization formula for low-rank updates to the identity of the form. this factorization can be computed in time, if the rank of the perturbation is sufficiently small. second, combining this formula with a recursive divide-and-conquer strategy, near linear complexity symmetric factorizations for hierarchically structured matrices can be obtained. we present numerical results for matrices relevant to problems in probability & statistics (gaussian processes), interpolation (radial basis functions), and brownian dynamics calculations in fluid mechanics (the rotne-prager-yamakawa tensor). symmetric factorization, hierarchical matrix, fast algorithms, covariance matrices, direct solvers, low-rank, gaussian processes, multivariate random variable generation, mobility matrix, rotne-prager-yamakawa tensor. 15a23, 15a15, 15a09
|
the goal of the present paper is to present a new approach to the construction of asymptotic ( approximating ) solutions to parabolic pde by using the characteristics .this approach allows one to construct global in time solutions not only for the usual cauchy problems but also for the inverse problems . we will work with kolmogorov feller - type equations with diffusion , potential , and jump termsthe equation under study has the form : where is the symbol of the kolmogorov feller operator , is a small parameter characterizing the frequency and the amplitude of jumps of the markov stochastic process with transition probability given by . to be more precise ,we bear in the mind the following form of : where is a positive smooth matrix , is a family of positive bounded measures smooth with respect to such that and and are smooth in ( more precise conditions see below ) .the construction of forward in time global asymptotic solution to equations of this type was developed by v. maslov , , for a version of this construction , see also in .maslov s approach is based on ideas similar to those used in his famous canonical operator construction ( or in fourier integral operators theory ) .this construction is based on some integral representation and is not suitable for constructing backward in time solutions .another approach to the global asymptotic solution construction was suggested in and is based on the construction of generalized solutions to continuity equation in a discontinuous velocity field .we assume that the class of solutions under study admits the following limits : \(1 ) logarithmic pointwise limit .we denote this limit by and assume that it is a piecewise smooth function with bounded first - order derivatives and a singular support in the form of a stratified manifold .\(2 ) weak limit of the expression .we denoted it by and assume that is the sum of the function ( ) smooth outside and the dirac -function on .note that here we deal with the limit in the weighted weak sense !if and are smooth function , then the following representation is true ( in the usual sense ) * example 1 : * where is a smooth function , . herea wkb - like approach can be used ( yu . kifer , ; v. maslov , ) .it gives an asymptotic ( approximating ) solution in the form ( cf . ) for arbitrary . here is the solution to the cauchy problem for the hamilton jacobi equation and is the solution to the transport equation both of the solutions and are defined via solutions of the hamilton system they are smooth while there are symplectic geometry objects corresponding to this construction : \(1 ) the phase space ; \(2 ) the lagrangian manifold , where is a shift mapping along the hamiltonian system trajectories ; \(3 ) the projection mapping with jacobi matrix .the main assumption that is required is the following one .the trajectories of the hamilton system form a manifold of the phase space ( at least in the area of the phase space under study ) .let for ] , the singular support of the velocity field preserves its structure ( the mapping of the singular support induced by the shift along the hamilton flow is a diffeomorphism ) , then it is possible to show that the singular support of the velocity has the required structure .if the structure is changing ( e.g. 
, a jump appears , see fig.5 and fig.4-the last step of evolution in time ) , then one can use the weak asymptotics method to construct a global solution to the hamilton jacobi and continuity equations .this approach is based on a `` new ( generalized ) characteristics '' constructed by v. danilov and d. mitrovic , in the case where the strata of the singular support are of codimension 1 .the main idea of this approach is to consider the singularity origination as a result of nonlinear solitary wave interaction .a simple example is the hamilton flow corresponding to the heat equation from the previous example . the hamilton jacobi equation in this case is equivalent to the hopf equation for the momentum : the solution in this case has the form and is plotted below ., width=377 ] to consider , we have to calculate the product here the following equality holds : where is an arbitrary small parameter and is a small quantity in the sense of distributions , for each , which is a test function .the time evolution of the function is such that the slanting intercept of a straight line preserves its shape until it takes the vertical position and then a jump begins to propagate .this means that , at every time instant , the solution anzatz can be presented in the form of a linear combination of heaviside functions .this allows one to use a formula which express the product of heaviside functions as their linear combination , and hence , uniformly in time , we see that the functions and ( the last up to a small quantity ) belong to the same linear space , for detail , see .thus , we can prove the following theorem .assume that the following conditions are satisfied for $ ] , : \(1 ) there exists a smooth solution of the hamiltonian system , \(2 ) the singularities of the velocity field form a stratified manifold with smooth strata and .then there exists a generalized solution of the cauchy problem for continuity equation in the sense of the integral identity introduced in and at the points where the projection is bijective , the asymptotic solution of the cauchy problem for kolmogorov - feller type equation has the form was shown above , all that we need to go forward in time is the hamilton system : let us change the time direction as , then we want to solve the inverse problem : the right - hand sides are considered as given data , and we are looking for , for .obviously , in our case the solutions have the form conclusion : we can use the `` same '' trajectories to move forward and backward in time .but the incoming trajectories become outcoming and vice versa .but if there are no jumps ( singularities of the projection mapping ) , then our geometry ( and the asymptotic solution ! ) is invertible in time .this means that if we take the cauchy problem solution for parabolic pde such that then the asymptotic solution for has the `` wkb '' form then taking the last function as the initial data for parabolic pde in inverse time ( let be its asymptotic solution ) , we get : it can be easily verified in the case i.e. for simplest heat equation .if one constructs the solution of inverse heat equation with initial data at of the form using the green function and calculates the integral be saddle point method at the following result will be obtained we want to stress once again that this statement is true if there is no singularities of projection mapping .there is no unique reconstruction of the part of lagrangian manifold coming to the vertical line as increases ( see fig.9 ) ! 
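The picture sketched above — characteristics of the Hopf equation, the gradient catastrophe at which a jump begins to propagate, and invertibility of the flow before that time — can be made concrete numerically. The sketch below (Python/NumPy) uses a hypothetical smooth initial momentum profile chosen only for illustration.

```python
# Characteristics of the Hopf equation p_t + p p_x = 0 for a hypothetical
# smooth initial momentum profile p0.  Before the first crossing time the
# projection x0 -> x(t) = x0 + t p0(x0) is a bijection, so transport forward
# and then backward along the same trajectories recovers the initial data; at
# t* = -1 / min(p0'(x0)) the Jacobian 1 + t p0'(x0) first vanishes and a jump
# starts to form.
import numpy as np

p0 = lambda x: -np.tanh(x)                    # illustrative initial momentum
x0 = np.linspace(-5.0, 5.0, 2001)

dp0 = np.gradient(p0(x0), x0)
t_star = -1.0 / dp0.min()
print("gradient catastrophe (jump formation) at t ~", t_star)

t = 0.5 * t_star                              # before the singularity
x_t = x0 + t * p0(x0)                         # forward along characteristics
x_back = x_t - t * p0(x0)                     # backward along the same ones
print("initial data recovered:", np.allclose(x_back, x0))     # True
```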
but , fortunately , we can move ahead using the sense considerations .the main point is that the function can not attain its minimum ( maximum ) inside the `` terra incognita '' .this allows one to calculate the integrals containing the reconstructed solution without taking `` terra incognita '' into account in the case where the integrand support contains this `` terra incognita '' , and we can formulate the following statement .let the symbol defined by be such that the function and measure do not depend on and .let be a solution of the cauchy problem to a kolmogorov feller - type equation , and assume that , for some , there exists a logarithmic limit , namely , the action function and the generalized amplitude . 1 .let , then relation ( [ alpha1 ] ) is true ; 2 .assume that , is a union of segments , and the intersection between some and is not empty but does not belong to , then the limit of the integral as equals , where is the minimal value of at the ends of .these statements actually mean that , from the viewpoint of the weak sense ( momentum ) , the density reconstructed arbitrarily inside the `` terra incognita '' and according to the characteristics outside it can be used in the same manner as the leading term of the asymptotic solution constructed earlier by maslov s tunnel canonical operator and its modifications , .now i will briefly speak about the proofs of the statements about the invertibility in time that was formulated above .each of them can be divided into two parts .first , it is to prove that , for all smooth reconstructions of lagrangian manifold in a `` terra incognita '' domain , the corresponding function ( for some fixed ) can not attain its minimum value inside this domain , see the lemma below .this allows one to apply the laplace method for calculating the integrals mentioned in those statements taking into account that , due to this method and lemma , the results of these calculations do not depend on the values of the integrands inside the `` terra incognita '' domain .the proof is finished by taking account of the estimation which is true outside the singular support of the function , see theorem above .now i formulate the lemma .let the symbol defined by be such that the function and the measure do not depend on and .then the function , i.e. , the action function corresponding to the lagrangian manifold , can not attain the minimal value inside the `` terra incognita '' domains .we begin the consideration with a particular case when operator symbol does not depend on and restrict ourselves by one dimensional case studying .let ( a , b ) is an interval inside the `` terra incognita '' domain , and let .we will proceed by contradiction .assume that the function attain its minimal value at the point .we prove that , along the trajectory of hamilton system whose projection starts at , the following inequality is true : this inequality leads to a contradiction because of the assumption that belongs to the `` terra incognita '' domain , and hence it belongs to the projection of the image of the singular ( vertical ) part of the lagrangian manifold under backward in time shift along the trajectories of the hamilton system . in turn, this means that the projections of all trajectories whose starting points are projected to the `` terra incognita '' must intersect at a point for the forward in time motion .so the above - mentioned inequality leads to a contradiction . 
to prove this inequality , we write the projection of the hamilton system trajectory starting at .it has the form it is clear that .thus , taking into account that that because of the convexity of and due to the assumption that is a point of minimal value , we get the needed inequality .a multidimensional case differs from the case considered by changing the scalar values and by vectors ( gradients ) . in turn, this gives matrix inequalities in and .the problem is to prove that the eigenvalues of the matrix are nonnegative by using and .for this , we can make a change of variables reducing the matrix to diagonal form .this transformation induces the corresponding transformation in the -plane that transforms the matrix to a new symmetric positive matrix .now we note that the principal minors of the new matrix product are products of matrix - factor principal minors ( because the second one is of diagonal form ) .the determinants of the matrix - factor principal minors are nonnegative , so the spectrum of the matrix is also nonnegative and we again get .to finish our consideration , we have to investigate the case where the symbol depends on . we again stay at the point , where the function attains its minimum .if so , then and , by assumptions , for .the system for the matrices and follows from the hamilton system and has the form because of our assumptions ( see the lemma formulation ) , we have , and .this means that , along the hamilton system trajectory starting from the point , , we have and .thus , along the above - mentioned trajectory , equations and have the form integrating over , we can transform the first equation to the form of eq . and apply all above arguments concerning this equality .thus we came to the relation with the same properties as .one can proceed further and generalize the statement to the case of an arbitrary drift .but up to now the presence of the potential destroys our picture and i will think about it .v. g. danilov , `` on singularities of continuity equations , '' nonlinear analysis ; theory , methods and applications , * 68 * , 6 , 1640 - 1651 , ( 2008 ) , preprint 2006 - 41 , http://www.math.ntnu.no/coservation/2006 s. albeverio , v. g. danilov , _ global in time solutions to kolmogorov - feller pseudodifferential equations with small parameter _ , russian journal math .phys . , v.18 , n1,pp.10 - 25,2011 , issue dedicated to the memory of academician v. ginzburg .v. maslov , `` global exponential asymptotic behavior of solutions of the tunnel equations and the problem of large deviations , '' ( russian ) international conference on analytical methods in number theory and analysis ( moscow , 1981 ) .trudy mat .inst . steklov .* 163 * ( 1984 ) , pp .150180 .v. n. kolokoltsov and v. p. maslov , _ idempotent analysis and its applications_. translation of _ idempotent analysis and its application in optimal control _ ( russian ) , `` nauka '' moscow , 1994 .translated by v. e. nazaikinskii . with an appendix by pierre del moral .mathematics and its applications , 401 .kluwer academic publishers group , dordrecht , 1997 .v. g. danilov , `` a representation of the delta function via creation operators and gaussian exponentials and multiplicative fundamental solution asymptotics for some parabolic pseudodifferential equations , '' russian j. math . phys .* 3 * , no . 1 p. 25( march 1995 ) .v .g .danilov , d. 
mitrovic, delta shock wave formation in the case of triangular hyperbolic system of conservation laws, journal of differential equations, in press, corrected proof, available online 16 april 2008. v. g. danilov, d. mitrovic, _shock wave formation process for a multidimensional scalar conservation law_. quarterly of appl.
|
the goal of the present paper is to present a new approach to the construction of asymptotic (approximating) solutions to parabolic pde by using the characteristics.
|
compared to qubits , higher - dimensional quantum systems improve performance of many protocols and algorithms of quantum information processing .for example , additionally to their increased capacity , they make quantum cryptography more secure , or lead to a greater reduction of communication complexity . one way to deal with a qudit is to find a convenient physical system representing it .a beautiful example is a photon with many accessible propagation paths .another approach , studied here , is to treat many systems of lower dimensions as a global higher - dimensional object a composite qudit .the challenge is to prepare and operate on entangled states of subsystems and to experimentally realize all global observables .this usually requires difficult conditional operations . the preparation of entangled states ( at least some of them ) is well within a reach of current technology .for example , entanglement of two photons , in all degrees of freedom , was demonstrated in ref .each of these photons can be regarded as a composite qudit , with subsystems represented by different degrees of freedom .further , a system of two photons can be thought of as an even higher dimensional qudit . here , measurements on such qudits are studied .some examples are already proposed and realized in a context of bell s theorem .the present paper is a generalization of that work .first , the requirements for an arbitrary global observable are given .next , a specific class of operators , unitary generalizations of pauli operators , is described in detail .the importance of this class comes from its applications .for example , the operators form a full tomographic set ( allow for a reconstruction of a density matrix ) , appear in quantum cryptography , or tests of local realism . a solution to the eigenproblem of these operatorsis constructed in full generality and , for a two - component case , the schmidt representation of the eigenstates is derived .all the eigenvectors are shown to have the same schmidt number .thus , the sets of entangled and disentangled eigenbases are identified , which respectively define the sets of `` more difficult '' and `` easier '' realizable operators . a beautiful method to solve the eigenproblem of the generalized pauli operators , based on euclid s algorithm , was given by nielsen _however , since their interests were different , they did not present an explicit solution .the generalized pauli operators have been studied in various levels of detail in many other papers .nevertheless , the present author could not find a general form of the eigenbasis . here ,an explicit compact formulae for eigenvectors and eigenvalues are given , as well as a practical procedure how to compute them .although unitary , the generalized pauli operators are measurable . in quantum mechanics , different outcomes of a measurement apparatuscorrespond to different orthogonal states of a system . due to the fact that most often measurement outcomes are expressed in form of real numbers we are used to connect hermitian operators with observables . 
however , there are measurement apparatuses which _ do not _ output a number .take a device which clicks if a photon is detected or a bunch of such photo - detectors which monitor many possible propagation paths of a photon .the operator associated with this apparatus has a specific spectral decomposition ( different clicks find the system in different orthogonal states ) .however , the eigenvalues assigned to the clicks can be arbitrary , as long as the assignment is consistent , i.e. clicks of the same detector always reveal the same eigenvalue .if one finds it useful to work with complex eigenvalues , as it is often the case when considering higher - dimensional quantum systems , one can use operators which are unitary , with eigenvalues given by the complex roots of unity . with any generalized pauli operator one can associate a measurement device capable to measure it .we present such devices for polarisation - path qudits , and prove that quantum cryptography with two bases is relatively easy to realize as it does not require any joint measurements on the subsystems . as the unitary operators correspond to certain measurement apparatuses , they will be often called `` observables '' .consider a qudit composed of many subsystems , possibly of different dimensions .the measurement of any global observable can be viewed as a unitary evolution of the whole system which transforms the eigenvectors of the observable into the eigenvectors which can be distinguished by the measurement apparatus . for subsystems of equal dimensionsarbitrary global unitary operation can be decomposed into local and two - body conditional operations .this proof can be almost directly applied to the problem studied here , and it will not be repeated .individual measurements , local and conditional two - body operations are sufficient to realize any global measurement on a composite qudit . instead of finding the evolution , one can decompose a global observable into ( possibly joint ) measurements on subsystems and classical communication .eigenbases of individual measurements form a product basis in a global hilbert space .eigenvectors of any global observable can be decomposed in this basis .if the eigenvectors factorize , that is , where is a state of subsystem , then there are two possible scenarios : ( a ) a global measurement can be performed with individual measurements on separate subsystems , ( b ) it can be done with an additional use of a feed - forward technique , i.e. a subsequent measurement setting depends on the outcomes of all previous measurements . to see this , note that orthogonality of vectors implies certain orthogonalities of the states of subsystems , . in the simplest case , for each subsystem the vectors form a basis .then , the first scenario , ( a ) , can be applied .the other possibility is that vectors , say , form an orthogonal basis , and for every one has a _ different _ set of orthogonal vectors of another subsystem , say , and so on . 
in this case ,one first measures the particle the states of which span the full basis ( in our case subsystem `` 0 '' ) .next , depending on the outcome , another subsystem is measured in a suitable basis .further on , depending on both previous outcomes , yet another subsystem is measured , etc .this is what is called feed - forward technique , ( b ) .if some eigenstates of a multisystem observable do not factorize , joint measurements are necessary to measure it .in any case , the realisation of a global observable is based on the solution of its eigenproblem . here , a general solution to the eigenproblem of the generalized pauli operators is presented . in the hilbert - schmidt space of operators acting on vectors in a hilbert space of dimension ,one can always find a basis set of unitary operators .it has been shown that one can construct such a set using the following relation : where the action of the two operators on the right - hand side , on the eigenvectors of operator , , is defined by : with the number is a primitive complex root of unity , whereas the addition , here , is taken modulo . unless explicitly stated all additionsare taken modulo .the operators are called generalized pauli operators as for they reduce to standard pauli operators .they share some features with them .the matrix of any , written in the basis , has only non - vanishing entries , one per column and row : the only non - vanishing element of the first column , a `` 1 '' , appears in the row ( recall that ) .generally , the matrix elements of operator , {rm} ] , where is the kronecker delta .since every is unitary it can be diagonalized : where is a unitary matrix the columns of which are eigenstates of , , and is a diagonal matrix with entries being eigenvalues of , denoted by .the form of {rm} ] ( the symbol {d_0} ] . to calculate s one divides by , and denotes the integer part of this division by .thus , one can write . the integers and can have common factors , and the fraction may be simplified to an irreducible form .thus , for and ( and any multiple of ) equals zero .for the values of repeat themselves .if one takes an integer and computes the value of for : {d_0 } = [ [ d_0f]_{d_0}+[xf]_{d_0}]_{d_0 } = [ xf]_{d_0}$ ] , it is the same as for ( we have used the properties of addition in modulo calculus ) .thus there are different values of , or orthogonal states in the decomposition of every .moreover , for the value of again equals zero , i.e. each state ( one of distinct states ) appears in exactly the same number of times .this gives the number of orthogonal states associated with any given , which will be denoted as .since for different vectors are orthogonal , the states of subsystem `` 1 '' , associated with the same must be orthogonal .notice that factorizes into , and this is a general property of an operator .one can introduce an integer to enumerate distinct states of subsystem `` 0 '' .in a similar way , for a fixed state , one can enumerate orthogonal states of subsystem `` 1 '' with an integer .since , it can be decomposed within the new variables and as : within this decomposition every state , into which the eigenvectors are decomposed , ( [ eigenvectors ] ) , can be written as . to find its base- one needs to divide by , and extract integer and modulo parts . since one finds that is an integer , or equivalently is a multiple of .that is , does not contribute to the modulo part , and one can write : let us summarize the parameterization just described . 
in the decomposition of any one finds distinct states of subsystem `` 0 '' , , with . in turn , for a fixed , there are distinct states of subsystem `` 1 '' , . within this parameterizationany eigenstate has the following form : where the coefficients denote the phase of coefficients in eq .( [ eigenvectors ] ) . to understand the structure of the eigenbasis take for a fixed state in eq .( [ eigen_decomp_subsyst ] ) , the state of subsystem `` 1 '' with which it is associated , namely : it will be shown that within the same eigenvector , any two states and , for , are either _ orthogonal _ or _ the same_. a similar result holds for the states of different eigenvectors , with the same value of .let us first consider states of subsystem `` 1 '' within the same eigenvector . their scalar product , ,is given by : since for different s the states are shifted , the scalar product is either equal to zero , if the individual states involved are orthogonal , or it is equal to the sum of terms : .using explicit form of the coefficients one finds that the scalar product is proportional to : where the only relevant terms are given , involving in the exponent the products of and . since and , right - hand side equals to the kronecker delta : with .the states of subsystem `` 1 '' are either orthogonal or the same ( up to a global phase , which can be put to multiply them ) .if the kronecker delta is equal to one , one has ,i.e. a vector of subsystem `` 1 '' is multiplied by a superposition of corresponding s , with coefficients defined by ( [ eigen_decomp_subsyst ] ) , respectively multiplied by the phase . since different sare orthogonal , every vector can be written as a superposition of bi - orthogonal product states . in other words , one has a schmidt decomposition of the eigenvectors .moreover , for different eigenvectors , the states and , which correspond to the same state of subsystem `` 0 '' , are also either orthogonal or the same .their scalar product involves scalar products . for different eigenvectors the states be shifted , and one has : ^{k_1 d_0}.\ ] ] since the product of eigenvalues , , is equal to , one has : with .notice that the last kronecker delta does not depend on .e.g. , if for some one finds that vectors and are orthogonal , then the same relation holds for any other , i.e. all the eigenstates have exactly the same number of terms in the schmidt form ( the same schmidt number ) . if all the states in the decomposition of are the same ( up to a global phase ) as those entering . in this casethe coefficients which multiply products make the two eigenvectors orthogonal . to conclude , given that only individual measurements on subsystems are available to an experimenter , she / he can learn from above considerations whether it is possible to measure a generalized pauli operator defined on the whole system ( of two components ) .let us apply the developed formalism . consider a two - bases quantum cryptography protocol with higher - dimensional systems , as described in ref . .one has a qudit randomly prepared in a state of a certain basis , or of another basis , which is unbiased with respect to the first one .the measurement basis is also randomly chosen between these two .interestingly , if a qudit is composed of two subsystems , the measurements involved in the protocol do not require any joint actions .the two mutually unbiased bases can be chosen as the eigenbases of and operators . 
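Before working through the cryptography example, the criterion just described can be explored numerically for small dimensions. The sketch below (Python/NumPy) assumes the usual clock-and-shift convention for the operators written here as Z and X, builds X^a Z^b for a composite qudit of dimension d = d0·d1, and reports the Schmidt rank of each eigenvector with respect to the d0×d1 splitting, so that the statement that all eigenvectors share the same Schmidt number — and hence whether joint measurements are needed — can be checked directly.

```python
# Clock-and-shift construction (assuming the convention Z|k> = w^k |k>,
# X|k> = |k+1 mod d>) and the Schmidt rank of the eigenvectors of X^a Z^b with
# respect to a d0 x d1 splitting of the composite qudit.
import numpy as np

def clock_shift(d):
    w = np.exp(2j * np.pi / d)
    Z = np.diag(w ** np.arange(d))
    X = np.roll(np.eye(d), 1, axis=0)         # X e_k = e_{k+1 mod d}
    return X, Z

d0, d1 = 2, 3
d = d0 * d1
X, Z = clock_shift(d)
print("Z X == w X Z:", np.allclose(Z @ X, np.exp(2j * np.pi / d) * X @ Z))

a, b = 1, 1                                    # which operator X^a Z^b to study
S = np.linalg.matrix_power(X, a) @ np.linalg.matrix_power(Z, b)
_, vecs = np.linalg.eig(S)

ranks = []
for k in range(d):
    psi = vecs[:, k].reshape(d0, d1)           # composite index = d1*j0 + j1
    sv = np.linalg.svd(psi, compute_uv=False)
    ranks.append(int(np.sum(sv > 1e-10)))      # Schmidt number of eigenvector k
print("Schmidt numbers of the eigenvectors:", ranks)
```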
using the above construction to one immediately finds , for arbitrary dimension , the well - known fourier relation between the and eigenbases : let us define the eigenbasis of a global operator as : where , and , denote the states of subsystems `` 0 '' and `` 1 '' , respectively . within this definitiona measurement of the global observable is equivalent to individual measurements on the components .these individual measurements reveal the values of and , and the eigenvalue of is [ due to eq .( [ s_definition ] ) ] . to measure one uses the definition ( [ definition ] ) and the fact that , and finds that : where we have used the symbol to stress the factorization of this state . for the state of subsystem`` 1 '' reads : since , see ( [ alpha ] ) , and a measurement on this subsystem in the basis : reveals the value of .the value of can be measured once is known .a measurement in the basis : on the subsystem `` 0 '' reveals the value of . in this wayall values of can be measured using individual measurements only , where the measurement on subsystem `` 0 '' depends on the outcome of the measurement on subsystem `` 1 '' ( feed - forward technique ) .another application utilizes the fact that the operators form a basis in a hilbert - schmidt space , and thus can be used in quantum tomography . quantum tomography ( reconstruction of a density matrix )aims at an estimation of an unknown quantum state .the tomography of qubits was described in . soon after , the generalization to higher - dimensional systems was given in .the approach described there is based on hermitian operators . herewe follow the unitary operators approach , and explicitly present , in the next section , suitable devices to perform tomography of polarisation - path qudits .since qudit operators form a basis in the hilbert - schmidt space , they uniquely describe an arbitrary state of a qudit : where for normalisation as all operators are traceless , except the identity .tomography means to establish ( measure ) all of the coefficients . since the operators have the spectral decomposition , the coefficients can be written as : the eigenvectors form an orthonormal set , and the trace gives the probability , , to obtain the outcome in the measurement of on the system prepared in the state .finally , to perform tomography one needs to build the devices capable to measure , and collect data to estimate probabilities ( relative frequencies ) of different outcomes , .we focus on measurement devices for polarisation - path qudits .although the general requirements for a measurement involve feed - forward and joint operations on subsystems , there are certain physical realisations of composite qudits which incorporate these requirements in a simple way .the polarisation - path qudit is an example .there , a qudit is encoded in a polarized photon , which has many possible propagation paths .first , we explicitly present devices capable to measure all operators in the simplest case of two paths .next , the setups for any number of paths are discussed .consider a polarized photon with two accessible paths .its state is described in a four dimensional hilbert space , i.e. there are different operators to measure ( we put from the very definition ) .however , some of them commute ( contrary to the qubit case ) and the measurement of one of them reveals the values of the others . from the definition ,the eigenstates of are given by : where subsystem `` 0 '' is a polarisation of a photon , and subsystem `` 1 '' is a path .e.g. 
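The tomography relation just described amounts to an expansion of the density matrix in an operator basis that is orthogonal in the Hilbert–Schmidt sense. The sketch below (Python/NumPy) uses the clock-and-shift operators of the previous sketch and a randomly generated density matrix standing in for the unknown state; the d² "measured" coefficients are the traces tr(ρU†), from which ρ is rebuilt exactly.

```python
# State tomography from the d^2 clock-and-shift operators U_{ab} = X^a Z^b:
# they are orthogonal in the Hilbert-Schmidt inner product, tr(U^+ V) = d delta,
# so rho = (1/d) sum_{a,b} tr(rho U_{ab}^+) U_{ab}.
import numpy as np

d = 4
w = np.exp(2j * np.pi / d)
Z = np.diag(w ** np.arange(d))
X = np.roll(np.eye(d), 1, axis=0)

rng = np.random.default_rng(3)                 # a random "unknown" state
G = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
rho = G @ G.conj().T
rho /= np.trace(rho)

rho_rec = np.zeros((d, d), dtype=complex)
for a in range(d):
    for b in range(d):
        U = np.linalg.matrix_power(X, a) @ np.linalg.matrix_power(Z, b)
        coeff = np.trace(rho @ U.conj().T)     # the "measured" coefficient
        rho_rec += coeff * U / d

print(np.allclose(rho, rho_rec))               # True
```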
denotes a horizontally polarised photon in the path .the index inside the two - level kets denotes the fact that they are chosen as the eigenstates of the individual operators , i.e. . the device that measures simply checks which polarisation a photon has in a certain path .this can easily be achieved with polarizing beam - splitters . moreover, the same device also measures the values of and , as these operators commute with .their eigenvalues are powers of the eigenvalues . interestingly , the observables and can be measured in a similar way . after expressing the eigenvectors of , say , in the basis , and with definitions ( [ ququat ] ) ,one finds : is the eigenbasis of the individual operator , . to measure this observable the paths meet on a beam - splitter ( which gives a phase to the reflected beam ) where different eigenstates are directed into different output ports , followed by polarizing beam - splitters .the observable ( and its powers ) can be measured individually with an additional feed - forward .also and are measurable in this way . to see how the feed - forward method is realized ,let us study the observable .its eigenvectors read : where the index denotes the eigenbasis of the individual operator , given by .depending on the outcome of the path measurement in the basis , polarisation is measured in the or basis . however ( here comes the beauty of the approach utilizing the paths ) , appropriate phase and a beam - splitter drive different path eigenstates into different output ports of the beam - splitter . in this way feed - forwardis not needed .it is now enough to put polarisation checking devices behind the proper outputs of the beam - splitter ( see fig .[ sx ] ) . , for .the phase shift ( ps( ) ) in the path and the beam - splitter ( bs ) perform the path measurement , .the path state goes to the upper arm where the polarisation is measured in the basis with the polarizing beam - splitter which transmits ( denoted as pbs45 ) . in case of the path state the photon goes to the lower arm , where its polarisation component is phase shifted by ( pps( ) ) .next , the photon enters pbs45 , and is detected in one of its outputs .the eigenvalues corresponding to clicks of each detector are also written . ]the eigenstates of the last five observables are maximally entangled states of subsystems . some of these observables , to keep the spectrum in the domain of fourth roots of unity , need to be multiplied by .take as an example operator in the form .its eigenstates are given by : to distinguish between these states one needs to build an interferometer like the one in the fig .[ sxsz ] . , for .this setup , which measures the operator ( with ) , distinguishes maximally entangled states of paths and polarisations .first , with the phase shift ( ps( ) ) and the beam - splitter ( bs ) , the eigenstates are converted into eigenstates . next , the phase ( ps( ) ) is applied in the lower arm , where is directed . in the upper arm polarizationis rotated ( with the plate ) , such that in both arms it is the same .finally , specific clicks behind the beam - splitter and polarising beam - splitters distinguish the states ( [ sxsz_eigen ] ) . ] the same setup measures and , which commute with .finally , when different phase shifts are used , this setup also measures the remaining and observables . 
to sum up , the most involved device , used in the measurements of generalized pauli operators on a composite qudit encoded in two paths and polarization of a photon , involves mach - zehnder interferometer with a polarization rotator in one arm , followed by polarizing beam - splitters ( fig .[ sxsz ] ) .most of the observables are realizable with a single beam - splitter followed by polarizing beam - splitters .generally , it is possible to perform arbitrary measurement on polarized photons with many , , accessible paths . with polarising beam - splitters in each propagation path one transforms initial polarisation - path state into a double - number - of - paths state , in dimensional hilbert space( each polarising beam - splitter generates two distinct spatial modes ) .according to ref . one can always realize a unitary which brings the states to the states of well - defined propagation direction .thus , detectors monitoring these final paths distinguish all the eigenvectors .higher - dimensional quantum systems can find many applications , both in foundations of physics and in applied quantum information .a method of construction of qudits , studied here , is to compose them of other , lower dimensional , subsystems .in such a case , if a global observable has some entangled eigenvectors , its measurement naturally requires joint actions on subsystems .if eigenvectors factorize , the observable is measurable individually , sometimes with an additional feed - forward .thus , in order to design a setup capable to measure an observable , its eigenproblem must be solved . here, the eigenproblem of the unitary generalizations of pauli operators is solved , for arbitrary dimensions , and schmidt decomposition of the eigenvectors , for qudits composed of two components , is derived . using these results quantum cryptography with two bases , operating on a two - component qudit ,is shown not to involve any joint measurements .finally , simple optical devices , capable to measure all generalized pauli operators on polarisation - path qudits , are presented .these experimentally feasible devices allow full state tomography . in case of two different paths ,the most complicated device is a mach - zehnder interferometer , with a polarisation rotator in one arm , followed by polarizing beam - splitters .the author is extremely grateful to professor marek ukowski for useful comments .marcin wieniak is also gratefully acknowledged .the work is part of the mnii grant no .1 p03b 049 27 and the eu framework programme qap ( qubit applications ) contract no .the author is supported by the foundation for polish science .a. zeilinger , h. j. bernstein , d. m. greenberger , m. a. horne , and m. ukowski in _ quantum control and measurement _ , edited by h. ezawa and y. murayama ( elsevier , amsterdam , 1993 ) .a. zeilinger , m. ukowski , m. a. horne , h. j. bernstein , and d. m. greenberger in _ quantum interferometry _ , edited by f. demartini and a. zeilinger ( world scientific , singapore , 1994 ) .chen , j .- w .pan , y .- d .zhang , .brukner , and a. zeilinger , phys .lett . * 90 * , 160408 ( 2003 ) .t. yang , q. zhang , j. zhang , j. yin , z. zao , m. ukowski , z .- b .chen , and j .- w .pan , phys .lett . * 95 * , 240406 ( 2005 ) . c. cinelli , m. barbieri , r. perris , p. mataloni , and f. de martini , phys .lett . * 95 * , 240405 ( 2005 ) .j. schwinger , proc .ac . sci . * 46 * , 570 ( 1960 ) .d. i. fivel , phys .lett . * 74 * , 835 ( 1995 ) .d. 
gottesman, in _quantum computing and quantum communications: first nasa international conference_, edited by c. p. williams (springer-verlag, berlin, 1999). a. o. pittenger and m. h. rubin, phys. rev. a *62*, 032313 (2000). n. j. cerf, s. massar, and s. pironio, phys. rev. lett. *89*, 080402 (2002). w. son, j. lee, and m. s. kim, phys. rev. lett. *96*, 060406 (2006). j. lee, s.-w. lee, and m. s. kim, phys. rev. a *73*, 032316 (2006).
|
we study measurements of the unitary generalization of pauli operators. first, an analytical (constructive) solution to the eigenproblem of these operators is presented. next, in the case of two subsystems, the schmidt form of the eigenvectors is derived to identify measurements which are easy to implement. these results are utilized to show that quantum cryptography with two bases, when operating on a two-component qudit, can be realized with measurements on individual subsystems assisted by classical communication. we also discuss feasible devices which perform tomography of polarisation-path qudits.
|
i first became aware that there is a deep problem at the heart of physics in a first year chemistry class. the instructor began his very first lecture by writing an equation on the board: \[ i\hbar\frac{\partial}{\partial t}\psi(r,t)=\left[-\frac{\hbar^2}{2m}\nabla^2+V(r,t)\right]\psi(r,t),\label{eq:schrodinger}\] and asked if anyone knew what this was. when no one did, he continued by saying that this was the schrödinger equation, the fundamental equation of quantum mechanics, and that chemistry, biology and everything around us is a consequence of this equation. he then concluded (perhaps hyperbolically) by saying that the only problem is that no one knows how to solve it in general and for this reason we had to learn chemistry. for me, this lesson raised an important question: how can we be certain that the schrödinger equation correctly describes the dynamics of these systems if solving it is computationally intractable? this is the central question of this article, and in the course of addressing it we will not only challenge prevailing views of experimental quantum physics but also see that computer science is fundamental to the foundations of science. the essence of the problem is that relatively simple quantum systems, such as chemicals or spin lattices, seem to be performing tasks that are well beyond the capabilities of any conceivable supercomputer. this may come as a surprise since the schrödinger equation is a linear partial differential equation that can be solved in time polynomial in the dimension of the vector $\psi$, which represents the ``quantum state'' of an ensemble of particles. the problem is that the dimension of the vector grows exponentially with the number of interacting particles. this means that even writing down the solution to the schrödinger equation is exponentially expensive, let alone performing the operations needed to solve it. richard feynman was perhaps the first person to clearly articulate that this computational intractability actually poses an opportunity for computer science. he suspected that these seemingly intractable problems in quantum simulation could only be addressed by using a computer that possesses quantum properties such as superposition and entanglement. this conjecture motivated the field of quantum computing, which is the study of computers that operate according to quantum mechanical principles. the fact that quantum computing has profound implications for computer science was most famously demonstrated by shor's factoring algorithm, which runs in polynomial time whereas no polynomial time algorithm currently exists for factoring. this raises the possibility that a fast scalable quantum computer might be able to easily break the cryptography protocols that make secure communication over the internet possible. subsequently, feynman's intuition was shown to be correct by seth lloyd, who showed that quantum computers can provide exponential advantages (over the best known algorithms) for simulating broad classes of quantum systems. exponential advantages over existing algorithms have also been shown for certain problems related to random walks, matrix inversion and quantum chemistry, to name a few. at present, no one has constructed a scalable quantum computer. the main problem is that although quantum information is very flexible it is also very fragile. even looking at a quantum system can irreversibly damage the information that it carries. as an example, let's imagine that you want to see a stationary electron.
in order to do so, you must hit it with a photon. the photon must carry a substantial amount of energy (relative to the electron's mass) to measure the position precisely, and so the interaction between the photon and electron will impart substantial momentum, thereby causing the electron's momentum to become uncertain. such effects are completely negligible for macroscopic objects, like a car, because they have substantial mass; whereas these effects are significant in microscopic systems. this behavior is by no means unique to photons and electrons: the laws of quantum mechanics explicitly forbid learning information about a _general_ quantum state without disturbing it. this extreme hypersensitivity means that quantum devices can lose their quantum properties on a very short time scale (typically a few milliseconds or microseconds depending on the system). quantum error correction strategies can correct such problems. the remaining challenge is that the error rates in quantum operations are still not low enough (with existing scalable architectures) for such error correction schemes to correct more error than they introduce. nonetheless, scalable quantum computer designs already operate close to this ``threshold'' regime where quantum error correction can lead to arbitrarily long (and arbitrarily accurate) quantum computation. what provides quantum computing with its power? quantum computing superficially looks very much like probabilistic computing. rather than having definite bit strings, a quantum computer's data is stored as a quantum superposition of different bit strings. for example, if the logical zero and one states are written as the vectors $[1,0]^t$ and $[0,1]^t$ (where $t$ is the vector transpose) then the equally weighted combination $([1,0]^t+[0,1]^t)/\sqrt{2}$ is a valid state for a single quantum bit. this quantum bit or ``qubit'' can be interpreted as being simultaneously 0 and 1 with equal weight. in this case, if the state of the qubit is measured (in the computational basis) then the user will read $0$ with probability $1/2$ and $1$ with probability $1/2$. quantum computers are not just limited to having equal ``superpositions'' of zero and one. arbitrary linear combinations are possible and the probability of measuring the quantum bit string in the quantum computer to be $x$ (in decimal) is $|\psi_x|^2$ for any unit vector $\psi$ of amplitudes. therefore, although quantum states are analog (in the sense that they allow arbitrary weighted combinations of bit strings) the measurement outcomes will always be discrete, similar to probabilistic computing. the commonalities with probabilistic computation go only so far. quantum states are allowed to be complex vectors (i.e., $\psi\in\mathbb{C}^{2^n}$ for $n$ qubits) with the restriction that $\psi^\dagger\psi=1$ (here $\dagger$ is the conjugate transpose operation); furthermore, any attempt to recast the quantum state as a probability density function will result (for almost all ``pure'' quantum states) in a quasi probability distribution that has negative probability density for certain (not directly observable) outcomes. the state in a quantum computer is therefore fundamentally distinct from that of a probabilistic computer. in essence, a quantum computer is a device that can approximate arbitrary rotations on an input quantum state vector and measure the result.
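The amplitude description and the measurement statistics just described are easy to emulate for a few qubits. The following sketch (Python/NumPy) stores a normalized complex amplitude vector for n qubits and samples measurement outcomes with probability equal to the squared magnitude of each amplitude; the amplitudes themselves are arbitrary and only illustrative.

```python
# Emulating measurement of an n-qubit state vector: the bit string x (read as
# an integer) is observed with probability |psi_x|^2.
import numpy as np

rng = np.random.default_rng(5)
n = 3                                          # number of qubits
dim = 2 ** n

psi = rng.normal(size=dim) + 1j * rng.normal(size=dim)   # arbitrary amplitudes
psi /= np.linalg.norm(psi)                     # unit vector, as required

probs = np.abs(psi) ** 2                       # Born rule
samples = rng.choice(dim, size=10_000, p=probs)

for x in range(dim):
    print(f"outcome {x:0{n}b}: prob {probs[x]:.3f}, "
          f"observed frequency {(samples == x).mean():.3f}")
```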
Of course, giving a quantum computer the ability to perform _any_ rotation is unrealistic. A quantum computer is instead required to approximate these rotations using a discrete set of gates. The quantum gate set that is conceptually simplest consists of only two gates: the controlled-controlled-NOT (the Toffoli gate), which can perform any reversible Boolean circuit, and an extra quantum gate known as the Hadamard gate. This extra gate promotes a conventional computer to a quantum computer. The Hadamard gate is a linear operator that has the following action on the ``0'' and ``1'' quantum bit states:
\begin{equation}
H\begin{bmatrix}1\\0\end{bmatrix}=\frac{1}{\sqrt{2}}\left(\begin{bmatrix}1\\0\end{bmatrix}+\begin{bmatrix}0\\1\end{bmatrix}\right),\qquad
H\begin{bmatrix}0\\1\end{bmatrix}=\frac{1}{\sqrt{2}}\left(\begin{bmatrix}1\\0\end{bmatrix}-\begin{bmatrix}0\\1\end{bmatrix}\right).
\end{equation}
In other words, this gate maps a zero bit to an equal-weight superposition of zero and one, and maps a one bit to a similar vector but with an opposite sign on one of the components. The Toffoli gate is a three-qubit linear operator that, at a logical level, acts non-trivially on only two computational basis vectors (i.e. bit-string inputs): 110 and 111. All other inputs, such as 010, are mapped to themselves by the controlled-controlled-NOT gate. Using the language of quantum state vectors, the action of the Toffoli gate on the two-dimensional subspace that it acts non-trivially upon is
\begin{equation}
T\,e_{110}=e_{111},\qquad T\,e_{111}=e_{110},
\end{equation}
where $e_{b}$ denotes the computational basis vector labelled by the bit string $b$. This finite set of gates is universal, meaning that any valid transformation of the quantum state vector can be implemented within arbitrarily small error using a finite sequence of these gates. These gates are also reversible, meaning that any quantum algorithm that only uses the Hadamard and Toffoli gates and no measurement can be inverted. I will make use of this invertibility later when we discuss inferring models for quantum dynamical systems. Measurement is different from the operations described above. The laws of quantum mechanics require that any attempt to extract information from a generic quantum state will necessarily disturb it. Measurements in quantum computing reflect this principle by forcing the system to _irreversibly_ collapse to a computational basis vector (i.e. it becomes a quantum state that holds an ordinary bit string). For example, upon measurement the single-qubit state $\alpha[1,0]^T+\beta[0,1]^T$ collapses as
\begin{equation}
\alpha\begin{bmatrix}1\\0\end{bmatrix}+\beta\begin{bmatrix}0\\1\end{bmatrix}\;\mapsto\;
\begin{cases}[1,0]^T & \text{with probability } |\alpha|^2,\\[2pt] [0,1]^T & \text{with probability } |\beta|^2,\end{cases}
\end{equation}
where $[1,0]^T$ is the logical ``0'' state and $[0,1]^T$ is the logical ``1'' state. There clearly is no way to invert this procedure. In this sense, the act of measurement is just like flipping a coin. Until you look at the coin, you can assign a prior probability distribution to it either being heads or tails, and upon measurement the distribution similarly ``collapses'' to either a heads or tails result. Quantum computation thus combines elements of reversible computing with probabilistic computing, wherein the inclusion of the Hadamard gate introduces both the ability to have non-positive quantum state vectors and superposition (the ability for quantum bits to be in the state 0 and 1 simultaneously), and thereby promotes the system to a universal quantum computer. Although quantum computing is distinct from probabilistic computing, it is unclear whether quantum computation is fundamentally more powerful.
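Before turning to the complexity-theoretic comparison, the two gates just described can be written out explicitly; this is a minimal numerical check (the convention of indexing three-qubit basis states by reading the bit string as a binary number is mine):

```python
import numpy as np

# The two gates as explicit matrices. Three-qubit basis states are indexed by reading
# the bit string as a binary number, so 110 -> index 6 and 111 -> index 7.
H = np.array([[1.0,  1.0],
              [1.0, -1.0]]) / np.sqrt(2.0)

T = np.eye(8)
T[[6, 7]] = T[[7, 6]]                     # Toffoli: swap the 110 and 111 basis vectors

zero = np.array([1.0, 0.0])               # the logical "0" state, [1, 0]^T
print(H @ zero)                           # -> [0.7071, 0.7071]: equal-weight superposition
print(H @ (H @ zero))                     # -> [1, 0]: the Hadamard is its own inverse

e110 = np.zeros(8); e110[6] = 1.0
e010 = np.zeros(8); e010[2] = 1.0
print(int(np.argmax(T @ e110)))           # -> 7: the 110 input is flipped to 111
print(int(np.argmax(T @ e010)))           # -> 2: inputs like 010 are left untouched
assert np.allclose(T @ T, np.eye(8))      # the Toffoli gate is reversible as well
```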
using the language of complexity theory , it is known that the class of decision problems that a quantum computer can solve efficiently with success probability greater than , , obeys is the class of decision problems that can be efficiently solved with success probability greater than using a probabilistic turing machine , and is the class of problems that can be solved using polynomial space and ( possibly ) exponential time using a deterministic turing machine .it is _ strongly _ believed that since exponential separations exist between the number of times that quantum algorithms and the best possible conventional algorithms have to query an oracle to solve certain problems ; however , the precise relationship between the complexity classes remains unknown .the apparent difficulty of simulating large quantum systems creates an interesting dilemma for quantum computing : although true quantum computers are outside of our capabilities at present , it would appear that purpose built analog devices could be used to solve problems that are truly difficult for conventional computers .recent experiments by the nist group have demonstrated that a two dimensional lattice of ions with over two hundred and seventeen ions with programmable interactions can be created .this device can be thought of as a sort of _ analog quantum simulator _ in the sense that these interactions can be chosen such that the system of ions approximate the dynamics of certain condensed physics models known as ising models .on the surface , it would seem that this system of ions may be performing calculations that are beyond the reach of any conceivable supercomputer .if the quantum state for such a system were `` pure '' ( which roughly speaking means that it is an entirely quantum mixture ) then the state vector would be in .if the quantum state vector were expressed as an array of single precision floating point numbers then the resultant array would occupy roughly megabytes of memory .it is inconceivable that a conventional computer could even store this vector , let alone solve the corresponding schrdinger equation .a conventional computer does provide us with something that the analog simulator does not : the knowledge that we can trust the computer s results .the analog simulator has virtually no guarantees attached that the dynamical model that physicists believe describes the system is actually correct to a fixed number of digits of accuracy . finding a way to use computers to inexpensively certify that such quantum devices function properly not only remains an important challenge facing the development of new quantum technologies but it also poses a fundamental restriction on our abilities to understand the dynamics of large quantum systems ,provided that feynman s conjecture is correct .in essence , the central question of this paper is that of whether we can practically test quantum mechanical models for large quantum systems .although this is a question about physics , it can only be resolved by using the tools computer science . 
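The storage estimate quoted above is easy to reproduce; the sketch below assumes single-precision complex amplitudes (8 bytes each) and uses 217 spins only as a stand-in for the couple of hundred ions in the experiment:

```python
# Back-of-the-envelope storage cost of an n-spin pure state stored as single-precision
# complex amplitudes: 8 bytes per amplitude, 2**n amplitudes in total.
def state_vector_bytes(n_spins, bytes_per_amplitude=8):
    return bytes_per_amplitude * 2.0 ** n_spins

for n in (20, 40, 60, 217):
    b = state_vector_bytes(n)
    print(f"n = {n:3d}:  {b:9.2e} bytes  (~{b / 2**30:9.2e} GiB)")
```

Numbers of this size are one reason the question raised above is, at bottom, a computational one.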
in particular, we will see that formal costs can be assigned to model inference and that computational complexity theory can provide profound insights about the limitations of our ability to model nature using computers .the importance of computational complexity to the foundations of physics has only recently been observed .in particular , the apparent emergence of thermodynamics in closed quantum systems is , in some cases , a consequence of the cost of distinguishing the `` true '' quantum mechanical probability distribution from the thermodynamical predictions scaling exponentially with the number of particles in the ensemble .this is especially intriguing since it provides further evidence for the long held conjecture that there is an intimate link between the laws of thermodynamics in physics and computational complexity .the problem of modeling , or certifying , quantum systems is essentially the problem of distinguishing distributions .although the problem of distinguishing distributions has long been solved , the problem of doing so under particular operational restrictions can actually be surprisingly subtle ( even in the absence of quantum effects ) .to see this , consider the following problem of deciding whether a device yields samples from a probability distribution or from another distribution .the probability of correctly distinguishing which one of two possible discrete probability distributions and based on a single sample , using the best possible data processing technique , is given by the variational distance : given any non zero bias in this measurement , the chernoff bound shows that the success probability can be boosted to nearly by repeating the experiment a logarithmically large number of times . for typical quantum systems, is of order one which means that in principle such systems are easy to distinguish . in practice , the processing method that efficiently distinguishes from may be impractically difficult to find in quantum mechanical problems . thus distinguishing typical quantum systems can still be challenging . as a motivating example , assume that you have a dice with sides that is promised to either be fair ( i.e. ) or that the probability distribution describing the dice was itself randomly drawn from another distribution such that and the variance in the probabilities obeys .this problem is substantially different from the base problem of distinguishing two known distributions because is unknown but information about the distribution that is drawn from is known .a number of samples that scales at least as the fourth root of the dimension are needed to distinguish the distributions with high probability ( over samples and ) .this problem is not necessarily hard : a rough lower bound on the number of samples needed is .distinguishing typical probability distributions that arise from quantum mechanics from the uniform distribution is similar to the dice problem except that in large quantum systems `` the dice '' may have sides or more . in the aforementioned examplea minimum of roughly samples will be needed to distinguish a quantum distribution from the uniform distribution with probability greater than . although collecting samples may already be prohibitively large , the number of samples needed at least doubles for every qubits that are added to the quantum system .this means that distinguishing typical quantum probability becomes prohibitively expensive as quantum bits ( i.e. 
particles ) are added to the system , despite the fact that is of order one . in short ,quantum dynamics tend to rapidly scramble an initial quantum state vector , causing the predictions of different quantum mechanical models to become practically indistinguishable given limited computational resources .we will see below that complexity theory provides strong justification for the impracticality of distinguishing quantum distributions in general . in the followingi will assume that the reader is familiar with standard complexity classes , , the polynomial hierarchy , and so forth . for brevity , the term efficient will mean `` in polynomial time '' .scott aaronson and alex arkhipov provided the strongest evidence yet that quantum systems exist that are exponentially difficult to simulate using conventional computers in their seminal 2011 paper on `` boson sampling '' .their work proposes a simple experiment that directly challenges the extended church turing thesis , which conjectures that any physically realistic model of computation can be efficiently simulated using a probabilistic turing machine .their boson sampler device is not universal and is trivial to simulate using a quantum computer .the remarkable feature of this device is that it draws samples from a distribution that a conventional computer can not efficiently draw samples from under reasonable complexity theoretic assumptions . in particular ,an efficient algorithm to sample from an arbitrary boson sampler s outcome distribution would cause the polynomial hierarchy to collapse to the third level , which is widely conjectured to be impossible .a boson sampling experiment involves preparing photons ( which are bosons ) in different inputs of a network of beam splitters , which are pieces of glass that partially reflect and transmit each incident photon .an illustration of this device is given in ( a ) .the boson sampler is analogous to the galton board , which is a device that can be used to approximately sample from the binomial distribution .the main difference between them is that the negativity of the quantum state ( manifested as interference between the photons ) causes the resultant distribution to be much harder to compute than the binomial distribution , as we will discuss shortly . at the end of the protocol ,detectors placed in each of different possible output `` modes '' count the number of photons that exit in each mode .the sampling problem that needs to be solved in order to simulate the action of a boson sampler is as follows .let be the transition matrix mapping the input state to the output state for the boson sampling device .let us define to be the experimental outcome observed , i.e. 
a list of the number of photons observed in mode .then let be the matrix that has copies of the first row of , copies of the second and so forth .then the probability distribution over given is where is the permanent of a matrix , which is like a determinant with the exception that positive cofactors are always used in the expansion by minors step .it is important to note , however , that a quantum computer is not known to be able to efficiently learn these probabilities and in turn the permanent .permanent approximation is known to be complete , where is the class of problems associated with counting the number of satisfying assignments to problems whose solutions can be verified efficiently .such problems can be extremely challenging ( much harder than factoring in the worst cases ) , which strongly suggests that the distribution for these experiments will be hard to find for certain cases if .the work of aaronson and arkhipov use the hardness of permanent approximation ( albeit indirectly ) to show that even drawing a sample from this distribution can be computationally hard under reasonable complexity theoretic assumptions .in contrast , drawing a sample from the quantum device is easy ( modulo the engineering challenges involved in preparing and detecting single photons ) since it involves only measuring the number of photons that leave the boson sampling device in each of the possible modes .this provides a compelling reason to believe that there are simple physical processes that are too difficult to simulate using any conceivable conventional computer .although the boson sampling distribution can be hard to compute , it can be easily to distinguished from the uniform distribution because information is known about the asymptotic form of the distribution is known . despite this, there are other distributions that are easy to sample from but are hard to distinguish from typical boson sampling distributions .thus certifying boson samplers can still be a difficult .a more general paradigm is needed to approach certifying , or more generally learning , an underlying dynamical model .bayesian inference provides a near ideal framework to solve such problems .bayes theorem provides a convenient way to find the probability that a particular model for a system is true given an observed datum and a set of prior assumptions about the model .the prior assumptions about the model can be encoded as a probability distribution that can be interpreted as a probability that you subjectively assign to represent your belief that the true model is .constraints on the classes of models allowed can also be naturally included in this framework by changing the support of the prior distribution ( i.e. 
disallowed models are given zero a priori probability ) .the probability that model is correct , given the prior distribution and is where is a normalizing constant .the resultant distribution is known as the posterior distribution , which then becomes the prior distribution for the next experiment .this procedure is called `` updating '' and is called the likelihood function .there is a strong connection in bayesian inference between learning and simulation .this link is made explicit through the likelihood function , , whose computation can be thought of as a simulation of .it further suggests that if the likelihood function can not be computed efficiently then updating your beliefs about the correct model for the system , given an observation of the system , is also not efficient .this can make bayesian inference intractable .let s return to the problem of testing models for quantum systems . in this context, bayesian inference can be extremely difficult to perform using conventional computers because the likelihood function is evaluated by simulating a quantum system , which we have argued can be exponentially expensive modulo reasonable complexity theoretic assumptions .this raises an interesting philosophical question : does computational complexity place fundamental limitations on our ability to understand systems in nature ?although this may be difficult to show in general ( owing to the plethora of ways that a quantum system can be probed and the data can be analyzed ) , it is possible to show that this conjecture holds in certain limiting cases : there exist finite - dimensional models and for a quantum mechanical system and fixed observables such that models and can not be efficiently distinguished using bayesian inference based on outcomes of without causing the polynomial hierarchy to collapse and yet quantum computing can be used to efficiently distinguish these models using bayesian inference and the same .the proof of the first half of this claim is straight forward .let us consider and to be boson sampling experiments with different matrices and which are taken to be gaussian random matrices .let us further take to represent the measurement of the number of bosons in each of the modes .we saw that the likelihood of a particular outcome being observed involves calculating the permanents of and .efficient approximation of the permanent causes the polynomial hierarchy to collapse and hence , modulo reasonable complexity theoretic assumptions , bayesian inference can not be used to distinguish between these models efficiently using .a quantum computer can not be expected to inexpensively compute the likelihood function directly because that entails computing a permanent : a task for which neither a fast quantum or conventional algorithm is known .so how is it that a quantum computer can provide an advantage ?the answer is that a quantum computer can be used to change the problem to one that is easily decidable . the key freedom that a quantum computer provides is the ability to perform basis transformations that , in effect , allow to be transformed into a different measurement operator whose outcomes provide clear evidence for some models over others .the presence of a quantum computer ( that is coupled to the experimental system ) therefore gives us the ability to solve the problem of model distinction in a completely different way .boson samplers can be efficiently simulated by a quantum computer . 
indeed ,a boson sampler is simply a linear optical quantum computer without the ability to adaptively control quantum gates based on prior measurements ( which is needed for universality ) .this means that a universal quantum computer trivially has all of the computational power of a boson sampler .furthermore , closed system quantum dynamics are always be invertable ( unless the system is measured ) as we discussed in the context of quantum computing above .this means that we can scramble the initial state by running it through the boson sampler and attempt to approximately unscramble it using a quantum computer that simulates the inverse of the map that the boson sampler would perform .this inversion experiment is illustrated in ( b ) .the forward evolution under the experimental system and approximate inversion under the quantum computer of the initial quantum state vector is where is the quantum transformation generated by , is the quantum transformation generated by and is an unknown transformation promised to be either or .if , for example , then this protocol performs : meaning that we will always observe as output the same state that was used as the input .in contrast , if then this process leads to in fact , with high probability over the matrices and this transformation will lead to a vector that is nearly orthogonal to if is large , i.e. , where is the conjugate transpose operation . in boson sampling experiments , is taken to be a boson number eigenstate meaning that a measurement of the number of bosons in each mode will yield the same value every time .this means that we can easily use the same to determine whether maps back to itself , which should happen with probability if and with low probability if .thus the decision problem can be solved efficiently using bayesian inference in concert with quantum computing , which justifies the above claim .quantum computation can therefore be used to convert seemingly intractable problems in modeling quantum systems into easily decidable ones . also , by using a quantum computer in place of an experimental device we can assign a computational complexity to performing experiments .this allows us to characterize facts about nature as `` easy '' or `` hard '' to learn based on how computationally expensive it would be to do so using an experimental system coupled to a quantum computer ; furthermore such insights are valuable _ even in absentia of a scalable quantum computer_.a quantum computer is more than just a computational device : it also is a universal toolbox that can emulate any other experimental system permitted by quantum theory . put simply , if a quantum computer were to be constructed that accepts input from external physical systems then experimental physics would become computer science . when seen in this light, computational complexity becomes vital to the foundations of physics because it gives insights into the limitations of our ability to model , and in turn understand , physical systems that are no less profound than those yielded by the laws of thermodynamics .it is assumed that the reader is familiar with quantum computing in the following . in particular , andare provided for the benefit of experts in quantum computation .the details of these results can , nonetheless , be safely ignored by the reader .this approach to learning physics using quantum computers is simply an elaboration on the strategy used above in to . 
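A numerical sketch of this inversion test is given below; Haar-random unitaries stand in for the transformations enacted by the boson sampler and by the quantum computer, and the dimension is kept small so the overlap can be computed directly (none of this is specific to linear optics):

```python
import numpy as np

rng = np.random.default_rng(0)

def haar_unitary(d, rng):
    """Random unitary via QR of a complex Gaussian matrix, with phases fixed so the
    ensemble is (approximately) Haar distributed."""
    z = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
    q, r = np.linalg.qr(z)
    return q * (np.diag(r) / np.abs(np.diag(r)))

d = 512                                            # dimension of the toy state space
psi0 = np.zeros(d, dtype=complex); psi0[0] = 1.0   # a fixed computational-basis input

U_A = haar_unitary(d, rng)                         # "true" experimental evolution, model A
U_B = haar_unitary(d, rng)                         # rival hypothesis, model B

for name, V in [("invert with model A", U_A), ("invert with model B", U_B)]:
    out = V.conj().T @ (U_A @ psi0)                # forward evolution, then attempted inversion
    p_return = np.abs(psi0.conj() @ out) ** 2
    print(f"{name}: P(return to initial state) = {p_return:.4f}")
# Typical output: ~1.0000 for the correct model, of order 1/d (here ~0.002) for the wrong one.
```

The contrast between an order-one return probability and one of order 1/d is what makes the two hypotheses easy to tell apart.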
as an example of this paradigm ( see ) consider the following computational problem : assume that you are provided with an experimental quantum system that you can evolve for a specified evolution time .we denote the transformation that this system enacts by that can be applied to an arbitrary input quantum state for any .the computational problem is to estimate the true model for , denoted , in a ( potentially continuous ) family of models using a minimal number of experiments and only efficient quantum computations . in order to practically learn , we need to have a concise representation of the model .we assume that each is parameterized by a vector and explicitly denote a particular model as when necessary . as a clarifying example , let us return to the case of the schr " odinger equation with the case where : \psi(r , t)\label{eq : schrodinger2}.\ ] ] this problem , which corresponds to a quantum mechanical mass spring system ( harmonic oscillator ) , it may be that neither the mass of the particle nor the spring constant are known accurately . in this case , the model parameters can be thought of as a vector ] .such models are known in quantum mechanics as hamiltonians and they uniquely specify the quantum dynamical system : . for finite dimensional systemsthe rule is exactly the same : ( in units where ) , which means .the problem of inferring the model parameters , , can be addressed by using bayesian inference by following steps identical to those discussed in to in the context of boson sampling .the following theorem illustrates that the likelihood functions that can be efficiently computed using a quantum computer for many quantum systems .[ thm : likelihood ] let where is the model `` hamiltonian '' and we assume that 1 . for every , is a hermitian matrix whose non zero entries in each row can be efficiently located and computed .there exists such that every , for all . is an arbitrary real valued evolution time for the quantum system that is specified by the user . then for any such hypothetical model , the likelihood of obtaining a measurement outcome , , under the action of , where , can be computed with high probability within error using a number of accesses to the entries of the matrices amd that , at most , scales as and a number of elementary operations that scales polynomially in given that the initial state can be efficiently prepared using a quantum computer .the proof follows by combining two well known quantum algorithms .first the childs and kothari simulation algorithm shows that the action of can be simulated within error at most using accesses to the matrix elements of .the likelihood can then be estimated to within precision by repeating this algorithm times and measuring the fraction of times that outcome is observed .the overall error is at most since these errors are at worst additive . 
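The sampling step in this argument is ordinary Monte Carlo, and its 1/epsilon^2 cost can be seen in a purely classical mock-up (the outcome distribution below is invented; on the quantum computer the samples would come from repeatedly preparing the state, applying the simulated evolution and measuring):

```python
import numpy as np

rng = np.random.default_rng(1)

# Classical stand-in for "prepare, evolve, measure, repeat": outcomes are drawn from a
# fixed hypothetical distribution over d measurement results, and the likelihood of a
# particular outcome E is estimated as its observed frequency.
d = 16
p_model = rng.dirichlet(np.ones(d))       # invented outcome distribution of one model
E = 3                                     # the observed datum whose likelihood we want

for n_samples in (100, 10_000, 1_000_000):
    samples = rng.choice(d, size=n_samples, p=p_model)
    p_hat = np.mean(samples == E)
    print(f"N = {n_samples:>9,}:  estimate = {p_hat:.4f},  "
          f"error = {abs(p_hat - p_model[E]):.1e}  (statistical scale ~{1/np.sqrt(n_samples):.1e})")
```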
a faster quantum algorithm called amplitude estimation exists for such sampling problems .it can be used to estimate the likelihood using a number of queries that scales quadratically better with than statistical sampling and is successful at least of the time .the caveat is that we need to be able to reflect quantum state vectors about the space orthogonal to the marked state and also the initial state , which is efficient under the assumptions made above .both algorithms require a number of auxiliary operations that scale polynomially in which justifies the above claims of efficiency .this shows that the likelihood function can be efficiently approximated within constant error for a broad class of quantum systems using a quantum computer : quantum chemistry , condensed matter systems and many other systems fall into this class .now let us assume that possible hypothetical models are posited to describe that each satisfy the properties of .using requires simulations resulting in an overall cost of here can be as large as for reasonably large quantum systems and is often on the order of a few hundred .the cost will therefore likely be modest if is small , the hamiltonian matrix is sparse and ( ) is not unreasonably long . these requirements can be met in most physically realistic cases .these results show efficiency for fixed , but does need to be to prohibitively small to guarantee stability ? if , for example , is an outcome such that for hypothetical model then will have to be extremely small in order to compute the likelihoods to even a single digit of accuracy .cases where every model predicts are therefore anathema to bayesian inference .fortunately , inversion removes this possibility because it reduces each experiment to two effective outcomes .if then for all , which precludes the possibility of exponentially small likelihoods if is a computational basis vector ( or more generally if can be efficiently transformed to a computational basis vector using the quantum computer ) .conversely , it is well known that with high probability over models , in the limit as if and is taken to be a constant .in contrast if is small then it is trivial to see that almost all models and choices of will result in .thus : 1 .each experiment has two outcomes : either or where is in the orthogonal complement of .the variable can be chosen by the user and hence can be chosen to ensure that roughly half of the most likely models for the quantum system will approximately yield with high probability over models .these properties will typically allow bayesian inference to identify the correct model using a logarithmic number of experiments , similar to binary search .provides a concrete method that uses these principles to learn an approximate model for a quantum system . ' '' ''prior probabilities , , hypothetical model specifications , , number of samples used to estimate probabilities , total number of updates used , state preparation protocol for the initial states , protocol for implementing measurement operator such that is a povm element , an error tolerance and the number of votes used to boost the success probability of amplitude estimation .hamiltonian parameters such that .0.2em ' '' '' 0.2em . draw and from . . . measurement of using . result of amplitude estimation on , to within error . 
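The listing above is garbled in this copy, but the loop it describes (propose models, perform an experiment, estimate the likelihood, update the prior) can be sketched classically. The single-parameter likelihood below is an invented stand-in for the inversion experiment plus amplitude estimation, so this is a toy illustration of the update rule rather than the article's algorithm:

```python
import numpy as np

rng = np.random.default_rng(2)

def p_return(omega, t):
    # Probability that the inversion experiment returns the initial basis state when the
    # hypothesised frequency is omega; this closed form is an invented stand-in for the
    # quantity that would be estimated on the quantum computer.
    return np.cos(omega * t / 2.0) ** 2

omegas = np.linspace(0.0, 2.0, 401)          # discretised space of hypothetical models
posterior = np.full(omegas.shape, 1.0 / omegas.size)
omega_true = 1.2345                           # the "experimental system" being learned

for _ in range(200):
    t = rng.uniform(0.0, 10.0)                # crude experiment design: random evolution time
    datum = rng.random() < p_return(omega_true, t)           # simulated two-outcome datum
    likelihood = p_return(omegas, t) if datum else 1.0 - p_return(omegas, t)
    posterior = posterior * likelihood
    posterior /= posterior.sum()              # Bayes rule: the posterior becomes the next prior

mean = float(np.sum(posterior * omegas))
std = float(np.sqrt(np.sum(posterior * (omegas - mean) ** 2)))
print(f"posterior: omega = {mean:.3f} +/- {std:.3f}   (true value {omega_true})")
```

With a couple of hundred simulated two-outcome experiments the posterior typically concentrates to within a few hundredths of the true parameter.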
' '' ''the question is , how well does work in practice ?since we lack quantum hardware that can implement it directly , we can not assess s performance directly ; however , we can estimate how it ought to scale in practice using small numerical experiments .we will see that it works extremely well for the examples considered .a good benchmark of the performance of the algorithm is given in , wherein it is shown that this approach can be used to learn inter atom couplings of frustrated ising models ( which are used condensed matter physics to model magnetic systems ) .the hamiltonian matrix that describes these systems is where this notation means that the state is assumed to be a tensor product of two dimensional subsystems and each term in acts non trivially only on the and subsystems .this application is slightly more complex than the simple one discussed in because the model is continuous rather than discrete , but by discretising the problem and using a technique called resampling , this process essentially reduces to that in .shows that the number of updates needed to learn the couplings in an ising model that has qubits with pairwise interactions between every qubit in the system ( i.e. the interaction graph is complete ) where ^t$ ] .the error decreases exponentially with the number of updates for any fixed .in other words , there exist such that the error scales as .shows that the decay constants , , for these exponential curves scale as , where is the number of parameters in the model . in general not be a function of but for the data above , each qubit interacts with all other qubits in the system and hence .we find that can be learned to within error using a number of experiments that empirically scales as .the prior results allow us to estimate the complexity of inferring an ising model using a quantum computer as an experimental device .the algorithm requires a number of queries to that scales as and a number of elementary operations that scale as where is the number of models used in the discretization ( is known to scale sub exponentially with but experimentally and a constant value of seems to suffice ) and .these scalings are not only show that a formal complexity can be assigned to the experimental problem of learning a physical model for a large quantum system , but also suggest that quantum computing could make the learning problem tractable even for cases where .the combination of quantum computation and statistical inference may therefore provide our best hope of deeply understanding the physics of massive quantum systems that seem to be too complex to understand directly .returning now to our original question , we see that there is good reason ( based on complexity theoretic conjectures ) to suspect that solving the equations of quantum dynamics , such as the schr " odinger equation , may be intractable even for certain modestly large quantum systems .this would seem to suggest that some quantum systems exhibit dynamics that , for all practical purposes , can not be directly compared to the predictions of quantum theory .quantum computation offers the possibility that such problems can be circumvented by indirectly comparing the dynamics of the system with simulations performed on the quantum computer .this also provides a surprising insight : experimental quantum physics can be recast in the language of quantum computer science , allowing formal time complexities to be assigned to learning facts about nature and reinforcing the importance of computer science to 
the foundations of science . a quantum computerdoes not , however , allow us to address all of the deep problems facing us in physics : they are not known to be capable of simulating the standard model in quantum field theory , which is arguably the most successful physical theory ever tested .simulation algorithms are also unknown for string theory or quantum loop gravity , meaning that many of the most important physics questions of this generation remain outside of reach of quantum computing .much more work may be needed before physicists and computer scientists can truly claim that quantum computers ( or generalizations thereof ) can be used to rapidly infer models for all physical systems .andrew m childs , richard cleve , enrico deotto , edward farhi , sam gutmann , and daniel a spielman .exponential algorithmic speedup by a quantum walk . in _ proceedings of the thirty - fifth annual acm symposium on theory of computing_ , pages 5968 .acm , 2003 .joseph w britton , brian c sawyer , adam c keith , c - c joseph wang , james k freericks , hermann uys , michael j biercuk , and john j bollinger . engineered two - dimensional ising interactions in a trapped - ion quantum simulator with hundreds of spins ., 484(7395):489492 , 2012 .
|
Since its inception at the beginning of the twentieth century, quantum mechanics has challenged our conceptions of how the universe ought to work; however, the equations of quantum mechanics can be too computationally difficult to solve using existing computers for even modestly large systems. Here I will show that quantum computers can sometimes be used to address such problems and that quantum computer science can assign formal complexities to learning facts about nature. Hence, computer science should not only be regarded as an applied science; it is also of central importance to the foundations of science.
|
it is our privilege to contribute a paper to the memory of prof .shuichi tasaki . on a few occasions prof .tasaki encouraged us to publish the present work .we devote this work to him .the present work concerns the subject of quantum chaos .more precisely , the study of quantum systems whose corresponding classical systems exhibit chaotic behavior . prof .tasaki was one of the first authors to link irreversibility and chaotic dynamics using `` generalized spectral representations '' of time evolution operator ( also called frobenius - perron or fp operator ) .one interesting outcome of the work of prof .tasaki and others ( refs .- , as well as and references therein ) was the demonstration that eigenfunctions of the fp operator may have a fractal nature .this was shown using classical chaotic maps , such as the multi - bernoulli map or the multi - baker map .the present paper is motivated by this work of prof .tasaki and others . in classical systems, chaos may appear in isolated systems with few degrees of freedom as a consequence of stretching and folding dynamics .a simple model of this type of chaos is the baker map .the baker map acts on a unit square with coordinates representing the phase space .the square is squeezed down in direction ; it is stretched in direction by a factor of and then right half is put on top ( see fig .[ fig1 ] ) .these time - evolution rules are a simple example of stretching and folding dynamics .their consequence is a chaotic time evolution where any uncertainty in the initial condition grows exponentially with time until the uncertainty is spread over the whole phase space . despite its chaotic evolution ,the baker map is invertible and unitary . after applying the map any number of times, the resulting final phase - space distribution can be reverted to the initial distribution by application of the inverse map .an even simpler map is obtained by projecting the baker map onto the horizontal axis .this simpler map is called the bernoulli map ; in contrast to the baker map it is not invertible .the bernoulli map maps a number ] indicates the restriction in eq .( [ kdom ] ) . in the classical casethe generalized spectral representation involves derivatives . seeking a correspondence , in the quantum casewe introduce differences .we use the notations , \quad \alpha>0\ ] ] and for the differences . 
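As the next paragraph notes, these differences reduce to ordinary derivatives in the classical limit; a quick numerical check of that statement is below (the test function and the grid convention are mine, since the stripped formulas above do not fix them):

```python
import numpy as np

# Quick check that the scaled one-sided difference tends to the derivative as the grid
# of N points on [0, 1) is refined. B2(x) = x^2 - x + 1/6 is used purely as a convenient
# test function (it reappears later as a classical Bernoulli polynomial); B2'(x) = 2x - 1.
def B2(x):
    return x ** 2 - x + 1.0 / 6.0

x0 = 0.37                                     # an arbitrary interior point of [0, 1)
for N in (8, 64, 512, 4096):
    h = 1.0 / N
    scaled_diff = (B2(x0) - B2(x0 - h)) / h   # value here minus value at the point on the left
    print(f"N = {N:4d}:  scaled difference = {scaled_diff:.6f}   derivative = {2 * x0 - 1:.6f}")
```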
in the classical limit , with the condition , the differences go to derivatives , note that the difference at a point gives the function at that point minus the function at the point on the left .we could as well have taken the difference between the points and .as we will see , our definition of difference breaks the symmetry of the eigenfunctions around ( the mid point between the end points and ) that exists in the classical model .the symmetry is restored in the classical limit .the summation over in eq .( [ qgr1 ] ) may be written as where the summation in is similar to the left hand side of eq .( [ em1 ] ) .thus we have where by recursion we get \end{aligned}\ ] ] hence , the density can be written as ( see eqs .( [ nonupart ] ) and ( [ qgr1 ] ) ) } \frac{-e_{j , l } ( q_n ) } { \left(1-e_{j , l}^*(q_1)\right)^\alpha } \nonumber\\ & \times & \left[\rho^{(\alpha -1)}(q_{n-1},n)-\rho^{(\alpha -1)}(q_{\alpha -1},n)e_{j , l}^ * ( q_{\alpha-1})\right ] \nonumber\\ \end{aligned}\ ] ] let us now define the quantum bernoulli polynomials } \frac{-e_{j , l } ( q_n ) } { \left(1-e_{j , l}^*(1/n)\right)^\alpha } \ ] ] ( in appendix [ app : p ] we show that these are polynomials of degree ) . with this definition , and re - writing the last term of eq .( [ eq1d ] ) in terms of we arrive to \nonumber \end{aligned}\ ] ] this expression is the quantum version of the classical spectral representation written in the form of eq .( [ grepq ] ) .it is a quantum version of the euler - maclaurin expansion .note that , in contrast to the classical expansion , the quantum expansion is not symmetric with respect to the exchange of the points and ( corresponding to and ) .this is due to the difference operation we have used .it might be possible to use a symmetric difference operation , but this will not be investigated here .in this section we will obtain an expression describing the time evolution of density matrices produced by successive applications of the quantum bernoulli map .we will obtain an expansion analogous to eq .( [ grepqt ] ) , which will involve the time - evolved quantum bernoulli polynomials . strictly speaking our expansionwill not be a spectral representation of the quantum fp operator , because the states we will construct are not eigenstates of the fp operator . however , these states will approach the classical eigenstates in the classical limit .the time - evolution of the quantum bernoulli polynomials is obtained from eq .( [ ubqe ] ) as } \frac{1}{\mu_{j , l}(n , n ) } \frac{-{\tilde e}_{j-\tau , l } ( q_n ) } { \left(1-e_{j , l}^*(1/n)\right)^\alpha } \theta(j-\tau ) \end{aligned}\ ] ] changing and using eq .( [ etild ] ) this becomes } \frac{\mu_{j , l}(n , n)}{\mu_{j+\tau , l}(n , n ) } \frac{-{e}_{j , l } ( q_n ) } { \left(1-e_{j+\tau , l}^*(1/n)\right)^\alpha } \quad { \rm for}\ , 2^\tau \le n\end{aligned}\ ] ] and hereafter we consider the case . using and obtain where } \frac{\mu_{j-\tau , l}(n , n/2^\tau)}{\mu_{j , l}(n , n/2^\tau ) } \frac{e_{j , l } ( q_{n } ) } { \left(1-e_{j , l}^*(2^\tau / n\right)^\alpha } \end{aligned}\ ] ] are the time - evolved quantum bernoulli polynomials . in this expressionwe have ( see eq .( [ weight_s ] ) ) with .thus for even the time - evolved bernoulli polynomials in eq .( [ qfract ] ) have the same form as the initial polynomials in eq .( [ bqdef ] ) , except for the change . 
for oddthere is an explicit correction coming from the ratio of weight factors in eq .( [ qcorrodd ] ) .let us introduce the re - scaled variables then with we have where the `` quantum '' correction comes from the second line of eq .( [ qcorrodd ] ) .this correction will play an important role in the next section when we will discuss the quasi - fractal shape developed by the time - evolved quantum bernoulli polynomials . inserting eq .( [ qfract ] ) into eq .( [ eq1d ] ) we get the following expression for the time evolution of the density matrix , expressed in terms of the time - evolved quantum bernoulli polynomials : \end{aligned}\ ] ] this equation is the quantum analogue of eq .( [ grepqt ] ) . to see how the classical limit is reached , we write an alternative form of the quantum bernoulli polynomials + ( -1)^\alpha \exp\left[-2\pi i k ( n ' + \alpha/2)/n'\right]}{\left[2 in ' \sin(\pi k / n')\right]^\alpha } \nonumber \end{aligned}\ ] ] we may approximate provided that in this case the fourier components of the bernoulli polynomials are dominated by small components with . moreover let us assume that under these conditions the fourier components for small are independent of because appears only in the expression , which is equal to .this means that we can approximate and therefore , neglecting the correction in eq .( [ qfract2 ] ) , we get inserting this into eq .( [ qfract ] ) we get which corresponds to eq .( [ ubb ] ) .so the quantum bernoulli polynomials behave as the classical ones , i.e. , as decaying eigenstates of the fp operator .furthermore have ( with finite ) + ( -1)^\alpha \exp\left[-2\pi i k n'/n'\right]}{\left[2\pi i k \right]^\alpha } \end{aligned}\ ] ] the right - hand side is an expression of the classical bernoulli polynomial . at ,provided only small values of with contribute to the summation in eq .( [ eq1d ] ) , the discrete difference goes to the continuous derivative in the limit ( see eq .( [ dev2diff ] ) ) .we recover the classical euler - maclaurin expansion ( [ grepq ] ) .for and in eq .( [ qgsr ] ) we recover the generalized spectral representation in eq .( [ grepqt ] ) .we remark that the existence of quantum decaying states may be interpreted as a signature of quantum chaos .the decomposition ( [ qgsr ] ) shows that any density will approach equilibrium with the decay rates . for quantum bernoulli polynomials vanish identically and equilibrium is reached .we consider now the quantum corrections in the bernoulli map and the development of a quasi - fractals shape in the evolving quantum bernoulli polynomials .let us first now assume that ( in ) is even such that is an integer . keeping the assumptions( [ ass1 ] ) and ( [ ass2 ] ) , both and give a representation of the quantum bernoulli polynomials with different number of grid points , namely and .the point belongs to both grids . if both , then .after re - scaling by , the quantum bernoulli polynomials remain approximately constant at the points where is integer .moreover , as discussed in the previous section they are approximately equal to their classical counterparts .they behave as decaying eigenstates of the fp operator . in contrast , if is odd or more generally , if is not an integer for , then will not belong to the grid with points .we expect a deviation from the classical bernoulli polynomial .this deviation is due to the discretization of space , so it will give a quantum correction of the order of .in addition there will appear the quantum correction in eq .( [ qfract2 ] ) . 
writing with both and integers the condition that is integer is translated as .we denote the set of integers satisfying this condition as .we have at each time step , the quantum bernoulli polynomials act as quasi - eigenstates of the fp operator only at the points in the sets . at the other points we get deviations .the recursive nature of the sets in eq .( [ sets ] ) means that these deviations appear in a self - similar fashion .this gives the evolving bernoulli polynomials a quasi - fractal shape . in order to visualize this ,we have employed a computer program to calculate the exact evolution of densities under the quantum bernoulli map . as an example , at we take as the initial state the polynomial with and . at each step ,we re - scale the density by .if this were the classical bernoulli map , the graphs would remain unchanged for , because is an eigenstate of the fp operator with eigenvalue .but for the quantum map we have a different behavior .the result is shown in figure [ fig3 ] . for , width=384 ] for and , width=288 ] , width=288 ] we see the graphs developing the quasi - fractal shape mentioned above .we also notice that as increases the graphs are shifted .this is due to the term in the numerator of the right hand side of eq .( [ bqsin ] ) . provided the shift is negligible , but as increases the shift becomes noticeable .figure [ fig4 ] shows a zoom - in view of with , and .figures [ fig5]-[fig6 ] show further zoom - in views . .if we zoom - in this figure , the self - similarity disappears.,width=288 ] the self - similarity of the graphs stops when the space resolution is of the order of ( see figure [ fig6 ] ) . for this reason ,we call this evolving state a quasi - fractal . when , the quasi - fractal behaves as a true fractal .at the same time , when , the amplitude of the fractal deviations from the classical bernoulli polynomial gets smaller and the graph looks smoother .eventually , it just looks like the classical bernoulli polynomial ( see figures [ fig8 ] and [ fig7 ] ) . for and ,width=288 ] for and ( compare with fig .[ fig8 ] ) .as increases ( decreases ) , the curve looks smoother . at the same time, the self - similar pattern appears at smaler length scales.,width=288 ]we summarize the main results .we introduced the quantum bernoulli map by coupling the quantum baker map to an external environment that produces instant decoherence after each time step .we described the quantum bernoulli map in terms of decaying quasi - eigenstates of the frobenius - perron operator .these states are analogous to the classical bernoulli polynomials .we found conditions under which these quantum states approach the classical ones .moreover we found a quantum analogue of the generalized spectral representation of the classical bernoulli map .we also found a quantum analogue of the euler - maclaurin expansion . in this expansionthe classical bernoulli polynomials are replaced by their quantum counterparts ; derivatives are replaced by differences .we have suggested that a signature of quantum chaos may be the presence of decaying eigenstates ( or quasi - eigenstates ) like the ones we have described in this paper . one interesting finding is that even after decoherence , the quantum bernoulli map shows quantum corrections with respect to the classical bernoulli map .the quantum corrections give a quasi - fractal shape to the evolving quantum bernoulli polynomials .these corrections vanish in the classical limit . 
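For reference, the classical behaviour that these graphs are being compared against can be verified in a few lines. The sketch below applies the Frobenius-Perron operator of the classical doubling map to the first two Bernoulli polynomials and confirms that each application only rescales them, with no extra structure; the quantum corrections discussed above require the paper-specific quantum polynomials, which are not reproduced here:

```python
import numpy as np

# The Frobenius-Perron operator of the doubling (Bernoulli) map is
#   (U rho)(x) = [rho(x/2) + rho((x+1)/2)] / 2.
# The classical Bernoulli polynomials are its decaying eigenfunctions, U B_n = 2**(-n) B_n,
# so rescaling by 2**n after each step leaves the classical graph unchanged.
def fp_step(rho):
    return lambda y: 0.5 * (rho(y / 2.0) + rho((y + 1.0) / 2.0))

B1 = lambda y: y - 0.5
B2 = lambda y: y ** 2 - y + 1.0 / 6.0

x = np.linspace(0.0, 1.0, 1001)
for name, B, n in (("B1", B1, 1), ("B2", B2, 2)):
    rho, tau = B, 3
    for _ in range(tau):
        rho = fp_step(rho)
    err = np.max(np.abs(rho(x) - 2.0 ** (-n * tau) * B(x)))
    print(f"{name}: max |U^{tau} B - 2^(-{n}*{tau}) B| = {err:.2e}")   # ~1e-16
```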
at the same time , as the classical limit is approached , the quasi - fractal approaches a true fractal . in a sense , this fractal is hidden in the classical limit . to our knowledgethis is a feature chaotic quantum systems that has not received attention before .an open question is : do quasi - fractals like the ones we described here appear in other chaotic quantum systems ?self - similarity in quantum systems has been discussed in other contexts .for example in , self - similarity has been discussed in the context of non - linear resonances . have discussed singular spectra or fractal spectra of quantum systems , that appear , for example , with quasi - periodic lattices . ref . gives an interesting discussion on fractals generated by a quantum rotor .the self - similarity discussed in the present work is of a different origin .still , it would be worthwhile to investigate any connections with these other works .another question for future work is whether the quantum baker map can be described along the lines developed in this paper ( i.e. using quantum bernoulli polynomials and difference operators ) .a final point worth noting is that the model we have described may be realized as a set of shifting quantum bits subject to repeated measurements after each shift .the measurements cause instantaneous decoherence .if the measurement time interval is greater than the zeno time , the system decays exponentially .equations ( [ qfract ] ) and ( [ qfract _ ] ) manifest this property .however if the time interval between measurements approaches zero , we might obtain the quantum zeno effect .we speculate that repeated measurements will prevent the system from decaying despite the underlying chaotic dynamics .in this appendix we show that the functions } \frac{-e_{j , l } ( q_n ) } { \left(1-e_{j , l}^*(q_1)\right)^\alpha } \ ] ] are polynomials of degree in .let us first state two properties of these functions , which are easily proved : and ( for ) from the last equation we find that we can find using eq .( [ bpol3 ] ) together with eq .( [ bpol1 ] ) , which gives using we get in a similar way , using we find that we can continue in this way for showing that are polynomials of degree .moreover , recalling that and consulting a table of classical bernoulli polynomials , we find that for .similar relations must hold for with , because of eq .( [ clim ] ) .
|
The classical Bernoulli and baker maps are two simple models of deterministic chaos. On the level of ensembles, it has been shown that the time evolution operator for these maps admits generalized spectral representations in terms of decaying eigenfunctions. We introduce the quantum version of the Bernoulli map. We define it as a projection of the quantum baker map. We construct a quantum analogue of the generalized spectral representation, yielding quantum decaying states represented by density matrices. The quantum decaying states develop a quasi-fractal shape limited by the quantum uncertainty.
|
in this short and condensed article we are going to introduce structural tendencies which are effects of random adaptive evolution of complex systems .terminal modification and terminal predominance of addition are the main examples of these tendencies .the former one means that a random change of a system near system outputs has a higher probability of being accepted by adaptive condition than a change far from system outputs .the latter one means that near system outputs more additions of new nodes to network are accepted than removals of nodes in this place .these tendencies are an equivalent of naef s ` terminal modifications' and weismann s ` terminal additions' in the evolutionary biology .now these classic regularities are forgotten together with comparative embryology only due to lack of their explanation .therefore our investigations should have influence on biology .also growth of the network is one of such structural tendencies and there is no need to assume it .complexity has as many descriptions and definitions ( e.g. or more recent ) as different aspects and meanings .we define the complexity threshold during system growth as the phase transition to chaos . over this thresholdwe observe in a simulation the mechanisms of these structural tendencies .we use the term chaos as kauffman does .systems exhibit chaotic behaviour when after a small disturbance of system the damage statistically explodes and reaches a high level in equilibrium .damage is the difference between node output states in the disturbed and undisturbed systems .we estimate that the typical living or human - designed system grows under adaptive condition and is chaotic .we treat the gene regulatory network as a strange exception .to describe such a system we use kauffman s network and similar ones but not in the autonomous case .we do not use cellular automata ordered on lattice but a randomly structured network .it is a directed network where each node is influenced by other nodes or external inputs , and influences other nodes or external outputs .we study or .the distribution of number depends on network type , may be fixed or flexible in range starting from .each node influences others by signals which are deterministic functions of the node s input signals .the whole network has external input signals ( in this paper they are constant ) and external output signals ( in first step ) . system s fitness is defined as the number of output signals which agree with an arbitrarily defined sequence of ideal signals . in the network evolution changes are made randomly but they form adaptive evolution only if they do not reduce system fitness .kauffman places living systems in phase transition between order and chaos but he uses the boolean network - the case of kauffman network with two variants of signal .he uses the assumption that the two variants of signal have equal probability .we denote the number of equally probable variants of signal as . notethat using this parameter we know that these variants are equally probable and we will not repeat it every time . for more precisionwe assume in this all paper that the ` internal homogeneity' is the smallest . only for the typical ( ) big random systemcan exhibit ordered ( opposite to chaotic ) behaviour .if variants of signal are not equally probable ( i.e. we can not use description ) , chaos can be avoided also for more than two variants . 
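These regimes can be anticipated with the damage-multiplication coefficient discussed in the next section. The exact formula is stripped from this copy; the sketch below assumes the usual mean-field form, in which a node with k outputs whose recomputed output is random differs from the old one with probability (s-1)/s:

```python
# Mean-field damage-multiplication coefficient for a node with k outputs and s equally
# probable signal variants (assumed form: a changed input gives a new, random output,
# which differs from the old one with probability (s-1)/s, and all k outputs carry it).
def w(k: int, s: int) -> float:
    return k * (s - 1) / s

for s in (2, 3, 4):
    for k in (1, 2, 3):
        regime = "chaotic" if w(k, s) > 1 else ("critical" if w(k, s) == 1 else "ordered")
        print(f"s = {s}, k = {k}:  w = {w(k, s):.2f}  ({regime})")
# Only s = 2 gives a non-chaotic regime at k = 2; for s > 2 any k >= 2 yields w > 1,
# i.e. damage grows on average, matching the claim above.
```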
the mostly used case is also often not a realistic simplification .typically real alternatives , especially when there are two alternatives , are not equally probable .this expected inequality is described using probability for one alternative .we estimate that the typical occurrence of two unequally probable alternatives in descriptions of living and humane designed systems is an effect of our concentration on one of the possibilities which is special for us , and of collecting all remaining as second alternatives .such a view needs another description than using for statistical mechanism because there are more than two real alternatives . in the first step of our waywe introduce and we show that the parameter can not be substituted by others ( e.g. - see fig . [ f1 ] ) .typically is used for genome modelling ( e.g. ref . ) . for genetic regulatory networkwhere 1 is interpreted as active and 0 as inactive it seems adequate and gives results close to experimental data .however , when we are going to describe certain properties coded by genes or their mechanisms assessed using fitness we should remark that there are 4 nucleotide or 20 amino acids or other unclear spectra of alternatives , not only 2 , therefore or seems much more adequate .such strong simplification ( ) can only be used if it does not significantly changed results , but we show , that it would do it . stability or order in gene regulatory network results in properties assessed by natural selection in a long process where assumption of is probably not adequate .such a case described well by differs from one described by and in the statistical mechanism and its result .e.g. for extreme and small order is expected but not for . for system should be chaotic which our coefficient of damage propagation shows easily .however , to be chaotic a system also needs to be big enough . in the second stepour complexity threshold shows how big it should be .this depends on the network type .our complexity threshold can be easily detected in the reality and in the simulation . in the last , third stepwe investigate structural tendencies in the adaptive evolution of chaotic system ( above the complexity threshold on system size axis , with ) .in our model we define fitness , which is needed in adaptive evolution , only on the system external outputs , not on all the node states as in kauffman model .therefore our system in the third step is not an autonomous one ( , in simulation we use typically ) . in the investigation of complexity threshold in second stepwe also prefer such a form of the system to be appropriate for the third step . only in the first stepthe considered system is autonomous ( ) . in our fitnessthere are no local extremes . to keep fitness not maximal but high and constant the ideal vector of output signalsis changed accordingly .for simulation we use our simplified algorithm which allows us to obtain one particular vector of output signals instead of a periodic attractor .it gives correct answers for our statistical questions and practically allows to investigate the emerging structural tendencies .we suggest that an interpretation needs usually which keeps models in the chaotic area .kauffman started from famous ref ., used boolean network and . 
using such assumptions he concluded that the best place for systems to adapt is the ` phase transition between chaos and order ' where the ` structural stability ' occurs .structural stability can be understood as the ability to small changes ( see small change tendency in the end of ch.5 ) .this ability can be different in different areas of network .we differentiate between areas using distance to external outputs of network but kauffman can not do it using autonomous networks .he uses properties of the whole network , therefore he need order regime to obtain ability to small changes . the phase transition to order when occurs between and .the number of node inputs , which was typically fixed in considered networks , is equal to the average - number of node outputs .we also use constant , however , in more recent works the in - degree distribution and out - degree distribution are considered .the coefficient of damage propagation shows how many output signals of a node are changed on average if its input state is changed .it needs randomly defined deterministic functions of nodes .damage is a part of nodes with changed output state .system is big if it consists of large number of elements . when the damage avalanche is still small and the range of interactions spans a whole and big system then probability of more than one changed input signal is also small and damage is well approximated by as ( fig .[ f1 ] ) . in this critical time period describes the damage multiplication on one node .if then the damage should statistically grow and spread on a large part of a system .it is similar to the coefficient of neutron multiplication in a nuclear chain reaction - we have less than one in a nuclear power station , for values greater than one an atomic bomb explodes .note , that appears only if and .both these parameters appear here in their smallest , extreme values .the case is sensible for particular node but not as an average in a whole , typical , randomly built network , however we can find the case in ref . . for all other cases where or we have .the number of equally probable variants of signals is the next main parameter of a system , like kauffman s - number of element s inputs and - the ` internal homogeneity ' in boolean functions .these parameters define a system as chaotic or ordered .note that parameters and work in opposite direction when they differ from their typical value - the smallest one .higher causes chaos but higher allows to avoid chaos , however both of them are connected to the problem of equal probability of two variants of signal .they describe different aspects of this idealisation .the simple and intuitive coefficient may substitute two of them ( and ) in this role but this is only the first approximation .we have shown in fig .[ f1 ] different levels of damage equilibrium for different but the same .as we will show later , the type of network , i.e. distribution of node degree , and number of node in the system , also has an influence on a place of system on the chaos - order axis , and this influence depends on and in other way than . when the damage is still small the probability of its fadeout is not to be neglected .later , it practically can not fade out , cases of more than one changed input signal happened more often and the multiplication factor of damage decreases to one .damage reaches a stable level .see fig .[ f1 ] , which we have calculated in the way described in ref ., expanded to the case .e.g. 
for we obtain where , for small , we can neglect the second element . in the kauffman networks all outputs of a node transmit the same signal - it is the state of the node , the value of a function . to understand the coefficient of damage propagation, we must average by the nodes .it is much simpler and more intuitive if each output of a node has its own signal to transmit .in such a case , the function value is a -dimensional vector of signals . due to function uniformityit is useful to fix .i have introduced such a network in ref . where i have named it ` aggregate of automata ' .however , kauffman s formula gives an useful ability to differ and to investigate networks types which differ in distribution of node degree .expectations shown in fig .[ f1 ] are independent of the type of a kauffman network .we check them in simulation for a few types of networks .we denote the network type by two letters : ` ' - old , typical , ` random ' erds - rnyi ; ` ' - barabsi - albert scale - free network ; ` ' - single - scale ; ` ' - aggregate of automata ; ` ' - like , but uses kauffman function formula ( one output signal for all k outputs ) .the ` ' network does not grow - it is built for fixed .the ` ' and ` ' in order to add a new node need to draw links , which are broken and their beginning parts become inputs to the new node and the ending parts become its outputs ( fig .[ f2].1 ) . for ` ' new node connects with the node present in the network with equal probability for each one . for ` ' new node connects with another one with a probability proportional to its - node degree ( fig .[ f2].2 ) .we first draw one link for and and then we break them like for and to define one output and the first input . for typeat least one output is necessary to participate in further network growth . if , then only one input follows the rules but it is enough to obtain correct ( -number of node outputs treated as node degree ) distribution characteristic for these network types .damage spreading in scale - free networks has often been investigated ( e.g. ref . ) in nondirected networks .related studies based on complex computational networks have been conducted in ref . .a directed scale - free network was used in ref . preceded by ref . .these networks describe opinion agreement process and are not similar to kauffman networks .dynamics of boolean networks with scale free topology were studied by aldana and kauffman , now iguchi et al .they look for difference between the dynamics of ( here called : rbn ) and the scale - free random boolean network ( sfrbn ) .here =2 , flexible and are used , therefore their networks differ from our .we use our own simple algorithm for statistical simulation of damage in synchronous mode , where we only calculate nodes with changed input state , not as in classic method two systems - changed and undisturbed .we ignore the remaining input signals , they can be e.g. 
effects of feedback loops .each node output is calculated only once .we often do not use a concrete function for node either - if the input state is changed , then the output state is random .such an algorithm works fast and gives correct statistical effects .intuition behind this algorithm can be found when we imagine a network without feedbacks , where each node output is equal to the function value of current node inputs .in such a case for calculating node with changed input signal we can use the old input signals as the remaining ones .after a finite number of steps the process will stop .the damaged part will become a tree , and all the node states will be equal to function value of current node inputs as was the case at the beginning . in the case with feedbacks sometimes an already calculated node gets a damaged input signal for a second time , but for measuring the statistical effect only it is not necessary to examine its initiation for the next time . when the network achieves nodes we stop the growth and we start to initiate damage . as damage initiationwe change the output state in turn for each of the nodes .in the first few steps the damage can fade out , but , therefore on average the damage grows . laterthe damage is too large to fade out . during its growththere are less and less nodes which are not reached by damage yet and damage slows down then stops the growth . in our simplified algorithmit looks like fadeout on nodes , which have been already affected by damage but it corresponds with a stable equilibrium level , which appears at the end of curves in fig .[ f1 ] .we have simulated the cases described as : 2,3 ; 4,2 ; 4,3 .each simulation consists of 600 000 damage initiations .networks are autonomous ( without external inputs and outputs ) .when we compare effects of simulation to the theoretical curve in fig .[ f1 ] for e.g. and 2,3 or 4,2 ( fig .case 2,3 ) we can identify a few independent fractions of summarized processes which have different tempo of damage growth . in our simplified algorithmthere are no data for time steps later than ` pseudo fadeout ' , therefore we can observe a slower group of processes . for and tempo is strongly connected with the time when the damage reaches the hubs . for andobviously and there are no hubs and obtained curves are much more similar to the theoretical one shown in fig .[ f1 ] . in the distribution of damage fadeout in time dependencythere are two peaks : one for real fade out in the first steps and the second one for ` pseudo - fade out ' . for networks with wide range of like and with great fraction of ( near ) the probability of early fade outis much greater , especially for small . herehubs are present , they decrease the average for the remaining nodes which helps the damage to fade out before the first hub is achieved . 
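the simplified damage-spreading algorithm described above (only nodes whose input state has changed are processed, each node at most once, and a changed input produces a new random output) can be sketched as a breadth-first pass over the directed network; the adjacency-list representation and the rule that a redrawn state differs from the old one with probability (s-1)/s are our assumptions for illustration, not the authors' actual code:

```python
# Sketch of the simplified damage-spreading simulation (assumptions: the
# network is given as adjacency lists node -> nodes reading its state, and a
# node whose input changed takes a new random state that differs from the old
# one with probability (s - 1) / s; node functions are not stored explicitly).
import random

def spread_damage(out_neighbors, s, start_node, rng=random):
    """Return the set of nodes whose state changed after damaging start_node."""
    damaged = {start_node}            # every node is evaluated at most once
    frontier = [start_node]
    while frontier:
        nxt = []
        for node in frontier:
            for nb in out_neighbors[node]:
                if nb in damaged:     # 'pseudo fadeout' on already damaged nodes
                    continue
                if rng.random() < (s - 1) / s:
                    damaged.add(nb)
                    nxt.append(nb)
        frontier = nxt
    return damaged

def avg_damage(out_neighbors, s):
    """Average normalised damage size over one initiation per node."""
    n = len(out_neighbors)
    total = sum(len(spread_damage(out_neighbors, s, v)) for v in range(n))
    return total / (n * n)            # each initiation contributes len(damaged)/n
```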
at the opposite end ( only of kauffman mode )lies 4,3 where and hubs are absent and is high and equal for all nodes .in such a case , the early fadeout is very small and most of the damage grows up to the equilibrium level .different tempo of damage spreading causes the wideness of these peaks and lack of sharp boundary between them .these peaks are much narrower and separated by a long period of exact zero frequency if distribution of damage fadeout is shown in damage size variable .in such a description the second peak shows exactly the equilibrium level and only the case 2,3 appears extreme .different network types exhibit significant differences in the behaviour of damage spreading .this appears especially near boundary of chaos and order , and is more intensive for . in fig .[ f3 ] we compare average damage size for initiation .it also contains the phenomenon of real fade - out in the first few steps .the shown points have 3 decimal digit of precision .all cases exhibit different behaviour of damage spreading , despite the fact that 4,2 and 2,3 have the same , therefore we can not limit ourselves to only one of parameters , or .in addition to the different levels of damage equilibrium for different ( which were shown earlier ) , now we encounter very different fade - out pattern in the first steps connected with network type and .let us investigate the evolution of distribution of damage size at fadeout in dependency on system in more detail .above we have considered autonomous networks , but when we investigate a real system , we can only observe its outside properties or a few points inside , which we can describe as system external outputs .it is similar to ashby s essential variable .therefore we suggest simultaneous considering of effects of damage on external network outputs .let the number of output signals be fixed as . as a parameter analogous to damage size we will use the hamming distance - number of changed output signals but without normalization .distributions and should be similar .asymptotic value of which we named and asymptotic value are simply dependent : but such a dependency is not valid during system growth and is smaller than we would expect .evolution of these distributions starts for small as one peak distribution , similar to distribution for ordered system and when damage quickly fades out .next , when is greater , the second peak appears on the right long tail .it shifts to the right and stops in some position of damage equilibrium . for networks with feedbacks , before this right peak stops , between peaks there appears a large period of practically exactly zero frequency .distribution for different network types is shown in fig 4.1 . to understand the mechanism of this evolutionlet us consider an extremely simple example of network ` ' of nodes with and functions as in ` ' , without feedbacks and with clear levels of nodes , wrapped around a cylinder to remove the left and right ends .on each level there are 32 nodes connected according to the ordered pattern like in fig .for such an extremely simple network we can draw a cone of influence ( fig .[ f5].1 ) which splits the network into three parts of nodes : nodes later than the selected one ( which are dependent on the selected node ) ; earlier ones ( which have an influence on the selected one ) ; and independent ones . 
to define sequence ( earlier - later ) the sequence of signal flow and transformation depicted by arrows of directed network is used .this is the functional order .the cone of influence shows which nodes and outputs can be reached by damage from a given source of damage , but not all of them will actually be reached by every particular case of damage .the affected ones create a smaller cone inside the later area .if the later part of cone of influence does not include a part of outputs , then the signals on these outputs can not change as a result of damage .the number of outputs inside the ` later cone ' depends on the cone height - here it is measured in levels from outputs on the top down to the source of damage , therefore we named it ` depth ' and denoted it by .if increases by one , then for the number of later outputs increases only by two .let us decrease order in this network and define ` ' network where connections between neighbouring levels are random .now the number of later outputs increases two times for the first few levels . for ,for the right peak in achieves a stable position for damage initiation on depth , but for it occurs much faster - at .if we take , then the order also decreases and we need accordingly less levels for peak stabilization for and for . for smaller the right peak in shifts from the left to on the right with different tempo for both networks . to obtain total must summarize distribution for all levels . in effectwe obtain non - zero frequency in the whole period from zero on the left to . for systems with more levelsthe right peak in grows and the contribution of constant numbers of cases in - between peaks decreases . for networks with feedbacks the notions of cone of influence , functional order and depth dramatically ( but not completely ) lose their precision .we can define depth as the smallest distance to outputs , but in a different aspect this depth can be infinite if we consider the loops . for an networkthe maximum of the right peak in achieves 80% of ( ) for , but for network for and for network for equal to only ( see table 1 ) .the stabilization ( i.e. stop of shifting to the right during network growth ) of the right peak in and in and its parameters , and structure in the shown examples , correctly correspond to our intuitive notion of complexity .( if similar states of the system create very different effects , we must know much more to predict these effects . ) we define them as complexity threshold , however for networks with feedbacks it is also the chaos ( in kauffman s meaning ) threshold . in networks containingfeedbacks it is accomplished by appearance of practically zero frequency in - between peaks but these two phenomena do not appear exactly simultaneously .they can be used as other variants of complexity threshold .we have found parameters for appearance of all three phenomena ( table [ ta1 ] ) as three criteria of complexity threshold .first three rows show zero occurrence ( 0oc . ) .percent of and is shown for comparizon of zero occurrence criterion to two others shown below .networks , and always lay between and , therefore we simulate them only for and where in - between peaks there appears an area of near - zero frequency when position of right maximum in is on 90% of and in ) on 80% of . 
if or grows then this coincidence no longer occurs .the scale - free network is an extreme case and it achieves all of these criteria much slower than .networks and differ in distribution value but in they are very similar or even identical .much more data is used for these conclusions , which we can not include here due to limited space , they will be described separately .we do not show error ranges either , which are circa 3% for most of stability positions and ca 20% for of zero appearance but they are not important due to large difference of values . for comparison : ` how should complexity scale with system size ? ' considered in a theoretical way by olbrich et al . .to investigate adaptive evolution we must define fitness , which should not decrease .fitness can be defined using states of all nodes ( this way was applied by kauffman ) , or using effects of system function which are accessible outside of the system , i.e. external output signals ( as essential variable ) .we prefer the second solution .the simplest method is to arbitrarily define some ideal vector of signals and to compare it to the vector of system output signals .let fitness be the number of identical signals in both of these vectors .this is a common method .system is changed , e.g. by addition or removing of a node ( fig .after such a change it has a certain fitness which describes its state .consecutive system states of fitness counted by are delimited by changes of system construction .this creates a process .all changes creating an adaptive process meet the adaptive condition which is used to eliminate ( not accept ) some random changes in the simulation .we will compare the adaptive process to the free process , which accepts all random changes , as they are drawn .the tendency is a difference between probability distribution of change parameter for an adaptive process and for a free process .note , must describe a change , it can not be a state parameter .> from bayes : is a constant ; therefore , a tendency is shown by . as we can see , we do not have to know to know the tendency .it is enough that for different , is different .however , in the structure development of complex system is important due to other causes .it is useful to introduce a general parameter of process advancement connected to the count , let it be denoted by .it is the state describing parameter , e.g. 
it can be or .similarly we can find , that tendency is described by and we will use this form later .the first , very simple but very important one , is the tendency to collect in adaptive process much smaller changes than the changes creating a free process .we named it ` small change tendency ' .it is the base of mechanisms of all the structural tendencies investigated later , which are more interesting .this tendency is a different view on underlying by kauffman ` structural stability ' as a condition of adaptive evolution .if we limit our consideration only to output vectors and assume , that each signal changes independently , then we can calculate for given .this is the form obtained above , which indicates a tendency when really depends on .for higher only very small changes are acceptable - see fig .[ f4].2 , where it is compared to : for networks above the complexity threshold all cases from the right peak can not be accepted by the adaptive condition test .if we consider both : small change tendency and dependency of change size on depth in the construction of cone of influence then we can expect ` terminal modifications ' tendency known in classic developmental biology .depth is a structural approximation of functional order which creates the cone of influence and time of ontogeny stages .however , the cone of influence is well defined in a system without feedbacks , but if feedbacks are present , then it can only be a premise .the answer can only be obtained by simulation , where we should check . for investigationswe have used a special , more adequate definition of depth ( fig .[ f5].2 ) , but it is not to be applied to various in other networks . to investigate the system development , this system must have the ability to grow , therefore in the set of possible changes there should be addition of a new node . for higher adequateness there should also be the removing of a node .both of them should be drawn randomly , but the sets of possibilities for such a draw can not be the same . removing can only be drawn from nodes present in the network , but addition has a much larger set of possibilities .this difference can create some difference of acceptance probability for additions and removals in different areas of network , which differ with respect to modification speed in effect of the terminal modification tendency .note , that additions and removing transform a particular system into another one .it is a walk in the system parameters space in the kauffman approach but in our approach we can see important differences between probability to adaptive move using addition and probability using removal , and we can distinguish between various areas in the system body .however , similarly to kauffman , we have a close similarity between the effects of small changes which change the system to another ( additions , removing of nodes ) and the ones which affect the system state only ( changes of state of a node ) . 
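as an illustration of the small change tendency discussed above, the following monte carlo sketch estimates the probability that a change touching L of the m output signals passes the adaptive condition (fitness must not decrease); the concrete numbers (m, s, the current fitness) and the rule that each damaged output takes a new value uniformly among the remaining s-1 variants are illustrative assumptions, not the paper's exact setup:

```python
# Monte Carlo sketch of the acceptance probability P(accept | L) behind the
# small change tendency. Assumptions (illustrative): fitness = number of the
# m output signals equal to a fixed ideal vector; a change alters L randomly
# chosen outputs, each taking a new value uniformly among the remaining s - 1
# variants; the change is accepted iff fitness does not decrease.
import random

def p_accept(L, m=64, s=4, fitness_before=58, trials=20000, rng=random):
    accepted = 0
    for _ in range(trials):
        match = [True] * fitness_before + [False] * (m - fitness_before)
        rng.shuffle(match)                      # which outputs currently match
        new_fitness = fitness_before
        for i in rng.sample(range(m), L):       # the L damaged outputs
            if match[i]:
                new_fitness -= 1                # a matching signal stops matching
            elif rng.random() < 1 / (s - 1):
                new_fitness += 1                # may land on the ideal value
        accepted += new_fitness >= fitness_before
    return accepted / trials

if __name__ == "__main__":
    for L in (1, 2, 4, 8, 16, 32):
        print(L, round(p_accept(L), 4))         # drops sharply with L
```

even for moderate L the estimated acceptance probability collapses, which is the mechanism that filters large changes out of the adaptive process.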
in our caseit is not an assumption but a simulation result .one of the typical cases of removing is a removing of a ` transparent ' node which does not change signals of the remaining nodes .it especially occurs when a ` transparent ' node is just added .such a case may have different interpretations .some of them suggest forbidding transparent addition .we introduce such forbidding using strict inequality in the adaptive condition for additions , and weak inequality for removals .this is equivalent to a cost function for additions of a new automaton .this simple ` cost ' condition appears very strong which is easy to understand - newly connected nodes must lie closely near assessed outputs to influence at least one signal of system outputs but not much more .it eliminates additions on longer distance in both regimes - in chaotic one because change of outputs is too large and in ordered one because damage fadeout without affecting outputs ( fig .[ f6 ] and [ f7 ] ) . similarly to above , we have checked these assumptions in simulation for different network types .random network can not grow and therefore we do not use it for these experiments . for important networka problem appears for removing , which creates nodes .such a node can not come back into play and creates a dummy network where most of nodes have .to correct this situation we add to link drawing also node drawing like in network and we obtain connection proportionality to for modified type which we name .it is a little similar to ref . .however this modification , especially important for case without cost , modifies which becomes more like for . for correct description of phenomena similar to transparent addition and removingwe now use node functions in our algorithm .as it can be expected , and networks need much higher to obtain typical phenomena for and networks .to create damage spreading inside the network as for very high we use a special function and , denoted in figures as $ .the main results are shown in fig .they are obtained in three series of simulations , all with fixed .first for , where clear and strong tendencies of terminal modifications and conservation of deeper part ( fig .[ f6].1 ) , and terminal predominance of addition over removing and simplification of deeper parts are obtained ( fig .[ f6].2 ) .only the case of without cost appears to be extreme ( still too small ) .next we have investigated the question : how important are feedbacks in the mechanisms of these tendencies ?for network ` ' ( similar to only devoid of feedbacks ) , we have obtained similar effects ( fig .[ f6].3 and 6.4 ) . in the last series we ask : are there the same tendencies in the contemporarily preferred networks and , with various node degree ? the answer is shown in fig .[ f6].5 and 6.6 - yes , they are there , but in these networks there are much more interesting phenomena connected with and hubs. one of them is deep fadeout tendency - in deeper parts of such networks nodes of are collected , hubs take their place at a small depth , but not very small . if cost is absent , then this tendency is very strong for and blocks other tendencies . in all cases ,probability of addition and removing was the same ( before elimination ) and networks grow , but for and if cost is present , then the networks do not grow .this is an effect of deep fadeout for removing . in order to growthey need a very small percentage of removing in the tested changes , ca .1% . 
during system growth the number of outputs is constant. the volume at small depth, where damage reaches the outputs more easily, fills up quickly, and any addition of new nodes (wherever it happens) only expands the deeper part of the system, where the way to the outputs is long and the damage has a higher chance of fading out without reaching them. therefore additions are not acceptable there (under the cost condition), while removals are accepted with higher probability, and the growth of the network stops at a certain level of . the strong terminal predominance of additions over removals corresponds to weismann's 'terminal additions' regularity and in the same way creates a similarity between historical and functional order (fig. ). the historical order is the sequence in which given nodes were connected to the network. it is the main element of the famous 'recapitulation of the phylogeny in the ontogeny' regularity, known also as haeckel's 'biogenetic law'. the stability of function completes this similarity of orders; we also observe it in our simulations. it is a pity that recapitulation died in 1977 for lack of an explanation, before our demonstration of its mechanisms, despite our attempts. obviously this short announcement is not enough to reanimate it, therefore much longer descriptions are under preparation. the way to a mechanism of recapitulation is long and contains (in the opposite direction): the tendency of terminal predominance of additions, which needs the tendency of terminal modifications. we have observed them in simulation, in a few different network types, as effects of the adaptive condition of network growth above a certain complexity threshold. this threshold is observed in the distribution of damage size on the network external outputs. to model damage spreading we use functioning, directed networks, e.g. kauffman networks, but we use more than two equally probable signal variants, therefore they are no longer boolean networks. such an assumption (argued for using interpretation) places the considered systems in the chaotic area, far from 'order' and the 'phase transition to order'; in addition, the complexity threshold guards this assumption. in comparison to the kauffman model we introduce two new elements. firstly, we allow (more than two equally probable signal variants), which keeps our system in the chaotic area (unlike 'internal homogeneity'). secondly, we introduce external outputs, which we use for differentiating areas of the system body and as 'essential variables' for the fitness definition. we find the 'structural stability' important for adaptive evolution in our 'small change tendency', which causes differences in elimination between various areas of the network body. these differences lead to 'structural tendencies'. the famous kauffman conclusion from his model, 'we shall find grounds for thinking that the ordered regime near the transition to chaos is favored by, attained by, and sustained by natural selection', seems not applicable to our model. because our definition of fitness uses output signals, we avoid the problem of local optima, which are absent here, and the problem of the 'complexity catastrophe' expected by kauffman. simplifications in the 'fitness landscape', and especially in the algorithm with respect to attractors, allow us to investigate a large, new and interesting area of 'structural tendencies' which is waiting for a more mathematical description.
r. albert, a.-l. barabási: dynamics of complex systems: scaling laws for the period of boolean networks. phys. rev. lett. 84 (2000) 5660-5663
m. aldana: dynamics of boolean networks with scale-free topology. physica d 185 (2003) 45-66
w. r. ashby: design for a brain (wiley, new york 1960)
n. ay, e. olbrich, n. bertschinger, j. jost: a unifying framework for complexity measures of finite systems. proceedings of eccs06
a.-l. barabási, e. bonabeau: scale-free networks. scientific american, www.sciam.com (2003) 50-59
a.-l. barabási, r. albert, h. jeong: mean-field theory for scale-free random networks. physica a 272 (1999) 173-187
p. crucitti, v. latora, m. marchiori, a. rapisarda: error and attack tolerance of complex networks. physica a 340 (2004) 388-394
b. derrida, y. pomeau: random networks of automata: a simple annealed approximation. europhys. lett. 1(2) (1986) 45-49
s. n. dorogovtsev, j. f. f. mendes, a. n. samukhin: structure of growing networks with preferential linking. phys. rev. lett. 85 (2000) 4633
p. erdős, a. rényi: random graphs. publication of the mathematical institute of the hungarian academy of science 5 (1960) 17-61
s. fortunato: damage spreading and opinion dynamics on scale-free networks. physica a 348 (2005) 683-690
a. gecow: a cybernetic model of improving and its application to the evolution and ontogenesis description. in: proceedings of the fifth international congress of biomathematics, paris 1975
a. gecow, a. hoffman: self-improvement in a complex cybernetic system and its implication for biology. acta biotheoretica 32 (1983) 61-71
a. gecow, m. nowostawski, m. purvis: structural tendencies in complex systems development and their implication for software systems. journal of universal computer science 11 (2005) 327-356
m. gell-mann: what is complexity? (john wiley and sons, inc. 1995)
s. j. gould: ontogeny and phylogeny (harvard university press, cambridge, massachusetts 1977)
e. haeckel: generelle morphologie der organismen (georg reimer, berlin 1866)
s. a. kauffman: metabolic stability and epigenesis in randomly constructed genetic nets. j. theor. biol. 22 (1969) 437-467
s. a. kauffman: gene regulation networks: a theory for their global structure and behaviour. current topics in dev. biol. 6 (1971) 145
s. a. kauffman: the origins of order: self-organization and selection in evolution (oxford university press, new york 1993)
s. a. kauffman, c. peterson, b. samuelsson, c. troein: genetic networks with canalyzing boolean rules are always stable. pnas usa 101 (2004) 17102-17107
a. naef: die individuelle entwicklung organischen formen als urkunde ihrer stammesgeschichte (jena 1917)
m. nowostawski, m. purvis: evolution and hypercomputing in global distributed evolvable virtual machines environment. in: engineering self-organising systems, ed. by s. a. brueckner, s. hassas, m. jelasity, d. yamins (springer-verlag, berlin heidelberg 2007) 176-191
e. olbrich, n. bertschinger, j. jost: how should complexity scale with system size? in: proceedings of eccs07: european conference on complex systems, ed. by j. jost and d. helbing. cd-rom, paper #276 (2007)
l. peliti, a. vulpiani: measures of complexity. lecture notes in physics 314 (1988)
r. serra, m. villani, a. semeria: genetic network models and statistical properties of gene expression data in knock-out experiments. j. theor. biol. 227 (2004) 149-157
r. serra, m. villani, a. graudenzi, s. a. kauffman: why a simple model of genetic regulatory networks describes the distribution of avalanches in gene expression data. j. theor. biol. 246 (2007) 449-460
r. serra, m. villani, c. damiani, a. graudenzi, a. colacci, s. a. kauffman: interacting random boolean networks. in: proceedings of eccs07: european conference on complex systems, ed. by j. jost and d. helbing. cd-rom, paper #165 (2007)
d. stauffer, a. sousa, ch. schulze: discretized opinion dynamics of the deffuant model on scale-free networks. journal of artificial societies and social simulation 7, no. 3, paper 7
d. stauffer, s. moss de oliveira, p. m. c. de oliveira, j. s. sá martins: biology, sociology, geology by computational physicists (elsevier, amsterdam 2006) 276+ix pages
a. wagner: estimating coarse gene network structure from large-scale gene perturbation data. santa fe institute working paper 01-09-051 (2001)
a. weismann: the evolution theory, 2 vols. (london 1904)
a. s. wilkins: the evolution of developmental pathways (sinauer associates, inc., sunderland, massachusetts 2002)
|
we describe systems using kauffman and similar networks. they are directed, functioning networks consisting of a finite number of nodes with a finite number of discrete states, evaluated in synchronous mode in discrete time. in this paper we introduce the notion and phenomenon of 'structural tendencies'. along the way we expand kauffman networks, which used to be a synonym of boolean networks, to more than two signal variants, and we find a phenomenon during network growth which we interpret as a 'complexity threshold'. for simulation we define a simplified algorithm which allows us to omit the problem of periodic attractors. we argue that living and human-designed systems are chaotic (in the kauffman sense), which can be termed complex. such systems grow in adaptive evolution. these two simple assumptions lead to certain statistical effects, i.e. structural tendencies, observed in classic biology but still not explained and not investigated theoretically, e.g. terminal modifications or terminal predominance of additions, where 'terminal' means: near the system outputs. we introduce more than two equally probable variants of signal, therefore our networks generally are not boolean networks. they grow randomly, by additions and removals of nodes subjected to darwinian elimination. fitness is defined on the external outputs of the system. during growth of the system we observe a phase transition to chaos (threshold of complexity) in damage spreading. above this threshold we identify mechanisms of structural tendencies, which we investigate in simulation for a few different network types, including scale-free ba networks.
|
crowdsourcing is a term often adopted to identify networked systems that can be used for the solution of a wide range of complex problems by integrating a large number of human and/or computer efforts .alternative terms , each one carrying its own specific nuance , to identify similar types of systems are : collective intelligence , human computation , master - worker computing , volunteer computing , serious games , voting problems , peer production , citizen science ( and others ) .the key characteristic of these systems is that a _ requester _ structures his problem in a set of _ tasks _ , and then assigns tasks to _ workers _ that provide _answers _ , which are then used to determine the correct task _ solution _ through a _ decision _ rule .well - known examples of such systems are seti , which exploits unused computer resources to search for extra - terrestrial intelligence , and the amazon mechanical turk , which allows the employment of large numbers of micro - paid workers for tasks requiring human intelligence ( hit human intelligence tasks ) . examples of hit are image classification , annotation , rating and recommendation , speech labeling , proofreading , etc . in the amazonmechanical turk , the workload submitted by the requester is partitioned into several small atomic tasks , with a simple and strictly specified structure .tasks , which require small amount of work , are then assigned to ( human ) workers . since on the one hand answers may be subjective , and on the other task execution is typically tedious , and the economic reward for workers is pretty small , workers are not 100 % reliable ( earnest ) , in the sense that they may provide incorrect answers .hence , the same task is normally assigned in parallel ( replicated ) to several workers , and then a majority decision rule is applied to their answers . a natural trade - off between the reliability of the decision and cost arises ; indeed , increasing the replication factor of every task , we can increase the reliability degree of the final decision about the task solution , but we necessarily incur higher costs ( or , for a given fixed cost , we obtain a lower task throughput ) .although the pool of workers in crowdsourcing systems is normally large , it can be abstracted as a finite set of shared resources , so that the allocation of tasks to workers ( or , equivalently , of workers to tasks ) is of key relevance to the system performance .some believe that crowdsourcing systems will provide a significant new type of work organization paradigm , and will employ large numbers of workers in the future , provided that the main challenges in this new type of organizations are correctly solved . in authors identify a dozen such challenges , including i ) workflow definition and hierarchy , ii ) task assignment , iii ) real - time response , iv ) quality control and reputation .task assignment and reputation are central to this paper , where we discuss optimal task assignment with approximate information about the quality of answers generated by workers ( with the term worker reputation we generally mean the worker earnestness , i.e. , the credibility of a worker s answer for a given task , which we will quantify with an error probability ) .our optimization aims at minimizing the probability of an incorrect task solution for a maximum number of tasks assigned to workers , thus providing an upper bound to delay and a lower bound on throughput . 
a dual version of our optimization is possible , by maximizing throughput ( or minimizing delay ) under an error probability constraint .like in most analyses of crowdsourcing systems , we assume no interdependence among tasks , but the definition of workflows and hierarchies is an obvious next step .both these issues ( the dual problem and the interdependence among tasks ) are left for further work .the performance of crowdsourcing systems is not yet explored in detail , and the only cases which have been extensively studied in the literature assume that the quality of the answers provided by each worker ( the worker reputation ) are not known at the time of task assignment .this assumption is motivated by the fact that the implementation of reputation - tracing mechanisms for workers is challenging , because the workers pool is typically large and highly dynamical .furthermore , in some cases the anonymity of workers must be preserved .nevertheless , we believe that a clear understanding of the potential impact on the system performance of even approximate information about the workers reputation in the task assignment phase is extremely important , and can properly assess the relevance of algorithms that trace the reputation of workers .examples of algorithms that incorporate auditing processes in a sequence of task assignments for the worker reputation assessment can be found in .several algorithms were recently proposed in the technical literature to improve the performance of crowdsourcing systems without a - priori information about worker reputation . in particular, proposed an adaptive simple on - line algorithm to assign an appropriate number of workers to every task , so as to meet a prefixed constraint on problem solution reliability . in , instead , it was shown that the reliability degree of the final problem solution can be significantly improved by replacing the simple majority decision rule with smarter decision rules that differently weigh answers provided by different workers .essentially the same decision strategy was independently proposed in and for the case in which every task admits a binary answer , and then recently extended in to the more general case .the proposed approach exploits existing redundancy and correlation in the pattern of answers returned from workers to infer an a - posteriori reliability estimate for every worker .the derived estimates are then used to properly weigh workers answers .the goal of this paper is to provide the first systematic analysis of the potential benefits deriving from some form of a - priori knowledge about the reputation of workers . with this goal in mind , first we define and analyze the task assignment problem when workers reputation estimates are available .we show that in some cases , the task assignment problem can be formalized as the maximization of a monotone submodular function subject to matroid constraints . 
a greedy algorithm with performance guarantees is then devised. in addition, we propose a simple maximum a-posteriori (map) decision rule, which is well known to be optimal when perfect estimates of workers' reputation are available. finally, our proposed approach is tested in several scenarios, and compared to previous proposals. our main findings are:
* even largely inaccurate estimates of workers' reputation can be effectively exploited in the task assignment to greatly improve system performance;
* the performance of the maximum a-posteriori decision rule quickly degrades as worker reputation estimates become inaccurate;
* when workers' reputation estimates are significantly inaccurate, the best performance can be obtained by combining our proposed task assignment algorithm with the decision rule introduced in .
the rest of this paper is organized as follows. section [sec:sa] presents and formalizes the system assumptions used in this paper. section [sec:pf] contains the formulation of the problem of the optimal allocation of tasks to workers, with different possible performance objectives. section [sec:allocation] proposes a greedy allocation algorithm, to be coupled with the map decision rule described in section [sec:decision]. section [sec:results] presents and discusses the performance of our proposed approach in several scenarios, and compares it to those of previous proposals. finally, section [sec:conclusions] concludes the paper and discusses possible extensions.
we consider binary tasks, whose outcomes can be represented by i.i.d. uniform random variables (rvs) over , i.e., , . in order to obtain a reliable estimate of task outcomes, a requester assigns tasks to workers selected from a given population of size , by querying each worker a subset of tasks. each worker is modeled as a binary symmetric channel (bsc). this means that worker , if queried about task , provides a wrong answer with probability and a correct answer with probability . note that we assume that the error probabilities depend on both the worker and the task, but they are taken to be time-invariant, and generally unknown to the requester. the fact that the error probability may depend, in general, on both the worker and the task reflects the realistic consideration that tasks may have different levels of difficulty, and that workers may have different levels of accuracy and may be more skilled in some tasks than in others. unlike the model in , we assume in this paper that, thanks to a-priori information, the requester can group workers into classes, each one composed of workers with similar accuracy and skills.
in practical crowdsourcing systems , where workers are identified through authentication ,such a - priori information can be obtained by observing the results of previous task assignments .more precisely , we suppose that each worker belongs to one of classes , , and that each class is characterized , for each task , by a different _ average _ error probability , known to the requester .let be the average error probability for class and task , , .we emphasize that does not necessarily precisely characterize the reliability degree of individual workers within class while accomplishing task ; this for the effect of possible errors / inaccuracies in the reconstruction of user profiles .workers with significantly different degree of reliability can , indeed , coexist within class .in particular our class characterization encompasses two extreme scenarios : * full knowledge about the reliability of workers , i.e. , each worker belonging to class has error probability for task deterministically equal to , and * a hammer - spammer ( hs ) model , in which perfectly reliable and completely unreliable users coexists within the same class .a fraction of workers in class , when queried about task , has error probability equal to ( the spammers ) , while the remaining workers have error probability equal to zero ( the hammers ) .suppose that class contains a total of workers , with .the first duty the requester has to carry out is the assignment of tasks to workers .we impose the following two constraints on possible assignments : * a given task can be assigned at most once to a given worker , and * no more than tasks can be assigned to worker .notice that the second constraint arises from practical considerations on the amount of load a single worker can tolerate .we also suppose that each single assignment of a task to a worker has a _ cost _ , which is independent of the worker s class . 
in practical systems, such cost represents the ( small ) wages per task the requester pays the worker , in order to obtain answers to his queries .alternatively , in voluntary computing systems , the cost can describe the time necessary to perform the computation .the reader may be surprised by the fact that we assume worker cost to be independent from the worker class , while it would appear more natural to differentiate wages among workers , favoring the most reliable , so as to incentivize workers to properly behave .our choice , however , is mainly driven by the following two considerations : i ) while it would be natural to differentiate wages according to the individual reputation of workers , when the latter information is sufficiently accurate , it is much more questionable to differentiate them according to only an average collective reputation index , such as , especially when workers with significantly different reputation coexist within the same class ; ii ) since in this paper our main goal is to analyze the impact on system performance of a - priori available information about the reputation of workers , we need to compare the performance of such systems against those of systems where the requester is completely unaware of the worker reputation , under the same cost model .finally , we wish to remark that both our problem formulation and proposed algorithms naturally extend to the case in which costs are class - dependent .let an _ allocation _ be a set of assignments of tasks to workers .more formally , we can represents a generic allocation with a set of pairs with and , where every element corresponds to an individual task - worker assignment .let be the complete allocation set , comprising every possible individual task - worker assignment ( in other words is the set composed of all the possible pairs ) .of course , by construction , for any possible allocation , we have that .hence , the set of all possible allocations corresponds to the power set of , denoted as .the set can also be seen as the edge set of a bipartite graph where the two node subsets represent tasks and workers , and there is an edge connecting task node and worker node if and only if .it will be sometimes useful in the following to identify the allocation with the biadjacency matrix of such graph .such binary matrix of size will be denoted and referred to as the _ allocation matrix_. in the following we will interchangeably use the different representations , according to convenience . in this work ,we suppose that the allocation is non - adaptive , in the sense that all assignments are made before any decision is attempted . with this hypothesis, the requester must decide the allocation only on the basis of the a - priori knowledge on worker classes .adaptive allocation strategies can be devised as well , in which , after a partial allocation , a decision stage is performed , and gives , as a subproduct , refined a - posteriori information both on tasks and on workers accuracy .this information can then be used to optimize further assignments . however , in it was shown that non - adaptive allocations are order optimal in a single - class scenario . when all the workers answers are collected , the requester starts deciding , using the received information .let be a random matrix containing the workers answers and having the same sparsity pattern as .precisely , is nonzero if and only if is nonzero , in which case with probability and with probability . 
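the answer model just described can be made concrete with a short sketch (names and array shapes are ours, chosen only for illustration): each assignment (t, w) produces the true task value with probability 1 - pi[k][t] and the flipped value otherwise, where k is the class of worker w.

```python
# Sketch of the answer model: tasks take values in {-1, +1} and each worker
# behaves as a binary symmetric channel whose error probability depends on
# the worker's class and on the task (names and shapes are illustrative).
import numpy as np

rng = np.random.default_rng(0)

def generate_answers(tau, assignments, worker_class, pi):
    """tau: (T,) array of +/-1 task values;
    assignments: iterable of (t, w) pairs forming the allocation;
    worker_class: (W,) array with the class index of each worker;
    pi: (K, T) array of class/task error probabilities.
    Returns a dict {(t, w): answer in {-1, +1}}."""
    answers = {}
    for t, w in assignments:
        p_err = pi[worker_class[w], t]
        answers[(t, w)] = -tau[t] if rng.random() < p_err else tau[t]
    return answers
```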
for every instance of the matrix the output of the decision phase is an estimate for task values .in this section , we formulate the problem of the optimal allocation of tasks to workers , with different possible performance objectives . we formalize such problem under the assumption that each worker in class has error probability for task deterministically equal to .if the individual error probability of the workers within one class is not known to the scheduler , it becomes irrelevant which worker in a given class is assigned the task .what only matters is actually how many workers of each class is assigned each task . by sorting the columns ( workers ) of the allocation matrix , we can partition it as * g*= where is a binary matrix of size representing the allocation of tasks to class- workers .define and .define also as the weight ( number of ones ) in the -th row of matrix , which also represents the degree of the -th task node in the subgraph containing only worker nodes from the -th class .we formulate the problem of optimal allocation of tasks to workers as a combinatorial optimization problem for a maximum overall cost .namely , we fix the maximum number of assignments ( or , equivalently , the maximum number of ones in matrix ) to a value , and we seek the best allocation in terms of degree set .let be a given performance parameter to be maximized .then , the problem can be formalized as follows . where the first constraint expresses the fact that is the number of ones in the -th row of , the second constraint derives from the maximum number of tasks a given worker can be assigned , and the third constraint fixes the maximum overall cost . note that it could also be possible to define a dual optimization problem , in which the optimization aims at the minimum cost , subject to a maximum admissible error probability ; this alternative problem is left for future work . by adopting the set notation for allocations, we can denote with the family of all feasible allocations ( i.e. the collection of all the allocations respecting the constraints on the total cost and the worker loads ) .observe that by construction is composed of all the allocations satisfying : i ) , and ii ) , where represents the set of individual assignments in associated to .the advantage of the set notation is that we can characterize the structure of the family on which the performance optimization must be carried out ; in particular , we can prove that : [ prop - matroid ] the family forms a matroid .furthermore , satisfies the following property .let be the family of maximal sets in , then the proof is reported in appendix [ app : matroid ] along with the definition of a matroid .the complexity of the above optimal allocation problem heavily depends on the structure of the objective function ( which is rewritten as when we adopt the set notation ) . 
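the feasible family defined by the constraints above (each task given at most once to a worker, per-worker load limits, and a total budget on the number of assignments) admits a direct membership test; a minimal sketch, with illustrative names:

```python
# Membership test for the feasible family of allocations (illustrative names).
from collections import Counter

def is_feasible(assignments, max_load, budget):
    """assignments: collection of (task, worker) pairs;
    max_load[w]: maximum number of tasks worker w can handle;
    budget: maximum total number of assignments."""
    pairs = list(assignments)
    if len(pairs) != len(set(pairs)):
        return False                  # a task assigned twice to the same worker
    if len(pairs) > budget:
        return False                  # total-cost constraint
    load = Counter(w for _, w in pairs)
    return all(load[w] <= max_load[w] for w in load)
```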
as a general property ,observe that necessarily is monotonic , in the sense that whenever .however , in general , we can not assume that satisfies any other specific property ( some possible definitions for are given next ) .for a general monotonic objective function , the optimal allocation of tasks to workers can be shown to be np - hard , since it includes as a special case the well - known problem of the maximization of a monotonic submodular function , subject to a uniform matroid constraint ( see ) is said to be submodular if : we have .the problem of the maximization of a monotonic submodular function subject to a uniform matroid constraint corresponds to : \{ for with submodular . } ] .when is submodular , the optimal allocation problem falls in the well - known class of problems related to the maximization of a monotonic submodular function subject to matroid constraints . for such problems, it has been proved that a greedy algorithm yields a 1/(1+)-approximation ( where is defined as in proposition [ prop - matroid ] ) . in the next subsections , we consider different choices for the performance parameter .a possible objective of the optimization , which is most closely related to typical performance measures in practical crowdsourcing systems , is the average task error probability , which is defined as : p_1(d )= -1 t _ t=1^t p_e , t with where the second equality follows from symmetry .of course , can be exactly computed only when the true workers error probabilities are available ; furthermore it heavily depends on the adopted decoding scheme . as a consequence , in general , can only be approximately estimated by the requester by confusing the actual worker error probability ( which is unknown ) with the corresponding average class error probability .assuming a maximum - a - posteriori ( map ) decoding scheme , namely , , where is the -th row of and is its observed value , we have [ eq : error_probability_task_i ] p_e , t = _ : \{_t = 1|*a*_t= } < 1/2 \{*a*_t = |_t=1 } .it is easy to verify that the exact computation of the previous average task error probability estimate requires a number of operations growing exponentially with the number of classes .thus , when the number of classes is large , the evaluation of ( [ eq : error_probability_task_i ] ) can become critical . to overcome this problem , we can compare the performance of different allocations on the basis of a simple pessimistic estimate of the error probability , obtained by applying the chernoff bound to the random variable that is driving the maximum - a - posteriori ( map ) decoding ( details on a map decoding scheme are provided in the next section ) .we have : where .thus , the performance metric associated with an allocation becomes : the computation of requires a number of operations that scales linearly with the product .at last , we would like to remark that in practical cases we expect the number of classes to be sufficiently small ( order of few units ) , in such cases the evaluation of ( [ eq : error_probability_task_i ] ) is not really an issue .an alternative information - theoretic choice for is the mutual information between the vector of rvs associated with tasks and the answer matrix , i.e. 
, [ eq : mutual_info ] p_3(d ) = i(*a * ; ) = _ t=1^t i(*a*_t ; _ t ) .it is well known that a tight relation exists between the mutual information and the achievable error probability , so that a maximization of the former corresponds to a minimization of the latter .we remark , however , that , contrary to error probability , mutual information is independent from the adopted decoding scheme , because it refers to an optimal decoding scheme .this property makes the adoption of the mutual information as the objective function for the task assignment quite attractive , since it permits to abstract from the decoding scheme .the second equality in ( [ eq : mutual_info ] ) comes from the fact that tasks are independent and workers are modeled as bscs with known error probabilities , so that answers to a given task do not give any information about other tasks . by definition [ eq : mutual_info_definition ]i(*a*_t ; _ t ) = h(*a*_t ) - h(*a*_t | _ t ) = h(_t ) - h(_t |*a*_t ) where denotes the entropy of the rv , given by \ ] ] and for any two random variables , is the conditional entropy defined as .\ ] ] in what follows , we assume perfect knowledge of worker reliabilities , i.e. , we assume that each class- worker has error probability with respect to task exactly equal to , remarking than in the more general case , the quantities we obtain by substituting with the corresponding class average , can be regarded as computable approximations for the true uncomputable mutual information .since we have modeled all workers as bscs , each single answer is independent of everything else given the task value , so that [ eq : conditional_entropy_a_given_t ] h(*a*_t | _ t ) = _ a_tw 0 h(a_tw | _ t ) = _ k=1^kd_tk h_b(_tk ) . where for the second equality in , because is a uniform binary rv , and where runs over all possible values of . by symmetry , for every such that , there is such that and . as a consequence, we can write notice the relationship of the above expression with .if in we substitute with , thanks to bayes rule , we obtain .an explicit computation of can be found in appendix [ app : mutual ] .as for the task error probability , the number of elementary operations required to compute grows exponentially with the number of classes . an important property that mutual information satisfies is submodularity .this property provides some guarantees about the performance of the greedy allocation algorithm described in section [ sec : greedy ] .[ prop - submodularity ] let be a generic allocation for task .then , the mutual information is a submodular function .the proof is given in appendix [ app : submodularity ] the previous optimization objectives represent a sensible choice whenever the target is to optimize the _ average _ task performance .however , in a number of cases it can be more appropriate to optimize the worst performance among all tasks , thus adopting a max - min optimization approach . 
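for a small number of classes the per-task mutual information can be computed exactly by grouping answer patterns according to how many '+1' answers each class returns; the sketch below assumes, as in this section, that every class-k worker has error probability equal to the class value for the task at hand:

```python
# Exact per-task mutual information for an allocation described by the
# per-class degrees d = (d_1, ..., d_K) and error probabilities
# pi = (pi_1, ..., pi_K); the cost is prod_k (d_k + 1), i.e. exponential in
# the number of classes, which is acceptable for the few classes used here.
from itertools import product
from math import comb, log2

def h_b(p):
    """Binary entropy in bits."""
    return 0.0 if p in (0.0, 1.0) else -p * log2(p) - (1 - p) * log2(1 - p)

def mutual_information(d, pi):
    h_answers = 0.0
    for m in product(*(range(dk + 1) for dk in d)):   # m_k = '+1' answers of class k
        multiplicity = 1
        p_plus = p_minus = 0.5                        # uniform prior on tau = +/-1
        for dk, pik, mk in zip(d, pi, m):
            multiplicity *= comb(dk, mk)
            p_plus *= (1 - pik) ** mk * pik ** (dk - mk)    # pattern prob. given tau = +1
            p_minus *= pik ** mk * (1 - pik) ** (dk - mk)   # pattern prob. given tau = -1
        p = p_plus + p_minus
        if p > 0:
            h_answers -= multiplicity * p * log2(p)
    h_given_tau = sum(dk * h_b(pik) for dk, pik in zip(d, pi))
    return h_answers - h_given_tau

# e.g. two workers of a reliable class plus one of a poor class:
# mutual_information((2, 1), (0.1, 0.4))
```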
along the same lines used in the definition of the previous optimization objectives , we can obtain three other possible choices of performance parameters to be used in the optimization problem defined in , namely , the maximum task error probability , p_4(d ) = - _ t=1, ,t p_e , t the chernoff bound on the maximum task error probability , p_5(d ) = - _ t=1, ,t _ e , t and the minimum mutual information , p_6(d ) = _ t=1 , 2 , , t i(*a*_t ; _ t ) .as we observed in section [ sec : pf ] , the optimization problem stated in is np - hard , but the submodularity of the mutual information objective function over a matroid , coupled with a greedy algorithm yields a 1/2-approximation ( see proposition [ prop - matroid ] ) .we thus define in this section a greedy task assignment algorithm , to be coupled with the map decision rule which is discussed in the next section .the task assignment we propose to approximate the optimal performance is a simple greedy algorithm that starts from an empty assignment ( ) , and at every iteration adds to the individual assignment , so as to maximize the objective function .in other words ; the algorithm stops when no assignment can be further added to without violating some constraint . to execute this greedy algorithm , at step , for every task , we need to i ) find , if any , the best performing worker to which task can be assigned without violating constraints , and mark the assignment as a candidate assignment ; ii ) evaluate for every candidate assignment the performance index ; iii ) choose among all the candidate assignments the one that greedily optimizes performance .observe that , as a result , the computational complexity of our algorithm is where represents the number of operations needed to evaluate .note that in light of both propositions [ prop - matroid ] and [ prop - submodularity ] , the above greedy task assignment algorithm provides a -approximation when the objective function is chosen .furthermore , we wish to mention that a better -approximation can be obtained by cascading the above greedy algorithm with the special local search optimization algorithm proposed in ; unfortunately , the corresponding cost in terms of computational complexity is rather severe , because the number of operations requested to run the local search procedure is .is if for any positive constant . ] here we briefly recall that proposed a simple task allocation strategy ( under the assumption that workers are indistinguishable ) according to which a random regular bipartite graph is established between tasks and selected workers .every selected worker is assigned the same maximal number of tasks , i.e. , except for rounding effects induced by the constraint on the maximum total number of possible assignments .majority voting is the simplest possible task - decision rule which is currently implemented in all real - world crowdsourcing systems . for every task , it simply consists in counting the and the in and then taking in accordance to the answer majority .more formally : [ eq : majority_decision_rule ] _t(*a*_t ) = ( _ w a_tw ) . we investigate the performance of the greedy task assignment algorithm , when coupled with the map decision rule for known workers reputation. 
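returning to the greedy task assignment described earlier in this section, a possible sketch is the following; it tracks only the per-class counts d_tk (which is all the objective depends on), takes the per-task objective as a parameter (for instance the mutual-information sketch above), and leaves the mapping from class-level counts to individual workers, e.g. by round-robin filling within each class, as a separate bookkeeping step:

```python
# Greedy allocation sketch: start from the empty allocation and repeatedly add
# the single assignment with the largest gain of the per-task objective, until
# the budget is used up or nothing feasible remains. Only the class-level
# counts d[t][k] are tracked, since the objective depends on them alone.
def greedy_allocation(T, workers_per_class, pi, max_load, budget, obj):
    """T: number of tasks; workers_per_class: list of class sizes n_k;
    pi[k][t]: class/task error probabilities; max_load: tasks per worker;
    budget: total number of assignments; obj(d_t, pi_t): per-task objective,
    e.g. the mutual_information sketch above."""
    K = len(workers_per_class)
    d = [[0] * K for _ in range(T)]
    capacity = [n * max_load for n in workers_per_class]   # remaining class capacity
    pi_t = [tuple(pi[k][t] for k in range(K)) for t in range(T)]
    score = [obj(tuple(d[t]), pi_t[t]) for t in range(T)]
    for _ in range(budget):
        best = None
        for t in range(T):
            for k in range(K):
                if capacity[k] == 0 or d[t][k] >= workers_per_class[k]:
                    continue          # no free class-k worker left for this task
                trial = list(d[t]); trial[k] += 1
                gain = obj(tuple(trial), pi_t[t]) - score[t]
                if best is None or gain > best[0]:
                    best = (gain, t, k)
        if best is None:
            break
        _, t, k = best
        d[t][k] += 1
        capacity[k] -= 1
        score[t] = obj(tuple(d[t]), pi_t[t])
    return d
```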
given an observed value of , the posterior log - likelihood ratio ( llr ) for task is where the second equality comes from bayes rule and the fact that tasks are uniformly distributed over , and the third equality comes from modelling workers as bscs .let be the number of `` '' answers to task from class- workers .then [ eq : llraposteriori ] _t(*a*_t ) = _ k=1^k ( d_tk - 2 m_tk ) .the map decision rule outputs the task solution estimate if and if , that is , [ eq : map_decision_rule ] _t(*a*_t ) = ( _ t(*a*_t ) ) . observe that the computation of has a complexity growing only linearly with , and that , according to , each task solution is estimated separately .note also that , whenever worker reputation is _ not _ known a - priori , the above decision rule is no more optimal , since it neglects the information that answers to other tasks can provide about worker reputation .finally , for the sake of comparison , we briefly recall here the low - rank approximation decision rule proposed in for the case when : i ) no a - priori information about the reputation of workers is available , ii ) the error probability of every individual worker is the same for every task , i.e. , .the lra decision rule was shown to provide asymptotically optimal performance under assumptions i ) and ii ) . denote with the leading right singular vector of , the lra decision is taken according to : where the idea underlying the lra decision rule is that each component of the leading singular vector of , measuring the degree of coherence among the answers provided by the correspondent worker , represents a good estimate of the worker reputation .in this section , we study the performance of a system where tasks are assigned to a set of workers which are organized in classes . each worker can handle up to 20 tasks , i.e. , , .we compare the performance of the allocation algorithms and decision rules described in sections [ sec : allocation ] and [ sec : decision ] , in terms of achieved average error probability , .more specifically , we study the performance of : * the `` majority voting '' decision rule applied to the `` uniform allocation '' strategy , hereinafter referred to as `` majority '' ; * the `` low rank approximation '' decision rule applied to the `` uniform allocation '' strategy , in the figures referred to as `` lra uniform '' ; * the `` low rank approximation '' decision rule applied to the `` greedy allocation '' strategy , in the figures referred to as `` lra greedy '' ; * the `` map '' decision rule applied to the `` greedy allocation '' strategy , in the following referred to as `` map greedy '' .specifically , for the greedy allocation algorithm , described in section [ sec : greedy ] , we employed the overall mutual information as objective function .the first set of results is reported in figure [ fig : figure1 ] .there we considered the most classical scenario where tasks are identical .the results depicted in figure [ fig : figure1](a ) assume that all workers belonging to the same class have the same error probability i.e. , .in particular , we set for all .this means that workers in class 1 are the most reliable , while workers in class 3 are spammers .moreover , the number of available workers per class is set to .the figure shows the average error probability achieved on the tasks , plotted versus the average number of workers per task , . 
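The decision rules compared in the remainder of this section can be sketched in a few lines; the answer-matrix conventions below and the way the global sign ambiguity of the leading singular vector is resolved are our own illustrative choices, not necessarily those of the original works.

```python
import numpy as np

def majority_decision(answers):
    """Majority voting: sign of the answer sum.
    answers: (T, W) matrix with entries in {-1, +1}, and 0 where no answer exists."""
    return np.sign(answers.sum(axis=1))

def map_decision(answers, classes, pi):
    """MAP rule for known class error probabilities: sign of the posterior LLR.
    classes[w]: class index of worker w; pi[k]: class-k error probability.
    Each answer a_tw contributes a_tw * log((1 - pi_k) / pi_k) to Lambda_t, which
    reproduces the (d_tk - 2 m_tk) form of the LLR above."""
    weights = np.log((1.0 - np.asarray(pi)) / np.asarray(pi))[np.asarray(classes)]
    llr = answers @ weights
    return np.sign(llr)              # ties (llr == 0) are left at 0 in this sketch

def lra_decision(answers):
    """LRA rule: weigh each worker by the corresponding component of the leading
    right singular vector of the answer matrix, then take the sign."""
    _, _, vt = np.linalg.svd(answers, full_matrices=False)
    est = np.sign(answers @ vt[0])
    # the singular vector is defined up to a global sign; align with majority voting
    if np.sum(est * majority_decision(answers)) < 0:
        est = -est
    return est
```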
as expected , greedy allocation strategies perform better due to the fact that they exploit the knowledge about the workers reliability ( ) , and thus they assign to tasks the best possible performing workers .these strategies provide quite a significant reduction of the error probability , for a given number of workers per task , or a reduction in the number of assignments required to achieve a fixed target error probability .for example , can be achieved by greedy algorithms by assigning only 4 workers per task ( on average ) , while algorithms unaware of workers reliability require more than 20 workers per task ( on average ) .we also observe that the lra algorithm proposed in performs similarly to the optimal map algorithm .next , we take into account the case where in each class workers do not behave exactly the same .as already observed , this may reflect both possible inaccuracies / errors in the reconstruction of user profiles , and the fact that the behavior of workers is not fully predictable , since it may vary over time .specifically , we assume that , in each class , two types of workers coexist , each characterized by a different error probability .more precisely , workers of type 1 have error probability , while workers of type 2 have error probability probability , where is a parameter .moreover workers are of type 1 and type 2 with probability and , respectively , so that the average error probability over the workers in class is .we wish to emphasize that this bimodal worker model , even if it may appear somehow artificial , is attractive for the following two reasons : i ) it is simple ( it depends on only one scalar parameter ) , and ii ) it encompasses as particular cases the two extreme cases of full knowledge and hammer - spammer .indeed , for all workers in each class behave exactly the same ( they all have error probability ) ; this is the case depicted in figure [ fig : figure1](a ) , while for we recover the hammer - spammer scenario .this case is represented in figure [ fig : figure1](b ) , where workers are spammers with probability and hammers with probability . here, the greedy allocation algorithms still outperform the others. however , the map decision rule provides performance lower than the `` lra greedy '' due to the following two facts : i ) map adopts a mismatched value of the error probability of individual workers , when , ii ) map does not exploit the extra information on individual worker reliability that is possible to gather by jointly decoding different tasks . 
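Curves of this kind can be reproduced in spirit, though not in their exact settings (several of which are omitted above), with a plain Monte-Carlo loop: draw task values and BSC answers for a given assignment and per-worker error probabilities, decode, and count errors. A minimal sketch, compatible with the decision-rule functions given earlier, follows.

```python
import numpy as np

def empirical_error(decision_rule, worker_pi, mask, n_trials=10_000, seed=0):
    """Monte-Carlo estimate of the average per-task error probability.
    decision_rule: maps a (T, W) answer matrix (entries in {-1, 0, +1}) to T estimates;
    worker_pi: length-W array of individual worker error probabilities;
    mask: boolean (T, W) array, True where worker w is assigned to task t.
    Undecided tasks (estimate 0) are counted as errors."""
    rng = np.random.default_rng(seed)
    T, W = mask.shape
    worker_pi = np.asarray(worker_pi)
    errors = 0
    for _ in range(n_trials):
        truth = rng.choice([-1, 1], size=T)
        flips = rng.random((T, W)) < worker_pi                 # BSC errors, per answer
        answers = np.where(flips, -truth[:, None], truth[:, None]) * mask
        errors += np.count_nonzero(decision_rule(answers) != truth)
    return errors / (n_trials * T)

# e.g. empirical_error(majority_decision, pi_w, mask) versus
#      empirical_error(lambda a: map_decision(a, classes, pi), pi_w, mask)
```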
in figure [ fig : figure1](c ) , for , we show the error probability plotted versus the parameter .we observe that the performance of the `` map greedy '' strategy is independent on the parameter while the performance of `` lra greedy '' improve as increases .this effect can be explained by observing that the lra ability of distinguishing good performing workers from bad performing workers within the same class increases as increases .next , we assume that the tasks are divided into 2 groups of 50 each .workers processing tasks of group 1 and 2 are characterized by average error probabilities and , respectively .this scenario reflects the case where tasks of group 2 are more difficult to solve than tasks of group 1 ( error probabilities are higher ) .workers of class are spammers for both kinds of tasks .the error probabilities provided by the algorithms under study are depicted in figure [ fig : figure2 ] , as a function of and for ., for and task organized in two groups of different difficulties . for the first group of tasks , while for the second group , moreover .] we observe that all strategies perform similarly , like in the scenario represented by figure [ fig : figure1](c ) , meaning that the algorithms are robust enough to deal with changes in the behavior of workers with respect to tasks .we wish to remark that the lra decoding scheme is fairly well performing also in this scenario , even if it was conceived and proposed only for the simpler scenario of indistinguishable tasks. this should not be too surprising , in light of the fact that , even if the error probability of each user depends on the specific task , the relative ranking among workers remains the same for all tasks .finally , in figure [ fig : figure3 ] we consider the same scenario as in figure [ fig : figure2 ] . here , however , the number of available workers per class is set to , and the workers error probabilities for the tasks in group 1 and 2 are given by , and , respectively .this situation reflects the case where workers are more specialized or interested in solving some kinds of tasks .more specifically , here workers of class 1 ( class 3 ) are reliable when processing tasks of group 1 ( group 2 ) , and behave as spammers when processing tasks of group 2 ( group 1 ). workers of class 2 behave the same for all tasks . in terms of performance ,the main difference with respect to previous cases is that the `` lra greedy '' algorithm shows severely degraded error probabilities for .this behavior should not surprise the reader , since our third scenario may be considered as adversarial for the lra scheme , in light of the fact that the relative ranking among workers heavily depends on the specific task .nevertheless , it may still appear amazing that `` lra greedy '' behaves even worse than the simple majority scheme in several cases .the technical reason for this behavior is related to the fact that , in our example , for , tasks of group 1 ( group 2 ) are allocated to workers of class 1 ( class 3 ) only , whilst workers of class 2 are not assigned any task . 
for this reason ,the matrix turns out to have a block diagonal structure , which conflicts with the basic assumption made by lra that matrix ] and .finally , by using the definition of entropy , \non & = & -\log\frac{\gamma_{tk}}{2 } -{\mathbb{e}}_{{{\bf a}}_t}f({{\bf m}}_{t } ) \non & = & -\log\frac{\gamma_{tk}}{2 } -\frac{\gamma_{tk}}{2 } \sum_{{{\bf n } } } f({{\bf n}})\log f({{\bf n}})\prod_{k=1}^k\binom{m_{tk}}{n_k } \nonumber\end{aligned}\ ] ] where $ ] and , .first we recall the definition of a matroid . given a family of subsets of a finite ground set ( i.e. , is a matroid iff : i ) if then whenever + ii ) if and with then an now we can prove proposition [ prop - matroid ] . first observe that in our case property i ) trivially holds .now we show that property ii ) holds too . given that and since by construction and , necessarily there exists an such that , this implies that .let be an individual assignment in .since by assumption , denoted with , we have that , therefore . the fact that in our case descends immediately by the fact that necessarily iff either i ) when or ii ) when .let and be two generic allocations for the task , such that and . also let the pair . with a little abuse of notation hereinafterwe denote by the mutual information between the task and the vector of answers . then the mutual information is submodular if we first observe that where in we applied the mutual information chain rule .similarly we can write by consequence reduces to by using the definition of the mutual information given in we obtain e. christoforou , a. fernandez anta , c. georgiou , m. a. mosteiro , a. sanchez , " applying the dynamics of evolution to achieve reliability in master - worker computing , _ concurrency and computation : practice and experience _ vol .25 , n. 17 , pp . 23632380 , 2013 .d. r. karger , s. oh , d. shah , " budget- optimal crowdsourcing using low - rank matrix approximations , 2011 49th annual allerton conference on communication , control , and computing ( allerton ) , pp.284,291 , 28 - 30 sept .
|
this paper presents the first systematic investigation of the potential performance gains for crowdsourcing systems deriving from information available at the requester about individual worker earnestness (reputation). in particular, we first formalize the optimal task assignment problem, when estimates of the workers' reputation are available, as the maximization of a monotone (submodular) function subject to matroid constraints. then, since the optimal problem is np-hard, we propose a simple but efficient greedy heuristic task allocation algorithm. we also propose a simple maximum a-posteriori decision rule. finally, we test and compare the different solutions, showing that system performance can greatly benefit from information about worker reputation. our main findings are that: i) even largely inaccurate estimates of worker reputation can be effectively exploited in the task assignment to greatly improve system performance; ii) the performance of the maximum a-posteriori decision rule quickly degrades as worker reputation estimates become inaccurate; iii) when worker reputation estimates are significantly inaccurate, the best performance is obtained by combining our proposed task assignment algorithm with the lra decision rule introduced in the literature.
|
_ stereoscopy _ consists in giving a depth perception out of 2d material to the viewer , and the concept behind it is fairly simple : it requires sending distinct and carefully chosen images to each eye , without one eye noticing the images intended for the other . the notion of depth perception , or _ stereopsis _, has been discussed as early as a.d .280 by euclid .while there has been some experimentation using sketching techniques before 1800 , the invention of photography by niepce in the beginning of the 19 century marked the real start of extensive experimentation with stereoscopy . assembled one of the earliest known stereoscopes , but brewster is usually attributed the construction of the first practical viewing device , now referred to as the _ brewster stereoscope _ . as predicted by , stereoscopy encountered quite a strong success in these early times , when stereoscopes where made widely available , because the production of stereo pairs become easier as photographic techniques evolved . discusses these early ages ( from 1851 to 1935 ) of stereoscopy in depth , and we refer the interested reader to his work for a detailed overview of the various applications of stereo pairs in those times .although the principle behind stereoscopy has remained the same ever since , there have been regular improvements to the methods of production and visualization .in fact , the interest in stereoscopy has been closely linked to the development of both imaging and visualization techniques , and peaks of interest arose as new production and/or visualization tools were invented . beside the evolution of photographic techniques ,the advent of computers and their ability to produce accurate and detailed stereo pairs represents one such development which resulted in a peak of interest for stereoscopy that started in the 70. several viewing devices have also been developed over the years , with one common aim : increasing the comfort and simplicity of stereoscopy for the viewer . in comparison to individual stereographs , the development of specialized glasses ( red - blue , polarised , shutter - type ) made stereo pairs easier to visualise . lately ,new stereoscopic technologies are being integrated into consumer products at a fast pace ; movies , televisions , gaming consoles , cell phones , advertisement panels , and so on .stereoscopy has become especially popular in the movie industry in the past few years with the advent of digital 3d cinemas .one should nonetheless not forget that stereo movies themselves are not recent : _ the power of love _ , in anaglyph 3d , first aired in 1922 . in the scientific community , stereoscopy has been known and used in the past , but the extent to which it has been exploited in the presentation of data differs from field to field . in astrophysics ,stereoscopy has not been used extensively , despite the multi - dimensional nature of many data sets .often , a data cube is sliced or projected in order to obtain 2d publishable pictures and graphs .the issue of displaying and publishing multi - dimensional data sets has been identified in the past , and some interesting ( non - stereoscopic ) solutions have been proposed .cosmologists working on the time evolution of the large scale structures in the universe using 3d movies to illustrate their simulations results is one example ( e.g. * ? ? 
?recently , described how documents in an _ adobe portable document format _ ( .pdf ) are now able to contain animated 3d models , and described how this can be used to create interactive 3d graphs .in addition , developed a 3d plotting library specifically tailored to the needs of the astrophysics community . also discuss and present alternative advanced image displays which might potentially take on a more significant place within the astrophysics community in the future . yet, stereoscopic techniques are not unknown to astrophysicists .planetary scientists , for example , use red - blue anaglyphs . in the case of mars ( e.g. * ? ? ?* ; * ? ? ?* ) , several probes and remote sensing satellites were equipped with special stereo cameras , for example the european space agency _ mars express _ satellite and its high resolution stereo camera experiment , the _ mars reconnaissance orbiter _ and its high resolution imaging science experiment ( hirise ) camera , the _ phoenix _ lander and its surface stereo imager ( ssi ) or the imager for the mars pathfinder ( mpi ) mission .stereo pairs are another type of stereoscopic solution that has been employed to accommodate multi - dimensional data sets in publications .one of the first astronomical stereo pairs published depicted the moon and was created as early as 1862 by l.m .rutherford . by taking two subsequent images of the moon with a six days interval, he obtained a strong enough change in orientation to induce a reasonable feeling of depth .more recently , the advent of computers expanded the possible applications of stereo pairs in astrophysics .for example , used them to display the position of the galaxies in the revised shapley - ames catalog ; and used stereo pairs to illustrate the shape of invariant surfaces in their study of dynamical problems with 3 degrees of freedom ; and created stereo pairs to illustrate the voronoi model they used to describe the asymptotic distribution of the cosmic mass on 10 - 200 mpc scales ; produced stereo pairs to show a 3d map of their sample of abell clusters ; published a stereo pair of the solar corona during the 1991 solar eclipse ; illustrated the stripping of the lmc hi and molecular clouds ; created stereo pairs of the colour - magnitude diagrams of the ionizing cluster 30 doradus ; used stereo pairs to display the 3d position of the blue horizontal - branch ( bhb ) stars discovered in the sdss spectroscopic survey ; and and used stereo pairs in complement to their interactive 3d maps of the oxygen - rich material in snr 1e 0102.2 - 7219 and snr n132d .these examples do not represent an exhaustive list of all the work that has been published using stereo pairs in astrophysics , but illustrate the wealth of topics that can profit and make use of this technique .nonetheless , the use of stereo pairs in astrophysics is less prevalent than in other fields . in biochemistry , for example, they have been an important tool to publish the 3d shapes of molecules from the beginning of the computer - era ( e.g. * ? ? ?* ; * ? ? ?* ) until today ( e.g * ? ? ?* ; * ? ? ?* ; * ? ? ?we believe that stereo pairs are a valuable tool in astrophysics too , which recent 3d innovations may help renew .in other words , we argue that stereoscopy has a great but under - exploited potential for the publication of multi - dimensional astrophysical data sets and can be a valuable complement to more standard plotting methods , especially with today s computing abilities . in sec . 
[sec : real ] , after introducing the _ free - viewing _technique , we discuss the various theoretical ways to construct a stereo pair . in sect .[ sec : tools ] , we present our trade - off method to efficiently create stereo pairs of data cubes using python . we then illustrate the unique features of stereo pairs as compared to standard plots in three different examples in sect .[ sec : science ] : a conceptual one in sect .[ sec : conc ] , one based on observational data in sect .[ sec : n132d ] and one based on theoretical data in sect .[ sec : alex ] , for which the use of stereo pairs provides critical scientific information .we discuss the role that stereo pairs can play regarding future developments of 3d visualization techniques in sec .[ sec : future ] , and summarize our conclusions in sect . [ sec : concl ] .several terms , such as _ stereoscopy , stereopsis , stereo pairs _ and _ stereograms _ , are used throughout the literature , sometimes with slightly different meanings . to avoid confusion in this article ,we refer to ; _ stereopsis _ as the depth perception reconstructed by the brain , _ stereoscopy _ as the science of inducing depth perception using 2d material of any type , _ stereograms _ as any type of image capable of inducing a depth feeling when viewed with the proper material , and _ stereo pair _ as a specific type of stereogram , where the left hand side ( the image for the left eye ; lhs ) and right hand side ( the image for the right eye ; rhs ) images are located side - by - side . at any time , our brain interprets the simultaneous images from each of our eyes , combines them and automatically reconstructs a 3d image .the reconstruction algorithm is completely unconscious , and it feels natural to see our environment in 3d . if one looks at a structure at a distance of cm for example , it will be seen by the right eye as if it had been rotated by degrees as compared to the left eye s view .hence , the very same 3d feeling can be achieved by sending two different 2d pictures of the same object to the left and right eye , if the pictures are taken from two different view angles .as long as the brain believes that it is looking at a _ real _ object , its automatic 3d reconstruction algorithm will work .the main challenge is to be able to provide a different image to each eye without the other noticing it . as we mentioned previously in sect .[ sec : intro ] , several techniques exist to achieve this goal , such as stereoscopes , glasses , or auto - stereoscopic screens ( for which no glasses are required ) .depending on the visualization technique , stereoscopic images carry different names : anaglyphs ( red - blue , polarized ) need to be looked at using the appropriate glasses ; stereo pairs , and autostereograms require the so - called free - viewing technique .this latter way of looking at stereo pairs has the advantage that it does not require any special equipment , provided that the stereo pair is in the correct format .this advantage makes it the best method to visualize stereo pairs in publications , as a majority of the readers accessing the article in its online or printed version will be able to directly get a feeling of depth . 
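Returning briefly to the geometry mentioned above, the angular difference between the two eyes' views follows from elementary geometry: it is the angle subtended by the interpupillary baseline at the object. The trivial snippet below, which assumes the mean interpupillary distance of about 6.3 cm quoted later in the text, gives roughly 9 degrees at a 40 cm reading distance.

```python
import math

def parallax_angle_deg(distance_cm, eye_separation_cm=6.3):
    """Angle (in degrees) by which the right-eye view of an object at the given
    distance appears rotated with respect to the left-eye view, i.e. the angle
    subtended by the eye baseline at the object."""
    return math.degrees(2.0 * math.atan(0.5 * eye_separation_cm / distance_cm))

# parallax_angle_deg(40.0) is about 9 degrees at a typical reading distance
```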
with the left and right images side - by - side ,it is up to the reader to have each eye looking at one image only , thus recreating the 3d feeling .obviously , this requires the reader to be familiar with the technique .however , the extended usage of stereo pairs in other fields , as mentioned in sect .[ sec : intro ] , gives us confidence that this fact should not represent an obstacle to the popularization of stereo pairs in astrophysics .the web offers a vast resource of examples for training one s ability to see the 3d images from stereo pairs rapidly , and many websites provide tutorials and suggestions on how to make it work .there are two ways to look at stereo pairs : parallel and cross - eyed .the names refer to the required orientation of the eyes , which depends on the position of the lhs and rhs images within the pair : parallel viewing requires the lhs image to be on the left ( and the rhs image on the right ) while cross - eyed requires them to be swapped . whether one technique is more comfortable than the other is a matter of personal opinion and training . in fig .[ fig : para_cross_sphere ] , we present for comparison two stereo pairs of the same object - two intersecting spheres of different radius - from the same viewpoint .the top pair is designed for parallel viewing , while the bottom one requires the cross - eyed technique to be visualized properly .looking at a stereo pair for the first time can be quite challenging and there exists several ways to make it work . after experimenting with some students and astronomers of the mount stromlo observatory , many having no previous experience with stereo pairs, we provide some suggestions that might help when looking at a stereo pair for the first time : * holding the printed page in front of you , look at the horizon .once in focus , lift up the page so that the stereo pair reaches the level of your eyes , but without adjusting the focus onto the page - the focus remains at the horizon , however , your _ attention _ is on the page . if done correctly , the left and right images will merge into a central 3d image of the double sphere .if your eyes focus on the page when you movie it up , try to relax them .if you find it difficult to see the 3d image ( so that you can read the axis labels ) , it might help to convince yourself that you are _ looking at something real_. * place the printed page on a flat surface , and position your head straight above the stereo pair .bring one finger in between the page and your eyes , so that its tip is located just below the pair , in the middle . focusing on your finger tipwill merge the background left and right images into a central 3d one . if they do not merge properly , try adjusting the height of your finger .the last step consist in removing the finger while keeping the central 3d image of the stereo pair , and might require some concentration / practice . for both techniques ,the alignment of the head is important - a slight rotation will have for consequence that the lhs and rhs will not overlap properly when merging them . with some experience , stereo pairs can be viewed on paper as well as on a computer screen . our own experience as well as that of astronomers and students at the mount stromlo observatory show that getting the eyes in the correct position becomes easier with practice , ultimately becoming almost an automatic adjustment . 
in this article, we shall only use stereo pairs designed for parallel viewing , which is the most comfortable and natural technique for most of the people we have asked around us . note that looking at a stereo pair with the wrong technique will have the consequence of reverting the depth axis ( i.e. flipping the object back to front ) . while using the appropriate viewing techniqueis recommended , this fact nevertheless enables people more comfortable with the cross - eyed technique to obtain a 3d impression of every stereo pairs within this article .generally , stereo pairs in biochemistry publications are of the parallel type , a tradition which is most likely a remnant of the necessity to accommodate the use of viewing devices in the 70 s , as described by .there is however no fixed rules , and the stereo pair constructed by of the position of bhb stars in the sdss spectroscopic survey is for example a cross - eyed pair .there exist two main techniques to build a stereo pair , depending on the projection method used to obtain the left and right images . the first one , known as the _ toe - in _ method , mimics the human eyes behaviour , and is in that sense the most obvious technique . the concept , illustrated in fig .[ fig : view ] , is as follows : the lhs and rhs projections of the 3d scene are created by projecting it along two view vectors rotated by - 5 along the azimuthal angle .the value of is somewhat arbitrary , and a higher angle will result in an increased depth perception for the viewer .our own experimentations , as well as various online tutorials , indicate that a value of works well .this freedom in the choice of is also reflected in biochemistry stereo pairs , for which several values are being used throughout the literature ; 2 , 3 , 5 , 6 and 10 .there are several issues with the toe - in method , one of the principal being vertical parallax due to keystoning .because the projections are taken at an angle , successive planes perpendicular to the viewer will be slightly distorted .the distortion is inverted in the lhs and rhs images . as a result ,when merging the two images , there will be a mismatch in the outer points , resulting in a blurred image .it should be noted that keystoning is closely related to the distance of the camera to the object , and is more important when being close from the 3d scene .the second issue with the toe - in method lies in the fact that the eyes will have to adjust from a convergent to a divergent position in order to scan the depth dimension .for extended objects this can result in strong discomfort for the viewers , and even an impossibility to merge the left and right images if the converging / diverging angle becomes too important . discuss the various distortions present in stereo pairs in great details , and we refer the interested reader to his article for more details . the offset ( or off - axis ) method addresses and solves the problems of the toe - in projection , and is in that sense sometimes considered to be more _correct_. 
in this case , the lhs and rhs _ cameras _ look at the 3d scene in parallel directions , and offset by a distance .this method does not create any keystoning , and ensures that the eyes remain in the same position when scanning the depth dimension of the reconstructed 3d picture .an illustration of the method is shown in fig .[ fig : view ] .one of the inherent drawbacks of this method is that the outside regions of the lhs and rhs fields do not overlap - and hence must be taken off the final image .choosing between the toe - in and the offset method to create stereo pairs is entirely up to the creator of the pair . as we will discuss in the next section , thetoe - in method can provide excellent stereo pairs under certain circumstances ; the stereo pairs shown in fig .[ fig : para_cross_sphere ] have for example been created using the toe - in projection method , and in the biochemistry literature , many tutorials describing the creation of stereo pairs use the toe - in method ( e.g. * ? ? ?* ; * ? ? ?* ; * ? ? ?* ; * ? ? ?furthermore , sometimes , intrinsic limitations of the software or programming language used to create the stereo pair might require a trade - off between feasibility and quality , as we will show in sec .[ sec : tools ] .the last step required to construct a stereo pair consists in placing the lhs and rhs images side - by - side .the distance between the two images is not critical .this reflect the fact that the interpupillary distance is not uniform across the population , but varies with age , gender , and race .specifically , mentions a mean interpupillary distance of 63 mm for adults , with the vast majority lying within a 50 - 75 mm range . in this article , we have used point - to - point separations ( between the lhs and rhs images ) ranging from 3.5 cm to 5 cm depending on the stereo pair , and it is a matter of personal opinion as to which value is most comfortable . increasing the inter - image distance above 6 cm is not advisable , as the stereo pair might become harder to visualize for people with a smaller interpupillary distance than average .for stereo pairs to be recognized as a valuable tool by the astrophysics community requires them to be extremely easy to create , implement and link to the data set .let us consider the 3d data set used to produce fig .[ fig : para_cross_sphere ] , which can be seen as a cloud of points in 3d space. there exist many methods in order to easily produce stereo pairs from such a data cube , and it would be impossible to list them all here .some commercial software packages can produce stereo pairs with a single mouse click - the stereo pairs shown in sec .[ sec : alex ] were for example produced with the _ visit _ software .alternatively , many scientists have developed their own software , specifically tailored to their own data type and format .such customized or commercial software are capable of creating excellent stereo pairs , and are usually designed well enough not to have too steep learning curves . 
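For reference, the two projection geometries described above can be summarised in a short sketch returning, for a nominal viewpoint, the left and right camera positions and viewing directions of both the toe-in and the offset methods. The default separation angle, the default offset, and the assignment of cameras to eyes are illustrative choices of ours (swapping the two cameras converts a parallel pair into a cross-eyed one); the function is not part of any of the packages mentioned above.

```python
import numpy as np

def stereo_cameras(target, distance, azim_deg, elev_deg, delta_deg=3.0, offset=None):
    """Left/right camera (position, view direction) pairs for the two geometries:
    'toe-in'  -- both cameras on a sphere around the target, azimuths delta_deg
                 apart, both pointing at the target;
    'offset'  -- both cameras translated by +/- offset/2 perpendicular to a common,
                 parallel viewing direction (no keystoning).
    Angles in degrees; assumes the viewing direction is not vertical."""
    target = np.asarray(target, dtype=float)
    az, el, half = np.radians(azim_deg), np.radians(elev_deg), 0.5 * np.radians(delta_deg)

    def camera(azimuth):
        pos = target + distance * np.array([np.cos(el) * np.cos(azimuth),
                                            np.cos(el) * np.sin(azimuth),
                                            np.sin(el)])
        return pos, (target - pos) / distance

    toe_in = {'left': camera(az + half), 'right': camera(az - half)}

    centre, view = camera(az)
    right_axis = np.cross(view, [0.0, 0.0, 1.0])
    right_axis /= np.linalg.norm(right_axis)
    # default offset chosen to give a comparable angular separation at the target
    s = distance * np.tan(2.0 * half) if offset is None else offset
    off_axis = {'left': (centre - 0.5 * s * right_axis, view),
                'right': (centre + 0.5 * s * right_axis, view)}
    return toe_in, off_axis
```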
here , we propose an alternative , off - the - shelf , efficient way to produce stereo pairs using the programming language python .this method , which we will refer to as the _ simplified toe - in _ ( sti ) is based on our own experimentation , and is extremely straightforward to implement , even for people with little / no experience with this language .furthermore , creating stereo pairs with python grants access to a large collection of non - plotting modules to work on the data cube beforehand .this provides the more advanced user with the freedom to potentially reduce , sort , fit and clean the data set before creating a stereo pair - a strong advantage as compared to commercial software which often requires specific input , and does not enable direct data interaction .our sti method is based on the toe - in projection described previously , but accounts for current limitation in the matplotlib plotting module , in which several projection parameters are currently hard - coded and can not be accessed easily .the required functions are located within the mplot3d toolkit , and are designed to create a 2d projection out of a 3d scene .specifically , the axes3d instance , creating the plotting area , takes in two parameters , the elevation and the azimuth , both in degrees .creating an sti stereo pair is then a 3-steps process : 1 .create two side - by - side ( using subplot ) plots , each with the axes3d instance .2 . for a viewpoint located at ( ; ) , set the left plot viewpoint to ( ; ) and the right plot viewpoint to ( ; ) 3 .print the data using the appropriate mplot3d function , such as scatter3d for a cloud of points , or contour3d for isosurfaces .this method is a trade - off : toe - in or offset stereo pairs can not presently be created with matplotlib easily , unlike sti stereo pairs .especially , implementing the toe - in method requires the modification of the source code of the axes3d function .but the sti simplicity comes at a price . for viewpoints with no elevation ( ), the sti method is identical to the toe - in technique .however , errors both in the lhs / rhs images orientation , as well as in their azimuthal separation , are introduced with increasing values of . for completeness, we describe those issues in detail in the appendix [ app ] , and compare sti stereo pairs with toe - in and offset stereo pairs for different elevations .the comparisons show that the sti method delivers very similar results to the toe - in method for elevation as high as . beyond this limit ,the depth perception is reduced compared to the toe - in method . comparing with the offset methodreveals that the 3d structure of the object is increased and better revealed in the sti method , which makes the latter more suitable for the publication of multi - dimensional data sets . in other words , if the object appears , with the offset technique , to be _ popping out _ of the screen , it does not contain itself much depth information . reached a similar conclusion when building a stereoscopic movie of 3d simulations of a core - collapse supernova : `` _ [ the offset technique ] ... created a plausible facsimile of 3d .however , the trained observer noticed the flatness in the center of the sphere , and we did not want to rely on 2d depth cues such as lighting and shading to convey 3d information . 
_ '' hence , the toe - in , and in our case the sti technique , is recommended for the creation of stereo pairs in astrophysics .it is : * more efficient at providing a depth structure to the data as compared to the offset method . * easier to implement in python than the proper toe - in method ( which requires the modification of the source code ) .the hard - coded parameters within mplot3d cause no visible vertical parallax or other visual defects in sti stereo pairs .we have asked some students and astronomers at the mount stromlo observatory to test our sti stereo pairs .none of them found the sti pairs more tiring or difficult to visualize , with a very satisfactory depth impression .most of them also noted that even if the depth perception in sti pairs is degraded beyond compared to toe - in pairs , it does not vanishes completely . in that sense, our suggested sti method can be used for any viewpoint with satisfactory depth impression and reasonable comfort for the viewer .we illustrate the role that stereoscopy , in the form of stereo pairs , can play in a publication with three examples ; conceptual , observational , and theoretical .clearly , the range of applications for stereoscopy in astrophysics is much larger than those we are about to present , and the publications mentioned in sec .[ sec : intro ] highlight the fact that stereo pairs can be used for almost any type of multi - dimensional data set , e.g. , 3d maps and structures , 3d iso - surfaces , n - body simulations and trajectories , cosmological simulations , magnetic and other field maps , 3d function fitting , color - magnitude diagrams , hydrodynamic simulations and complex ( e.g. turbulent ) structures .as mentioned in section [ sec : intro ] , stereoscopy , in this case in the form of stereo pairs , is different from standard plots in that it transmits a feeling of depth to the viewer .let us illustrate this advantage with a practical example . in fig .[ fig : spheres ] , we present three stereo pairs of the same object , two intersecting spheres of different radius . in each case, the two spheres symmetry axis is in the xz plane , and tilted by 45 degrees with respect to the z axis .several observations can be made at this point .first , stereo pairs work both in color or greyscale .second , stereo pairs provide the reader with a true feeling of depth , and unambiguously convey the orientation of the structure . in the case of this double - sphere , the use of a stereo pair removes the ambiguity that arises when looking at a single image only , in which case the spheres could be seen as being oriented in the xz or yz plane .the use of colors may also help lift the orientation degeneracy : in the bottom pair , plotting the green sphere _ above _ the pink one does suggest that the axis lies in the xz plane .however , in some cases , it might not always be possible to define the order in which the objects are plotted ( or printed ) by color . in the middle pair, we have intentionally plotted the pink sphere above the green , which then suggest , when looking at only one of the image , that the rotational axis lies in a yz plane . in that case ,stereo viewing is perfect to correct this wrong feeling . in 3d ,the middle pair looks essentially identical to the bottom one , with the rotational axis lying in the xz plane ( i.e. 
the big , green sphere is on top in all three cases ) .while this example is rather simple , it nevertheless illustrates some key advantages that stereo pairs have over single projected images .we reinforce this point with stereo pairs of more complex shapes presented in the next sections . used the wide field spectrograph at siding spring observatory to image the young supernova remnant ( ysnr ) n132d located in the large magellanic cloud ( lmc ) .the initial data cube axis units are ( x [ arcsec ] , y [ arcsec ] , [ ] ) , and by studying the [ o iii ] forbidden line at , and its blue- and red - shifted features , they can identify the oxygen - rich knots in the ysnr , and obtain their radial velocities . the data cube axesare then transformed to ( x[arcsec ] , y[arcsec ] , v [ km s ) . assuming a distance to the lmc of 50 kpc ( from * ? ? ?* ) , and an age of years , they transformed their third data cube axis to a spatial dimension .thus , they obtained an accurate 3d spatial map with axes ( x [ pc ] , y [ pc ] , z [ pc ] ) of the oxygen - rich filaments in snr n132d . stereo pairs of this 3d map are shown in fig .[ fig : n132d ] .elevation with respect to the plane of the sky ( top ) , and from the nnw and + 10 elevation ( bottom ) .the scales are given in pc .adapted from . ]elevation with respect to the plane of the sky ( top ) , and from the nnw and + 10 elevation ( bottom ) .the scales are given in pc .adapted from . ]stereo pairs are one very useful way to fully understand the true nature of this snr . have used projections of their 3d map , that showed the clumpy structure of the ejecta , as well as hints of the ring - like structure of the ejecta .they also created an interactive 3d map , that enables the reader to zoom , pan , rotate and fly around and through their 3d map .stereo pairs , together with those others visualization methods , confirmed the presence of the ring structure , and ruled out any perspective effects due to the projection of the 3d map on a 2d plane .the stereo pairs are very efficient in _ showing _ this ring to the reader , compared to , e.g. , a montage of slices .this is a big advantage in the case of snr n132d , for which the actual shape of the ejecta has been subject to interpretation since the discovery of the remnant ( see * ? ? ?* ; * ? ? ?* ; * ? ? ?everyone can _ see _ the ring , and the impression of depth given by stereo pairs is a valuable complement to the interactive 3d map .those stereo pairs have been created with python using the sti technique described in sect .[ sec : tools ] .the whole code , that takes as an input the data cube in a _.fits _ format , contains less than 50 lines , of which only 10 are actually responsible for plotting the data .stereo pairs can also be used for the visualization of theoretical data sets . in this example, we use stereoscopy to reveal the structure of a simulated relativistic active galactic nucleus ( agn ) jet .the jet simulations were grid - based hydrodynamic simulations which produced multivariate data of thermodynamic quantities , e.g. , density , temperature , pressure , velocity components and tracers , as functions of 3 rectilinear spatial coordinates .the resolution of the simulations was cells , each cell representing a physical volume of .volume rendered images of the double jet structure are shown in the two stereo pairs , fig .[ fig : jetlong ] and fig .[ fig : jetshort ] . 
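As a concrete illustration of the three-step STI recipe of Sect. [sec:tools], and of the kind of short script referred to above for the N132D map, the sketch below builds a stereo pair of an arbitrary cloud of points with matplotlib. It is a minimal example of ours, not the code used for the figures in this paper; the default viewing angles are arbitrary, and swapping the two panels converts the pair between parallel and cross-eyed viewing.

```python
import numpy as np
import matplotlib.pyplot as plt
from mpl_toolkits.mplot3d import Axes3D  # noqa: F401 -- registers the '3d' projection

def sti_stereo_pair(x, y, z, azim=-60.0, elev=20.0, delta=3.0):
    """Simplified toe-in (STI) stereo pair of a point cloud: two side-by-side
    Axes3D panels whose azimuths differ by delta degrees, at the same elevation."""
    fig = plt.figure(figsize=(10, 5))
    for i, shift in enumerate((+0.5 * delta, -0.5 * delta)):   # left panel, then right
        ax = fig.add_subplot(1, 2, i + 1, projection='3d')
        ax.scatter(x, y, z, s=2, c=z, cmap='viridis')
        ax.view_init(elev=elev, azim=azim + shift)             # step 2 of the recipe
        ax.set_xlabel('x'); ax.set_ylabel('y'); ax.set_zlabel('z')
    fig.subplots_adjust(wspace=0.05)
    return fig

# example: points on two intersecting spheres of different radii (cf. Fig. [fig:spheres])
rng = np.random.default_rng(1)

def sphere(n, centre, radius):
    v = rng.normal(size=(n, 3))
    v /= np.linalg.norm(v, axis=1, keepdims=True)
    return np.asarray(centre) + radius * v

pts = np.vstack([sphere(2000, (0.0, 0.0, 0.0), 1.0),
                 sphere(2000, (0.5, 0.0, 0.5), 0.6)])
fig = sti_stereo_pair(pts[:, 0], pts[:, 1], pts[:, 2])
fig.savefig('sti_pair.png', dpi=200)
```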
in these renderings ,the ray - traced variable was proportional to the 1.8th power of the density and the tracer variable of the jet , which is a measure of the radio emissivity of a jet plasma . .] figure [ fig : jetlong ] shows that stereo pairs do not necessarily need to be made up of square images . in this case, the upright rectangular stereo pair allows one to inspect the 3d structure of the jet along the full length of its propagation axis .in particular , one can see the deformations of the central jet stream as it becomes unstable due deceleration and entrainment in the lobes .the structure of the jet lobes along the line of sight is also clearer in the stereo pairs than in either of the 2d images on their own . in the edge - on stereo pairs of fig .[ fig : jetshort ] , the use of stereoscopy enables one to identify the locations of lower and higher concentrations of jet plasma within the volume of the lobe .globally , one obtains a less ambiguous picture of the true shape and structural characteristics of the complex flow .the viewer obtains a strong sense of depth in the image , despite the contracted view at small angles of the line of sight to the jet axis .the relativistic agn jet simulation were performed with the _ flash _ code .the stereo pairs were produced with the _ visit _ software , developed at the lawrence livermore national laboratory ._ visit _ has a built - in stereo output . as a further example for the visualization of theoretical data benefiting from stereoscopy , fig .[ fig : clouds ] shows a stereo pair of fractal clouds .these were generated with the procedure outlined by .the procedure creates a spatially fractal distribution in cloud pixels that simultaneously obeys a single - point log - normal probability density function in cloud density . on certain scales , fractal structure and log - normal single point statisticsare characteristic of atmospheric and interstellar clouds , and 3d data sets such as those depicted in fig .[ fig : clouds ] may be used as initial conditions in hydrodynamical simulations .such fractal structures usually prove difficult to visualize .the use of stereo pairs provides an enhanced depth perception and thereby a clearer view of the relative positions of the clouds .the fractal outlines are more obvious , even interior to the clouds .in the previous sections , we have described how stereoscopy , in the form of stereo pairs , can be a powerful tool for publishing multi - dimensional data sets . as of today , stereoscopy is the _ most widely accepted technique for the capture and display of 3d content _ , whereas other methods , such as holography ( e.g. * ? ? ?* ; * ? ? ?* ) , are considered very promising , but , as of now , harder to implement on a large scale . 
in that sense , we believe that stereo pairs can play a major role in the future of data visualization in astrophysics ( and any field of science ) , by providing researchers today with a simple way to explore , discover , imagine , identify , and share new _ stereoscopy - based _ analysis methods of their data , methods that will then be ready for implementation in exquisite interactive , immersive , high - end visualization tools tomorrow .such immersive 3d visualization technologies , both for the scientific and non - scientific community , still appears rather cubersome to use and implement , and often require off - the - shelf , custom software and setup .but technology is moving at a fast pace .for example , working with and sharing data sets in 3d on televisions and hand - held devices might become commonplace fairly soon , as such devices are already easily available on the market .the idea of 3d television is rather old , with early experiments on stereo tv as early as 1920 , and the first 3d tv broadcast occurring in 1980 . yet , very recently , the rapid re - appearance of 3d televisions ( and 3d hand - held devices ) lead to state that immersive 3d visualization technologies are now well in the so - called _ slope of enlightment _ within the hype s cycle of new technologies , the last step before reaching a more productive phase . so far , one of the main issues slowing down the expansion of 3d television on the market is most probably the lack of 3d content to be displayed on those devices , a key factor for success .this is also true for scientific applications of this technology , and a wider usage of stereo pairs may help scientists identify in what ways 3d tv could soon play a significant role in their research .once the need will have been clearly and widely identified , there is no doubt that the yet missing standardized application programming interface ( api ) and software links between scientific data sets and already existing hardware will be rapidly implemented . 
in short ,we believe that stereo pairs , an old and well documented tool ( which can now be easily implemented ) , could help scientists keep an open mind , and potentially shape the future of multi - dimensional data visualization and analysis .we have discussed the concept of stereo pairs and highlighted their potential benefits for the astrophysics community .first , we presented the free - viewing technique and provided advice to easily visualize both parallel and cross - eyed stereo pairs for the first time .we then argued that stereo pairs can be easily produced and reproduced on a computer screen or on printed material with most of the usual programming languages or software used nowadays within the astrophysics community .in particular , we have introduced and described an alternative , off - the - shelf , easy way to produce high quality stereo pairs with python , which we refer to as the _ simplified toe - in _ method .this technique adapts the _ official _ toe - in procedure taking into account the current limitation of python plotting abilities , without affecting on the quality of the stereo pairs .specifically , no vertical parallax can be detected in the resulting stereo pairs .testing our sti stereo pairs on several students and astronomers at the mount stromlo observatory revealed that they represent a good trade - off , by being able to convey a satisfactory feeling of depth from any viewpoint , and by being as effective as standard toe - in stereo pairs with an elevation lower that .the tests also revealed that sti stereo pairs provide more depth structure around the data itself as compared to their equivalent offset stereo pairs , and that the sti method is in that respect more appropriate for creating stereo pairs in astrophysics , a fact already observed by .we have then used three examples , one idealized and two realistic , with which we have presented various types of stereo pairs , highlighted several aspects of stereoscopic visualization , and identified the main benefits of using stereo pairs as a complement to more standard plotting techniques in a publication .first , they are a polyvalent tool that is adaptable to one s needs ; their shape , size , and color can be adapted to best reveal the 3d data set without impacting the ability to transmit a depth perception to the viewer .second , they profit any multivariate data set , observational or theoretical , and potentially benefit different genres of studies ( e.g. , of both the theoretical and observational kind ) .third , they greatly facilitate the communication of complex 3d shapes .especially , where a text description might be subject to interpretation , stereo pairs can force upon the viewer a unique view of the data set , thereby avoiding misconceptions .this is possibly the main factor that should dictate the use of stereoscopy in publications and presentations .for all these reasons , stereo pairs should be considered a valuable tool for the astrophysics community - a field where most data sets are multidimensional and multivariate , and where stereo pairs can be applied to many different sub - topics , but always with the common aim of simplifying , clarifying , and eliminating misconceptions .the evolution of informatics has made stereo pairs aesthetic , useful , and straightforward to produce .we are convinced that they have a promising future , given the rapid evolution of 3d visualization hardware and techniques , e.g. 
3d televisions .although we still lack a standardized api and user - friendly software to couple the stereo images to the display devices , these will likely be provided as soon as the need is identified .sharing astrophysical data sets in 3d on hand - held devices might sound futuristic .nonetheless , stereo pairs can already be easily stacked into a movie , and played during a talk in a lecture theatre equipped with 3d projection abilities , enabling the audience to experience a glimpse of the 3d future for astrophysics . in conclusion, we are convinced that the ideas conceived through the ongoing 3d trend currently occurring in the non - scientific community can and ought to be used in astrophysics .stereo pairs are a good way to start opening our minds today .we thank the referee for his / her comments that helped greatly improve this paper .this research has made use of nasa s astrophysics data system .part of this research was undertaken on the nci national facility at the australian national university and some software used in this work were in part developed by the doe - supported asc / alliance center for astrophysical thermonuclear flashes at the university of chicago .ackermann , g.k ., eichler , j. : holography : a practical approach .wiley - vch verlag gmbh & co. kgaa , weinheim ( 2007 ) barker , h.w ., wiellicki , b.a ., parker , l. : a parametrization for computing grid - averaged solar fluxes for inhomogeneous marine boundary layer clouds .part ii : validation using satellite data .journal of atmospheric sciences * 53 * , 2304 - 2316 ( 1996 ) barnes , d.g . , fluke , c.j . , bourke , p.d . , parry , o.t .: an advanced , three - dimensional plotting library for astronomy .publications of the astronomical society of australia * 23 * , 82 - 93 ( 2006 ) barnes , d.g . , fluke , c.j .: incorporating interactive three - dimensional graphics in astronomy research papers . new astronomy * 13 * , 599 - 605 ( 2008 ) van den bergh , s. : the magellanic clouds , past , present and future - a summary of iau symposium no . 190 .new views of the magellanic clouds * 190 * , 569 ( 1999 ) berry , c. , baker , m.d .: inside protein structures , teaching in three dimensions .biochemistry and molecular biology education * 38 * , 425 - 429 ( 2010 ) bicknell , g.v . : a model for the surface brightness of a turbulent low mach number jet .i - theoretical development and application to 3c 31 . * 286 * , 68 - 87 ( 1984 ) blondin , j.m . ,mezzacappa , a. , demarina , c. : stability of standing accretion shocks , with an eye towards core - collapse supernovae . *584 * , 971 - 980 ( 2003 ) darrah , w.c . : the world of stereographs . land yacht press , nashville , tennessee ( 1977 ) dodgson , n.a . : variation and extrema of the human interpupillary distance .spie * 5291 * : stereoscopic displays and virtual reality systems xi , 36 - 46 ( 2010 ) dopita , m. , hart , j. , mcgregor , p. , oates , p. , bloxham , g. , jones , d. : the wide field spectrograph ( wifes ) . * 310 * , 255 - 268 ( 2007 ) dopita , m. , rhee , j. , farage , c. , mcgregor , p. , bloxham , g. , green , a. , roberts , b. , nielson , j. , wilson , g. , young , p. , firth , p. , busarello , g. , merluzzi , p. : the wide field spectrograph ( wifes ) : performance and data reduction . *327 * , 245 - 257 ( 2010 ) federrath , c. , klessen , r.s ., schmidt , w. : the fractal density structure in supersonic isothermal turbulence : solenoidal versus compressive energy injection . * 692 * , 364 - 374 ( 2009 ) fenn , j. , raskino , m. 
we have introduced the simplified toe-in (sti) method, described in sect. [sec:tools], as an alternative to the official toe-in and offset methods for creating stereo pairs. the sti method accounts for the current limitations of matplotlib, the python plotting library. the main limitation lies in the fact that the viewpoint towards a 3d plot is defined by only two parameters, the azimuth and the elevation. this makes it impossible to create proper toe-in or offset stereo pairs directly, for reasons highlighted below (see appendix [app:sti]). nonetheless, our sti stereo pairs provide an excellent depth perception, especially when , and are not more difficult or tiring to visualize than their equivalent toe-in stereo pairs, according to our small survey of mount stromlo students and astronomers. in appendix [app:comp], we provide a comparison chart of sti, toe-in and offset stereo pairs for varying elevations. but let us start by describing the issues of sti stereo pairs, and the required python code updates to create toe-in and offset stereo pairs.

as we defined in sec. [sec:tools], the lhs and rhs projection viewpoints for the sti method are located at and , with the opening angle in between the lhs and rhs viewpoints. a schematic of the situation is shown in fig. [fig:stiti], which shows the location of the viewpoints on the visualization sphere. by default in matplotlib, the sphere is centred on the middle of the data set, and its radius is ten times the size of the data set. this definition of the sti lhs and rhs viewpoints makes it easy to use the axes3d instance, which can take the elevation and azimuth as parameters. however, it introduces two mistakes compared to the official toe-in method:

1. the distance between the and points, measured along the great circle to which they belong (the red line in fig. [fig:stiti]), decreases with increasing elevation.

2. the orientation of the lhs and rhs views, which by default are oriented towards the z-axis (i.e. along the orange lines), is:

* not parallel between the lhs and rhs viewpoints,
* not perpendicular to the red great circle (as it ought to be),
* varying with elevation.

in other words, with increasing elevation, the sti lhs and rhs projection points will move along the orange lines, slowly merging towards each other - the cause of the diminishing feeling of depth beyond . the mismatch in the view rotation, increasing from 0 to at for each view, is however small enough at any elevation not to be very noticeable. to produce toe-in stereo pairs, we have updated the source code of the axes3d instance of the mplot3d toolkit, so as to be able to rotate the projection orientation around the view axis. using spherical trigonometry, one can show that for a central viewpoint , the lhs and rhs projections need to be made from the position and with a respective rotation of and around the view axis, where

$$\delta_1 = 2\arccos\left[\left(\cos\frac{\delta_0}{2}-\cos\Big(\frac{\pi}{2}-\theta_0\Big)\cos\Big(\frac{\pi}{2}-\theta_1\Big)\right)\cdot\frac{1}{\sin\big(\frac{\pi}{2}-\theta_0\big)\sin\big(\frac{\pi}{2}-\theta_1\big)}\right],$$

$$\epsilon_t = \sin\Big(\frac{\pi}{2}-\theta_0\Big)\cdot\frac{1}{\sin\big(\frac{\pi}{2}-\theta_1\big)}.$$

[fig. [fig:stiti] caption: schematic of the sti (points and ) and toe-in (points and ) stereo pair viewpoints on the visualization sphere; the viewpoints associated with the sti method (resp. toe-in method) move along the orange (resp. light blue) lines with varying elevation.]

the and viewpoints are shown in fig. [fig:stiti]. they will move along the light blue lines, always keeping a fixed separation at any elevation, as measured along the great circle they belong to (dark blue line). the rotation error of the views (initially oriented along the purple line, but which is corrected by the introduction of a rotation matrix within the plotting source code) increases with elevation, and is as high as 90 for . because the sti viewpoints converge towards each other at high elevations, their respective projection's rotation error remains small ( ) compared to the toe-in projections - and hence for high elevations we do not require an update of the plotting source code to produce comfortable sti stereo pairs!

implementing the offset method is more complicated in python, and requires much more involved modifications of the plotting code. instead, we adopted the following, more simplistic, approach. we applied the following transformation to our data cube: , where is a scale factor defining the intensity of the offset. in words, we apply a linear translation to the data, perpendicular to the view axis. this method has the disadvantage of _disconnecting_ the data from the axes; however, it is enough to get an idea of the quality of offset stereo pairs (and to observe that the data itself has less depth content than in toe-in stereo pairs).

in fig. [fig:comp_1] and fig. [fig:comp_2], we present stereo pairs produced with python using the sti, toe-in, and offset methods described previously; a minimal code sketch of the sti case is given below. furthermore, we also include a _control_ stereo pair, for which we have set identical lhs and rhs images, and which consequently contains no depth information at all. we are aware that by looking long enough at those stereo pairs, the brain will start inducing a _wrong_ depth perception, not directly present in the image; the control stereo pair hence tests that one does not _guess_, rather than _see_, depth information. all stereo pairs have been produced at an azimuth and varying values for the elevation .
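for reference, the following is a minimal sketch of how such an sti pair can be produced with matplotlib and mplot3d; the data set, opening angle, figure layout and output file name are illustrative assumptions only, not the exact code used for the figures.

```python
# a minimal, illustrative sketch of an sti stereo pair with matplotlib/mplot3d; the data
# set, opening angle, figure size and file name below are placeholder assumptions.
import numpy as np
import matplotlib.pyplot as plt
from mpl_toolkits.mplot3d import Axes3D  # registers the '3d' projection on older matplotlib

rng = np.random.default_rng(0)
x, y, z = rng.normal(size=(3, 500))       # stand-in data cube

phi0, theta0 = -60.0, 20.0                # central azimuth and elevation [deg]
delta0 = 3.0                              # opening angle between the lhs and rhs views [deg]

fig = plt.figure(figsize=(10, 5))
for i, dphi in enumerate((-0.5 * delta0, +0.5 * delta0)):
    ax = fig.add_subplot(1, 2, i + 1, projection='3d')
    ax.scatter(x, y, z, s=2)
    ax.view_init(elev=theta0, azim=phi0 + dphi)   # only these two view parameters are exposed
    ax.set_title('lhs' if i == 0 else 'rhs')

plt.tight_layout()
plt.savefig('sti_stereo_pair.png', dpi=150)
```

the same loop can be reused for the other panel types once the corresponding viewpoints and rotations are available.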
in every case ,the green ( big ) sphere is on top , and the symmetry axis of the system lies in the xz plane . comparing the offset and toe - in stereo pairs first , one notices that , as expected , the double sphere structure appears to hover over the axes ( located further away ) in the former ones .this a direct consequence of our simplistic implementation of the offset method - it is accurate for the data points , but not for the background axes .nonetheless , focusing on the double - sphere structure itself , it has noticeably less depth elongation in the case of the offset method . in the toe - in stereo pairs ,not only do the axes wrap around the data points nicely , but the data points themselves appear with a strong feeling of depth . in that sense , we believe the toe - in method to be much more appropriate for the publication of astrophysical data cubes , as it provides more depth structure to the data itself . comparing the sti stereo pairs with their equivalent toe - in pairs , it can be seen that depth perception within the sti image is gradually degraded as the elevation increases . as mentioned above ,this is due to the fact that the lhs and rhs sti projection viewpoints gradually move towards each other at higher elevation . nonetheless , some degree of depth perception is present , even as high as .the different orientation of the lhs and rhs images are also staying small , and hence do not need to be corrected by any modification of the matplotlib source code . in summary , our suggested sti plotting method , if theoretically not correct , nevertheless appears in practice to provide ( very ) satisfactory results , and can be directly implemented using python , matplotlib and mplot3d .the experienced python user might find it easy to update his source code to create correct toe - in stereo pairs .we intend to include our code update in future releases of matplotlib .until we manage to do so , we are happy to provide our source code modifications to the interested user directly ( which does not require extensive knowledge of python to be implemented ) ; simply contact f.v . at _
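as a rough illustration of the simplistic offset approach described above - a sketch only, not the actual source-code modification we distribute - the data can be shifted by a scale factor along a horizontal direction perpendicular to the view axis while both panels keep an identical viewpoint; the function, parameter values and example data below are placeholders.

```python
# a rough sketch of the simplistic offset-style shift (a sketch only, not the actual
# source-code modification discussed above): the data are translated by +/- eps/2 along a
# horizontal direction perpendicular to the view axis, and both panels keep an identical
# viewpoint.  eps, the example data and the azimuth convention are assumptions; mplot3d's
# azimuth zero-point may require adjusting the sign/phase of `perp`.
import numpy as np
import matplotlib.pyplot as plt
from mpl_toolkits.mplot3d import Axes3D  # registers the '3d' projection on older matplotlib


def offset_pair(x, y, z, phi0_deg=-60.0, theta0_deg=20.0, eps=0.05):
    phi0 = np.radians(phi0_deg)
    perp = np.array([-np.sin(phi0), np.cos(phi0), 0.0])   # horizontal, roughly perpendicular to the view axis
    fig = plt.figure(figsize=(10, 5))
    for i, shift in enumerate((-0.5 * eps, +0.5 * eps)):
        ax = fig.add_subplot(1, 2, i + 1, projection='3d')
        # only the data points are shifted; the axes are not, which reproduces the
        # "hovering" of the data over the axes noted in the comparison above
        ax.scatter(x + shift * perp[0], y + shift * perp[1], z + shift * perp[2], s=2)
        ax.view_init(elev=theta0_deg, azim=phi0_deg)
    return fig


rng = np.random.default_rng(1)
fig = offset_pair(*rng.normal(size=(3, 500)))
fig.savefig('offset_stereo_pair.png', dpi=150)
```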
|
stereoscopic visualization is seldom used in astrophysical publications and presentations compared to other scientific fields , e.g. , biochemistry , where it has been recognized as a valuable tool for decades . we put forth the view that stereo pairs can be a useful tool for the astrophysics community in communicating a truer representation of astrophysical data . here , we review the main theoretical aspects of stereoscopy , and present a tutorial to easily create stereo pairs using python . we then describe how stereo pairs provide a way to incorporate 3d data in 2d publications of standard journals . we illustrate the use of stereo pairs with one conceptual and two astrophysical science examples : an integral field spectroscopy study of a supernova remnant , and numerical simulations of a relativistic agn jet . we also use these examples to make the case that stereo pairs are not merely an ostentatious way to present data , but an enhancement in the communication of scientific results in publications because they provide the reader with a realistic view of multi - dimensional data , be it of observational or theoretical nature . in recognition of the ongoing 3d expansion in the commercial sector , we advocate an increased use of stereo pairs in astrophysics publications and presentations as a first step towards new interactive and multi - dimensional publication methods .
|
the purpose of this paper is to continue our previous study of the link between the equations of inviscid continuum mechanics and the motion of riemannian manifolds ( _ cf . _more specifically , we have shown in that a solution of the system of balance laws of mass and momentum in two space dimensions can be mapped into an evolving two - dimensional riemannian manifold in .furthermore , it is shown that the geometric image of smooth solutions of the continuum equations for non - wild data ( not simple shears ) can be shadowed by a non - smooth geometric motion which we called generalized shears .in addition , the geometric initial value problem for generalized simple shears has an infinite number of energy preserving non - smooth solutions .since the earlier paper was focused on two - dimensional continuum mechanics , it is natural to develop a theory to deal with three space dimensional case , and we provide here when the riemannian manifold is now time evolving in .moreover , our earlier paper only dealt with inviscid materials . in this paper, we extend our results to viscous fluids , including the incompressible viscous fluids governed by the classical navier - stokes equations . in particular, we use the geometric theory to predict the critical profile initial data for the onset of turbulence in channel flow .the main value of such a continuum - geometry link in was to give a rather straightforward demonstration for the existence of wild solutions for the equations of inviscid continuum mechanics for classical incompressible and compressible fluids and neo - hookian elasticity , and for the non - uniqueness of entropy weak solutions to the initial value problem .the work was motivated by the important sequence of papers by delellis , szekelyhidi , and others on the application of gromov s -principle and convex integration to provide both the existence of wild solutions and the non - uniqueness of solutions to the cauchy problem .since gromov s work is based on the classical nash - kuiper theorem for non - smooth isometric embeddings in riemannian geometry , it was our goal to exposit a direct map from continuum mechanics to the motion of a riemannian manifold , in order to avoid the rather sophisticated analysis needed to apply gromov s theory .furthermore , in making the direct continuum to geometry link , it becomes apparent that our approach is very much in the spirit of the einstein equations of general relativity , _ i.e. 
_ , in both our work and general relativity , the matter relation ( fluids , gases , _ etc ._ ) on one side of the equations drives the geometric motion of an evolving space - time metric .perhaps in retrospect , it is no surprise that the proof of the continuum to geometry map in two space dimensions is distinctly different than the proof we provide here for three space dimensions .the reason is more than just technical and lies at the heart of much of the work for the isometric embedding problem in three and higher dimensional riemannian manifolds .more specifically , for two - dimensional manifolds , it is rather straightforward to analyze the gauss - codazzi equations which provide both necessary and sufficient conditions for the existence of an isometrically embedded manifold in and that was the view we took in .however , for the case of three space dimensions , all the work to date has avoided the application of the next level of necessary and sufficient conditions ( gauss , codazzi , and ricci equations ) and has dealt with the fully nonlinear embedding equations directly ; we follow this direct approach here as well .we do this by invoking two key results : the solvability of the system for determination of a metric , given the components of the riemann curvature tensor ( _ cf ._ deturck - yang ) , and the local solvability of the isometric embedding problem for the embedding of a three - dimensional riemannian manifold into ( _ cf ._ maeda - nakamura ; see also goodman - yang and chen - clelland - slemrod - yang - wang ) .the paper is divided into six sections after this introduction .section 2 provides a review of the elements of riemannian geometry and the isometric embedding problem .a short presentation of the balance laws of continuum mechanics is given in section 3 . in section 4 , a short proof is presented for deriving the metric map taking continuum mechanical evolution into evolution of a three - dimensional riemannian manifold immersed in the six - dimensional euclidean space . in section 5 ,we present a short observation that shearing flow may be mapped into a riemannian flat manifold in the six - dimensional euclidean space . in section 6 , the nash - kuiper theorem for non - smooth isometric embeddings is first recalled and is then applied to show that the geometric initial value problem has an infinite set of weak solutions for the same non - smooth initial data . finally , in section 7 , the theory is applied to the classical incompressible navier - stokes equations and gives a prediction of the critical profile initial data for the onset of turbulence in channel flow .the critical profile predicted by the geometric theory of this paper is in agreement with the experimentally observed profile given in reichardt .this section is devoted to some preliminary discussion and review about geometry and isometric embedding .riemann in 1854 introduced the notion of an abstract manifold with metric structure , motivated by the problem of defining a surface in euclidean space independently of the underlying euclidean space .the isometric embedding problem seeks to establish the conditions for the riemannian manifold to be a sub - manifold of a euclidean space with the same metric .for example , consider the smooth -dimensional riemannian manifold with metric tensor . 
in terms of localcoordinates , the distance on between neighbouring points is where and throughout the paper , the standard summation convention is adopted .now let be the -dimensional euclidean space , and let be a smooth map so that the distance between neighbouring points is given by where the subscript comma denotes partial differentiation with respect to the local coordinates .the existence of a _ global embedding _ of in is equivalent to the existence of a smooth map for each into ._ isometric embedding _ requires the existence of a map for which the distances are equal .that is , which may be compactly rewritten as where , and the inner product in is denoted by symbol `` '' .the classical isometric embedding of a -dimensional riemannian manifold into a -dimensional euclidean space is comparatively well studied and comprehensively discussed in han - hong . by contrast , the embedding of -dimensional riemannian manifolds into euclidean space has only a comparatively small literature .when , the main results are due to bryant - griffiths - yang , nakamura - maeda , goodman - yang , and most recently to poole and chen - clelland - slemrod - wang - yang .a general , related case when is considered in han - khuri .these studies rely all on a linearization of the full nonlinear system to establish the embedding for given metric of the riemannian manifold .let denote an -dimensional riemannian manifold with ascribed metric tensor .suppose that manifold can be embedded globally into ( the term _ immersion _ is used when the embedding is local ) . as stated in [ geometry ] this assumption implies that there exist a system of local coordinates , on and embeddings , such that hold .for an -dimensional riemannian manifold , the components of the corresponding metric tensor may be represented by the symmetric matrix \ ] ] there are entries on and above the diagonal , and we conclude in general that the isometric embedding problem ( recovering the surface from the metric ) has three cases : , where is the number of unknowns , and is the number of equations . the crucial number is called the _let be an -dimensional riemannian manifold with metric , and let the covariant derivative be denoted by .this derivative permits differentiation along the manifold . for scalars ,vectors and second - order tensors are given respectively by where the _ christoffel symbols _ are calculated from metric by the formula the metric tensor with components ( upper indices ) is the inverse of that with components ( lower indices ) , so that , where is the usual kronecker delta .the kronecker deltas of upper and lower order are defined similarly . the _ riemann curvature tensor _ , , is defined in terms of the christoffel symbols by by lowering indices , we have the _ covariant riemann curvature tensor _ : or which is written as .this tensor possesses the minor _ skew - symmetries _ : and the _ interchange _ ( or major ) symmetry .the cyclic interchange of indices leads to the _ first bianchi identity _ : as well as the _ second bianchi identity _ : if we use the ricci tensor then the second bianchi identity can be written as of course , with usual raising and lowering of indices and the ricci identity , we have the quantity , , is the einstein tensor . 
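to make the preceding definitions concrete, here is a small sympy sketch that computes the christoffel symbols and the covariant curvature tensor for a given three-dimensional metric; the example metric and the sign convention chosen below are illustrative assumptions and may differ from the convention adopted in this paper.

```python
# a minimal sympy sketch (not from the paper): the example metric and the sign
# convention for the curvature tensor are our own illustrative choices.
import sympy as sp

x = sp.symbols('x1:4')                       # local coordinates x^1, x^2, x^3
g = sp.Matrix([[1, 0, 0],
               [0, 1 + x[0]**2, 0],
               [0, 0, 1]])                   # an assumed, non-flat 3d metric
ginv = g.inv()
n = 3
half = sp.Rational(1, 2)

# christoffel symbols: Gamma[k][i][j] = 1/2 g^{kl} (d_i g_{jl} + d_j g_{il} - d_l g_{ij})
Gamma = [[[sp.simplify(sum(half * ginv[k, l] * (sp.diff(g[j, l], x[i])
                                                + sp.diff(g[i, l], x[j])
                                                - sp.diff(g[i, j], x[l]))
                           for l in range(n)))
           for j in range(n)] for i in range(n)] for k in range(n)]

# riemann tensor in one common convention:
# R^l_{ijk} = d_j Gamma^l_{ik} - d_k Gamma^l_{ij} + Gamma^l_{jm} Gamma^m_{ik} - Gamma^l_{km} Gamma^m_{ij}
def riemann_up(l, i, j, k):
    val = sp.diff(Gamma[l][i][k], x[j]) - sp.diff(Gamma[l][i][j], x[k])
    val += sum(Gamma[l][j][m] * Gamma[m][i][k] - Gamma[l][k][m] * Gamma[m][i][j]
               for m in range(n))
    return sp.simplify(val)

# covariant curvature tensor by lowering the first index: R_{mijk} = g_{ml} R^l_{ijk}
def riemann_down(m, i, j, k):
    return sp.simplify(sum(g[m, l] * riemann_up(l, i, j, k) for l in range(n)))

print(riemann_down(0, 1, 0, 1))              # component R_{1212} in 0-based indexing
```

for a riemann-flat metric, such as those appearing in the shear embeddings discussed later, all of these components vanish.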
furthermore, the ricci tensor is written in terms of metric by the formula : a _ necessary _ condition for the existence of an isometric embedding is that there exist functions such that the _ gauss equation _ holds : along with the _ codazzi equations _ : and the _ ricci equations _ : the ricci system can be expressed in covariant form by the addition and subtraction of the term , , to obtain suppose that there exist symmetric functions and anti - symmetric functions , such that the gauss - codazzi - ricci equations are satisfied in a simply connected domain. then there exists an isometric embedding of the -dimensional riemannian manifold ( with the second fundamental form and the normal bundle ) into .this theorem shows that the solvability of the gauss - codazzi - ricci equations is both necessary and sufficient for an isometric immersion .while examination of the gauss - codazzi - ricci system appears to be an appealing route to the proof of local isometric embedding , it is in fact the more direct route by using the embedding equations that has proven more successful for the problem .the first such result was given in bryant - griffiths - yang , and more refined results are due to nakamura - maeda and poole .see also chen - clelland - slemrod - wang - yang for an alternative , simpler proof of the nakamura - maeda theorem in , which we will use here and state it as follows .[ nakamura - maeda ] let be a three - dimensional riemannian manifold and let be a point such that the curvature tensor ( as defined by ) does not vanish .then there exists a local isometric embedding of a neighborhood of into .we will need the crucial result for the metric solvability for a prescribed riemann curvature due to deturck - yang : [ t : deyang ] let be a non - degenerate tensor over a three - dimensional manifold ( say an open set in three - dimensional euclidean space ) which satisfies the first bianchi identity .then , for any point on the manifold ( say a point in the open set ) , there exists a riemannian metric such that system is satisfied in a neighborhood of the point .the non - degeneracy of the tensor is equivalent to the non - singularity of the matrix ( which is also denoted by ) : \widehat{r_{1223 } } & \widehat{r_{2323 } } & \widehat{r_{1323 } } \\[1 mm ] \widehat{r_{1213 } } & \widehat{r_{1323 } } & \widehat{r_{1313 } } \end{bmatrix}.\ ] ]we consider the balance laws of mass and momentum in three - dimensional continuum mechanics .denote by the ( symmetric ) cauchy stress tensor , and assume that the fields for velocity , cauchy stress , and density are consistent with some specific constitutive equation for a body and satisfy the balances of mass , and linear momentum ( satisfaction of balance of angular momentum is automatic ) .the equations for the balance of linear momentum in the spatial representation and balance of mass are : where is the density of the body in the reference configuration , and is the deformation gradient of the current configuration with respect to this reference. the following are some examples of the cauchy stress tensor : 1 .inviscid compressible fluid : ( compressible fluid , the euler equations ) .2 . inviscid incompressible fluid with constant ( unit ) density : , which imply , ( incompressible fluid , the euler equations ) .3 . 
viscous incompressible fluid with constant (unit) density, , , , , which imply (the navier-stokes equations).
4. neo-hookean elasticity: .

given a local (space-time) solution of the continuum balance laws of mass and momentum, we define the following quantities via the first bianchi identity and the relations

$$\widehat{r_{2323}} = \rho u_{1}^{2} - t_{11}, \qquad \widehat{r_{1212}} = \rho u_{3}^{2} - t_{33}, \qquad \widehat{r_{3123}} = \rho u_{1}u_{2} - t_{12},$$
$$\widehat{r_{1313}} = \rho u_{2}^{2} - t_{22}, \qquad \widehat{r_{1223}} = \rho u_{1}u_{3} - t_{13}, \qquad \widehat{r_{3112}} = \rho u_{2}u_{3} - t_{23}.$$

write as the symmetric matrix:

$$\widehat{r} = \begin{bmatrix} \widehat{r_{1212}} & \widehat{r_{1223}} & -\widehat{r_{3112}} \\ \widehat{r_{1223}} & \widehat{r_{2323}} & -\widehat{r_{3123}} \\ -\widehat{r_{3112}} & -\widehat{r_{3123}} & \widehat{r_{1313}} \end{bmatrix} = \begin{bmatrix} \rho u_{3}^{2} - t_{33} & \rho u_{1}u_{3} - t_{13} & -\rho u_{2}u_{3} + t_{23} \\ \rho u_{1}u_{3} - t_{13} & \rho u_{1}^{2} - t_{11} & -\rho u_{1}u_{2} + t_{12} \\ -\rho u_{2}u_{3} + t_{23} & -\rho u_{1}u_{2} + t_{12} & \rho u_{2}^{2} - t_{22} \end{bmatrix}.$$

then system is a system of six equations in the six unknown components of the metric. furthermore, matrix is non-singular when is non-zero. in this case, the deturck-yang theorem (theorem [t:deyang]) yields the local existence of a in space metric. moreover, matrix is positive definite when the quantities

$$\rho u_{3}^{2} - t_{33}, \qquad \det\begin{bmatrix} \rho u_{3}^{2} - t_{33} & \rho u_{1}u_{3} - t_{13} \\ \rho u_{1}u_{3} - t_{13} & \rho u_{1}^{2} - t_{11} \end{bmatrix}, \qquad \det\widehat{r}$$

are positive. for the euler equations of either compressible or incompressible flow, and is non-singular when is non-zero. it is also easy to see that is positive definite when is positive. we can then state the following theorem.

[geomotion] let , , , , be a local space-time solution to the balance laws of mass and momentum with non-zero . then we have the following: there is a local space-time riemannian metric that satisfies both system and the following system: which is abbreviated as moreover, the balance of mass and momentum imply there is a local space-time isometric embedding for the three-dimensional riemannian manifold into such that . conversely, if there is a smooth metric and smooth continuum fields , , , , which satisfy and , then the balance laws of mass and momentum are satisfied.

(a) since is non-singular, the deturck-yang theorem (theorem [t:deyang]) yields the existence of metric satisfying . then system follows from the second bianchi identity and the balance law of momentum for the continuum fields.
(b) the existence of isometric embedding follows from the nakamura-maeda theorem (theorem [nakamura-maeda]).
(c) apply the second bianchi identity to .
then , from , we obtain the system : from , we now recover the balance law of linear momentum for the continuum fields .finally , take the divergence of and use to see hence , if the balance law of mass is initially satisfied , it is locally satisfied .we note that , if a metric satisfies , then substitution of the formula for , which is given in terms of , into yields \(i ) a system of nonlinear essentially ordinary differential equations for the incompressible euler equations ; \(ii ) a system of weakly first - order quasilinear partial differential equations for the incompressible navier - stokes equations .these systems are non - local due to relations and .nevertheless , these systems provide what may be an avenue for proving finite time blow up of smooth solutions .in 4 , we have shown that , if is non - singular , there exists a map from the continuum flow to the geometric motion .this motivates the question as to what can be said in the case when is singular .in essence , there are two examples : one for the incompressible euler equations , and the other for neo - hookean elasticity which have been provided in .for this reason , we will give only a short discussion for the first example and the second example follows analogously .for the incompressible euler equations with ( by scaling for any lower bound ) , the singularity of means the pressure , and hence we consider the steady flow : the desired embedding is given by where is a positive constant .this yields metric with components an orthonormal set of normal vectors is given by then a direct calculation by using the definitions of and gives and all the remaining components of the second fundamental forms as well as all to be zero .the gauss equations show that the manifold is riemann flat , and the identification shows that the non - trivial euler equation : is identical to the codazzi equation : note the term , , in the codazzi equation vanishes since .we note the following observation : if we had allowed the steady shearing motion to be the more general case : we still have an exact solution to the incompressible euler equations .however , perhaps surprisingly , we have not been able to find a three - dimensional manifold that can be identified with this motion .we wish to compare the metrics arising from the two cases : and , for the incompressible euler equations .first consider metric given by theorem [ geomotion ] when . since we know that there is an isometric embedding, we can expand the metric locally via the taylor series : for ( in the definition of ) sufficiently large , we see that in the sense of quadratic forms , or in the language of the nash - kuiper theorem that the embedding is _ short _ compared to the embedding .we now recall the well known nash - kuiper - gromov results .[ nash - kuiper ] let be a smooth , compact riemannian manifold .assume that is in .then the following hold : if , any short embedding into can be approximated by isometric embeddings into of class ( nash and gromov ) ; if , then any short embedding into can be approximated in by embeddings into of class ( nash and kuiper ) . 
in our examples, we are in the case and , so only case ( ii ) applies .in fact , the issue is quite subtle .it is quite evident that the shearing motion had its geometric image in and , if the general euler flow had its geometric image in as well , then we could apply ( ii ) with .this would not only allow a sharper result but more importantly allow us to apply the the theory for by borisov , at least in the case when the euler flow is locally analytic , which states that embeddings of ( ii ) can not be .thus there would be an upper bound on their regularity . while it seems likely based on the discussion of that some upper bound regularity exists for our case , we know of no such result . as a direct consequence of theorem [ geomotion ] and theorem [ nash - kuiper ] , we can state the following result . [ t62 ] the short embedding given theorem [ geomotion ] ( which is the image of the euler flow with ) can be approximated in by embeddings ( defined to be _ _ generalized shear flows _ _ ) locally in space - time .furthermore , the energy is a constant in .finally , we have the following non - uniqueness result . [ t63 ] for fixed generalized shear initial data ,there are infinite number of solutions to the initial value problem for the evolution equations .the proof is identical to the one given in theorem 7.3 in .nevertheless , we provide a short sketch of the proof for the sake of completeness .choose an interval ] , ] with . by theorem [ t62 ] ,there exists a sequence of wrinkled solutions so that we can then define the wrinkled solution on the entire interval ] , we can provide an infinite number of wrinkled solutions on $ ] by simply letting vary for , with as the fixed cauchy initial data .the energy on any domain in .again as in , it is not apparent in what sense the generalized shears actually satisfy the euler equations . on the other hand ,the simplicity of our arguments gives a rather elementary explanation for both the existence of wild solutions to the equations of inviscid continuum mechanics and the non - uniqueness of solutions to the cauchy problem .in this section , we will apply our nash - kuiper approach to the classical navier - stokes equations of viscous , incompressible fluid flow : which in turn imply in particular , we will consider the problem of planar couette flow between two parallel plates placed at with the top ( bottom ) plate moving with speed ( respectively ) .we impose the non - slip boundary conditions : and the periodicity of the velocity in .define to be the spatial domain : we have taken dimensionless independent and dependent variables , and hence the quantity is now the inverse of the dimensionless reynolds number . an exact solution to the navier - stokes equations is given by the laminar couette flow : and we take the constant to be zero so that . at first glance , there appears to be no external force for couette flow , but of course this is not the case , as an external force would be required to move the parallel plates and hence energy is being added to the system . this fact is reflected in the fact that the energy norm of the laminar flow is infinite . as in [ shearing ] ,matrix is singular , yet the laminar couette flow can be identified with a three - dimensional riemannian manifold embedded in . 
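for concreteness, in one common nondimensionalization, with the plates at $x_2 = \pm 1$ moving with speeds $\pm 1$ (the precise scaling adopted here may differ), the laminar couette solution referred to above takes the form

$$u = (x_2,\ 0,\ 0), \qquad p \equiv \mathrm{const},$$

i.e. a velocity profile that is linear in the wall-normal coordinate, with constant pressure.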
a crucial feature of planar couette flow and related problems of pouseille flow ( pressure driven flow between parallel plates ) and rotating couette flow ( the fluid confined between two rotating cylinders ) is the numerically and physically observed bifurcation from laminar flow to roll patterns at high reynolds number ; _ cf . _ and the references cited therein .this means that the boundary value problem described above is expected to have non - unique solutions .hence , just as the simple shear solution for the euler equations given in [ shearing ] provides an infinite number of possible steady solutions to the euler equations , the boundary value problem for planar couette flow can be expected to provide a multitude of steady solutions at high reynolds number . by interchanging components for the embedding given in [ shearing ] ,the laminar couette flow is embedded into with with this yields metric with components : which is riemann flat .since we are using laminar couette flow , we have now we consider the cauchy problem for the couette flow .consider the initial data for with which satisfies the boundary conditions : and the periodicity of the velocity in , and for initially as a solution of the boundary value problem : to be positive on .this is easily done by adding a sufficiently large positive constant to any solution of a fixed solution to the boundary value problem for .hence , for sufficiently large reynolds numbers , _i.e. _ , small , the contributions of the viscous stresses to the computation of matrix become negligible , and is initially positive definite . in fact , this condition can be computed explicitly as follows .the eigenvalues of matrix for the initial data and ( constant ) are given by hence , satisfying that would suffice , so that , . hereconstant can be interpreted as the pressure at the ends of the channel .since the initial value problem for the navier - stokes equations has local - in - time smooth solutions , then we have a smooth solution with positive definite , locally in time .we can now follow again the argument in [ nk - wild ] to see that the geometric image ( at least locally in space - time ) of the incompressible navier - stokes flow is approximated in by generalized shears and , furthermore , the initial value problem for such generalized shear data has an infinite number of solutions .we state this as theorems [ t62][t63 ] are valid where 1 . is the geometric image ( locally in space - time , say a domain ) of solutions of the cauchy problem for the incompressible navier - stokes equations with the boundary conditions : and the periodicity of the velocity in ; 2 . for each and satisfies , where has the components : which is riemann flat .as noted above , if there was only one solution to the steady couette problem , the result of such non - uniqueness would seem unlikely , but the key here is that there are many such solutions at large reynolds number and the non - uniqueness of the evolutionary geometric problem is not unexpected . 
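as a quick numerical illustration of this positivity condition, the matrix defined earlier can be assembled at a single point and its eigenvalues inspected; the sample velocity gradient, pressure constant and viscosity below are made-up values, and the constitutive law $t = -p\,i + \nu(\nabla u + \nabla u^{t})$ for unit density is the standard incompressible navier-stokes choice, assumed here.

```python
# a small numpy check (sample values only): assemble the matrix R_hat at one point for a
# viscous incompressible flow with unit density, using the standard navier-stokes stress
# T = -p*I + nu*(grad u + grad u^T) (our assumption), and test positive definiteness.
import numpy as np

nu = 1.0e-3                            # inverse reynolds number (taken small)
p = 5.0                                # constant pressure at the ends of the channel (sample value)
rho = 1.0                              # unit density
u = np.array([0.3, 0.0, 0.0])          # velocity at the chosen point (sample value)
grad_u = np.array([[0.0, 1.0, 0.0],    # du_i/dx_j for a shear-like profile (sample value)
                   [0.0, 0.0, 0.0],
                   [0.0, 0.0, 0.0]])

T = -p * np.eye(3) + nu * (grad_u + grad_u.T)

R_hat = np.array([
    [rho*u[2]**2 - T[2, 2],   rho*u[0]*u[2] - T[0, 2], -rho*u[1]*u[2] + T[1, 2]],
    [rho*u[0]*u[2] - T[0, 2], rho*u[0]**2 - T[0, 0],   -rho*u[0]*u[1] + T[0, 1]],
    [-rho*u[1]*u[2] + T[1, 2], -rho*u[0]*u[1] + T[0, 1], rho*u[1]**2 - T[1, 1]],
])

eigvals = np.linalg.eigvalsh(R_hat)
print("eigenvalues of R_hat:", eigvals)
print("positive definite:", bool(np.all(eigvals > 0)))
```

with a sufficiently large pressure constant, the eigenvalues remain positive, in line with the discussion above.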
of course , these results give at the geometric level an indication for both the onset of turbulence for high reynolds number navier - stokes flow and non - uniqueness of weak solutions for the cauchy problem for the three - dimensional navier - stokes equations .specifically , we note that , to apply the above theory , we have needed the three eigenvalues at the ends of the channel fixed and positive , and the transition from the wild `` turbulent '' initial data will occur as passes from positive to negative values for decreasing reynolds number . thus , according to our geometric theory , the critical reynolds number will be given by the formula : in addition , this formula provides an equation for the critical profile for transition from turbulence .one possible check of this critical profile relation is to use the experimental data of reichardt where the data are normalized so that . figure ( a ) ( from ) gives profiles for water with reynolds number and oil with reynolds number .we consider the one for oil since the reynolds number for that experiment is closer to the usually accepted reynolds number for transition to turbulence . since we expect that , we drop that term in our critical profile equation and simplify it as let us work on the interval .we can find the solution for via the relation , .hence , for , we solve the initial value problem : for to give if we fit this relation to reichardt s figure ( a ) by using , we find , , and so the critical profile is approximately given by .we note the value of our approximate critical profile at is . while this value is not identically zero , its value is sufficiently small to provide a very good approximation to the reichardt s graph which has the value zero at .we plot our approximate critical profile in figure ( b ) with the corresponding to and the corresponding to .\(a ) ( b ) while the above computation does not validate the geometric theory , it does at least show the geometric theory is consistent with experiment .chen s research was supported in part by the uk engineering and physical sciences research council under grants ep / e035027/1 and ep / l015811/1 , and the royal society wolfson research merit award ( uk ) .m. slemrod was supported in part by simons collaborative research grant 232531 .d. wang was supported in part by nsf grants dms-1312800 and dms-1613213 .we also wish to thank professors amit acharya and laszlo szkelyhidi for their valuable remarks on this research project .d. deturck and d. yang , _ local existence of smooth metrics with prescribed curvature ._ nonlinear problems in geometry ( mobile , ala . , 1985 ) ,3743 , contemp .51 , amer .soc . : providence , ri , 1986 .b. riemann , _ ueber die hypothesen , welche der geometrie zu grunde liegen_. in : _habilitationsschrift , 1854 , abhandlungen der kniglichen gesellschaft der wissenschaften zu gttingen _ , 13 ( 1868 ) , s. 133150 .
|
a theory for the evolution of a metric driven by the equations of three - dimensional continuum mechanics is developed . this metric in turn allows for the local existence of an evolving three - dimensional riemannian manifold immersed in the six - dimensional euclidean space . the nash - kuiper theorem is then applied to this riemannian manifold to produce a _ wild _ evolving manifold . the theory is applied to the incompressible euler and navier - stokes equations . one practical outcome of the theory is a computation of critical profile initial data for what may be interpreted as the onset of turbulence for the classical incompressible navier - stokes equations .
|
we consider the pde system , in , where ( with ) is a bounded and sufficiently regular domain and denotes a final time , [ eqn : pde ] with subgradients and }(\chi_t) ] imply that ] on the damage variable .the progression of damage is accompanied with an increase and proliferation of micro - cavities and micro - cracks in the considered material ( as pointed out in engineering literature on damage ; see , e.g. , ) .this loss of structural integrity may also reduce the elastic response on temperature changes modeled by the heat expansion term in .but a dependence of on the damage also provokes the presence of two new nonlinear terms in coupling them nonlinearly with both the momentum balance and the damage evolution .especially the coupling term in complicates the analysis and requires elaborate estimation techniques to gain the desired a priori estimates .moreover , a dependence on in the -equation appears explicitly as well as a further dependence on and in the -equation .the other two coeffiecients and appearing in equation represent respectively the heat capacity and the heat conductivity of the system and will have to satisfy proper growth conditions ( cf .remark [ rem : ck ] ) , while the function denotes a given heat source . in the momentum balance denotes the linearized symmetric strain tensor , while the functions ) ] as soon as ] and the transformed quantities as already mentioned in the introduction , the main difficulty here , with respect to the previous works in the literature , consists in the presence of the nonlinearities due to the fact that the temperature expansion term depends on . indeed , following , herewe will combine the conditions on with conditions on the heat capacity coefficient to handle the nonlinearities , , in by means of a so - called boccardo - gallout type estimate on .the reader may consult for various examples in which a superquadratic growth in for the heat conductivity is imposed .as for the triply nonlinear inclusion , we will use a notion of solution derived in .the authors have devised a weak formulation consisting of a _ one - sided _ variational inequality ( i.e. with test functions having a fixed sign ) , and of an _ energy inequality _ , see definition [ def : weaksolution ] later . finally , let us notice that uniqueness of solutions remains an open problem even in the isothermal case .the main problem is , in general , the doubly nonlinear character of ( cf .also for examples of non - uniqueness in general doubly nonlinear equations ) .the paper is organized as follows . in section [ section : assumptions ] , we list all assumptions which are used throughout this paper and introduce some notation .subsequently , a suitable notion of weak solutions for system as well as the main result , existence of weak solutions ( see theorem [ theorem : existence ] ) , are stated in section [ section : notion ] . in the main part, the proof of the existence theorem is firstly performed for a truncated system in section [ section : existence1 ] and finally for the limit system in section [ section : existence2 ] .let denote the space dimension . for the analysis of the transformed system - , the central hypotheses are stated below . + 1 . is a bounded -domain .the function is assumed to be lipschitz continuous with and for a.e . 
and should satisfy the growth condition for all and for constants and .moreover , we assume for all .the heat conductivity function is assumed to be continuous and should satisfy the estimate for all and for constants satisfying 4 .the damage - dependent potential function is assumed to satisfy ) ] and ) ] and a constant .the 4th order stiffness tensor is assumed to be symmetric and positive definite , i.e. with constant .the viscosity tensor is given by where is a constant .the thermal expansion coefficient depending on is assumed to fulfill ) ] where denotes the time - discretization fineness .moreover , let be the final time index ( note that by the assumed equidistancy of the partition ) .we set and and perform a recursive procedure . in the following ,we adopt the notation ( as well as for and ) .let , furthermore , be defined as the existence of weak solutions for the time - discrete system is proven in the following .[ lemma : discrsys ] for every equidistant partition of ] . at the beginningthis will be ( see above ) which is greater than due to by ( a8 ) . by using assumption ( a6 ) , the regularity theorem in ( * ? ? ? * chap .6 , theorem 1.11 ( i ) ) ( see also ( * ? ? ?* theorem 6.3 - 6 ) for isotropic ) shows that . by using the sobolev embedding theorem, we obtain .this , on the other hand , implies therefore , after applying the -regularity result , we obtain enhanced integrability of the right - hand side . to see that can be obtained after finitely many iterations , we consider the function ( which occurs in ) and see that '\geq 0 ] .thus the increase of integrability before reaching the value can be bounded from below by a positive constant ( provided that ) : for all ] . given a ] .the corresponding calculations without the term are carried out in ( * ? ? ?* proposition 3.10 ) ( see also ( * ? ? ?* fifth estimate , p. 18 ) ) .the calculations there take advantage of assumption ( a8 ) , i.e. . in our case , we have to estimate the additional term where the -dependence of comes into play by using the first and the second a priori estimates , the estimate ( holding uniformly in ) and the regularity estimate for linear elasticity ( cf .* lemma 3.2 ) ) as well as the calculations in ( * ? ? ?* proposition 3.10 ) ( see also ( * ? ? ?* fifth estimate , p. 18 ) ) , we obtain eventually for small : gronwall s lemma leads to the claim .+ testing with and using standard convexity estimates as well as assumption ( a3 ) yield summing over the discrete time index , using the continuous embedding and standard estimates , we receive chosing sufficiently small , applying the first and the third a priori estimates as well as the estimate , we obtain by gronwall s inequality boundedness of the left - hand side and , therefore , the claim .+ a comparison argument in equation shows the assertion . by utilizing lemma [ lemma : aprioridiscr ] and by noticing ( see ) ,we obtain by standard compactness and aubin - lions type theorems ( see ) the following convergence properties .[ cor : weakconvdiscr ] we obtain functions which are in the spaces such that ( along a subsequence ) for all ] , corollary [ cor : weakconvdiscr ] allows to pass to the limit by taking into account the uniform boundedness of , and in .then , by switching to an a.e . formulation in the limit , we obtain for every and a.e . : * * balance of forces . * to obtain the equation for the balance of forces , we integrate equation over and use corollary [ cor : weakconvdiscr ] to pass to the limit . 
in the limit we have the necessary regularity properties to switch to an a.e . formulation in , i.e. it holds a.e . in . ** one - sided variational inequality for the damage process . *the limit passage for equation can be accomplished by an approximation argument developed in .note that this approach strongly relies on ( see ( a8 ) ) .we sketch the argument .* * initially , the main idea is to consider time - depending test - functions which satisfy for a.e . the constraint here , we make use of the embedding . * * as shown in ( * ? ? ?* lemma 5.2 ) , we obtain an approximation sequence and constants ( independent of ) such that in as and in for a.e .multiplying this inequality by -1 , adding and using the monotonicity condition , we obtain * * because of , we are allowed to test with . dividing the resulting inequality by ( which is positive and independent of ) , integrating in time over ] and using the generalized chain - rule yield where denotes the primitive of vanishing at . by using assumption ( a3 ) , the estimates ( cf .* remark 2.10 ) ) and we obtain by using hlder s inequality in space and time notice the following implications : therefore , in both cases and we can estimate the right - hand side above as follows by using assumption ( a2 ) : thus the r.h.s . can be absorbed by the l.h.s .and we obtain the assertion . * by using the definition , the growth assumption for in ( a2 ) and the estimate ( see the proof of the fourth a priori estimate ) , we obtain * hlder s inequality and the embedding yield by the fourth a priori estimate , we have the boundedness of where denotes the -dimensional lebesgue measure of .this implies by using the growth condition for in ( a2 ) : since , we obtain boundedness of and hence + to tackle the sixth a priori estimate , we will make use of the primitive of vanishing at and use the property note that the identity is not fulfilled while is true . by exploiting growth assumption ( a3 ), we obtain the crucial estimate in what follows let and as in definition [ def : weaksolution ] . applying integration by parts in, we receive for all : let denote the constant resulting from the continuous embedding . due to the crucial identities hlder s inequality reveals by using the boundedness of in and , we obtain calculating the -norm in time and using hlder s inequality show keeping the first and the third a priori estimates in mind , it still remains to show * estimate leads to since , by definition , , we infer boundedness of by the fourth a priori estimate and boundedness of by the fifth a priori estimate and by and using ( a3 ) .finally , we obtain .* by assumption ( a2 ) , we obtain because of ( since by ( a2 ) and by ( a3 ) ) , we obtain by the fifth a priori estimate .the a priori estimates from lemma [ lemma : apriori ] give rise to the subsequent convergence properties for , and along subsequences by aubin - lions type compactness results ( cf . ) and by adapting lemma [ lemma : strongzconv ] to this case .[ cor : convergencelimit ] there exist limit functions defined in spaces given in definition [ def : weaksolution ] such that the following convergence properties are satisfied for all , and all ] .taking into account , passing by employing the convergence results in corollary [ cor : convergencelimit ] and corollary [ cor : convergencelimit2 ] and switching back to an a.e . in time formulation, we end up with . * * balance of momentum equation * and * one - sided variational inequality . 
* translating , and to a weak formulation involving test - functions in time and space , we can pass . translating the results back to an a.e . in time formulation ,we obtain , and . ** partial energy inequality . *the inequality is gained from by using lower semi - continuity arguments in the transition .
|
in this paper we prove existence of global in time weak solutions for a highly nonlinear pde system arising in the context of damage phenomena in thermoviscoelastic materials . the main novelty of the present contribution with respect to the ones already present in the literature consists in the possibility of taking into account a damage - dependent thermal expansion coefficient . this term implies the presence of nonlinear coupling terms in the pde system , which make the analysis more challenging . * key words : * damage phenomena , thermoviscoelastic materials , global existence of weak solutions , nonlinear boundary value problems . * ams ( mos ) subject classification : * 35d30 , 34b14 , 74a45 .
|
turbo codes are a new class of coding systems that offer near optimal coding performance while requiring only moderate decoding complexity [ 1 ] .it is known that the widely - used iterative decoding algorithm for turbo codes is in fact a special case of a quite general local message - passing algorithm for efficiently computing posterior probabilities in acyclic directed graphical ( adg ) models ( also known as belief networks " ) [ 2 , 3 ] .thus , it is appropriate to analyze the properties of iterative - decoding by analyzing the properties of the associated adg model . in this paperwe derive analytic approximations for the probability that a randomly chosen node in the graph for a turbo code participates in a simple cycle of length less than or equal to .the resulting expressions provide insight into the distribution of cycle lengths in turbo decoding .for example , for block lengths of , a randomly chosen node in the graph participates in cycles of length less than or equal to with probability 0.002 , but participates in cycles of length less than or equal to 20 with probability 0.9998 . in section [ sec : notation ] we review briefly the idea of adg models , define the notion of a _ turbo graph _ ( and the related concept of a _ picture _ ) , and discuss how the cycle - counting problem can be addressed by analyzing how pictures can be embedded in a turbo graph . with these basic toolswe proceed in section [ sec : count ] to obtain closed - form expressions for the number of pictures of different lengths . in section [ sec : prob ] we derive upper and lower bounds on the probability of embedding a picture in a turbo graph at a randomly chosen node . using these results , in section [ sec : nocycles ] we derive approximate expressions for the probability of no simple cycles of length or less .section [ sec : simulation ] shows that the derived analytical expressions are in close agreement with simulation . in section [ sec : srandom ] we investigate the effect of the s - random permuter construction .section [ sec : ldpc ] extends the analysis to ldpc codes and compares both analytic and simulation results on cycle lengths .section [ sec : discussion ] contains a discussion of what these results may imply for iterative decoding in a general context and section [ sec : conclusion ] contains the final conclusions ., rate turbocode . ]an adg model ( also known as a belief network ) consists of a both a directed graph and an associated probability distribution over a set of random variables of interest .there is a 1 - 1 mapping between the nodes in the graph and the random variables . 
loosely speaking, the presence of a directed edge from node to in the graph means that is assumed to have a direct dependence on ( causes " ) .more generally , if we identify as the set of all _ parents _ of in the graph ( namely , nodes which point to ) , then is conditionally independent of all other variables ( nodes ) in the graph ( except for s descendants ) given the values of the variables ( nodes ) in the set .for example , a markov chain is a special case of such a graph , where each variable has a single parent .the general adg model framework is quite powerful in that it allows us to systematically model and analyze independence relations among relatively large and complex sets of random variables [ 4 ] .as shown in [ 2 , 3 , 5 ] , turbo codes can be usefully cast in an adg framework .figure [ turbo1 ] shows the adg model for a rate turbo code .the nodes are the original information bits to be coded , the nodes are the linear feedback shift register outputs , the nodes are the codeword vector which is the input to the communication channel , and the nodes are the channel outputs .the adg model captures precisely the conditional independence relations which are implicitly assumed in the turbo coding framework , i.e. , the input bits are marginally independent , the state nodes only depend on the previous state and the current input bit , and so forth .the second component of an adg model ( in addition to the graph structure ) is the specification of a joint probability distribution on the random variables .a fundamental aspect of adg models is the fact that this joint probability distribution decomposes into a simple factored form .letting be the variables of interest , we have i.e. , the overall joint distribution is the product of the conditional distributions of each variable given its parents .( we implicitly assume discrete - valued variables here and refer to distributions ; however , we can do the same factorization with density functions for real - valued variables , or with combinations of densities and distributions ) . to specify the full joint distribution , it is sufficient to specify the individual conditional distributions . thus , if the graph is sparse ( few parents ) there can be considerable savings in parameterization of the model . from a decoding viewpoint, however , the fundamental advantage of this factorization is that it permits the efficient calculation of posterior probabilities ( or optimal bit decisions ) of interest .specifically , if the values for a subset of variables are known ( e.g. , the received codeword vector ) we can efficiently compute the posterior probability for the information bits , .the power of the adg framework is that there exist exact local message - passing algorithms which calculate such posterior probabilities .these algorithms typically have time complexity which is linear in the diameter of the underlying graph times a factor which is exponential in the cardinality of the variables at the nodes in the graph .the algorithm is provably convergent to the true posterior probabilities provided the graph structure does not contain any loops ( a loop is defined as a cycle in the undirected version of the adg , i.e. 
, the graph where directionality of the edges is dropped ) .the message - passing algorithm of pearl [ 6 ] was the earliest general algorithm ( and is perhaps the best - known ) in this general class of probability propagation " algorithms .for regular convolutional codes , pearl s message passing algorithm applied to the convolutional code graph structure ( e.g. , the lower half of figure 1 ) directly yields the bcjr decoding algorithm [ 7 ] . if the graph has loops then pearl s algorithm no longer provably converges , with the exception of certain special cases ( e.g. , see [ 8 ] ) .a loop " is any cycle in the graph , ignoring directionality of the edges .the turbocode adg of figure 1 is an example of a graph with loops .in essence , the messages being passed can arrive at the same node via multiple paths , leading to multiple over - counting " of the same information .a widely used strategy in statistics and artificial intelligence is to reduce the original graph with loops to an equivalent graph without loops ( this can be achieved by clustering variables in a judicious manner ) and then applying pearl s algorithm to the new graph . however ,if one applies this method to adgs for realistic turbo codes the resulting graph ( without loops ) will contain at least one node with a large number of variables .this node will have cardinality exponential in this number of variables , leading to exponential complexity in the probability calculations referred to above . in the worst - caseall variables are combined into a single node and there is in effect no factorization .thus , for turbo codes , there is no known efficient exact algorithm for computing posterior probabilities ( i.e. , for decoding ) .curiously , as shown in [ 2 , 3 , 4 ] , the iterative decoding algorithm of [ 1 ] can be shown to be equivalent to applying the local - message passing algorithm of pearl directly to the adg structure for turbo codes ( e.g. , figure 1 ) , i.e. , applying the iterative message - passing algorithm to a graph with loops .it is well - known that in practice this decoding strategy performs very well in terms of producing lower bit error rates than any virtually other current coding system of comparable complexity .conversely , it is also well - known that message - passing in graphs with loops can converge to incorrect posterior probabilities ( e.g. , [ 9 ] ) .thus , we have the mystery " of turbo decoding : why does a provably incorrect algorithm produce an extremely useful and practical decoding algorithm ?in the remainder of this paper we take a step in understanding message - passing in graphs with loops by characterizing the distribution of cycle - lengths as a function of cycle length .the motivation is as follows : if it turns out that cycle - lengths are long enough " then there may be a well - founded basis for believing that message - passing in graphs with cycles of the appropriate length are not susceptible to the over - counting " problem mentioned earlier ( i.e. , that the effect of long loops in practice may be negligible ) .this is somewhat speculative and we will return to this point in section [ sec : discussion ] .an additional motivating factor is that the characterization of cycle - length distributions in turbo codes is of fundamental interest by itself . ] . ] in figure [ turbo1 ] the underlying cycle structure is not affected by the and nodes , i.e. , they do not play any role in the counting of cycles in the graph . 
for simplicitythey can be removed from consideration , resulting in the simpler graph structure of figure [ turbo2 ] .furthermore , we will drop the directionality of the edges in figure [ turbo2 ] and in the rest of the paper , since the definition of a cycle in an adg is not a function of the directionality of the edges on the cycle . to simplify our analysisfurther , we initially ignore the nodes , , , to arrive at a _ turbo graph _ in figure [ turbo3 ] ( we will later reintroduce the nodes ) .formally , a turbo graph is defined as follows : 1 .there are two parallel chains , each having nodes .( for real turbo codes , can be very large , e.g. . ) 2 .each node is connected to one ( and only one ) node on the other chain and these one - to - one connections are chosen randomly , e.g. , by a random permutation of the sequence .( in section [ sec : srandom ] we will look at another kind of permutation , the s - random permutation . " ) 3 . a turbo graph as defined above is an _ undirected _ graph . but to differentiate between edges on the chains and edges connecting nodes on different chains , we label the former as being _ directed _ ( from left to right ) , and the latter _undirected_. ( note : this has nothing to do with directed edges in the original adg model , it is just a notational convenience . ) so an internal node has exactly three edges connected to it : one directed edge going out of it , one directed edge going into it , and one undirected edge connecting it to a node on the other chain .a boundary node also has one undirected edge , but only one directed edge . given a turbo graph , and a randomly chosen node in the graph, we are interested in : 1 . counting the number of simple cycles of length which pass through this node ( where a simple cycle is defined as a cycle without repeated nodes ) , and 2 .finding the probability that this node is not on a simple cycle of length or less , for ( clearly the shortest possible cycle in a turbo graph is 4 ) . to assist our counting of cycles ,we introduce the notion of a picture ." first let us look at figure [ turbo4 ] , which is a single simple cycle taken from figure [ turbo3 ] .when we untangle figure [ turbo4 ] , we get figure [ turbo5 ] .if we omit the node labels , we have figure [ turbo6 ] which we call a _ picture_. formally , a picture is defined as follows : 1 .it is a simple cycle with a single distinguished vertex ( the circled one in the figure ) .it consists of both directed edges and undirected edges .the number of undirected edges is even and .4 . no two undirected edges are adjacent .adjacent directed edges have the same direction .we will use pictures as a convenient notation for counting simple cycles in turbo graphs .for example , using figure [ turbo6 ] as a template , we start from node in figure [ turbo3 ] .the first edge in the picture is a directed forward edge , so we go from along the forward edge which leads us to .the second edge in the picture is also a directed forward edge , which leads us from to .the next edge is an undirected edge , so we go from to on the other chain . in the same way , we go from to , then to , which is our starting point . 
as the path we just traversed starts from and ends at , and there are no repeated nodes in the path, we conclude that we have found a simple cycle ( of length 5 ) which is exactly what we have in figure [ turbo4 ] .we can easily enumerate all the different pictures of length , and use them as templates to find all the simple cycles at a node in a turbo graph .this approach is complete because any simple cycle in a graph has a corresponding picture .( to be exact , it has two pictures because we can traverse it in both directions . )the process of finding a simple cycle using a picture as a template can also be thought of as _ embedding _ a picture at a node in a turbo graph .this embedding may succeed , as in our example above , or it may fail , e.g. , we come to a previously - visited node other than the starting node , or we are told to go forward at the end of a chain , etc . using pictures , the problem of counting the number of simple cycles of length can be formulated this way : * count the number of different pictures of length , * for each distinct picture , calculate the probability of embedding it in a turbo graph at a randomly chosen node .we wish to determine the number of different pictures of length with undirected edges .first , let us define two functions : : : = the number of ways of picking disjoint edges ( i.e. , no two edges are adjacent to each other ) from a cycle of length , with a distinguished vertex and a distinguished clockwise direction . : : = the number of ways of picking independent edges from a path of length , with a distinguished endpoint .these two functions can be evaluated by the following recursive equations : and the solutions are thus , the number of different pictures of length and with undirected edges , ( and is even ) , is given by where is the number of different ways to give directions to the directed edges .the division by two occurs because the direction of the picture is irrelevant . because of the undirected edges , there are segments of directed edges , with one or more edges in a segment ; the edges within a segment must have a common direction ( property 4 of a picture ) .in this section we derive the probability of embedding a picture of length and with undirected edges at a node in a turbo graph with chain length .let us first consider a simple picture where the directed edges and undirected edges alternate ( so ) and all the directed edges point in the same ( forward ) direction .let us label the nodes of the picture as ,,, , ,,,, , + ,,, .we want to see if this picture can be successfully embedded , i.e. if the above nodes are a simple cycle .let us call the chain on which resides _ side 1 _ , and the opposite chain _side 2_. the probability of successfully embedding the picture at is the product of the probabilities of successfully following each edge of the picture , namely , * .this will fail if is the right - most node on side 1 .so .* . here* .this will fail if is the right - most node on side 2 .so .* . is the cross - chain " neighbor of . as there is already a connection between and , can not possibly coincide with ; but it may coincide with and make the embedding fail .this gives us .+ more generally , if there are visited nodes on side 1 , then of them already have their connections to side 2 .so from a node on side 2 , there are only nodes on side 1 to go to , of which are visited nodes .so .* .here we have two previously visited nodes ( , ) . 
when there are previously - visited nodes , the unvisited nodes are partitioned into up to segments , and after we come from side 2 to side 1 , if we fall on the right - most node of one of the segments , the embedding will fail : either we go off the chain , or we go to a previously - visited node . in this way , we have . * * . * . this final step ( ) completes the cycle .multiplying these terms , we arrive at ^ 2 \nonumber \\ & \leq & p_n(k , m ) \\ & \leq & \frac{1}{n-\frac{m}{2 } } \prod_{s=0}^{s=\frac{m}{2 } } \left [ \left(1-\frac{s}{n - s}\right ) \left(1-\frac{1}{n-2s}\right ) \right]^2 \nonumber\end{aligned}\ ] ] for large and small , the ratio between the upper bound and the lower bound is close to 1 .for example , when and the ratio is .the above analysis can be extended easily to the general case where : * the directed edges in the picture are not constrained to be unidirectional .* .( because the undirected edges can not be adjacent to each other , the total number of edges must be . ) when , no two directed edges are adjacent .equivalently , there are segments of directed edges , and in each segment , there is only one edge . when , we still have segments of directed edges , but there is more than one edge in a segment .suppose for , the segment of side 1 has edges , and the segment of side 2 has edges . is given by : \nonumber \\& \leq & p_n(k , m ) \label{bound1 } \\ & \leq & \frac{1}{n-\frac{m}{2 } } \prod_{s=0}^{s=\frac{m}{2 } } \left [ \left(1-\frac{\sum_{i=1}^{s}a_i } { n - s}\right ) \left(1-\frac{1}{n-\sum_{i=1}^{s}(a_i+1)}\right ) \left(1-\frac{\sum_{i=1}^{s}b_i } { n - s}\right ) \left(1-\frac{1}{n-\sum_{i=1}^{s}(b_i+1)}\right ) \right ] \nonumber\end{aligned}\ ] ] from and we can simplify the bounds in equation [ bound1 ] to ^ 2 \nonumber \\ & \leq & p_n(k , m ) \label{bounds } \\ & \leq & \frac{1}{n-\frac{m}{2 } } \prod_{s=0}^{s=\frac{m}{2 } } \left [ \left(1-\frac{s}{n - s}\right ) \left(1-\frac{1}{n-2s)}\right ) \right]^2 \nonumber\end{aligned}\ ] ] the ratio between the upper bound and the lower bound is still close to 1 .for example , when , the ratio is .given that the bounds are so close in the range of , and of interest , in the remainder of the paper we will simply approximate by the arithmetic average of the upper and lower bound .in section [ sec : count ] we derived , the number of different pictures of length with undirected cycles ( equation ( [ npic ] ) ) . in section[ sec : prob ] we estimated , the probability of embedding a picture ( with length and undirected edges ) at a node in a turbo graph with chain length ( equation ( [ bounds ] ) ) . with these two results , we can now determine the probability of no cycle of length or less at a randomly chosen node in a turbo graph of length .let be the probability that there are no cycles of length at a randomly chosen node in a turbo graph .thus , in this independence approximation we are assuming that at any particular node the event there are no cycles of length " is independent of the event there are no cycles of length or lower . "this is not strictly true since ( for example ) the non - existence of a cycle of length can make certain cycles of length impossible ( e.g. , consider the case ) .however , these cases appear to be relatively rare , leading us to believe that the independence assumption is relatively accurate to first - order. 
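Written out, this first independence assumption presumably combines the per-length probabilities multiplicatively (recall that the shortest possible simple cycle in a turbo graph has length 4):
\[
\Pr\{\text{no simple cycle of length} \le k\} \;\approx\; \prod_{l=4}^{k} \Pr\{\text{no simple cycle of length } l\}.
\]
This product form is the one used below when the theoretical estimates are compared against simulation.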
now we estimate , the probability of no cycle of length at a randomly chosen node .denote the individual pictures of length as ,, , and let mean that the picture fails to be embedded . here we make a second independence assumption which again may be violated in practice .the non - existence of embedding of certain pictures ( the event being conditioned on ) will influence the probability of existence of embedding of other pictures .however , we conjecture that this dependence is rather weak and that the independence assumption is again a good first - order approximation .we ran a series of simulations where 200 different turbo graphs ( i.e. , each graph has a different random permuter ) of length are randomly generated . for each graph, we counted the simple cycles of length , at 100 randomly chosen nodes . in total , the cycle counts at nodes are collected to generate an empirical estimate of the true .the theoretical estimates are derived by using the independence assumptions of equations ( [ independence1 ] ) and ( [ maineqn ] ) . is calculated as the arithmetic average of the two bounds in equation ( [ bounds ] ) .the simulation results , together with the theoretical estimates are shown in figure [ appxtable ] .the difference in error is never greater than about 0.005 in probability .note that neither the sample - based estimates nor the theoretical estimates are exact .thus , differences between the two could be due to either sampling variation or error introduced by the independence assumptions in the estimation .the fact that the difference in errors is non - systematic ( i.e. , contains both positive and negative errors ) suggests that both methods of estimation are fairly accurate . for comparison ,in the last column of the table we provide the estimated standard deviation , where is the simulation estimate .we can see that the differences between and are within of except for the last three rows where is quite small . for large can expect that the simulation estimate of will be less accurate since we are estimating relatively rare events .thus , since our estimate of is a function of , for larger values any differences between theory and simulation could be due entirely to sampling error . or less , as a function of .,width=432 ] figure [ numericresult ] shows a plot of the estimated probability that there are no cycles of length or less at a randomly chosen node .there appears to be a soft threshold effect " in the sense that beyond a certain value of , it rapidly becomes much more likely that there are cycles of length or less at a randomly chosen node .the location of this threshold increases as increases ( i.e. , as the length of the chain gets longer ) . or less , including the nodes ( figures [ turbo2 ] ) in the adg for turbo decoding , as a function of .,width=432 ]when is sufficiently large , ( i.e. , ) , the probability of embedding a picture ( equation ( [ bounds ] ) ) can simply written as in this case , we do not differentiate between pictures with different numbers of undirected edges the total number of pictures of length is the log probability of no cycle of length is then from which one has thus , the probability of no cycle of length or less is approximately .this probability equals 0.5 at , which provides an indication of how the curve will shift to the right as increases . 
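For modest chain lengths, the simulation just described can be reproduced with a brute-force search for the shortest simple cycle through each sampled node. The sketch below is ours rather than the authors' code: the function and parameter names are invented, the default sizes are far smaller than the block lengths quoted above, and the depth-limited recursion becomes slow well before chain lengths of 64000, so it should be read only as an illustration of the quantity being estimated.

```python
import random

def make_turbo_graph(n, rng=random.Random(0)):
    """Two chains of n nodes each; node (side, i) is cross-linked to one node
    on the other chain through a random permutation (the permuter)."""
    perm = list(range(n))
    rng.shuffle(perm)
    cross = {}
    for i in range(n):
        cross[(0, i)] = (1, perm[i])
        cross[(1, perm[i])] = (0, i)
    return cross        # chain edges are implicit: (side, i) <-> (side, i +/- 1)

def neighbors(cross, n, node):
    """Cross-chain neighbor plus the (at most two) chain neighbors."""
    side, i = node
    out = [cross[node]]
    if i + 1 < n:
        out.append((side, i + 1))
    if i > 0:
        out.append((side, i - 1))
    return out

def shortest_cycle_at(cross, n, start, kmax):
    """Length of the shortest simple cycle through 'start' of length <= kmax,
    or None if there is none (depth-limited DFS; cycles have length >= 4)."""
    best = [None]
    def dfs(node, depth, visited):
        if best[0] is not None and depth >= best[0]:
            return
        for nxt in neighbors(cross, n, node):
            if nxt == start and depth >= 3:        # closing edge: length depth + 1
                if best[0] is None or depth + 1 < best[0]:
                    best[0] = depth + 1
            elif nxt not in visited and depth + 1 < kmax:
                dfs(nxt, depth + 1, visited | {nxt})
    dfs(start, 0, frozenset({start}))
    return best[0]

def prob_no_cycle_upto(n, kmax, graphs=20, nodes_per_graph=50, seed=1):
    """Empirical estimate of Pr{no simple cycle of length <= k}, k = 4..kmax."""
    rng = random.Random(seed)
    hits = {k: 0 for k in range(4, kmax + 1)}
    total = 0
    for _ in range(graphs):
        cross = make_turbo_graph(n, rng)
        for _ in range(nodes_per_graph):
            start = (rng.randrange(2), rng.randrange(n))
            c = shortest_cycle_at(cross, n, start, kmax)
            total += 1
            for k in hits:
                if c is None or c > k:
                    hits[k] += 1
    return {k: hits[k] / total for k in hits}
```

For example, `prob_no_cycle_upto(2000, 12)` gives the empirical probabilities for cycle-length bounds 4 through 12 on chains of length 2000.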
roughly speaking , to double , one would have to square the block - length of the code from to .up to this point we have been counting cycles in the turbo graph ( figure [ turbo3 ] ) where we ignore the information nodes , .the results can readily be extended to include these nodes by counting each undirected edge ( that connects nodes from different chains ) as two edges .let be the number of undirected edges and the cycle length , respectively , when we ignore the nodes . from , we have . substituting these into equation [ maineqn ] , we have using equation [ includeu ] , we plot in figure [ numericu ] the estimated probability of no cycles of length or less in the graph for turbo decoding which includes the nodes ( figure [ turbo2 ] ) .not surprisingly , the effect is to shift " the graph to the right , i.e. , adding nodes has the effect of lengthening the typical cycle . for the purposes of investigating the properties of the message - passing algorithm , the relevant nodes on a cycle may well be those which are directly connected to a node ( for example , the nodes in a systematic code and any nodes which are producing a transmitted codeword ) .the rationale for including these particular nodes ( and not including nodes which are not connected to a node ) is that these are the only information nodes " in the graph that in effect can transmit messages that potentially lead to multiple - counting .it is possible that it is only the number of these nodes on a cycle which is relevant to message - passing algorithms .thus , for a particular code structure , the relevant nodes to count in a cycle could be redefined to be only those which have an associated .the general framework we have presented here can easily be modified to allow for such counting .note also that various extensions of turbo codes are also amenable to this form of analysis .for example , for the case of a turbo code with more than two constituent encoders , one can generalize the notion of a picture and count accordingly .in our construction of the turbo graph ( figure [ turbo3 ] ) we use a random permutation , i.e. the one - to - one connections of nodes from the two chains are chosen randomly by a random permutation . in this sectionwe look at the s - random " permutation [ 10 ] , a particular semi - random construction . formally , the s - random permutation is a random permutation function on the sequence such that the s - random permutation stipulates that if two nodes on a chain are within a distance of each other , their counterparts on the other chain can not be within a distance of each other .this restriction will eliminate some of the cycles occurring in a turbo graph with a purely random permutation .for example , there can not be any cycles in the graph of length , 5 , 6 or 7 .thus , the s - random construction disallows cycles of length for .however , from section [ sec : simulation ] we know that these short cycles ( ) occur relatively rarely in realistic turbo codes . in figure [ srand8 ], we show a cycle of length . 
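A common way to generate such a permuter is the greedy rejection construction usually associated with reference [10]; the sketch below is our paraphrase of that idea rather than code taken from the paper. Each new value is accepted only if it differs by more than S from the values already placed in the previous S positions, which enforces exactly the condition stated above; in practice S is kept well below the square root of the block length so that the greedy search rarely gets stuck.

```python
import random

def s_random_permutation(n, s, rng=random.Random(0), max_restarts=1000):
    """Random permutation of range(n) such that positions within distance s
    of each other receive values that differ by more than s."""
    for _ in range(max_restarts):
        pool = list(range(n))
        rng.shuffle(pool)
        perm = []
        stuck = False
        while pool:
            for idx, cand in enumerate(pool):
                # candidate must differ by more than s from the values already
                # placed in the previous s positions
                if all(abs(cand - prev) > s for prev in perm[-s:]):
                    perm.append(cand)
                    pool.pop(idx)
                    break
            else:
                stuck = True        # no admissible candidate left; restart
                break
        if not stuck:
            return perm
    raise RuntimeError("could not build an S-random permutation; try a smaller s")
```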
as long as the distances of and are large enough ( ) , cycles of lengths are possible for any .+ & random & + k & permutation & & & & + 4 & 1.0000 & 1.0000 & 1.0000 & 1.0000 & 1.0000 + 5 & 0.9998 & 1.0000 & 1.0000 & 1.0000 & 1.0000 + 6 & 0.9995 & 1.0000 & 1.0000 & 1.0000 & 1.0000 + 7 & 0.9991 & 1.0000 & 1.0000 & 1.0000 & 1.0000 + 8 & 0.9984 & 0.9996 & 0.9998 & 0.9998 & 0.9998 + 9 & 0.9967 & 0.9983 & 0.9987 & 0.9987 & 0.9984 + 10 & 0.9924 & 0.9949 & 0.9945 & 0.9956 & 0.9950 + 11 & 0.9838 & 0.9890 & 0.9891 & 0.9877 & 0.9887 + 12 & 0.9684 & 0.9739 & 0.9765 & 0.9736 & 0.9748 + 13 & 0.9389 & 0.9460 & 0.9503 & 0.9449 & 0.9478 + 14 & 0.8818 & 0.8877 & 0.8920 & 0.8904 & 0.8913 + 15 & 0.7754 & 0.7804 & 0.7847 & 0.7858 & 0.7833 + 16 & 0.6006 & 0.6114 & 0.6014 & 0.6121 & 0.6006 + 17 & 0.3589 & 0.3671 & 0.3629 & 0.3731 & 0.3647 + 18 & 0.1259 & 0.1315 & 0.1289 & 0.1360 & 0.1330 + 19 & 0.0155 & 0.0146 & 0.0164 & 0.0184 & 0.0183 + 20 & 0.0002 & 0.0004 & 0.0003 & 0.0004 &0.0008 + we simulated s - random graphs and counted cycles in the same manner as described in section [ sec : simulation ] , except that the random permutation was now carried out in the s - random fashion as described in [ 10 ] .the results in table [ tab : srandom ] show that changing the value of does not appear to significantly change the nature of the cycle - distribution .the s - random distributions of course have zero probability for , but for the results from both types of permutation appear qualitatively similar , with a small systematic increase in the probability of a node not having a cycle of length for the s - random case ( relative to the purely random permutation ) . as the cycle - length increases , the difference between the s - random and random distributions narrow . for relatively short cycles with values of between 8 and 12 ( say )the difference is relatively substantial if one considers the the probability of _ having _ a cycle of length less than or equal to .for example , for and , the s - random probability is 0.0050 while the probability for the random permuter is 0.0076 ( see table [ tab : srandom ] ) .in [ 11 , 12 ] it was shown ( empirically ) that the s - random construction does not have an error floor " of the form associated with a random graph , i.e. , the probability of bit error steadily decreases with increasing snr for the s - random construction .the improvement in bit error rate is attributed to the improved weight distribution properties of the code resulting from the s - random construction . from a cycle - length viewpointthe s - random construction essentially only differs slightly from the random construction ( e.g. , by eliminating the relatively rare cycles of length and ) .note , however , that because two graphs have very similar cycle length distributions does not necessarily imply that they will have similar coding performance .it is possible that the elimination of the very short cycles combined with the small systematic increase in the probability of not having a cycle of length or less ( ) , may be a contributing factor in the observed improvement in bit error rate , i.e. , that even a small systematic reduction in the number of short cycles in the graph may translate into the empirically - observed improvement in coding performance .ldpc codes are another class of codes exhibiting characteristics and performance similar to turbo codes [ 13 , 14 ] . like turbo codes ,the underlying adg has loops , rendering exact decoding intractable . 
once again , however , iterative decoding ( aka message - passing ) works well in practice .recent analyses of iterative decoding for ldpc codes have assumed that there are no short cycles in the ldpc graph structure [ 15 , 16 ] .thus , as with turbo codes , it is again of interest to investigate the distribution of cycle lengths for realistic ldpc codes .the graph structure of regular ldpc codes is shown in figure [ ldpc ] ( an _ ldpc graph _ ) . in this bipartite graph , at the bottom are variable nodes , and at the top are the check nodes . for the regular random ldpc constructioneach variable node has degree , each check node has degree ( obviously ) , and the connectivity is generated in a random fashion . .] using our notion of a _ picture _ , we can also analyze the distribution of cycle lengths in ldpc graphs as we have done in turbo graphs .obviously , here the cycle length must be even .we define a picture for an ldpc graph as follows .recall that in a turbo graph , the edges in a picture are labeled as _ undirected , forward _ , or _backward_. for an ldpc graph , we label an edge in a picture by a number between 1 and ( or between 1 and ) to denote that this edge is the -th edge coming from a node .first consider the probability of successfully embedding a picture of length at a randomly chosen node in an ldpc graph . \nonumber \\ & & \cdot \left [ ( 1-\frac{1}{d_v})(1-\frac{1}{c-1 } ) \right ] \nonumber \\ & & \cdot \left [ ( 1-\frac{1}{d_c})(1-\frac{2}{n-1 } ) \right ] \nonumber \\ & & \cdot \left [ ( 1-\frac{1}{d_v})(1-\frac{2}{c-1 } ) \right ] \nonumber \\ & & \cdots \nonumber \\ & & \cdot \left [ ( 1-\frac{1}{d_c})(1-\frac{m-2}{n-1 } ) \right ] \nonumber \\ & & \cdot \left [ ( 1-\frac{1}{d_v})(1-\frac{m-2}{c-1 } ) \right ] \nonumber \\ & & \cdot \left [ ( 1-\frac{1}{d_c})\frac{1}{n-1 } \right ] \nonumber \\ & = & \frac{1}{n-1 } \left ( 1-\frac{1}{d_c } \right ) ^{m } \left ( 1-\frac{1}{d_v } \right ) ^{m-1 } \prod_{i=0}^{m-2 } \left [ \left ( 1-\frac{i}{n-1 } \right ) \left ( 1-\frac{i}{c-1 } \right ) \right ] \nonumber\end{aligned}\ ] ] the number of different pictures of length is finally , the probability of no cycle of length at a randomly chosen node in a ldpc graph is : where we make the same two independence assumptions as we did for the turbo code case . or less in an ldpc graph with , as a function of .,width=432 ] or less in an ldpc graph with , as a function of .,width=432 ] we ran a number of simulations in which we randomly generated 200 different randomly generated ldpc graphs and counted the cycles at 100 randomly chosen nodes in each .we plot in figures [ ldpc15k ] and [ ldpc63k ] the results of the simulation and the theoretical estimates from equation [ ldpceqn ] for and 63000 . from the simulation results we see that the ldpc curve is qualitatively similar in shape to the turbo graph curves earlier but has been shifted to the left , i.e., there is a higher probability of short cycles in an ldpc graph than in a turbo graph , for the specific parameters we have looked at here .this is not surprising since the branching factor in a turbo graph is 3 ( each node is connected to 3 neighbors ) while the average branching factor in an ldpc graph ( as analyzed with ) is 4 .existing theoretical analyses of the message - passing algorithms for ldpc codes rely on the assumption that none of the cycles in the underlying graph are short [ e.g. , 15 , 16 ] .in contrast , here we explicitly estimate the distribution on cycle lengths , and find ( e.g. 
, figure 10 and 11 ) that there is a soft threshold " effect ( as with turbo graphs ) .for example , for , the simulation results in figure 10 illustrate that the probability is about 50% that a randomly chosen node participates in a simple cycle of length 9 or less .the independence assumptions clearly are not as accurate in the ldpc case as they were for the turbo graphs .recall that we make two separate independence assumptions in our analysis , namely that 1 .the event that there is no cycle of length is independent of the event that there are no cycles of length or lower , and 2 .the event that a particular picture can not be embedded at a randomly chosen node is independent of the event that other pictures can not be embedded .we can check the accuracy of the first independence assumption readily by simulation .we ran a number of simulations to count cycles in randomly generated turbo and ldpc graphs . from the simulation data, we estimate the marginal probabilities , and the joint probabilities . to test the accuracy of our independence assumption, we compare the product of the estimated marginal probabilities with the estimated joint probability . .testing the independence between and in turbo graphs with chain length .[ cols=">,>,>,>",options="header " , ] table [ tab : turbo ] provides the comparison for turbo graphs for .the products of the marginal probabilities are quite close to the joint probabilities , indicating that the independence assumption leads to a good approximation for turbo graphs .table [ tab : ldpc ] gives a similar results for ldpc , i.e. , the independence assumption appears quite accurate here also .thus , we conclude that the first independence assumption ( that the non - occurrence of cycles of length is independent of the non - occurrence of cycles of length of less ) appears to be quite accurate for both turbo graphs and ldpc graphs .since assumption 2 is the only other approximation being made in the analysis of the ldpc graphs , we can conclude that it is this approximation which is less accurate ( given that the approximation and simulation do not agree so closely overall for ldpc graphs ) . recall that the second approximation is of the form : this assumption can fail for example when two pictures have the first few edges in common .if one fails to be embedded on one of these common edges , then the other will fail too .so the best we can hope from this approximation is that because there are so many pictures , these dependence effects will cancel out . in other words ,we know that but we hope that one possible reason for the difference between the ldpc case and the turbo case is as follows . for turbo graphs , in the expression for the probability of embedding a picture , ^ 2\ ] ] the term is the most important , i.e. , all other terms are nearly 1 .so even if two pictures share many common edges and become dependent , as long as they do not share that most important edge , they can be regarded as effectively independent .in contrast , for ldpc graphs , the contribution from the individual edges to the total probability tends to be more evenly distributed . 
"each edge contributes a term or a term .no single edge dominates the right hand side of ,\ ] ] and , thus , the effective independence " may not hold as in the case of turbo graphs .for turbo graphs we have shown that randomly chosen nodes are relatively rarely on a cycle of length 10 or less , but are highly likely to be on a cycle of length 20 or less ( for a block length of 64000 ) .it is interesting to conjecture about what this may tell us about the accuracy of the iterative message - passing algorithm in this context .it is possible to show that there is a well - defined distance effect " in message propagation for typical adg models [ 17 ] . consider a simple model where there is a hidden markov chain consisting of binary - valued state nodes , .in addition there is are observed , one for each state and which only depend directly on each state . is a conditional gaussian with mean and standard deviation .one can calculate the effect of any observed on any hidden node , , in terms of the expected difference between and , averaged across many observations of the s .this average change in probability , from knowing , can be shown to be proportional to , i.e. , the effect of one variable on another dies off exponentially as a function of distance along the chain .furthermore , one can show that as the channel becomes more reliable ( decreases ) , the dominance of local information over information further away becomes stronger , i.e. , has less effect on the posterior probability of on average .the exponential decay of information during message propagation suggests that there may exist graphs with cycles where the information being propagated by a message - passing algorithm ( using the completely parallel , or concurrent , version of the algorithm ) can effectively die out " before causing the algorithm to double count .of course , as we have seen in this paper , there is a non - zero probability of cycles of length for realistic turbo graphs , so that this line of argument is insufficient on its own to explain the apparent accuracy of iterative decoding algorithms .it is also of interest to note that that iterative decoding has been empirically observed to converge to stable bit decisions within 10 or so .as shown experimentally in [ 5 ] , even beyond 10 iterations of message - passing there are still a small fraction of nodes which typically change bit decisions . combined with the results on cycle length distributions in this paper, this would suggest that it is certainly possible that double - counting is occurring at such nodes. it may be possible to show , however , that any such double - counting has relatively minimal effect on the overall quality of the posterior bit decisions .the distributions of cycle lengths in turbo code graphs and ldpc graphs were analyzed and simulated .short cycles ( e.g. , of length ) occur with relatively low probability at any randomly chosen node .as the cycle length increases , there is a threshold effect and the probability of a cycle of length or less approaches 1 ( e.g. , for . 
for turbo codes , as the block length becomes large , the probability that a cycle of length or less exists at any randomly chosen node behaves approximately as .the s - random construction is shown to eliminate very short cycles and for larger cycles results in only a small systematic decrease in the probability of such cycles .for ldpc codes the analytic approximations are less accurate than for the turbo case ( when compared to simulation results ) .nonetheless the distribution as a function of shows qualitatively similar behavior to the distribution for turbo codes , as a function of cycle length . in summary ,the results in this paper demonstrate that the cycle lengths in turbo graphs and ldpc graphs have a specific distributional character .we hope that this information can be used to further understand the workings of iterative decoding .the authors are grateful to r. j. mceliece and the coding group at caltech for many useful discussions and feedback .1 . c. berrou , a. glavieux , and p. thitimajshima ( 1993 ) . near shannon limit error - correcting coding and decoding : turbo codes ._ proceedings of the ieee international conference on communications_. pp .1064 - 1070 . 2 .mceliece , d.j.c .mackay , and j .- f .cheng ( 1998 ) .turbo decoding as an instance of pearl s ` belief propagation ' algorithm ._ ieee journal on selected areas in communications , _sac-16(2):140 - 152 .f. r. kschischang , b. j. frey ( 1998 ) .iterative decoding of compound codes by probability propagation in graphical models ._ ieee journal on selected areas in communications , _sac-16(2):219 - 230 . 4 .p. smyth , d. heckerman , and m. i. jordan ( 1997 ) . `probabilistic independence networks for hidden markov probability models , ' _ neural computation _ , 9(2 ) , 227269 .b. j. frey ( 1998 ) ._ graphical models for machine learning and digital communication ._ mit press : cambridge , ma .j. pearl ( 1988 ) , _ probabilistic reasoning in intelligent systems : networks of plausible inference . _ morgan kaufmann publishers , inc . , san mateo , cal. r. bahl , j. cocke , f. jelinek , and j. raviv ( 1974 ) . `optimal decoding of linear codes for minimizing symbol error rate , ' _ ieee transactions on information theory _, 20:284287 .y. weiss ( 1998 ) . `correctness of local probability propagation in graphical models with loops , ' submitted to _ neural computation_. 9 .r. j. mceliece , e. rodemich , and j. f. cheng , ` the turbo - decision algorithm , ' in _ proc .allerton conf . on comm . , control , comp .s. dolinar and d. divsalar ( 1995 ) , _ weight distributions for turbo codes using random and nonrandom permutations ._ tda progress report 42 - 121 ( august 1995 ) , jet propulsion laboratory , pasadena , california . 11 .r. g. gallager ( 1963 ) , _ low - density parity - check codes . _cambridge , massachusetts : mit press . 12 .mackay , r.m .neal ( 1996 ) , _ near shannon limit performance of low density parity check codes _, published in _ electronics letters _ , also available from http://wol.ra.phy.cam.ac.uk/. 13 .k. s. andrews , c. heegard , and d. kozen , ( 1998 ) . `interleaver design methods for turbo codes , ' _ proceedings of the 1998 international symposium on information theory _ , pg.420 .c. heegard and s. b. wicker ( 1998 ) , _ turbo coding _ ,boston , ma : kluwer academic publishers . 15 .t. richardson , r. urbanke ( 1998 ) , _ the capacity of low - density parity check codes under message - passing decoding _ , preprint , available at + _ http://cm.bell - labs.com / who / tjr / pub.html_. 
16 . m. g. luby , m. mitzenmacher , m. a. shokrollahi , d. a. spielman ( 1998 ) , ` analysis of low density codes and improved designs using irregular graphs , ' in _ proceedings of the 30th acm stoc_. also available online at _ http://www.icsi.berkeley.edu/luby/index.html_. 17 . x. ge and p. smyth , ` _ distance effects in message propagation _ ' , in preparation .
|
This paper analyzes the distribution of cycle lengths in turbo decoding and low-density parity check (LDPC) graphs. The properties of such cycles are of significant interest in the context of iterative decoding algorithms which are based on belief propagation or message passing. We estimate the probability that there exist no simple cycles of length less than or equal to a given bound at a randomly chosen node in a turbo decoding graph, using a combination of counting arguments and independence assumptions. For large block lengths, a simple approximate expression for this probability is derived. Simulation results validate the accuracy of the various approximations. For example, for turbo codes with a block length of 64000, a randomly chosen node has only a small chance of being on a cycle of length less than or equal to 10, but is very likely to be on a cycle of length less than or equal to 20. The effect of the "s-random" permutation is also analyzed and it is shown that while it eliminates the shortest cycles, it does not significantly affect the overall distribution of cycle lengths. Similar analyses and simulations are also presented for the graphs of LDPC codes. The paper concludes by commenting briefly on how these results may provide insight into the practical success of iterative decoding methods.
|
despite a significant progress in theoretical and computational homogenization methods , material characterization techniques and computational resources , the determination of overall response of structural textile composites still remains an active research topic in engineering materials science . from a myriad of modeling techniques developed in the last decades ( see e.g. review papers ) , it is generally accepted that detailed discretization techniques , and the finite element method ( fem ) in particular , remain the most powerful and flexible tools available .the major weakness of these methods , however , is the fact that their accuracy crucially depends on a detailed specification of the complex microstructure of a three - dimensional composite , usually based on two - dimensional micrographs of material samples , e.g. ( * ? ? ?* ; * ? ? ?* ; * ? ? ?* ; * ? ? ?* and reference therein ) .such a step is to a great extent complicated by _ random _ imperfections resulting from technological operations , which are difficult to be incorporated to a computational model in a well - defined way .if only the overall , or macroscopic , response is the important physical variable , it is sufficient to introduce structural imperfections in a cumulative sense using available averaging schemes such as voight / reuss bounds or the mori - tanaka method .when , on the other hand , details of local stress and strain fields are required , it is convenient to characterize the mesoscopic material heterogeneity by introducing the concept of a periodic unit cell ( puc ) .while application of pucs in problems of strictly periodic media has a rich history , their introduction in the field of random or imperfect microstructures is still very much on the frontier , despite the fact that the roots for incorporating basic features of random microstructures into the formulation of a puc were planted already in mid 1990s in .additional extension presented in , see also our work for an overview , gave then rise to what we now call the concept of statistically equivalent periodic unit cell ( sepuc ) .in contrast with traditional approaches , where parameters of the unit cell model are directly measured from available material samples , the sepuc approach is based on their statistical characterization . in particular , the procedure involves three basic steps : * to capture the essential features of the heterogeneity pattern , the microstructure is characterized using appropriate statistical descriptors .such data are essentially the only input needed for the determination of a unit cell . * a geometrical model of a unit cellis formulated and its key parameters are postulated .definition of a suitable unit cell model is a modeling assumption made by a user , which sets the predictive capacities of sepuc for an analyzed material system .* parameters of the unit cell model are determined by matching the statistics of the complex microstructure and an idealized model , respectively . due to multi - modal character of the objective function ,soft - computing global optimization algorithms are usually employed to solve the associated problem .it should be emphasized that the introduced concept is strictly based on geometrical description of random media and as such it is closely related to previous works on random media reconstruction , in particular to the yeong - torquato algorithm presented in .such an approach is fully generic , i.e. 
independent of a physical theory used to model the material response .if needed , additional details related to the simulation goals can be incorporated into the procedure without major difficulties , e.g. , but of course at the expense of computational complexity and the loss of its generality . in the previous work , the authors studied the applicability of the sepuc concept for the construction of a single - layer unit cell reflecting selected imperfections typical of textile composites . a detailed numerical studies , based on both microstructural criteria and homogenized properties , revealed that while a single - ply unit cell can take into account non - uniform layer widths and tow undulation , it fails to characterize inter - layer shift and nesting . here ,we propose an extension of the original model allowing us to address such imperfections , which have a strong influence on the overall response of textile composites .a brief summary of the procedure for the determination of the two - ply sepuc for woven composites is given in section [ sec : sepuc ] .such extensions , however , are hardly sufficient particularly in view of a relatively high intrinsic porosity of carbon - carbon ( c / c ) composites , which are in the center of our current research efforts .it has been demonstrated in our previous work that unless this subject is properly addressed inadequate results are obtained , regardless of how `` exact '' the geometrical details of the meso - structure are represented by the computational model .unfortunately , the complexity of the porous phase seen also in [ fig : meso_porosity ] requires some approximations .while densely packed transverse cracks affect the homogenized properties of the fiber tow through a hierarchical application of the mori - tanaka averaging scheme , large inter - tow vacuoles ( crimp voids ) , attributed to both insufficient impregnation and thermal treatment , are introduced directly into the originally void - free sepuc in a discrete manner .not only microstructural details but also properties of individual composite constituents have a direct impact on the quality of numerical predictions .information supplied by manufacturers are , however , often insufficient . moreover , the carbon matrix of the composite has properties dependent on particular manufacturing parameters such as the magnitude and durations of the applied temperature and pressure .experimental derivation of some of the parameters is therefore needed . in connection with the elastic properties of the fiber and matrix ,the nanoindentation tests performed directly on the composite are discussed in section [ sec : experiment ] together with the determination of the necessary microstructural parameters mentioned already in the previous paragraphs . still , most of the work presented in this paper is computational .in particular , a brief summary of the procedure for the determination of the two - ply sepuc for woven composites is given in section [ sec : sepuc ] .section [ sec : examples ] is then reserved for the validation of the extracted geometrical and material parameters . 
to that end, the heat conduction and classical elasticity homogenization problems are validated against available experimental measurements .the concluding remarks and future extensions are presented in section [ sec : conclusion ] .as already stated in the introductory part , much of the considered here is primarily computational .however , no numerical predictions can be certified if not supported by proper experimental data .the objective of the experimental program in the context of the present study is twofold .first , reliable geometrical data for the construction of the unit cell and material parameters of both the carbon fibers and carbon matrix for the prediction of effective properties are needed . since still derived on the basis of various assumptions , these results must be next confirmed experimentally to acquire real predictive power . considering the mesoscopic complexity of c / c composites , the supportive role of experiments is assumed to have the following four components : * two - dimensional image analysis providing binary bitmaps of the composite further exploited in the derivation of two - layer sepuc * x - ray tomography yielding a three - dimensional map of distribution , shape and volume fraction of major pores to be introduced into a void - free sepuc .* nanoindentation tests supplying the local material parameters which either depend on the manufacturing process or are not disclosed by the producer . for the above purposes a carbon - polymer ( c / p ) laminated plate was first manufactured by molding together eight layers of carbon fabric hexcel g 1169 composed of carbon multifilament torayca t 800 hb and impregnated by phenolic resin umaform le .a set of twenty specimens having dimensions mm were then cut out of the laminate and subjected to further treatment ( carbonization at 1000 , reimpregnation , recarbonization , second reimpregnation and final graphitization at 2200 ) to create the c / c composite , see [ fig : scanned_structure ] for an illustration and for more details .the reported specimens were then fixed into the epoxy resin and subject to curing procedure . in the last step ,the specimen was subjected to final surface grounding and polishing using standard metallographic techniques . [ cols="^,^,^ " , ]the financial support provided by the gar grants no .106/07/1244 and no .106/08/1379 and partially also by the research project cez msm 6840770003 is gratefully acknowledged .we extend our personal thanks to dr .dieter h. pahr from the technical university of vienna for providing the x - ray images and to dr .ji nmeek from the czech technical university in prague for providing the results from nanoindentation tests .j. zeman , m. ejnoha , homogenization of balanced plain weave composites with imperfect microstructure : part i theoretical formulation , international journal of solids and structures 41 ( 22 - 23 ) ( 2004 ) 65496571 . s. v. lomov , d. s. ivanov , i. verpoest , m. zako , t. kurashiki , h. nakai , s. hirosawa , meso - fe modelling of textile composites : road map , data flow and algorithms , composites science and technology 67 ( 9 ) ( 2007 ) 18701891 .g. hivet , p. boisse , consistent 3d geometrical model of fabric elementary cell .application to a meshing preprocessor for 3d finite element analysis , finite elements in analysis and design 42 ( 1 ) ( 2005 ) 2549 .j. skoek , j. zeman , m. 
ejnoha , effective properties of textile composites : application of the mori - tanaka method , modelling and simulation in materials science and engineering 16 ( 8) ( 2008 ) 085002 ( 15pp ) . http://arxiv.org/abs/0803.4166 [ ] .h. kumar , c. briant , w. curtin , using microstructure reconstruction to model mechanical behavior in complex microstructures , mechanics of materials 38 ( 810 ) ( 2006 ) 818832 , special issue on `` advances in disordered materials '' .b. tomkov , m. ejnoha , j. novk , j. zeman , evaluation of effective thermal conductivities of porous textile composites , international journal for multiscale computational engineering 6(2 ) ( 2008 ) 153168 . http://arxiv.org/abs/0803.3028 [ ] .j. vorel , m. ejnoha , evaluation of homogenized thermal conductivities of imperfect carbon - carbon textile composites using the mori - tanaka method , structural engineering and mechanics 33 ( 4 ) ( 2009 ) 429446 . http://arxiv.org/abs/0809.5162 [ ] .l. p. djukic , i. herszberg , w. r. walsh , g. a. schoeppner , b. gangadhara prusty , d. w. kelly , contrast enhancement in visualisation of woven composite tow architecture using a microct scanner .part 1 : fabric coating and resin additives , composites part a : applied science and manufacturing 40 ( 5 ) ( 2009 ) 553565 .l. p. djukic , i. herszberg , w. r. walsh , g. a. schoeppner , b. gangadhara prusty , contrast enhancement in visualisation of woven composite architecture using a microct scanner .part 2 : tow and preform coatings , composites part a : applied science and manufacturing 40 ( 12 ) ( 2009 ) 18701879 .b. collins , k. matou , d. rypl , three - dimensional reconstruction of statistically optimal unit cells of multimodal particulate composites , international journal for multiscale computational engineering , in presshttp://www.nd.edu/~kmatous / papers / ijmce_recon3d.pdf .k. matou , m. lep , j. zeman , m. ejnoha , applying genetic algorithms to selected topics commonly encountered in engineering practice , computer methods in applied mechanics and engineering 190 ( 1314 ) ( 2000 ) 16291650 .m. s. talukdar , o. torsaeter , reconstruction of chalk pore networks from 2d backscatter electron micrographs using a simulated annealing technique , journal of petroleum science and engineering 33 ( 4 ) ( 2002 ) 265282 .j. c. michel , h. moulinec , p. suquet , effective properties of composite materials with periodic microstructure : a computational approach , computer methods in applied mechanics and engineering 172 ( 1999 ) 109143 .
|
a two-layer statistically equivalent periodic unit cell is offered to predict the macroscopic response of plain weave multilayer carbon-carbon textile composites. because the original formulation, presented in earlier work, falls short in describing the most typical geometrical imperfections of these material systems, it is substantially modified here, now allowing for nesting and mutual shifts of the individual layers of textile fabric in all three directions. yet, the most valuable asset of the present formulation is seen in the possibility of reflecting the influence of non-negligible meso-scale porosity through a system of oblate spheroidal voids introduced between the two layers of the unit cell. numerical predictions of both the effective thermal conductivities and elastic stiffnesses, and their comparison with available laboratory data and with results derived using the mori-tanaka (mt) averaging scheme, support the credibility of the present approach as well as the reliability of the local mechanical properties found from nanoindentation tests performed directly on the analyzed composite samples. keywords: balanced woven composites, material imperfections, statistically equivalent periodic unit cell, image processing, x-ray microtomography, nanoindentation, soft computing, numerical homogenization, steady-state heat conduction
|
stellar intensity interferometry ( sii ) is an experimental method for measuring the angular diameters and acquiring high resolution images of stars . in a stellar intensity interferometer ,the light from a star is received by two or more telescopes separated by a baseline that may range from tens of meters to kilometers , allowing for the resolution of surface features ranging from less than 0.1 milli - arcsecond ( mas ) ( longest baseline ) to 10 mas ( shortest baseline ) at visible wavelengths .intensity interferometry relies on the correlation between the light intensity fluctuations recorded by different telescopes .the fluctuations contain two components : the shot noise and the wave noise .the dominant component is the shot noise , which is the random fluctuation associated with the photon statistics , and which is uncorrelated between telescopes .the smaller component is the wave noise , which can be interpreted as the beating between different fourier components of the light reaching the different telescopes .the wave noise shows correlation between different telescopes provided there is some degree of mutual coherence in the light .the light intensity correlation between receivers is measured as the time - integrated product of the fluctuations in the photodetector currents .although higher order correlations can be of interest ( ; ; ; ; , , in this paper we will restrict ourselves to two - point correlations .still , the monte - carlo simulation approach described here could also be directly used for higher order correlation studies . in the case of a thermal light source and an ideal intensity interferometer ,the two - point correlation is equal to the squared degree of coherence of the light at the two telescopes : where represents a time average . 
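As a point of reference for the definition just given, the normalized two-point correlation of the intensity fluctuations can be estimated directly from two recorded intensity time series. The sketch below (Python with numpy) is illustrative only; the partially correlated toy streams stand in for real telescope signals and are not the paper's simulation.

import numpy as np

def intensity_correlation(i1, i2):
    """Normalized correlation of the intensity fluctuations of two telescopes."""
    d1 = i1 - i1.mean()              # AC-coupled fluctuations about the mean
    d2 = i2 - i2.mean()
    return np.mean(d1 * d2) / (i1.mean() * i2.mean())

# Toy example: two streams sharing a small common ("wave noise") component on top
# of much larger independent ("shot noise") fluctuations.
rng = np.random.default_rng(0)
common = rng.normal(size=100_000)
i1 = 1000.0 + 5.0 * common + rng.normal(scale=30.0, size=100_000)
i2 = 1000.0 + 5.0 * common + rng.normal(scale=30.0, size=100_000)
print(intensity_correlation(i1, i2))   # small positive value, here about 2.5e-5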
according to the van cittert - zernike theorem ( ; ) , the complex degree of coherence is the normalized fourier transform of the source radiance .one difficulty associated with image reconstruction using sii is that the measurable quantity is , implying that the phase of the fourier transform is lost .methods of phase recovery and image reconstruction have recently been developed and investigated ( and references therein ) and nuez showed that details of stellar surface features can be reconstructed with a relatively high degree of accuracy .sii can provide stellar imaging to investigate various topics of interest including stellar rotation , limb darkening , mass loss in be - stars , surface temperature inhomogeneities , and binary systems .it was shown by robert hanbury brown and richard twiss that the signal ( ) in an intensity interferometry measurement is where is the effective light collecting area of the telescopes , is the quantum efficiency of the photodetectors , is the spectral density of the light , is the signal bandwidth of the photodetectors and electronics , is the optical bandwidth of the light , and is the integration time .the noise ( ) is therefore the signal - to - noise ratio snr for an intensity correlation measurement with an ideal system is it is worth noting that the snr is independent of the optical bandwidth used for the observations .this expression for the snr only accounts for the photon statistics , previously referred to as the shot noise .the factor of accounts for the fact that starlight is non - polarized .if we consider fully polarized light then the snr is increased by a factor of .there is increasing interest in sii because of the relative ease to achieve long baselines as well as the advantages it may offer in terms of cost if implemented in conjunction with existing or future very high energy ( vhe ) gamma - ray act arrays ( ; ) .acts operate by taking advantage of the cherenkov light flashes produced by atmospheric showers . because the atmospheric cherenkov light is very faint, observations of vhe gamma - rays require large light collectors ( ) and are restricted to nights with little or no moonlight . during moonlit nights ,when act arrays are less effective for gamma - ray observations , they could be used for sii observations through narrow optical bandwidths .the implication is that sii measurements could be performed with act arrays with minimal interference with the vhe observation programs while increasing the scientific output of these instruments .arrays such as the cherenkov telescope array ( cta ) , which could consist of up to telescopes could provide thousands of different baselines simultaneously and could offer detailed imaging capabilities . 
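To make the sensitivity discussion above concrete, the ideal two-telescope signal-to-noise ratio can be evaluated numerically. The expression used below is the standard Hanbury Brown-Twiss estimate, SNR = A * alpha * n * |gamma|^2 * sqrt(delta_f * T / 2), with the factor 1/2 accounting for unpolarized light; since the symbols of the original equations are not reproduced here, the exact form and every numerical value should be read as assumptions for illustration.

import math

def sii_snr(area_m2, quantum_eff, spectral_density, gamma_sq,
            signal_bandwidth_hz, integration_time_s):
    """Ideal intensity-interferometry SNR; spectral_density is assumed to be in
    photons per m^2 per s per Hz of optical bandwidth."""
    return (area_m2 * quantum_eff * spectral_density * gamma_sq
            * math.sqrt(signal_bandwidth_hz * integration_time_s / 2.0))

# Example: 100 m^2 light collectors, 25% quantum efficiency, an unresolved star
# (|gamma|^2 = 1) of spectral density 5e-5, 100 MHz signal bandwidth, one hour.
print(sii_snr(100.0, 0.25, 5e-5, 1.0, 100e6, 3600.0))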
in previous studies characterizing sii sensitivity and imaging capabilities , it has been assumed that the detectors are point - like in size so the van cittert - zernike theorem applies directly and the degree of correlation between the intensity fluctuations is strictly proportional to the squared magnitude of the fourier transform of the source radiance .however , when the telescope aperture becomes comparable to the baseline separating the telescopes , then the light may no longer be regarded as fully coherent across individual apertures and the correlation data then departs from a pure fourier transform .additionally , other instrumentation - related systematic effects have been neglected in previous studies .for example , electronic artifacts such as single photon response pulse profile and excess noise affect the signal or contribute to the degradation of the snr . also , the effect of the night sky background light integrated in the point spread function ( psf ) of the light collector and the profile of the optical band pass have only been qualitatively or very approximately taken into account . furthermore , high speed digitization electronic systems are considered to be used for sii applications as they offer the flexibility of offline signal correlation analysis .the development of data analysis algorithms for such systems requires realistic simulated data .finally , other effects such as mirror non - isochronism and inaccuracies in time delay lines and star tracking can be investigated .all of these aspects are important for the performance characterization , data analysis preparation , and the design and deployment of an sii observatory .in this paper we present a monte - carlo simulation of a semi - classical quantum description of light and a simple instrumentation model we developed in order to investigate the above mentioned instrumentation - related effects .section [ simul ] describes the simulation approach to achieve the proper photon statistics at the different telescopes and it also includes models of instrumentation effects such as the single photon response pulse and excess noise .section [ app ] presents a few simulation applications characterizing instrumentation related effects , concentrating on the finite telescope diameter , the single photon pulse shape , the excess noise , and the night sky contamination . since there is no actual data available to test our simulations against , in each case we compare the results to simple models corresponding to ideal cases to obtain further validation of our approach .finally , the findings are summarized in section [ conclu ] .we model a stellar light source with a total photon flux within a given optical bandwidth as a collection of discrete sources , each contributing an equal flux .each point source is defined by a wave amplitude , an angular frequency , a phase , and an angular position . at a given time , the phase of the light is taken randomly and uniformly from $ ] and the amplitude is taken randomly from a gauss deviate of mean and variance , where is the speed of light . 
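A minimal sketch of the discrete-source model just described, where each point source carries a random phase and a Gaussian random amplitude and the superposed wave gives the fluctuating ("wave noise") intensity seen by a small collecting element. Function and variable names, and the unit-variance amplitude normalization, are assumptions rather than the paper's actual code.

import numpy as np

rng = np.random.default_rng(1)

def instantaneous_intensity(source_angles, element_pos, wavelength):
    """Intensity (|summed wave amplitude|^2) at one collecting element, one time step."""
    n = len(source_angles)
    phases = rng.uniform(0.0, 2.0 * np.pi, n)          # random emission phases
    amps = rng.normal(0.0, 1.0, n)                     # Gaussian wave amplitudes (illustrative)
    # Extra geometric phase from each source's angular position as seen at the element.
    geometric = 2.0 * np.pi / wavelength * source_angles @ element_pos
    field = np.sum(amps * np.exp(1j * (phases + geometric)))
    return np.abs(field) ** 2

mas = np.pi / 180.0 / 3600.0 / 1000.0                  # one milli-arcsecond in radians
angles = 0.5 * mas * rng.standard_normal((100, 2))     # toy 100-point "star"
print([round(instantaneous_intensity(angles, np.array([0.0, 0.0]), 500e-9), 1)
       for _ in range(5)])                             # fluctuates around ~100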
in order to make the simulation closer to the continuous distribution of a realistic stellar source , at each time - step we also randomly set the angular frequency and angular position of each point source from distributions corresponding to the spectral density spectrum , radiance , and size of the simulated stellar source .each telescope mirror is modeled as a set of small light collecting elements of area centered on position .note that `` small '' in this context means that the mutual degree of coherence between any two points within any single area element is maximal .the average number of photons emitted by the star and incident on one area element of the telescope during a time - step can be written : throughout this paper , the notation denotes a statistical average while denotes a time average .the average number of star photons collected by telescope during a given time - step can then be written : note that is the time average of and where is the average number of photons emitted by the star and collected by the telescope in one time - step .the summation of the waves prior to taking the magnitude is responsible for the non - poisson distributed light intensity fluctuations , previously referred to as wave noise , displaying correlation between the different telescopes .the actual number of photons reaching telescope in a time consists of light from the star as well as stray background light and can be written as where represents a poisson deviate of mean and variance and the night sky contamination is defined in terms of the average source radiance ( where is a dimensionless factor which sets the amount of stray light ) . in principle , with the description provided thus far , we can simulate any intensity interferometry signal , taking the time - step to be smaller than the coherence time of the light .however , in practice , it is desirable to be able to use a time - step comparable to the electronic time resolution while maintaining a manageable computation time . to do this ,the correlated , non - poisson distributed photons are artificially diluted with a stream of purely poisson distributed photons . contaminating the starlight with purelypoisson light degrades the snr equivalently to degrading the signal bandwidth , permitting simulations with . without affecting the mean number of photons incident on the telescope ,the instantaneous number of photons may be written : where the parameter sets the degree of dilution of the correlated photons without affecting the total photon rate . corresponds to a case in which the correlation is zero , while corresponds to a case of maximal correlation .we want to obtain the correlation between the signal fluctuations about the mean .they can be written , where we use so as to ensure .the correlation is computed as after correction of the mean of for the parameters , ( see appendix [ poisstat ] ) , we see that in the limit where the diameter of the telescopes is small compared to the distance required to begin to resolve surface features of the star .note that is equivalent to the term in eq .[ snr ] with .photons are recorded using electronic systems with a specific time response .it should be noted that in sii , it is the correlation between the high frequency fluctuations in light intensity recorded by different telescopes that is measured .such high frequency fluctuations are directly accessible through an ac coupling of the photo - detector . 
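The following sketch captures one reading of the dilution scheme described above and is not the paper's code: photon counts follow a Poisson law whose mean tracks the wave-noise intensity, a fraction (1 - eta) of the rate is replaced by purely Poissonian photons, and the measured correlation is corrected by eta squared. A deterministic sinusoidal modulation stands in for the wave noise of a fully coherent source.

import numpy as np

rng = np.random.default_rng(2)

def diluted_counts(wave_intensity, mean_photons, eta):
    """Photon counts per time step; wave_intensity has mean ~1 and carries the wave noise."""
    correlated = rng.poisson(eta * mean_photons * wave_intensity)
    poissonian = rng.poisson((1.0 - eta) * mean_photons, size=wave_intensity.size)
    return correlated + poissonian

w = 1.0 + 0.3 * np.sin(np.linspace(0.0, 200.0 * np.pi, 200_000))   # stand-in modulation
eta = 0.2
n1, n2 = diluted_counts(w, 50.0, eta), diluted_counts(w, 50.0, eta)
raw = np.mean((n1 - n1.mean()) * (n2 - n2.mean())) / (n1.mean() * n2.mean())
m1, m2 = rng.poisson(50.0 * w), rng.poisson(50.0 * w)              # undiluted reference
ref = np.mean((m1 - m1.mean()) * (m2 - m2.mean())) / (m1.mean() * m2.mean())
print(raw / eta**2, ref)              # the eta^2-corrected value approaches the reference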
an ac coupled tracecan be modeled using any desired single - photon response pulse whose integral is zero .the accumulation of individual photons , each modeled by an ac coupled pulse , directly provides the telescope signal . with a detailed pulse model, the sensitivity of a given system may be evaluated more precisely and the effects of experimental timing inaccuracies may be investigated . in this paper , for the application of our simulations , we restrict ourselves to studying the simplest case of an ac coupled square pulse so that our results can be compared to simple models to validate our approach . to test that the simulations yield the correct correlation , we simulate a uniform disk star and calculate as a function of the baseline for point - like telescopes , ( using a square pulse of width ) , and neglecting detector excess noise and stray light contamination .figure [ fig : test1 ] shows the simulated data for a uniform disk star , in diameter observed with a point - like telescope with a flux collection area through a optical bandwidth centered on a wavelength of .the star , which has a flux through the selected optical bandwidth ( ) , was simulated for a duration of .the unrealistically large signal - to - noise ratio is obtained because of the use of a signal bandwidth that is only 10 times smaller than the optical bandwidth .we see that the signal reproduces an airy disk profile to high precision .however , we found that the fewer the number of point sources making up the star , the greater the power at high frequencies , which causes to deviate from the airy disk profile .additionally , we found that randomizing the location of each point source within the spatial extension of the star as well as randomizing the frequency emitted by each point source at each time - step is more representative of a realistic star and further reduces deviations from the airy disk . the simulated data for the observation of a uniform disk star , in diameter is shown to reproduce an airy disk profile .see text for details .the data sets shown here are : a star consisting of 10 point sources which are not randomized at each time - step ( these points are indicated by ) ; a star consisting of 10 point sources which are randomized at each time - step ( indicated by * ) ; and a star consisting of 100 point sources which are randomized at each time - step ( indicated by + ) .deviations at large baselines are greatest when the star is simulated with fewer points and when the points of the star are not randomized at each time - step . ] to further test our approach , we compared the expected snr calculated from eq .[ snr ] to the snr of simulated correlations . in these experiments , the excess noise andstray light contamination are set to zero .each numerical experiment was run times in order to calculate the standard deviation of each set of experimental parameters for a non - resolved star ( i.e. ) . in figure[ fig : test2 ] , we compare the snr obtained from simulations to the snr calculated from eq .[ snr ] . 
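For reference, the ideal benchmark used in the uniform-disk test above is the squared visibility of a uniform disk predicted by the van Cittert-Zernike theorem (an Airy-type profile). The sketch below uses scipy's Bessel function; the 0.5 mas diameter and 500 nm wavelength are example values, not necessarily those of the simulated star.

import numpy as np
from scipy.special import j1

def uniform_disk_sq_visibility(baseline_m, diameter_rad, wavelength_m):
    """|gamma|^2 for a uniform disk: (2 J1(x)/x)^2 with x = pi * b * theta / lambda."""
    x = np.pi * baseline_m * diameter_rad / wavelength_m
    x = np.where(x == 0.0, 1e-12, x)          # avoid 0/0 at zero baseline
    return (2.0 * j1(x) / x) ** 2

mas = np.pi / 180.0 / 3600.0 / 1000.0
baselines = np.linspace(0.0, 300.0, 7)        # metres
print(uniform_disk_sq_visibility(baselines, 0.5 * mas, 500e-9))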
for the signal bandwidth ,a square pulse was used so that the width of the pulse is unambiguous .the ac coupling was taken into account by subtracting the average signal rather than including a negative tail for the pulse , so the correlation is obtained between signals as described in section [ signals ] .the width of the pulse was varied from 1 to 30 and it is verified that the standard deviation evolved as a square root of the pulse width .similarly , the flux of the light source was varied from to .we see that the simulated snr precisely follows the prescription of eq .[ snr ] . the effect on the snr of varying the flux is equivalent to varying the light collecting area of the detectors .the effects of increasing the aperture will be further discussed in section [ extended ] .these results confirm that our monte - carlo approach provides consistent results in simple cases , and so , more subtle instrumentation artifacts can be investigated .shows data acquired from varying the pulse width , and the * shows data acquired from varying the flux .the ratio of the simulated to expected snr for various parameters fall around a 1:1 ratio indicated by the straight line . ]when the effect of the telescope mirror extension is important , corrections must be applied to the data before the van cittert - zernike theorem can be used .simulations can be exploited to establish this correction .the mirror extension effects are significant and corrections must be applied in most cases since telescopes that may be constructed in future arrays such as cta could be up to in diameter , which is comparable to intertelescope distances of current and future arrays . we have investigated the effect of mirror extension on a diameter uniform disk star for various telescope diameters which are comparable to the dish sizes considered by cta .we see that the shape of the ideal curve , which is shown by the airy disk profile and by the simulated data for the case of a point - like detector ( i.e. too small to start resolving the star ) , is smoothed out as the size of individual detectors increases and begins to resolve the star . in principle, the effect of large detector sizes on the degree of correlation is equivalent to taking a successive convolution of with the shape of the light collecting area of each individual telescope ( see appendix [ convolution ] ) , which moves away from being the squared magnitude of the fourier transform of the source .because the simulated star is a uniform disk , follows an airy disk profile independently from the orientation of the direction of the baseline with respect to the star .therefore , the airy disk in two dimensions may be obtained by assuming axial symmetry in the correlation data . in order to test the simulation algorithm against the successive convolution model, we simulated a pair of identical telescopes .the double convolutions of the simulated two - dimensional data for two telescopes with uniform disk - shaped light collecting areas of equal diameters , , and respectively are shown in figure [ fig : mirrorext_conv ] . 
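The prediction referred to here can be reproduced with a direct numerical implementation of the successive-convolution picture: the ideal squared-visibility map, sampled on a grid of baseline vectors, is convolved once with the normalized shape of each telescope's light-collecting area. Grid resolution, star diameter and aperture sizes below are illustrative assumptions.

import numpy as np
from scipy.signal import fftconvolve
from scipy.special import j1

mas = np.pi / 180.0 / 3600.0 / 1000.0
lam, theta = 500e-9, 0.5 * mas

coords = np.arange(-300.0, 301.0)                      # baseline grid, 1 m resolution
bx, by = np.meshgrid(coords, coords)
x = np.pi * np.hypot(bx, by) * theta / lam
x[x == 0.0] = 1e-12
ideal = (2.0 * j1(x) / x) ** 2                         # ideal |gamma|^2 map

def disk_aperture(diameter_m):
    a = (np.hypot(bx, by) <= diameter_m / 2.0).astype(float)
    return a / a.sum()                                 # normalized so the scale is preserved

# Two identical 20 m telescopes: one convolution per aperture.
smeared = fftconvolve(fftconvolve(ideal, disk_aperture(20.0), mode="same"),
                      disk_aperture(20.0), mode="same")
print(ideal[300, 300], smeared[300, 300])              # the zero-baseline peak is smoothed down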
within the standard error ,each data set agrees well with the prediction .the effect of mirror extension on image reconstruction capabilities requires a detailed study which can not be carried out in this paper .it may be possible to develop correction algorithms to apply to the data before analysis to partially alleviate the effect ; however , some information is lost in the successive convolutions .alternatively , when a parameterized image model is available , detailed simulations can be compared to the data for parameter optimization . ) , 20 m ( * ) , and 30 m ( ) for comparison . aside from the telescope extension ,the physical parameters used in the simulations are the same as used in figure [ fig : test1 ] ] in the high - frequency regime of interest here , most of the fluctuations of the telescope signal correspond to the poisson statistics of the collected photons . to this , we must add the fact that the detector response may fluctuate from photon to photon .this is known as the excess noise .for example , in the case of a photo - multiplier , the excess noise results primarily from the fluctuations in the number of electrons ejected at the first dynode .a typical excess noise level for a photo - multiplier is around 30% of the average single - photon response .the excess noise is uncorrelated between different telescopes but it may have the effect of reducing the snr achieved in the detection of a correlation between telescopes .here , we model the excess noise by multiplying each single photon pulse by a gaussian deviate whose mean is and whose standard deviation is . however the gaussian is truncated at zero to avoid multiplication of the pulse by a negative factor . as a consequence ,the mean is greater than the most probable amplitude . in order for the mean single photon response amplitude to remain the same ,all signals are divided by the mean of the excess noise distribution . the snr expression in eq .[ snr ] accounts only for the fluctuations associated with photon statistics , so the simulations are used to gain an understanding of the noise introduced by the electronics .figure [ fig : exnoise ] shows the snr obtained from the simulations and from a model developed in appendix [ exn ] as a function of the excess noise .the model does not account for the truncation of the distribution of the single photon response pulse at zero .therefore , as expected , simulated data deviates increasingly from the model as the relative excess noise is increased .the simulated snr does not degrade as fast as suggested by the model since the distribution of pulse amplitudes is narrower due to the truncation effect .since the difference in snr between simulations with and at zero excess noise is smaller than the statistical error , we increased the number of simulation experiments by a factor of ten while maintaining all the parameters as in figure [ fig : exnoise ] in order to establish a clear difference between the sensitivities for these fluxes .we found that from the simulation , the normalized snr for a flux is compared to a model prediction of and for a flux , the snr is compared to a prediction of . 
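A sketch of the excess-noise model described above, with assumed names: each photon pulse is scaled by a Gaussian gain of mean 1 and relative width equal to the excess noise, negative draws are suppressed as a simple stand-in for the truncation at zero, and the result is renormalized by the empirical mean of the truncated distribution so that the average single-photon response is unchanged.

import numpy as np

rng = np.random.default_rng(3)

def summed_pulse_amplitude(n_photons, relative_excess_noise):
    """Total amplitude of n_photons unit pulses, each with multiplicative excess noise."""
    gains = np.clip(rng.normal(1.0, relative_excess_noise, size=n_photons), 0.0, None)
    # Empirical mean of the truncated gain distribution, used for renormalization.
    norm = np.clip(rng.normal(1.0, relative_excess_noise, size=100_000), 0.0, None).mean()
    return gains.sum() / norm

print(summed_pulse_amplitude(1000, 0.3))   # close to 1000, but with extra spread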
for non - zero excess noise , the snr departs from the evaluation using eq .[ snr ] as discussed above .this type of study allows for a more accurate and realistic estimate of the sensitivity of an sii experiment .a more realistic model of the excess noise statistics , such as a polya distribution for photo - multiplier tubes can easily be incorporated into the simulation algorithm .similarly , the noise characteristics of other types of detectors , such as silicon photo - multipliers ( sipm ) , geiger - mode avalanche photodiodes , and micro - channel plate ( mcp ) photo - multipliers could be modeled as well .telescopes and an observation time of 1ms . for each point, the snr was obtained from running the simulation 500 times .the + indicates data for fluxes of ( approximately corresponding to a visual magnitude through a optical bandwidth ) , * indicates data for fluxes of ( ) , and indicates data for fluxes of ( ) and the corresponding solid and dashed lines indicate the respective models .the model of the snr fits the simulated data well for lower values of excess noise . ]the simulations can also be used to investigate the effects of stray light contamination .when very faint stars are to be observed , the observation time must be long enough to obtain a sufficient snr for the measurement . in these cases ,contamination from the night sky background , especially during bright moonlit nights , can not be ignored .when the light received is dominated by the night sky background contamination , increasing the observation time is no longer beneficial .an arbitrary amount of stray light can be included in the simulation as a stream of additional photons with purely poisson statistics .the expected dependence of the snr on the amount of stray light is derived in appendix [ poisstat ] . to test this, we simulated a star of brightness and integrated over .the dependence of the snr as a function of the stray light , in terms of the brightness of the source , is shown in figure [ fig : bglight ] .we observe a good match between the simulated data and our model . in terms of the source flux from an unresolved source observed for a duration of with light collecting area telescopes .for each point the snr was obtained from running the simulation 500 times . ]stray light contamination ultimately limits the faintness of stars that can be observed . using the brightness of the night sky for various phases and distances from the moon from , figure [ fig : moonlight ]shows the magnitude of the stars whose brightness equals the night sky luminosity integrated within the psf .this situation corresponds to which results in a degradation of the snr by a factor of compared to ( i.e. 
no background light ) .we used an optical psf full width at half maximum ( fwhm ) of , corresponding to the veritas telescope array .another limitation for the observability of a star is the practicality of the required integration time .for this reason , in figure [ fig : moonlight ] we include the integration time required to achieve a for a star with spectral density at using a telescope of area , a photodetector quantum efficiency of , and a signal bandwidth of .this does not take into account the fact that in a telescope array , the redundancy of the baselines can be used to improve the sensitivity .fwhm under various moonlight conditions .the horizontal axis shows the angular distance of the star from the moon , the vertical axis shows the faintest visual magnitude star that can be observed , and the various curves correspond to different phases of the moon , where + indicates a moon phase of , indicates a moon phase of , * indicates a moon phase of , and indicates a moon phase of .the right vertical axis shows the integration time required to achieve a for a star with spectral density at using a telescope of area , a photodetector quantum efficiency of , and a signal bandwidth of . ]in addition to limitations imposed by the moon , sii measurements may also be affected by other stars in the field of view of the telescope , especially when observing very faint stars .if there is a star which is as bright or brighter than the star of interest in the psf of the telescope then it is no longer possible to measure the star of interest . as the brightness of the star decreases then the average number of stars within the psf of the telescope increases , as shown in figure [ fig : stellardens ] .however , for stars brighter than magnitude 12 , given the low density of bright stars , other stars within the telescope psf are unlikely to interfere with measurements .stars of magnitude 12 or fainter will likely remain out of reach of sii because of the required integration time . as a function of the apparent visual magnitude .the various curves correspond to the angular distances from the galactic plane , where + corresponds to an angular distance of , corresponds to an angular distance of , * corresponds to an angular distance of , and corresponds to an angular distance of .the upper horizontal axis shows the integration time required to achieve a for a star with spectral density at using a telescope of area , a photodetector quantum efficiency of , and a signal bandwidth of . ]previous studies have not accounted for many realistic effects associated with sii measurements . 
here , we have demonstrated that our simulations correctly reproduce our simple models predicting how and the measurement sensitivities evolve with various parameters , and we have briefly discussed instrumental properties such as telescope mirror extension , signal bandwidth limitation and electronic pulse shape , excess noise , and stray light contamination .we have verified that the signal measured with spatially extended detectors is the successive convolutions of the normalized fourier transform of the source with the shape of each detector area .we also verified that the snr degrades as the excess noise increases according to a simple model developed in appendix [ exn ] .additionally , we find that the sensitivity degradation departs from our model for large values of excess noise , also as expected .finally , we see that when the contamination of starlight by the night sky background becomes important , the snr degrades so that increasing the observation time no longer offers benefits to the measurements .again we find that the snr degrades according to our simple model developed in appendix [ poisstat ] .these tests and comparisons of simulation results and simple models were made while isolating specific instrumental aspects one at a time .the good agreements obtained in all cases gives us confidence that the simulations may be used to characterize the performances of realistic instruments via their detailed modeling .this is important to gain an understanding of the sensitivity of existing and planned instruments and also this may be used to develop data correction algorithms so as to alleviate instrumental effects impacting the signals .many other effects that will be encountered during real sii measurements can also be investigated with the simulations .for example , the simulations can be used to investigate the effects of inaccuracies in the time alignment of the signals .furthermore , act optics are generally of davies - cotton design , which is not isochronous . in previous studies ,this was approximately accounted for as a signal bandwidth limitation while we could now simulate non - isochronous effects in detail . a possible implementation approach to sii consists in digitizing the individual telescope signals .the required fast digitization rate , exceeding 100 mega - samples per second , implies that huge volumes of data must be handled .however , this provides a lot of flexibility as the correlation is obtained off - line at the time of data analysis .the simulation could also incorporate a model of the digitizer . the simulation algorithm presented in this paper can in factbe combined with any details of the envisioned instrumentation to develop and optimize data analysis and characterize the performance of any stellar intensity interferometer .the authors are thankful to michael daniel for his time and the helpful suggestions he made to improve the clarity of this paper . 99 actis m. , agnetta g. , aharonian f. , et al . , 2011 , experimental astronomy , 32 , 193316 ahn s. , fessler j. , 2003 , standard errors of mean , variance , and standard deviation estimators .eecs department , university of michigan van cittert p. , 1934 ,physica , 1 , 201210 davies j. , cotton e. , 1957 , solar energy , 1 , 1622 dravins d. , jensen h. , lebohec s. , nuez p. , proc.spie int.soc.opt.eng . , 2010 , 7734 , 77340a dravins d. , lebohec s. , jensen h. , nuez p. , 2012, astroparticle physics , in press fontana p. , 1983 , journal of applied physics , 54(2 ) , 473480 gamo h. 
, 1963 , journal of applied physics , 34(4 ) , 875876 hanbury brown r. , twiss r. , 1957 , proceedings of the royal society of london .series a. mathematical and physical sciences , 242(1230 ) , 300324 hanbury brown r. , 1974 , the intensity interferometer , taylor & francis , london holder j. , atkins r. , badran h. , blaylock g. , et al ., 2006 , astroparticle physics , 25(6 ) , 391401 jain p. , ralston j. , 2008 , a&a , 484(3 ) , 887895 krisciunas k. , schaefer b. , 1991 , publications of the astronomical society of the pacific , 103 , 1033 - 1039 lebohec s. , holder j. , 2006 , the astrophysical journal , 649(1 ) , 399 mandel l. , wolf e. , 1995 , optical coherence and quantum optics , cambridge university press , cambridge nuez p. d. , lebohec s. , kieda d. , holmes r. , jensen h. , 2010 , society of photo - optical instrumentation engineers ( spie ) conference series , 7734 nuez p. d. , holmes r. , kieda d. , lebohec s. , 2012a , mnras , 419(1 ) , 172183 nuez p. d. , holmes r. , kieda d. , rou j. , lebohec s. , 2012b , mnras , 424(2 ) , 1006 - 1011 ofir a. , ribak e. , mnras , 2006a , 368(4 ) , 16461651 ofir a. , ribak e. , mnras , 2006b , 368(4 ) , 16521656 prescott j.r . , nuclear instruments and methods , 1966 , 39(1 ) , 173 - 179 sato t. , wadaka s. , yamamoto j. , ishii j. , 1978 , applied optics , 17(13 ) , 20472052 wagner r. , byrum k. , sanchez m. , et al ., 2009 , astro2010 technology development white paper ( arxiv:0904.3565 ) weekes t. , 2003 , very high energy gamma - ray astronomy , iop , bristol zernike f. , 1938 , physica , 5(8 ) , 785795the total number of photons ( i.e. from the star and from the background light ) incident on a telescope in a time is given by where represents a poisson distribution of mean and variance , is the mean number of photons emitted by the star and collected by telescope , and with being a dimensionless factor which provides a measure of the amount of background light in terms of the amount of light from the star .the mean number of photons incident on a telescope is the sum of the photons from the star and from the night sky : where . from equations [ eq : nti ] and [ eq : ntbar ] the signal from telescope is so that .the correlation between the two telescopes signals is : considering just the numerator : using raikov s theorem , which states that for a random variable , if then can be rewritten : expanding and proceeding with the calculation by considering each term separately , it is found that : recalling the fact that .the simulation correlation is found by substituting equation [ eq : s1s2 ] into equation [ eq : simg ] : using the definition of the squared degree of coherence , then and solving for the quantity of interest , : where . since only the starlight may present correlation , the signal - to - noise expression used in figure [ fig : bglight ] which accounts for stray light contaminationis derived from equation [ eq : sgnl ] with where is the spectral density of the star , and equation [ eq : noise ] with so that : effect of large telescope area may be quite significant in practice , so it is of interest to be able to predict how it affects and degrades the signal . 
to do this, first we set the origin of the coordinate system at the light source .points on the source are labeled by positions with respect to the origin .the vector radii to the same points from a far away observer at position are denoted : for a point - like detector , the observed amplitude at is where is a random phase caused by atmospheric turbulence among other factors .the following approximation can be made : so that if the detector has a finite area , then the amplitude at position is a superposition of amplitudes at positions , where are points in the detector with respect to position . the random phase can be expressed as a function of detector coordinates as .now the superposition of amplitudes is expressed as a convolution with the detector area , i.e. to calculate the time averaged correlation between detectors and , denoted as , note that where is the light intensity at point .this is because separate points on the source are not correlated over large distances .the phase is also not correlated between separate points , that is , will only be zero when ; otherwise it will have a time variation which results in when .+ now defining , the time averaged correlation is where is a constant . when , then the angle can be defined as the correlation can now be expressed as : where is the fourier transform of the radiance distribution of the star , which goes from angular space to detector separation space .now the quantity measured in intensity interferometry is therefore , the effect of having finite sized telescopes is to replace the magnitude of mutual degree of coherence by its successive convolutions with each of the telescope light collection area shapes .ignoring the effects of stray light , the number of photons during one time - step in channel is where represents a poisson distribution of mean and variance .the correlation is a result of the non - poisson term , .the number of photons incident on channel can also be written as the sum of a poisson term and a binomial term as follows : where is the probability that when a photon arrives in channel there is a correlated photon that also arrives in channel , and for simplicity we set , restricting ourselves to a situation in which .the excess noise is introduced as a gaussian variation in the single photon response pulse .our model simply multiplies the amplitude by a random gauss variable of mean 1 and standard deviation : where the signal is made to be ac coupled by subtracting the mean number of photons from the sum of individual photon signals ( each of mean ) so that the mean of the signal is .the quantity of interest is the effect that the gaussian factor has on the sensitivity of the measurement , so standard deviation of the following term will be calculated : with to lighten notation we assume that the two telescopes are identical and receive the same amount of light from the star ( i.e. ) .developing the product of the signals , the effect of the excess noise on a correlated signal should be equivalent to the effect on an uncorrelated signal , i.e. . 
applying that , the correlation can be rewritten as : note :suppose and are random independent variables then the variance of each term can be calculated individually , resulting in then , and ^{\frac{1}{2 } } \nonumber\end{aligned}\ ] ] the standard deviation in equation [ eq : stdev ] is the standard deviation of just one time - step , while the term of interest is the standard deviation of the entire measurement , or the standard deviation of the mean : note that in the calculation described above , the gaussian variable is truncated at zero to avoid multiplying the pulse by a negative number .this becomes more significant for large values of excess noise while it is negligible when .
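A small Monte-Carlo cross-check in the spirit of this appendix (a toy model, not the paper's simulation): two Poissonian photon streams share a weakly correlated component, the per-photon excess-noise gains are approximated per time step by a Gaussian term of width eps*sqrt(n), and the spread of the resulting correlation estimate is measured as eps grows. All rates and the correlated fraction are arbitrary example values.

import numpy as np

rng = np.random.default_rng(4)

def correlation_scatter(eps, n_steps=20_000, n_trials=200, mean_photons=50.0, shared_frac=0.05):
    estimates = []
    for _ in range(n_trials):
        shared = rng.poisson(shared_frac * mean_photons, n_steps)       # correlated photons
        n1 = shared + rng.poisson((1 - shared_frac) * mean_photons, n_steps)
        n2 = shared + rng.poisson((1 - shared_frac) * mean_photons, n_steps)
        # Gaussian approximation of summing one truncated-Gaussian gain per photon.
        s1 = n1 + eps * np.sqrt(n1) * rng.standard_normal(n_steps)
        s2 = n2 + eps * np.sqrt(n2) * rng.standard_normal(n_steps)
        s1, s2 = s1 - s1.mean(), s2 - s2.mean()
        estimates.append(np.mean(s1 * s2))
    return np.std(estimates)

for eps in (0.0, 0.5, 1.0):
    print(eps, correlation_scatter(eps))    # the scatter (i.e. the noise) grows with eps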
|
stellar intensity interferometers will achieve stellar imaging with a tenth of a milli-arcsecond resolution in the optical band by taking advantage of the large light collecting area and broad range of inter-telescope distances offered by future gamma-ray air cherenkov telescope (act) arrays. up to now, studies characterizing the capabilities of intensity interferometers using acts have not accounted for realistic effects such as telescope mirror extension, detailed photodetector time response, excess noise, and night sky contamination. in this paper, we present the semi-classical quantum optics monte-carlo simulation we developed in order to investigate these experimental limitations. in order to validate the simulation algorithm, we compare our first results to models for sensitivity and signal degradation resulting from mirror extension, pulse shape, detector excess noise, and night sky contamination.
|
computer networks such as datacenter networks , enterprise networks , carrier networks etc . have become a critical infrastructure of the information society .the importance of computer networks and the resulting strict requirements in terms of availability , performance , and correctness however stand in contrast to today s ossified computer networks : the techniques and methodologies used to build , manage , and debug computer networks are largely the same as those used in 1996 .indeed , operating traditional computer networks is often a cumbersome and error - prone task , and even tech - savvy companies such as github , amazon , godaddy , etc . frequently report issues with their network , due to misconfigurations , e.g. , resulting in forwarding loops .another anecdote reported in illustrating the problem , is the one by a wall street investment bank : due to a datacenter outage , the bank was suddenly losing millions of dollars per minute .quickly the compute and storage emergency teams compiled a wealth of information giving insights into what might have happened .in contrast , the networking team only had very primitive connectivity testing tools such as ping and traceroute , to debug the problem .they could not provide any insights into the actual problems of the switches or the congestion experienced by individual packets , nor could the team create any meaningful experiments to identify , quarantine and resolve the problem . given the increasing importance computer networks play today , this situation is worrying .software - defined networking is an interesting new paradigm which allows to operate and verify networks in a more principled and formal manner , while also introducing flexibilities and programmability , and hence faster innovations . in a nutshell , a software - defined network ( sdn ) outsources and consolidates the control over the forwarding or routing devices ( located in the so - called _ data plane _ ) to a logically centralized controller software ( located in the so - called _ control plane _ ) .this decoupling allows to evolve and innovate the control plane independently from the hardware constraints of the data plane .moreover , openflow , the de facto standard sdn protocol today , is based on a simple match - action paradigm : the behavior of an openflow switch is defined by a set of forwarding rules installed by the controller .each rule consists of a match and an action part : all packets matched by a given rule are subject to the corresponding action .matches are defined over layer-2 to layer-4 header fields ( e.g. , mac and ip addresses , tcp ports , etc . ) , and actions typically describe operations such as forward to a specific port , drop , or update certain header fields . in other words , in an sdn / openflow network , network devices become simpler : their behavior is defined by a set of rules installed by the controller .this enables formal reasoning and verification , as well as flexible network update , from a logically centralized perspective .moreover , as rules can be defined over multiple osi layers , the distinction between switches and routers ( and even simple middleboxes ) becomes blurry .however , the decoupling of the control plane from the data plane also introduces new challenges .in particular , the switches and controllers as well as their interconnecting network form a complex asynchronous distributed system . 
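Before turning to these challenges, the match-action abstraction sketched above can be made concrete with a toy flow-table lookup. This is a self-contained illustration only; it is not tied to any concrete OpenFlow library, and the field and action names are invented.

from dataclasses import dataclass

@dataclass
class Rule:
    match: dict          # e.g. {"ip_dst": "10.0.1.5", "tcp_dst": 80}; omitted fields are wildcards
    action: str          # e.g. "fwd:port2", "drop", "set:vlan=10"
    priority: int = 0

def matches(rule, packet):
    """A packet matches if every field constrained by the rule agrees (no prefix matching here)."""
    return all(packet.get(k) == v for k, v in rule.match.items())

def apply_table(table, packet):
    """Highest-priority matching rule wins, as in a flow table; a table miss drops the packet."""
    for rule in sorted(table, key=lambda r: -r.priority):
        if matches(rule, packet):
            return rule.action
    return "drop"

table = [Rule({"ip_dst": "10.0.1.5", "tcp_dst": 80}, "fwd:port2", priority=10),
         Rule({"ip_dst": "10.0.1.5"}, "fwd:port1", priority=5)]
print(apply_table(table, {"ip_dst": "10.0.1.5", "tcp_dst": 22}))   # -> "fwd:port1"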
for example , a remote controller may learn about and react to network events slower ( or not at all ) than a hardware device in the data plane : given a delayed and inconsistent view , a controller ( and accordingly the network ) may behave in an undesirable way .similarly , new rules or rule updates communicated from the controller(s ) to the switch(es ) may take effect in a delayed and asynchronous manner : not only because these updates have to be transmitted from the controller to the switches over the network , but also the reaction time of the switches themselves may differ ( depending on the specific hardware , data structures , or concurrent load ) .thus , while sdn offers great opportunities to operate a network in a correct and verifiable manner , there remains a fundamental challenge of how to deal with the asynchrony inherent in the communication channel between controller and switches as well as in the switches themselves .accordingly , the question of how to update network behavior and configurations correctly yet efficiently has been studied intensively over the last years .however , the notions of correctness and efficiency significantly differs across the literature .indeed , what kind of correctness is needed and which performance aspects are most critical often depends on the context : in security - critical networks , a very strong notion of correctness may be needed , even if it comes at a high performance cost ; in other situations , however , short transient inconsistencies may be acceptable , as long as at least some more basic consistency guarantees are provided ( e.g. , loop - freedom ) .we observe that not only the number of research results in the area is growing very quickly , but also the number of models , the different notions of consistency and optimization objectives , as well as the algorithmic techniques .thus , it has become difficult to keep an overview of the field even for active researchers .moreover , we observe that many of the underlying problems are not entirely new or specific to sdn : rather , similar consistency challenges arose and have been studied already in legacy networks , although update algorithms in legacy protocols are often more distributed and indirect ( e.g. , based on igp weights ) .accordingly , we believe that it is time for a comprehensive survey of the subject . any dependable network does not only need to maintain a range of static invariants , related to correctness , availability , and performance , but also needs to be flexible and support reconfigurations and updates .reasons for updating a network are manifold , including : 1 ._ change in the security policy : _ due to a change in the enterprise security policy , traffic from one subnetwork may have to be rerouted via a firewall before entering another subnetwork . or, in the wide - area network , the set of countries via which it is safe to route sensitive traffic may change over time ._ traffic engineering : _ in order to improve traffic engineering metrics ( e.g. , minimizing the maximal link load ) , a system administrator or operator may decide to reroute ( parts of ) the traffic along different links . for example , many internet service providers switch between multiple routing patterns during the day , depending on the expected load .these patterns may be precomputed offline , or may be computed as a reaction to an external change ( e.g. 
, due to a policy change of a content distribution provider ) .maintenance work : _ also maintenance work may require the update of network routes . for example , in order to replace a faulty router , or to upgrade an existing router , it can be necessary to temporarily reroute traffic .link and node failures : _ failures happen quite frequently and unexpectedly in today s computer networks , and typically require a fast reaction . accordingly , fast network monitoring and update mechanisms are required to react to such failures , e.g. , by determining a failover path . despite these changes ,it is often desirable that the network maintains certain minimal consistency properties , _ during the update_. for example , per - packet consistency ( a packet should be forwarded along the old or the new route , but never a mixture of both ) , loop - freedom ( at no point in time are packets forwarded along a loop ) , or waypoint enforcement ( a packet should never bypass a firewall ) . moreover , while the reasons for network updates identified above are general and relevant in any network , both software - defined and traditional , we believe that the flexibilities introduced by programmable networks are likely to increase the frequency of network updates , also enabling , e.g. , a more fine - grained and online traffic engineering . this paper presents a comprehensive survey of the consistent network update problem .we identify and compare the different notions of consistency as well as the different performance objectives considered in the literature . in particular , we provide an overview of the algorithmic techniques required to solve specific classes of network update problems , and discuss inherent limitations and tradeoffs between the achievable level of consistency and the speed at which networks can be updated .in fact , as we will see , some update techniques are not only less efficient than others , but with them , it can even be impossible to consistently update a network . while our survey is motivated by the advent of software - defined networks ( sdns ) , the topic of consistent network updates is not new , and for example , guaranteeing disruption - free igp operations has been considered in several works for almost two decades .accordingly , we also present a historical perspective , surveying the consistency notions provided in traditional networks and discussing the corresponding techniques accordingly .moreover , we put the algorithmic problems into perspective and discuss how these problems relate to classic optimization and graph theory problems , such as multi - commodity flow problems or maximum acyclic subgraph problems .the goal of our survey is to ( 1 ) provide active researchers in the field with an overview of the state - of - the - art literature , but also to ( 2 ) help researchers who only recently became interested in the subject to bootstrap and learn about open research questions .the remainder of this paper is organized as follows . [ sec : history ] presents a historical perspective and reviews notions of consistency and techniques both in traditional computer networks as well as in software - defined networks . [ sec : taxo ] then presents a classification and taxonomy of the different variants of the consistent network update problems . [ sec : forwarding ] , [ sec : policies ] , and [ sec : cap ] review models and techniques for connectivity consistency , policy consistency , and performance consistency related problems , respectively . 
[ sec : orthogonal ] discusses proposals to further relax consistency guarantees by introducing tighter synchronization . in [ sec : practice ] , we identify practical challenges . after highlighting future research directions in [ sec : openprob ] , we conclude our paper in [ sec : conclusion ] .any computer network needs to provide basic mechanisms and protocols to change forwarding rules and network routes , and hence , the study of consistent network updates is not new and the topic to some extent evergreen .for example , a forwarding loop can quickly deplete switch buffers and harm the availability and connectivity provided by a network , and protocols such as the spanning tree protocol ( stp ) have been developed to ensure loop - free layer-2 forwarding at any time .however , consistency problems may also arise on higher layers in the osi stack . in this section, we provide a historical perspective on the many research contributions that lately focused on guaranteeing consistency properties during network updates , that is , while changing device configurations ( and how they process packets ) .we first discuss update problems and techniques in traditional networks ( [ subsec : history - igp]-[subsec : history - routing ] ) . in those networks ,forwarding entries are computed by routing protocols that run standardly - defined distributed algorithms , whose output is influenced by both network topology ( e.g. , active links ) and routing configurations ( e.g. , logical link costs ) .pioneering works then aimed at avoiding transient inconsistencies due to modified topology or configurations , mainly focusing on igps , i.e. , the routing protocols that control forwarding within a single network .a first set of contributions tried to modify igp protocol definitions , mainly to provide forwarding consistency guarantees upon link or node failures .progressively , the research focus has shifted to a more general problem of finding a sequence of igp configuration changes that lead to new paths while guaranteeing forwarding consistency , e.g. , for service continuity ( [ subsec : history - igp ] ) .more recent works have also considered reconfigurations of protocols different or deployed in addition to igps , mostly generalizing previous techniques while keep focusing on forwarding consistency ( [ subsec : history - routing ] ) . subsequently ( [ subsec : history - sdn ] ) , we discuss update problems tackled in the context of logically - centralized networks , implementing the software defined networking ( sdn ) paradigm .sdn is predicated around a clear separation between controller ( implementing the control logic ) and dataplane elements ( applying controller s decision on packets ) .this separation arguably provides new flexibility and opens new network design patterns , for example , enabling security requirements to be implemented by careful path computation ( done by the centralized controller ) .this also pushed network update techniques to consider additional consistency properties like policies and performance .we rely on the generic example shown in fig .[ fig : example - history ] for illustration .the figure shows the intended forwarding changes to be applied for a generic network update .observe that possible forwarding loops can occur during this update because edges and are traversed in opposite directions before and after the update . 
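The observation above can be phrased operationally: under fully asynchronous per-node updates, a transient forwarding loop toward a given destination is possible precisely when the union of the old and new next-hop edges contains a cycle (for instance, when some link is traversed in opposite directions before and after the update). A hedged sketch of that check follows, with invented node names rather than the figure's labels.

def union_has_cycle(old_fwd, new_fwd):
    """old_fwd / new_fwd map each node to its next hop toward one destination."""
    succ = {}
    for fwd in (old_fwd, new_fwd):
        for u, v in fwd.items():
            succ.setdefault(u, set()).add(v)
    visited, on_stack = set(), set()
    def dfs(u):                                  # depth-first cycle detection
        visited.add(u)
        on_stack.add(u)
        for v in succ.get(u, ()):
            if v in on_stack or (v not in visited and dfs(v)):
                return True
        on_stack.discard(u)
        return False
    return any(u not in visited and dfs(u) for u in succ)

old = {"v1": "v2", "v2": "d", "v3": "v2"}
new = {"v1": "v3", "v2": "v1", "v3": "d"}        # the v1-v2 link is reversed by the update
print(union_has_cycle(old, new))                 # True: a transient loop is possible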
in traditional ( non - sdn ) networks ,forwarding paths are computed by distributed routing protocols .link - state interior gateway protocols ( igps ) are the most popular of those protocols used to compute forwarding paths within a network owned by the same administrative entity .link - state igps are based on computing shortest - paths on a weighted graph , representing a logical view of the network , which is shared across routers .parameters influencing igp computations , like link weights of the shared graph , are set by operators by editing router configurations . as an illustration ,[ fig : example - history - igp ] shows a possible implementation for the update example presented in fig .[ fig : example - history ] . in particular , fig .[ fig : example - history - igp ] reports the igp graph ( consistent with the physical network topology ) with explicit mention of the configured link weights . based on those weights , for each destination ( e.g. , in this example ) , all routers independently compute the shortest paths , and forward the corresponding packets to the next - hops on those paths .consequently , the igp configurations in figs .[ subfig : example - history - igp - init ] and [ subfig : example - history - igp - fin ] respectively produce the forwarding paths depicted in figs .[ subfig : example - history - generic - init ] and [ subfig : example - history - generic - fin ] .when the igp graph is modified ( e.g. , because of a link failure , a link - weight change or a router restart ) , messages are propagated by the igp itself from node to node , so that all nodes rebuild a consistent view of the network : this process is called _igp convergence_. however , igps do not provide any guarantee on the time and order in which nodes receive messages about the new igp graphs .this potentially triggers transient forwarding disruptions due to temporary state inconsistency between a set of routers .for example , assume that we simply remove link from the igp graph shown in fig .[ subfig : example - history - igp - init ] .this will eventually lead us to the configuration presented in fig .[ subfig : example - history - igp - fin ] . before the final stateis reached , the notification that is removed has to be propagated to all routers .if receives such notification before ( e.g. , because closer to the removed link ) , then would recompute its next - hop based on the new information , and starts forwarding packets for to ( see fig .[ subfig : example - history - generic - fin ] ) . nevertheless , keeps forwarding packets to as it still forwards as is still up .this creates a loop between and : the loop remains until is notified about the removed link .a similar loop can occur between and .guaranteeing disruption - free igp operations has been considered by research and industry since almost two decades .we now briefly report on the main proposals .* disruption - free igps have been studied . * early contributions focused on modifying the routing protocols , mainly to avoid forwarding inconsistencies . among them , protocol extensions have been proposed to gracefully restart a routing process , that is , to avoid forwarding disruptions ( e.g. , blackholes ) during a software update of a router .other works focused on preserving forwarding consistency , that is , avoiding loops , upon network failures .for example , franois _ et al . 
_ propose ofib , an igp extension that guarantees the absence of forwarding loops after topological changes ( link / node addition or removal ) .the key intuition behind ofib is to use explicit synchronization between routers in order to constrain the order in which each node changes its forwarding entries .in particular , each router ( say , in our example ) is forced not to update its forwarding entry for a given destination ( in our example ) until all its final next - hops ( ) use their own final next - hops ( in our case ) .et al . _ generalize the previous approach by defining a loop - free ordering of igp - entry updates for arbitrary forwarding changes .moreover , plsn specializes ofib : it allows routers to dynamically avoid loops by locally delaying forwarding changes that are not safe . a variant of ofib , studied by shi _et al . _ , also extends the reconfiguration mechanism to consider traffic congestion in addition to forwarding consistency .modifying protocol specifications may seem the most straightforward solution to deal with reconfigurations in traditional networks , but it actually has practical limitations .first , this approach can not accommodate custom reconfiguration objectives .for instance , ordered forwarding changes generally work only on a per - destination basis , which can make the reconfiguration process slow if many destinations are involved while one operational objective could be to exit transient states as quickly as possible .second , protocol modifications are typically targeted to specific reconfiguration cases ( e.g. , single - link failures ) , since it is technically hard to predict the impact of any configuration change on forwarding paths. finally , protocol extensions are not easy to implement in current igps , because of the need for passing through vendors ( to change proprietary router software ) , the added complexity and the potential overhead ( e.g. , load ) induced on routers .limited practicality of protocol modifications quickly motivated new approaches , based on coordinating operations in order to eventually replace the initial configuration with the final one on all network nodes , while guaranteeing absence of disruptions throughut the process . those approaches , summarized in the following , mainly focused on support for planned operations .* optimization algorithms can minimize disruptions .* as a first attempt , keralapura _ et al . _ studied the problem of finding the optimal way in which devices and links can be added to a network to minimize disruptions .many following contributions focused on finer - grained operations to gain additional degrees of freedom in igp reconfiguration problems .a natural choice among finer - grained operations readily supported by traditional routers is tweaking igp link weights .for example , in and , raza _ et al . _propose a theoretical framework to formalize the problem of minimizing a certain disruption function ( e.g. , link congestion ) when the link weights have to be changed .the authors also propose a heuristic to find an ordering in which to modify several igp weights within a network , so that the number of disruptions is minimal .while easily applicable to real reconfiguration cases , those approaches assume primitives which are quite coarse grained ( e.g. 
, addition of a link , or weight changes ) , and can not guarantee the absence of disruptions in several cases : the scenario in fig .[ fig : example - history - igp ] is an example where coarse - grained operations ( link removal ) can not prevent forwarding loops .* progressively changing link weights can avoid loops . *intermediate igp link weights can be used during a reconfiguration to avoid disruptions at the cost of increasing the size of the update sequence and slowing down the update .consider again the example in fig .[ fig : example - history - igp ] , and let the final weight for link conventionally be . in this case , the forwarding loops potentially triggered by the igp reconfiguration can be provably prevented by using two intermediate weights for link , as illustrated in fig .[ fig : example - history - igp - seq ] .the first of those intermediate weights ( see fig . [subfig : example - history - igp - seq2 ] ) is used to force and only to change its next - hop , from to : intuitively , this prevents the loop between and . the second intermediate weight ( see fig. [ subfig : example - history - igp - seq3 ] ) similarly guarantees that the loop between and is avoided , i.e. , by forcing to use its final next - hop before .of course , finding intermediate weights that guarantee the absence of disruptions becomes much trickier when multiple destinations are involved .such a technique can be straightforwardly applied to real routers .for example , an operator can progressively change the weight of to by editing the configuration of and , then check that the all igp routers have converged on the paths in fig .[ subfig : example - history - igp - seq2 ] , repeat similar operations to reach the state in fig . [ subfig : example - history - igp - seq3 ] , and safely remove the link .even better , it has been shown that a proper sequence of intermediate link weights can always avoid all possible transient loops for any single - link reweighting .obviously , the weight can be changed on multiple links in a loop - free way , by progressively reweighting links one by one .additional research contributions then focused on minimizing the number of intermediate weights that ensure loop - free reconfigurations . surprisingly , the problem is _ not _ computationally hard , despite the fact that all destinations have potentially to be taken into account when changing link weights .polynomial - time algorithms have been proposed to support planned operations at the per - link ( e.g. , single - link reweighting ) and at a per - router ( e.g. , router shutdown / addition ) granularity .* ships - in - the - night ( sitn ) techniques generalize the idea of incremental changes to avoid loops . * to improve the update speed in the case of many link changes and deal with generalized reconfigurations ( from changing routing parameters to replacing an igp with another ) , both industrial best practices and research works often rely on a technique commonly called ships - in - the - night .this technique builds upon the capability of traditional routers to run multiple routing processes at the same time .thanks to this capability , both the initial and final configurations can be installed ( as different routing processes ) on all nodes at the same time .[ fig : example - history - sitn - setup ] shows the setup for a ships - in - the - night reconfiguration for the reconfiguration case in fig .[ fig : example - history - igp ] . 
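both the progressive - reweighting technique just described and the sitn orderings discussed next ultimately rely on verifying that no intermediate step can create a transient loop . the following sketch ( our own illustration , not code from the cited works ) makes this check concrete for single - link reweighting : it computes per - destination shortest - path next - hops with dijkstra 's algorithm and accepts a step only if , for every destination , the union of the next - hop graphs before and after the step is acyclic , which is the usual sufficient condition for loop - freedom under arbitrary convergence orders . link weights are assumed symmetric , ecmp ties are kept , a link removal can be emulated with an infinite weight , and all function names are ours .

```python
import heapq

def distances_to(graph, dest):
    """Shortest distance of every node to dest, via Dijkstra on an
    undirected weighted graph given as {node: {neighbour: weight}}."""
    dist = {dest: 0.0}
    heap = [(0.0, dest)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float("inf")):
            continue
        for v, w in graph[u].items():
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(heap, (nd, v))
    return dist

def next_hop_edges(graph, dest):
    """Directed forwarding edges (u -> v) induced by shortest paths toward dest."""
    dist = distances_to(graph, dest)
    edges = set()
    for u in graph:
        if u == dest or u not in dist:
            continue
        for v, w in graph[u].items():
            if v in dist and abs(dist[v] + w - dist[u]) < 1e-9:
                edges.add((u, v))
    return edges

def has_cycle(nodes, edges):
    """Standard DFS cycle detection on a directed graph."""
    adj = {u: [] for u in nodes}
    for u, v in edges:
        adj[u].append(v)
    state = {u: 0 for u in nodes}          # 0 = unvisited, 1 = on stack, 2 = done

    def visit(u):
        state[u] = 1
        for v in adj[u]:
            if state[v] == 1 or (state[v] == 0 and visit(v)):
                return True
        state[u] = 2
        return False

    return any(state[u] == 0 and visit(u) for u in nodes)

def step_is_loop_free(g_before, g_after):
    """Sufficient condition for a transiently loop-free step: for every
    destination, the union of the next-hop graphs before and after the
    step is acyclic, so routers may converge in any order."""
    nodes = set(g_before) | set(g_after)
    return all(
        not has_cycle(nodes, next_hop_edges(g_before, d) | next_hop_edges(g_after, d))
        for d in nodes
    )

def reweighting_is_safe(graph, link, weights):
    """Check a progressive reweighting of one link through the given weight
    sequence (initial weight first, final weight last)."""
    u, v = link
    states = []
    for w in weights:
        g = {a: dict(nbrs) for a, nbrs in graph.items()}
        g[u][v] = g[v][u] = w
        states.append(g)
    return all(step_is_loop_free(a, b) for a, b in zip(states, states[1:]))
```

the union - graph test is only a feasibility check : the algorithms surveyed above additionally minimize the number of intermediate weights , which this sketch does not attempt .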
in sitn , the reconfiguration process then consists in swapping the preference of the initial configuration with the final one on every node , potentially for a single destination . hence , at any moment in time , every node uses only one of the two configurations , but different nodes can use different configurations . this implies that ( 1 ) for each destination , every switch either uses its initial next - hops or its final ones , meaning that the update does not add overhead to the hardware memory of any node ; but ( 2 ) inconsistencies may arise from the mismatch between the routing processes used by distinct nodes . for example , fig . [ fig : example - history - sitn - seq ] shows a sitn - based reconfiguration that mimics the progressive link - weight increment depicted in fig . [ fig : example - history - igp - seq ] .

sitn reconfiguration techniques are more powerful than igp link reweighting . the ships - in - the - night framework makes it possible to change the forwarding next - hop of each router independently of the others , hence providing a finer - grained reconfiguration primitive with respect to igp weight modifications ( which influence all routers for all destinations ) . moreover , sitn techniques can be applied to arbitrary changes of the igp configuration , rather than just link reweighting . on the flip side , the ships - in - the - night approach also opens a new algorithmic problem , that is , deciding a safe order in which to swap preferences on a per - router basis . indeed , naive approaches to swapping configuration preferences can not guarantee disruption - free reconfigurations . for example , replacing the initial configuration with the final one on all nodes at once provides no guarantee on the order in which the new preferences are applied by nodes , hence potentially triggering packet losses and service disruptions ( in addition to massive control - plane message storms ) . even worse , such an approach could leave the network in an inconsistent , disrupted and hard - to - troubleshoot state if any reconfiguration command is lost or significantly delayed . similarly , industrial best practices ( e.g. , ) only provide rules of thumb which do not apply in the general case , and do not guarantee lossless reconfiguration processes .

to guarantee the absence of disruptions , configuration preferences must then be swapped incrementally , in a carefully computed order . this called for dedicated research contributions . prominently , vanbever _ et al . _ proposed various algorithms ( both exact ones based on linear programming and heuristic ones ) to deal with many more igp reconfiguration scenarios , including the simultaneous change of multiple link weights , the modification of other parameters ( e.g. , ospf areas ) influencing igp decisions , and the replacement of one igp protocol with another ( e.g. , ospf with is - is ) . to minimize the update time , the proposed algorithms also try to touch each router only once , i.e. , modifying its forwarding entries for all destinations at once . as such , they also generalize the algorithms behind protocol - modification techniques , especially ofib , which are restricted to per - destination operational orderings . beyond providing ordering algorithms , also describes a comprehensive system to carry out loop - free igp reconfigurations automatically or semi - automatically , i.e.
, possibly waiting for input from the operator before performing the next set of operations in the computed operational sequence .

research contributions have also been devoted to reconfigurations in more realistic settings , including protocols other than just an igp .

* enterprise networks , with several routing domains . * as a first example , the ships - in - the - night framework has been used to carry out igp reconfigurations in enterprise networks . those networks typically use _ route redistribution _ , a mechanism enabling the propagation of information from one routing domain ( e.g. , running a given igp ) to another ( e.g. , running a different igp ) . unfortunately , route redistribution may be responsible for both routing ( inability to converge to a stable state ) and forwarding ( e.g. , loop ) anomalies . generalized network update procedures have been proposed in to avoid transient anomalies while ( i ) reconfiguring a specific routing domain without impacting another , and/or ( ii ) arbitrarily changing the size and shape of routing domains .

* internet service providers ( isps ) , with bgp and mpls . * in isp networks , the bgp and often mpls protocols are pervasively used to manage transit internet traffic , for which both the source and the destination are external to the network . et al . _ showed that even techniques guaranteeing safe igp reconfigurations can cause transient forwarding loops in those settings , because of the interaction between igp and bgp . they also proved conditions to avoid those bgp - induced loops during igp reconfigurations , by leveraging mpls or bgp configuration guidelines . in parallel , a distinct set of techniques aimed at supporting bgp reconfigurations . those contributions range from mechanisms to avoid churn of bgp messages during programmed operations ( e.g. , router reboots or bgp session maintenance ) to techniques for safely moving virtual routers or parts of a physical - router configuration ( e.g. , bgp sessions ) . a framework that guarantees strong consistency for arbitrary changes of the bgp configuration is presented in : it is based on implementing ships - in - the - night in bgp and on using packet tags to uniformly apply either the initial or the final forwarding at all routers . internet - level problems , like maintaining global connectivity upon failures , have also been explored ( see , e.g. , ) .

* protocol - independent reconfiguration frameworks .
* by design , all the above approaches are dependent on the considered ( set of ) protocols and even on their implementation . protocol - independent reconfiguration techniques have also been proposed . prominently , in , alimi _ et al . _ generalize the ships - in - the - night technique by re - designing the router architecture . this re - design would allow routers not only to run multiple configurations simultaneously but also to select the configuration to be applied to every packet based on the value of a specific bit in the packet header . the authors also describe a commitment protocol to support the switch between configurations without creating forwarding loops . mechanisms for consensus routing have been explored in .

recently , software - defined networking ( sdn ) has grown in popularity , thanks to its promises to spur abstractions , mitigate compelling management problems and avoid network ossification . software - defined networks differ from traditional ones from an architectural viewpoint : in sdn , the control is outsourced and consolidated in a logically - centralized element , the network controller , rather than having devices ( switches and routers ) run their own distributed control logic . in pure sdn networks , the controller computes ( according to operators ' input ) and installs ( on the controlled devices ) the rules to be applied to packets traversing the network : no message exchange or distributed computation is needed anymore on network devices . fig . [ fig : example - history - sdn - init ] depicts an example of an sdn network , configured to implement the initial state of our update example ( see fig . [ fig : example - history ] ) . beyond the main architectural components , the figure also illustrates a classic interaction between them . indeed , the dashed lines indicate that the sdn controller instructs the reprogrammable network devices ( typically switches ) on how to process ( e.g. , forward ) the traversing packets . an example command sent by the controller to switch is also reported next to the dashed line connecting the two .

the sdn architecture makes network updates even more frequent and critical than in traditional networks . on the one hand , controllers are often intended to support several different requirements , including performance ( like the optimal choice of per - flow paths ) , security ( like firewall and proxy traversal ) and packet - processing ( e.g. , through the optimized deployment of virtualized network functions ) requirements . on the other hand , devices can not provide any reaction ( e.g. , to topological changes ) as in traditional networks . in turn , this comes at the risk of triggering inconsistencies , e.g. , creating traffic blackholes during an update , that are provably impossible to trigger by reconfiguring current routing protocols . as a consequence , the controller has to carry out a network update for every event ( from failures to traffic surges and requirement modifications ) that can impact the computed forwarding entries , while typically supporting more critical consistency guarantees and performance objectives than in traditional networks .

an extended corpus of sdn update techniques has already been proposed in the literature , following up on the large interest raised by sdn in the last decade . this research effort nicely complements approaches to specify , compile , and check the implementation of network requirements specified by operators in sdn networks .
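the interaction sketched above — a controller computing forwarding rules and installing them on the switches , with an update being a sequence of such installations — can be made concrete with a minimal model . the code below is our own illustration ( plain in - memory objects , not the api of any real controller platform ) ; it only fixes the vocabulary used in the rest of the survey : the network state as per - switch rule tables , and an update as rounds of operations pushed by a logically - centralized controller .

```python
from dataclasses import dataclass, field

@dataclass(frozen=True)
class Rule:
    match: str      # e.g. a destination prefix or a flow identifier
    action: str     # e.g. "fwd:s2" -- forward toward a neighbouring switch

@dataclass
class Switch:
    name: str
    rules: dict = field(default_factory=dict)    # match -> Rule

    def apply(self, op):
        """One operation: ('add'|'replace', Rule) or ('remove', match)."""
        kind, payload = op
        if kind in ("add", "replace"):
            self.rules[payload.match] = payload
        elif kind == "remove":
            self.rules.pop(payload, None)

class Controller:
    """Logically-centralized controller that pushes an update as a sequence
    of rounds; a real controller would wait for per-switch acknowledgements
    before issuing the next round."""
    def __init__(self, switches):
        self.switches = {s.name: s for s in switches}

    def run_update(self, rounds):
        for round_ops in rounds:                  # [{switch: [op, ...]}, ...]
            for switch, ops in round_ops.items():
                for op in ops:
                    self.switches[switch].apply(op)
            # acknowledgement barrier would go here

# example: move destination "d" from s1->s2->d to s1->s3->d in two rounds,
# preparing the downstream switch before flipping the ingress.
s1, s2, s3 = Switch("s1"), Switch("s2"), Switch("s3")
s1.apply(("add", Rule("d", "fwd:s2")))
s2.apply(("add", Rule("d", "fwd:d")))
ctrl = Controller([s1, s2, s3])
ctrl.run_update([
    {"s3": [("add", Rule("d", "fwd:d"))]},        # round 1: downstream ready
    {"s1": [("replace", Rule("d", "fwd:s3"))]},   # round 2: flip the ingress
])
print(s1.rules["d"].action)                       # -> "fwd:s3"
```

the two - round example at the end mirrors the recurring `` prepare the downstream switches first , then flip the ingress '' pattern that many of the ordering - based techniques formalize .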
historically speaking , the first cornerstone of sdn updates is represented by the work of reitblatt _ et al . _ in . this work provides a first analysis of the additional ( e.g. , security ) requirements to be considered for sdn updates , extending the scope of consistency properties from forwarding to policy ones . in particular , it focuses on the per - packet consistency property , imposing that packets have to be forwarded either on their initial or on their final paths ( never a combination of the two ) throughout the update . the technical proposal is centered around the 2-phase commit technique , which relies on tagging packets at the ingress so that either all initial rules or all final ones can be consistently applied network - wide . initially , all packets are tagged with the `` old label '' ( e.g. , no tag ) and rules matching the old label are pre - installed on all the switches . in a first step , the controller instructs the internal switches to apply the final rules to packets carrying the `` new label '' ( i.e. , no packet at this step ) . after the internal switches have confirmed the successful installation of these new rules , the controller then changes the tagging policy at the ingress switches , requiring them to tag packets with the `` new label '' . as a result , packets are immediately forwarded along the new paths . finally , the internal switches are updated ( to remove the old rules ) , and an optional cleaning step can be applied to remove all tags from packets . fig . [ fig : example - history-2phase - seq ] shows the operational sequence produced by the 2-phase commit technique for the update case in fig . [ fig : example - history - igp - seq ] .

several works have been inspired by the 2-phase technique presented in . on the one hand , a large set of contributions focused on additional guarantees that can be provided by building upon that technique , e.g. , to avoid congestion during sdn updates ( from to ) . on the other hand , several algorithms to compute a set of ordered rule replacements have been proposed to deal with specific sdn update cases ( e.g. , where only forwarding consistency is needed ) while avoiding the addition of rules and the waste of critical network resources ( i.e. , expensive and scarce switch tcam memory slots ) . in the following sections , we detail most of those contributions and the insights on different update problems that globally emerge from them .

with this historical perspective and traditional network update problems and techniques in mind , we now present a general formulation of the network update problem ( [ subsec : taxo - problem ] ) , which abstracts from the assumptions and settings considered in different works . this formulation enables us to classify research contributions on the basis of the proposed techniques ( e.g. , simultaneous usage of multiple configurations on nodes or not ) and algorithms , independently of their application to traditional or sdn networks ( [ subsec : taxo - taxo ] ) .

in order to compare and contrast research contributions , we first provide a generalized statement of network update problems . we use again fig . [ fig : example - history ] for illustration . * basic problem . * generally speaking , a network update problem consists in computing a sequence of operations that changes the packet - processing rules installed on network devices . consider any communication network : it is composed of a given set of interconnected devices that are able to process ( e.g.
, forwarding to the next hop ) data packets according to the rules installed on them . we refer to the set of rules installed on all devices at a given time as the network state at that time . given an initial and a final state , a network update consists in passing from the initial state to the final one by applying operations ( i.e. , adding , removing or changing rules ) on different devices . in fig . [ fig : example - history ] , the initial state forces packets from source to destination along the path . in contrast , the final state forwards the same packets over , as well as packets from to on . the network update problem consists in replacing the initial rules with the final ones , so that the paths for are updated from to and .

* operations . * to perform a network update , a sequence of operations has to be computed . by operation , we mean any modification of a device 's behavior in processing packets . as an example , an intuitive and largely supported operation on all network devices is rule replacement : instructing any device ( e.g. , ) to replace an initial rule ( e.g. , forward the packet flow to ) with the corresponding final one ( e.g. , forward the flow to ) .

* consistency . * the difficulty in solving network update problems is that some form of consistency must provably be preserved throughout the update , for practical purposes like avoiding service disruptions and packet losses . preserving consistency properties , in turn , depends on the order in which operations are executed by devices , even if both the initial and the final states comply with those properties . for example , if replaces its initial rule with its final one before in fig . [ fig : example - history ] , then the operational sequence triggers a forwarding loop between and that interrupts the connectivity from to . in [ subsec : taxo - taxo ] , we provide an overview of the consistency properties considered in prior work . the practical need for guaranteeing consistency has two main consequences . first , it forces network updates to be carried out incrementally , i.e. , conveniently scheduling operations over time so that the installed sequence of intermediate states is provably disruption - free . second , it requires a careful computation of operational sequences , implementing specific reasoning in the problem - solving algorithms ( e.g. , to avoid replacing 's rule before 's one in the previous example ) .

* performance . * another algorithmic challenge consists in optimizing network - update performance . as an example , minimizing the time to complete an update is commonly considered among those optimization objectives . indeed , carrying out an update generally requires installing intermediate configurations , and in many cases it is practically desirable to minimize the time spent in such intermediate states . we provide a broader overview of the performance goals considered by previous works in [ subsec : taxo - taxo ] .

* final operational sequences .
* generally , the solution to an update problem can be represented as a sequence of _ steps _ or _ rounds _ that both guarantees consistency properties and optimizes update performance . each step is a set of operations that can be started at the same time . note that this does not mean that operations in the same step are assumed to be executed simultaneously on the respective devices . rather , all operations in the same step can be started in parallel because the target consistency properties are guaranteed irrespective of the order in which those operations are executed . examples of operational sequences , computed by different techniques , are reported in [ sec : history ] ( see figs . [ fig : example - history - igp - seq ] and [ fig : example - history - sitn - seq ] ) .

in this section , we provide an overview of the problem space and classify existing models and techniques . previous contributions have indeed considered several variants of the generalized network update problem as we formulated it in [ subsec : taxo - problem ] . those variants differ in terms of consistency constraints , performance goals and the operations that can be used to solve an update problem .

* routing model . * we can distinguish between two alternative routing models : _ destination - based _ and _ per - flow _ routing .
1 . * destination - based routing : * in destination - based routing , routers forward packets based on the destination only . an example of destination - based routing is ip routing , where routers forward packets based on the longest matching ip destination prefix . in particular , destination - based routing describes confluent paths : once two flows from different sources destined toward the same destination intersect at a certain node , the remainder ( suffix ) of their paths will be the same . in destination - based routing , routers store at most one forwarding rule per destination .
2 . * per - flow routing : * in contrast , according to _ per - flow _ routing , routes are not necessarily confluent : the forwarding rules at the routers are defined per flow , i.e. , they may depend not only on the destination but , for example , also on the source . in traditional networks , flows and per - flow routing could for example be implemented using mpls : packets belonging to the same equivalence class , respectively packets with the same mpls tag , are forwarded along the same path .

* operations . * techniques to carry out network updates can be classified into broad categories , depending on the operations that they consider .
1 . * rule replacements : * a first class of network update algorithms is based on partitioning the total set of updates to be made at the different switches into rounds , where each round denotes the set of switches updated together . consistent node - ordering update schedules have the property that the updates in each round may occur asynchronously , i.e. , in an arbitrary order , without violating the desired consistency properties ( e.g. , loop - freedom ) . the next batch of updates is only issued to the switches after the successful implementation of the previous updates has been confirmed ( i.e. , acked ) by the switches .
2 . * rule additions : * a second class of network update algorithms is based on adding rules to guarantee consistency during the update . the following two main variants of this approach have been explored so far .
* 2-phase commit : * in this case , both the initial and the final rules are installed on all devices during the central steps of the update . packets are tagged at the border of the network to enforce that the internal devices either ( i ) all use the initial rules , or ( ii ) all use the final rules . see fig . [ fig : example - history-2phase - seq ] for an example .
* additional helper rules : * for the purpose of the update , additional rules may be introduced temporarily which belong neither to the old path nor to the new path . these rules allow traffic to be diverted temporarily to other parts of the network , and are called _ helper rules _ .

* consistency properties . * another canonical classification can be defined along the fundamental types of consistency properties :
1 . * connectivity consistency : * the most basic form of consistency regards the capability of the network to continuously deliver packets to their respective destinations throughout the update process . this boils down to guaranteeing two correctness properties : absence of blackholes ( i.e. , paths including routers that can not forward the packets further ) and absence of forwarding loops ( i.e. , packets bouncing back and forth over a limited set of routers , without reaching their destinations ) .
2 . * policy consistency : * paths used to forward packets may be selected according to specific forwarding policies , for example security ones imposing that given traffic flows must traverse specific waypoints ( firewalls , proxies , etc . ) . in many cases , those policies have to be preserved during the update . generally speaking , policy consistency properties impose constraints on which paths can be installed during the update ( as a consequence of the partial application of an operational sequence ) . for example , a well - studied policy consistency property , often referred to as _ strong consistency _ , requires that packets are always forwarded along either the pre - update or the post - update paths , but never a combination of the two .
3 . * performance consistency : * a third class of consistency properties takes into account the actual availability and limits of network resources . for instance , many techniques account for traffic volumes and the corresponding constraints raised by the limited capacity of network links : they indeed aim at respecting such constraints in each update step , e.g. , to avoid _ transient congestion _ during updates .
this classification is also reflected in the structure of this survey .

* performance goals . * we can distinguish between three broad classes of performance goals .
1 . * link - based : * a first class of consistent network update protocols aims to make new links available as soon as possible , i.e. , to maximize the number of switch rules which can be updated simultaneously without violating consistency .
2 . * round - based : * a second class of consistent network update protocols aims to minimize the number of interactions between the controller and the switches .
3 . * cross - flow objectives : * a third class of consistent network update protocols targets objectives arising in the presence of multiple flows .
1 . * augmentation : * minimize the extent to which link capacities are oversubscribed during the update ( or make the update entirely congestion - free ) .
2 . * touches : * minimize the number of interactions with the routers .
link - based and round - based objectives are usually considered for node - ordering algorithms and for weak - consistency models . congestion - based objectives are naturally considered for capacitated consistency models . fig . [ fig : taxonomy ] gives an overview of the different types of network update problems .

in this section , we focus on update problems where the main consistency property to be guaranteed concerns the delivery of packets to their respective destinations . packet delivery can be disrupted during an update by forwarding loops or blackholes transiently present in intermediate states . we separately discuss previous results on how to guarantee loop - free and blackhole - free network updates . we start from the problem of avoiding forwarding loops during updates , because these are historically the first update problems considered by works on traditional networks ( see [ sec : history ] ) . this is also motivated by the fact that blackholes can not be created by reconfiguring current routing protocols , as proved in . we then shift our focus to avoiding blackholes during arbitrary ( e.g. , sdn ) updates .

loop - freedom is a most basic consistency property and has hence been explored intensively already . we distinguish between flow - based and destination - based routing : in the former , we can focus on a single ( and arbitrary ) path from to : forwarding rules stored in the switches depend on both and , and flows can be considered independently . in the latter , switches store a single forwarding rule for a given destination : once the paths of two different sources destined to the same destination intersect , they will be forwarded along the same nodes for the rest of their route : the routes are confluent . moreover , one can distinguish between two different definitions of loop - free network updates : _ strong loop - freedom ( slf ) _ and _ relaxed loop - freedom ( rlf ) _ . slf requires that , at any point in time , the forwarding rules stored at the switches should be loop - free . rlf only requires that forwarding rules stored by switches _ along the path from a source to a destination _ are loop - free : only a small number of `` old packets '' may temporarily be forwarded along loops .

* node - based objective ( `` greedy approach '' ) . * mahajan and wattenhofer initiated the study of destination - based ( strong ) loop - free network updates . in particular , the authors show that by scheduling updates across multiple rounds , consistent update schedules can be derived which do not require any packet tagging , and which allow some updated links to become available earlier . the authors also present a first algorithm that quickly updates routes in a transiently loop - free manner . the study of this model has been refined in , where the authors also establish hardness results . in particular , the authors prove that for two destinations and for sublinear , it is np - hard to decide whether rounds ( cf . round - based objectives ) of updates suffice . furthermore , maximizing the number of rules updated for a single destination is np - hard as well , but can be approximated well . et al .
_ initiated the study of arbitrary route updates : routes which are not necessarily destination - based .the authors show that the update problem in this case boils down to an optimization problem on a very simple directed graph : initially , before the first update round , the graph simply consists of two connected paths , the old and the new route . in particular ,every network node which is not part of both routes can be updated trivially , and hence , there are only three types of nodes in this graph : the source has out - degree 2 ( and in - degree 0 ) , the destination has in - degree 2 ( and out - degree 0 ) , and every other node has in - degree and out - degree 2 .the authors also observe that loop - freedom can come in two flavors , strong and relaxed loop - freedom . despite the simple underlying graph , however , amiri _et al . _ show that the node - based optimization problem is np - hard , both in the strong and the relaxed loop - free model ( slf and rlf ) . as selecting a maximum number of nodes to be updated in a given round ( i.e. , the node - based optimization objective )may also be seen as a heuristic for optimizing the number of update rounds ( i.e. , the round - based optimization objective ) , the authors refer to the node - based approach as the `` greedy approach '' .amiri _ et al . _ also present polynomial - time optimal algorithms for the following scenarios : both a maximum slf update set as well as a maximum rlf update set can be computed in polynomial - time in trees with two leaves . regarding polynomial - time approximation results , the problem is 1/2-approximable in general , both for strong and relaxed loop - freedom . for additional approximation results for specific problem instances, we refer the reader to amiri _et al . _ . * round - based objective ( `` greedy approach '' ) . * ludwig _ et al . _ initiate the study of consistent network update schedules which minimize the number of interaction rounds with the controller : _ how many communication rounds are needed to update a network in a ( transiently ) loop - free manner ? _the authors show that answering this question is difficult in the strong loop - free case .in particular , they show that while deciding whether a -round schedule exists is trivial for , it is already np - complete for .moreover , the authors show that there exist problem instances which require rounds , where is the network size .moreover , the authors show that the greedy approach , aiming to `` greedily '' update a _ maximum _ number of nodes in each round , may result in -round schedules in instances which actually can be solved in rounds ; even worse , a _ single _greedy round may inherently delay the schedule by a factor of more rounds .however , fast schedules exist for _ relaxed loop - freedom _ : the authors present a deterministic update scheduling algorithm which completes in -round in the worst case .* hybrid approaches .* vissicchio _ et al . _presented flip , which combines per - packet consistent updates with order - based rule replacements , in order to reduce memory overhead : additional rules are used only when necessary .moreover , hua _ et al . _ initiated the study of adversarial settings , and presented foum , a flow - ordered update mechanism that is robust to packet - tampering and packet dropping attacks .* other objectives .* dudycz _ et al . _ initiated the study of how to update multiple policies simultaneously , in a loop - free manner . 
in their approach , the authors aim to minimize the number of so - called _ touches _ , the number of updates sent from the controller to the switches : ideally , all the updates to be performed due to the different policies can be sent to the switch in one message . the authors establish connections to the _ shortest common supersequence ( scs ) _ and _ supersequence run _ problems , and show np - hardness already for two policies , each of which can be updated in two rounds , by a reduction from _ max-2sat _ . however , the authors also present optimal polynomial - time algorithms to combine consistent update schedules computed for individual policies ( e.g. , using any existing algorithm , e.g. , ) into a global schedule guaranteeing a minimal number of touches . this optimal merging algorithm is not limited to loop - free updates , but applies to any consistency property : if the consistency property holds for individual policies , then it also holds in the joint schedule minimizing the number of touches .

the link - based optimization problem , the problem of maximizing the number of links ( or equivalently nodes ) which can be updated simultaneously , is an instance of the maximum acyclic subgraph problem ( or equivalently , the dual minimum feedback arc set problem ) . for the np - hardness , reductions from _ sat _ and _ max-2sat _ are presented .

loop - free network updates still pose several open problems . regarding the node - based objective , amiri _ et al . _ conjecture that update problems on bounded directed path - width graphs may still be solvable efficiently : none of the negative results for bounded - degree graphs of bounded directed treewidth seem to extend to digraphs of bounded directed pathwidth with bounded degree . more generally , the question of on which graph families network update problems can be solved optimally in polynomial time under the node - based objective remains open . regarding the round - based objective , it remains an open question whether strong loop - free updates are np - hard for any ( but smaller than ) : so far only has been proved to be np - hard . more interestingly , it remains an open question whether the relaxed loop - free update problem is np - hard , e.g. , are 3-round update schedules np - hard to compute also in the relaxed loop - free scenario ? moreover , it is not known whether update rounds are really needed in the worst case in the relaxed model , or whether the problem can always be solved in rounds . some brute - force computational results presented in indicate that if it is constant , the constant must be large .

the known complexity results for loop - free updates can be summarized as follows :
* # rounds , strong lf : deciding whether a 3-round loop - free update schedule exists is np - hard ( and , for 2-destination rules and a sublinear round budget , so is deciding whether a given number of rounds suffices ) , while deciding whether a 2-round schedule exists can be done in polynomial time . in the worst case , rounds may be required , and -round schedules always exist ; both results apply to flow - based and destination - based rules .
* # rounds , relaxed lf : no hardness results are known ; -round update schedules always exist . it is not known whether -round schedules exist in the worst case , and no approximation algorithms are known .
* # links , strong lf : deciding whether nodes can be updated in a loop - free manner is np - hard . a maximum slf update set can be computed in polynomial time in trees with two leaves ; the optimal slf schedule is 2/3-approximable in polynomial time in scenarios with exactly three leaves , and a polynomial - time 7/12-approximation algorithm exists for scenarios with four leaves . approximation algorithms for maximum acyclic subgraph and minimum feedback arc set also apply .
* # links , relaxed lf : deciding whether nodes can be updated in a loop - free manner is np - hard . a maximum rlf update set can be computed in polynomial time in trees with two leaves ; no approximation results are known .

another consistency property is blackhole - freedom , i.e. , a switch should always have a matching rule for any incoming packet , even when rules are updated ( e.g. , removed and replaced ) . this property is easy to guarantee by implementing some default matching rule which is never updated , which however could in turn induce forwarding loops . a straightforward mechanism , if there is currently no blackhole for any destination , is to install the new rules with a higher priority and then delete the old rules . nonetheless , in the presence of memory limits and the additional requirement of loop - freedom , finding the fastest blackhole - free update schedule is np - hard .

while connectivity invariants are arguably the most intensively studied consistency properties in the literature , especially in traditional networks , operators often have additional requirements to be preserved . for example , operators want to ensure that packets traverse a given middlebox ( e.g. , a firewall ) for security reasons or a chain of middleboxes ( e.g. , encoder and decoder ) for performance reasons , or that paths comply with service level agreements ( e.g. , in terms of guaranteed delay ) . in this section , we discuss studied problems and proposed techniques aiming at preserving such additional requirements . additional requirements on forwarding paths that may have to be respected during a network update can be modeled by _ routing policies _ , that is , sub - paths that have to be traversed by the transient paths installed during network updates . over the years , several contributions have targeted policy - preserving updates , typically focusing on specific policies . historically , the first policy considered during network updates is _ per - packet consistency ( ppc ) _ , which ensures that every packet travels either on its initial or on its final paths , never on intermediate ones . this property is the most natural to ( try to ) preserve . assume indeed that both the initial and the final paths comply with high - level network requirements , e.g. , security , performance , or sla policies . the most straightforward way to guarantee that those requirements are not violated is to constrain all paths installed during the update to always be either initial paths or final ones . nonetheless , guaranteeing per - packet consistency may be an unnecessarily strong requirement in practice . it is not always strictly needed that transient paths coincide with either the initial or the final ones . for example , in some cases ( e.g. , for enterprise networks ) , security may be a major concern , and many security requirements may be enforced by guaranteeing that packets traverse a firewall . we refer to this specific case , where single nodes ( waypoints ) have to be traversed by given traffic flows , as _ waypoint enforcement ( wpe ) _ . an example wpe - consistent update is displayed in fig . [ fig : example - wpe ] . more complex policies ( i.e.
, beyond wpe ) may also be needed in general . indeed , the policies to be satisfied in sdn networks tend to grow in number and complexity over time , because of both new requirements ( e.g. , as dictated by use cases like virtualized infrastructure and network functions ) and novel opportunities ( e.g. , programmability and flexibility ) opened by sdn . for example , it may be desirable that specific traffic flows follow certain sub - paths ( e.g. , with low delay for video streaming and online gaming applications ) or are explicitly prevented from passing through other sub - paths ( e.g. , because of political or economic constraints ) . such _ arbitrary policies _ are also considered in recent sdn update works .

* 2-phase commit techniques . * as described in [ subsec : taxo - taxo ] , 2-phase commit techniques carry out updates by ( 1 ) tagging packets at their ingress in the network , and ( 2 ) using packet tags to apply initial or final paths consistently network - wide . unsurprisingly , this approach guarantees per - packet consistency ( hence , potentially any policy satisfied by both pre- and post - update paths ) . while the idea is quite intuitive , some support is needed on the devices , e.g. , to tag packets and match packet tags . a framework to implement this update approach in traditional networks has been proposed by alimi _ et al . _ in . it requires invasive modification of router internals , to manage tags and run arbitrary routing processes in separate process spaces . the counterpart of such a framework for sdn networks is presented in . those works avoid the need for changing device internals since they rely on openflow , the protocol classically used in sdn networks . they also argue for the criticality of supporting ppc in the sdn case and the advantages of integrating 2-phase commit techniques within an sdn controller .

a major downside of 2-phase commit is that it doubles the consumed memory on switches , along with requiring header space , tagging overhead , and complications with middleboxes changing tags . it indeed requires devices to maintain both the initial and final sets of forwarding rules throughout the update , in order to possibly apply either of the two sets according to packet tags . to mitigate this problem , a variant of the basic approach has been studied in . the authors of the latter work proposed to break a given update into several sub - updates , such that each sub - update changes the paths for a different set of flows . of course , this approach makes it take longer for the full update to be completed . in other words , it can limit the memory overhead on each switch at any moment in time , but at the price of slowing down the update . the switch - memory consumption of 2-phase commit techniques thus remains a fundamental limitation of the approach , which also motivated the exploration of alternatives .

* sdn - based update protocols . * mcgeer presented two protocols to carry out network updates , both defined on top of openflow . the first update protocol sends packets to the controller during updates .
as a result , switch resources ( like precious tcam entries ) are saved , at the cost of added packet - delivery delay and of consuming network bandwidth and controller memory . the second update protocol is based on a logic circuit for the update sequence which requires neither rule - space overhead nor transferring packets to the shelter during the update . both proposals need a dedicated protocol which is not currently supported by devices out of the box .

* rule replacement ordering . * some works explored which policies can be supported , and how , by relying only on ( ordered ) rule replacements , given that this both ( i ) comes with no memory overhead and ( ii ) is supported by both traditional and sdn devices . several works noticed that ppc can be an unnecessarily strong requirement in many practical cases . initial contributions mainly focused on wpe consistency , e.g. , to preserve security policies . prominently , studies how to compute quick updates that preserve wpe by only replacing initial with final rules , when any given flow has to traverse a single waypoint . the authors propose wayup , an algorithm that guarantees wpe during the update and terminates in 4 rounds . however , they also show that it may not be possible to ensure waypointing through a single node and loop - freedom at the same time . fig . [ fig : example - wpe ] actually shows one case in which any rule replacement ordering either causes a loop or a wpe consistency violation . those infeasibility results are extended to waypoint chains in . in that work , in particular , the authors show that flexibility in ordering and placing the virtualized functions specified by a chain does not make the update problem always solvable . the two works also show that it is computationally hard ( np - hard ) to even decide whether an ordering preserving both wpe and loop - freedom exists . mixed integer program formulations to find an operational sequence are proposed and evaluated in both cases .

the more general problem of preserving policies defined by operators is tackled in . that paper describes an approach to ( i ) model update - consistency properties as linear temporal logic formulas , and ( ii ) automatically synthesize sdn updates that preserve the input properties . such a synthesis is performed by an efficient algorithm based on counterexample - guided search and incremental model checking . experimental evidence is provided about the scalability of the algorithm ( up to one - thousand - node networks ) . finally , the algorithmic limitations of guaranteeing per - packet consistency without relying on state duplication are explored in . that work shows that a greedy strategy implements a correct and complete approach in this case , meaning that it finds the maximal sequence of rule replacements that do not violate ppc . et al . _ complement those findings by presenting a polynomial - time synthesis algorithm that preserves ppc while allowing maximal parallelism between per - switch updates . also , an evaluation on realistic update cases is presented in . it shows that ppc can be preserved while replacing many forwarding entries on the majority of the switches , even though updates can rarely be completed entirely this way . this observation motivates both approaches tailored to a more restricted family of policies ( like the wpe - preserving ones described above ) and efforts toward mixed approaches ( mixing rule replacements and duplication , see below ) .

* mixed approaches .
* in ,a basic mixed approach is considered to ensure ppc in generalized networks running both traditional and sdn control - planes ( or any of the two ) .this approach consists in first computing the maximal sequence of rule replacements that preserve ppc , and then applying a restricted 2-phase commit procedure on a subset of ( non - ordered ) devices and flows .vissicchio _ et al . _ propose an algorithm addressing a larger set of update problems with a more general algorithmic approach , but restricting to sdn networks .this work focuses on the problem of preserving generic policies during sdn updates . for each flow, a policy is indeed defined as a set of paths so that the flow must traverse any of those paths in each intermediate state .the proposed algorithm interleaves rule replacements and additions ( i.e. , packet tagging and tag matching ) in the returned operational sequences and during its computation rather than considering the two primitives in subsequent steps as in .both works argue that it is practically profitable to combine rule replacements and additions , as it greatly reduces the amount of memory overhead while keeping the operational sequence always computable .many policy - preserving algorithms face generalized versions of the optimization problems associated to connectivity - preserving updates ( see [ sec : forwarding ] ) : while the most common objective remains the maximization of parallel operations ( to speed - up the update ) , policy consistency requires that all possible intermediate paths comply with certain regular expressions in addition to being simple ( that is , loop - free ) paths .mixed policy - preserving approaches focus on even more general problems where ( i ) different operations can be interleaved in the output operational sequence ( which provides more degrees of freedom in solving the input problems ) , and ( ii ) multiple optimization objectives are considered at the same time ( typically , maximizing the update parallelism while also minimizing the consumed switch memory ) . unsurprisingly , preserving policies requires more sophisticated update techniques , since it is generally harder to extract policy - induced constraints and model the search space .two major families of solutions have been explored so far . on the one hand ,2-phase commit techniques and update protocols sidestep the algorithmic challenges , at the cost of relying on specific primitives ( packet tagging and tag matching ) that comes with switch memory consumption . on the other hand, ordering - based techniques directly deal with problem complexities , at the cost of algorithmic simplicity and impossibility to always solve update problems . finding the best balance between those two extremes is an interesting research direction .some initial work has started in this direction , with the proposal of algorithms that can interleave different kinds of operations within the computed sequence ( see mixed approaches in [ subsec : policies - algo ] ) .however , many research questions are left open .for example , the computational complexity of solving update problems while mixing rule additions ( for packet tagging and matching ) with replacements is unknown .moreover , it is unclear whether the proposed algorithms can be improved exploiting the structure of specific topologies or the flexibility of new devices ( e.g. , p4-compatible ones ) , e.g. 
, to achieve better trade - offs between memory consumption and update speed .

computer networks are inherently capacitated , and respecting resource constraints is hence another important aspect of consistent network updates . congestion is known to significantly impact throughput and increase latency , therefore negatively impacting user experience and even leading to unpredictable economic loss . the capacitated update problem is to migrate from a multi - commodity flow to another multi - commodity flow , where consistency is defined as not violating any link capacities and not rate - limiting any flow below its demand in . in a few works , e.g. , , the final flow is only implicitly specified by its demands , but not by the actual flow paths . some migration algorithms will violate consistency properties to guarantee completion , as a consistent migration does not have to exist in all cases . typically , four different variants are studied in the literature , arising from two dichotomies : first , individual flows may either take only one path ( unsplittable ) or they may follow classical flow theory , where the incoming flow at a switch must equal its outgoing flow ( splittable ) . second , flows can take any paths via helper rules in the network during the migration ( intermediate paths ) , or may only be routed along the old or the new paths ( no intermediate paths ) .

to exactly pinpoint congestion - freedom , one would need to take many detailed properties into account , e.g. , buffer sizes and asic computation times . as such , the standard consistency model does not take this fine - grained approach , but rather aims at avoiding ongoing bandwidth violations and takes a mathematical flow - theory point of view . introduced by , consistent flow migration is captured in the following model : no matter whether a flow is using the rules from before the update or after the update , the sum of all flow sizes must be at most the link 's capacity . current algorithms for capacitated updates of network flows use the seminal work by reitblatt _ et al . _ as an update mechanism . analogously to _ per - packet consistency _ ( [ sec : policies ] ) , one can achieve _ per - flow consistency _ by a 2-phase commit protocol . while this technique avoids many congestion problems , it is not sufficient for bandwidth guarantees : when updating the two flows in fig . [ fig : move up ] , the lower green flow could move up before the orange flow is on its new path , leading to congestion . an overview of all algorithmic approaches discussed here can be found in table [ flow - algo - table ] . mizrahi _ et al . _ prove that flow swapping is necessary for throughput optimization in the general case ; thus , algorithms are needed that do not violate any capacity constraints during the network update , beyond simple flow swapping .

the seminal work by hong _ et al . _ on _ swan _ introduces the current standard model for capacitated updates . their algorithmic contribution is two - fold , and also forms the basis for _ zupdate _ : first , the authors show that if all flow links have free capacity _ slack _ , consistent migration is possible using updates : e.g.
, if the free capacity is 10% , 9 updates are required , always moving 10% of the links capacity to the new flow paths .if the network contains non - critical background traffic , free capacity can be generated for a migration by rate - limiting this background traffic temporarily , cf .[ fig : move up - swan ] : removing some background traffic allows for consistent migration .second , the authors provide an lp - formulation for splittable flows which provides a consistent migration schedule with updates , if one exists . by performing a binary search over the number of updates ,the number of necessary updates can be minimized .this approach allows for intermediate paths , where the flows can be re - routed anywhere in the network .e.g. , consider the example in fig .[ fig : move up - swan ] with all flows and links having unit size .if there was an additional third route to , the orange flow could temporarily use this intermediate path : we can then switch the green flow , and eventually the orange flow could be moved to its desired new path .this second lp - formulation was extended by zheng _et al . _ to include unsplittable flows as well via a mip .furthermore , using randomized rounding with an lp , zheng _ et al . _ can approximate the minimum congestion that will occur if the migration has to be performed using updates .should intermediate paths be allowed however , then their lp is of exponential size .et al . _ also consider the tradeoff between reconfiguration effects and update speed in the context of dynamic flow arrivals .in terms of tradeoffs , luo _et al . _ allow for user - specified deadlines ( e.g. , a flow has to be updated until some time ) via an mip or an lp - based heuristic . the work by brandt _ et al . _ tackles the problem of deciding in polynomial time if consistent migration is possible at all for splittable flows with intermediate paths allowed . by iteratively checking for augmenting flows that create free capacity ( slack ) on fully - capacitated links , it is possible to decide in polynomial time if slack can be obtained on all flow links .if yes , then the first technique of can be used , else no consistent migration is possible .should the output be no , they also provide an lp - formulation to check to which demands it is possible to migrate consistently .jain _ et al . _ also consider the variable update times of switches in the network .for both splittable and unsplittable flows without intermediate paths , they build a dependency graph for the update problem .then , this dependency graph is traversed in a greedy fashion , updating whatever flows are currently possible .e.g. , in fig .[ fig : move up ] , the orange flow would be moved first , then the green flow next .should this traversal result in a deadlock , flows are rate - limited to guarantee progress .et al . _ improve the local dependency resolving to improve the greedy traversal .et al . _ provide a mip - formulation of the problem , and also provide a heuristic framework using tiny mips .foerster and wattenhofer consider an alternative approach to migrating unsplittable flows without intermediate paths : they split each flow along its old and new path , changing the size allocations during the updates , until the migration is complete .their algorithm has polynomial computation time , but has slightly stronger consistency requirements than the model of .lastly , brandt _ et al . 
_ consider a modified migration problem by not fixing the new multi - commodity flow , but just its demands .if the final ( and every intermediate ) configuration has no congestion then the locations of the flows in the network do not matter . in scenarios with a single destination ( or a single source ) , augmenting flows can be used to compute the individual updates : essentially , the flows are changed along the routes of the augmenting flows , allowing for a linear number of updates for splittable flows with intermediate paths .the augmentation model can not be extended to the general case of multi - source multi - destination network flows .the complexity of capacitated updates can roughly be summarized as follows : problems involving splittable flows can be decided in polynomial time , while restrictions such as unsplittable flows or memory limits turn the problem np - hard , see table [ hardness results - table ] . in a way , the capacitated update problems differs from related network update problems in that it is not always solvable in a consistent way . on the other hand ,e.g. , per - packet / flow consistency can always be maintained by a 2-phase commit , and loop - free updates for a single destination can always be performed in a linear number of updates .one standard approach in recent work for flow migration is linear ( splittable flows ) or integer programing ( unsplittable flows ) : with the number of intermediate configurations as an input , it is checked if a consistent migration with intermediate states exists .should the answer be yes , then one can use a binary search over to find the fastest schedule .this idea originated in _ swan _ for splittable flows , and was later extended to other models , cf . table [ flow - algo - table ] . however , the lp - approach via binary search ( likewise for the integer one ) suffers from the drawback that it is only complete if the model is restricted : if is unbounded , then one can only decide whether a migration with updates exists , but not whether there is no migration schedule with steps , for some .additionally , it is not even clear to what complexity class the general capacitated update problem belongs to , cf .the decision problem hardness column of table [ hardness results - table ] .the only exception arises in case of splittable flows without memory restrictions , where either an ( implicit ) schedule or a certificate that no consistent migration is possible , is found in polynomial time .the authors use a combinatorial approach not relying on linear programming . adding memory restrictions turns this problem np - hard as well . if the model is restricted to allow every flow only to be moved once ( from the old path to the new path ) , then the capacitated update problem becomes np - complete : essentially , as the number of updates is limited by the number of flows , the problem is in np . in this specific case , one can also approximate the minimum congestion for unsplittable flows in polynomial time by randomized rounding . hardly any ( in-)approximability results exist today , and most work relies on reductions from the partition problem , cf .table [ flow - hardness - table ] .the only result that we are aware of is via a reduction from max 3-sat , which also applies to unit size flows . 
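to make the consistency model used throughout this section concrete , the sketch below ( our own simplification , with invented data layout and helper names ) checks whether a migration schedule for splittable flows is congestion - free under asynchronous application — for every link , the sum of the per - flow maxima over two consecutive configurations must stay within the link capacity — and builds the simple staged migration that moves traffic in equal fractions from the old to the new splits , in the spirit of the slack - based approach discussed above .

```python
def step_is_congestion_free(cfg_old, cfg_new, capacity, eps=1e-9):
    """Between two consecutive configurations, each flow may still use its
    old split or already its new one; so for every link the sum of the
    per-flow maxima must not exceed the link capacity."""
    flows = set(cfg_old) | set(cfg_new)
    for link, cap in capacity.items():
        worst = sum(max(cfg_old.get(f, {}).get(link, 0.0),
                        cfg_new.get(f, {}).get(link, 0.0)) for f in flows)
        if worst > cap + eps:
            return False
    return True

def schedule_is_consistent(configs, capacity):
    """configs: list of configurations {flow: {link: rate}}, the first being
    the initial and the last the final state."""
    return all(step_is_congestion_free(a, b, capacity)
               for a, b in zip(configs, configs[1:]))

def staged_migration(cfg_old, cfg_new, steps):
    """Move traffic in equal fractions from the old to the new splits
    (splittable flows, no paths other than the old and new ones)."""
    flows = set(cfg_old) | set(cfg_new)

    def blend(t):
        out = {}
        for f in flows:
            links = set(cfg_old.get(f, {})) | set(cfg_new.get(f, {}))
            out[f] = {l: (1 - t) * cfg_old.get(f, {}).get(l, 0.0)
                         + t * cfg_new.get(f, {}).get(l, 0.0) for l in links}
        return out

    return [blend(i / steps) for i in range(steps + 1)]

# example: two flows of size 0.9 swap between two unit-capacity links
# (10% slack); moving the traffic in ten equal fractions keeps every
# asynchronous step within capacity.
old = {"f1": {("u", "v"): 0.9}, "f2": {("u", "w"): 0.9}}
new = {"f1": {("u", "w"): 0.9}, "f2": {("u", "v"): 0.9}}
cap = {("u", "v"): 1.0, ("u", "w"): 1.0}
plan = staged_migration(old, new, steps=10)
print(schedule_is_consistent(plan, cap))    # -> True
```

the sketch only verifies or constructs schedules over the given old and new paths ; searching for helpful intermediate paths is exactly where the lp formulations and the hardness results discussed above come into play .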
in a practical setting , splitting flows is often realized via deploying multiple unsplittable paths , which is an np - hard optimization problem as well , both for minimizing the number of paths and for maximizing -splittable flows , cf . another popular option is to split the flows at the routers using hash functions ; other major techniques are flow(let ) caches and round - robin splitting , cf . nonetheless , splitting flows along multiple paths can lead to packet reordering problems , which need to be handled by further techniques , see , e.g. , .

many of the discussed flow migration works rely on linear programming formulations : even though their runtime is polynomial in theory , the timely migration of large networks with many intermediate states is currently problematic in practice . if the solution takes too long to compute , the to - be - solved problem might no longer exist , a problem only made worse when resorting to ( np - hard ) integer programming for unsplittable flows . as such , some tradeoff has to be made between finding an optimal solution and one that can actually be deployed . orthogonal to the problem of consistent flow migration is the approach of scheduling flows beforehand , without changing their path assignments in the network during the update . we refer to the recent works by kandula _ et al . _ and perry _ et al . _ for examples . game - theoretic approaches have also been considered , e.g. , . lastly , the application of model checking to consistent network updates does not cover bandwidth restrictions yet .

the classification of the complexity of flow migration still poses many questions , cf . table [ flow - hardness - table ] : if every flow can only be moved once , then the migration ( decision ) problem is clearly in np . however , what is the decision complexity if flows can be moved arbitrarily often , especially with intermediate paths ? is the `` longest '' fastest update schedule for unsplittable flows linear , polynomial , exponential , or even worse ? related questions are also open for flows of unit or integer size in general . the problem of migrating splittable flows without memory limits and without intermediate paths is still not studied either : it seems as if the methods of and also apply to this case , but a formal proof is missing .

table [ flow - algo - table ] :
approach & ( un-)splittable model & intermediate paths & computation & # updates & complete ( decides if consistent migration exists )
install old and new rules , then switch from old to new & both , move each flow only once & no & polynomial & 1 & no bandwidth guarantees
partial moves according to free slack capacity & splittable & no & polynomial & & requires slack on flow links
greedy traversal of dependency graph & both , move each flow only once & no & polynomial & linear & no ( rate - limit flows to guarantee completion )
mip of & both , move each flow only once & no & exponential & linear & yes
& both & no & polynomial & any &
& & yes & exponential & &
via mip & both & both & exponential & any & for any given yes , but can not decide in general
binary search of intermediate states via lp & splittable & yes & polynomial in # of updates & unbounded & can not decide if migration possible
create slack with intermediate states , then use partial moves of & splittable & yes & polynomial & unbounded & yes
split unsplittable flows along old and new paths & 2-splittable & no & polynomial & unbounded & yes
use augmenting flows to find updates & splittable , 1 dest . , paths not fixed & yes & polynomial & linear & yes

table [ hardness results - table ] :
memory restrictions & decision problem hardness
yes & np - hard
no & p
yes & np - hard
no & open
yes & np - complete
no & np - complete via

table [ flow - hardness - table ] :
reduction & ( un-)splittable model & intermediate paths & memory limits & decision problem in general & optimization problems / remarks
partition & splittable & no & yes & np - hard & np - complete if every flow may only move once
partition & splittable & no & no & & np - hard ( fewest rule modifications )
& splittable & yes & no & p & fastest schedule can be of unbounded length , lp for new reachable demands if can not migrate
& 2-splittable & no & no & p & studies slightly different model
( max ) 3-sat & unsplittable & yes & no & np - hard ( also for unit size flows ) & np - hard to approx . additive error of flow removal for consistency better than
partition & unsplittable & yes & no & no & np - hard ( fastest schedule )
partition & unsplittable & no & no & np - hard & stronger consistency model , but proof carries over
part . / subset sum & unsplittable & no & no & & np - hard ( does a 3-update schedule exist ? )

so far we studied network updates from the viewpoint that consistency in the respective model must be maintained , e.g. , no forwarding loops should appear at any time . in situations where the computation is no longer tractable or the consistency property can not be maintained at all , some of the discussed works opted to break consistency in a controlled manner . an orthogonal approach is to relax the consistency safety guarantees and try to minimize the time the network is in an inconsistent state , with underlying protocols being able to correct the induced problems ( e.g. , dropped packets are re - transmitted ) , as done in a production environment in google _ b4 _ .

one idea mainly investigated by mizrahi _ et al . _ is to synchronize the clocks in the switches s.t . network updates can be performed simultaneously : with perfect clock synchronization and switch execution behavior , at least in theory , e.g. , loop freedom could be maintained . as the standard network time protocol ( ntp ) does not have sufficient synchronization behavior , the precision time protocol ( ptp ) was adapted to sdn environments in , achieving microsecond accuracy in experiments .
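to make the timed - update idea more concrete , the following toy sketch shows a coordinator that , instead of pushing changes one by one , instructs every switch in advance to activate its new rules at one common wall - clock instant ; with well - synchronized clocks all switches then flip almost simultaneously . the ToySwitch class , the fixed safety margin and the per - switch clock error are illustrative assumptions of ours , not an interface of any real switch or of the cited systems .

```python
# Toy scheduled ("timed") update: each switch is told in advance to activate
# its new rule set at the same absolute time T; switch clocks are assumed to
# be synchronized (e.g., via PTP) up to a small residual error.
import threading
import time

class ToySwitch:
    def __init__(self, name, clock_error_s=0.0):
        self.name = name
        self.rules = "old"
        self.clock_error_s = clock_error_s        # residual clock sync error

    def schedule_activation(self, new_rules, activate_at):
        def worker():
            # wait until the (slightly skewed) local clock reaches T
            delay = max(0.0, activate_at + self.clock_error_s - time.time())
            time.sleep(delay)
            self.rules = new_rules                # flip the whole rule set at T
            print(f"{self.name} activated '{new_rules}' at {time.time():.4f}")
        t = threading.Thread(target=worker)
        t.start()
        return t

def timed_update(switches, new_rules, margin_s=0.5):
    # pick T far enough in the future to cover controller-to-switch delays
    activate_at = time.time() + margin_s
    threads = [sw.schedule_activation(new_rules, activate_at) for sw in switches]
    for t in threads:
        t.join()
    return activate_at

if __name__ == "__main__":
    net = [ToySwitch("s1", 0.001), ToySwitch("s2", -0.002), ToySwitch("s3")]
    timed_update(net, new_rules="new")            # all switches flip near T
```

the same toy also exposes the weaknesses discussed next : a switch with a large clock error or a slow rule installation flips late , and the network is inconsistent until it catches up .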
however , even if the time is synchronized well enough , there will be unpredictable variations of command execution time from network switches , motivating the need for prediction - based scheduling methods .even worse , if a switch fails to update at all , the network can stay in an inconsistent state until the controller is notified , then either rolling back the update on the other switches or computing another update .additionally , ongoing message overhead for time synchronization is required in the whole network , and controller - to - switch messages can be delayed / lost .in contrast , at the expense of additional updates , sequential approaches can verify the application of sent network updates one by one , possibly moving forward ( to the next update ) or back ( if a command is not received or not yet applied ) with no risk of incurring ongoing safety violations .nonetheless , in some situations synchronized updates can be considered optimal : e.g. , consider the case in fig .[ fig : move up - swan ] where two unsplittable flows need to be swapped , with no alternative paths in the network available for the final links. then , synchronizing the new flow paths can minimize the induced congestion .still , timed updates can not guarantee packet consistency on their own , as packets that are currently on - route will encounter changed forwarding rules at the next switch . in some additional methodsare discussed how to still guarantee packet consistency by , e.g. , temporarily storing traffic at the switches .time can be used similarly to a 2-phase commit though , by analogously using timestamps in the packet header as tags during the update , with also showing an efficient implementation using timestamp - based tcam ranges .additional memory , as in the 2-phase commit approach of reitblatt _ et al . _ , will be used for this method , but packets only need to be tagged implicitly by including the timestamp ( where often 1 bit suffices ) .as a complement to the previously - described theoretical and algorithmic results , we now provide an overview on practical challenges to ensure consistent network updates .we also describe how previous works tackled those challenges in order to build automated systems that can automatically carry out consistent updates . 1 .* ensuring basic communication with network devices : * automated update systems classically rely on a logically - centralized coordinator , which must interact with network devices to both instruct them to apply operations ( in a given order ) .such a device - coordinator interaction requires a communication channel .update coordinators in traditional networks typically exploit the command line interface of devices . in sdn networks ,the interaction is simplified by their very architecture , since the coordinator is typically embodied by the sdn controller which must be already able to program ( e.g. , through openflow or similar protocols ) and monitor ( e.g. , thanks to a network information base ) the controlled devices .2 . * applying operational sequences , step by step : * both devices and the device - coordinator communication are not necessarily reliable .for example , messages sent by the coordinator may be lost or not be applied by all devices upon reception .those possibilities are typically taken into account in the computation of the update sequence ( see [ sec : taxo ] ) .however , an effective update system must also ensure that operations are actually applied as in the computed sequences , e.g. 
, before sending operations in the next update step . to this end, a variety of strategies are applied in the literature , from dedicated monitoring approaches ( based on available network primitives like status - checking commands and protocols or lower - level packet cloning mechanisms ) of traditional networks to acknowledgement - based protocols implemented by sdn devices .3 . * working around device limitations : * applying carefully - computed operational sequences ensures update consistency but not necessarily performance ( e.g. , speed ) , as the latter also depends on device efficiency in executing operations .this aspect has been analyzed by several works , especially focused on sdn updates which are more likely to be applied in real - time ( e.g. , even to react to a failure ) .it has been pointed out that sdn device limitations impact update performance in two ways .first , sdn switches are not yet fast to change their packet - processing rules , as highlighted by several measurement studies .for example , in the devoflow paper , the authors showed that the rate of statistics gathering is limited by the size of the flow table and is negatively impacted by the flow setup rate . in 2015 , he _et al . _ experimentally demonstrated the high rule installation latency of four different types of production sdn switches .this confirmed the results of independent studies providing a more in - depth look into switch performance across various vendors .second , rule installation time can highly vary over time , independently on any switch , because it is a function of runtime factors like already - installed rules and data - plane load .the measurement campaign on real openflow switches performed in dionysus indeed shows that rule installation delay can vary from seconds to minutes .update systems are therefore engineered to mitigate the impact of those limitations despite not avoiding per - rule update bottlenecks .prominently , dionysus significantly reduces multi - switch update latency by carefully scheduling operations according to dynamic switch conditions .covisor and minimize the number of rule updates sent to switches through eliminating redundant updates .* avoiding conflicts between multiple control - planes : * for availability , performance , and robustness , network control - planes are often physically - distributed , even when logically centralized as in the cases of replicated sdn controllers or loosely sdn controller applications . for updates of traditional networks ,the control - plane distribution is straightforwardly taken into account , since it is encompassed in the update problem definition ( see [ sec : history ] ) .in contrast , additional care must be applied to sdn networks with multiple controllers : if several controllers try to update network devices at the same time , one controller may override rules installed by another , impacting the correctness of the update ( both during and after the update itself ) .this requires to solve potential conflicts between controllers , either by pro - actively specifying how the final rules have to computed ( e.g. , ) or by reactively detecting and possibly resolving conflicts ( e.g. 
, ) .a generalization of the above setting consists in considering multiple control - planes that may be either all distributed , all centralized , or mixed ( some distributed and some centralized ) .potential conflicts and general meta - algorithms to ensure consistent updates in those cases are described in .* updating the control - plane : * in traditional networks , data - plane changes can only be enforced by changing the configuration of control - plane protocols ( e.g. , igps ) .in contrast , the most studied case for sdn updates considers an unmodified controller that has to change the packet - processing rules on network switches .nevertheless , a few works also considered the problem of entirely replacing the sdn controller itself , e.g. , upgrading it to a new version or replacing the old controller with a newer one .prominently , hotswap describes an architecture that enable the replacement of an old controller with a new one , by relying on a hypervisor that maintains a history of network events . as an alternative ,explicit state transfer is used to design and implement the morpheus controller platform in .dealing with events occurring during an update : * operational sequences computed by network update algorithms forcedly assume stable conditions . in practice , however , unpredictable concurrent events like failures can modify the underlying network independently from the operations performed to update the network .while concurrent events can be very unlikely ( especially for fast updates ) , by definition they can not be prevented .a few contributions assessed the impact of such unpredictable events on the update safety . for instance , the impact of link failures on sitn - based igp reconfigurations is experimentally evaluated in .another example is represented by the recent foum work , that aims at guaranteeing per - packet consistency in the presence of an adversary able to perform packet - tampering and packet - dropping attacks .while we have already identified specific open research questions in the corresponding sections , we now discuss more general areas which we believe deserve more attention by the research community in the future . 1 .* charting the complexity landscape : * researchers have only started to understand the computational complexities underlying the network update problem .in particular , many np - hardness results have been derived for general problem formulations for all three of our consistency models : connectivity consistency , policy consistency , and performance consistency .so far , only for a small number of specific models polynomial - time optimal algorithms are known .even less is known about approximation algorithms .accordingly , much research is required to chart a clearer picture of the complexity landscape of network update problems .we expect that some of these insights will also have interesting implications on classic optimization problems* refining our models : * while we believe that today s network models capture well the fundamental constraints and tradeoffs in consistent network update problems , these models are still relatively simple . in partiular, we believe that there is room and potential for developing more refined models .such models could for example account for additional performance aspects ( e.g. , the impact of packet reorderings on throughput ) .moreover , they could e.g. , better leverage predictable aspects and models , e.g. 
, empirical knowledge of the network behavior .for example , the channel between sdn controller and openflow switches may not be completely asynchronous , but it is reasonable to make assumptions on the upper and lower bound of switch update times .* considering new update problems : * we expect future update techniques to ensure consistency of higher - level network requirements ( like nfv , path delay , etc . ) , the same way as recent sdn controllers are supporting them .dealing with distributed control planes : * we believe that researchers have only started to understand the design and implication of more distributed sdn control planes .in particular , while for dependability and performance purposes , future sdn control planes are likely to be distributed , this also introduces additional challenges in terms of consistent network updates and controller coordination .the purpose of this survey was to provide researchers active in or interested in the field of network update problems with an overview of the state - of - the - art , including models , techniques , impossibility results as well as practical challenges .we also presented a historical perspective and discussed the fundamental new challenges introduced in software - defined networks , also relating them to classic graph - theoretic optimization problems .finally , we have identified open questions for future research .p. bosshart , d. daly , g. gibb , m. izzard , n. mckeown , j. rexford , c. schlesinger , d. talayco , a. vahdat , g. varghese , and d. walker .p4 : programming protocol - independent packet processors . ,44(3):8795 , july 2014 .f. clad , p. merindol , j .- j .pansiot , p. francois , and o. bonaventure .graceful convergence in link - state ip networks : a lightweight algorithm ensuring minimal operational impact ., 22(1):300312 , february 2014 .t. koponen , m. casado , n. gude , j. stribling , l. poutievski , m. zhu , r. ramanathan , y. iwata , h. inoue , t. hama , and s. shenker .onix : a distributed control platform for large - scale production networks . in _ proc .usenix osdi _ , 2010 .
|
computer networks have become a critical infrastructure . designing dependable computer networks however is challenging , as such networks should not only meet strict requirements in terms of correctness , availability , and performance , but they should also be flexible enough to support fast updates , e.g. , due to a change in the security policy , an increasing traffic demand , or a failure . the advent of software - defined networks ( sdns ) promises to provide such flexibilities , allowing networks to be updated in a fine - grained manner and enabling a more online traffic engineering . in this paper , we present a structured survey of mechanisms and protocols to update computer networks in a fast and consistent manner . in particular , we identify and discuss the different desirable update consistency properties a network should provide , the algorithmic techniques which are needed to meet these consistency properties , and their implications on the speed and costs at which updates can be performed . we also discuss the relationship of consistent network update problems to classic algorithmic optimization problems . while our survey is mainly motivated by the advent of software - defined networks ( sdns ) , the fundamental underlying problems are not new , and we also provide a historical perspective of the subject .
|
coastline morphology is of current interest in geophysical research and coastline erosion may have important economic consequences .even more , the concern about global warming has increased the demand for a better understanding of coastal evolution .this paper deals specifically with the erosion of rocky coasts .rocky coasts have been estimated to represent of the world s shorelines .they are found in different contexts , and there exists a rich bibliography on the subject , see for example and the references therein , as well as for an update bibliography . however , this estimation strongly depends on the very definition of what constitutes a rocky coast .many cliffed coasts are fronted by beaches , with many different morphologies and several different dynamical processes in action .the morphology of these sea - shores may result from several different processes ; tectonicity and various erosion mechanisms ( sea , rivers , wind ) acting on different soils as well as the possible role of sediment transport .sea erosion can be imperceptibly slow , but nevertheless shapes coastal morphology .it can also be observed over human timescale , being of concern to planners .for instance , a study of cliff coasts in new zealand reports sea erosion rates with peaks as high as meters in one year ( with typical rates ranging from to m / yr ) . eroding cliffed shorelines account for of the entire new zealand coast .accordingly to the definition of rocky coasts given in , here we address coasts dynamics in the limiting case where the role of sediment transport is considered to be negligible .for instance , tectonically active coasts often display rocky coasts with very limited sediment deposited by rivers ( as in peru and chile or along the north america cordillera ) . the rugged appearance of these coasts is usually considered as an extension of the rugged mountains characterizing the nearby landscape , however it is difficult to exclude that sea wave erosion does not play a role in their morphology .collision coasts also tend to be rocky , containing few depositional features . because of their relative youth , neo - trailing edge coasts , such as the arabian coast along the red sea , are also rugged and mostly rocky . furthermore , there are many sites throughout the world where rocky and rugged coasts are found in tectonically passive margins , such as south africa , parts of argentina and brazil , eastern canada , southern australia and a section of north - west europe. one might think that wave erosion can play a role in relatively low rocky coasts , but the height of the cliff is not a general contraindication for erosive sea dynamics .very often these coasts exhibit some kind of irregular morphology . in the last decades , attempts to describe global geometry of sea - coastshave been made using the tools of fractal geometry to the point that the coast of britain has been taken as an introductory archetype of self - similarity in nature . since then, many tentative applications of fractal concepts to geomorphology have been published but at the same time there has been some debate about the fractality of coastlines . in other words , coasts and rocky coasts may be fractal or not , depending possibly on the scale on which the coast is observed . 
often but not always , different scales exhibit different shapes .this corresponds to the variety of possible contributions to coast morphology mentioned above .nevertheless , the observation of geometrical similarities and the presence of `` some sort '' of scale invariance in the morphology of rocky coasts , may suggest the existence of a common mechanism which , when in action , shapes this type of coastline .a qualitative model for the appearance of fractal sea - coasts had been suggested in .the idea is that irregular coasts contribute to the damping of sea waves with the consequence that the resulting erosion is weaker .more recently , a numerical model of such coastal dynamics was developed and studied .it was found that a mechanism based on the retro - action of coastal shape on wave damping , leads necessarily , for the specific case worked in this paper , to the self - stabilization of a fractal coastline .interestingly , it was found that the self - stabilized geometry belongs to a well defined universality class , precisely characterized in the mathematical theory of percolation . in that sense , the notion of fractal geometry for rocky sea - coasts should no more appear as a curiosity , but as a necessary consequence of percolation theory . in this paperwe advocate the model , discussing the statistical analysis of earth geomorphological data , the possible role of sediments in erosion , the role of large scale heterogeneity in the lithology of the eroded coasts , and we address the problem of statistical characterization of different non fractal morphologies predicted by the model .the structure of the paper is the following . in section [ sec :singular ] we present some statistical field geomorphological data that suggest that coastlines geometry differs from the general earth geomorphology .our specific erosion model , based on the aforementioned retro - action mechanism , is presented in section [ sec : model ] . therewe discuss the fundamental connection between our model and the general theory of percolation .the erosion dynamics , short and long term evolutions , are discussed in section [ sec : dynamics ] .we also discuss how the critical nature of percolation , results in a very irregular , or episodic , erosion process .the question of fractal versus non - fractal sea - coast is discussed in section [ sec : fractalornot ] .more precisely we give statistical arguments that could help to distinguish transitory from final morphology . in section [sec : geology ] we address the fundamental question of the role of geology in the frame of our retro - action model .the qualitative results is that , very often , coastal morphology should be the results of both retro - action and geological constraints . in section [ sec : sediment ] we discuss the possible role of sediments and rubbles in the erosion process .we show that in various cases the same scaling and morphologies should be obtained . in section [ sec : complexity ] we present some detailed data analysis of some rocky coasts , unfolding the complexity of the real morphology of coastlines as well as of the inland coastal regions . in section [ sec : conclusion ]we give the summary and conclusion of the paper .a question that can be raised , observing rocky coasts , is whether their geometry is simply inherited by the morphology of the inland , or whether the interaction with the sea changes and shapes the coast in a distinct way . 
in order to disentangle and recognize specific features of coastlines , with respect to higher earth surface isolines , it is interesting to use tools which may help to reveal general universal features of their geometry .

statistical analysis of coastlines fractal dimension . top : color map of earth coastline , the color shows the local measured fractal dimension . bottom left : distribution of measured fractal dimension on the earth coastlines . bottom right : average fractal dimension for earth isolines as a function of elevation ( in meters ) between and ; the world average fractal dimension for coastlines ( elevation 0 ) is slightly above 1.2 . the horizontal dash - dotted line corresponds to .

the following analysis of the world coastlines is part of a more general and detailed analysis , published elsewhere ( more details in the appendix [ app : isolines ] ) . topographic data for earth have been obtained from the srtm30-plus data set . the data consist of the earth surface elevation over a grid of points . from the data we computed earth isolines , the coastline belonging to the zero elevation isoline . next , the whole earth surface is divided in squares of degrees latitude x degrees longitude and the fractal dimension is computed for each isoline portion in each square ( a minimal box - counting sketch of this kind of dimension estimate is given below ) . fig . [ worldiso ] shows the result of the coastline fractal analysis . the bottom left panel of fig . [ worldiso ] represents the distribution of the measured fractal dimensions for the zero elevation isoline . the world coastline average dimension is found to be slightly above 1.2 , but rocky coasts often have higher dimensions . moreover , one can consider the behavior of the global isolines as a function of elevation near the sea level and compare their corresponding dimension . in the bottom right panel of fig . [ worldiso ] we show the average fractal dimension measured between and meters . interestingly , exactly at zero elevation a rapid change in the measured average fractal dimension is observed . this gives an indication that the interaction between sea and land is responsible for the complex geometry of coastlines . in this sense , a model for the geometry of rocky coastlines should explicitly take into account the main physical processes taking place at the interface between sea and land , that is the dynamics of coastal erosion .

the global analysis presented so far suggests that the coastal morphology is not the sole reflection of the inland morphology . the large scale geometry of a coastline may be the result of many different characteristic phenomena . sand deposits usually smooth the irregularity of rocky coasts , filling bays , or may display specific patterns . on the other hand , the rough geometry of glacial valleys gives the very convoluted coastline typical of fjords at large absolute latitudes . however , in many cases , rocky coasts present a very _ steep slope _ ( or terrain gradient ) with respect to the inland profile . this is the case of cliffs , or what we call `` plateau '' coasts .
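as mentioned above , a minimal box - counting estimate of the dimension of a single isoline portion can be sketched as follows ; the synthetic test curve , the range of box sizes and the use of numpy are illustrative choices of ours , and the actual processing of the srtm30 - plus data is of course more involved .

```python
# Minimal box-counting estimate of the dimension of a planar curve given as an
# array of (x, y) points: count the boxes N(eps) of size eps touched by the
# curve and fit the slope of log N(eps) versus log(1/eps).
import numpy as np

def box_counting_dimension(points, n_scales=12):
    pts = np.asarray(points, dtype=float)
    pts = pts - pts.min(axis=0)                    # move into the first quadrant
    span = pts.max()                               # linear size of the bounding box
    sizes = span / np.logspace(1, 6, n_scales, base=2.0)   # eps from span/2 to span/64
    counts = [len(np.unique(np.floor(pts / eps).astype(int), axis=0))
              for eps in sizes]                    # N(eps): number of occupied boxes
    slope, _ = np.polyfit(np.log(1.0 / sizes), np.log(counts), 1)
    return slope

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    x = np.linspace(0.0, 1.0, 20000)
    y = np.cumsum(rng.normal(size=x.size)) * 1e-3  # a synthetic rough "coastline"
    print("rough curve  :", round(box_counting_dimension(np.column_stack([x, y])), 2))
    print("straight line:", round(box_counting_dimension(np.column_stack([x, x])), 2))
```

for a smooth curve the routine returns a value close to 1 , while rougher curves give larger values ; applied square by square to the isolines , this is the kind of local estimate summarized in the color map of fig . [ worldiso ] .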
in our mind , a plateau coast is characterized by an extreme situation where a flat landscape becomes steep at the coast . a photographic example is shown in fig . [ fig : plateau ] . in this case one expects : first , that sea erosion is the most relevant shaping mechanism , and second , that a 2d model ( as the one presented in the next section ) could be sufficient to describe the evolution of such a morphology .

example of plateau coasts . top left : ouessant island , brittany , france ; top right : detail of the north east coast of the island . bottom : coast at the south of plougonvelin , brittany , france . note the cultivated fields right next to the sea - shore . ( pictures are snapshots from google earth )

of course , the case of plateau coasts is a limiting case . discussion of several other examples of coastline complexity will be given in section [ sec : complexity ] below . through these more detailed examples , we wish to put forward the idea that those shores with _ high local slopes ( or terrain gradients ) at the shore _ may be reasonably attributed to marine erosion , acting in a given geological context .

rocky coast erosion is the product of marine and atmospheric causes . there exist many different erosion processes : wave quarrying , abrasion , wetting and drying , frost shattering , thermal expansion , salt water corrosion , carbonation , hydrolysis . at the same time , the mechano - chemical properties of the rocks constituting the coast , which are linked to the structure , composition and aging defining their `` lithology '' , exhibit an unknown dispersion . on the other hand , erosion is the consequence of the existence of an `` erosion power '' . it is a selective mechanism , which progressively eliminates the weaker parts of the surface . the remaining shore is then hardened as compared to the initial shore . but the erosion power is not constant and may change during erosion . in particular , damping mechanisms caused by the erosion itself could arise , establishing a self - stabilizing mechanism .
in other words , erosion is the product of three ingredients : erosion power , rocks lithology , _ and _ a damping depending on the shore morphology .the interplay between these three factors is discussed here in various conditions , through the numerical implementation of a simple model .the specific damping mechanism considered here , relies on the studies of irregular or fractal acoustic cavities .they show that viscous damping is increased on a longer , irregular surface .these considerations have been applied practically in the conception of efficient acoustic road absorber now installed along several roads in france .an other example of the proposed selection mechanism , is the case of the dynamics of pit corrosion of thin aluminum films .there , fractal geometries spontaneously appears at the interface of the corroded solid .the phenomenon can be understood by means of a minimal model , which disregards many atomic details of the corrosion process .the analogy between erosion and corrosion is less artificial than one could think _ a priori _ ( see discussion in section [ sec : sediment ] ) . in the following we make an arbitrary distinction between _`` rapid '' _ mechanical erosion ( namely wave quarrying ) and _ `` slow '' _ weakening of the rocks due to the action of the elements ( weathering processes ) .these _ `` slow '' _ weakening events trigger , from time to time , new _`` rapid '' _ erosion sequences .the justification is that mechanical erosion generally occurs rapidly , mainly during storms , after rocks has been slowly altered and weakened .we first study this supposedly rapid erosion mechanisms .then we show that the full complex dynamics , involving fast and slow processes , changes the shape of the coast on a longer time scale keeping its gross geometrical characteristics .this dynamics is reminiscent of the _ quasi - equilibrium _ evoked by trenhaile ( see below ) our model schematize the sea , the land , and their interaction in the following way . in analogy with the acoustic oscillations in a cavity , the sea , together with the coast ,is considered to constitute a resonator .it is assumed that there exists an average excitation power of the waves .the `` force '' acting on the unitary length of the coast is measured by the square of the wave amplitude .this wave amplitude is related to by a relation of the type where is the morphology dependent quality factor of the system : the smaller the quality factor , the stronger the damping of the sea - waves .there are several different causes for damping .since the different loss mechanisms occur independently , the quality factor satisfies a relation of the type where is the quality factor due to the `` viscous '' dissipation of the fluid moving along the coast and the nearby islands and is related to other damping mechanisms ( e.g. bulk viscous damping ) .studies of fractal or irregular acoustic cavities have shown that the viscous damping increases roughly proportionally to the cavity perimeter .this model uses , as a working hypothesis , the idea that sea - waves are more damped along an irregular coast much in the same way as acoustic modes in irregular cavities .this appears to be an empirically known effect used to build efficient break - waters that are based on hierarchical accumulations of tetrapods piled over layers of smaller and smaller rocks , in close analogy with fractal geometry ( see fig .[ fig : tetrapods ] , left , and the many descriptions of breakwaters in ref . ) . 
artificial and natural break - waters .left : break - water made of concrete tetrapods on layers of rocks ( picture from http://www.sys.com.my ) .right : picture taken at pacific grove , in the monterey peninsula , california .our measure of the fractal dimension of such coast is close to .,title="fig:",height=158 ] artificial and natural break - waters . left : break - water made of concrete tetrapods on layers of rocks ( picture from http://www.sys.com.my ) .right : picture taken at pacific grove , in the monterey peninsula , california .our measure of the fractal dimension of such coast is close to .,title="fig:",height=158 ] to check the relation between irregularity and damping for the particular shapes of a sea - coast during erosion , we consider the properties of four 2d resonators , with the upper side corresponding to four successive coast shapes . for each time and morphologywe solve numerically the helmoltz wave equation with neumann boundary conditions .we compute the eigenmodes in a given frequency range , assuming weak losses on the eroding profile . for each eigenmodewe compute the energy dissipation which is supposed to take place on the coast . in other wordswe study the wave dissipation due to viscous forces acting on the boundary of irregular swimming pools .results are given in fig .[ fig : cavities ] , showing that the average losses of the computed eigenmodes in the four resonators increase roughly proportionally to the coast perimeter .the fact that the velocity of sea waves depends on the sea - floor depths would not modify the general link between perimeter and damping .( a ) a numerically computed eigenmode in a 2d resonator with the upper side representing one of the coast shapes during erosion . the energy dissipation of a mode is proportional to the integral of the squared amplitude along the coastline .frame ( b ) shows the evolution of the average dissipation for four different resonators , whose upper boundary corresponds to four successive times . during erosion ,the coast perimeter increases and the damping increases roughly proportionally to the coast perimeter . ]therefore , one can , in first approximation , assume that is inversely proportional to the coast perimeter whereas is independent of the coast morphology . in other words ,the sea exerts a homogeneous erosion power on each coast element proportional to : where is the total length of the coast at time ( then ) .the factor measures the relative contribution to damping of a flat shore as compared with the total damping .the quantity is the renormalized value of such that at all .small factor correspond to _ weak coupling _ between the erosion power and the coast length and large correspond to _strong coupling_. note that for the higher frequency waves with short wavelengths that contains a large erosion power , it is clear that their damping is proportional to the coast perimeter .these are the breaking waves usually considered to be the most erosive waves .the functional dependence of the erosion force as a function of the coast perimeter could be different without affecting the results .rather , a better model for damping should take care of a possible wave frequency dependence as well as the possibility of localization effects along the irregular coast . 
this would modify eq .[ eq2 ] and change the time evolution .note however , that what is important here is that , as erosion proceeds and sea `` penetrates '' progressively the earth , the erosion power is diminished .this is the essence of such a retro - action model .any model which would present this property would lead to the same type of results .in particular if erosion sediments stay locally on the sea floor , ( are not transported ) and contribute to damping , the erosion power would decrease as a function of the total amount of material already eroded and would create the same type of effects ( see below ) .the `` resisting '' random earth is modeled by a square lattice of random units of global width .each site represents a small portion of the earth , named _ a rock _ here .the sea acts on a shoreline constituted of these rocks , each one characterized by a random number , between and , representing its lithology .the erosion model should also take into account that a site surrounded by the sea is weaker than a site surrounded by earth sites .hence , the resistance to erosion of a site depends on both its lithology and the number of sides exposed to the action of the sea .this is implemented here through the following weakening rule : sites surrounded by three earth sites have a resistance . if in contact with sea sites the resistance is assumed to be equal to . and, if site is attacked by or sides , it has zero resistance .the iterative evolution rule is simple : at computer time step , all coast sites with are eroded ( sometimes exposing new sites to erosion ) , and then and are updated together with the resistances of the earth sites in contact with the sea .then , from one step to the next , some sites are eroded because they present a `` weak lithology '' while some strong sites are eroded due to their weaker stability due to sea neighboring .an example of local evolution is shown in fig .[ fig2 ] . _ note that our variable simply denotes a number of computer steps and therefore is not a real time_. illustration of the erosion process .the thick number at the square center represent the lithology .the numbers in the corners are the corresponding resistances which depend on the local environment as explained in the text .the sites marked with 1 are earth sites with no contact with the sea . left and right : situations before and after an erosion step with .after this step resistances are updated due to the new sea environment.,width=321 ] to exemplify the intrinsic properties of the model we consider an artificial situation where erosion would start on a flat sea - shore .the computer implementation of the above dynamic model leads to a spontaneous evolution of the smooth seashore towards geometrical irregularity as shown in fig .the figure exhibits the time evolution of an initially flat coastline towards geometric irregularity .the left column describes the case of weak coupling , the right column the case of strong coupling .time evolution of the coastline morphology starting with a flat sea - shore . 
left and right columns respectively weak and strong coupling .top to bottom : successive morphologies with the final morphologies at the bottom .note that case ( b ) , transitory shape with weak coupling and case ( f ) , final shape with strong damping appear to be similar but it is shown below that there exist statistical means to distinguish one from the other.,width=529 ] in the case of _ `` weak coupling '' _ the terminal morphology is highly irregular and it looks much like some of the irregular morphologies observed on the field .consider for instance , the north eastern coast of sardinia : the fractal dimension of this coast found to be very close to , as shown in fig .[ fig : palau ] .there , we compare the coastline fractal dimension ( isoline ) and the isolines closest to the coast , which is quite steep . on the opposite , the inland does not present very high reliefs .the picture at the bottom - left panel shows the coast near palau .northern coast of sardinia ( near palau ) .top right : box - counting measure of coastal isolines , compared with the ideal box - counting for a fractal with dimension .top right inset : coastal isolines , several elevations .bottom left inset : a picture of the coast .[ fig : sardinia],height=340 ] indeed , in the weak coupling case , our model produces a fractal terminal morphology with a dimension very close to ( see fig .[ fig4 ] , left ) .note , however , that the fractal morphology extends up to a maximum scale , of the order of the transverse width of the artificial coastline ( depicted in the last snapshots of fig .[ fig1 ] ) . the precise mathematical definition of this statistical width is given below ( see eq .[ sigmadef ] ) .top left : box - counting determination of the model coast fractal dimension .the straight line is a power law with a slope -4/3 .the best fit gives : .the data refer to a large system with , and small damping .top right : scaling behaviour of the coast width .the straight - line is a power law with the gp exponent -4/7 ( each point is an average over samples with ) .[ fig3 ] bottom : dependence of the erosion force as a function of the number of computer time steps .the left figure shows the evolution of the sea erosion force acting on the coastline during a `` rapid '' sea - erosion process ( different values of the scale gradients and of ) .this dynamics spontaneously stops at a value weakly dependent on ( systems with , averaged over ten different realisations ) .the right plot is the erosion force during the complete dynamics ( `` slow '' weathering process triggering `` rapid '' erosion ) illustrated in fig .[ slow].,width=377 ] top left : box - counting determination of the model coast fractal dimension .the straight line is a power law with a slope -4/3 .the best fit gives : .the data refer to a large system with , and small damping .top right : scaling behaviour of the coast width .the straight - line is a power law with the gp exponent -4/7 ( each point is an average over samples with ) .[ fig3 ] bottom : dependence of the erosion force as a function of the number of computer time steps .the left figure shows the evolution of the sea erosion force acting on the coastline during a `` rapid '' sea - erosion process ( different values of the scale gradients and of ) .this dynamics spontaneously stops at a value weakly dependent on ( systems with , averaged over ten different realisations ) .the right plot is the erosion force during the complete dynamics ( `` slow '' weathering process triggering `` rapid '' 
erosion ) illustrated in fig .[ slow].,width=529 ] here we want to stress that the average width is the distance below which the coast geometry is fractal . in other wordswe expect to have a geometrical scale invariance up to a distance . at larger scalethe seashore can be considered as the reunion of independent ( uncorrelated ) fractals of size .it turns out that depends directly through a power law of the coupling parameter .this is shown also in the right panel of fig .one observes a power law with exponent close to . as we will discuss below , the fractal dimension and the exponent are the signature of the deep connection of our model with percolation theory . in the language of statistical physics, the model belongs to the universality class of percolation , as will be discussed in the next section . for _ `` strong coupling '' _ ,the erosion ends on rugged morphologies , as shown in fig .[ fig1](f ) .this can be understood as the limit of small , i.e. where the geometrical correlation of the coast is short ranged .note that such rugged morphology resembles to transitory morphologies observed with weak coupling as that of fig .[ fig1](b ) .we will see below in section [ sec : fractalornot ] that there exists statistical methods to distinguish transitory ( young coasts ) from final morphologies ( old coasts ) . in summary, we discuss a model which although based on very few ingredients , gives rise to a variety of coastline morphologies , fractal or simply rugged , transitory or final .our study indicates the existence of a connection between coastal erosion of rocky coasts and percolation theory .percolation is a cornerstone of the theory of disordered systems , which has brought new understanding and techniques to a broad range of topics in physics , materials science , complex networks , epidemiology , etc . 
percolation theory deals with the following statistical problem .consider a lattice , like the square lattice for instance , and independently assign to each site a random number obtained from a uniform distribution between and .now select the set of sites which happen to have a number smaller than some fixed arbitrary value and occupy them .if these selected sites are first nearest neighbors , they define so - called clusters .of course if is small , the clusters themselves will be of small size .however , strictly above a critical value , there exists an infinite cluster that crosses the lattice from left to right and from top to bottom .the most important characteristics of percolation phenomena is that near criticality , that is when is close to , the scaling properties of the system are independent of the lattice geometry .although the value of the percolation threshold does depend on the lattice under consideration the exponents of the so - called scaling laws that describe the properties of clusters and the geometry of the infinite cluster at percolation are independent of the lattice .in particular the external frontier of the percolation cluster is fractal with a fractal dimension exactly equal to and the so - called accessible perimeter has a fractal dimension equal to .as the reader can suspect , there is a similarity between the percolation cluster and the coastline produced by our model .in fact , at the end of the erosion process , all earth sites at the interface with the sea have a resistance larger than the wave erosive power .so , in some sense , our erosion model spontaneously identifies , and stops at , a percolating interface constituted of `` strong '' sites . indeed, there exist a direct relation between our model and the theory of percolation .in particular , besides the fractal dimension of the coast , the behavior of the width as a function of the coupling factor exhibit a power law dependance , as shown in fig .this power law is characteristic of a specific variant of percolation , called _ gradient percolation _ . in the appendix [ app : gp ]we explain gradient percolation and how it can be related to our erosion model . at this stageit is important to recall briefly the concept of universality in statistical physics .universality means that several very different phenomena can exhibit analogous macroscopic properties described by the same power law exponents .the phenomena which exhibits the same exponents belong to a unique _ universality classes_. 
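the site percolation construction described above is easy to reproduce numerically ; in the sketch below each site of a square lattice is occupied independently with a given probability and a spanning cluster is searched for . the lattice size , the number of samples and the use of scipy s connected - component labelling are illustrative choices of ours .

```python
# Site percolation on a square lattice: occupy each site independently with
# probability p and test whether an occupied cluster spans the lattice from
# the top row to the bottom row.
import numpy as np
from scipy.ndimage import label

def spans(p, size=256, rng=None):
    rng = np.random.default_rng() if rng is None else rng
    occupied = rng.random((size, size)) < p
    clusters, _ = label(occupied)                  # 4-neighbour connected clusters
    top = clusters[0, :][clusters[0, :] > 0]
    bottom = clusters[-1, :][clusters[-1, :] > 0]
    return np.intersect1d(top, bottom).size > 0    # same cluster touches both sides

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    for p in (0.50, 0.55, 0.59, 0.63, 0.70):
        freq = np.mean([spans(p, rng=rng) for _ in range(20)])
        print(f"p = {p:.2f}   spanning frequency = {freq:.2f}")
```

around the site percolation threshold of the square lattice ( close to 0.593 ) the fraction of spanning samples rises sharply , and the rise becomes steeper as the lattice grows .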
the models described in this paper , percolation , gradient percolation , our model of coastal erosion , all belongs to the percolation universality class , irrespectively of many details .for example , if we change the lattice geometry , or the distribution of the rocks lithology , the model still belongs to the percolation universality class .here it means that shorelines made of rocks of different nature and sizes , subjected to different external climate , can exhibit the same large scale geometrical properties .recent studies inspired by our model , corroborates the deep connection between coastal morphology and percolation .a remarkable property of such real coasts has been observed : they have been found to be conformally invariant .this geometrical property stands for itself but there exists a mathematical demonstration that it exists for the so - called accessible perimeter of the percolation cluster .( it should be stressed that not every fractal with dimension is conformally invariant .let now describe in more detail the erosion dynamics resulting from the minimal rules defined by our model . for the sake of simplicity ,we consider first the case of a flat ( smooth ) initial coastline submitted to the erosion action of the sea .as discussed later ( section [ sec : smoothing ] ) , the dynamics with different initial morphologies can be understood from these results . in the first steps of the dynamicsthe erosion front keeps quite smooth and it roughens progressively as shown in fig . [ fig1 ] . during the process, finite clusters are detached from the infinite earth , creating _islands_. at any time , both the islands and the coastline perimeters contribute to the damping .as the total coastline length increases , the sea force becomes weaker . at a certain timestep , the weakest point of the coast is stronger than and the `` rapid '' dynamics stops .this indicates that erosion reinforces the coast by preferential elimination of its weakest elements until the coast is strong enough to resist further erosion .whatever the dynamics , at the stopping time the coastline is irregular ( see fig.[fig1 ] ) up to a characteristic width .this width is defined as the standard deviation of the final coastline depth .more precisely , defined as the mean number of points of the front lying on the line , and as the average position of the front , that is then is the ( averaged ) time evolution of is shown in fig .[ fig3 ] ( left ) .the dynamics depends strongly on the value of .if is large enough the dynamics is rapid and the erosion stops on an irregular but non - fractal sea - shore ( see the _ strong coupling _case in fig .[ fig3 ] ) . on the opposite , if is small enough , the dynamics last much longer and it finally stops on a fractal sea - shore ( _ weak coupling _case ) . note that the final values of the sea - power are different from the classical percolation threshold .this is linked to the weakening rule implemented in the model .it is however important to stress that the time here is a computer step , not directly comparable with physical time .a unit of computer step corresponds to the duration between erosion events .such a duration is _ directly related to the strength or fragility of the coast itself_. 
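gradient percolation , mentioned above and detailed in appendix [ app : gp ] , can be illustrated along the same lines : the occupation probability now decreases linearly from the land side to the sea side , and the frontier of the land - connected cluster plays the role of the final coastline . the lattice sizes , the crude frontier definition and the averaging below are illustrative choices of ours .

```python
# Gradient percolation: the occupation ("land") probability decreases linearly
# from 1 on the land side to 0 on the sea side; the frontier of the cluster
# connected to the land side is the analogue of the final coastline.
import numpy as np
from scipy.ndimage import label

def frontier_width(ny, nx=2000, rng=None):
    rng = np.random.default_rng() if rng is None else rng
    p = np.linspace(1.0, 0.0, ny)[:, None]         # gradient of magnitude ~ 1/ny
    land = rng.random((ny, nx)) < p
    clusters, _ = label(land)
    keep = np.unique(clusters[0, :])               # clusters touching the land side
    connected = np.isin(clusters, keep[keep > 0])
    # crude frontier proxy: connected sites whose neighbour towards the sea is empty
    rows, _ = np.nonzero(connected[:-1, :] & ~connected[1:, :])
    return rows.std()                              # width of the frontier

if __name__ == "__main__":
    rng = np.random.default_rng(3)
    for ny in (50, 100, 200, 400):
        w = np.mean([frontier_width(ny, rng=rng) for _ in range(5)])
        print(f"gradient ~ 1/{ny:<3d}   frontier width ~ {w:5.1f}")
```

the measured width is expected to grow roughly like the inverse gradient to the power 4/7 , the gradient percolation exponent quoted in the caption of fig . [ fig3 ] .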
of course , the real dynamics of the coasts are more complex than the _ `` rapid '' _ processes considered above .they result from the interplay with the slow weathering processes , generally attributed to carbonation or hydrolysis .these processes act on longer , geological , time scales . in order to mimic this long term evolution after the ending of _ `` rapid '' _ erosion ,the lithology parameter of all the coast sites is decreased by a small fraction , _ i.e. _ with after the erosion has stopped at .one or a few coast sites then become weaker than and they are eroded .this exposes new sites , previously protected , to erosion , triggering possibly a new start of the rapid erosion dynamics up to a next arrest .this process can then be iterated .snapshots of the coastline at successive arrest times are shown in fig .[ slow ] together with their measured fractal dimension .note that the measured fractal dimension fluctuates around , which is the expected fractal dimension for a very large coast with a vanishing or very small coupling ( as in fig .[ slow ] ) .snapshots taken during the long term erosion dynamics for a small system ( ) with a moderate coupling .color codes for successive arrest times .note that the measured fractal dimension fluctuates around the universal value corresponding to the limit of vanishing coupling ( right panel).,height=283 ] moreover , at each restart of erosion , a finite and strongly fluctuating portion of the earth is eroded .the `` slow ''weathering mechanisms induces also small fluctuations of ( see fig .[ fig3 ] right ) . in the language of coastal studies , the system state evolves through a dynamical equilibrium where small perturbations may stimulate large fluctuations and avalanche dynamics .the coastal engineering community has recently suggested the use of a stochastic description of the dynamics of rocky coast erosion , characterized by episodic , discontinuous events , more than simple constant erosion rate processes .field inspections confirmed that the local variability in rock resistances , or in general `` geological contingency influences the nature and scale of erosion processes and thresholds '' . in our modelsuch fluctuating dynamics is due to the underlying criticality of percolation systems .a detailed statistical analysis of the episodic erosion events is out of the scope of the present paper . nevertheless , we wish to point out that the statistics of such events should follow power laws ( similarly to what has been computed for the etching of disordered solids at low temperature ) .interestingly , power laws have been observed in the statistics of field coastal soft - cliff erosion . in , using a high temporal resolution rockfall statistics , the episodic character of the dynamics is debated , in opposition to a continuum activity . 
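a compact toy implementation of the two time scales described in this section ( rapid erosion until arrest , then slow weathering that re - triggers it ) might look as follows . several quantities appear only symbolically in the text , so the weakening rule ( full resistance with one exposed side , half with two , none with three or more ) , the damping law and all numerical constants below are assumptions of ours , meant only to reproduce the qualitative behaviour and not the exact model of the paper .

```python
# Toy version of the erosion model: a lattice of rocks with random lithology,
# a sea whose erosive force weakens as the coastline gets longer, and a slow
# weathering step that re-triggers rapid erosion. The weakening rule, the
# damping law and all constants are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(2)

def sea_sides(sea):
    # number of 4-neighbours of each site that are sea (outside counts as earth)
    padded = np.pad(sea, 1, constant_values=False)
    return (padded[:-2, 1:-1].astype(int) + padded[2:, 1:-1]
            + padded[1:-1, :-2] + padded[1:-1, 2:])

def rapid_erosion(sea, lith, g, f0=0.5):
    """Erode exposed rocks until every one of them resists the current force."""
    l0 = sea.shape[1]                         # length of the initial flat coast
    eroded = 0
    while True:
        nsea = sea_sides(sea)
        coast = (~sea) & (nsea > 0)
        length = coast.sum()                  # crude proxy for the coast perimeter
        force = f0 / (1.0 + g * (length / l0 - 1.0))     # assumed damping law
        # assumed weakening: full resistance with 1 exposed side, half with 2,
        # none with 3 or more
        factor = np.where(nsea <= 1, 1.0, np.where(nsea == 2, 0.5, 0.0))
        weak = coast & (lith * factor < force)
        if not weak.any():                    # arrest: every exposed rock resists
            return eroded
        sea[weak] = True
        eroded += int(weak.sum())

def run(nx=150, ny=150, g=0.1, epsilon=0.01, cycles=20):
    lith = rng.random((ny, nx))               # random lithology in [0, 1)
    sea = np.zeros((ny, nx), dtype=bool)
    sea[0, :] = True                          # flat initial coastline
    episodes = []
    for _ in range(cycles):
        episodes.append(rapid_erosion(sea, lith, g))
        exposed = (~sea) & (sea_sides(sea) > 0)
        lith[exposed] *= 1.0 - epsilon        # slow weathering of exposed rocks
    return episodes                           # rocks removed between two arrests

if __name__ == "__main__":
    print(run())
```

the list returned by run contains the amount of rock removed between two successive arrests , i.e. the size of each erosion episode , which is the quantity whose statistics is discussed next .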
within our approach , a power law statistics is expected in the framework of percolation theory .note that the fluctuating dynamics of in our model , are not due to fluctuations of the sea incoming power .these could also be included , in order to mimic storms , for instance .such fluctuations could give raise to reactivations of fast eroding events , as for the weathering mechanism .in general , we do nt expect a change in the overall set of morphologies generated by the model .as explained above , our model of coastal erosion shows how the feedback between the lithology heterogeneity of shores and the damping effect of geometric irregularity may lead , in the weak coupling case , the coast towards a fractal geometric shape with a dimension characteristic of percolation .this case can be qualified as `` canonic '' since it corresponds to well established percolation theory results .however , under different conditions , the model may generate coastlines with complex irregular shapes , not necessarily fractal .this happens in the case of strong coupling morphology of `` old coasts '' , or in the case of transient morphologies of the weak coupling `` young coasts '' .a measure that can result useful to characterize the morphology in this case is the following .first , one has to recall that the exponent which plays a role in the geometry is that of the accessible perimeter , namely .so an empirical way to determine if an observed coast may result of such erosion is to measure the length of the coast contained in a square box of side .this is the classical mass method in fractal studies . in our case, this length should be proportional to to the power up to a box size of order . for larger boxes, the mass should be linear as a function of .this is shown in fig .[ mass - final ] for various computed final or _`` old '' _ coasts obtained for different , but quite large , values of . in this case , as can be seen in fig .[ fig4 ] left , is much smaller then , and the coasts appear simply rough , rather than fractal ( see fig .[ fig1](f ) ) . nevertheless , a clear flattening of the curve , for is visible .( the fact that the signature of the fractal exponent can be observed in a non fractal front has been also investigated mathematically for gradient percolation in ) . left : geometrical correlation of `` old '' coast at the end of the fast erosion dynamics .the mass plotted here is the local mass in a box of side centered around a point of the coast and averaged along this coast . .[ mass - transient ] right : geometric correlation of the `` young '' coast obtained during the fast erosion dynamics .inset : erosion strength of the sea during the dynamics .the mass plotted here is the local mass in a box of side centered around a point of the coast and averaged along this coast . , . ,title="fig:",height=188 ] left : geometrical correlation of `` old '' coast at the end of the fast erosion dynamics .the mass plotted here is the local mass in a box of side centered around a point of the coast and averaged along this coast . .[ mass - transient ] right : geometric correlation of the `` young '' coast obtained during the fast erosion dynamics .inset : erosion strength of the sea during the dynamics .the mass plotted here is the local mass in a box of side centered around a point of the coast and averaged along this coast . 
interestingly, for _young transient_ coasts such as that of fig. [fig1](b), which also appear rough and non-fractal as in fig. [fig1](f), the flattening is much less evident. in fig. [mass-transient] we show the measure of during the fast erosion process, which eventually leads to the red curve in fig. [mass-final]. we note that the flattening arises quite late in the erosion process. this suggests that, with respect to this measure, transient structures behave differently from final structures. our model, as discussed until now, may appear too simplistic. here we discuss how our results apply more generally. in the above results, the general trend is to reach a rugged morphology starting from a flat one, an obviously artificial situation. even without including specific mechanisms mimicking a differential erosion due to the convergence of sea waves (caused by local topography or bathymetry, as well as by specific wind directions), we can ask what would happen in our model if one were to start from an initial coast with a salient geometry. let us call the typical length scale of the irregularity produced by the dynamics starting from an initially flat coast (for instance, measured as the average of , defined in eq. [sigmadef], over several realizations of the dynamics at large times). this quantity depends on (it is proportional to ). suppose now the case of an initially non-flat coast, that is, a coast with an initial characteristic irregularity scale . then, if , the erosion dynamics will initially change (decorate) the coast on the smallest scale, keeping the larger irregularity. the slow erosion dynamics will then eventually lose memory of the initial geometry, leading to a shoreline irregular up to time . this case is shown in fig. [dentedisega], where a ``toothed'' coastline is submitted to erosion according to our model in the case of strong coupling (small ). otherwise, if , the erosion will increase the irregularity up to length , which becomes the dominant irregularity scale. in fig. [fig-trenhaile] we show the results of the equivalent dynamics (same and ) for two initially different irregular coasts, with and respectively. after some time, which depends on the parameters of the dynamics, the depth of the coast shows fluctuating dynamics around , irrespective of the initial value of . this, we think, resembles what trenhaile had in mind when drawing a figure in his paper (reproduced here as an inset in fig. [fig-trenhaile]). several observations are, however, in order. here corresponds to , which is related to the correlation length of the underlying percolation process. it depends on , the coupling between (geometrical) damping and erosion. there is no need to invoke an _a priori_ differential erosion rate between headlands and bays. finally, the _quasi-equilibrium regime_ evoked by trenhaile corresponds here to the stationary, critical dynamics, where avalanches are triggered by slow, local erosion events. long term erosion, strong coupling, starting from an irregular geometry (shore flattening): a) initial configuration; b) after erosion cycles; c) after erosion cycles; d) after erosion cycles. depth of the coast, measured as the length scale of its irregularity, defined in eq .
, as a function of computer time during the slow erosion process .two different initial conditions are considered : large ( red curve ) and small ( black curve ) , compared with the average depth obtained starting from a initially flat shore ( dashed line ) .both simulations are perfomed with , and . in the inset ,reproduction of fig.5 from .,width=453 ] we think that this kind of dynamics blends two apparently contrasting ideas : by one side the quasi - equilibrium dynamics predicted by trenhaile , on the other the image of an episodic dynamics . at the same time it supports the continuum activity scenario recently proposed in , where the magnitude - frequency distribution corresponds to the avalanche statistics of our critical dynamics , expected in our model since the connection with percolation and observed in similar cases .it is of general consent that marine erosion processes acts on an earth that possess its own geological identity and this obviously influence the observed morphologies . in the above calculations and discussion ,the lithology distribution has been considered totally random , without any spatial correlations . by this , we mean that the lithology of neighboring `` rocks '' are independent random numbers .there is _ no correlation in the disorder_. this is a limit case .a more realistic description should include the concept of `` geological heterogeneity '' . in this casethe lithology exhibits a dispersion around a local average value , which changes only on large distances .the scale of variation of this local average is called _ the correlation distance _ of the random lithology . in fig .[ flat - correlated ] , the earth is constituted at the beginning of a collection of different patches , some of which contains long distance correlations .it then presents some regions of weak lithology and some regions of strong lithology while other regions are uncorrelated .the final morphology retains this heterogeneity , with part of the shore very irregular while other regions are smoother ( similar phenomena could be invoked in order to interpret several detailed studies of the observed self - similarity properties of seacoasts ) .top : example of initial ( left ) and corresponding final ( right ) morphologies created by the erosion process in the case where the earth present regions of local correlation between lithologic mechano - chemical properties .the lithologies values are coded by light to dark beige . [correlated ] bottom : example of initial ( left ) and corresponding final ( right ) morphologies created by the erosion process with a different starting geometry .pictures are courtesy by j.f.colonna .,title="fig:",height=188 ] top : example of initial ( left ) and corresponding final ( right ) morphologies created by the erosion process in the case where the earth present regions of local correlation between lithologic mechano - chemical properties .the lithologies values are coded by light to dark beige . 
[correlated ] bottom : example of initial ( left ) and corresponding final ( right ) morphologies created by the erosion process with a different starting geometry .pictures are courtesy by j.f.colonna .,title="fig:",height=188 ] top : example of initial ( left ) and corresponding final ( right ) morphologies created by the erosion process in the case where the earth present regions of local correlation between lithologic mechano - chemical properties .the lithologies values are coded by light to dark beige .[ correlated ] bottom : example of initial ( left ) and corresponding final ( right ) morphologies created by the erosion process with a different starting geometry .pictures are courtesy by j.f.colonna .,title="fig:",width=188 ] top : example of initial ( left ) and corresponding final ( right ) morphologies created by the erosion process in the case where the earth present regions of local correlation between lithologic mechano - chemical properties .the lithologies values are coded by light to dark beige . [correlated ] bottom : example of initial ( left ) and corresponding final ( right ) morphologies created by the erosion process with a different starting geometry .pictures are courtesy by j.f.colonna .,title="fig:",width=188 ] such irregular coastlines , which retain some strong resemblance with real coasts , do not enter a simple fractal or scaling category .the important fact here is that , whatever the lithological conditions , there exists a spontaneous evolution towards irregularity that stops spontaneously .this is again a consequence of the concept of percolation but of course of a percolation problem asked on a spatially correlated randomness .moreover , even in this case , the initial flat coast is an idealization , and some memory of ancient shapes could also influence the coastline morphology , as shown in fig .[ correlated ] .up to now , the results described here where obtained under the assumption that sediments play no role , either because they do not modify damping on the surface , either because they disappear by a rapid transport effect .there exists , indeed , situations in which sediment transport can be neglected .see , for instance , the discussion in chapt .18 in .however , when sediments stay on site , they can produce two different effects in terms of rock erosion ( the role of sediments produced by erosion as determinant for shaping sediment rich shorelines , which has witnessed recent modeling efforts , is not discussed here ) .first if they are small enough they can play the role of little hammers accelerated by the waves . on the opposite they can fall on the sea floor and contribute to the damping of waves .to simplify , the small pieces increase erosion while the large heavy pieces increase damping .the erosion increase has been neglected here because its role should be only transitory . now , suppose , in an extreme scenario , that the damping is not dominated , as was assumed above , by the interaction with the coast perimeter but by the damping effect of the sediments . in fact , it is known , and however poorly understood , that the shore waves are partially damped by their interaction with the sea floor . in first approximation , the more sediments the more damping .this means that the equivalent quality factor would be inversely proportional to the total amount of eroded material at time that me call , rather than to the coast perimeter length . 
in that casethe erosion power would evolve as where the factor measures the relative contribution to damping of accumulated sediments shore as compared with the total damping .it might also take care of the fraction of the sediments transported away from the coast .in particular erosion of very high cliffs , producing large amounts of heavy rubble , would consequently induce a strong coupling effect , leading to rugged but not fractal sea shores .note that this damping mechanism , in which the eroding force decreases with the amount of eroded mass , leads to a one to one correspondence with the aluminum corrosion experiments and models mentioned above ( where the corrosive power decreases with the amount of corroded material ) .then , the theory developed there , which again makes connection to gradient percolation , applies also in our case .this means that erosion will lead also to fractal or scaling interfaces with the sames scaling geometries .the two mechanisms may occur simultaneously , giving , without affecting the result : more generally we expect that any model that express some kind of link between random erosion and increased damping would lead to the same type of self - organized fractality or scaling .the analysis presented in section [ sec : singular ] , suggests that the coastal morphology is not the sole reflect of the inland morphology . before discussing specific examples in detail oneshould recall that other phenomenon are known to play a role .for example sand deposits usually smooth the irregularity of rocky coasts , filling bays , or may display specific patterns .also the rough geometry of glacial valleys gives the very convoluted coastline typical of fjords at large absolute latitudes .note also that our model has used implicitly the fact that the power giving rise to wave excitation was uniform .of course this may be not true on a too large scale ( think for instance to different wind or current conditions ) . in this sectionwe consider several specific locations , and we analyze the coastline fractal morphology , in comparison with the fractal geometry of the closest inland isolines .this is made possible by high resolution srtm3 data - set , which provide altimetric data of the earth surface in a grid of arc - seconds ( that is a resolution of about m ) and with a vertical error smaller than meters .thanks to this new tools , it is possible to unfold the complexity of real coastline geometry , which is the result of the interplay of several physical phenomena , and , consequently might go beyond the simple model proposed here . 
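Since the field analysis that follows relies on extracting coastlines and higher isolines from gridded altimetric data, here is a minimal sketch of that step: adjacent grid cells whose elevations bracket the target level are detected, and the crossing point is placed by linear interpolation, as described in the appendix. The toy elevation field, array names and grid spacing are placeholders, not the SRTM data or the authors' code.

```python
import numpy as np

def isoline_points(elev, h, dx=1.0, dy=1.0):
    """Return (x, y) points where the elevation field crosses level h.

    For every pair of horizontally or vertically adjacent grid cells whose
    elevations bracket h, the crossing is placed by linear interpolation.
    elev is a 2-D array of elevations on a regular grid.
    """
    pts = []
    # horizontal neighbours
    a, b = elev[:, :-1], elev[:, 1:]
    i, j = np.nonzero((a - h) * (b - h) < 0)
    t = (h - a[i, j]) / (b[i, j] - a[i, j])       # interpolation fraction in [0, 1]
    pts.append(np.column_stack(((j + t) * dx, i * dy)))
    # vertical neighbours
    a, b = elev[:-1, :], elev[1:, :]
    i, j = np.nonzero((a - h) * (b - h) < 0)
    t = (h - a[i, j]) / (b[i, j] - a[i, j])
    pts.append(np.column_stack((j * dx, (i + t) * dy)))
    return np.vstack(pts)

# toy example: a noisy tilted plane standing in for real altimetric data
rng = np.random.default_rng(1)
elev = np.linspace(-50, 50, 400)[None, :] + 5 * rng.standard_normal((400, 400))
coast = isoline_points(elev, h=0.0)               # h=0 gives the "coastline"
print(coast.shape)
```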
in the following field analysis , we restrict to coasts where there the terrain gradient at the coast is much larger than the average gradient in the inland , that is what we named plateau coasts , in section [ sec : plateau ] .an example of such coasts is the coast of brittany , as can be seen in fig .[ fig : bretagne ] : the inland is very flat ( most of the coastal land is lower than m ) , and the shores are usually very steep .the coast is subject to severe storms , which can have impressive effects on rocky cliffs .however , the overall measured fractal dimension of the coastline is smaller than .this could be due to the presence of beaches or in general sand deposits .if this would be the case , isolines slightly higher should not be affected by this and they should better show the effect of pure sea erosion .an increase of fractal dimension is in fact observed for isolines slightly higher then sea level .however a very interesting phenomena occurs , which can be clearly observed from the box counting curve of the m isoline : there , two appreciably different slopes appear , one at short scale and another , steeper , for longer range .this observation , which can not be explained by our simple model , could be the result of different geophysical processes , acting at different length scale and/or on different time scales .in the lower inbox of the figure we show the fractal dimension measured restricting the range of the fit respectively at short and long range .this effect seems maximum at about m of elevation .interestingly , the short range fit is very close to for isolines around m ( again note that above m the range for box - counting is too small to be conclusive ) .analysis of brittany coast .scaling of box counting shows complex features , not a simple power law .this is clearly visible for the isoline .upper inset : map of the region analysed .lower inset : slopes of box counting curves at short and long range , as a function of elevation.,height=302 ] such a large scale bending of isoline box - counting curves seems common in other coastal regions , for instance in the northern coast of california , as shown in fig . [ fig : twoslopes ] .( previous measures of the coastline fractal dimension of american coasts are reviewed for instance here ) . left : analysis of northern californian coast .scaling of box counting shows complex features , not a simple power law .this is clearly visible for the isoline .upper inset : map of the region analysed .lower inset : slopes of box counting curves at short and long range , as a function of elevation .right : the average fractal dimension of west coast of north america as a function of elevation , computed at short and large range.,title="fig:",height=188 ] left : analysis of northern californian coast .scaling of box counting shows complex features , not a simple power law .this is clearly visible for the isoline .upper inset : map of the region analysed .lower inset : slopes of box counting curves at short and long range , as a function of elevation .right : the average fractal dimension of west coast of north america as a function of elevation , computed at short and large range.,title="fig:",height=188 ] in the inset of the left panel , the long range and small range slopes are compared as a function of isoline elevation .while the short range slope is quite constant around ( which is quite interesting , view the uplifting nature of this coast ) , at long range a wide variation is observed . 
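A sketch of the two-range box-counting fit discussed above: occupied boxes are counted over a geometric range of box sizes and separate power-law slopes are fitted below and above a crossover scale. The size range, number of sizes and split scale are illustrative choices, not those used for the figures; `coast` could be the isoline point set produced by the previous sketch.

```python
import numpy as np

def box_counts(points, sizes):
    """Number of occupied boxes of side s, for each s in sizes."""
    counts = []
    mins = points.min(axis=0)
    for s in sizes:
        cells = np.floor((points - mins) / s).astype(int)
        counts.append(len(set(map(tuple, cells))))
    return np.array(counts)

def two_range_dimensions(points, split, n_sizes=12):
    """Fit log N(s) ~ -D log s separately below and above a split scale.

    Both ranges need at least two box sizes for the least-squares fit."""
    span = (points.max(axis=0) - points.min(axis=0)).max()
    sizes = np.geomspace(span / 500, span / 4, n_sizes)
    N = box_counts(points, sizes)
    logs, logN = np.log(sizes), np.log(N)
    short = sizes <= split
    D_short = -np.polyfit(logs[short], logN[short], 1)[0]
    D_long = -np.polyfit(logs[~short], logN[~short], 1)[0]
    return D_short, D_long

# example usage, with a crossover scale chosen by inspecting the curve:
# D_s, D_l = two_range_dimensions(coast, split=20.0)
```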
in order to test the generality of this observation, we performed the analysis on the whole western coast of north america, from to of latitude, parceled into coastal cells. for each cell, we computed the short- and long-range slopes as a function of isoline elevation. in the right panel of fig. [fig:twoslopes] we plot the average of the measured slopes. one can see that, even if weaker, the effect is still visible. at the moment, we do not have an explanation for this phenomenon, even though the multifractal nature of the terrain could possibly be invoked. a more detailed investigation, including the eastern north american coast, which behaves differently, will be presented in a future publication. in this work we have discussed a model for the formation of irregular rocky coastal morphology. this model links the reciprocal evolution of the erosion power with the topography of the coast submitted to that erosion. the model reproduces, at least qualitatively, some of the features of real coasts using only simple ingredients: the randomness of the lithology and the decrease of the erosion power of the sea. despite the simplicity of the model, a complex phenomenology emerges. this is not surprising in complex systems research. as stated by murray et al.: ``the analytical lens of emergent phenomena highlights the idea that studying the building blocks of a system (the small-scale processes within a landscape, for example) may not be sufficient to understand the way the system works on much larger scales. the collective behaviors of small-scale components synthesize into effectively new interactions that produce large-scale structures and behaviors''. in our model, depending on time, on the damping strength and on possible correlations in the lithological properties, different coastline characteristics emerge. in the simplest case, ``weak coupling'', the retro-action leads to the spontaneous formation of a fractal seacoast with a fractal dimension . the appearance of this specific fractal dimension uncovers the close, deep connection with percolation theory and the ``universality class'' of percolation in the physics of critical systems. the dynamics in the weak-coupling case leads to self-organized fractality, since the fractal geometry plays the role of a morphological attractor: whatever its initial shape, a rocky shore will end up fractal when submitted to this type of erosion. for this reason this case can be qualified as ``canonic''. for larger coupling, and/or depending on the erosion stage, various irregular coastlines emerge from the same model. among these irregular but non-fractal morphologies we have been able to distinguish ``young'' or transient coastlines from ``old'' or final ones. future work should then include field observation, in the hope of finding such morphologies and of confronting these ideas with geologic knowledge. obviously, one of the first goals of field studies will be to try to read world coastline morphology in terms of gradient percolation in the limit of strong gradients. for these ``old'' coasts, one should observe power laws with percolation exponents _although the coasts may not be fractal_. at this point, one should stress that, among the enormous geometrical variety and complexity of sea-coasts, fractal sea-coasts, and especially those with dimension close to , should no longer be considered as ``complex''.
on the contrary , they may in fact be the `` most simple '' consequences of uncorrelated randomness and retro - action whatever the details of retro - action .in such a frame , they are necessary consequences of percolation phenomena .since percolation possess the universality properties of phase transitions , the scaling properties of these coasts should not depend on more specific processes . by this , we do not mean that the specificity of the process no longer plays a role , for instance on the real time scale of erosion , but that they do not determine the global large scale geometry and scaling behaviors . more precisely : * any retro - action damping acting on any distribution of lithology will lead to the spontaneous self - stabilization of an irregular coast .in particular , nonlinear damping effects , possibly including the role of turbulence , would modify the time history of erosion but not the scaling properties of the coast geometry .* once the coast is stabilized the long term evolution will be stochastic .it will be triggered even by small events and will produce some kind of avalanche statistics .this is related to the fact that a stabilized sea - shore may hide `` weak '' lithology regions or more fragile patches . * in the field , the islands which have resisted erosion under a power larger than the final power , should be stronger that the coast itself .this could be verified on the historical data of known seacoasts and the evolution of neighboring islands . in summary ,the present work provides a rationale that connects damping , as illustrated in fig .[ fig : tetrapods ] left , with rocky coast morphology , as illustrated in fig .[ fig : tetrapods ] right . a simple feedback mechanism that relates the large scale morphology with the sea wave erosion power ( usually noted with in the coastal literature ) , together with a local variability of rock resistance ( ) , naturally lead to the formulation of our minimal model , which points out how both the irregular morphology of coastlines as well as the episodic and stochastic erosion dynamics may both be the effect of an underlying critical ( percolation ) point . of course , such a feedback process , does not exclude the existence of other processes acting at more local , or meso - scales .nevertheless , our framework confirms the idea that the final coast emerges from a natural selection process , which eliminates the weaker part of the coast .the resulting shoreline constitutes a strong , but possibly fragile , barrier to sea erosion . to the extent that this idea applies, natural coasts should be `` preserved '' and managed with care .we gratefully acknowledge illuminating and fruitful discussions with jens feder and niels hovius , and alistair rowe for a reading of the manuscript .topographic data for earth have been obtained from the srtm30-plus set .the data consists in the earth surface elevation over a grid of points .the resolution of the grid is minutes of degree for latitude and longitude , which corresponds , in the region of interest , to about four kilometers .we have extracted the coastline as the set of points at zero elevation .in fact , we use a generalized method to extract `` isolines '' at arbitrary elevation : to draw the isoline of level , we identify on the topographic grid all the nearest neighbor sites whose elevations , and , satisfying . 
using coordinates and elevations of such points , the coordinates of the isoline pointare computed via a simple linear interpolation .once the isolines are found , the whole earth surface is divided in squares of degrees latitude x degrees longitude .in fact the square regions are separated by only two degrees as to have an overlapping covering of the total surface .then we proceed in computing the fractal dimension in each square , via the classical box counting procedure : the fractal dimension has been measured through a least squares fit of the exponent of the box counting plot in the range of degrees , which roughly correspond to a range from few kilometers to several tenths of kilometers .we disregard fractal dimensions computed on isoline sets with less than points .the regression error in the fractal dimension so estimated never exceeded .the whole numerical analysis has been repeated in the following cases : ( i ) full resolution for continental land ( seconds instead of minutes ) ; ( ii ) different interpolation schemes for the definitions of isolines ; ( iii ) different minimum number of points ( ) .the values of the corresponding measured fractal dimensions are only slightly affected , and the main results do not change .in order to relate the erosion model to the theory of percolation , we need to introduce the gradient percolation model ( * gp * ) . in this model , each site of the lattice is occupied with a probability which change linearly from to in a given direction ( is the size of the lattice in the direction ) .there is then a _ gradient _ in the occupation probability ( not to be confused with the terrain gradient in geomorphology ) . in *gp * there is always an infinite cluster of occupied sites as there is a region where is larger than the standard percolation ( * sp * ) threshold .there is also an infinite cluster of empty sites as there is a region where is smaller than .the object of interest is the * gp * front , i.e. the external limit ( or frontier ) of the infinite occupied cluster .this front is a random fractal object with but the accessible part of it is a random fractal with .it has an average position and a statistical width defined as follows . for , is the mean number , per unit horizontal length , of points of the front lying on the line .it measures the front density at distance .the position and the width of the front are then defined in terms of the by eq.[xfdef ] and [ sigmadef ] . it was found in that the mean front is located at a distance where the density of occupation is very close to or .it was also found that the width depends on through a power law where is the correlation length exponent so that . the width was also shown to be a percolation correlation length .as we can see , the fractal dimension of the accessible * gp * front coincide with the value measured in the erosion model .moreover , the width of the front scales with respect to the gradient exactly as the width of our coastlines do with respect to the parameter , i.e. through an exponent equal to .10 right ) the reason for this is that _ is proportional to a gradient of occupation probability by the sea _ from the following argument . at time , the erosion power is while the sea has eroded the earth up to an average depth , an increasing function of .inverting this function , can be written as .there exists then a spatial gradient of the occupation probability by the sea . 
for small enough one can write .the quantity is a function of but to the lowest order it is a constant independent of since even with , there will be an erosion due to randomness and a consequent perimeter evolution . then to lowest order , the real gradient is linear in , the coupling factor in the erosion model .b. b. mandelbrot , _ stochastic models for the earth s relief , the shape and the fractal dimension of the coastlines , and the number - area rule for islands _ , proc .usa , * 72 * , 3825 - 3828 ( 1975 ) . j. d. bartley , r. w. buddemeier , and d. a. bennett , coastline complexity : a parameter for functional classification of coastal environments , journal of sea research * 46 * , 87 - 97 ( 2001 ) and references therein .a. baldassarri , m. montuori , o. prieto - ballesteros , s. c. manrubia _ reading the geometry of landscapes : global topography reveals action of geological processes on earth _ , journal of geophysical research , * 113 * , p. e09992 ( 2008 ). w. h. f. smith , d.t t. sandwell , global seafloor topography from satellite altimetry and ship depth soundings , science , 277 , 1957 - 1962 , ( 1997 ) .data from http://topex.ucsd.edu/www_html/srtm30_plus.html b. sapoval , s. b. santra , ph .barboux , stable fractal interfaces in the etching of random systems , europhys .* 41 * , 297 ( 1998 ) , and , a. gabrielli , a. baldassarri , b. sapoval , surface hardening and self - organized fractality through etching of random solids , phys .e , * 62 * , 3103 - 3115 , ( 2000 ) .schramm , o. ( 2006 ) , conformally invariant scaling limits ( an overview and a collection of problems ) , in proceedings of the international congress of mathematicians , madrid , august 22- 30 , 2006 , edited by m. sanz - sole et al . ,zurich , switzerland .( available at http://arxiv.org/ abs / math/0602151 . )a. desolneux , b. sapoval , a. baldassarri , self - organised percolation power laws with and without fractal geometry in the etching of random solids , in fractal geometry and applications : a jubilee of benoit mandelbrot ( m. l. lapidus and m. van frankenhuijsen , eds . ) proc .symposia pure math .72 , part 2 , pp .485 - 505 ( 2004 ) .
|
we discuss various situations where the formation of rocky coast morphology can be attributed to the retro-action of the coast morphology itself on the erosive power of the sea. by destroying the weaker elements of the coast, erosion can create irregular seashores. in turn, the geometrical irregularity participates in the damping of sea waves, decreasing their erosive power. there may then exist a mutual self-stabilization of the wave amplitude together with the irregular morphology of the coast. a simple model of this type of stabilization is discussed. the resulting coastline morphologies are diverse, depending mainly on the morphology/damping coupling. in the limiting case of weak coupling, the process spontaneously builds fractal morphologies with a dimension close to . this provides a direct connection between the coastal erosion problem and the theory of percolation. for strong coupling, rugged but non-fractal coasts may emerge during the erosion process, and we investigate a geometrical characterization in these cases. the model is minimal, but can be extended to take into account heterogeneity in the rock lithology and various initial conditions. this makes it possible to mimic coastline complexity well beyond simple fractality. our results suggest that the irregular morphology of coastlines, as well as the stochastic nature of erosion, is deeply connected with the critical aspects of percolation phenomena.
|
it is the purpose of this paper to study the existence and regularity of weak solutions of the following parabolic system , which is a generalization of the well - known keller - segel model of chemotaxis : [ bbru_s1-s2-s3 ] where is a bounded domain with a sufficiently smooth boundary and outer unit normal .equation is _ doubly nonlinear _ , since we apply the -laplacian diffusion operator , where we assume , to the integrated diffusion function , where is a non - negative integrable function with support on the interval ] and , and assume that the diffusion coefficient has the following properties : },\quad \gamma_1\psi(1-s)\leq a(s)\leq \gamma_2\psi(1-s ) \quad \text{for ,}\end{aligned}\ ] ] where we define the functions and for .our first main result is the following existence theorem for weak solutions .[ bbru_theo - weak ] if with and a.e . in , then there exists a weak solution to the degenerate system in the sense of definition [ bbru_def1 ] . in section [ bbru_sect : existence ] , we first prove the existence of solutions to a regularized version of by applying the schauder fixed - point theorem .the regularization basically consists in replacing the degenerate diffusion coefficient by the regularized , strictly positive diffusion coefficient , where is the regularization parameter .once the regularized problem is solved , we send the regularization parameter to zero to produce a weak solution of the original system as the limit of a sequence of such approximate solutions .convergence is proved by means of _ a priori _ estimates and compactness arguments .we denote by the parabolic boundary of , define , and recall the definition of the intrinsic parabolic -distance from a compact set to as our second main result is the interior local hlder regularity of weak solutions .[ bbru_thm - holder ] let be a bounded local weak solution of in the sense of definition [ bbru_def1 ] , and .then is locally hlder continuous in , i.e. , there exist constants and , depending only on the data , such that , for every compact , in section [ bbru_sect : holdercont ] , we prove theorem [ bbru_thm - holder ] using the method of intrinsic scaling . this technique is based on analyzing the underlying pde in a geometry dictated by its own degenerate structure , that amounts , roughly speaking , to accommodate its degeneracies . this is achieved by rescaling the standard parabolic cylinders by a factor that depends on the particular form of the degeneracies and on the oscillation of the solution , and which allows for a recovery of homogeneity .the crucial point is the proper choice of the intrinsic geometry which , in the case studied here , needs to take into account the -laplacian structure of the diffusion term , as well as the fact that the diffusion coefficient vanishes at and . at the core ofthe proof is the study of an alternative , now a standard type of argument . in either casethe conclusion is that when going from a rescaled cylinder into a smaller one , the oscillation of the solution decreases in a way that can be quantified . 
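For reference, the displayed system labelled [bbru_s1-s2-s3] did not survive text extraction. Below is a hedged LaTeX sketch of a system consistent with the description above (the p-Laplacian applied to the integrated diffusion function A(u), a chemotactic drift with prevention of overcrowding through the factor u(1-u), and a linear parabolic equation for the chemoattractant v); the symbols chi, d, alpha, beta and the outer normal eta are illustrative names, not necessarily the paper's notation.

```latex
% Hedged reconstruction of the degenerate chemotaxis system; the displayed
% equations were lost in extraction, so this is an assumed plausible form.
\begin{align*}
  \partial_t u &= \nabla\!\cdot\!\bigl( |\nabla A(u)|^{p-2}\,\nabla A(u)
                   - \chi\, u(1-u)\,\nabla v \bigr)
                   && \text{in } Q_T := \Omega\times(0,T),\\
  \partial_t v &= d\,\Delta v + \alpha u - \beta v
                   && \text{in } Q_T,\\
  \bigl( |\nabla A(u)|^{p-2}\nabla A(u) - \chi\, u(1-u)\,\nabla v \bigr)\cdot\eta
               &= 0, \qquad \nabla v\cdot\eta = 0
                   && \text{on } \partial\Omega\times(0,T),\\
  u(\cdot,0) &= u_0, \qquad v(\cdot,0) = v_0
                   && \text{in } \Omega,
\end{align*}
% with A(u) = \int_0^u a(s)\,ds, a \ge 0 supported on [0,1], and p > 1.
```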
in the statement of theorem [ bbru_thm - holder ] and its proof ,we focus on the interior regularity of ; that of follows from classical theory of parabolic pdes .moreover , standard adaptations of the method are sufficient to extend the results to the parabolic boundary , see .the remainder of the paper is organized as follows : section [ bbru_sect : existence ] deals with the general proof of our first main result ( theorem [ bbru_theo - weak ] ) .section [ bbru_sect : nondegenerate ] is devoted to the detailed proof of existence of solutions to a non - degenerate problem ; in section [ bbru_sect : fixed ] we state and prove a fixed - point - type lemma , and the conclusion of the proof of theorem [ bbru_theo - weak ] is contained in section [ bbru_sect : proof - weak ] . in section [ bbru_sect : holdercont ]we use the method of intrinsic scaling to prove theorem [ bbru_thm - holder ] , establishing the hlder continuity of weak solutions to . finally , in section [ bbru_sec : example ] we present two numerical examples showing the effects of prevention of overcrowding and of including the -laplacian term , and in the appendix we give further details about the numerical method used to treat the examples .we first prove the existence of solutions to a non - degenerate , regularized version of problem , using the schauder fixed - point theorem , and our approach closely follows that of .we define the following closed subset of the banach space : we define the new diffusion term , with , and consider , for each fixed , the non - degenerate problem [ bbru_s - reg ] with fixed , let be the unique solution of the problem [ bbru_s1-v ] given the function , let be the unique solution of the following quasilinear parabolic problem : [ bbru_s1-u ] here and are functions satisfying the assumptions of theorem [ bbru_theo - weak ] .since for any fixed , is uniformly parabolic , standard theory for parabolic equations immediately leads to the following lemma .[ bbru_unlema ] if , then problem has a unique weak solution , for all , satisfying in particular where is a constant that depends only on , , , and .the following lemma ( see ) holds for the quasilinear problem .if , then , for any , there exists a unique weak solution to problem .[ bbru_lem2.2 ] we define a map such that , where solves , i.e. , is the solution operator of associated with the coefficient and the solution coming from . by using the schauder fixed - point theorem , we now prove that has a fixed point .first , we need to show that is continuous .let be a sequence in and be such that in as .define , i.e. , is the solution of associated with and the solution of . to show that in , we start with the following lemma .[ bbru_lem - classic : est ] the solutions to problem satisfy * for a.e . . * the sequence is bounded in . 
*the sequence is relatively compact in .the proof follows from that of lemma 2.3 in if we take into account that is uniformly bounded in .the following lemma contains a classical result ( see ) .[ bbru_lem - classic : est - bis ] there exists a function such that the sequence converges strongly to in .lemmas [ bbru_lem2.2][bbru_lem - classic : est - bis ] imply that there exist and such that , up to extracting subsequences if necessary , strongly in and strongly in as , so is indeed continuous on .moreover , due to lemma [ bbru_lem - classic : est ] , is bounded in the set similarly to the results of , it can be shown that is compact , and thus is compact .now , by the schauder fixed point theorem , the operator has a fixed point such that .this implies that there exists a solution ( of we now pass to the limit in solutions to obtain weak solutions of the original system . from the previous lemmas and considering , we obtain the following result . [ bbru_maxprinc - est_v_weak ] for each fixed , the weak solution to satisfies the maximum principle moreover , the first two estimates of in lemma [ bbru_unlema ] are independent of .lemma [ bbru_maxprinc - est_v_weak ] implies that there exists a constant , which does not depend on , such that notice that , from and , the term is bounded .thus , in light of classical results on regularity , there exists another constant , which is independent of , such that taking as a test function in yields then , using , the uniform bound on , an application of young s inequality to treat the term , and defining , we obtain for some constant independent of . let . using the weak formulation , and , we may follow the reasoning in to deduce the bound therefore , from and standard compactness results ( see ) , we can extract subsequences , which we do not relabel , such that , as , to establish the second convergence in , we have applied the dominated convergence theorem to ( recall that is monotone ) and the weak- convergence of to in .we also have the following lemma , see for its proof .[ bbru_str - con - grad - v ] the functions converge strongly to in as .next , we identify as when passing to the limit in . due to this particular nonlinearity, we can not employ the monotonicity argument used in ; rather , we will utilize a minty - type argument and make repeated use of the following `` weak chain rule '' ( see e.g. for a proof ) .[ bbru_chain - rule ] let be lipschitz continuous and nondecreasing .assume is such that , , a.e .on , with . if we define , then holds for all \times{\ensuremath{\omega}}) ]. the first step will be to show that for all fixed , we have the decomposition clearly , and from we deduce that as . for ,if we multiply by and integrate over , we obtain now , if we take and use lemma [ bbru_chain - rule ] , we obtain therefore , using and lemma [ bbru_str - con - grad - v ] and defining , we conclude that and from lemma [ bbru_chain - rule ] , this yields as .consequently , we have shown that which proves .choosing with and and combining the two inequalities arising from and , we obtain the first assertion of the lemma .the second assertion directly follows from . with the above convergences we are now able to pass to the limit , andwe can identify the limit as a ( weak ) solution of .in fact , if is a test function for , then by it is now clear that since is bounded in and by lemma [ bbru_str - con - grad - v ] , in , it follows that we have thus identified as the first component of a solution of . 
using a similar argument, we can identify as the second component of a solution .we start by recasting definition [ bbru_def1 ] in a form that involves the steklov average , defined for a function and by t\in ( t - h , t] ] , in and integrate in time over for . applying integration by parts to the first term gives ^+\xi_n^p \ , dx \ , ds \\ & = \frac{1}{2}\int_{\tau_n}^t\int_{b_{r_n}}\partial_s \bigl ( \bigl([(u_\omega)_h - k_n]^+\bigr)^2 \bigr ) \xi_n^p\ , dx \ , ds + \left(1-\frac{\omega}{4}-k_n\right)\int_{\tau_n}^t\int_{b_{r_n}}\partial_s \biggl ( \biggl(\bigl[u-\bigl(1- \frac{\omega}{4}\bigr ) \bigr]^+\biggr)_h\biggr ) \xi_n^p \ , dx \ , ds \\ & = \frac{1}{2}\int_{b_{r_n}\times\{t\}}\bigl([u_\omega - k_n]_h^+\bigr)^2\xi_n^p \ , dx \ , ds - \frac{1}{2}\int_{b_{r_n}\times\{\tau_n\}}\bigl([u_\omega - k_n]_h^+\bigr)^2\xi_n^p \ , dx \ , ds \\& \quad -\frac{p}{2}\int_{\tau_n}^t\int_{b_{r_n}}\bigl([u_\omega - k_n]_h^+\bigr)^2\xi_n^{p-1 } \partial_s \xi_n \ , dx \ , ds \\ & \quad + \left(1-\frac{\omega}{4}-k_n\right)\int_{\tau_n}^t\int_{b_{r_n}}\partial_s \biggl ( \biggl(\bigl[u-\bigl(1- \frac{\omega}{4}\bigr ) \bigr]^+\biggr)_h \biggr ) \xi_n^p \ , dx \ , ds.\end{aligned}\ ] ] in light of standard convergence properties of the steklov average , we obtain ^+\bigr)^2\xi_n^p \ , dx \ , ds -\frac{p}{2}\int_{\tau_n}^t\int_{b_{r_n}}\bigl([u_\omega - k_n]^+\bigr)^2\xi_n^{p-1 } \partial_s \xi_n\ , dx \ , ds \\ & + \left(1-\frac{\omega}{4}-k_n\right)\biggl(\int_{b_{r_n}\times\{t\ } } \bigl[u-\bigl(1-\frac{\omega}{4}\bigr)\bigr]^+\xi_n^{p } \ , dx \ , ds \\ & \qquad -p\int_{b_{r_n}\times\{\tau_n\}}\bigl[u-\bigl(1- \frac{\omega}{4}\bigr)\bigr]^+\xi_n^{p-1}\partial_s \xi_n \ , dx \ , ds \biggr ) \quad \text{as .}\end{aligned}\ ] ] using and the nonnegativity of the third term , we arrive at ^+\bigr)^2\xi_n^p \ , dx -\frac{p}{2d}\left(\frac{\omega}{4}\right)^2\frac{2^{p(n+1)}}{r^p}\int_{\tau_n}^t \int_{b_{r_n}}\chi_{\{u_\omega\geq k_n\ } } \ , dx \ , ds \\ & \quad-\frac{p}{d}\left(\frac{\omega}{4}\right)^2\frac{2^{p(n+1)}}{r^p } \int_{\tau_n}^t\int_{b_{r_n}}\chi_{\{u\geq 1- \omega/ 4 \ } } \ , dx \ , ds \\ & \geq \frac{1}{2}\int_{b_{r_n}\times\{t\}}\bigl([u_\omega - k_n]^+\bigr)^2 \xi_n^p\ , dx -\frac{3}{2}\frac{p}{d}\left(\frac{\omega}{4}\right)^2 \frac{2^{p(n+1)}}{r^p}\int_{\tau_n}^t\int_{b_{r_n } } \chi_{\{u_\omega\geq k_n\ } } \ , dx \ , ds,\end{aligned}\ ] ] the last inequality coming from .since ^+\leq \omega / 4} h \to 0 \sigma \in { \mathcal{e}}_{\text{ext}}(k)$ , } \end{cases}\ ] ] where denotes the common edge of neighboring finite volumes and . for and with common vertexes with , let ( for , respectively ) be the open and convex polygon built by the convex envelope with vertices ( , respectively ) and .the domain can be decomposed into for all , the approximation of is defined by to discretize , we choose an admissible mesh of and a time step size . if is the smallest integer such that , then for .we define cell averages of the unknowns , and over : and the initial conditions are discretized by we now give the finite volume scheme employed to advance the numerical solution from to , which is based on a simple explicit euler time discretization . assuming that at , the pairs are known for all , we compute from ,\\ { \mathrm{\nabla}}_h v^{n}_{k,\sigma}+|k|g^n_k.\end{aligned}\ ] ] here denotes the discrete euclidean norm .the neumann boundary conditions are taken into account by imposing zero fluxes on the external edges .burger m , di francesco m , dolak - struss y. 
the keller - segel model for chemotaxis with prevention of overcrowding : linear vs. nonlinear diffusion ._ siam journal of mathematical analysis _ 2007 ; * 38*:12881315 .laurenot p , wrzosek d. a chemotaxis model with threshold density and degenerate diffusion . in : chipot m , escher j , _ nonlinear elliptic and parabolic problems : progress in nonlinear differential equations and their applications ._ birkhuser : boston ; 2005 : 273290 .karlsen kh , risebro nh . on the uniqueness and stability of entropy solutions of nonlinear degenerate parabolic equations with rough coefficients ._ discrete and continuous dynamical systems _ 2003 ; * 9*:10811104 .dibenedetto e , urbano jm , vespri v. current issues on singular and degenerate evolution equations . in : dafermoscm , feireisl e ( eds . ) , _ handbook of differential equations , evolutionary equations , vol .i_. elsevier / north - holland : amsterdam ; 2004 : 169286 .dibenedetto e. on the local behaviour of solutions of degenerate parabolic equations with measurable coefficient ._ annali della scuola normale superiore di pisa. classe di scienze .serie iv _ 1986 ; * 13*:487535 .
|
this paper addresses the existence and regularity of weak solutions for a fully parabolic model of chemotaxis, with prevention of overcrowding, that degenerates in a two-sided fashion and includes an extra nonlinearity represented by a p-laplacian diffusion term. to prove the existence of weak solutions, a schauder fixed-point argument is applied to a regularized problem, and the compactness method is used to pass to the limit. the local hölder regularity of weak solutions is established using the method of intrinsic scaling. the results contribute to showing, qualitatively, to what extent the properties of the classical keller-segel chemotaxis models are preserved in a more general setting. some numerical examples illustrate the model.
|
_ random walks _ are a mechanism to route messages through a network . at each hop of the random walk , the node holding the message forwards it to some neighbor chosen uniformly at random .random walks have interesting properties : they produce little overhead and network nodes require only local information to route messages . in turn, this makes random walks resilient to changes on the network structure .thanks to these features , random walks are useful for different applications , like routing , searching , sampling and self - stabilization in diverse distributed systems such as peer - to - peer ( p2p ) and wireless networks .past works have addressed the study of random walks .some of this research has focused on the coverage problem , trying to find bounds for the expected number of hops taken by a random walk to visit all vertices ( nodes ) in a graph is often denoted the _ cover time_. however , in this work we will use the term _ time _ to refer to the _ duration _ of the random walk . to avoid confusion , from now on the term _ time_ will only denote the physical magnitude . ] ( ) .results vary from the optimal of complete graphs ( where is the number of vertices ) to the worst case found in the lollipop graph .barnes and feige in generalize this bound to the expected number of hops to cover a fraction ( ) of the vertices of the network , which they found is .other works , for example , are devoted to find bounds on the expected number of steps before a given node is visited starting from node ( ) .for example , it is known that the upper bound for is .many of these results are based on the study of the properties of the transition matrix and adjacency matrix in spectral form .the previous results are used in several works to discuss the properties of random walks in communication networks .gkantsidis et al . apply them to argue that random walks can simulate random sampling on p2p networks , a property that in their opinion justifies the ` success of the random walk method ' when proposed as a search tool or as a network constructing method .adamic et al . study the search process by random walks in power - law networks applying the generating function formalism .this work seems deeply inspired by a previous contribution of newman et al . , who study the properties ( mean component size , giant component size , etc . ) of random graphs with arbitrary degree distribution .this paper introduces a study of random walks from a different perspective .it does not study the formal bounds in the amount of hops to cover the network .instead , it tries to estimate the efficiency of the random walk as a search mechanism in communications networks , applying network queuing theory .it takes into account the bounded processing capacities of the nodes of the network and the load introduced by the search messages , that are routed using random walks . to obtain this load, we need to estimate first the average search length , which in turn is computed from the expected average coverage : the average number of different nodes covered at each hop of the random walk .a distinguishing feature of our work is that , as in the case of adamic et al . , it deals with a scenario that has not been very exhaustively explored although , in our opinion , is quite interesting in the communications field : _ one - hop replication networks_. 
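The search primitive analysed in the rest of the paper can be sketched in a few lines: a message performs a uniform random walk over adjacency lists, and with one-hop replication (made precise in the next subsection) the search also succeeds as soon as the walk reaches any neighbour of the target. The adjacency-list representation and the hop cap are illustrative assumptions.

```python
import random

def random_walk_search(adj, source, target, one_hop=True, max_hops=10**6):
    """Return the number of hops needed to resolve a search, or None.

    adj: dict mapping each node to the list of its neighbours.
    With one_hop=True the search also succeeds when the walk visits a
    neighbour of the target, since that neighbour can reply on its behalf.
    """
    current = source
    for hop in range(max_hops + 1):
        if current == target or (one_hop and target in adj[current]):
            return hop
        current = random.choice(adj[current])   # uniform random neighbour
    return None
```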
[ [ one - hop - replication ] ] one - hop replication + + + + + + + + + + + + + + + + + + + one - hop replication networks ( also called _ lookahead networks _ ) are networks where each node knows the identity of its neighbors and so it can reply on their behalf .hence , to find a certain node by a random walk it suffices to visit any of its neighbors .this feature is present for example in social networks , where to find some person it is usually enough to locate any of her / his friends .also , certain proposals to improve the resource location process on p2p systems ( some based on random walks ) assume that each node knows the resources held by its neighbors , so to discover some resource ( such as a file or a service ) it suffices to visit any of the neighbors of the node(s ) holding it . in one - hop replication networks ,when the random walk visits some node we say it also _ discovers _ the neighbors of .hence , we will use two different terms to refer to the coverage of the random walk .we denote by _ visited nodes _ those that have been traversed by the random walk , and by _ covered nodes _ the visited nodes and their neighbors .see figure [ fig : illustration ] for an illustrative example .[ [ previous - work - and - the - revisiting - effect ] ] previous work and the revisiting effect + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + there is some research work related with the characterization of random walks in one - hop replication networks . in the authors prove that in the power - law random graph the amount of hops for a random walk to discover the graph is sublinear ( faster than coupon collection , with which the random walk is compared in ) .also , manku et al . study the impact of lookahead on p2p systems where searches are routed through greedy mechanisms .in another work , adamic et al . try to find analytical expressions for cover time of a random walk in power - law networks with two - hops replication .they detected divergences between the analytical predictions and the experimental results .the reason for such discrepancy , as the authors point out , is the _ revisiting effect _ , which occurs when a node is visited more than once . in small - world networks , where a small number of nodes are connected to other nodes far more often than the rest, it is quite common for random walks to visit often these highly connected nodes .[ [ our - contributions ] ] our contributions + + + + + + + + + + + + + + + + + although there is a plethora of interesting results about random walks , we have noticed that there are situations where current findings are not straightforward to apply , especially on communication networks with one - hop replication .for example , in such networks , we can be interested on studying beforehand the expected behavior of the random walk to evaluate if it suits the system requirements .we characterize the random walk performance by four values : * _ the expected coverage_. given by the expected number of visited and covered nodes of each degree at each hop of the random walk . * _ the expected average search length_. expected length of searches in number of hops , assuming that the source and destination nodes of each search are chosen uniformly at random . obtained from the coverage estimations . * _ the expected average search duration_. 
expected time to solve searches .obtained from the average search length , given the _ processing capacity _ of each node and the _ load _ on the network due to queries .* _ the maximum load that can be injected to the network _ without overloading it . in this work we provide a set of expressions that model the behavior of the random walk andgive estimations for the three previous parameters .our claim is that these expressions can be used as a mathematical tool to predict how random walks will perform on networks of arbitrary degree distribution .then , we do not only address the coverage problem ( i.e. to estimate the amount of nodes covered after each hop of the random walk ) , but we also apply queuing theory to model the response time of the system depending on the load . as we show , this approach allows to compute in advance important magnitudes , such the expected search duration or the maximum load that can be managed by the network before getting overloaded .additionally , we find our model useful to study how certain features of the network impact on the performance of searches .for example we find that the best average search time is achieved only if the nodes with higher degrees have also greater processing capacities .the expressions related with the estimation of covered nodes at each hop are the most complex part of the model .they must deal both with the one - hop replication feature and the revisiting effect .however , we should remark that the model can be trivially adapted to networks where the _ one - hop replication _ property does not hold , and the search finishes only when the node we are searching for is found ( see the last paragraph in section [ subsec : averagelength ] ) . likewise , it is easy to modify the model to a variation of the random walk where each node avoids sending back the message to the node it received it from at the previous hop .we denote this routing mechanism _ avoiding random walks _ , and we deem it interesting for two reasons .first , intuitively , it should improve the random walk coverage ( we have confirmed this experimentally ) .second , it can be implemented in real systems using only local information , just as the pure random walk ( the sending node only needs to know from which neighbor the message came from ) .a feature of our proposal is that it does not require the complete adjacency matrix , that in some situations could be unknown .instead , thanks to the randomness assumption we apply it only needs the degree distribution of the network to compute the metrics we are interested in . on the other hand , this work is focused on networks with good connectivity and where the nodes degrees are independent ( see section [ subsec : modelandassumptions ] ) .another property of this model is that it takes into account the revisiting effect by modeling the coverage of the random walk at each hop depending on the coverage at the previous hop .that is , the evolution of the coverage is not assumed to be a memoryless process , a simplification that can lead to errors as seen in .the rest of the paper is organized as follows .section [ sec : knownnodesaverleng ] introduces our analysis of the coverage and average search length of random walks , along with some experimental evaluation .section [ sec : searchesbyrw ] is centered on obtaining the average search time of random walks . 
finally , in section [ sec : conclusions ], we state our conclusions and propose some potential future work .in this section , we analyze the behavior of random walks in arbitrary networks .we will represent networks by means of undirected graphs , where vertices represent the nodes and edges are the links between nodes .there are no links connecting a vertex to itself , or multiple edges between the same two vertices .this does not simplify our model , but makes it closer to real scenarios like typical p2p networks .we denote by the number of nodes in the graph and by the number of nodes that have degree ( i.e. , the number of nodes that have neighbors , ) . for all vertices its degree is lower than the size of the network , as in typical real world networks ( such as social and pure p2p networks ) each nodeis connected to only a subset of the other vertices in the system .we also denote by the probability that some node in the network , chosen uniformly at random , has degree ( i.e. , ) .the average degree of a network is given by .for a given network , the distribution formed by the probabilities ( for all ) is known as the _ degree distribution _ of such a network . a random walk over be defined as a_ markov chain _ process where the transition matrix ] .this property holds in networks built by random mechanisms , like the ones used to built the er and small - world networks we target in our experiments . to confirm that the degree independence assumption is valid we have run some experiments , whoseresults are shown in figure [ fig : property2 ] .these experiments aim to measure if the probability of reaching a node of degree when following a random walk is affected by the degree of the node the random walk was in the previous hop ( ) .our results lead to the conclusion that , that is , does not have an impact on .we should note also that this property is not fulfilled in certain graphs like those built by preferential mechanisms where it is well - known that there is a correlation among neighbors degrees .this could lead to certain deviations in mean - based analysis of the random walk ( as our own ) . in the following ,we study how many different nodes are visited by a random walk as a function of its length ( i.e. , of the number of steps taken ) and of the degree distribution of the chosen network .subsequently , we extend this result to also consider the neighbors of the visited node . these metrics allow us to quantify how much of a network is being `` known '' throughout a random walk progress .then , we turn our attention to provide an estimation of the average search length of a random walk . in the last subsection , we validate our analytical results by means of simulations .we assume that only the degree distribution and the size of the network are known .this metric represents the average number of different nodes that are visited by a random walk until hop ( inclusive ) , denoted by .note that nodes may each be visited more than once , but revisits are not counted . to obtain , we first calculate the average number of different nodes of degree that are visited by a random walk until hop ( inclusive ) , denoted by .we make a case analysis : * when ( i.e. , in the source node ) : since the source node of the random walk is chosen uniformly at random , then the probability of starting a random walk at a node of degree is .therefore , + * when ( i.e. 
, at the first hop ) : here we apply that the probability of visiting some node of degree at any hop is given by ( equation [ equ : pa ] ) .this is based on the assumption that the random walk behaves similarly to independent sampling despite dependencies between consecutive hops ( based on , see section [ subsec : modelandassumptions ] ) .we deem this premise to be reasonable even at the first stages of the random walk , due to the high mixing rates found in the type of networks on which we focus our work ( again , see section [ subsec : modelandassumptions ] ) .recall that the experimental evaluation both of this assumption ( fig .[ fig : property ] ) and of our model ( shown in section [ sec : experimentalresults ] ) , seem to verify this .thus , we have that + * when : we must take into account the probability of the random walk arriving at an already visited node . to compute such a probability , we define the following two values : * * : this represents the probability that , if the random walk arrives at a node of degree at hop , that node has been visited before .it can be obtained as follows : + note that we put instead of because the node visited at hop can not be visited at hop ( no vertex is connected to itself ) . * * : this is the probability that at any given hop the random walk is moving back to the node where it came from . ] .since any visited node has degree with probability , then the random walk will go back through the same link from which it came with probability .therefore , we have : + + using these probabilities , can be written as + finally , taking the results obtained in equations [ eq : visited0 ] , [ eq : visited1 ] and [ eq : visitedn ] , we have that the total number of different nodes visited until hop is this metric provides an estimation of the average number of different nodes _ covered _ by a random walk until hop ( inclusive ) , denoted by .a node is covered by a random walk if such a node , or any of its neighbors , has been visited by the random walk . to obtain , we first calculate the number of different nodes of degree covered at hop , denoted by . * when : + the first term takes into account the possibility that the source node has degree .the second term refers to the number of neighboring nodes ( of the source node ) of degree .if the source node has degree ( which happens with probability ) then , on average , nodes of degree will be covered , since each one of the neighboring nodes of the source node will have degree with probability .* when : given a link , we say that it has two endpoints , which are the two ends of the link .we denote the endpoint of the link at node by , and similarly the endpoint of the link at node by .we say that _ hooks onto _node .we also say that has been _ checked _ by a random walk if such a random walk has visited node .these concepts are graphically explained in fig .[ fig : endpoints ] .+ now , let us denote by the number of endpoints checked for the first time at hop , and by the probability that these endpoints hook onto still uncovered nodes of degree .then , ( where ) can be written as follows : + * * to obtain , we consider the number of different endpoints checked after hop to be .so , the number of endpoints checked for the first time at hop is .however , one of the endpoints hooks onto the node the random walk comes from ( i.e. 
, it can not increase the amount of nodes that are covered ) .thus : + * * to obtain , on one hand we consider the overall number of endpoints hooking onto uncovered nodes of degree just before hop is . on the other hand ,the overall number of endpoints is , and the overall number of checked endpoints until hop ( inclusive ) is .that is , the number of endpoints not checked just before hop is .therefore , we can write : + + substituting equation [ eq : dos ] and [ eq : tres ] into equation [ eq : uno ] , we have that + finally , taking into account equations [ eq : covered0 ] and [ eq : coveredn ] , we have that the total number of nodes covered after hop is using the previous metric , we are now able to provide an estimation of the average search length of random walks , denoted by .formally , is given by the following expression : where is the probability that the search finishes at hop ( i.e. , the probability that the search is successful at hop , having failed during the previous hops ) .let us define the _ probability of success _ at hop , denoted by , as the probability of finding , at that hop , the node we are searching for . can be obtained as the relation between the number of new nodes that will be covered at hop , and the number of nodes that are still uncovered at hop .that is , now , can be obtained as follows : therefore , can be written as we have run a set of experiments to evaluate the accuracy of the expressions presented in the previous subsections .the results obtained are presented in this section . for our work, we consider two kinds of network : small - world networks ( constructed as in ) and erdos - renyi networks ( constructed as in ) . *_ small - world networks _ . in it is shown that many real world networks present an interesting feature : each node can be reached from any other node in few hops .these networks are typically denoted small - world networks .the internet , the web , the science collaboration graph , etc .are examples of real world networks that are consistent with this property .this kind of networks are also specially interesting for our work because here the revisiting effect commented in section [ sec : intro ] is strongly present due to the uneven degree distribution .we build small - world networks using the mechanism described in , which leads to networks whose degree distribution follows a power - law distribution ( power - law networks ) . * _ erdos - renyi ( er ) random networks _ . for two any nodes is a constant probability that they are connected ) .the resulting degree distribution is a binomial distribution .see figure [ fig : networks ] for an illustrative example of both kinds of networks .[ [ number - of - visited - and - covered - nodes ] ] number of visited and covered nodes + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + our first goal is to study the evolution of the network coverage by random walks in real networks .the experiments were run on networks of two sizes , and nodes .networks were built using three different average degrees : , and . in each networkwe ran random walks of length .the source node of each random walk was chosen uniformly at random . 
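To make the simulation setup just described concrete, here is a minimal sketch (in Python with networkx, not the authors' code) that records, after each hop, the number of distinct visited nodes and the number of covered nodes (visited nodes plus their neighbours), for both the pure random walk and the avoiding variant. The graph generator, sizes and seeds are illustrative placeholders; the paper's experiments use its own small-world and Erdos-Renyi constructions and average over many independent walks before comparing with the analytical estimates of visited and covered nodes.

```python
# Sketch of the coverage experiment described above (not the authors' code).
# It measures, per hop, the number of distinct visited nodes and the number of
# covered nodes (visited nodes plus their neighbours) for a pure random walk
# and for the "avoiding" variant that never steps straight back.
import random
import networkx as nx

def walk_coverage(G, hops, avoiding=False, seed=None):
    rng = random.Random(seed)
    current = rng.choice(list(G.nodes))
    previous = None
    visited = {current}
    covered = {current} | set(G.neighbors(current))
    visited_per_hop, covered_per_hop = [1], [len(covered)]
    for _ in range(hops):
        neighbours = list(G.neighbors(current))
        if avoiding and previous is not None and len(neighbours) > 1:
            neighbours = [v for v in neighbours if v != previous]
        previous, current = current, rng.choice(neighbours)
        visited.add(current)
        covered.update([current], G.neighbors(current))
        visited_per_hop.append(len(visited))
        covered_per_hop.append(len(covered))
    return visited_per_hop, covered_per_hop

if __name__ == "__main__":
    # Illustrative parameters only; the paper uses its own network constructions
    # and averages many independent walks before comparing with the model.
    G = nx.erdos_renyi_graph(n=10_000, p=10 / 9_999, seed=1)  # mean degree ~ 10
    G = G.subgraph(max(nx.connected_components(G), key=len)).copy()
    v, c = walk_coverage(G, hops=20_000, seed=1)
    va, ca = walk_coverage(G, hops=20_000, avoiding=True, seed=1)
    print("pure walk: visited", v[-1], "covered", c[-1])
    print("avoiding walk: visited", va[-1], "covered", ca[-1])
```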
from the experiments, we obtained the average number of visited and covered nodes for each degree at each hop .finally , for each network , we extracted its degree distribution and apply the expressions described in the previous section to get a prediction of those values , given by and .results are shown in figures [ fig : visited ] , [ fig : visitedperdeg ] , [ fig : known ] , and [ fig : knownperdeg ] .for the sake of clarity , the experimental results are shown every 2000 hops in all figures .model predictions , on the other hand , are drawn as lines .figure [ fig : visitedrandomsmallworld50 ] shows the evolution of the number of visited nodes in er and small - world networks of size nodes , with two different average degrees and .we see that , although the length of the random walks is enough to potentially include all the nodes , only a fraction of them are visited .this happens because of the revisiting effect , and it is more evident when the number of hops increases , since the probability of revisiting grows with the number of hops .the revisiting effect is stronger in small - world networks than in random networks .the reason is the uneven distribution of the nodes degrees : there are some nodes with a very high degree that will be visited once and again by the random walk .thus , the chances of finding new nodes at each hop are lowered faster in small - world networks than in er networks .also , we observe in figure [ fig : visitedrandomsmallworld50 ] that in networks of smaller the revisiting effect is stronger .finally , figure [ fig : visitedrandomsmallworld50100 ] shows the impact of the network size on the amount of visited nodes .as expected , a greater implies a lesser number of revisits for the same number of hops . in all cases ,the prediction of the total amount of different nodes visited is very close to the experimental results . in figure[ fig : visitedperdeg ] we study the accuracy of the predictions of the amount of visited nodes of a particular degree at each hop , .we draw the results and predictions of degrees and , for , and . again, it can be seen that the model predictions fit very well with the experimental results , despite the revisits and the different behavior observed for different degrees .figure [ fig : known ] gives the results of the experiments run to study the coverage of the random walk .figure [ fig : knownramdomsmallworld100 ] shows how the coverage grows faster in small - world networks than in er networks for networks of the same average degree .this contrasts with the amount of visited nodes , that behave in the opposite way ( see previous paragraphs ) .the reason is the presence of well - connected nodes , that are quickly visited during the first hops of the random walk and increase considerably the coverage because of the high amount of neighbors they have .for example , after hops , the random walk has covered about half of the small - world network with , while in the er network of the same the random walk only has covered close to of the nodes .moreover , we can see that the network average degree has also an important impact on the coverage . in both kind of networksthe coverage grows faster when the average degree is higher . 
besides, we observe that the difference of the coverage for both networks decreases more quickly for a higher .figure [ fig : knownramdomsmallworld100 ] confirms the importance of the average degree , comparing the results for networks of different size and .in addition , figure [ fig : knownrandom50100 ] compares the results of the coverage for er networks of different sizes and average degrees . as it could be expected , the networks of smaller size require less hops to be covered .we observe also that the average degree has an important influence on the coverage difference .the greater the average degree , the faster the coverage of both networks converges .in all cases , the values given by the model predict very well how the coverage behaves and evolves .figure [ fig : knownperdeg ] allows to check the precision of the coverage predictions for different values , . as before ,the values provided are very close to the experimental results , although the behavior of the coverage changes strongly depending on the kind of network and average degree .finally , we check the model accuracy for random walks that avoid the previous node , the _ avoiding random walk_. as stated in section [ subsec : visitednodes ] , the avoiding random walk can be easily implemented by our model just by setting ( see equation [ equ : probback ] ) .results are shown in figure [ fig : avoid ] .there we compare the coverage of pure and avoiding random walks in er and small - world networks of size nodes and average degree .figure [ fig : avoidvisited ] confirms that , as expected , the avoiding random walk is able to visit a greater number of different nodes , as the revisiting effect is , to a certain degree , lessened .however , figure [ fig : avoidknown ] shows that this has little impact on the network coverage .we find that there is only a small increase on the amount of covered nodes when using avoiding random walks , for both kind of networks .nonetheless , in all cases the and values given by the model are very close to real results .[ [ average - search - length ] ] average search length + + + + + + + + + + + + + + + + + + + + + for the experiments regarding the average search length we used networks whose sizes ranged from to nodes . in each experimentwe ran searches , averaging the obtained results . at each search , two nodes ( one corresponding to the source and the other to the destination ) were chosen uniformly at random .starting from the source , a random walk traversed the network until the destination node was found ( i.e. , a neighbor of the destination is visited ) .the first thing to note is that the average search length grows linearly with the network size in both er and small - world networks . besides , the average degree has an important effect on the results .the bigger the , the shortest the searches are .the reason is that a higher implies that at each hop more nodes of the network are discovered .also , it can be observed in figure [ fig : averagesearchlength ] that the average search length is greater in er networks than in small - world networks . this can be explained if we take into account that random walks , on average , cover more nodes in small - world networks than in er networks ( see figures [ fig : known ] ) . as in the previous experiments ,figure [ fig : averagesearchlength ] also shows that our experimental results regarding the average search length correspond very close to the analytical results that were obtained . 
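The average-search-length experiment admits an equally short sketch. The routine below (illustrative only, not the authors' code) draws a source and a destination uniformly at random and walks until the destination is covered, i.e. until the walk reaches the destination itself or one of its neighbours, which is the one-hop-replication stopping rule used in the experiments above. Changing the stopping rule to require visiting the destination itself covers networks without the one-hop replication property, as noted earlier.

```python
# Sketch of the search-length experiment (illustrative, not the authors' code):
# the search succeeds as soon as the walk visits the destination or any of its
# neighbours (the one-hop replication property discussed above).
import random
import networkx as nx

def search_length(G, rng):
    nodes = list(G.nodes)
    src, dst = rng.sample(nodes, 2)
    targets = {dst} | set(G.neighbors(dst))      # nodes that "know" dst
    current, hops = src, 0
    while current not in targets:
        current = rng.choice(list(G.neighbors(current)))
        hops += 1
    return hops

def average_search_length(G, searches=10_000, seed=1):
    rng = random.Random(seed)
    return sum(search_length(G, rng) for _ in range(searches)) / searches

if __name__ == "__main__":
    G = nx.erdos_renyi_graph(2_000, 10 / 1_999, seed=1)
    G = G.subgraph(max(nx.connected_components(G), key=len)).copy()
    print("estimated average search length:", average_search_length(G))
```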
at this point, we would like to note that , given the assumptions we made in our analytical model , it seems that the very good match achieved with the experimental results could only occur if these assumptions are correct . as a matter of fact, we have verified , in practice ( see figs . [fig : property ] and [ fig : property2 ] ) , that the type of networks we consider in this paper , indeed , fulfill our assumptions . on the other hand ,it is clear that if we take into account networks that do not fulfill some of our assumptions , then a certain mismatch should be expected .for instance , networks built by preferential mechanisms are known not to preserve the independence of degrees of neighbors .therefore , we should not aim for a very close correspondence between analytical and experimental results .we have performed the same experiments we ran for random and small - world networks regarding the average search length , but this time with networks built using the preferential attachment mechanism proposed by barabsi .now , we have observed that , as expected , in preferential networks our experimental results do not correspond very close to the analytical results ( see fig .[ fig : aversearchlenbar ] ) .instead , the model seems to be consistently pessimistic .also , the error continuously grows with the network size .finally , we have tested the model against toroidal networks of different average degrees ( 5 dimensions ) and ( 8 dimensions ) .our intention is to analyze networks which are not random at all .results , which are shown in fig .[ fig : aversearchlentor ] , show a very clear mismatch among the results predicted by the model and the actual performance of the random walk .in this section , we present the second part of our model . herewe provide useful expressions that allow to predict the performance of random walks as a search tool , which is the main goal of this work .these expressions rely on the same estimation of the average search length ( like the one described in the previous section ) , that is combined with queuing theory . as a result , given the processing capacities and degrees of nodes , we are able to compute two key values : * the _ load limit _ : the searches rate limit that the network can handle before saturation . * the _ average search time _ : the average time it takes to complete a search , given the global load .also , we show how these expressions can be used to analyze which features a network should have so random walks have a better performance ( i.e. , searches are solved in less time ) . in particular , we focus on studying the relationship between degree and capacity distributions , showing that the minimum search time is obtained when nodes of higher capacities are also those of higher degrees . in our analysis, networks are assumed to be _ jackson networks _ : the arrival of new searches into the network follows a poisson distribution and the service at each node is a poisson process . our first step is to set the relationship between the average searches length and the system load .each search is processed , on average , times ( once at the source node , and once at each step of the random walk ) . 
using this, we can express the total load on all the nodes of the system , , as where is the load injected in the system by new searches , that we assume to be known .note that is composed of the new generated searches ( ) , plus the searches that move from one node to another , denoted by .hence , to compute the load on each particular node , , let us take into account that the probability that a random walk visits a node is proportional to the node s degree ( see section [ sec : knownnodesaverleng ] ) .this implies that , for each node , the load on node due to search messages , denoted , is proportional to its degree . as a result, we have that there is a value such that , for all .hence , , where is the sum of all degrees in the network ( i.e. , ) .therefore , assuming that all nodes generate approximately the same number of new searches ( ) , we can compute the average load at node as where the first term represents the load due to search messages , and the second term to the searches generated at node .note that any other search generation rate model can be implemented just by changing the term . in order to obtain the average search duration , , we use _ little s law _ , which states that where is the average number of _ resident _ searches in the network ( i.e. , searches that are waiting or being served ) , and is the average number of searches _ generated _ per unit of time ( i.e. , the arrival rate of searches ) .observe that is assumed to be known .hence , the challenge to compute is to obtain .let be the number of resident searches in node .then , . to obtain , we apply _ little s law _ again , this time individually to each node : where is the average search time at node and is the average _ load _ at node , which includes both searches generated at node and searches due to messages from other nodes .next we use that , by _jackson s theorem _ ( recall we assume the network to be a jackson network ) , each node can be analyzed as a single m / m/1 queue with poisson arrival rate and exponentially distributed service time with mean ( which can be computed from the node capacity , that we assume to be known ) .then : where is the utilization rate and is the average service time at node . as , we can write once we have and , we can combine them to obtain that is , we have provided an expression that computes the _ average search time _ using the topology , the average service times of nodes , and the search arrival rate . implicitly , in our previous results it has been assumed that no node is overloaded ( i.e. , for all ) .otherwise , the network would never reach a stable state .thus , a key value for any network is its _ load limit _ : the minimum search arrival rate ( ) that would overload the network , denoted by .clearly , being the minimum search arrival rate that would overload node . from equation [ equ : loadonnode ] , we have that also , since no node must be overloaded , it must be satisfied that combining equation [ eq : min1 ] with equation [ eq : min2 ] we have that , for each , the following must hold : therefore , the load limit for node is and [ [ average - search - duration-1 ] ] average search duration + + + + + + + + + + + + + + + + + + + + + + + .capacity distributions [ cols="<,<",options="header " , ] ) , we used equation [ equ : restimesystem ] , taking into account that follows an exponential distribution with average ( i.e. 
, ) , where can be computed as the relation between the number of resources known and their processing capacity.,width=340 ] in this subsection , we present the results of a set of experiments addressed to evaluate , in practice , the accuracy of our model for the average search time . as in the previous experiments ( section [ sec : experimentalresults ] ) , we conducted extensive simulations over er and small - world networks .all networks are made up of nodes . in each experiment, nodes generate new searches following a poisson process with rate , where is the global load on the network .when a node starts a search for a resource , it first checks whether it already knows that resource ( i.e. , if the node itself or any of its neighbors hold the resource ) .if so , the search ends successfully .otherwise , a search message for the requested resource is created and sent to some neighbor node chosen uniformly at random .when a node receives a search message , it also verifies whether it knows the resource .if so , the search is finished .otherwise , the search is again forwarded to another neighbor chosen uniformly at random .the experimental results are obtained by averaging the results that were obtained .we used six different global loads ( ) : , , , , and , where is the minimum arrival rate that would overload the network ( see section [ sec : loadlimit ] ) .the distribution of the nodes search processing capacities is derived from the measured bandwidth distributions of gnutella ( see table [ table : timemodelcapacities ] ) .capacities are assigned so that nodes with a higher degree are given a higher capacity .all nodes are assumed to have the same number of resources .each resource is held by one node , and all resources have the same probability of being chosen for search .the processing time at each node follows an exponential distribution with an average service time computed as .this average is computed dividing the amount of resources checked for each search ( the total amount of resources known , , minus the resources of the node the search message came from , ) by the node s capacity .for each load , we measured the average search times experimentally for each network .results are shown in fig .[ fig : timemodel ] .it can be seen that , as expected , the average search time always increases with the load , undergoing a higher growth when it approaches the maximum arrival rate .furthermore , our experimental results show a very close correspondence with the analytical results that were obtained .[ [ load - limit ] ] load limit + + + + + + + + + + we have computed the values for random and small - world networks with different average degrees . for each kind of network and average degreefive networks were built with the capacity distribution presented in table [ table : timemodelcapacities ] .our goal was to observe the variation of the for networks of the same type and , and also to study the difference among the values depending on the network kind and average degree .results , which are shown in figure [ fig : maxload ] , differ for random and small - world networks .the first thing to note is that small - world networks can handle a greater load than random networks .small - world networks present variations of the values even for networks of the same average degree . 
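The load limit and the average search time studied in these experiments can be computed directly from the model once the degree distribution, the per-node service times, the average search length and the injection rate are known. The sketch below is a numerical reading of the verbal description in this section: relayed traffic split across nodes in proportion to degree plus locally generated searches, an M/M/1 queue at every node, Little's law applied node by node and then network-wide, and a load limit taken as the smallest injection rate that saturates some node. Symbol names and normalisations are assumptions and should be checked against the paper's equations before reuse.

```python
# Numerical sketch of the queuing-theory part of the model (symbol names and
# normalisations are my reading of the verbal description above; check them
# against the paper's equations before reuse).  Each node is an M/M/1 queue,
# random-walk traffic splits in proportion to degree, and Little's law gives
# the end-to-end average search time.
import numpy as np

def queue_model(degrees, service_times, avg_search_len, lam_new):
    """degrees[i], service_times[i]: degree and mean service time of node i.
    avg_search_len: analytical/estimated average search length (hops).
    lam_new: total rate of new searches injected into the network."""
    k = np.asarray(degrees, dtype=float)
    s = np.asarray(service_times, dtype=float)
    n, D = len(k), k.sum()
    # per-node arrival rate: relayed walk messages (degree-proportional)
    # plus searches generated locally (assumed uniform across nodes)
    lam_walk = lam_new * avg_search_len
    lam_i = lam_walk * k / D + lam_new / n
    mu_i = 1.0 / s
    # load limit: largest injection rate that keeps every node below saturation
    lam_max = np.min(mu_i / (avg_search_len * k / D + 1.0 / n))
    if lam_new >= lam_max:
        return lam_max, np.inf
    T_i = 1.0 / (mu_i - lam_i)            # M/M/1 sojourn time per node
    residents = lam_i * T_i               # Little's law at each node
    T_search = residents.sum() / lam_new  # Little's law for the whole network
    return lam_max, T_search

# toy example: 5 nodes, higher-degree nodes given higher capacities
lam_max, T = queue_model(degrees=[1, 2, 3, 4, 10],
                         service_times=[0.05, 0.04, 0.03, 0.02, 0.005],
                         avg_search_len=8.0, lam_new=2.0)
print("load limit:", lam_max, "average search time:", T)
```

In the toy example the higher-degree nodes are given the higher capacities, anticipating the result proved later in this section that this assignment minimises the average search time.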
despite this variation ,it is clear that the load limit tends to grow with the .the reason is that a greater implies a smaller global load for the same rate of queries injected to the system .recall that the total load is given by ( equation [ equ : totalload ] ) and that higher average degrees lead to lesser average searches lengths ( figure [ fig : aversearchlennewman ] ) .hence , it is possible to perform more queries before overloading the network .erdos - renyi networks however behave in a very different manner .they present very little variations of the values . and ,more surprising , there is a small decrease of the load limit when the grows .this contrasts with the behavior of small - world networks .as it is shown in figure [ fig : aversearchlener ] , larger average degrees imply smaller average searches lengths and so a smaller global load .however , the that can be handled by the network does not change accordingly to this .the reason seems to be that in er networks the load is more evenly distributed among nodes .this implies that low capacity nodes have to handle an important amount of searches . besides , a greater average degree impacts on the average services times of these nodes , as they know , and so they have to process , more resources per search .hence , these nodes keep being the bottleneck of the network despite the smaller average search length , preventing the system to be able to handle a greater load .however , it is important to recall that these results are also due to the capacity distribution used , and how it was distributed among the nodes . in small - world networks , if we assign low capacities to high degree nodes we can expect them to become bottlenecks of the network that force small values . in er networks , adding more high capacity nodes could change the tendency so it would grow with the average degree . exploring all these phenomena is beyond the scope of this paper . .five different networks are created for each network type ( er or small - world ) and average degree .the resulting are shown grouped by .,width=340 ] in this section we show that , when there is a full correlation between the _ capacity _ of a node ( i.e. , the number of searches a node can process per time unit ) and its degree , this leads to a minimal value of the average search time .let us first state the relation we assume between the capacity and the average service time of a node .we assume that the first is a parameter that does not depend on the degree or the number of resources known by the node , and only depends on the processor and network connection speeds .we assume that the second is a strictly increasing function of the node s degree .we assume that a node s service time is directly proportional to its degree and inversely proportional to its capacity as follows : let us now consider a pair of nodes , such that ( so ) , and two possible positive capacities and , such that .we show that , if no other degree or capacity assignment changes , having and gives a smaller average search time , , than the average search time with reverse assignment and . using eq .[ equ : servicetime ] , we obtain the following possible average service times : in which are the service times obtained with the first capacity assignment and are the service times obtained with the second . from the above equations , we have and let and be the loads on and . 
since , then .hence , from this and eq .[ equ : servtimes ] , we find that to compute the values and , we use eq .[ equ : restimesystem ] where and are obtained with the first capacity assignment and and with the second .observe that remains the same for any node that is neither nor , because its degree , load , and capacity are just the same for both cases .hence , if then . from eqs .[ equ : littlelaw3 ] and [ equ : restimenode2 ] , we obtain that and finally , applying eqs .[ equ : serprodequal ] and [ equ : sertimesdif ] , we conclude that and hence this proves that , for a given degree distribution , the best performance will be obtained by assigning the largest capacities to the nodes with the largest degrees .note that we have found a condition that is necessary in order to attain the minimum possible , once the degree distribution has been set .however , different degree distributions can obtain very different values .in this paper , we have presented an analytical model that allows us to predict the behavior of random walks .furthermore , we have also performed some experiments that confirm the correctness of our expressions .some work can be carried out to complement our results .for instance , several random walks can be used at the same time , a situation that could be used to further improve the efficiency of the search mechanism .these random walks could run independently or , in order to cover separated regions on the graphs , coordinate among them in some way . c. gkantsidis , m. mihail , a. saberi , on the random walk method for p2p networks , in : proceedings of the twenty - third annual joint conference of the ieee computer and communications societies , infocom 2004 , vol . 1 , hong kong , 2004 , pp . 148159. y. chawathe , s. ratnasamy , n. lanham , s. shenker , making gnutella - like p2p systems scalable , in : proceedings of the 2003 conference on applications , technologies , architectures , and protocols for computer communications ( sigcomm 2003 ) , karlsruhe , germany , 2003 , pp . 407418 .q. lv , p. cao , e. cohen , k. li , s. shenker , search and replication in unstructured peer - to - peer networks , in : proceedings of the 16th international conference on supercomputing , new york , new york , united states , 2005 , pp .q. lv , s. ratnasamy , s. shenker , can heterogeneity make gnutella scalable ? , in : revised papers from the first international workshop on peer - to - peer systems , cambridge , united states , 2002 , pp .94103 .n. bisnik , a. abouzeid , modeling and analysis of random walk search algorithms in p2p networks , in : proceedings of the second international workshop on hot topics in peer - to - peer systems ( hot - p2p 2005 ) , ieee computer society , 2005 , pp .95103 .i. mabrouki , x. lagrange , g. froc , random walk based routing protocol for wireless sensor networks , in : proceedings of the 2nd international conference on performance evaluation methodologies and tools ( valuetools 07 ) , icst ( institute for computer sciences , social - informatics and telecommunications engineering ) , 2007 , pp .110 . c. law , k .- y . siu , distributed construction of random expander networks , in : proceedings of the 22nd annual joint conference of the ieee computer and communications societies ( infocom 2003 ) , vol . 3 , 2003 , pp .21332143 . c. gkantsidis , m. mihail , a. 
saberi ,random walks in peer - to - peer networks , in : proceedings of the twenty - third annual joint conference of the ieee computer and communications societies , infocom 2004 , vol . 1 , hong kong , 2004 , pp . 120130. g. s. manku , m. naor , u. wieder , know thy neighbor s neighbor : the power of lookahead in randomized p2p networks , in : proceedings of the 36th annual acm symposium on theory of computing ( stoc 2004 ) , acm press , 2004 , pp . 5463 .v. cholvi , p. a. felber , e. w. biersack , efficient search in unstructured peer - to - peer networks , in : proceedings of the sixteenth annual acm symposium on parallelism in algorithms and architectures , barcelona , spain , 2004 , pp .271272 .a. sinclair , improved bounds for mixing rates of marked chains and multicommodity flow , in : lecture notes in computer science , proceedings of the 1st latin american symposium on theoretical informatics ( latin 92 ) , vol .583 , springer - verlag , 1992 , pp .474487 .a. tahbaz - salehi , a. jadbabaie , small world phenomenon , rapidly mixing markov chains , and average consensus algorithms , in : proceedings of the 46th ieee conference on decision and control , ieee computer society , 2007 , pp .276281 .a. broder , r. kumar , f. maghoul , p. raghavan , s. rajagopalan , r. stata , a. tomkins , j. wiener , graph structure in the web , computer networks : the international journal of computer and telecommunications networking 33 ( 2000 ) 309320 .s. saroiu , p. k. gummadi , s. d. gribble , a measurement study of peer - to - peer file sharing systems , in : proceedings of spie ( proceedings of multimedia computing and networking 2002 , mmcn02 ) , vol .4673 , 2002 , pp . 156170 .
|
random walks are gaining much attention from the networks research community . they are the basis of many proposals aimed at solving a variety of network - related problems such as resource location , network construction , node sampling , etc . this interest in random walks is justified by their inherent properties : they are very simple to implement , since nodes only require local information to take routing decisions ; they demand little processing power and bandwidth ; and they are very resilient to changes in the network topology . here , we quantify the effectiveness of random walks as a search mechanism in _ one - hop replication networks _ : networks where each node knows its neighbors ' identities and resources , and so it can reply to queries on their behalf . our model focuses on estimating the expected average search time of the random walk by applying network queuing theory . to do this , we must first provide the expected average search length , which is computed by means of estimations of the expected average coverage at each step of the random walk . the model takes into account the _ revisiting effect _ : the fact that , as the random walk progresses , the probability of arriving at nodes already visited increases , which impacts how the network coverage evolves . that is , we do not model the coverage as a memoryless process . furthermore , we conduct a series of simulations to evaluate , in practice , the above mentioned metrics . our results show a very close correlation between the analytical and the experimental results . keywords : random walk , look - ahead networks , average search time , average search length
|
natural scenes have a large range of intensity values(thus a large dynamic range ) and conventional non - hdr cameras can not capture this range in a single image . by controlling various factors , one of them being the exposure time of the shot we can capture a particular window in the total dynamic range of the scene .so we need multiple `` low dynamic range '' images of the scene to get the complete information of the scene .fig.1 illustrates an example .the internal processing pipeline of the camera is highly non linear i.e. the pixel intensity value at location , is equal to where is the exposure time of the shot and is the irradiance value at location . is known as the camera response function of that particular camera which is a non linear function .given the values of for differently exposed images of the same scene we can get a robust , noise free estimate of the s ( methods like debevec et .1997 ) use a weighted average of the to get a robust estimate of the corresponding .we use a deep neural network to estimate the function taking as input the ldr pixel intensities of 5 ldr images of a static scene to estimate the irradiance values , i.e. the hdr map of the scene .fig 1 . shows the camera acquisition pipeline in modern cameras and the non - linear transforms involved . + + we further conduct experiments of getting another similar convolutional neural network to approximate a tone mapping operator . our training setincludes hdr images of scenes and their corresponding tone mapped images generated by one of the tone mapping operators provided in matlab s hdr - toolbox ( banterle et al .2011 ) which gives the highest value of the tmqi metric ( yeganeh et al . 2013 ) .we try further experiments to improve the results , details of which are provided in the main report .+ + this is the end to end pipeline involved in digital image acquisition in modern digital cameras , which indicates that the mapping from the scene radiance to the pixel values at a spatial location in the image is non - linear .we have a collected a dataset of 957 hdr images from the internet from the websites of various groups around the world working on hdr imaging and the camera response functions of the cameras with which those photos were taken .we use these to generate ldr images from the hdr s whose exposure time values are taken from a list of exposure times ( a geometric progression with the first term 1 , common ratio 4 and the last term equal to .our network takes as input a stack of 5 rgb ldr images of a particular scene , each having a different exposure . in the initial experiments we fixed the exposure of these ldr images to be [ 1,8,64,512,4096 ] , and then we moved on to adaptive exposure based method in which we choose first the ldr image of a scene which gives the maximum value of entropy ( entropy of a single channel imageis defined as - where is the histogram count of a particular intensity value in the image and the sum is calculated for all the possible intensities in the image ) and we take two images of the previous and next two intensity values .we have 3 networks , one for each of the r , g and b channels of the inputs , so we conducted many experiments with both the former and the latter case approach but we were able to obtain plausible results only in the first case where the exposure times were fixed .the graphs of training error vs. 
epochs for 3 different models which turned out to be the best after testing many different sets of hyperparameters are shown below for the case where the exposure times are kept constant . the final test error that we get forthe best model is .we conducted experiments in models with dropout in each layer with in order to improve generalization .we also also added spatial batch normalization just before passing the activations of each layer through the relu non linearity .batch size was kept 40 ( decided by the memory limitations of the gpu ) .batchnorm strictly improved the results as the same training error was attained in less number of epochs .the architecture of the network is illustrated in the fig [ fig : tone_cnn ] we first create a dataset using the existing 957 hdr images .we then use the tone mapping operators provided in the hdr - toolbox by francesco banterle et al . and use them to create different tone maps of each hdr and run the tmqi metric on each of the tone maps and choose the one which gives the highest tmqi score .some are local and some are global tone mapping operators , so our approach is not fully justified .we then train a convolutional neural network whose input is the hdr image and its corresponding truth is the best tone map corresponding to the tmqi metric .we use 3x3 conv in the first layer followed by 1x1 convs in the subsequent layers .we get this intuition from the works of mertens et al .whose method most of the time gave the best tmqi score .in their work the final intensity value at location depends only on the 3x3 neighborhood of radiance value in its corresponding hdr image at location .further study of the other tone mapping works is required in order to improve the architecture after the pilot testing that we have done in the course of the summer .after preliminary results it was observed that the network was not able to deal with high frequency inputs simultaneously with low frequency ones so in order to tackle that problem we first convert both the input and output pairs to lab space , apply a bilateral filter to the channel , create a new channel and train 4 networks each for these new channels as well as for a and b channels .we obtain better results for this method . in order to obtain a good estimate of the hyperparameters of the network, we test out several values of then by training their corresponding architectures for 2 epochs and observing the validation error . due to computational constraints , this could not be afforded for more epochs and only 4 sets of hyperparameters could be tested . 
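As an aside on the adaptive exposure selection described earlier in this section, a possible implementation of the entropy criterion is sketched below. The 256-bin, 8-bit intensity histogram, the normalised form of the entropy, and the way the five-image window is clamped at the ends of the exposure list are assumptions made here rather than details fixed by the text.

```python
# Sketch of the entropy-based exposure selection described above (illustrative;
# the histogram range and the clamping at the ends of the list are my choices).
import numpy as np

def image_entropy(channel, bins=256):
    """Shannon entropy of a single-channel image from its intensity histogram."""
    hist, _ = np.histogram(channel, bins=bins, range=(0, 255))
    p = hist.astype(float) / hist.sum()
    p = p[p > 0]
    return -np.sum(p * np.log2(p))

def select_exposures(ldr_stack):
    """ldr_stack: list of single-channel LDR images of the same scene, ordered
    by increasing exposure time (at least 5 images assumed).  Returns the
    5-image window centred on the maximum-entropy exposure, shifted inwards
    when the maximum lies near either end of the stack."""
    entropies = [image_entropy(img) for img in ldr_stack]
    best = int(np.argmax(entropies))
    start = min(max(best - 2, 0), len(ldr_stack) - 5)
    return ldr_stack[start:start + 5]
```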
at ( 0.2,-1 ) [ cols="^ " , ] ; ( 1.75 + 4.5 + 2.25 + 2.25,0.25 ) ( 2.75 + 4.5 + 2.25 + 2.25,0.25 ) ( 2.75 + 4.5 + 2.25 + 2.25,1.25 ) ( 1.75 + 4.5 + 2.25 + 2.25,1.25 ) ( 1.75 + 4.5 + 2.25 + 2.25,0.25 ) ;for all the data processing tasks we use matlab and for implementing and testing our neural networks we use the torch framework in the lua scripting language , and most of the models are trained on a single nvidia geforce gt 730 graphics processor , although for a brief amount of time during which we had access to a hpc node which had 2 gpu s , a tesla k20c and a titan x , we did multi - gpu training of our models using the following algorithm - * have the same network on the 2 gpu s at the beginning of every iteration in an epoch .* independently processing two different batches on the two gpu s and then copying over the accumulated gradients in the backward pass on one of the gpu s to the other , adding them to the accumulated gradients on the other gpu during the backward pass on it . * updating the parameters of the model on the gpu to which the gradients were copied .* process the next set of batches one drawback with this approach was that the inter gpu communication overhead outweighed the almost 2x gain time in the actual training of the networks ( the time required to the forward - backward pass ) . during our other experiments , in order to save time in loading of the data , we implemented a multi - threaded approach to load our mini - batch . + another important thing to note is that we were not able to process even a single input example of dimensions 15 x m x n as our images were of quite high resolution and during a forward pass of the network since every individual module in torch caches its local output , the gpu s memory did nt turn out to be sufficient , so we broke each image into patches of 64 x 64 and after that we were able to keep the minibatch size to be 40 without overloading the gpu s memory . due to computing power issues we were not able to test our models that efficiently .the code will shortly be made available at my github repository .in the results we present the graph of the training error vs no . of epochs for the best three models(fig 3 . ) . the test error forthe best model is 0.09345 .visual results are shown below .it is clear that further experiments with validating the hyperparameters are required to find the optimal architecture for the task .the network is clearly able to generate plausible results for some of the colors but not all(fig . 3 and 4 ) .also it is clear that the network is able to generate outputs without under / over saturation of regions that have high and low radiance values in the same image which hence proves that the dynamic range of the output is quite high . 0.5 0.5 0.5 0.5 0.5 0.5 0.5 0.5 in the results we present the outputs of the final validated architecture for the cases where the network performs good as well as bad ( fig 7 . and 8 . ) .the final test error for that model after 28 epochs of training is 0.002764 ( fig 6 . ) .we also present the plot of training error vs no .of epochs for that model .1 . ) debevec , paul e. , and jitendra malik .`` recovering high dynamic range radiance maps from photographs . ''acm siggraph 2008 classes .acm , 2008 .mitsunaga , tomoo , and shree k. nayar .`` radiometric self calibration . ''computer vision and pattern recognition , 1999 .ieee computer society conference on .. vol .1 . ieee , 1999 .krizhevsky , alex , ilya sutskever , and geoffrey e. 
hinton .`` imagenet classification with deep convolutional neural networks .'' advances in neural information processing systems . 2012 .krizhevsky , alex , ilya sutskever , and geoffrey e. hinton .`` imagenet classification with deep convolutional neural networks .'' advances in neural information processing systems . 2012 .+ 5.)wang , zhou , et al .`` image quality assessment : from error visibility to structural similarity . ''ieee transactions on image processing 13.4 ( 2004 ) : 600 - 612 .
|
we propose novel methods of solving two tasks using convolutional neural networks : first , generating the hdr map of a static scene from differently exposed ldr images captured with conventional cameras , and second , finding an optimal tone mapping operator that gives a better score on the tmqi metric than existing methods . we quantitatively evaluate the performance of our networks and illustrate the cases where they perform well as well as those where they perform poorly .
|
a bidirectional or two - way relaying scenario consists of nodes a and b wanting to exchange data using a relay r. if the relay performs plnc , the relaying has two phases : multiple access ( ma ) phase and broadcast ( bc ) phase . in the ma phase , users transmit signals simultaneously to the relay .the signal received at the relay is a noisy sum of transmitted signals scaled by their respective channel coefficients .the relay applies a many - to - one map on the received symbol , such that the users can decode the desired message , given their own message and a knowledge of the map .the salient feature of this scheme is that the mapping depends on the _ fade state _ of user - relay channels .hence , the complex plane representing the ratio of channel coefficients ( or fade state ) has to be partitioned to indicate which map is to be used in a given region .relaying carried out in this manner is highly efficient compared to conventional relaying and network coding at bit - level . .both users are assumed to apply the same constellation during the ma phase .however , due to varying channel conditions ( i.e. average snr ) , each link may only be able to support constellations up to a certain cardinality .such a scheme is known as heterogeneous plnc ( hepnc ) which was proposed by zhang et al .three cases of hepnc viz .qpsk - bpsk , 8psk - bpsk , and 16qam - bpsk are investigated in and the network coding maps are obtained using cnc algorithm . however , a computationally efficient way of obtaining the network coding maps and the corresponding fade state region boundaries using the concept of _ singular fade states _ ( sfs ) , is known for _ symmetric _ psk modulations .also , the concept of _ clustering independent _ regions is not considered in . in this paper, we provide an analytical approach to the plnc scheme with heterogeneous psk modulations by extending the framework of muralidharan et al. .the contributions of this paper are : * the analytical framework provided in this paper enables the investigation of two - way relaying using hepnc with psk modulations .this includes cases like 8psk - qpsk plnc which are not considered by zhang et al . *the approach taken by zhang et al . uses a computer search based cnc algorithm to find the boundaries of _ relay - mapping regions_. the complexity increases with the order of psk signal sets used .this work provides explicit equations for the boundaries of _ relay - mapping_ regions . * for the hepnc modulations dealt by zhang et al , where one of the users employs bpsk signal set , the boundaries of relay - mapping consist of straight lines only . for a general case, the boundaries also include arcs of circles , for which this paper gives explicit equations . *this work generalizes some results of .for example , if same order of psk modulation is used at both users , the _ internal _ clustering independent ( ci ) region is obtained from complex inversion of _ external _ ci region .this paper provides equations to obtain boundaries of internal ci regions for heterogeneous psk modulations .the paper is organized as follows : section ii describes the system model , which is followed by the derivation of the exact number and location of sfs in section iii .the latin rectangles used for qpsk - bpsk and 8psk - bpsk schemes , which serve as many - to - one maps at the relay are listed in section iv . the analytical framework for partitioning of the fade state planeis given in section v. 
thereafter , the performance is evaluated in terms of bit and relay error rates in awgn and rayleigh fading channels .the results match those provided by zhang et al .the hepnc system comprises of two users a and b , wanting to exchange data through a relay r. the notation from and has been used and extended . let a and b use psk modulations of cardinality , where , for .subscripts is used for a , b interchangeably , to indicate parameters of a and b respectively .that is , a wants to send a binary - bit tuple to b , and b wants to send a binary - bit tuple to a. without loss of generality , .let the functions mapping the bit - tuples to complex symbols be binary to decimal mappings given by for .this paper considers users using symmetric -psk constellations of the form .the relaying has two phases , ma phase and bc phase .code ( a ) at ( 0 , 3 ) ; ( b ) at + ( 1.5,3 ) ; ( c ) at + ( 3,3 ) ; ( a ) ( b ) node[pos=.5,sloped , above ] ; ( c ) ( b ) node[pos=.5,sloped , above ] ; at ( 0,2.5 ) ; at ( 3,2.5 ) ; at ( 1.5,2 ) ; \(a ) at ( 0 , 1 ) ; ( b ) at + ( 1.5,1 ) ; ( c ) at + ( 3,1 ) ; ( b ) ( a ) node[pos=.5,sloped , above ] ; ( b ) ( c ) node[pos=.5,sloped , above ] ; at ( 1.5,0.5 ) , ; at ( 1.5,0 ) ; in this phase , the users transmit complex symbols to the relay simultaneously . the user transmissions , and , where , and . the received signal at r is given by where and are fade coefficients of the a - r and b - r links respectively .the additive noise is , where denotes circularly symmetric complex gaussian random variable with zero mean and variance .slow fading is assumed . the pair is defined as a fade state.where it is assumed that the fade states are distributed according to a continuous probability distribution .the channel state information is assumed to be available only at the receivers .thus , no carrier phase synchronization is needed at the users .however , symbol - level timing synchronization is assumed . the effective constellation seen at the relay during the ma phase is therefore , the minimum distance between any two points in the constellation is defined as , a fade state where is called a _ singular fade state ( sfs)_. 
the set of all sfs , is .the relay r jointly decodes the transmitted complex symbol pair by computing a maximum likelihood estimate : instead of re - transmitting the estimated pair in the form of a constellation of cardinality , the relay applies a many - to - one map from this pair to points on another constellation having smaller cardinality .this many - to - one map depends on the fade state and is defined by .the signal set satisfies the inequality , .elements of mapped to the same complex number in by the map are said to form a cluster .let denote the set of all clusters for a given fade state , also called a clustering .let denote a generic clustering .let denote the cluster to which belongs under the clustering .the relay needs to broadcast a complex symbol .the cardinality of the transmitted constellation is assumed to be limited by the snr of the weakest link ( by convention b - r ) .thus , the relay transmits a set of symbols with the number of broadcast transmissions being and .the users a and b receive transmissions and , where , .the fading coefficients corresponding to the r - a and r - b links are denoted by and and the additive noises and are .the users then decode the individual transmissions and create composite symbols and to estimate and respectively .the map is known to the users , and so is the symbol transmitted by them , using which the data of the other user has to be recovered . to ensure this , the many - to - one map should satisfy the condition called exclusive law , which is this constraint leads to the mapping function being of the form of a _latin rectangle_. if the fade state is an sfs , the relay can not decide upon the transmitted pair , as multiple pairs lead to the same received symbol at the relay . for fade state values near the neighbourhood of an sfs ,the value of is greatly reduced , which might lead the relay mapping the estimated transmitted symbols to a wrong constellation point in the ma phase . to mitigate this harmful effect of an sfs , another constraint called the singularity removal constraintis imposed : for all pairs and , where , and , such that , ensure .the above mappings remove the detrimental effect of _ distance shortening _ .the minimum clustering distance for a given mapping / clustering and a given fade state ( ) is defined as a mapping is said to _ remove _ an sfs , if the minimum clustering distance .in this section , the location and number of sfs are obtained for -psk-psk plnc .the points in the -psk signal set are assumed to be of the form , where , and for =1,2 .the _ difference constellation _ , of a signal set , is given by for a symmetric -psk signal set , we have , let .it is sufficient to consider in the range to get all the members of .let , if is odd , and , if is even , and .thus , we have thus , , is given by , [c]{l?s } \ieeestrut 2sin\left(\frac{n\pi}{m_i}\right ) e^{j\left(\frac{(2k+1)\pi}{m_i}\right ) } & if , \\ 2sin\left(\frac{n\pi}{m_i}\right ) e^{j\left(\frac{2k\pi}{m_i}\right ) } & if , \ieeestrut \end{ieeeeqnarraybox } \right . \label{diff_const_eqn}\ ] ] where , .thus , sfs are of the form , for some .let . for integers and , where , and , and only if and . the singular fade states lie on circles with points on each circle with radii of circles given by , where .the phase angles of the points are given by , where and is , from ( [ diff_const_eqn ] ) , the amplitudes of sfs for some $ ] .we need to count the number of distinct values of . 
from lemma 1 and using , where , and , if and only if and .thus , out of the pairs of and , we subtract cases for which ( since they all lead to same and add one on behalf of all of them .hence , the number of distinct amplitudes of singular fade states is . from ( [ diff_const_eqn ] ) , the phase of singular fade states on the circles of different radii depend on the values of .if is odd and is also odd , the phase , where , .taking , it is clear that has distinct values and hence .thus , in this case the phase of points is , which shows that there are equispaced sfs on each circle .the other cases follow similarly . if is odd and is even , the phase . if is even and is odd , the phase .finally , if both and are even , . when , we get the result of lemma 2 of as a special case .hence , lemma 2 is the generalized version of the corresponding result given in .let users a and b use qpsk and bpsk signal sets respectively .thus , and , and . and . from the definition of sfs, we get , if we use lemma 2 , we see that the sfs are distributed in circles .the radii of the circles can be computed by taking cases of and , so that .there are points on each circle with phases , as given in lemma 2 .the sfs for qpsk - bpsk , 8psk - bpsk , and 8psk - qpsk are shown in fig .[ sfs : sub1 ] , [ sfs : sub2 ] , and [ sfs : sub3 ] respectively . .33 .33 .33a latin rectangle of order on the symbols from the set is an array , where each cell contains one symbol and each symbol occurs at most once in each row and column .from example 1 , there are nine sfs in qpsk - bpsk plnc .the effect of sfs at can not be eliminated .this is because , no matter what constraint is imposed on the mapping , the small value of leads to very short minimum euclidean distance . to eliminate the rest , we list the singularity removal constraints for each sfs for qpsk - bpsk plnc in table [ table_1 ] . based on the constraints and requirements of the exclusive law, we find the smallest possible set of latin rectangles that remove all the non - zero sfs .a list of latin rectangles is provided in table [ qpsk_bpsk_ls ] .a set of 8 mappings has also been obtained to remove the effect of 32 sfs in 8psk - bpsk plnc , which are shown in table [ 8psk_bpsk_ls ] ..singularity removal constraints for qpsk - bpsk plnc [ cols="^,^,^ " , ] first row is common for all maps ..5 .5 .5 .5 sfs in qpsk - bpsk plnc . ] .5 .5the mappings in the previous section removed the particular fade state .however , the best possible mapping for each has to be found .we follow the scheme proposed by for clustering at the relay , which is based on using the maps which were used for the removal of sfs . for a given realization of use one of the latin rectangles which is used to remove an sfs based on the criteria given below .the distance metric is defined as let , where . the criterion for selecting the sfs , whose map is to be used for a given fade state is as follows : + _ if then choose the clustering which removes the singular fade state h. _ the set of all values of fade states for which any clustering satisfying the exclusive law gives the same minimum cluster distance is called clustering independent region . first , an upper bound on the minimum cluster distance as in is obtained .the existence of such a region is proved by adapting lemma 10 from for the hepnc scenario . 
for any clustering satisfying the exclusive law , with unit energy -psk signal sets , , is upper - bounded as , as satisfies the exclusive law , where and .we have similarly , from the fact that , where and , we have . from the definitions , . hence , from theorem 1, it follows that regardless of which is considered , when .similarly , for , .let and denote the clustering independent regions corresponding to and respectively . from theorem 1 , for , .hence , let , and denote the circle centered at the origin with radii .let denote the set of circles whose centers are the sfs which lie on and have radii .the following theorem generalizes the boundary of external region given in .the region is the common outer envelope region formed by the circles , .see appendix .similarly , the transformation , is called complex inversion .it can be verified that by applying complex inversion in ( [ ext_ci_eq2 ] ) we do not get ( [ int_ci_eq ] ) unless . unlike lemma 11 of , getting internal clustering independent region for heterogeneous plnc with psk is non - trivial .the following theorem gives the general method to obtain the internal independent region for plnc using heterogeneous psk modulations .the region is the region formed by the intersection of following regions + where , such that , and where , and * denotes complex conjugation . from ( [ int_ci_eq ] ) , ,is the intersection of all regions in the complex fade state plane which satisfy the inequality by squaring on both sides of the above inequality and completing the magnitude square , we get the curves given in ( [ equation_ineq1 ] ) . the region other than the clustering independent regionis called clustering dependent region . from the criterion to associate a clustering to any given channel fade state , it is seen that every sfs has an associated region in the channel fade state plane in which the clustering which removes that sfs is used at the relay .hence the region associated with the sfs is , from lemma 2 , we know that the sfs which lie on the same circle , have phase angles of the form or for .hence , there is an angular symmetry of .therefore , it suffices to consider those sfs which lie on the lines and and use symmetry to obtain the regions for all values of .the pairwise transition boundary formed by the sfs and , denoted by , is the set of values of for which .the pairwise transition boundaries are either circles or straight lines .we reproduce the proof provided by namboodiri et al . [theorem 2 , ] for completeness .the curve is , squaring on both sides and manipulating , we get the following : assume . dividing by , completing the magnitude square on the lhs and substituting , we get , where if , we get the equation of a straight line , where , and .5 .5 .5 .5 this proof is independent of the signal sets considered at the users .it can be easily verified that the following lemmas from , which are used to get the regions associated with the sfs , can be applied to the case of heterogeneous psk modulations also .the region , where the sfs lies on the line , lies inside the wedge formed by the lines and . to obtain the boundaries of , where the sfs lies on the line , it is enough to consider those curves , which lie on the lines , and .consider the case where and .from example 1 , there are sfs .we select one and derive the boundaries of clustering dependent region .let .the difference constellation points for are and . 
from lemma 3 ,the region lies in the wedge formed by the lines and ( since ) .these lines are denoted by and .we find the equation of the boundary between and .the difference constellation points for are and . from theorem 1, the boundary , , satisfies the expression of a straight line ( since ) , which is .this is the horizontal line .similarly , for , and , and we get a vertical line , which is . since the regionis also bounded by the external clustering independent region , we have to consider the circle centered at and having radius .all the lines and circles have been shown in fig .[ reg_example ] .the desired region around h is the internal region formed by the intersection of these curves .the regions obtained for qpsk - bpsk case are shown in fig .[ reg : sub1 ] and they match with those in . in fig .[ reg : sub2 ] we show the regions for 8psk - bpsk scheme .the performance of the hepnc scheme is evaluated for awgn and rayleigh fading channels .results are compared with those given in . in this paper ,awgn channel indicates a channel with additive gaussian noise and without phase synchronization of users , i.e. , and .the two performance metrics are relay error rate ( rer ) and bit error rate ( ber ) . during simulation ,the average snr of the b - r link is kept constant and the average snr of a - r link is varied . to ensure error probability better than in b - r link , for awgn channel and for rayleigh fading channel .the performance parameters and the values are chosen to present a comparison with zhang et al . .[ diff_map ] shows rer as a function of the number of maps used at relay for qpsk - bpsk plnc scheme .it can be seen that one denoising map can not remove all the sfs and hence the performance improves till all three maps are used ( i.e all sfs are removed ) . the error floor behavior is a result of fixing . in end - to - end ber simulations ,if the ber from a to b is and from b to a is , the overall ber is calculated as the average ber across both users , .the rer & average ber are shown in fig .[ ber_qpsk : test ] and agree with . for the 8psk - bpsk plnc ,the average ber is calculated as .the overall ber & rer are shown in fig .[ ber_8psk : test ] and agree with .the paper extends the framework of to plnc with heterogeneous psk modulations .exact expressions for the number and location of sfs for heterogeneous psk modulations are given and equations for the boundaries of clustering independent and dependent regions are derived . a possible direction for future work would be to investigate qam - qam heterogeneous plnc analytically .another possible direction is to use the theory of constrained partially filled latin rectangles for finding the mappings similar to .the proof of theorem 2 follows by combining the inferences from lemma 5 , 6 and 7 described hereafter .let , , be a group of circles .the region between the outermost circle and the innermost circle in is called the ring formed by .let be the unit circle centered at the origin of fade state plane .the following lemma is modified version of lemma 17 in .the external clustering independent region is the unshaded region obtained when the interior regions of all the circles which belong to the sets , . from ( [ diff_const_eqn ] ) , for . 
from ( [ ext_ci_eq2 ] ) we have the equation is the exterior region of the circle centered at with radius . hence the result follows . it can be verified that lemma 19 and lemma 20 of can be adapted for our case , with the modified definitions of and . their statements are provided for completeness . the rings formed by , are fully shaded . among the circles , , is the outermost . the region between circles and is fully shaded . this work was supported partly by the science and engineering research board ( serb ) of the department of science and technology ( dst ) , government of india , through a j. c. bose national fellowship to b. sundar rajan . v. t. muralidharan , v. namboodiri , and b. s. rajan , `` wireless network - coded bidirectional relaying using latin squares for m - psk modulation , '' _ ieee trans . inf . theory _ , pp . 6683 - 6711 , 2013 .
|
in bidirectional relaying using physical layer network coding ( plnc ) , it is generally assumed that users employ same modulation schemes in the multiple access phase . however , as observed by zhang et al . , it may not be desirable for the users to always use the same modulation schemes , particularly when user - relay channels are not equally strong . such a scheme is called heterogeneous plnc . however , the approach in uses the computationally intensive closest neighbour clustering ( cnc ) algorithm to find the network coding maps to be applied at the relay . also , the treatment is specific to certain cases of heterogeneous modulations . in this paper , we show that , when users employ heterogeneous _ symmetric_-psk modulations , the network coding maps and the mapping regions in the _ fade state _ plane can be obtained analytically . performance results are provided in terms of relay error rate ( rer ) and bit error rate ( ber ) .
|
general relativity successfully describes the gravitational field , in terms of space - time geometry , on large scales . however on small scales it is hardly being probed by direct observations .nevertheless , there are indirect indications , and all of them point to its failure : singularity theorems imply not only infinite densities but even a complete breakdown of evolution after a finite amount of proper time for most realistic solutions .the best known examples are cosmological situations and black holes .traditionally , such a breakdown of evolution has been interpreted as corresponding to the beginning , or end , of the universe . in a situation like thisit is instructive to remember what is probably best expressed by the quote `` _ the limits of my language mean the limits of my world . _ '' . currently , our best language to speak about the universe is general relativity , but it clearly has its limitations .the description it provides of the world is incomplete when curvature becomes large , and such limitations should not be mistaken for the limits of the actual world it is supposed to describe .such strong curvature regimes are reached when relevant length scales become small such as in black holes or the early universe .one can understand the failure of general relativity as a consequence of extrapolating the well - known long - distance behavior of gravity to unprobed small scales .according to general relativity , the gravitational force is always attractive such that , at some point , there will be nothing to prevent the total collapse of a region in space - time to a black hole , or of the whole universe .moreover , this is a purely classical formulation , and further expected modifications arise on small scales when they approach the planck length .possible spatial discreteness , also an expectation from well - known quantum properties , would imply a radically different underlying geometry .these lessons from quantum mechanics can be made more precise , although they remain at a heuristic level until they are confirmed by a theory of quantum gravity ( see , e.g. , ) .first , the hydrogen atom is known to be unstable classically since the electron falls into the nucleus in a finite amount of time , a situation not unlike that of the collapse of a universe into a singularity . upon quantization ,however , the atom acquires a finite ground state energy and thus becomes stable .the value , , also shows the importance of quantization and the necessity of keeping the planck constant non - zero . 
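both of these dimensional estimates are easy to check numerically ; a minimal sketch ( using standard si constants ) evaluates the hydrogen ground - state energy quoted above and the planck - scale density bound invoked in the next paragraph :

from scipy.constants import hbar, m_e, e, epsilon_0, c, G, pi

# hydrogen ground-state energy, E_1 = - m_e e^4 / (2 (4 pi eps_0)^2 hbar^2);
# finite only because hbar is non-zero
E1 = -m_e * e**4 / (2 * (4 * pi * epsilon_0)**2 * hbar**2)
print(E1 / e)          # about -13.6 eV

# planck density, rho_P = c^5 / (hbar G^2), the dimensional bound expected for
# densities in quantum gravity; it diverges in the classical limit hbar -> 0
rho_P = c**5 / (hbar * G**2)
print(rho_P)           # about 5e96 kg per cubic metre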
in quantum gravityone can expect on similar dimensional grounds that densities are bounded by an inverse power of the planck length such as which again diverges in the classical limit but is finite in quantum gravity .the second indication comes from black body radiation where the classical rayleigh jeans law , which would imply diverging total energy , is modified by quantum effects on small ( wavelength ) scales .the resulting formula for its spectral energy density , due to planck , gives a finite total energy .this observation gives us an indication as to what could happen when one successfully combines general relativity with quantum theory .according to einstein s equations , matter energy density back - reacts on geometry .if matter energy behaves differently on small scales due to quantum effects , the classical attractive nature of gravity may change to repulsion .in addition , also the non - linear gravitational interaction itself , without considering matter , can change .all those expectations , some of them long - standing , have to be verified in explicit calculations which requires a _ non - perturbative _ ( due to strong fields ) and _ background independent _( due to degenerate geometry ) framework of quantum gravity .non - perturbativity can be implemented by adopting a canonical quantization , which when done in ashtekar variables also allows one to deal with background independence : there are natural smearings of the basic fields as holonomies and fluxes which do not require the introduction of a background metric and nonetheless result in a well - defined kinematical algebra .instead of using the spatial metric and extrinsic curvature for the canonical formulation following arnowitt , deser and misner , the ashtekar formulation expresses general relativity as a constrained gauge theory with connection and its momenta given by a densitized triad expressing spatial geometry . in the connection, is the spin connection compatible with the densitized triad and thus a function of , while is extrinsic curvature .the barbero immirzi parameter is free to choose and does not have classical implications .these basic fields must now be smeared in order to obtain a well - defined poisson algebra ( without delta functions ) suitable for quantization . however , the common 3-dimensional smearing of all basic fields is not possible because the spatial metric is now dynamical and there is no other background metric .fortunately , ashtekar variables allow a natural smearing without background geometry by using as smeared objects .indeed , their poisson algebra closes with well - defined structure constants .edges and surfaces arising in this definition play the role of labels of the basic objects and chosen freely in a given 3-manifold .alternatively , edges and surfaces can be introduced as abstract sets with a relation showing their intersection behavior which determines the structure constants of the algebra .this algebra can now be represented on a hilbert space to define a quantum theory .requiring diffeomorphism covariance of the representation , i.e. 
the existence of a unitary action of the diffeomorphism group , even fixes essentially the available representation to that used in loop quantum gravity .it is most easily constructed in terms of states being functionals of the connection such that holonomies become basic multiplication operators .starting with the simplest possible state which does not depend on the connection at all , holonomies `` create ''spin network states upon action .most generally , states are then of the form where is an oriented graph formed by the edges used in multiplication operators .each edge carries a label corresponding to an irreducible su(2 ) representation and arising from the fact that the same edge can be used several times in holonomies . finally , are contraction matrices to multiply the matrix elements of for edges containing the vertex to a complex number .these contraction matrices can be chosen such that the state becomes invariant under local su(2 ) gauge transformations .flux operators can be derived using the fact that they are conjugate to holonomies , and thus become derivative operators on states .replacing triad components by functional derivatives and using the chain rule , we obtain with non - zero contributions only if intersects the edges of . moreover , each such contribution is given by the action of an su(2 ) derivative operator with discrete spectrum .the whole spectrum of flux operators is then discrete implying , since fluxes encode spatial geometry , discrete spatial quantum geometry .this translates to discrete spectra also of more familiar spatial geometric expressions such as area or volume .it does , however , not directly imply discrete space-_time _geometry since this requires dynamical information encoded , in a canonical formulation , in the hamiltonian constraint .there are classes of well - defined operators for this constraint , which usually change the graph of the state they act on .their action is therefore quite complicated in full generality , not unexpectedly so for an object encoding the quantized dynamical behavior of general relativity . as usually , symmetries can be used to obtain simpler expressions while still allowing access to the most interesting phenomena in gravity such as cosmology or black holes . in loop quantum gravity with its well - developed mathematical techniques ,moreover , such symmetries can be imposed _ at the quantum level _ by inducing a reduced quantum representation from the unique one in the full theory .this is particularly useful because in symmetric situations usually no uniqueness theorems hold , for instance in homogeneous models where the widely used wheeler dewitt representation is inequivalent to the representation arising from loop quantum gravity . the loop representation in such models is then distinguished by its relation to the unique representation of the full theory . on the loop representation onecan then construct more complicated operators such as the hamiltonian constraint in analogy to the full construction .often , the operators simplify considerably and can sometimes be used in explicit calculations . following this procedure in cosmological situations of homogeneous spatial slices , the usual wheeler dewitt equation is replaced by a _non - singular _ difference equation . 
for this equation, the wave function of a universe model is uniquely defined once initial conditions are imposed at large volume .in particular , the difference equation continues to determine the wave function even at and beyond places where classical singularities would occur and also the wheeler dewitt equation would stop . for a mathematical discussion of properties of the resulting difference equations , see . as in the full setting , there are currently different versions ( such as symmetric and non - symmetric orderings , or other ambiguity parameters ) resulting in non - singular behavior . some versions , however , are singular which means that ambiguities are already restricted by ruling them out .nevertheless , some aspects can change also between different non - singular versions , such as the issue of _ dynamical initial conditions _ which are consistency conditions for wave functions provided by the dynamical law rather than being imposed by hand .they arise with stronger restrictions on wave functions in a non - symmetric ordering compared to a symmetric one .such ambiguities in simple models have to be constrained by studying more complicated situations ( see , e.g. , ) . through this procedure ,the theory becomes testable because it is highly non - trivial that one and the same mechanism ( including , e.g. , the same ordering choice ) applies to all situations .intuitively , the behavior can be interpreted as giving a well - defined evolution to a new branch _ preceding the big bang _ ; see for a general discussion .this new branch is provided by a new , binary degree of freedom given by the orientation of the triad .we are naturally led to this freedom since triads occur in the background independent formulation .it is precisely the orientation change in triad components which presents us with two sides to classical singularities . unlike the wheeler dewitt equation which abruptly cuts off the wave function at vanishing metric components , loop quantum cosmology has an equation connecting the wave function on both sides of a classical singularity .this is a very general statement and applies to _ all _ possible initial conditions at large volume , compatible with potential dynamical initial conditions .it is thus independent of complicated issues such as which of the solutions to the difference equation are normalizable in a physical inner product ( see for more details ) . for more precise information on the structure of classical singularities in quantum gravity ,however , one needs additional constructions .for instance , the evolution of semiclassical states can be studied in detail in some special models such as a flat isotropic model coupled to a free , massless scalar . 
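the mechanism can be illustrated with a toy recurrence . the sketch below is not the actual loop quantum cosmology difference equation ; the volume - like coefficient in it is invented and chosen only so that it never vanishes . it shows how initial data posed at large volume determine the wave function step by step on an integer , triad - like label , and how the recursion passes straight through the would - be classical singularity at zero to the opposite orientation .

# toy three-term difference equation on an integer, triad-like label n, with
# n < 0 interpreted as the opposite triad orientation; NOT the actual loop
# quantum cosmology equation, only an illustration of the recurrence idea
def c(n):
    return abs(n) + 1.0          # invented volume-like coefficient, never zero

n_max = 10
psi = {n_max: 1.0, n_max - 1: 0.9}        # initial data posed at large volume
for n in range(n_max - 1, -n_max, -1):    # recurse downward, through n = 0
    # c(n+1) psi(n+1) - 2 c(n) psi(n) + c(n-1) psi(n-1) = 0
    psi[n - 1] = (2.0 * c(n) * psi[n] - c(n + 1) * psi[n + 1]) / c(n - 1)

for n in sorted(psi):
    print(n, round(psi[n], 4))
# the wave function is determined on both sides of n = 0; if instead a
# coefficient such as c(n-1) vanished at some step, the corresponding value
# would decouple and the equation there would become a constraint on the
# remaining data, the analogue of the dynamical initial conditions above.

with this picture in mind , we return to the special models just mentioned , such as the flat isotropic model coupled to a free , massless scalar .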
here, the classical singularity turns out to be replaced by a bounce at small scales , connecting two semiclassical phases at larger volume .it remains open , however , how general this scenario is when potentials or anisotropies are included .when a physical inner product or precise semiclassical states are unavailable , one can make use of effective equations of the classical type which are ordinary differential equations in coordinate time but incorporate some prominent quantum effects .they have been introduced in and applied in the context of bounces in .recently , it has been shown , using a geometrical formulation of quantum mechanics , that this is part of a general scheme which agrees with effective action techniques common from quantum field theory where both approaches can be applied .thus , these equations allow an effective analysis of quantum theories in the usual sense .also those equations show bounces , sometimes even in semiclassical regimes , but not generically .effects can depend on quantization choices as well as on which quantum corrections are included .there are different types of corrections which in general are all mixed without any one being clearly dominant .a full study , including all possible quantum correction terms , is complicated and still to be done .all these bounce scenarios can be seen intuitively as confirmation of our expectation that quantum gravity should contribute a repulsive component to the gravitational force on small scales .such repulsion can stop the collapse of a universe and turn it into a bounce , after which the weakening repulsion will contribute to accelerated expansion in an inflationary scenario .unlike cosmological models , black holes require inhomogeneous situations .there are currently several techniques to get hints for the resulting behavior and for typical quantum effects in the physics of black holes . inside the horizon ,the schwarzschild solution is homogeneous because the killing vector field which is timelike in the static outside region turns spacelike . 
with the three rotational killing vectorsthis combines to a 4-dimensional symmetry group corresponding to kantowski sachs models .densitized triads with this symmetry can be written as ( see for details on this part ) such that and orientation , which is important for the singularity structure , is given by .the sign of is not relevant as there is a residual gauge transformation .from such a densitized triad , the spatial metric results .comparison with the interior schwarzschild metric suggests the following identification between space - time and minisuperspace locations : the _ schwarzschild singularity _ at and the _ horizon _ at .when quantized , the densitized triad components become operators acting on orthonormal states with , .this hilbert space is the analog of the spin network representation in the full theory , although most labels disappeared thanks to the high symmetry .also analogously to the full theory one can construct the hamiltonian constraint operator which gives rise to the dynamical law as a difference equation for the wave function depending on the triad components .this is singularity free as in the isotropic case , extending the wave function beyond the classical singularity .the situation is more complicated , however , because the classical minisuperspace now has two boundaries , one corresponding to the horizon at and one at the singularity corresponding to .but only one direction can be extended given only one sign factor from orientation .thus , the system provides an interesting consistency check of the general scheme by determining which boundary , the singularity or the horizon , is removed upon quantization .the horizon boundary should not be removed because at this place our minisuperspace approximation breaks down .indeed it is just the classically singular boundary which is removed by including the sign of in the analysis , providing a non - trivial test of the singularity removal mechanism of loop quantum cosmology .this rests crucially on the use of densitized triad variables which we are led to naturally in a full background independent formulation .while models also allow quantizations in terms of other variables , e.g. using co - triads or metrics , classical singularities appear at different places of minisuperspace and general schemes of singularity removal do then not exist . by extrapolating the extension of the interior schwarzschild geometry through the classical singularity to dynamical situations in the presence of matterone can arrive at a new paradigm for _ black hole evaporation _ founded on loop quantum gravity . as illustrated in fig .[ nonsingh ] , the quantum region around the classical black hole singularity is not a future boundary of space - time .correlations between infalling matter components are then not destroyed during evaporation . instead , matter will be able to leave the black hole region , defined by the presence of trapped surfaces and a horizon , after the evaporation process restoring all correlations .there are thus trapped surfaces which as in the usual singularity theorems of general relativity implies geodesic incompleteness . 
buthere , this does not lead to a singularity ; rather the space - time continuum is replaced by a discrete , quantum geometrical structure .only classical concepts break down but not the quantum gravitational description .there is then also no need to continue the horizon beyond the point where it meets the strong quantum region , although there can well be past trapped surfaces in the future of the quantum regime . in this sense ,the picture is similar to closed horizons enclosing a bounded space - time region which have been suggested earlier .there are several underlying assumptions for this scenario which are now being tested .for instance , for this picture space - time has to become semiclassical again after evolving through the quantum regime where discrete geometry is essential .otherwise , there will be no asymptotically flat future space - time to detect remaining correlations . in the schwarzschild interiordiscussed above , one can verify that solutions to the difference equation have to be symmetric under as a consequence of consistency conditions in the recurrence scheme .note that this is not a condition imposed on solutions but follows from the dynamical law , similarly to dynamical initial conditions for cosmology . in the non - interacting , empty schwarzschild solutionthe future is thus the time reverse of the past and in particular becomes semiclassical also to the future .whether semiclassical behavior in the future is also achieved in more complicated collapse scenarios is not known so far . the behavior is more complicated if matter is present or when inhomogeneities are considered . then , back - reaction of hawking radiation on geometry and scattering of matter leads to a future behavior different from the past of the quantum region . for such cases, it has not been shown that space - time becomes semiclassical after all fields are settled down . to complete the picture , access to the outside of the horizonis needed as well as a handle on field degrees of freedom of matter or gravity itself . moreover, the horizon dynamics must be understood taking into account quantum effects .there are two main ways to approach this complex issue : * _ effective equations _ , which have been successful in understanding qualitative aspects of homogeneous models ( e.g. , ) , are not yet available for inhomogeneous situations , but the homogeneous forms can be exploited in matchings . * _ midisuperspace models _ and their quantum dynamics close to classical singularities or at horizons , related to properties of difference equations , are being developed and have already given initial promising insights .gravitational collapse is often modeled by matching a homogeneous matter distribution ( such as a star ) to an inhomogeneous exterior geometry , following the work of oppenheimer and snyder .modeling the collapsing body by an isotropic interior solution with the scale of the body determined by , and matching it to a generalized vaidya spherically symmetric outside region with a function determining the outside matter flux , leads to the conditions at the matching surface . here , is the coordinate value for where the interior solution is cut off .the homogeneous interior can then be described by effective equations including repulsive quantum correction terms as discussed in sec .[ lqc ] .with such a term , the interior bounces which , through the matching , also influences the exterior geometry and its horizons . 
in the absence of effective equations for inhomogeneous situations ,the full outside behavior can not be determined .but at least in the neighborhood of the matching surface one can study the formation and possible disappearance of horizons . a marginally trapped spherical surface forms when is satisfied which , with the matching conditions , implies for the interior .classically , is unbounded and monotonic such that there is always exactly one solution to the condition of a marginally trapped surface at the matching surface .there is thus only a single horizon covering the classical singularity .with repulsive quantum effects , the situation changes : first , starts to decrease before the bounce in the effective dynamics , implying the existence of a second solution corresponding to an inner horizon as illustrated in fig .[ horizon ] .secondly , is bounded and there are cases , depending on parameters and initial conditions , without any solution for .the classical singularity is then replaced by a bounce which is covered by horizons only for larger mass .this scenario thus indicates the presence of a threshold for black hole formation .the quantum behavior across horizons can only be seen in inhomogeneous models .for spherically symmetric ones , the loop representation leads to states of the form subject to coupled difference equations ( one for each edge ) with coefficients which have been computed explicitly . also here ,superspace is extended by the freedom of ( local ) orientation : is determined by .again , the quantum equations are _ non - singular _ , however much more crucially depending on the form ( in particular on possible zeros ) of the coefficients . unlike in homogeneous models , a _ symmetric ordering _ is required to extend solutions . in isotropic modelsthis removes any possible dynamical initial conditions , but in less symmetric models the solution space is still restricted .how strong the restrictions will be has to be seen from a more detailed analysis of the resulting initial / boundary value problem for difference equations . also unlike homogeneous models , the anomaly issue plays a bigger role : coupled difference equations must be consistent with each other for a well - defined initial / boundary value problem . while the anomaly issue in this model is open as of now , the existence and uniqueness of solutions in terms of suitable initial and boundary values has been shown .this has to be revisited , however , for the issue of semiclassical properties . in this model ,there are further qualitatively different possibilities for the constraint operator . while the above discussion was based on a fixed number of labels ( an operator not creating new spin network vertices ) ,also other variants exist .for such a choice , the number of degrees of freedom would not be preserved when acting with the constraint , and a different type of recurrence problem arises .this freedom is analogous to quantization choices in the full theory which makes it possible to compare ambiguities and restrict choices by tight mathematical consistency conditions and the physical viability of scenarios .so far , the detailed quantization is far from unique ( except for kinematical aspects ) , but there are characteristic and robust generic effects . 
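the horizon - threshold behaviour described above can be reproduced with a few lines of numerics . the sketch below is only a stand - in for the actual analysis : it uses the frequently quoted effective friedmann equation with a ( 1 - rho / rho_c ) correction for a collapsing dust interior ( the quantum correction used in the text may differ in detail ) , and it takes marginal trapping of the matching surface R = a r0 to occur when adot^2 r0^2 = 1 , the marginally bound case , in units with g = c = 1 .

import numpy as np

# effective friedmann equation used as a stand-in for the quantum-corrected
# interior: adot^2 = (8 pi G / 3) rho (1 - rho / rho_c) a^2, dust rho = rho0 / a^3
G, rho_c, r0 = 1.0, 1.0, 1.0

def adot2(a, rho0):
    rho = rho0 / a**3
    classical = (8.0 * np.pi * G / 3.0) * rho * a**2
    return classical, classical * (1.0 - rho / rho_c)

def horizon_crossings(y):
    # number of times adot^2 r0^2 crosses 1 along the collapse
    return int(np.sum(np.diff(np.sign(y * r0**2 - 1.0)) != 0))

a = np.linspace(1e-3, 100.0, 500000)
for rho0 in (0.05, 5.0):                  # light and heavy collapsing interior
    cl, eff = adot2(a, rho0)
    print(rho0, "classical:", horizon_crossings(cl), "effective:", horizon_crossings(eff))
# classically adot^2 grows without bound as a -> 0, so there is always exactly
# one crossing (a single horizon covering the singularity); with the repulsive
# correction adot^2 is bounded, so the light interior never forms a trapped
# surface while the heavy one crosses twice (outer and inner horizon), which
# is the mass threshold discussed above.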
in midisuperspace models , it is also easier than in a full setting to impose horizon conditions at the quantum level and study quantum horizon dynamics . isolated horizons are particularly useful because in spherical symmetry they simplify considerably in ashtekar variables . this is useful for loop quantum gravity where ashtekar variables are basic , and here even the quantum dynamics simplifies in the neighborhood of horizons . moreover , the simplifications occur approximately also for slowly evolving horizons , such that even dynamical situations are accessible . alternatively , direct quantizations of classical expansion parameters have been formulated in for fully dynamical situations . conclusions drawn so far confirm quantum fluctuations of horizons , as suggested often before by heuristic arguments . one can also easily count the degrees of freedom of exactly spherically symmetric horizons , but the symmetry reduction removes far too many of them for a faithful counting of black hole entropy . this is the one issue where symmetric models are clearly not reliable and one has to use the full theory . fortunately , even the full dynamics simplifies if an isolated horizon is introduced as a boundary , allowing the correct counting of entropy for all astrophysically relevant black holes . the framework of loop quantum gravity provides promising indications , but so far it can not be seen as a complete theory due to a large amount of ambiguities , as they often occur in non - linear quantum theories ( discussed , e.g. , in ) . this fact belongs to a much broader issue about uniqueness in quantum gravity , where often two very different kinds are envisaged : a unique theory vs. a unique solution ( see also ) . these are indeed very different concepts , as the uniqueness of a theory as such is not testable even in principle and is thus of metaphysical quality . when a theory has a unique solution , however , its properties can be compared with observations at least in principle . in fact , both concepts may be contradictory for all we know so far : there are ambiguities in loop quantum gravity and thus no unique theory ( although solutions may be restricted by dynamical initial conditions , giving some degree of uniqueness at the level of solutions ) , while the supposedly unique string theory has a whole landscape of potentially admissible solutions . from a philosophical point of view , this situation has a precedent which led to the following statement : `` _ ...a vast new panorama opens up for him , a possibility makes him giddy , mistrust , suspicion and fear of every kind spring up , belief in m [ ... ] , every kind of m [ ... ] , wavers , finally , a new demand becomes articulate . so let us give voice to this _ new demand _ : we need a _ critique _ of m [ ... ] values , _ the value of these values should itself , for once , be examined _ . '' today , one may be tempted to complete the m - word by `` m - theory , '' but in those days it was actually `` morality . '' historically , philosophers attempted to construct something which in the current terminology could be called `` grand unified morality '' or gum , most widely known in the form of kant s categorical imperative . this was meant as a unique theory , but it did have too many solutions . nietzsche s lesson was that the overly idealistic approach had to be replaced by a rather phenomenological one , where he studied the behavior ( i.e. the phenomenology of morality ) in different cultures . similarly , quantum gravity may currently be at such a crossroads where idealism has to be replaced by phenomenology . quantum effects are significant at small scales and lead to qualitatively new behavior . intuitively , this can be interpreted as repulsive contributions to the gravitational force , for which there are several examples in the framework of loop quantum gravity . when such effects are derived rather than being chosen with phenomenological applications in mind , it is by no means guaranteed that their modifications take effect in the correct regimes . this gives rise to many consistency checks such as those discussed here in anisotropic and inhomogeneous models in the context of black holes . these effects , while not fixed in detail , are robust and rather direct consequences of the loop representation , with non - perturbativity and background independence being essential . with these effects , the theory can resolve a variety of conceptual and technical problems from basic effects , without the need to introduce new ingredients . at the current stage we have a consistent picture of the universe , including the classically puzzling situations of the big bang and black holes , which is well - defined everywhere . from here , one can use internal consistency and potential contact with observations to constrain the remaining freedom and test the whole framework . a. ashtekar and t. a. schilling , geometrical formulation of quantum mechanics , in _ on einstein s path : essays in honor of engelbert schücking _ , ed . a. harvey , pp . 23 - 65 , springer , new york , 1999 , gr - qc/9706069 . m. bojowald and a. skirzewski , quantum gravity and higher curvature actions , in _ current mathematical topics in gravitation and cosmology ( 42nd karpacz winter school of theoretical physics ) _ , hep - th/0606232 . m. bojowald , loop quantum cosmology : recent progress , in _ proceedings of the international conference on gravitation and cosmology ( icgc 2004 ) , cochin , india _ , _ pramana _ * 63 * , 765 - 776 ( 2004 ) , gr - qc/0402053 .
|
general relativity successfully describes space - times at scales that we can observe and probe today , but it can not be complete as a consequence of singularity theorems . for a long time there have been indications that quantum gravity will provide a more complete , non - singular extension which , however , was difficult to verify in the absence of a quantum theory of gravity . by now there are several candidates which show essential hints as to what a quantum theory of gravity may look like . in particular , loop quantum gravity is a non - perturbative formulation which is background independent , two properties which are essential close to a classical singularity with strong fields and a degenerate metric . in cosmological and black hole settings one can indeed see explicitly how classical singularities are removed by quantum geometry : there is a well - defined evolution all the way down to , and across , the smallest scales . as for black holes , their horizon dynamics can be studied , showing characteristic modifications to the classical behavior . conceptual and physical issues can also be addressed in this context , providing lessons for quantum gravity in general . here , we conclude with some comments on the uniqueness issue often linked to quantum gravity in some form or another .
igpg06/74 , gr - qc/0607130
quantum geometry and its implications for black holes
martin bojowald
institute for gravitational physics and geometry , the pennsylvania state university , 104 davey lab , university park , pa 16802 , usa
|
the widespread availability of affordable , high - quality commercial ccd cameras for astronomy now makes it possible for educators and amateur astronomers to conduct sophisticated observing programs with small telescopes and limited resources . in recent yearsthe potential for building inexpensive ccd spectrographs around these cameras has been recognized by several companies and individuals ( e.g. * ? ? ?observations of the spectra of stars , planets , and nebulae open up an exciting new realm of inquiry for beginning astronomers , and the design and construction of the spectrograph itself can provide students with a hands - on education in the principles of optics and engineering . here we describe one approach to designing and building a simple ccd spectrograph , based on introspec , an educational instrument we recently built for the harvard astronomy department .we had four design goals for introspec : ( 1 ) sufficient spectral range and resolution for a variety of projects in optical astronomy , ( 2 ) high throughput to enable observations of nearby nebulae and variable stars , ( 3 ) simple controls and a durable design suitable for student operation , and ( 4 ) a small telescope - borne weight .we decided upon a spectral range of at resolution .this wavelength range includes many important spectral features : h , fe , mg and na absorption lines ( prominent in stellar atmospheres ) , molecular absorption bands from ch and nh ( visible in planetary atmospheres ) , as well as h , n , and o emission lines ( characteristic of star - forming nebulae ) .spectral coverage extending futher to the blue would be useful , but most inexpensive ccd chips have poor uv response .our chosen resolution of 6 resolution allows detection of most of the commonly observed features in this spectral range and is adequate to resolve the closely spaced emission lines of h and [ nii ] .we were able to meet our design goals on a budget of slightly under 4000 went into buying a mid - range meade ccd camera and paying for the time and materials of a skilled machinist ( cbh ) .below we describe how introspec s design was tailored to the specific challenges posed by our budget and performance requirements . [ sc : optimizing ] reviews the general optical design constraints on any similar spectrograph , while [ sc : introspec ] examines the specific choices we made for introspec , with special attention to the instrument s fiber - fed design .we begin with a brief review of the layout of a generic spectrograph , then consider how to optimize its optical design . figure [ fg : generic ] shows the light path in a simple ccd spectrograph .the aperture accepts light from a small portion of the telescope focal plane that may contain the image of a star or a portion of an extended nebula .this light is collimated with a lens , producing a parallel beam .the beam travels to a diffraction grating , which disperses the light into a spectrum described by the grating equation ( e.g. * ? ? ?* ) : where is the diffraction order , is the wavelength of the light , is the grating line spacing , and and are the incoming and outgoing angles of the light with respect to the normal to the grating surface ( see schroeder for sign conventions ) .a reflection grating such as that shown in figure [ fg : generic ] , folds the light path as well as dispersing the light . 
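for reference , in the usual convention ( e.g. , schroeder's ) the grating equation referred to above reads

\[ m\lambda = d\,(\sin\alpha + \sin\beta) , \]

with $m$ the diffraction order , $\lambda$ the wavelength , $d$ the groove spacing ( the inverse of the ruling density ) , and $\alpha$ , $\beta$ the incidence and diffraction angles measured from the grating normal ; the corresponding reciprocal linear dispersion at the detector is $ d\lambda/dx = d\cos\beta / ( m\,f_{\mathrm{cam}} ) $ , the relation used in the numerical sketch further below .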
in this casethe angle between the incident and diffracted beam typically must be to allow the camera optics to stay clear of the collimated beam heading toward the grating . the grating normal generally points closer to the camera axis than to the collimator axis , leading to anamorphic magnification : a geometrical expansion of the dispersed collimated beam along the dispersion axis described by ( see figure [ fg : generic ] ) .anamorphic magnification increases the spectral resolution by the same factor that the collimated beam is expanded , but requires a larger camera acceptance aperture to avoid vignetting .camera - collimator angles exceeding 45 are rarely used for this reason .the light dispersed by the grating is a superposition of collimated beams of different colors , all traveling at slightly different angles .the camera lens focuses each color onto a location on the ccd chip determined by its incoming angle , thereby forming a spectrum .this focusing process is exactly analogous to the way a telescope focuses incoming parallel light beams from stars at different angles on the sky onto different positions on the telescope focal plane : in the spectrograph , each point along the spectrum is an image of the light from the spectrograph entrance aperture at a different wavelength .a higher dispersion grating increases the angular spread between collimated beams of different colors , spreading the corresponding images further apart . for a detector of fixed size , higher dispersion yields higher spectral resolution at the expense of reduced spectral coverage .independent of the grating choice or the absolute scale of the spectrograph , we wish to specify the relative focal lengths of the spectrograph optics so that the image of the spectrograph entrance aperture is correctly sampled by the the ccd pixels .the width of this image in the dispersion direction defines the spectroscopic resolution element . oversampling the image wastes ccd pixels and decreases spectral coverage , while undersampling it degrades spectral resolution . in the nyquist sampling limit , pixels per wavelength allow one to recover the peak and valley of an oscillating waveform .typically , finer sampling up to 34 pixels per resolution element is advantageous in professional instruments with large ccd chips .however , for an economical instrument , ccd pixels for spectral coverage are at a premium , and one may prefer sampling near the nyquist limit . with nyquist sampling , introspec s design target of 6 resolution combined with 2300 spectral coverage requires a ccd chip with pixels .we chose this combination of spectral resolution and spectral coverage knowing that the largest ccd within our budget had 768 pixels in the long dimension .once the desired sampling is chosen , the ratio of the focal lengths of the collimator and camera lenses , and , is determined by the diameter ratio of the spectrograph entrance aperture and its image . 
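a short numerical sketch makes these scalings explicit . the 9 micron pixel size and the f/2 camera - lens beam assumed below do not survive in the text and are only illustrative choices , consistent with the design table in the next paragraph ; the cos beta obliquity factor is ignored , which costs a few per cent .

# assumed numbers (illustrative): 9 micron pixels, 768 pixels along the
# dispersion, a 100 micron seeing-limited image at the f/10 focus, and
# 2-pixel (nyquist) sampling of that image
pixel_um, n_pix, slit_image_um, sampling_pix = 9.0, 768, 100.0, 2

# collimator-to-camera focal-length ratio that shrinks the slit image to 2 pixels
print("f_coll / f_cam ~", round(slit_image_um / (sampling_pix * pixel_um), 2))

# reciprocal linear dispersion ~ d / (m f_cam) with m = 1 and cos(beta) ~ 1;
# coverage = dispersion x detector length, resolution = dispersion x 2 pixels
detector_mm = n_pix * pixel_um / 1000.0
for f_cam_mm, lines_per_mm in [(35, 900), (50, 600), (85, 300)]:
    d_angstrom = 1.0e7 / lines_per_mm            # groove spacing in angstroms
    disp = d_angstrom / f_cam_mm                 # angstroms per mm at the ccd
    beam_mm = f_cam_mm / 2.0                     # collimated beam for an f/2 camera lens
    print(f_cam_mm, "mm lens,", lines_per_mm, "l/mm:",
          round(disp * detector_mm), "A coverage,",
          round(disp * sampling_pix * pixel_um / 1000.0, 1), "A resolution,",
          round(beam_mm, 1), "mm beam")
# the coverage and resolution come out within a few per cent of the design
# table that follows (2161/5.6, 2267/5.9 and 2636/6.9 angstroms), the residual
# being the neglected cos(beta) factor; the table's adopted collimator focal
# lengths correspond to a ratio of 5 rather than the ~5.6 computed here.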
for nyquist sampling , the image of the spectrograph entrance aperture must be reduced to ccd pixels , or m for a typical amateur ccd camera . the spectrograph entrance aperture has generally been sized to the expected diameter of a stellar image at the telescope s focus , which is determined by atmospheric conditions and the telescope optics . this choice maximizes the contrast between the object and the background sky light . for the 40 cm /10 knowles telescope at harvard , the local five - arcsecond seeing disk makes a 100 m image at the telescope s focus , implying that must equal to reduce the image to m . for a 20 cm /10 telescope , similar seeing would make a 50 m point source image , requiring a ratio of .5 . however , a larger entrance aperture ( implying higher to achieve the same resolution ) might be desirable for a 20 cm telescope to allow for image shifts due to imperfect guiding . the acceptance angles of the collimator and camera lenses can also be estimated approximately independently of the absolute scale of the spectrograph . the collimator must be fast enough to accept an /10 light cone from the telescope from every point within the spectrograph entrance aperture . this aperture typically takes the form of a long slit to enable spatial sampling perpendicular to the dispersion direction . the camera must be faster than the collimator : if , each monochromatic beam intercepted by the camera has a focal ratio of and is elongated in the dispersion direction by anamorphic magnification ( see [ sc : genericlayout ] ) . these beams must be accepted over a field of view that is extended in both the spatial direction ( due to the slit length ) and the dispersion direction ( due to the angular divergence of the different wavelengths in the spectrum , [ sc : vignetting ] ) .
35 & 175 & 17.5 & 900 & 2161 & 5.6 & 1.20
50 & 250 & 25.0 & 600 & 2267 & 5.9 & 1.13
85 & 425 & 42.5 & 300 & 2636 & 6.9 & 1.06
we have some flexibility in choosing the overall dimensions of the optics , limited by financial constraints and the desired spectroscopic performance . we must balance two considerations . first , larger optics and their mounts are generally more expensive , although the economies of mass - produced commercial optics can bend this rule . second , to maintain a given spectral range and resolution on a fixed ccd format , one must keep the product of the grating dispersion ( ruling density ) and the focal length of the camera constant . the grating cost goes up nonlinearly with the ruling density , so the grating cost and availability must be balanced against the cost of the optics , particularly the camera lens . the widespread availability of commercial 35 mm format single lens reflex ( slr ) photographic lenses suggests an attractive solution . these lenses are inexpensive , offer excellent image quality , have sophisticated anti - reflection coatings , and are designed to work over a film format of 24 x 36 mm . ccd chips in typical amateur cameras fit easily within this format . many photographic lenses can produce images of m diameter , or about 2 pixels for a ccd camera with m pixels . excellent used lenses can be obtained for less than 300 in parts . the machine consists of a flat glass disk attached to an aluminum turntable rotating on a 72 rpm motor . a rigid arm holds a brass cylinder perpendicular to the glass disk . the brass cylinder has v - grooves cut along its length , and the fibers are temporarily attached to the v - grooves with duco cement for polishing . a large drop of duco
cement covers the end of each fiber , which protrudes by mm ( most of this will be polished away ) .we attach disks of high quality self - adhesive abrasive paper in successively finer grades to the turntable , and the brass cylinder containing the fibers is swept across the abrasive paper .the sequence begins with 30 m grit paper and ends with 0.3 m grit paper .a thin solution of dishwashing soap in water provides adequate lubrication ; clean water is used to rinse off grit residue before moving to the next smaller grit ( the motor is isolated from water spills ) .finally , we inspect the fiber ends under a microscope to verify that no scratches remain .the duco cement can be removed with acetone to release the fiber from the polishing cylinder .acquiring and tracking astronomical targets requires some technique for viewing the focal plane . in a professional instrumentthis function is usually performed by a sensitive , high frame rate electronic camera .in introspec we have provided optics that allow the observer to visually monitor and correct the telescope tracking while viewing a 12 arcminute field of view imaged around the spectrograph entrance aperture .figure [ fg : guiderphotos ] illustrates the primary components of the guider .a flat metal mirror intersects the focal plane of the telescope at a 45 angle .the mirror is machined in two pieces and the fibers protrude through the mirror at the intersection of the focal plane and the mirror s surface .all the light that does not enter the fibers strikes the mirror and is reflected by 90 into the eyepiece extension . inside the extension , two matched achromatic lenses relay the image of the telescope focal plane to a 26 mm plossl eyepiece .the observer guides the telescope by centering the object of interest on one of the fiber ends so that most of the light disappears down the fiber .the guider also serves as the mechanical interface between the telescope and the fibers , incorporating mechanisms for eyepiece rotation , strain relief , and fiber alignment . to minimize weight ,nearly all of the parts were made from aluminum , including the mirror .we finished the mirror in two steps , first removing machining marks with a fine grit paste on a turntable and then polishing with white rouge compound and a hard felt wheel mounted on a lathe .the two halves of the mirror were registered with pins , allowing us to disassemble the mirror to mount the fibers in internal v - grooves without disturbing the polished surface .introspec saw first light on march 17 , 1999 and is now in regular use in a harvard astronomy course for non - majors .students perform projects such as comparing the spectra of the two stars in albireo . even under a bright urban sky , we have successfully used introspec to observe objects ranging from stars and planets to emission nebulae and binary accretion systems . figure [ fg : spectra ] shows a few spectra taken with introspec along with their exposure times .we have also used introspec to demonstrate to students how a spectrograph works ; the instrument s easy - to - remove top cover allows students to peer directly inside . 
in another article we provide a brief introduction to amateur spectroscopy with low - budget spectrographs like introspec , including basics of data analysis and interpretation as well as a variety of spectroscopy project ideas .introspec was designed and built at the harvard - smithsonian center for astrophysics , with financial support from harvard college and the harvard astronomy department arranged by jonathan grindlay and ramesh narayan .we thank professor grindlay for his participation in planning and commissioning the instrument .the knowles telescope where introspec is used was generously donated by c. harry and janet knowles .we are grateful to john geary , steve amado , joe zajac , douglas mar , john roll , warren brown , and anita bowles for advice , resources , and technical assistance during introspec s construction phase .we also thank tom narita , john raymond , dimitar sasselov , and the students , friends , and family who helped with commissioning runs .
|
we discuss the design of an inexpensive , high - throughput ccd spectrograph for a small telescope . by using optical fibers to carry the light from the telescope focus to a table - top spectrograph , one can minimize the weight carried by the telescope and simplify the spectrograph design . we recently employed this approach in the construction of introspec , an instrument built for the 16-inch knowles telescope on the harvard college campus .
|
we are now in the age of big data . an unprecedented era in history where the storing , managing and manipulation of information is no longer effective using previously used computational tools and techniques . to compensate for this , one important approach in manipulating such large data sets and extracting worthwhile inferences ,is by utilizing machine learning techniques .machine learning involves using specially tailored ` learning algorithms ' to make important predictions in fields as varied as finance , business , fraud detection , and counter terrorism .tasks in machine learning can involve either supervised or unsupervised learning and can solve such problems as pattern and speech recognition , classification , and clustering .interestingly enough , the overwhelming rush of big data in the last decade has also been responsible for the recent advances in the closely related field of artificial intelligence ; with the achievements of alphago being a remarkable milestone .another important field in information processing which has also seen a significant increase in interest in the last decade is that of quantum computing .quantum computers are expected to be able to perform certain computations much faster than any classical computer .in fact , quantum algorithms have been developed which are exponentially faster than their classical counterparts .recently , a new subfield within quantum information has emerged combining ideas from quantum computing with artificial intelligence to form quantum machine learning .such discrete - variable schemes have shown exponential speedup in learning algorithms , such as supervised and unsupervised learning , support vector machine , cluster assignment and others .initial proof - of - principle experimental demonstrations have also been performed . in this paper , we have developed learning algorithms based on a different , but equally important , type of substrate in quantum computing , those of continuous variables ( cvs ) .a cv system is characterized by having an infinite - dimensional hilbert space described by measuring variables with a continuous eigenspectra .the year 1999 saw the first important attempt at developing a cv model of quantum computing .seven years later , the cluster state version of cvs , accelerated the field s interest due to experimental interest .the result of this were proof - of - principle demonstrations , which culminated in an ` on - the - run ' one - million - node cluster , as well as a 60-node ` simultaneous ' cluster .further important theoretical work was also carried out , including an important cv architecture that was finally fault tolerant . here , we take advantage of the practical benefits of cvs ( high - efficiency room - temperature detectors , broad bandwidths , large - scale entanglement generation , etc . 
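as a purely classical illustration of the encoding in eq . ( [ eq1psi ] ) , the sketch below stores a set of data values as the normalised amplitudes of a small product - basis register ( the d - nary digits of the data index select the basis state of each mode ) ; the concrete numbers are arbitrary and only meant to show that exponentially many values fit into linearly many modes , and that global quantities such as overlaps are single inner products of the encoded vectors .

import numpy as np

# classical sketch of the amplitude encoding of eq. (eq1psi): N = d**n data
# values are stored as amplitudes over n modes with d basis states each
d, n = 3, 4                               # 3 basis states per mode, 4 modes
N = d ** n                                # 81 data values in only 4 modes
rng = np.random.default_rng(1)
data = rng.normal(size=N)

amps = data / np.linalg.norm(data)        # normalised amplitudes a_x
state = amps.reshape((d,) * n)            # one tensor index per mode
print(state.shape, round(float(np.vdot(state, state).real), 12))

# a global quantity, e.g. the overlap with a second encoded data set, is a
# single inner product; this is the kind of quantity the subroutines below
# estimate from few copies, rather than reading out all 81 amplitudes one by one
other = rng.normal(size=N)
print(float(np.dot(amps, other / np.linalg.norm(other))))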
) by generalizing quantum machine learning to the infinite dimension .specifically , we develop the important cv tools and subroutines that form the basis of the exponential speedup in various machine learning algorithms .this includes matrix inversion , principle component analysis and vector distance .furthermore , each of these crucial subroutines are given a finite squeezing analysis for future experimental demonstrations along with a suggested photonic implementation ._ encoding state _ the general quantum state of a -mode system is given by |f= f(q_1 , ,q_n ) |q_1 |q_ndq_1 if we use this state to encode a discrete set of classical data , , which requires at least classical memory cells , only modes are sufficient , i.e. , [ eq1psi ] f_(q_1 , , q_n)=_x=1^n a_x _ i=1^n _x_i(q_i ) where is the number of basis state in each mode ; is a -nary representation of ; for is the wavefunction of the single mode basis state , . herewe assume the vector is normalised . obtaining the classical value of each data requires copies of .nevertheless in some applications only the global behavior of the data set is interesting . for example , the value can be computed efficiently by a quantum computer with significantly fewer copies of . quantum machinelearning algorithms take advantage of this property to reduce the amount of memory and operations needed . if the data set is sufficiently uniform , it is known that can be efficiently generated . as an illustration, we outline in the supplementary section an explicit protocol to generate a state with coherent basis states , and .our protocol generalizes the discrete variable method in ref . to cv system by utilizing the cv implementation of the grover s operators , for any given , as well as the efficient generation of cat states and coherent states .the encoding state construction of general non - uniform data could be constructed by extending the discrete - variable quantum ram ( qram ) to a cv system , or by using a hybrid scheme , although the state generation efficiency of such a general encoded state remains an open question .nevertheless , the versatility of cv machine learning is not limited to process classical data sets that involve a discrete number of data . in the context of universal cv quantum computation ,the output of a computer is a cv state that evolves under an engineered hamiltonian ; the wave function of such a full cv output can not be expressed in the form of eq .( [ eq1psi ] ) .as we will see , the cv machine learning subroutines are capable of processing even full cv states , and they are thus more powerful than the discrete variable counterparts . _ exponential swap gate _ in both the data state construction and the quantum machine learning operation , the generalized grover s operator , , plays the main role of inducing a phase shift according to an ensemble of unknown given states . as suggested in ref . , such an operation can be implemented by repeatedly applying the exponential swap operation and tracing out the auxiliary mode , i.e. , [ eq : erho ] _ ( e^it e^-it ) = e^it e^-it + ( ^2 ) , where by definition the swap operator functions as . herewe outline the procedure of implementing the exponential operator with standard cv techniques .first of all , we need a qubit as control , which can be implemented by two auxiliary modes , 1 and 2 , with one and only one photon in both modes , i.e. 
, the state of the modes is .the rotation angle is controllable by applying the rotation operator , which can be implemented by linear optics .in addition , we need a controlled - swap operation , [ eqn3 ] c^c c_= e^- ( _ c _ c^-_c^_c ) e^i _ 1^_1 ^_c _ c e^ ( _ c _ c^-_c^_c ) which swaps the modes and depending on the photon number of the control qubit .the operations in can be implemented with the quartic gate introduced in .see appendix b for more detail .the control qubit is first prepared in . by applying the operations in sequence ,the state becomes [ eq : eswap ] & & c^c c_r()c^c c_|+|_c |_c = |+e^i_cc & & |+(|_c |_c + i |_c |_c ) .the method can be generalized to implement a multi - mode exponential swap , , by applying .we note that the precious resources of a single photon state is not measured or discarded , so it can be reused in future operations .we emphasize that , in stark contrast to the proposed implementation of exponential - swap gate in which is _ logical _ and thus composed by a series of discrete variable logic gates , our implementation of the exponential - swap gate is _ physical _ ,i.e. , it can be applied to full cv states that could not be written as the discrete variable form in eq .( [ eq1psi ] ) .this property allows our subroutine to be applied in , e.g. quantum tomography of cv states , which is more complicated than the discrete variable counterparts due to the large degree of freedom .now we discuss several key subroutines ( matrix inversion , principle component analysis , and vector distance ) that power the quantum machine learning problems using the tools we have just introduced ._ matrix inversion _various machine learning applications involves high - dimensional linear equations , e.g. , .the advantage of some quantum machine learning algorithms is the ability to solve linear equations efficiently .specifically , for any vector , computing the solution vector is more efficient on a quantum computer . in a cv system ,the algorithm starts by preparing the state and two auxiliary modes in the quadrature eigenstates , i.e. , and .we apply the operator times . each operator can be implemented based on eq .( [ eq : erho ] ) , and a modified exponential swap gate with the rotation operator in eq .( [ eq : eswap ] ) replaced by the four - mode operator r ( _ _ ) = e^i _ _ ( _ 1_2^+ _ 1^_2 ) , which can be implemented efficiently .the state then becomes e^i __ ||0_q , |0_q , = _i b_i |_i|p_p , |_i p _ q , dp , where we have neglected a normalization constant .if the auxiliary mode is measured in the quadrature with outcome , then we get _ i b_i/_i |_i|q_/_i_p , . up to the normalization, the solution state is obtained if the auxiliary mode is measured in the quadrature and we get the result . in the infinitely squeezed case, the successful rate of the last measurement is vanishing . in practice , however , when squeezed vacuum states are employed as auxiliary modes , the successful rate of obtaining an answer state with error scales as , which is comparable to the discrete - variable algorithm that has success which scales as .the detailed argument is shown in appendix c. _ principal component analysis _the next problem is to find the eigenvalue corresponding to a unit eigenvector with respect to the matrix , i.e. 
, .this problem is ubiquitous in science and engineering and can also be used in quantum tomography , supervised learning and cluster assignment .the algorithm starts from a data state and an auxiliary mode prepared as the zero eigenstate of the quadrature , .the idea of the algorithm is to apply the operator that displaces the auxiliary mode according to the eigenvalue , i.e. , e^i _ |_i|0_q,= measuring the auxiliary mode with homodyne detection .this operator can be implemented by preparing an ensemble such that the density matrix is , and repeatedly apply the techniques in eq .( [ eq : erho ] ) to implement , for times . herethe argument of the exponential swap operator is not a c - number but an operator .this can be implemented by replacing the rotation operator in eq .( [ eq : eswap ] ) by the three - mode operator r(_r ) = e^i _ ( _ 1_2^+ _ 1^_2 ) , which can be efficiently implemented by a cubic phase gate and linear optics . in practice ,the success of the algorithm relies on the distinguishability of , which depends on the spectrum of eigenvalues , the degree of squeezing of the auxiliary state , and the magnitude of error . in appendixd , we have shown that operations are needed for an error ._ vector distance _ in supervised machine learning , new data is categorized into groups by its similarity to the previous data . for example , the belonging category of a vector is determined by the distance , , to the average value of the previous data .the objective of a quantum machine learning algorithm is to compute the value . following the approach given in ref . , we assume an oracle can generate the state |= ( |||0_i |+ _ i = i^m |_i||i_i |_i ) , where the first mode is denoted as the index mode ; the normalization is supposed to be known . can be obtained by conducting a swap test on the index mode with a reference mode prepared as .various swap tests for cv systems have been proposed where the result is obtained from a photon number measurement .here we propose a swap test that employs only homodyne detection and an exponential swap operation .we consider two test modes that are prepared in the coherent states .the operator is applied to exponential swap the two test modes , as well as the reference and the index modes . after that , the test modes pass through a beam splitter .the density operator of the test modes after tracing out the other modes becomes _ 12 & = & ( | _ 11 | + i d^2 |_11 | + & & - i d^2 | _ 11 | + |_11 |)|_22| .we find that if the mode is homodyne detected in the quadrature and , the probability difference of measuring a positive and negative outcome scales as , where the scaling constant is at the order of for a wide range of .see appendix e for further details .in this section , we outline an all - photonic implementation of the previously mentioned machine learning algorithms .first , one must create an ancillary state for use in the exponential swap gate .one method is to provide a heralded ancilla via parametric down conversion ( see for example , for background on a lot of the standard quantum optics methods discussed here ) .the undetected photon is interfered with the vacuum on a beam splitter in order to place it in the superposition required for eq .( [ eq : eswap ] ) ( see fig . [ fig1.eps ] ) .this serves as an input to the phase - dependent gates outlined in , which can be used to construct the exponential swap gate . 
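as a small numerical aside ( not part of the original paper ) , the algebraic identity behind the exponential swap gate is easy to check directly : since the swap operator s satisfies s^2 = i , one has exp(i theta s) = cos(theta) i + i sin(theta) s . a minimal sketch on a truncated two-mode fock space , with the truncation dimension d and the angle theta chosen arbitrarily by us , could look as follows .

```python
# numerical check (ours, not from the paper) of the identity behind the
# exponential swap gate: since the swap operator S obeys S^2 = I,
# exp(i*theta*S) = cos(theta)*I + i*sin(theta)*S.  verified here on a
# truncated two-mode fock space with d levels per mode.
import numpy as np
from scipy.linalg import expm

d = 6          # fock-space truncation per mode (illustrative choice)
theta = 0.37   # arbitrary angle

# swap operator on the d*d dimensional two-mode space: S |m,n> = |n,m>
S = np.zeros((d * d, d * d))
for m in range(d):
    for n in range(d):
        S[n * d + m, m * d + n] = 1.0

exact = expm(1j * theta * S)
closed_form = np.cos(theta) * np.eye(d * d) + 1j * np.sin(theta) * S
print(np.allclose(exact, closed_form))   # True
```

this identity is what allows a controlled application of the exponential swap to imprint a relative phase that depends on the overlap of the two modes , which is the mechanism exploited by the swap - test constructions above .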
the rotation gate in eq .( [ eq : eswap ] ) is essentially the interference of the two modes on a variable reflectivity , or programmable beam splitter , which can be achieved via polarization control and a polarizing beam splitter , or via a collection of phase or amplitude modulators .inverse phase - dependent gate operations are implemented after the rotation .each algorithm essentially utilizes a variation of this configuration , in addition to the possibility of squeezed ancilla in order to increase the accuracy of the result .the principle component analysis problem replaces the variable beam splitter in the swap gate with a two - mode quantum - non - demolition phase gate .it can be implemented by treating the auxiliary mode , , as the ancilla in the phase - dependent gate .thus , the principle component analysis problem essentially relies on repeated application of the ` repeat - until - success ' phase gate . in a realistic scenario , is in a single - mode squeezed state with finite squeezing ( see appendix c.2 ) , which is experimentally straightforward using a below - threshold optical parametric amplifier ( opa ) .phase sensitive amplification can also be used .the squeezing parameter can be used to tune the accuracy of the computation .the final homodyne detection is also experimentally straight forward with a local oscillator derived from the pump laser used in the opa ( via a doubling cavity , for instance ) .the matrix inversion algorithm is experimentally very similar to the eigenvalue problem .the key difference is the use of an extra auxiliary mode , which can be prepared independently with an additional opa .the four - mode operator is conceptually similar to the operator in eq .( 6 ) used in the previous algorithm .each auxiliary mode serves as an ancilla in the phase - dependent gate , and the algorithm otherwise follows a similar approach to the previous one , with a final homodyne detection step for the amplitude quadrature of each auxiliary mode , with the local oscillators derived from the pumps of each opa .finally , the vector distance algorithm requires use of a swap test , which can be implemented via the application of the exponential swap gate between two auxiliary states ( which can be coherent states or squeezed states ) and the oracle mode in eq .( 10 ) and the reference mode .the required homodyne detection of the phase quadrature of the first test mode in a bright coherent state and is again experimentally straight forward .the previous all - photonic implementations are difficult to do experimentally but are still within current reach of the latest technological achievements .for instance , high rates of squeezing are now achievable , along with the generation of cat states .however , we note that our scheme is not limited to photonic demonstrations but a variety of substrates , including spin ensemble systems , such as trapped atoms and solid state defect centers .we hope that the work presented here will lead to further avenues of research . especially since there has been a substantial increase of results in discrete - variable machine learning .all of these would be interesting to be generalized to continuous variables as future work . 
additionally , adapting our current work into the cluster - state formulism would also be interesting in order to take advantage of state - of - the - art experimental interest and the scalability that continuous variables can provide .furthermore , we note another viable option that uses a ` best - of - both - worlds ' approach to quantum information processing , i.e. , hybrid schemes . it would be interesting to adapt our scheme presented here to such hybrid architectures .we thank kevin marshall for helpful discussions .l would like to acknowledge support from the croucher foundation .r. c. p. performed portions of this work at oak ridge national laboratory , operated by ut - battelle for the us department of energy under contract no .de - ac05 - 00or22725 .99 http://www.ibm.com/big-data/us/en/ t. hastie , r. tibshirani , j. friedman , the elements of statistical learning : data mining , inference , and prediction , ( springer , cambridge ) ( 2013 ) .d. mackay , information theory , inference and learning algorithms ( cambridge university press , 2003 ) .e. alpaydin , introduction to machine learning ( adaptive computation and machine learning ) ( mit press , 2004 ) . c. m. bishop , pattern recognition and machine learning ( springer , 2007 ) .k. p. murphy , machine learning : a probabilistic perspective ( mit press , 2012 ) .yann lecun , yoshua bengio , and geoffrey hinton , nature * 521 * , 436 ( 2015 ) .t. d. ladd et al ., nature * 464 * , 45 ( 2010 ) .p. w. shor , siam j. sci .* 26 * , 1484 ( 1997 ) .s. lloyd , science * 273 * , 1073 ( 1996 ) .d. s. abrams and s. lloyd , phys .lett . * 83 * , 5162 ( 1999 ) .d. a. lidar and h. wang , phys .e * 59 * , 2429 ( 1999 ) .j. p. dowling , nature * 439 * , 919 ( 2006 ) .i. buluta and f. nori , science * 326 * , 108 ( 2009 ) .j. q. you and f. nori , nature * 474 * , 589 ( 2011 ) .h. wang , s. ashhab , and f. nori , phys .a * 85 * , 062304 ( 2012 ) .l. veis , j. visnak , t. fleig , s. knecht , t. saue , l. visscher , and j. pittner , phys .a * 85 * , 030304 ( 2012 ) .a. w. harrow , a. hassidim , and s. lloyd , phys .* 103 * , 150502 ( 2009 ) .s. lloyd , m. mohseni , and p. rebentrost , arxiv:1307.0411 ( 2013 ) .p. rebentrost , m. mohseni , and s. lloyd , phys .. lett . * 113 * , 130503 ( 2014 ) .s. lloyd , m. mohseni , and p. rebentrost , nat .* 10 * , 631 ( 2014 ) .a. w. harrow , a. hassidim , and s. lloyd , phys .103 * , 150502 ( 2009 ) .e. aimeur , g. brassard , and s. gambs , machine learning * 90 * , 261 ( 2013 ) .pudenz and d.a .lidar , quant .* 12 * , 2027 ( 2013 ) .a. hentschel and b. c. sanders , phys .* 104 * ( 2010 ) .n. wiebe , a. kapoor , and k. svore , quant .info . and comp . * 15 * , 0318 ( 2015 ) .x. -d .cai , et al .lett . * 110 * , 230501 ( 2013 ) .s. barz , et al ., scientific reports * 4 * , 6115 ( 2014 ) .cai , et al .. lett . * 114 * , 110504 ( 2015 ) .z. li , et al .114 * , 140504 ( 2015 ) .s. l. braunstein and p. van loock , rev .. phys . * 77 * , 513 ( 2005 ) .c. weedbrook , s. pirandola , r. garca - patrn , n. j. cerf , t. c. ralph , j. h. shapiro , and s. lloyd , rev .84 * , 621 ( 2012 ) .s. lloyd and s. l. braunstein , phys .* 82 * , 1784 ( 1999 ) .r. raussendorf and h. j. briegel , phys .lett . * 86 * , 5188 ( 2001 ) . j. zhang and s. l. braunstein , phys .rev . a * 73 * , 032318 ( 2006 ) .n. c. menicucci , et al . , phys .lett . * 97 * , 110501 ( 2006 ) .s. yokoyama , r. ukai , s. c. armstrong , j .-yoshikawa , p. van loock , and a. furusawa , phys .a * 92 * , 032304 ( 2014 ) .k. miyata , h. ogawa , p. marek , r. 
filip , h. yonezawa , j .-yoshikawa , and a. furusawa , phys .a * 90 * , 060302(r ) ( 2014 ) .m. pysher , y. miwa , r. shahrokhshahi , r. bloomer , and o. pfister , phys .* 107 * , 030505 ( 2011 ) .s. takeda , t. mizuta , m. fuwa , j .-yoshikawa , h. yonezawa , and a. furusawa , phys .a * 87 * , 043803 ( 2013 ). jun-.i .yoshikawa , s. yokoyama , t. kaji , c. sornphiphatphong , y. shiozawa , k. makino , a. furusawa , arxiv:1606.06688 ( 2016 ) .s. yokoyama , r. ukai , s. c. armstrong , c. sornphiphatphong , t. kaji , s. suzuki , j .-yoshikawa , h. yonezawa , n. c. menicucci , and a. furusawa , nat .* 7 * , 982 ( 2013 ) .m. chen , n. c. menicucci , and o. pfister , phys .lett . * 112 * , 120505 ( 2014 ) .k. marshall , r. pooser , g. siopsis , and c. weedbrook , phys .rev . a * 91 * , 032321 ( 2015 ) .lau and c. weedbrook , phys .a * 88 * , 042313 ( 2013 ) p.van loock , c. weedbrook , and m. gu , phys .a * 76 * , 032321 ( 2007 ) .m. gu , c. weedbrook , n. menicucci , t. ralph , and p. van loock , phys .a * 79 * , 062318 ( 2009 ) .r. n. alexander , s. c. armstrong , r. ukai , and n. c. menicucci , phys .a * 90 * , 062324 ( 2014 ) .t. f. demarie , t. linjordet , n. c. menicucci , and g. k. brennen , new j. phys . * 16 * , 085011 ( 2014 ) .n. c. menicucci , t. f. demarie , and g. k. brennen , arxiv : quant - ph/1503.00717 ( 2015 ) .p. wang , m. chen , n. c. menicucci , and o. pfister , phys .a * 90 * , 032325 ( 2014 ) .n. c. menicucci , phys .a * 83 * , 062314 ( 2011 ) .k. marshall , r. pooser , g. siopsis , and c. weedbrook , phys .a * 92 * , 063825 ( 2015 ) .n. c. menicucci , phys .lett . * 112 * , 120504 ( 2014 ) .d. gottesman , a. kitaev , and j. preskill , phys .a * 64 * , 012310 ( 2001 ) .v. giovannetti , s. lloyd , and l. maccone , phys .* 100 * , 160501 ( 2008 ) .a. n. soklakov and r. schack , phys .a * 73 * , 012307 ( 2006 ) .a. furusawa and p. van loock , quantum teleportation and entanglement : a hybrid approach to optical quantum information processing ( wiley - vch , 2011 ) .r. filip , physical review a , * 65 * , 062320 ( 2002 ) .h. jeong , c. noh , s. bae , d. g. angelakis , t. c. ralph , journal of the optical society of america b , * 31 * , 3057 . (2014 ) u. l. andersen , t. gehring , c. marquardt , and g. leuchs , arxiv:1511.03250 ( 2015 ) .w. -b .gao , et al ., nat . phys .* 6 * , 331 ( 2010 ) .j. p. dowling , g. s. agarwal , and w. p. schleich , phys .a * 49 * , 4101 ( 1994 ) .k. tordrup , a. negretti , and k. molmer , phys .* 101 * , 040501 ( 2008 ) .j. h. wesenberg , et al . ,lett . * 112 * , 070502 ( 2009 ) .y. kubo , et al .* 105 * , 140502 ( 2010 ) .y. kubo , et al .lett . * 107 * , 220501 ( 2011 ) .n. wiebe , d. braun , and s. lloyd , phys .* 109 * , 050505 ( 2012 ) .n. wiebe , a. kapoor , and k. m. svore , arxiv:1412.3489 ( 2014 ) .g. d. paparo , v. dunjko , a. makmal , m. a. martin - delgado , and h. j. briegel , phys .x * 4 * , 031002 ( 2014 ) .m. h. amin , e. andriyash , j. rolfe , b. kulchytskyy , and r. melko , arxiv:1601.02036 ( 2016 ) .n. wiebe , a. kapoor , k. m. svore , arxiv:1602.04799 ( 2016 ) .n. liu , et al . , arxiv:1510.04758 ( 2015 ) .u. l. andersen , et al .phys . 
* 11 * , 713 ( 2015 ) .scott aaronson , nature physics * 11 * , 291 ( 2015 ) .it should be noted that actually qram is not necessary for every machine learning algorithm .for example , principal component analysis .in such a case we only need multiple copies of a density matrix and the possibility to perform controlled swap operations .qram can be used to prepare the but so can other quantum subroutines .interestingly , there are also problems where potentially a cv quantum computer would be more ideally suited to be implemented practically than a qubit quantum computer .recently , we explored in ref . an algorithm where a cv quantum computer was used to simulate quantum field theory equations where the fields themselves are of a continuous nature .hoi - kwan lau and martin b. plenio , phys .lett . * 117 * , 100501 ( 2016 ) .here we discuss the encoding efficiency as described in sec . ii .of particular interest are the grover operators , e^i|_0_0| = - 2 |_0_0| and ( which implements a phase change on the state ) .a repeated application of these unitaries can create any state from .the complexity ( number of resources and oracle calls required ) varies depending on the distribution of data . for probability distributions which remain uniformly bounded for large , the complexity has a polynomial dependence on .more precisely , this is the case if , ( for example , states close to ) .otherwise , the complexity can be as high as .the latter is the case for any state , such as , in which the probability distribution is highly peaked ( although , it should be noted that a state is easy to construct , because it is the tensor product of coherent states ) .the proof follows the lines of ref .the number of copies of and needed for implementation of and are also polynomial in . indeed , if is the required fidelity , we have , therefore the number of copies of and needed is , which is of polynomial order in .here we discuss the implementation of higher - order gates using cvs .the discussion generalizes the construction of cubic phase gate given in ref .non - gaussian phase gates of order are of the form , where is a polynomial of order ( ) . in this paper , we make use of cubic ( ) and quartic ( ) phase gates . to implement them, we first decompose them as e^ip_k ( ) = ( 1 + i p_k ( ) ) ^k + ( 1/k)and further , 1 + i p_k ( ) = _ 0 _ 1 _ k-1 where , and are the ( complex ) roots of the - order polynomial .each linear operator ( ) can be implemented as discussed in further detail in ref . .specifically , the quartic gate needed for the controlled - swap operator can be written as u _= e^ih_1h_c where and . to implement it ,we decompose it as u _ = ( e^ _ 1 ^ 2 _ c^2 e^ _ 1 ^ 2 _ c^2 e^ _ 1 ^ 2 _ c^2 e^_ 1 ^ 2 _ c^2 ) ^k + ( 1/k ) the first three factors can written in terms of the last factor , respectively , as e^ _ 1 ^ 2 _ c^2 & = & e^ h_1 e^ h_c e^ _ 1 ^ 2 _c^2 e^- h_ce^- h_1 + e^ _ 1 ^ 2 _ c^2 & = & e^ h_1 e^ _ 1 ^ 2 _c^2 e^- h_1 + e^ _ 1 ^ 2 _ c^2 & = & e^ h_c e^ _ 1 ^ 2 _c^2 e^- h_c the last factor can be written in terms of quartic gates as [ eqlast ] e^ _ 1 ^ 2 _ c^2 & = & e^2ip_1 x_ce^ _ 1 ^ 4 e^-4ip_1x_ce^ _ 1 ^ 4 e^2ip_1x_ce^- _ 1 ^ 4 + & & e^- _ c^4 the other non - gaussian gates used in our calculations can be implemented similarly . 
the strategy is to first decompose the operator into simple factors with an error , and then rotate each into using .thus , we obtain an exponent which is a polynomial in .the latter can be written in terms of single - mode unitaries , as in .it should be noted that it is not necessary to introduce an error , because the operators considered here have exponents which are polynomial in and .therefore , it is possible to perform an exact decomposition of these operators into simple factors .we will not do this here , because the expressions become long and do not serve our purpose of demonstrating the implementation of these unitaries using cubic and quartic gates .we start with qumodes in the state and two resource modes in the state .thus , initially , ||bdp _ d _|p_|_next , we apply the unitary [ eq40 ] e^iap _ _ where is a parameter that can be adjusted at will .the unitary is implemented similarly to , except that we now have two resource qumodes .the algorithm is unchanged , except for the rotation of the ancillas , which becomes a four - mode unitary , ] , in agreement with . for , we have |_i ( q_,_)|^2 ~ so both and have probability distributions of width .the width of is .if we want , for a normal distribution the success rate is .given an hermitian matrix , and a vector , find out if is an eigenvector , and if so , which eigenvalue it belongs to .in particular , we are interested in matrices of the form , where are both mixed states . we start with qumodes in the state and a resource mode in the state .thus , initially , |= problem of is a|e_i= _i |e_iand we expand |b= _ i _ i of the resource mode .this projects the state onto q_|e^iap _ |&= & _ i _ i dp _e^i(_i -q _ ) p _ |e_i + & = & _ i _ i ( _ i -q _ ) |e_ithus , the measurement outcome is proportional to one of the eigenvalues , , for which .the most probable outcome corresponds to the maximum .we obtain that outcome with certainty , if is an eigenstate of .all of the above steps are independent of the size of the matrix , .this is evident for all steps , except for the implementation of the unitary . to implement, we make copies of and copies of , where is the desired accuracy .let be the swap operator . we have _e^-i p _ & = & e^ip _ |bb|e^-ip _ + & & + ( ^2 ) where we took a partial trace over the degrees of freedom of .similarly for , _e^i p _ & = & e^-ip _ |bb| e^ip _ + & & + ( ^2 ) the smaller the , the larger the range of over which there is little distortion .repeating these two steps with the rest of the copies , we arrive at an approximation of e^ia p _e^-ia p _ i.e. , an implementation of .the unitary itself can be implemented using [ eq26 ] e^i p _+ i p _ to implement , we introduce two ancillary modes in the logical state , and rotate it to .next , we apply the string of three - mode unitaries , and arrive at the state ( |0_1 |1_2 + |1_1 , which is a rotation on the ancillas and can be implemented using the cubic gate constructed in together with two - mode operators .the state becomes & & once again , we apply and obtain ( ancilla goes back to its original state , and we arrive at |0_l matching , as desired . realistically , we start with a qumode in the state and a resource mode in a squeezed state . 
thus , initially , ||bdp _we apply the unitary , and measure of the resource mode , we obtain the projected state q_|e^iap _ | & & _ i _ i dp _ e^-(p_)^2/(2s ) e^i(_i -q _ ) p _ & & _ i _ i e^- s(_i -q_)^2/2 |e_iwhich yields a probability distribution p(q _ ) _i |_i|^2 e^- s(_i -q_)^2 consisting of peaks at the eigenvalues . since we are interested in eigenvalues , to discriminate between them , the width of the peaks ought to be , so .the number of copies needed to simulate must be such that , therefore by adjusting the arbitrary parameter , even a small integer will suffice .let and be two -dimensional unit vectors .we are interested in computing the distance between to the average of , i.e. , [ eq1 ] d^2 | - _ i=1^m_i|^2 & = & ||^2 + _ i , i |_i| |_i| _i_i + & & - _ i=1^m |||_i| ( ^_i + _ i^ ) .the objective of quantum machine learning is to measure the value of without learning all of the coefficients of each data set .following , we consider a mode resources state , given by |= ( |||0_i |+ _ i = i^m |_i||i_i |_i ) , where the normalization is supposed to be known .we denote the first mode as the index mode , while the following modes are the data modes . following our argument in appendix [ sec : encoding ] , if the data of and are sufficiently homogeneous , can also be efficiently constructed . in analogous to the discrete - variable algorithm , the value of can be deduced from the probability of measuring the index mode in the state , i.e. , .such a measurement can be achieved by conducting a swap test between the index mode with an auxiliary reference mode that is prepared in . herewe propose a swap test that involves only homodyne detection and the exponential swap operation .& & + i|0_12 ( |||0_ |_i |+ _ i = i^m |_i| |i _|_i |_i ) ) . after tracing out the index mode , the reference mode , and the data modes ,the total state of the two test modes becomes _ 12 & = & ( |00| - i |00 | + & & + i | 0 0 | + |0 0 | ) when measuring the first test mode in the quadrature , the probability of obtaining a value is proportional to ( p ) e^-p^2 ( 2 + i e^i p - i e^-i p ) .we find that the probability difference between the positive and negative values of is ( p>0 ) - ( p<0 ) = -e^-^2/2 ( ) , where erfi is the imaginary error function .for , the expression is well approximated by .
|
machine learning is a rapidly growing field within computer science, and its ideas have recently been carried over to quantum information. to date, all proposals for quantum machine learning use the finite-dimensional substrate of discrete variables. here we generalize quantum machine learning to infinite-dimensional, continuous-variable systems, which are more complex but remarkably practical. we present the critical subroutines of quantum machine learning algorithms for an all-photonic continuous-variable quantum computer and show that they achieve an exponential speedup over their classical counterparts. finally, we map out an experimental implementation that can serve as a blueprint for future photonic demonstrations.
|
due to the simplicity of its formulation and the complexity of its exact solution , the traveling salesman problem ( tsp ) has been studied for a very long time and has drawn great attention from various fields , such as applied mathematics , computational physics , and operations research .the traveling salesman faces the problem to find the shortest closed tour through a given set of nodes , touching each of the nodes exactly once and returning to the starting node at the end .hereby the salesman knows the distances between all pairs of nodes , which are usually given as some constant non - negative values , either in units of length or of time .the costs of a configuration are therefore given as the sum of the distances of the used edges . if denoting a configuration as a permutation of the numbers , the costs can be written as a tsp instance is called symmetric if for all pairs of nodes . for a symmetric tsp ,the costs for going through the tour in a clockwise direction are the same as going through in an anticlockwise direction .thus , these two tours are to be considered as identical . as the time for determining the optimum solution of a proposed tsp instance grows exponentially with the system size, the number of nodes , a large variety of heuristics has been developed in order to solve this problem approximately . besides the application of several different construction heuristics , which were either specifically designed for the tsp or altered in order to enable their application to the tsp , the tsp has been tackled with various general - purpose improvement heuristics , like simulated annealing and related algorithms such as threshold accepting , the great deluge algorithm , algorithms based on the tsallis statistics , simulated and parallel tempering ( methods described in ) , and search space smoothing .furthermore genetic algorithms , tabu search and scatter search , and even ant colony optimization , particle swarm optimization , and other biologically motivated algorithms have been applied to the tsp .the quality of these algorithms is compared by creating solutions for benchmark instances , one of which is shown in fig .[ fig : usa ] . and [ fig : l3o ] . ]most of these improvement heuristics apply a series of so - called small moves to a current configuration or a set of configurations . 
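the cost formula referred to above was lost in extraction ; as a concrete reference , a minimal sketch ( function and variable names are ours ) evaluating the cost of a tour , i.e. the sum of the distances of the used edges including the closing edge back to the starting node , could look as follows .

```python
# minimal sketch (our own naming, not from the paper): the cost of a tour is
# the sum of the distances of the used edges, including the edge that closes
# the tour back to the starting node.
import numpy as np

def tour_length(tour, dist):
    """tour: permutation of 0..n-1, dist: symmetric n x n distance matrix."""
    n = len(tour)
    return sum(dist[tour[i], tour[(i + 1) % n]] for i in range(n))

# usage with random points in the unit square
rng = np.random.default_rng(0)
pts = rng.random((10, 2))
dist = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=-1)
print(tour_length(np.arange(10), dist))
```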
in this context , the move being small means that it does not change a configuration very much , such that usually the cost of the new tentative configuration which is to be accepted or rejected according to the acceptance criterion of the improvement heuristics does not differ very much from the cost of the current configuration .this method of using only small moves is called the local search approach , as the small moves lead to tentative new configurations , which are close to the previous configuration according to some metric like the hamming distance for the tsp : the hamming distance between two tours is given by the number of different edges .one move which does not change a configuration very much is the exchange ( exc ) , which is sometimes also called swap and which is shown in fig .[ fig : excnim ] .the exchange exchanges two randomly selected nodes in the tour .thus , from a proposed configuration , other configurations can be reached , such that the neighborhood of a configuration generated by this move has a size of order .another small move is the node insertion move ( nim ) , which is also called jump .it is also shown in fig .[ fig : excnim ] .the node insertion move randomly selects a node and an edge .it removes the randomly chosen node from its original position and places it between the end points of the randomly selected edge , which is cut for this purpose .the neighborhood size generated by this move is and thus also of order .lin introduced a further small move , which is called lin-2-opt ( l2o ) : as shown in fig .[ fig : l2o ] , it cuts two edges of the tour , turns the direction of one of the two partial sequences around , and reconnects these two sequences in a new way . for symmetric tsp instances , only the two removed edges and the two added edgeshave to be considered when calculating the cost difference created by this move . for these symmetric tsps ,it plays no role which of the two partial sequences is turned around when performing the move , due to the identical cost function value for moving through clockwisely or anticlockwisely . in the symmetric case , on which we will concentrate throughout this paper ,the move creates a neighborhood of size and thus of order .please note that a move cutting two edges after neighboring nodes does not lead to a new configuration , such that the neighborhood size is not , a false value which is sometimes found in the literature .the lin-2-opt turned out to provide better results for the symmetric traveling salesman problem than the exchange .the reason for this quality difference was explained analytically by stadler and schnabl . 
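to make the three small moves concrete , a minimal sketch on a tour stored as a python list is given below ( function names are ours ) . for the symmetric tsp the cost change of a lin-2-opt depends only on the two removed and the two added edges , so it can be evaluated before deciding whether to apply the move ; a negative delta means the tour gets shorter .

```python
# sketch of the three small moves discussed above on a tour stored as a list
# (function names ours).
def exchange(tour, i, j):
    """swap the nodes at tour positions i and j."""
    t = tour[:]
    t[i], t[j] = t[j], t[i]
    return t

def node_insertion(tour, i, j):
    """remove the node at position i and reinsert it after position j."""
    t = tour[:]
    node = t.pop(i)
    j = j if j < i else j - 1          # account for the removed element
    t.insert(j + 1, node)
    return t

def lin_2_opt(tour, i, j):
    """cut the edges after positions i and j (i < j) and reverse the part in between."""
    return tour[:i + 1] + tour[i + 1:j + 1][::-1] + tour[j + 1:]

def lin_2_opt_delta(tour, dist, i, j):
    """cost change of the lin-2-opt: added edges minus removed edges (negative = shorter)."""
    n = len(tour)
    a, b = tour[i], tour[(i + 1) % n]
    c, d = tour[j], tour[(j + 1) % n]
    return (dist[a, c] + dist[b, d]) - (dist[a, b] + dist[c, d])
```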
in their paper , they basically found out that the results are the better the less edges are cut : the lin-2-opt cuts only two edges whereas the exchange cuts four .but they also reported results that the lin-3-opt cutting three edges leads to an even better quality of the solutions than the lin-2-opt , what contradicted their results at first sight , but they explained this finding with the larger neighborhood size of the lin-3-opt .the next larger move to the lin-2-opt is the lin-3-opt ( l3o ) : the lin-3-opt removes three edges of the tour and reconnects the three partial sequences to some new closed tour .in contrast to the smallest moves for which there is only one possibility to create a new tour , there are four possibilities in the case of the symmetric tsp how to create a new tour with three new edges with the lin-3-opt if each of the partial sequences contains at least two nodes .these four possibilities are shown in fig . [fig : l3o ] .please note that we count only the number of `` true '' possibilities here , i.e. , only those cases in which the tour contains three edges which were not part formerly in the tour , as otherwise the move would e.g. only be a lin-2-opt .if one of the partial sequences contains only one node and the other two at least two nodes each , then only one possibility for a `` true '' lin-3-opt remains .if even two of the three partial sequences do only contain one node , then there is no possibility left to reconnect the three sequences without reusing at least one of the edges which was cut .analogously , there is one possibility for the lin-2-opt , if both partial sequences contain at least two nodes each , otherwise there is no possibility . if looking closely at the four variants of the lin-3-opt in fig .[ fig : l3o ] , one finds that the resulting configurations could also be generated by a sequence of lin-2-opts : for the upper left variant in fig .[ fig : l3o ] , three lin-2-opts would be needed , whereas only two lin-2-opts would be sufficient for the other three variants .thus , one might ask whether the lin-3-opt is necessary as a move as a few lin-2-opts could do the same job .however , due to the acceptance criteria of the improvement heuristics , it might be that at least one of the lin-2-opts would be rejected whereas the combined lin-3-opt move could be accepted .thus , it is often advantageous also to implement these next - higher - order moves in order to overcome the barriers in the energy landscape of the small moves .now the question arises how large the neighborhood size of a lin-3-opt is .of course , it has to be of order , as three edges to be removed are randomly selected out of edges .however , for the calculation of the exact number of possibilities one has to distinguish between the case in which all partial sequences contain at least two nodes each and the case in which exactly one partial sequence contains only one node . please note that the node insertion move , which was introduced earlier , is the special case of the lin-3-opt in which one of the partial sequences only contains one node .but in the special case that one of the two next nearest edges to the randomly chosen node is selected , the nim corresponds to a lin-2-opt . 
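the four reconnection variants of the lin-3-opt can be written down explicitly with list slicing . in the sketch below ( segment naming ours ; the correspondence to the four panels of fig . [ fig : l3o ] is not fixed here ) the tour is cut after positions i < j < k into segments a , b , c and a remainder that closes the tour , and each partial sequence is assumed to contain at least two nodes . exactly these four rearrangements replace all three cut edges by new ones ; the remaining non - identity rearrangements reuse an old edge and therefore reduce to a lin-2-opt .

```python
# sketch of the four "true" lin-3-opt reconnections (naming ours).  the tour
# is cut after positions i < j < k; a = tour[:i+1], b = tour[i+1:j+1],
# c = tour[j+1:k+1], and the remainder closes the tour back to a.
def lin_3_opt_variants(tour, i, j, k):
    a, b, c, rest = tour[:i + 1], tour[i + 1:j + 1], tour[j + 1:k + 1], tour[k + 1:]
    return [
        a + b[::-1] + c[::-1] + rest,   # reverse b and reverse c
        a + c + b + rest,               # exchange b and c, no reversal
        a + c[::-1] + b + rest,         # reversed c, then b
        a + c + b[::-1] + rest,         # c, then reversed b
    ]
```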
as the number of cut edges of the nim is 3 , such that this move is basically a lin-3-opt , but as the neighborhood size of this move is of order , this move is also sometimes called lin-2.5-opt .one can go on to even higher - order lin--opts : the lin-4-opt cuts four edges of the tour and reconnects the four created partial sequences in a new way .if every partial sequence contains at least two nodes , there are 25 possibilities for a true lin-4-opt to reconnect the partial sequences to a closed feasible tour .the neighborhood size of this move is of order .the exchange , which was also introduced earlier , is usually a special case of a lin-4-opt . only if the two nodes which are to be exchanged are direct neighbors of each other or if there is exactly one node between them , then the move is equivalent to a lin-2-opt. one can increase the number of deleted edges further and further .however , by doing so , one gradually loses the advantage of the local search approach , in which , due to the similarity of the configurations , their cost values do not differ much . in the extreme , the lin--opt would lead to a randomly chosen new configuration , the cost value of which is not related to the cost value of the previous configuration at all . moving away from the local search approach ,the probability for getting an acceptable new configuration among the many more neighboring configurations with cost values in a much larger interval strongly decreases due to the finite available computing time .when using the local search approach , it turns out that using the smallest possible move only is only optimal in the case of very short computing times . with increasing computing time, a well chosen combination of the smallest moves and their next larger variants becomes optimal . herethe optimization run has more time to search through a larger neighborhood .the next larger moves enable the system to overcome barriers in the energy landscape formed by the small moves only .of course , one can extend this approach and also include moves with the next larger and spend even more computing time . however , for some difficult optimization problems , an approach based on small moves and their next larger variants is not sufficient .there indeed large moves have to be used .a successful approach here are the ruin & recreate moves , which destroy a configuration to some extent and rebuild it according to a given rule set .they work in a different way than the small moves , which completely randomly select a way to change the configuration .in contrast , the ruin & recreate moves contain constructive elements in order to result in good configurations . also for problems like the tsp , for which small moves basically work ,well designed ruin & recreate moves are superior to the small moves .however , the development of excellent ruin & recreate moves is rather difficult , it is indeed an optimization problem itself , whereas the application of the local search approach , which simply intends to `` change the configuration a little bit '' , is rather straightforward and also usually quite successful in producing good solutions , such that it is mostly used .sometimes one needs to know the exact size of the neighborhood generated by the implemented moves for relating it to the available computing time or for tuning an optimization algorithm like tabu search . 
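before turning to the exact neighborhood - size counting , a minimal sketch of the ruin & recreate idea mentioned above may be helpful : ruin by removing a random subset of nodes , recreate by reinserting each removed node at its cheapest position . the ruin size and the cheapest - insertion rule are illustrative choices of ours and not the specific rule set of the cited work .

```python
# minimal ruin & recreate sketch following the idea described above.  the
# ruin size and the cheapest-insertion recreate rule are illustrative choices
# of ours, not the specific rule set of the cited work.
import random

def ruin_and_recreate(tour, dist, ruin_size, rng=random):
    removed = rng.sample(tour, ruin_size)
    partial = [v for v in tour if v not in removed]
    for v in removed:
        best_pos, best_delta = None, float("inf")
        for pos in range(len(partial)):
            a, b = partial[pos], partial[(pos + 1) % len(partial)]
            delta = dist[a, v] + dist[v, b] - dist[a, b]
            if delta < best_delta:
                best_pos, best_delta = pos, delta
        partial.insert(best_pos + 1, v)   # place v between its two new neighbors
    return partial
```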
naturally , one is aware of the neighborhood size of a lin--opt being of order .the aim of this paper is to provide exact numbers for the neighborhood size . for deriving the neighborhood size of the lin-3-opt and of even larger lin--opts, we will start with the determination of the number of possibilities for reconnecting partial sequences to a complete tour with a true lin--opt in sec .[ verbinden ] . therewe will find laws how many possibilities exist depending on the number of partial sequences containing only one node and their spatial neighborhood relation to each other . having this distinction at hand, we will calculate the corresponding numbers of possibilities for cutting the tour in sec .[ trennen ] .for the calculation of the number of possibilities for reconnecting the tour , we want to start out with the special case that each partial sequence contains at least two nodes .a lin--opt cuts the tour into partial sequences .the overall number of possibilities to reconnect them to a closed tour containing all nodes can be obtained when imagining the following scenario : one randomly selects one of the partial sequences and fixes its direction .( this has to be done for the symmetric tsp for which a tour and its mirror tour are degenerate ) this first partial sequence serves as a starting sequence for the new tour to be constructed .then one randomly selects one out of the remaining partial sequences and adds it to the already existing partial tour .there are two possible ways of adding it , one for each direction of the partial sequence .thus , one gets a new system with only partial sequences .the number of possibilities to construct a new feasible tour is thus given as this recursive formula can be easily desolved to however , this overall set of possibilities contains many variants in which old edges which were cut are reused in the new configuration , such that the move is not a true lin--opt .thus , in order to get the number of true lin--opts , those variants have to be subtracted from the overall number . as there are possibilities to choose old edges for the new tour if there were overall deleted edges , the number of true lin--opts is given by the recursive formula the starting point of this recursion is , as there is one possibility for the lin-0-opt , the identity move , in which no edge is changed . table [ tabelle ]gives an overview of the numbers of true lin--opts for small , in the case that each partial sequence contains at least two nodes and that the tsp is symmetric .we find that there is one lin-0-opt , the identity move , no lin-1-opt , as by cutting only one edge no new tour can be formed , one lin-2-opt , four lin-3-opts , and so on . 
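the reconnection counts can be checked numerically . under one consistent reading of the garbled formulas above , the total number of reconnections of k partial sequences ( symmetric case , each sequence with at least two nodes ) is a(k) = 2^(k-1) (k-1)! , and the number of true lin-k-opts follows by subtracting , for every j , the binom(k , j) ways of reusing j old edges times the true count for the remaining k - j edges , starting from r(0) = 1 . the short script below reproduces the numbers quoted in the table ( 1 , 0 , 1 , 4 , 25 , 208 , 2121 ) .

```python
# numerical check of the reconnection counts discussed above (one consistent
# reading of the garbled recursion): a(k) = 2**(k-1) * (k-1)! reconnections in
# total, minus all variants that reuse j of the k deleted edges.
from math import comb, factorial

def true_lin_k_opts(k_max):
    r = [1]                                   # r[0] = 1: the identity move
    for k in range(1, k_max + 1):
        a_k = 2 ** (k - 1) * factorial(k - 1)
        r.append(a_k - sum(comb(k, j) * r[k - j] for j in range(1, k + 1)))
    return r

print(true_lin_k_opts(6))   # [1, 0, 1, 4, 25, 208, 2121]
```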
in the case of an asymmetric tsp , each number here has to be multiplied by a factor of 2 .25 5 & 208 6 & 2121 in the general case , not every partial sequence contains at least two nodes .there might be sequences containing only one node which are surrounded by two sequences containing more than one node in the old tour .furthermore , there might be tuples of neighboring sequences containing only one node each which are surrounded by two sequences containing more than one node , and so on .let be the number of the partial sequences containing more than one node , be the number of sequences containing only one node and surrounded by two sequences containing more than one node in the old tour , be the number of tuples of one - node - sequences surrounded by two sequences containing more than one node in the old tour , be the number of triples of one - node - sequences surrounded by two sequences containing more than one node in the old tour , and so on .we will see that and are no longer functions of only , but depend on the entries of the vector . in the following, the assumption shall hold that not every edge of the tour is cut , such that and .thus , one can always choose a partial sequence consisting of two or more nodes as a starting point for the creation of a new tour and fix its direction , as we only consider the symmetric tsp here . starting with this fixed partial sequence ,a new feasible tour containing all nodes can be constructed by iteratively selecting an other partial sequence and adding it to the end of the growing partial tour . for the overall number of possibilities for construcing a new tour, it plays no role whether one - node - sequences were side by side in the old tour or not .thus , the number of possibilities is simply given as there are two possible ways for adding a partial sequence containing at least two nodes , but only one possibility for adding a sequence with only one node .thus , analogously to the result above we get the result for the asymmetric tsp , this number has to be multiplied with 2 .when calculating the number of true lin--opts , we need to consider the spatial arrangement of the one - node - sequences in the old tour .we have to distinguish between single one - node - sequences , tuples , triples , quadruples , and so on , i.e. , we have to consider the -tuples for each separately .contrarily , we have no problems with the spatial arrangement of partial sequences with at least two nodes .in order to get the number of true lin--opts , we want to use a trick by artificially blowing up partial sequences with only one node to sequences with two nodes .let us first consider here that there are not only partial sequences with at least two nodes but also isolated partial sequences with only one node .( we thus first leave out the tuples , triples , of single - node - sequences in our considerations , but it does not matter here whether there are any such structures . )we extend one one - node sequence to two nodes by doubling the node .thus , one gets possibilities for performing a true lin--opt instead of possibilities . 
by changing the direction of this blown - up sequence, one can connect it in contrast to before as it consisted of only one node to those nodes of the neighboring parts to which it was connected before .there are two possibilities to connect it this way to one of the two neighboring partial sequences and one possibility to connect it this way to both of them .but these cases are forbidden , such that we have to subtract the number of these possibilities in which they get connected and we achieve the recursive formula please note that the resulting number has to be divided by 2 , as a partial sequence with only one node can not be inserted in two different directions .analogously , one can derive a formula if there are tuples of neighboring sequences with only one node each . hereone expands one of the two partial sequences to two nodes , such that there is one tuple less , but one isolated one - node - sequence more and one longer sequence more .analogously to above , the false possibilities must be subtracted and the result divided by 2 , such that we get the formula for all longer groups of single - node - sequences , like triples and quadruples , there is one common approach . hereit is appropriate to blow up a single - node - sequence at the frontier , such that the following recursive formula is achieved : generally , one should proceed with the recursion in the following way : first , those non - zero should vanish for which is maximal .this approach should be iterated with decreasing until one ends up for a formula for tours with partial sequences consisting of at least two nodes each for which we can use the formula for the special case . 3 & 4 & 5 & after having determined the number of possibilities to reconnect partial sequences to a closed tour with a true lin--opt , we still have to determine the number of possibilities for cutting the tour in order to create these partial sequences .we again start out with the special case in which every partial sequence to be created shall contain at least two nodes . by empirical going through all possibilities, we found the formulas which are given in tab .[ sonderfall ] . from this result, we deduce a general formula for the possibilities for the lin--opt : thus , we find here that the neighborhood created by a lin--opt is of order . & & & 2 & 2 & 0 & 0 & 0 & & 0 & 1 & 0 & 0 & 3 & 3 & 0 & 0 & 0 & & 1 & 1 & 0 & 0 & & 0 & 0 & 1 & 0 & 4 & 4 & 0 & 0 & 0 & & 2 & 1 & 0 & 0 & & 0 & 2 & 0 & 0 & & 1 & 0 & 1 & 0 & & 0 & 0 & 0 & 1 & 5 & 5 & 0 & 0 & 0 & & 3 & 1 & 0 & 0 & & 1 & 2 & 0 & 0 & & 2 & 0 & 1 & 0 & & 0 & 1 & 1 & 0 & & 1 & 0 & 0 & 1 & 6 & 2 & 2 & 0 & 0 & & 0 & 3 & 0 & 0 & & 3 & 0 & 1 & 0 & & 1 & 1 & 1 & 0 & & 0 & 0 & 2 & 0 & & 2 & 0 & 0 & 1 & & 0 & 1 & 0 & 1 & in the general case , an arbitrary lin--opt can also lead to partial sequences containing only one node .here we have to distinguish between various types of cuts : the cuts introduced by a lin--opt can be isolated , i.e. , they are between two sequences with more than one node each. 
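( as an aside to the special case treated above : the count of admissible cut patterns can be cross - checked by brute force , since requiring every partial sequence to keep at least two nodes means the chosen cut positions must be pairwise at cyclic distance at least two . the closed form used below , n / k * binom(n - k - 1 , k - 1) , is our reading of the garbled formula ; for k = 2 it reduces to the familiar n(n-3)/2 lin-2-opts . )

```python
# brute-force cross-check (ours) of the number of ways to cut a tour of n
# nodes at k edges such that every partial sequence keeps at least two nodes,
# i.e. the chosen cut positions are pairwise at cyclic distance >= 2.
from itertools import combinations
from math import comb

def cuts_brute_force(n, k):
    def ok(cut):
        gaps = [(cut[(i + 1) % k] - cut[i]) % n for i in range(k)]
        return all(g >= 2 for g in gaps)
    return sum(ok(c) for c in combinations(range(n), k))

def cuts_closed_form(n, k):
    # assumed closed form: n/k * C(n-k-1, k-1); k = 2 gives n*(n-3)/2
    return n * comb(n - k - 1, k - 1) // k

for n in range(8, 13):
    for k in range(2, 5):
        assert cuts_brute_force(n, k) == cuts_closed_form(n, k)
print("brute force and closed form agree")
```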
then they can lead to isolated nodes which are between two partial sequences with more than one node each , and so on .let us view this from the point of view of the cuts of the tour .all in all , a lin--opt generally leads to cuts in the tour .let us denote an -type multicut ( with ) at position ( with ) as the scenario that the tour is cut at successive positions after the node with the tour position number .thus , the tour is cut by an -type multicut successively between pairs of nodes with the tour position numbers .furthermore , let be the number of -type multicuts : is the number of isolated cuts . is the number of 2-type multicuts by which the tour is cut after two successive tour positions such that a partial sequence containing only one node is created , surrounded by two sequences containing more than one node .thus , is also the number of isolated nodes surrounded by longer sequences and is thus identical with .analogously , 3-type multicuts lead to tuples of nodes which are surrounded by partial sequences with more than one node , thus , is the number of these tuples .analogously , 4-type multicuts lead to triples of sequences containing only one node each , and so on . generally , we have for all that of the last section , but for the situation is different : each -type multicut produces a further sequence consisting of at least two nodes , such that we have note that is both the number of these longer partial sequences and the number of all -type multicuts .the overall number of cuts can be expressed as in this general case , the number of ways of cutting the tour also depends not only on , but on the entries of the vector . as tab .[ allgemeinerfall ] shows , the order of the neighborhood size is now given as . from these examples in the table ,we empirically derive the formula for the number of possibilities for cutting a tour with a lin--opt , leading to many -type multicuts. please note that if the upper index of a product is smaller than the lower index , then this so - called empty product is 1 .this formula can be rewritten to making use of being the sum of all .please note that eq .( [ formelspeziell ] ) for the special case with each sequence containing at least two nodes is a special case of eq .( [ formelallgemein ] ) .the numbers for cutting a tour in partial sequences in this section were first found by hand for the examples given in tabs .[ sonderfall ] and [ allgemeinerfall ] , then the general formulas were intuitively deduced from these .after that the correctness of these fomulas was checked by computer programs up to for the special case and for all variations of the general case ..results for the greedy algorithm using small moves ( exchange , node insertion move , and lin-2-opt ) and the variants of the lin-3-opt : for each instance , 100 optimization runs were performed , starting with a random configuration and performing a specific number ( given in the text ) of the corresponding move .[ cols="<,^,>,>,>,^ , < " , ] now one can ask why not to proceed and to move on to even larger moves . we implemented two of the 25 different variants of the lin-4-opt .if denoting the tour position numbers after which the which the tour is cut as , , , and and their successive numbers as , , , and , then the cut tour can be written as follows : then the move variants lead to the following new tours : the variant l4o1 is sometimes called the two - bridge - move , as the two `` bridges '' and are exchanged . 
table [ l4oergebnisse ] shows the quality of the results achieved with these two variants of the lin-4-opt .we find that a further improvement can not be found , the results for the better variant of the lin-4-opt are roughly of the same quality as the results for the worse variants of the lin-3-opt .thus , leaving the local search approach even further leads to worse results .of course , using only the greedy algorithm , one fails in achieving the global optimum configurations for the tsp instances , which have a length of 118293.52 (beer127 instance ) , 42042.535 (lin318 instance ) , and 50783.5475 (pcb442 instance ) , respectively .these optima can be achieved with the small moves and the variants of the lin-3-opt if using a better underlying heuristic ( see e.g. ) . .for getting an approximate solution of an instance of the traveling salesman problem , mostly an improvement heuristic is used , which applies a sequence of move trials which are either accepted or rejected according to the acceptance criterion of the heuristics . for the traveling salesman problem ,mostly small moves are used which do not change the configuration very much . among these moves ,the lin-2-opt , which cuts two edges of the tour and turns around a part of the tour , has been proved to provide superior results .thus , this lin-2-opt and its higher - order variants , the lin--opts , which cut edges of the tour and reconnect the created partial sequences to a new feasible solution , and their properties have drawn great attention . in this paper , we have provided formulas for the exact calculation of the number of configurations which can be reached from an arbitrarily chosen tour via these lin--opts . a specific lin--opt leads to a certain structure of multicuts , i.e. , there are isolated cuts , which divide two partial sequences with at least two nodes , then there are two cuts just behind each other , such that a partial sequence with only one node is created , which is in between two partial sequences with more than one node , then there are three cuts just behind each other , such that a tuple of partial sequences with only one node each is created , and so on . the number of possibilities for cutting a tour according to these structures of multicuts is given in eq .( 14 ) . from the numbers of multicut structures , one has then to derive the numbers of partial sequences , which are given in eqs .( 10 ) and ( 11 ) .then one has to use the recursive formulas ( 6 - 8 ) in order to simplify the dependency of the number of reconnections to only one parameter . finally , one has to use eq .( 3 ) for getting the number of reconnections for a true lin--opt with new edges .finally , one has to sum up the products of the numbers of possible cuttings and of the numbers of possible reconnections in order to get the overall number of neighboring configurations which can be reached via the move . at the end, we have compared the results achieved with these moves using the simple greedy algorithm which rejects all moves leading to deteriorations .we have found that the lin-2-opt is superior to the other small moves and that the lin-3-opt provides even better results than the lin-2-opt . but moving even further away from the local search approach does not lead to further improvements .99 _ der handlungsreisende wie er sein soll und was er zu thun hat , um auftrge zu erhalten und eines glcklichen erfolgs in seinen geschften gewi zu sein von einem alten commis - voyageur _ ( 1832 ) .e. l. lawler et al . 
, _ the traveling salesman problem _ ( john wiley and sons , new york , 1985 ) .g. reinelt , _ the traveling salesman _ ( springer , berlin , germany , 1994 ) s. kirkpatrick , c. d. gelatt jr . , and m. p. vecchi , science * 220 * , 671 ( 1983 ) .g. dueck and t. scheuer , j. comp .phys . * 90 * , 161 ( 1990 ) .p. moscato and j. f. fontanari , phys .a * 146 * , 204 ( 1990 ) .g. dueck , j. comp . phys .* 104 * , 86 ( 1993 ) .g. dueck , t. scheuer , and h .-wallmeier ( 1993 ) , spektrum der wissenschaft * 1993/3 * , 42 ( 1993 ) .g. dueck , _ das sintflutprinzip ein mathematik - roman _( springer , heidelberg , 2004 ) .t. j. p. penna , phys .e * 51 * , r1-r3 ( 1995 ) .e. marinari and g. parisi , europhys .lett . * 19 * , 451 ( 1992 ) .w. kerler and p. rehberg , phys .e * 50 * , 4220 ( 1994 ) .k. hukushima and k. nemoto , j. phys .japan * 65 * , 1604 ( 1996 ) .k. hukushima , h. takayama , and h. yoshino , j. phys .japan * 67 * , 12 ( 1998 ) .b. coluzzi and g. parisi , j. phys .a * 31 * , 4349 ( 1998 ) .j. gu and x. huang , ieee trans .systems man cybernet .* 24 * , 728 ( 1994 ) .j. schneider et al . , physica a * 243 * , 77 ( 1997 ) .coy , b. l. golden , and e. a. wasil , a computational study of smoothing heuristics for the traveling salesman problem , research report , university of maryland , research report , to appear in eur .j. operational res .( 1998 ) .e. schneburg , f. heinzmann , and s. feddersen , _ genetische algorithmen und evolutionsstrategien _( addison wesley , bonn , 1994 ) .d. e. goldberg , _ genetic algorithms in search , optimization and machine learning _ ( addison wesley , reading , mass . , 1989 ) .j. holland , siam j. comp .* 2 * , 88 ( 1973 ) .a. colorni , m. dorigo , and v. maniezzo ( 1991 ) , proceedings of ecal91 european conference on artificial life , paris , 134 ( 1991 ) .j. kennedy , ieee international conference on evelutionary computation ( indianapolis , indiana , 1997 ) .j. kennedy and r. eberhart , proceedings of the 1995 ieee international conference on neural networks * 4 * , 1942 ( 1995 ) .j. kennedy , r. eberhart , and y. shi , _ swarm intelligence _( morgan kaufmann academic press , 2001 ) .f. h. stillinger and t. a. weber , j. stat .phys . * 52 * , 1429 ( 1988 ) .
|
when searching for approximate solutions of the traveling salesman problem with heuristic optimization algorithms, small moves called lin-k-opts are often used. in this paper, we provide exact formulas for the number of possible tours into which a randomly chosen tour can be changed by a lin-k-opt.
|
suppose is a long transaction with data operations on various variables while and are short transactions which just want to read the value of . in 2pl, we saw that the ` ro ` or likewise transactions ( here and ) suffer from time - lag until starts to unlock locks on .al resolved this problem to an extent in which once is done with , it donates the lock to and reads the current version of x and similarly for . by current ,most recent committed version is implied ( here ) .now , if no further writes on take place , the write of on is useless since and read from .if we know that will not abort , we can read from uncommitted versions as well i.e. and turn by turn can read either from or from if versions are assigned .hence more usefulness in terms of garbage collection and donation of locks is seen with a multiversion variant .the notion of a multiversion variant of altruistic locking can be seen from the motivation provided above . from now on , we ll abbreviate this protocol as mal .+ the key point in this protocol like al would be donation of locks . like al, locks would be donated on variables but now since read operations have multiple choices of versions to read from , the field of conflicts ( now multiversion ) would be less and thus would allow more concurrency than al ; its single - version counterpart protocol . the first three rules would be similar to al of course . + `mal1 ` : items can not be read or written by once it has donated them ; that is , if and occur in a schedule , then .+ ` mal2 ` : donated items are eventually unlocked ; that is , if occurs in a schedule following an operation , then is also in and .+ ` mal3 ` : transactions can not hold conflicting locks simultaneously , unless one has donated the data item in question ; that is , if and , are conflicting operations in a schedule and , then either , or is also in and . + the terminology of wake , completely in wake , indebted also is on similar lines .intuitively , if transaction locks a data item that has been donated and not yet unlocked by transaction , , we say that is in the wake of .more formally , we have the following : 1 .an operation from transaction is in the wake of transaction , , in the context of a schedule if and for some operation from .a transaction is in the wake of transaction if some operation from is in the wake of .transaction is completely in the wake of if all of its operations are in the wake of .a transaction is indebted to transaction in a schedule if ,, such that is in the wake of and either and are in conflict or some intervening operation such that is in conflict with both and . is conflict serializable .but if would be replaced by , would not be in csr but still would be allowed by al .so we had introduced al4 .+ ` al4 ` : when a transaction is indebted to another transaction , must remain completely in the wake of until begins to unlock items .that is , for every operation occurring in a schedule , either is in the wake of or there exists an unlock operation in such that .+ so with either or is not passed by al . schedule is in csr though .thus a valid schedule is not passed through al and hence poses an eminent shortcoming . in mal ,the conflicts are only since only multiversion conflicts are considered . thus consider two cases in the above : 1 . when , no problem is faced anyways .2 . when , a new version of is created and no new conflict is created . 
hence the schedule is still in mvcsr and is therefore also passed by mal . hence mal is more flexible and allows more concurrency than al . thus mal4 is a more flexible version of al4 in which the conflicts are of the form instead of all , and . therefore it can be concluded that al mal . in schedule , conflicts exist from to and to . hence the schedule is not in mvcsr . however it would be passed using rules mal1 - 3 , which should be prohibited . therefore another rule , mal4 , is required to handle the problem . + ` mal4 ` : when a transaction is indebted ( conflicts only ) to another transaction , must remain completely in the wake of until begins to unlock items . that is , for every operation occurring in a schedule , either is in the wake of or there exists an unlock operation in such that . + we have now completely described the rules of mal . * _ gen(mal ) mvcsr _ * + the proof essentially follows a standard argument , namely , that any mal - generated history has an acyclic conflict graph . it can be shown that each edge of the form in such a graph is either a `` wake edge , '' indicating that is completely in the wake of , or a `` crest edge , '' indicating that unlocks some item before locks some item . in addition , for every path in , there is either a wake edge from or , or there exists some on the path such that there is a crest edge from to . these properties suffice to prove the claim . + the strict inclusion of mal in mvcsr is shown later with an example . we know that al is an extension of 2pl where donation of locks is permitted . long transactions hold onto locks until they commit and do not allow other transactions to execute . a similar problem can be observed in the case of mv2pl as well . if a secondary small transaction needs to access a subset of data items which are currently locked by the primary transaction , its read and write operations will be executed ; however , its commit will be delayed due to the unavailability of the certify lock ( a certify lock is a lock that a transaction needs to acquire , at commit time , on all data items it has written ) . hence the secondary transaction will have to delay itself until the primary transaction releases all its locks . + if donation of locks is allowed in mv2pl , then the lock on a certain data item can be donated to the secondary transaction , which can then acquire the certify lock and commit without delaying itself . the handling of individual steps remains the same as in mv2pl . the inclusion of donation of locks into mv2pl inspires the mal scheduling protocol . in the next section we will in fact see that + mv2pl mal . either or ( or both ) must be locked by between operations and . by rule al1 , either or ( or both ) must be donated by for and to occur , so either or ( or both ) must be indebted to . however , neither nor is allowed to be in the wake of if the latter is well formed , since later reads . hence either or violates rule al4 . + however , as mal allows donation of locks , can donate the lock to for certification and can commit . hence need not acquire a read lock on along with the lock on . the lock on can be obtained at read time . we know that 2pl al , as al is a relaxed version of 2pl . following the previous comparison , 2pl mal . hence we can also conclude that 2pl . generating the output as per the mv2pl rules , will get executed by acquiring locks on the respective data items . however can not acquire a certify lock on due to a conflict with and will have to wait . will acquire and execute . following this no transaction would proceed due to deadlock .
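the serializability arguments above reduce to checking that a conflict graph is acyclic . the sketch below shows one way such a check can be coded ; the schedule encoding and helper names are assumptions for this example , not taken from the paper . for the multiversion case discussed in the text , only rw conflicts would be added as edges instead of all of rw , wr and ww .

```python
# acyclicity check of a conflict graph built from a schedule (illustrative only)
from collections import defaultdict, deque

def conflict_graph(schedule):
    """schedule: list of (txn, op, item) with op in {'r', 'w'}.
    adds an edge ti -> tj whenever an operation of ti conflicts with a later
    operation of tj on the same item (rw, wr or ww)."""
    edges = defaultdict(set)
    txns = {t for t, _, _ in schedule}
    for i, (ti, oi, xi) in enumerate(schedule):
        for tj, oj, xj in schedule[i + 1:]:
            if ti != tj and xi == xj and (oi == 'w' or oj == 'w'):
                edges[ti].add(tj)
    return txns, edges

def is_acyclic(txns, edges):
    """kahn's topological sort: the schedule is conflict serializable
    iff every transaction can be removed, i.e. the graph has no cycle."""
    indeg = {t: 0 for t in txns}
    for t in edges:
        for u in edges[t]:
            indeg[u] += 1
    queue = deque(t for t in txns if indeg[t] == 0)
    seen = 0
    while queue:
        t = queue.popleft()
        seen += 1
        for u in edges[t]:
            indeg[u] -= 1
            if indeg[u] == 0:
                queue.append(u)
    return seen == len(txns)

if __name__ == "__main__":
    s = [("T1", "w", "x"), ("T2", "r", "x"), ("T2", "w", "y"), ("T1", "r", "y")]
    print(is_acyclic(*conflict_graph(s)))   # -> False (cycle T1 <-> T2)
```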
can not acquire a lock on due to a conflict with , can not acquire a certify lock on due to a conflict with , and can not acquire a write lock on due to a conflict with . hence the schedule will not get accepted under the mv2pl protocol . + in the case of mal , can donate the lock on to so that can commit using certify locks on and . following which , can acquire a write lock on and commit as well . at the end , will commit by obtaining a certify lock on . 2v2pl is just a special case of mv2pl where only two versions of a particular data item are allowed . hence we conclude that 2v2pl mal . the conflicts in schedule are from to . the conflict graph is acyclic and the schedule is in mvcsr . but mal runs into a deadlock while scheduling . get executed by acquiring locks on the respective data items . as can not acquire a write lock on due to a conflict with , the operation will get delayed . would have to donate its lock to for it to certify its writes on and . as per rule 1 of mal , once a lock on a data item has been donated by a transaction , that transaction can not carry out any further operation on that data item . hence will not get executed . therefore the schedule can not be generated by mal . as mvsr mvcsr , using transitivity we can conclude that mal mvsr . * mal + mvto * + due to the donation of locks , detection of aborted transactions of late writers can be done quickly , saving both storage space and time . + if we know that a long transaction has only reads after a short span of the transaction time , it will not abort in mvto ( since aborts happen only due to write operations ) . in this case , is one such transaction . has a donated lock on from . the altruism is predominant in the fact that a transaction can not commit until all transactions it has read from have committed . we change this . if we know has only reads after writing , we know it will not abort . if reading from commits , is aborted since it has a late writer on ( reads from ) . is able to read from since it is committed ; otherwise it would have to read from , and hence and would have gone to waste while waiting for to complete , which would also waste space . + thus mal + mvto is more successful than mvto in this scenario . 1 . storage space would be required to store all versions of all variables . 2 . this could be expensive if there are more ` rw ` transactions than ` ro ` transactions . 3 . to avoid rollback , which would be very expensive considering the versions assigned , we should be fairly sure that there would be no aborts , or only very few .
|
_ this paper builds on altruistic locking ( al ) , which is an extension of 2pl with more relaxed rules . but al too enforces rules which prevent some valid schedules ( present in vsr and csr ) from being passed . this paper proposes a multiversion variant of al which solves this problem . the report also discusses the relationships between protocols such as mal and mv2pl , mal and al , mal and 2pl , and so on . it also discusses the caveats involved in mal and where it lies in the venn diagram of multiversion serializable scheduling protocols . finally , the possible use of mal in hybrid protocols and the parameters involved in making mal successful are discussed . _
|
parallel discrete - event simulations ( pdes ) are a technical tool to uncover the dynamics of information - driven complex systems . their wide range of applications in contemporary sciences and technology has made them an active area of research in recent years . parallel and distributed simulation systems constitute a complex system of their own , whose properties can be uncovered with the well - established tools of statistical physics . in pdes physical processes are mapped to logical processes ( assigned to processors ) . each logical process manages the state of the assigned physical subsystem and progresses in its _ local virtual time _ ( lvt ) . the main challenge arises because logical processes are not synchronized by a global clock . consequently , to preserve causality in pdes the algorithms should incorporate the so - called local causality constraint whereby each logical process processes the received messages from other processes in non - decreasing time - stamp order . depending on the way the local causality constraint is implemented , there are two broadly defined classes of update protocols : conservative algorithms and optimistic algorithms . in conservative pdes , an algorithm does not allow a logical process to advance its lvt until it is certain that no causality violation can occur . in the conservative update protocol a logical process may have to wait to ensure that no message with a lower time stamp is received later . in optimistic pdes , an algorithm allows a logical process to advance its lvt regardless of the possibility of a causality error . the optimistic update protocol detects causality errors and provides a recovery procedure from the violation of the local causality constraint by rolling back the events that have been processed prematurely . there are several aspects of pdes algorithms that should be considered in systematic efficiency studies . some important aspects are : the synchronization procedures , the utilization of the parallel environment as measured by the fraction of working processors , memory requirements which may be assessed by measuring the statistical spread in lvts ( i.e. , the desynchronization ) , inter - processor communication handling , scalability as measured by evaluating the performance when the number of computing processors becomes large , and the speedup as measured by comparing the performance with sequential simulations . in routinely performed studies to date , the efficiency is investigated in a heuristic fashion by testing the performance of a selected application in a chosen pdes environment , i.e. , in a parallel simulator . only recently a new approach in performance studies has been introduced in which the properties of the algorithm are examined in an abstract way , without a reference to a particular application platform . in this approach the main concept is the simulated _ virtual time horizon _ ( vth ) defined as the collection of lvts of all logical processes . the evolution rule of this vth is defined by the communication topology among processors and by the way in which the algorithm handles the advances in lvts . the key assumption here is that the properties of the algorithm are encoded in its representative vth in analogy with the way in which the properties of a complex system are encoded in some representative non - equilibrium interface . in this way , fundamental properties of the algorithm can be deduced by analyzing its corresponding simulated vth .
in this chapter we give an overview of how the methods of non - equilibrium surface growth ( physics of complex systems ) can be applied to uncover some properties of state update algorithms used in pdes . in particular , we focus on the asynchronous conservative pdes algorithm in a ring communication topology , where each processor communicates only with its immediate neighbors . the time evolution of its vth is simulated numerically as an asynchronous cellular automaton whose update rule corresponds to the update rule followed by this algorithm . the purpose of this study is to uncover generic properties of this class of algorithms . in modeling the conservative update mode in pdes , we represent sequential events on processors in terms of their corresponding lvts . a system of processors or _ processing elements _ ( pe ) is represented as a one - dimensional grid . the column height that rises above the -th grid point is a building block of the simulated vth and represents the total time of operations performed by the -th processor . these operations can be seen as a sequence of update cycles , where each cycle has two phases . the first phase is the processing of the assigned set of discrete events ( e.g. , spin flipping on the assigned sublattice for dynamic monte carlo simulation of lattice systems ) . this phase is followed by a messaging phase that closes the cycle , when a processor broadcasts its findings to other processors . but the messages broadcast by other processors may arrive any time during the cycle . processing related to these messages ( e.g. , memory allocations / deallocations , sorting and/or other related operations ) is handled by other algorithms that carry their own virtual times . in fact , in actual simulations , this messaging phase may take an enormous amount of time , depending on the hardware configuration and the message processing algorithms . in our modeling the time extent of the messaging phase is ignored as though communications among processors were taking place instantaneously . in this sense we model an ideal system of processors . the lvt of a cycle represents only the time that logical processes require to complete the first phase of a cycle . therefore , the spread in lvts represents only the desynchronisation that arises due to the asynchronous conservative algorithm alone . by the same token , all other performance indicators such as , e.g. , the overall efficiency or the utilization of the parallel processing environment , that are read out of the simulated vth are the intrinsic properties of this algorithm . this chapter is organized as follows . the simulation model of asynchronous conservative updates and the mapping between the logical processes and the physical processes considered in this study are explained in sec . [ model ] . section [ physics ] outlines the selected ideas taken from non - equilibrium surface science that are used in the interpretation of simulation results ; in particular , the concepts of universality and a non - universal microscopic structure that are relevant in deducing algorithmic properties from the simulated vth . one group of these properties includes the utilization and the speedup , which is provided in sec . another group includes the desynchronization and the memory request per processor , required for past state savings , which is presented in sec . [ dynamics ] . performance of the conservative and the optimistic pdes algorithms is discussed in sec . [ compare ] .
the new approach to performance studies , outlined in this chapter , can be a very convenient design tool in the engineering of algorithms . this issue and directions for future research are discussed in sec . [ conclude ] . in simulations a system of processors is represented as a set of equally spaced lattice points , . each processor performs a number of operations and enters a communication phase to exchange information with its immediate neighbors . this communication phase , called an update attempt , takes no time in our simulations . in this sense we simulate an ideal system of processors , as explained in sec . [ intro ] . an update attempt is assigned an integer index that has the meaning of a wall - clock time ( in arbitrary units , which may be thought of as a fixed number of ticks of the cpu clock ) . the local virtual time at the -th processor site represents the cumulative local time of all operations on the -th processor from the beginning at to time . these local processor times are not synchronized by a global clock . [ figure caption : the mapping between physical processes and logical processes . the nearest - neighbor physical interactions on a lattice with periodic boundary conditions ( the right part ) are mapped to the ring communication topology of logical processes ( two - sided arrows in the left part ) . each pe carries a sublattice of sites . communications take place only at border sites . each pe has at most two effective border sites , i.e. , neighboring pes that it communicates with . ] there is a two - way correspondence between the physical system being simulated in pdes and the system of pes that are to perform these pdes in a manner _ consistent with and faithful to _ the underlying stochastic dynamics of the physical system , as depicted in fig . [ kol-01 ] . on the one hand , by spatially distributing a physical lattice with the nearest - neighbor interactions and periodic boundaries among processors , the asynchronous nature of physical dynamics is carried over to the asynchronous nature of logical processes in the ring communication topology of the computing system . on the other hand , the ring communication topology among processors is mapped onto a lattice arrangement with periodic boundary conditions , , and asynchronous update events in the system of pes can be modeled as an asynchronous cellular automaton on this lattice . the set of local virtual times forms the vth at ( see fig . [ kol-02 ] ) . the time - evolution of the vth is simulated by an update rule , where local height increments are sampled from the poisson distribution of unit mean . the form of the deposition rule depends on the processor load , as explained later in this section . a general principle that governs the conservative update protocol requires a processor to idle if at update attempt the local causality constraint may be violated . this happens when at the -th processor does not receive the information from its neighboring processor ( or processors ) if such information is required to proceed in its computation . this corresponds to a situation when the local virtual time of the -th processor is ahead of either one of the local virtual times or of its left and right neighbors , respectively . in this unsuccessful update attempt the local virtual time is not incremented , i.e. , the processor waits : .
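the waiting rule just described can be written compactly for the case in which a comparison with both neighbors is required at every attempt ( the minimal load per processor , treated in detail below ) . since the symbols are stripped from the text , the notation used here is an assumption introduced for this sketch .

```latex
% conservative update rule, restated from the prose above for the case in which a
% processor must compare with both neighbours at every attempt (minimal load);
% tau_i(t) (lvt of processor i at attempt t), eta_i(t) (poisson-random increment)
% and the periodic boundaries tau_0 = tau_L, tau_{L+1} = tau_1 are notation assumed here
\tau_i(t+1) \;=\;
\begin{cases}
  \tau_i(t) + \eta_i(t), & \text{if } \tau_i(t) \le \min\!\bigl(\tau_{i-1}(t),\,\tau_{i+1}(t)\bigr)
    \quad\text{(local minimum: successful update)}\\[4pt]
  \tau_i(t), & \text{otherwise (unsuccessful attempt: the processor waits)}
\end{cases}
```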
in another case , for example, when at the processor does not need information from its neighbors it performs an update regardless of the relation between its local virtual time and the local virtual times on neighboring processors . at every successful update attempt ,the simulated local virtual time at the -th pe - site is incremented for the next update attempt : , where , and $ ] is a uniform random deviate .the simulations start from the _ flat - substrate condition _ at : .one example of computations that follow the above model is a _ dynamic monte carlo _ simulation for ising spins . in a parallel environment ,a spin lattice is spatially distributed among processors in such a way that each processor carries an equal load of one contiguous sublattice that consists of spin sites ( i.e. , each processor has a load of volumes ) .some of these spin - lattice sites belong to border slices , i.e. , at least one of their immediate neighbors resides on the sublattice of a neighboring processor .processors perform concurrent spin - flip operations ( i.e. , increment their lvts ) as long as a randomly selected spin - site is not a border site .if a border spin - site is selected , to perform a state update a processor needs to know the current spin - state of the corresponding border slice of its neighbor .if this information is not available at the update attempt ( because the neighbor s local time is behind ) , by the conservative update rule the processor waits until this information becomes available , i.e. , until the neighbor s local virtual time catches up with or passes its own local virtual time .the least favorable parallelization is when each processor carries the minimal load of .computationally , this system can be identified with a closed spin chain where each processor carries one spin - site . at each update attempt each processor must compare its lvt with the local times on both of its neighbors .the second least favorable arrangement is when .as before , at each update attempt every processor must compare its local time with the local time of one of its neighbors . when , at update attempt , the comparison of the local virtual times between neighbors is required only if the randomly selected volume site is from a border slice .the above three cases are realized in simulations by the following three update rules .when , the update attempt at is successful iff when , at any site where the update attempt was successful at , at we first randomly select a neighbor ( left or right ) .this is equivalent to selecting either the left or the right border slice on the processor .the update attempt is successful iff where is the randomly selected neighbor ( for the left , for the right ) . at any site where the update attempt was not successful at , at we keep the last value .when , at any site where the update attempt was successful at , at we first randomly select any of the volume sites ( indexed by ) assigned to a processor. the selected site can be either from the border sites ( either or ) or from the interior .the attempt is successful if the selected site is the interior site . when the border site is selected , the attempt is successful if condition ( [ rule2 ] ) is satisfied . 
as for , at any site where the update attempt was not successful at , at we keep the last value . [ figure caption : the growth and roughening of the simulated vth interface : snapshots at and at a later time . local heights are in arbitrary units . here , and . ] in this way the simulated vth , corresponding to the conservative update rule followed by the pdes algorithm , emerges as a one - dimensional non - equilibrium surface grown by depositions of poisson - random time increments that model waiting times . two sample vth surfaces are presented in fig . [ kol-02 ] . major properties of the corresponding algorithm are encoded in these interfaces . in principle , with the help of statistical physics , one should be able to obtain from vth such properties as the utilization , the ideal speedup , the desynchronization , the memory request per processor , the overall efficiency and the scalability . one basic property is the _ mean utilization _ , which can be assessed as the fraction of sites in the vth interface that performed an update at , averaged over many independent simulations . for the minimal load per processor , is simply the mean density of local minima of the interface ( fig . [ kol-02 ] ) . another basic property is the _ mean desynchronization _ in operation times , which can be estimated from simulations as the mean statistical spread ( roughness ) of the vth interface . in the following section we review some useful concepts from surface physics relevant to our study . the roughness of a surface that grows on a one - dimensional substrate of sites can be expressed by its interface width at time where is the height of the column at site and is the average height over sites , . the angular brackets denote the average over many interface configurations that are obtained in many independent simulations . in our study these configurational averages were computed over 800 simulations , unless noted otherwise . based on the time - evolution of , interfaces can be classified in various _ universality classes _ ( for an overview see ref . ) . the idea behind the concept of universality is that , in a statistical description , the growth of the surface depends only on the underlying mechanism that generates correlations among time - evolving columns and _ not _ on the particulars of the physical ( or other ) interactions that cause the growth . for instance , two completely different physical interactions among deposited constituents ( e.g. , one of a magnetic nature and the other of a social nature ) may generate two equivalent surfaces of one universality class , depending on the observed evolution of . the simplest case of surface growth is _ random deposition _ ( rd ) , when the column heights grow independently of each other . the rd interface is totally uncorrelated . the time - evolution of its width is characterized by a never - ending growth in accordance with the power law , with the _ growth exponent _ . such growth defines the rd universality class . the self - affine roughness of the interface manifests itself by the existence of family - vicsek scaling : where the scaling function describes two regimes of the width evolution : the _ dynamic exponent _ gives the time - evolution of the lateral correlation length , i.e. , at a given the largest distance along the substrate between two correlated columns . when exceeds the system size the width saturates and does not grow any more .
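the width and the family - vicsek scaling form just referred to have standard closed forms in the surface - growth literature ; since the formulas are stripped from the text above , the versions below are given as a reconstruction of the standard expressions , with the notation ( w , l , t , h_i , alpha , beta , z ) assumed here .

```latex
% standard forms assumed here (the formulas are stripped from the text above):
% interface width (configurationally averaged) and Family-Vicsek scaling
W^2(L,t) \;=\; \Bigl\langle \frac{1}{L}\sum_{i=1}^{L}\bigl[h_i(t)-\bar h(t)\bigr]^2 \Bigr\rangle ,
\qquad \bar h(t)=\frac{1}{L}\sum_{i=1}^{L} h_i(t)

W(L,t) \;\sim\; L^{\alpha}\, f\!\left(\frac{t}{L^{z}}\right),\qquad
f(x)\sim\begin{cases} x^{\beta}, & x\ll 1 \quad (W\sim t^{\beta},\ \text{growth regime})\\[2pt]
\mathrm{const}, & x\gg 1 \quad (W\sim L^{\alpha},\ \text{saturation})\end{cases},
\qquad z=\alpha/\beta
```

for orientation , the textbook one - dimensional exponent values are beta = 1/2 for rd ( no saturation ) ; alpha = 1/2 , beta = 1/4 , z = 2 for ew ; and alpha = 1/2 , beta = 1/3 , z = 3/2 for kpz . these are standard values quoted here for reference , not read from the text .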
at saturation , for , for a given the width remains constant and obeys the power law , where is the _ roughness exponent_. the growth phase is the initial phase for before the _ cross - over time _ at which saturation sets in .the growth phase is characterized by the single growth exponent .the roughness , growth and dynamic exponents are universal , i.e. , their values depend only on the underlying mechanism that generates correlations . a simple continuum model of non - equilibrium growth that leads to scaling is provided by the kardar - parisi - zhang ( kpz ) equation : where is the height field ( subscripts denote partial derivatives ) , is the mean interface velocity , , and is the uncorrelated gaussian noise .the coefficients and give the strength of the linear damping and the coupling with the nonlinear growth , respectively . a renormalization group analysis can provide a connection between the stochastic growth equation and scaling exponents .the _ kpz universality class _ , governed by the dynamics of eq.([kpz ] ) , is characterized by and , and the exponent identity .when in eq.([kpz ] ) , the growth is governed by the linear edwards - wilkinson ( ew ) equation .the _ ew universality class _ is characterized by and , and the ew exponent identity is . when and in eq.([kpz ] ) , the growth belongs to the rd universality class . unlike the kpz and ew interfaces , the rd interface is not self - affined . the origins of scale invariance , as in eq.([family1 ] ) , andthe universal properties of time - evolving surfaces are well understood . in this study, we use the universal properties of the simulated vth to investigate the scalability of the corresponding pdes algorithm . however , there are many instances where non - universal properties , i.e. , those pertaining to the _ microscopic structure _ of the interface , are of importance . in this study ,one example is the density of local minima or the density of update sites of the vth interface .it is safe to say that there is no general silver - bullet - type of approach to these problems . for the case study of vth interfaces, we were able to develop a discrete - event analytic technique that provides a means for calculating a probability distribution for events that take place on the surface .for the closed linear chain of processors carrying minimal load , when the vth is simulated by poisson - random depositions at local interface minima , the probability distribution of the update events on the corresponding vth surface is : where is the number of updates at the -th update attempt after the simulations reach a steady state .this distribution can be used to derive approximate closed formulas for mean quantities measured in simulations , e.g. , for .also , it is a starting point in the derivation of analogous distributions for the cases when each processor carries a larger load of or . the advantage of knowing is that it enables one to compute analytically quantities that otherwise can be only estimated qualitatively in simulations .= 0.42truecm simulated time evolutions of characteristic densities and the scaled vth interface velocity during simulations with the minimal load per processor ( ) .time marks the transition to steady - state simulations and is the saturation time , as explained in the text . 
for times later than , both the utilization ( diamonds ) and the velocity ( filled circles ) are constant . [ figure caption : simulated time evolutions of characteristic densities and the scaled vth interface velocity during simulations with the load per processor ( ) . as in fig . [ kol-03 ] , for times later than both the utilization ( diamonds ) and the velocity ( filled circles ) are constant , but their values are significantly higher than in the worst - case performance scenario presented in fig . [ kol-03 ] . ] for a real system , the mean utilization is defined either as the number or as the fraction of processors that on average work simultaneously at a time . in our model , is the mean fraction ( i.e. , the density ) of update sites in the simulated vth . the simulated time evolution of is presented in fig . [ kol-03 ] ( for ) and in fig . [ kol-04 ] ( for ) , which illustrate the following observations . first , is not constant as the simulations evolve but abruptly decreases from its initial value at and very quickly , after a few hundred steps , settles down at its steady value when it no longer depends on time . both the utilization and the transition period to the _ steady state _ depend strongly on the processor load . second , the vth velocity must be related to by a simple linear scaling relation . third , the transition time to the steady state can be estimated from simple statistics of the interface . let us elaborate on these issues . the characteristic densities and , plotted in figs . [ kol-03][kol-04 ] , are the fractions of the interface sites ( processors ) that have their lvt larger and equal - or - smaller , respectively , than the mean virtual time . their relation to each other is a simple indicator of the skewness of the distribution of the lvts about the mean virtual time . for the times when they approximately coincide , this distribution is approximately symmetric . the reason for a non - zero skewness at early times is the flat - substrate initial condition , i.e. , the initial null lvt on all processors ( the detailed analysis of this issue can be found in ) . in the worst - case performance scenario , when each processor has the minimal load of ( fig . [ kol-03 ] ) , the duration of this initial transition time to the steady state is a non - universal property of the vth interface . this means that in real applications the time will depend on the application platform , i.e. , the hardware configuration and parameters . but , for a real application , can be determined in an inexpensive way by monitoring and in a trial simulation with a fixed . then , the results can be scaled either up or down for an arbitrary load . the existence of such scaling is the universal property of the vth interface , which is discussed in sec . [ dynamics ] where the explicit scaling relations are given . [ figure caption : the steady - state mean utilization vs the number of processors for the minimal load per processor : the analytical result ( continuous curve ) , its asymptotic limit ( horizontal line ) and simulation results ( symbols ) . the error bars are smaller than the symbol size . ] the overall progress of pdes can be estimated from the time rate of the _ global virtual time _ ( gvt ) , which is the smallest lvt over all processors at . the gvt determines the fossil collection , i.e. , the memory that can be re / de - allocated from past - saved events . in our model , the mean gvt is the mean global minimum of the simulated vth : .
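the quantities discussed in this section can be illustrated with a short simulation of the vth model described earlier : poisson - random increments are deposited at local minima of the ring ( the minimal - load rule ) , and the utilization , the characteristic densities , the gvt and the width are read off the interface . this is a minimal sketch written for this overview , not the authors' code ; the function names , the use of numpy , and the interpretation of the unit - mean poisson increments as exponential waiting times are assumptions .

```python
# minimal sketch of the conservative VTH model (one site per processor) and the
# observables discussed above: utilization, characteristic densities, GVT, width
import numpy as np

rng = np.random.default_rng(0)

def step(tau):
    """one update attempt on the ring: only local minima advance their LVT."""
    left, right = np.roll(tau, 1), np.roll(tau, -1)
    minima = (tau <= left) & (tau <= right)
    eta = rng.exponential(1.0, size=tau.size)   # unit-mean waiting times (assumed)
    return tau + np.where(minima, eta, 0.0), minima.mean()

def diagnostics(tau):
    mean_lvt = tau.mean()
    return {
        "gvt": tau.min(),                              # global virtual time
        "width": np.sqrt(np.mean((tau - mean_lvt) ** 2)),
        "frac_above_mean": np.mean(tau > mean_lvt),    # characteristic densities
        "frac_at_or_below": np.mean(tau <= mean_lvt),
    }

if __name__ == "__main__":
    L = 1000                      # number of processors
    tau = np.zeros(L)             # flat-substrate initial condition
    for t in range(20000):
        tau, u = step(tau)
    print("steady-state utilization ~", round(u, 3))
    print(diagnostics(tau))
```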
on the average ,the time rate of is not larger than the vth interface velocity and after cross - over time to saturation ( indicated in figs .[ kol-03][kol-04 ] and figs .[ kol-08][kol-10 ] ) these two rates are equal .thus , is the measure of progress .it is shown analytically that , where is a constant .for the simulated vth , as can be seen in figs .[ kol-03][kol-04 ] . in a real application is a hardware dependent parameter .the steady - state mean utilization vs the number of processors for loads per processor : analytical results ( continuous curves ) and simulation results ( symbols ) .the error bars are smaller than the symbol size.,width=377 ] since the overall progress in pdes connects linearly to , the mean utilization is the most important property of this algorithm .the steady - state utilization for the worst - case performance as a function of the number of processors is presented in fig .[ kol-05 ] . for brevity of notation , from now on we omit the index because for the utilization is constant . considering the simulation data alone ( symbols in fig .[ kol-05 ] ) it is easily seen that even at the infinite limit of the mean utilization has a non - zero value .in fact , this asymptotic value is approached very closely with less than a thousand processors .the existence of this limit guarantees a non - zero progress of pdes for any value of and for any processor load , since the curve is the lower bound for the steady - state utilization ( compare with fig .[ kol-06 ] ) .the presence of this non - zero limit for as and the global behavior of the simulation data , observed in fig .[ kol-05 ] , suggest the existence of some underlying scaling law for .it appears that , indeed , the underlying scaling law can be obtained analytically from the first moment of the probability distribution given by eq.([distr ] ) and the result is and .this function is plotted in fig .[ kol-05 ] for .similarly for any , starting with , one can construct the probability distribution of updates in the system of processors , each having a load of ( i.e. , the distribution of updates on the corresponding vth interface ) .the mean utilization can then be obtained from the first moment of : where , and and .relation ( [ util-3 ] ) is exact .relation ( [ util-2 ] ) is presented in fig .[ kol-06 ] . as asymptotic limit is .the computational speedup of a parallel algorithm is defined as the ratio of the time required to perform a computation in serial processing to the time the same computation takes in concurrent processing on processors .it is easy to derive from the above definition that for an _ ideal system of processors _ , that is for the particular update _algorithm _ considered in this work , the mean speedup is in other words , is measured by the average number of pes that work concurrently between two successive update attempts .we observe that for ideal pes the speedup as a function must be such that the equation has a unique solution , where is a fixed positive number .this requirement follows naturally from the logical argument that distributing the computations over ideal pes gives a unique speedup , i.e. , two ideal systems having sizes and , respectively , may not give the same .this means that must be a monotonically increasing function of . 
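the definition of the speedup and its identification with the mean number of concurrently working processors , stated in words above , can be summarised as follows ; the symbols s , the angle - bracketed u and n_pe are notation assumed for this sketch .

```latex
% ideal speedup, restating the definition given in the text;
% s, <u(N_PE)> and N_PE are notation assumed for this sketch
s \;=\; \frac{T_{\text{serial}}}{T_{\text{parallel}}}
  \;\approx\; \bigl\langle u(N_{\mathrm{PE}})\bigr\rangle\, N_{\mathrm{PE}}
```

so the non - zero asymptotic utilization found above implies that the ideal speedup of this algorithm keeps growing , roughly linearly , with the number of processors .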
combining eq.([speedup-1 ] ) and eq.([util-1 ] ) , in the worst - case performance when the load per processor is minimal , the mean speedup is and .the latter relation says that this algorithm produces no speedup when the computations are distributed over or processors . in this case , although the utilization is , the processors do not work concurrently but _ alternately _, i.e. , one is working while the other is idling .for a real system of pes performing pdes , in such a situation the communication overhead will produce an actual slowdown , i.e. , the parallel execution time will be longer than the sequential execution time on one processor .when , to take an advantage of concurrent processing the average number of pes working in parallel between two successive update attempts must satisfy , which gives .still , depending on the implementation platform , the actual speedup of pdes may be negligible or not present at all for small . for a general load per processor , combining eq.([speedup-1 ] ) and eq.([util-2 ] ) gives a linear relation with respect to : \left ( 1 - \frac{q(n)}{2 } \right).\ ] ] equation ( [ speedup-2 ] ) can be rearranged to where is the probability that a processor performs an update without the need of communicating with other processors .this probability sharply increases with the processor load . since the mean speedup increases quadratically with and only linearly with , for some pdes that perform the updates in accordance to this algorithm it may be more advantageous to assign more load per processor than to distribute the computations over a large number of processors . in any case , either of the above relations , eq.([speedup-2 ] ) or ( [ speedup-3 ] ) ,can be used to assess the upper bound for the speedup in actual applications .= 0.42truecm simulated time evolution of the vth width in the worst - case performance scenario .time , common for all , marks the transition to the steady - state simulations of fig .[ kol-03].,width=377 ] = 0.42truecm the scaled time evolution of the simulated vth widths of fig .[ kol-07 ] for all times .the insert shows the full data collapse for , with the growth exponent .,width=377 ] in pdes the memory request per processor , required for past - state savings , depends on the extent to which processors get desynchronized during simulations . in our model , the statistical spread of the simulated vth , as illustrated in fig .[ kol-02 ] , provides the measure of this desynchronization . in simulationsthe width of the vth interface is computed using eq .( [ width ] ) .the representative results of simulations for the case of the minimal load per processor are presented in fig .[ kol-07 ] . for any number of processors the time evolution of the vth width has two phases .the first phase , for , is the growth regime , where for the width follows a power law in with the growth exponent . the second phase , after the cross - over time , is the saturation regime , where has a constant value that depends only on the system size and follows a power law in with the roughness exponent .the values of these exponents are characteristic of the kpz universality class .explicitly , the evolution can be written as where is the initial regime where the family - vicsek scaling law , eqs.([family1][family2 ] ) , does not hold .this can be seen directly when the scaling is performed for all , as illustrated in fig .[ kol-08 ] .the whisker - like structures that appear after data collapse in the growth phase , clearly observed in fig . 
[ kol-08 ] , indicate that in the initial start - up time the curves in fig .[ kol-07 ] before scaling follow one evolution for all .the insert shows the universal family - vicsek scaling function , eq.([family2 ] ) , for . here , the cross - over time scales as , where .the presence of the initial non - scaling growth regime is an artifact of the flat - substrate initial condition .its duration is a non - universal parameter that can be determined in pdes by monitoring characteristic densities and the utilization , as discussed in sec .[ util ] .= 0.42truecm simulated time evolution of the vth width for loads .there are two growth regimes , characterized by two exponents and .the duration of the early phase depends on . in this early phase ,simulations are not in the steady - state : the squared width increases linearly with time.,width=377 ] = 0.42truecm the scaled time evolution of the simulated vth widths for general values of and .this scaling function is valid only for steady - state simulations . in the scaling regimethe growth exponent is , as in fig .[ kol-08 ] .the time to saturation , when the width is constant , depends on both and .,width=377 ] when the simulations are performed for the case when each processor carries a load , the evolution of the vth width changes . now , as illustrated in fig .[ kol-09 ] for , there are two distinct phases in the initial growth regime .the early phase evolves in the rd fashion , having the growth exponent , and the later phase has signatures of the kpz scaling : where both and depend on the processor load .the initial rd - like growth does not scale with .this lack of scaling extends to approximately , where marks the end of the initial relaxation period when or .the physical justification for the presence of the rd growth is that when there is a non - zero probability of having some processors performing state updates without the need of communicating with other processors .this probability of uncorrelated " updates increases when the processor load increases .however , even for a large but finite there are some processors that may not complete an update without communication with another processor , thus , correlations are build among the processors and propagate throughout the system .eventually these correlated " updates cause the vth interface to saturate .the net effect of having a large load per processor is the noticeable elongation of the time scale over which the correlations are build , but the dynamics of building these correlations belongs to the same universality class as in the case of the minimal load per processor .therefore , it is expected that the simulated vth should exhibit kpz universality in this case as well , as soon as the correlations become apparent .indeed , after the initial transition time the vth widths can be collapsed onto the following scaling function : where satisfies eq.([family2 ] ) , and with .this scaling function is presented in fig .[ kol-10 ] .accordingly , the vth interfaces belong to the kpz universality class . in the scaling regime ,the evolution can be written explicitly as where , , and . 
for , for all the width follows the power low , where .the consequence of eq.([evol-4 ] ) is that the memory request per processor does not grow without limit but varies as the computations evolve .the fastest growth characterizes the initial start - up phase .the length of the start - up phase depends on the load per processor .the start - up phase is characterized by decreasing values of the utilization . in the steady - state simulations ,when the utilization has already stabilized at a mean constant value the memory request grows slower , at a decreasing rate . in this phase, the mean request can be estimated globally from eq.([evol-4 ] ) . the important consequence of scaling , expressed by eq.([evol-3 ] ) , is the existence of the upper bound for the memory request for any finite number of processors and for any load per processor . on the average ,this upper bound increases proportionally to with the size of conservative pdes .the characteristic time scale from the first step to the steady - state simulations can be estimated by monitoring the utilization for the minimal processor load ( to determine ) and , subsequently , scaling this time with .similarly , the characteristic time scale to , when the desynchronization reaches its steady state , can be scaled with the processor load to determine an approximate number of simulation steps to the point when the mean memory request does not grow anymore .while the conservative algorithm strictly avoids the violation of the local causality constraint , the optimistic algorithm may process the events that are out of the time - stamp order which might violate this constraint . at timeswhen the conservative algorithm forces the processors to idle the optimistic algorithm enforces the computations and state - updates , thus , according to our adopted definition , the theoretical utilization of the optimistic update scheme is always at its maximum value of one because the processors never idle .however , some of the events in the thus processed stream of events on a processor must be processed prematurely , judging by the random nature of the optimistic scheme that takes a risk of guessing whenever the next event is not certain .when in the course of an update cycle a processor receives a _ straggler message _, i.e. , a message from the past " that has its time - stamp smaller than the clock value , all the later events that have been processed incorrectly must be cancelled .the processor must then send out cancellation messages ( called _ anti - messages _ ) to other processors to _ roll back _ all the events that have been processed prematurely .thus , in the optimistic update scheme , although the processors never idle , the computation time of the update cycle is not utilized fully efficiently because some part of this time is used for the meaningless operations ( i.e. , creation and processing of the rollbacks ) and only part of a cycle represents the computations that assure the progress of pdes .there are many variations of optimistic update schemes , e.g. , refs . and references in , oriented to building implementations with better efficiencies and memory management . the key feature of the update mechanism , as described above , and main concepts such as rollback and gvt , first introduced in jefferson s time warp , can be treated as the generic properties of the optimistic algorithm . 
in its generic form , the algorithm keeps the already processed events in the memory in case of the necessary re - processing required by the rollbacks . for the ring communication topology ,we simulate the growth of the vth corresponding to the optimistic update procedure as poison - random depositions to the lattice sites in analogy to the model described in sec .[ model ] .however , in the optimistic model the deposition rule is modified to mimic key features of the optimistic algorithm .we assume that each update cycle on each processor consists of processing events , only some of which can be eventually committed . with each of these eventsthere is the associated random time increment . now the integer index represents the update cycle .the main difference between the optimistic and the conservative simulation models is that any time the conservative would wait the optimistic is allowed to perform a random guess . in the simplest case of totally unbiased guesses , in each cyclethe number of correctly guessed events is obtained from a uniform distribution .the cumulative simulated lvts that correspond to processing all events form the simulated _ optimistic _ vth .the cumulative simulated lvts that correspond to processing only the correctly guessed events form the simulated _ progress _ vth , which is embedded in the optimistic vth .the difference between the optimistic vth and the progress vth represents the cumulative time that has been wasted by generating and processing erroneously guessed events and their associated rollback operations .we define the overall efficiency of the optimistic algorithm as the ratio of _ the total progress time _ to _ the total computation time_. at , the total progress time and the total computation time are obtained by integrating the progress vth and the optimistic vth , respectively , and are represented by the areas under these vth interfaces . in analogy with the above definition, we define the overall efficiency of the conservative algorithm as the ratio of the total computation time ( i.e. , the area under the conservative vth ) to the total time that the processors spend on computations and idling .these efficiencies are presented in fig .[ kol-11 ] for the worst - case scenario of the minimal load per processor and .= 0.42truecm simulated time - evolution of overall efficiencies in an optimistic ( upper curve ) and a conservative ( lower curve ) pdes when processors carry minimal loads ( ).,width=377 ] our simulations ( fig .[ kol-11 ] ) confirm the common conception that , in the ideal setting , the optimistic algorithm should outperform the conservative algorithm .[ kol-11 ] shows , the lower bound for the steady - state conservative efficiency is about and coincides with the lower bound obtained for the utilization ( fig .[ kol-03 ] ) . for the same case ,the steady - state optimistic efficiency is about ; accordingly , the optimistic algorithm has a better utilization of the parallelism . 
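the optimistic model just described can also be sketched in a few lines : each cycle processes a fixed number of events with random time increments , a uniformly distributed number of them is guessed correctly , and the efficiency is taken as the ratio of the total committed ( progress ) time to the total computation time . the parameter values , names and the use of numpy are assumptions for this illustration , not the authors' code .

```python
# minimal sketch of the optimistic VTH model described above (illustrative only)
import numpy as np

rng = np.random.default_rng(0)

def optimistic_efficiency(n_pe=1000, cycles=2000, k=10):
    total_opt = 0.0    # cumulative time of all processed events (optimistic horizon)
    total_prog = 0.0   # cumulative time of correctly guessed events (progress horizon)
    for _ in range(cycles):
        increments = rng.exponential(1.0, size=(n_pe, k))   # event time increments
        correct = rng.integers(0, k + 1, size=n_pe)         # unbiased guesses: 0..k correct
        committed = np.arange(k)[None, :] < correct[:, None]
        total_opt += increments.sum()
        total_prog += (increments * committed).sum()        # only the first `correct` events commit
    return total_prog / total_opt

if __name__ == "__main__":
    print("overall optimistic efficiency ~", round(optimistic_efficiency(), 3))
```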
in actual applications the conservative efficiency can be improved by exploiting in programming a concept of lookahead , based on actual properties of the distributed pdes physical model under consideration . the statistical spread of the simulated optimistic vth is presented in fig . [ kol-12 ] ( note , this figure presents the results obtained in only one simulation ) . the simulated optimistic vth belongs to the rd universality class and the spread in local virtual times grows without limit in accordance with the power law . intuitively , this result should be expected because , by analyzing the operation mode of optimistic updates , one notices that the processors work totally independently , progressing their lvts in an uncorrelated fashion . thus , it should be expected that the memory request per processor required to execute generic optimistic pdes grows without bounds as when the simulations are progressing in . [ figure caption : simulated time - evolution of desynchronizations in an optimistic pdes when processors carry minimal loads ( ) : the widths of the optimistic vth ( upper curve , plus symbols ) and of the progress vth ( lower curve , diamonds ) . ] the unboundedness of the memory request in a generic optimistic scheme can also be justified using quite different arguments ; however , the power - law growth of this request , illustrated in fig . [ kol-12 ] , has never been reported before . the adverse ways in which such an unbounded desynchronization affects the performance , and the standard remedies that can be taken to improve the use of computing resources by optimizing the optimistic memory management , are well - known issues ( a comprehensive discussion can be found in ref . ) . in general , the generic optimistic update scheme requires some kind of explicit synchronization procedure that would limit the lengths of rollbacks . the actual performance of the pdes application may depend on the particulars of the underlying physical dynamics of the physical system being simulated , and the best choice of the algorithm may be uncertain in advance without some heuristic trial studies . for example , recently , overeiner _ et al . _ reported the first observation of self - organized critical behavior in the performance of the optimistic time warp algorithm when applied to dynamic monte carlo simulations for ising spins on a regular two - dimensional lattice . they found that when this pdes approaches the point where the physical ising - spin phase transition is being simulated ( i.e. , the critical point of the physical dynamics ) , the average rollback length increases dramatically and simulation runtimes increase nonlinearly .
in ising - spins simulations , increases in rollback lengths are to be expected since around the ising critical temperature the physical system is characterized by the presence of long - range spin - spin correlations and collective behavior , where large - scale spin - domains may be overturned simultaneously .consequently , approaching the critical point of physical dynamics should produce a decreased number of committed events .however , the simultaneous nonlinear increase of the simulation runtime when pdes approaches this critical point seems to be a property of the optimistic algorithm since a similar problem is not observed when the same physical system is being simulated using the conservative algorithm .one possible explanation for this nonlinear deterioration in runtime may be a nonlinear cache behavior when rollback lengths increase beyond a certain critical value and memory requests increase .the role of the cache behavior , in particular , the nonlinear performance degradation with the number of cache misses , has been recently discussed in regard with the efficient implementation of an asynchronous conservative protocol for a different physical system .another possible explanation , as conjectured in ref . , may be the onset of self - organized criticality in the time warp simulation systems , unrelated to the physical critical state .further studies are required to extract universal properties of optimistic protocols to identify a class of simulation problems that would show in computations a similar behavior to that encountered in pdes for ising spins .the performance of the distributed pdes algorithms depends in general on three main factors : the partitioning of the simulated system among processors , the communication topology among the processors , and the update protocol being adopted . in a heuristic approach to performance studies ( e.g. , as in ref . ) the application algorithm often utilizes physical properties of the model to be simulated ; thus , the conclusions of such studies , as being application specific , may have a limited scope of generalization . in this chapter, we presented a new way to study the performance of pdes algorithms , which makes no explicit reference to any particular application . in this new approach ,the system of processors that perform concurrent update operations in a chosen communication topology is seen as a complex system of statistical physics .first , based on the update and the communication patterns of the algorithm , we construct a simulation model for its representative virtual time horizon .the statistical properties of this virtual time interface correspond to the properties of the algorithm , that is , to the properties of the pattern in which the correlations are formed and propagate in the computing system .second , we extract the properties of the algorithm from the statistical properties of its simulated virtual time horizon . in this chapter ,we demonstrated how this approach can be used to elucidate the key generic properties of the asynchronous conservative parallel update protocol in the ring communication topology among processors .for a finite pdes size , i.e. 
, a finite load per processor and a finite number of processors , our findings can be summarized as follows .both the utilization of the parallel processing environment and its desynchronization can be derived explicitly as theoretical functions of and ( these are eqs.([util-1]-[util-3 ] ) and eqs.([evol-2]-[evol-4 ] ) , respectively ) .these functions express the existence of the underlying scaling laws for the corresponding virtual time horizon and are understood as approximate relations in the sense of statistical averages .the existence of these scaling laws presents one aspect of scalability of this type of pdes algorithm .the other aspect of algorithmic scalability is the behavior of these functions when and increase . in the limit of large is a theoretical non - zero lower bound for the utilization , for any , and the value of this bound increases with . on the other hand , for any and is a finite upper bound for the desynchronization , thus , for the mean memory request per processor during steady - state simulations .therefore , this kind of conservative pdes algorithm is generally scalable .the model simulation of the virtual time horizon for the generic optimistic update protocol in the ring communication topology ( sec .[ compare ] ) showed that the optimistic algorithm lacks the algorithmic scalability .as the optimistic simulations evolve in the steady state , the width of the optimistic virtual time horizon grows without limit for any finite pdes system .in other words , even for the minimal load per processor , the memory request per processor ever increases as the square root of the performed number of time - stamped update cycles , as the simulations evolve in time .therefore , the generic version of this algorithm demands some form of explicit periodic synchronization .one advantage of studying the pdes algorithms in terms of their corresponding virtual time interfaces is the possibility of deriving explicit diagnostic formulas for the performance evaluation , such as , e.g. , the evaluation of the speedup given by eqs.([speedup-2]-[speedup-3 ] ) or the estimate of the memory request given by eq.([evol-4 ] ) for the conservative algorithm considered in this study .these theoretical estimates should be treated as the ideal upper bounds for the performance when pdes are implemented on the real computing systems .a real implementation will produce a deviation from the theoretical prediction , depending on the computing platform and on other components of simulation algorithms .the extend to which the performance of the implementation scales down from the ideal performance should provide important information about possible bottlenecks of the real implementation and should be a guide to improving the efficiencies . the other benefit that comes from the modeling of virtual - time interfaces is a relatively inexpensive design tool for new - generation algorithms , without a prior need for heuristic studies .for example , knowing that in the ring communication topology the maximal conservative memory request per processor for past state savings gets larger as the simulation model gets larger , it is easy to predict the maximum model size that would fit the available memory in the system . 
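as an illustration of the kind of back - of - the - envelope estimate this allows : if one assumes the kpz saturation scaling with the standard roughness exponent alpha = 1/2 ( a textbook value quoted earlier , not read from the text ) , the steady - state memory request per processor grows like the saturated width of the conservative vth ,

```latex
% rough memory estimate, assuming W_sat ~ L^alpha with the standard KPZ value alpha = 1/2
\langle m \rangle \;\propto\; W_{\mathrm{sat}}(N_{\mathrm{PE}}) \;\sim\; N_{\mathrm{PE}}^{1/2}
```

so doubling the number of processors increases the per - processor bound by roughly a factor of sqrt(2) ; the prefactor is application - and platform - dependent and would have to be calibrated in a trial run , as discussed above .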
however , the available memory resources vary across implementation platforms , so it may happen that one size simulation model may fit on one platform and may be too large for the other , having the same number of available processors .the question then is : how to modify the update algorithm to allow for the tunable memory request ? obviously , the question concerns the control of the extent to which the processors get desynchronized in the course of simulations , i.e. , the control of the vth width. one can think about a suitable update pattern that would model the virtual - time interface of the desired properties and then translate this pattern to a new update procedure of the modified algorithm .this approach has been used to design a constrained conservative update algorithm , where the desynchronization is controlled by the width of a moving virtual update window and the ring communication topology is modified to accommodate multiple connections between a processor that carries gvt at given update attempt and other processors . in another group of conservative algorithms an implicit autonomous synchronization may be achieved by modifying the ring communication pattern to accommodate connections with the build - in small - world type of communication network . in both of these modifications , the additional communication network imposed onthe original ring communication topology serves the sole purpose of reducing the desynchronization .further studies are required in this matter to identify best efficient ways of tuning the desynchronization and the memory request . in summary ,the new approach to performance studies , outlined in this chapter , that utilizes simulation modeling of virtual - time interfaces as a tool in algorithm design , opens new interdisciplinary research methodologies in the physics of complex systems in application to computer science .promising avenues where this kind of approach to complex systems of computer science should lead to useful practical solutions may include the criticality issues in distributed pdes algorithms , their scalability , prognostication , the design of efficient communication networks as well as the development of new diagnostic tools for the evaluation of hardware performance .this work is supported by the erc center for computational sciences at msu .this research used resources of the national energy research scientific computing center , which is supported by the office of science of the us department of energy under contract no .de - ac03 - 76sf00098 . partially supported by nsf grants dmr-0113049 and dmr-0426488 .dickens p.m. and reynolds p.f . , _ srads with local rollback _ , in _ proceedings of the scs multiconference on distributed simulation , san diego _ , edited by nicol d. and fujimoto r. , simulation series , 22 ( 1990 ) , pp.161164 .prakash a. and subramanian r. , _ an efficient optimistic distributed scheme based on conditional knowledge _ , in _ proceedings of the sixth parallel and distributed simulation workshop , 1992 scs western multiconference _( ieee press , new york , 1992 ) , pp.8596 . steinman j.s . ,_ breathing time warp _ , in _ proceedings of the seventh workshop on parallel and distributed simulation _ , edited by bagrodia r. and jefferson d. ( ieee computer society press , los alamitos , ca , 1993 ) , pp.109118 .ferscha , a. and chiola g. 
, _ self adaptive logical processes : the probabilistic distributed simulation protocol _ , in proceedings of the 27th annual simulation symposium , lajolla , 1994 ( ieee computer society press , los alamitos , ca , 1994 ) , pp.7888 .korniss g. , toroczkai z. , novotny m.a . , and rikvold p.a ., _ from massively parallel algorithms and fluctuating time horizons to non - equilibrium surface growth _ , physical review letters , 84 ( 2000 ) , pp.13511354 . korniss g. , novotny m.a . , toroczkai z. , and rikvold p.a . ,_ non - equilibrium surface growth and scalability of parallel algorithms for large asynchronous systems _ , in _computer simulation studies in condensed matter physics xiii _ ed . by landau d.p . ,lewis s.p . , and schuettler h .- b . , springer proceedings in physics , 86 ( springer - verlag , 2001 ) , pp.183188 .korniss g. , novotny m.a ., rikvold p.a . ,guclu h. , and toroczkai z. , _ going through rough times : from non - equilibrium surface growth to algorithmic scalability _ , materials research society symposium proceedings series , 700 ( 2001 ) , pp.297308 . korniss g. , novotny m.a . , kolakowska a. , and guclu h. , _ statistical properties of the simulated time horizon in conservative parallel discrete - event simulations _, in _ proceedings of the 2002 acm symposium on applied computing , sac 2002 _ , ( 2002 ) , pp.132138 .kolakowska a. , novotny m.a . , and korniss g. , _ algorithmic scalability in globally constrained conservative parallel discrete - event simulations of asynchronous systems _, physical review e , 67 ( 2003 ) , article no 046703 , 13 pages .kolakowska a. , novotny m.a . , and rikvold p.a . ,_ update statistics in conservative parallel - discrete - event simulations of asynchronous systems _ , physical review a , 68 ( 2003 ) , article no 046705 , 14 pages .toroczkai z. , korniss g. , novotny m.a . , and guclu h. , _ virtual time horizon control via communication network design _ , in _ computational complexity and statistical physics _ ed . by percus a. , istrate g. , and moore c. , santa fe institute studies in the sciences of complexity series ( oxford university press , 2003 ) , in press , arxiv : cond - mat/0304617 .guclu h. , korniss g. , toroczkai z. , and novotny m.a ., _ small - world synchronized computing networks for scalable parallel discrete - event simulations _ , in _ complex networks _ ed . by ben - naim e. , frauenfelder h. , and toroczkai z. , _ lecture notes in physics( springer , 2004 ) , in press .korniss g. , toroczkai z. , novotny m.a . , and rikvold p.a ., _ parallelization of a dynamic monte carlo algorithm : a partially rejection - free conservative approach _ ,journal of computational physics , 153 ( 1999 ) , pp.488508 .novotny m.a ., kolakowska a. , and korniss g. , _ algorithms for faster and larger dynamic metropolis simulations _ , in _ the monte carlo method in the physical sciences _ , ed . by gubernatisj.e , aip conference proceedings , vol .690 ( american institute of physics , new york , 2003 ) , pp .240247 .kolakowska a. , novotny m.a . , and verma p.s ., _ roughening of the interfaces in dimansional two - component surface growth with an admixture of random deposition _ , physical review e , in press ( 2004 ) , 16 pages , arxiv : cond - mat/0403341 . overeiner b. j. , schoneveld a. , and sloot p. m. a. , _ self - organized criticality in optimistic simulations of correlated systems _ ,in _ parallel and distributed discrete event simulation _ , edited by tropper c. 
( nova science publishers , new york , 2002 ) , pp.79 - 98 .p . and turner , s. j. , _ an asynchronous protocol for virtual factory simulation on shared memory multiprocessor systems _ , journal of operational research society , special issue on progress in simulation research , vol .51 , no . 4 ( 2000 ) , pp.413 - 422
|
* abstract * = 0.42truecm in a state - update protocol for a system of asynchronous parallel processes that communicate only with nearest neighbors , global desynchronization in operation times can be deduced from kinetic roughening of the corresponding virtual - time horizon ( vth ) . the utilization of the parallel processing environment can be deduced by analyzing the microscopic structure of the vth . in this chapter we give an overview of how the methods of non - equilibrium surface growth ( physics of complex systems ) can be applied to uncover some properties of state update algorithms used in distributed parallel discrete - event simulations ( pdes ) . in particular , we focus on the asynchronous conservative pdes algorithm in a ring communication topology . the time evolution of its vth is simulated numerically as asynchronous cellular automaton whose update rule corresponds to the update rule followed by this algorithm . there are two cases of a balanced load considered : ( 1 ) the case of the minimal load per processor , which is expected to produce the lowest utilization ( the so - called worst - case performance scenario ) ; and , ( 2 ) the case of a general finite load per processor . in both cases , we give theoretical estimates of the performance as a function of and the load per processor , i.e. , approximate formulas for the utilization ( thus , the mean speedup ) and for the desynchronization ( thus , the mean memory request per processor ) . it is established that the memory request per processor , required for state savings , does not grow without limit for a finite number of processors and a finite load per processor but varies as the conservative pdes evolve . for a given simulation size , there is a theoretical upper bound for the desynchronization and a theoretical non - zero lower bound for the utilization . we show that the conservative pdes are generally scalable in the ring communication topology . the new approach to performance studies , outlined in this chapter , is particularly useful in the search for the design of a new - generation of algorithms that would efficiently carry out an autonomous or tunable synchronization . * keywords : * distributed parallel discrete - event simulations , virtual time , desynchronization , asynchronous cellular automata
|
inherent broadcast nature of wireless channel makes wireless transmission susceptible to eavesdropping within the communication range of the source .traditionally security in wireless networks has mainly been considered at higher layers using cryptographic methods .however , recent advances in computation technology pose serious threats to such frameworks , motivating researchers to explore alternative security solutions which offer unbreakable and quantifiable secrecy . _ physical layer security _ has emerged as a viable solution .the fundamental principle behind physical layer security is to exploit the inherent randomness of noise and communication channels to limit the amount of information that can be extracted at ` bit ' level by an unauthorized receiver .physical layer security builds upon the pioneering results developed by wyner which established the possibility of achieving information - theoretic security by exploiting the noise of communication channel and for the first time showed that secure communication is possible if the eavesdropper s channel is a _ degraded _ version of the destination channel . later in ,wyner s result was extended to gaussian wire - tap channels .these results are further extended to various models such as multi - antenna systems , multiuser scenarios , fading channels .an interesting direction of work on secure communication in the presence of eavesdropper(s ) is one in which the source communicates with the destination via relay nodes . in the existing literaturesuch work has been considered in one or both of the following two scenarios : ( a ) relay nodes employ relaying schemes , such as decode - and - forward ( df ) , ( b ) the source communicates with the destination over a two - hop relay network .we argue that computationally complex relaying schemes such as df render end - to - end performance characterization intractable , without providing any performance guarantee in general channel and network scenarios .therefore , we consider one of the simplest relaying scheme : amplify - and - forward ( af ) that allows us to provide guarantees on the optimal performance over a wide range of channel conditions .a node performing af - relaying scales and forwards the signals received at its input .further , with af - relaying the end - to - end performance characterization problem remains tractable for much larger class of networks than those that can be considered with other relaying schemes .the characterization of the optimal secure af rate is a computationally hard problem for general layered network .thus , in this paper we introduce an approach based on network simplification to reduce the computational effort of approximating the maximum achievable secure af rate in general layered networks . consider a network scenario where source is connected to a destination via a network of wireless relays in the presence of an eavesdropper .the eavesdropper can overhear the transmissions of a subset of relay nodes in the network . to characterize the maximum secure af rate one has to optimize it over the scaling factors of all relay nodes .however , there can be several relay nodes which contribute marginally to the achievable secrecy rate . shutting down those relays saves physical resources without compromising much on the performance . at the same time , computational effort of calculating optimal secure af rate is reduced greatly as now one needs to optimize the secrecy rate over scaling factors of fewer relay nodes . 
in this paper we aim to understand what portion of the maximum achievable secrecy rate can be maintained if only a fraction of the available relay nodes is used. previous work on network simplification embraces two major threads: one pertaining to the characterization of the fraction of the achievable rate / capacity that can be maintained with a selected subset of the available relay nodes, and the other pertaining to the design of efficient algorithms for the selection of a subset of the best relay nodes. in the first direction, for the gaussian n-relay diamond network, prior work characterizes the fraction of the capacity that is retained when only a subset of the available relay nodes is used. this was later extended to the diamond network with multiple antennas at the source and the destination for some scenarios. further work provides upper bounds on multiplicative and additive gaps between optimal af rates with and without network simplification for the gaussian diamond network and a class of symmetric layered networks. recent work characterizes the guarantees achievable over arbitrary layered gaussian networks, however restricted to the selection of exactly one relay from each layer; performance guarantees are also provided for the scenario where a subset of two relays per layer is selected from a network with two layers of three relays each. progress in the second direction consists of low-complexity heuristic algorithms for the selection of a near-optimal relay subnetwork of a given size from a layered gaussian relay network. the notion of relay selection has also been used previously in the cooperative communication literature; however, such prior work used relay subnetwork selection in a restricted sense (selecting the best single relay node among n relays in one-layer networks). in all of the above-mentioned work the main objective was throughput maximization. to the best of our knowledge, no work has been done hitherto towards characterizing the performance of network simplification for wireless relay networks in the presence of an eavesdropper with the objective of secrecy rate maximization, which is an even harder problem. as a first step in this direction, we characterize the optimal secure af rate in communication scenarios where the source communicates with the destination over layers of relay nodes, in the presence of an eavesdropper which overhears the transmissions of the nodes in the last layer, when any number of the available relay nodes in each layer may be used. the eavesdropper being a passive entity, a realistic eavesdropper scenario is one where nothing about the eavesdropper's channel is known, neither its existence nor its channel state information (csi). however, the existing work on secrecy rate characterization assumes one of the following: (1) the transmitter has perfect knowledge of the eavesdropper channel states, (2) _compound channel:_ the transmitter knows that the eavesdropper channel can take values from a finite set, and (3) _fading channel:_ the transmitter only knows the distribution of the eavesdropper channel.
in this paper , we assume that the csi of the eavesdropper channel is known perfectly for the following two reasons .first , this provides an upper bound to the achievable af secrecy rate for the scenarios where we have imperfect knowledge of the eavesdropper channel .for example , the lower ( upper ) bound on the compound channel problem can be computed by solving the perfect csi problem with the worst ( best ) channel gain from the corresponding finite set .further , this also provides a benchmark to evaluate the performance of achievability schemes in such imperfect knowledge scenarios .second , this assumption allows us to focus on the nature of the optimal solution and information flow , instead of on complexities arising out of imperfect channel models ._ organization : _ in section [ sec : sysmdl ] we introduce a general wireless layered relay network model and formulate the problem of maximum achievable secrecy rate with amplify - and - forward relaying in such networks .section [ sec : netsimdiamondnet ] addresses the performance of network simplification in the gaussian -relay diamond network , layered network with layer and computes additive and multiplicative gaps between the maximum secure af rates achievable when and , relays are used . in section [ sec : ecgalnet ]we consider a class of symmetric layered networks and compute additive and multiplicative gaps between the optimal secure af rates obtained when when and , relays are used . section [ sec : cnclsn ] concludes the paper .consider a -layer wireless network with directed links .the source is at layer ` ' , the destination is at layer ` ' and the relays from the set are arranged in layers between them .the layer contains relay nodes , .the source transmits message signals to the destination via relay layers .however , the signals transmitted by the relays in the last layer are also overheard by the eavesdropper .an instance of such a network is given in figure [ fig : layrdnetexa ] . each node is assumed to have a single antenna and operate in full - duplex mode . and the destination .each layer contains two relay nodes .the eavesdropper overhears the transmissions from the relays in layer .,width=336 ] at instant , the channel output at node , is = \sum_{j \in { \mathcal n}(i ) } h_{ji } x_j[n ] + z_i[n ] , \quad - \infty < n < \infty,\ ] ] where ] , are i.i.d .gaussian random variables with zero mean and variance that satisfy an average source power constraint , \sim { \cal n}(0 , p_s) ] is a sequence ( in ) of i.i.d .gaussian random variables with \sim { \cal n}(0 , \sigma^2) ] , its input at time instant , as follows = \beta_i y_i[n ] , \quad 0 \le \beta_i^2 \le \beta_{i , max}^2 = p_i / p_{r , i},\ ] ] where is the received power at node and choice of scaling factor satisfies the power constraint . 
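the input-output relations above can be evaluated numerically for a small example. the sketch below is an illustration rather than code from the paper: it propagates the amplitude gain of the source signal and the covariance of the accumulated relay noise layer by layer, assumes every relay transmits at its power limit (the constraint \beta_i^2 \le p_i / p_{r,i} met with equality), and evaluates the secrecy rate with a real-valued awgn expression, one half of log_2 of the snr ratio, floored at zero; the network size, channel gains, and power values are arbitrary choices.

```python
import numpy as np

def af_secrecy_rate(H_list, h_dest, h_eve, P_s=1.0, P_relay=1.0, sigma2=1.0):
    """H_list[0]: gains source -> first relay layer (vector);
    H_list[l], l>=1: gains layer l -> layer l+1 (row = transmitting relay).
    every relay uses the maximum allowed scaling beta^2 = P_relay / P_received."""
    g = np.asarray(H_list[0], dtype=float).reshape(-1)   # amplitude gain of x_s at layer 1
    C = sigma2 * np.eye(g.size)                          # covariance of accumulated noise
    for H in H_list[1:]:
        H = np.asarray(H, dtype=float)
        P_rx = P_s * g**2 + np.diag(C)                   # received power at each relay
        B = np.diag(np.sqrt(P_relay / P_rx))             # AF scaling at the power limit
        g = H.T @ (B @ g)
        C = H.T @ B @ C @ B @ H + sigma2 * np.eye(H.shape[1])
    P_rx = P_s * g**2 + np.diag(C)
    B = np.diag(np.sqrt(P_relay / P_rx))                 # scaling in the last relay layer
    def snr(h):
        h = np.asarray(h, dtype=float)
        signal = P_s * float(h @ B @ g) ** 2             # coherent same-delay combining
        noise = float(h @ B @ C @ B @ h) + sigma2
        return signal / noise
    snr_d, snr_e = snr(h_dest), snr(h_eve)
    rate = max(0.0, 0.5 * np.log2((1.0 + snr_d) / (1.0 + snr_e)))
    return rate, snr_d, snr_e

# two relay layers with two nodes each, mirroring the example network described above
H_list = [np.array([1.0, 0.8]),
          np.array([[0.9, 0.7],
                    [0.6, 1.1]])]
rate, snr_d, snr_e = af_secrecy_rate(H_list,
                                     h_dest=np.array([1.0, 0.9]),
                                     h_eve=np.array([0.3, 0.2]))
print(f"secrecy rate ~ {rate:.3f} bits per channel use (snr_d = {snr_d:.2f}, snr_e = {snr_e:.2f})")
```

replacing the power-limited scaling with optimized scaling factors is exactly the maximization treated below.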
assuming equal delay along each path , for the network in figure [ fig : layrdnetexa ] , the copies of the source signal ( ] ) , respectively ,arrive at the destination and the eavesdropper along multiple paths of the same delay .therefore , the signals received at the destination and eavesdropper are free from intersymbol interference ( isi ) .thus , we can omit the time indices and use equations and to write the input - output channel between the source and the destination as x_s + \sum\limits_{l=1}^l \sum\limits_{j-1}^{n_l}\left[\sum\limits_{(i_1, ...,i_{l - l } ) \in k_{lj , t } } \!\!\!\!\!\!\!\!\!\beta_{lj } h_{lj , i_1} ...\beta_{i_{l - l } } h_{i_{l - l},t}\right ] z_{lj } + z_t\ ] ] where is the set of -tuples of node indices corresponding to all paths from source to destination with path delay .similarly , is the set of - tuples of node indices corresponding to all paths from the relay of layer to the destination with path delay .we introduce modified channel gains as follows .for all the paths between the source and the destination : for all the paths between the relay of layer to destination with path delay : in terms of these modified channel gains , the source - destination channel in can be written as : similarly , the input - output channel between the source and the eavesdropper can be written as the secrecy rate at the destination for such a network model can be written as , ^+ ] .the secrecy capacity is attained for the gaussian channels with the gaussian input , where = p_s ] with optimum value of the scaling factors for the nodes in the last layer from lemmas [ lemma : ecgalreducedbeta1 ] , the problem of computing the optimal network - wide scaling vector reduces to similar to ( * ? ? ?* lemma 3 ) for linear chain networks , we can show that is a quasi - convex function of in the interval $ ] for a given sub - vector of scaling factors of first relay layers and optimum sub - vector of the optimum scaling factors of the last relay layer .thus , .carrying out this process successively for relays in layer , proves the lemma .therefore , can be written as .we use superscript to emphasize that the optimal scaling factors are computed for nodes in each layer . with these optimum scaling factors ,the snr at the destination with relay nodes in each layer of a layered network with relay layers is given as follows where similarly , the snr at the eavesdropper with relay nodes in each layer is given as follows ^ 2 } \ ] ] now consider the network simplification scenario where only out of available relays in each layer are used .using lemma [ lemma : ecgalreducedbeta1 ] and [ lemma : ecgalreducedbeta2 ] , we can solve secrecy rate optimization problem of and obtain the optimal solution , where is the optimum scaling factor for all the nodes in layer and is given as follows : where \sigma^2},\label{eqn : beta_l_maxk}\\ \left(\beta_{l , glb}^k\right)^ { 2 } & = \frac{1}{k h_l h_e \left(1 + k b\right ) \sqrt{1 + \frac{p_s}{\sigma^2}\frac{k a}{1+k b } } } \label{eqn : betalglb_k}\end{aligned}\ ] ] with the corresponding optimal snr at the destination and the eavesdropper is given as : ^ 2 } \label{eqn : snr_e_k}\end{aligned}\ ] ] let and denote the optimal secure af rate achieved by using all relays and any relays out of available relays in each layer of the ecgal network , respectively . in the following we compute the additive gap for large and the multiplicative gap for small . 
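to make the comparison between using all relays and only a subset concrete before the formal bounds, the following sketch brute-forces the best subset of k relays in a single-layer (diamond) network and reports the resulting additive and multiplicative gaps. it is only an illustration of the quantities being bounded: the channel gains are random draws, the eavesdropper links are drawn weaker than the destination links so that the secrecy rate stays positive, and each active relay simply transmits at its power limit, whereas the analysis above optimizes the scaling factors.

```python
from itertools import combinations
import numpy as np

def diamond_secrecy_rate(h_s, h_d, h_e, P_s=1.0, P_relay=1.0, sigma2=1.0):
    """single relay layer: source->relay gains h_s, relay->destination h_d,
    relay->eavesdropper h_e; every active relay transmits at its power limit."""
    beta = np.sqrt(P_relay / (P_s * h_s**2 + sigma2))
    def snr(h_out):
        signal = P_s * np.sum(beta * h_s * h_out) ** 2
        noise = sigma2 * (1.0 + np.sum((beta * h_out) ** 2))
        return signal / noise
    return max(0.0, 0.5 * np.log2((1 + snr(h_d)) / (1 + snr(h_e))))

rng = np.random.default_rng(1)
n, k = 8, 4
h_s = rng.uniform(0.5, 1.5, n)
h_d = rng.uniform(0.5, 1.5, n)
h_e = rng.uniform(0.05, 0.30, n)        # assumed weaker eavesdropper links

r_n = diamond_secrecy_rate(h_s, h_d, h_e)
r_k = max(diamond_secrecy_rate(h_s[list(S)], h_d[list(S)], h_e[list(S)])
          for S in combinations(range(n), k))
mult_gap = r_n / r_k if r_k > 0 else float("inf")
print(f"R({n} relays) = {r_n:.3f}, best R({k} relays) = {r_k:.3f}")
print(f"additive gap = {r_n - r_k:.3f}, multiplicative gap = {mult_gap:.2f}")
```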
before discussing the upper - bounds on the additive and multiplicative gaps for ecgal networks with arbitrary number of relay layers ,consider the following example where we compute such bounds for an ecgal network with two layers of relay nodes for any and .[ ex:2lyrnetsim ] consider an ecgal network with two layers of relay nodes between the source and the destination , .+ * case i : * . + using and , we have for this network with substituting for and in and and subsequently substituting the results in , we get + 1 + \frac{n p h_1 ^ 2}{\sigma^2 } + \frac { n p h_2 ^ 2}{\sigma^2 } + \frac{n p h_1 ^ 2}{\sigma^2 } \frac { n^{\!2}\ !p h_2 ^ 2}{\sigma^2}}}{1\!+\!\frac{\frac{p_{\!s } \!h_{\!s}^{\!2}\!}{\sigma^2 } \frac{n^{\!2}\ !p h_1 ^ 2}{\sigma^2 } \frac{n^{\!2}\ ! p h_e^2}{\sigma^2}}{\frac{p_{\!s } \!h_{\!s}^{\!2}\!}{\sigma^2 } \left[\ ! \frac{n^{\!2}\ !^ 2}{\sigma^2 } + \frac{n p h_e^2}{\sigma^2 } + 1 \!\right ] + 1 + \frac{n p h_1 ^ 2}{\sigma^2 } + \frac{n p h_e^2}{\sigma^2 } + \frac{n p h_1 ^ 2}{\sigma^2 } \frac{n^{\!2}\ !p h_e^2}{\sigma^2}}}\right]\ ] ] similarly , + 1 + k \frac{ph_1 ^ 2}{\sigma^2 } + k \frac{p h_2 ^ 2}{\sigma^2 } + k \frac{p h_1 ^2}{\sigma^2 } k^2 \frac{p h_2 ^ 2}{\sigma^2}}}{1\!+\!\frac{\frac{p_{\!s } h_s^2}{\sigma^2 } k^2 \frac{p h_1 ^ 2}{\sigma^2 } k^2 \frac{p h_e^2}{\sigma^2}}{\frac{p_{\!s } h_s^2}{\sigma^2 } \left[\!k^2 \frac{p h_1 ^ 2}{\sigma^2 } + k\frac{p h_e^2}{\sigma^2 } + 1 \!\right ] + 1 + k \frac{ph_1 ^ 2}{\sigma^2 } + k\frac{p h_e^2}{\sigma^2 } + k \frac{p h_1 ^ 2}{\sigma^2 } k^2 \frac{p h_e^2}{\sigma^2}}}\right]\ ] ] thus , we have \nonumber\\ & \leq \frac{1}{2}\log\ ! \left[1\!+\!\frac{h_2 ^2}{h_1 ^ 2}\!\left(\!\frac{1}{k}\!-\!\frac{1}{n}\!\right)\!\!+\ ! \frac{\sigma^2}{p h_1 ^ 2}\ !\left(\!\frac{1}{k^2}\!-\!\frac{1}{n^2}\!\right)\!\right ] \nonumber\\ & \quad + \frac{1}{2}\log\ ! \left[1\!+\!\frac{\sigma^2}{p h_e^2}\!\left(\!\frac{1}{k^2}\!-\!\frac{1}{n^2}\!\right)\!\!+\ ! \frac{\sigma^2}{p h_1 ^ 2 } \!\left(\!\frac{1}{k^3}\!-\!\frac{1}{n^3}\!\right)\!\ ! + \ ! \frac{\sigma^4}{p h_1 ^ 2 p h_e^2}\ ! \left(\!\frac{1}{k^4}\!-\!\frac{1}{n^4}\!\right)\right]\end{aligned}\ ] ] and }}{1 + \frac{n p_s h_s^2/\sigma^2}{1+\frac{\sigma^2}{n^ { 2 } p } \left [ \frac{1}{h _ { 1}^ { 2 } } + \frac{1}{h _ { e}^ { 2 } } + \frac { \sigma^2 } { n h _ { 1}^ { 2 } h _ { e}^ { 2 } p } \right ] } } \right ] \middle/ \log \left [ \frac{1 + \frac{k p_s h_s^2/\sigma^2}{1+\frac{\sigma^2}{k^ { 2 } p } \left [ \frac{1}{h _ { 1}^ { 2 } } + \frac{1}{h _ { 2}^ { 2 } } + \frac { \sigma^2 } { k h _ { 1}^ { 2 } h _ { 2}^ { 2 } p } \right]}}{1 + \frac{k p_s h_s^2/\sigma^2}{1+\frac{\sigma^2}{k^ { 2 } p } \left [ \frac{1}{h _ { 1}^ { 2 } } + \frac{1}{h _ { e}^ { 2 } } + \frac { \sigma^2 } { k h _ { 1}^ { 2 } h _ { e}^ { 2 } p } \right ] } } \right]\right.\nonumber\\ & \leq \left(\!\!\frac{n}{k}\!\!\right)^{\!4}\ ! \left(\!\frac{1\!+\!n\frac{p2}{\sigma^2}}{1\!+\!k\frac{p h_1 ^ 2}{\sigma^2}}\!\right ) \left(\!\frac { 1\!+\ !k\frac{p h_1 ^ 2}{\sigma^2 } \!+\ ! k\frac{p h_e^2}{\sigma^2 } \!+\ !^ 2}{\sigma^2 } \frac{p h_e^2}{\sigma^2}}{1\!+\ !n\frac{p h_1 ^ 2}{\sigma^2 } \!+\ !n\frac{p h_2 ^2}{\sigma^2 } \!+\ ! n^3 \frac{p^ 2}{\sigma^2 } \frac{p h_2 ^ 2}{\sigma^2}}\!\right)\left(\!\frac{1\!+\ !k\frac{p h_1 ^ 2}{\sigma^2 } \!+\ ! k\frac{p h_2 ^ 2}{\sigma^2 } \!+\ ! k^3 \frac{p h_1 ^ 2}{\sigma^2 } \frac{p h_2 ^ 2}{\sigma^2}}{1\!+\ !n\frac{p h_1 ^ 2}{\sigma^2 } \!+\ ! 
n\frac{p h_e^2}{\sigma^2 } \!+\ !h_1 ^ 2}{\sigma^2 } \frac{p h_e^2}{\sigma^2}}\!\right)\nonumber\\ & \leq \left(\!\frac{n}{k}\!\right)^{\!2 } \left[1\!+\!\frac{\sigma^2 } { p h_1 ^ 2}\!\left(\frac{1}{k^2}\!-\!\frac{1}{n^2}\right)\ ! + \ !} { p}\!\left(\!\frac{1}{k^2 h_e^2}\!-\!\frac{1}{n^2 h_2 ^ 2}\!\right)\ ! + \ !\frac{\sigma^2 } { p h_1 ^ 2}\frac{\sigma^2 }{ p}\!\left(\!\frac{1}{k^3 h_e^2}\!-\!\frac{1}{n^3 h_2 ^ 2}\!\right)\!\right]\end{aligned}\ ] ] * case ii : * .+ from lemma [ lemma : ecgalreducedbeta1 ] , we have substituting for and in and and subsequently substituting the results in , we get }}{1+\frac{n p_s h_s^2/\sigma^2}{1+\frac{\sigma^2}{n^2 p h_1 ^ 2}\left[1+\frac{p_s h_s^2 } { \sigma^2 } + \frac { h_2 } { h_e } d\sqrt{1+\frac{p_s h_s^2}{\sigma^2}\frac{n^3 p h_1 ^ 2}{\sigma^2}d}\right]}}\!\right]\ ] ] with . similarly , }}{1+\frac{kp_s h_s^2/\sigma^2}{1+\frac{\sigma^2}{k^2 p h_1 ^ 2}\left[1+\frac{p_s h_s^2 } { \sigma^2 } + \frac { h_2 } { h_e } d'\sqrt{1+\frac{p_s h_s^2}{\sigma^2}\frac{k^3 p h_1 ^ 2}{\sigma^2}d'}\right]}}\!\right]\ ] ] with .thus , we have \nonumber\\ & \leq \frac{1}{2}\log \left[\left(\frac{n}{k}\right)^3\frac{1 + \frac{h_e } { h_2 } \sqrt{\!1\!+\ ! \frac{k^3 p h_1 ^ 2}{\sigma^2 } } } { 1 + \frac{h_e } { h_2}\sqrt{\!1\!+\ !\frac{n^3 p h_1 ^ 2}{\sigma^2 } } } \ \frac{1 \!+\ ! \frac{h_2 } { h_e}\sqrt{\!1\!+\!\frac { k^3 p h_1 ^ 2}{\sigma^2 } } \!+\!\frac{k^3 ph_1 ^ 2 } { \sigma^2}}{1\ ! + \ !} { h_e}\sqrt{\!1\!+\!\frac { n^3 p h_1 ^ 2}{\sigma^2 } } \!+\!\frac{n^3 ph_1 ^ 2}{\sigma^2}}\ \frac{1\ ! + \ ! \frac{h_2 } {h_e}\sqrt{\!1\!+\ ! \frac{n^3 p h_1 ^ 2}{\sigma^2 } } } { 1\!+\ ! \frac{h_2 } { h_e}\sqrt{\!1\!+\ ! \frac{k^3 p h_1 ^ 2}{\sigma^2 } } } \right ] \nonumber\\ & \leq \frac{3}{4 }\log\!\left(\frac{n}{k}\right ) \!+\! \frac{\sigma^2}{p h_1 ^ 2}\ ! \left[\left\{\frac{1}{k^3}\!-\!\frac{1}{n^3}\right\}+\!\frac{h_2}{h_e}\!\left\{\!\sqrt{\frac{1}{k^6}\!+\!\frac{p h_1 ^ 2}{k^3 \sigma^2}}-\!\sqrt{\frac{1}{n^6}\!+\!\frac{p h_1 ^ 2}{n^3 \sigma^2}}\right\}\right]\right]\end{aligned}\ ] ] and (h_2\!+\!h_e ) \!+\ ! k^3 \frac{p_s h_s^2}{\sigma^2}\frac{p h_1 ^ 2}{\sigma^2 } h_2}{\left[\!1\!+\!n^2\frac{p h_1 ^ 2}{\sigma^2}\right](h_2\!+\!h_e ) \!+\ ! n^3 \frac{p_s h_s^2}{\sigma^2}\frac{ph_1 ^ 2}{\sigma^2 } h_e } \nonumber\\ & \leq\ ! \max\left\{\left(\frac{n}{k}\right)^3 \frac{\left(1+k^2\frac{ph_1 ^ 2}{\sigma^2}\right ) } { \left(1+n^2\frac{p h_1 ^ 2}{\sigma^2}\right ) } , \frac{h_2}{h_e } \right\}\nonumber\\ & \leq \max\left\{\!\left(\frac{n}{k}\right ) \left[1+\frac{\sigma^2}{p h_1 ^ 2}\left(\frac{1}{k^2}-\frac{1}{n^2}\right)\right ] , \frac{h_2}{h_e } \right\}\end{aligned}\ ] ] for arbitrary and relays in each layer. however , for ecgal networks with arbitrary number of relay layers , it is analytically hard to compute such upper - bounds on the additive and multiplicative gaps between the optimal end - to - end performances with and without network simplification for any and . therefore , in the following we attempt to analyze the scaling behavior of such upper bounds with large and . +* case i : * . 
using and , we have for large and : then from , and we obtain for large and : where , similarly , from , and we obtain for large and : and for large and , and , from , and we obtain } \mbox { , and } s\!n\!r_e^n \sim \frac{n p_s h_s^2/\sigma^2}{1\!+\!\frac{\sigma^2}{n^2 p}\!\left[\frac{h_l^2}{h_e^2}\!+\!\frac{h_l^2}{h_1 ^ 2}\!+\!\frac{b}{n}\right ] } \end{aligned}\ ] ] and from , and } \mbox { , and } s\!n\!r_e^k \sim \frac{k p_s h_s^2/\sigma^2}{1\!+\!\frac{\sigma^2}{k^2 p}\!\left[\frac{h_l^2}{h_e^2}\!+\!\frac{h_l^2}{h_1 ^ 2}\!+\!\frac{b}{k}\right ] } \end{aligned}\ ] ] where , . with these results we are now ready to compute the upper bound on additive and multiplicative gap between the optimal performance of af relaying with and without network simplification. first , we consider the upper bound on the additive gap .substituting the snr values from and in and respectively , we obtain the following upper bound on the additive gap for and asymptotically large and satisfying : + \!\frac{1}{2}\!\log\!\left[1\!+\ ! a { \frac{\sigma^2}{p h_l^2}\!\left(\frac{1}{k^3}\!-\!\frac{1}{n^3}\right)\!\ ! + \ ! \frac{\sigma^2}{p h_e^2}\ !\left(\frac{1}{k^2}\!-\!\frac{1}{n^2}\right)}\right]\label{eqn : addgapllyrbetamax}\ ] ] next , we consider the upper bound on the multiplicative gap .substituting the snr values from and in and resp . , we obtain the following upper bound on the multiplicative gap for and asymptotically large and satisfying : \end{aligned}\ ] ] * case ii : * . fromand we have , for and asymptotically large : \ ! \sqrt{1\!+\ ! \frac { n^3 p h_{l\!-\!1}^2/\sigma^2 } { 1+{h_{l\!-\!1}^2}\ ! \sum_{l=1}^{l-2}{1}/{h_{l}^2 } } } } \label{eqn : betalglbninfty}\ ] ] thus , from and we have , for and asymptotically large : similarly , for and asymptotically large we have : substituting these snr values from and in and respectively , we obtain the following upper bound on additive gap for : \label{eqn : addgapllyrbetaglb}\end{aligned}\ ] ] now , we consider the upper bound on the multiplicative gap for . from and we have , for and asymptotically large : thus , from and we have , for and asymptotically large : similarly , for and asymptotically large we have : substituting these snr values from and in and respectively , we obtain the following upper bound on multiplicative gap for : , \frac{h_l}{h_e } \right\}\ ] ] the results on the asymptotic behaviour of additive and multiplicative gaps are summarized in the following lemma : for ecgal network , the asymptotic additive and multiplicative gaps between the optimal performance of amplify - and - forward relaying obtained in terms of maximum achievable secrecy rate with and without network simplification are bounded from above as : for , \ ! + \!\frac{1}{2}\!\log\!\left(1\!+\ ! a { \frac{\sigma^2}{p h_l^2}\!\left(\frac{1}{k^3}\!-\!\frac{1}{n^3}\right)\!\ ! + \ !\frac{\sigma^2}{p h_e^2}\ !\left(\frac{1}{k^2}\!-\!\frac{1}{n^2}\right)}\right]\\ & \leq \frac{1}{2}\!\log \!\left[1\!+\ ! a \right]+\!\frac{1}{2}\!\log\!\left[1\!+\ ! a { \frac{\sigma^2}{p h_l^2}\!\ ! + \ !\frac{\sigma^2}{p h_e^2}}\right]\\ \frac{\bar{r}_s^n}{\bar{r}_s^k } \leq & \left(\!\frac{n}{k}\!\right)^{\!2}\left[1\!+\!\frac{\sigma^2 } { p h_1 ^ 2}\left(\frac{1}{k^2}-\frac{1}{n^2}\right)+\frac{\sigma^2}{p}\!\left(\frac{1}{k^2 h_e^2}-\frac{1}{n^2 h_l^2}\right)\ ! + \ !\frac { \sigma^2}{p h_l^2}\left(\frac{1}{k^3}\!-\!\frac{1}{n^3}\right)b\right]\end{aligned}\ ] ] and , for , \\ \frac{\bar{r}_s^n}{\bar{r}_s^k } \leq & \max \left\ { \left(\frac{n}{k}\right ) \left[1 \!+\ ! 
\frac{\sigma^2}{p h_1 ^ 2}\left(\frac{1}{k^2}\!-\!\frac{1}{n^2}\right ) \!+\ ! \frac{\sigma^2 b}{p}\left(\frac{1}{k^3}\!-\!\frac{1}{n^3}\right)\right ] , \frac{h_l}{h_e } \right\}\end{aligned}\ ] ] with and _ discussion : _ the results in this lemma show that asymptotically ( in source power ) , for the case where the constraint on scaling factors of the nodes is satisfied with strict equality , the additive gap is independent of the ratio and increases at most logarithmically with and the corresponding multiplicative gap increases at most quadratically with ratio and .similarly , when the constraint on scaling factors of the nodes which eavesdropper snoops on is satisfied with strict inequality , , the additive gap increases at most logarithmically with ratio and , and the corresponding multiplicative gap increases at most linearly with ratio and .exact characterization of the optimum secure af rate in general layered relay networks is an important but computationally intractable problem .we take an approach based on the notion of network simplification to approximate the optimal secure af rate within small additive and multiplicative gaps in the symmetric gaussian n - relay diamond network and a class of symmetric layered networks while simultaneously reducing the computational effort of solving this problem . to the best of our knowledge, this work provides the first characterization of the performance of network simplification in af relay networks in the presence of an eavesdropper . in future, we plan to extend this work to general layered networks .
|
we consider a class of gaussian layered networks where a source communicates with a destination through intermediate relay layers, with multiple relay nodes in each layer, in the presence of a single eavesdropper which can overhear the transmissions of the nodes in the last layer. for such networks we address the question: what fraction of the maximum achievable secure rate can be maintained if only a fraction of the available relay nodes is used in each layer? in particular, we provide upper bounds on the additive and multiplicative gaps between the optimal secure af rate when all relays in each layer are used and when only a subset of the relays is used in each layer. we show that, asymptotically in the source power, the additive gap increases at most logarithmically, and the corresponding multiplicative gap at most quadratically, in the ratio of the number of available to selected relays per layer. to the best of our knowledge, this work offers the first characterization of the performance of network simplification in layered amplify-and-forward relay networks in the presence of an eavesdropper. * keywords: * amplify-and-forward relaying, secrecy rate, layered relay networks, diamond networks.
|
the object of study in the gibbs formulation of statistical mechanics is an ensemble of systems and the gibbs entropy is a functional of the ensemble probability density function .equilibrium is defined as the state where the probability density function is a time - independent solution of liouville s equation .the development of this approach has been very successful , but its extension to non - equilibrium presents contentious problems . to implement the boltzmann approach the phase space is divided into a set of macrostates .the boltzmann entropy at a particular point in phase space is a measure of the volume of the macrostate in which the phase point is situated .the system is understood to be in equilibrium when the phase point is in a particular region of phase space .the entropy and equilibrium are thus properties of a single system .the purpose of this paper is to attempt to produce a synthesis of the gibbs and boltzmann approaches , which validates the gibbs approach , as currently used in ` equilibrium ' statistical mechanics and solid state physics , while at the same time endorsing the boltzmann picture of the time - evolution of entropy , including ` the approach to equilibrium ' . in order to do thiswe need to resolve in some way three questions , to which the current versions of the gibbs and boltzmann approaches offer apparently irreconcilable answers : ( a ) what is meant by equilibrium ?( b ) what is statistical mechanical entropy ? and ( c ) what is the object of study ? the attempt to produce conciliatory answers to ( a ) and ( b ) will occupy most of this paper .however , we shall at the outset deal with ( c ) . as indicated above, ensembles are an intrinsic feature of the gibbs approach ( see , for example * ? ? ?however we follow the neo - boltzmannian view of that we `` neither have nor do we need ensembles '' .the object of study in statistical mechanics is a _ single system _ and all talk of ensemblescan be understood as just a way of giving a relative frequency flavour to the probabilities of events occurring in that system .we now describe briefly the dynamics and thermodynamics of the system together with the statistical approach of gibbs .the boltzmann approach is described in greater detail in sect .[ pwtba ] .[ [ at - the - microscopic - dynamic - level ] ] at the microscopic ( dynamic ) level + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + the system ( taken to consist of microsystems ) is supposed to have a one - to - one autonomous dynamics , on its phase space .the system is reversible ; meaning that there exists a self - inverse operator on , such that . then . 
on the subsets of is a sigma - additive measure , such that ( a ) is finite , ( b ) is absolutely continuous with respect to the lebesque measure on , and ( c ) is preserved by ; that is , and measurable .this means that there will be no convergence to an attractor ( which could in the dynamic sense be taken as an equilibrium state ) .[ [ at - the - phenomenological - thermodynamic - level ] ] at the phenomenological ( thermodynamic ) level + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + equilibrium is a state in which there is no perceptible change in macroscopic properties .it is such that a system : ( a ) either is or is not in equilibrium ( a binary property ) , ( b ) never evolves out of equilibrium and ( c ) when not in equilibrium evolves towards it .[ [ at - the - statistical - level - in - the - gibbs - approach ] ] at the statistical level ( in the gibbs approach ) + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + the phase - point is distributed according to a probability density function invariant under ; meaning that it is a solution of liouville s equation . at equilibriumthe gibbs entropy is the functional {\ensuremath{:\!\mbox{}}\,}-k_{{{{\mbox{\tiny b}}}}}\int_{\gamma_{{{{\mbox{\tiny}}}}}}\rho({\boldsymbol{x}})\ln[\rho({\boldsymbol{x}})]\,\mathrm{d}{{\sf m}}\label{giben}\ ] ] of a time - independent probability density function .problems arise when an attempt is made to extend the use of ( [ giben ] ) to non - equilibrium situations , which are now perceived as being those where is time - dependent .here we must introduce a set of macroscopic variables at the observational level which give more detail than the thermodynamic variables , and a set of macrostates defined so that : ( i ) every is in exactly one macrostate denoted by , ( ii ) each macrostate corresponds to a unique set of values for , ( iii ) is invariant under all permutations of the microsystems , and ( iv ) the phase points and are in macrostates of the same size . is the direct product of the configuration space and the momentum space , and the macrostates are generated from a partition of the one - particle configuration space , the points and are in same macrostate .however , for discrete - time systems phase - space is just configuration space[disc ] and the points and are usually in different macrostates . ]the boltzmann entropy , which is a function on the macrostates , and consequently also a function on the phase points in , is .\label{bolen}\ ] ] this is , of course , an extensive variable and the quantity of interest is the dimensionless entropy per microsystem , which for the sake of brevity we shall refer to as the boltzmann entropy . 
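as a point of reference for what follows, when the volume of a macrostate is counted by a binomial coefficient — as it is for the spin model introduced below — the dimensionless boltzmann entropy per microsystem reduces, by stirling's formula, to the two-state entropy function

\[
s_{\mathrm{B}}(\mu_n)=\frac{1}{N}\ln\binom{N}{n}\;\approx\;-x\ln x-(1-x)\ln(1-x),
\qquad x=n/N,\quad N\gg 1 ,
\]

which attains its maximum value \(\ln 2\) at \(x=1/2\).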
along a trajectory will not be a monotonically increasing function of time .rather we should like it to exhibit _ thermodynamic - like _ behaviour , defined in an informal ( preliminary ) way as follows : _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ definition ( tl1):_the evolution of the system will be * thermodynamic - like * if spends most of the time close to its maximum value , from which it exhibits frequent small fluctuations and rarer large fluctuations . __ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ this leads to two problems which we discuss in sects .[ hdwde ] and [ wntlt ] .is it possible to designate a part of as the equilibrium state ? on the grounds of the system s reversibility and recurrence we can , of course , discount the possibility that such a region is one which , once entered by the evolving phase point , will not be exited .as was well - understood by both maxwell and boltzmann , equilibrium must be a state which admits the possibility of fluctuations out of equilibrium . and refer to a particular macrostate as the equilibrium macrostate and the remark by ,[briclab ] that `` by far the largest volumes [ of phase space ] correspond to the _ equilibrium values _ of the macroscopic variables ( and this is how ` equilibrium ' should be defined ) '' is in a similar vein .so is there a single equilibrium macrostate ? if so it must be that in which the phase point spends more time than in any other macrostate and , if the system were ergodic , it would be the largest macrostate ( see sect .[ wntlt ] ) , with largest boltzmann entropy .there is one immediate problem associated with this .suppose we consider the set of entropy levels , .then , as has been shown by for the baker s gas , associated with these levels there may be degeneracies , such that , for some with , .the effect of this is that the entropy will be likely , in the course of evolution , to spend more time in a level less than the maximum ( see * ? ? ?* figs . 4 and 5 ) .another example , which has been used to discuss the evolution of boltzmann s entropy ( see * ? ? ?* appendix 1 ) and which we shall use as an illustrative example in this paper , is the kac ring model . up or down spins distributed equidistantly around a circle .randomly distributed at some of the midpoints between the spins are spin flippers .the dynamics consists of rotating the spins ( but not the spin - flippers ) one spin - site in the clockwise direction , with spins changing their direction when they pass through a spin flipper . 
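the dynamics just described is easy to simulate directly. the sketch below is an illustration: the number of spins, the flipper density, the all-up initial condition, and the run length are arbitrary choices, and the measured values depend on the particular random placement of the flippers. it evolves the ring, records the boltzmann entropy per spin of the visited macrostates, and prints a time-averaged shortfall from the maximum entropy together with the size of the fluctuations — the two kinds of quantities used below to judge how thermodynamic-like a trajectory is.

```python
import numpy as np
from math import lgamma, log

def entropy_per_spin(n_up, N):
    """dimensionless boltzmann entropy per spin: ln C(N, n_up) / N."""
    return (lgamma(N + 1) - lgamma(n_up + 1) - lgamma(N - n_up + 1)) / N

def kac_ring_entropy(N=10_000, flipper_density=0.1, steps=40_000, seed=0):
    rng = np.random.default_rng(seed)
    spins = np.ones(N, dtype=int)                 # all spins up: a low-entropy macrostate
    flipper = rng.random(N) < flipper_density     # flipper on the bond each spin crosses next
    s = np.empty(steps)
    for t in range(steps):
        # one time step: flip the spins about to cross a flipper, then rotate by one site
        spins = np.roll(np.where(flipper, -spins, spins), 1)
        s[t] = entropy_per_spin(int((spins == 1).sum()), N)
    return s

s = kac_ring_entropy()
s_max = log(2.0)                                  # large-N entropy of the largest macrostates
print(f"time-averaged entropy shortfall ~ {s_max - s.mean():.3e}")
print(f"entropy fluctuation (std over the trajectory) ~ {s.std():.3e}")
```

because the run starts from the extreme all-up configuration, the initial low-entropy excursion contributes to both numbers, so they will not reproduce the values quoted below, which refer to a particular cycle and flipper realization.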
consists of the points , corresponding to all combinations of the two spin states , and is decomposable into dynamically invariant cycles .if is even the parity of is preserved along a cycle which has maximum size .if is odd the parity of alternates with steps along a cycle and the maximum cycle size is . ]macrostates in this model can be indexed , where is the macrostate with up spins and down spins , giving then , of course , with monotonically decreasing with increasing .but , although the maximum macrostate is unique , , and the entropy level corresponding to the largest volume of is given , for , by the macrostate pair .it may be suppose that this question of degeneracy is an artifact of relevance only for small .it is certainly the case , both for the baker s gas and kac ring , that , if is a macrostate which maximizes , although , , as .so , maybe the union of and all the equally - sized macrostates with measure can be used as the equilibrium state . to test this possibilitytake the kac ring and consider the partial sum where is the union of all with =n= ] is the standard deviation with respect to the time distribution . __ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ of course , it could be regarded as unsatisfactory that _ two _ parameters are used as a measure of the degree of a property and it is a matter of judgement which is more important . for the kac ring of 10,000 spins with the entropy profile shown in fig .[ fig1 ] , = 0.58122724\times 10^{-2},\hspace{0.7 cm } \psi_{{\boldsymbol{x}}}[s_{{{{\mbox{\tiny b}}}}}]=0.31802804\times10^{-1 } \label{wntlt5}\ ] ] and , as a comparison , for the same ring with the flippers placed at every tenth site =0.20078055,\hspace{0.7 cm } \psi_{{\boldsymbol{x}}}[s_{{{{\mbox{\tiny b}}}}}]=0.20632198 . \label{wntlt6}\ ] ] it is clear ( and unsurprising ) that the random distribution of spin flippers leads to more thermodynamic - like behaviour . to explore the consequences of tl2 we distinguish between four aspects of a system : a. the number of microsystems and their degrees of freedom , together giving the phase space , with points representing microstates .b. the measure on . c. 
the mode of division of into the set of macrostates .d. the -measure preserving dynamics of the system .having chosen ( i ) and ( ii ) the choices for ( iii ) and then ( iv ) are not unique . in the case of the baker sgas , is a unit hypercube with volume measure .macrostates are specified by partitioning each square face of the hypercube .with such a setup it would now be possible to choose all manner of discrete - time dynamics . whether a system is ergodic will be determined by ( i ) , ( ii ) and ( iv ) and , if it is , is both the proportion of in and of the time spent in , f.a.a . . this will be the case for the baker s gas but not the kac ring .for any specification of ( i)(iii ) we denote the results of computing , ] using ( [ ttb1 ] ) by , ] ; that is to say , we omit the unnecessary trajectory - identifying subscript .if we were able to devise a model with ( i)(iii ) the same as the kac ring , but with an ergodic dynamics , we would have \triangle[s_{{{{\mbox{\tiny b}}}}}]= 0.49997497\times10^{-4 } , \hspace{0.7 cm } \psi[s_{{{{\mbox{\tiny b}}}}}]=0.70707142\times10^{-4 } \end{array } \label{wntlt7}\ ] ] and it is not difficult to show that ] are monotonically decreasing functions of .ergodicity leads to more thermodynamic - like behaviour , which becomes increasingly thermodynamic - like with increasing . into macrostates with the measure given by a combinatorial quantity like ( [ kac1 ] ) . ]this behaviour is also typical , since it occurs f.a.a . . of course, the results ( [ wntlt7 ] ) are not simply dependent on the putative ergodic dynamics of the system , but also on the way that the macrostates have been defined . if the time average of is to be close to and if the fluctuations in are to be small then most of must lie in macrostates with close to . in the case of the kac ring , with , 99.98% of in macrostates with .however , of course , the kac ring , although not ergodic , gives every appearance , at least in the instances investigated ( see fig .[ fig1 ] ) , of behaving in a thermodynamic - like manner . although ergodicity , with a suitable macrostate structure , is _ sufficient _ for thermodynamic - like behaviour , it is clearly not _necessary_. consider the case where can be ergodically decomposed ; meaning that where is invariant and indecomposable under .then the time spent in is and we , henceforth , identify a time average along a trajectory in using with also replacing the subscript in ( [ wntlt1 ] ) and ( [ wntlt2 ] ) . in the case of the kac ring the index labels the cycles which form the ergodic decomposition of . for a particular cycle , like that shown in fig .[ fig1 ] , is obtained simply by counting the number of times the phase point visits each macrostate in a complete cycle .the data are then used to compute the results given in ( [ wntlt5 ] ) . a plot of against the microstate index is shown in fig .[ fig2 ] with a comparison made with , the corresponding curve for a system with ergodic dynamics .this gives a graphic illustration of the suggestion that for the cycle of the kac ring shown in fig .[ fig1 ] together with the corresponding curve of for a system with ergodic dynamics.,width=377 ] ergodic systems , with a suitable choice of macrostates , are likely to be more thermodynamic - like in their behaviour than non - ergodic systems .it might also be speculated that systems show increasingly thermodynamic - like behaviour with decreasing .another advantage of an ergodic system is that , f.a.a . 
, the level of thermodynamic - like behaviour will be the same .this contrasts with an ergodic decomposition characterized by ( [ ttb2 ] ) , where it is possible for differing levels of thermodynamic - like behaviour to be exhibited within different members of the decomposition . to be precise ,take small positive and and regard behaviour along a trajectory as thermodynamic - like if and only if both <\varepsilon_\triangle ] .let be the union of all in which the behaviour is thermodynamic - like with . in the discussion of the boltzmann approach we have so far avoided any reference to probabilities . to complete the discussion in this section and to relate our arguments to the gibbs approach we shall need ( see * ? ? ?* ) to introduce two sets of probabilities for a system with the ergodic decomposition ( [ ttb2 ] ) .the first of these is , .then thermodynamic - like behaviour will be typical for the system if thus we have two levels of degree , the first represented by the choices of and and the second concerning the extent to which ( [ ttb4 ] ) is satisfied .as we saw in sect .[ giben ] , the gibbs approach depends on defining a probability density function on , for a system ` at equilibrium ' .thus we must address more directly the question of probabilities .assuming the ergodic decomposition ( [ ttb2 ] ) , we take the time - average definition of , for which where is given by ( [ ttb3 ] ) .thus the probability density function for is 0,&\mbox{otherwise , } \end{array}\right . \label{rgb2}\ ] ] from which we have if we assume that all points of are equally likely , then on bayesian / laplacean grounds , and consonant with the approach of , we should choose giving , from ( [ rgb3 ] ) , , which is the microcanonical distribution and for which , from ( [ giben ] ) , =s_{{{\mbox{\tiny b}}}}(\gamma_{{{\mbox{\tiny } } } } ) . \label{rgb4}\ ] ] it also follows , from ( [ bolen ] ) , ( [ wntlt3 ] ) , ( [ ttb1 ] ) and ( [ assum1 ] ) that has proposed a general scheme for relating a phase function , defined on to a macro - function defined on the macrostates and then to a thermodynamic function . the first step is to course grain over the macrostates to produce . is a good approximation to for the phase functions relevant to thermodynamics since their variation is small over the points in a macrostate . ]the second step is to define the thermodynamic variable along the trajectory as . in the case of the boltzmann entropy , which is both a phase function and a macro - functionthe first step in this procedure is unnecessary since it already , by definition , course grained over the macrostates .then we proceed to identify the dimensionless thermodynamic entropy per microsystem with . in the case of a system with an ergodic decompositionthis definition would yield a different thermodynamic entropy for each member of the decomposition , with , from ( [ wntlt1 ] ) , . \label{rgb6}\ ] ] in the case where the behavior is thermodynamic - like in , differs from by at most some small and , if ( [ ttb4 ] ) holds , this will be the case for measurements along most trajectories . in the case of the kac ring with andthe trajectory investigated for figs .[ fig1 ] and [ fig2 ] the actual difference is given in ( [ wntlt5 ] ) , a value which is likely to decrease with increasing .it is often said that in `` equilibrium [ the gibbs entropy ] agrees with boltzmann and clausius entropies ( up to terms that are negligible when the number of particles is large ) and everything is fine '' . 
interpreted within the present context this means that the good approximation to the entropy per microsystem of a system for which thermodynamic-like behaviour is typical, namely the time average of the boltzmann entropy, can be replaced by the gibbs entropy per microsystem. the advantage of this substitution is obvious, since the first expression depends on the division into macrostates and the second does not. however, a little care is needed in justifying this substitution. it is not valid because, as asserted in the quotation cited earlier, the largest macrostate occupies an increasing proportion of the phase space as the number of microsystems increases; indeed, we have shown for the kac ring that the reverse is the case, and that proportion becomes vanishingly small as the number of microsystems increases. however, the required substitution can still be made, since for that model the entropy per microsystem of the largest macrostate and that of the whole phase space converge to the same value. although it may seem that the incorrect intuition on the part of these authors concerning the growth in the relative size of the largest macrostate, leading as it does to the correct conclusion with respect to entropy, is easily modified and of no importance, we have shown in sect. [hdwde] that it has profound consequences for the attempt to define equilibrium in the boltzmann approach. it should be emphasized that the gibbs entropy ([rgb4]) is no longer taken as that of some (we would argue) non-existent equilibrium state, but as an approximation to the true thermodynamic entropy, which is the time-average over macrostates of the boltzmann entropy. the use of a time-independent probability density function for the gibbs entropy is not because the system is at equilibrium but because the underlying dynamics is autonomous. the thermodynamic entropy approximated by the gibbs entropy ([rgb4]) remains constant if the phase space remains unchanged, but changes discontinuously if a change in external constraints leads to a change in the phase space. an example of this, for a perfect gas in a box when a partition is removed, has been considered in the literature, where it is shown that the boltzmann entropy closely follows the step change in the gibbs entropy.
if they were one could not , by measurement , investigate a system not in equilibrium .in fact , traditional ergodic theory does not distinguish between systems in equilibrium and not in equilibrium .( ii ) ergodic results are all to within sets of measure zero and one can not equate such sets with events with zero probability of occurrence .( iii ) rather few systems have been shown to be ergodic .so one must look for a reason for the success of equilibrium statistical mechanics for non - ergodic systems and when it is found it will make the ergodicity of ergodic systems irrelevant as well .our use of ergodicity differs substantially from that described above and it thus escapes wholly or partly the strictures applied to it . in respect of the question of equilibrium / non - equilibrium we argue that the reason this does not arise in ergodic arguments is that equilibrium does not exist .the phase point of the system , in its passage along a trajectory , passes through common ( high entropy ) and uncommon ( low entropy ) macrostates _ and that is all_. so we can not be charged with ` blurring out ' the period when the system was not in equilibrium .the charge against ergodic arguments related to sets of measure zero is applicable only if one wants to argue that the procedure always works ; that is that non - thermodynamic - like behaviour never occurs .but we have , in this respect taken a boltzmann view .we need thermodynamic - like behaviour to be typical and we have proposed conditions for this to be the case .but we admit the possibility of atypical behaviour occurring with small but not - vanishing probability .while the class of systems admitting a finite or denumerable ergodic decomposition is likely to be much larger than that of the purely ergodic systems , there remains the difficult question of determining general conditions under which the temporal behaviour along a trajectory , measured in terms of visiting - times in macrostates , approximates , in most members of the ergodic decomposition , to thermodynamic - like behaviour .bricmont , jean ( 2001 ) , bayes , boltzmann and bohm : probabilities in physics , _ in _ j. bricmont , d. drr , m. c. galvotti , g. ghirardi , f. petruccione n. zanghi ( eds ) , _ chance in physics : foundations and perspectives _ , springer , 321 .goldstein , sheldon ( 2001 ) , boltzmann s approach to statistical mechanics , _ in _j. bricmont , d. drr , m. c. galvotti , g. ghirardi , f. petruccione n. zanghi ( eds ) , _ chance in physics : foundations and perspectives _ , springer , 3954 .
|
The Boltzmann and Gibbs approaches to statistical mechanics have very different definitions of equilibrium and entropy. The problems associated with this are discussed, and it is suggested that they can be resolved, to produce a version of statistical mechanics incorporating both approaches, by redefining equilibrium not as a binary property (being/not being in equilibrium) but as a continuous property (degrees of equilibrium) measured by the Boltzmann entropy, and by introducing the idea of thermodynamic-like behaviour for the Boltzmann entropy. The Kac ring model is used as an example to test the proposals.
|
the energy transport properties of lattice vibrations in crystalline solids have recently attracted much research activity . in the simplest case energyis the only relevant conserved quantity , and then it is expected that for a large class of three or higher dimensional systems energy transport is diffusive and the fourier s law of heat conduction holds ; see for instance for a discussion . in contrast , many one - dimensional systems exhibit anomalous energy transport violating the fourier s law .section 7 of ref . offers a concise summary of the state of the art in the results and understanding of transport properties of such systems .if a suitable stochastic noise is added to the hamiltonian interactions , also one - dimensional particle chains can produce diffusive energy transport .a particularly appealing test case is obtained by taking a harmonic chain , which has ballistic energy transport , and endowing each of the particles with its own poissonian clock whose rings will flip the velocity of the particle . to our knowledge , this _ velocity flip model _ was first considered in , and it is one of the simplest known particle chain models which has a finite heat conductivity and satisfies the time - dependent fourier s law .its transport properties depend on the harmonic interactions , most importantly on whether the forces have an on - site component ( _ pinning _ ) or not . for nearest neighbor interactions ,if there is no pinning , there are two locally conserved fields , while with pinning there is only one , the energy density .in addition , the thermal conductivity , and hence the energy diffusion constant , happens to be independent of temperature , which implies that the fourier s law corresponds to a _ linear _ heat equation .this allows for explicit representation of its solutions in terms of fourier transform .it also leads to the useful simplification that for any stochastic initial state also the expectation value of the temperature distribution satisfies the fourier s law , even when the initial total energy has macroscopic variation .the main goal of the present contribution is to introduce a mathematical method which allows splitting the dynamics of the velocity flip model into local and global components in a controlled manner .the local evolution can then be chosen conveniently to simplify the analysis .for instance , here we show how to apply the method to separate the harmonic interactions and a dissipation term generated by the noise into explicitly solvable local dynamics .although created with perturbation theory in mind , the method itself is non - perturbative .in fact , our main result will assume the exact opposite : we work in the regime in which the noise dominates over the harmonic evolution , as then it will be possible to neglect certain resonant terms which otherwise would require more involved analysis .the method of splitting is quite generic but in the end it only amounts to reorganization of the original dynamics .therefore , it is important to demonstrate that it can become a useful tool also in practice . as a test case , we study here the velocity flip model with pinning and in the regime where the noise dominates , i.e. , the flipping rate is high enough .the ultimate goal is to prove that this system _ thermalizes _ for any sensible initial data : after some initial time any local correlation function , i.e. 
, an expectation value of a polynomial of positions and velocities of particles microscopically close to some given point , is well approximated by the corresponding expectation taken over some statistical equilibrium ( thermal ) ensemble .the thermalization time may depend on the initial data and on the system size but for systems with `` normal '' transport this time should be less than diffusive , , where denotes the length of the chain .we do not have a proof of such a strong statement yet and we only indicate how the present methods should help in arriving at such conclusions . instead of the full local statistics , we focus here on the time evolution of the _ average kinetic temperature profile _ , the observables where denotes the momentum of the particle at the site at time momentum is a random variable whose value depends both on the realization of the flips and on the distribution of the initial data at .we use to denote the corresponding expectation values .we prove here that for a large class of harmonic interactions with pinning , the average kinetic temperature profile does thermalize and its evolution will follow the time evolution dictated by the dynamic fourier s law as soon as a thermalization period has passed . as mentioned above , the velocity flip model has been studied before , using several different methods from numerical to mathematically rigorous analysis . in , it was proven that every translation invariant stationary state of the infinite chain with a finite entropy density is given by a mixture of canonical gibbs states , hence with temperature as the sole parameter .this was shown to hold even when fairly generic anharmonic interactions are included . since the dynamics conserves total energy , this provides strong support to the idea that energy is the only ergodic variable in the velocity flip model with pinning .results from numerical analysis of the velocity flip model are described in . therethe covariance of the nonequilibrium steady state of a chain with langevin heat baths attached to both ends was analyzed , and it was observed that the second order correlations in the steady state coincide with those of a similar , albeit more strongly stochastic , model of particles coupled everywhere to self - consistently chosen heat baths .hence , in its steady state the stationary fourier s law is satisfied with an explicit formula for the thermal conductivity ; the full mathematical treatment of the case without pinning is given in .it was later proven that , unlike the self - consistent heat bath model , the velocity flip model satisfies also the dynamic fourier s law .this was postulated in , based on earlier mathematical work on similar models by bernardin and olla ( see e.g. ) , and it was later proven by simon in .( although the details are only given for the case without pinning , it is mentioned in the remark after theorem 1.2 that the proofs can be adapted to include interactions with pinning . ) also the structure of steady state correlations and energy fluctuations are discussed in with supporting numerical evidence presented in . for a more general explanatory discussion about hydrodynamic fluctuation theory , we refer to a recent preprint by spohn . 
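As an illustration of the model just described (and not of the analytical machinery developed below), the following sketch simulates the pinned harmonic chain with velocity flips directly and estimates the average kinetic temperature profile E[p_x(t)^2] by averaging over realizations of the flip noise, with unit particle mass as assumed in the text. The nearest-neighbour coupling, pinning strength, flip rate, time step and ensemble size are illustrative choices, and the Poisson clocks are approximated by independent flips with probability gamma*dt per step.

import numpy as np

rng = np.random.default_rng(0)
L = 32                 # sites on the circle (illustrative)
a, pin = 1.0, 1.0      # nearest-neighbour coupling and pinning (illustrative)
gamma = 1.0            # flip rate of the Poisson clocks (illustrative)
dt, nsteps = 0.01, 1000
nreal = 200            # independent realizations of the flip noise

def force(q):
    # harmonic force -(Phi q)_x with nearest-neighbour coupling and pinning
    return -a * (2.0 * q - np.roll(q, 1) - np.roll(q, -1)) - pin * q

T = np.zeros((nsteps + 1, L))          # accumulates E[p_x(t)^2]
for _ in range(nreal):
    q, p = np.zeros(L), np.zeros(L)
    p[0] = 3.0                         # deterministic initial data: all energy on one site
    T[0] += p ** 2
    for n in range(1, nsteps + 1):
        p += 0.5 * dt * force(q)       # one velocity-Verlet step of the harmonic flow
        q += dt * p
        p += 0.5 * dt * force(q)
        flips = rng.random(L) < gamma * dt   # approximate Poisson clocks on the time grid
        p[flips] *= -1.0
        T[n] += p ** 2
T /= nreal

print("average kinetic temperature profile at t =", nsteps * dt)
print(np.round(T[-1], 3))

Averaging over more realizations sharpens the estimate; the profile spreads out from the initially excited site and flattens towards the uniform value fixed by the conserved total energy.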
the strategy for proving the hydrodynamic limit in was based on relative entropy methods introduced by yau and varadhan ; see also and for more references and details .there one begins by assuming that the initial state is close to a local thermal equilibrium ( lte ) state , which allows for unique definition of the initial profiles of the hydrodynamic fields .one considers the relative entropy _ density _( i.e. , entropy divided by the volume ) of the state evolved up to time with respect to a local equilibrium state constructed from the hydrodynamic fields at time , and the goal is to show that the entropy density approaches zero in the infinite volume limit . in the present work we improve on the result proven in in two ways .we prove that it is not necessary to assume that the initial state is close to an lte state ; indeed , our main theorem is applicable for arbitrary deterministic initial data , including those in which just one of the sites carries energy . instead , we allow for an initial thermalization period infinitesimal on the diffusive time scale and only after the period has passed is the evolution of the temperature profile shown to follow a continuum heat equation . in particle systemsdirectly coupled to diffusion processes similar results have been obtained before : for instance , in the references a hydrodynamic limit is proven assuming only convergence of initial data .however , as discussed in remarks 3.5 of , even assuming a convergence restricts the choice of initial data but we do not need to do it here .secondly , we show that the fourier s law has a version involving a lattice diffusion kernel for which the temperature profile is well approximated by its macroscopic value at every lattice site and at every time after the themalization period .this improves on the standard estimates which only imply that the _ macroscopic averages _ of the two profiles coincide in the limit . as shown in , it is sometimes possible to use averaging over smaller regions , of diameter , , but it is not easy to see how relative entropy alone could be used to control local microscopic properties of the solution .the precise statements and assumptions for our main results are given in theorem [ th : maintprof ] and corollary [ th : maincoroll ] in sec .[ sec : tprofile ] .however , in two respects our result is less informative than the one in .firstly , we only describe the evolution of the _ average _ temperature profile , whereas the relative entropy methods produce statements which describe the hydrodynamic limit profiles _ in probability_. secondly , we do not prove here that the full statistics can be locally approximated by equilibrium measures , although the estimate for the temperature profile does indicate that this should be the case .the thermalization of the other degrees of freedom is only briefly discussed in sec .[ sec : lte ] where we introduce a local version of the dynamic replica method .the paper is organized as follows : we recall the definition of the velocity flip model in sec .[ sec : model ] and introduce various related notations there . the first version of the new tool , called _ global dynamic replica method _, is described in sec .[ sec : global ] where we also discuss how it might be applied to prove global equilibration for this model .this discussion , as well as the one involving the _ local dynamic replica method _ in sec . 
[ sec : lte ] , is not completely mathematically rigorous , for instance , due to missing regularity assumptions .we have also included a discussion about applications of the dynamic replica method to other models in sec .[ sec : anharm ] . to illustrate the expected differences to the present case, we briefly summarize there the changes occurring when the velocity flips are replaced by an anharmonic onsite potential .the main mathematical content is contained in sec .[ sec : tprofile ] where the global replica equations are applied to provide a rigorous analysis of the time evolution of the kinetic temperature profile , with the above mentioned theorem [ th : maintprof ] and corollary [ th : maincoroll ] as the main goals of the section .we have included some related but previously known material in two appendices .appendix [ sec : applocal ] concerns the explicit solution of the local dynamic semigroup , and in appendix [ sec : diffkernel ] we derive the main properties of the green s function solution of the renewal equation describing the evolution of the temperature profile .mainly for notational simplicity , we consider here only one - dimensional periodic crystals , i.e. , particles on a circle . for particleswe parametrize the sites on the circle by then always and if .in addition , for odd , we have {}\right.\!\right\ } } ] for . also , we use to denote ] is the dual lattice and for we set the formula holds in fact for all , in the sense that the right hand side is then equal to , i.e. , it coincides with the periodic extension of .the inverse transform is given by where we use the convenient shorthand notation with the above conventions , for any we have where is a `` discrete -function '' on , defined by here , and in the following , denotes the generic characteristic function : if the condition is true , and otherwise .we assume all particles to have the same mass , and choose units in which the mass is equal to one .the linear forces on the circle are then generated by the hamiltonian ) = \frac{1}{2 } x^t \mathcal{g}_l x\ , , \\ & \mathcal{g}_l : = \begin{pmatrix } \phi_l & 0 \\ 0 & 1 \end{pmatrix } \in { { \mathbb r}}^{(2 \lambda_l)\times(2 \lambda_l ) } \ , , \end{aligned}\ ] ] on the phase space .the canonical pair of variables for the site are the position , and the momentum .the hamiltonian evolution is combined with a velocity - flip noise .the resulting system can be identified with a markov process and the process generates a feller semigroup on the space of observables vanishing at infinity , see for mathematical details .then for and any in the domain of the generator of the feller process the expectation values of satisfy an evolution equation we consider the time evolution of the moment generating function where belongs to some fixed neighborhood of .although the observable does not vanish at infinity , is always well defined , and we assume that it satisfies the evolution equation dictated by ( [ eq : mainevoleq ] ) . this will require some additional constraints on the distribution of initial data , but for instance it should suffice that all second moments of are finite .( the existence of initial second moments will also be an assumption for our main theorem . ) ultimately , the goal is to prove thermalization , i.e. , the appearance of local thermal equilibrium .more precisely , we would like to prove that after a thermalization period the local restrictions of the generating functional are well approximated by mixtures of equilibrium expectations . 
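Before moving on to the equilibrium states, a small self-check of the lattice Fourier conventions used throughout this section may be helpful. The normalization below (forward sum without prefactor, inverse carrying the 1/L factor, dual wavenumbers 2*pi*n/L) is an assumption, since the explicit formulas did not survive extraction; the sketch only verifies the inversion formula and the "discrete delta-function" identity for this choice.

import numpy as np

L = 7
x = np.arange(L)
k = 2.0 * np.pi * np.arange(L) / L        # dual lattice (assumed convention)

def ft(f):        # fhat(k) = sum_x exp(-i k x) f(x)
    return np.array([np.sum(np.exp(-1j * kk * x) * f) for kk in k])

def ift(fhat):    # f(x) = (1/L) sum_k exp(i k x) fhat(k)
    return np.array([np.sum(np.exp(1j * k * xx) * fhat) for xx in x]) / L

f = np.random.default_rng(1).normal(size=L)
assert np.allclose(ift(ft(f)), f)         # inversion formula

# discrete delta-function: (1/L) sum_x exp(i (k - k') x) = 1 if k = k', else 0
delta = np.array([[np.mean(np.exp(1j * (ka - kb) * x)) for kb in k] for ka in k])
assert np.allclose(delta, np.eye(L), atol=1e-12)
print("lattice Fourier conventions verified for L =", L)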
to do such a comparison ,the first step is to classify the generating functions of equilibrium states .we start with a heuristic argument based on ergodicity which gives a particularly appealing formulation for the present case in which the canonical gibbs states are gaussian measures .suppose that the evolution of our finite system is ergodic , with energy as the only ergodic variable .then for any invariant measure there is a borel probability measure on such that for any we have where denotes the microcanonical partition function .( details about mathematical ergodic theory can be found for instance from . ) hence , if is an initial state which converges towards a steady state , we have for any observable applying this for , continuous with a compact support , we find by conservation of that .therefore , we can formally identify .finally , we can rewrite the somewhat unwieldy microcanonical expectations in terms of the canonical gaussian measures by using the representation which should be valid for all sufficiently nice and . applying this representation to yields , where the canonical partition function is .hence , we arrive at the conjecture that for all sufficiently nice initial data by a change of variables to and using the fact that , the limit function can also be represented in the form where the integral is taken around a circle in the right half of the complex plane and is a complex measure satisfying . for any fixed initial data with energy , there is a natural choice for the parameter as the unique solution to the equation .this choice coincides with the unique saddle point for on the positive real axis , i.e. , it is the only for which with . then also and hence the integration path in ( [ eq : ftlim ] ) follows the path of steepest descent through the saddle point .as the energy variance typically is proportional to the volume , the integrand should be concentrated to the real axis , with a standard deviation .hence , for fixed initial data and large we would expect to have here equivalence of ensembles in the form .in order to treat the local dynamics independently , we replicate the whole chain at each lattice site , and transform the evolution equation into a new form by selecting some terms to act on the replicated direction .we use a generating function with variables , where each controls the random variable , and we think of as the original site and as the position in its `` replica '' .explicitly , we study the dynamics of the generating function where the mean is taken over the distribution of at time for some given initial distribution of .clearly , depends on only via the combinations , .if is known , the local statistics at for some given time can be obtained directly from its restriction ) ] for .we assume but otherwise it can be chosen independently of . the parameter determines which neighboring particles are chosen to belong to the same `` local '' neighborhood . 
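Before turning to the evolution equation for the replicated generating function, the statement above that the canonical Gibbs states of the pinned harmonic chain are Gaussian can be illustrated numerically: sampling the measure proportional to exp(-beta H), with H = (1/2) X^T G X and G = diag(Phi, 1) as defined earlier, gives E[p_x^2] = 1/beta at every site and a mean energy of 1/beta per site. The nearest-neighbour form of Phi, the pinning strength and the value of beta below are illustrative choices, not quantities fixed in the text.

import numpy as np

rng = np.random.default_rng(2)
L, beta, a, pin = 16, 2.0, 1.0, 0.5        # illustrative values

Phi = (2.0 * a + pin) * np.eye(L)           # nearest-neighbour coupling with pinning
for x in range(L):
    Phi[x, (x + 1) % L] -= a
    Phi[x, (x - 1) % L] -= a

nsamp = 50000
q = rng.multivariate_normal(np.zeros(L), np.linalg.inv(beta * Phi), size=nsamp)
p = rng.normal(scale=1.0 / np.sqrt(beta), size=(nsamp, L))

e_per_site = np.mean(0.5 * np.sum(p ** 2, axis=1)
                     + 0.5 * np.einsum('si,ij,sj->s', q, Phi, q)) / L
print("1/beta                :", 1.0 / beta)
print("sample mean of p_x^2  :", np.mean(p ** 2))
print("mean energy per site  :", e_per_site)

Both printed estimates should agree with 1/beta up to sampling error, consistent with identifying the kinetic temperature with the canonical temperature in the equivalence-of-ensembles argument above.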
by ( [ eq : mainevoleq ] ) , the generating function satisfies the evolution equation where we use the random variable andhave defined for = x_0\ , , \\\zeta_{xy}^i\ , , & \text{otherwise}\ , .\end{cases}\end{aligned}\ ] ] the equation can be closed by using the identity there is some arbitrariness in the resulting equation : ( [ eq : tozetader ] ) is true for all , but the right hand side depends only on .we choose here to use it as indicated by the choice of summation variables in ( [ eq : htevol1 ] ) .since in the summand always this results in the evolution equation where denotes the standard gradient , i.e. , it is a vector whose -component is , and since , this can also be written as for any resulting from the replication procedure , we obviously have whenever . hence , by taylor expansion with remainder up to second order now for any continuously differentiable function setting and thus yields the `` duhamel formula '' in this formula , the replica dynamics has been exponentiated in the operator semigroup .no approximations have been made in the derivation of the formula , but to show that its solutions , under some natural assumptions , are unique and correspond to lte states does not look straightforward . we do not attempt to do it here . instead , the formula will be used in the next section to derive a closed evolution equation for the temperature profile .we conclude the section by showing that eq .( [ eq : htiter ] ) is consistent with the discussion in sec .[ sec : model ] .suppose is a complex bounded measure on , for some . then , with , solves ( [ eq : htiter ] ) . to see this , first note that the first two terms in the integrand cancel , since and thus .therefore , the value of the integral is equal to .here , with where in the second equality we have used the periodicity of . however , then , and thus . hence , the functions of defined by setting on the right hand side of ( [ eq : ftlim ] ) are solutions to the equation ( [ eq : htiter ] ) . to check that energy is the only ergodic variable one would need to prove that there are no other time - independent solutions .we postpone the analysis of this question to a future work , although by the results proven in it would seem to be a plausible conjecture .since the `` replicated '' generating function satisfies ( [ eq : htiter ] ) , a direct differentiation results in an evolution equation for the kinetic temperature profile .we obtain where each is a symmetric matrix obtained by a periodic translation of the matrix of the initial second moments .the final sum can be transformed into a standard convolution form by changing the summation variable .this yields where for , , the `` source term '' and the `` memory kernel '' are given by with the following shorthand we will prove later that and that .hence , mathematically the equation ( [ eq : tevoleq ] ) has the structure of a generalized _ renewal equation_. renewal equations have bounded solutions in great generality ( * ? ? 
?* theorem 9.15 ) .the problem is closely connected to tauberian theory ; the classical paper by karlin contains a discussion and detailed analysis of the standard case .unfortunately , most of these results are not of direct use here since they do not give estimates for the speed of convergence towards the asymptotic value and thus can not be used for estimating the -dependence of the asymptotics .nevertheless , the standard methods can be applied to an extent also in the present case : in appendix [ sec : diffkernel ] we give the details for the existence and uniqueness of solutions to ( [ eq : tevoleq ] ) and derive an explicit representation of the solutions using laplace transforms .the analysis relies on upper and lower bounds for the tail behavior of .these follow from explicit formulae for the solutions of the semigroup derived in appendix [ sec : applocal ] .in particular , we have for any where in principle , the formulae should only be used if which implies .however , they also hold for all other values of if extended using the following `` analytic continuation '' : if , we set and the values for case agree with the limit .since we consider here the case in which the noise dominates , only the expressions in ( [ eq : aiexp ] ) will be needed in the following .we begin by summarizing the regularity assumptions about the free evolution , already discussed in sec .[ sec : model ] . without additional effort, we can relax the assumption of having a finite support to mere exponential decay .there are then several possibilities for fixing the finite volume dynamics ; here , we set for . then ( [ eq : aiexp ] ) is still pointwise valid for the fourier transform of the semigroup generated by .the assumptions imply that the fourier transform of can be extended to an analytic map on the strip , and the extension is -periodic . by continuity and periodicity of , we can then find such that on the strip .therefore , the infinite volume dispersion relation has the following regularity properties : assume satisfies the assumptions in [ th : phiassump ] .then , , defines a smooth function on which satisfies and for all .in addition , there is such that has an analytic , -periodic continuation to the region . from now onwe assume that satisfies assumption [ th : phiassump ] and is some fixed flipping rate .this already fixes the functions defined above .however , here we aim at convenient exponential bounds for the errors from diffusive evolution of the temperature profile .this requires to rule out resonant behavior , which can be achieved if the noise flipping rate is high enough and the dispersion relation satisfies a certain integral bound excluding degenerate behavior .explicitly , we only consider and satisfying the following : the nondegeneracy condition is satisfied by the nearest neighbor interactions , for which with .this can be proven for instance by relying on the estimate valid for all and large enough .however , the condition fails for the degenerate next - to - nearest neighbor coupling which skips over the nearest neighbors : then and thus for we have for all , hence the integral in ( [ eq : nondegcond ] ) evaluates to zero at . instead of including a formal proof of these statements ,we have depicted the values of the above integrals for one choice of parameters in fig .[ fig : degintplots ] . 
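The integrand of the nondegeneracy condition (eq. [eq:nondegcond]) was not preserved above, so the sketch below does not evaluate it; it only illustrates, under the reading suggested by the surrounding text, why the degenerate next-to-nearest-neighbour coupling fails the condition: the chain then splits into two non-interacting sublattices and the dispersion relation omega(k) = sqrt(phihat(k) + pinning) becomes pi-periodic, omega(k + pi) = omega(k). The pinning value is an illustrative choice.

import numpy as np

pin = 1.0
k = np.linspace(-np.pi, np.pi, 2001)

def omega_nn(k):    # standard nearest-neighbour coupling
    return np.sqrt(2.0 * (1.0 - np.cos(k)) + pin)

def omega_nnn(k):   # degenerate coupling that skips the nearest neighbours
    return np.sqrt(2.0 * (1.0 - np.cos(2.0 * k)) + pin)

print("nearest neighbour: max |omega(k+pi) - omega(k)| =",
      np.max(np.abs(omega_nn(k + np.pi) - omega_nn(k))))
print("degenerate       : max |omega(k+pi) - omega(k)| =",
      np.max(np.abs(omega_nnn(k + np.pi) - omega_nnn(k))))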
defined by the left hand side of the nondegeneracy condition in ( [ eq : nondegcond ] ) for and two different dispersion relations .the left plot is for the standard nearest neighbor case , , while the right one depicts a degenerate next - to - nearest neighbor case , with .the plots have been generated by numerical integration using mathematica.[fig : degintplots],title="fig:",scaledwidth=48.0% ] defined by the left hand side of the nondegeneracy condition in ( [ eq : nondegcond ] ) for and two different dispersion relations .the left plot is for the standard nearest neighbor case , , while the right one depicts a degenerate next - to - nearest neighbor case , with .the plots have been generated by numerical integration using mathematica.[fig : degintplots],title="fig:",scaledwidth=48.0% ] [ th : maintprof ] suppose and satisfy the conditions in assumption [ th : gammaassump ] .then there is such that equation ( [ eq : tevoleq ] ) has a unique continuous solution for every whenever all second moments of the initial field exist .let , independent of the initial data and of , such that for this solution for all and .in addition , we can choose and define and , , so that for all and where the operator is defined by both and in the statement have explicit definitions which can be found in the beginning of the proof of the theorem . to summarize in words ,the first of the bounds implies that the temperature profile equilibrates , and the relative error is exponentially decaying on the diffusive time scale , i.e. , as becomes large .the second statement says that solving the `` lattice diffusion equation '' with initial data provides an approximation to the temperature profile which is accurate even before the diffusive time scale , for .a closer inspection of the proof of the theorem reveals that the main contribution to the error bound given in ( [ eq : tlatticediff ] ) comes from `` memory effects '' of the original time evolution .these corrections can estimated using a bound which for large and behaves as . the rest of the factors can be uniformly bounded using the total energy , resulting in the bound in ( [ eq : tlatticediff ] ) .we do not know if the bound is optimal , although this could well be true for generic initial data .the worst case scenario for thermalization should be given by initial data in which all energy is localized to one site .it would thus be of interest to study the solution of ( [ eq : tevoleq ] ) with initial data and in more detail to settle the issue . for nearest neighbor interactions with and using for .the horizontal line depicts the corresponding predicted infinite volume value , as explained in the text.[fig : kappalim],title="fig:",scaledwidth=70.0% ] the following corollary makes the connection to diffusion more explicit . its physical motivation is to show that the fourier s law can here be used to predict results from temperature measurements , as soon as these are not sensitive to the lattice structure .explicitly , we assume that the measurement device detects only the cumulative effect of thermal movement of the particles , say via thermal radiation , and thus can only measure a smeared temperature profile .the smearing is assumed to be linear and given by a convolution with some fixed function which for convenience we assume to be smooth and rapidly decaying , i.e. 
, that it should belong to the schwartz space .the corollary implies that then for large systems it is possible to obtain excellent predictions for future measurements of the temperature profile by first waiting a time , measuring the temperature profile , and then using the profile as initial data for the time - dependent fourier s law .the diffusion constant of the fourier s law depends on the harmonic dynamics and is given by ( [ eq : defkappa ] ) below .we prove later , in corollary [ th : ptxcoroll ] and proposition [ th : greenprop ] , that the constant remains uniformly bounded away from and infinity .hence , the present assumptions are sufficient to guarantee normal heat conduction .we have also computed the values of numerically for a nearest neighborhood interaction and plotted these in fig .[ fig : kappalim ] .the results indicate that the limit exists and agrees with which is the value obtained in previous works on this model .[ th : maincoroll ] suppose the assumptions of theorem [ th : maintprof ] hold , , and all second moments of the initial field exist .let denote the corresponding solution to ( [ eq : tevoleq ] ) and the energy density .for any kernel function and initialization time , define the corresponding observed temperature profile by let the predicted temperature profile be defined as the solution of the diffusion equation on the circle with initial data , i.e. , let be the unique solution to the cauchy problem where and , and the diffusion constant is defined by then there is a constant , independent of the initial data and of , and , such that for all and . in particular , if is a `` macroscopic averaging kernel '' and satisfies additionally , and for all , then for the same as above the rest of this section is used for proving the above statements .however , as the arguments get somewhat technical and will not be used in the remaining sections , it is possible to skip over the details in the first reading .we begin with a lemma collecting the main consequences of our assumptions .[ th : amainlemma ] suppose and satisfy the conditions in assumption [ th : gammaassump ] .use ( [ eq : aiexp ] ) to define for , and set . then we can find constants , , such that 1 .the functions , , belong to .[ it : abounds ] and for every , and .[ it : alowerb ] for all and .[ it : atildeb ] the functions , , satisfy for all , , .the first item follows straightforwardly from the definitions . as in the statement , set and recall the functions and defined in ( [ eq : defmupm ] ) . since , we have for all and thus a direct computation shows that for the first two upper bounds in item [ it : abounds ] can be found .the bound for follows similarly , using the estimate and possibly increasing to accommodate the extra factors resulting from taking the derivative , such as .the lower bound in item [ it : alowerb ] is a direct consequence of the identity where and .( we may define , for instance , and . )all of the maps can be represented as a composition of a function analytic on and the function ( note that ) .then , by assumption , for real , and there is a strip on which is an analytic , -periodic function .therefore , we can find such that is an analytic , -periodic continuation of to a neighborhood of ] to +{{\rm i}}\operatorname*{sign}(x ) { \varepsilon}_0 ] , such that and then the goal is to use cauchy s theorem to move the integration contour in ( [ eq : defakernel ] ) to the left half - plane , in which case the factor produces exponential decay in time . 
to do this , it is crucial to study the zeroes of since these will correspond to poles of the integrand determining the dominant modes of decay . for notational simplicity ,let us for the moment consider some fixed and set .as proven in appendix [ sec : diffkernel ] , if and thus is an analytic function for which has no zeroes in the right half plane .it turns out that under the present assumptions , in particular , when the nondegeneracy condition in assumption [ th : gammaassump ] holds , only the case with small and will be relevant , and we begin by considering that case .suppose first that with , and , with also allowed .the derivatives of can be computed by differentiating the defining integrand .therefore , the :th derivative of is equal to .thus by item [ it : rhotbounds ] in corollary [ th : ptxcoroll ] , for any , we have here , the second bound can be derived for instance from the representation where in the exponent for any the real part is bounded by . since , we also have where we have used that , for any , and applied the bounds in corollary [ th : ptxcoroll ] . here the constants and are strictly positive and independent of .therefore , so is and we can conclude that whenever , we have .consider then the case , with defined above .let be given such that and suppose that satisfies . then and by ( [ eq : fshift1 ] ) we have . hence , .we set which is -independent and strictly positive , and conclude that then we have a lower bound for all . on the other hand ,if with , then the identity implies a bound suppose that is a zero of in the closed ball of radius .since , then has multiplicity one .also , by ( [ eq : implicitbnd ] ) , we have for all , and thus there can then be no other zeros of in the ball .since and for all , we can also conclude that then necessarily with .therefore , if , with , and is real and satisfies , we may always use the estimate .consider then the case in which there are no zeros of in the closed ball of radius .the map is continuous , it maps real values to real values , and .hence now for all .we apply ( [ eq : implicitbnd ] ) with to conclude that for all with therefore , we can then conclude that whenever with and is real and satisfies .the above estimates are sufficient to control the -neighborhood of zero for small .coming back to general we next study the properties of on the imaginary axis , for with . using item [ it : ptxnormalization ] in corollary[ th : ptxcoroll ] and the notations introduced in the proof of the corollary shows that since , there is such that . to derive lower bounds for ( [ eq : refia ] ) , it suffices to consider the case in which , since then for we can use the symmetry of cosine and apply the bounds derived for the case where the signs of and are reversed .consider first the case in which is even .then there is such that and . in this case , , the fourier - transform of equals and thus by using in ( [ eq : refia ] ) yields where in the last step we used the fact that are real . applying the lower bound in item [ it : alowerb ] of lemma [ th : amainlemma ] thus proves that for instance by representing the cosine as a sum of two exponential terms , we find that if .therefore , now latexmath:[\ ] ] which is bounded by where .if , we similarly obtain from the definition a bound . therefore , item [ it : ftp2 ] holds with .the above estimates imply a bound for the integrand in ( [ eq : defgreen ] ) . 
thus it is absolutely integrable , for any .in addition , the integrand is analytic on the whole right half plane , and hence cauchy s theorem allows to conclude that its value does not depend on the choice of .we fix some value and then define by ( [ eq : defgreen ] ) for all , . relying on dominated convergence ,we conclude that then is continuous .also , the above bounds imply that the value of the integral in ( [ eq : defgreen ] ) is zero for any since in that case cauchy s theorem allows taking . in particular , then and thus ( [ eq : arenewal ] ) holds at . in order to check that satisfies ( [ eq : arenewal ] )also for , we rely on the following computation , whose steps can be justified by using fubini s theorem and the above observation about the vanishing of the integral for negative : since the final integral is equal to , we have proven that ( [ eq : arenewal ] ) holds for all and .hence , the above function provides a continuous solution to ( [ eq : arenewal ] ) .let us next prove that this solution is unique . consider an arbitrary and the banach space \times \lambda_l,{{\mathbb c\hspace{0.05 ex}}}) ] .then we have and is equal to , the restriction of to $ ] .thus necessarily .hence for all we have , and as can be taken arbitrarily large , this proves the uniqueness of the solution . since is obviouslypositivity preserving and is nonnegative , this also implies that the unique solution , coinciding with defined in ( [ eq : defgreen ] ) , is pointwise nonnegative .as is assumed to be continuously differentiable , the continuous solution to ( [ eq : arenewal ] ) is that , as well .this concludes the proof of the proposition .[ th : renewcorr ] suppose that and are given as in proposition [ th : renewalprop ] . then for any the formula defines the unique continuous solution to the equation since is continuous , dominated convergence theorem immediately implies that ( [ eq : defgensol ] ) defines a continuous function .then a straightforward application of fubini s theorem and ( [ eq : arenewal ] ) proves that ( [ eq : genrenew ] ) holds for all .the uniqueness can be proven via the same argument which was used in the proof of proposition [ th : renewalprop ] .the research of j. lukkarinen was partially supported by the academy of finland and partially by the european research council ( erc ) advanced investigator grant 227772 .i thank franois huveneers , wojciech de roeck , and herbert spohn for discussions and suggestions and janne junnila for an introduction to mathematical ergodic theory .i am also grateful to the anonymous reviewer for pointing out additional references and improvements of representation .f. bonetto , j. l. lebowitz , and l. rey - bellet , _fourier s law : a challenge to theorists_. in a. fokas , a. grigoryan , t. kibble , and b. zegarlinski ( eds . ) , _ mathematical physics 2000 _ , pp . 128150 , london , 2000 . imperial college press .m. simon , _ hydrodynamic limit for the velocity - flip model _ , stochastic process .* 123 * ( 2013 ) 36233662 .yau , _ relative entropy and hydrodynamics of ginzburg - landau models _ , lett .* 22 * ( 1991 ) 6380 .s. r. s. varadhan , _ nonlinear diffusion limit for a system with nearest neighbor interactions ii_. in k. d. elworthy and n. ikeda ( eds . ) , _ asymptotic problems in probability theory : stochastic models and diffusions on fractals _ , pp .longman scientific & technical , 1993 .
|
We propose a new mathematical tool for the study of transport properties of models for lattice vibrations in crystalline solids. By replication of dynamical degrees of freedom, we aim at a new dynamical system where the "local" dynamics can be isolated and solved independently from the "global" evolution. The replication procedure is very generic but not unique, as it depends on how the original dynamics are split between the local and global dynamics. As an explicit example, we apply the scheme to study thermalization of the pinned harmonic chain with velocity flips. We improve on the previous results about this system by showing that after a relatively short time period the average kinetic temperature profile satisfies the dynamic Fourier's law in a local microscopic sense, without assuming that the initial data is close to a local equilibrium state. The bounds derived here prove that the above thermalization period is at most of the order , where denotes the number of particles in the chain. In particular, even before the diffusive time scale Fourier's law becomes a valid approximation of the evolution of the kinetic temperature profile. As a second application of the dynamic replica method, we also briefly discuss replacing the velocity flips by an anharmonic onsite potential.

_Dedicated to Herbert Spohn, with sincere gratitude for his support, inspiration and insight._
|
many large - scale density - functional theory ( dft ) methods make use of localized basis functions , since locality in some form is an essential prerequisite for developing a method that scales linearly with system size . in particular , the use of strictly localized orbitals results in the hamiltonian being formally sparse , without needing to impose a cutoff tolerance on the matrix elements .numerical atomic orbitals ( naos ) of finite range have been found to be particularly well - suited for dft calculations , due to the fact that they are very flexible , and , therefore , only a small number of them is usually needed to obtain accurate results .the transferability of the nao basis between different systems has also been found to be quite reasonable .the most important drawback of using naos is the lack of a systematic way of improving the basis to arbitrary precision , such as what can easily be achieved in plane - wave ( pw ) methods by increasing the kinetic energy cutoff , or in real - space - grid methods by decreasing the grid spacing .this is an important shortcoming of the method , since it means that it is not possible to assess the accuracy of the calculation with respect to the basis set with any degree of certainty .in contrast , it is standard practice in studies using pw methods to perform preliminary convergence tests for the kinetic energy cutoff on representative samples of the system in question , in order to tune the cutoff to be used in the production calculations to a given degree of accuracy . in this paper, we aim to demonstrate best practice in the generation of transferable nao basis sets of increasing accuracy for ordered and disordered bulk material , and to show how it is possible to determine with confidence the level of accuracy of the basis by careful comparison with pw calculations . as a test case , we focus on water , in the liquid and two solid phases .water is a material of fundamental importance in many different areas of science , and is currently of great interest to practitioners of _ ab initio _ techniques , since the subtle interplay between different interaction mechanisms that gives rise to its unusual properties ( e.g. , its notoriously complex phase diagram , and the large number of anomalies in the liquid phase ) is still not well understood ( see , e.g. , refs . ) . however , _ab initio _ molecular dynamics ( aimd ) studies using pw methods have been limited by the small system sizes and simulations times that are accessible due to computational cost , while the accuracy of existing nao basis sets for such systems has never been systematically verified , as we propose here .using the water monomer as our starting point , we construct double- , triple- , and quadruple- basis sets for hydrogen and oxygen . 
we make use of a previously - documented confinement potential for the valence orbitals , and propose a simple new form for the polarization orbitals .we find this new confinement scheme to give a good control over the overall shape of the polarization orbitals by varying a single parameter , making it ideal for variational optimization .we test our basis sets on the water dimer , ice i and viii , and liquid water , showing our most accurate bases to achieve the level of precision of high - quality pw calculations for energies , pressures , and forces , and all bases to be highly transferable between systems .therefore , our basis sets can be used with confidence for ambitious future studies of aqueous systems , as they enable cost - efficient large - scale simulations of high accuracy .we use the siesta code for generating the nao bases and performing the dft calculations with them , and the abinit code for the comparison calculations using pw bases . for both codeswe use the pbe semi - local ( gga ) exchange - correlation functional , and the same set of norm - conserving pseudopotentials in troullier - martins form .the pseudopotential core radius is 0.66 for all angular momentum channels of h and 0.61 for all channels of o ; we also employ a small non - linear core correction for o. the pseudopotentials are factorized in the separable kleinman - bylander form , with the same local and non - local components used in both codes .the naos making up our basis sets are composed of a freely - varying radial function multiplied by a spherical harmonic ; the radial part is defined numerically and is strictly zero beyond a cutoff radius .the siesta method for generating nao bases has been documented in several previous publications .the main ideas are : * a soft confinement potential of the form between an inner radius and ; * a scheme for obtaining multiple- for each orbital , inspired by the split valence method used in quantum chemistry for gaussian basis sets ; * polarization orbitals obtained either by applying a small electric field within perturbation theory , or by adding unoccupied atomic shells of higher with soft confinement . using these schemes, it is possible to achieve significant optimization of a fixed - size basis by varying the free parameters for each shell : , and , and the matching radii which define the multiple- orbitals . increasing the orbital cutoff radiusgenerally provides a better quality basis , but also increases its computational cost .two schemes have been proposed for achieving an optimal balance between accuracy and cost based on a single parameter : the orbital energy shift in the isolated atom caused by confinement , and a fictitious ` basis enthalpy ' calculated using the orbital volume and a pressure - like variable . for the valence shells ,soft confinement has proven to be very satisfactory : not only does it remove the problematic derivative discontinuity introduced by hard - walled confinement , but it also performs better at a given from a variational point of view .this can be attributed to the fact that it has little effect on the shape of the orbital in the core region , resulting in a good agreement with the free atom orbital . 
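Since the explicit expression for the soft confinement potential did not survive above, the sketch below uses the form usually quoted for siesta (Junquera et al.), V(r) = V0 exp[-(rc - ri)/(r - ri)]/(rc - r) for ri < r < rc, which vanishes with all derivatives at the inner radius ri and diverges at the cutoff rc; treat both the expression and the parameter values as an illustration of the qualitative shape rather than the exact definition used here.

import numpy as np

def v_soft(r, v0=40.0, ri=3.0, rc=4.5):
    # Soft confinement (assumed Junquera-type form): zero inside ri,
    # smooth at ri, diverging at the cutoff rc.  v0 in Ry, radii in Bohr,
    # all values illustrative placeholders.
    r = np.asarray(r, dtype=float)
    v = np.zeros_like(r)
    inside = (r > ri) & (r < rc)
    v[inside] = v0 * np.exp(-(rc - ri) / (r[inside] - ri)) / (rc - r[inside])
    v[r >= rc] = np.inf
    return v

r = np.linspace(2.5, 4.45, 9)
for rr, vv in zip(r, v_soft(r)):
    print(f"r = {rr:5.2f} Bohr    V = {vv:9.4f} Ry")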
instead , the choice of polarization orbitals is more problematic , as the free atom orbitals of higher can become quite extended , or even entirely unbound .the explicit polarization of the pseudoatom by a small electric field provides an elegant and parameter - free solution , as the extent of the polarization orbitals is defined by that of the orbitals they polarize .practical applications , however , have shown that a better variational estimate is instead usually obtained by including the unoccupied atomic shells of higher with an aggressive soft confinement , in order to control the position of the maximum of the radial part of the orbital .it is also important to note that the former method only allows for orbitals up to ( being the highest angular momentum for the valence shells ) , while the latter method can include shells of any . in this workwe propose a new confinement scheme for obtaining short - range polarization orbitals that exhibit a good agreement with the free atom orbitals in the core region .we use a yukawa - like screened coulomb potential of the form for which the main parameter for variational optimization is the strength rather than for shaping the orbital .default values of ry and can generally be used without further optimization . ] . is introduced to avoid numerical difficulties arising from the singularity at , and after some tests has been fixed to 0.01 a . can additionally be used for fine tuning of the orbital tail ( the default value is set to 0 , thus making eq .[ eq : q ] a normal coulomb potential ) .[ fig : q_conf ] shows the radial part of the orbitals obtained with our new coulomb confinement scheme for the 3d shell of o. its decay is intermediate between those obtained by -field polarization and soft confinement , and is similar to that of the free atom orbital . within a double- basis with a single polarization shell ( ) , the coulomb confinement scheme results in a variationally better basis for the water monomer and dimer respect to soft confinement ( by mev / molecule ) , as well as a smaller counterpoise ( cp ) correction ( by 5 mev ) .[ fig : q_confb ] shows a similar behaviour for the 3d shell of si ; in this case , however , the free atom orbital is unbound .as before , the radial part of the coulomb - confined d orbitals exhibits a slower decay than the soft - confined one , thereby achieving a greater overlap with that of the p orbitals they are polarizing , which gives the latter more flexibility . in bulk silicon , this results in coulomb confinement gaining mev / atom respect to soft confinement . making use of the coulomb confinement scheme for the polarization orbitals, we have developed a series of 13 basis sets of increasing accuracy for the water molecule , ranging from 23 to 91 basis orbitals / molecule . following the systematic convergence strategy proposed for correlation - consistent basis sets ,the maximum size of our bases depends on the number of orbitals used for the valence shells , decreasing by one for each additional polarization shell : , , , and so on .we also provide intermediary bases with various subsets of the full number of polarization orbitals indicated by this scheme .we limit , and so the largest basis we consider is . 
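The explicit formula for the screened Coulomb confinement of eq. (q) introduced above was likewise lost in extraction; a plausible reading, used here purely for illustration, is V_Q(r) = -Z_Q exp(-lambda r)/(r + delta), which reduces to an ordinary (regularized) Coulomb well for lambda = 0, with delta removing the singularity at the origin. The numerical values of Z_Q, lambda and delta below are placeholders rather than the defaults quoted in the text.

import numpy as np

def v_coulomb_conf(r, z_q=2.0, lam=0.0, delta=0.01):
    # Assumed Yukawa-like form: -z_q * exp(-lam * r) / (r + delta).
    # z_q sets the strength, lam tunes the tail, delta regularizes r = 0.
    r = np.asarray(r, dtype=float)
    return -z_q * np.exp(-lam * r) / (r + delta)

r = np.linspace(0.0, 5.0, 6)
print("lam = 0.0 :", np.round(v_coulomb_conf(r), 3))
print("lam = 1.0 :", np.round(v_coulomb_conf(r, lam=1.0), 3))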
for cases in which the number of orbitals in the polarization shells differs between o and h , we adopt the following naming convention :the highest level of is used , followed in brackets by the element for which it applies ; the other element in such cases has one fewer shell .for example , includes a double- polarization shell for h , and a single- polarization shell for o. we do not here consider the optimization of , but simply fix it to 4.5 for the double- and triple- bases , and to 5.3 for the quadruple- bases .we test the effect of varying the cutoff radius for the basis , by decreasing to 4.5 ( the shorter basis is denoted as ) ; the difference in energy is minimal ( table [ table : dimer_summary ] ) .other parameters , including and , are optimized variationally for the water monomer ( see supplementary material for full specifications of all basis sets used ) .we also include in our results two previously proposed bases , obtained using the basis enthalpy optimization procedure with a confining pressure of 0.2 gpa ; for these bases ( which we denote as and ) the value of varies between shells , up to a maximum of 3.3 for and 3.7 for .the many different options for defining the basis orbitals discussed and reviewed above are at the disposal of the user for generating good basis sets .it must be remembered , however , that there is no unique way of defining a basis , and that , as for pseudopotentials , the responsibility of the choice of basis lies with the person performing the calculations .methods like siesta are limited to using naos of finite range , but there is absolute freedom in the radial shapes , their range , the number of orbitals , the angular momentum values , and even the location ( which is not restricted to be centred on an atom ) . herewe present some guidelines for producing reasonable basis sets : * use as many basis functions as required for the accuracy needed , following the canonical hierarchy described above starting from the minimal single- basis : , , , , and so on . *avoid too large or too small orbital ranges . some procedures ,like the one based on an energy shift , or blind variational optimization based on a single reference can produce very short orbitals for light elements , inner orbitals , or cations .this severely limits the transferability of the basis .never use orbitals with radii smaller than 3 . on the other hand ,some procedures generate extremely large cutoff radii ( e.g. , in loosely bound states , as the 4s shell in transition metals ) .if they are used for bulk calculations , functions with radii larger than 5 can be quite useless ( there are sufficient other functions in neighbouring atoms ) but make the calculations substantially less efficient .siesta allows the user to establish both a minimum and a maximum cutoff radius globally .* use the soft confinement potential for smoothening the radial function close to only , with . * for generating a polarization shell ,use the coulomb confinement potential described above ( if not using the perturbative -field polarization option ) . 
( and , optionally , ) can be defined variationally on a set of representative simple systems , or by maximizing the overlap of the radial part of the polarization orbitals with the one of the shell to be polarized : .* finally , the matching radii for the multiple- orbitals can be defined variationally .we first show the convergence of the total energy for a single water molecule in a large box ( ) .we perform a series of pw calculations of increasing kinetic energy cutoff , up to a maximum of 5000 ev ( by which point the total energy is converged to mev ) . in fig[ fig : monomer ] , we show how our nao bases compare with the pw bases . since the same pseudopotential is used in both sets of calculations , the total energies can be compared directly .the nao bases show a smooth convergence with basis size , similar to that of pws ; it is tempting therefore to equate each nao basis with the pw cutoff giving the same total energy , despite the fact that the hilbert space spanned by them is very different .nevertheless , the nao bases undoubtably achieve a good level of variational convergence , with the basis reducing the error in the total energy to mev , as much as a 1460 ev pw cutoff .even our smallest basis , , is reasonably well converged : the total energy error is mev , equivalent to a 1080 ev cutoff .the error for all nao basis sets is listed in table [ table : dimer_summary ] .there is a fairly large energy difference between the and bases and our newly parametrized ones of the same size ( denoted simply and ) .this we attribute both to the use of coulomb confinement for the first polarization shell and the larger cutoff radii . in general , the convergence with basis size is almost monotomic ; the only exception is the basis , for which we add a set of diffuse orbitals of s and p symmetry onto the o ion in order to capture the swelling due to its anionic character .this basis adds four new orbitals onto , yet gains 6 mev more energy than , which adds six .such small energy differences , however , do not significantly alter the overall convergence behaviour . fig . [fig : dimer ] shows the pw convergence and values for three selected nao bases of the binding energy of the water dimer in a fixed geometry ( from supplementary material of ref .the results for all nao bases are given in table [ table : dimer_summary ] . in order to ensure that periodic image interactions are negligible, we use a box . here , the difference between nao and pw bases is clear , as the former converges from above and the latter from below .all our quadruple- bases give an error in within 23 mev of the converged pw result .for the double- and triple- bases , basis set superposition error ( bsse ) accounts for most of the total error .the addition of a cp correction term to brings all bases to within the same level of precision of the quadruple- ones ; only and are somewhat less precise ( with errors of 1013 mev ) , despite having a smaller cp correction due to their shorter range . 
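The counterpoise (Boys-Bernardi) bookkeeping behind the dimer numbers above is summarized in the sketch below: the monomers are recomputed in the full dimer basis by adding ghost orbitals at the partner sites, with the dimer geometry kept fixed. The total energies are placeholders, not the entries of the table; only the arithmetic is the point.

# all energies in eV; placeholder values
e_dimer          = -937.500   # dimer in the dimer basis
e_monomer        = -468.620   # isolated monomer in its own basis (dimer geometry)
e_monomer_ghost1 = -468.632   # monomer 1 plus ghost orbitals of monomer 2
e_monomer_ghost2 = -468.630   # monomer 2 plus ghost orbitals of monomer 1

e_bind    = 2.0 * e_monomer - e_dimer                     # positive = bound
cp_corr   = (e_monomer - e_monomer_ghost1) + (e_monomer - e_monomer_ghost2)
e_bind_cp = e_bind - cp_corr

print(f"uncorrected binding energy  : {1e3 * e_bind:7.1f} meV")
print(f"counterpoise correction     : {1e3 * cp_corr:7.1f} meV")
print(f"CP-corrected binding energy : {1e3 * e_bind_cp:7.1f} meV")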
basis set & basis size & ( mev ) & ( mev ) & ( mev ) & cp corr .( mev ) + & 23 & 378.61 & 266.38 & 241.16 & .22 + & 23 & 280.02 & 251.51 & 226.52 & .99 + & 29 & 217.98 & 249.42 & 239.89 & .53 + & 29 & 142.84 & 241.34 & 229.48 & .86 + & 35 & 102.14 & 232.01 & 226.80 & .21 + & 40 & 95.59 & 230.68 & 225.96 & .72 + & 57 & 57.64 & 228.68 & 223.65 & .03 + & 46 & 84.97 & 231.50 & 229.70 & .80 + & 46 & 83.98 & 231.28 & 229.56 & .72 + & 63 & 45.37 & 227.99 & 226.25 & .74 + & 67 & 38.43 & 229.26 & 227.16 & .10 + & 69 & 44.43 & 227.68 & 226.58 & .10 + & 79 & 37.71 & 228.01 & 226.54 & .47 + & 84 & 36.17 & 228.10 & 226.68 & .42 + & 91 & 32.64 & 228.63 & 227.12 & .51 + pw & - & 0.00 & 228.62 & - & - + [ table : dimer_summary ]we now turn to the first test of our bases in bulk water systems , by focussing on two of the ordered phases of ice : cubic ice i and high - density tetragonal ice viii .the study of ice by first principles dft methods has progressed considerably in recent years , but understanding in detail the relative energetic contributions that give rise to its phase diagram and the properties of the various phases remains a challenging problem of current interest . typically for such studies , it is desirable to calculate energy differences between configurations to within a few mev / molecule . to this end , we test the accuracy of the nao bases both in terms of the relaxation of the ionic positions and the final energy difference between the two phases under consideration . & & + phase & basis & & & & & & & pw + & nao & 295.93 & 197.35 & 148.34 & 107.68 & 68.70 & 25.29 & - + & pw & 0.78 & 2.73 & 2.11 & 0.99 & 0.55 & 0.11 & 0.00 + & nao & 259.76 & 166.58 & 161.32 & 111.92 & 73.89 & 26.86 & - + & pw & 1.49 & 2.88 & 1.95 & 1.04 & 0.53 & 0.17 & 0.00 + & nao & .58 & .99 & .73 & .99 & .95 & .32 & - + & pw & .46 & .90 & .59 & .80 & .73 & .82 & .75 + & ref . & - & - & - & - & - & - & .+ [ table : ice_summary ] for the calculations , we fix the volume / molecule to 30.51 for ice ic and 20.45 for ice viii , using the equilibrium volumes reported by murray and galli for pbe at ambient pressure , and the c / a ratio to 1.44 for ice viii .we use a monkhorst - pack ( mp ) k - point grid for both unit cells .first , we perform pw cutoff convergence tests for the two ice phases up to a 5000 ev cutoff ; we find that 4500 ev gives extremely accurate results ( within 0.1 mev for total energies , 0.5 mev / for ionic forces , and 0.05 kbar for pressures ) , and use this value for all pw results given in this section .we choose four representative nao basis sets for testing from those presented in the previous section ; additionally , the two old parametrizations for double- and triple- are also tested .the ice unit cells are relaxed with respect to the ionic positions for these six bases , and independently for pws , using a maximum force tolerance of 10 mev / . 
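The bookkeeping behind these comparisons is sketched below: the per-molecule energy difference between the two ice phases is computed for each geometry/basis combination, and the geometry error is the change in that difference when the reference PW calculation is evaluated on the NAO-relaxed instead of the PW-relaxed structures. All numbers are placeholders (a single cell size is used for both phases only for brevity), not the values reported in the table.

n_mol = 8                                                # molecules per cell, placeholder
e_nao         = {"Ic": -3749.20, "VIII": -3748.95}       # NAO total energies (eV), placeholder
e_pw_nao_geom = {"Ic": -3750.10, "VIII": -3749.86}       # PW at NAO-relaxed geometry
e_pw_pw_geom  = {"Ic": -3750.11, "VIII": -3749.87}       # PW at PW-relaxed geometry

def de_per_mol(e):
    # energy difference per molecule between ice VIII and ice Ic
    return (e["VIII"] - e["Ic"]) / n_mol

for label, e in [("NAO", e_nao), ("PW @ NAO geometry", e_pw_nao_geom),
                 ("PW @ PW geometry", e_pw_pw_geom)]:
    print(f"dE(VIII - Ic) per molecule, {label:18s}: {1e3 * de_per_mol(e):7.2f} meV")
print(f"geometry error in dE: "
      f"{1e3 * (de_per_mol(e_pw_nao_geom) - de_per_mol(e_pw_pw_geom)):.2f} meV")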
finally , we recalculate the energy of the system with pws using the six geometries obtained from the nao bases .all results are given in table [ table : ice_summary ] .the pw results for the different geometries show that all nao bases give sufficiently accurate ionic relaxations .the errors in the pw energy difference / molecule between the two phases using the nao - relaxed geometries , as compared with the pw - relaxed ones , are less than a mev in all cases , ranging from 0.71 mev for the geometries to only 0.02 mev for the ones .the errors in total energies are similarly small , less than 3 mev for all geometries .the accuracy of the relaxations is also confirmed by examining the bond lengths , with the largest errors found for the geometries ( up to 21 m ) , and the smallest for the ones ( up to 3 m ) .we now consider not only the geometries , but also the energies calculated with the nao bases .as should be expected , total energies / molecule of the ice phases have errors on the order of those reported in table [ table : dimer_summary ] for the total energy of the monomer ; however , it is interesting to note that they are systematically smaller , resulting from the fact that we are now considering a condensed phase , thus enabling the naos to represent the charge density everywhere in the unit cell .the most important results are for the energy differences / molecule between phases given by the nao calculations . gives a fairly substantial error ( mev ) , while both and are in good agreement with the converged pw result , with errors of mev .the most accurate basis , , succeeds in reducing the error to less than 2 mev . as with the dimer binding energy , we find our new bases to give a substantial improvement with respect to the old parametrizations , especially in this case for .assessing the accuracy of our bases for water in its liquid state is more challenging than for the solid state , as the calculation of any quantity of interest will involve a statistical average over a md trajectory . performing a full aimd simulation using a tightly converged pw cutoff would be computationally very expensive , and , furthermore , would not allow for a detailed comparison with the results obtained for the same simulation using one of the nao basis sets , as the trajectories would quickly decorrelate . here we employ a simple alternative method for testing basis sets for liquid systems , which can be used routinely to obtain accurate estimates of the errors in quantities such as total energies , energy differences , ionic forces and cell pressures , as well as for direct parametrization of the basis set using the same procedures proposed for fitting classical force fields from _ ab initio _ data .we first employ the gromacs code to perform two long 1 ns md runs of a box of 32 water molecules using the tip4p force field , at equilibrium density ( 1.00 g / cm ) and high density ( 1.20 g / cm ) .we select 100 random snapshots from each run , which we use as our testing set ; the long md run ensures that they are uncorrelated .we include the high density run to ensure that we are sampling sufficient configurations with occupied interstitial anti - tetrahedral sites between the first and second coordination shells , as the correct description of these configurations is important for reproducing several key properties of the liquid . finally , we perform single - point dft calculations of the 200 snapshots , using the various nao bases in siesta and pws at different cutoffs in abinit .
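the snapshot - based testing protocol just described can be organized with a few lines of scripting . the sketch below only illustrates the bookkeeping ( random selection of uncorrelated frames and the average nao - pw energy shift ) ; the md run and the single - point dft calculations are external , and the per - snapshot energies used here are random placeholders rather than actual siesta / abinit results .

```python
# sketch of the snapshot-selection and error bookkeeping for the liquid test set.
import random

def select_snapshots(n_frames, n_select=100, seed=0):
    """pick n_select frame indices at random from a long trajectory; for a
    ~1 ns run this makes the chosen configurations effectively uncorrelated."""
    rng = random.Random(seed)
    return sorted(rng.sample(range(n_frames), n_select))

if __name__ == "__main__":
    frames = select_snapshots(n_frames=500000, n_select=100)
    rng = random.Random(1)
    e_pw = [rng.gauss(0.0, 0.05) for _ in frames]              # placeholder, eV/molecule
    e_nao = [e + 0.12 + rng.gauss(0.0, 0.01) for e in e_pw]    # shifted + scattered
    shift = sum(a - b for a, b in zip(e_nao, e_pw)) / len(frames)
    print(f"average NAO - PW shift: {1000 * shift:.1f} meV/molecule")
```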
in this case we choose 2700 ev as our ` converged ' pw cutoff value for comparing to all other bases .cutoff convergence tests on a few snapshots up to 5000 ev show this value to give total energies to within 1 mev , ionic forces to within 0.6 mev / , and pressures to within 0.5 kbar .we perform our tests on five nao bases , , , , , and ; all results are given in table [ table : water_summary ] .firstly , we examine the error in total energies / molecule for all the snapshots . as expected , calculating the average shift in total energy for all snapshots gives a similar convergence to that discussed previously for the water monomer and the two ice phases . one point of interest is that the average shifts calculated for the two densities become closer to each other as the quality of the basis increases , anticipating the fact that pressure values also become more precise .the best quantitative estimator of overall accuracy for a fixed - volume nve aimd simulation is the energy difference between snapshots . for this , we calculate the root mean square ( rms ) error in our test set ; the results are shown in fig . [ fig : ed_conv ] , compared to pw calculations of increasing cutoff .there is a negligible difference between the two densities , with the , , and bases giving rms errors equivalent to pw cutoffs of 820 ev , 910 ev , and 980 ev , respectively .this is similar to the errors in the dimer binding energy before applying the cp correction .the scatter plots in figs . [ fig : ed_scatter ] and [ fig : ed2_scatter ] show these results in more detail , by comparing either a nao or a lower - cutoff pw basis with the converged pw basis , and plotting the energy difference between each pair of snapshots ; both densities are included in the plot .low - accuracy pw calculations give a large scatter of results , while all the nao bases considered give a tight clustering around the diagonal , as do the higher pw cutoffs .indeed , the nao bases show no spurious outliers at all , and , equally importantly , no obvious systematic trend away from the diagonal that might cause a significant difference in the region of configuration space explored during a md run . fig . [ fig : f_conv ] shows the rms error in ionic forces .the results are very similar to those already discussed for energy differences , with the three new nao bases giving errors equivalent to pw cutoffs of 830 ev , 960 ev , and 1000 ev , respectively .differences between o and h ions are negligible ( see table [ table : water_summary ] ) .the table also reports results for the rms angle between the forces obtained with the nao basis and the converged pw one ; these range from for the basis to for the one .it is noteworthy that the basis performs better than the newly parametrized version in terms of force magnitudes and angles ; this explains the relatively accurate relaxed geometries obtained for ice ( see table [ table : ice_summary ] ) .finally , we examine the accuracy of our bases for calculating cell pressures .it is well - known that pw calculations suffer in this respect from the fact that the kinetic energy cutoff effectively changes with an infinitesimal change in volume ( since the calculation of stresses is implicitly performed at constant pw number ) , leading to a spurious tensile stress .
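for concreteness , the two error estimators discussed in this section ( the rms error in energy differences between pairs of snapshots , and the rms angle between nao and reference pw forces ) can be written as short numpy routines . this is a hedged sketch : the array shapes and the synthetic test data are assumptions for illustration , not values from table [ table : water_summary ] .

```python
# sketch of the error estimators: RMS error in snapshot energy differences and
# RMS angle between force vectors.  assumed shapes: e_* -> (n_snap,),
# f_* -> (n_snap, n_atoms, 3); all data below are synthetic placeholders.
import numpy as np

def rms_energy_difference_error(e_test, e_ref):
    """RMS over all snapshot pairs of (dE_test - dE_ref); constant shifts cancel."""
    d_test = e_test[:, None] - e_test[None, :]
    d_ref = e_ref[:, None] - e_ref[None, :]
    iu = np.triu_indices(len(e_test), k=1)
    return np.sqrt(np.mean((d_test[iu] - d_ref[iu]) ** 2))

def rms_force_angle(f_test, f_ref):
    """RMS angle (degrees) between corresponding force vectors."""
    dots = np.sum(f_test * f_ref, axis=-1)
    norms = np.linalg.norm(f_test, axis=-1) * np.linalg.norm(f_ref, axis=-1)
    cos = np.clip(dots / norms, -1.0, 1.0)
    return np.degrees(np.sqrt(np.mean(np.arccos(cos) ** 2)))

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    e_ref = rng.normal(0.0, 0.05, 200)
    e_test = e_ref + 0.12 + rng.normal(0.0, 0.01, 200)   # shift + scatter
    f_ref = rng.normal(0.0, 1.0, (200, 96, 3))
    f_test = f_ref + rng.normal(0.0, 0.02, (200, 96, 3))
    print(rms_energy_difference_error(e_test, e_ref))
    print(rms_force_angle(f_test, f_ref))
```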
in order to eliminate this , the correction by meade and vanderbilt ( here referred to as mv ) is generally applied .the mv correction is given by where is the total energy , the cell volume , and the kinetic energy cutoff .in contrast , the only systematic error introduced by nao basis sets in the calculation of stresses is that from bsse . fig .[ fig : p_conv ] shows the average shift in pressure with respect to the converged calculations . even after applying the mv correction term , the pw calculations give very large shifts , on the order of 10 - 100 kbar .the nao shifts , instead , are of no more than 4 kbar ( for the basis ) .the pw convergence also varies significantly with density , which is not the case for the naos .we can use our results to define a simple and effective correction for cell pressures calculated with naos .this is illustrated in fig .[ fig : p_fit ] , showing a scatter plot of nao pressures with respect to the converged pw result ( note that we are plotting the error in the nao value against the absolute value from pws ) .two trends can be clearly observed : a constant shift to higher pressures ( i.e. , the same result as shown in fig .[ fig : p_conv ] ) , and an increase in the stiffness of the system ; both of these reduce with basis size , and are eliminated almost completely for the basis . therefore , by performing a linear fit to the data points for a given basis we obtain an expression for correcting pressure values in a large range .it is important to note that the figure includes data points at both densities , which show the same trend ; the correction can therefore also be used independently of density , at least within the range considered . the effect of our correction for nao calculations is given in table [ table : water_summary ] for average pressure shifts and rms errors in pressure differences .the correction reduces the average shift to very small values ( .5 kbar ) for all bases , the remaining error being due to the small discrepancy between the uncorrected shifts at the two different densities shown in fig .[ fig : p_conv ] .pressure differences between snapshots are generally of less interest for md simulations than absolute values ; nevertheless , they can be used as an additional measure of the quality of the basis . the comparison with pws is shown in fig .[ fig : pd_conv ] , both for the uncorrected and corrected nao results ( the mv correction for pws has no effect on pressure differences ) .the three uncorrected nao bases give errors equivalent to pw cutoffs of approximately 810 ev , 950 ev , and 1000 ev , respectively , similarly to the values reported previously for other quantities of interest .after the correction is applied , the error for all bases is reduced to .2 kbar . once again , we note the peculiarity of the basis ( used in ref . ) , which features a very small average pressure shift without the need for a correction , while the rms error in pressure differences follows the trend of the other bases .
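the linear pressure correction proposed here amounts to fitting the nao pressure error against the reference pressure and subtracting the fit . a minimal numpy sketch , with placeholder pressures standing in for the 200 snapshots at the two densities , could look as follows .

```python
# sketch of the linear pressure correction: fit the NAO pressure error against
# the reference PW pressure and use the fit to correct new values (kbar).
import numpy as np

def fit_pressure_correction(p_pw, p_nao):
    """fit error = a * P + b; the corrected pressure is then P_nao - (a*P + b).
    since a is small, evaluating the fit at P_nao instead of P_pw changes little."""
    a, b = np.polyfit(p_pw, p_nao - p_pw, deg=1)
    return a, b

def correct_pressure(p_nao, a, b):
    return p_nao - (a * p_nao + b)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    p_pw = rng.uniform(-5.0, 25.0, 200)                   # placeholder reference pressures
    p_nao = 1.04 * p_pw + 3.0 + rng.normal(0, 0.3, 200)   # constant shift + extra stiffness
    a, b = fit_pressure_correction(p_pw, p_nao)
    print(f"fit: error = {a:.3f} * P + {b:.2f} kbar")
    resid = correct_pressure(p_nao, a, b) - p_pw
    print("residual RMS after correction:", float(np.sqrt(np.mean(resid ** 2))))
```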
& & & & + average shift & & & ( g / cm ) & & & & & + & 1.20 & 0.28 & 0.19 & 0.15 & 0.12 & 0.08 + & & & 1.00 & 0.29 & 0.21 & 0.16 & 0.12 & 0.08 + & & 1.20 & 0.06 & 4.18 & 2.48 & 1.73 & .57 + & & & 1.00 & .89 & 3.56 & 1.99 & 1.30 & .48 + & & 1.20 & .27 & .55 & .43 & .37 & .43 + & & & 1.00 & .48 & .65 & .53 & .50 & .32 + rms error + & 1.20 & 1.86 & 1.90 & 1.32 & 0.93 & 0.47 + & & & 1.00 & 1.54 & 1.85 & 1.44 & 0.94 & 0.43 + & & 1.20 & 0.41 & 0.38 & 0.27 & 0.20 & 0.15 + & & & 1.00 & 0.39 & 0.31 & 0.22 & 0.15 & 0.10 + & & 1.20 & 0.33 & 0.21 & 0.14 & 0.16 & 0.13 + & & & 1.00 & 0.24 & 0.18 & 0.14 & 0.14 & 0.09 + & & 1.20 & 0.11 & 0.17 & 0.07 & 0.05 & 0.03 + & & & 1.00 & 0.11 & 0.17 & 0.07 & 0.05 & 0.03 + & & & 1.20 & 0.07 & 0.15 & 0.09 & 0.04 & 0.02 + & & & 1.00 & 0.07 & 0.15 & 0.09 & 0.04 & 0.02 + & & 1.20 & 8.27 & 13.83 & 5.41 & 4.33 & 0.98 + & & & 1.00 & 7.56 & 13.40 & 5.00 & 4.33 & 1.04 + & & & 1.20 & 7.57 & 15.11 & 8.91 & 3.88 & 1.77 + & & & 1.00 & 7.34 & 15.01 & 8.77 & 4.07 & 1.67 + [ table : water_summary ]in this paper , we have described the development and testing of finite - range nao basis sets for water - based systems .we have discussed the general strategy employed for creating basis sets of increasing size and accuracy , and have proposed the use of a screened coulomb confinement potential for shaping the polarization orbitals , in order to achieve a good agreement with the higher angular momentum shells of the free atom without needing to extend the confinement radius beyond what is physically useful for condensed matter systems .we have presented 13 new bases for the water molecule , ranging from to .the full list of parameters needed to recreate and use them are given in the supplementary material ( we provide these instead of the numerical radial functions themselves so as to minimize the dependence on specific pseudopotentials ) . in order to perform rigorous tests of the accuracy of our bases , we use auxiliary pw calculations at different kinetic energy cutoffs .we can therefore compare the results obtained with naos with the pw ones at a very large ( essentially fully converged ) cutoff , and also equate the accuracy of the various nao bases with pw bases at specific cutoffs .this is done using two different dft codes ( siesta for naos , abinit for pws ) with the same pseudopotentials and kleinman - bylander factorization ; even so , small algorithmic differences between the codes will affect the comparison with the nao bases of highest accuracy , slightly underestimating their performance with respect to pws for quantities such as energy differences .the results for our tests on a variety of molecular and condensed systems show the transferability and accuracy of the bases . 
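the ` equivalent pw cutoff ' numbers quoted throughout this section can be obtained by interpolating the pw error - versus - cutoff curve at the nao error value . the sketch below assumes a monotonically decreasing error curve ; the cutoff and error lists are placeholders for the convergence data , not the actual values .

```python
# sketch: quote an NAO error as an "equivalent PW cutoff" by interpolating the
# PW error-vs-cutoff curve (log-linear interpolation in the error).
import numpy as np

def equivalent_cutoff(nao_error, pw_cutoffs_ev, pw_errors):
    """return the PW cutoff whose error matches nao_error.
    assumes pw_errors decreases monotonically with increasing cutoff."""
    cut = np.asarray(pw_cutoffs_ev, float)
    err = np.asarray(pw_errors, float)
    order = np.argsort(err)                  # np.interp needs increasing x
    return float(np.interp(np.log(nao_error), np.log(err[order]), cut[order]))

if __name__ == "__main__":
    pw_cutoffs = [400, 600, 800, 1000, 1200, 1500, 2000]   # eV (placeholder)
    pw_errors = [310.0, 120.0, 48.0, 20.0, 9.0, 3.5, 1.0]  # meV (placeholder)
    print(f"equivalent cutoff: {equivalent_cutoff(36.0, pw_cutoffs, pw_errors):.0f} eV")
```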
in particular , there is a good level of consistency both between different systems and different properties of the same system when comparing to the performance of pw calculations at finite cutoff : errors in total energies for the , , and bases are on the order of those for cutoffs of 1100 ev , 1200 ev , and 1300 ev , respectively , while errors in energy differences , ionic forces , and pressure differences are on the order of those for cutoffs of 800 ev , 900 ev , and 1000 ev .however , it is important to remember that the two types of bases are not at all equivalent : this is clearly demonstrated by calculations of absolute pressure , which show naos to naturally give much smaller errors for this quantity than pws .we have also proposed a simple correction for further reducing errors in absolute pressures and pressure differences for nao calculations , based on a linear fit to the data obtained from 200 liquid water snapshots at two densities ; this can be used , e.g. , to correct average pressures obtained from nve aimd simulations .this work was partly funded by grants fis2009 - 12721 and fis2012 - 37549 from the spanish ministry of science .mvfs acknowledges a doe early career award no. de - sc0003871 .we thank javier junquera for support with the translation of pseudopotentials between siesta and abinit .the calculations were performed on the following hpc clusters : kroketa ( cic nanogune , spain ) , arina ( universidad del país vasco / euskal herriko unibertsitatea , spain ) , magerit ( cesvima , universidad politécnica de madrid , spain ) .we thank the res red española de supercomputación for access to magerit .sgiker ( upv / ehu , micinn , gv / ej , erdf and esf ) support is gratefully acknowledged .
|
finite - range numerical atomic orbitals are the basis functions of choice for several first principles methods , due to their flexibility and scalability . generating and testing such basis sets , however , remains a significant challenge for the end user . we discuss these issues and present a new scheme for generating improved polarization orbitals of finite range . we then develop a series of high - accuracy basis sets for the water molecule , and report on their performance in describing the monomer and dimer , two phases of ice , and liquid water at ambient and high density . the tests are performed by comparison with plane - wave calculations , and show the atomic orbital basis sets to exhibit an excellent level of transferability and consistency . the highest - order bases ( quadruple- ) are shown to give accuracies comparable to a plane - wave kinetic energy cutoff of at least ev for quantities such as energy differences and ionic forces , as well as achieving significantly greater accuracies for total energies and absolute pressures .
|
a set of mobile agents with ( possibly distinct ) maximum speeds ( ) are in charge of _ guarding _ or in other words _ patrolling _ a given region of interest . patrolling problems find applications in the field of robotics where surveillance of a region is necessary .an interesting one - dimensional variant have been introduced by czyzowicz et al . , where the agents move along a rectifiable jordan curve representing a _fence_. the fence is either a _ closed _ curve ( the boundary of a compact region in the plane ) , or an _ open _ curve ( the boundary between two regions ) . for simplicity ( and without loss of generality ) it can be assumed that the open curve is a line segment and the closed curve is a circle .the movement of the agents over the time interval is described by a _ patrolling schedule _, where the speed of the agent , ( ) , may vary between zero and its maximum value in any of the two moving directions ( left or right ). given a closed or open fence of length and maximum speeds of agents , the goal is to find a _ patrolling schedule _ that minimizes the _ idle time _ , defined as the longest time interval in during which a point on the fence remains unvisited , taken over all points . a straightforward volume argument yields the lower bound for an ( open or closed ) fence of length .a _ patrolling algorithm _ computes a _ patrolling schedule _ for a given fence and set of speeds . for an open fence ( line segment ) ,czyzowicz et al . proposed a simple partitioning strategy , algorithm , where each agent moves back and forth perpetually in a segment whose length is proportional with its speed .specifically , for a segment of length and agents with maximum speeds , algorithm partitions the segment into pieces of lengths , and schedules the agent to patrol the interval with speed .algorithm has been proved to be optimal for uniform speeds , i.e. , when all maximum speeds are equal .algorithm achieves an idle time on a segment of length , and so is a 2-approximation algorithm for the shortest idle time .it has been conjectured ( * ? ? ? *conjecture 1 ) that is optimal for arbitrary speeds , however this was disproved by kawamura and kobayashi : they selected speeds and constructed a schedule for agents that achieves an idle time of .a patrolling algorithm is _ universal _ if it can be executed with any number of agents and any speed setting for the agents .for example , above is universal , however certain algorithms ( e.g. , algorithm in section [ sec:2/3 ] or the algorithm in section [ sec:24/25 ] ) can only be executed with certain speed settings or number of agents , i.e. , they are not universal . for the closed fence ( circle ) , no universal algorithm has been proposed to be optimal . for uniform speeds ( i.e. , ) , it is not difficult to see that placing the agents uniformly around the circle and letting them move in the same direction yields the shortest idle time .indeed , the idle time in this case is , matching the lower bound mentioned earlier . 
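a minimal sketch of the proportional partition strategy for an open fence follows , assuming the standard closed form of its idle time , 2l divided by the sum of the maximum speeds ( the explicit expression is garbled in the extracted text above ) .

```python
# sketch of the partition strategy A1 for an open fence (segment) of length L:
# each agent patrols a subinterval proportional to its speed, so every
# subinterval is traversed back and forth with the same period.

def partition_schedule(length, speeds):
    """return (intervals, idle_time) for the proportional partition."""
    total = sum(speeds)
    intervals, left = [], 0.0
    for v in speeds:
        right = left + length * v / total
        intervals.append((left, right))
        left = right
    # agent i needs 2*len_i/v_i = 2*L/total to revisit any point of its piece
    idle_time = 2.0 * length / total
    return intervals, idle_time

if __name__ == "__main__":
    speeds = [1.0, 1.0 / 2, 1.0 / 3, 1.0 / 4]   # example maximum speeds
    intervals, idle = partition_schedule(1.0, speeds)
    for i, (a, b) in enumerate(intervals, 1):
        print(f"agent {i}: [{a:.3f}, {b:.3f}]")
    print(f"idle time of A1: {idle:.3f}")
```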
for the variant in which all agents are _ required _ to move in the same direction along a circle of unit length ( say clockwise ) , czyzowicz et al .* conjecture 2 ) conjectured that the following algorithm always yields an optimal schedule .label the agents so that .let , , be an index such that .place the agents at equal distances of around the circle , so that each moves clockwise at the same speed .discard the remaining agents , if any .since all agents move in the same direction , we also refer to as the `` runners '' algorithm .it achieves an idle time of ( * ? ? ?* theorem 2 ) . observe that is also universal .[ [ historical - perspective . ] ] historical perspective .+ + + + + + + + + + + + + + + + + + + + + + + multi - agent patrolling is a variation of the fundamental problem of multi - robot coverage , studied extensively in the robotics community .a variety of models has been considered for patrolling , including deterministic and randomized , as well as centralized and distributed strategies , under various objectives ._ idleness _ , as a measure of efficiency for a patrolling strategy , was introduced by machado et al . in a graph setting ; see also the article by chevaleyre . the closed fence patrolling problem is reminiscent of the classical _ lonely runners conjecture _ , introduced by wills and cusick , independently , in number theory and discrete geometry .assume that agents run clockwise along a circle of length , starting from the same point at time .they have distinct but constant speeds ( the speeds can not vary , unlike in the model considered in this paper ) .a runner is called _ lonely _ when he / she is at distance of at least from any other runner ( along the circle ) .the conjecture asserts that each runner is lonely at some time .the conjecture has only been confirmed for up to runners .[ [ notation - and - terminology . ] ] notation and terminology .+ + + + + + + + + + + + + + + + + + + + + + + + + a _ unit _ circle is a circle of unit length .we parameterize a line segment and a circle of length by the interval ] , for , where is the position of agent at time .each function is continuous ( for a closed fence , the endpoints of the interval ] where for all and . for a given fence ( closed or open ) of length andgiven maximum speeds , denotes the idle time of a schedule produced by algorithm .we use _ position - time diagrams _ to plot the agent trajectories with respect to time .one axis represents the position of the agents along the fence and the other axis represents time . in fig .[ fig : dtd ] , for instance , the horizontal axis represents the position of the agents along the fence and the vertical axis represents time . in fig .[ fig : circle ] , however , the vertical axis represents the position and the vertical axis represents time . a schedule with idle time is equivalent to a covering problem in such a diagram ( see fig .[ fig : dtd ] ) . for a straight - line ( i.e. , constant speed ) trajectory between points and in the diagram , construct a shaded parallelogram with vertices , , , , , where denotes the desired idle time and the shaded region represents the covered region . 
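the ` runners ' algorithm for the unidirectional circle can be sketched in a few lines . since the defining expressions are garbled in the extraction , the rule below is a reconstruction under the standard assumption that the index r maximizes r times the r - th largest speed and that the idle time is the reciprocal of that product ; reassuringly , for harmonic speeds it reproduces the unit idle time mentioned later in the text .

```python
# sketch of the "runners" algorithm A2 on a unit circle (reconstructed form,
# not a verbatim transcription of Theorem 2): keep the r fastest agents for the
# r maximizing r * v_r, space them evenly, and run all of them clockwise at v_r.

def runners_schedule(speeds):
    v = sorted(speeds, reverse=True)              # v[0] >= v[1] >= ...
    best_r = max(range(1, len(v) + 1), key=lambda r: r * v[r - 1])
    idle_time = 1.0 / (best_r * v[best_r - 1])
    positions = [i / best_r for i in range(best_r)]   # equally spaced starting points
    return best_r, v[best_r - 1], positions, idle_time

if __name__ == "__main__":
    harmonic = [1.0 / i for i in range(1, 7)]     # v_i = 1/i, k = 6
    r, v_r, pos, idle = runners_schedule(harmonic)
    print(f"use r = {r} runner(s) at speed {v_r:.3f}, idle time = {idle:.3f}")
```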
in particular , if an agent stays put in a time - interval , the parallelogram degenerates to a vertical segment .a schedule for the agents ensures idle time if and only if the entire area of the diagram in the time interval is covered .to evaluate the efficiency of a patrolling algorithm , we use the ratio between the idle times of and the partition - based algorithm .lower values of indicate better ( more efficient ) algorithms .recall however that certain algorithms can only be executed with certain speed settings or number of agents .we write for the _ harmonic number_. from to , waiting at for time and then moving from to with speed .,scaledwidth=25.0% ] [ [ our - results . ] ] our results .+ + + + + + + + + + + + 1 .consider the unidirectional unit circle ( where all agents are required to move in the same direction ) .+ \(i ) we disprove a conjecture by czyzowicz et al .* conjecture 2 ) regarding the optimality of algorithm . specifically, we construct a schedule for agents with harmonic speeds , , that has an idle time strictly less than .in contrast , algorithm yields a unit idle time for harmonic speeds ( ) , hence it is suboptimal. see theorem [ thm : counter ] , section [ sec : uni ] .+ \(ii ) for every ]. see theorem [ thm : tau ] , section [ sec : uni ] .2 . consider the open fence patrolling .for every integer , there exist agents with and a guarding schedule for a segment of length .alternatively , for every integer there exist agents with suitable speeds , and a guarding schedule for a unit segment that achieves idle time at most .in particular , for every , there exist agents with suitable speeds , and a guarding schedule for a unit segment that achieves idle time at most .this improves the previous bound of by kawamura and kobayashi .see theorem [ thm:24/25 ] , section [ sec:24/25 ] .3 . consider the bidirectional unit circle .+ \(i ) for every , there exist maximum speeds and a new patrolling algorithm that yields an idle time better than that achieved by both and .in particular , for large , the idle time of with these speeds is about of that achieved by and .see proposition [ prop:2/3 ] , section [ sec:2/3 ] .+ \(ii ) for every , there exist maximum speeds so that there exists an optimal schedule ( patrolling algorithm ) for the circle that does not use up to of the agents .in contrast , for a segment , any optimal schedule must use all agents . see proposition [ prop : useless ] , section [ sec:2/3 ] .+ \(iii ) there exist settings in which if all agents are used by a patrolling algorithm , then some agent(s ) need overtake ( pass ) other agent(s ) .this partially answers a question left open by czyzowicz et al .* section 3 ) .see the remark at the end of section [ sec:2/3 ] .[ [ a - counterexample - for - the - optimality - of - algorithm - a_2 . ] ] a counterexample for the optimality of algorithm .+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + we show that algorithm by czyzowicz et al . for unidirectional circle patrolling is not always optimal .we consider agents with _ harmonic speeds _ , .obviously , for this setting we have , which is already achieved by the agent with the highest ( here unit ) speed .we design a periodic schedule ( patrolling algorithm ) for agents with idle time . 
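the covering picture above also suggests a brute - force numerical check of a candidate schedule : sample points and times on a grid and record the longest interval during which a sample point goes unvisited . here `schedule(i , t)` is a hypothetical callable returning agent i 's position at time t , and the tolerances are grid - based , so this is a sanity check rather than a proof .

```python
# brute-force idle-time estimate for a schedule on the unit circle.

def idle_time_estimate(schedule, n_agents, horizon, dt=1e-3, dx=1e-2):
    n_pts = int(round(1.0 / dx))
    last_visit = [0.0] * n_pts
    worst = 0.0
    steps = int(round(horizon / dt))
    for s in range(steps + 1):
        t = s * dt
        visited = {int((schedule(i, t) % 1.0) / dx) % n_pts for i in range(n_agents)}
        for p in visited:
            worst = max(worst, t - last_visit[p])
            last_visit[p] = t
    # account for points not revisited before the end of the horizon
    worst = max(worst, max(horizon - lv for lv in last_visit))
    return worst

if __name__ == "__main__":
    # toy check: 3 equally spaced unit-speed runners -> idle time close to 1/3
    sched = lambda i, t: (i / 3.0 + t) % 1.0
    print(round(idle_time_estimate(sched, n_agents=3, horizon=3.0), 3))
```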
in this schedule , agent moves continuously with unit speed , and it remains to schedule agents such that every point is visited at least one more time in the unit length open time interval between two consecutive visits of . we start with a weaker claim , for _ closed _ intervals but using only agents .[ lem : counter ] consider the unit circle , where all agents are required to move in the same direction . for k=6 agents of harmonic speeds , ,there is a schedule where agent moves continuously with speed , and every point on the circle is visited by some other agent in every closed unit length time interval between two consecutive visits of .agents of speeds , , on a unit circle with period .agent moves continuously with speed .each point is visited by one of the agents between any two consecutive visits of agent .,scaledwidth=98.0% ] our proof is constructive .we construct a periodic schedule for the agents with period ; refer to fig .[ fig : circle ] .agents , and continuously move with maximum speed , while agents , and each stop at certain times in their movements .their schedule in one period ] .we construct a schedule with an idle time at most .let agent start at time and move clockwise at maximum ( unit ) speed , i.e. , denotes the position on the unit circle of agent at time . assume without loss of generality that is a multiple of , i.e. , , where is a natural number .divide the time interval ] is the interval . for each , cover the unit circle so that every point of is visited at least once by some agent .this ensures that each point of the circle is visited at least once in the time interval ] . for ,let be the interval of points visited by agent during the time interval ] .indeed , if ] is not visited by any agent during the time interval ] is not visited by any agent during the time interval ] , i.e. , it starts at ] , otherwise some point in ] .3 . ] , i.e. , it starts in ] .moreover this rotation must be in the same clockwise sense as the previous one , since otherwise there would exist points not visited for at least one unit of time .pick three points close to , , and , respectively , i.e. , , for . by observations 2 and 4 ,these three points must be visited by in the first two rotations during the time interval ] . thus thereexist settings in which if all agents are used by a patrolling algorithm , then some agent(s ) need to overtake ( pass ) other agent(s ) .observe however that overtaking can be easily avoided in this setting by not making use of any of the agents .kawamura and kobayashi showed that algorithm by czyzowicz et al . does not always produce an optimal schedule for open fence patrolling .they presented two counterexamples : their first example uses agents and achieves an idle time of ; their second example uses agents and achieves an idle time of . by replicating the strategy from the second example with a number of agents larger than , i.e. , iteratively using blocks of agents , we improve the ratio to for any . we need two technical lemmas to verify this claim .[ lem:1 ] consider a segment of length such that three agents are patrolling perpetually each with speed of and generating an alternating sequence of uncovered triangles , as shown in the position - time diagram in fig .[ fig : lemmadiagram1 ] . 
denote the vertical distances between consecutive occurrences of and by and between consecutive occurrences of and by .denote the bases of and by and respectively , and the heights of and by and respectively .then \(i ) observe that , and reach the left endpoint of the segment at times , , and , respectively . during the time interval $ ] , each agent traverses the distance and the positions and directions of the agents at time are the same as those at time .hence is a period for their schedule .\(ii ) since and , we have .since is the midpoint of , we have , thus .since all the agents have same speed , , all the trajectory line segments in the position - time diagram have the same slope , .hence .thus , is similar to . since , is congruent to , and consequently .put , , and .recall from ( i ) that . by construction, we have , thus .we also have , thus .since is the midpoint of , we have , thus .let denote the -coordinate of point ; then . to compute we compute the intersection of the two segments and .we have , , , and .the equations of and are : and : , and solving for yields , and consequently . [ lem:2 ] ( i )let be the speed of an agent needed to cover an uncovered isosceles triangle ; refer to fig .[ fig : lemmadiagram2 ] ( left ). then , where and are the base and height of , respectively .\(ii ) let be the speed of an agent needed to cover an alternate sequence of congruent isosceles triangles with bases on same vertical line ; refer to fig .[ fig : lemmadiagram2 ] ( right ) .then where is the vertical distance between the triangles , is the base and is the height of the congruent triangles .blocks ; each block has three agents with speed . middle : six agents with speed . bottom : patrolling strategy for blocks using agents for two time periods ( starting at relative to fig . [fig : lemmadiagram1 ] ) ; the block length is and the time period is .,title="fig : " ] blocks ; each block has three agents with speed .middle : six agents with speed . bottom : patrolling strategy for blocks using agents for two time periods ( starting at relative to fig .[ fig : lemmadiagram1 ] ) ; the block length is and the time period is .,title="fig : " ] [ thm:24/25 ] for every integer , there exist agents with and a guarding schedule for a segment of length .alternatively , for every integer there exist agents with suitable speeds , and a guarding schedule for a unit segment that achieves idle time at most .in particular , for every , there exist agents with suitable speeds , and a guarding schedule for a unit segment that achieves idle time at most .refer to fig .[ fig:24/25 ] .we use a long fence divided into blocks ; each block is of length .each block has 3 agents each of speed 5 running in zig - zag fashion .consecutive blocks share one agent of speed which covers the uncovered triangles from the trajectories of the zig - zag agents in the position - time diagram . the first and the last block use two agents of speed shared by any other block .the setting of these speeds is explained below . from lemma [ lem:1](ii ) , we conclude that all the uncovered triangles generated by the agents of speed 5 are congruent and their base is and their height is . by lemma [ lem:2](i ) , we can set the speeds of the agents not shared by consecutive blocks to . also , in our strategy , lemma [ lem:1](ii ) yields .hence , by lemma [ lem:2](ii ) , we can set the speeds of the agents shared by consecutive blocks to . in our strategy, we have 3 types of agents : agents running with speed 5 as in fig . 
[ fig:24/25 ] ( top ) ,unit speed agents not shared by 2 consecutive blocks and unit speed agents shared by two consecutive blocks as in fig .[ fig:24/25 ] ( middle ) . by lemma [ lem:1](i ) ,the agents of first type have period . in fig . [ fig:24/25 ] ( middle ) , there are two agents of second type and both have a similar trajectory .thus , it is enough to verify for the leftmost unit speed agent .it takes time from to and again time from to .next , it waits for time at .hence after time , its position and direction at is same as that at .hence , its time period is . for the agents of third type ,refer to fig .[ fig:24/25 ] ( middle ) : it takes time from to and time from to .thus , arguing as above , its time period is .hence , overall , the time period of the strategy is . for blocks, we use agents .the sum of all speeds is and the total fence length is .the resulting ratio is .for example , when we reobtain the bound of kawamura and kobayashi ( from their 2nd example ) , when and further on , .thus an idle time of at most can be achieved for every given , as required .we sincerely thank akitoshi kawamura for generously sharing some technical details concerning their patrolling algorithms .we also express our satisfaction with the software package _jsxgraph , dynamic mathematics with javascript _ , used in our experiments .n. agmon , s. kraus , and g. a. kaminka , multi - robot perimeter patrol in adversarial settings , in _ proc . international conference on robotics and automation ( icra 2008 ) _ , ieee , 2008 , pp .j. barajas and o. serra , the lonely runner with seven runners , _ electronic journal of combinatorics _ * 15 * ( 2008 ) , r48 .y. chevaleyre , theoretical analysis of the multi - agent patrolling problem , in _ proc .international conference on intelligent agent technology ( iat 2004 ) _ , ieee , 2004 , pp .h. choset , coverage for robotics a survey of recent results , _ annals of mathematics and artificial intelligence _ * 31 * ( 2001 ) , 113126 . j. czyzowicz , l. gasieniec , a. kosowski , and e. kranakis , boundary patrolling by mobile agents with distinct maximal speeds , in _ proc .19th european symposium on algorithms ( esa 2011 ) _ , lncs 6942 , springer , 2011 , pp . 701712 .y. elmaliach , n. agmon , and g. a. kaminka , multi - robot area patrol under frequency constraints , in _ proc .international conference on robotics and automation ( icra 2007 ) _ , ieee , 2007 , pp .a. kawamura and y. kobayashi , fence patrolling by mobile agents with distinct speeds , in _ proc .23rd international symposium on algorithms and computation ( isaac 2012 ) _ , lncs 7676 , springer , 2012 , pp .598608 .a. machado , g. ramalho , j. d. zucker , and a. drogoul , multi - agent patrolling : an empirical analysis of alternative architectures , _3rd international workshop on multi - agent - based simulation _ , springer , 2002 , pp .
|
suppose that a fence needs to be protected ( perpetually ) by mobile agents with maximum speeds so that no point on the fence is left unattended for more than a given amount of time . the problem is to determine if this requirement can be met , and if so , to design a suitable patrolling schedule for the agents . alternatively , one would like to find a schedule that minimizes the _ idle time _ , that is , the longest time interval during which some point is not visited by any agent . we revisit this problem , introduced by czyzowicz et al . ( 2011 ) , and discuss several strategies for the cases where the fence is an open and a closed curve , respectively . in particular : ( i ) we disprove a conjecture by czyzowicz et al . regarding the optimality of their algorithm for unidirectional patrolling of a closed fence ; ( ii ) we present an algorithm with a lower idle time for patrolling an open fence , improving an earlier result of kawamura and kobayashi . * keywords * : mobile agents , fence patrolling , idle time , approximation algorithm .
|
a rare event is one which occurs with a very small probability .however , when they do occur they can have a huge effect and so it is often important to estimate the actual probability of their occurrence .examples where rare events are important are in banking and insurance , in biological systems where important processes such as genetic switching and mutations occur with extremely small rates , and in nucleation processes .rare events are also of importance in nonequilibrium processes such as charge and heat transport in small devices and transport in biological cells .the functioning of nano - electronic devices can be affected by rare large - current fluctuations and it is important to know how often they occur . in this paperour interest is in predicting probabilities of rare fluctuations in transport processes .a number of interesting results have been obtained recently on large fluctuations away from typical behavior in nonequilibrium systems .these include results such as the fluctuation theorems and the jarzynski relation . in the context of transport onetypically considers an observable , say , such as the total number of particles or heat transferred across an object with an applied chemical potential or temperature difference .this is a stochastic variable and for a given observation time this will have a distribution .the various general results that have been obtained for give some quantitative measure of the probability of rare fluctuations .analytic computations of the tails of for any system are usually difficult .this is also true in experiments or in computer simulations since the generation of rare events requires a large number of trials . for large probabilities of large fluctuations show scaling behavior , where the function is known as the large deviation function . for a few model systems exact results have been obtained for either or its legendre transform , which can be defined in terms of the characteristic function as . recently an algorithm has been proposed to compute .however , as has been pointed out in ref . there may be problems in obtaining the tails of using the algorithm of ref .the algorithm proposed in this paper is complementary to the one discussed in ref . in the sense that we obtain directly .our algorithm , based on the idea of importance sampling , computes for any given and accurately reproduces the tails of the distribution .algorithms based on importance sampling have earlier been used in the study of equilibrium systems and in the study of transition rate processes .however , we are not aware of any applications to the study of large fluctuations of currents in nonequilibrium systems and this is the main focus of this paper .here we choose two prototype models of transport , namely , heat conduction across a harmonic chain and particle transport in the symmetric simple exclusion process .we illustrate the implementation of importance sampling in the computation of large fluctuations of currents in these two nonequilibrium systems .consider a system with a time evolution described by the stochastic process .for simplicity we assume for now that is an integer - valued variable and time is discrete .let us denote a particular path in configuration space over a time period by the vector and let be an observable which is a function of the path . we will be interested in finding the probability distribution of and especially in computing the probability of large deviations about the mean value . 
as a simple illustrative example consider the tossing of a fair coin .for tosses we have a discrete stochastic process described by the time series where if the outcome in the trial is heads and otherwise .suppose we want to find the probability of generating heads ( thus ) .an example of a rare event is , for example , the event .the probability of this is and if we were to simulate the coin toss experiment we would need more than repeats of the experiment to realize this event with sufficient frequency to calculate the probability reliably . for large this is clearly very difficult .the importance sampling algorithm is useful in such situations .the basic idea is to increase the occurrence of the rare events by introducing a bias in the dynamics .the rare events are produced with a new probability corresponding to the bias .however , by keeping track of the relative weights of trajectories of the unbiased and biased processes it is possible to recover the required probability corresponding to the required unbiased process .we now describe the algorithm in the context of evaluating for the stochastic process .we denote the probability of a particular trajectory by . by definition : for the same system let us consider a biased dynamics for which the probability of the same path is given by .then we have : thus in terms of the biased dynamics , is the average and in a simulation we estimate this by performing averages over realizations to obtain : where denotes the path for the realization .for we obtain which is the required probability .note that the weight factor is a function of the path . in a simulationwe know the details of the microscopic dynamics for both the biased and unbiased processes .thus we can evaluate for every path generated by the biased dynamics .a necessary requirement of the biased dynamics is that the distribution of that it produces [ i.e. , should be peaked around the desired values of for which we want an accurate measurement of .as we will see the required dynamics can often be guessed from physical considerations .we first explain the algorithm for the coin tossing experiment . in this casewe consider a biased dynamics where the probability of heads is and that of tails is . if we take then the event , which was earlier rare , is now generated with increased frequency and we can use eq .( [ pest ] ) to estimate the required probability . for any path consisting of heads the weight factoris simply given by ] . in fig .( [ fig-1-part ] ) we plot the exact distribution along with a direct simulation of the above process with averaging over realizations . as we can seethe direct simulation is accurate only for events with probabilities of .now we illustrate our algorithm using a biased dynamics .we consider biasing obtained by changing the boundary transition rates .we denote the rates of the biased dynamics by and these are chosen such that has a peak in the required region . in our simulationwe consider a discrete - time implementation of ssep .for every realization of the process over a time ( after throwing away transients ) the weight factor is dynamically evaluated .for instance , every time a particle hops into the system from the left reservoir , is incremented by . in fig .( [ fig-1-part ] ) we see the result of using our algorithm with two different biases .using the same number of realizations we are now able to find probabilities up to and the comparison with the exact result is excellent . 
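the coin - tossing illustration can be turned directly into a runnable estimator . the sketch below generates paths with a biased head probability and reweights each realization by the ratio of the fair - coin path probability to the biased one ; the parameter values ( 90 heads out of 100 tosses , bias 0.9 ) are illustrative choices , not taken from the text .

```python
# sketch: estimate P(n heads in N fair tosses) by importance sampling with a
# biased coin of head probability p_bias and per-path weight
# W = (1/2)^N / (p_bias^n_heads * (1 - p_bias)^(N - n_heads)).
import math
import random

def estimate_prob_heads(n_target, n_tosses, p_bias, n_realizations, seed=0):
    rng = random.Random(seed)
    acc = 0.0
    for _ in range(n_realizations):
        heads = sum(rng.random() < p_bias for _ in range(n_tosses))
        if heads == n_target:
            log_w = (n_tosses * math.log(0.5)
                     - heads * math.log(p_bias)
                     - (n_tosses - heads) * math.log(1.0 - p_bias))
            acc += math.exp(log_w)
    return acc / n_realizations

if __name__ == "__main__":
    N, n = 100, 90                     # a rare event for a fair coin
    exact = math.comb(N, n) * 0.5 ** N
    est = estimate_prob_heads(n, N, p_bias=0.9, n_realizations=200000)
    print(f"exact    : {exact:.3e}")
    print(f"estimated: {est:.3e}")
```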
for for the one - site ssep model with .mc refers to direct monte carlo simulations . leftbias corresponds to and right bias to .,width=312 ] for for the three - site ssep model with .mc refers to direct monte carlo simulations . for left ( right ) bias simulations , the particles in bulk hop to the left ( right ) with rate and to the right ( left ) with unit ratethe boundary rates are kept unchanged.,width=312 ] we next study the case with with rates chosen such that the system reaches a nonequilibrium steady state with . finding analytically involves diagonalizing an matrix .we do this numerically and after an inverse laplace transform find . in fig .( [ fig-3-part ] ) we show the numerical and direct simulation results for this case and also the results obtained using the biased dynamics ; in this case we consider a biased dynamics with asymmetric bulk hopping rates .again we find that the biasing algorithm significantly improves the accuracy of finding probabilities of rare events using the same number of realizations ( ) .for for heat conduction across a single free particle with .the parameters have been chosen to correspond to a region in parameter space where the fluctuation theorem is not satisfied .mc refers to direct monte carlo simulations .the left bias corresponds to .,width=312 ] next we consider the problem of heat conduction across a system connected to heat reservoirs modeled by langevin white - noise reservoirs . herewe are interested in the distribution of the net heat transfer from the left bath into the system over time .first let us consider the simple example of a single brownian particle connected to two baths at temperatures and .this model was studied recently by visco who obtained an exact expression for the characteristic function of .the equation of motion for the system is given by : where are gaussian delta - correlated noises with zero mean and unit variance , thus and .the heat flow from the left bath into the system in time is given by .for the single brownian particle in this problem it is sufficient to specify the state by the velocity alone .if we choose then will have a peak at .it is clear that to use the biasing algorithm to compute probabilities of rare events with we can choose a biased dynamics with temperatures of left and right reservoirs taken to be and with .the calculation of the weight factor is somewhat tricky since computing ] is non - trivial .also one can not eliminate to express as a functional of only the path . to get around this problemwe note the following mapping of the single - particle system to the over - damped dynamics of two coupled oscillators given by the equations of motion : .the variable satisfies the same equation as in eq .( [ visco1 ] ) .thus with the same definition for as given earlier we can use the above equations for and to find . in this casewe do not have the problem as earlier and both and can be readily expressed in terms of .let us denote by the parameters of the biased system .also let be the noise realizations in the biased process that result in the same path as produced by for the original process .choosing for it can be shown that : .\label{weq}\ ] ] using the equations of motion we can express in terms of the phase - space variables and this gives : \\ & + { \frac}{1}{4d_2}\int_0^\tau dt[2(\g_2-\g_2')\dot{x}_2 x_{12}+(\g_2 ^ 2-\g_2 ^ 2 ) x_{12}^2 ] , \\q&=\int_0^\tau dt \dot{x}_1 x_{12}. \end{aligned}\ ] ] thus and are easily evaluated in the simulation using the biased dynamics . 
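a hedged sketch of the same reweighting idea for the single - site ssep follows . the state is the site occupancy , q counts the net number of particles injected by the left reservoir , the path is generated with modified boundary rates in a discrete - time scheme , and every step multiplies the weight by the ratio of original to biased step probabilities . the specific rate values and the target value of q are illustrative assumptions , not parameters from the text .

```python
# sketch: importance sampling for the single-site SSEP.
import math
import random

def run_ssep(rates, rates_bias, t_max, dt=0.01, seed=0):
    a, g, d, b = rates            # original: in-left, out-left, in-right, out-right
    ab, gb, db, bb = rates_bias   # biased rates actually used to generate the path
    rng = random.Random(seed)
    n, q, log_w = 0, 0, 0.0
    for _ in range(int(t_max / dt)):
        if n == 0:
            pb = {"in_left": ab * dt, "in_right": db * dt}
            po = {"in_left": a * dt, "in_right": d * dt}
        else:
            pb = {"out_left": gb * dt, "out_right": bb * dt}
            po = {"out_left": g * dt, "out_right": b * dt}
        pb["none"] = 1.0 - sum(pb.values())
        po["none"] = 1.0 - sum(po.values())
        event = rng.choices(list(pb), weights=list(pb.values()))[0]
        log_w += math.log(po[event] / pb[event])   # accumulate path-weight ratio
        if event == "in_left":
            n, q = 1, q + 1
        elif event == "in_right":
            n = 1
        elif event == "out_left":
            n, q = 0, q - 1
        elif event == "out_right":
            n = 0
    return q, log_w

def estimate_prob(q_target, rates, rates_bias, t_max, n_real):
    acc = 0.0
    for k in range(n_real):
        q, log_w = run_ssep(rates, rates_bias, t_max, seed=k)
        if q == q_target:
            acc += math.exp(log_w)
    return acc / n_real

if __name__ == "__main__":
    symmetric = (0.5, 0.5, 0.5, 0.5)      # unbiased boundary rates (illustrative)
    biased = (0.9, 0.1, 0.1, 0.9)         # favours left injection / right ejection
    print(estimate_prob(q_target=5, rates=symmetric, rates_bias=biased,
                        t_max=10.0, n_real=5000))
```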
in fig .( [ fig - visco ] ) we show results for obtained both directly and using the biased dynamics .again we see that for the same number of realizations ( ) one can obtain probabilities about times smaller than using direct simulations .the comparison with the numerical results obtained from the exact expression for also shows the accuracy of the algorithm . for for heat conduction across two particles connected by a harmonic spring with unit spring constant and .mc refers to direct monte carlo simulations .the left bias corresponds to and right bias to .,width=312 ] as an example we study the case with and with .for the special parameters we use the results in ref . to obtain with ^{1/6 } \bigr\ } ] . in the systems that we have studied we find that the error can be made small by choosing the biased dynamics carefully .we have applied the algorithm to two different models of particle and heat transport and shown that in both cases it gives excellent results .we note , however , that , in general , the fluctuations in grow with and with the system size , hence the errors are large and finding an appropriate biased dynamics is not always easy .further work is necessary for improving the efficiency of the algorithm for general systems .d. j. evans , e. g. d. cohen , and g. p. morriss , phys . rev .* 71 * , 2401 ( 1993 ) ; d. j. evans and d. j. searles , phys . rev .e * 50 * , 1645 ( 1994 ) ; g. gallavotti and e.g.d .cohen , phys .lett . * 74 * , 2694 ( 1995 ) ; j. l. lebowitz and h. spohn , j. stat . phys .* 95 * , 333 ( 1999 ) ; c. jarzynski , phys .. lett . * 78 * , 2690 ( 1997 ) ; g. e. crooks , phys .e * 60 * , 2721 ( 1999 ) ; t. hatano and s. sasa , phys .lett . * 86 * , 3463 ( 2001 ) ; u. seifert , phys .lett . * 95 * , 040602 , ( 2005 ) .
|
we present an algorithm for finding the probabilities of rare events in nonequilibrium processes . the algorithm consists of evolving the system with a modified dynamics for which the required event occurs more frequently . by keeping track of the relative weight of phase - space trajectories generated by the modified and the original dynamics one can obtain the required probabilities . the algorithm is tested on two model systems of steady - state particle and heat transport where we find a huge improvement from direct simulation methods .
|
convolutional neural networks ( cnns ) have been widely applied in the general problem of object localization and detection .the task is to detect a target object s location and its spatial coverage in an image in the form of bounding boxes . to localize objects from images ,typically a model is given images of category exemplars for training .critically , these training samples have precise object - level annotations , such as segmentations or bounding boxes . the models can be fine - tuned from a pre - trained network and utilize region proposals for candidate object locations , or trained end - to - end .these models have demonstrated high performance in localizing objects from learned categories , and further fine - tuning is required in order to accommodate novel object categories .co - localization is the more challenging problem of localizing objects from only the set of positive image examples of the category without any object - level annotations .the lack of negative examples and detailed annotations hinders the use of supervised methods for the co - localization task .recent methods typically utilize existing region proposal methods for generating a number of candidate regions for objects and object parts , followed by matching or selecting the region with the highest confidence score .however , region and object proposals are part of a research problem of its own , and have drawbacks such as lack of repeatability , reduced detection performance with a large number of proposals , and lead to difficult balance in precision and recall . our method, however , does not require any object proposals or object detectors to perform co - localization .the main idea in our work is that objects of the same class share common features or parts .moreover , these commonalities are central to both , the category representation and the detection and localization of the object . by finding those object categorical features, their joint locations can act as a single - shot object detector .this idea is also grounded in human visual learning , where it is suggested that people detect common features from examples of the category , as part of the object - learning process .we do this by obtaining the cnn features of the provided set of positive images , in order to select the ones that are highly and consistently activated , which we denote as category - consistent cnn features ( ccfs ) we then use these ccfs to discover the rough object locations , and demonstrate an effective way to co - propagate the feature activations into a stable object for precise co - localization .figure [ fig : pipeline ] illustrates the pipeline of our proposed framework . 
in more detail ,our approach begins with a cnn that has been pre - trained for image classification on imagenet .then , the images of the target category are passed through the network .we identify the last - layer convolutional filters that have highly and consistently activated feature maps as the ccfs .the ccfs feature maps are combined into a single normalized activation probability map , where the highly activated region directly implies the rough object location , since the ccfs represent object parts or object - associated features .the ccf step allows us to bypass the need for region proposals .then , the activation map is partitioned into superpixels and weighted by the superpixel geodesic distance into an object - likelihood map such that the responses of the object - associated features propagate over the region of the entire object .finally , the precise object location can be obtained by placing a tight bounding box around the thresholded object - likelihood map .the three main contributions of this work are : * 1 . *we propose a novel ccf extraction method that can automatically highlight the rough initial object regions , which acts as a single - shot detector .* we introduce an effective method of feature co - propagation for generating a stable object region using superpixel geodesic distances on the original images .* our method achieves state - of - the - art performance for object co - localization on the voc 2007 and 2012 datasets , the object discovery dataset , and the six held - out imagenet subset categories .furthermore , our framework is fully unsupervised , objects are discovered using just positive image exemplars .we are able to accurately localize objects without needing any region proposals .co - localization is related to work on weakly supervised object localization ( wsol ) since both share the same objective : to localize objects from an image .however , since wsol allows the use of negative examples , designing the objective function to discover the information of the object - of - interest is less challenging : wsol - based methods achieve higher performance on the same datasets as compared to co - localization methods , due to the allowed supervised training .for instance , uses image labels to evaluate the discrimination of discovered categories in order to localize the objects . adopts a discriminative multiple instance learning scheme to compensate the lack of object - level annotations to localize the objects based on the most discriminative instances . because of the supervision that is required by those methods , it is not trivial for wsol approaches to be directly applied to the co - localization scenarios .one challenge of co - localization is to define the criteria for discovering the objects without any negative examples . to fill the gap , state - of - the - art co - localization methods such as employ object proposals as part of their object discovery and co - localization pipelines . use the measure of objectness to generate multiple bounding boxes for each image , followed by an objective function to simultaneously optimize the image - level labels and box - level labels .such settings allow the use of discriminative cost function .this is also used in the work of co - localization on video frames . 
also starts from object proposals , their method shares the same spirit with the deformable part model where the objects are discovered and localized by matching common object parts .most recently , study the confidence score distribution of a supervised object detector over the set of object proposals to define an objective function , that learns a common object detector with similar confidence score distribution .all the aforementioned methods heavily depend on the quality of object proposals .our work approaches the problem from a different perspective . instead of trying to fill in the gap of the negative data and annotations that are unavailable , we find the common features shared by the objects from the positive images .then , we use the joint locations of those features as our single - shot object detector .this allows us to bypass the need for utilizing a region or object proposal algorithm as a fist step .our subsequent step refines the detected object features into a stable object by co - propagating their activations together .we describe the details of our 2-step approach in the following sections .our proposed method consists of two main steps . the first step is to find the ccfs of a category , and obtain their combined feature map that contains aggregated ccf activations over the rough object region .then , the ccf activations are co - propagated into a stable object using superpixel geodesic distances on the original images .[ sec_ccf_selection ] given a set of object images from the same class and a cnn that has been pre - trained to contain sufficient visual features , we first compute the feature maps from the last - layer convolutional kernels over the images . then , we obtain an activation matrix with each row being the activation vector of a kernel containing the maximum values of the kernel s feature maps . specifically , for each kernel : , where is the feature map of kernel , given image that has been forward - passed through the cnn .the activation matrix therefore describes the max - response distributions of all kernels to all category images .our goal in this step is to identify a subset of representative kernels from the global set of candidate kernels , that contain common features from the positive images of the same class .this implies that the activation vectors of the kernels should have high values over all vector elements , since there is at least one instance of the object on every image .conceptually , the kernels that we seek correspond to object parts , or some object - associated features . to find the ccfs , we compute the pair - wise similarities between all pairs of kernels activation vectors , and cluster them using k - means . the kernels from the cluster with the highest mean activation correspond to the ccfs .the similarity between two cnn kernels and , can be defined as the distance between two activation vectors and .the sets of positive images are only required for the identification of the ccf kernels .the ccf kernels can then be used to generate the rough object location in an image in a single - shot : given an image from the target category , the feature maps of the ccfs are combined to form a single activation map .since each ccf corresponds to an object part of object - associate features , the densely activated area of the activation map indicates the rough location of the target object .the final activation map is normalized into a probability map that sums to 1 . 
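the ccf selection step can be sketched compactly with numpy and scikit - learn . note one simplification relative to the description above : the kernels ' max - activation vectors are clustered directly with k - means rather than through an explicit pairwise - similarity computation , and the feature - map array , its shape and the number of clusters are assumptions for illustration .

```python
# sketch of CCF selection: build the kernel-by-image matrix of maximum
# activations, cluster the kernels, keep the cluster with the highest mean.
# `feature_maps` is a placeholder array shaped (n_images, n_kernels, H, W).
import numpy as np
from sklearn.cluster import KMeans

def select_ccfs(feature_maps, n_clusters=3, seed=0):
    v = feature_maps.max(axis=(2, 3)).T                  # (n_kernels, n_images)
    labels = KMeans(n_clusters=n_clusters, n_init=10,
                    random_state=seed).fit_predict(v)
    means = [v[labels == c].mean() for c in range(n_clusters)]
    best = int(np.argmax(means))
    return np.where(labels == best)[0]                   # indices of CCF kernels

def combined_activation_map(feature_maps_one_image, ccf_idx):
    """sum the CCF feature maps of one image and normalise to a probability map."""
    m = feature_maps_one_image[ccf_idx].sum(axis=0)
    m = np.clip(m, 0.0, None)
    return m / max(float(m.sum()), 1e-12)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    fmaps = rng.random((20, 512, 14, 14))                # placeholder activations
    ccfs = select_ccfs(fmaps)
    print("selected", len(ccfs), "CCF kernels")
    pmap = combined_activation_map(fmaps[0], ccfs)
    print("probability map sums to", round(float(pmap.sum()), 6))
```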
in figure [ fig : feature ] we show the identified ccfs for the bus category , where the activated regions describe bus - related features and all fall within the spatial extent of the objects .the activation probability map from the ccfs automatically points out only the rough location of the object .it does not ensure a reliable object localization due to : * * 1.** the higher layer of a cnn does not guarantee that a kernel s receptive - field size covers the area of an entire object . * * 2.** while the feature maps contain spatial information , they have imprecise object locations due to previous max - pooling layers . * * 3.** the cnn was trained discriminatively .hence , only discriminative features of each object may be localized rather than the whole object . in order to obtain the complete region that corresponds to the object , we compute geodesic distances between superpixels on the original image .in essence , the geodesic distance compactly encodes the similarity relationship between the image contents of the two superpixels .the similarity is computed via an object boundary detection algorithm .therefore , the smaller the geodesic distance between two superpixels , the more likely they belong to the same object . based on this characteristic , we propose a simple and effective method to highlight the object region from the activation probability map , which is low - resolution and contains non - smooth feature activations . given an input image , we oversegment it into superpixels .the geodesic distance between a pair of superpixels is computed based on the graph built from the boundary probability map , similarly to the method proposed by .we take the combined activation map that was obtained in the ccf identification step , and assign an energy value to each superpixel by averaging its corresponding pixel values found in the activation map .
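the per - superpixel energy assignment just described can be sketched in a few lines ( hypothetical names ; we assume the combined activation map has been resized to the image resolution and that an integer superpixel label map is available , e.g. from an oversegmentation such as slic ) :

```python
import numpy as np

def superpixel_energies(activation_map, sp_labels):
    """average the activation map over each superpixel.

    activation_map : (h, w) combined ccf activation map, upsampled to image size.
    sp_labels      : (h, w) integer superpixel label map with values 0 .. n_sp - 1.
    returns a vector of length n_sp with the mean activation of each superpixel."""
    n_sp = int(sp_labels.max()) + 1
    sums = np.bincount(sp_labels.ravel(), weights=activation_map.ravel(), minlength=n_sp)
    counts = np.bincount(sp_labels.ravel(), minlength=n_sp)
    return sums / np.maximum(counts, 1)
```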
for superpixels , we denote the resulting flattened superpixel activation vector as .vector can be considered as the initial likelihood of each superpixel being within the object .next , we perform the geodesic distance propagation to localize the object .the main idea is that if two superpixels are likely to belong to the same object , then they should have similar geodesic distances and activations .this concept has been similarly adopted by various works in terms of interactive image segmentation and matting , and we find it to suit specially well for our purpose .therefore , we seek to obtain a global activation map that has regions of highly boosted or co - propagated activations by superpixels of similar geodesic distances , mediated by some level of consistency by formulating this co - propagating mechanism into a co - propagation matrix , such that is the normalized amount of co - propagation between superpixel and , with a parameter for controlling the amount of activation diffusion : finally , we apply to the activation vector directly : where is the co - propagated activation vector of the image , containing the globally boosted activations of the superpixels based on their pair - wise geodesic distances to all other superpixels .this allows us to fill in each superpixel on the image with their respective values from , and normalize the co - propagated superpixel map by dividing every pixel by the max value of the map .the result is an object - likelihood map , on which we apply a global threshold to obtain the region as our final object co - localization result .finally , a tight bounding box is placed around the maximum coverage of the thresholded regions within an image .figure [ fig : disprop ] illustrates the effects of activation propagation for two selected superpixels .the top row corresponds to the case of a background superpixel .this superpixel has small geodesic distances to a large set of other superpixels .hence , the activation values propagated from this superpixel to others are relatively small . in the second row , we consider a superpixel corresponding to the bird . in this case ,high activation values are only propagated from this superpixel to other superpixels that also correspond to the bird .hence , we filter out the undesirable high activation of the background regions surrounding the object , while also balancing the activation of all superpixels that reside within the same object .the first column of figure [ fig : disprop ] shows the original image and its boundary map .the second column marks the position and initial activation of the two selected superpixels .the next columns illustrate the activation propagating from the selected superpixel to all other superpixels with a varying degree of controlling parameter set at , and , where warmer values indicate higher activations .it can be seen from the third column that since the background superpixel has small geodesic distances to many other superpixels , the activations being propagated from this superpixel to others are equally small ; in contrast , the activations propagated from the bottom superpixel mostly fall into its close neighboring superpixels and ones that belong to the object s region . 
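before looking at how the diffusion parameter changes this behaviour , here is a compact sketch of the co - propagation step itself ( the names are ours ; the pairwise geodesic distance matrix is assumed to be precomputed from the boundary map , and the exponential weighting below is one natural reading of the diffusion control described above , not necessarily the exact form used in the paper ) :

```python
import numpy as np

def co_propagate(v, D, sigma=0.5, threshold=0.5):
    """propagate superpixel activations using pairwise geodesic distances.

    v         : (n_sp,) initial superpixel activation vector.
    D         : (n_sp, n_sp) pairwise geodesic distance matrix.
    sigma     : diffusion parameter controlling how far activations spread.
    threshold : global threshold applied to the normalised object-likelihood map.
    returns the co-propagated activations and a per-superpixel object mask."""
    W = np.exp(-D / sigma)                      # smaller distance -> more co-propagation
    P = W / W.sum(axis=1, keepdims=True)        # row-normalised co-propagation matrix
    v_prop = P @ v                              # globally boosted activations
    v_prop = v_prop / max(v_prop.max(), 1e-12)  # divide by the maximum value of the map
    return v_prop, v_prop >= threshold

def tight_box(mask_pixels):
    """tight bounding box (x1, y1, x2, y2) around a thresholded (h, w) boolean map."""
    ys, xs = np.nonzero(mask_pixels)
    return xs.min(), ys.min(), xs.max(), ys.max()
```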
as increases , the amount of propagation is more evenly and widely spread , but with a lower overall magnitude .we evaluate our proposed 2-step framework with different parameter settings to illustrate different characteristics of our method .we also evaluate our method on multiple benchmarks , with intermediate and final results to show the localization effects of our proposed method . in all of our experiments , we used the last convolutional layer of a vgg-19 network that was pre - trained on imagenet as our ccf kernel pool .we used for k - means clustering in the ccf identification step .the geodesic distances are computed using the structured forests soft boundary .the control parameter for activation co - propagation was set at .the final global threshold for obtaining the object region from the object - likelihood map was set at for all images .we use the conventional corloc metric to evaluate our co - localization results .the metric measures the percentage of images that contain correctly localized results . an image is considered correctly localized if there is at least one ground truth bounding box of the object - of - interest having more than 50% intersection - over - union ( iou ) score with the predicted bounding box . to benchmark our method s performance , we evaluate our method on three commonly used datasets for the problem of co - localization .these are voc 2007 and 2012 , and the object discovery dataset . for experiments on the voc datasets , we followed previous works that used all images in the _ trainval _ set , excluding the images that only contain object instances annotated as _ difficult _ or _ truncated _ . for experiments on the object discovery dataset , we used the 100-image subset following in order to make an appropriate comparison with related methods .the ground truth bounding box for each image in the object discovery dataset is defined as the smallest bounding box covering all the segmentation ground truth of the object .table [ selectedset ] reports the co - localization performance of our method on the 6 smaller subsets in comparison with the two state - of - the - art methods .the table shows that our method outperformed and by a large margin on average , and on all but one of the individual classes .the six subsets of the imagenet dataset , chosen by , are held - out categories from the 1000-label classification task , which means they do not overlap with the 1000 classes used to train vgg-19 .we show that our method is generalizable to truly novel object categories with the six held - out imagenet subset classes .the images and the corresponding bounding box annotations were downloaded from the imagenet website , and table [ nim ] shows the numbers of images in each class with available bounding box annotations when we accessed the imagenet dataset website .it is noticeable that paper used an even smaller set with fewer images ( table [ nim ] ) .
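for reference , the corloc criterion used in all of the comparisons reported next reduces to a simple intersection - over - union test , sketched here for boxes in ( x1 , y1 , x2 , y2 ) format :

```python
def iou(a, b):
    """intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

def corloc(predicted_boxes, ground_truth_boxes):
    """fraction of images whose predicted box has iou > 0.5 with some ground-truth box."""
    hits = sum(any(iou(p, g) > 0.5 for g in gts)
               for p, gts in zip(predicted_boxes, ground_truth_boxes))
    return hits / len(predicted_boxes)
```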
to compare with the methods of and , we first conduct our experiment on the same smaller sets of images that were used in .then , we test our method on the full set of imagenet held - out categories that we downloaded from the imagenet dataset .table [ selectedset ] reports how the co - localization performance of our method compares to on the smaller imagenet subset , and table [ fullset ] reports the performance of our method on the full imagenet subsets .the results show that our method significantly outperforms the competing methods on the smaller subset , with even higher accuracies for the full subset in table [ fullset ] .this result demonstrates that our method can robustly detect and localize truly unseen categories using previously learned cnn features .we show some examples of our co - localization results in figures [ fig : short ] and [ fig:1 ] .the results show that the bounding boxes generated by our proposed framework accurately match the ground truth bounding boxes .it is apparent that our results generate well - covered object regions that have the potential to delineate the objects well in the majority of cases .the figures also show that objects of various sizes and locations were accurately co - localized .figure [ fig : fail ] illustrates three failure scenarios of our approach . while these three examples did not cover the ground truth bounding box sufficiently , they were not far off .some analysis suggests that these failures were due to some ccfs that are shared by multiple categories , and that the object boundaries may not have been strong enough ( e.g. bottle and boat ) .in this work , we proposed a fully unsupervised 2-step approach for the problem of co - localization , which uses only positive images and does not require any region or object proposals .our method is motivated by human vision : people implicitly detect the common features of category examples to learn the representation of the class .we show that the identified category - consistent features can also act as an effective first - pass object detector .this idea is implemented by finding the group of cnn features that are highly and consistently activated by a given positive set of images .the result of this first step generates a rough but reliable object location , and acts as a single - shot object detector .then , we aggregate the activations of the identified ccfs , and co - propagate their activations so that the activations over the true object region are boosted , while the activations over the background region are smoothed out .this effective activation refinement step allowed us to obtain accurately co - localized objects in terms of the standard corloc score with bounding boxes .we achieved new state - of - the - art performance on the three commonly used benchmarks . in the future , we plan to extend our method to generate unsupervised object co - segmentations .
|
co - localization is the problem of localizing categorical objects using only positive sets of example images , without any form of further supervision . this is a challenging task as there is no pixel - level annotations . motivated by human visual learning , we find the common features of an object category from convolutional kernels of a pre - trained convolutional neural network ( cnn ) . we call these category - consistent cnn features . then , we co - propagate their activated spatial regions using superpixel geodesic distances for localization . in our first set of experiments , we show that the proposed method achieves state - of - the - art performance on three related benchmarks : pascal 2007 , pascal-2012 , and the object discovery dataset . we also show that our method is able to detect and localize truly unseen categories , using six held - out imagnet subset of categories with state - of - the - art accuracies . our intuitive approach achieves this success without any region proposals or object detectors , and can be based on a cnn that was pre - trained purely on image classification tasks without further fine - tuning .
|
recent years have witnessed spectacular progress on similarity - based hash code learning in a variety of computer vision tasks , such as image search , object recognition and local descriptor compression etc .the hash codes are highly compact ( , several bytes for each image ) in most cases , which significantly reduces the overhead of storing visual big data and also expedites similarity - based image search . the theoretic ground of similarity - oriented hashing is rooted from johnson - lindenstrause theorem , which elucidates that for arbitrary samples , some -dimensional subspace exists and can be found in polynomial time complexity .when embedded into this subspace , pairwise affinities among these samples are preserved with tight approximation error bounds .this seminal theoretic discovery sheds light on trading similarity preservation for high compression of large data set .the classic locality - sensitive hashing ( lsh ) is a good demonstration for above tradeoff , instantiated in various similarity metrics such as hamming distance , cosine similarity , distance with ] is described as and when bit is absent , the code product using partial hash codes is * exponential loss * : given the observation that faithfully indicates the pairwise similarity , we propose to minimize an exponentiated objective function defined as the accumulation over all data pairs : where represents the collection of parameters in the deep networks excluding the hashing loss layer .the atomic loss term is this novel loss function enjoys some elegant traits desired by deep hashing compared with those in bre , mlh and ksh .it establishes more direct connection to the hashing function parameters by maximizing the correlation of code product and pairwise labeling . in comparison , bre and mlhoptimize the parameters by aligning hamming distance with original metric distances or enforcing the hamming distance larger / smaller than pre - specified thresholds .both formulations incur complicated optimization procedures , and their optimality conditions are unclear .ksh adopts a least - squares formulation for regressing code product onto the target labels , where a smooth surrogate for gradient computation is proposed .however , the surrogate heavily deviates from the original loss function due to its high non - linearity . * gradient computation * : a prominent advantage of exponential loss is its easy conversion into multiplicative form , which elegantly simplifies the derivation of its gradient . for presentation clarity ,we hereafter only focus on the calculation conducted over the topmost hashing loss layer .namely , $ ] for bit , where are the response values at the second top layer and are parameters to be learned for bit ( ) . following the common practice in deep learning , two groups of quantities , and ( ranges over the index set of current mini - batch ) need to be estimated on the hashing loss layer at each iterationthe former group of quantities are used for updating , , and the latter are propagated backwards to the bottom layers .the additive algebra of hash code product in eqn .( [ eqn : cp ] ) inspires us to estimate the gradients in a leave - one - out mode . for atomic loss in eqn .( [ eqn : aloss ] ) , it is easily verified where only the latter factor is related to .since the product can only be -1 or 1 , we can linearize the latter factor through exhaustively enumerating all possible values , namely where are two sample - specific constants , calculated by and . 
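before continuing with the gradient derivation , the following small numpy sketch evaluates a pairwise exponential objective of this kind ; the helper names and the exact scaling are ours ( the paper s own normalisation is not reproduced here ) , and the tanh surrogate anticipates the smooth relaxation of the signum discussed next .

```python
import numpy as np

def code_product(h_i, h_j):
    """inner product of two length-L hash code vectors with entries in {-1, +1}."""
    return float(np.dot(h_i, h_j))

def exp_pairwise_loss(H, S, beta=1.0):
    """one plausible form of the exponential pairwise hashing loss.

    H : (n, L) relaxed codes, e.g. tanh of the hashing layer outputs, in (-1, 1).
    S : (n, n) pairwise labels, +1 for similar pairs and -1 for dissimilar pairs.
    the loss is small when the code product agrees in sign with the label."""
    n, L = H.shape
    total = 0.0
    for i in range(n):
        for j in range(i + 1, n):
            total += np.exp(-beta * S[i, j] * np.dot(H[i], H[j]) / L)
    return total

# toy usage with 4 samples and 8-bit relaxed codes from a hypothetical hashing layer
rng = np.random.default_rng(0)
H = np.tanh(rng.standard_normal((4, 8)))
S = np.array([[ 1,  1, -1, -1],
              [ 1,  1, -1, -1],
              [-1, -1,  1,  1],
              [-1, -1,  1,  1]])
print(exp_pairwise_loss(H, S))
```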
since the hardness of calculating the gradient of eqn .( [ eqn : ee ] ) lies in the bit product , we replace the signum function using the sigmoid - shaped function , obtaining freezing the partial code product , we define an approximate atomic loss with only bits active : where the first factor plays a role of re - weighting specific data pair , conditioned on the rest bits .iterating over all s , the original loss function can now be approximated by training set , data labels , and step size ; network parameters , for the hashing - loss layer , and for other layers ; * * concatenate all layers ( excluding top hashing - loss layer ) with a softmax layer that defines an image classification task ; apply alexnet ) style supervised parameter learning algorithm , obtaining .calculate neuron responses on second topmost layer through ; * * replicate all s from previous stage ; forward computation starting from ; update by minimizing the image classification error ; * * forward computation starting from the raw images ; estimate ; update ; estimate , ; propagate to bottom layers , updating ; compared with other sigmoid - based approximations in previous hashing algorithms ( , ksh ) , ours only requires ( rather than both and ) is sufficiently large .this bilinearity - oriented relaxation is more favorable for reducing approximation error , which will be corroborated by the subsequent experiments . since the objective in eqn .( [ eqn : obj ] ) is a composition of atomic losses on data pairs , we only need to instantiate the gradient computation on specific data pair .applying basic calculus rules and discarding some scaling factors , we first obtain and further using calculus chain rule brings importantly , the formulas below obviously hold by the construction of : the gradient computations on other deep network layers simply follow the regular calculus rules .we thus omit the introduction .deep hashing algorithms ( including ours ) mostly strive to optimize pairwise ( or even triplet as in ) similarity in hamming space .this raises an intrinsic distinction compared with conventional applications of deep networks ( such as image classification via alexnet ) .the total count of data pairs quadratically increases with regard to the training sample number , and in conventional applications the number of atomic losses in the objective only linearly grows .this entails a much larger mini - batch size in order to combat numerical instability caused by under - sampling sampling rate in image classification .in contrast , in deep hashing , capturing pairwise similarity requires a tremendous mini - batch of 10,000 data . ] , which unfortunately often exceeds the maximal memory space on modern cpu / gpus .we adopt a simple two - stage supervised pre - training approach as an effective network pre - conditioner , initializing the parameter values in the appropriate range for further supervised fine - tuning . in the first stage ,the network ( excluding the hashing loss layer ) is concatenated to a regular softmax layer .the network parameters are learned through optimizing the objective of a relevant semantics learning task ( , image classification ) . 
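before describing the second stage in more detail , a schematic of the first , classification - driven stage is sketched below ; this is a hypothetical pytorch - style skeleton ( the layer sizes , optimiser and schedule are placeholders , not the exact recipe of the paper ) , meant only to illustrate how the network is pre - conditioned by a softmax classification task .

```python
import torch
import torch.nn as nn

def pretrain_stage_one(backbone, feat_dim, n_classes, loader, epochs=10, lr=1e-3):
    """stage 1: train backbone + temporary softmax head on image classification.

    backbone : any nn.Module mapping images to feat_dim-dimensional features
               (the 'second topmost layer' responses reused later to seed hashing).
    loader   : iterable of (images, labels) mini-batches."""
    head = nn.Linear(feat_dim, n_classes)
    model = nn.Sequential(backbone, head)
    opt = torch.optim.SGD(model.parameters(), lr=lr, momentum=0.9)
    criterion = nn.CrossEntropyLoss()
    for _ in range(epochs):
        for images, labels in loader:
            opt.zero_grad()
            loss = criterion(model(images), labels)
            loss.backward()
            opt.step()
    return backbone   # the softmax head is discarded; its only role is pre-conditioning
```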
after stage one is complete , we extract the neuron outputs of all training samples from the second topmost layer ( , the variable s in section [ subsec : loss ] ) , feed them into another two - layer shallow network as shown in figure [ fig : network ] and initialize the hashing parameters , .finally , all layers are jointly optimized in a fine - tuning process , minimizing the hashing loss objective .the entire procedure is illustrated in figure [ fig : network ] and detailed in algorithm [ alg : deephash ] .this section reports the quantitative comparison between our proposed deep hashing algorithm and other competitors .* description of datasets * : we conduct quantitative comparisons over four image benchmarks which represent different visual classification tasks .they include * * mnist * * for handwritten digit recognition , * * cifar10 * * , which is a subset of the _ 80 million tiny images _ dataset and consists of images from ten animal or object categories , * * kaggle - face * * , which is a kaggle - hosted facial expression classification dataset intended to stimulate research on facial feature representation learning , and * sun397 * , which is a large scale scene image dataset of 397 categories .figure [ fig : example ] shows exemplar images .for all selected datasets , different classes are completely mutually exclusive , such that the similarity / dissimilarity sets as in eqn ( [ eqn : y ] ) can be calculated purely based on label consensus .table [ table : data ] summarizes the critical information of these experimental data , wherein the column of feature dimension refers to the neuron numbers on the second topmost layers ( , dimensions of the feature vector ) .
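retrieval quality in the evaluations is measured through hamming ranking of the learned codes ; purely to fix ideas , a minimal sketch of how relaxed network outputs are binarised and ranked is given here ( hypothetical helper names ) .

```python
import numpy as np

def binarize(relaxed_codes):
    """map relaxed codes in (-1, 1), e.g. tanh outputs, to {-1, +1} hash bits."""
    return np.where(relaxed_codes >= 0, 1, -1).astype(np.int32)

def hamming_rank(query_code, database_codes):
    """database indices sorted by hamming distance to the query.
    for {-1, +1} codes of length L, the distance equals (L - code product) / 2."""
    L = database_codes.shape[1]
    dist = (L - database_codes @ query_code) // 2
    return np.argsort(dist, kind="stable")

# toy usage: the first database item is the query itself, so it is ranked first
db = binarize(np.random.default_rng(0).standard_normal((5, 12)))
print(hamming_rank(db[0], db))
```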
|
similarity - based image hashing represents crucial technique for visual data storage reduction and expedited image search . conventional hashing schemes typically feed hand - crafted features into hash functions , which separates the procedures of feature extraction and hash function learning . in this paper , we propose a novel algorithm that concurrently performs feature engineering and non - linear supervised hashing function learning . our technical contributions in this paper are two - folds : 1 ) deep network optimization is often achieved by gradient propagation , which critically requires a smooth objective function . the discrete nature of hash codes makes them not amenable for gradient - based optimization . to address this issue , we propose an exponentiated hashing loss function and its bilinear smooth approximation . effective gradient calculation and propagation are thereby enabled ; 2 ) pre - training is an important trick in supervised deep learning . the impact of pre - training on the hash code quality has never been discussed in current deep hashing literature . we propose a pre - training scheme inspired by recent advance in deep network based image classification , and experimentally demonstrate its effectiveness . comprehensive quantitative evaluations are conducted on several widely - used image benchmarks . on all benchmarks , our proposed deep hashing algorithm outperforms all state - of - the - art competitors by significant margins . in particular , our algorithm achieves a near - perfect 0.99 in terms of hamming ranking accuracy with only 12 bits on mnist , and a new record of 0.74 on the cifar10 dataset . in comparison , the best accuracies obtained on cifar10 by existing hashing algorithms without or with deep networks are known to be 0.36 and 0.58 respectively .
|
curvature causes the fourier transform to decay . this observation links geometry to analysis , and lies at the base of several topics of modern harmonic analysis . given the long history of the subject , it is perhaps surprising that the possibility of restricting the fourier transform to curved submanifolds of euclidean space was not observed until the late sixties .the fourier transform of an integrable function is uniformly continuous , and as such , can be restricted to any subset . on the other hand ,the fourier transform of a square - integrable function is again square - integrable , and in view of plancherel s theorem no better properties can be expected . in particular , restricting the fourier transform of a square - integrable function to a set of zero lebesgue measure is meaningless .the question is what happens for intermediate values of .it is not hard to check that the fourier transform of a radial function in defines a continuous function away from the origin whenever , see for instance .the corresponding problem for non - radial functions is considerably more delicate . to introduce it ,let be a smooth compact hypersurface in , endowed with a surface - carried measure . here denotes the surface measure of , and the function is smooth and non - negative .given , for which exponents does the _ a priori _inequality hold ?a complete answer for is given by the celebrated tomas stein inequality .[ tomasstein ] suppose has non - zero gaussian curvature at each point of the support of . then the restriction inequality holds for and .the range of exponents is sharp , since no restriction can hold for if .this is shown via the famous knapp example , which basically consists of testing the inequality dual to against the characteristic function of a small cap on .moreover , some degree of curvature is essential , as there can be no meaningful restriction to a hyperplane except in the trivial case when and .an example to keep in mind is that of the unit sphere , with constant positive gaussian curvature .however , nonvanishing gaussian curvature is a strong assumption that can be replaced by the nonvanishing of some principal curvatures , at the expense of decreasing the range of admissible exponents .the question of what happens for values of is the starting point for the famous restriction conjecture .one is led by dimensional analysis and knapp - type examples to guess that the correct range for estimate to hold is and , where denotes the dual exponent .this is depicted in figure [ fig : restriction ] .note that the endpoints of this relation are the trivial case , and .[ scale = 5 ] ( 0 , 0 ) rectangle ( 1 , 1 ) ; ( 0.625 , 1 ) ( 0.625 , 0.625 ) ( 0.7 , 0.5 ) ( 0.7 , 1 ) cycle ; ( 0.7 , 1 ) ( 0.7 , 0.5 ) ( 1 , 0 ) ( 1 , 1 ) cycle ; ( 0 , 0 ) ( 1.05 , 0 ) node[right ] ; ( 0 , 0 ) ( 0 , 1.05 ) node[above ] ; ( 0 , 1 ) node[left ] ; ( 1 , 1 ) ( 0 , 0 ) node[below left ] ; ( 1 , 1 ) ( 1 , 0 ) node[below ] ; ( 0.625 , 1 ) ( 0.625 , 0 ) ; ( 0.625 , 0 ) .. controls ( 0.625 , -0.06 ) .. ( 0.6 , -0.06 ) node[left = -3 ] ; ( 0.7 , 1 ) ( 0.7 , 0 ) ; ( 0.7 , 0 ) .. controls ( 0.7 , -0.06 ) .. ( 0.725 , -0.06 ) node[right = -3 ] ; ( 1 , 0.5 ) ( 0 , 0.5 ) node[left ] ; ( 0.625 , 0.625 ) ( 0 , 0.625 ) node[left ] ; ( 1 , 0.5 ) ( 0.7 , 0.5 ) ; ( 0.7 , 0.5 ) .. controls ( 0.625 , 0.3 ) .. ( 0.5 , 0.3 ) node[fill = white , left ] tomas stein ; ( 0.7 , 0.5 ) circle [ radius = 0.01 ] ; ( 0.625 , 0.625 ) .. controls ( 0.55 , 0.8 ) .. 
( 0.5 , 0.8 ) node[fill = white , left , align = center ] restriction + conjecture ; ( 0.625 , 0.625 ) circle [ radius = 0.01 ] ; despite tremendous effort and very promising partial progress , the restriction conjecture is only known to hold for .the restriction conjecture implies the kakeya conjecture and is implied by the bochner riesz conjecture .multilinear versions of the restriction and kakeya conjectures have been established by bennett , carbery and tao , and played a crucial role in the very recent work of bourgain and demeter on decoupling . for more on the restriction problem , and its relation to other prominent problems in modern harmonic analysis , we recommend the works and tomas stein type restriction estimates are very much related to strichartz estimates for linear partial differential equations of dispersion type .let us illustrate this point in two cases , that of solutions to the homogeneous schrdinger equation and that of solutions to the homogeneous wave equation in both situations , .the following theorem was originally proved by strichartz .[ strichartz ] let .then there exists a constant such that whenever is the solution of with initial data . if , then there exists a constant such that whenever is the solution of with initial data and . a hint that theorems [ tomasstein ] and [ strichartz ] might be related comes from the numerology of the exponents : the strichartz exponent coincides with the dual of the tomas stein exponent in dimension .it turns out that strichartz estimates for the schrdinger equation correspond to restriction estimates on the paraboloid , whereas strichartz estimates for the wave equation correspond to restriction estimates on the cone .note that the gaussian curvature of the cone is identically zero because one of its principal curvatures vanishes .this in turn translates into estimate holding for the strichartz exponent in one lower dimension .perhaps more significantly , neither of these manifolds is compact .however , they exhibit some scale invariance properties that enable a reduction to the compact setting .we shall return to this important point later in our discussion . in this note , we are interested in extremizers and optimal constants for sharp variants of restriction and strichartz - type inequalities . apart from their intrinsic mathematical interest and elegance , such sharp inequalities often allow for various refinements of existing inequalities .the following are natural questions , which in particular can be posed for inequalities , and : 1 . what is the value of the optimal constant ? 2 .do extremizers exist ? 1 .if so , are they unique , possibly after applying the symmetries of the problem ?2 . if not , what is the mechanism responsible for this lack of compactness ? 3 . how do extremizing sequences behave ?what are some qualitative properties of extremizers ? 5 . what are necessary and sufficient conditions for a function to be an extremizer? questions of this flavor have been asked in a variety of situations , and in the context of classical inequalities from euclidean harmonic analysis go back at least to the seminal work of beckner for the hausdorff young inequality , and lieb for the hardy littlewood sobolev inequality . in comparison ,sharp fourier restriction inequalities have a relatively short history , with the first works on the subject going back to kunze , foschi and hundertmark zharnitsky . 
works addressing the existence of extremizers for inequalities of fourier restriction type tend to be a _ tour de force _ in classical analysis , using a variety of sophisticated techniques : bilinear estimates and refined estimates in -type spaces , concentration compactness arguments tailored to the specific problem in question , variants and generalizations of the brzis lieb lemma from functional analysis , fourier integral operators , symmetrization techniques , variational , perturbative and spectral analysis , regularity theory for equations with critical scaling and additive combinatorics , among others .in contrast , a full characterization of extremizers has been given in a few selected cases using much more elementary methods .this is due to the presence of a large underlying group of symmetries which allows for several simplifications that ultimately reduce the problem to a simple geometric observation .we will try to illustrate this point in the upcoming sections . before doing so ,let us briefly comment on some approaches that have been developed in the last decades in order to establish tomas stein type fourier restriction inequalities .for the sake of brevity , we specialize our discussion to inequalities and , but a more general setting should be kept in mind . if denotes the restriction operator , then its adjoint , usually called the extension operator , is given by , where the fourier transform of the measure is given by the tomas stein inequality at the endpoint is equivalent to the extension estimate if , then the composition is well - defined , and a computation shows that since the operator norms satisfy , the study of these three operators is equivalent , even if the goal is to obtain sharp inequalities and determine optimal constants .so we focus on the operator .boundedness of is only ensured if the fourier transform exhibits some sort of decay as .this in turn is a consequence of the principle of stationary phase , see , since the nonvanishing curvature of translates into a nondegenerate hessian for the phase function of the oscillatory integral given by with .this is the starting point for the original argument of tomas , which was then extended to the endpoint by embedding into an analytic family of operators and invoking stein s complex interpolation theorem .it is hard not to notice the parallel between the operator and the averaging operator whose improving properties can be established via the same proof on the fourier side , see for instance .a second method to prove restriction estimates goes back to the work of ginibre and velo .it consists of introducing a time parameter and treating the extension operator as an evolution operator .two key ingredients for this approach are the hausdorff young inequality and fractional integration in the form of the hardy littlewood sobolev inequality .can be proved with a combination of hausdorff young and hardy littlewood sobolev , see for instance . ]these methods are more amenable to the needs of the partial differential equations community , and for instance allow to treat the case of mixed norm spaces . 
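as a small numerical illustration of the stationary - phase decay invoked above , the sketch below evaluates the fourier transform of arc - length measure on the unit circle by quadrature and checks that it is of size comparable to |x|^{-1/2} ; this is only an illustration , not part of any proof .

```python
import numpy as np

def sigma_hat_circle(r, n_nodes=4000):
    """quadrature value of the fourier transform of arc-length measure on the unit
    circle at the point (r, 0); by rotation invariance this covers all directions."""
    theta = 2 * np.pi * np.arange(n_nodes) / n_nodes
    weight = 2 * np.pi / n_nodes
    return np.sum(weight * np.exp(-1j * r * np.cos(theta)))

for r in (10.0, 40.0, 160.0, 640.0):
    val = abs(sigma_hat_circle(r))
    # the rescaled column stays bounded, consistent with |x|^{-1/2} decay
    print(f"r = {r:7.1f}   |sigma_hat| = {val:.4f}   sqrt(r) * |sigma_hat| = {np.sqrt(r) * val:.3f}")
```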
in the special cases when the dual exponent is an even integer, one can devise yet another proof which comes from the world of bilinear estimates , see for instance .one simply rewrites the left - hand side of inequality as an norm , and appeals to plancherel in order to reduce the problem to a multilinear convolution estimate .for instance , if , then , and similarly , if , then , and in the bilinear case , the pointwise inequality then reveals that one can restrict attention to nonnegative functions .this observation can greatly simplify matters , as it reduces an oscillatory problem to a question of geometric integration over a specific manifold .furthermore , an application of the cauchy schwarz inequality with respect to an appropriate measure implies the pointwise inequality a good understanding of the convolution measure becomes a priority .given integrable functions , the convolution is a finite measure defined on the borel subsets by it is clear that this measure is supported on the minkowski sum . in most situations of interest when some degree of curvature is present , one can check that is absolutely continuous with respect to lebesgue measure on .in such cases , the measure can be identified with its radon nikodym derivative with respect to lebesgue measure , and for almost every we have that here denotes the -dimensional dirac delta distribution . as we shall exemplifyin the course of the paper , expression turns out to be very useful for computational purposes .+ * overview . *as we hope to have made apparent already , this note is meant as a short survey of a restricted part of the topic of sharp strichartz and fourier extension inequalities . in [ sec : noncompact ] we deal with noncompact surfaces , and discuss the cases of the paraboloid and the cone , where a full characterization of extremizers is known , and the cases of the hyperboloid and a quartic perturbation of the paraboloid , where extremizers fail to exist .most of the material in this section is contained in the works . in [ sec : compact ] we discuss the case of spheres .a full characterization of extremizers at the endpoint is only known in the case of .we mention some recent partial progress in the case of the circle , and observe how the methods in principle allow to refine some related inequalities . in particular , we obtain a new sharp extension inequality on in the mixed norm space .most of the material in [ sec : compact ] is contained in the works .we leave some final remarks to [ sec : unify ] , where we hint at a possible unifying picture for the results that have been discussed .finally , we include an appendix with a brief introduction to integration on manifolds using delta calculus . + * remarks and further references . *the style of this note is admittedly informal .in particular , some objects will not be rigorously defined , and most results will not be precisely formulated .none of the material is new , with the exception of the results in [ sec : mixednorm ] and a few observations that we have not been able to find in the literature .the subject is becoming more popular , as shown by the increasing number of works that appeared in the last five years .we have attempted to give a rather complete set of references , which includes several interesting works that will not be discussed here . 
given its young age , there are plenty of open problems in the area .our contribution is to provide some more .given , let us consider the -dimensional paraboloid equipped with projection measure the validity of an extension estimate follows from strichartz inequality for the schrdinger equation as discussed before .extremizers for this inequality are known to exist in all dimensions , and to be gaussians in low dimensions .extremizers are conjectured to be gaussians in all dimensions , see also .let us specialize to the case and follow mostly . in view of identity , which itself is a consequence of formula from the appendix ,the convolution of projection measure on the two - dimensional paraboloid is given by changing variables and computing in polar coordinates according to , we have that we arrive at the crucial observation that the convolution measure defines a function which is not only uniformly bounded , but also constant in the interior of its support .see ( * ? ? ?* lemma 3.2 ) for an alternative proof that uses the invariance of under galilean transformations and parabolic dilations . a successive application of cauchy schwarz and hlder s inequality finishes the argument . indeed , the pointwise bound follows from an application of the cauchy schwarz inequality with respect to the measure integrating inequality over , an application of hlder s inequality then reveals it is possible to turn both inequalities simultaneously into an equality .the conditions for equality in translate into a functional equation which should hold for some complex - valued function defined on the support of the convolution , and almost every point .an example of a solution to is given by the gaussian function and the corresponding .all other solutions are obtained from this one by applying a symmetry of the schrdinger equation , see ( * ? ? ?* proposition 7.15 ) . that they turn inequality into an equality follows from the fact that the convolution is constant inside its support .the one - dimensional case admits a similar treatment .the threefold convolution is given by a constant function in the interior of its support , and the corresponding functional equation can be solved by similar methods .gaussians are again seen to be the unique extremizers .alternative approaches are available : hundertmark zharnitski based their analysis on a novel representation of the strichartz integral , and bennett et al . identified a monotonicity property of such integrals under a certain quadratic heat - flow .given , consider the ( one - sheeted ) cone equipped with its lorentz invariant measure the second identity is a consequence of formula from the appendix .the validity of an extension estimate follows from strichartz inequality for the wave equation as discussed before .extremizers for the cone are known to exist in all dimensions , and to be exponentials in low dimensions .let us specialize to the case .the convolution of the lorentz invariant measure on the three - dimensional cone is given by given , we write a generic vector in spherical coordinates , so that where , ] is an angular variable . setting , the jacobian of the change of variables into bipolar coordinates , see figure [ fig : bipolar ] below ,is given by [ scale = 1 ] ( -2.01 , -2.51 ) rectangle ( 5.01 , 2.51 ) ; in 0.5 , 1 , ... 
, 5 ( 0 , 0 ) circle [ radius = ] ; ( 3 , 0 ) circle [ radius = ] ; ( -2 , 0 ) ( 5 , 0 ) ; ( 0 , -2.5 ) ( 0 , 2.5 ) ; ( 1.5 , 0 ) circle [ x radius = 2.5 , y radius = 2 ] ; ( 0,0 ) ( 2.333 , 1.886 ) node[midway , above ] ; ( 3,0 ) ( 2.333 , 1.886 ) node[midway , right ] ; ( 0 , 0 ) circle [ radius = 0.05 ] node[below left ] ; ( 3 , 0 ) circle [ radius = 0.05 ] node[below left ] ; ( 2.333 , 1.886 ) circle [ radius = 0.05 ] node[above ] ; letting and , we invoke the change of variables formula to further compute this again defines a constant function inside its support , and a combination of cauchy schwarz and hlder as before establishes the sharp inequality .the characterization of extremizers follows from the analysis of the functional equation which yields as a particular solution the exponential function and the corresponding .all other solutions are obtained from this one by applying a symmetry of the wave equation , see ( * ? ? ?* proposition 7.23 ) , and the lower dimension case admits a similar treatment .we now switch to the ( one - sheeted ) hyperboloid equipped with the lorentz invariant measure as established in , an extension estimate holds provided note that the lower and upper bounds in the exponent range correspond to the cases of the paraboloid and the cone , respectively .we focus on the case and , and take advantage of lorentz symmetries to compute the convolution . along the vertical axis of the hyperboloid , lorentz invariance forces the convolution to be constant along the level sets of the function . as a consequence , contrary to the previous cases ,this no longer defines a constant function inside its support .since it is uniformly bounded ( by ) , the argument can still be salvaged to yield a sharp extension inequality .extremizers for this inequality , however , do not exist .we shall observe a similar phenomenon in the case of perturbed paraboloids , considered in the next subsection , and postpone a more detailed discussion until then .let us start with a brief discussion of a specific instance of a comparison principle from that proved useful in establishing sharp inequalities for perturbed paraboloids .let denote the two - dimensional paraboloid considered in [ sec : paraboloids ] , and let denote the projection measures on the surface then the pointwise inequality holds for every and , and is strict at almost every point of the support of the measure .the feature of the function that makes this possible is _convexity_. any nonnegative , continuously differentiable , strictly convex function would do , see ( * ? ? 
?* theorem 1.3 ) for a precise version of this comparison principle which holds in all dimensions .a sharp extension inequality can be obtained by concatenating cauchy schwarz and hlder as before , and extremizers do not exist because of the strict inequality in .heuristically , extremizing sequences are forced to place their mass in arbitrarily small neighborhoods of the region where the convolution attains its global maximum , for otherwise they would not come close to attaining the sharp constant .the analysis of the cases of equality in reveals that this region has zero lebesgue measure , and this forces any extremizing sequence to concentrate .consider the endpoint tomas stein extension inequality on the sphere where denotes surface measure on and .extremizers for inequality were first shown to exist when in .the precise form of nonnegative extremizers was later determined in , and they turn out to be constant functions .see also for a conditional existence result in higher dimensions .it seems natural to conjecture that extremizers should be constants in all dimensions .spheres are antipodally symmetric compact manifolds , and this brings in some additional difficulties which can already be observed at the level of convolution measures .indeed , formula from the appendix implies invoking , and then once again , we have computing in polar coordinates according to , one concludes where denotes the surface measure of .when , and consequently , the inequality in question is equivalent to a 4-linear estimate .moreover , the last factor in simplifies and one is left with expression blows up at the origin , which prevents a straightforward adaptation of the methods from the previous section .in particular , the interaction between antipodal points causes difficulties that prevented the local analysis from to identify the global extremizers of the problem . 
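the singularity at the origin can also be seen numerically : the following hedged monte carlo sketch histograms |w1 + w2| for independent points drawn from normalised surface measure on the two - dimensional sphere , and the rescaled density is roughly constant , consistent with a blow - up of order 1/|xi| near the origin ( this is only an illustration of the phenomenon discussed above ) .

```python
import numpy as np

rng = np.random.default_rng(1)

def uniform_on_sphere(n):
    v = rng.standard_normal((n, 3))
    return v / np.linalg.norm(v, axis=1, keepdims=True)

n = 200_000
r = np.linalg.norm(uniform_on_sphere(n) + uniform_on_sphere(n), axis=1)   # |w1 + w2| in (0, 2)

edges = np.linspace(0.0, 2.0, 21)
hist, _ = np.histogram(r, bins=edges)
centers = 0.5 * (edges[:-1] + edges[1:])
shells = 4 * np.pi * centers**2 * np.diff(edges)     # volume of each radial shell
density = hist / (n * shells)                        # empirical spatial density of w1 + w2
print(np.round(density * centers, 4))                # roughly constant, so density ~ c / |xi|
```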
the work resolves this issue in a simple manner , using the following geometric feature of the sphere : if the sum of three unit vectors is again a unit vector , then necessarily and expands the left - hand side of .an application of the cauchy schwarz inequality together with identity reduces the analysis to antipodally symmetric functions , and at the same time neutralizes the singularity of the convolution measure at the origin .this reduces the -linear problem on to a bilinear problem on its square .more precisely , one is left with establishing a monotonicity property for the quadratic form where is now assumed to be merely integrable .this in turn is accomplished via spectral analysis .if denotes the mean value of over the sphere and denotes the constant function equal to 1 , one wants to show that the crucial observation is that the quadratic form is diagonal in a suitable basis .in fact , expanding in spherical harmonics , we have where the eigenvalues can be computed via the funk hecke formula .it turns out that when , we refer the reader to for the full details .this approach was extended in to establish sharp extension estimates on for .table [ tab : hresult ] indicates the signs of the corresponding coefficients .the two - dimensional surfaces in question ( paraboloid , hyperboloid and sphere ) can all be obtained as intersections of the three - dimensional cone with appropriately chosen hyperplanes .figure [ fig : conic ] below illustrates this point .there , the ambient space is endowed with coordinates , where .one cannot help noticing that the restriction of the exponential function , which is an extremizer for the cone , to the different conic sections coincides with the corresponding extremizers given by table [ tab : extremizers ] .it is a constant function when ( sphere , red in fig .[ fig : conic ] ) and a gaussian when ( paraboloid , blue in fig .[ fig : conic ] ) .furthermore , it yields the function when ( hyperboloid , yellow in fig .[ fig : conic ] ) .extremizers for the problem on the hyperboloid do not exist , but it was shown in ( * ? ? ?* lemma 5.4 ) that the function produces an extremizing sequence as for the extension inequality on the hyperboloid .
[ figure fig : conic : the three - dimensional cone together with its planar sections ( sphere , ellipsoid , paraboloid , hyperboloid ) . ]
let be a smooth -dimensional submanifold of , with .
let be a positive scalar function .then , on the manifold , we have that , and consequently in particular , if and are smooth positive scalar functions which coincide on , then .for example , on the unit sphere , we have that in a similar way , on the null cone , we have that suppose that is a local diffeomorphism on . as for any distribution, we have the usual change of variables rule , if instead we consider the composition , with a local diffeomorphism defined on a neighborhood of in and satisfying , then , and in particular , if is a nonsingular matrix and is a nonsingular matrix , then we have that second author expresses his gratitude to dmitriy bilyk , feng dai , vladimir temlyakov and sergey tikhonov for organizing the workshop _ function spaces and high - dimensional approximation _ at the centrede recerca matemtica - icrea , may 2016 , whose hospitality is greatly appreciated .j. bourgain , _ fourier transform restriction phenomena for certain lattice subsets and applications to nonlinear evolution equations .i. schrdinger equations .anal . * 3 * ( 1993 ) , no .2 , 107156 .e. carneiro , d. foschi , d. oliveira e silva and c. thiele , _ a sharp trilinear inequality related to fourier restriction on the circle , _ preprint , 2015 .arxiv:1509.06674 . to appear in rev . mat .iberoam .l. hrmander , _ the analysis of linear partial differential operators . i. distribution theory and fourier analysis ._ grundlehren der mathematischen wissenschaften [ fundamental principles of mathematical sciences ] , 256 .springer - verlag , berlin , 1983 .e. m. stein , _ harmonic analysis : real - variable methods , orthogonality , and oscillatory integrals , _ princeton mathematical series , 43 .monographs in harmonic analysis , iii .princeton university press , princeton , nj , 1993 .
|
the purpose of this note is to discuss several results that have been obtained in the last decade in the context of sharp adjoint fourier restriction / strichartz inequalities . rather than aiming at full generality , we focus on several concrete examples of underlying manifolds with large groups of symmetries , which sometimes allows for simple geometric proofs . we mention several open problems along the way , and include an appendix on integration on manifolds using delta calculus .
|
the reduction of exposure to emf is one of the challenging problems in radio access technology ( ran ) that attracts the attention of different stakeholders , from the general public , to telecoms industry and regulatory bodies .the problem has become even more prominent in face of the frantic race toward the increase of network capacity and optimizing its performance and quality of service ( qos ) .in fact the introduction of multiple new technologies in the network goes along with the deployment of several new radiating antenna and transmission towers .up today , the assessment of rf - emf exposure has been focused separately on the exposure induced by personal devices on the one hand , and that of the network equipment such as base stations ( bss ) and access points on the other hand . in the framework of the fp7 european research project lexnet ,an exposure metric has been proposed , denoted as the exposure index ( ei ) .the ei aggregates the exposure from both personal devices and that from bss and access points , giving rise to a single and more realistic network parameter for exposure .it reflects the contribution to exposure from different technologies and , among others , takes into account the exposure duration from the different radiating sources , the utilized frequencies , environment , services etc .it is argued that the ei could be used in developing deployment strategies for rans , optimization , self - optimization and management techniques for reducing emf , using a more realistic metric .recently the issue of reducing the overall and individual level of exposure has been addressed and becomes one of the main concerns from the perspective of the network users . in for example , a measurement of rf field from wifi access points against a background of rf fields in the environment over the frequency range 75 mhz-3 ghz is performed to quantify the exposure that a bystander might receive from the laptop . in different factors influencing rf exposure in mobile networks are treated in a systematic manner for most relevant wireless standards relying on their rf characteristics .the most relevant levers for limiting future exposure levels are presented .nevertheless the risk perception linked to emf exposure from the network users is different . illustrates the biased view on rf exposure of network users , who mainly focus on the radiating antennas of telecommunication towers while ignoring or giving little importance to radiation from user equipment close to the body .however the two sources of radiation are strongly correlated .leading works in the area of emf exposure reduction consider solutions for uplink ( ul ) and downlink ( dl ) transmissions separately . in compliance with the definition of the ei , we address the combined effect of radiation from ul and dl transmissions .the purpose of this work is to propose a novel self organizing network ( son ) approach for reducing the overall emf exposure , expressed in terms of the ei .the ei depends on highly complex set of network data and parameters that are too complex to be handled instantly . for this reason, we adopt the strategy of self - optimizing intermediate key performance indicators ( kpis ) that impact the ei of both ul and dl transmissions . 
once transmit ( tx ) and received ( rx ) powersare calculated , emf exposure can be evaluated using transformation tables ( evaluated in using measurements and electromagnetic simulators ) , and the corresponding ei .the proposed son algorithm is a load balancing algorithm , based on a stochastic approximations .it adapts the small cells coverage based on ul loads and on dl qos indicators .the rationale for the proposed solution is that , to a certain extent , by off - loading macro - cell traffic towards small cells , ul transmission of cell edge users is decreased .however , above a certain cell range extension , dl qos can be jeopardized , and should be therefore included in the son algorithm .the contributions of this paper are the following : * a novel control stochastic load balancing algorithm based on recursive inclusion , which takes into account ul loads and dl qos constraints . * the proof of convergence of the proposed algorithm is provided referring to the developments on stochastic differential inclusions in . * performance analysis through flow level simulations of the designed mechanism is provided .the paper is organized as follows : section [ sec : model ] gives the description of the network settings , the problem formulation and the methodology used in the paper . in section[ sec : flowlevelmetric ] we present the different metrics and flow level kpis .section [ sec : sondev ] develops the proposed load - balancing approach to reduce emf exposure .we evaluate numerically the performances obtained upon activation of the son mechanism in section [ sec : numanalysis ] and discuss the results .section [ sec : conclusion ] eventually concludes the paper .consider a heterogeneous network(hetnet ) deployment with several operating macro- and small cells ( scs ) located close to the edge of each macro - cell coverage area .we assume that the scs can be activated whenever additional capacity is needed to serve the traffic in the cell .all nodes ( macro and scs ) use the same frequency bandwidth .information can be exchanged between the macro- and small cells in their coverage area using physical or logical links such as the x2 interface in lte .the system considered in this work matches with lte network requirements for both ul and dl transmissions .more precisely we are focused on ofdma based transmissions in ul and dl , although it is noted that the proposed methodology can be adapted to other radio access technologies . in radio access networks ,emf exposure comprises two components : ul transmissions from user equipment ( ue ) and dl transmissions from all the bss in the network ( see figure [ netexposure ] ) . 
in order to reduce the ei in the networkone can thus seek to minimize a well defined cost function that combines the joint effect of both ul and dl transmissions .more specifically we focus on reducing the average level of ul tx power from ues to their serving cells as an intermediate metric to reduce ei .such objective can be achieved by increasing scs coverage , and off - loading macro - cell traffic towards scs .in fact , off - loading the macro - cell with low power nodes such as scs allows , not only to bring more users to transmit to a closer serving cell with reduced power , but also to a certain extent , to increase the network capacity by off - loading loaded nodes .however , as more users are off - loaded to the scs it is likely that we observe a fast decrease of the ul / dl qos .this is due to a large number of new - interferers ( in ul ) inside the macro - cell coverage area and the additional interference produced by the scs that see their loads increase ( in dl ) .we propose to formulate the ei reduction problem as a qos constrained optimization problem which we address using an off - loading method relying on scs coverages expansion / contraction . indeed by expanding their coverage , scs can collect more users from the macro - cell relying on the user - to - cell best server attachment criterion . it is noted that coverage extension is achieved not by increasing the tx power of the scs but rather by increasing / decreasing the value of the cell individual offset ( cio ) used in the network selection / re - selection and handover ( ho ) procedures . specifically , during idle mode or ho procedures , the mobile compares through a set of measurements the dl received power plus values from each of its neighboring cells .this is the case for event a3 measurement report triggering .then , a ranking of all the available cells including the serving cell is done using the offset ratio of the candidate cells to the serving cell . typically , the selection of cell or ho when user is attached to cell is made if : with , being the measured averaged reference signal received power ( rsrps ) , is the hysteresis value and for intra - frequency cell selection . in our particular case ,the mobile compares the received signal power plus values from the scs to the received signal power from the macro - cell to define its attachment .the problem thus formulates as follows : where is the ul load of the macro - cell , the ul load of each small cell in the network , is the optimization vector variable , with element the dl pilot power plus of cell s , are respectively the actual and target dl qos levels .the problem is combinatorial and depending on the network dynamics a potential solution can stuck at the bounds of the constraint set .we propose in section [ sec : sondev ] an iterative load - balancing solution to ( [ optimpbm ] ) based on stochastic approximation .this section presents the network model and the assumptions herein considered .let be the number of bss ( macro and scs ) within a bounded area .we also denote by the number of users in cell .we denote by the pathloss ( including antennas gain and shadowing ) between a location and a given bs .transmissions occur in the ul and we assume that the system operates on a bandwidth , with the number of available resource blocks per bs and the bandwidth in of each prb . 
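the cell (re-)selection rule described above can be written as a small helper that compares rsrp plus the cell individual offset of each candidate against the serving cell plus a hysteresis margin, in the spirit of an a3-like event. the numerical values below (offsets, hysteresis) are placeholders, not values from the paper.

```python
def best_server(rsrp_dbm, cio_db, serving, hysteresis_db=2.0):
    """return the cell the ue should attach to (or stay on).

    rsrp_dbm : dict cell -> measured average rsrp (dbm)
    cio_db   : dict cell -> cell individual offset (db), self-optimized for small cells
    serving  : current serving cell
    """
    best = serving
    best_metric = rsrp_dbm[serving] + cio_db.get(serving, 0.0) + hysteresis_db
    for cell, rsrp in rsrp_dbm.items():
        metric = rsrp + cio_db.get(cell, 0.0)
        # handover only if the candidate beats the serving cell by more than the hysteresis
        if cell != serving and metric > best_metric:
            best, best_metric = cell, metric
    return best

# a macro cell and one small cell whose coverage has been extended through its cio
print(best_server({"macro": -95.0, "sc1": -99.0}, {"sc1": 7.0}, serving="macro"))  # -> sc1
```

increasing a small cell's cio in this comparison is exactly the coverage-expansion lever used by the son mechanism, since it shifts the attachment boundary without raising the small cell's transmit power.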
a ue at location is assigned a number of resource blocks and transmits with a total power of to bs where is the tx power on resource and is the tx power matrix on the ul .note that we do not make here any assumption on the way the total tx power of the mobile is split over the allocated sub - bands .it is discussed in and that the achieved gains by subdividing the total power equally over allocated resources is negligible compared to higher complexity of optimal allocation . in the following we thus consider equal distribution of the total tx power over the allocated resources such that .a closed - form expression of is given by : where is the maximum tx power of the ue , is a ue or a cell - specific parameter , is the cell specific pathloss compensation factor , is the dl pathloss measured at the ue , is specified at the ue by the upper - layers and is a ue - specific closed - loop correction value with a relative or absolute increase depending on the function .a communication is possible between user in and a bs whenever she is in the coverage area of defined by a best server attachment criteria : where is defined as the dl pilot power of the cell . using this notation , we can write the signal to interference plus noise ratio ( ) for ul transmission , of resource for user in location as : with being the thermal noise spectral power density . herewe consider that when user in from station does not use resource .note that is a function of which is omitted in the expression for simplicity .it is then possible to derive the ul spectral efficiency of user in location as a function of the , bounded by the maximum shannon capacity .the corresponding rate follows up as : where is the probability density function of the ergodic channel fading process that is averaged over each .it is noted that in ( [ eq : rateup ] ) the user is allocated all the available prbs and hence achieves its peak rate .we assume here a round robin scheduler , that allocates ( ) prbs when mobiles are present in cell .the expression of ( [ ultrpower ] ) becomes : the same analysis is done for the dl and we derive the dl and users rates for each cell : where is the data channel power applied by bs on resource and is the channel gain from bs to location .we denote by the dl tx traffic channel power matrix .the dl user rate when allocated all resources is given by : consider in this section flow level dynamics , and present the different metrics and key performance indicators ( kpis ) used for assessing performance .consider best effort data traffic with users arriving in the network area at location according to a poisson process with rate , where represents an infinitessimal surface element .users upload a file of exponentially distributed size with mean through their serving bs to the network .they are also downloading files of mean size . relying on the analysis in on queue, we can express the ul ( resp .dl ) loads of cell , ( resp . ) and the related kpis as follow : similarly for the dl we have : + where ( resp . ) is the ul ( resp .dl ) capacity provided by cell and corresponds to load equals one . ( resp . 
) is the ul ( resp .dl ) file transfer time of user in associated to cell .as discussed previously , the ei defined in the scope of the lexnet project , requires as input from the network measurements the mean ul tx and dl rx powers .the power measurements are aggregated over the periods of users activity , with coefficients which transform power and incident power density into sar ( obtained through measurements and electromagnetic simulations ) . to assess the mean overall ul and dl radiated power in the network , we define the tx power density per surface element . from the power density , we obtain exposure density in term of sar by the linear transformation , namely . can be regarded as a linear function which weights the measured power with the sar reference value corresponding to the usage . in compliance to the ei definition, we obtain the mean overall ul exposure level metric as : and similarly , the mean overall dl exposure level is given by : where is the time spent by a user in in the network and is a factor taking into account exposure during users inactivities .it is estimated in that adult users are active ( namely in communication ) of the time ( ) but are passively exposed to dl emf the rest of the day .note that the discrete versions of expressions of and are nothing but the exposure factor defined in .this factor is the ei term for a specific radio access technology while ignoring the other terms which are not impacted by the self - optimization .the optimization under qos constraint of emf exposure reduction stated in ( [ optimpbm ] ) is addressed in this section as a self - optimization problem . in the followingwe consider the cell outage as the dl qos constraint and assume that it is increasing with the load .the latter also increases with the cell coverage and hence with , where the macro has the subscript and is not self - optimized . in the rest of the paper ,we interchangeably use and for simplicity of notations .we define the cell outage at an instant as the probability that a given user experience a smaller than a predefined threshold .equivalently , the outage is the proportion of active users that experience a smaller than this threshold .a closed form expression of the outage probability in mobile networks is proposed in . in order to find a solution for ( [ optimpbm ] ) andgiven the dl outage constraint , we design as in an iterative stochastic load - balancing algorithm as follows : where is a vector of length of components : note that is a discontinuous upper semi - continuous function taking values in the compact convex set ] this is verified here .* it must exist some such that : . we have : + \ } , & otherwise \end{array } \right.\]]for the two first cases the function is pseudo - contractive as it shows as a particular form of the discussed function in ( * ? ? ?* theorem 3 ) . in the discussions over there both terms of the difference function change with the parameterwhereas in our case one term is fixed .the function in the third case is a linear combination of the functions of the first two cases which are pseudo - contractive .the result follows as a consequence .* is upper semi - continuous , which is insured by the upper semi - continuity of .this completes the proof .from ( * ? ? ? * corollary 4 , p. 55 ) it is sufficient , given the properties of from proposition 1 , to show that : this is particularly true when there exists a lyapunov function of the dynamic induced by verifying ( * ? ? ? * lemma 1 and theorem 2 p. 
12 - 15 ) :let , we have .moreover by restricting to the case and making the derivative along , we have : as is increasing with we have which concludes the proof . e. conil , n. varsier , a. hadjem , j. wiart , g. vermeeren , s. aerts , w. joseph , l. martens , y. corre , c. oliveira , m. mackowiac , d. sebastio , l. correia , r. agero , l. diez , m. koprivica , a. nekovi , m. popovi , j. milinkovi , s. niki and c. roblin , `` lexnet deliverable d2.4 : global wireless exposure metric definition , '' october 2013 .carlos beda castellanos , dimas lpez villa , claudio rosa , klaus pedersen , francesco calabrese , per - henrik michaelsen and jrgen michel , `` performance of uplink fractional power control in utran lte , '' in ieee vtc spring 2008 , 11 - 14 may 2008 .j. f. paris and d. morales - jimnez , `` outage probability analysis for nakagami - q ( hoyt ) fading channels under rayleigh interference , '' wireless communications , ieee transactions on , vol . 9 , no . 4 , pp . 12721276 , 2010 .a. tall , z. altman , and e. altman , `` self organizing strategies for enhanced icic ( eicic ) , '' submitted to the 12th intl .symposium on modeling and optimization in mobile , ad hoc , and wireless networks , wiopt 2014 , hammamet , tunisia , may 2014 .
|
this paper focuses on the exposure to radio frequency ( rf ) electromagnetic fields ( emf ) and on optimization methods to reduce it . within the fp7 lexnet project , an exposure index ( ei ) has been defined that aggregates the essential components that impact exposure to emf . the ei includes , among others , the downlink ( dl ) exposure induced by the base stations ( bss ) and access points , the uplink ( ul ) exposure induced by the devices in communication , and the corresponding exposure times . motivated by the ei definition , this paper develops a stochastic approximation based self - optimizing algorithm that dynamically adapts the network to reduce the ei in a heterogeneous network with macro- and small cells . it is argued that increasing the small cells coverage can , to a certain extent , reduce the ei , but above a certain limit it will deteriorate the dl qos . a load balancing algorithm is formulated that adapts the small cell coverage based on ul loads and a dl qos indicator . the proof of convergence of the algorithm is provided and its performance in terms of ei reduction is illustrated through extensive numerical simulations . self - optimization , self - organization , stochastic approximation , recursive inclusion , coverage extension , load balancing , exposure index , electromagnetic field exposure , emf .
|
the utility maximization is a basic problem in mathematical finance .it was introduced by merton .using stochastic control methods , he exhibits a closed formula for the value function and the optimal proportion - portfolio when the risky assets follow a geometric brownian motion and the utility function is of crra type .+ in the literature , many works assume that the underlying model is exactly known . in this paperwe consider a problem of utility maximization under uncertainty .the objective of the investor is to determine the optimal consumption - investment strategy when the model is not exactly known .such problem is known as the robust utility maximization and is formulated as where is the -expected utility .the investor has to solve a sup inf problem .he considers the worst scenario by minimizing over a set of probability measures and then he maximizes his utility . in the literaturethere are two approaches to solve the robust utility maximization problems .the first one relies on duality methods such as quenez or shied and wu .they considered a set of probability measures called priors and they minimized over this set .the second approach , which is followed in this paper , is based on the penalization method and the minimization is taken over all possible models such as in anderson , hansen and sargent .moreover skiadas followed the same point of view and he gave the dynamics of the control problem via bsde in the markovian context . in our case , the -expected utility is the sum of a classical utility function and a penalization term based on a relative entropy . in bordigoni , they proved the existence of a unique optimal model which minimizes our cost function .they used the stochastic control techniques to study the dynamic value of the minimization problem . in the case of continuous filtration, they showed that the value function is the unique solution of a generalized bsde with a quadratic driver .+ in faidi , matoussi and mnif , they studied the maximization part of the problem in a complete market by using the bsde approach as in duffie and skiadas and el karoui et al . . + in our paper , we assume that the portfolio is constrained to take values in a given closed convex non - empty subset of .such problem was studied when the underlying model is known by karatzas , lehoczky , shreve and xu in the incomplete market case and then by cvitanic and karatzas for convex constraints on the portfolio .skiadas and schroder studied the lifetime consumption - portfolio recursive utility problem under convex trading constraints .they used the utility gradient approach .they derived a first order conditions of optimality which take the form of a constrained forward backward stochastic differential equation .wealth was computed in a recursion starting with a time - zero value forward in time , while utility was computed in a recursion starting with a terminal date value backward in time . in our context, we study the robust formulation of the consumption - investment utility problem under convex constraints on the portfolio . 
using change of measures and optional decomposition under constraints, we give a dual characterization of the admissible consumption investment strategy , then we state an existence result to the optimization problem where the criterion is the solution at time 0 of a quadratic bsde with unbounded terminal condition .to describe the structure of the solution , we use duality arguments .the heart of the dual approach in the classical setting , when the criterion is taken under the historical probability measure , is to find a saddle point for the lagrangian and apply a mini - max theorem in the infinite dimensional case .it is appropriate to use the conjugate function of and . in our case, the criterion is taken under the probability measure modeling the worst scenario and the conjugate function does not appear naturally .we use the duality arguments in a different way .we prove the existence of a probability measure under which the budget constraint is satisfied with equality .then , we derive a maximum principle which gives a necessary and sufficient conditions of optimality .thanks to this result , we give an implicit expression of the optimal terminal wealth and the optimal consumption rate . this later result is a generalization of cvitanic and karatzas work . + the paper is organized as follows .section 2 describes the model and the stochastic control problem .section 3 is devoted to the existence and the uniqueness of an optimal strategy . in section 4, we characterize the optimal consumption strategy and the optimal terminal wealth by using duality techniques . in section 5 , we relate the optimal control to the solution of a forward - backward system and we study some examples .we consider a probability space supporting a d - dimensional standard brownian motion , over the finite time horizon ] satisfies the following bsde where stands the euclidean norm and the notation denotes the transposition operator .they established for the recursive relation .\end{aligned}\ ] ] they proved that there exists a unique pair that solves - .+ moreover , they showed that the density of the probability measure is a true martingale and is given by where ; \ , dt\otimes dp ] ( vector of instantaneous yield ) and the process } ] the investment strategy representing the amount of each asset invested in the portfolio .we shall fix throughout a nonempty , closed , convex set in containing 0 , and denote by the support function of the convex set .this is a closed , positively homogeneous , proper convex function on finite on its effective domain ( rockafellar p. 114 ) which is a convex cone ( called the barrier cone of ) . + we assume that * examples * * + and . + * + . + and +* + where ] a.e . 
in particular , if or >0 ] .we define the process by by girsanov s theorem , the doob - meyer decomposition of under , + for and , is given by : ,\end{aligned}\ ] ] where is a -brownian motion and the process is given by + , .+ we introduce the following set of probability measures : [ upperbound ] ( i)we denote by the class of all probability measures with the following property : there exists a nondecreasing predictable process such that ,\end{aligned}\ ] ] is a -local supermartingale for any .+ ( ii ) the upper bound process denoted by is a nondecreasing predictable process with which satisfies ( [ supermp ] ) and }\end{aligned}\ ] ] is nondecreasing for all nondecreasing process satisfying ( [ supermp ] ) .therefore , by fllmer and kramkov ( , lemma 2.1 ) the probability measure belongs to if and only if there is an upper bound for all predictable processes arising in the doob - meyer decomposition of the semimartingale under , denoted in our case by . in this casethe upper variation is equal to this upper bound .thanks again to lemma 2.1 in , the set consists of all probability measures for where the upper variation process is given by : .\end{aligned}\ ] ] we fix and .we denote by \rightarrow \r^d \,\mbox{s.t.}\ , \delta^{supp}(g)\ , \mbox{is equi - integrable with respect to the lebesgue measure on [ 0,t]}\},\end{aligned}\ ] ] <\infty,\,\,\sup_{\nu}e[(z_t^{\nu})^{1-\bar \eta}]<\infty\ , \mbox{and}\ , \nu\in g_{\mbox{equi}}\,p.a.s\}.\end{aligned}\ ] ] we denote by the subset of elements : such restriction is needed to characterize the optimal strategy of consumption investment ( see theorem [ bd ] ) .as is pham and in order to obtain a dual characterization of dominated random variables -measurable by a controlled process i.e. there exists and an admissible strategy of consumption - investment denoted by such that , we shall assume <\infty \mbox { for all } p^{\nu}\in \pc^0,\end{aligned}\ ] ] \mbox { is bounded in ( t , w)}. \end{aligned}\ ] ] all these conditions are satisfied in the example of the last section .[ dualdomin ] we assume that the set of controls is non - empty * ( h3)*. let and . then there exists such that if and only if \ ; \leq \ ; x.\end{aligned}\ ] ] * proof . *_ necessary condition ._ we consider and .there exists such that .since is a -local supermartingale , there exists a sequence of stopping time when goes to infinity , such that \leq x.\end{aligned}\ ] ] by condition , the nondeceasing property of and since is nonnegative , fatou s lemma yields that \geq e_{p^{\nu } } \left[\liminf_{n \rightarrow \infty}\big ( x_{t\wedge \tau_n } ^{x , c , h}+\int_0^{t\wedge \tau_n}c_tdt -a_{t\wedge \tau_n}(\nu)\big ) \right].\end{aligned}\ ] ] we have and , when goes to infinity .we deduce that : \leq x,\end{aligned}\ ] ] for all .this shows that .+ consider the random variable . since \ ; \leq \ ; x\;<\;\infty,\end{aligned}\ ] ] then by the stochastic control lemma a.1 of fllmer and kramkov , there exists a rcll version of the process : \;\;0\leq t\leq t.\end{aligned}\ ] ] moreover , for any , the process } ] , and , we have and .+ the quasi concavity of the absolute value of the utility functions * ( h5 ) * hold if or , .[ lemme1 ] we assume that the set of controls is non - empty * ( h3 ) * and .the set is closed for almost everywhere convergence topology . 
*we consider a sequence such that by fatou s lemma and using the uniform integrability of the family , we have & \leq & \sup_n e_p [ \exp{(\gamma |\bar u ( \xi^n)|)}]<\infty.\end{aligned}\ ] ] by the uniform integrability of the family and for a fixed , there exists such that , if , then which implies from - , we deduce the boundedness in and the equi - integrability of .this shows . similarly and so . by fatou s lemma, we have & \leq & \liminf_{n\longrightarrow \infty}e_{p^{\nu } } \left[\xi^n+\int_0^tc^n_{t}dt -a_t({\nu } ) \right]\\ & \leq & x,\end{aligned}\ ] ] which implies that . from the characterization, we deduce that and so the closeness of the set is proved .+ [ lemme0 ] we assume that the set of controls is non - empty * ( h3 ) * , and the quasi concavity of the absolute value of the utility functions * ( h5 ) * holds , then the set is convex .* proof . * we take , and ] and .thanks to the standard assumptions on the utility functions * ( h4 ) * , and are well - defined . from the concavity of and , we have a.e . , ] and dp a.s . from lemma [ lemme1] , we have .we set and . then a.s . and , , a.e .when goes to infinity . from proposition [ monotonieyn ] , we deduce that then . on the other hand , we have and and so the comparison theorem ( see theorem [ ctheorem ] ) yields which implies .this shows that and so upper semicontinuous .+ the next lemma shows the boundedness of the value function .[ lemuni ] we assume that the discounting factor is bounded * ( h1 ) * , the set of controls is not reduced to the null strategy * ( h3 ) * and .we have * proof . * from the definition of and using the boundedness on the discounting factor * ( h1 ) * , we have \\ & \leq & c \big(e_{p}[\int_0^t |u(c_s)|ds+|\bar u(\xi)| ] \big ) . \ ] ] since for all , we have and , then the result follows from the uniform integrability of the families and .+ + our next result is the existence of a unique solution to the problem .the uniqueness follows since is strictly concave .[ main1 ] we assume that the discounting factor is bounded * ( h1 ) * , the set of controls is non - empty * ( h3 ) * , the utility functions satisfy the usual conditions*(h4 ) * , the absolute value of the utility functions is quasi - concave * ( h5 ) * , and .there exists a unique solution of .* let be a maximizing sequence of the problem i.e. which is finite by lemma [ lemuni ] .+ since and , then by lemma a.1.1 of delbaen and schachermeyer , there exists a sequence such that converges almost surely to . by lemmas [ lemme1]-[lemme0 ] , we have and . from proposition [ regularity],the functional is concave and so from proposition [ regularity ] ,the functional is upper semicontinuous and so therefore solves .the aim of this section is to provide a description of the solution structure to problem via the dual formulation .in fact , to solve an investment problem when the underlying model is known , and by using the definition of the conjugate function of denoted by , we have if } ] , which implies \leq \inf_{\nu}e[\tilde u(yz_t^{\nu})]+xy.\end{aligned}\ ] ] if we find and such that we have equality in the latter equation for some , then is the solution of the dual problem . in our case ,the criterion in taken under and the use of the conjugate functions is not appropriate .in fact we have and so the supermartingale property of } ] is convex. * proof . *+ let their density processes , ] . 
to check the equi - integrability point , for all \subset [ 0,t] ] .+ the following theorem shows the existence of a probability measure equivalent to the probability measure solution of the problem .the density process of with respect to is the -martingale with ; t\in [ 0,t ] , \,\,dt\otimes dp , a.e.\end{aligned}\ ] ] we shall assume the translation stability on the set of admissible strategies i.e. + * ( h6 ) * if , then for any and . the translation stability on the set of admissible strategies * ( h6 ) *is satisfied if the utility functions are subadditive .[ bd ] we fix .we assume that the discounting factor is bounded * ( h1 ) * , the set of controls is non - empty * ( h3 ) * , the utility functions satisfy the usual conditions * ( h4 ) * , the absolute value of the utility functions is quasi - concave * ( h5 ) * , and the translation stability on the set of admissible strategies * ( h6 ) * holds .then , there exists a probability measure such that &=&e_{\tilde p^ * } \left[\xi^*+\int_0^t c^*_tdt -a_t ( \nu^ * ) \right],\end{aligned}\ ] ] and the budget constraint is satisfied with equality i.e. =x.\end{aligned}\ ] ] * proof . *+ let and defined by the following functionals : and .\end{aligned}\ ] ] : let be a sequence in such that : and denote by the corresponding density process . since each , it s follows from komlos theorem that there exists a sequence with for each and such that converge to some random variable , which is then also non - negative but may take value .+ because is convex , each is again associated to some which is in . by de la valle - poussin s criterion , is uniformly integrable and therefore converges in .this implies that =e_p[\bar{z}^{\infty}_t]=1 ] . from the inequality to in , we deduce that .since and are equivalent probability measures , we have &\leq & ( e _ { \bar p^n}[(\bar{z}^n_t)^{-\bar \eta}])^\frac{1}{\bar \eta}\bar p^n(a)^{1-\frac{1}{\bar \eta}}\\ & \leq & ( e _ { p}[(\bar{z}^n_t)^{1-\bar \eta}])^\frac{1}{\bar \eta}\bar p^n(a)^{1-\frac{1}{\bar \eta}}.\end{aligned}\ ] ] from the definition of the set , and since ] ( see lemma [ conv ] ) , we deduce the concavity of , which implies and so . we denote by and the probability measure associated with , i.e. .+ + : we show that the budget constraint is satisfied with equality .+ we assume that = l < x.\end{aligned}\ ] ] from the characterization , we deduce that there exists such that where .\end{aligned}\ ] ] we denote by ] , and so \leq -x+e_{\tilde{p}^*}[\xi^ * + \int_0^t c^*_s ds ]< + \infty, ] .similarly \leq \liminf_{n\longrightarrow \infty}e[(\bar{z}^{n}_t)^{1-\bar \eta}]<\infty ] .they showed that [ unicityproba ] from the dynamic programming principle , the unicity of the optimal strategy and the unicity of the probability measure associated with the worst case , we deduce the unicity of the probability measure .in this section , we characterize the optimal consumption - investment strategy as the unique solution of a forward - backward system .this characterization is a consequence of the maximum principle .in fact , from theorem [ maximumprinciple ] , the optimal terminal wealth and the optimal consumption are given by \label{stropc}\\ \xi^*&=&i_2\big(\frac{\lambda^ * } { \bar{\alpha } } s_t^{\delta } \tilde z_t^*z^{*-1}_t\big),\,\,dp\,a.s.\label{stropxi}\end{aligned}\ ] ] where ( resp . ) is the inverse of the derivative function of ( resp . ) . 
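the optimal terminal wealth and consumption above are expressed through the inverses of the marginal utilities. for the classical utility families these inverses have simple closed forms, illustrated below; the text does not fix a particular utility here, so these are just standard examples, not the paper's specific choice.

```python
import numpy as np

def inverse_marginal_log(y):
    """log utility u(x) = log(x): u'(x) = 1/x, so i(y) = 1/y."""
    return 1.0 / np.asarray(y, dtype=float)

def inverse_marginal_power(y, p):
    """power (crra) utility u(x) = x**p / p with p < 1, p != 0:
    u'(x) = x**(p-1), so i(y) = y**(1/(p-1))."""
    return np.asarray(y, dtype=float) ** (1.0 / (p - 1.0))

y = np.array([0.5, 1.0, 2.0])
print(inverse_marginal_log(y))           # [2.   1.   0.5]
print(inverse_marginal_power(y, p=0.5))  # here i(y) = y**(-2)
```

plugging such an inverse into the expressions above, evaluated along the optimal dual quantities, gives the implicit form of the optimal consumption rate and terminal wealth.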
the following result is a direct consequence of theorem [ bd ] and theorem [ main1 ] .[ cara ] we fix .we assume that the discounting factor is bounded * ( h1 ) * , the set of controls is non - empty * ( h3 ) * , the utility functions satisfy the usual conditions * ( h4 ) * , the absolute value of the utility functions is quasi - concave * ( h5 ) * , and the translation stability on the set of admissible strategies * ( h6 ) * holds .+ we consider , } ] , and two densities of a probability measures equivalent to .then , coincides with the optimal value process given by , are given by - , coincides with the density of the minimizing measure given by and coincides with given by , if and only if there exists satisfying where , and the following forward - backward system admits a unique solution .* proof . * if is given by and is given by , then , from bordigoni et al . , is the unique solution of bsde with terminal condition and is the solution of the second equation in our forward - backward system . from theorem [ main1 ] ,the couple is the unique solution of i.e. wich implies that is the solution of the bsde and there exists such that and .+ since coincides with , then is the solution of the following forward sde from theorem [ bd ] , evolves according to the following forward sde the converse sense is straightforward .: in this example , we give an explicit formula for the investment strategy in the risky assets .we consider a financial market consisting of two risky assets , where the price is governed by is a - standard brownian motion , are constants .we take .+ this is the incomplete market case studied by karatzas et al where the investment is restricted only to the first risky asset .it follows that the support function of the convex set is given by and we know that the density of the risk neutral measure is given by where , and by the girsanov theorem , is a - brownian motion .+ we fix . if , , and , then from the recursive relation [ recurciverelation ] , we obtain ,\end{aligned}\ ] ] which is a typical example in the dynamic entropic risk measure .we refer to barrieu and el karoui for more details about risk measures .the stochastic control problem is related to the problem ,\end{aligned}\ ] ] where \leq x\} ] .it yields that where . since , we have . from equation and using it s formula , we have since , we have by identification that ,\end{aligned}\ ] ] and so the number of shares invested in the risky asset , which is denoted by } ] .a straightforward calculus shows that <\infty ] and is equi - integrable with respect to the lebesgue measure on ] and is in the class of probability measures satisfying . + + the equation is coherent with the intuition since when goes to infinity , we force the penalty term which appears in the dynamic value process ( see equation ) to be equal to zero and so our model of utility maximization under uncertainty converges to a classical utility maximization problem when the underlying model is known .the optimal strategy of investment in the first risky asset given in corresponds to the solution of utility maximization problem in incomplete market when the utility function .such result could be interpreted as a stability result . 
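the entropic criterion appearing in this example, i.e. the entropy-penalized evaluation -(1/alpha) log E[exp(-alpha U)], can be approximated by plain monte carlo, which gives a quick sanity check of how the penalization parameter shifts the robust value. the lognormal wealth model and the parameter values in the sketch are hypothetical and are not taken from the paper, and the sketch does not reproduce the bsde construction.

```python
import numpy as np

rng = np.random.default_rng(0)

def entropic_value(utility_samples, alpha):
    """monte carlo estimate of -(1/alpha) * log E[exp(-alpha * U)].

    this is the closed form of inf_Q { E_Q[U] + (1/alpha) H(Q|P) }; as alpha -> 0
    it tends to E[U], while larger alpha penalizes model uncertainty more heavily.
    """
    z = -alpha * np.asarray(utility_samples, dtype=float)
    m = z.max()                                   # log-sum-exp stabilization
    return -(m + np.log(np.mean(np.exp(z - m)))) / alpha

# hypothetical terminal wealth: lognormal samples (geometric-brownian-motion flavour)
wealth = np.exp(rng.normal(loc=0.05, scale=0.2, size=100_000))
log_utility = np.log(wealth)

for alpha in (0.1, 1.0, 5.0):
    print(alpha, entropic_value(log_utility, alpha))
```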
in the context of robust maximization problem, the coefficient could be interpreted as a modified relative risk .also , one could see such coefficient as a change of the level of the volatility .the volatility increases from the level to .if is close to 0 , then the modified relative risk is small enough and the number of shares invested in the first risky asset decreases which is consistent with the intuition since we maximize the worst case .we consider a financial market consisting of one risky asset where the price is governed by is a - standard brownian motion , are constants .we consider the case where ] .+ the utility function is strictly concave and increasing .it satisfies the inada condition .+ following classical arguments of convex duality , see for example kramkov and schachermayer and pham , there exists , s.t .the optimal wealth process is given by \ , dt\otimes dp \mbox { a.e . }\end{array } \right.\end{aligned}\ ] ] the dual problem is given by ,\end{aligned}\ ] ] where is the fenchel - legendre transform of . + the dynamic version of the dual control problem is given by .\end{aligned}\ ] ] the hjb equation associated to ( [ dyver ] ) is given by =0,\,\ , & ( t , z)\in [ 0,t)\times ( 0,+\infty)\\ v ( t , z)=\tilde u^{rm}(z ) , \,\ , & z \in ( 0,+\infty ) . \end{array } \right.\end{aligned}\ ] ] where + the hjb equation could be degenerate , the existence of a classical solution is not insured .we should apply the viscosity solutions theory to characterize the dual value function as a viscosity solution of the associated hjb equation .denote by , and .the pair is the solution of the following equation which implies for any stopping time , we have from the inequality , where denotes the inner product associated with the euclidean norm , we deduce that we define the probability measure equivalent to where its density is the -martingale with since is a -martingale , then is a -local martingale .let be a reducing sequence for + , then , for large enough , we have and so latexmath:[\ ] ] which implies for all . sending and , we obtain since the utility function satisfies the inada conditions ( assumption * ( h4 ) * ) , we have and so for all .+ sending and we have and so inequality is proved . + the result follows from and .the same argument holds for the consumption process . abc99xyz anderson e. , hansen l.p , sargent t. ( 2003 ) . a quartet of semigroups for model specification , robustness , prices of risk and model detection ._ journal of the european economic association _ 1 , 68 - 123 .barrieu , p. , el karoui , n. ( 2008 ) .pricing , hedging and optimally designing derivatives via minimization of risk measures . in the book _`` indifference pricing : theory and applications '' edited by ren carmona , springer - verlag , 77 - 141._. bordigoni , g. , matoussi , a. , schweizer ( 2007 ). a stochastic control approach to a robust utility maximization problem .f. e. benth et al .( eds . ) , _ stochastic analysis and applications .proceedings of the second abel symposium _ , oslo , 2005 , springer , 125 - 151 .quenez , m. q. ( 2004 ) .optimal portfolio in a multiple - priors model . in r. dalang ,m. dozzi and f. russo ( eds ; ) , _ seminar on stochastic analysis , random fields and applications iv _ , progess in probability 58 , birkhauser , 291 - 321 .schroder , m. and skiadas , c. ( 2003 ) .optimal lifetime consumption - portfolio strategies under trading constraints and generalized recursive preferences .stochastic processes and their applications , 108 , 155 - 202 .
|
we study a robust utility maximization problem from terminal wealth and consumption under convex constraints on the portfolio . we state the existence and the uniqueness of the consumption - investment strategy by studying the associated quadratic backward stochastic differential equation ( bsde in short ) . we characterize the optimal control by using the duality method and deriving a dynamic maximum principle . * key words :* utility maximization , backward stochastic differential equations , recursive utility , model uncertainty , robust control , maximum principle , forward - backward system . * msc classification ( 2000 ) :* 92e20 , 60j60 , 35b50 .
|
in the backpropagation learning of a neural network , the initial weight parameters are crucial to its final estimates . since hidden parameters are put inside nonlinear activation functions , simultaneous learning of all parameters by backpropagation is accompanied by a non - convex optimization problem .when the machine starts from an initial point far from the goal , the learning curve easily gets stuck in local minima or lost in plateaus , and the machine fails to provide good performance .recently deep learning schemes draw tremendous attention for their overwhelming high performances for real world problems .deep learning schemes consist of two stages : _ pre - training _ and _ fine - tuning_. the pre - training stage plays an important role for the convergence of the following fine - tuning stage . in pre - training , the weight parameters are constructed layer by layer , by stacking unsupervised learning machines such as restricted boltzmann machines or denoising autoencoders . despite the brilliant progress in application fields , theoretical interpretation of the schemes is still an open question . in this paperwe introduce a new initialization / pre - training scheme which could avoid the non - convex optimization problem .the key concept is the probability distribution of weight parameters derived from murata s integral representation of neural networks .the distribution gives an intuitive idea what the parameters represent and contains information about where efficient parameters exist .sampling from this distribution , we can initialize weight parameters more efficiently than just sampling from a uniform distribution .in fact , for relatively simple or low dimensional problems , our method by itself attains a high accuracy solution without backpropagation .de freitas et al. also introduced a series of stochastic learning methods for neural networks based on the sequential monte carlo ( smc ) . in their methodsthe learning process is iterative and initial parameters are given by less informative distributions such as normal distributions .on the other hand we could draw the parameters from a _ data dependent _ distribution . furthermore , in smc , the number of hidden units must be determined before the learning , while it is determined naturally in our method .one of the most naive initialization heuristics is to draw samples uniformly from an interval ] .such a does exist and is known as a _mollifier_.the _ standard mollifier _ is a well - known example .hereafter we assume is a sigmoid pair and is the corresponding derivative of the standard mollifier .we also assume that our target is a bounded and compactly supported -smooth function .then the integral transform of is absolutely integrable and the inversion formula is reduced to the direct form . let be a probability distribution function over which is proportional to , and be satisfying for all . with this notations ,the inversion formula is rewritten as the expectation form with respect to , that is , the expression implies the finite sum converges to in mean square as , i.e. = f ] holds for any ( th.2 in ) . 
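for concreteness, the standard mollifier used above as the building block of the decomposing kernel has the closed form coded below (the normalization constant is irrelevant for the sampling schemes discussed later, so it is dropped); the sigmoid-pair activation itself is not spelled out in this text and is therefore not reproduced.

```python
import numpy as np

def mollifier(x):
    """standard (unnormalized) mollifier: exp(-1 / (1 - x**2)) on (-1, 1), zero outside."""
    x = np.asarray(x, dtype=float)
    out = np.zeros_like(x)
    inside = np.abs(x) < 1.0
    out[inside] = np.exp(-1.0 / (1.0 - x[inside] ** 2))
    return out

# a smooth bump compactly supported on [-1, 1]; all of its derivatives share that support
print(mollifier(np.linspace(-1.5, 1.5, 7)))
```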
here is a neural network with hidden units , therefore we can regard the inversion formula as an _ integral representation _ of neural networks .now we attempt to make use of the integral transform as an oracle distribution of hidden parameters .although the distribution is given in the explicit form as we saw in the preceding section , further refinements are required for practical calculation .given a set of input and output pairs , is empirically approximated as with some constant which is hard to calculate exactly .in fact sampling algorithms such as the acceptance - rejection method and markov chain monte carlo method work with any unnormarized distribution because they only evaluate the ratio between probability values .note that the approximation converges to the exact in probability by the law of large numbers _ only _ when the input vectors are i.i.d .samples from a uniform distribution . as a decomposing kernel make use of the -th order derivative of the standard mollifier where if is even and otherwise .the -th derivative of the mollifier takes the form where denotes a polynomial of which is calculated by the following recurrence formula : the higher order derivatives of a mollifier has more rapid oscillations in the neighbourhoods of both edges of its support .given a data set , our * sampling regression * method is summarized as below : 1 . :calculate according to eq.[eq : deriv.mol ] , eq.[eq : p0 ] and eq.[eq : pk ] , where if is even and otherwise .then is calculated by eq.[eq : tab.approx ] with setting . as we noted above, one can choose arbitrary .draw samples from the probability distribution by acceptance - rejection method , where denotes the number of hidden ( sigmoid pair ) units .then we obtain the hidden parameters .let for all and .solve the system of linear equations with respect to .then we obtain the output parameters .generally is ill - shaped and sampling from the distribution is difficult .for example in fig.[fig : tab.of.sin ] left , samples drawn from of with ] , therefore for any and , implies .the latter condition is equivalently deformed to , which implies . by the compact support assumption of , takingthe maximum with respect to leads to . by tracking back the inferences , for any and , since for any , the integrand of is always zero , the integration domain of can be restricted into . therefore by eq.[eq : prop1.4 ] , holds , which comes to the conclusion : . in a relatively high dimensional input case ,sampling in the coordinate transformed -space is more efficient than sampling in the -space because the shape of the support of in the -space is rectangular ( see , fig.[fig : tab.of.sin ] ) and therefore the proposal distribution is expected to reduce miss proposals , out of the support . in case that the coordinate transform technique is not enough , it is worth sampling from each _ component _ distribution .namely , the empirically approximated is bounded above by a _ mixture distribution _ : where is a _ component distribution _ and is a _ mixing probabilities_. in addition , an upper bound of is given by the form for some and .we conducted three sets of experiments comparing three types of learning methods : * whole parameters are initialized by samples from a uniform distribution , and trained by ackropagation . *hidden parameters are initialized by ampling from ; and the rest output parameters are initialized by samples from a uniform distribution. then whole parameters are trained by ackropagation . 
*hidden parameters are determined by ampling from ; the rest output parameters are fitted by linear egression . in order to compare the ability of the three methods , we conducted three experiments on three different problems : one - dimensional complicated curve regression , multidimensional boolean functions approximation and real world data classification .c . * left * : sr ( solid black line ) by itself achieved the highest accuracy without the iterative learning , whereas sbp ( dashed red line ) converged to lower rmse than bp ( dotted green line ) .* right * : the original curve ( upper left ) has high frequencies around the origin . sr ( upper right ) followed such a dynamic variation of frequency better than other two methods .sbp ( lower left ) roughly approximated the curve with noise .bp ( lower right ) only fitted moderate part of the curve . , width=226 ] . *left * : sr ( solid black line ) by itself achieved the highest accuracy without the iterative learning , whereas sbp ( dashed red line ) converged to lower rmse than bp ( dotted green line ) . *right * : the original curve ( upper left ) has high frequencies around the origin .sr ( upper right ) followed such a dynamic variation of frequency better than other two methods .sbp ( lower left ) roughly approximated the curve with noise .bp ( lower right ) only fitted moderate part of the curve ., width=226 ] first we performed one - dimensional curve regression .the objective function is a two - sided _topologists s sine curve ( tsc ) _ defined on the interval ] in equidistant manner .the number of hidden parameters were fixed to in each model .note that relatively redundant quantity of parameters are needed for our sampling initialization scheme to obtain good parameters .the output function was set to linear and the batch learning was performed by bfgs quasi - newton method .uniformly random initialization parameters for bp and sbp were drawn from the interval ] .sampling from was performed by acceptance - rejection method . in fig.[fig : logic ] both the cross - entropy curves and classification error rates are depicted in thin and thick lines respectively .the solid black line corresponds to the results by sr , which achieved the perfectly correct answer from the beginning .the dashed red line corresponds to the results by sbp , which also attained the perfect solution faster than bp .the dotted green line corresponds to the results by bp , which cost iterations of learning to give the correct answer . in this experimentwe have validated that the proposed method works well with multiclass classification problems .the quick convergence of sbp indicates that contains advantageous information on the training examples to the uniform distribution .r[1pt]6 cm finally we examined a real classification problem using the mnist data set .the data set consists of training examples and test examples .each input vector is a -level gray - scaled -pixel image of a handwritten digit .the corresponding label is one of 10 digits .we implemented these labels as -dimensional binary vectors whose components are chosen randomly with equivalent probability for one and zero .we used randomly sampled training examples for training and whole testing examples for testing .the number of hidden units were fixed to , which is the same size as used in the previous study of lecun et al. 
.note that sigmoid pairs corresponds to sigmoid units , therefore we used sigmoid pairs for sr and sbp , and sigmoid units for bp .the output function was set to sigmoid and the loss function was set to cross - entropy . in obedience to lecun et al. , input vectors were normalized and randomly initialized parameters for bp and sbp were drawn from uniform distribution with mean zero and standard deviation .direct sampling from is numerically difficult because the differential order of its decomposing kernel piles up as high as -th order .we abandoned rigorous sampling and tried sampling from a _ mixture annealed _ distribution .as described in eq.[eq : mix ] , we regarded as a mixture of . by making use of the log boundary given by eq.[eq : log.bound ] , we numerically approximated from above and drew samples from an _ easier _ component distribution .details of the sampling technique is explained in [ supp : mix ] .the sampling procedure scales linearly with the dimensionality of the input space ( ) and the number of required hidden units ( ) respectively .in particular it scales constantly with the number of the training examples .the following linear regression was conducted by singular value demcomposition ( svd ) , which generally costs operations , assuming , for decomposing a -matrix . in our case corresponds to the number of the training examples ( ) and corresponds to the number of hidden units ( ) . at last backpropagation learningwas performed by stochastic gradient descent ( sgd ) with adaptive learning rates and diagonal approximated hessian .the experiment was performed in r on a xeon x5660 with gb memory . in fig.[fig: mnist ] the classification error rates for test examples are depicted .the black real line corresponds to the results by sr , which marked the lowest error rate ( ) of the three at the beginning , and finished after iterations of sgd training .the training process was not monotonically decreasing in the early stage of training , it appears that the sr initialization overfitted to some extent .the red dashed line corresponds to the results by sbp , which marked the steepest error reduction in the first iterations of sgd training and finished .the green dotted line corresponds to the results by bp , which declined the slowest in the early stage of training and finished . 
in tab.[tab : time ] the training time from initialization through sgd training is listed .the sampling step in sr ran faster than the following regression and sgd steps .in addition , the sampling time of sr and sbp was as fast as the sampling time of bp .as we expected , the regression step in sr , which scales linearly with the amount of the data , cost much more time than the sampling step did .the sgd step also cost , however each step cost around merely seconds , and it would be shorten if the initial parameters had better accuracy ..training times for mnist [ cols="<,^,^,^",options="header " , ] [ tab : time ] in this experiment , we confirmed that the proposed method still works for real world data with the aid of an annealed sampling technique .although sr showed an overfitting aspects , the fastest convergence of sbp supports that the oracle distribution gave meaningful parameters , and the annealed sampling technique could draw meaningful samples .hence the overfitting of sr possibly comes from regression step , which suggests the necessity for further blushing up of regression technique .in addition , our further experiments also indicated that when the number of hidden units increased to , the _ initial _ test error rate scored , which is smaller than the previously reported error rates by lecun et al. with hidden units .in this paper , we introduced a two - stage weight initialization method for backpropagation : sampling hidden parameters from the oracle distribution and fitting output parameters by ordinary linear regression .based on the integral representation of neural networks , we constructed our oracle distributions from given data in a nonparametric way .since the shapes of those distributions are not simple in high dimensional input cases , we also discussed some numerical techniques such as the coordinate transform and the mixture approximation of the oracle distributions .we performed three numerical experiments : complicated curve regression , boolean function approximation , and handwritten digit classification .those experiments show that our initialization method works well with backpropagation .in particular for the low dimensional problems , well - sampled parameters by themselves achieve good accuracy without any parameter updates by backpropagation .for the handwritten digit classification problem , the proposed method works better than random initialization .sampling learning methods inevitably come with redundant hidden units since drawing good samples usually requires a large quantity of trial .therefore the model shrinking algorithms such as pruning , sparse regression , dimension reduction and feature selection are naturally compatible to the proposed method .although plenty of integral transforms have been used for theoretical analysis of neural networks , numerical implementations , in particular sampling approaches are merely done .even theoretical calculations often lack practical applicability , for example a higher order of derivative in our case , each integral representation interprets different aspects of neural networks .further monte carlo discretization of other integral representations is an important future work . 
in the deep learning context , it is said that the deep structure remedies the difficulty of a problem by multilayered superpositions of simple information transformations .we conjecture that the complexity of high dimensional oracle distributions can be decomposed into relatively simple distributions in each layer of the deep structure .therefore , extending our method to the multilayered structure is our important future work .the authors are grateful to hideitsu hino for his incisive comments on the paper .they also thank to mitsuhiro seki for having constructive discussions with them .sampling hidden parameter s from the oracle distribution demands a little ingenuity . in our experiments ,we have implemented two sampling procedures : a rigorous but naive , computationally inefficient way and an approximative / ad hoc but quick and well - performing way . although both work quickly and accurately in a low dimensional input problem , only the latter works in a high dimensional problem such as mnist . given a decomposing kernel , we employed acceptance - rejection ( ar ) method directly on rigorous sampling from on a proposal distribution , we employed uniform distribution .we assume here that the support of proposal distribution has been adjusted to cover the _ mass _ of as tight as possible , and the infimum has been estimated .then our sampling procedure is conducted according to the following alg.[alg : naive ] .note that in a high dimensional case , the estimation accuracy of and the tightness of affects the sampling efficiency and accuracy materially .in fact , the expectation number of trial to obtain one sample by ar is times , which gets exponentially large as the dimensionality increases . since the support of the oracle distribution is not rectangular , sampling fromcoordinate transformed remedies the difficulty .in addition , the high order differentiation in the decomposing kernel cause numerical unstability . in orderto overcome the high dimensional sampling difficulty , we approximately regarded as a mixture distribution ( as described in eq.[eq : mix ] ) and conducted two - step sampling : first choose one component distribution according to the mixing probability , second draw a sample from chosen component distribution .sampling from holds another difficulty due to its high order differentiation in . according to its upper bound evaluation ( eq.[eq : log.bound ] ) ,a high order derivative has its almost all _ mass _ around both edge of its domain interval ] according to the annealing beta distribution , then sample and under the restriction . of mollifier .* left * : has almost all mass , with high frequency , at both ends , and no mass in the middle of domain . *right * : the right half of is approximated by beta distribution ( red line ) .[ fig : dmol],width=226 ] of mollifier .* left * : has almost all mass , with high frequency , at both ends , and no mass in the middle of domain . *right * : the right half of is approximated by beta distribution ( red line ) .[ fig : dmol],width=226 ] obviously the mixture approximation gives rise to poor restriction and virtual indefiniteness of . 
since the rigorous computation establishes all relations between and all s , whereas the mixture approximation does just one relation between and one particular .we introduced two additional assumptions .first , is parallel to given .since always appears in the form , only the parallel component of could have any effect ( on one particular ) , hence we eliminated the extra freedom in the orthogonal component .second , the norm has similar scale to the distances between input vectors .since controls the spatial frequency of a hidden unit , it determines how broad the hidden unit covers the part of the input space .namely , controls which input vectors are selectively responded by the unit .therefore , in order to avoid such an isolation case that an unit responds for only one input vector , we assumed is no smaller than the distance between input vectors . in this procedurewe set as a distance of randomly selected two input examples and .we denote this procedure simply by .once is fixed with these assumptions , is determined as .given shape parameters of the beta distribution , one cycle of our second sampling method is summarized as alg.[alg : mix ] .this method consists of no more expensive steps .it scales linearly with the dimensionality of the input space and the number of required sample parameters respectively .moreover , it does not depends on the size of the training data .
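to make the two-step procedure of this appendix concrete, the sketch below picks a mixture component at random, draws the kernel argument z from a rescaled beta distribution near the edge of the kernel support, sets the direction of the weight vector parallel to the chosen input with a norm taken from a random pairwise distance, and recovers the bias from the restriction (taken here to be a.x_i - b = z, which is an assumption since the exact restriction is elided in this text). the beta shape parameters, the support interval and the uniform mixing weights are also illustrative choices.

```python
import numpy as np

def sample_hidden_parameters(x, n_hidden, weights=None, edge=(0.7, 1.0),
                             beta_shape=(8.0, 2.0), rng=None):
    """two-step mixture sampling of hidden parameters (a_j, b_j); schematic version.

    x : (n, d) training inputs; weights : mixing probabilities over the n components.
    edge : interval of |a.x_i - b| where the high-order kernel derivative has its mass.
    """
    rng = np.random.default_rng() if rng is None else rng
    n, d = x.shape
    weights = np.full(n, 1.0 / n) if weights is None else np.asarray(weights, float)
    a_out, b_out = np.empty((n_hidden, d)), np.empty(n_hidden)
    for j in range(n_hidden):
        i = rng.choice(n, p=weights)                       # step 1: pick a component
        z = edge[0] + (edge[1] - edge[0]) * rng.beta(*beta_shape)
        z *= rng.choice([-1.0, 1.0])                       # mass sits at both edges of [-1, 1]
        p, q = rng.choice(n, size=2, replace=False)        # step 2: build (a, b) for component i
        norm_a = np.linalg.norm(x[p] - x[q]) + 1e-12       # |a| tied to a random pairwise distance
        a = norm_a * x[i] / (np.linalg.norm(x[i]) + 1e-12)  # a parallel to x_i
        b = a @ x[i] - z                                   # assumed restriction a.x_i - b = z
        a_out[j], b_out[j] = a, b
    return a_out, b_out
```

the output weights then follow from the ordinary least-squares fit described in the main text, e.g. numpy.linalg.lstsq applied to the matrix of hidden-unit activations evaluated on the training inputs.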
|
a new initialization method for the hidden parameters in a neural network is proposed . derived from the integral representation of neural networks , a nonparametric probability distribution over hidden parameters is introduced . in this proposal , hidden parameters are initialized by samples drawn from this distribution , and output parameters are fitted by ordinary linear regression . numerical experiments show that backpropagation with the proposed initialization converges faster than with uniformly random initialization . it is also shown that the proposed method achieves sufficient accuracy by itself , without backpropagation , in some cases .
|
in the light of problems caused due to poor crowd management , such as crowd crushes and blockages , there is an increasing need for computational models which can analyse highly dense crowds using video feeds from surveillance cameras .crowd counting is a crucial component of such an automated crowd analysis system .this involves estimating the number of people in the crowd , as well as the distribution of the crowd density over the entire area of the gathering .identifying regions with crowd density above the safety limit can help in issuing prior warnings and can prevent potential crowd crushes .estimating the crowd count also helps in quantifying the significance of the event and better handling of logistics and infrastructure for the gathering . in this work, we propose a deep learning based approach for estimating the crowd density as well as the crowd count from still images .counting crowds in highly dense scenarios ( > 2000 people ) poses a variety of challenges .highly dense crowd images suffer from severe occlusion , making the traditional face / person detectors ineffective .crowd images can be captured from a variety of angles introducing the problem of perspective .this results in non - uniform scaling of the crowd necessitating the estimation model to be scale - invariant to large scale changes .furthermore , unlike other vision problems , annotating highly dense crowd images is a laborious task .this makes the creation of large - scale crowd counting datasets infeasible and limits the amount of training data available for learning - based approaches . + actual count : 1115 estimated : 1143 + + actual count:440 estimated:433 hand - crafted image features ( sift , hog etc . ) often fail to provide robustness to challenges of occlusion and large scale variations .our approach for crowd counting relies instead on deep learnt features using the framework of fully convolutional neural networks(cnn ) .we tackle the issue of scale variation in the crowd images using a combination of a shallow and deep convolutional architectures .further , we perform extensive data augmentation by sampling patches from the multi - scale image representation to make the system robust to scale variations .our approach is evaluated on the challenging ucf_cc_50 dataset and has achieved state of the art results .some works in the crowd counting literature experiment on datasets having sparse crowd scenes , such as ucsd dataset , mall dataset and pets dataset .in contrast , our method has been evaluated on highly dense crowd images which pose the challenges discussed in the previous section .methods introduced in and exploit patterns of motion to estimate the count of moving objects . however , these methods rely on motion information which can be obtained only in the case of continuous video streams with a good frame rate , and do not extend to still image crowd counting .the algorithm proposed by idrees _et al . _ is based on the understanding that it is difficult to obtain an accurate crowd count using a single feature . to overcome this, they use a combination of handcrafted features : hog based head detections , fourier analysis , and interest points based counting .the post processing is done using multi - scale markov random field .however , handcrafted features often suffer a drop in accuracy when subjected to variances in illumination , perspective distortion , severe occlusion etc .though zhang _ et al . 
_ utilize a deep network to estimate crowd count , their model is trained using perspective maps of images . generating these perspective maps is a laborious process and is infeasible .we use a simpler approach for training our model , yet obtain a better performance .et al . _ also train a deep model for crowd count estimation .their model however is trained to determine only the crowd count and not the crowd density map , which is crucial for crowd analysis .our network estimates both the crowd count as well as the crowd density distribution .crowd images are often captured from varying view points , resulting in a wide variety of perspectives and scale variations .people near the camera are often captured in a great level of detail i.e. , their faces and at times their entire body is captured . however , in the case of people away from camera or when images are captured from an aerial viewpoint , each person is represented only as a head blob .efficient detection of people in both these scenarios requires the model to simultaneously operate at a highly semantic level ( faces / body detectors ) while also recognizing the low - level head blob patterns .our model achieves this using a combination of deep and shallow convolutional neural networks .an overview of the proposed architecture is shown in fig . [fig : architecture ] . in the following subsections, we describe these networks in detail .our deep network captures the desired high - level semantics required for crowd counting using an architectural design similar to the well - known vgg-16 network .although the vgg-16 architecture was originally trained for the purpose of object classification , the learned filters are very good generic visual descriptors and have found applications in a wide variety of vision tasks such as saliency prediction , object segmentation etc .our model efficiently builds up on the representative power of the vgg network by fine - tuning its filters for the problem of crowd counting .however , crowd density estimation requires per - pixel predictions unlike the problem of image classification , where a single discrete label is assigned for an entire image .we obtain these pixel - level predictions by removing the fully connected layers present in the vgg architecture , thereby making our network fully convolutional in nature .the vgg network has 5 max - pool layers each with a stride of 2 and hence the resultant output features have a spatial resolution of only times the input image . in our adaptation of the vgg model , we set the stride of the fourth max - pool layer to and remove the fifth pooling layer altogether .this enables the network to make predictions at times the input resolution .we handle the receptive - field mismatch caused by the removal of stride in the fourth max - pool layer using the technique of holes introduced in .convolutional filters with holes can have arbitrarily large receptive fields irrespective of their kernel size .using holes , we double the receptive field of convolutional layers after the fourth max - pool layer , thereby enabling them to operate with their originally trained receptive field . in our model, we aim to recognize the low - level head blob patterns , arising from people away from the camera , using a shallow convolutional network . since blob detection does not require the capture of high - level semantics , we design this network to be shallow with a depth of only 3 convolutional layers .each of these layers has filters with a kernel size of . 
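a sketch of the two branches described here is given below , written in pytorch purely for illustration ( the authors work in the caffe / deeplab framework ) . the vgg - like deep branch reduces the stride of the fourth max - pool layer , drops the fifth pooling layer and dilates the subsequent convolutions ; the shallow branch consists of three convolutional layers . the channel widths , kernel sizes , pooling type of the shallow branch and the fusion head ( which anticipates the combination step described in the next paragraph ) are assumptions of this sketch rather than the exact published configuration .

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class DeepBranch(nn.Module):
    """vgg-16-style fully convolutional front end: the fourth max-pool keeps
    kernel 2 but stride 1, the fifth pool is removed, and the following
    convolutions are dilated ('holes') so they keep their trained receptive
    field. channel widths follow vgg-16."""
    def __init__(self):
        super().__init__()
        def block(cin, cout, n, dilation=1):
            layers = []
            for i in range(n):
                layers += [nn.Conv2d(cin if i == 0 else cout, cout, 3,
                                     padding=dilation, dilation=dilation),
                           nn.ReLU(inplace=True)]
            return layers
        self.features = nn.Sequential(
            *block(3, 64, 2), nn.MaxPool2d(2, 2),
            *block(64, 128, 2), nn.MaxPool2d(2, 2),
            *block(128, 256, 3), nn.MaxPool2d(2, 2),
            *block(256, 512, 3), nn.MaxPool2d(2, 1),   # stride reduced, no fifth pool
            *block(512, 512, 3, dilation=2),           # dilated conv block after pool4
        )
    def forward(self, x):
        return self.features(x)

class ShallowBranch(nn.Module):
    """three convolutional layers aimed at small head-blob patterns; kernel
    size, width and pooling type are assumptions of this sketch."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 24, 5, padding=2), nn.ReLU(inplace=True), nn.AvgPool2d(2, 2),
            nn.Conv2d(24, 24, 5, padding=2), nn.ReLU(inplace=True), nn.AvgPool2d(2, 2),
            nn.Conv2d(24, 24, 5, padding=2), nn.ReLU(inplace=True), nn.AvgPool2d(2, 2),
        )
    def forward(self, x):
        return self.features(x)

class CrowdDensityNet(nn.Module):
    """one plausible fusion: concatenate the two branch outputs at a common
    spatial size, apply a 1x1 convolution and upsample to the input size."""
    def __init__(self):
        super().__init__()
        self.deep, self.shallow = DeepBranch(), ShallowBranch()
        self.fuse = nn.Conv2d(512 + 24, 1, kernel_size=1)
    def forward(self, x):
        d, s = self.deep(x), self.shallow(x)
        d = F.interpolate(d, size=s.shape[-2:], mode='bilinear', align_corners=False)
        density = self.fuse(torch.cat([d, s], dim=1))
        return F.interpolate(density, size=x.shape[-2:], mode='bilinear',
                             align_corners=False)

net = CrowdDensityNet()
print(net(torch.randn(1, 3, 224, 224)).shape)   # torch.Size([1, 1, 224, 224])
```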
to make the spatial resolution of this network s prediction equal to that of its deep counterpart, we use pooling layers after each convolution layer .our shallow network is primarily used for the detection of small head - blobs . to ensure that there is no loss of count due to max - pooling , we use average pooling layers in the shallow network .we concatenate the predictions from the deep and shallow networks , each having a spatial resolution of times the input image , and process it using a 1x1 convolution layer .the output from this layer is upsampled to the size of the input image using bilinear interpolation to obtain the final crowd density prediction .the total count of the people in the image can be obtained by a summation over the predicted density map .the network is trained by back - propagating the loss computed with respect to ground - truth .training a fully convolutional network using the ground - truth of head annotations , marked as a binary dot corresponding to each person , would be difficult .the exact position of the head annotations is often ambiguous , and varies from annotator to annotator ( forehead , centre of the face etc . ) , making cnn training difficult . in ,the authors have trained a deep network to predict the total crowd count in an image patch .but using such a ground truth would be suboptimal , as it would nt help in determining which regions of the image actually contribute to the count and by what amount .et al . _ have generated ground truth by blurring the binary head annotations , using a kernel that varies with respect to the perspective map of the image .however , generating such perspective maps is a laborious task and involves manually labelling several pedestrians by marking their height .we generate our ground truth by simply blurring each head annotation using a gaussian kernel normalized to sum to one .this kind of blurring causes the sum of the density map to be the same as the total number of people in the crowd .preparing the ground truth in such a fashion makes the ground truth easier for the cnn to learn , as the cnn no longer needs to get the exact point of head annotation right .it also provides information on which regions contribute to the count , and by how much .this helps in training the cnn to predict both the crowd density as well as the crowd count correctly .as cnns require a large amount of training data , we perform an extensive augmentation of our training dataset .we primarily perform two types of augmentation .the first type of augmentation helps in tackling the problem of scale variations in crowd images , while the second type improves the cnn s performance in regions where it is highly susceptible to making mistakes i.e. , highly dense crowd regions . in order to make the cnn robust to scale variations , we crop patches from the multi - scale pyramidal representation of each training image .we consider scales of to , incremented in steps of , times the original image resolution ( as shown in fig.[fig : imagepyramid_1 ] ) for constructing the image pyramid .we crop patches with overlap from this pyramidal representation . with this augmentation ,the cnn is trained to recognize people irrespective of their scales .we observed that cnns find highly dense crowds inherently difficult to handle . 
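the ground - truth generation described above , i.e. , blurring each head annotation with a gaussian kernel that sums to one so that the density map integrates to the true count , can be written compactly as below . the kernel width is an assumption of this sketch ; the text only requires a normalised kernel .

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def density_map_from_heads(head_points, image_shape, sigma=4.0):
    """head_points: iterable of (row, col) head annotations.
    returns a density map whose sum equals the number of annotated heads
    (up to truncation of the kernel at the image border)."""
    density = np.zeros(image_shape, dtype=np.float64)
    for r, c in head_points:
        r, c = int(round(r)), int(round(c))
        if 0 <= r < image_shape[0] and 0 <= c < image_shape[1]:
            density[r, c] += 1.0
    # a normalised gaussian spreads each unit of mass without changing the total
    return gaussian_filter(density, sigma=sigma, mode='constant')

heads = [(120.3, 64.8), (118.0, 70.2), (300.5, 411.9)]   # toy annotations
dmap = density_map_from_heads(heads, image_shape=(480, 640))
print("estimated count from density map:", dmap.sum())   # close to 3
```

as noted above , highly dense regions remain the hardest cases for the cnn .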
to overcome this ,we augment the training data by sampling high density patches more often .we evaluate our approach for crowd counting on the challenging ucf_cc_50 dataset .this dataset contains 50 gray scale images , each provided with head annotations .the number of people per image varies between 94 and 4543 , with an average of 1280 individuals per image .the dataset comprises of images from a wide range of scenarios such as concerts , political rallies , religious gatherings , stadiums etc . in a manner similar to recent works , we evaluate the performance of our approach using 5-fold cross validation .we randomly divide the dataset into five splits with each split containing 10 images . in each fold of the cross validation , we consider four splits ( 40 images ) for training the network and the remaining split ( 10 images ) for validating its performance .we sample patches from each of the 40 training images following the previously described data augmentation method .this procedure yields an average of 50,292 training patches per fold .we train our deep convolutional network using the deeplab version of caffe deep learning framework , using titan x gpus .our network was trained using stochastic gradient descent ( sgd ) optimization with a learning rate of and momentum of .the average training time per fold is about 5 hours .we use mean absolute error ( mae ) to quantify the performance of our method .mae computes the mean of absolute difference between the actual count and the predicted count for all the images in the dataset .the results of the proposed approach along with other recent methods are shown in table .[ tab : results ] .the results shown do not include any post - processing methods .the results illustrate that our approach achieves state - of - the - art performance in crowd counting .[ tab : results ] .quantitative results of our approach along with other state - of - the - art methods on ucf_cc_50 dataset . [ cols="^,^",options="header " , ]in this paper , we proposed a deep learning based approach to estimate the crowd density and total crowd count from highly dense crowd images .we showed that using a combination of a deep network as well as a shallow network is essential for detecting people under large scale variations and severe occlusion .we also show that the challenge of varying scales , and inherent difficulties in highly dense crowds , can be effectively tackled by augmenting the training images .our method outperforms the state - of - the - art methods on the challenging ucf_cc_50 dataset .this work was supported by science and engineering research board ( serb ) , department of science and technology ( dst ) , govt .of india ( proj no .sb / s3/eece/0127/2015 ) .a. b. chan , z .- s .j. liang , and n. vasconcelos .privacy preserving crowd monitoring : counting people without people models or tracking . in _ ieee conference on computer vision and pattern recognition _ , 2008 .n. dalal and b. triggs .histograms of oriented gradients for human detection . in _ieee computer society conference on computer vision and pattern recognition , 2005 ._ , volume 1 , pages 886893 .ieee , 2005 .
|
our work proposes a novel deep learning framework for estimating crowd density from static images of highly dense crowds . we use a combination of deep and shallow fully convolutional networks to predict the density map for a given crowd image . such a combination is used to effectively capture both the high - level semantic information ( face / body detectors ) and the low - level features ( blob detectors ) that are necessary for crowd counting under large scale variations . as most crowd datasets have limited training samples ( < 100 images ) and deep learning based approaches require large amounts of training data , we perform multi - scale data augmentation . augmenting the training samples in such a manner helps in guiding the cnn to learn scale - invariant representations . our method is tested on the challenging ucf_cc_50 dataset and is shown to outperform the state - of - the - art methods .
|
one of the most important tasks in copula modeling is to decide which specific copula to employ . for that purposea rather general approach is to use omnibus goodness - of - fit tests that require minimum assumptions , for recent reviews see , e.g. , , , or .other more specific avenues consist in applying graphical tools ( ) or information based criteria ( ) . in fully parametric models , as considered in this paper , the latter can be formulated in terms of functions of the fisher information matrices , which will allow us to generate optimal designs for copula model discrimination .design optimization is generally largely employed in many applied fields as a convenient tool to improve drawing informative experiments .recently , in the theory of -optimality has been extended to a wider class of models for the usage of copulas .although the employment of such functions allows for a substantial flexibility in modeling , it also leads to the natural question of their ( proper ) choice . as stated, developments of powerful goodness - of - fit tests and strategies to avoid the wrong choice of the dependence constitute a considerable part of the literature on copulas .the issue of model choice or discrimination is in principle also a well known part of ( optimum ) experimental design theory and several criteria ( e.g. , -optimality , -optimality , -optimality ) have been proposed ( see , and for a special application to copula models ) . in this workwe first extend the general theory of -optimality to copula models . then, we present the usage of the -criterion to discriminate between various classes of dependences and possible scenarios . finally , we show through some examples possible real applications .in this section we provide the extension for the -criterion of a kiefer - wolfowitz type equivalence theorem , assuming the dependence described by a copula model .we then illustrate the basic idea of the new approach through a motivating example already analyzed in .let us consider a vector of control variables , where is a compact set .the results of the observations and of the expectations in a regression experiment are the vectors = \mathbf{e}[(y_1,y_2 ) ] = \boldsymbol{\eta}(\mathbf{x},\boldsymbol{\beta } ) = ( \eta_1(\mathbf{x},\boldsymbol{\beta}),\eta_2(\mathbf{x},\boldsymbol{\beta})),\ ] ] where is a certain unknown parameter vector to be estimated and are known functions .let us call the margins of each for all and the joint probability density function of the random vector , where is the unknown copula parameter vector .according to sklar s theorem ( ) , let us assume that the dependence between and is modeled by a copula function the fisher information matrix for a single observation is a matrix whose elements are \right)\ ] ] where .the aim of design theory is to quantify the amount of information on both sets of parameters and , respectively , from the regression experiment embodied in the fisher information matrix . for a concrete experiment with independent observations at support points ,the corresponding information matrix then is where and are such that : the approximate design theory is concerned with finding such that it maximizes some scalar function , i.e. , the so - called design criterion . in , we have developed the theory for the well known criterion of -optimality , i.e. , the criterion , if is non - singular . 
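the quantities introduced above can be illustrated with the following numpy sketch : a monte carlo estimate of the per - observation fisher information matrix for a copula - based bivariate regression model , the information matrix of an approximate design , and the resulting log - determinant design criteria , including a contrast - based criterion of the d_a type introduced next . the gaussian copula with unit - variance gaussian margins , the mean functions and the contrast matrix are placeholder assumptions of this sketch ; the authors actual computations use the r package docopulae mentioned later in the text .

```python
import numpy as np
from scipy.stats import norm

# illustrative model (an assumption): two gaussian margins with unit variance,
# means polynomial in beta, joined by a gaussian copula with correlation alpha.
def eta(x, beta):
    return np.array([beta[0] + beta[1] * x, beta[2] * x + beta[3] * x ** 2])

def log_density(y, x, theta):
    beta, alpha = theta[:-1], theta[-1]
    m = eta(x, beta)
    z = y - m                                    # standard-normal scores of the margins
    log_marg = norm.logpdf(y, loc=m).sum()
    log_cop = (-0.5 * np.log(1 - alpha ** 2)
               + (2 * alpha * z[0] * z[1] - alpha ** 2 * (z[0] ** 2 + z[1] ** 2))
               / (2 * (1 - alpha ** 2)))
    return log_marg + log_cop

def fisher_information(x, theta, n_mc=2000, eps=1e-5, rng=None):
    """monte carlo estimate of the per-observation information matrix m(x, theta)
    as the expected outer product of the numerical score."""
    rng = rng or np.random.default_rng(0)
    beta, alpha = theta[:-1], theta[-1]
    cov = np.array([[1.0, alpha], [alpha, 1.0]])
    ys = rng.multivariate_normal(eta(x, beta), cov, size=n_mc)
    p = len(theta)
    info = np.zeros((p, p))
    for y in ys:
        score = np.array([(log_density(y, x, theta + eps * e)
                           - log_density(y, x, theta - eps * e)) / (2 * eps)
                          for e in np.eye(p)])
        info += np.outer(score, score)
    return info / n_mc

def d_criterion(points, weights, theta):
    m = sum(w * fisher_information(x, theta) for x, w in zip(points, weights))
    return np.linalg.slogdet(m)[1]               # log det of the design information

def da_criterion(points, weights, theta, K):
    m = sum(w * fisher_information(x, theta) for x, w in zip(points, weights))
    return -np.linalg.slogdet(K.T @ np.linalg.solve(m, K))[1]

theta = np.array([1.0, -0.5, 0.8, 0.3, 0.4])     # (beta_1 .. beta_4, alpha)
design_pts, design_wts = [-1.0, 0.0, 1.0], [1 / 3] * 3
print("d-criterion :", d_criterion(design_pts, design_wts, theta))
K = np.zeros((5, 1)); K[4, 0] = 1.0              # placeholder contrast: the copula parameter
print("da-criterion:", da_criterion(design_pts, design_wts, theta, K))
```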
in this work ,we consider the case when the primary interest is in certain meaningful parameter contrasts .such contrasts are element of the vector , where is an matrix of rank .if is non - singular , then the variance matrix of the least - square estimator of is proportional to and then a natural criterion , generalization of the -optimality for this context , would be of maximizing ^{-1} ] , we observe an independent pair of random variables and , such that = \beta_1 + \beta_2 x + \beta_3 x^2 , \ ] ] = \beta_4 x + \beta_5 x^3 + \beta_6 x^4.\ ] ] the model is then linear in the parameter vector and has dependence described by the product copula with gaussian margins .[ cols=">,>",options="header " , ] we are now interested in verifying whether the -optimal design is informative enough to discriminate between asymmetry and symmetry . to this aim , we compare -optimal designs for to the corresponding -optimal designs ( figure [ fig : dswei_case ] ) . in this case , the loss in -efficiency never exceeds .in contrast to the binary case , such a result indicates that the -optimal design is already quite adequate for discriminating between symmetric and asymmetric models .in this paper we embed the issue of the choice of the copula in the framework of discrimination design .we present a new methodology based on the -optimality to construct design that discriminate between various dependences . through some examples we highlight the strength of the proposed technique due to the usage of the copula properties .in particular , the proposed approach allows to check the robustness of the -optimal design in the sense of discrimination and to construct more informative designs able to distinguish between classes of dependences .all the shown results are obtained by the usage of the r package docopulae ( ) .although we here compare just few possible dependences , the general construction is much wider .the r package docopulae allows the interested reader to run designs assuming a broad variety of dependence structures .it then provides a strong computational tool to the usage of copula models in real applications .the innovative approach we present in this paper is promising as it breaks new ground in the field of experimental design . in the future , we aim at generalizing other discrimination criteria such as and to flexible copula models ( ) .furthermore , powerful compound criteria might be developed for such models ( see , for instance , ) .in addition , the construction of multistage design procedures that allow for discrimination and estimation might be of great interest in special applications such as clinical trial studies ( ) .this work has been supported by the project anr-2011-is01 - 001 - 01 desire " and austrian science fund ( fwf ) i 833-n18 .fabrizio durante and elisa perrone .asymmetric copulas and their application in design of experiments . in susanne saminger - platz and radko mesiar , editors , _ on logical , algebraic , and probabilistic aspects of fuzzy set theory _ ,volume 336 of _ studies in fuzziness and soft computing _ , pages 157172 .springer international publishing , 2016 .jean - david fermanian . .in piotr jaworski , fabrizio durante , and wolfgang k. hrdle , editors , _ copulae in mathematical and quantitative finance _ , volume 213 of _ lecture notes in statistics _ , pages 6189 .springer berlin heidelberg , 2013 .christian genest and johanna g. nelehov . assessing and modeling asymmetry in bivariate continuous data . 
in piotr jaworski , fabrizio durante , and wolfgang karl härdle , editors , _ copulae in mathematical and quantitative finance _ , volume 213 of _ lecture notes in statistics _ , pages 91 - 114 . springer berlin heidelberg , 2013 .
|
optimum experimental design theory has recently been extended to parameter estimation in copula models . however , the choice of the correct dependence structure still requires deeper analysis . in this work the issue of copula selection is treated by using discrimination design techniques . the newly proposed approach consists in the use of -optimality , following an extension of the corresponding equivalence theory . we also present some examples and highlight the strength of such a criterion as a way to discriminate between various classes of dependences . copula selection , design discrimination , stochastic dependence .
|
one of the most important resources for quantum information processing is entanglement .entangled states and many of its applications do not have any classical counterpart , and thus , an entangled state essentially reveals and exploits nonclassicality ( quantumness ) present in a physical system .for example , entanglement is essential ( may not be sufficient ) for dense coding , teleportation and device independent quantum cryptography ; and none of these schemes ( i.e. , dense coding , teleportation , device independent quantum cryptography ) has any classical analog . among these nonclassical schemes , teleportation deserves special attention , as a large number of quantum communication schemes can be viewed as variants of teleportation .for example , quantum information splitting ( qis ) , hierarchical qis ( hqis ) , quantum secret sharing ( qss ) , quantum cryptography based on entanglement swapping may be viewed as variants of teleportation .usually , standard entangled states , which are inseparable states of orthogonal states , are used to implement these teleportation based schemes .however , entangled non - orthogonal states do exist and they may be used to implement some of these teleportation - based protocols .specifically , entangled coherent states and schrodinger cat states prepared using coherent states are the typical examples of entangled non - orthogonal states .such a state was first introduced by sanders in 1992 . sincethen several investigations have been made on the properties and applications of the entangled non - orthogonal states .the investigations have yielded a hand full of interesting results . to be precise , in ref . , prakash et al . , have provided an interesting scheme for entanglement swapping for a pair of entangled coherent states ; subsequently , they investigated the effect of noise on the teleportation fidelity obtained in their scheme , and dong et al . , showed that this type of entanglement swapping schemes can be used to construct a scheme for continuous variable quantum key distribution ( qkd ) ; the work of hirota et al . , has established that entangled coherent states , which constitute one of the most popular examples of the entangled non - orthogonal states , are more robust against decoherence due to photon absorption in comparison to the conventional bi - photon bell states ; another variant of entangled non - orthogonal states known as squeezed quasi bell state has recently been found to be very useful for quantum phase estimation ; in , adhikari et al . , have investigated the merits and demerits of an entangled non - orthogonal state - based teleportation scheme analogous to the standard teleportation scheme . in brief , these interesting works have established that most of the quantum computing and communication tasks that can be performed using usual entangled states of orthogonal states can also be performed using entangled non - orthogonal states .the concept of masfi , which was claimed to corresponds to the least value of possible fidelity for any given information , was introduced by prakash et al .subsequently , in a series of papers ( and references therein ) , they have reported masfi for various protocols of quantum communication , and specially for the imperfect teleportation . here , it is important to note that adhikari et al . , tried to extend the domain of the standard teleportation protocol to the case of performing teleportation using entangled non - orthogonal states . 
to be precise , they studied teleportation of an unknown quantum state by using a specific type of entangled non - orthogonal state as the quantum channel . in their protocol , the input state to be teleported from alice to bob was prepared by a third party charlie , who then sent the prepared state to alice through a noiseless quantum channel . clearly , the presence of charlie was not essential , and in the absence of charlie , their scheme can be viewed simply as a scheme for teleportation using an entangled non - orthogonal state . interestingly , they showed that the amount of non - orthogonality present in the quantum channel affects the average fidelity ( ) of teleportation . however , their work was restricted to a specific type of entangled non - orthogonal state ( a specific type of quasi bell state ) , and neither the optimality of the scheme nor the effect of noise on it was investigated by them . in fact , in their work no effort had been made to perform a comparative study ( in terms of different measures of teleportation quality ) among the possible quasi bell states that can be used as the teleportation channel . further , the works of prakash et al . and others ( and references therein ) have established that , in addition to minimum assured fidelity ( masfi ) , minimum average fidelity ( mavfi ) , which we refer to here as minimum fidelity ( mfi ) , can be used as a measure of the quality of teleportation . keeping these points in mind , in the present paper we have studied the effect of the amount of non - orthogonality on , mfi , and masfi for teleportation of a qubit using the different quasi bell states that can be used as the quantum channel . we have compared the performance of these quasi bell states as teleportation channels in an ideal situation ( i.e. , in the absence of noise ) and in the presence of various types of noise ( e.g. , amplitude damping ( ad ) and phase damping ( pd ) ) . the relevance of the choice of these noise models has been well established in the past ( and references therein ) . further , using horodecki et al . s relation between optimal fidelity ( ) and maximal singlet fraction ( ) , it is established that the entangled non - orthogonal state based teleportation scheme investigated in the present work is optimal for all the cases studied here ( i.e. , for all the quasi bell states ) . the remaining part of the paper is organized as follows . in sec . [ sec : entangled - non - orthngooal - states ] , we briefly describe the mathematical structure of the entangled non - orthogonal states and how to quantify the amount of entanglement present in such states using concurrence . in this section , we restrict ourselves to a very short description , as most of the expressions reported here are well known . however , they are required for the sake of a self - sufficient description . the main results of the present paper are reported in sec . [ sec : quantum - teleportation - using ] , where we provide expressions for , , and for all four quasi - bell states , establish for all the quasi bell states , and show that deterministic perfect teleportation is possible with the help of quasi - bell states .
in sec .[ sec : effect - of - noise ] , effects of amplitude damping and phase damping noise on is discussed for various alternative situations , and finally the paper is concluded in sec .[ sec : conclusion ] .basic mathematical structures of standard entangled states and entangled non - orthogonal states have been provided in detail in several papers ( and references therein ) .schmidt decomposition of an arbitrary bipartite state is written as where are the real numbers such that .further , is the orthonormal basis of subsystem in hilbert space .the state is entangled if at least two of the are non - zero . here, we may note that a standard bipartite entangled state can be expressed as where and are two complex coefficients that ensure normalization by satisfying in case of orthogonal states ; and are normalized states of the first system and and are normalized states of the second system , respectively .these states of the subsystems satisfy and for the conventional entangled states of orthogonal states and they satisfy and for the entangled nonorthogonal states .thus , an entangled state involving non - orthogonal states , which is expressed in the form of eq .( [ eq:1 ] ) , has the property that the overlaps and are nonzero , and the normalization condition would be here and in what follows , for simplicity , we have omitted the subsystems mentioned in the subscript .the two non - orthogonal states of a given system are assumed to be linearly independent and span a two dimensional subspace of the hilbert space .we may choose an orthonormal basis as for system , and similarly , for system , where , and now , we can express the non - orthogonal entangled state described by eq .( [ eq:1 ] ) using the orthogonal basis as follows with where the normalization constant is given by ^{-\frac{1}{2}}.\label{eq : normalization}\ ] ] eq .( [ eq:5 ] ) shows that an arbitrary entangled non - orthogonal state can be considered as a state of two logical qubits .following standard procedure , the concurrence ( ) of the entangled state can be obtained as for the entangled state to be maximally entangled , we must have _ showed that the state is maximally entangled state if and only if one of the following conditions is satisfied : ( i ) for the orthogonal case , and ( ii ) and for the non - orthogonal states , where is a real parameter . before we investigate the teleportation capacity of the entangled non - orthogonal states , we would like to note that if we choose in eq .( [ eq:1 ] ) , then for the case of orthogonal basis , normalization condition will ensure that , and the state will reduce to a standard bell state , and its analogous state under the same condition ( i.e. , for ) in non - orthogonal basis would be where is the normalization constant . in analogyto its analogous entangled non - orthogonal state is denoted as and referred to as quasi bell state .similarly , in analogy with the other 3 bell states and we can obtain entangled non - orthogonal states denoted by and , respectively .in addition to these notations , in what follows we also use four entangled non - orthogonal states , which are used in this paper , are usually referred to as quasi bell states . 
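a small numerical companion to the construction above is given below : the two non - orthogonal states are orthonormalised , the entangled state is written as a two - logical - qubit vector , and its concurrence is evaluated with the standard pure - state formula c = 2|ad - bc| . for simplicity the overlap is taken to be real here , whereas the text treats a complex overlap ; the printed values are illustrative only .

```python
import numpy as np

def two_qubit_vector(mu, nu, p):
    """state mu |a>|b> + nu |b>|a> for non-orthogonal |a>, |b> with real overlap
    p = <a|b>, expressed in the orthonormal basis |0> = |a>,
    |1> = (|b> - p|a>) / sqrt(1 - p**2). returns the normalised 4-vector in the
    |00>, |01>, |10>, |11> basis."""
    q = np.sqrt(1.0 - p ** 2)
    a_vec = np.array([1.0, 0.0])                 # |a> in the orthonormal basis
    b_vec = np.array([p, q])                     # |b> in the orthonormal basis
    psi = mu * np.kron(a_vec, b_vec) + nu * np.kron(b_vec, a_vec)
    return psi / np.linalg.norm(psi)

def concurrence(psi):
    """concurrence of a two-qubit pure state a|00> + b|01> + c|10> + d|11>."""
    a, b, c, d = psi
    return 2.0 * abs(a * d - b * c)

for p in [0.0, 0.3, 0.7, 0.95]:                  # p = 0 recovers the orthogonal case
    c_plus = concurrence(two_qubit_vector(1.0, 1.0, p))    # symmetric, |psi_+>-like state
    c_minus = concurrence(two_qubit_vector(1.0, -1.0, p))  # antisymmetric, |psi_->-like state
    print(f"overlap {p:4.2f}:  C(+) = {c_plus:.3f}   C(-) = {c_minus:.3f}")
```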
they are not essentially maximally entangled , and they may be expressed in orthogonal basis ( see last column of table [ tab : quasi - bell - states ] ) .notations used in the rest of the paper , expansion of the quasi bell states in orthogonal basis , etc ., are summarized in table [ tab : quasi - bell - states ] , where we can see that is equivalent to and thus is always maximally entangled and can lead to perfect deterministic teleportation as can be done using usual bell states .so is not a state of interest in noiseless case . keeping this in mind , in the next section, we mainly concentrate on the properties related to the teleportation capacity of the other 3 quasi bell states . however , in sec .[ sec : effect - of - noise ] , we would discuss the effect of noise on all 4 quasi bell states . state ( i.e. , bell - like entangled non - orthogonal state having a mathematical form analogous to the usual bell state given in the 2nd column of the same row ) & state in orthogonal basis that is equivalent to the quasi - bell state mentioned in the 3rd column of the same row + 1 . & & & + 2 . & & & + 3 . & & & + 4 . & & & +let us consider that an arbitrary single qubit quantum state is to be teleported using the quasi bell state where the normalization constant ^{\frac{-1}{2}}$ ] .these quasi bell states may be viewed as particular cases of eq .( [ eq:1 ] ) with , and . in general, is a complex number , and consequently , we can write where the real parameters _ _ and _ _ , respectively , denote the modulus and argument of the complex number with and . as implies , orthogonal basis , we may consider this parameter as the primary measure of non - orthogonality .this is so because no value of will lead to orthogonality condition .further , for we can consider as a secondary measure of non - orthogonality . now , using eq .( [ eq:10 ] ) , and the map between orthogonal and non - orthogonal bases we may rewrite eq .( [ eq:3a ] ) as }{\sqrt{1-r^{2}}}.\label{eq:11.a}\ ] ] thus , we have and consequently , can now be expressed as where and .this is already noted in table 1 , where we have also noted that if we express in basis , we obtain the bell state , which is maximally entangled and naturally yields unit fidelity for teleportation .it s not surprising to obtain maximally entangled non - orthogonal states , as in it has been already established that there exists a large class of bipartite entangled non - orthogonal states that are maximally entangled under certain conditions . using eq .( [ eq:7 ] ) , we found the concurrence of the symmetric state as clearly , is not maximally entangled unless which implies orthogonality .thus , all quasi bell states of the form are non - maximally entangled .now , if the state is used as quantum channel , then following prakash et al . , we may express the masfi for teleportation of single qubit state ( [ eq:8 ] ) as since the value of lies between 0 and 1 , the decreases continuously as increases . for orthogonal state = , and thus , .thus , we may conclude that the quasi bell state will never lead to deterministic perfect teleportation .however , its bell state counter part ( case ) leads to deterministic perfect teleportation . 
here, it would be apt to note that for teleportation of a single qubit state using as the quantum channel , average teleportation fidelity can be obtained as this is obtained by computing teleportation fidelity where is the input state , and with , and is a measurement operator in bell basis ( are defined in the second column of table [ tab : quasi - bell - states ] ) , and is the teleported state corresponding to projective measurement in bell basis .interestingly , is found to depend on the parameters of the state to be teleported ( cf .( 11 ) of ref .thus , if we use bloch representation and express the state to be teleported as , then the teleportation fidelity will be a function of state parameters and ( here is used to distinguish the state parameter from the non - orthogonality parameter ) .an average fidelity is obtained by taking average over all possible states that can be teleported , i.e. , by computing .this definition of average fidelity is followed in and in the present paper . however , in the works of prakash et al .( and references therein ) , was considered as fidelity and as average fidelity .they minimized over the parameters of the state to be teleported and referred to the obtained fidelity as the mavfi .as that notation is not consistent with the definition of average fidelity used here . in what follows, we will refer to the minimum value of as mfi , but it would be the same as mavfi defined by prakash et al .further , we would like to note that in and in the present paper , it is assumed that a standard teleportation scheme is implemented by replacing a bell state by its partner quasi bell state , and as a consequence for a specific outcome of bell measurement of alice , bob applies the same pauli operator for teleportation channel or ( which is a quasi bell state ) as he used to do for the corresponding bell state or where however , the expression of masfi used here ( see eq .( [ eq:14 ] ) ) and derived in are obtained using an optimized set of unitary ( cf .discussion after eq .( 10 ) in ref . ) and are subjected to outcome of bell measurement of alice , thus no conclusions should be made by comparing masfi with mfi or . from eqs .( [ eq:14 ] ) and ( [ eq:15 ] ) , we can see that for a standard bell state ( i.e. , when ) , .however , for = 1 , and .thus , we conclude that for a standard bell state both and average teleportation fidelity have the same value .this is not surprising , as for = 0 the entangled state becomes maximally entangled .however , for this state is non - maximally entangled , and interestingly , for = 1 , we obtain , whereas is nonzero .we have already noted that no comparison of and obtained as above should be made as that may lead to confusing results .here we give an example , according to , masfi is the least possible value of the fidelity , but for certain values of , we can observe that for example , for , we obtain whereas clearly , minimum found in computation of , and the average found in the computation of is not performed over the same data set , specifically not using the same teleportation mechanism ( same unitary operations at the receiver s end ) .now we may check the optimality of the teleportation scheme by using the criterion introduced by horodecki et al . in ref . . 
according to this criterion optimal average fidelity that can be obtained for a teleportation schme which uses a bipartite entangled quantum state as the quantum channel is where is the maximal singlet fraction defined as where : is bell state described above and summarized in table [ tab : quasi - bell - states ] . as we are interested in computing for quasi bell states which are pure states , we can write where is a quasi bell state .a bit of calculation yields that maximal singlet fraction for the quasi bell state is now using ( [ eq:15 ] ) , ( [ eq : horodecki ] ) and ( [ eq : singlerfracpsiplus ] ) , we can easily observe that thus , a quasi bell state based teleportation scheme which is analogous to the usual teleportation scheme , but uses a quasi bell state as the quantum channel is optimal .we can also minimize with respect to and to obtain which is incidentally equivalent to maximal singlet fraction in this case .so far we have reported analytic expressions for some parameters ( e.g. , and that can be used as measures of the quality of a teleportation scheme realized using the teleportation channel and have shown that the teleportation scheme obtained using is optimal . among these analytic expressions , was already reported in .now , to perform a comparative study , let us consider that the teleportation is performed using one of the remaining two quasi bell states of our interest ( i.e. , using or described in table [ tab : quasi - bell - states ] ) as quantum channel . in that case , we would obtain the concurrence as clearly , in contrast to which was only dependent , the concurrence depends on both the parameters and . from eq .( [ eq:19 ] ) it is clear that at quasi bell state is maximally entangled , even though the states and are non - orthogonal as .thus , at these points , states are maximally entangled .if quantum state is used as quantum channel , then for teleportation of an arbitrary single qubit information state ( [ eq:8 ] ) would be and similarly , that for quasi bell state would be thus , the expressions for are also found to depend on both and .clearly , at and , and hence for these particular choices of entangled non - orthogonal state leads to the deterministic perfect teleportation of single qubit information state .clearly , for these values of , indicating maximal entanglement . however , the entangled state is still non - orthogonal as can take any of its allowed values .similarly , at and , and hence the entangled state of the non - orthogonal states and leads to deterministic perfect teleportation in these conditions .thus , deterministic perfect teleportation is possible using quasi bell states or as quantum channels for teleportation , but it is not possible with unless it reduces to its orthogonal state counter part ( i.e. , we may now compute the average fidelity for , by using the procedure adopted above for and obtain ( color online ) the dependence of the average fidelity on non - orthogonality parameters is established via 2 and 3 dimensional plots .specifically , variation with for different values of and is shown in ( a ) , ( b ) , and ( c ) , respectively .similarly , ( d ) shows 3 dimensional variation for both and in light ( yellow ) and dark ( blue ) colored surface plots .the same fact can also be visualized using contour plots for both of these cases in ( e ) and ( f ) . 
the horizontal dotted ( black ) line in ( a ), ( b ) , and ( c ) corresponds to classical fidelity .note that the quantity plotted in this and the following figures is , which is mentioned as in the -axis . ]now , we would like to compare the average fidelity expressions obtained so far for various values of non - orthogonality parameters for all the quasi bell states .the same is illustrated in fig .[ fig : avfid ] .specifically , in fig .[ fig : avfid ] ( a)-(c ) we have shown the dependence of average fidelity on secondary non - orthogonality parameter ( ) using a set of plots for its variation with primary non - orthogonality parameter ( ) .this establishes that the primary non - orthogonality parameter has more control over the obtained average , fidelity but a secondary parameter is also prominent enough to change the choice of quasi bell state to be preferred for specific value of primary non - orthogonality parameter .thus , the amount of non - orthogonality plays a crucial role in deciding which quasi bell state would provide highest average fidelity for a teleportation scheme implemented using quasi bell state as the teleportation channel .further , all these plots also establish that there is always a quasi bell state apart from , which has average fidelity more than classically achievable fidelity for all values of .we may now further illustrate the dependence of the average fidelity on both non - orthogonality parameters via 3 d and contour plots shown in fig .[ fig : avfid ] ( d ) , ( e ) , and ( f ) . these plotsestablish that the average fidelity of state increases for the values of for which decreases , and vice - versa .we can now establish the optimality of the teleportation scheme implemented using by computing average fidelity and maximal singlet fraction for these channels .specifically , computing the maximal singlet fraction using the standard procedure described above , we have obtained using hordecky et al . , criterion ( [ eq : horodecki ] ) , and eq .( [ eq : avefidelityphipm])-([eq : singlet fractionphipm ] ) , we can easily verify that .thus , the teleportation scheme realized using any of the quasi bell state are optimal .however , they are not equally efficient for a specific choice of non - orthogonality parameter as we have already seen in fig .[ fig : avfid ] .this motivates us to further compare the performances of these quasi bell states as a potential quantum channel for teleportation .for the completeness of the comparative investigation of the teleportation efficiencies of different quasi bell states here we would also like to report mfi that can be achieved using different quasi bell states .the same can be computed as above , and the computation leads to following analytic expressions of mfi for : and interestingly , the comparative analysis performed with the expressions of mfi using their variation with various parameters led to quite similar behavior as observed for in fig .[ fig : avfid ] .therefore , we are not reporting corresponding figures obtained for mfi .in this section , we would like to analyze and compare the average fidelity obtained for each quasi bell state over two well known markovian channels , i.e. , ad and pd channels . 
specifically , in open quantum system formalism , a quantum state evolving under a noisy channel can be written in terms of kraus operators as follows where are the kraus operators of a specific noise model .for example , in the case of ad channel the kraus operators are similarly , the kraus operators for pd noise are for both ad and pd noise , is the decoherence rate , which determines the effect of the noisy channel on the quantum system . to analyze the feasibility of quantum teleporation scheme using quasi bell states and to compute the average fidelity we use eqs .( [ eq : kraus - damping ] ) and ( [ eq : kraus - dephasing ] ) in eq .( [ eq : noise - affected - density - matrix-1 ] ) . finally , to quantify the effect of noise we use a distance based measure `` fidelity '' between the quantum state evolved under a specific noisy channel and the quantum state alice wish to teleport ( say ) .mathematically , which is the square of the conventional fidelity expression , and is the quantum state recovered at the bob s port under the noisy channel .further , details of the mathematical technique adopted here can be found in some of our recent works on secure and insecure quantum communication .we will start with the simplest case , where we assume that only bob s part of the quantum channel is subjected to either ad or pd noise .the assumption is justified as the quasi bell state used as quantum channel is prepared locally ( here assumed to be prepared by alice ) and shared afterwards . during alice to bob transmission of an entnagled qubit , it may undergo decoherence , but the probability of decoherence is much less for the other qubits that do nt travel through the channel ( remain with alice ) .therefore , in comparison of the bob s qubits , the alice s qubits or the quantum state to be teleported , which remain at the sender s end , are hardly affected due to noise .the effect of ad noise under similar assumptions has been analyzed for three qubit ghz and w states in the recent past .the average fidelity for all four quasi bell states , when bob s qubit is subjected to ad channel while the qubits not traveling through the channel are assumed to be unaffected due to noise , is obtained as ,\\ f_{ad}^{|\phi_{-}\rangle } & = & \frac{1}{-6 + 6r^{2}\cos2\theta}\left[-4+r^{2}\left(2 + 2\sqrt{1-\text{\ensuremath{\eta}}}-3\eta\right)-2\sqrt{1-\text{\ensuremath{\eta}}}\right.\\ & + & \left.2r^{4}(-1+\text{\ensuremath{\eta}})+\text{\ensuremath{\eta}}-2r^{2}\left(-2-\sqrt{1-\text{\ensuremath{\eta}}}+r^{2}\sqrt{1-\text{\ensuremath{\eta}}}\right)\cos2\theta\right],\\ f_{ad}^{|\psi_{+}\rangle } & = & \frac{4 + 2\sqrt{1-\text{\ensuremath{\eta}}}-\text{\ensuremath{\eta}}+r^{2}\left(-2\sqrt{1-\text{\ensuremath{\eta}}}+\eta\right)}{6\left(1+r^{2}\right)},\\ f_{ad}^{|\psi_{+}\rangle } & = & \frac{1}{6}\left[\left(4 + 2\sqrt{1-\eta}-\text{\ensuremath{\eta}}\right)\right ] .\end{array}\label{eq : adsq}\ ] ] here and in what follows , the subscript of fidelity corresponds to noise model and superscript represents the choice of quasi bell state used as teleportation channel .similarly , all the average fidelity expressions when bob s qubit is subjected to pd noise can be obtained as ,\\ f_{pd}^{|\phi_{-}\rangle } & = & \frac{1}{3}\left[2+\sqrt{1-\text{\ensuremath{\eta}}}-r^{2}\sqrt{1-\text{\ensuremath{\eta}}}+\frac{r^{2}-r^{4}}{-1+r^{2}\cos2\theta}\right],\\ f_{pd}^{|\psi_{+}\rangle } & = & \frac{2+\sqrt{1-\eta}-r^{2}\sqrt{1-\eta}}{3 + 3r^{2}},\\ f_{pd}^{|\psi_{-}\rangle } & = & 
\frac{1}{3}\left[2+\sqrt{1-\text{\ensuremath{\eta}}}\right ] .\end{array}\label{eq : pdsq}\ ] ] it is easy to observe that for ( i.e. , in the absence of noise ) the average fidelity expressions listed in eqs .( [ eq : adsq ] ) and ( [ eq : pdsq ] ) reduce to the average fidelity expressions corresponding to each quasi bell state reported in sec . [sec : quantum - teleportation - using ] .this is expected and can also be used to check the accuracy of our calculation .it would be interesting to observe the change in fidelity when we consider the effect of noise on alice s qubit as well .though , it remains at alice s port until she performs measurement on it in suitable basis , but in a realistic situation alice s qubit may also interact with its surroundings in the meantime .further , it can be assumed that the state intended to be teleported is prepared and teleported immediately .therefore , it is hardly affected due to noisy environment . here , without loss of generality ,we assume that the decoherence rate for both the qubits is same . using the same mathematical formalism adopted beforehand ,we have obtained the average fidelity expressions for all the quasi bell states when both the qubits in the quantum channel are affected by ad noise with the same decoherence rate .the expressions are ,\\ f_{ad}^{|\phi_{-}\rangle } & = & -\frac{1}{-3 + 3r^{2}\cos2\theta}\left[3 - 2r^{2}(-1+\text{\ensuremath{\eta}})^{2}+r^{4}(-1+\eta)^{2}+(-2+\text{\ensuremath{\eta}})\text{\ensuremath{\eta}}\right.\\ & + & \left.r^{2}\left(-3-r^{2}(-1+\text{\ensuremath{\eta}})+\text{\ensuremath{\eta}}\right)\cos2\theta\right],\\ f_{ad}^{|\psi_{+}\rangle } & = & \frac{3 - 2\eta+r^{2}(-1 + 2\eta)}{3\left(1+r^{2}\right)},\\ f_{ad}^{|\psi_{-}\rangle } & = & 1-\frac{2\text{\ensuremath{\eta}}}{3}. \end{array}\label{eq : adbq}\ ] ] similarly , the average fidelity expressions when both the qubits evolve under pd channel instead of ad channel are ,\\ f_{pd}^{|\phi_{-}\rangle } & = & \frac{1}{3}\left[3+r^{2}(-1+\eta)-\eta+\frac{r^{2}-r^{4}}{-1+r^{2}\cos2\theta}\right],\\ f_{pd}^{|\psi_{+}\rangle } & = & \frac{3+r^{2}(-1+\text{\ensuremath{\eta}})-\text{\ensuremath{\eta}}}{3\left(1+r^{2}\right)},\\ f_{pd}^{|\psi_{-}\rangle } & = & 1-\frac{\eta}{3}. \end{array}\label{eq : pdbq}\ ] ] ( color online ) the dependence of the average fidelity on the number of qubits exposed to ad channels is illustrated for and .the choice of the initial bell states in each case is mentioned in plot legends , where the superscript b , ab , and all corresponds to the cases when only bob s , both alice s and bob s , and all three qubits were subjected to the noisy channel .the same notation is adopted in the following figures . ]( color online ) variation of the average fidelity for all the quasi bell states is shown for all three cases with and . ] finally , it is worth analyzing the effect of noisy channels on the feasibility of the teleportation scheme , when even the state to be teleported is also subjected to the same noisy channel .the requirement of this discussion can be established as it takes finite time before operations to teleport the quantum state are performed .meanwhile , the qubit gets exposed to its vicinity and this interaction may lead to decoherence . here , for simplicity , we have considered the same noise model for the state to be teleported as for the quantum channel .we have further assumed the same rate of decoherence for all the three qubits . 
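the closed - form expressions quoted in this section can be cross - checked numerically . the sketch below builds a quasi bell channel , applies amplitude - damping or phase - damping kraus maps to any chosen subset of the three qubits , simulates the standard protocol ( bell measurement followed by the same pauli correction as for the corresponding bell state , as assumed in the text ) , and estimates the average fidelity by monte carlo over haar - random input states instead of the closed - form integration used in the paper . the phase - damping kraus operators are taken in one standard form , which is an assumption since the explicit operators are not reproduced above .

```python
import numpy as np

I2 = np.eye(2); X = np.array([[0.0, 1.0], [1.0, 0.0]]); Z = np.diag([1.0, -1.0])
PAULI = {'phi+': I2, 'phi-': Z, 'psi+': X, 'psi-': Z @ X}
BELL = {'phi+': np.array([1, 0, 0, 1]) / np.sqrt(2),
        'phi-': np.array([1, 0, 0, -1]) / np.sqrt(2),
        'psi+': np.array([0, 1, 1, 0]) / np.sqrt(2),
        'psi-': np.array([0, 1, -1, 0]) / np.sqrt(2)}

def quasi_bell(kind, r, theta):
    g = r * np.exp(1j * theta)                    # complex overlap <alpha|beta>
    a = np.array([1.0, 0.0]); b = np.array([g, np.sqrt(1 - r ** 2)])
    s = {'phi+': np.kron(a, a) + np.kron(b, b), 'phi-': np.kron(a, a) - np.kron(b, b),
         'psi+': np.kron(a, b) + np.kron(b, a), 'psi-': np.kron(a, b) - np.kron(b, a)}[kind]
    return s / np.linalg.norm(s)

def kraus(noise, eta):
    if noise == 'ad':                             # amplitude damping
        return [np.array([[1, 0], [0, np.sqrt(1 - eta)]]),
                np.array([[0, np.sqrt(eta)], [0, 0]])]
    return [np.array([[1, 0], [0, np.sqrt(1 - eta)]]),   # phase damping, standard form (assumed)
            np.array([[0, 0], [0, np.sqrt(eta)]])]

def apply_noise(rho, ks, qubit):                  # single-qubit channel inside a 3-qubit register
    ops = [np.kron(np.kron(np.eye(2 ** qubit), k), np.eye(2 ** (2 - qubit))) for k in ks]
    return sum(op @ rho @ op.conj().T for op in ops)

def avg_fidelity(kind, r, theta, noise='ad', eta=0.0, noisy_qubits=(2,), n_inputs=500, seed=1):
    rng = np.random.default_rng(seed)
    chan = quasi_bell(kind, r, theta)             # qubit 1 = alice's, qubit 2 = bob's
    total = 0.0
    for _ in range(n_inputs):
        v = rng.standard_normal(4)                # haar-uniform single-qubit input state
        psi_in = v[:2] + 1j * v[2:]; psi_in /= np.linalg.norm(psi_in)
        joint = np.kron(psi_in, chan)
        rho = np.outer(joint, joint.conj())
        for q in noisy_qubits:
            rho = apply_noise(rho, kraus(noise, eta), q)
        f = 0.0
        for outcome, bvec in BELL.items():        # alice's bell measurement on qubits 0 and 1
            A = np.kron(bvec.conj().reshape(1, 4), I2)
            sigma = A @ rho @ A.conj().T          # unnormalised post-measurement state of bob's qubit
            U = PAULI[kind] @ PAULI[outcome]      # same pauli correction as for the bell channel
            f += np.real(psi_in.conj() @ (U @ sigma @ U.conj().T) @ psi_in)
        total += f
    return total / n_inputs

print(avg_fidelity('psi-', r=0.7, theta=0.4, eta=0.0))                      # ideal channel: ~1
print(avg_fidelity('psi+', r=0.7, theta=0.4, eta=0.0))                      # ideal but non-orthogonal: < 1
print(avg_fidelity('psi+', r=0.7, theta=0.4, noise='ad', eta=0.3, noisy_qubits=(1, 2)))
```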
under these specific conditions ,when all the qubits evolve under ad channels , the average fidelity for each quasi bell state turns out to be ,\\ f_{ad}^{|\phi_{-}\rangle } & = & \frac{1}{2\left(-3 + 3r^{2}\cos2\theta\right)}\left[-2\left(2+\sqrt{1-\text{\text{\ensuremath{\eta}}}}\right)+\text{\ensuremath{\eta}}\left(3 + 2\sqrt{1-\text{\text{\ensuremath{\eta}}}}+2(-2+\text{\text{\ensuremath{\eta}}})\text{\text{\ensuremath{\eta}}}\right)\right.\\ & - & 2r^{2}(-1+\text{\text{\ensuremath{\eta}}})\left(1+\sqrt{1-\text{\text{\ensuremath{\eta}}}}+\text{\text{\ensuremath{\eta}}}(-3 + 2\text{\ensuremath{\eta}})\right)+2r^{4}(-1+\text{\text{\ensuremath{\eta}}})^{3}\\ & + & \left.r^{2}\left(4 + 2\sqrt{1-\text{\text{\ensuremath{\eta}}}}+2\sqrt{1-\text{\text{\ensuremath{\eta}}}}\left(r^{2}(-1+\text{\ensuremath{\eta}})-\text{\text{\ensuremath{\eta}}}\right)-\text{\text{\ensuremath{\eta}}}\right)\cos2\theta\right],\\ f_{ad}^{|\psi_{+}\rangle } & = & \frac{1}{6\left(1+r^{2}\right)}\left[2\left(2+\sqrt{1-\text{\text{\ensuremath{\eta}}}}\right)+\text{\text{\ensuremath{\eta}}}\left(-3 - 2\sqrt{1-\text{\ensuremath{\eta}}}+2\text{\ensuremath{\eta}}\right)\right.\\ & + & \left.r^{2}\left(-2\sqrt{1-\text{\text{\ensuremath{\eta}}}}+\left(5 + 2\sqrt{1-\text{\ensuremath{\eta}}}-2\text{\text{\ensuremath{\eta}}}\right)\text{\text{\ensuremath{\eta}}}\right)\right],\\ f_{ad}^{|\psi_{-}\rangle } & = & \frac{1}{6}\left[4 + 2\sqrt{1-\text{\text{\ensuremath{\eta}}}}-3\text{\text{\ensuremath{\eta}}}-2\sqrt{1-\text{\ensuremath{\eta}}}\text{\text{\ensuremath{\eta}}}+2\text{\text{\ensuremath{\eta}}}^{2}\right ] .\end{array}\label{eq : adaq}\ ] ] similarly , when all three qubits are subjected to pd noise with the same decoherence rate , the analytic expressions of the average fidelity are obtained as ,\\ f_{pd}^{|\phi_{-}\rangle } & = & \frac{1}{3}\left[2+\sqrt{1-\text{\ensuremath{\eta}}}-r^{2}\sqrt{1-\text{\ensuremath{\eta}}}-\sqrt{1-\text{\ensuremath{\eta}}}\text{\ensuremath{\eta}}+r^{2}\sqrt{1-\text{\ensuremath{\eta}}}\text{\text{\ensuremath{\eta}}}+\frac{r^{2}-r^{4}}{-1+r^{2}\cos2\theta}\right],\\ f_{pd}^{|\psi_{+}\rangle } & = & \frac{2+\sqrt{1-\text{\text{\ensuremath{\eta}}}}+\sqrt{1-\text{\text{\ensuremath{\eta}}}}\left(r^{2}(-1+\text{\ensuremath{\eta}})-\text{\text{\ensuremath{\eta}}}\right)}{3\left(1+r^{2}\right)},\\ f_{pd}^{|\psi_{-}\rangle } & = & \frac{2+(1-\text{\text{\ensuremath{\eta}}})^{3/2}}{3}. \end{array}\label{eq : pdaq}\ ] ] it is interesting to note that in the ideal conditions is the unanimous choice of quasi bell state to accomplish the teleportation with highest possible fidelity . however , from the expressions of fidelity obtained in eqs .( [ eq : adsq])-([eq : pdaq ] ) , it appears that it may not be the case in the presence of noise . for further analysis, it would be appropriate to observe the variation of all the fidelity expressions with various parameters . in what follows, we perform this analysis .[ fig : avfid - ad ] , illustrates the dependence of the average fidelity on the number of qubits exposed to ad channel for each quasi bell state using eqs .( [ eq : adsq ] ) , ( [ eq : adbq ] ) , and ( [ eq : pdaq ] ) . unlike the remaining quasi bell states , the average fidelity for state starts from 1 at . until a moderate value ( a particular value that depends on the choice of quasi bell state ) of decoherence rate is reached, the decay in average fidelity completely depends on the number of qubits interacting with their surroundings. 
however , at the higher decoherence rate , this particular nature was absent .further , fig .[ fig : avfid - ad ] ( a ) and ( b ) show that best results compared to remaining two cases can be obtained for the initial state , while both the channel qubits are evolving under ad noise ; whereas the same case turns out to provide the worst results in case of .a similar study performed over pd channels instead of ad channels reveals that the decay in average fidelity solely depends on the number of qubits evolving over noisy channels ( cf .[ fig : avfid - pd ] ) .( color online ) the variation of average fidelity for all possible cases with each quasi bell state as quantum channel is compared for ad ( ( a)-(c ) ) and pd ( ( d)-(f ) ) channels with and .the case discussed has been mentioned as superscript of in the axes label on -axis . ] finally , it is also worth to compare the average fidelity obtained for different quasi bell states when subjected to noisy environment under similar condition .this would reveal the suitable choice of initial state to be used as a quantum channel for performing teleportation . in fig .[ fig : avfid - ad - pd ] ( a ) , the variation of average fidelity for all the quasi bell states is demonstrated , while only bob s qubit is exposed to ad noise .it establishes that although in ideal case and small decoherence rate state is the most suitable choice of quantum channel , which do not remain true at higher decoherence rate .while all other quasi bell states follow exactly the same nature for decay of average fidelity and appears to be the worst choice of quantum channel .a quite similar nature can be observed for the remaining two cases over ad channels in fig .[ fig : avfid - ad - pd ] ( b ) and ( c ) .specifically , remains the most suitable choice below moderate decoherence rate , while may be preferred for channels with high decoherence , and is inevitably the worst choice .a similar study carried out over pd channels and the obtained results are illustrated in fig .[ fig : avfid - ad - pd ] ( d ) and ( f ) . from these plots, it may be inferred that undoubtedly remains the most suitable and the worst choice of quantum channel .the investigation on the variation of the average fidelity with non - orthogonality parameters over noisy channels yields a similar nature as was observed in ideal conditions ( cf .[ fig : avfid ] ) .therefore , we have not discussed it here , but a similar study can be performed in analogy with the ideal scenario .in fact , if one wishes to quantify only the effect of noise on the performance of the teleportation scheme using a non - orthogonal state quantum channel , the inner product may be taken with the teleported state in the ideal condition instead of the state to be teleported .the mathematical procedure adopted here is quite general in nature and would be appropriate to study the effect of generalized amplitude damping , squeezed generalized amplitude damping , bit flip , phase flip , and depolarizing channel .this discussion can further be extended to a set of non - markovian channels , which will be carried out in the future and reported elsewhere .in the present study , it is established that all the quasi bell states , which are entangled non - orthogonal states , may be used for quantum teleportation of a single qubit state .however , their teleportation efficiencies are not the same , and the efficiency depends on the nature of noise present in the quantum channel . 
specifically , we have considered four quasi bell states as possible teleportation channels , and computed average and minimum fidelity that can be obtained by replacing a bell state quantum channel of the standard teleportation scheme by its non - orthogonal counterpart ( i.e. , corresponding quasi bell state ) .the results obtained here can be easily reduced to that obtained using usual bell state in the limits of vanishing non - orthogonality parameter .specifically , there are two real parameters and , which are considered here as the primary and secondary measures of non - orthogonality , and variation of average and minimum fidelity is studied with respect to these parameters .thus , in brief , the performance of the standard teleportation scheme is investigated using and mfi as quantitative measures of quality of the teleportation scheme by considering a quasi bell state instead of bell state as quantum channel .consequently , during discussion related to and mfi , it has been assumed that bob performs a pauli operation corresponding to each bell state measurement outcome as he used to perform in the standard teleportation scheme .further , using horodecky criterion based on maximal singlet fraction it has been shown that such a scheme for quasi bell state based teleportation is optimal .we have used another quantitative measure for quality of teleportation performance , masfi , which is computed using a compact formula given in ref . , where an optimal unitary operation is applied by bob . for a few specific cases ,the calculated masfi was found to be unity . in those cases ,concurrence for entangled non - orthogonal states were also found to be unity , which implied maximal entanglement .however , for this set of maximally entangled non - orthogonal states , we did not observe unit average fidelity and minimum fidelity as the unitary operations performed by bob ( used for computation of and mfi ) were not the same as was in computation of masfi .the comparative study performed here led to a set of interesting observations that are illustrated in figs .( [ fig : avfid])-([fig : avfid - ad - pd ] ) . from fig .( [ fig : avfid ] ) , we can clearly observe that there is at least a quasi bell state ( in addition to maximally entangled ) for which the average fidelity that can be obtained in an ideal condition remains more than classical fidelity of teleportation .however , this choice of suitable quasi bell state completely depends on the value of secondary non - orthogonality parameter , i.e. , decides whether or is preferable .the performance of the teleportation scheme using entangled non - orthogonal states has also been analyzed over noisy channels ( cf . figs .( [ fig : avfid - ad])-([fig : avfid - ad - pd ] ) ) .this study yield various interesting results .the quasi bell state , which was shown to be maximally entangled in an ideal situation , remains most preferred choice as quantum channel while subjected to pd noise as well .however , in the presence of damping effects due to interaction with ambient environment ( i.e. , in ad noise ) , the choice of the quasi bell state is found to depend on the non - orthogonality parameter and the number of qubits exposed to noisy environment .we hope the present study will be useful for experimental realization of teleportation schemes beyond usual entangled orthogonal state regime , and will also provide a clear prescription for future research on applications of entangled non - orthogonal states .bennett , c. h. , brassard , g. 
, crpeau , c. , jozsa , r. , peres , a. , wootters , w. k. : teleporting an unknown quantum state via dual classical and einstein - podolsky - rosen channels .lett . * 70 * , 1895 - 1899 ( 1993 ) pathak , a. , banerjee , a. : efficient quantum circuits for perfect and controlled teleportation of n - qubit non - maximally entangled states of generalized bell - type .j. quantum inf . * 09 * , 389 - 403 ( 2011 ) dong , l. , wang , j. x. , xiu , x. m. , li , d. , gao , y. j. , yi , x. x. : a continuous variable quantum key distribution protocol based on entanglement swapping of quasi - bell entangled coherent states .. phys . * 53 * , 3173 - 3190 ( 2014 ) sharma , v. , thapliyal , k. , pathak , a. , banerjee , s. : a comparative study of protocols for secure quantum communication under noisy environment : single - qubit - based protocols versus entangled - state - based protocols .quantum inf . process .doi 10.1007/s11128 - 016 - 1396 - 7 ( 2016 ) thapliyal , k. , pathak , a. : applications of quantum cryptographic switch : various tasks related to controlled quantum communication can be performed using bell states and permutation of particles .quantum inf . process .* 14 * , 2599 - 2616 ( 2015 ) prakash , h. , verma , v. : minimum assured fidelity and minimum average fidelity in quantum teleportation of single qubit using non - maximally entangled states .quantum inf . process .* 11 * , 1951 - 1959 ( 2012 )
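the two figures of merit used repeatedly in the concluding discussion above , the concurrence of the channel state and the horodecki bound on average teleportation fidelity obtained from the maximal singlet fraction , can be evaluated for an arbitrary two - qubit density matrix as sketched below . the code is generic rather than specific to the quasi bell states of this work , and the singlet - fraction search is restricted to the four standard bell states , which is only a lower bound for a general state .

```python
import numpy as np

def concurrence(rho):
    """wootters concurrence of a two-qubit density matrix."""
    sy = np.array([[0.0, -1.0j], [1.0j, 0.0]])
    flip = np.kron(sy, sy)
    rho_tilde = flip @ rho.conj() @ flip
    lam = np.sqrt(np.sort(np.abs(np.linalg.eigvals(rho @ rho_tilde)))[::-1])
    return max(0.0, lam[0] - lam[1] - lam[2] - lam[3])

def singlet_fraction(rho):
    """maximal overlap with the four standard bell states (a lower bound on the
       true maximal singlet fraction for a general state)."""
    s = 1.0 / np.sqrt(2.0)
    bells = [np.array([s, 0, 0, s]), np.array([s, 0, 0, -s]),
             np.array([0, s, s, 0]), np.array([0, s, -s, 0])]
    return max(float(np.real(b.conj() @ rho @ b)) for b in bells)

def optimal_average_fidelity(rho):
    """horodecki bound F = (2 f + 1) / 3 for single-qubit teleportation."""
    return (2.0 * singlet_fraction(rho) + 1.0) / 3.0

# example: a werner-like mixture of |phi+> with white noise
phi_plus = np.array([1, 0, 0, 1], dtype=complex) / np.sqrt(2)
rho = 0.8 * np.outer(phi_plus, phi_plus.conj()) + 0.2 * np.eye(4) / 4
print(concurrence(rho), optimal_average_fidelity(rho))
```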
|
the effect of non - orthogonality of an entangled non - orthogonal state based quantum channel is investigated in detail in the context of the teleportation of a qubit . specifically , average fidelity , minimum fidelity and minimum assured fidelity ( masfi ) are obtained for teleportation of a single qubit state using all the bell type entangled non - orthogonal states known as quasi bell states . using the horodecki criterion , it is shown that the teleportation scheme obtained by replacing the quantum channel ( bell state ) of the usual teleportation scheme by a quasi bell state is optimal . further , the performance of the various quasi bell states as teleportation channels is compared in an ideal situation ( i.e. , in the absence of noise ) and under different noise models ( e.g. , amplitude and phase damping channels ) . it is observed that the best choice of quasi bell state depends on the amount of non - orthogonality , in both the noisy and noiseless cases . a specific quasi bell state , which was found to be maximally entangled under ideal conditions , is shown to be less efficient as a teleportation channel than other quasi bell states in particular cases when subjected to noisy channels . it has also been observed that the value of average fidelity usually falls with an increase in the number of qubits exposed to noisy channels ( viz . , alice's , bob's and the qubit to be teleported ) , but the converse may be observed in some particular cases .
|
recently , vibrotactile based somatosensory modality bcis have gained in popularity .we propose an alternative tactile bci which uses p300 brain responses to a somatosensory stimulation delivered to larger areas of the user s back , defined as a back tactile bci ( btbci ) .we conduct experiments by applying vibration stimuli to the user s back , which allows us to stimulate places at larger distances on the body .the stimulated back areas are both shoulders , the waist and the hips . in order to do so ,we utilize a haptic gaming pad `` zeus vybe '' by disney & comfort research .an audio signal pad s input allows for the delivery of a sound pattern activating spatial tactile patterns of vibrotactile transducers embedded within the device . in the experiments reported in this paper, the users lay down on the gaming pad and interacted with tactile stimulus patterns delivered in an oddball style paradigm to their backs , as shown in figure [ fig : user ] .the reason for using the horizontal position of the gaming pad , developed for a seated setting , is that bedridden users could easily utilize it and it could also serve as a muscle massage preventing the formation of bedsores .the rest of the paper is organized as follows .first we introduce the btbci experimental protocols and methods .next the experiment results are discussed .finally , there is discussion and conclusions are drawn .for each of the six commands .`` n '' stands for no response cases . ]in the research project reported in this paper the psychophysical and online eeg experiments were carried out with able bodied , bci naive users .seven healthy users participated in the study ( three males and four females ) with a mean age of 25 years ( standard deviation of 7.8 years ) .the users were paid for their participation .all the experiments were performed at the life science center of tara , university of tsukuba , japan .the online eeg bci experiments were conducted in accordance with _ the world medical association declaration of helsinki - ethical principles for medical research involving human subjects_. the procedures for the psychophysical experiments and eeg recordings for the bci paradigm were approved by the ethical committee of the faculty of engineering , information and systems at the university of tsukuba , tsukuba , japan .each participant signed to give informed consent to taking part in the experiments .the psychophysical experiments were conducted to investigate the recognition accuracy and response times to the stimuli delivered from the gaming pad .the behavioural responses were collected as keyboard button presses after instructed targets . in the psychophysical experiment ,each single trial was comprised of a randomly presented single target and five non target vibrotactile stimuli ( and targets in a single session ) . the stimulus duration was set to ms and the inter interval ( isi ) to ms . in the btbci online experiments ,the eeg signals were captured with a bio signal amplifier system g.usbamp by g.tec medical instruments , austria .active eeg electrodes were attached to the sixteen locations _cz , pz , p3 , p4 , c3 , c4 , cp5 , cp6 , p1 , p2 , poz , c1 , c2 , fc1 , fc2 and fcz _ , as in international system .a reference electrode was attached to the left mastoid , and a ground electrode to the forehead at the _ fpz _ position .the eeg signals were captured and classified by bci2000 software using a stepwise linear discriminant analysis ( swlda ) classifier . 
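as a rough illustration of the classification stage just described , the sketch below cuts post - stimulus epochs out of a multichannel eeg array , forms target and non - target averages , and trains a linear discriminant classifier . plain lda from scikit - learn is used here as a stand - in for the stepwise lda ( swlda ) implemented in bci2000 , and the sampling rate , window length , decimation factor and synthetic data are illustrative assumptions rather than the actual experimental pipeline .

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

fs = 512                      # assumed sampling rate [Hz]
n_channels = 16               # electrodes listed in the text (Cz, Pz, P3, ...)
epoch_window = (0.0, 0.8)     # 0-800 ms after stimulus onset

def extract_epochs(eeg, onsets):
    """cut (n_trials, n_channels, n_samples) epochs from continuous EEG."""
    start = int(epoch_window[0] * fs)
    stop = int(epoch_window[1] * fs)
    return np.stack([eeg[:, o + start:o + stop] for o in onsets])

rng = np.random.default_rng(0)
eeg = rng.standard_normal((n_channels, 60 * fs))        # fake continuous recording
onsets = rng.integers(0, 59 * fs, size=120)             # fake stimulus onsets
labels = rng.integers(0, 2, size=120)                   # 1 = target, 0 = non-target

epochs = extract_epochs(eeg, onsets)
# decimate and flatten each epoch into a feature vector, as is common for P300 ERPs
features = epochs[:, :, ::8].reshape(len(epochs), -1)

clf = LinearDiscriminantAnalysis()
clf.fit(features, labels)
print("training accuracy:", clf.score(features, labels))

# grand-average ERPs of the kind shown in the figures discussed in the text
target_erp = epochs[labels == 1].mean(axis=0)
nontarget_erp = epochs[labels == 0].mean(axis=0)
```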
in each trial , the stimulus duration was set to ms and the isi to random values in a range of ms in order to break rhythmic patterns of presentation .the vibrotactile stimuli in the two experimental settings above were generated using the same _ max 6 _ program , and the trigger onsets were generated by _bci2000 _ eeg acquisition and erp classification software .in this section , we report and discuss the results of the psychophysical and btbci eeg experiments conducted with seven healthy users .the psychophysical experiment results are summarized in the form of a confusion matrix depicted in figure [ fig : confmx ] , where the behavioural response accuracies to instructed targets and marginal errors are depicted together with no response errors , which were not observed with the users participating in our experiments .the grand mean behavioural accuracies were above , which proved the easiness of the back vibrotactile stimuli discrimination .the behavioural response times did not differ significantly as tested with the wilcoxon rank sum test for medians , which further supported the choice of the experiment set - up with vibrotactile stimuli to the back .eye blinks were rejected in the process of creation of this figure , with a threshold of .[fig : allerp ] ] the eeg experiment results are summarized in figures [ fig : allerp ] and [ fig : classacc ] in the form of grand mean averaged erps and classification accuracies .the grand mean averaged erps resulted in very clear p300 responses in latency ranges of ms .the swlda classification results in online btbci experiments of six digit spelling are shown in figure [ fig : classacc ] , depicting each user s averaged scores in a range of and the best grand mean results in the range of , both as a function of various erp averaging scenarios .the chance level was .the best mean scores show very promising patterns for possible improvements based on longer user training .this paper reports results obtained with a novel six command based btbci prototype developed and evaluated in experiments with seven bci naive users .the experiment results obtained in this study confirm the general validity of the btbci for six command based applications and the possibility to further improve the results , as illuminated by the best mean accuracies achieved by the users .the eeg experiment with the prototype confirms that tactile stimuli to large areas of the back can be used to spell six - digit ( command ) sequences with mean information transfer rates ranging from bit / min to bit / min for averaging based swlda classification to bit / min to bit / min for single trial cases .the results presented offer a step forward in the development of somatosensory modality neurotechnology applications . due to the still not very satisfactory interfacing rate achieved in the case of the online btbci, the current prototype obviously requires improvements and modifications .these requirements will determine the major lines of study for future research .however , even in its current form , the proposed btbci can be regarded as a possible alternative solution for locked in syndrome patients , who can not use vision or auditory based interfaces due to sensory or other disabilities . ]designed and performed the eeg experiments : tk , tmr . analyzed the data : tk , tmr . conceived the concept of the autd based bci paradigm : tmr .supported the project : sm . 
wrote the paper : tk , tmrthe presented research was supported in part by the strategic information and communications r&d promotion program ( scope ) no . of the ministry of internal affairs and communications in japan .huggins , c. guger , b. allison , c.w .anderson , a. batista , a - m .brouwer , c. brunner , r. chavarriaga , m. fried - oken , a. gunduz , d. gupta , a. kbler , r. leeb , f. lotte , l.e .miller , g. mller - putz , t. rutkowski , m. tangermann , and d.e .workshops of the fifth international brain - computer interface meeting : defining the future . , 1(1):2749 , 2014 .t. kodama . spatial tactile brain - computer interface paradigm by applying vibration stimulus to large body areas . bachelor degree thesis , school of informatics - university of tsukuba , tsukuba , japan , february 2014 .h. mori , y. matsumoto , z.r .struzik , k. mori , s. makino , d. mandic , and t.m .multi - command tactile and auditory brain computer interface based on head position stimulation . in _ proceedings of the fifth international brain - computer interface meeting 2013 _ , page article i d : 095 , asilomar conference center , pacific grove , ca usa , june 3 - 7 , 2013 .graz university of technology publishing house , austria .
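for reference , the information transfer rates quoted in the results above ( in bit / min ) are conventionally obtained from the wolpaw formula , which combines the number of commands , the classification accuracy and the time needed per selection . a minimal implementation is given below ; the six - command , 4 s per selection example values are placeholders rather than the timing of the reported experiments .

```python
import math

def wolpaw_itr(n_classes, accuracy, seconds_per_selection):
    """information transfer rate in bits per minute (Wolpaw formula)."""
    p, n = accuracy, n_classes
    if p <= 1.0 / n:
        return 0.0
    bits = math.log2(n)
    if p < 1.0:
        bits += p * math.log2(p) + (1 - p) * math.log2((1 - p) / (n - 1))
    return bits * 60.0 / seconds_per_selection

# six-command speller, 75% accuracy, assumed 4 s per selection
print(wolpaw_itr(6, 0.75, 4.0))
```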
|
we aim to augment the communication abilities of _ amyotrophic lateral sclerosis _ ( als ) patients by creating a _ brain - computer interface _ ( bci ) which can control a computer or other device using only brain activity . as a method , we use a stimulus driven bci based on vibration stimuli delivered via a gaming pad to the user s back . we identify p300 responses in the brain activity recorded in response to the vibration stimuli , and the user s intentions are classified according to the p300 responses in the eeg . the results of the psychophysical and online bci experiments show that we are able to classify the p300 responses very accurately , which demonstrates the effectiveness of the proposed method .
|
the expansion of a physical property in terms of a basis of known functions is a very common procedure in physics . in this paper , we focus on generating bases suitable for expanding a function on the unit sphere , that is , a function of direction describing the anisotropy of a given physical property .this mathematical tool has direct applications in various fields , for instance , to parametrize the orientation - dependence of interface energies , to represent the charge densities of rigid molecules in x - ray analysis or to describe the orientation - dependence of the so - called constituent strain elastic energy of superlattice structures in the long wavelength limit laks : recip , ozolins : elas .of course , it is well - known that , in the absence of symmetry constraints , spherical harmonics provide a complete solution to the problem .the problem becomes interesting and non - trivial when symmetry constraints reduce the number of degrees of freedom in the expansion , which is the topic of the present paper .one distinguishing feature of the proposed method is that it can be easily implemented using generic linear algebra operations without having consider many different subcases that depend on the point group considered .the algorithm proposed herein has been implemented in the ` gencs ` code of the alloy theoretic automated toolkit ( atat ) avdw : maps , avdw : atat , avdw : atat2,avdw : atatcode .the method also highlights a seldom mentioned characterization of spherical harmonics : they are simply polynomials in the components of a unit vector .polynomials are known to form a complete basis for any continuous function over a bounded region ( e.g. the unit sphere ) .hence , in particular , they form a complete basis for any continuous function defined over the surface of the unit sphere .let be a three - dimensional unit vector ( e.g. ) .any continuous function of direction can therefore be represented as we use the short - hand notation(with defined as constant ) and where is a rank tensor that is symmetric under permutation of the indices ( since permutations of the indices does not change the polynomial we can , without loss of generality , limit ourselves to such symmetric tensors ) .if the function is constrained by symmetry , such constraints can then be implemented by restricting the tensors to obey suitable invariance with respect to all symmetry operations in a given point group . as explained in more detail in the section secalgo ,this can be simply accomplished by considering noncolinear trial tensors , and obtaining symmetrized tensors by averaging each trial tensor with all its transformations by each point group symmetry operation .the desired result is obtained after eliminating colinear symmetrized tensors .an additional step is needed because expansion ( [ eqexp ] ) is a bit redundant , since the polynomial is constant over the unit sphere .this would imply that nonzero coefficients for could give rise to a constant , which is undesirable .this can be avoided by projecting each onto the space orthogonal to tensors giving rise to polynomials that can be factored as some tensor of rank .( it is not necessary to consider higher powers of in this factorization , because could include additional factors as a special case . 
)a simple algorithm to accomplish the symmetrization and this projection is provided in section [ secalgo ] .the result of this procedure is an expression for the tensor as a sum of symmetry - constrained and non - redundant components : the tensors are fixed and determined by symmetry while the coefficients are completely unrestricted . upon substitution into ( [ eqexp ] ) we obtain a symmetry - constrained expansion: us first define a few convenient symbols .* let denote a matrix representing a point symmetry operation in cartesian coordinates and let the corresponding function applied to a tensor of rank be defined as: _{ j_{1},\ldots , j_{l}}=\sum_{i_{1}=1}^{3}\cdots \sum_{i_{l}=1}^{3}a_{i_{1},\ldots , i_{l}}^{\left ( l\right ) } \prod_{k=1}^{l}s_{i_{k}j_{k}}\]]and let denote a set of such matrices that defines the point group of interest .* let denote a permutation vector ( i.e. an -dimensional vector containing all the number not necessarily in increasing order ) and let the function be defined as : _ { j_{1},\ldots , j_{l}}=a_{i_{p\left ( 1\right ) } , \ldots , i_{p\left ( l\right ) } } ^{\left ( l\right ) } \]]and let denote the set of all such permutations for a given .* let denote the number of elements in a set .* let be a set of rank tensors defining linear constraints on the generated tensor basis , i.e. the generated must be orthogonal to all , according to the inner product (if , no constraints are imposed . )* let denote a vectorization of the tensor ( i.e. a -dimensional column vector containing all elements of the tensor ) and let denote the reverse operation . our algorithm for generating a basis for tensors of rank obeying symmetric constraints ( defined by a point group ) , indices permutation invariance constraints ( defined by the set ) and some linear constraints ( defined by a basis of tensors ) is then as follows : 1 . if , define to be the nonzero and non linearly dependent columns of the matrix: .\ ] ] 2 . set 3 .consider a set of distinct trial tensors for , each consisting of a single element set to , with all remaining elements set to . obeys . ]1 . for each trial tensor , set(or simply if ) .2 . calculate 3 .if is nonzero and ( for ) not linearly dependent with the ] using the gram - schmidt procedure .the harmonics of order up to are then generated by calling the above routine for , setting if and otherwise setting the constraints to be the are also generated with the above routine , called with rank , the same point group , the permutation set and ( no constraints ) .it is instructive to see why the method used for symmetrization in the algorithm of section [ secalgo ] actually works . for an arbitrary trial tensor we can verify that the symmetrized tensor for any operation .indeed , calculate the second equality holds because is just another operation in and two distinct can not be mapped onto the same symmetry operation by applying , since each element of a group admits an inverse .since the symmetrization procedure is a linear projection , choosing the trial tensors to be an orthogonal basis is sufficient to generate a basis for the space of symmetrized tensors .a similar argument holds for invariance under permutations of the indices . 
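the whole construction described above ( symmetrize a trial tensor over index permutations and point - group operations , vectorize , project out the constraint directions , and keep only linearly independent remainders by gram - schmidt ) can be sketched with a few generic routines . the code below is an illustration , not the gencs implementation : the example point group ( a four - fold rotation about z ) and the rank l = 2 constraint ( the vectorized identity , which removes the |r|^2 - times - constant redundancy ) are chosen only for concreteness .

```python
import numpy as np
from math import factorial
from itertools import permutations

def apply_op(tensor, s):
    """A'_{j1..jl} = sum_{i1..il} A_{i1..il} * S_{i1 j1} ... S_{il jl}.
       each tensordot contracts the current leading axis and appends the new
       index at the end, so after l steps the axis order is restored."""
    for _ in range(tensor.ndim):
        tensor = np.tensordot(tensor, s, axes=([0], [0]))
    return tensor

def symmetrize(tensor, group_ops):
    """average over index permutations, then over the point group."""
    l = tensor.ndim
    perm_avg = sum(np.transpose(tensor, p) for p in permutations(range(l))) / factorial(l)
    return sum(apply_op(perm_avg, s) for s in group_ops) / len(group_ops)

def build_basis(candidates, constraints=(), tol=1e-10):
    """gram-schmidt on vectorized tensors: keep the part of each candidate
       orthogonal to the constraint vectors and to already accepted members."""
    basis, fixed = [], [c / np.linalg.norm(c) for c in constraints]
    for v in candidates:
        w = v.astype(float).copy()
        for u in fixed + basis:
            w -= (u @ w) * u
        if np.linalg.norm(w) > tol:
            basis.append(w / np.linalg.norm(w))
    return basis

# toy run: rank l = 2, point group generated by a four-fold rotation about z
l = 2
c4 = np.array([[0.0, -1.0, 0.0], [1.0, 0.0, 0.0], [0.0, 0.0, 1.0]])
group = [np.linalg.matrix_power(c4, k) for k in range(4)]
trials = []
for idx in np.ndindex(*(3,) * l):           # one trial tensor per tensor element
    t = np.zeros((3,) * l)
    t[idx] = 1.0
    trials.append(symmetrize(t, group))
identity_vec = np.eye(3).reshape(-1)         # constraint removing the |r|^2 redundancy
basis = build_basis([t.reshape(-1) for t in trials], constraints=[identity_vec])
print(len(basis), "independent symmetrized components for l =", l)
```

for this choice the run reports a single independent component , corresponding to the one combination of x^2 + y^2 and z^2 that remains after the constant is projected out .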
finally , note that applying a point group operation to a tensor that is invariant to indices permutations yields a tensor with the same property: _{ j_{1},\ldots , j_{l } } & = & \left [ s\left ( a^{\left ( l\right ) } \right ) \right ] _ { j\left ( p_{1}\right ) , \ldots , j\left ( p_{l}\right ) } = \sum_{i_{1}=1}^{3}\cdots \sum_{i_{l}=1}^{3}a_{i_{1},\ldots , i_{l}}^{\left ( l\right ) } \prod_{k=1}^{l}s_{i_{k}j_{p\left ( k\right ) } } \\ & = & \sum_{i_{1}=1}^{3}\cdots \sum_{i_{l}=1}^{3}a_{i_{p\left ( 1\right ) } , \ldots , i_{p\left ( l\right ) } } ^{\left ( l\right ) } \prod_{k=1}^{l}s_{i_{p\left ( k\right ) , } j_{p\left ( k\right ) } } \\ & = & \sum_{i_{1}=1}^{3}\cdots \sum_{i_{l}=1}^{3}a_{i_{p\left ( 1\right ) } , \ldots , i_{p\left ( l\right ) } } ^{\left ( l\right ) } \prod_{k=1}^{l}s_{i_{k,}j_{k } } \\ & = & \sum_{i_{1}=1}^{3}\cdots \sum_{i_{l}=1}^{3}a_{i_{1},\ldots , i_{l}}^{\left ( l\right ) } \prod_{k=1}^{l}s_{i_{k,}j_{k}}=\left [ s\left ( a^{\left ( l\right ) } \right ) \right ] _{ i_{1},\ldots , i_{l}}\end{aligned}\]]where we have used the fact re - ordering the sums or the product has no effect and the invariance of under permutation .this shows that symmetrizing the tensor after making it invariant to indices permutations does not undo the permutation invariance .it is interesting to observe that expansion ( [ eqexp2 ] ) coincides ( apart from an inconsequential linear transformation ) with spherical harmonics when no symmetry constraints are imposed . the easiest way to seethis is to compare ( [ eqexp2 ] ) with the eigenfunction of the schrdinger equation for some spherically symmetric potential selected so that the eigenfunctions involve polynomials .we can use any convenient radial potential because we only focus on the angular part . consider a spherically symmetric harmonic potential , whose eigenstates are polynomials times a spherically symmetric gaussian .the gaussian is constant over the unit sphere , so we are left with only a polynomial as the angular dependence.moreover , it is well - known that the order of that polynomial is equal to , the sum of three principal quantum numbers of the harmonic oscillator along each dimension .this sum is also ( up to a constant scaling and shift ) the energy of the system .it follows that there is a direct correspondence between all terms in ( [ eqexp2 ] ) sharing the same value of and all eigenfunctions sharing the same energy .the different terms sharing the same thus correspond to eigenstates with different angular momentum projections .we can verify that the number of terms ( in the case of a spherically symmetric potential ) matches the number of spherical harmonics for a given value of .indeed , the number of distinct terms of total power in a polynomial in variables is . in the present case . from that number, we subtract the dimension of the subspace of polynomials that factor as times a polynomial of order , we obtain , exactly the number of spherical harmonics associated with angular momentum is .hence the dimension of the space spanned by the spherical harmonics for a given is the same as the dimension of the space spanned by our polynomials .both spaces include polynomials of order on the unit sphere that are not colinear and it follows that both bases must span the same space .hence both expansions , truncated to the same , span the same space .suitable treatments for special cases already exist in the literature .notable examples include the cubic harmonics ( e.g. 
altmann : cubharm , muggli : cubharm ) and harmonics for hexagonal symmetry ( e.g. ) .a very general treatment can be found in .although very complete , this treatment does not lends itself to a simple implementation : the point group and its orientation has to be identified , not just as a list of symmetry operations , but recognized by name as one of the known point groups , so that one can lookup the specific rules applying to that point group .based on the point group category found ( e.g. , cubic or hexagonal ) , a superset of harmonics is selected . then , based on the specific point group found , index rules are applied to eliminate those harmonics that should vanish by symmetry .this treatment is ideally suited for researchers wanting to manually construct a basis based on the knowledge of the point group , as the different case are nicely classified by point group . however , a computer program implementing the method would also necessarily contain a large number of tests and subcases .it would also have to rotate the symmetries into a standard setting to use the tabulated index rules .moreover , if one wishes to handle other points group that are not special cases of cubic or hexagonal symmetries ( e.g. icosahedral symmetry or other noncrystallographic point groups ) , a different superset of harmonics and index rules must be constructed .in contrast , the approach proposed herein works directly with the symmetry operation in matrix form ( which are easy to determine ) and no classification into categories of point group is needed . the possibility of having point group in different orientation ( or settings ) is automatically handled , with no extra coding effort .the algorithm only relies on basic linear algebra operations and handles any point groups , not just those for which supersets of harmonics have already been constructed .the proposed method is related to the one proposed in to generate tensor bases , although additional steps , provided herein , were needed to formally show that such tensors bases can be used to generate direction - dependent harmonics and to avoid redundant harmonics via a projection scheme .figure [ ptgrpfig ] shows the harmonics generated by the proposed algorithm for each of the crystallographic point groups .the coefficients in electronic form can be found in the supplement material ( in the atat format described in section [ secatat ] , along with the input files needed to generate the harmonics ) .figure [ nonxtalfig ] shows the result of a similar exercise for selected noncrystallographic point groups .the code takes , as an input , either a point group ( specified via generators ) or a structural information from which it determines the point symmetry automatically .it outputs the harmonics in a file , in form that is easy to read into a computer code and outputs the harmonics in human - readable form on the standard output .some of the file formats contain extraneous items not needed for harmonic generation _per se _ ( marked in italics below ) , but that are included to ensure compatibility with other portions of the atat package . by default, the code reads in structural information ( from the ` lat.in ` file by default a alternate file name can be specified with the ` -l ` option ) and determines the point group automatically .the lat.in file has the following format . 1 . 
first , the coordinate system ,, is specified , either as~[b]~[c]~[\alpha ] ~[\beta ] ~[\gamma ] \]]*or * in terms of cartesian coordinates , one axis per line: & \left [ a_{y}\right ] & \left [ a_{z}\right ]\\ \left [ b_{x}\right ] & \left [ b_{y}\right ] & \left [ b_{z}\right ] \\\left [ c_{x}\right ] & \left [ c_{y}\right ] & \left [ c_{z}\right]\end{array}\ ] ] 2 .then the lattice vectors are listed , one per line , expressed in the coordinate system just defined: & \left [ u_{b}\right ] & \left [ u_{c}\right ] \\ \left [ v_{a}\right ] & \left [ v_{b}\right ] & \left [ v_{c}\right ] \\\left [ w_{a}\right ] & \left [ w_{b}\right ] & \left [ w_{c}\right]\end{array}\ ] ] 3 . finally , the position and type(s ) of atom for site are given , expressed in the same coordinate system as the lattice vectors: & \left [ x_{b1}\right ] & \left [ x_{c1}\right ] & \left [ t_{11},t_{21}\ldots \right ] \\\left [ x_{a2}\right ] & \left [ x_{b2}\right ] & \left [ x_{c2}\right ] & \left [ t_{12},t_{22},\ldots \right ] \\ \vdots & \vdots & \vdots & \vdots\end{array}\ ] ] an example of such file , for an al - ti alloy adopting the hcp crystal structure is : [ cols= " < , < " , ] invoking the code without any options displays help . at the minimum , the user must specify the -r option .all other options are optional .this work is supported by xsede computing resources and by the national science foundation under grant no .
|
we present a simple and general method to generate a set of basis functions suitable for parametrizing the anisotropy of a given physical property in the presence of symmetry constraints . this mathematical tool has direct applications in various fields , for instance , to parametrize the orientation - dependence of interface energies or to represent the so - called constituent strain elastic energy of superlattice structures in the long wavelength limit . the proposed method can be easily implemented using generic linear algebra operations without having to consider many different subcases that depend on the point group symmetry considered . the method exploits a direct correspondence between spherical harmonics and polynomial functions of a unit vector expressed in tensor notation . the method has been implemented in the ` gencs ` code of the alloy theoretic automated toolkit ( atat ) .
|
biological processes , such as signaling , gene regulation , transcription , translation , et cetera govern the cell growth , cellular differentiation , fermentation , fertilization , germination , etc . in living organisms .chemical processes , such as oxidation , reduction , hydrolysis , nitrification , polymerization , and so forth underpin biological processes . physical processes , particularly solvation , are involved in all the aforementioned chemical and biological processes .therefore , a prerequisite for the understanding of chemical and biological processes is to study the solvation process . as a physical process , solvation does not involve the formation and/or breaking of any covalent bond , but is associated with solvent and solute electrostatic , dipolar , induced dipolar , and van der waals interactions .experimentally , solvation can be analyzed by the measurement of solvation free energies .theoretically , solavtion can be investigated by quantum mechanics , molecular mechanics , integral equation , implicit solvent models , and simple phenomenological modifications of coulomb s law . among, the implicit solvent models are known to balance the computational complexity and the accuracy in the solvation free energy prediction , and thus , offer an efficient approach .the general idea of implicit solvent models is to treat the solvent as a dielectric continuum and describe the solute in atomistic detail .the total solvation free energy is decomposed into nonpolar and polar parts .there is a wide variety of ways to carry out this decomposition .for example , nonpolar energy contributions can be modeled in two stages : the work of displacing solvent when adding a rigid solute to the solvent and the dispersive nonpolar interactions between the solute atoms and surrounding solvent .the polar part is due to the electrostatic interactions and can be approximated by generalized born ( gb ) , polarizable continuum ( pc ) poisson - boltzmann ( pb ) models . among them , gb models are heuristic approaches to polar solvation energy analysis .pc models resort to quantum mechanical calculations of induced solute charges .pb methods can be formally derived from maxwell equations and statistical mechanics for electrolyte solutions and therefore offer the promise of handling large biomolecules with sufficient accuracy and robustness .conceptually , the separation between continuum solvent and the discrete ( atomistic ) solute introduces an interface definition .this definition may take the form of analytic functions or nonsmooth boundaries dividing the solute - solvent domains .the van der waals surface , solvent accessible surface , and molecular surface ( ms ) are devised for this purpose and have found their success in biophysical calculations .it has been noticed that the performance of implicit solvent models is very sensitive to the interface definition .this comes as no surprise because many of these popular interface definitions are _ ad hoc _ divisions of the solute and solvent domains based on rigid molecular geometry and neglecting solute - solvent energetic interactions .additionally , geometric singularities associated with these surface definitions incur enormous computational instability and lead to conceptual difficulty in interpreting the sharp interface .the differential geometry ( dg ) theory of surfaces and associated geometric partial differential equations ( pdes ) provide a natural description of the solvent - solute interface . 
in 2005 , wei and his collaborators introduced curvature - controlled pdes for generating molecular surfaces in solvation analysis . the first variational solvent - solute interface , namely , the minimal molecular surface ( mms ) , was constructed in 2006 by wei and coworkers based on the dg theory of surfaces .mmss are constructed by solving the mean curvature flow , or the laplace - beltrami flow , and have been applied to the calculation of electrostatic potentials and solvation free energies .this approach was generalized to potential - driven geometric flows , which admit physical interactions , for the surface generation of biomolecules in solution .while our approaches were employed and/or modified by many others for molecular surface and solvation analysis , our geometric pde and variational surface models are , to our knowledge , the first of their kind for solvent - solute interface and solvation modeling .since the surface area minimization is equivalent to the minimization of surface free energies , due to a constant surface tension , this approach can be easily incorporated into the variational formulation of the pb theory to result in dg - based full solvation models , following a similar approach by dzubiella _et al _ .our dg - based solvation models have been implemented in the eulerian formulation , where the solvent - solute interface is embedded in the three - dimensional ( 3d ) euclidean space and behaves like a smooth characteristic function .the resulting interface and associated dielectric function vary smoothly from their values in the solute domain to those in the solvent domain and are computationally robust .an alternative implementation is the lagrangian formulation in which the solvent - solute boundary is extracted as a sharp surface at a given isovalue and subsequently used in the solvation analysis , including nonpolar and polar modeling .one major advantage of our dg based solvation model is that it enables the synergistic coupling between the solute and solvent domains via the variation procedure . as a result ,our dg based solvation model is able to significantly reduce the number of free parameters that users must `` fit '' or adjust in applications to real - world systems .it has been demonstrated that physical parameters , i.e. , pressure and surface tension obtained from experimental data , can be directly employed in our dg - based solvation models for accurate solvation energy prediction .another advantage of our dg based solvation model is that it avoids the use of _ ad hoc _ surface definitions and its interfaces , particularly ones generated from the eulerian formulation , are free of troublesome geometric singularities that commonly occur in conventional solvent - accessible and solvent - excluded surfaces . as a result , our dg based solvation model bypasses the sophisticated interface techniques required for solving the pb equation . 
in particular , the smooth solvent - solute interface obtained from the eulerian formulation can be directly interpreted as the physical solvent - solute boundary profile .additionally , the resulting smooth dielectric boundary can also have a straightforward physical interpretation .the other advantage of our dg based solvation model is that it is nature and easy to incorporate the density functional theory ( dft ) in its variational formulation .consequently , it is able to reevaluate and reassign the solute charge induced by solvent polarization effect during the solvation process .the resulting total energy minimization process recreates or resembles the solvent - solute interactions , i.e. , polarization , dispersion , and polar and nonpolar coupling in a realistic solvation process .recently , dg based solvation model has been extended to dg based multiscale models for non - equilibrium processes in biomolecular systems .these models recover the dg based solvation model at the equilibrium .recently , we have demonstrated that the dg based nonpolar solvation model is able to outperform many other methods in solvation energy predictions for a large number nonpolar molecules .the root mean square error ( rmse ) of our predictions was below 0.4kcal / mol , which clearly indicates the potential power of the dg based solvation formulation .however , the dg based full solvation model has not shown a similar superiority in accuracy , although it works very well .having so many aforementioned advantages , our dg based solvation models ought to outperform other methods with a similar level of approximations .one obstacle that hinders the performance of our dg based _ full _solvation model is the numerical instability in solving two strongly coupled and highly nonlinear pdes , namely , the generalized laplace - beltrami ( glb ) equation and the generalized pb ( gpb ) equation . to avoid such instability ,a strong parameter constraint was applied to the nonpolar part in our earlier work , which results in the reduction of our model accuracy .the objective of the present work is to explore a better parameter optimization of our dg based solvation models .a pair of conditions is prescribed to ensure the physical solution of the glb equation , which leads to the well - posedness of the gpb equation .such a well - posedness in turn renders the stability of solving the glb equation .the stable solution of the coupled glb and gpb equation enables us to optimize the model parameters and produce the highly accurate prediction of solvation free energies .some of the best results are obtained in the solvation free energy prediction of more than a hundred molecules of both polar and nonpolar types .the rest of this paper is organized as the follows . to establish the notation and facilitate further development, we present a brief review of our dg based solvation models in section [ theory ] . 
by using the variational principle , we derive the coupled glb and gpb equations .necessary boundary conditions and initial values are prescribed to make this coupled system well - posed .section [ algorithm ] is devoted to parameter learning algorithms .we develop a protocol to stabilize the iterative solution process of coupled nonlinear pdes .we introduce perturbation and convex optimization methods to ensure stability of the numerical solution of the glb equation in coupling with the gpb equation .the newly achieved stability in solving the coupled pdes leads to an appropriate minimization of solvation free energies with respect to our model parameters . in section [ numerical - result ], we show that for more than a hundred of compounds of various types , including both polar and nonpolar molecules , the present dg solvation model offers some of the most accurate solvation free energy prediction with the overall rmse of 0.5kcal / mol .this paper ends with a conclusion .the free energy functional for our dg based full solvation model can be expressed as & = \int\left\ { \gamma |\nabla s | + p s + ( 1-s)u + s \left [ -\frac{\epsilon_m}{2}|\nabla\phi|^2 + \phi\ \rho_m\right ] \right . \\ & \left . + ( 1-s)\left[-\frac{\epsilon_s}{2}|\nabla\phi|^2-k_b t \sum_{\alpha } \rho_{\alpha 0}\left ( e^{-\frac{q_{\alpha }\phi } { k_b t } } -1\right ) \right ] \right\ } d{\bf{r } } , \quad { \bf r } \in { \mathbb r}^3 \end{aligned}\end{aligned}\ ] ] where is the surface tension , is the hydrodynamic pressure difference between solvent and solute , and denotes the solvent - solute non - electrostatic interactions represented by the lennard - jones potentials in the present work . here is a hypersurface or simply surface function that characterizes the solute domain and embeds the 2d surface in , whereas characterizes the solvent domain .additionally , is the electrostatic potential and and are the dielectric constants of the solvent and solute , respectively . here is the boltzmann constant , is the temperature , denotes the reference bulk concentration of the solvent species , and denotes the charge valence of the solvent species , which is zero for an uncharged solvent component .we use to represent the charge density of the solute .the charge density is often modeled by a point charge approximation where denoting the partial charge of the atom in the solute .alternatively , the charge density computed from the dft , which changes during the iteration or energy minimization , can be directly employed as well . 
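because the display equation above arrives garbled in this copy ( the left - hand side and the alignment markup are missing ) , the total free energy functional it refers to is restated here ; the form below is assembled from the terms visible in the text and should be checked against the original publication :

\[
G_{\text{total}}[s,\phi] = \int_{\mathbb{R}^3} \Big\{ \gamma\,|\nabla s| + p\,s + (1-s)\,U
+ s\Big[-\frac{\epsilon_m}{2}|\nabla\phi|^2 + \phi\,\rho_m\Big]
+ (1-s)\Big[-\frac{\epsilon_s}{2}|\nabla\phi|^2
- k_B T \sum_{\alpha}\rho_{\alpha 0}\Big(e^{-q_{\alpha}\phi/k_B T}-1\Big)\Big]\Big\}\, d\mathbf{r}.
\]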
in eq .( [ eq8tot ] ) , the first three terms consist of the so called nonpolar solvation free energy functional while the last two terms form the polar one .after the variation with respect to , we construct the following generalized laplace - beltrami ( glb ) equation by using a procedure discussed in our earlier work ,\end{aligned}\ ] ] where the potential driven term is given by as in the nonpolar case , solving the generalized laplace - beltrami equation ( [ eq10surf ] ) generates the solvent - solute interface through the surface function .additionally , variation with respect to gives rise to the generalized poisson - boltzmann ( gpb ) equation : where is the generalized permittivity function .as shown in our earlier work , is a smooth dielectric function gradually varying from to .thus , the solution procedure of the gpb equation avoids many numerical difficulties of solving elliptic equations with discontinuous coefficients in the standard pb equation .the glb ( [ eq10surf ] ) and gbp ( [ eq13poisson ] ) equations form a highly nonlinear system , in which the glb equation is solved for the interface profile of the solute and solvent .the interface profile determines the dielectric function in the gpb equation .the gpb equation is solved for the electrostatics potential that behaves as an external potential in the glb equation .the strongly coupled system should be solved in self - consistent iterations . for glb equation ( [ eq10surf ] ) , the computational domain is , where is the solute van der waals domain given by .here is the ball in the solute centered at with van der waals radius .we apply the following dirichlet boundary condition to the initial value of is given by where is the boundary of the extended solute domain constructed by . here has an extended radius of with being the probe radius , which is set to 1.4 in the present work . for gpb equation ( [ eq13poisson ] ) ,the computational domain is .we set the dirichlet boundary condition via the debye - hckel expression , where is the modified debye - hckel screening function , which is zero if there is no salt molecule in the solvent .note that no interface condition is needed as and are smooth functions in general for .consequently , the resulting gbp ( [ eq13poisson ] ) equation is easy to solve . to compare with experimental solvation data, one needs to compute the total solvation free energy , which , in our dg based solvation model , is obtained as where is the electrostatic solvation free energy , \ ] ] where is the solution of the above the gpb model in a homogenous system , obtained by setting a constant permittivity function in the whole domain .the nonpolar energy is computed by d{\bf{r}}.\end{aligned}\ ] ] the dg based solvation model is formulated as a coupled glb and gpb equation system , in which the glb equation provides the solvent solute boundary for solving the gpb , while the gpb equation produces the external potential in the glb equation for the surface evolution . the solution procedure for this coupled system has been discussed in our earlier work . 
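taking the nonpolar part to be the first three terms of the functional above , as stated in the text , its evaluation on a uniform grid reduces to a simple quadrature . the sketch below uses a midpoint - rule sum and np.gradient for the surface gradient ; the grid spacing , the smoothed - sphere test field and the values of gamma and p are arbitrary placeholders , and the lennard - jones field u would in practice come from the semi - continuum integrals described later in the text .

```python
import numpy as np

def nonpolar_energy(s, u, h, gamma, p):
    """midpoint-rule quadrature of  G_np = int [ gamma*|grad s| + p*s + (1-s)*U ] dr
       for fields sampled on a uniform cartesian grid with spacing h."""
    gx, gy, gz = np.gradient(s, h)
    integrand = gamma * np.sqrt(gx**2 + gy**2 + gz**2) + p * s + (1.0 - s) * u
    return integrand.sum() * h**3

# toy fields: a smoothed sphere standing in for the surface function s
h = 0.25
axis = np.arange(-4.0, 4.0, h)
x, y, z = np.meshgrid(axis, axis, axis, indexing="ij")
r = np.sqrt(x**2 + y**2 + z**2)
s = 0.5 * (1.0 - np.tanh((r - 2.0) / 0.5))     # ~1 inside the solute, ~0 outside
u = np.zeros_like(s)                            # LJ term omitted in this toy example
print(nonpolar_energy(s, u, h, gamma=0.0065, p=0.035))   # placeholder gamma and p
```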
essentially , for the glb equation , an alternating direction implicit ( adi ) scheme is utilized for the time integral , in conjugation with the second order finite difference method for the spatial discretization .the gpb equation is discretized by a standard second order finite difference scheme and the resulting algebraic equation system is solved by using a standard krylov subspace method based solver .to solve the above coupled equation system , a set of parameters that appeared in the glb equation , namely , surface tension , hydrodynamic pressure difference and the product of solvent density , should be predetermined .unfortunately , this coupled system is unstable at the certain choices of parameters . specifically , for certain , one may have or , which leads to unphysical and unphysical solution of gpb equation ( [ eq13poisson ] ) and thus gives rise to a divergent .this instability can seriously reduce the model accuracy . for a concise description of our algorithm, we assume that there is only one solvent component ( water ) and denote the parameter set as : where is the number of types of atoms in the solute molecule .as mentioned in the previous part , the parameter set used in solving the coupled pdes should meet two requirements , namely , the stability of solving the coupled pdes and the optimal prediction of the solvation free energy ( or fitting the experimental solvation free energy in the best approach ) . based on these two criteriawe introduce a two - stage numerical procedure to optimize the parameter set and solve the coupled pdes : * explore the stability conditions of the coupled pdes by introducing an auxiliary system via a small perturbation ; * optimize the parameter set by an iteratively scheme satisfying the stability constraint . in this partwe investigate the stability conditions for the numerical solution to the coupled pdes ( [ eq10surf ] ) and ( [ eq13poisson ] ) .the basic idea is to utilize a small perturbation method .it is known that omitting the external potential in the glb equation yields the laplace - beltrami ( lb ) equation : this equation is of diffusion type and is well posed with the dirichlet type of boundary conditions provided .numerically it is easy to solve eq .( [ lbe ] ) to yield the profile of the solvent solute boundary . after solving the lb equation ( [ lbe ] ) , we use the generated smooth profile of the solvent solute boundary to determine the permittivity function in the gpb equation . for simplicity, we consider a pure water solvent , without the external potential the system of eqs .( [ lbe])-([pbe - water ] ) can be solved stably by first solving the lb equation and then the gpb equation . motivated by the above observation ,if the external potential is dominated by the mean curvature term , the stability of coupled gpb and glb equations can be preserved .based on numerical experiments , the lennard - jones interaction between the solvent and solute is usually small since this term is constrained by the nonpolar free energy in our model . in our method, we enforce the following constraint conditions to make the coupled system well - posed in the numerical sense and are some appropriate positive constants . in summary ,the original problem is transformed into optimizing parameters in the following system to attain the best solvation free energy fitting with experimental results : , \\-\nabla\cdot\left(\epsilon(s)\nabla\phi\right)=s\rho_m , \\\gamma>\gamma_0>0 , \\ |p|\leq \beta \gamma. 
\end{array } \right.\ ] ] note that the potential is omitted in the glb equation ( [ constrained - pde - optimization ] ) , because we have already enforced the dirichlet boundary condition in the glb equation , while is inside the van der waals surface .based on large amount of numerical tests , it is found that there is no need to enforce the constraint conditions on the parameters that appear in the lennard - jones term . when this term is used to fit the solvation energy with experimental results ,the parameters can be bounded in a small neighborhood of 0 automatically during the fitting procedure .these parameters essentially do not affect the numerical stability .in this part , we propose a self - consistent approach to solve the coupled glb and gpb equations for a given set of parameters .basically , the coupled system is solved iteratively until both the electrostatic solvation free energy given in eq .( [ reaction - field - potential ] ) and the surface function are both converged . here the surface function is said to be converged provided that the surface area and enclosed volume are both converged .we present an algorithm for solving the following coupled systems : and ,\ ] ] where is the external potential which is defined as : * * auxiliary system : * , * * full system : * .dirichlet boundary conditions are employed for both gpb ( [ gpb2 ] ) and glb ( [ glb2 ] ) equations with auxiliary and full external potentials , giving rise to a well - posed coupled system .the smooth profile of the solvent - solute boundary enables the direct use of the second order central finite difference scheme to achieve the second order convergence in discretizing the gpb equation .the biconjugate gradient scheme is used to solve the resulting algebraic equation system .the glb equation of both the auxiliary and full systems can be solved by the central finite difference discretization of the spatial domain and the forward euler time integrator for the time domain discretization .for the sake of simplicity , in the current work , we employed the central finite difference scheme for spatial domain discretization in both gpb and glb equations , and forward euler integrator for the time domain discretization of glb equation . for stability consideration , in the discretization of the glb equation, the discretization step size of temporal and spatial domain satisfies the courant - friedrichs - lewy condition . to accelerate the numerical integration ,a multigrid solver can be employed for gbp equation , and an alternating direction implicit scheme , which is unconditionally stable , can be utilized for the temporal integration .however , detail discussion of these accelerated schemes is beyond the scope of the present work .a pseudo code is given in algorithm [ scf - pb - lb ] to offer a general framework for solving the coupled glb and gpb equations in a self - consistent manner .the outer iteration controls the convergence of the gpb equation through measuring the change of electrostatic solvation free energy in two adjacent iterations , while the inner iteration controls the convergence of the glb equation based on the variation of surface areas and enclosed volumes through the surface function .the variables , , , , , and denote the electrostatic solvation free energy , surface area , and volume enclosed by the surface of two immediate iterations , respectively .* initialize : , , , , , * * do while * ( ) 1.0 cm 1.0 cm * do while * ( .and . ) 2.0 cm , . 
2.0cm update the surface profile function by solving the glb equation ( [ glb2 ] ) .2.0 cm , .1.0 cm * enddo * 1.0 cm solve the gpb equation ( [ gpb2 ] ) in both vacuum and solvent with the previous updated surface profile .1.0 cm update the polar solvation free energy according to eq .( [ reaction - field - potential ] ) .* enddo * the parameters and are the threshold constants and all set to in the current implementation . in solving the glb equation , during each updating , to ensure the stability , instead of the fully update , we update it partially , i.e., the updated solution is the weighted sum of the new solution of the current glb solution and the old solution of the glb equation in the previous step : where is a constant and set to 0.5 in the present work . in this part, we present the parameter optimization scheme . in our approach , parameters start from an initial guess and then are updated sequentially until reaching the convergence . herethe convergence is measured by the root mean square ( rms ) error between the fitted and experimental solvation free energies for a given set of molecules .consider the parameter optimization for a given group of molecules , denoted as . as discussed above the parameter setis . to optimize the parameter set , we start from gpb equation ( [ gpb2 ] ) and the auxiliary system of glb equation ( [ glb2 ] ) with . after solving the initial coupled system by using algorithm [ scf - pb - lb ], we obtain the following quantities for each molecule in the training set : d\mathbf{r}\right)_j,\right.\\ \left .\cdots,\right.\\ \left .\left(\sum_{i=1}^{n_m } \delta_i^{n_t } \int_{\omega_s }\left[\left(\frac{\sigma_{s}+\sigma_{n_t}}{||\mathbf{r}-\mathbf{r}_i||}\right)^{12}- 2\left(\frac{\sigma_{s}+\sigma_{n_t}}{||\mathbf{r}-\mathbf{r}_i||}\right)^{6}\right ] d\mathbf{r}\right)_j \right\}\ ] ] where . here and denote the number of atoms and types of atoms in a specific molecule .the last few terms involve semi - discrete and semi - continuum lennard - jones potentials . where ; ; is the atomic radius of the type of atoms. therefore , atoms of the same type have a common atomic radius and fitting parameter . the predicted solvation free energy for molecule can be represented as : d\mathbf{r}\right)_j \\ + \cdots+\tilde{\varepsilon}_{n_t } \left(\sum_{i=1}^{n_m } \delta_i^{n_t } \int_{\omega_s } \left[\left(\frac{\sigma_{s}+\sigma_{n_t}}{||\mathbf{r}-\mathbf{r}_i||}\right)^{12}- 2\left(\frac{\sigma_s+\sigma_{n_t}}{||\mathbf{r}-\mathbf{r}_i||}\right)^{6}\right ] d\mathbf{r}\right)_j.\end{aligned}\ ] ] we denote the predicted solvation free energy for the given set of molecules as , which is a function of the parameter set , and denote the corresponding experimental solvation free energy as .then the parameter optimization problem in the coupled pdes given by eqs .( [ constrained - pde - optimization ] ) can be transformed into the following regularized and constrained optimization problem : s.t . and norm of the quantity and is the regularization parameter chosen to be 10 in the present work to ensure the dominance of the first term and avoid overfitting . 
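algorithm [ scf - pb - lb ] above , together with the partial ( weighted ) update of the surface function just described , amounts to two nested convergence loops . the skeleton below shows only that control flow ; solve_glb_step , solve_gpb , polar_energy , area and volume are placeholders for the finite - difference solvers and quadratures of the text , and the tolerances are illustrative .

```python
def scf_solvation(s, phi, solve_glb_step, solve_gpb, polar_energy, area, volume,
                  w=0.5, tol=1e-5, max_outer=100, max_inner=500):
    """nested self-consistent iteration coupling the GLB and GPB equations.
       all solver callables are placeholders for the discretizations in the text."""
    dg_old = float("inf")
    for _ in range(max_outer):
        a_old = v_old = float("inf")
        for _ in range(max_inner):
            s_new = solve_glb_step(s, phi)       # one GLB sweep / time step
            s = w * s_new + (1.0 - w) * s        # partial (relaxed) update
            a_new, v_new = area(s), volume(s)
            if abs(a_new - a_old) < tol and abs(v_new - v_old) < tol:
                break
            a_old, v_old = a_new, v_new
        phi, phi_hom = solve_gpb(s)              # solvated and homogeneous solves
        dg_new = polar_energy(s, phi, phi_hom)
        converged = abs(dg_new - dg_old) < tol
        dg_old = dg_new
        if converged:
            break
    return s, phi, dg_old
```

the constrained fit that supplies the parameter set to this iteration , and the constants bounding it , are specified next .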
here and are set respectively to and in the present implementation , which guarantees the stability of the coupled system according to a large amount of numerical tests .it is obvious that the objective function ( [ convex - minimization ] ) in the optimization is a convex function , meanwhile the solution domain restricted by constraints ( [ constraint1])-([constraint2 ] ) forms a convex domain .therefore the optimization problem given by eqs .( [ convex - minimization])-([constraint2 ] ) is a convex optimization problem , which was studied by grant and boyd . after solving the above convex optimization problem , parameter set is updated and used again in solving the coupled glb and gpb system , i.e. , eqs .( [ glb2 ] ) and ( [ gpb2 ] ) . repeating the above procedure, a new group of predicted solvation free energies together with a new group of parameters is obtained .this procedure is repeated until the rms error between the predicted and experimental solvation free energies in two sequential iterations is within a given threshold .based on the preparation made in the previous two subsections , namely , the self - consistent approach for solving the coupled glb and gpb system and the parameter optimization , we provide the combined algorithm for the parameter optimization and solving the coupled system for a given set of molecules .algorithm [ parameters - learning ] offers a parameter learning pseudo code for a given group of molecules .this algorithm is formulated by combining outer and inner self - consistent iterations .the outer iteration controls the convergence of the optimized parameters via two controlling parameters , and , denoting the rms error between predicted and experimental solvation free energies in two sequential iterations .the inner iteration implements the solution to the glb and gpb equations by algorithm [ scf - pb - lb ] .* initialize * : , solve the coupled gpb and glb system , where glb utilizes the auxiliary equation ( [ glb2 ] ) .solve the constrained optimization problem eqs .( [ convex - minimization])-([constraint2 ] ) to obtain the initial parameter set .update to be the rms error between experimental and predict results in the above step .* do while * ( ) 1.0 cm .1.0 cm solve the coupled gpb and glb system , where glb system with parameters set . 1.0 cm solve the constrained optimization problem eqs .( [ convex - minimization])-([constraint2 ] ) to get the updated parameters set .1.0 cm update to be rms error between experimental and predict results in the previous optimization step .1.0 cm update . * enddo * the threshold parameter is set to in the present work .in this section we present the numerical study of the dg based solvation model using the proposed parameter optimization algorithms .we first explore the optimal solvent radius used in the van der waals interactions . 
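the regularized , constrained fit invoked inside algorithm [ parameters - learning ] above can be written down with any convex - programming tool . the sketch below uses cvxpy as a stand - in for the cvx package of grant and boyd cited in the text ; the feature columns are meant to correspond to the surface - area , volume and lennard - jones integrals described earlier , the strict inequality on the surface tension is relaxed to a non - strict one , the ridge penalty is applied to all parameters for simplicity , and the data are synthetic .

```python
import numpy as np
import cvxpy as cp

def fit_parameters(features, dg_exp, dg_polar, lam=10.0, gamma0=1e-4, beta=0.5):
    """constrained ridge regression for the nonpolar parameters.
       columns of `features`: [surface-area term, volume term, LJ integrals...];
       dg_polar holds the electrostatic contribution from the current GPB solve."""
    x = cp.Variable(features.shape[1])
    residual = features @ x + dg_polar - dg_exp
    objective = cp.Minimize(cp.sum_squares(residual) + lam * cp.sum_squares(x))
    constraints = [x[0] >= gamma0,                # surface tension: gamma >= gamma_0 > 0
                   cp.abs(x[1]) <= beta * x[0]]   # pressure bound: |p| <= beta * gamma
    cp.Problem(objective, constraints).solve()
    return x.value

# synthetic illustration only: 20 molecules, 5 parameters
rng = np.random.default_rng(1)
A = rng.random((20, 5))
dg_exp = rng.normal(size=20)
dg_polar = rng.normal(scale=0.1, size=20)
print(fit_parameters(A, dg_exp, dg_polar))
```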
due to the high nonlinearity, the solvent radius can not be automatically optimized and its optimal value is obtained via searching the parameter domain .we show that for a group of molecules , there is a local minimum in the rms error when the solvent radius is varied .the corresponding optimal solvent radius is adopted for other molecules .additionally , we consider a large number of molecules with known experimental solvation free energies to test the proposed parameter optimization algorithms .these molecules are of both polar and nonpolar types and are divided into six groups : the sampl0 test set , the alkane , alkene , ether , alcohol and phenol types .it is found that our dg based solvation model works really well for these molecules . finally , to demonstrate the predictive power of the present dg based solvation model, we perform a five - fold cross validation for alkane , alkene , ether , alcohol and phenol types of molecules .it is found that training and validation errors are of the same level , which confirms the ability of our model for the solvation free energy prediction . the sampl0 moleculestructural conformations are adopted from the literature with zap 9 radii and the openeye - am1-bcc v1 charges .for other molecules , structural conformations are obtained from freesolv .amber gaff force field is utilized for the charge assignment .the van der waals radii as well as the atomic radii of hydrogen , carbon and oxygen atoms are set to 1.2 , 1.7 and 1.5 , respectively .the grid spacing is set to 0.25 in all of our calculations ( discretization and integration ) .the computational domain is set to the bounding box of the solute molecule with an extra buffer length of 6.0 ..the solvation free energy prediction for the sampl0 set .energy is in the unit of kcal / mol . [cols=">,>,>,>,>,>",options="header " , ] [ mini - group ] having verified that our dg based solvation model with the optimized parameters provides very good regression results , we perform a five - fold cross validation to further illustrate the predictive power of the present method for independent data sets . specifically , the parameters learned from a group of molecules can be employed for the blind prediction of other molecules . to perform the five - fold cross validation ,each type of molecules is subdivided into five sub - groups as uniformly as possible , table [ mini - group ] lists the number of molecules in each sub - group for each type of molecules . in our parametersoptimization , we leave out one sub - group of molecules and use the rest of molecules to establish our dg based solvation model .the optimized parameters are then employed for the blind prediction of solvation free energies of the left out sub - group of molecules .figures [ cv_alkanes ] , [ cv_alkenes],[cv_ether ] , [ cv_alcohol ] , and [ cv_phenol ] demonstrate the cross validation results of the alkane , alkene , ether , alcohol , and phenol molecules , respectively .it is seen that training and validation errors are similar to each other , which verifies the ability of our model in the blind prediction of solvation free energies . 
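the leave - one - subgroup - out protocol described above is an ordinary five - fold cross validation , and the training / validation rmse comparison can be organized as in the sketch below . the fitting and prediction callables are placeholders ( a plain least - squares fit is used in the toy example ) and the data are synthetic ; only the bookkeeping mirrors the procedure in the text .

```python
import numpy as np
from sklearn.model_selection import KFold

def cross_validate(features, dg_exp, fit, predict, n_splits=5, seed=0):
    """five-fold cross validation returning per-fold training and validation RMSE.
       `fit` and `predict` stand in for the parameter optimization and the
       solvation-energy evaluation described in the text."""
    errors = []
    for train_idx, test_idx in KFold(n_splits, shuffle=True, random_state=seed).split(features):
        params = fit(features[train_idx], dg_exp[train_idx])
        def rmse(idx):
            return np.sqrt(np.mean((predict(features[idx], params) - dg_exp[idx]) ** 2))
        errors.append((rmse(train_idx), rmse(test_idx)))
    return errors

# illustration with a plain least-squares fit on synthetic data
rng = np.random.default_rng(2)
A, truth = rng.random((38, 4)), np.array([1.0, -0.5, 0.2, 0.8])
dg = A @ truth + rng.normal(scale=0.3, size=38)
ls_fit = lambda X, y: np.linalg.lstsq(X, y, rcond=None)[0]
ls_pred = lambda X, p: X @ p
for tr, te in cross_validate(A, dg, ls_fit, ls_pred):
    print(f"train RMSE {tr:.3f}   validation RMSE {te:.3f}")
```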
in the real prediction of the solvation free energy for a given molecule of unknown category, we can first assign it to a given group , and then employ the dg based solvation model with the optimal parameters learned for this specific group for a blind prediction .differential geometry ( dg ) based solvation models have had a considerable success in solvation analysis .particularly , our dg based nonpolar solvation model was shown to offer some of the most accurate solvation energy predictions of various nonpolar molecules .however , our dg based full solvation model is subject to numerical instability in solving the generalized laplace - beltrami ( glb ) equation , due to its coupling with the generalized poisson boltzmann ( gpb ) equation . to stabilize the coupled glb and gpb equations ,a strong constraint on the van der waals interaction was applied in our earlier work , which hinders the parameter optimization of our dg based solvation model . in the present work ,we resolve this problem by introducing new parameter optimization algorithms , namely perturbation method and convex optimization , for the dg based solvation model .new stability conditions are explicitly imposed to the parameter selection , which guarantees the stability and robustness of solving the glb equation and leads to constrained optimization of our dg based solvation model . the new optimization algorithms are intensively validated by using a large number of test molecules , including the sampl0 test set , alkane , alkene , ether , alcohol and phenol types of solutes .regression results based on our new algorithms are consistent extremely well with experimental data .additionally , a five - fold cross validation technique is employed to explore the ability of our dg based solvation models for the blind prediction of the solvation free energies for a variety of solute molecules .it is found that the same level of errors is found in the training and validation sets , which confirms our model s predictive power in solvation free energy analysis .the present dg based full solvation model provides a unified framework for analyzing both polar and nonploar molecules . in our future work, we will develop machine learning approaches for the robust classification of solute molecules of interest into appropriate categories so as to better predict their solvation free energies .this work was supported in part by nsf grants iis-1302285 and dms-1160352 , nih grant r01gm-090208 , and msu center for mathematical molecular biosciences initiative .the authors thank nathan baker for valuable comments .d. a. case , j. t. berryman , r. m. betz , d. s. cerutti , t. e. c. iii , t. a. darden , r. e. duke , t. j. giese , h. gohlke , a. w. goetz , n. homeyer , s. izadi , p. janowski , j. kaus , a. kovalenko , t. s. lee , s. legrand , p. li , t. luchko , r. luo , b. madej , k. m. merz , g. monard , p. needham , h. nguyen , h. t. nguyen , i. omelyan , a. onufriev , d. r. roe , a. roitberg , r. salomon - ferrer , c. l. simmerling , w. smith , j. swails , r. c. walker , j. wang , r. wolf , x. wu , d. m. york , and p. a. kollman .amber 2015 . , 2015 .m. daily , j. chun , a. heredia - langner , g. w. wei , and n. a. baker . origin of parameter degeneracy and molecular shape relationships in geometric - flow calculations of solvation free energies . , 139:204108 , 2013 .f. dong , m. vijaykumar , and h. x. 
zhou .comparison of calculation and experiment implicates significant electrostatic contributions to the binding stability of barnase and barstar ., 85(1):4960 , 2003 .a. i. dragan , c. m. read , e. n. makeyeva , e. i. milgotina , m. e. churchill , c. crane - robinson , and p. l. privalov . binding and bending by hmg boxes : energetic determinants of specificity . , 343(2):371393 , 2004 .e. gallicchio , m. m. kubo , and r. m. levy .enthalpy - entropy and cavity decomposition of alkane hydration free energies : numerical results and implications for theories of hydrophobic solvation . , 104(26):62716285 , 2000 .e. gallicchio , l. y. zhang , and r. m. levy .the sgb / np hydration free energy model based on the surface generalized born solvent reaction field and novel nonpolar hydration free energy estimators .23(5):51729 , 2002 . m. grant and s. boyd .graph implementations for nonsmooth convex programs . in v.blondel , s. boyd , and h. kimura , editors , _ recent advances in learning and control _ , lecture notes in control and information sciences , pages 95110 .springer - verlag limited , 2008 .http://stanford.edu/~boyd/graph_dcp.html .t. hastie , r. tibshirani , and j. friedman .the elements of statistical learning : data mining , inference , and prediction . in _ the elements of statistical learning : data mining , inference , and prediction _ , second edition .springer , 2009 .l. a. kuhn , m. a. siani , m. e. pique , c. l. fisher , e. d. getzoff , and j. a. tainer .the interdependence of protein surface topography and bound water molecules revealed by surface accessibility and fractal density measures ., 228(1):1322 , 1992 .e. l. ratkova , g. n. chuev , v. p. sergiievskyi , and m. v. fedorov . an accurate prediction of hydration free energies by combination of molecular integral equations theory with structural descriptors ., 114(37):120682079 , 2010 .
|
differential geometry ( dg ) based solvation models are a new class of variational implicit solvent approaches that are able to avoid unphysical solvent - solute boundary definitions and associated geometric singularities , and dynamically couple polar and nonpolar interactions in a self - consistent framework . our earlier study indicates that the dg based nonpolar solvation model outperforms other methods in nonpolar solvation energy predictions . however , the dg based full solvation model has not shown its superiority in solvation analysis , due to its difficulty in parametrization , which must ensure the stability of the solution of strongly coupled nonlinear laplace - beltrami and poisson - boltzmann equations . in this work , we introduce new parameter learning algorithms based on perturbation and convex optimization theories to stabilize the numerical solution and thus achieve an optimal parametrization of the dg based solvation models . an interesting feature of the present dg based solvation model is that it provides accurate solvation free energy predictions for both polar and nonpolar molecules in a unified formulation . extensive numerical experiments demonstrate that the present dg based solvation model delivers some of the most accurate predictions of the solvation free energies for a large number of molecules . solvation model , electrostatic analysis , parametrization .
|
it is well - known that , for the classic periodic - review single - commodity inventory control problems with fixed ordering costs , policies are optimal for the expected total cost criterion under certain conditions on cost functions .these policies order up to the level when the inventory level is less than and do not order otherwise .this paper investigates the general situations , when policies may not be optimal .systematic studies of inventory control problems started with the papers by arrow et al . and dvoretzky et al .most of the earlier results are surveyed in the books by porteus and zipkin .recently developed general optimality conditions applicable to inventory control problem are discussed in the tutorial by feinberg .here , we mention just a few directly relevant references .scarf introduced the concept of -convexity to prove the optimality of policies for finite - horizon problems with continuous demand .zabel indicated some gaps in scarf and corrected them .iglehart extended the results in scarf to infinite - horizon problems with continuous demand .veinott and wagner proved the optimality of policies for both finite - horizon and infinite - horizon problems with discrete demand .zheng provided an alternative proof for discrete demand .beyer and sethi completed the missing proofs in iglehart and veinott and wagner . in general , policies may not be optimal . to ensure the optimality of policies , the additional assumption on backordering cost function ( see condition [ cond : h(x ) ] below )is used in many papers including iglehart and veinott and wagner .relevant assumptions are used in schl , heyman and sobel , bertsekas , chen and simchi - levi , huh and janakiraman , and huh et al . . as shown by veinott and wagner for problems with discrete demand and feinberg and lewis for an arbitrary distributed demand ,such assumptions are not needed for an infinite - horizon problem , when the discount factor is close to for problems with linear holding costs , according to simchi - levi et al .* theorem 8.3.4 , p. 126 ) , finite - horizon undiscounted value functions are continuous and , according to bensoussan ( * ? ? ? * theorem 9.11 , p. 118 ) , infinite - horizon discounted value functions are continuous .these continuity properties obviously hold , if the amounts of stored inventory are limited only to integer values , and they are nontrivial if the amounts of stored inventory are modeled by real - valued numbers .general results on markov decision processes ( mdps ) state only lower semi - continuity of discounted value functions ; see feinberg et al .* theorem 2 ) . this paper studies the structure of optimal policies without the assumption on backordering costs mentioned above .we describe a parameter , which , together with the value of the discount factor and the horizon length , defines the structure of an optimal policy . for a finite horizon , depending on the values of this parameter , the discount factor , and the horizon length , there are three possible structures of an optimal policy : ( i ) it is an policy , ( ii ) it is an policy at earlier stages and then does not order inventory , or ( iii ) it never orders inventory . 
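The policies described above order up to a level S whenever the inventory falls below s and do nothing otherwise, with a fixed cost charged on every order placed. Before the formal development resumes below, the following hedged sketch estimates the expected total discounted cost of such a policy by Monte Carlo simulation of the backlogged dynamics; every numerical value (fixed and per-unit ordering costs, holding and backordering rates, demand law, discount factor, horizon truncation) is an illustrative assumption rather than a value taken from the paper.

```python
# Monte Carlo estimate of the expected total discounted cost of an (s, S) policy
# for the periodic-review model with fixed ordering cost and backlogging.
# All parameter values are illustrative assumptions.
import numpy as np

K, c = 5.0, 1.0                       # fixed and per-unit ordering costs
h_plus, h_minus = 1.0, 4.0            # holding and backordering rates
alpha = 0.95                          # discount factor
rng = np.random.default_rng(2)

def holding(x):
    """Convex holding/backordering cost."""
    return h_plus * max(x, 0.0) + h_minus * max(-x, 0.0)

def discounted_cost(s, S, x0=0.0, horizon=150, n_runs=2000):
    total = 0.0
    for _ in range(n_runs):
        x, cost = x0, 0.0
        for t in range(horizon):
            a = S - x if x < s else 0.0              # (s, S) ordering rule
            d = rng.poisson(3.0)                     # i.i.d. demand (assumed law)
            cost += alpha ** t * (K * (a > 0) + c * a + holding(x + a - d))
            x = x + a - d                            # unmet demand is backlogged
        total += cost
    return total / n_runs

print(discounted_cost(s=2.0, S=10.0))
```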
for the infinite horizon , depending on the values of this parameter and the discount factor , an optimal policy either is an policy or never orders inventory .this paper also establishes continuity of optimal discounted value functions for finite and infinite - horizon problems .the continuity of values functions is used to prove that , if the amount of stored inventory is modeled by real numbers , then ordering up to the levels and are also optimal actions at states and respectively for discounted finite and infinite - horizon problems ; see corollary [ thm : ordering at sa ] below . the rest of the paper is organized in the following way .section [ sec : model definition ] introduces the classic stochastic periodic - review single - commodity inventory control problems with fixed ordering costs .section [ sec : inventory control ] presents the known results on the optimality of policies .section [ sec : structure general results ] describes the structure of optimal policies for finite - horizon and infinite - horizon problems for all possible values of discount factors .section [ sec : continuity ] establishes continuity of value functions and describes the optimal actions at states and denote the real line , denote the set of all integers , and consider the classic stochastic periodic - review inventory control problem with fixed ordering cost and general demand . at times a decision - maker views the current inventory of a single commodity and makes an ordering decision .assuming zero lead times , the products are immediately available to meet demand .demand is then realized , the decision - maker views the remaining inventory , and the process continues .the unmet demand is backlogged and the cost of inventory held or backlogged ( negative inventory ) is modeled as a convex function .the demand and the order quantity are assumed to be non - negative .the state and action spaces are either ( i ) and or ( ii ) and the inventory control problem is defined by the following parameters . 1 . is a fixed ordering cost ; 2 . is the per unit ordering cost ; 3 . is the holding / backordering cost per period , which is assumed to be a convex real - valued function on such that as without loss of generality , consider to be non - negative ; 4 . is a sequence of i.i.d .non - negative finite random variables representing the demand at periods we assume that < + \infty ] for all define the following function for all such that the convexity of implies that is non - decreasing in on for all and non - decreasing in on for all see hiriart - urruty and lemarchal ( * ? ? ?* proposition 1.1.4 on p. 
4 ) .since is a non - decreasing function on then consider the limit since as then there exists such that therefore , thus the dynamics of the system is defined by the equations where and denote the current inventory level and the ordered amount at period respectively .if an action is chosen at state then the following cost is collected , , \qquad ( x , a)\in \x\times\a .\label{inventory - control : cost function}\end{aligned}\ ] ] let be the sets of histories up to periods a ( randomized ) decision rule at period is a regular transition probability ; that is , ( i ) is a probability distribution on where and ( ii ) for any measurable subset the function is measurable on a policy is a sequence of decision rules .moreover , is called non - randomized if each probability measure is concentrated at one point .a non - randomized policy is called markov if all decisions depend only on the current state and time .a markov policy is called stationary if all decisions depend only on the current state . for a finite horizon and a discount factor define the expected total discounted costs .\label{eqn : finite costs}\end{aligned}\ ] ] when and defines the infinite horizon expected total discounted cost of denoted by instead of define the optimal values where is the set of all policies .a policy is called optimal for the respective criterion if or for all is known that optimal policies may not exists .this section considers the known sufficient condition for the optimality of and policies for discounted problems .the value functions for the inventory control problem defined in section [ sec : model definition ] can be written as where is an indicator of the event and + \a\e[v_{t,\a}(x - d ) ] , \label{eqn : gna } \qquad t=0,1,\ldots , \\\ga ( x ) & = \bar{c}x + \e[h(x - d ) ] + \a\e[\va(x - d ) ] , \label{eqn : ga}\end{aligned}\ ] ] and for all in equalities ( [ eqn : vna ] ) , ( [ eqn : gna ] ) , and in equalities ( [ eqn : va ] ) , ( [ eqn : ga ] ) ; e.g. see feinberg and lewis .the functions and are lower semi - continuous for all and feinberg and lewis ( * ? ? ?* corollary 6.4 ) . since all the costs are nonnegative , equalities and imply that recall the definitions of -convex functions and policies . [ def : k - convex ] a function is called -convex , where if for each and for each suppose is a lower semi - continuous -convex function , such that as let [ def : ss policy ] let and be real numbers such that a policy is called an policy at step if it orders up to the level if and does not order , if a markov policy is called an policy if it is an policy at all steps a policy is called an policy if it is stationary and it is an policy at all steps [ cond : h(x ) ] there exist such that and it is well - known that for the problem considered in this paper and for relevant problems , this condition and its variations imply optimality policies for finite - horizon problems and policies for infinite horizon problems ; see scarf , iglehart and veinott and wagner .the following theorem presents the result from feinberg and lewis for finite and infinite horizons and for arbitrary demand distributions ; see also chen and simchi - levi , if price is fixed for the coordinating inventory control and pricing problems considered there .if condition [ cond : h(x ) ] is satisfied , then the following statements hold : a. let for consider real numbers satisfying and defined in with then for every the policy with and is optimal for the -horizon problem . b. 
let consider real numbers satisfying and defined in for then the policy is optimal for the infinite - horizon problem with the discount factor furthermore , a sequence of pairs considered in statement ( i ) is bounded , and , if is a limit point of this sequence , then the policy is optimal for the infinite - horizon problem .[ thm : ss policy cond holds ] if condition [ cond : h(x ) ] does not hold , then finite - horizon optimal policies may not exist .it is shown veinott and wagner for discrete demand distributions and by feinberg and lewis for arbitrary demand distributions that finite - horizon optimal policies exist if certain non - zero terminal costs are assumed .there exists such that an policy is optimal for the infinite - horizon expected total discounted cost criterion with a discount factor where the real numbers satisfy and are defined by with furthermore , a sequence of pairs where the real numbers satisfy and are defined in with is bounded , and , for each its limit point the policy is optimal for the infinite - horizon problem with the discount factor [ thm : ss policy cond not hold : known ]this section describes the structure of finite - horizon and infinite - horizon optimal policies . unlike the previous section, it covers the situations when policies are not optimal .define where is introduced in . since then [ lm : k - h condition ] condition [ cond : h(x ) ] holds if and only if which is equivalent to the inequalities and are equivalent because of since it is sufficient to prove that condition [ cond : h(x ) ] does not hold if and only if . ] in view of .now , let us prove that ] since is a convex function , then the function is convex for all and let and where the infimum of an empty set is since the function is non - negative , then the function is non - decreasing in for all and therefore , ( i ) is non - increasing in that is , if and ( ii ) in view of the definition of for each the following theorem provides the complete description of optimal finite - horizon policies for all discount factors [ thm : general results ] let consider defined in . if ( that is , condition [ cond : h(x ) ] is satisfied ) , then the statement of theorem [ thm : ss policy cond holds](i ) holds .if then the following statements hold for the finite - horizon problem with the discount factor a. if , ] for all and b. for each there exists a number such that for all and }{y - z } < -k_h + \e < 0 .\label{eqn : h(x):2}\ ] ] \(i ) consider the function defined in for all satisfying according to lemma [ lm : k - h condition ] , since condition [ cond : h(x ) ] does not hold , then . ] for all and \(ii ) since and is non - decreasing when then for each there exists such that therefore , for all where the first two inequalities follow from the monotonicity properties of stated in the first paragraph of the proof .as follows from , since the function is non - decreasing in when for all since _ a.s ._ for all then implies that _ a.s ._ for all which yields < -k_h + \e . 
] then and , in addition , the function is convex and according to lemma [ lm : k - h condition ] , since condition [ cond : h(x ) ] does not hold , then \(i ) if then there exists such that and let then according to lemma [ lm:1](ii ) , for there exists such that holds for all and therefore , for all and }{y - z } < ( -k_h + \e_k ) \sum_{i=0}^{t}\a^i < 0 .\label{eqn : limit of f}\end{aligned}\ ] ] if then if then therefore , for all there exists a natural number such that thus , and imply that there exist satisfying such that }{y - z } < -\bar{c } , ] according to lemma [ lm:1](ii ) , for there exists such that for all and }{y - z } \geq \sum_{i=0}^{+\infty}\a^i \frac{\e[h(y-\bs_{i+1 } ) - h(z-\bs_{i+1})]}{y - z } \\\geq & -k_h \sum_{i=0}^{+\infty}\a^i = \frac{-k_h}{1-\a } \geq \frac{-k_h}{1-\a^ * } = -\bar{c } , \end{split } \label{eqn : small df lower bd on slope of f}\ ] ] where the first two inequalities follow from , the first equality and the last inequality are straightforward , and the last equality follows from the definition of in view of , equivalent to for all and therefore , for all which implies that according to feinberg et al .* theorem 2 ) , as and therefore as therefore , in view of lemma [ lm : g = f](ii ) , , \label{eqn : ga = f infty}\end{aligned}\ ] ] which implies that the function is convex .observe that = \bar{c}\e[d]\frac{\a(2-\a)}{{(1-\a)}^2 }< + \infty, ] lemma [ lm : g = f](ii ) and lemma [ pro:1](ii ) imply that are convex functions and therefore , in view of lemma [ lm:2 ] , a policy that never orders at steps is optimal .\(ii ) suppose lemmas [ lm:2 ] , [ lm : g = f](i ) , and [ pro:1](i ) imply that ( a ) if then a policy that never orders at steps is optimal , and ( b ) if then the action is always optimal at steps and furthermore and in view of lemma [ lm:3 ] , the functions are -convex and these properties of the functions imply the optimality of policies at steps described in statement ( ii - b ) ; see , e.g , the paragraph following the proof of proposition 6.7 in feinberg and lewis .consider an infinite - horizon problem and the parameter defined in .if then lemma [ lm : k - h condition ] implies that condition [ cond : h(x ) ] holds .therefore , statement ( i ) follows from theorem [ thm : ss policy cond holds](ii ) . on the other hand , if then lemma [ lm : k - h condition ] implies that condition [ cond : h(x ) ] does not hold .\(i ) suppose lemma [ lm : g = f](i ) and lemma [ pro:1](i ) imply that and therefore , according to lemma [ lm:32 ] , the function is -convex and and this implies statement ( i ) .\(ii ) suppose . ] the following discontinuous function is -convex . to verify -convexity ,observe that this function is convex on and let and if then if then the following theorem describes the continuity of value functions for finite - horizon inventory control problems considered in this paper .the continuity of finite - horizon value functions is proved by induction . from theorem [ thm : general results ] , we know that either policy or a policy that does not order is optimal at epoch we prove that under these two cases the value function is continuous if is a continuous function .we prove by induction that the functions and are continuous .let then and , ] is continuous on since is arbitrary , this function is continuous on formula can be rewritten as + \a e[g_{t+1,\a } ( x - d ) ] + \a\bar{c}\e[d],\ ] ] where all the summands are continuous functions . 
in particular , ] is continuous too .thus , the function is continuous .hence , the induction arguments imply that and are continuous functions .the following theorem describes the continuity of value functions for infinite - horizon inventory control problems considered in this paper . according to feinberg et al .* theorem 2 ) , we know that as for all we further prove that such convergence is uniform , and the continuity of finite - horizon value functions implies the continuity of infinite - horizon value functions .[ thm : cont large df ] consider an infinite - horizon inventory control problem with expected total discounted cost criterion .the functions and are continuous on for all consider defined in . according to theorem [ thm : general results ] , if , ] is non - negative and convex on and the last one holds because and are real numbers and <+\infty, ] is convex on and hence it is continuous , and converges weakly to as and the function is continuous and bounded . the statements of theorems [ thm : ss policy cond holds ] , [ thm : general results ] , and [ thm : general results_ig ] remain correct , if the second sentence of definition [ def : ss policy ] is modified in the following way : a policy is called an policy at step if it orders up to the level if does not order , if and either does not order or orders up to the level if the proofs of theorems [ thm : ss policy cond holds ] , [ thm : general results ] , and [ thm : general results_ig ] are based on the fact that if and if where and for a finite - horizon problem and and for the infinite - horizon problem . since the function is continuous in both cases , we have that thus both actions are optimal at the state [ rm : corollary ss policy ] corollary [ thm : ordering at sa ] also follows from the properties of the sets of optimal decisions \}, ] for infinite - horizon problem , where the one - step cost function is defined in . the solution multifunctions and are compact - valued ( see feinberg et al . ,* theorem 2 ) or feinberg and lewis ) and upper semi - continuous ( this is true in view of feinberg and kasyanov ( * ? ? ?* statement b3 ) because the value functions are continuous and the optimality operators take infimums of inf - compact functions ) .since upper semi - continuous , compact - valued set - valued functions are closed ( nikaido ( * ? ? ?* lemma 4.4 ) ) , the graphs of the solution multifunctions are closed .since for all then similarly , d. l. iglehart . dynamic programming and stationary analysis of inventory roblems . in _ office of naval research monographs on mathematical methods in logistics _ ( h. scarf , d. gilford , and m. shelly , eds . ) , pp . 131 , stanford university press , stanford , ca , 1963 .the optimality of ( s , s ) policies in the dynamic inventory problem ._ mathematical methods in the social sciences _ ( k. arrow , s. karlin , and p. suppes , eds . ) , stanford university press , stanford , ca , 1960
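As a numerical companion to the article above (placed here, after its reference list, at the nearest clean break in this dump): the finite-horizon value functions satisfy a recursion built from ( [ eqn : vna ] )-( [ eqn : gna ] ), namely v_{t+1}(x) = min over a >= 0 of [ K 1{a>0} + c a + E h(x+a-D) + alpha E v_t(x+a-D) ], which is reconstructed here from the stated dynamics and one-step cost and should be read as an assumption rather than a quotation. The sketch below runs this recursion for a discrete-demand instance on a bounded grid (all constants assumed) and reports the threshold and order-up-to level of the computed rule at each stage, which is where the three possible policy structures of the theorems can be observed numerically.

```python
# Finite-horizon value iteration for the discounted inventory model with fixed
# ordering cost, on a bounded integer grid with discrete demand.  At each stage it
# reports the threshold and order-up-to level of the computed rule.  Grid bounds,
# demand law, costs and the recursion layout are assumptions of this sketch.
import numpy as np

K, c, alpha = 8.0, 1.0, 0.9
h_plus, h_minus = 1.0, 3.0
demand_vals = np.arange(0, 9)
demand_prob = np.full(9, 1.0 / 9.0)                  # assumed uniform demand on 0..8

grid = np.arange(-40, 61)                            # inventory levels x
y = np.arange(grid.min(), grid.max() + 61)           # reachable post-order levels

def h(z):
    return h_plus * np.maximum(z, 0) + h_minus * np.maximum(-z, 0)

def expect(f_vals, levels):
    """E[f(levels - D)], with levels - D clipped to the grid."""
    idx = np.clip(np.searchsorted(grid, np.subtract.outer(levels, demand_vals)),
                  0, len(grid) - 1)
    return (f_vals[idx] * demand_prob).sum(axis=1)

v = np.zeros(len(grid))                              # zero terminal cost
for t in range(1, 11):
    G = c * y + (h(np.subtract.outer(y, demand_vals)) * demand_prob).sum(axis=1) \
        + alpha * expect(v, y)
    v_new = np.empty_like(v)
    up_to = np.empty(len(grid), dtype=int)
    for i, x in enumerate(grid):
        feas = y >= x                                # order quantity a = y - x >= 0
        cand = np.where(y[feas] > x, K, 0.0) + G[feas]
        j = int(np.argmin(cand))
        up_to[i], v_new[i] = y[feas][j], cand[j] - c * x
    v = v_new
    orders = up_to > grid
    if orders.any():                                 # threshold / order-up-to of the rule
        print(f"t = {t}: orders when x < {grid[orders].max() + 1}, "
              f"order-up-to level about {up_to[orders].max()}")
    else:
        print(f"t = {t}: never orders")
```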
|
_ this paper describes the structure of optimal policies for discounted periodic - review single - commodity total - cost inventory control problems with fixed ordering costs for finite and infinite horizons . there are known conditions in the literature for optimality of policies for finite - horizon problems and the optimality of policies for infinite - horizon problems . the results of this paper cover the situations when such assumptions may not hold . this paper describes a parameter , which , together with the value of the discount factor and the horizon length , defines the structure of an optimal policy . for the infinite horizon , depending on the values of this parameter and the discount factor , an optimal policy either is an policy or never orders inventory . for a finite horizon , depending on the values of this parameter , the discount factor , and the horizon length , there are three possible structures of an optimal policy : ( i ) it is an policy , ( ii ) it is an policy at earlier stages and then does not order inventory , or ( iii ) it never orders inventory . the paper also establishes continuity of optimal value functions and describes alternative optimal actions at states and _ _ * keywords : * inventory control ; finite horizon , infinite horizon ; optimal policy , policy . _
|
singular integral equations have played a significant role in the study of crack propagation in elastic media since their introduction by and have garnered much scientific attention .they have been used in the analysis of crack problems in complex domains containing an arbitrary number of wedges and layers separated by imperfect interfaces ; the resulting singular integral equations with fixed point singularities have been analysed by , based on the theory of linear singular operators .more recently , singular integral equations have been applied to problems involving interfacial cracks in both isotropic and anisotropic bimaterials .this paper extends the singular integral equation approach to an anisotropic bimaterial containing an imperfect interface .interfacial problems concerning a semi - infinite crack along a perfect interface in an anisotropic bimaterial have been considered in through the use of the formalisms proposed by and .expressions were found for the stress intensity factors at the crack tip under the restriction of symmetric loading on the crack faces . using weight function techniques introduced by and developed further by , an approach was developed to find stress intensity factors for an interfacial crack along a perfect interface under asymmetric loading for both the static and dynamic cases , see and respectively .more widely , weight functions are well developed in the literature for a wide range of fractured body geometries and allow for the evaluation of important constants that may act as fracture criteria .for instance , weight functions have been obtained for a corner crack in a plate of finite thickness , a 3d semi - infinite crack in an infinite body and a crack lying perpendicular to the interface in a thin surface layer .imperfect interfaces provide a more physically realistic interpretation of a bimaterial than a perfect one , accounting for the fact that the interface between two materials is rarely sharp . took this into account by suggesting the interface be replaced with a thin strip of finite thickness , which provided the bonding material occupying the strip is sufficiently soft may be replaced by so - called imperfect interface transmission conditions .these allow for an interfacial displacement jump in direct proportion to the traction , which is itself continuous across the interface .such transmission conditions alter physical fields near the crack tip significantly ; for instance the usual perfect interface square root stress singularity is no longer present and is instead replaced by a logarithmic singularity , although tractions remain bounded along the interface .more general imperfect interface transmission conditions were derived by which considered a thin curved isotropic layer of constant thickness , while presented a general interface model for a 3d arbitrarily curved thin anisotropic interphase between two anisotropic solids .weight function techniques have adapted to imperfect interface settings to quantify crack tip asymptotics in thin domains , analyse problems of waves in thin waveguides and conduct perturbation analysis for large imperfectly bound bimaterials containing small defects ; the absence of the square root singularity means that the weight functions are not used to find stress intensity factors , but instead yield asymptotic constants which describe the crack tip opening displacement .this quantity was proposed for use in fracture criteria by and and later justified rigorously by , and . 
despite their great utility ,the derivation of such weight functions is often not straightforward and so the approach deployed in the remainder of this paper efficiently utilises existing relationships between known weight functions without the need to derive further expressions .the paper is structured as follows : section 2 introduces the problem geometry and model for the imperfect interface . in section 3 , previously found results used in the derivation of the singular integral equations are discussed .these include the weight functions derived using the method of and the betti formula which can be used to relate the weight functions to the physical fields along both the crack and imperfect interface .section 4 concentrates on solving the out - of - plane ( mode iii ) problem .singular integral equations are derived and used to obtain the displacement jump across both the crack and interface for a number of orthotropic bimaterials with varying levels of interface imperfection .finite element methods for the same physical problems are also used to obtain the same results and then a comparison is made between the results obtained from the two opposing methods .the in - plane problem is considered in section 5 , where singular integral equations are obtained for the mode i and mode ii tractions and displacements and some computations are performed .we consider an infinite anisotropic bimaterial with an imperfect interface and a semi - infinite interfacial crack respectively lying along the positive and negative semi - axes .the materials above and below the -axis will be denoted materials i and ii respectively .( 2,0 ) ; ( -2,0.1 ) ( 0,0 ) ; ( -2,-0.1 ) ( 0,0 ) ; ( 0 , 1.3 ) ( 1.3 , 0 ) ; at ( 0,1.3 ) ; at ( 1.3,0 ) ; at ( 1.5,0.7 ) i ; at ( 1.5,-0.7 ) ii ; ( -1.5,0.075 ) to[out=90,in=90 ] ( -0.5,0.025 ) ; ( -1,0.05 ) ( -1,0.35 ) ; ( -1.25,0.0625 ) ( -1.25,0.29 ) ; ( -0.75,0.0375 ) ( -0.75,0.28 ) ; at ( -0.5,0.2 ) ; ( -1.8,-0.09 ) to[out=-90,in=-90 ] ( -0.8,-0.04 ) ; ( -1.3,-0.065 ) ( -1.3,-0.35 ) ; ( -1.55,-0.0775 ) ( -1.55,-0.29 ) ; ( -1.05,-0.0525 ) ( -1.05,-0.28 ) ; at(-0.8,-0.2 ) ; the imperfect interface transmission conditions for are given by where is the traction vector and is the displacement vector .the matrix quantifies the extent of imperfection of the interface , with corresponding to the perfect interface . 
for an anisotropic bonding material, has the following structure : the loading on the crack faces is considered known and given by the geometry considered is illustrated in figure [ geometry ] .the only restriction imposed on is that they must be self - balanced ; note in particular that this allows for discontinuous and/or asymmetric loadings .the symmetric and skew - symmetric parts of the loading are given by and respectively , where the notation and respectively denote the average and jump of the argument function : weight function used is the solution of the problem with the crack occupying the positive axis with square - root singular displacement at the crack tip , as given in .the transmission conditions for the weight functions for are given as where is the singular displacement field and is the corresponding traction field .note in particular that condition ( [ eq : contofu ] ) corresponds to a perfect interface weight function problem in contrast to the imperfect interface problem being physically considered .it was shown in that the following equations hold for the fourier transforms of the symmetric and skew - symmetric parts of the weight function : where and . here , and are the surface admittance tensors of materials i and ii respectively , superscript denotes complex conjugation and bars denote fourier transforms with respect to defined as (\xi)=\int\limits_{-\infty}^\infty f(x_1)e^{i\xi x_1}\mathrm{d}x_1.\ ] ] the matrices and have the form the entries of these matrices can be expressed in terms of the components of the material compliance tensors , .explicit expressions for and for orthotropic bimaterials are given in the appendix .in this section , the betti formula is extended to the case of general asymmetrical loading applied at the crack surfaces .the betti formula is used in order to relate the physical solution to the weight function , which is a special singular solution to the homogeneous problem with traction - free crack faces . applying the betti formula to a semi - circular domain in the half - plane , whose straight boundary is the line , and whose radius , the following equation is obtained where is a rotation matrix given by another equation can be derived by applying the betti formula to a semi - circular domain in the half - plane and taking the limit , , which after some manipulation in the spirit of for example , yields where the convolutions are taken with respect to , that is and superscripts denote the restriction of the preceding function to the respective semi--axis .applying fourier transforms then gives note that the exact nature of the weight functions and used in this analysis have not been specified at this stage and so identity ( [ finalbetti ] ) is valid for a large class of weight functions . in particular , this allows for the use of perfect interface weight functions for the imperfect interface physical setting . in the case of perfect interface physical solution and weight functions ,the corresponding analysis has been done in , for isotropic and anisotropic materials respectively , while for imperfect interfaces joining isotropic bodies , details can be found in .we now seek boundary integral equations relating the mode iii interfacial traction and displacement jump over the crack in the anisotropic bimaterial .this will utilise the betti identity in order to relate the physical solution with the perfect interface weight functions . 
considering only the mode iii components ofthe following equation holds : where the subscripts have been removed for notational brevity .splitting into the sum of and also separating into the sum of gives note that if imperfect interface weight functions are used , then the second and third terms of the left hand side of ( [ nocancel ] ) immediately due to the transmission conditions .however , using perfect interface weight functions , this is not true . using the transmission conditions , and yields from equations and the following relationships hold for the mode iii components of the weight functions : when combined with equationthe following relationship is obtained : where applying the inverse fourier transform to equation for the two cases , and , the following relationships are obtained : =\mathcal{f}^{-1}_{x_1<0}\left[(1+\kappa a(\xi))\bar{{\langle p \rangle}}\right ] + \frac{\delta_3}{2}\mathcal{f}^{-1}_{x_1<0}\left[(1+\kappa a(\xi))\bar{{{\llbracket}p { \rrbracket}}}\right];\]] - \mathcal{f}^{-1}_{x_1>0}\left[(1+\kappa a(\xi))\bar{{\langle p \rangle}}\right ] - \frac{\delta_3}{2}\mathcal{f}^{-1}_{x_1>0}\left[(1+\kappa a(\xi))\bar{{{\llbracket}p { \rrbracket}}}\right].\ ] ] to calculate these inversions the following relationships are used : =\frac{1}{\pi\kappa}\left(s_{{\mathcal{h}_{33}}}\ast f'\right)(x_1);\]]=-\frac{{\mathcal{h}_{33}}}{\pi}\left(t_{{\mathcal{h}_{33}}}\ast f\right)(x_1),\ ] ] where and si and ci are the sine and cosine integral functions respectively , given by these functions have the same properties as their counterparts from the isotropic case considered by , but with different constants . in particular , the function behaves as while has behaviour of the form we introduce convolution operators and , as well as projection operators : in order to rewrite the identities and as where are singular operators and are compact .the second term on the left hand side of and right hand side of appear as a result of the discontinuity of the derivative of at .the integral identities and can be formulated in alternative ways , which depending upon the specific problem parameters and loadings , can aid the ease with which computations may be performed . combining equations , and yields the auxiliary relationship using this relationship , equations and can be rewritten as follows : it is also possible to write these equations using only the operator : or solely the operator : each of the four formulations have advantages for numerical computations depending on the mechanical parameters of the problem and which quantities are known or unknown .the merits of alternative formulations for the analogous isotropic case have been discussed in detail in and we refer the reader to that paper for further discussion . in this section , the integral identities found previously will be used to calculate the jump in displacement over the crack and imperfect interface between two orthotropic materials .results for finite element simulations using comsol will also be presented and compared to the results using the integral identity approach derived in the previous subsection . 
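The kernels entering the mode III identities above are built from the sine and cosine integral functions si and ci; their exact combinations are not legible in this dump, so the sketch below only illustrates the building blocks: it evaluates si and ci with scipy and checks the logarithmic behaviour of ci near the origin, which is the source of the logarithmic kernel singularity noted in the conclusions of this article.

```python
# Evaluate the sine and cosine integrals si(x), ci(x) that the mode III kernels are
# built from, and check the logarithmic behaviour of ci near the origin:
# ci(x) ~ euler_gamma + log(x) as x -> 0+, while si(x) ~ x.
import numpy as np
from scipy.special import sici

for x in [1e-4, 1e-3, 1e-2, 1e-1]:
    si, ci = sici(x)
    print(f"x = {x:7.1e}: si = {si:+.6e}  ci = {ci:+.6e}  "
          f"ci - (log x + euler_gamma) = {ci - (np.log(x) + np.euler_gamma):+.2e}")
```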
for orthotropic materials , the material parameters and given in terms of the components of the material compliance tensor , , in the appendix .it is possible to express and in terms of the shear moduli , of the material : in our computations , the same orthotropic material will be used as material i and ii .however , the axes corresponding to each axis of symmetry of the material in the lower half - plane is altered .the parameters used for the computations presented are shown in table [ mutable ] .the values of are given in table [ mutable ] to illustrate that the materials considered are the same but differently oriented .henceforth , the material above the crack ( i ) will be material a from table [ mutable ] ..material properties [ cols="<,^,^,^",options="header " , ] , title="fig:"],title="fig : " ]singular integral equations have been derived which relate the loading on crack faces to the consequent crack opening displacement and interfacial tractions for a semi - infinite crack situated along a soft anisotropic imperfect interface for an anisotropic bimaterial .the derivation made efficient use of perfect interface weight functions applied to an imperfect interface physical problem ; this did not require derivation of new weight functions . as in the previously studied analogous isotropic problem, the imperfect interface s presence causes a logarithmic singularity in the kernel of the integral operator .alternative formulations have been presented for the mode iii case and used to perform computations for orthotropic materials , which display a good degree of accuracy when compared against finite element simulations .the authors would like to thank prof .gennady mishuris for fruitful discussions .lp and az acknowledge support from the fp7 iapp project ` intercer2 ' , project reference piap - ga-2011 - 286110-intercer2 .av acknowledges support from the fp7 iapp project ` hydrofrac ' , project reference piap - ga-2009 - 251475-hydrofrac .36 natexlab#1#1url # 1`#1`urlprefix antipov , y. a. , avila - pozos , o. , kolaczkowski , s. t. , movchan , a. b. , 2001. mathematical model of delamination cracks on imperfect interfaces .j. solids struct .38(36 - 37 ) , 66656697 .atkinson , c. , 1977 . on stress singularities and interfaces in linearelastic fracture mechanics .j. fracture 13 , 807820 .benveniste , y. , 2006 . a general interface model for a three - dimensional curved thin anisotropic interphase between two anisotropic media .solids 54(4 ) , 708734 .benveniste , y. , miloh , t. , 2001 .imperfect soft and stiff interfaces in two - dimensional elasticity .materials 33 , 309323 .bueckner , h. f. , 1985 .weight functions and fundamental fields for the penny - shaped and the half plane crack in three - space .j. solids struct .23 , 5793 .cottrell , a. h. , 1962 .theoretical aspects of radiation damage and brittle fracture in steel pressure vessels .iron steel institute special report 69 , 281296 .duduchava , r. , 1979 .integral equations with fixed singularities .teubner , leipzig .fett , t. , diegele , e. , munz , d. , rizzi , g. , 1996 .weight functions for edge cracks in thin surface layers .81 ( 3 ) , 205215 . gohberg , i. c. , krein , m. g. , 1960. systems of integral equations on a half line with kernels depending on the difference of arguments ( english translation ) .soc . transl .14 , 217287 .itskov , m. , aksel , n. , 2002 .elastic constants and their admissible values for incompressible and slightly compressible anisotropic materials .acta mechanica .157 , 8196 .kanninen , m. 
f. , rybicki , e. f. , stonesifer , r. b. , broek , d. , rosenfiels , a. r. , marschall , c. w. , hahn , g. t. , 1979 .elastic - plastic fracture mechanics for two dimensional stable crack growth and instability problems .elastic - plastic fracture astm stp 668 , 121150 .kassir , m. k. , sih , g. c. , 1973 .application of papkovich - neuber potentials to a crack problem .j. solids struct . 9 , 643654 .lekhnitskii , s. g. , 1963 .theory of elasticity of an anisotropic body .mir , moscow .lenci , s. , 2001 .analysis of a crack at a weak interface .108 , 275290 .mishuris , g. , 2001 . interface crack and nonideal interface concept ( mode iii ) .107(3 ) , 279296 .mishuris , g. , kuhn , g. , 2001 .asymptotic behaviour of the elastic solution near the tip of a crack situated at a nonideal interface .zeitschrift fur angewandte mathematik und mechanik 81(12 ) , 811826 .mishuris , g. , piccolroaz , a. , vellender , a. , 2014 .boundary integral formulation for cracks at imperfect interfaces .q. j. mech .doi:10.1093/qjmam / hbu010 ( available online ) .mishuris , g. s. , 1997 .2-d boundary value problems of thermoelasticity in a multi - wedge multi - layered region .part 1 . sweep method .49(6 ) , 11031134 .mishuris , g. s. , 1997 .2-d boundary value problems of thermoelasticity in a multi - wedge multi - layered region .systems of integral equations . arch .49(6 ) , 11351165 .morini , l. , piccolroaz , a. , mishuris , g. , radi , e. , 2013 .integral identities for a semi - infinite interfacial crack in anisotropic elastic bimaterials .j. solids struct .50 , 14371448 .morini , l. , radi , e. , movchan , a. b. , movchan , n. v. , 2013 .stroh formalism in analysis of skew - symmetric and symmetric weight functions for interfacial cracks .solids 18 , 135152 .muskhelishvili , n. i. , 1963 .some basic problems of the mathematical theory of elasticity .groningen : p.noordhoff , netherlands .piccolroaz , a. , mishuris , g. , 2013 .integral identities for a semi - infinite interfacial crack in 2d and 3d elasticity .j. elasticity 110 , 117140 .piccolroaz , a. , mishuris , g. , movchan , a. b. , 2007 .evaluation of the lazarus - leblond constants in the asymptotic model for the interfacial wavy crack .solids 55 , 15751600 .piccolroaz , a. , mishuris , g. , movchan , a. b. , 2009 .symmetric and skew - symmetric weight functions in 2d perturbation models for semi - infinite interfacial cracks .solids 57 , 16571682 .pryce , l. , morini , l. , mishuris , g. , 2013 . weight function approach to study a crack propagating along a bimaterial interface under arbitrary loading in anisotropic solids .jomms 8 , 479500 .rice , j. r. , sorenson , e. p. , 1978 . continuing crack tip deformation and fracture for plane strain crack growth in elastic - plastic solids .solids 26 , 163186 .shih , c. f. , de lorenzi , h. g. , andrews , w. r. , 1979 .studies on crack initiation and stable crack growth .elastic - plastic fracture astm stp 668 , 65120 .sneddon , i. n. , 1972 .the use of integral transforms .mcgraw - hill , new york .stroh , a. n. , 1962 .steady state problems in anisotropic elasticity .phys 41 , 77103 .suo , z. , 1990 .singularities , interfaces and cracks in dissimilar anisotropic media .lond 427 , 331358 .vellender , a. , mishuris , g. s. , 2012 .eigenfrequency correction of bloch - floquet waves in a thin periodic bi - material strip with cracks lying on perfect and imperfect interfaces .wave motion 49(2 ) , 258270 .vellender , a. , mishuris , g. s. , movchan , a. b. 
, 2011 .weight function in a bimaterial strip containing an interfacial crack and an imperfect interface .application to a bloch - floquet analysis in a thin inhomogeneous structure with cracks .multiscale model .9(4 ) , 13271349 .vellender , a. , mishuris , g. s. , piccolroaz , a. , 2013 .perturbation analysis for an imperfect interface crack problem using weight function techniques .j. solids struct .50(24 ) , 40984107 . wells , a. a. , 1961 .unstable crack propagation in metals : cleavage and fracture .proceedings of the crack propagation symposium , cranfield , 210230 .willis , j. r. , movchan , a. b. , 1995 .dynamic weight function for a moving crack .i. mode i loading .j. mech . phys .solids , 319341 .yu , h. h. , suo , z. , 2000 .intersonic crack growth on an interface .456 , 223 - 246 .zheng , x. j. , glinka , g. , dubey , r. n. , 1996 .stress intensity factors and weight functions for a corner crack in a nite thickness plate .54(1 ) , 4961 .the matrices and have the form for orthotropic materials it is possible to obtain explicit expressions for the these matrices in terms of the components of the material compliance tensors .the out - of - plane components are given by + \left [ \sqrt{s_{44}s_{55}}\right]_{ii},\quad \delta_3 = \frac{\left [ \sqrt{s_{44}s_{55}}\right]_i - \left [ \sqrt{s_{44}s_{55}}\right]_{ii}}{h_{33}}.\ ] ] the in - plane components of can be found in and are given as + \left [ 2n\lambda^{1/4}\sqrt{s_{11}s_{22}}\right]_{ii},\ ] ] + \left [ 2n\lambda^{-1/4}\sqrt{s_{11}s_{22}}\right]_{ii},\ ] ] {ii } - \left [ s_{12}+\sqrt{s_{11}s_{22}}\right]_{i}}{\sqrt{h_{11}h_{22}}},\ ] ] where the in - plane components of were also given in : - \left [ 2n\lambda^{1/4}\sqrt{s_{11}s_{22}}\right]_{ii}}{h_{11}},\ ] ] - \left [ 2n\lambda^{-1/4}\sqrt{s_{11}s_{22}}\right]_{ii}}{h_{22}},\ ] ] + \left[s_{12}+\sqrt{s_{11}s_{22}}\right]_{ii}}{\sqrt{h_{11}h_{22}}}.\ ] ]matrices , and have the following form where the denominator is defined as and the elements , , are given by method outlined in is used in order to perform the fourier inversion of the matrices , and .the denominator defined in ( [ denominator ] ) is factorised in the following manner where the typical term to invert is of the form the function has the following property therefore , the fourier inversion can be obtained as = \frac{1}{\pi } \mathrm{re } \int_0^\infty f(\xi ) e^{-ix_1\xi } \mathrm{d}\xi = \frac{1}{\pi } \int_0^\infty \mathrm{re}[f(\xi ) ] \cos(x_1\xi ) \mathrm{d}\xi + \frac{1}{\pi } \int_0^\infty \mathrm{im}[f(\xi ) ] \sin(x_1\xi ) \mathrm{d}\xi,\ ] ] where for = \frac{f_{r } + f_{r}^\dag\xi}{d } = \sum_{j = 1}^2 \frac{f_{r}^{(j)}}{d_2(\xi_2 - \xi_1)(\xi + \xi_j)},\ ] ] = \frac{f_{i } + f_{i}^\dag\xi}{d } = \sum_{j = 1}^2 \frac{f_{i}^{(j)}}{d_2(\xi_2 - \xi_1)(\xi + \xi_j)},\ ] ] and the following formulae can now be used \cos(x_1\xi ) \mathrm{d}\xi = \sum_{j = 1}^{2 } \frac{f_r^{(j)}}{d_2(\xi_2 - \xi_1 ) } \int_0^\infty \frac{\cos(x_1\xi)}{\xi + \xi_j } \mathrm{d}\xi = -\frac{1}{d_2(\xi_2 - \xi_1 ) } \sum_{j = 1}^{2 } f_r^{(j ) } t_{\xi_j}(x_1),\ ] ] \sin(x_1\xi ) \mathrm{d}\xi = \sum_{j = 1}^{2 } \frac{f_i^{(j)}}{d_2(\xi_2 - \xi_1 ) } \int_0^\infty \frac{\sin(x_1\xi)}{\xi + \xi_j } \mathrm{d}\xi = -\frac{1}{d_2(\xi_2 - \xi_1 ) } \sum_{j = 1}^{2 } f_i^{(j ) } s_{\xi_j}(x),\ ] ] where functions and are defined as in and , respectively .finally the fourier inversion of the general term as given as = -\frac{1}{\pi d_2(\xi_2 - \xi_1 ) } \left\ { \sum_{j = 1}^{2 } f_r^{(j ) } t_{\xi_j}(x_1 ) + \sum_{j = 1}^{2 } 
f_i^{(j ) } s_{\xi_j}(x_1 ) \right\}.\ ] ] for , can be written as where the fourier inverse of the matrix is given by = -\frac{1}{2\pi d_2(\xi_2 - \xi_1 ) } \left\ { \sum_{j = 1}^{2 } \mathbf{a}_r^{(j ) } t_{\xi_j}(x_1 ) + \sum_{j = 1}^{2 } \mathbf{a}_i^{(j ) } s_{\xi_j}(x_1 ) \right\}.\ ] ] for can be written as where the fourier inverse of the matrix is then = -\frac{1}{\pi d_2(\xi_2 - \xi_1 ) } \left\ { \sum_{j = 1}^{2 } \mathbf{b}_r^{(j ) } t_{\xi_j}(x_1 ) + \sum_{j = 1}^{2 } \mathbf{b}_i^{(j ) } s_{\xi_j}(x_1 ) \right\}.\ ] ] for can be written as where the fourier inverse of the matrix is then = -\frac{1}{\pi d_2(\xi_2 - \xi_1 ) } \left\ { \sum_{j = 1}^{2 } \mathbf{c}_r^{(j ) } t_{\xi_j}(x_1 ) + \sum_{j = 1}^{2 } \mathbf{c}_i^{(j ) } s_{\xi_j}(x_1 ) \right\}.\ ] ]
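The out-of-plane constants of the appendix read h33 = [ sqrt(s44 s55) ]_I + [ sqrt(s44 s55) ]_II and delta3 = ( [ sqrt(s44 s55) ]_I - [ sqrt(s44 s55) ]_II ) / h33. A small helper computing them from the compliance entries of the two half-planes is sketched below; the numerical values are placeholders for table [ mutable ], and the identification s44 = 1/mu_23, s55 = 1/mu_13 is the standard orthotropic one, stated here as an assumption because the corresponding formula in the text is not legible.

```python
# Out-of-plane bimaterial constants for the mode III problem, following the appendix:
# h33 = [sqrt(s44*s55)]_I + [sqrt(s44*s55)]_II,
# delta3 = ([sqrt(s44*s55)]_I - [sqrt(s44*s55)]_II) / h33.
# Compliance values below are placeholders; s44 = 1/mu_23, s55 = 1/mu_13 is the
# standard orthotropic identification, assumed here.
import math

def out_of_plane_constants(s44_1, s55_1, s44_2, s55_2):
    g1 = math.sqrt(s44_1 * s55_1)      # material I (above the crack)
    g2 = math.sqrt(s44_2 * s55_2)      # material II (below the crack)
    h33 = g1 + g2
    return h33, (g1 - g2) / h33

h33, delta3 = out_of_plane_constants(1 / 2.0, 1 / 4.0,       # material I  (mu23=2, mu13=4)
                                     1 / 1.5, 1 / 3.0)       # material II (mu23=1.5, mu13=3)
print(h33, delta3)
```

A vanishing delta3 corresponds to the same out-of-plane shear response on both sides of the interface, in which case the second term of the relevant identities drops out.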
|
we study a crack lying along an imperfect interface in an anisotropic bimaterial . a method is devised where known weight functions for the perfect interface problem are used to obtain singular integral equations relating the tractions and displacements for both the in - plane and out - of - plane fields . the integral equations for the out - of - plane problem are solved numerically for orthotropic bimaterials with differing orientations of anisotropy and for different extents of interfacial imperfection . these results are then compared with finite element computations . singular integral equations , anisotropic bimaterial , imperfect interface , crack , weight function
|
understanding the behaviour of the relative separation between two fluid particles in a generic flow is a difficult task of paramount importance . the most intuitive application of this concept is represented by the passive scalar problem , where the eulerian description in terms of fields has its lagrangian counterpart just in the study of fluid trajectories .+ while in the one - dimensional situation general considerations and results can be carried out ( at least for smooth flows ) , only few cases are known to be solvable in higher dimension ; namely , a short - correlated strain ( fully solvable ) , a 2d slow strain and the large - dimensionality case . in this paperwe describe another situation which allows exact analytical calculations : the telegraph - noise model for the velocity gradient in the batchelor regime .this means that the relative separation between two fluid particles , , evolves according to and the lagrangian strain ( scalar or matrix ) is assumed as properly made up of a telegraph noise .we shall study both the 1d ( compressible ) and the incompressible 2d cases .the telegraph noise is a stationary random process , , which satisfies the markov property and only takes two values , and .the probability , per unit time , of passing from the latter state to the former ( or viceversa ) is given by ( , respectively ) .in what follows , we shall consider some special cases , for which simple formulas hold .+ ( i ) if , then the process itself is stochastic but its square is deterministic , keeping the constant value ( with ) .we shall denote such processes with a star , e.g. .+ ( ii ) if the average vanishes ( with ) , then the autocorrelation takes the form . for any analytical functional ] can only belong ( because of ( [ sigma ] ) ) to the interval ] is a constant in the group of its first components ( corresponding to the average of the coordinates ) and decreases exponentially in the remaining three groups . +the equations for have the same operatorial structure as in ( [ eqdiff ] ) but , instead of being homogeneous , are rather forced by the aforementioned quantities in a `` cross fashion '' , i.e. not by quantities of the same group but of `` nearest - neighbour '' groups .therefore , the only nondecreasing source term ( \iota}\rangle ] with , which implies that now }\rangle ] goes to a constant for , while the first components ( corresponding to the average of the coordinates ) get a linear behaviour in time . for alsothe components with behave linearly , which in turn implies a quadratic behaviour for the first components at , and so on .+ the situation is resumed in the following table , in which for every we show the degree of the leading behaviour in time of each group of components \iota}\rangle$ ] .e.g. , 0 denotes the evolution towards a finite constant , 1 a linear behaviour and the only presence of a decreasing exponential . + [cols="<,^,^,^,^,^,^,^,^",options="header " , ] + analysing the first line of this table , we argue that the expected exponential behaviour for corresponds to the summation of the series in ( [ series ] ) in the fashion \stackrel{t\to+\infty}{\sim}{\mathrm{e}}^{\mu_nt}\textrm { ( for some } k)\ ] ] one would be tempted to conclude that , for each , the largest - positive - real - part eigenvalue could be easily extracted by looking at the coefficient in ( [ ser ] ) with , corresponding to the column in the table , because for small . 
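The two-state telegraph process defined at the start of this article (values a1 and a2, switching probabilities nu1 and nu2 per unit time) is straightforward to simulate; the sketch below generates a long trajectory and compares its empirical autocovariance with an exponential decay at rate nu1 + nu2, which is the standard relaxation rate of a two-state Markov chain and is stated here as an assumption since the explicit formula in the text is not legible.

```python
# Simulate the two-state telegraph noise: values a1, a2; switching probability per
# unit time nu1 (state 1 -> 2) and nu2 (state 2 -> 1).  The empirical autocovariance
# is compared with a^2 * exp(-(nu1 + nu2) * tau), the standard two-state Markov
# relaxation (assumed formula), here with a1 = -a2 = 1 so a^2 = 1.
import numpy as np

rng = np.random.default_rng(3)
a1, a2 = 1.0, -1.0
nu1, nu2 = 0.7, 0.7                   # symmetric case: zero-mean noise
dt, n_steps = 0.01, 1_000_000

state = np.empty(n_steps)
s = a1
for i in range(n_steps):
    state[i] = s
    if s == a1 and rng.random() < nu1 * dt:
        s = a2
    elif s == a2 and rng.random() < nu2 * dt:
        s = a1

sigma = state - state.mean()
for tau in [0.0, 0.5, 1.0, 2.0]:
    k = int(tau / dt)
    c_emp = np.mean(sigma[:n_steps - k] * sigma[k:])
    print(f"tau = {tau:3.1f}: C(tau) = {c_emp:+.3f}, "
          f"exp(-(nu1 + nu2) * tau) = {np.exp(-(nu1 + nu2) * tau):+.3f}")
```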
however , this would be the case only if the eigenvalues corresponding to a fixed were definitely separate for .on the contrary , from the study of the full matrix performed e.g. for ( see also appendix [ app2 ] ) , we know that several eigenvalues vanish in this limit , and we have to identify the one which keeps the largest positive real part in this process : we are therefore facing a _ degenerate perturbation theory _ , and the quest for the asymptotic expression of ( or , in a way , for the rigorous proof that it has the expression shown in the previous subsection ) is somehow longer .+ let us then proceed as follows . from the inspection of the system ( [ eqdiff ] ) and of the following orders , it is apparent that the structure of the equations is such that the degree of the leading behaviour in time ( i.e. the quantity reported in the table ) can grow of one unity only when a quantity of the second group , of order and time degree , acts as a source term for a quantity of the first group of order , which therefore gets a degree .this means that , in order to find the leading behaviour of at small , one can completely neglect the lower two rows in the table and only consider the upper two . indeed , starting from the only non - vanishing group ( ) in the column and remembering that the forcing mechanism acts in a diagonal `` cross fashion '' ( i.e. every term can only force its upper - right and lower - right neighbours ), the components of the third and of the fourth group can give rise to linear terms in the first group only for and respectively , which are clearly subdominant with respect both to the quadratic and cubic behaviours of these same components ( and in the table ) and to the linear behaviour found for due to the forcing by the second group .back to the initial dynamics , this amounts to say that in the evolution equation for one should keep the complete form into account , while for one can neglect all source terms involving two different noises : the system thus closes at this stage , reducing the order of the corresponding matrix from the initial to . 
in other words ,only the upper - left quarter of the matrix is relevant .+ the problem can now easily be recast as a system of second - order differential equations for .indeed , we have and , keeping into account the aforementioned simplification in the equations for , \nonumber\\ & \!\!\equiv\!\!&\!\!a^2\mathsf{m}^{(n)}_{kk'}\langle r_{n , k'}\rangle\;.\nonumber\end{aligned}\ ] ] the matrix , defined by the right - hand side of ( [ secord ] ) , is a priori of order .however , it is apparent from ( [ secord ] ) that , for a fixed , the dynamics of the components of with even are independent of those with odd .therefore , as our main goal is to reconstruct objects like ( [ binom ] ) , we can just focus on subset of consisting of the lines with even , and thus reduce to a problem , simply neglecting the lines corresponding to odd .in other words , we rewrite ( [ secord ] ) as moreover , interpreting ( [ hat ] ) as where , it is easy to show that the largest - positive - real - part eigenvalue ( divided by ) that we are looking for must belong to the spectrum of the subset matrix , because the remaining eigenvalues are all .such subset matrix is tridiagonal centrosymmetric but not symmetric , which means that left and right eigenvectors differ .it is a simple task to prove that the value ( guessed for asymptotically from the previous subsection ) is actually an eigenvalue , but it is not possible to use a variational method to show that it has the largest positive real part .however , this is not needed , because the row vector built with the components of the binomial coefficient , for , happens to be the left eigenvector of corresponding to the eigenvalue . therefore , using ( [ binom ] ) and ( [ hat ] ) , one has : this implies \ ] ] as expected .+ notice that the next - leading correction can not be captured at this stage , because of the approximation introduced previously . to take it into account correctly, one should reformulate the analysis performed in this section , including also quantities like but however excluding , i.e. considering a reduced dynamics in the first components of . + it is worth mentioning that an analysis like the previous one , but applied to odd s , leads to in accordance e.g. with the exact result found with the complete dynamics .however , as already pointed out , such values are not related to the curve .lastly , one would be interested in reformulating the previous study in the quasi - deterministic limit , ( see appendix [ app1 ] ) .unfortunately , this is not possible , because the limit turns out to be singular : an exponential growth is expected in general for finite , but a power law is found when this ratio is infinite . in other words , an expansion like ( [ series ] ) with inverted does not work . in ( [ ggg ] ) this reflects into the presence of non - integer powers and into the difference between the limits and .by assuming the telegraph - noise model for the velocity gradient ( or strain matrix ) , we were able to carry out analytical computations and to obtain several results on the separation between two fluid particles .focusing on smooth flows ( batchelor regime ) , we firstly analysed the one - dimensional compressible case , finding explicit expressions for the long - time evolution of the interparticle - distance moments ( [ alfagamma ] ) , the lyapunov exponent ( [ lambda ] ) and the cramr function ( [ cramer ] ). 
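To complement the 1d results just summarized, the following minimal Monte Carlo sketch (an illustration added here, not part of the original analysis) integrates the separation equation for the symmetric zero-mean telegraph strain taking values +a/-a with switching rate nu. All parameter values are arbitrary, and the sketch deliberately ignores the Lagrangian volume constraint that underlies the compressible 1d analysis, so the analytic values quoted in the comments refer only to the unconstrained process, for which the moment growth rate is -nu + sqrt(nu^2 + p^2 a^2).

```python
import numpy as np

def telegraph_log_separation(a=1.0, nu=2.0, T=5.0, dt=1e-3, n_traj=20000, seed=0):
    """Sample ln R(T) for dR/dt = sigma(t) R, with sigma a symmetric two-state
    Markov (telegraph) noise taking values +a/-a and switching rate nu."""
    rng = np.random.default_rng(seed)
    sigma = rng.choice([a, -a], size=n_traj)          # stationary initial state
    log_r = np.zeros(n_traj)
    # exact one-step probability that the two-state chain has flipped after dt
    p_flip = 0.5 * (1.0 - np.exp(-2.0 * nu * dt))
    for _ in range(int(T / dt)):
        log_r += sigma * dt
        flips = rng.random(n_traj) < p_flip
        sigma[flips] = -sigma[flips]
    return log_r

a, nu, T = 1.0, 2.0, 5.0
log_r = telegraph_log_separation(a, nu, T)
print("lambda (MC) =", log_r.mean() / T)              # ~0 without Lagrangian weighting
for p in (1, 2, 3):
    # moment growth rate; higher p is dominated by rare trajectories, so the
    # Monte Carlo estimate degrades quickly with p and T
    gamma_mc = np.log(np.mean(np.exp(p * log_r))) / T
    gamma_unconstr = -nu + np.sqrt(nu**2 + p**2 * a**2)   # unconstrained-process value
    print(f"p={p}: gamma_MC={gamma_mc:.3f}  gamma_unconstrained={gamma_unconstr:.3f}")
```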
then we concentrated on the two - dimensional incompressible situation and provided an elementary extension of the 1d case , represented by the hyperbolic flow . moving to the general isotropic case, a thorough analysis on the complete dynamics was made for the evolution of linear ( ) and quadratic ( ) components .due to high computational cost , at higher we focused on a restricted , though exact , dynamics : however , only specific values of could be studied in this way , leading to the extrapolations ( [ ggg ] ) and ( [ lam ] ) .such guess was rigorously proved in the quasi - delta - correlated limit , for which approximated equations were introduced .we believe that the present paper represents an interesting example of the use of a coloured noise for which the well - known closure problem can be solved analytically .i warmly thank grisha falkovich , who inspired this work .i acknowledge useful discussions with itzhak fouxon , vladimir lebedev , stefano musacchio , konstantin turitsyn and marija vucelja .99 2001 particles and fields in fluid turbulence . _ rev .phys . _ * 73 * , 913975 .1999 universal long - time properties of lagrangian statistics in the batchelor regime and their application to the passive scalar problem ._ e * 60 * , 41644174 .1999 simplified models for turbulent diffusion : theory , numerical modelling , and physical phenomena .rep . _ * 314 * , 237574 .1995 normal and anomalous scaling of the fourth - order correlation function of a randomly advected passive scalar ._ e * 52 * , 49244941 .1998 particle dispersion in a multidimensional random flow with arbitrary temporal correlations ._ physica _ a * 249 * , 3646 .1959 small - scale variation of convected quantities like temperature in turbulent fluid .general discussion and the case of small conductivity ._ j. fluid mech . _ * 5 * , 113133 .1978 `` formulae of differentiation '' and their use for solving stochastic equations ._ physica _ a * 91 * , 563574 .1991 mean first - passage time in the presence of colored noise : a random - telegraph - signal approach ._ a * 43 * , 31674174 .2006 clustering properties in turbulent signals ._ j. stat ._ * 125 * , 11451157 .1999 positivity of entropy production in non - equilibrium statistical mechanics ._ j. stat .phys . _ * 95 * , 393468 .2001 intermittent distribution of inertial particles in turbulent flows .* 86 * , 27902793 .1994 equilibrium microstates which generate second law violating steady states ._ e * 50 * , 16451648 .1995 dynamical ensembles in nonequilibrium statistical mechanics .lett . _ * 74 * , 26942697 .2004 entropy production and extraction in dynamical systems and turbulence ._ new j. phys . _ * 6 * , no . 50 , pp . 111 .1963 _ trans . am .soc . _ * 108 * , 377428 . 1984_ j. fluid mech . _ * 144 * , 111 .2007 _ phys ._ e * 76 * , 026312 .the deterministic case corresponds to , when the noise takes a constant value . in 2d ,the presence or absence of rotation plays a crucial role .indeed , in the diagonal case ( corresponding to the hyperbolic flow shown in section [ hypf ] ) , one finds the exponential evolution , which implies and .+ on the contrary , in the general isotropic case ( with rotation ) , one gets ( for any of the possible combinations of the noise signs ) , thus both components are linear in time and . 
consequently , a power - law temporal dependence is found for the separation and .it is worth noticing that this result is due to the fact that , with our choice , and is characteristic of the 2d isotropic situation .in this appendix we provide some details of the calculation for the 2d general isotropic case .+ the matrix reads : \delta_{\kappa,\iota+(n+1)}+[(n+1)-\iota]a\delta_{\kappa,\iota+2(n+1)+1}&\\ + \sqrt{2}[(n+1)-\iota]a\delta_{\kappa,\iota+3(n+1)+1}+(\iota-1)a\delta_{\kappa,\iota+2(n+1)-1}&\\ -\sqrt{2}(\iota-1)a\delta_{\kappa,\iota+3(n+1)-1}&\\ & \hspace{-5cm}\textrm{for } 1\le\iota\le n+1\;,\\[0.2 cm ] [ 3(n+1)-2\iota+1]a\delta_{\kappa,\iota-(n+1)}+[2(n+1)-\iota]a\delta_{\kappa,\iota+3(n+1)+1}&\\ + \sqrt{2}[2(n+1)-\iota]a\delta_{\kappa,\iota+4(n+1)+1}+[-(n+1)+\iota-1]a\delta_{\kappa,\iota+3(n+1)-1}&\\ -\sqrt{2}[-(n+1)+\iota-1]a\delta_{\kappa,\iota+4(n+1)-1}-\nu\delta_{\kappa,\iota}&\\ & \hspace{-5cm}\textrm{for } ( n+1)+1\le\iota\le2(n+1)\;,\\[0.2 cm ] [ 5(n+1)-2\iota+1]a\delta_{\kappa,\iota+2(n+1)}+[3(n+1)-\iota]a\delta_{\kappa,\iota-2(n+1)+1}&\\ + \sqrt{2}[3(n+1)-\iota]a\delta_{\kappa,\iota+4(n+1)+1}+[-2(n+1)+\iota-1]a\delta_{\kappa,\iota-2(n+1)-1}&\\ -\sqrt{2}[-2(n+1)+\iota-1]a\delta_{\kappa,\iota+4(n+1)-1}-\nu\delta_{\kappa,\iota}&\\ & \hspace{-5cm}\textrm{for } 2(n+1)+1\le\iota\le3(n+1)\;,\\[0.2 cm ] [ 7(n+1)-2\iota+1]a\delta_{\kappa,\iota+2(n+1)}+[4(n+1)-\iota]a\delta_{\kappa,\iota+3(n+1)+1}&\\ + \sqrt{2}[4(n+1)-\iota]a\delta_{\kappa,\iota-3(n+1)+1}+[-3(n+1)+\iota-1]a\delta_{\kappa,\iota+3(n+1)-1}&\\ -\sqrt{2}[-3(n+1)+\iota-1]a\delta_{\kappa,\iota-3(n+1)-1}-\nu\delta_{\kappa,\iota}&\\ & \hspace{-5cm}\textrm{for } 3(n+1)+1\le\iota\le4(n+1)\;,\\[0.2 cm ] [ 9(n+1)-2\iota+1]a\delta_{\kappa,\iota-2(n+1)}+[5(n+1)-\iota]a\delta_{\kappa,\iota-3(n+1)+1}&\\ + \sqrt{2}[5(n+1)-\iota]a\delta_{\kappa,\iota+3(n+1)+1}+[-4(n+1)+\iota-1]a\delta_{\kappa,\iota-3(n+1)-1}&\\ -\sqrt{2}[-4(n+1)+\iota-1]a\delta_{\kappa,\iota+3(n+1)-1}-2\nu\delta_{\kappa,\iota}&\\ & \hspace{-5cm}\textrm{for } 4(n+1)+1\le\iota\le5(n+1)\;,\\[0.2 cm ] [ 11(n+1)-2\iota+1]a\delta_{\kappa,\iota-2(n+1)}+[6(n+1)-\iota]a\delta_{\kappa,\iota+2(n+1)+1}&\\ + \sqrt{2}[6(n+1)-\iota]a\delta_{\kappa,\iota-4(n+1)+1}+[-5(n+1)+\iota-1]a\delta_{\kappa,\iota+2(n+1)-1}&\\ -\sqrt{2}[-5(n+1)+\iota-1]a\delta_{\kappa,\iota-4(n+1)-1}-2\nu\delta_{\kappa,\iota}&\\ & \hspace{-5cm}\textrm{for } 5(n+1)+1\le\iota\le6(n+1)\;,\\[0.2 cm ] [ 13(n+1)-2\iota+1]a\delta_{\kappa,\iota+(n+1)}+[7(n+1)-\iota]a\delta_{\kappa,\iota-3(n+1)+1}&\\ + \sqrt{2}[7(n+1)-\iota]a\delta_{\kappa,\iota-4(n+1)+1}+[-6(n+1)+\iota-1]a\delta_{\kappa,\iota-3(n+1)-1}&\\ -\sqrt{2}[-6(n+1)+\iota-1]a\delta_{\kappa,\iota-4(n+1)-1}-2\nu\delta_{\kappa,\iota}&\\ & \hspace{-5cm}\textrm{for } 6(n+1)+1\le\iota\le7(n+1)\;,\\[0.2 cm ] [ 15(n+1)-2\iota+1]a\delta_{\kappa,\iota-(n+1)}+[8(n+1)-\iota]a\delta_{\kappa,\iota-2(n+1)+1}&\\ + \sqrt{2}[8(n+1)-\iota]a\delta_{\kappa,\iota-3(n+1)+1}+[-7(n+1)+\iota-1]a\delta_{\kappa,\iota-2(n+1)-1}&\\ -\sqrt{2}[-7(n+1)+\iota-1]a\delta_{\kappa,\iota-3(n+1)-1}-2\nu\delta_{\kappa,\iota}&\\ & \hspace{-5cm}\textrm{for } 7(n+1)+1\le\iota\le8(n+1)\;. 
\end{array}\right.\ ] ] we focus at first on the study of linear coordinates ( ) : we introduce the 16-component vector the matrix reads and its eigenvalues are given by ( twice ) , ( six times ) , ( six times ) and ( twice ) .therefore , the largest eigenvalue is , with eigenvectors and .+ let us now move to quadratic quantities ( ) : we define the 24-component vector the associated matrix is and its eigenvalues are : ( three times ) , ( three times ) , the three roots of ( corresponding to the reduced dynamics ( [ red ] ) ) , the three roots ( each taken twice ) of , the three roots ( each taken twice ) of and the three roots of . in accordance with ( [ mu ] ) , the largest eigenvalue is {216a^2\nu+3\sqrt{5184a^4\nu^2 - 3\nu^6}}}+\frac{\sqrt[3]{216a^2\nu+3\sqrt{5184a^4\nu^2 - 3\nu^6}}}{3}\ ] ] and its eigenvector reads :
|
we study the statistics of the relative separation between two fluid particles in a random flow . we confine ourselves to the batchelor regime , i.e. we only examine the evolution of distances smaller than the smallest active scale of the flow , where the latter is spatially smooth . the lagrangian strain is assumed as given in its statistics and is modelled by a telegraph noise . this is a stationary random markov process , which can only take two values with known transition probabilities . the presence of two independent parameters ( intensity of velocity gradient and flow correlation time ) allows the definition of their ratio as the kubo number , whose infinitesimal and infinite limits describe the delta - correlated and quasi - deterministic cases , respectively . however , the simplicity of the model enables us to write closed equations for the interparticle distance in the presence of a finite - correlated , i.e. coloured , noise . + in 1d , the flow is locally compressible in every single realization , but the average ` volume ' must keep finite . this provides us with a mathematical constraint , which physically reflects in the fact that , in the lagrangian frame , particles spend longer time in contracting regions than in expanding ones . imposing this condition consistently , we are able to find analytically the long - time growth rate of the interparticle - distance moments and , consequently , the senior lyapunov exponent , which coherently turns out to be negative . analysing the large - deviation form of the joint probability distribution , we also show the exact expression of the cramr function , which happens to satisfy the well - known fluctuation relation despite the time irreversibility of the strain statistics . + the 2d incompressible case is also studied . after showing a simple generalization of the 1d situation , we concentrate ourselves on the general isotropic case : the evolution of the linear and quadratic components is analysed thoroughly , while for higher moments , due to high computational cost , we focus on a restricted , though exact , dynamics . as a result , we obtain the moment asymptotic growth rates and the lyapunov exponent ( positive ) in the two above - mentioned limits , together with the leading corrections . the quasi - deterministic limit turns out to be singular , while a perfect agreement is found with the already - known delta - correlated case .
|
data from the pierre auger observatory are used to study high energy cosmic rays in the region of the greisen - zatsepin - kuzmin ( gzk ) cut - off .the detection of events with primary energy beyond the gzk cut - off , but also the discrepancy between experimental data from the agasa and hires experiments , results in a wide spectrum of hypotheses about the origin of ultra high energy cosmic rays and their nature . to solve this puzzle a well understood energy scale and well understood energy resolution combined with high statistics are of decisive importance for the pierre auger observatory .a subset of showers observed by the observatory is measured by both the ground array and fluorescence detector ( fd ) .these special _ hybrid _ events are used to set the shower energy scale , based on the determination of the shower energy by the fluorescence detector , and to measure the shower energy resolution of the experiment .this paper will focus on the optical relative calibration of the fluorescence telescopes and the accuracy of this method .details of the absolute calibration of fd are discussed in reference .the fluorescence detectors of the pierre auger observatory are located on top of hills at the perimeter of the ground array .three installations named los leones ( ll ) , los morados and , coihueco are already equipped with 6 operating telescopes each and are taking data .fluorescence light from extensive air showers enters the aperture of the telescope through an uv transparent filter ( nm ) and is focussed by a spherical mirror on a camera made of 440 photomultiplier tubes ( pmts ) as shown in fig .gaps between adjacent pmts on the focal surface are covered by reflective triangular inserts , termed mercedes stars . these act like winston cones and reflect light toward the pmts whichwould otherwise be lost in the gaps .this maximizes the light collection and results in a smooth transition between adjacent pmts .the diameter of the telescope was increased from original 1.7 m to 2.2 m in the final design to increase the effective aperture area by about a factor 2 .a corrector ring of annular shape covers the additional outer aperture with radii of m m to preserve a small spot size also at the increased diameter .schematic view of the auger fluorescence telescopes . 
also shownare the positions of the diffusers for the relative optical calibrations : a at the center of the mirror , b on both sides of the camera and , c in the aperture box.,height=245 ] the signal of each pixel passes through a programmable gain amplifier , an anti - aliasing filter and is digitized by a 10 mhz , 12-bit adc in the front - end electronics associated to each telescope .the absolute calibration provides the conversion between the digitized signal ( in adc units ) and the photon flux incident on the 3.80 m telescope aperture .the wavelength dependent transparency of the uv filter , the mirror reflectivity and , the pmt quantum efficiency results in wavelength dependent absolute calibration constants .the electronic gain amplifier is adjusted during the absolute calibration procedure such that the response of the telescope to light is identical at a fixed wavelength .changes in the properties of any optical component would change the absolute calibration and must be tracked in time by routine relative calibration measurements .these measurements determine the response of the telescope to light pulses from either a led source or a xenon flash lamp .the results is compared to reference measurements made directly after absolute calibration .therefore , the main goal of the relative calibration is to monitor short term and long term changes between successive absolute calibration measurements and to check the overall stability of the fluorescence detector .: optical fibres distribute light from a super - bright led array ( 470 nm ) to the diffusers ( denoted with `` a '' in fig .1 ) which illuminate the cameras in the same fd building simultaneously .part of the led light is directed by a fibre to a monitoring photodiode .a led calibration unit ( lcu ) drives the led array with programmable current pulses , and records the photodiode signal .the lcu consists of the following main functional units : a central control , a programmable pulse generator and a pulse monitor .the central control unit provides a tcp / ip connection to the local lan and controls the operation of all lcu functions .the programmable pulse generator is able to generate current pulses of arbitrary shape encoded as consecutive 256 samples of a 40 mhz 10-bit digital - to - analog converter or rectangular pulses up to long with programmable amplitude ( a ) .the pulse monitor consists of a photodiode ( hamamatsu s1336 with zero sensitivity drift ) , a transimpedance amplifier , an anti - aliasing filter of 3.1 mhz cut - off frequency and a 10 mhz 12-bit adc .the rms noise ( sigma ) of the monitor circuit is 2.5 adc counts .the lcu is controlled during relative calibration by the daq .it defines ( via tcp - ip ) the total number of light pulses , the pulse shape , its amplitude and its duration .the lcu can be triggered by its internal timer or by an external trigger signal .the light pulse is generated synchronous to the 10 mhz sampling clock of the front - end electronics with low jitter .the pulse monitor stores 1000 consecutive adc samples from the monitoring diode for later readout by the daq process .: two xenon flash lamps are coupled to optical fibers to distribute light pulses to different destinations ( denoted `` b '' and `` c '' in fig .1 ) on each telescope .source `` b '' pulses terminate at 1 mm thick teflon diffusers on both sides of the camera and are directed towards the mirror .source `` c '' pulses are fed through ports on the sides of the aperture and are directed onto a reflective tyvek foil 
mounted on the inner side of the telescope doors .the foil reflects the light back into the optic of the telescope . in this paperwe focus on the results of relative calibration with the led source . for details on the xenon flash lamp calibration as well as for technical references see .we simply mention that monitoring the los leones and coihueco xenon light source stability results in mean light intensity distribution characterized by a relative uncertainty smaller than .the two relative calibration light sources are therefore quite stable and provide a good tool for monitoring the long term stability of the fluorescence detector .a- , b- and c- calibrations are routinely taken twice per data taking shift . as the light sources were essentially constant , the a - source calibration signals in each pixel ( of each telescope ) provided an optimal monitor of the pixel stability .the relative calibration constants are measured with respect to a given reference relative calibration run taken within one hour after the absolute calibration measurement .the total integrated charge is then measured with respect to this run . a typical result for telescope # 4 in los leonesis shown in fig .[ fig2 ] .the relative calibration constants fluctuate around 1 with a typical width of a few percent . to monitor the short term stability of the system we calculated the camera averaged relative calibration constants and its rms every day .the trend over the measuring period in october 2004 is shown in fig .[ fig3 ] for telescope # 2 of los leones as an example .the rms reported in fig . [ fig3 ] ( bottom diagram ) is typical for all relative calibration runs , resulting in a rms of about .the stability over a year or even longer times can be monitored by calculating monthly averages and displaying the trend for each telescope over the full data taking period .the data of telescope # 4 in los leones shown in fig .[ fig4 ] represent a typical situation .the system is stable within a few percent both on the long term and on a monthly base .the overall uncertainty , as deduced from the long term monitoring of the system is , typically , in the range of 1 to 3 % .the relative calibration is also a very effective tool to find problems of any kinds with the pmts response or in the electronics .in addition , effects on the telescopes sensitivity caused by work on the hardware or software show up immediately in daily relative calibrations .thus , any variation of the entire fd response due to the intervention on the system can be monitored and cross - checked .
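For concreteness, here is a small numerical sketch (with made-up numbers, not the actual FD calibration software) of the bookkeeping described above: per-pixel relative calibration constants are formed as the ratio of the total integrated charge in an LED run to that of the reference run recorded shortly after the absolute calibration, and the camera-averaged constant and its rms are then tracked run by run.

```python
import numpy as np

def relative_constants(charge, ref_charge):
    """Per-pixel relative calibration constants: total integrated charge of a
    LED calibration run divided, pixel by pixel, by the charge of the
    reference run; values are expected to scatter around 1."""
    return np.asarray(charge, float) / np.asarray(ref_charge, float)

def camera_summary(constants):
    """Camera-averaged relative constant and its rms (one telescope, 440 PMTs),
    the quantities tracked in the daily and monthly stability plots."""
    c = np.asarray(constants, float)
    return c.mean(), c.std()

# toy example: a camera whose mean response drifted by 1% with 2% pixel scatter
rng = np.random.default_rng(1)
ref_run = rng.normal(1000.0, 50.0, size=440)              # integrated ADC charge
later_run = ref_run * rng.normal(1.01, 0.02, size=440)
mean_c, rms_c = camera_summary(relative_constants(later_run, ref_run))
print(f"camera average = {mean_c:.3f}, rms = {rms_c:.3f}")
```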
|
the stability of the fluorescence telescopes of the pierre auger observatory is monitored with the optical relative calibration setup . optical fibers distribute light pulses to three different diffuser groups within the optical system . the total charge per pulse is measured for each pixel and compared with reference calibration measurements . this allows monitoring the short and long term stability with respect to the relative timing between pixels and the relative gain for each pixel . the designs of the led calibration unit ( lcu ) and of the xenon flash lamp used for relative calibration are described and their capabilities to monitor the stability of the telescope performance are studied . we report the analysis of relative calibration data recorded during 2004 . fluctuations in the relative calibration constants provide a measure of the stability of the fd .
|
this paper considers the problem of matrix recovery from a small set of noisy observations .suppose that we observe a small set of entries of a matrix .the problem of inferring the many missing entries from this set of observations is the _ matrix completion _ problem . a usual assumption that allows to succeed sucha completion is to suppose that the unknown matrix has low rank or has approximately low rank .the problem of matrix completion comes up in many areas including collaborative filtering , multi - class learning in data analysis , system identification in control , global positioning from partial distance information and computer vision , to mention some of them .for instance , in computer vision , this problem arises as many pixels may be missing in digital images . in collaborative filtering, one wants to make automatic predictions about the preferences of a user by collecting information from many users .so , we have a data matrix where rows are users and columns are items . for each user, we have a partial list of his preferences .we would like to predict the missing rates in order to be able to recommend items that may interest each user .the noiseless setting was first studied by cands and recht using nuclear norm minimization .a tighter analysis of the same convex relaxation was carried out in . for a simpler approach ,see more recent papers of recht and gross .an alternative line of work was developed by keshavan _et al . _ in .a more common situation in applications corresponds to the noisy setting in which the few available entries are corrupted by noise .this problem has been extensively studied recently .the most popular methods rely on nuclear norm minimization ( see , e.g. , ) .one can also use rank penalization as it was done by bunea __ and klopp . typically , in the matrix completion problem, the sampling scheme is supposed to be uniform .however , in practice , the observed entries are not guaranteed to follow the uniform scheme and its distribution is not known exactly . in the present paper , we consider nuclear norm penalized estimators and study the corresponding estimation error in frobenius norm .we consider both cases when the variance of the noise is known or not .our methods allow us to consider quite general sampling distribution : we only assume that the sampling distribution satisfies some mild `` regularity '' conditions ( see assumptions [ l ] and [ asspi ] ) .let be the unknown matrix .our main results , theorems [ thmu3 ] and [ thm3 ] , show the following bound on the normalized frobenius error of the estimators that we propose in this paper : with high probability where the symbol means that the inequality holds up to a multiplicative numerical constant .this theorem guarantees , that the prediction error of our estimator is small whenever .this quantifies the sample size necessary for successful matrix completion .note that , when is small , this is considerably smaller than , the total number of entries . 
for large and small ,this is also quite close to the degree of freedom of a rank matrix , which is .an important feature of our estimator is that its construction requires only an upper bound on the maximum absolute value of the entries of .this condition is very mild .a bound on the maximum of the elements is often known in applications .for instance , if the entries of are some user s ratings it corresponds to the maximal rating .previously , the estimators proposed by koltchinskii __ and by klopp also require a bound on the maximum of the elements of the unknown matrix but their constructions use the uniform sampling and additionally require the knowledge of an upper bound on the variance of the noise .other works on matrix completion require more involved conditions on the unknown matrix . for more details ,see section [ completionknown ] .sampling schemes more general than the uniform one were previously considered in .lounici considers a different estimator and measures the prediction error in the spectral norm . in authors consider penalization using a weighted trace - norm , which was first introduced by srebro __ in assume that the sampling distribution is a product distribution , that is , the row index and the column index of the observed entries are selected independently .this assumption does not seem realistic in many cases ( see discussion in ) .an important advantage of our method is that the sampling distribution does not need to be equal to a product distribution ._ in propose a method based on the `` smoothing '' of the sampling distribution .this procedure may be applied to an arbitrary sampling distribution but requires a priori information on the rank of the unknown matrix . moreover , unlike in the present paper , in the prediction performances of the estimator are evaluated through a bound on the expected -lipschitz loss ( where the expectation is taken with respect to the sampling distribution ) .the weighted trace - norm , used in , corrects a specific situation where the standard trace - norm fails .this situation corresponds to a non - uniform distribution where the row / column marginal distribution is such that some columns or rows are sampled with very high probability ( for a more thorough discussion see ) . unlike , we use the standard trace - norm penalization and our assumption on the sampling distribution ( assumption [ l ] ) guarantees that no row or column is sampled with very high probability .most of the existing methods of matrix completion rely on the knowledge or a pre - estimation of the standard deviation of the noise .the matrix completion problem with unknown variance of the noise was previously considered in using a different estimator which requires uniform sampling .note also that in the bound on the prediction error is obtained under some additional condition on the rank and the `` spikiness ratio '' of the matrix .the construction of the present paper is valid for more general sampling distributions and does not require such an extra condition .the remainder of this paper is organized as follows . in section [ preliminaries ], we introduce our model and the assumptions on the sampling scheme . for the readers convenience , we also collect notation which we use throughout the paper . 
in section [ completionknown ]we consider matrix completion in the case of known variance of the noise .we define our estimator and prove theorem [ thm2 ] which gives a general bound on its frobenius error conditionally on bounds for the stochastic terms .theorem [ thm3 ] , provides bounds on the frobenius error of our estimator in closed form .therefore , we use bounds on the stochastic terms that we derive in section [ stochastic ] . to obtain such bounds , we use a non - commutative extension of the classical bernstein inequality . in section [ completionunknown ], we consider the case when the variance of the noise is unknown .our construction uses the idea of `` square - root '' estimators , first introduced by belloni __ in the case of the square - root lasso estimator .theorem [ thmu3 ] , shows that our estimator has the same performances as previously considered estimators which require the knowledge of the standard deviation of the noise and of the sampling distribution .let be an unknown matrix , and consider the observations satisfying the trace regression model the noise variables are independent , with and ; are random matrices of dimension and denotes the trace of the matrix .assume that the design matrices are i.i.d .copies of a random matrix having distribution on the set where are the canonical basis vectors in .then , the problem of estimating coincides with the problem of matrix completion with random sampling distribution .one of the particular settings of this problem is the uniform sampling at random ( usr ) matrix completion which corresponds to the uniform distribution .we consider a more general weighted sampling model .more precisely , let be the probability to observe the entry .let us denote by the probability to observe an element from the column and by the probability to observe an element from the row .observe that .as it was shown in , the trace - norm penalization fails in the specific situation when the row / column marginal distribution is such that some columns or rows are sampled with very high probability ( for more details , see ) .to avoid such a situation , we need the following assumption on the sampling distribution : [ l ] there exists a positive constant such that in order to get bounds in the frobenius norm , we suppose that each element is sampled with positive probability : [ asspi ] there exists a positive constant such that in the case of uniform distribution .let us set .assumption [ asspi ] implies that we provide a brief summary of the notation used throughout this paper .let be matrices in .* we define the _ scalar product _ . * for the _ schatten - q _ ( _ quasi-_)_norm _ of the matrix is defined by where are the singular values of ordered decreasingly .* where .* let be the probability to observe the element .* for , and for , .* and .* let , and . * .* let be an i.i.d .rademacher sequence and we define * define the observation operator as . * .in this section , we consider the matrix completion problem when the variance of the noise is known . we define the following estimator of : where is a regularization parameter and is an upper bound on .this is a restricted version of the matrix lasso estimator .the matrix lasso estimator is based on a trade - off between fitting the target matrix to the data using least squares and minimizing the nuclear norm and it has been studied by a number of authors ( see , e.g. , ) . 
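To make the estimator concrete, the following sketch solves a toy instance of the nuclear-norm penalized least-squares problem by proximal gradient descent with singular-value soft-thresholding. It is an illustration only: the entrywise clip is a heuristic stand-in for the exact sup-norm ball constraint, the observed entries form a mask rather than i.i.d. draws with replacement, and the regularization parameter is hand-tuned for this toy rather than set by the theory.

```python
import numpy as np

def svt(M, tau):
    """Singular value soft-thresholding: proximal operator of tau * nuclear norm."""
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    return U @ (np.maximum(s - tau, 0.0)[:, None] * Vt)

def complete(Y, rows, cols, shape, lam, a_max, step=None, n_iter=500):
    """Proximal-gradient sketch for nuclear-norm penalized least squares with
    an entrywise bound.  rows/cols index the n observed entries, Y the noisy
    observations; clipping after the SVT step heuristically enforces |A_ij| <= a_max."""
    n = len(Y)
    A = np.zeros(shape)
    if step is None:
        step = n / 2.0        # 1/L for the (2/n)-Lipschitz gradient of the data term
    for _ in range(n_iter):
        resid = A[rows, cols] - Y
        grad = np.zeros(shape)
        np.add.at(grad, (rows, cols), (2.0 / n) * resid)
        A = svt(A - step * grad, step * lam)
        A = np.clip(A, -a_max, a_max)
    return A

# toy usage with a rank-2 ground truth and ~30% of entries observed
rng = np.random.default_rng(0)
m1 = m2 = 60
A0 = rng.normal(size=(m1, 2)) @ rng.normal(size=(2, m2))
mask = rng.random((m1, m2)) < 0.3
rows, cols = np.nonzero(mask)
Y = A0[rows, cols] + 0.1 * rng.normal(size=len(rows))
A_hat = complete(Y, rows, cols, (m1, m2), lam=0.01, a_max=np.abs(A0).max())
print("relative Frobenius error:", np.linalg.norm(A_hat - A0) / np.linalg.norm(A0))
```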
a restricted version of a slightly different estimator , penalised by a weighted nuclear norm , was first considered by negahban and wainwright in .here and are diagonal matrices with diagonal entries and , respectively . in ,the domain of optimization is the following one where is a bound on the `` spikiness ratio '' of the unknown matrix . here and .in the particular setting of the uniform sampling ( [ domainwainwrite ] ) gives where is an upper bound on the `` spikiness ratio '' .the following theorem gives a general upper bound on the prediction error of estimator given by ( [ estimator ] ) .its proof is given in appendix [ proof - thm2 ] .the stochastic terms and play a key role in what follows . [ thm2 ]let be i.i.d . with distribution on which satisfies assumptions [ l ] and [ asspi ] and .assume that for some constant .then , there exist numerical constants such that with probability at least , where . in order to get a bound in a closed form, we need to obtain suitable upper bounds on and , with probability close to , on .we will obtain such bounds in the case of _ sub - exponential noise _ , that is , under the following assumption : [ noise ] let be a constant such that . the following two lemmas give bounds on and .we prove them in section [ stochastic ] using the non - commutative bernstein inequality .[ delta ] let be i.i.d . with distribution on which satisfies assumptions [ l ] and [ asspi ] .assume that are independent with , and satisfy assumption [ noise ] .then , there exists an absolute constant that depends only on and such that , for all with probability at least we have where .[ edelta ] let be i.i.d . with distribution on which satisfies assumptions [ l ] and [ asspi ] .assume that are independent with , and satisfy assumption [ noise ] .then , for , there exists an absolute constant such that where .an optimal choice of the parameter in these lemmas is .larger leads to a slower rate of convergence and a smaller does not improve the rate but makes the concentration probability smaller . with this choice of the second terms in the maximum in ( [ max ] ) is negligible for where .then , we can choose where is an absolute numerical constant which depends only on .if are , then we can take ( see lemma 4 in ) . with this choice of , we obtain the following theorem . [ thm3 ]let be i.i.d . with distribution on which satisfies assumptions [ l ] and [ asspi ] .assume that for some constant and that assumption [ noise ] holds .consider the regularization parameter satisfying ( [ lambda ] ) .then , there exist a numerical constant , that depends only on , such that with probability greater than ._ comparison to other works _ :an important feature of our estimator is that its construction requires only an upper bound on the maximum absolute value of the entries of ( and an upper bound on the variance of the noise ) .this condition is very mild .let us compare this matrix condition and the bound we obtain with some of the previous works on noisy matrix completion .we will start with the paper of keshavan _ et al .their method requires a priori information on the rank of the unknown matrix as well as a matrix incoherence assumption ( which is stated in terms of the singular vectors of ) . 
under a sampling scheme different from ours ( uniform sampling without replacement ) and sub - gaussian errors , the estimator proposed in satisfies , with high probability , the following bound the symbol means that the inequality holds up to multiplicative numerical constants , is the condition number and is the aspect ratio . comparing ( [ keshavan ] ) and ( [ revisionthm ] ) , we see that our bound is better : it does not involve the multiplicative coefficient which can be big ._ in propose an estimator which uses a priori information on the `` spikiness ratio '' of .this method requires bounded by a constant , say , in which case the estimator proposed in satisfies the following bound in the case of uniform sampling and bounded `` spikiness ratio '' this bound coincides with the bound given by theorem [ thm3 ] .an important advantage of our method is that the sampling distribution does not need to be equal to a product distribution ( i.e. , need not be equal to ) as is required in .the methods proposed in use the uniform sampling . similarly to our construction , an a priori bound on is required .an important difference is that , in these papers , the bound on is used in the choice of the regularization parameter .this implies that the convex functional which is minimized in order to obtain depends on .a too large bound may jeopardize the exactness of the estimation . in our construction, determines the ball over which we are minimizing our convex functional , which itself is independent of .our estimator achieves the same bound as the estimators proposed in these papers ._ minimax optimality _ : if we consider the matrix completion setting ( i.e. , ) , then , the maximum in ( [ revisionthm ] ) is given by its first therm . in the case of gaussian errors and under the additional assumption that for some constant this rate of convergence is minimax optimal ( cf .theorem 5 of ) .this optimality holds for the class of matrices defined as follows : for given and if and only if the rank of is not larger than and all the entries of are bounded in absolute value by ._ possible extensions _ : the techniques developed in this paper may also be used to analyse weighted trace norm penalty similar to one used in .in this section , we propose a new estimator for the matrix completion problem in the case when the variance of the noise is unknown .our construction is inspired by the square - root lasso estimator proposed in .we define the following estimator of : where is a regularization parameter and is an upper bound on .note that the first term of this estimator is the square root of the data - dependent term of the estimator that we considered in section [ completionknown ] .this is similar to the principle used to define the square - root lasso estimator for the usual vector regression model .let us set .the following theorem gives a general upper bound on the prediction error of the estimator .its proof is given in appendix [ proof - thmu1 ] .[ thmu1 ] let be i.i.d . with distribution on which satisfies assumptions [ l ] and [ asspi ] .assume that for some constant and .then , there exist numerical constants , that depends only on , such that with probability at least where . 
in order to get a bound on the prediction risk in a closed form, we use the bounds on and given by lemmas [ delta ] and [ edelta ] taking .it remains to bound .we consider the case of sub - gaussian noise : [ subg ] there exists a constant such that \leq\exp \bigl(t^{2}/2k \bigr)\ ] ] for all .note that condition implies that . under assumption[ subg ] , are sub - exponential random variables .then , the bernstein inequality for sub - exponential random variables implies that , there exists a numerical constant such that , with probability at least , one has using lemma [ delta ] and the right - hand side of ( [ bq ] ) , for , we can take note that _ does not depend _ on and satisfies the two conditions required in theorem [ thmu1 ] .we have that with probability greater then and for large enough , more precisely , for such that where .we obtain the following theorem .[ thmu3 ] let be i.i.d . with distribution on which satisfies assumptions [ l ] and [ asspi ] .assume that for some constant and that assumption [ subg ] holds .consider the regularization parameter satisfying ( [ lambdaun ] ) and satisfying ( [ nun ] ) .then , there exist numerical constants such that , with probability greater than .note that condition ( [ nun ] ) is not restrictive : indeed the sampling sizes satisfying condition ( [ nun ] ) are of the same order of magnitude as those for which the normalized frobenius error of our estimator is small .thus , theorem [ thmu3 ] shows , that has the same prediction performances as previously proposed estimators which rely on the knowledge of the standard deviation of the noise and of the sampling distribution .in this section , we will obtain the upper bounds for the stochastic errors and defined in ( [ stoch1 ] ) . in order to obtain such bounds, we use the matrix version of bernstein s inequality .the following proposition is obtained by an extension of theorem 4 in to rectangular matrices via self - adjoint dilation ( cf ., for example , 2.6 in ) .let be independent random matrices with dimensions .define and [ pr1 ] let be independent random matrices with dimensions that satisfy .suppose that for some constant and all .then , there exists an absolute constant , such that , for all , with probability at least we have where .we apply proposition [ pr1 ] to .we first estimate and .note that is a zero - mean random matrix which satisfies then , assumption [ noise ] implies that there exists a constant such that for all .we compute where ( resp . , )is the diagonal matrix with ( resp ., ) on the diagonal .this and the fact that the are i.i.d .imply that note that which implies that and the statement of lemma [ delta ] follows .the proof follows the lines of the proof of lemma 7 in . for sake of completeness, we give it here . set . is the value of such that the two terms in ( [ max ] ) are equal .note that lemma [ delta ] implies that and we set , . by hlder s inequality ,we get the inequalities ( [ proba1 ] ) and ( [ proba2 ] ) imply that \\[-8pt ] & & \quad\leq \biggl(d \int^{+\infty}_{0}\exp\bigl \{-t^{1/\log ( d)}\nu_1\bigr\}\,\mathrm{d}t+d \int ^{+\infty}_{0}\exp\bigl\{-t^{1/(2\log ( d ) } \nu_2\bigr\}\,\mathrm{d}t \biggr)^{1/2\log(d ) } \nonumber\\ & & \quad\leq \sqrt{e } \bigl(\log(d)\nu_1^{-\log(d)}\gamma \bigl(\log(d)\bigr)+2\log(d ) \nu_2^{-2\log(d)}\gamma\bigl(2\log(d ) \bigr ) \bigr)^{1/(2\log(d))}. \nonumber\end{aligned}\ ] ] the gamma - function satisfies the following bound : ( see , e.g. , ) . 
plugging this into ( [ estem ] ) , we compute observe that implies and we obtain we conclude the proof by plugging into ( [ estem-1 ] ) .it follows from the definition of the estimator that which , using ( [ model ] ) , implies hence , where .then , by the duality between the nuclear and the operator norms , we obtain let be the projector on the linear vector subspace and let be the orthogonal complement of .let and denote , respectively , the _ left _ and _ right _ orthonormal _ singular vectors _ of . is the linear span of , is the linear span of .we set by definition of , for any matrix , the singular vectors of are orthogonal to the space spanned by the singular vectors of .this implies that .then note that from ( [ ineq ] ) , we get this , the triangle inequality and lead to \\[-8pt ] & \leq & \frac{5}{3}\lambda\bigl{\vert}\mathbf p_{a_0 } ( a_0-\hat a ) \bigr{\vert}_1 . \nonumber\end{aligned}\ ] ] since and we have that . from ( [ 2 ] ) ,we compute for a , we consider the following constrain set note that the condition is satisfied if .the following lemma shows that for matrices the observation operator satisfies some approximative restricted isometry . its proof is given in appendix [ proof - thm1 ] .[ thm1 ] let be i.i.d . with distribution on which satisfies assumptions [ l ] and [ asspi ] .then , for all with probability at least . we need the following auxiliary lemma which is proven in appendix [ pl2 ] .[ l2 ] if lemma [ l2 ] implies that set . by definition of , we have that .we now consider two cases , depending on whether the matrix belongs to the set or not ._ case _ 1 : suppose first that , then ( [ ass1 ] ) implies that and we get the statement of theorem [ thm2 ] in this case . _ case _ 2 : it remains to consider the case .then ( [ revisioncondition ] ) implies that and we can apply lemma [ thm1 ] . from lemma [ thm1 ] and ( [ 3 ] ) , we obtain that with probability at least one has now ( [ ass1 ] ) and imply that , there exist numerical constants such that which , together with ( [ o1 ] ) , leads to the statement of the theorem [ thm2 ] .the main lines of this proof are close to those of the proof of theorem 1 in . set . we will show that the probability of the following `` bad '' event is small note that contains the complement of the event that we are interested in . in order to estimate the probability of we use a standard peeling argument .let and . for set if the event holds for some matrix , then belongs to some and for each consider the following set of matrices and the following event note that implies that .then ( [ bl ] ) implies that holds and we get .thus , it is enough to estimate the probability of the simpler event and then apply the union bound . such an estimation is given by the following lemma . its proof is given in appendix [ pl1 ] .let [ l1 ] let be i.i.d . with distribution on which satisfies assumptions [ l ] and [ asspi ] .then , where .lemma [ l1 ] implies that .using the union bound , we obtain where we used .we finally compute for this completes the proof of lemma [ thm1 ] . as we mentioned in the beginning ,the main lines of this proof are close to those of the proof of theorem 1 in .let us briefly discuss the main differences between these two proofs .similarly to theorem 1 in we prove a kind of `` restricted strong convexity '' on a constrain set . however , our constrain set defined by ( [ constrain ] ) is quite different from the one introduced in : the present proof is also less involved ( e.g. 
, we do not need use the covering argument used in ) . one important ingredient of our proof is a more efficient control of given by lemma [ edelta ] ( compare with lemma 6 in ) .our approach is standard : first we show that concentrates around its expectation and then we upper bound the expectation . by definition , .massart s concentration inequality ( see , e.g. , , theorem 14.2 ) implies that where .next , we bound the expectation . using a standard symmetrization argument ( see , e.g. , , theorem 2.1 ), we obtain where is an i.i.d .rademacher sequence .the assumption implies .then , the contraction inequality ( see , e.g. , ) yields where . for , we have that where we have used ( [ ass1 ] ) .then , by the duality between nuclear and operator norms , we compute finally , using and the concentration bound ( [ concentration ] ) , we obtain that with as stated .let us set and .we have that where .this implies we need the following auxiliary lemma which is proven in appendix [ plu1 ] ( and are defined in ( [ projector ] ) ) .[ lu1 ] if , then where . note that from ( [ ineq ] ) we get the definition of and ( [ un2 ] ) imply that \\[-8pt ] & \leq & 2q(a_0)+\lambda \bigl(\bigl{\vert}\mathbf p_{a_0 } ( \delta)\bigr{\vert}_1-\bigl{\vert}\mathbf p_{a_0}^{\bot } ( \delta)\bigr{\vert}_1 \bigr ) \nonumber\end{aligned}\ ] ] and lemma [ lu1 ] implies that . from ( [ un3 ] ) and ( [ un4 ] ) , we compute \\[-8pt ] & & \quad=\lambda q(a_0 ) \bigl{\vert}\mathbf p_{a_0 } ( \delta)\bigr{\vert}_1 - 2\lambda q(a_0 ) \bigl{\vert}\mathbfp_{a_0}^{\bot}(\delta)\bigr{\vert}_1 \nonumber\\ & & \qquad{}+2\lambda^{2 } \bigl{\vert}\mathbf p_{a_0 } ( \delta)\bigr{\vert}^{2}_1+\lambda^{2 } \bigl { \vert}\mathbf p_{a_0}^{\bot}(\delta)\bigr{\vert}^{2}_1 - 3 \lambda^{2 } \bigl{\vert}\mathbf p_{a_0}(\delta)\bigr { \vert}_{1}\bigl{\vert}\mathbf p_{a_0}^{\bot}(\delta ) \bigr{\vert}_1 . \nonumber\end{aligned}\ ] ] lemma [ lu1 ] implies that and we obtain from ( [ un5 ] ) \\[-8pt ] & & \quad\leq 4\lambda q(a_0 ) \bigl{\vert}\mathbf p_{a_0}(\delta)\bigr { \vert}_1 -2\lambda q(a_0 ) \bigl{\vert}\mathbf p_{a_0}^{\bot}(\delta)\bigr{\vert}_1 + 2 \lambda^{2 } \bigl{\vert}\mathbf p_{a_0}(\delta)\bigr { \vert}^{2}_1 .\nonumber\end{aligned}\ ] ] plugging ( [ un6 ] ) into ( [ un1 ] ) , we get then , by the duality between the nuclear and the operator norms , we obtain using we compute which leads to the condition implies that set . by the definition of we have that .we now consider two cases , depending on whether the matrix belongs or not to the set ._ case _ 1 : suppose first that , then ( [ ass1 ] ) implies that and we get the statement of the theorem [ thmu1 ] in this case ._ case _ 2 : it remains to consider the case .lemma [ lu1 ] implies that and we can apply lemma [ thm1 ] . 
from lemma [ thm1 ] ,( [ ass1 ] ) and ( [ un7 ] ) we obtain that , with probability at least one has a simple calculation yields and \\[-8pt ] & & { } + \sqrt{792 a^{2}\mu m_1m_2\rank(a_0 ) \bigl ( \be \bigl ( { \vert}\sigma_r{\vert}\bigr ) \bigr)^{2}}.\nonumber\end{aligned}\ ] ] this and imply that , there exist numerical constant such that which , together with ( [ un8 ] ) , leads to the statement of the theorem [ thmu1 ] .by the convexity of and using we have using the definition of , we compute this and ( [ ineq ] ) implies that as stated .by the convexity of , we have using the definition of , we compute then ( [ ineq ] ) and the triangle inequality imply and the statement of lemma [ lu1 ] follows .i would like to thank miao weimin for his interesting comment .
|
in the present paper , we consider the problem of matrix completion with noise . unlike previous works , we consider a quite general sampling distribution and we do not need to know or to estimate the variance of the noise . two new nuclear - norm penalized estimators are proposed , one of them of `` square - root '' type . we analyse their performance under high - dimensional scaling and provide non - asymptotic bounds on the frobenius norm error . up to a logarithmic factor , these performance guarantees are minimax optimal in a number of circumstances .
|
tor and its users currently face serious security risks from adversaries positioned to observe traffic into and out of the tor network . large - scale deanonymization has recently been shown feasible for a patient adversary that controls some network infrastructure or tor relays .such adversaries are a real and growing threat , as demonstrated by the ongoing censorship arms race and recent observations of malicious tor relays . in light of these and other threats, we propose an approach to representing and using trust in order to improve anonymous communication in tor .trust information can be used to inform path selection by tor users and the location of services that will be accessed through tor , in both cases strengthening the protection provided by tor . a better understanding of trust - related issueswill also inform the future evolution of tor , both the protocol itself and its network infrastructure .attacks on tor users and services include first last correlation , in which an adversary correlates traffic patterns between the client and a guard with traffic patterns between a tor exit and a network destination in order to link the client to her destination .they also include more recently identified attacks on a single end of a path such as fingerprinting users or services .with trust information , users could choose trusted paths through the tor network and services could choose server locations with trusted paths into the network in order to reduce the chance of these attacks .other work has considered the use of trust to improve security in tor .the work presented here is novel in that ( _ i _ ) it considers trust in network elements generally and not just tor relays and ( _ ii _ ) it considers more general adversary distributions than in previous work . the system we describe here is designed to produce a distribution on the sets of network locations that might be compromised by a single adversary . in the case of multiple , non - colluding adversaries , multiple distributions could be produced .these distribution can then be used , _e.g. _ , as part of a user s path - selection algorithm in tor . in constructing our preliminary experiments, we suggest how our distributions may be used in this way . here, we capture these distributions using bayesian belief networks ( bbns ; see , _ e.g. _ , ) .the contribution of this work is the proposal of a modular system that ( _ i _ ) allows users to express beliefs about the structure and trustworthiness of the network , ( _ ii _ )uses information about the network , modified according to the user - provided structural information , to produce a `` world '' that captures how compromise is propagated through the network , and ( _ iii _ ) combines this world with the user s trust beliefs to produce a bbn representing a distribution on the sets of network elements that an adversary might compromise . as part of our contribution ,we present results of proof - of - concept experiments .these show that users can employ our system to reduce their risk of first last correlation ; this risk is reduced even further when our system also informs the locations that services choose for their servers .the body of this paper provides a high - level view of our system , starting with an overview of its operation .in addition to describing what the system provides and how it is combined with user beliefs to produce a bbn , we discuss some issues related to users trust beliefs .we then present our experimental results and sketch ongoing and future work . 
as noted throughout , additional details and examplesare provided in the appendices .we survey our system , which is largely modular .this allows it to be extended as new types of trust information are identified as important , _ etc_. the system comes with an ontology that describes types of network elements ( _ e.g. _ , autonomous system ( as ) and relay - operator types ) , the relationships between them that capture correlated compromise by an adversary , and attributes of these things . using the ontology and various published data about the network, the system creates a preliminary `` world '' populated by real - world instances of the ontology types ( _ e.g. _ , specific ases and relay operators ) .the world also includes relationship instances that reflect which particular type instances are related in ways suggested by the ontology .user - provided information may include revisions to this system - generated world , including the addition of types not included in the provided ontology and instances of both ontology - provided and user - added types .the user may also enrich the information about the effects of compromise ( adding , _e.g. _ , budget constraints or some correlations ) .the user also provides beliefs about her trust in particular network elements and how her trust in network elements is affected by different attributes of those elements .this user - provided information is used , together with the edited world , to create a bayesian belief network ( bbn ) that encodes the probability distribution on the adversary s location arising from the user s trust beliefs .the bbn can , for example , provide samples from the distribution of the tor relays and tor `` virtual links '' ( transport - layer connections with tor relays ) that are observed by the adversary .appendix [ ap : overview ] provides an expanded survey of the system s architecture .figure [ fig : ontology ] shows the elements of our ontology . rounded rectangles are types , and ovals are output types .cylinders are attributes ; with the exception of relay software and physical location , which the user may modify , these are provided by the user. the user may also provide new attributes .directed edges show expected relationships between types .for example , the edge from the `` as '' type to the `` router / switch '' type indicates that we expect that the compromise of an as will likely contribute to the compromise of one or more routers and switches .other ontologies may modularly replace the one described here if they satisfy the assumptions described in app .[ ap : ontology ] .that appendix also provides details about the elements of the ontology we use here .the system constructs a preliminary world including instances of tor relays , relay families , ases , internet exchange points ( ixps ) , as and ixp organizations , virtual links between every as and tor relay , and countries ( as legal jurisdictions ) .the system assigns to relevant instances the relay - software ( from tor descriptors ) and physical - location ( from , _e.g. 
_ , the maxmind geoip database ) attributes .the system also creates relationships from families to their relays , countries to the relays and ixps they contain , as and ixp organizations to their members , and ases and ixps to the virtual links on which they appear ( determined by an as - level routing map ) .a fuller description of this part of the process is given in app .[ app : sysworld ] .the user may provide various data to inform the operation of the system .however , many users may not wish to do this , and the system includes a default belief set designed to provide good security for average users . in sect .[ sec : exp ] we describe a possible default belief set motivated by the discussion of trust in sect .[ sec : trust ] . for simplicity, we refer to beliefs as being provided by the user , but wherever they are not , the defaults are used instead .the user provides structural information that is used to revise the system - generated world .this may include new types ( _ e.g. _ , law - enforcement treaties ) and the addition or removal of type instances and relationships between them ( _ e.g. _ , adding relay operators known to the user ) .the user may also define new attributes , change the system - provided attributes , or provide values for empty attributes ( _ e.g. _ , labeling countries by their larger geographic region ) .the user s beliefs may incorporate boolean predicates that are evaluated on instances in the revised world .for example , the user may have increased trust in ases above a certain size .we sketch a suitable language for this in app .[ app : predlang ] , but this can be replaced with another if desired . finally , the user provides beliefs of four types that are used in constructing the bbn from the revised world . the first two concern the propagation of compromise .budget beliefs allow the user to say that an instance in the edited world has the resources ( monetary or otherwise ) to compromise of its children that satisfy some predicate . enforcing this as a hard bound appears to be computationally harder than we are willing to use in the bbn ,so we do this in expectation .compromise - effectiveness ( ce ) beliefs allow the user to express some correlations between the compromises of nodes by saying that , if an instance is compromised , then , with probability , all of s children satisfying a predicate are compromised .for example , this captures the possibility that a compromised as compromises all of its routers except those of a particular model , for which the as has made an error in their ( common ) configuration file .the other two belief types concern the likelihood of compromise .relative beliefs allow the user to say that instances satisfying a given predicate ( _ e.g. _ , relays running a buggy os , network links that traverse a submarine cable , or ases that are small as determined by their number of routers ) have a certain probability of compromise .( in particular , it specifies the probability that they remain uncompromised if they are otherwise uncompromised . )absolute beliefs allow the user to say that instances satisfying a given predicate ( _ e.g. _ , the node is an as and the as number is 7007 ) are compromised with a certain probability , regardless of other factors .the bbn construction from the edited world is described in detail in app .[ app : bbntrans ] . 
in brief, the nodes from the edited world are copied to the bbn. compromise-effectiveness beliefs add nodes to guarantee the correlations (these new nodes are compromised with some probability; if one is compromised, then all of its children are compromised with probability 1). other than accounting for these new nodes, the directed edges in the bbn are those from the edited world. budget beliefs may further change the probability that compromise is propagated along a directed edge. the values associated with the relative beliefs that apply to a node are associated with that node. unless there is an absolute belief that applies to the node (and would determine the node's compromise probability), the node's probability of compromise is $1 - \prod_{w \in W}(1-w)\prod_{r \in R}(1-r)$, where $W$ is the (multi)set of compromise-propagation values associated with the edges from the node's compromised parents and $R$ is the (multi)set of values from the relative beliefs that apply to the node. we now discuss where trust judgments come from by sketching, as a simple example, the trust rationale behind a tor trust policy that might be distributed with client software as a default. such a policy would be designed not to offer the best protection to particular classes of users but to adequately protect most tor users regardless of where they are connecting to the network or what their destinations and behaviors are. the most useful information about tor relays for setting a default level of trust is probably relay longevity. running a relay in order to observe traffic at some future time or for persistent observation of all traffic requires a significant investment of money and possibly official authorization. this is all the more true if the relay contributes significant persistent capacity to the network. further, operators of such relays are typically more experienced in many senses and thus somewhat less open to external compromise via hacking. the amount of relay trust is thus usefully tied to the length of presence in the network consensus, uptime, and bandwidth. this approach does not resist a large-budget, nation-state-scale adversary with authority to monitor relays persistently, but it will help limit attacks to adversaries with such persistent capabilities. there is no general reason to trust one as, ixp, _etc_., more than another, but one should not presume that they are all completely safe. it is thus reasonable to assume the same moderate risk of compromise for all elements forming the links to the tor network and between the relays of the network when creating a default trust policy. an example of an important non-default case is connecting users to sensitive destinations that they especially do not want linked to their location or possibly to their other tor behaviors. for example, some users need to connect to sensitive employer hosts, and dissident bloggers could be physically at risk if seen posting to controversial sites. these users may have rich trust beliefs (either of their own or supplied by their organizations) about particular relays, ases, _etc_., based on who runs the relay, hardware, location, _etc_.
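to make the longevity-based default concrete, here is a toy sketch of one way a default policy might map a relay family's time in the consensus to a compromise probability. the linear interpolation and the 24-month horizon are our own assumptions chosen only for illustration; only the 0.001 to 0.1 range is taken from the experiments described below.

```python
# illustrative only: one plausible mapping from relay longevity to a default
# compromise probability; the constants and shape are assumptions, not the
# mapping actually used by the system.
def default_relay_compromise_prob(months_in_consensus: float,
                                  p_new: float = 0.1,
                                  p_old: float = 0.001,
                                  horizon_months: float = 24.0) -> float:
    """Interpolate between p_new (brand-new relay) and p_old (long-lived relay)."""
    frac = min(max(months_in_consensus / horizon_months, 0.0), 1.0)
    return p_new + frac * (p_old - p_new)

# a relay present for a year gets a probability halfway between the extremes
print(default_relay_compromise_prob(12.0))  # 0.0505
```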
note that a client using a default trust policy may be subject to errors because few clients will be exactly at the client average, and all clients may be subject to errors in the judgments underlying a trust policy. as a proof of concept, we constructed a trust belief that models a pervasive adversary and ran some experiments to examine how trust might improve security in tor. in particular, we considered how trust might be used to prevent the first-last correlation attack when accessing a given online chat service. these experiments just show the potential for improvement from using trust; they do not take into account other attacks or how to maintain good performance. we suppose that users are trying to avoid a powerful adversary called ``the man.'' this adversary might compromise relay _families_ and as or ixp _organizations_, where a family or organization is a group controlled by the same entity. each family is compromised by the adversary independently with probability between 0.001 and 0.1, where the probability increases as the family's longevity in tor decreases. each as and ixp organization is compromised independently with probability 0.1. against the man, we examine both how users can choose more-secure paths through tor and how the service can choose server locations to make them more securely accessible via tor. the algorithm we consider for trust-aware path selection begins by choosing as its _guards_ (_i.e._, relays used by a client to start all connections into tor) the three guard relays with the smallest probabilities that the adversary observes the path from the client to the guard or the guard itself. then for a given destination, the algorithm chooses one of these guards and an _exit_ (_i.e._, a relay that will initiate connections outside the tor network) to minimize the probability of a first-last correlation attack. the algorithm for choosing server location considers only those ases containing an exit, which minimizes the chance for the adversary to observe traffic between the exit and destination. the algorithm greedily locates each server for the greatest reduction in the probability that users in the most common locations (and using the given trust-aware path-selection algorithm) are open to a first-last correlation attack. probabilities are estimated by repeated sampling.
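the guard and circuit selection just described can be sketched as follows. this is our own simplified rendering, not the authors' implementation; it assumes helpers (`estimate_guard_risk`, `estimate_compromise`) that estimate observation and first-last compromise probabilities, _e.g._ by sampling the bbn.

```python
# sketch of the trust-aware selection described above (not the authors' code).
from typing import Callable, List, Tuple

def pick_guards(client, guards: List, estimate_guard_risk: Callable, k: int = 3) -> List:
    """Choose the k guards with the smallest probability that the adversary
    observes the client-guard path or the guard itself."""
    return sorted(guards, key=lambda g: estimate_guard_risk(client, g))[:k]

def pick_circuit(client, my_guards: List, exits: List, dest,
                 estimate_compromise: Callable) -> Tuple:
    """Among the client's guards and all exits, pick the pair minimizing the
    estimated first-last correlation probability for this destination."""
    best = None
    best_p = float("inf")
    for g in my_guards:
        for e in exits:
            p = estimate_compromise(client, g, e, dest)
            if p < best_p:
                best_p, best = p, (g, e)
    return best
```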
for our experiments, we used the web chat server `webirc.oftc.net` as the destination service. this irc service is run by the open and free technology community and is popular with tor developers. we considered users coming from 58 of the 60 most common client ases as measured by juen, whose observations included the client location for over 95% of tor client packets. our results are shown in table [table:experiments]. the first row shows a first-last compromise probability of over 0.1 for a client using tor's default path-selection algorithms to connect to the current chat server location. we can see that by using trust to choose guard and exit relays, clients can reduce the compromise probability by a factor of over 2.8 on average. when in addition the service changes the location of its server, that probability drops again by a factor of over 2.7 and approaches the minimum possible value of 0.01. it appears that adding additional server locations does not add significantly to user security. note that each probability is estimated with 100,000 samples, which can explain why some probabilities are slightly below 0.01 and why the probabilities sometimes increase slightly when a server is added. see app. [ap:exp] for further experiment details. to construct the man adversary, we must create a routing map of the internet that includes ases, ixps, and tor relays. we must also group ases and ixps into organizations, identify relay families, and evaluate the longevity of tor relays. we do so using the techniques and data sources described in appendix [app:sysworld]. to build the routing map, we used caida topology and link data from 12/14 and routeviews data from 12/1/14. the resulting map included 46,368 ases, 279,841 links between ases, and 240,442 relationship labels. to group ases by the organization that controls them, we used the results of cai _et al_. these included data about 33,824 of the ases in our map, and they resulted in 3,064 organizations that included more than one as, with a maximum size of 81 and a median size of 2. we used the results of augustin _et al_. to identify ixps and their locations between pairs of ases. these results show 359 ixps and 43,337 as-pairs between which at least one ixp exists. we then used the results of johnson _et al_.
to group ixps into organizations. this produced 19 ixp organizations with more than one ixp, for which the maximum size is 26 and the median size is 2. we add relays to the routing map using tor consensuses and descriptors from tor metrics. we used the tor consensus of 12/1/14 at 00:00. the network at this time included 1,235 relays that were guards only, 670 relays that were exits only, and 493 relays that were both guards and exits. the consensus grouped relays into 152 families of size greater than one, of which the maximum size was 25 and the median size was 2. family uptime was computed as the number of assignments of the flag to family members, averaged over the family members and the consensuses of 12/2014. we mapped the tor guards and exits to ases using routeviews prefix tables from 12/1/14, 12/2/14, and 11/30/14, applied in that order, which was sufficient to obtain an as number for all guards and exits. note that we observed one exit relay that mapped to an as that did not appear in our map, and so we added that additional as. there were 699 unique ases among the guards and exits. we created paths from each as in our map to each guard and exit as. the median number of paths that we could infer to a guard or exit as was 46,052 (out of the 46,369 possible). the maximum as path length was 12, and the median as path length was 4. the maximum number of ixps on a path was 18, and the median number was 0. the resulting bbn for the man thus included 2398 relay variables (one for each guard and exit) and 32,411,931 virtual links (one from each as to each guard or exit as). for any path missing from our routing map, we simply took the path to include only the source as and destination as. the probability of compromise for a family was set to a value between 0.001 and 0.1 that decreases as the family's uptime increases. for all of our experiments, we considered security from 58 of the 60 most common client ases as measured by juen (as8404 and as20542 did not appear in our map). juen reports that these 58 ases covered 0.951 of client packets observed. in addition, for all of our experiments, the compromise probability (_i.e._, the probability of a first-last correlation attack by the man) was estimated by sampling from the man bbn (and from tor's relay selection distribution in the default case) 100,000 times and using the fraction of compromised samples as the probability. * *tor default path selection*: for each of our 58 client locations, we choose an exit and guard using tor's path-selection algorithm as implemented in torps. note that (among other considerations) this does ensure that the guard and exit do not share the same family or /16 subnet. then we sample the man bbn to determine if the resulting circuit to the server is vulnerable to a first-last correlation attack. * *clients use trust*: guards are chosen for each client location to be the three relays with the smallest probabilities that the adversary compromises the guard or an as or ixp on the path to the guard. to compute the compromise probability of a connection from a given client location to a given destination, we consider using each of the client location's three guards with each tor exit relay, estimate the compromise probability, and choose the lowest resulting probability.
* *service uses trust*: we consider each as containing an exit relay as a possible location for the server. for each server location, we compute the probability of compromise for each client location. this is estimated for a given client location by considering each of its guards, considering each exit sharing the server location, estimating the compromise probability, and using the minimum of these probabilities. we choose the server location with the minimum average compromise over all client locations. for each additional server, we repeat the same process except that we only update the compromise probability for a client location if it decreases when using the new potential server location. ongoing and future work includes the further development and investigation of tor path-selection algorithms that use trust as formalized here, the further development and analysis of methods to express trust that are natural and usable, and the continued analysis of possible trust errors and their effects. two particularly important tasks are the development of collections of trust beliefs that capture important use cases and the study of how users can use different trust beliefs without being identified by that behavior.

biryukov, a., pustogarov, i., weinmann, r.p.: trawling for tor hidden services: detection, measurement, deanonymization. in: proceedings of the 2013 ieee symposium on security and privacy. sp 13 (2013)

cai, x., zhang, x.c., joshi, b., johnson, r.: touching from a distance: website fingerprinting attacks and defenses. in: proceedings of the 2012 acm conference on computer and communications security, pp. 605-616. ccs 12, acm, new york, ny, usa (2012), http://doi.acm.org/10.1145/2382196.2382260

cai, x., heidemann, j., krishnamurthy, b., willinger, w.: an organization-level view of the internet and its implications (extended). isi-tr-2009-679, usc/information sciences institute (2012)

elahi, t., goldberg, i.: cordon - a taxonomy of internet censorship resistance strategies. tech. rep. cacr 2012-33, university of waterloo cacr (2012), http://cacr.uwaterloo.ca/techreports/2012/cacr2012-33.pdf

johnson, a., syverson, p.: more anonymous onion routing through trust. in: proceedings of the 2009 22nd ieee computer security foundations symposium. csf 09, ieee computer society, washington, dc, usa (2009), http://dx.doi.org/10.1109/csf.2009.27

johnson, a., syverson, p., dingledine, r., mathewson, n.: trust-based anonymous communication: adversary models and routing algorithms. in: proceedings of the 18th acm conference on computer and communications security, pp. 175-186. ccs 11, acm, new york, ny, usa (2011), http://doi.acm.org/10.1145/2046707.2046729

johnson, a., wacek, c., jansen, r., sherr, m., syverson, p.: users get routed: traffic correlation on tor by realistic adversaries. in: proceedings of the 2013 acm sigsac conference on computer & communications security. ccs 13, acm, new york, ny, usa (2013), http://doi.acm.org/10.1145/2508859.2516651

syverson, p., tsudik, g., reed, m., landwehr, c.: towards an analysis of onion routing security. in: international workshop on designing privacy enhancing technologies: design issues in anonymity and unobservability. springer-verlag new york, inc., new york, ny, usa (2001), http://dl.acm.org/citation.cfm?id=371931.371981

the system comes with an ontology that describes types of network elements (_e.g.
_ , as , link , and relay - operator types ) , the relationships between them that capture the effects of compromise by an adversary , and attributes of these things .while we provide an ontology , this may be replaced by another ontology as other types of threats are identified .appendix [ ap : ontology ] describes the requirements for replacement ontologies . roughly speaking, the ontology identifies the types of entities for which the system can automatically handle user beliefs when constructing the bayesian belief network ( bbn ) for the user .a user may express beliefs about other types of entities , but she would need to provide additional information about how those entities relate to entities whose types are in the ontology . the ontology is provided to the user in order to facilitate this . in general, we expect that the system will provide information about network relationships , such as which ases and ixps are on a certain virtual link or which tor relays are in a given relay family .we generally expect the user to provide information about human network relationships such as which individual runs a particular relay .note that this means the user might need to provide this type of information in order to make some of her beliefs usable ; if she has a belief about the trustworthiness of a relay operator , she would need to tell the system which relays that operator runs in order for the trustworthiness belief to be incorporated into the bbn . using the ontology and various published information about the network ,the system creates a preliminary `` world '' populated by real - world instances of the ontology types ( _ e.g. _ , specific ases and network links ) .the world also includes relationship instances that reflect which particular type instances are related in ways suggested by the ontology .user - provided information may include revisions to this system - generated world , including the addition of types not included in the provided ontology and instances of both ontology - provided and user - added types .the user may also enrich the information about the effects of compromise ( adding , _ e.g. _ , budget constraints or some correlations ) .the user expresses beliefs about the potential for compromise of various network entities ; these beliefs may refer to specific network entities or to entities that satisfy some condition , even if the user may not be able to effectively determine which entities satisfy the condition . 
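the preliminary world just described is essentially a typed graph of instances and relationships. the following is a minimal sketch of such a structure; the class and method names are our own simplification, not the system's actual data model.

```python
# a toy "world": typed instances with attributes, plus directed relationship
# edges along which compromise may propagate (as suggested by the ontology).
from collections import defaultdict

class World:
    def __init__(self):
        self.node_type = {}               # instance id -> ontology type
        self.attrs = defaultdict(dict)    # instance id -> attribute dict
        self.children = defaultdict(set)  # parent id -> set of child ids

    def add_instance(self, node_id: str, node_type: str, **attrs):
        self.node_type[node_id] = node_type
        self.attrs[node_id].update(attrs)

    def add_relationship(self, parent_id: str, child_id: str):
        # compromise of the parent can contribute to compromise of the child
        self.children[parent_id].add(child_id)

world = World()
world.add_instance("AS7007", "AS")
world.add_instance("relay_x", "relay", software="linux")
world.add_relationship("AS7007", "relay_x")
```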
this user-provided information is used, together with the edited world, to create a bayesian belief network (bbn) that captures the implications of the user's trust beliefs. a user may express a belief that refers to an entity or class of entities whose type is in the given ontology. for such beliefs, the system will be able to automatically incorporate those beliefs into the bbn that the system constructs. a user may also express beliefs about entities whose types are not included in the ontology. if she does so, she would need to provide the system with information about how those entities should be put into the bbn that the system constructs. the system and the user need to agree on the language(s) in which she will express her beliefs. different users (or, more likely, different organizations that want to provide collections of beliefs) may find different languages most natural for expressing beliefs. the language specification(s) must describe not only the syntax for the user but also (_i_) how her structural beliefs will be used in modifying the system-generated world and (_ii_) how her other beliefs will be used to translate the edited world into a bbn. the user's beliefs may include boolean predicates evaluated on elements in the world. we sketch a default language for these predicates, but this could be replaced by any other language on which the user and system agree. an overview of the system's actions is below. the procedure to produce the bbn is treated as a black box. in reality, it involves many steps, but these depend on the belief language used. the procedure for the belief language described in app. [app:slang] is presented in app. [app:bbntrans]. 1. world generation from ontology: * as described in app. [app:sysworld], the system generates a preliminary view of the world based on the ontology and its data sources. we refer to the result as the preliminary world. * this should include system attributes. 2. augmenting the types with the user's types: * the user may provide additional types (as a prelude to adding instances of those types to the world), yielding an augmented set of types. 3. adding user-specified instances of types (ontology and user-provided): * the user may add instances of any of the augmented set of types, and she may remove any instances that she wishes to omit. 4. adding user-specified relationships (between instances in the world): * the user may specify additional parent/child relationships beyond those already included. in particular, any new instances that she added in the previous step will not be related to any other instances in the world unless she explicitly adds such relationships in this step; she may also remove any relationships that she wishes to omit. 5. edit system-provided attributes (not budgets or compromise effectiveness). 6. add new user-provided attributes. 7. add budgets. 8. add compromise effectiveness (this will default to something, perhaps specified in the ontology, if values are not given; for relationships of types not given in the ontology, we will use a default value unless the user specifies something when providing the relationship instance). 9.
produce bbn: * the translation process is described in app. [app:bbntrans]. before presenting the ontology that we use in this work, we describe our general requirements for ontologies in this framework. this allows our ontology to be replaced with an updated version satisfying these requirements. for example, mutual legal assistance treaties (mlats) are a topic of current interest. there is presently no suitable source of information about mlats for our system to use. if a database of these is developed that can be reliably used to determine automatically the effects of mlats on the power of state-level adversaries, it would be natural to update the ontology to reflect the system's ability to do this. an ontology in this framework has the following elements. * it has a collection of _types_; the ontology describes relationships between these types. * a collection of (directed) _edges_ between types. the edges are used to specify relationships; if there is an edge from one type to another in the ontology, then the compromise of a network element of the first type has the potential to affect the compromise of a network element of the second type. * viewed as a directed graph, the ontology is a dag. * a distinguished set of types called the _output types_. this is for convenience; these are the types of instances that we expect will be sampled for further use. we generally expect the output types to be exactly the types in the ontology that have no outgoing edges. * each type and edge has a _label_ that is either ``system'' or ``user.'' for an edge between two types, if either type has the label ``user,'' then the edge must also have the label ``user.'' these labels will be used to indicate the default source of instances of each type. (however, the user may always override system-provided information.) + types or edges with the label ``user'' might be natural to include in an ontology when the type/edge is something about which the system cannot reliably obtain information but the ontology designer is able to account for instances of the edge/type in the bbn-construction procedure. * a collection of _attributes_. each attribute includes a name, a data type, and a source (either ``system'' or ``user''). each type may be assigned multiple boolean combinations of attributes; each combination is labeled with either ``required'' or ``optional.'' figure [fig:ontology] depicts the ontology used in our system. the two ovals at the bottom are the output types: tor relays and tor (virtual) links, which include the links between clients and guards and between exits and destinations. the rounded rectangles correspond to types in the ontology; instances of these will be factor variables in the bbn. attributes are depicted as cylinders; the interpretation of these will be described below. filled-in types and solid edges indicate elements and attributes whose label is ``system.'' unfilled types/attributes and dotted edges indicate elements whose label is ``user.'' as noted above, all attributes in the ontology we present here have the label ``optional.'' the types and relationships that are provided by the system in constructing the preliminary world are described in app. [app:sysworld]. we describe the others here; instances of these are added by the user in ways specified below.
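as a toy illustration of these structural requirements (not the system's actual representation), the type graph and its acyclicity check might look like the following; the specific types and edges listed are a small subset chosen for the example.

```python
# sketch of the ontology requirements above: a set of types, directed edges
# that must form a DAG, and output types (here, types with no outgoing edges).
from collections import defaultdict

def is_dag(types, edges) -> bool:
    """Kahn's algorithm: return True iff the type graph has no cycles."""
    indeg = {t: 0 for t in types}
    adj = defaultdict(list)
    for src, dst in edges:
        adj[src].append(dst)
        indeg[dst] += 1
    queue = [t for t in types if indeg[t] == 0]
    seen = 0
    while queue:
        t = queue.pop()
        seen += 1
        for u in adj[t]:
            indeg[u] -= 1
            if indeg[u] == 0:
                queue.append(u)
    return seen == len(types)

types = {"country", "AS", "IXP", "relay family", "relay", "virtual link"}
edges = [("country", "AS"), ("AS", "virtual link"), ("IXP", "virtual link"),
         ("relay family", "relay"), ("country", "relay")]
output_types = {"relay", "virtual link"}   # no outgoing edges
assert is_dag(types, edges)
```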
hosting service (and incident edges) : : hosting services that might be used to host tor relays. if a service hosts a particular relay, there would be a relationship instance from the service to the relay. if a service is known to be under control of a particular legal jurisdiction or company, the appropriate incoming relationship instance can be added. corporation (and incident edges) : : corporate control of various network elements may be known. a corporation that is known may be added as an instance of this type. if the corporation is known to be subject to a particular legal jurisdiction, then a relationship edge from that jurisdiction to the corporation can be added. similarly, hosting services, ases, and ixps that a corporation controls may be so indicated via the appropriate relationship instances. router/switch/_etc_. : : this corresponds to a physical router or switch. we do not attempt to identify these automatically, but ones known to the user (or a source to which the user has access) may be added as instances of this type. physical connection : : particular physical connections, such as a specific cable or wireless link, may be known and of interest. (physical connection, virtual link) : : if a virtual link is known to use a specific physical connection, then that can be reflected in a relationship between the two. the attributes in our ontology are depicted by cylinders in fig. [fig:ontology]. the two in the box at the top right can be applied to all non-output type instances, so we do not explicitly show all of the types to which they can be applied. system-generated attributes : : these include relay-software type and physical location. users may edit these, _e.g._, to provide additional information. connection type : : this is an attribute of physical-connection instances. it is represented as a string that describes the type of connection (_e.g._, `submarine cable`, `buried cable`, or `wireless connection`). a user would express beliefs about connection types; if the type of a connection is covered by the user's beliefs, then the probability of compromise would be affected in a way determined by the belief in question. budget : : this attribute, which is supplied by the user at her option, may be applied to any non-output type instance. there are two variants. both are represented as an integer and another value. in the first variant, the other value is a type; in the second variant, the other value is the string `all`. multiple instances of this attribute may be applied to a single type instance as long as they have distinct second values; if one of these is the second variant, then all others will be ignored. this allows the user to express the belief that, if the type instance is compromised, then its resources allow it to compromise only the given number of its children. in the first variant of this attribute, the instance may compromise that many of its children of the specified type (and perhaps a separately specified number of its children of a different type). in the second variant of this attribute, the instance may compromise that many of its children across _all_ types.
+ as discussed below, we must approximate the effects of resource constraints so that the bbn can be efficiently sampled. region : : this is an attribute of legal jurisdiction. it is represented as a boolean predicate on geographic coordinates. compromise effectiveness : : this attribute is syntactically similar to the budget attribute. it is supplied by the user at her option for instances of any non-output type, and there are effectively two variants. it is represented as a probability together with either a type or the string `all`, as for the budget attribute. new user-provided types are expressed as tuples whose first element is a string literal, whose second element is a string (the name of the type) that must be distinct from all other values the user specifies and from all elements of the ontology's set of types, and whose remaining elements are both descriptions of data structures (these may be empty data structures). + the set of types then comprises the ontology's types together with all of the new types provided by the user. type instances : : an ordered list of tuples, each giving a type from that set, a data structure that is valid for that type, and a unique identifier among these tuples (we assume that the identifiers in the user's list of tuples are distinct from the identifiers already in the world). + the world is then augmented with these new instances. relationship instances : : a set of pairs whose elements are type instances from the augmented world (these may be given in place of the unique identifiers associated with each type instance in the edited world). we do not need to specify new relationship types, only the additional relationship instances. relative beliefs : : these are beliefs consisting of a string (other than the keyword reserved for absolute beliefs), a predicate on factor variables, and a value. + note that, in our translation procedure below, relative beliefs affect the probability of compromise of a factor in the bbn that is not otherwise compromised through the causal relationships captured in the world. absolute beliefs : : these are beliefs consisting of a predicate on factor variables and a value. a belief such as this says that the chance that a variable satisfying the predicate is compromised is given by the value. note that it is the user's responsibility to ensure that no two different absolute beliefs have predicates that are simultaneously satisfied by a node if those beliefs have different values. we do not specify what value is used if this assumption is violated. budget : : expressed as a tuple in one of two forms, involving string literals, a type instance in the edited world, an integer, and (in the first form) a type in the edited world. the interpretation is that, in expectation, compromise of the node with this attribute will lead to compromise of that number of its children (of the given type in the first variant, or of all its children in the second variant). compromise effectiveness : : expressed in the same two forms as budgets, but with a probability instead of an integer as the last element of these tuples. a translation procedure in general needs to take the edited world (reflecting the structural beliefs and attribute values provided by the user) and the user's trust beliefs as input and produce a bbn as output. the output variables of the bbn should match the nodes in the edited world that are instances of types designated as output types in the ontology or the user's structural beliefs. here, we present a translation procedure that fits with the rest of the system we describe (it matches our particular ontology, _etc_.). * for each node (type instance) in the edited world, the bbn contains a corresponding variable. we refer to the bbn variable by the same name as the node in the edited world.
* for each compromise-effectiveness belief about a node, there is a corresponding child of that node in the bbn. the table for this new node is such that, if the original node is uncompromised, then the new node is uncompromised; if the original node is compromised, then the new node is compromised with the probability given in the belief and uncompromised otherwise. (the system assigns a probability value to the qualitative value that is part of the user's belief language.) the children of this new node in the bbn are the nodes that correspond to nodes in the edited world that (1) are children of the original node and (2) satisfy the predicate from the belief. assign these edges the weight set {1}. + if there are children of the original node in the edited world that do not satisfy any of the predicates in the compromise-effectiveness beliefs about that node (including, _e.g._, when the user has no compromise-effectiveness beliefs), then make these nodes children of the original node in the bbn. assign to each of these edges the singleton weight set whose element is the appropriate default probability; a single probability may be used as a common default value. * for each budget belief about a node, let n be the number of children of the node (in the edited world) that satisfy the belief's predicate. for each of these children, in the bbn, replace the single value in the edge's weight set by that value multiplied by the ratio of the budget to n. * assign to each non-ce-belief node a ``risk set'' that is initially empty. for each belief that has not already been evaluated and whose initial entry is not the keyword reserved for absolute beliefs, if the node satisfies the belief's predicate, then add the belief's value to the node's risk set (retaining duplicates, so that the risk set is a multiset). * construct the tables for each non-ce node in the bbn. (we have already constructed the tables for the ce-belief nodes.) let $v$ be a non-ce node. for each subset $P$ of $v$'s parents, if $W_P$ is the multiset of weights on the edges from nodes in $P$ to $v$, and if $R$ is the multiset of risk weights associated with $v$, then the probability that $v$ is compromised given that its set of compromised parents is exactly $P$ is $1 - \prod_{w \in W_P}(1-w)\prod_{r \in R}(1-r)$. note that, if the node has no parents, then the first product will be empty (taking a value of 1), and the probability of compromise will be determined solely by the risk factors unless the user expresses beliefs that override these. * if the user provides an absolute belief, then nodes satisfying its predicate are disconnected from their parents. their compromise tables are then set so that they are compromised with the probability given in the belief and uncompromised otherwise. this allows a user to express absolute beliefs about factor variables in the bbn (hence the reserved keyword used for such beliefs). in particular, she may express beliefs about input variables whose compromise would otherwise be determined by their attributes. we assume that adversaries are acting independently, although this may not always be the case. one natural example of inter-adversary dependence occurs with the compromise of resource-constrained instances in the world. for example, an isp's resources may limit it to monitoring only some of its routers. if both the isp and the country (or other legal jurisdiction) controlling it are a user's adversaries, then they should compromise the same set of the isp's routers. (this is true whether we model this compromise probabilistically, with the bounded number of routers compromised in expectation, or through some other means.) this might be modeled statically by changing the structure of the bbn, but dynamic compromise and more general inter-adversary dependence may require other approaches.
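a small sketch of the per-node table rule above, assuming the noisy-or style combination just described; the function name is ours and the inputs are the edge weights from compromised parents and the node's risk-set values.

```python
# sketch: probability that a node is compromised, given the weights w on edges
# from its compromised parents and the risk values r from relative beliefs.
# empty products evaluate to 1, so a parentless node is driven by risks alone.
from functools import reduce
from typing import Iterable

def compromise_probability(edge_weights_from_compromised_parents: Iterable[float],
                           risk_values: Iterable[float]) -> float:
    survive_parents = reduce(lambda acc, w: acc * (1.0 - w),
                             edge_weights_from_compromised_parents, 1.0)
    survive_risks = reduce(lambda acc, r: acc * (1.0 - r), risk_values, 1.0)
    return 1.0 - survive_parents * survive_risks

# no compromised parents: probability is determined solely by the risk factors
print(compromise_probability([], [0.1]))     # 0.1
# one compromised parent propagating with prob 0.5, plus a 0.1 risk factor
print(compromise_probability([0.5], [0.1]))  # 0.55
```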
at this point, our system does not include in the world instances of constructs that correspond to cities or states/provinces. these are most naturally viewed as instances of legal jurisdictions, and the user may well have beliefs about the corresponding laws or enforcement regimes. one way that we envision the user may address these is by adding to the world instances of legal jurisdictions that carry a ``boundary'' attribute, effectively a predicate that can be evaluated on the system-provided geolocation data. the system could then determine which network entities are in which of these user-supplied jurisdictions. physical locations might be handled this way as well, as long as the location is ``large enough'' relative to the resolution of the geolocation process. mutual legal assistance treaties (mlats) concern the exchange of information between countries about possible violations of the laws of a participating country. if a user has a state-level adversary, then an mlat between the adversary country and another country might effectively compromise the second country. as noted above, it may be natural to add these to the ontology once suitable related sources of information become available. the bbns that we presently construct could be extended to include mlats by adding two additional layers of variables. one would contain a variable for each mlat known to the system; the children of these variables would be the country variables (in the presently constructed bbn) corresponding to countries that are obligated by the respective mlats to act as adversaries. the other added layer would contain a new variable for each country; the children of any one of these variables would be all of the mlats that obligate other countries to provide information to the parent country. the inherent compromise of countries would be reflected in the top layer; this would propagate through the mlat layer to effectively compromise other countries, and the rest of the bbn would behave as it does presently. the following examples of beliefs illustrate how a user might express her beliefs in our five-valued example language. we suggest that each of the five values (ranging from surely uncompromised to surely compromised) be mapped to a corresponding compromise probability. 1. countries in one specified set are likely trustworthy. 2. countries in another specified set are likely compromised. 3. countries in a third specified set are surely compromised. 4. ams-ix points are likely trustworthy. 5. msk-ix points are of unknown trustworthiness. 6. a particular relay family is likely compromised. 7. another relay family is surely uncompromised. 8. a particular relay operator is surely uncompromised. 9. another relay operator is likely uncompromised. 10. a particular hosting company is surely trustworthy. 11. submarine cables are of unknown level of trustworthiness. 12. wireless connections are likely compromised. 13. relays running windows are of uncertain trustworthiness (the system gets this from relay descriptors). 14. if an as is compromised, then it is expected to be able to compromise a given fraction of the links that it is on. bayesian belief networks have both strengths and weaknesses as a component of our system. their general strengths of being concise, being efficiently sampleable, and allowing computation of other properties of the distribution (_e.g._, marginal probabilities and maximum likelihood values) are beneficial in our system. bbns are especially well-suited to our approach here because of the close structural similarity between our revised worlds and the bbns we construct from these.
as a disadvantage, bbns do not represent hard resource constraints efficiently; we can only approximate those here by constraining resources in expectation. more generally, other negative correlations may be difficult, at best, to capture, but it is possible that users will hold beliefs that imply negative correlations between compromise probabilities. the purpose of this system is to produce an efficiently sampleable representation of compromise probabilities. other representations of distributions could also be used, but they might be most naturally generated from trust beliefs in different ways. a detailed discussion of such approaches is beyond the scope of this work. we propose that a collection of default beliefs be distributed with this system. as noted in sect. [sec:trust], this collection would be designed to provide adequate protection for general users. users with particular concerns might use other collections of beliefs; these could be provided by, _e.g._, government entities, privacy organizations, political groups, journalism-focused organizations, or organizations defending abuse victims.
|
motivated by the effectiveness of correlation attacks against tor, the censorship arms race, and observations of malicious relays in tor, we propose that tor users capture their trust in network elements using probability distributions over the sets of elements observed by network adversaries. we present a modular system that allows users to efficiently and conveniently create such distributions and use them to improve their security. the major components of this system are (_i_) an ontology of network-element types that represents the main threats to and vulnerabilities of anonymous communication over tor, (_ii_) a formal language that allows users to naturally express trust beliefs about network elements, and (_iii_) a conversion procedure that takes the ontology, public information about the network, and user beliefs written in the trust language and produces a bayesian belief network that represents the probability distribution in a way that is concise and easily sampleable. we also present preliminary experimental results that show the distribution produced by our system can improve security when employed by users; further improvement is seen when the system is employed by both users and services. *keywords:* tor, trust, bayesian belief network
|