let be a continuous time markov process evolving in such that is an absorbing point , so , and absorption occurs almost surely , that is for all , , where . in this paper , we assume that the process is nt explosive and we provide a sufficient condition for the existence and uniqueness of a quasi - stationary distribution for .we recall that a _ quasi - stationary distribution _ ( qsd ) for is a probability measure on such that , for all , the question of existence and uniqueness of such distributions has been extensively studied in the past decades . for irreducible finite state space processes, there exists a unique qsd , as proved by darroch and seneta in their seminal work . in our case of a process evolving in a countable state space , the question is more intricate . indeed , studying the particular case of birth and death processes , van doorn shown in that a process evolving in can have no qsd , a unique qsd or an infinite continuum of qsds .thus the existence of a qsd itself is not always true . in 1995 ,ferrari , kesten , martnez and picco proved a necessary and sufficient condition for the existence of a quasi - stationary distribution for under the assumption that it is irreducible and that the process does nt come back from infinity in finite time .more precisely , the authors proved that if is an irreducible class for the process and if for any , then there exists at least one qsd for if and only if there exist two constants and such that in this paper , we prove a complementary criterion for the existence and uniqueness of a qsd , since we mainly assume that the process comes back quickly from infinity ( see hypothesis [ hypothesis : main ] below ) .our theorem generalizes the recent results of ferrari and maric ( whose proof uses a particle system approximation method ) , as explained in section [ section : application1 ] .our approach , inspired by , is based on a strong mixing property . indeed ,we prove that , under our assumption , there exists a constant ,1] ] is the integer part of .let us also remark that the notion of qsd has always been closely related to the study of the long - time behavior of conditioned not to be absorbed .indeed , it is well known that a probability measure is a qsd if and only if it is a _ quasi - limiting distribution _ ( qld ) , which means that there exists such that for a proof of this statement , we refer the reader to and .qlds have originally been studied by yaglom , which stated that the above limit exists for any sub - critical galton - watson process and does nt depend on the initial distribution , , of the process .the question has also been addressed in for finite state space processes and in for birth and death processes . for further information on qlds and qsds, we refer the reader to the recent surveys and . in this paper, we prove that , under our main assumption ( see hypothesis [ hypothesis : main ] below ) , there exists a constant such that ,\ \forall \mu\in{\cal m}_1(\n^*),\ \forall t\geq 0,\ ] ] where is the unique qsd of the process .this clearly implies that is the unique qld for , independently of the initial distribution of the process .we present two different applications of our result . in the first one , we show that the sufficient condition for existence and uniqueness of a qsd proved in can be considerably relaxed . 
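To make the notions of conditioned distribution and quasi-limiting distribution concrete, the following numerical sketch (not taken from the paper; the chain, the rates and the truncation level are illustrative choices) computes the conditional law of a subcritical linear birth-and-death chain absorbed at 0 by exponentiating the generator restricted to the non-absorbed states. It shows both the stabilization of the conditional law (the Yaglom limit) and the decay, as t grows, of its dependence on the starting point, which is the kind of strong mixing property invoked above.

```python
import numpy as np
from scipy.linalg import expm

# Sub-generator on the non-absorbed states {1, ..., N} of a linear birth-and-death chain
# (birth rate b*n, death rate d*n with b < d, so absorption at 0 is almost sure).  The death
# transition out of state 1 leaks mass to the absorbing state and is not represented in Q.
N, b, d = 60, 1.0, 1.5                       # illustrative rates and truncation level
Q = np.diag([-(b + d) * n for n in range(1, N + 1)])
Q += np.diag([b * n for n in range(1, N)], 1) + np.diag([d * n for n in range(2, N + 1)], -1)

def conditional_law(x, t):
    """P_x( X_t = . | t < T_0 ), computed from the killed semigroup exp(tQ)."""
    row = expm(Q * t)[x - 1]
    return row / row.sum()

for t in [1.0, 4.0, 16.0]:
    mu, nu = conditional_law(1, t), conditional_law(30, t)
    tv = 0.5 * np.abs(mu - nu).sum()         # total variation distance between the two laws
    print(f"t={t:5.1f}   P_1(X_t=1 | t<T_0) = {mu[0]:.4f}   TV distance = {tv:.2e}")
```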
in the second one, we develop the case of the logistic birth and death process .it is well known that this process has a unique qsd ( see for instance and when the individual competition rate is small ) . in section [ section : application2 ] , we give a new ( and shorter ) proof of this result .we also prove the following new result : the conditional distribution of the logistic birth and death process converges exponentially fast to its qsd , uniformly in its original distribution .the paper is organized as follows . in section [ section : main - result ] , we state and prove our main theorem . in section [ section : application1 ] , we prove that this result leads to a generalization of ( * ? ? ? * theorem 1.1 ) . in section [ section :application2 ] , we give a new proof of the existence and uniqueness of a qsd for logistic birth and death processes , and we complete this result by proving that the conditioned distribution of the process converges exponentially fast to its qsd , uniformly in its original distribution .our main assumption is based on the existence of a uniform exponential moment for the hitting time of a fixed subset , which will typically be a finite subset in applications ( see sections [ section : application1 ] and [ section : application2 ] ) . for any subset ,we denote by the hitting time of and we set for any . [hypothesis : main ] [ hypothesis : main - k ] we assume that there exist a subset , a point and five positive constants and such that * * for all , * for all , * we are now able to state the main result of this paper , which concerns the existence and uniqueness of a qsd and the existence of a qld for , independent of the initial distribution of the process . [theorem : main ] if hypothesis [ hypothesis : main ] is fulfilled , then }\ ] ] and there exists a unique qsd for .in particular , for any probability measure on , we have }\ ] ] moreover , is a qld for and any initial distribution , which means that , for any probability measure on , the proof of theorem [ theorem : main ] is divided into three parts . in a first step , we show that , for all , secondly , using the techniques developed in del moral and villemonais , we prove inequality for all . in a third step , we prove that implies the existence and uniqueness of a qsd , which immediately implies the last assertion of the theorem .step 1 : let us show that holds . for all , we have on the one hand , we deduce from hypothesis [ hypothesis : main ] ( 3 and 4 ) that , for all , on the other hand , the markov property yields to } e^{-\lambda_0 s}\p_y(t - s < t_0)\\ & \leq c_4 \sup_{y\in k } \sup_{s\in[0,t ] } e^{-\lambda_0 s}\p_y(t - s < t_0),\end{aligned}\ ] ] by hypothesis [ hypothesis : main ] ( 4 ) . by hypothesis [ hypothesis : main ]( 2 and 3 ) , we have for all ] denotes the integer part of .inequality of theorem [ theorem : main ] is thus proved for any pair of initial probability measures , with .let us now prove that the inequality extends to any couple of initial probability measures .let be a probability measure on and .we have }\\ & \leq 2 ( 1-\frac{c_1c_2c_3}{2 c_4})^{[t]}.\end{aligned}\ ] ] the same procedure , replacing by any probability measure , leads us to inequality of theorem [ theorem : main ] .step 3 : let us now prove that inequality implies the existence and uniqueness of a qsd for .let us first prove the uniqueness of the qsd . 
if and are two qsds , then we have for and any .thus , we deduce from inequality that },\ \forall t\geq 0,\ ] ] which yields to .let us now prove the existence of a qsd . by ( * ?* proposition 1 ) , this is equivalent to prove the existence of a qld for ( see the introduction ) .thus it is sufficient to prove that there exists a point such that converges when goes to infinity .let be any point in .we have , for all , } \xrightarrow[s , t\rightarrow+\infty ] { } 0.\end{aligned}\ ] ] thus any sequence is a cauchy sequence for the total variation norm .but the space of probability measures on equipped with the total variation norm is complete , so that converges when goes to infinity .finally , we have proved that there exists a unique quasi - stationary distribution for .the last assertion of theorem [ theorem : main ] is proved as follows : for any probability measure on , we have } \\ & \xrightarrow[t\rightarrow + \infty ] { } 0.\end{aligned}\ ] ] this concludes the proof of theorem [ theorem : main ] .in this section , we denote by the transition rate matrix of the process and we give a sufficient criterion on for the existence and uniqueness of a qsd . in ( * ? ? ?* theorem 1.1 ) , ferrari and maric assume that , that and that then they prove that there exists a unique qsd for and that is also a qld for and any initial distribution . in this section , we show that this result can be seen as an application of theorem [ theorem : main ] and that the main assumptions can be relaxed , allowing in particular .[ theorem : countable - state - space ] assume that is stable and non explosive , that , and that there exists a finite subset such that then hypothesis [ hypothesis : main ] holds .in particular , there exists a unique qsd for and is a qld for and any initial distribution .moreover , there exist two positive constants and such that , for any initial distribution on all , we emphasize that implies that there exists a finite subset such that thus implies inequality .let us prove that hypothesis [ hypothesis : main ] holds under the assumptions of theorem [ theorem : countable - state - space ] . by assumption, we have it follows that and then fix . since is finite , we have by assumption using the strong markov property , we deduce from the two above inequalities that \right)>0.\ ] ] but the process is assumed to be stable , so that it remains in during a time with positive probability .we finally deduce that which of course implies the first part of hypothesis [ hypothesis : main ] . since is finite , for any there exists such that moreover , for , the markov property yields to but is finite , thus we have by assumption and we finally deduce that now , assume . 
then for , where is the sojourn time , so the second part of hypothesis [ hypothesis : main ] is fulfilled .since the absorption rate of the process is uniformly bounded by , we have by the markov property , we deduce that in particular , setting , the third point of hypothesis [ hypothesis : main ] is fulfilled from eqrefequation : servet - jaime - correction-1 .we set the process jumps into from any point with a rate bigger than .this implies that is uniformly bounded above by an exponential time of rate .in particular , we have it yields that the fourth part of hypothesis [ hypothesis : main ] is fulfilled with .this concludes the proof of theorem [ theorem : countable - state - space ] .in this section , we consider the logistic birth and death process studied in , which describes the stochastic evolution of a population whose individuals are competing with each other . denoting by the individual competition rate , the individual birth rate and the individual death rate, the transition rate matrix of is given by one easily checks that is an absorbing point for .the existence and uniqueness of a qsd for the logistic birth and death process has already been proved by van doorn and , by different means , by barbour and pollett when ( they also proved that its qsd can be approximated by the renewal process introduced in ) . in what follows, we give an alternative proof of the existence and uniqueness of this qsd , using theorem [ theorem : main ] .this leads us to the exponential convergence of the conditioned distribution of to its qsd , which is a completely new result . the logistic birth and death process fulfills hypothesis [ hypothesis : main ] . in particular , it has a unique qsd and there exist two positive constants and such that , for any initial distribution , for any , we define the quantity by and we set this sum is clearly finite in the logistic birth and death process case . by* chapter 8) , for any , we have in particular , .this clearly implies that the first part of hypothesis [ hypothesis : main ] is fulfilled . since , the term converges to when goes to .we deduce that there exists such that let us define . since is finite ,the same argument that the one in the proof of theorem [ theorem : countable - state - space ] yields to thus the second part of hypothesis [ hypothesis : main ] is fulfilled . (strong mixing properties for time inhomogeneous diffusion processes with killing . chapter 5 of _ distributions quasi - stationnaires et mthodes particulaires pour lapproximation de processus conditionns_. .cole polytechnique .
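As a numerical complement to this section (this is not the authors' computation; the rates and the truncation level below are illustrative), the QSD of a logistic birth-and-death chain can be approximated by truncating the state space and taking the normalized left eigenvector of the restricted rate matrix associated with its rightmost eigenvalue, which equals minus the absorption rate under the QSD; the gap to the next eigenvalue gives a crude indicator of the exponential rate at which the conditioned distribution approaches the QSD.

```python
import numpy as np
from scipy.linalg import eig

# Logistic birth-and-death rates: from state n, birth at rate lam*n and death at rate
# (mu + c*(n-1))*n.  Parameters and truncation level N are illustrative, not from the paper.
lam, mu, c, N = 2.0, 1.0, 0.05, 200
Q = np.zeros((N, N))
for i, n in enumerate(range(1, N + 1)):
    birth, death = lam * n, (mu + c * (n - 1)) * n
    if i + 1 < N:
        Q[i, i + 1] = birth
    if i > 0:
        Q[i, i - 1] = death                  # from state 1 the death rate leaks to 0 (absorption)
    Q[i, i] = -(birth + death)

vals, vecs = eig(Q.T)                        # columns of vecs = left eigenvectors of Q
k = np.argmax(vals.real)                     # rightmost eigenvalue, i.e. -lambda_0
qsd = np.abs(vecs[:, k].real)
qsd /= qsd.sum()
gap = np.sort(vals.real)[-1] - np.sort(vals.real)[-2]
print("lambda_0 =", round(-vals.real[k], 4),
      "  mean of the QSD =", round(float(qsd @ np.arange(1, N + 1)), 2),
      "  spectral gap =", round(float(gap), 4))
```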
We consider a Markov process evolving in a countable state space with an absorbing point, and we study the long-time behavior of the distribution of the process conditioned not to be absorbed by the time it is observed. Our main condition is that the process comes back quickly from infinity to a finite subset of the state space. In particular, we prove that this conditional distribution admits a limit when time goes to infinity, and that this limit does not depend on the initial distribution of the process. This limiting distribution, usually called a Yaglom limit, then turns out to be the unique quasi-stationary distribution of the process.
over the last decade , a world - wide network of ground based laser interferometers has been constructed and operated in pursuit of the first direct detection of gravitational waves ( gws ) . the u.s .laser interferometer gravitational - wave observatory ( ligo ) operates detectors in livingston , la and hanford , wa , each consisting of a suspended michelson interferometer with 4 km fabry - perot arm cavities .these detectors attained their best sensitivity yet during the most recent scientific data taking run , known as `` s6 '' , which took place between july 2009 and october 2010 , in a configuration called `` enhanced ligo '' .enhanced ligo featured several improvements with respect to the earlier initial ligo configuration ( 2001 - 2007 ) .one of the critical upgrades was the increase in the laser power circulating inside the arm cavities by about a factor four .the 40kw of laser power stored in the enhanced ligo cavities greatly complicated the relative alignment of the interferometer mirrors . for the laser interferometer to operate properly, its mirrors must be aligned to each other with a relative rms misalignment not larger than about a tenth of a microradian . meetingthis stringent requirement is particularly challenging in the presence of radiation pressure effects .radiation pressure exerts torque on the suspended mirrors , adding to the fixed restoring torque of the suspension .the possibility of this torque to de - stabilize optical cavities was first recognized in 1991 by solimeno et al . . by 2003, it was clear in the ligo community that the effect of radiation pressure on angular dynamics was relevant for ligo and the full details of the effects were described by sidles and sigg in 2006 .fan et al . measured the predicted optical - mechanical torsional stiffness at the gingin facility in australia ,driggers et .al . demonstrated its effect at the caltech 40 m prototype and hirose et al . 
showed that although the optical torque in initial ligo ( about 10kw of laser power circulating in the initial ligo arm cavities ) was measurable and similar in magnitude to the suspension restoring torque , it was not yet significant enough to require a change to the angular controls this paper we show the effect of optical torque in the enhanced ligo interferometers and also present the design concept and implementation of an alignment sensing and control scheme ( asc ) which allowed us to operate an interferometer with angular mechanics dominated by radiation pressure .two of the authors ( barsotti and evans ) created a numerical model of the asc for enhanced ligo that specifically included radiation pressure torque .they showed that , in principle , the radiation pressure torque can be controlled without detrimental consequences to the sensitivity of the detector .the proposed solution rotates the control basis to one that naturally represents the eigenmodes of mirror motions coupled by radiation pressure .we implemented this control scheme on the enhanced ligo interferometers with up to 40kw of circulating power , successfully controlling the angular degrees of freedom in the presence of the radiation pressure instability .the demonstrated solution meets the ligo requirements and is extensible to the next generation of ligo detectors currently under construction , advanced ligo .the interferometer layout and the control scheme are introduced in section [ sec : alignment ] .section [ sec : design ] presents the modified design after a review of the physics of radiation pressure induced torque on the mirrors .this section also highlights a direct measurement of the opto - mechanical modes that are controlled .section [ sec : results ] presents the results of using the new alignment control scheme at high laser powers , including the residual mirror motion and the noise performance .key differences and implications for advanced ligo are outlined in section [ sec : aligoasc ] , and section [ sec : summary ] provides a summary .all data presented are from the livingston observatory ; results from the hanford observatory are similar .each ligo detector is a power - recycled fabry - perot michelson laser interferometer featuring suspended test masses ( mirrors ) in vacuum .a stabilized laser beam ( with a wavelength of 1064 nm ) is directed to the interferometer , whose two arm lengths are set to maintain nearly destructive interference of the recombined light at the michelson ( dark ) anti - symmetric port .an appropriately polarized gw differentially changes the arm lengths , producing a signal at the anti - symmetric port proportional to the gw strain .the test masses are suspended by a single loop of steel wire to provide isolation from ground motion , as depicted in fig .[ fig : asc ] .each mirror is equipped with five magnet - coil actuators to control the mirror s longitudinal and angular position .furthermore , the carrier laser field is phase modulated by an electro - optic modulator at 24.4mhz and 61.1mhz to generate sidebands for use in a modulation - demodulation technique of sensing the interferometer s longitudinal and angular degrees of freedom .there are several reasons why the interferometer s mirrors must be actively aligned : * to maximize optical power coupling * to suppress motion from external disturbances * to counteract a static instability at high laser power the requirements for how much residual motion is tolerable stem from the mechanisms by which misalignment 
couples to strain sensitivity .the most significant coupling of angular motion to cavity length occurs when the beam spot is off - center from the mirror s axis of rotation .the combination of mirror angular motion and beam spot motion on the test masses changes the length of the arms by : and results in an increase in the sensed longitudinal motion .the relevant quantities for describing the mirror s motion are its root - mean - square ( rms ) and in - band ( audio frequency ) noise .it is worth noting that once all of the interferometer cavities are brought to resonance and the dc pointing no longer contributes to the rms , the rms is dominated by the pendular motion .there are additional mechanisms by which misalignment affects displacement sensitivity .first , a high order effect arises because misalignments affect power build - up quadratically which in turn modulates the noise floor in the shot - noise - limited regime . a second mechanism results as a side effect of having active angular alignment . due to imperfections in the actuators, there will always be a small amount of longitudinal acutation along with the desired angular actuation .external disturbances that cause misalignment include : seismic noise , pitch / yaw mode thermal noise , length - to - angle coupling , acoustic noise , and radiation pressure torque . mechanical and electrical design of suspensions and sensors , isolation in vacuum , and periodic balancing of mirror actuators are measures taken to reduce the level of angular motion in the first place .an active control system is used to mediate the motion that remains , which in turn is itself a source of misalignments due to sensing noise . as reflected in the noise budget of one of the alignment sensors in figure [ fig : wfs_nb ] , direct seismic and suspension thermal noises are in fact quite small . above 20hz where the seismic isolation platforms strongly isolate, sensor noise dominates . as a result ,the angular motions of the cavities above these frequencies are dominated by the control system itself .sensor noise is thus a primary consideration in servo design .the alignment of the interferometer is accomplished via feedback and there are several frames of reference to which the mirrors are aligned .ultimately , the mirrors must be aligned to one another , and this will be presented in detail shortly .each individual optic also has two servos of its own to provide velocity damping .first , local shadow sensors provide damping around the pitch and yaw eigenfrequencies of the mirrors ( 0.6hz and 0.5hz , respectively ) .this damping is relative to the suspension cage which is already isolated at high frequencies .second , optical levers mounted to heavy piers on the ground provide a reference to the local ground motion .they are more sensitive than the shadow sensors and serve to suppress the motion which arises from the isolation table stack resonances from 0.2hz to 2hz .the interaction of these two velocity damping servos with the main alignment servo results in some increased complexity of the main servo design . 
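Returning to the angle-to-length coupling described at the start of this section, a one-line estimate already shows why both the beam-spot offset and the residual angular motion matter: a mirror angle theta with the beam spot a distance d off the rotation axis changes the sensed arm length by roughly d*theta. The numbers below are placeholders of the right order of magnitude, not values measured in this work.

```python
# Order-of-magnitude estimate of the angle-to-length coupling; d_spot and theta_rms are
# hypothetical values, not numbers quoted in this paper.
d_spot    = 1e-3      # m,   rms beam-spot offset from the mirror's rotation axis
theta_rms = 1e-8      # rad, rms residual angular motion of that mirror
print(f"angle-to-length coupling ~ {d_spot * theta_rms:.1e} m rms per optic")   # ~1e-11 m
```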
the fundamental physical principle behind sensing relative mirror misalignment is the fact that when an optical cavity is misaligned relative to an incident field , a hermite - gaussian mode is generated with an amplitude proportional to the misalignment .alignment signals are produced by directing some of this light onto a quadrant photodiode ( qpd ) , where the interference of the fundamental mode and misalignment mode at the sideband frequency can be compared on each half of the split diode .the qpd together with the resonant rf circuit and demodulation system is called a wavefront sensor ( wfs ) .the amplitude of the alignment signal is a function of the relative gouy phase between the and modes , which is a function of the longitudinal position of the detector along the optical axis .angular misalignments of different combinations of mirrors can therefore be distinguished by placing detectors at different locations along the optical path .the basic formalism of how alignment signals are generated is presented in ref . .a detailed description of the control scheme design for the initial ligo configuration is found in ref . and key aspects relevant for the description of the enhanced ligo asc are provided here .there are 8 mirrors whose pitch ( rotation about the mirror s horizontal axis ) and yaw ( rotation about the vertical axis ) angles must be sensed and controlled . the sensing is accomplished through the use of 8 sensors , which can be organized into three types : * wavefront sensors ( wfs1 , wfs2 , wfs3 , wfs4 ) which sense the angular misalignment of the cavities with respect to their input beams * ccd image of the beam spot on the beam splitter ( bs ) * quadrant photodiodes ( qpdx , qpdy ) which see the beam transmitted through the arm cavities figure [ fig : asc ] shows the basic power - recycled michelson interferometer layout , highlighting the locations of these angular sensors and the eight mirrors they must control .two wfs , separated in gouy phase , are located at the reflected port of the interferometer where common mode signals appear .the third sees a pick - off of light from the recycling cavity which contains common and differential signals , and the fourth gets a pick - off of the light at the anti - symmetric port where differential mode signals are transmitted .the common mode represents motion where the optics of one arm rotate in the same direction as those in the other arm and differential represents rotations in opposite directions .the eight mirrors include the four test masses that make up the fabry - perot arm cavities ( itmx , itmy , etmx , etmy ) , the beam splitter ( bs ) , the recycling mirror ( rm ) , and two input beam directing mirrors that also serve as a mode matching telescope ( mmt1 and mmt3 ) .the ccd image and the qpds are used in slow feedback loops as part of drift control servos to maintain the beam spot positions at the three corners of the interferometer .their bandwidths are below a few mhz and below 0.1hz , respectively , and are significantly lower than the bandwidths of the wfs loops , which keep the mirrors aligned to one another from dc up to several hertz . .components within the dashed box are analog . 
]figure [ fig : block ] shows a simplified block diagram of the wfs servo .the interferometer converts individual mirror motions into optical modes which in turn are converted into error signals by the wfs .the angular error signals are digitized , filtered , and converted into analog control signals for individual mirrors .two matrices in series rotate the alignment signals from the wfs basis to the optic basis .control filters are implemented in the intermediate basis . in initial ligo ,the sensing basis was that of common and differential etm / itm motion and the rm , and servos were designed in this basis .the input matrix was diagonal and the output matrix was created to send equal or equal and opposite signals to the etms and itms , respectively . in this work ,we describe a change of basis to improve the stability of the interferometer in the presence of radiation pressure torque .the effectiveness of the initial ligo asc design is limited in the regime of high circulating power where radiation pressure modifies the simple pendulum plant in a way which is power - dependent .as is detailed in this section , torque due to radiation pressure couples the angular motions of the arm cavity mirrors such that the simple single resonance of a given mirror s torque - to - angle transfer function splits into two , with frequency shifts dependent on power . controlling this new plantcould be accomplished with the initial ligo system by increasing the gains of the wfs loops , but it would be at the expense of introducing too much control noise in the gw measurement band .an alternative solution is thus required to achieve both adequate angular control and minimal noise impression . in this section ,we first review the formalism of radiation pressure torque in cavities .then , we present a direct measurement of the opto - mechanical modes of the enhanced ligo arm cavities for several powers . finally , we describe the modified control scheme and present its implementation . in the limit of no circulating power in a suspended fabry - perot cavity ,each of the individual mirrors has independent equations of motion . with powercirculating in the cavity , however , radiation pressure effects couple the equations of motion of the two mirrors . as a beam impinging a mirror off - center creates a torque , an opto - mechanical angular spring is created due to the geometric relationship of beam displacements and mirror angles .this fact has two important consequences : on one hand , as the torque induced by radiation pressure is proportional to the power stored inside the cavity , the opto - mechanical angular transfer functions of the cavity mirrors change as a function of the stored power . on the other hand , for large powers, radiation pressure can even overcome the restoring torque of the mirror suspension , creating an unstable system . to understand how the cavity dynamics are affected by radiation pressure , it is useful to diagonalize the coupled equations of the mirror motion into two normal cavity modes .we refer to ref . 
for a complete derivation of the torsional stiffness matrix which couples the static misalignment of the two cavity mirrors , and here we use only the final expressions for the two eigenvalues and eigenvectors of that matrix : \\ v_h & = [ \frac{k_0}{k_0 g_2-k_h } , 1 ] \label{eq : ksh}\end{aligned}\ ] ] where ( m is the cavity length , the speed of light , and and the geometric -factors of the cavity ) .the resonant frequency of each of the opto - mechanical modes can then be written as : where is the mirror moment of inertia ( =0.0507 kg m ) , and is the restoring torque of the mirror suspension ( .72 nm / rad pitch and 0.5 nm / rad yaw ) .for the initial and enhanced ligo interferometers , the -factors of the cavities are : so is negative and is positive .known as the sidles - sigg effect , the radiation pressure torque either softens or stiffens the mechanical springs .we therefore refer to the two modes as `` soft ( s ) '' or `` hard ( h ) '' . as power increases , the frequency of the hard mode increases , but the frequency of the soft mode decreases until when there is no longer a real resonant frequency , corresponding to an unstable system ..resonant frequencies ( pitch ) in hz for the soft and hard opto - mechanical modes of a typical initial ligo circulating power ( 9 kw ) and the highest of enhanced ligo powers ( 40 kw ) .the soft mode in enhanced ligo is unstable . [cols="<,<,<,<,<",options="header " , ] [ table : sensing ]in this section we present measurements of the performance of the asc system with up to 27kw circulating power and demonstrate that the asc design meets the ligo requirements .the open loop transfer function of each of the wfs loops is the product of the radiation - pressure - modified pendulum and the control filters .figure [ fig : olgs6w ] shows the open loop transfer functions of each of the wfs loops as measured during a 10.3kw lock with the loops closed . as anticipated from the large dsoft signal seen by wfs1 in the sensing matrix measurement ( table [ table : sensing ] ) , that is the mode for which we can and do provide the strongest suppression . in order to achieve this much suppression , it is necessary to make the feedback loop conditionally stable . as shown here , the dsoft unity gain frequency ( ugf ) is at 5hz .all of the other degrees of freedom have ugfs of and are designed to be unconditionally stable . the dsoft loop could have a higher gain compared to the other loops because it caused no harm in strain sensitivity above 60hz , as is presented later in section [ sec : noisebudget ] .the ugfs of all other loops were selected as a necessary minimum .figure [ fig : fom ] shows spectra of the control signal and residual angular motion in each of the eigenbasis degrees of freedom during a 17kw lock .the typical residual rms angular motion is rad/ . above 2025hz, the wfs signals do not represent true angular motion but instead are limited by a combination of optical shot noise , photodetector electronics noise and acoustic noise .unless sufficiently filtered , the control signal derived from frequencies in this band will increase the mirror motion .the resulting need for low - pass filters limits the achievable bandwidth of the loops . 
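The power dependence of the two opto-mechanical modes can be reproduced in a few lines. The sketch below uses one common form of the Sidles-Sigg torsional spring constants, k_{hard,soft} = P*L/(c*(1-g1*g2)) * ( -(g1+g2) +/- sqrt((g1-g2)^2+4) ), together with f = (1/(2*pi))*sqrt((kappa+k)/I). The moment of inertia and the pitch restoring torque are the values quoted above (reading ".72 nm/rad" as 0.72 N m/rad), but the arm-cavity g-factors are assumed since the paper's values are not legible in this copy, and the sign convention should be checked against the cited reference, so the numbers are indicative only.

```python
import numpy as np

c, L = 299792458.0, 4000.0    # m/s and m: speed of light and arm length (4 km arms)
I, kappa = 0.0507, 0.72       # kg m^2 and N m/rad: pitch values quoted in the text
g1, g2 = 0.73, 0.45           # assumed g-factors of an initial/enhanced-LIGO-like arm cavity

def mode_frequencies(P):
    """Pitch frequencies (Hz) of the hard and soft modes at circulating power P (W);
    returns None for a mode whose total stiffness has gone negative (instability)."""
    s = np.sqrt((g1 - g2) ** 2 + 4.0)
    k_hard = P * L / (c * (1.0 - g1 * g2)) * (-(g1 + g2) + s)
    k_soft = P * L / (c * (1.0 - g1 * g2)) * (-(g1 + g2) - s)
    def f(k):
        w2 = (kappa + k) / I
        return np.sqrt(w2) / (2.0 * np.pi) if w2 > 0 else None
    return f(k_hard), f(k_soft)

for P in [0.0, 9e3, 40e3]:    # no light, ~initial LIGO, ~enhanced LIGO circulating power
    print(P, mode_frequencies(P))
# The hard-mode frequency rises with power while the soft mode softens and eventually
# becomes unstable, in line with the behavior described in the table above.
```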
because reducing the design ugf allows us to reduce the corner frequency of the ( steep ) low pass filters , the reduction of noise in the gw band is inversely proportional to the ugf raised to the third or even fourth power .the residual beam spot motion on the test masses is shown in figure [ fig : bsm ] .the rms beam spot motion on the etms is 1 mm and on the itms it is 0.8 mm .these measurements are acquired from the pitch and yaw signals of the qpds in transmission of the etms and the pitch and yaw dc signal from wfs2 for the itms .the magnitudes of the beam spot motion and the residual mirror motion are consistent .for example , for rad of soft or hard mode motion in one arm , we expect the maximum cavity tilt and displacement to be 0.1rad and 1 mm , respectively .one of the most important figures of merit for the control system is how much noise it contributes to the gw strain signal . as described in section [ sec : alignment ] , the dominant way in which angular motion creates a change in cavity length is the convolution of beam spot motion with angular mirror motion . ideally , we want the length displacement due to this coupling to be an order of magnitude below the desired displacement sensitivity .the effective transfer function magnitude of the angle - to - length effects can be estimated with a broadband noise injection that amplifies the mirror motion .this non - linear technique is necessary because the linear coupling of torque to cavity length is minimized by periodically balancing the mirror actuators .due to the near - elimination of the linear coupling , the remaining dominant angle - to - length process ( refer to eq .[ eq : a2l ] ) has a coupling coefficient of mean and the traditional coherent transfer function measurement would therefore also yield .to arrive at an estimate of the magnitude of the remaining time - dependent angle - to - length coupling , the broadband excitation must be averaged over some time .we injected a 40 to 110hz broadband excitation into the error point after the input matrix and computed a transfer function between the hard / soft eigenmode error point and the gw signal .the transfer function may be multiplied by an asc signal at any time to estimate a noise budget .the wfs noise budget in the eigenmode basis for pitch at a time when the interferometer was locked with 24kw power is shown in figure [ fig : nb]a .each degree of freedom s contribution of control noise to displacement sensitivity is the same within about a factor of two , except for the rm , which is not included in these plots .we were not able to measure the transfer function for rm motion to displacement sensitivity because so large of an excitation was required to see an effect that the interferometer would lose lock .the soft modes contributes more length noise than the hard modes .the asc is , in fact , the limiting noise source for frequencies up to 55hz and it becomes less and less of a primary noise source as frequency increases . 
by 100hzthe asc noise floor is a factor of 10 below displacement sensitivity .the specific structure of the noise contributions , including the apparent notches , is a direct result of the shape of the control filters .imperfections in the estimate of displacement noise below 50hz arise because the transfer function is not perfectly stable in time .figure [ fig : nb]b shows a broader view of the role of angular control noise with respect to other primary noise sources .the alignment noise shown is the quadrature sum of the pitch and yaw contributions . measured seismic and optical lever noisesare also shown , in addition to models of thermal , shot , and radiation pressure noises . in this example, asc noise hinders the interferometer sensitivity up to 55hz by about an order of magnitude . at a later time ,steeper and lower frequency low pass control filters were made ( at the expense of reduced stability ) to reduce the alignment noise to a level similar to that of seismic noise .the ligo detectors are currently being upgraded to a configuration known as advanced ligo to achieve up to a factor of ten improvement in broadband sensitivity .the noise performance of the angular control scheme in the advanced ligo detectors must meet the most stringent requirements to date , as imposed by the improved sensitivity and the goal that the displacement noise produced by the asc is no greater than 10% of the design sensitivity . giventhe asc was a limiting noise source below 55hz in enhanced ligo , some additional steps must be taken to achieve the advanced ligo goal .for instance , in order to mitigate the largely acoustic dominated wfs noise above 10hz , the wfs will be placed in vacuum for advanced ligo .in addition , the angle - to - length coupling at low frequencies will be reduced through the use of a seismic feed - forward scheme .the sidles - sigg effect will not be as important in advanced ligo despite the laser power stored in the arm cavities being as high as 800 kw , 20 times higher than in enhanced ligo .a number of design changes have made the impact of radiation pressure less dramatic : * four times heavier mirrors ( 40 kg instead of 10 kg ) ; * arm cavity -factor chosen to suppress the soft mode * a larger restoring angular torque due to new multi - stage pendulum suspensions figure [ fig : khardsoft ] shows a plot of soft and hard mode frequency as a function of stored power in the arms for the enhanced and advanced ligo configurations .it can be seen that although the hard mode is hardly affected by the advanced ligo changes , the new -factor greatly pushes out the power at which the soft mode becomes unstable .nevertheless , the control strategy developed for enhanced ligo gives us confidence that we can control the hard and soft modes .the advanced ligo asc design detailing the effects of the above changes to the design presented here is found in ref .the enhanced ligo interferometer is a complex opto - mechanical system whose angular mechanics are dominated by radiation pressure effects .we show that radiation pressure shapes the angular dynamics of the suspended mirrors and plays an important role in the design of an angular control system .we implemented and characterized a novel control scheme to deal with the instabilities that radiation pressure causes to the angular degrees of freedom of the interferometer , without compromising the strain sensitivity of the detector .the alignment control scheme that we describe allowed the ligo detectors to operate at their best 
sensitivity ever , as achieved during the scientific run s6 .the solution that we demonstrate here is extensible to the next generation of ligo detectors , advanced ligo , and is more broadly applicable in systems in which radiation pressure torques are dominant over mechanical restoring forces .we would like to thank d.h .reitze , d. sigg , and y. aso for helpful discussions .this work was supported by the national science foundation under grant phy-0757058 .ligo was constructed by the california institute of technology and massachusetts institute of technology with funding from the united states national science foundation .this paper has ligo document number https://dcc.ligo.org/ligo-p1100089/public[p1100089 ] .
We describe the angular sensing and control of the 4 km detectors of the Laser Interferometer Gravitational-wave Observatory (LIGO). The culmination of the first generation of LIGO detectors, Enhanced LIGO operated between 2009 and 2010 with about 40 kW of laser power in the arm cavities. In this regime, radiation pressure effects are significant and induce instabilities in the angular opto-mechanical transfer functions. Here we present and motivate the angular sensing and control (ASC) design for this extreme case and report the results of its implementation in Enhanced LIGO. Highlights of the ASC performance are: successful control of the opto-mechanical torsional modes, relative mirror motions of rad rms, and limited impact on the in-band strain sensitivity.
since the discovery of human blood groups in 1900 , their distributions in various countries and ethnicities have been attracting attention of researchers .such distributions vary a lot across the world and , in general , evolve over time .it is classically known that in the absence of evolutionary influences the allele frequency of a single trait achieves the hardy - weinberg equilibrium already in the second generation and then remains constant .besides , blood groups frequencies satisfy an algebraic relation .the interplay between the frequencies of all possible genotypes or phenotypes combinations of a pair of genes ( even though their frequencies are uncorrelated ) is however much more complex . for a random initial population, the frequencies of all possible combinations of blood group and rh factor phenotypes will only stabilize after an infinitely long evolution . by computing the linkage disequilibria between allelesone can , in principle , find the frequency of any genotype in a given generation .however , finding explicit analytic formulas for the evolution of genotypes frequencies and invariant varieties of the polynomial dynamical system describing this evolution is a problem of great computational complexity . in the present paperwe develop a symbolic solution technique which allows us to give a closed form description of the evolutionary trajectory .we show how the frequencies of human blood genotypes ( distinguished by both blood group and rh factor variations ) with arbitrary initial distribution will evolve after any given number of generations in a population where no blood genotype is favored over another with respect to the ability to pass its genes to the next generation . throughout the paper ,we will be denoting the blood group traits by a , b , o , and the rh factor traits by h ( positive ) and h ( negative ) .the 18 human blood genotypes in the abo rh system will be denoted by oohh ( rh negative 1st blood group ) , aohh , aahh , bohh , bbhh , abhh , oohh , aohh , aahh , bohh , bbhh , abhh , oohh , aohh , aahh , bohh , bbhh , and abhh ( homozygously rh positive 4th blood group ) . in the sequel , we will always be using this particular ordering of the blood genotypes . since a and b traitsare codominant over o while the h trait is dominant over h , the above genotypes comprise the 8 blood phenotypes : oh ( rh negative 1st blood group , same as oohh ) , ah ( rh negative 2nd blood group comprising genotypes aohh and aahh ) , bh , abh , oh , ah ( rh positive 2nd blood group comprising genotypes aohh , aohh , aahh , aahh ) , bh , and abh . in demography and transfusiology , it is often important to know and predict the frequencies of blood genotypes or phenotypes with respect to _ both _ blood group and rh factor .for instance , one would like to know the expected frequency of the rh negative 4th blood group after a given number of years in a certain population . 
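For the long-run version of the last question, the limiting (Hardy-Weinberg) distribution of the combined ABO-Rh system is the tensor product of the two single-locus equilibria, as stated later in the paper. The short computation below uses invented allele frequencies to illustrate how the equilibrium frequency of, say, the Rh-negative 4th blood group is obtained.

```python
# Hardy-Weinberg equilibrium of the ABO-Rh system as a tensor product of single-locus
# equilibria.  The allele frequencies are invented for illustration.
p_A, p_B, p_O = 0.25, 0.10, 0.65          # ABO allele frequencies (sum to 1)
p_H, p_h = 0.60, 0.40                     # Rh allele frequencies (sum to 1)

abo = {"O": p_O**2,                       # A and B dominant over O, codominant with each other
       "A": p_A**2 + 2 * p_A * p_O,
       "B": p_B**2 + 2 * p_B * p_O,
       "AB": 2 * p_A * p_B}
rh = {"Rh+": p_H**2 + 2 * p_H * p_h,      # H dominant over h
      "Rh-": p_h**2}

equilibrium = {(bg, r): fb * fr for bg, fb in abo.items() for r, fr in rh.items()}
print(round(equilibrium[("AB", "Rh-")], 4))   # long-run frequency of the AB, Rh-negative phenotype
```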
as we will see later , the convergence of the blood genotypes frequencies towards the limit distribution is rather slow ( in the real time scale ) for generic choice of their initial distribution .for instance , table 1 shows that for the frequency of the oh genotype in example [ ex : examplewithtable ] below after two generations is 0.109 while its equilibrium value is 0.187 .moreover , the limit distribution might not be ever reached for a particular real population because of migration and evolutionary influences that affect the blood genotypes frequencies .the hardy - weinberg result gives the equilibrium genotypes frequencies after an infinitely long evolution and the bernstein equation relates these frequencies in a population that is already at the equilibrium .the purpose of the present paper is to fill the gap between an initial distribution and the equilibrium state ( that is in general only achieved after an infinitely long evolution ) by giving an explicit closed form analytic formula for the frequencies distribution .we describe the evolution of the frequencies of all possible genotypes of human blood in the clinically most important abo and rh blood group systems for an arbitrary initial distribution of these frequencies and after any number of generations .we will be assuming that blood group and rh factor are statistically correlated neither with gender nor with fertility or any aspect of sexual behavior of a human .that is , we will consider a population where an individual s chances to pass her / his genes to the next generation do not depend on her / his blood genotype .since we assume that blood genotypes frequencies are uncorrelated with gender , their distribution is the same for males and females .the blood genotype distribution in such a population is therefore completely determined by a vector with 18 real nonnegative components where denotes the frequency of oohh , is the frequency of aohh , etc .( see the ordering of the blood genotypes introduced above ) .we will only consider vectors not all of whose components are zero since zero population has trivial dynamics .moreover , since we are only interested in the proportions of the population having prescribed blood genotypes , we will identify proportional vectors .thus for the purpose of studying blood genotypes dynamics a population is identified with a point in the 17-dimensional projective space let be the vector encoding the blood genotypes distribution in a population . using the well - known blood inheritance rules together with the above statistical assumptions on the population under study , we conclude that the distribution of blood genotypes in the next generation is described by the vector whose components are the following 18 quadratic forms : for instance , the polynomial can be obtained by observing that the only blood genotypes that contribute to the frequency of the genotype oohh in the next generation are oohh , aohh , bohh , oohh , aohh , and bohh .recall that their frequencies in the initial generation are denoted by and respectively . 
in a family where both parents blood belongs to the oohh genotype , 100% of the children will have the same blood .an offspring of the parents with the blood genotype aohh will have blood of the type oohh with the probability 1/4 .computing the probabilities for an offspring to have blood of the type oohh for all possible combinations of the parents blood genotypes and clearing common denominators ( this makes use of our projective model and must be done for all components of the polynomial map ( [ eq : explicitpolynomialdynamicalsystem ] ) simultaneously ) , we arrive at the other components of ( [ eq : explicitpolynomialdynamicalsystem ] ) are obtained by means of similar arguments .the polynomials ( [ eq : explicitpolynomialdynamicalsystem ] ) vary greatly in their complexity and three patterns are easily distinguishable .the polynomials that are squares of linear forms , that is , correspond to blood genotypes that are homozygous for both blood group and rh factor .the polynomials that are products of two different linear forms , that is , correspond to the blood genotypes that are homozygous for either blood group or rh factor but not both .finally , the three complicated polynomials are the counterparts of the fully heterozygous genotypes aohh , bohh , and abhh . observe that no particular population growth model has been used for computing the polynomials since our goal is to compute the frequencies of the blood genotypes in the next generation no matter hownumerous it is .( it only has to be numerous enough for the law of large numbers to hold . )choosing a particular growth model would result in multiplying the polynomials with a common normalizing function .thus the distribution of the blood genotypes in the next generation is completely described by the polynomial vector - valued function from the projective space into itself : such a map defines a polynomial dynamical system .finding blood genotypes distributions in subsequent generations means computing the sequence of iterates of the polynomial vector - valued function here is what the initial distribution evolves into after generations .typically a polynomial dynamical system on a complex manifold does not admit an explicit analytic description of the trajectory of a generic point. the vast majority of the results in complex dynamics are ergodic - theoretic in nature .however , the biological origin of the dynamical system ( [ eq : explicitpolynomialdynamicalsystem ] ) suggests that it should not exhibit any chaotic behavior .the polynomial dynamical system ( [ eq : explicitpolynomialdynamicalsystem ] ) is the main object of study in the paper .we aim to find an explicit symbolic description of the orbit of any initial distribution of human blood genotypes frequencies under the action of ( [ eq : explicitpolynomialdynamicalsystem ] ) and to describe its rate of convergence towards the equilibrium. 
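A compact way to apply the map ( [ eq : explicitpolynomialdynamicalsystem ] ) numerically, without writing out the 18 quadratic forms, is to route the computation through gamete frequencies: under random mating an offspring genotype is an unordered pair of independent gametes, and a gamete carries one ABO allele and one Rh allele drawn from a random parent with free recombination. The sketch below is not the authors' code and assumes the input frequencies sum to one (so no projective normalization is needed); it should agree with one application of the map. The example starts from a two-genotype population of the kind considered in example [ ex : examplewithtable ], with an arbitrary initial frequency x.

```python
from itertools import product

abo, rh = ("A", "B", "O"), ("H", "h")
gametes = list(product(abo, rh))              # the 6 possible gametes (ABO allele, Rh allele)

def gamete_freqs(pop):
    """Gamete distribution produced by a population {genotype: frequency} whose values sum to 1.
    A genotype is a sorted pair of gametes, e.g. AOHh is (("A", "H"), ("O", "h"))."""
    g = {gam: 0.0 for gam in gametes}
    for ((a1, r1), (a2, r2)), f in pop.items():
        for gam in product((a1, a2), (r1, r2)):   # free recombination between the two loci
            g[gam] += f / 4.0
    return g

def next_generation(pop):
    """One application of the quadratic map: offspring = unordered pair of independent gametes."""
    g = gamete_freqs(pop)
    new = {}
    for g1, g2 in product(gametes, repeat=2):
        geno = tuple(sorted((g1, g2)))
        new[geno] = new.get(geno, 0.0) + g[g1] * g[g2]
    return new

x = 0.5                                       # arbitrary initial frequency of OOhh; 1 - x of ABHH
pop = {(("O", "h"), ("O", "h")): x, (("A", "H"), ("B", "H")): 1.0 - x}
for _ in range(3):
    pop = next_generation(pop)
print(round(pop[(("O", "h"), ("O", "h"))], 4))   # frequency of the OOhh genotype after 3 generations
```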
it will often be convenient to identify a distribution of blood genotypes in a population with a linear form whose coefficients are proportions of people with given blood genotypes and whose formal variables are the 18 blood genotypes names oohh , aohh , , abhh .for instance , the linear form + denotes the population where people have blood genotype oohh and people have genotype abhh .the main result of the paper is the following statement .[ thm : maintheorem ] the initial distribution of blood genotypes frequencies will after generations evolve into the distribution here is the -th component of the image of the initial distribution under the linear map defined by the matrix while is the quadratic map defined by although stands for the integer number of iterations of the polynomial map the formula ( [ eq : blooddistrafterngenerations ] ) makes perfect sense for any real as we will see later in section [ sec : discussion ] , it provides a smooth interpolation of the blood genotypes frequencies in subsequent generations described by our discrete model . having the explicit expression ( [ eq : blooddistrafterngenerations ] ) for the iterate of a polynomial map ( [ eq : explicitpolynomialdynamicalsystem ] ) , it is tempting to try to prove it by induction .let denote the right - hand side of ( [ eq : blooddistrafterngenerations ] ) .it is easy to check that in ( that is , these two polynomial vectors are proportional for any ) .thus it only remains to show that while this brute force approach must , in principle , lead to a straightforward proof of the theorem , the difficulty lies in the considerable complexity and the high dimensionality of the polynomial dynamical system ( [ eq : explicitpolynomialdynamicalsystem ] ) .in fact , the first component of the vector is the square of a polynomial of degree 4 with 110490 monomials . other components of this vector are at least as complex as the first one . while modern supercomputers theoretically allow one to deal with polynomials of this size , it is a task of formidable computational complexity to carry out such a calculation .the author s attempts to perform it on nvidia tesla m2090 supercomputer platform with a peak performance of 16.872 tflops ( linpack tested ) were all unsuccessful .besides , it would not provide any explanation for how ( [ eq : blooddistrafterngenerations ] ) arose . for these reasons we will follow a different way of proving theorem [ thm : maintheorem ] . 
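Before turning to the proof, it may help to recall the classical mechanism that produces this kind of geometric convergence. Because the ABO and Rh loci are inherited independently (recombination fraction 1/2), the gamete (haplotype) frequencies relax toward the product of the allele frequencies and the linkage disequilibrium is halved in every generation; since genotype frequencies in a generation are products of the previous generation's gamete frequencies, the genotype distribution approaches the Hardy-Weinberg tensor product at the same geometric rate. The self-contained check below uses arbitrary starting haplotype frequencies; it is a standard population-genetics computation, not a claim about the exact constants in the theorem.

```python
import numpy as np

# Haplotype frequencies x[i, j] for ABO allele i in {A, B, O} and Rh allele j in {H, h};
# the starting values are arbitrary and sum to 1.
x = np.array([[0.05, 0.20],
              [0.10, 0.05],
              [0.45, 0.15]])
for n in range(6):
    p, q = x.sum(axis=1), x.sum(axis=0)   # allele frequencies (constant across generations)
    D = x - np.outer(p, q)                # linkage disequilibrium
    print(n, round(float(np.abs(D).max()), 6))   # halves every generation
    x = x - 0.5 * D                       # random mating with free recombination (r = 1/2)
```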
throughout the proof , we will be working with projective coordinates of blood genotypes distributions .thus all the equalities below will relate vectors in projective spaces meaning that two nonzero vectors are equal if and only if they are proportional .let be the vector encoding the initial blood genotypes distribution in a population .it is straightforward to check that the action of the polynomial map ( [ eq : explicitpolynomialdynamicalsystem ] ) on is given by here is the -th component of the image of the initial distribution under the linear map defined by the matrix ( [ matrixm ] ) .we further observe that the matrix has tensor product structure : this equality is the algebraic counterpart of the blood genotypes inheritance rule stating that the blood group variation and the rh factor variation are inherited independently .we define the map to be the composition of the linear map and the quadratic map defined by ( [ mapq ] ) in the reversed order : using ( [ eq : tensorproductdecomposition ] ) we conclude that the -th iterate of the quadratic map acts on in accordance with the formula it follows from ( [ eq : polynomialdynamicalsystem ] ) that recalling that and using ( [ eq : mandqinreversedorder ] ) we arrive at ( [ eq : blooddistrafterngenerations ] ) .this finishes the proof .recall that we identify proportional distributions , so ( [ eq : blooddistrafterngenerations ] ) can be divided by the sum of its components or any other normalizing common factor .the following statement is an immediate consequence of theorem [ thm : maintheorem ] .blood genotypes frequencies after infinitely many generations are obtained by passing to the limit in ( [ eq : blooddistrafterngenerations ] ) .they are given by the tensor product of the hardy - weinberg equilibrium frequencies of the blood groups variations and rh factor variations .these frequencies span an attracting invariant manifold of the dynamical system ( [ eq : explicitpolynomialdynamicalsystem ] ) .describing evolutionary trajectories of initial distributions of genotypes frequencies with respect to a given set of traits is a classical avenue of research in population genetics .while theorem [ thm : maintheorem ] is a formal consequence of the lyubich general evolution formula , see 11 in , few cases admit explicit description . [ex : examplewithtable ] to illustrate the action of the dynamical system ( [ eq : explicitpolynomialdynamicalsystem ] ) on a simple initial distribution , we begin by treating a distribution spanned by two genotypes . 
consider an initial population where the first rh negative blood group is found with frequency the fourth rh homozygously positive blood group is found with frequency while no other blood group and rh factor variations are present .table 1 summarizes the evolution of this distribution ( empty cells stand for zero frequencies ) .* table 1 : blood group and rh factor phenotypes frequencies evolution * * of the initial population + * [ tab : bloodgroupsdistributions ] 0.2 cm generation & 2nd generation & 3rd generation & -th generation & after infinitely many generations + o & & & & & & + a & & & & & & + b & & & & & & + ab & & & & & & + rh positive & & & & & & + rh negative & & & & & & + o , rh positive & & & & & & + a , rh positive & & & & & & + b , rh positive & & & & & & + ab , rh positive& & & & & & + o , rh negative & & & & & & + a , rh negative & & & & & & + b , rh negative & & & & & & + ab , rh negative & & & & & & + all blood types & 1 & 1 & 1 & 1 & 1 & 1 + table 1 shows that , although the blood groups and rh factor frequencies alone stabilize after one generation , the frequency of the blood with any given combination of blood group and rh factor phenotypes ( such as oh ) does not remain constant after any finite number of generations .there exists however the limit distribution of frequencies that is achieved after infinitely many generations and is described by the hardy - weinberg equilibrium . in fact , the limit frequencies of the 8 phenotypes are given by the tensor product of the frequencies of the four blood groups and the two values of rh factor . to illustrate the nontrivial dynamics of a generic initial distribution of blood genotypes frequencies, we consider the evolution of the distribution 2 + aohh + 2 spanned by three genotypes .the phenotypes frequencies in the subsequent generations are shown in fig .[ fig : frequenciesdynamics ] .oohh + aohh + 2 , width=453 ] -9.4 cm -8 cm ah -8 cmbh 0.7 cm -8 cm oh 0.0 cm -8.0 cm ah -0.0 cm -8 cm abh 0.0 cm -8 cm oh 0.4 cm -8 cm bh 0.0 cm -8 cm abh -0.2 cm 8.5 cm number of generations we remark that it takes six human generations ( around 200 years in the real time scale ) for the phenotypes frequencies in this example to arrive at the 2% relative error neighborhood of their limit values .while the evolutionary trajectory of any particular blood genotypes distribution is completely described by theorem [ thm : maintheorem ] , the algebraic structure of the attracting invariant manifold of the dynamical system ( [ eq : explicitpolynomialdynamicalsystem ] ) is far from being clear .the analysis of its properties is a task of substantial computational complexity and was done by means of a package developed by the author and run under computer algebra system _ mathematica _ 9.0 .one of the core algorithms implemented in this package is as follows. * algorithm 1 .* ` step 0 ` define the list ` g ` with the 18 blood genotypes names oohh , , abhh as formal symbolic algebraically independent variables .a population will from now on be identified with a linear form in the elements of ` g ` .its mass is defined to be the sum of the coefficients of this linear form .denote the space of all such forms by ` l(g ) ` . `step 1 ` define the blood genotypes inheritance matrix ` a ` to be the matrix of normalized ( i.e. , with mass 1 ) linear forms in the elements of ` g ` that encode the human blood genotypes inheritance rules . ` step 2 ` define a bilinear map ` s ` acting on ` g``g ` and with values in ` l(g ) ` by means of the matrix ` a ` . 
with this map ,the normalized next generation is computed as follows : ` nextgeneration[population_]:=collect[simplify[expand[s[population , population]]/ mass[population]],g ] ` . `step 3 ` choose a population by specifying the values of some of the elements of the list ` g ` and imposing algebraic relations on the other . `step 4 ` find an algebraic parametrization of the attracting submanifold for the population in question by integrating the evolutionary equations . `step 5 ` eliminate the parameters and return the complete set of algebraic equations defining the attracting submanifold .while there exist several computer programs for the numerical simulation of the evolution of recombination frequencies , the above algorithm appears to be new .the structure of invariant manifolds of a general multivariate map defined by a family of quadratic forms is far from being clear and presumably does not admit any algebraic description .the linearization of an evolutionary trajectory in a neighborhood of the equilibrium manifold has been given in .computer experiments with the _ mathematica _ package reveal the following intrinsic property of the attracting manifold : it is given by a _binomial ideal _ generated by quadratic forms .a full list of these forms ( many of them being algebraically dependent ) contains 96 elements and the following shape : we now apply algorithm 1 to investigate invariant manifolds of the polynomial dynamical system ( [ eq : explicitpolynomialdynamicalsystem ] ) .we consider special cases of particularly simple distributions of blood genotypes in the initial population .consider the special case of a population consisting of people with first blood group only ( such as present day s south american indians , see , p. 2189, table 132 - 2 ) .assume that the population in question comprises people with the genotype oohh , people with the genotype oohh , and people with the genotype oohh .using the notation introduced above , we will denote such a population by then , in accordance with the blood inheritance rules and their mathematical formulation ( [ eq : explicitpolynomialdynamicalsystem ] ) , the blood genotypes distribution in the next generation will be and it will remain unchanged in any subsequent generation . in other words ,the variety parametrized for by is an invariant manifold of the polynomial map ( [ eq : explicitpolynomialdynamicalsystem ] ) which moreover consists of fixed points of this map .the three nonzero equilibrium frequencies of the three genotypes in this example lie on the discriminant hypersurface from now on we will be using the linear form notation for blood genotypes distributions since they allow one to avoid vectors with plenty of zeros . since rh negative blood is a recessive trait , a population consisting of rh negative people only can be represented in the following form : such a distribution of blood genotypes will also stabilize already in the next generation .this new stable distribution is given by for any the point ( [ eq : limitdistrrhnegative ] ) is another fixed point of the polynomial map ( [ eq : explicitpolynomialdynamicalsystem ] ) .it is easily seen that the equilibrium frequencies satisfy the binomial relations consider the population + + . 
here and are arbitrary positive numbers representing proportions of people with corresponding blood genotypes .any real population is of course very far from having such a distribution of blood genotypes .we consider this example since it is essentially different from the previous ones and still allows one to explicitly compute the limit distribution of the blood genotypes .already the third generation of the population defined above will contain all nine genotypes that belong to the first or the second blood group . using the wolfram mathematica 9.0 computer algebra system and a package for blood genotypes analysiswe conclude that after sufficiently many generations the blood genotypes distribution in the population under study will be arbitrarily close to the limit distribution for any initial frequencies this blood genotypes distribution is invariant under the map the nonzero equilibrium frequencies span the manifold defined by the binomial equations statistics shows that in most populations blood group and rh factor distributions do not correlate with each other . since blood genotype ( including all the information on homo- or heterozygosity of a person for blood group and rh factor ) is much more difficult to detect clinically than the dominating blood group and rh factor , the corresponding statistics for their variations is not available . yet , genetics of these traits suggests to consider them as statistically independent . in the present example, we investigate the blood genotypes dynamics of a population satisfying this additional assumption .such a population is completely determined by the two vectors and giving the numbers of people with rh - factor variations ( hh , hh , hh ) and the distribution of blood group variations ( oo , ao , aa , bo , bb , ab ) within every such set . with this notation , the 18-dimensional vector defining a population that satisfies the above assumption is given by the tensor product of the vectors and defined as follows : computation shows that the blood genotypes distribution of such a population will also stabilize in the next generation and the new stable distribution is the tensor product of the distributions ( [ eq : limitdistrfirstgroup ] ) and ( [ eq : limitdistrrhnegative ] ) : where image of the space of all blood genotypes distributions under the map ( [ eq : explicitpolynomialdynamicalsystem ] ) encoding the blood inheritance rules is six - dimensional .thus the evolution of an 18-dimensional initial distribution is completely determined by its six parameters which can be chosen to be the frequencies of the fully homozygous blood genotypes .the initial distributions that evolve differently are those that do not differ by a vector in the kernel of the matrix ( [ matrixm ] ) .it is classically known that the equilibrium phenotypes frequencies o , a and b of the 1st , 2nd and 3rd blood groups respectively satisfy the bernstein algebraic relation this relation ( together with o+a+b+ab=1 ) allows one to express e.g. the frequency of the 4th blood group as a function of the frequencies of the first two . 
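a quick way to quantify how far an observed phenotype distribution is from this equilibrium hypersurface is to compare the observed ab frequency with the one implied by o and a . the sketch below uses the classical square - root form of bernstein 's relation , sqrt(a+o) + sqrt(b+o) - sqrt(o) = 1 , together with o + a + b + ab = 1 ; the paper 's own algebraic form of the relation may differ , and the sample frequencies used here are hypothetical rather than taken from the data discussed next .

```python
import math

def predicted_ab(o, a):
    """AB frequency implied by O and A at Hardy-Weinberg equilibrium, using
    Bernstein's relation sqrt(A+O) + sqrt(B+O) - sqrt(O) = 1 and O+A+B+AB = 1."""
    b = (1.0 - math.sqrt(a + o) + math.sqrt(o)) ** 2 - o
    return 1.0 - o - a - b

def relative_error(o, a, b, ab):
    """relative deviation of the observed AB frequency from the equilibrium prediction."""
    return abs(predicted_ab(o, a) - ab) / ab

# hypothetical phenotype frequencies (not taken from the bloodbook.com tables)
print(relative_error(o=0.45, a=0.40, b=0.11, ab=0.04))
```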
using statistical data available at www.bloodbook.com/world-abo.html , where blood group distributions for 88 ethnicities of the world are collected , we find that the relative error of the estimate exceeds 30% in 11% of all cases . also , this relative error is greater than 20% for 19% of all the observed blood groups phenotypes distributions . this suggests that many of the observed distributions do not lie on the bernstein hypersurface ( see fig . [ fig : bernsteinhypersurface ] ) and therefore are not at the equilibrium and will necessarily evolve further . while extensive statistical data on blood phenotypes distributions in the various populations of the world is available at www.bloodbook.com/world-abo.html and similar sources , little is known about the human blood genotypes distributions . the reason for this is that a human 's blood genotype is much more difficult to detect clinically than her / his blood phenotype . however , to compute the expected blood phenotypes or genotypes distribution in the next generation , the present genotypes distribution must be known . this lack of statistical data does not allow us to directly apply theorem [ thm : maintheorem ] to a real population . yet , the formula ( [ eq : blooddistrafterngenerations ] ) shows which initial distributions evolve along the same trajectories as well as their rate of convergence towards the equilibrium . sato t. et al . , 2010 . polymorphisms and allele frequencies of the abo blood group gene among the jomon , epi - jomon and okhotsk people in hokkaido , northern japan , revealed by ancient dna analysis . journal of human genetics * 55 * , 691 - 696 . cantat s. , chambert - loir a. , guedj v. , 2010 . quelques aspects des systèmes dynamiques polynomiaux . ( french ) [ some aspects of polynomial dynamical systems . ] panoramas et synthèses [ panoramas and syntheses ] , 30 . société mathématique de france , paris . x+341 pp . isbn : 978 - 2 - 85629 - 338 - 6 .
|
the frequencies of human blood genotypes in the abo and rh systems differ between populations . moreover , in a given population , these frequencies typically evolve over time . the possible reasons for the existing and expected differences in these frequencies ( such as disease , random genetic drift , founder effects , differences in fitness between the various blood groups etc . ) are in the focus of intensive research . to understand the effects of historical and evolutionary influences on the blood genotypes frequencies , it is important to know how these frequencies behave if no influences at all are present . under this assumption the dynamics of the blood genotypes frequencies is described by a polynomial dynamical system defined by a family of quadratic forms on the 17-dimensional projective space . to describe the dynamics of such a polynomial map is a task of substantial computational complexity . we give a complete analytic description of the evolutionary trajectory of an arbitrary distribution of human blood variations frequencies with respect to the clinically most important abo and rhd antigens . we also show that the attracting algebraic manifold of the polynomial dynamical system in question is defined by a binomial ideal .
|
the mathematics of transport in nonlinear dynamical systems has received considerable attention for more than two decades , driven in part by applications in fluid dynamics , atmospheric and ocean dynamics , molecular dynamics , granular flow and other areas .we refer the reader to for reviews of transport and transport - related phenomena .early attempts to characterise transport barriers in fluid dynamics include time - dependent invariant manifolds ( such as lobe - dynamics ) and finite - time lyapunov exponents .more recently , in two - dimensional area - preserving flows , proposed finding closed curves whose time - averaged length is stationary under small perturbations ; this aim is closest in spirit to the predecessor work of this paper , though the latter theory applies in arbitrary finite dimensions and the curves need not be closed . in parallel to these efforts , the notion of almost - invariant sets in autonomous systems spurred the development of probabilistic methods to transport based around the transfer operator . in relation to transport barriers , numerical observations indicated connections between the boundaries of almost - invariant sets and invariant manifolds of low - period points .transfer operator techniques were later extended to dynamical systems with general time dependence , with the introduction of coherent sets as the time - dependent analogues of almost - invariant sets .topological approaches to phase space mixing have also been developed , including connections with almost - invariant sets . in , froyland introduced the notion of a dynamic isoperimetric problem , namely searching for subsets of a manifold whose boundary size to enclosed volume is minimised in a time - averaged sense under general time - dependent nonlinear dynamics .solutions to this problem were constructed from eigenvectors of a dynamic laplace operator , a time - average of pullbacks of laplace operators under the dynamics .it was shown in that the dynamic laplace operator arises as a zero - diffusion limit of the transfer operator constructions for finite - time coherent sets in .this result demonstrated that finite - time coherent sets ( those sets that maximally resist mixing over a finite time interval ) , also had the persistently small boundary length to enclosed volume ratio property ; intuitively this is reasonable because diffusive mixing between sets can only occur through their boundaries .thus , finite - time coherent sets have dual minimising properties : slow mixing ( probabilistic ) and low boundary growth ( geometric ) .the theory in was restricted to the situation where the advective dynamics was volume - preserving , and to tracking the transport of a uniformly distributed tracer in euclidean space .in the present work , we extend the results of in three ways : ( i ) to non - volume preserving dynamics , ( ii ) to tracking the transport of nonuniformly distributed tracers , and ( iii ) to dynamics operating on curved manifolds .we now begin to be more specific about the results of the present paper .let denote a connected -dimensional compact riemannian manifold and denote a hypersurface disconnecting into submanifolds ; that is is a partition of .for example , could be the unit square ^ 2\subset \mathbb{r}^2 ] and is not area - preserving , then ( 2-dimensional lebesgue measure ) will be transformed by to a probability measure with a non - constant density .furthermore , since ^ 2\subset \mathbb{r}^2 ] be a -dimensional cylinder in , where is identification at 
interval endpoints ; that is , is periodic in the first coordinate with period . the riemannian metric on is given by the kronecker delta , so that the volume form on is . to form a weighted riemannian manifold , we set the density of to be a positive and periodic function . consider the hypersurface ; we choose this surface because it is the solution of the classical `` static '' isoperimetric problem defined by minimising ( [ cheeger0 ] ) without the second term in the numerator . the curve is two vertical lines on that pass over regions with minimal density as shown in figure [ fig : static1a ] . one can compute analytically by noting that the induced riemannian metric on is given by ; thus . figures [ fig : static1a ] and [ fig : static2a ] show the -dimensional cylinder under the nonlinear shear ; colours are values of , and black lines are the hypersurface ; values of , and . let us now apply the following transformation to , where the first coordinate is computed modulo . the map is a nonlinear horizontal shear . the hypersurface is transformed to under the action of as shown in figure [ fig : static2a ] . the shearing magnitude is chosen to simplify the analytical computation of . it is easy to verify that is area - preserving . since is area - preserving and , one has which implies in this example . to compute the measure on , we parametrise the curve by for ] . therefore thus the measure of is almost double that of the measure of . correspondingly , the numerator in ( [ cheeger0 ] ) will be undesirably large . in section [ sect631 ] we show how to use our new machinery to find an improved choice for that takes into account both the weight _ and _ the dynamics of . our goal is to detect lagrangian coherent structures on the weighted riemannian manifold ; i.e. subsets of that resist mixing with the surrounding phase space by having persistently small boundary size to internal size . following , we introduce a version of the _ dynamic isoperimetric problem _ , generalised to the situation where the dynamics need not be volume preserving , and occurs on a possibly weighted , possibly curved manifold . let be a compact -hypersurface in that disconnects into two disjoint open subsets and with . to begin with , we model the dynamics as a single iterate of . the subsets and are transformed into and , with the disconnecting surface separating and in . consider the following optimisation problem : [ def : cc ] define the _ dynamic cheeger ratio _ by the _ dynamic isoperimetric problem _ is defined by the optimisation problem where varies over all -hypersurfaces in that partition into . the number is called the _ dynamic cheeger constant _ .
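the quantities entering the numerator of the ratio just defined are easy to approximate numerically for the cylinder example above . the sketch below estimates the mu - weighted length of a vertical line and the nu - weighted length of its image under a horizontal shear ; the period , the density and the shear profile used here are illustrative stand - ins , since the specific choices of the example are not repeated in the sketch .

```python
import numpy as np

L = 2.0                                                   # assumed period of the cylinder
rho = lambda x: 1.0 + 0.5 * np.cos(2.0 * np.pi * x / L)   # illustrative positive periodic density
g = lambda y: np.sin(np.pi * y)                           # illustrative shear: T(x, y) = ((x + g(y)) mod L, y)

def weighted_length_initial(x0):
    """mu-weighted length of the vertical line {x = x0} x [0, 1]."""
    return rho(x0) * 1.0

def weighted_length_image(x0, n=10000):
    """nu-weighted length of the sheared line: because T is area-preserving the
    push-forward density at T(x0, y) equals rho(x0), while the tangent vector
    (g'(y), 1) stretches the arc-length element."""
    y = np.linspace(0.0, 1.0, n)
    gp = np.gradient(g(y), y)
    integrand = rho(x0) * np.sqrt(1.0 + gp ** 2)
    return np.mean(integrand)          # simple quadrature of the integral over [0, 1]

x0 = 1.0                               # a vertical line sitting over the density minimum
print(weighted_length_initial(x0), weighted_length_image(x0))
```

with these choices the image curve is noticeably longer in the weighted sense , which is exactly the effect penalised by the numerator of the dynamic cheeger ratio .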
note that by the definition of , one has and .importantly , one does _ not _ have in general , because is not necessary the push - forward of ( see also the direct computation in section [ example1 ] ) .thus , one could rewrite as by searching over all -hypersurfaces in to minimise , the first ratio term of attempts to minimise mixing between the subsets and across the boundary , through the mechanism of small co - dimensional mass at the initial time , and small co - dimensional mass at the final time .having a persistently small boundary is consistent with slow mixing in the presence of small magnitude diffusion , and is also consistent with measures of mixing adapted to purely advective dynamics such as the mix - norm and negative index sobolev space norms .the reason for the constraint is to ensure that and found , _ both _ have macroscopic -dimensional mass to avoid trivial solutions .thus , the optimal solution for is a -hypersurface that represents an excellent candidate for a lagrangian coherent structure , in the sense that the corresponding subsets and are able to retain their resistance to mixing in the presence of the prescribed dynamics . to see why this problem is a truly dynamic problem ,consider the -dimensional flat cylinder ] and \cup(3.5 , 4)\times [ 0,1] ] under a ( possibly time - dependent ) ode , where is at each ; i.e the initial manifold is transformed under the smooth flow maps arising from for each ] ; one has an evolving weighted riemannian manifold .note that the metrics need not be related for different . for all ] .one now has the time - continuous push - forward operator given by for all ] .furthermore , by a straightforward modification of corollary [ thm : dwlp3 ] in the appendix , one has for each ] ( see ( [ eq : lapt ] ) as in theorem [ thm : spec ] .moreover , by a modification ( see appendix [ sec : multip ] for details ) , one can obtain a continuous - time dynamic cheeger inequality }^d\leq 2\sqrt{-\lambda_{2 , \tau}},\ ] ] where is the second eigenvalue of } ] , partition into via , , and the hypersurface .compute for each ] and are the components of the vectors and respectively .therefore , we have \approx \sum_{j=1}^j p_{ij } g_j.\ ] ] the operator is numerically estimated by the matrix under right multiplication . to numerically solve the eigenvalue problem on ,we discretise using the second equality of ; that is in preparation for the numerical approximations for our -dimensional examples , which will be a rectangle , cylinder or torus , we construct a by grid system for .let be euclidean coordinates on .we cover with grid boxes of uniform size ( one can easily consider the more general case of nonuniform box sizes ) , and re - index the boxes to , indexing the -direction with , and the -direction with ; clearly .let and denote the components of discrete functions and respectively .we employ standard finite - difference schemes to obtain numerical approximations for the rhs of . starting with the approximation of , one has in euclidean coordinates , the vector . to compute the derivatives and numerically , we apply the standard central - difference technique to obtain on the grid box , thus on the grid box next , we numerically solve the divergence applied to the rhs of . by central - difference approximations ,one has on the grid box .\end{aligned}\ ] ] denote the resulting finite - difference approximation of by the matrix . 
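a compact way to see how the pieces described above fit together numerically is the following python sketch : it builds an ulam - type estimate of the transfer matrix from box sample points , forms standard grid laplacians , and averages them into a discrete dynamic laplace operator whose second eigenvector can then be thresholded to produce candidate disconnecting hypersurfaces . the map , the uniform grid on the flat 2-torus , the equal weights ( the density factors of the stencil are omitted ) and the particular way the laplacian is conjugated with the transfer matrix are simplifying assumptions of the sketch , not the exact discretisation derived here .

```python
import numpy as np

# illustrative sketch on the flat 2-torus [0, 1)^2 with an n-by-n box grid
n, h = 32, 1.0 / 32

def T(p):
    """a simple nonlinear shear, periodic in both coordinates."""
    return np.column_stack([(p[:, 0] + 0.2 * np.sin(2.0 * np.pi * p[:, 1])) % 1.0,
                            p[:, 1] % 1.0])

def periodic_lap1d(m, step):
    """one-dimensional three-point Laplacian with periodic wrap-around."""
    L = -2.0 * np.eye(m) + np.eye(m, k=1) + np.eye(m, k=-1)
    L[0, -1] = L[-1, 0] = 1.0
    return L / step ** 2

lap = np.kron(np.eye(n), periodic_lap1d(n, h)) + np.kron(periodic_lap1d(n, h), np.eye(n))

# Ulam-type estimate of the transfer matrix: push sample points of each box forward with T
rng = np.random.default_rng(0)
centres = (np.indices((n, n)).reshape(2, -1).T + 0.5) * h
P = np.zeros((n * n, n * n))
for i, c in enumerate(centres):
    images = T(c + (rng.random((100, 2)) - 0.5) * h)
    a = (images[:, 0] // h).astype(int) % n
    b = (images[:, 1] // h).astype(int) % n
    np.add.at(P, (i, a * n + b), 1.0 / 100)

# one plausible discretisation of the dynamic Laplace operator: average the Laplacian at the
# initial time with the Laplacian seen through the estimated transfer matrix
dyn_lap = 0.5 * (lap + P @ lap @ P.T)
evals, evecs = np.linalg.eigh(0.5 * (dyn_lap + dyn_lap.T))
second = evecs[:, -2]   # eigenvector of the second largest eigenvalue; its level sets give
                        # candidate disconnecting hypersurfaces on the initial-time grid
```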
rearranging , then applied to the vector is a vector of length with components {k+k(l-1 ) } = \frac{1}{4b_{x_1}^2}\frac{u_{k+1 , l}}{u_{k , l}}f_{k+2 , l}+\frac{1}{4b_{x_1}^2}\frac{u_{k-1 , l}}{u_{k , l}}f_{k-2 , l}+\frac{1}{4b_{x_2}^2}\frac{u_{k , l+1}}{u_{k , l}}f_{k , l+2 } + \frac{1}{4b_{x_2}^2}\frac{u_{k , l-1}}{u_{k , l}}f_{k , l-2}\notag\\ -\bigg(\frac{1}{4b_{x_1}^2}\frac{u_{k+1 , l}+u_{k-1 , l}}{u_{k , l}}+ \frac{1}{4b_{x_2}^2}\frac{u_{k , l+1}+u_{k , l-1}}{u_{k , l } } \bigg)f_{k , l},\end{aligned}\ ] ] for , .note that if is constant for all and , then the expression becomes the standard -point stencil laplace matrix . to treat the numerical approximation of at the boundary of , we apply the usual neumann boundary condition for all ( where is unit normal to ) .this neumann boundary condition is imposed by symmetric reflection in the above modified finite - difference scheme as follows : consider the grid boxes for ; one has a boundary on the left side edge of each of these grid boxes . by construction ,the unit normal along the left side edge of the grid boxes is given by .therefore , the boundary condition is satisfied by reflecting the artificial , and for all .one applies similar symmetric reflections to all at the boundary of . by definition , and the numerical approximations we obtained for , , and , one has the finite - difference approximation for the weighted dynamic laplacian given by where the matrices and are given by and respectively , and , by .we note that the matrices and are sparse and consequently is sparse .one can numerically solve the finite dimensional eigenvalue problem for small eigenvalues , and in particular and corresponding eigenfunction . to find a good solution to the dynamic isoperimetric problem , one can use the level sets of as candidates for as in algorithm [ alg ] .we now demonstrate our technique on a weighted -dimensional cylinder , where ] whenever there is no confusion on whether the partial differential is carried out on or .it is straightforward to verify that the set forms a basis for the vector fields on .hence , one can write the metric tensor in coordinates as . given a diffeomorphism , and local charts , on and respectively .observe that is smooth .therefore , it is possible to carry out the operation of partial differentials on at the point as t:=\frac{\partial ( \vartheta\circ t\circ \varphi^{-1})}{\partial x_j}(\varphi(x)).\ ] ] one can construct the jacobian matrix in local coordinates via , as a matrix with entries , where is the smooth projection of the image of onto the coordinate , and the abbreviation had been applied .let be local coordinates on .denote by the differential -forms dual to the tangent basis , for each . for , one can express a differentiable -form in coordinates via the exterior product of -forms where are real - valued functions on .the exterior derivative on a differentiable is a -form given by , and the exterior derivative on the -form defined by is a -form satisfying the interior derivative on a -form with respect to a vector field on , is a -form satisfying (\mathcal{v}_1 , \mathcal{v}_2 , \ldots , \mathcal{v}_{p-1})=\eta(\mathcal{v } , \mathcal{v}_1 , \mathcal{v}_2 , \ldots , \mathcal{v}_{p-1}),\ ] ] for all vector fields on .recall the definition of the tangent and cotangent mappings and associated with the , are given by and respectively .for the differential -form given by , one has for all vector fields on . 
therefore , in coordinates where the last line is due to the fact that \mathcal{v}=\mathcal{v}(f\circ t)=[d(f\circ t)]\mathcal{v} n ] .let be the signed distance function as in , and let .fix , then is in .hence is restricted to , and for each .therefore , by the implicit function theorem there exist open neighborhoods about each point , and local coordinates for , such that are local coordinates on .let be the matrix with entries in the coordinates .then the volume form on is given by moreover , by a combination of the stokes and divergence theorem ( see p.122 , and p.7 , equation ( 38 ) respectively ) , one has where is the unit normal bundle along .hence for all .now , since along , the vector is normal to the hypersurface ; which implies , and for . therefore where the penultimate equality is due to the leibniz rule on interior product , and the fact that for . to complete the proof, we note that because is the signed distance function , is continuous by assumption , and is smooth since is smooth . hence is a continuous density for all .therefore is continuous on ] .thus , one can apply the fundamental theorem of calculus to the last line of to obtain where is the anti - derivative of .now , to obtain the inequality via lemma [ thm : molli ] , we start with the term on the numerator of .note that is constant on , which implies for all .but if , then for smaller than as in lemma [ thm : contmu ] .therefore , by lemma [ thm : alw ] one has next , we consider the term on the numerator of .observe that at each point , where we have used and the fact that for all to obtain the last line .hence , the integral vanishes . set to the signed distance function defined by , then for all .thus on .let be the level surfaces of the signed distance function ; that is . then are generated by the level surfaces of ; that is .therefore , by the co - area formula one has , by a straightforward modification of lemma [ thm : contmu ] , the expression appearing on the rhs of is continuous as a function of on the interval ] for all .suppose the solution exists for the eigenvalue problem .then under the boundary condition , one has by and proposition [ thm : wbc ] the following formulation for the eigenvalue problem : for all .equivalently , \,d\mu_r=-2\lambda\int_m g\phi \,d\mu_r.\ ] ] for all .let be a weighted sobolev space with weights .recall from section [ sec : ws ] that the weak gradient with respect to the metric is denoted by . due to , the the weak formulation for the eigenproblem is given by we show existence of solutions for the above weak formulation , for all .we call such pairs weak solutionsfor the eigenvalue problem. our approach to finding the weak solutions for is based on the construction of functionals and , and using the method of lagrange multipliers . for , we define and , where and .first we list some useful properties of the functionals , and .a. the functional is well - defined , b. the derivative is linear and bounded ( hence ) , c. is frchet - differentiable , d. is continuous as a map from to . a. let be an atlas of . then due to the fact that is a -diffeomorphism , there exists a set of finite constants such that on for each and .hence , by writing in coordinates form via ( with respect to weak partial derivatives ) , one has on all points in for all , where . furthermore ,since is compact , there exists a partition of unity subordinate to the covering ( lemma [ thm : partu ] ) .therefore , where . since , .it follows that is well defined .b. 
for all where to obtain the last line , we have used the fact that the coefficient of the term on the penultimate line is finite from part .clearly is linear .furthermore , by the cauchy - schwartz inequality , one has where is the same constant that appeared in part .therefore , is bounded .[ thm : rm1 ] one may obtain analogous results of lemma [ thm : fnl ] for by setting as the identity map in , while the corresponding results for is a straightforward modification with an important concept associated with linear functionals is the weak convergence .let be a sequence in .we say that weakly in , if for all ( where is the linear dual of ) .moreover , since is a hilbert space ( proposition [ thm : comw ] ) , by the riez representation theorem , if weakly in then for all .one has the following standard result ( see p.174 , ) recall from section [ sec : ws ] that the condition on the density has important consequences for the weighted sobolev space . by assumption ,the density is smooth and uniformly bounded away from zero .hence , by proposition [ thm : smoothap ] , the density is an weight on the the space .define the inner - product for all , and denote the norm associated with by .set , and select a sequence such that and .first , we show that the sequence is bounded in both and . due to lemma[ thm : gpc ] , there exists a constant ( independent of ) such that for each .hence , moreover , by cauchy - schwartz hence so that is a bounded sequence in . by applying similar arguments as in the proof of lemma [ thm : fnl](i ), one can verify that is also a bounded sequence in .since is a bounded sequence in , and a hilbert space ( due to proposition [ thm : comw ] ) , by lemma [ thm : wsc ] , there exists a subsequence such that weakly in .moreover , due to lemma [ thm : rcom ] , the embedding is compact , which implies the existence of a subsequence of , such that in .the strong convergence in implies in , because by the change of variable next , we use the fact that the subsequence is bounded in together with the weak convergence of in , to show that convergences weakly in . due to lemma and remark [ thm : rm1 ], one has and for all .therefore where the penultimate line is due to the weak convergence of in .now , the weak convergence of in implies where the inequality on the last line is due to cauchy - schwartz .set , , and , and consider the inequality as a consequence of , one has thus , . furthermore, the subsequence is bounded in , and is the largest number smaller than .thus , similarly , the weak convergence of the bounded subsequence in gives finally , due to and by the strong convergence of in and in , one has from and , we conclude that ; thus the minimum of is attained by . to complete the proof the theorem, it remains to show that ; that is .one has since . due to lemma[ thm : fnl ] , the functionals and are continuously differentiable . in addition , by lemma [ thm : fmin ] there exists a function which minimises over the constraint set .therefore , using the method of lagrange multipliers , one has the equation for some and all . expanding this equation with and yields for all , and some . by comparing and, one sees immediately that is a solution pair for the weak formulation .if we fix to be in , then moreover , as a consequence of lemma [ thm : fmin ] , is minimising for . thus rearranging yields let the solution to be denoted by . 
to find other solution pairs to of the form ,one follows the standard induction arguments presented in and p.212 in : one constructs a sequence of decreasing , closed and -orthogonal subspaces of ; that is for , a sequence of subspaces of of the form , where is constant .one then uses the fact that the solutions and are -orthogonal for ( this follows immediately from lemma c.3 . in ) , and the fact that each is complete ( closed subspace of a hilbert space ) , to apply the variational method on to obtain for . note that is a solution pair to , thus , the sequence is monotone decreasing and tends to , with the solution space finite for each ( lemma c.4 . in ) . to complete the proof of theorem [ thm : spec ] , it remains to verify that the eigenfunctions of are smooth and unique for each .for then , the smoothness of on implies that the weak solution pairs which solves are also solution to .moreover , the uniqueness of implies that the solutions of the eigenvalue problem are given by or ( with the weak gradients replaced with standard version due to the additional smoothness of ) . to determine the regularity and uniqueness of on , we utilise the elliptical regularity theorem ( see theorem 8.14 in ) .we say that an operator of the form is strictly uniformly elliptic if , and are bounded , real - valued functions on , and there exists a constant such that where is non - zero . as a consequence of the elliptical regularity theorem , if is smooth , and is a strictly uniformly elliptic operator with and in , then there exist unique solutions in for the eignproblem .[ thm : ul ] let be a -diffeomorphism , and assume is smooth and uniformly bounded away from zero . the weighted laplacian is a strictly uniformly elliptic operator of the form , with in and on . for this proof, we say that an operator has property , if it is a strictly uniformly elliptic , with coefficients in and on . by lemma[ thm : dwlp2 ] clearly the sum of operators with property is an operator with property .additionally , if the second and fourth terms of has property , then by setting as the identity , one immediately see that the first and third terms of also has property .thus , it is sufficient to show that the second and fourth terms of has property . to show that second term of has property , we note by corollary [ thm : app3.3 ] that . therefore in local coordinates at any point in , for all . using jacobi s formula for differentiating the determinant of a matrix ;that is for all , one has therefore , by using the product rule to expand the partial derivative in the summation on the rhs of , and then applying to the first term one has \partial_if+(t^*n)^{ij}\partial_j\partial_i f. \end{aligned}\ ] ] now the riemannian metric is a bilinear symmetric form and positive - definite for every .moreover , the mapping is a -diffeomorphism .hence , the components and are both bounded and smooth for each . therefore , the coefficients and in are both bounded and smooth .additionally , due to lemma [ thm : app3.1 ] we have at the point , where we have used the fact that the matrix is positive definitive at every to obtain the last inequality .hence , there is a such that for all .thus satisfies the condition , so by - the term has property . 
to show that the fourth term of has property , we consider the numerator term .one has at each point , writing the rhs of in local coordinates , one has at any point therefore , at each as before , due to the properties of the metric , the smoothness of , and the fact that is diffeomorphism , the coefficient is bounded and smooth , and so the fourth term of has property .this proof is a straightforward modification of theorem 3.2 in .let be nonnegative and smooth .since by , and densities are both positive and smooth , the function is also nonnegative and smooth .denote by the level surfaces generated by ; that is .then the level surfaces of are generated by .now , due to the co - area formula given by lemma [ thm : ca ] , one has let be smooth , and the median of with respect to ; i.e and .set and , so that .observe that for each point , either , or .therefore in addition , if is positive then , and if is positive then .hence , by using the fact that is the median of , one has for all .moreover , if then , and if then .hence , and finally , by definition and . hence analogous to and analogous to \big|_m\,d\mu_r+\int_m \big|\nabla_n [ ( \mathcal{l}f-\sigma)^2]\big|_n\,d\nu_r\notag\\ = & \int_m \big|\nabla_m(f_+^2+f_-^2)\big|_m\,d\mu_r+\int_n \big|\nabla_n(\mathcal{l}f_+^2+\mathcal{l}f_-^2)\big|_n\,d\nu_r\quad\mbox{by \eqref{eq : wci2a } and \eqref{eq : wci2b}}\notag\\ = & \int_m\left ( |\nabla_m ( f_+^2)|_m+ |\nabla_m ( f_-^2)|_m \right)\,d\mu_r+ \int_n \left(|\nabla_n ( \mathcal{l}f_+^2)|_n+|\nabla_n ( \mathcal{l}f_-^2)|_n\right)\,d\nu_r,\end{aligned}\ ] ] where the last line is due to and .now , consider the rhs of . since and are nonnegative and smooth almost everywhere , one can set and independently in , and then apply to the result to obtain where the equality on the last line is due to . applying the cavalieris principle ( proposition i.3.3 in ) to the rhs of yields next , we consider the lhs of . in local coordinates , one has by & = \sum_{i , j=1}^r m^{ij}\partial_i ( f-\sigma)^2 \partial_j\\ & = 2\sum_{i , j=1}^r m^{ij } ( f-\sigma)\partial_i f\partial_j\\ & = 2(f-\sigma)\nabla_m f.\end{aligned}\ ] ] therefore , by cauchy - schwartz \big|_m\ , d\mu_r & = 2\int_m |f-\sigma|\cdot\big|\nabla_m f\big|_m\,d\mu_r\notag\\ & \leq 2\|f-\sigma\|_{2 , m , \mu}\cdot \|\nabla_mf\|_{2 , m , \mu}.\end{aligned}\ ] ] also , analogous to \big|_n\,d\nu_r & \leq 2\|\mathcal{l}f-\sigma\|_{2 , n , \nu}\cdot \|\nabla_n \mathcal{l}f\|_{2 , n , \nu}\notag\\ & = 2\left(\int_n ( \mathcal{l}f-\sigma)^2 \,d\nu_r \right)\cdot \|\nabla_n \mathcal{l}f\|_{2 , n , \nu}\notag\\ & = 2\left(\int_m ( f-\sigma)^2 \,d\mu_r \right ) \cdot\|\nabla_n \mathcal{l}f\|_{2 , n , \nu}\quad\mbox{by \eqref{eq : cov3}}\notag\\ & = 2\|f-\sigma\|_{2 , m , \mu}\cdot \|\nabla_n \mathcal{l}f\|_{2 , n , \nu}.\end{aligned}\ ] ] therefore , by - , one has be the mean of with respect to ; that is .then as a function of is minimum when .hence , by squaring both sides of , one has for all , where we have used the fact that for to obtain the inequality on the last line . 
furthermore, if is the smallest magnitude nonzero eigenvalue of with corresponding eigenfunction , then by theorem [ thm : spec ] , one has , , and for the infimum of is attained by .thus , by setting in , this concludes the proof of the theorem .to generalise theorem [ thm : wci ] to the time - continuous dynamic cheeger inequality , we note that apart from all arguments are applied linearly with respect to time .hence , the results up to are immediate via the constructions outlined in sections [ sec : mts ] and [ sec : lts ] . to modify the argument used to obtain ,we apply cauchy - schwartz to obtain for the time - discrete case , one applies cauchy - schwartz analogously .recall the definition of the diffusion operator given by . for ,we wish to evaluate the limit of the image of under the operator , where by and , with let be a chart on containing the point . recall normal coordinates at the point , are the local coordinates on such that the metric tensor satisfies and for all .[ thm : texpdet ] let be a chart of containing the point with corresponding coordinates . the asymptotic expansion of about , centered at is given by where depend only on the riemannian curvature tensor and covariant derivatives of at the point .moreover , if is bounded on , then let with , fix and set to be smaller than the injectivity radius of the point .it is well known that the exponential map at the point is a diffeomorphism of a neighbourhood of onto ( see theorem 5.11 , ) .moreover , there exist normal coordinates on the chart ; that is the components of the metric tensor satisfy , and at the point for all ( see corollary 5.12 , ) .recall the definition of from section [ sec:3 ] . by the gauss lemma for riemannian manifolds ,the exponential map is a radial isometry from to ( see lemma 3.5 , p.69 in ) . thus , for all .moreover , due to the fact that , the function vanishes for all .let denote normal coordinates on .recall that the volume form on is given by , where is a matrix with entries .hence , where is the lebesgue measure on .moreover , since , one has where the last line is due to .an application of the change of variable to the rhs of yields to complete the proof of the lemma from , we follow the proof of lemma d.1 .we apply taylor s theorem to the real - valued function on , centered at to obtain where the remainder term is given by due to the above taylor expansion of , the rhs of becomes \cdot\sqrt{\det g_m}\circ \exp_{x_0}(\epsilon v)\,d\ell(v).\end{aligned}\ ] ] we evaluate the above integral term by term . for the term, we note that the real - valued function is symmetric .hence , are odd functions of for .therefore , where we have applied lemma [ thm : texpdet ] to obtain the second equality , with constants depend only on the riemannian curvature tensor , and the covariant derivatives of at the point .we return to this term later in the proof , but for now we proceed to the term . for the term , due to the property for and the approximation of by lemma [ thm : texpdet ] , one has where we have applied lemma [ thm : laplaciannormal ] to obtain the last line .now set , where is smaller than the injectivity radius for every , then the approximations - are valid for every point . moreover , since is compact is bounded on . therefore , by , if then there exists a constant such that for each and all , if then there exists a constant such that for each and all . due to - , one has for all . consider the term on the second line of , one has for some constant . 
therefore ,rearranging yields since the first and second order derivatives of are bounded for by , the first two terms on the rhs of converge to as .hence , to complete the proof of the theorem it suffices to show that is uniformly bounded on , for and every with .let and .since is less than the injectivity radius of , the exponential map is a -diffeomorphism from onto .thus , if , then all derivatives of up to order are bounded above by for some on . now since , one has for all . hence ,the term is uniformly bounded in for , and all .it follows that the remainder is uniformly bounded on , for and every .let , and set to be smaller than the injectivity radius of each point in .we start with the asymptotic expansions of .since and is bounded in the -norm , one has , for some constant .consider such that .lemma [ thm : app2.1 ] yields , where denotes the class of polynomials , with all coefficients bounded on and independent of . combining the expansion of with the linearity of , then .now , since is given by and is a -diffeomorphism , one has .therefore , by a straightforward modification of lemma [ thm : app2.1 ] , we have uniformly on +\mathcal{o}(\epsilon^3)\notag\\ = & \mathcal{p}(fh_\mu ) + \frac{c\epsilon^2}{2}\left[\mathcal{p}\triangle_m ( fh_\mu)+\triangle_n \mathcal{p}(fh_\mu)\right]+\mathcal{o}(\epsilon^3),\end{aligned}\ ] ] where is the same constant as in lemma [ thm : app2.1 ] ( since the constant comes from the property of , independent of ) .therefore , using the fact that +\mathcal{o}(\epsilon^3 ) } { h_\nu + \frac{c\epsilon^2}{2}\left[\mathcal{p}\triangle_m h_\mu+\triangle_n h_\nu \right]+\mathcal{o}(\epsilon^3)},\ ] ] uniformly on .next we apply to . according to ,the first step is the application of the dual diffusion operator to . in preparation for this, we consider a general polynomial quotient of the form where are a set of known coefficients . by polynomial long division and truncating at ,one has applying to , and noting that ( see ) yields + \mathcal{o}(\epsilon^3)\\ & = \mathcal{l}f+\frac{c\epsilon^2}{2}\left[\frac{\mathcal{p}\triangle_m(fh_\mu)}{h_\nu}+\frac{\triangle_n\mathcal{p}(fh_\mu)}{h_\nu } -\frac{\mathcal{l}f\cdot \mathcal{p}\triangle_m h_\mu}{h_\nu}-\frac{\mathcal{l}f\cdot\triangle_n h_\nu}{h_\nu}\right ] + \mathcal{o}(\epsilon^3)\end{aligned}\ ] ] uniformly on .since is uniformly bounded away from zero , one can check that .hence , it is now straightforward to compute via lemma [ thm : app2.1 ] to obtain \notag\\ + \frac{c\epsilon^2}{2}\triangle_n\mathcal{l}f + \mathcal{o}(\epsilon^3),\end{aligned}\ ] ] uniformly on .we write using the fact that for the last line .thus , the and terms of can be combined to form also , thus , the and terms of can be combined to form +\frac{c\epsilon^2}{2}\triangle_n\mathcal{l}f\notag\\ = & \frac{c\epsilon^2}{2}\triangle_n \mathcal{l } f+c\epsilon^2\frac{n(\nabla_n h_\nu ,\nabla_n \mathcal{l}f)}{h_\nu}+\frac{c\epsilon^2}{2}\triangle_n\mathcal{l}f\notag\\ = & c\epsilon^2\left(\triangle_nf + \frac{n(\nabla_n h_\nu , \nabla_n \mathcal{l}f)}{h_\nu}\right)\notag\\ = & c\epsilon^2(\triangle_\nu \mathcal{l}f ) , \end{aligned}\ ] ] where the last line is due to . substituting and into the rhs of yields , uniformly on .it is straightforward to apply to the rhs of via lemma [ thm : app2.1 ] , which yields uniformly on , where we have used to obtain the last line .since the coefficients of the are uniform on and independent of , rearranging gives r. m. 
rustamov . laplace - beltrami eigenfunctions for deformation invariant shape representation . in _ proceedings of the fifth eurographics symposium on geometry processing _ , pages 225 - 233 . eurographics association , 2007 .
|
transport and mixing in dynamical systems are important properties for many physical , chemical , biological , and engineering processes . the detection of transport barriers for dynamics with general time dependence is a difficult , but important problem , because such barriers control how rapidly different parts of phase space ( which might correspond to different chemical or biological agents ) interact . the key factor is the growth of interfaces that partition phase space into separate regions . the paper introduced the notion of _ dynamic isoperimetry _ : the study of sets with persistently small boundary size ( the interface ) relative to enclosed volume , when evolved by the dynamics . sets with this minimal boundary size to volume ratio were identified as level sets of dominant eigenfunctions of a _ dynamic laplace operator_. in this present work we extend the results of to the situation where the dynamics ( i ) is not necessarily volume - preserving , ( ii ) acts on initial agent concentrations different from uniform concentrations , and ( iii ) occurs on a possibly curved phase space . our main results include generalised versions of the dynamic isoperimetric problem , the dynamic laplacian , cheeger s inequality , and the federer - fleming theorem . we illustrate the computational approach with some simple numerical examples .
|
in this paper we consider a minimal metapopulation model with two competing populations . it consists of two different environments among which migrations are allowed . as migrations do occur indeed in nature , , the metapopulation tool has been proposed to study populations living in fragmented habitats , . one of its most important results is the fact that a population can survive at the global level , while becoming locally extinct , . an earlier , related concept is the one of population assembly , , to account for heterogeneous environments containing distinct community compositions , providing insights into issues such as biodiversity and conservation . as a result , sequential slow invasion and extinction shape successive species mixes into a persistent configuration , impenetrable by other species , , while , with faster invasions , communities change their compositions and each species has a chance to survive . a specific example in nature for our competition situation is provided by _ strix occidentalis _ , which competes with , and often succumbs to , the larger great horned owl , _ bubo virginianus_. the two in fact compete for resources , since they share several prey , . if the environment in which they live gets fragmented , the competition can not be analysed classically , and the metapopulation concept becomes essential to describe the natural interactions . this paper attempts to develop such an issue in this framework . note that another recent contribution in the context of patchy environments considers also a transmissible disease affecting the populations , thereby introducing the concept of metaecoepidemic models , . an interesting competition metapopulation model with immediate patch occupancy by the strongest population and incorporating patch dynamics has been proposed and investigated in . patches are created and destroyed dynamically at different rates . a completely different approach is instead taken for instance in , where different competition models , including facilitation , inhibition and tolerance , are investigated by means of cellular automata . the model we study bears a close resemblance to a former model that recently appeared in the literature , . however , there are two basic distinctions , in the formulation and in the analysis .
as for the model formulation , in populations are assumed to be similar species competing for an implicit resource .thus there is a unique carrying capacity for both of them in each patch in which they reside .furthermore their reproduction rates are the same .we remove both these assumptions , by allowing in each patch different carrying capacities for each population , as well different reproduction rates .methodologically , the approach used in uses the aggregation method , thereby reducing the system of four differential equations to a two - dimensional one , by assuming that migrations occur at a different , faster , timescale than the life processes .this may or may not be the case in real life situations .in fact , referring to the herbivores inhabiting the african savannas , this movement occurs throughout the lifetime , while intermingling for them does not constitute a `` social '' problem , other than the standard intraspecific competition for the resources , .the herbivores wander in search of new pastures , and the predators follow them .this behavior might instead also be influenced by the presence of predators in the surrounding areas , .thus the structure of african herbivores and the savanna ecosystems may very well be in fact shaped by predators behavior . in the current classical literature in this context, it is commonly assumed that migrations of competing populations in a patchy environment lead to the situation in which the superior competitor replaces the inferior one .in addition , it is allowed for an inferior competitor to invade an empty patch , but the invasion is generally prevented by the presence of a superior competitor in the patch , .based on this setting , models investigating the proportions of patches occupied by the superior and inferior competitors have been set up , .the effect of patch removal in this context is analysed in , coexistence is considered in , habitat disruptions in a realisting setting are instead studied in .note that in this context , the migrations are always assumed to be bidirectional .our interest here differs a bit , since we want to consider also human artifacts or natural events that fragment the landscape , and therefore we will examine particular subsystems in which migrations occur only in one direction , or are forbidden for one of the species , due to some environmental constraints . our analysis shows two interesting results .first of all , a kind of competitive exclusion principle for metapopulation systems also holds in suitable conditions .further , the competitive exclusion principle at the local patch level may be overcome by the migration phenomenon , i.e. two competing populations may coexist , provided that either only one of them is allowed to freely move , or that migrations for both populations occur just in one and the same direction .this shows that the assumptions of the classical literature of patchy environments may at times not hold , and this remark might open up new lines of investigations .the paper is organized as follows . in the next sectionwe formulate the model showing the boundedness of its trajectories .we proceed then to examine a few special cases , before studying the complete model : in section [ one ] only one population is allowed to migrate , in section [ direc ] the migrations occur only in one direction . 
then the full model is considered in the following section .a final discussion concludes the paper .we consider two environments among which two competing populations can migrate , denoted by and .let , , , their sizes in the two environments . herethe subscripts denote the environments in which they live .let each population thrive in each environment according to logistic growth , with possibly differing reproduction rates , respectively for and for , and carrying capacities , respectively again for and for .the latter are assumed to be different since they may indeed be influenced by the environment .further let denote the interspecific competition rate for due to the presence of the population and denote conversely the interspecific competition rate for due to the presence of the population .let the migration rate from environment to environment for the population and similarly let be the migration rate from to for the population .the resulting model has the following form : note that a very similar model has been presented in .but ( [ sistema ] ) is more general , in that it allows different carrying capacities in the two patches for the two populations , while in only one , , is used , for both environments and populations .further , the environments do not affect the growth rates of each individual population , while here we allow different reproduction rates for the same population in each different patch .also , competition rates in are the same in both patches , while here they are environment - dependent .the analysis technique used in also makes the assumption that there are two time scales in the model , the fast dynamics being represented by migrations and the slow one by the demographics , reproduction and competition .based on this assumption , the system is reduced to a planar one , by at first calculating the equilibria of the fast part of the system using the aggregation method , and then the aggregated two - population slow part is analysed . herewe thus remove the assumption of a fast migration , compared with the longer lifetime population dynamics because for the large herbivores the migration process is a lifelong task , being always in search of new pastures . in different environmentsthe resources are obviously different , making the statement on different carrying capacities more closely related to reality . finally , it is also more realistic to assume different carrying capacities for the two populations , even though they compete for resources , as in many cases the competition is only partial , in the sense that their habitats overlap , but do not completely coincide .we will consider several subcases of this system , and finally analyse it in its generality .table [ tab:1 ] defines all possible equilibria of the system ( [ sistema ] ) together with the indication of the models in which they appear . for each different model examined inwhat follows , we will implicitly refer to it frequently , with only changes of notation and possibly of population levels , but not for the structure of the equilibrium , i.e. the presence and absence of each individual population ..all the possible equilibria of the three ecosystems : y means that the equilibrium is possible .we indicated also the unconditional instability , and with a star the instability verified just numerically .critical means that stability is achieved only under very restrictive parameter conditions , i.e. in general the corresponding point must be considered unstable . 
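before turning to the stability analysis , a short numerical sketch of the dynamics may help : it integrates the four equations of ( [ sistema ] ) for one choice of coefficients . the parameter values and the index convention used for the migration rates ( m_{12 } meaning migration of the first population from patch 1 to patch 2 , and so on ) are illustrative assumptions of the sketch , not values taken from the analysis that follows .

```python
import numpy as np
from scipy.integrate import solve_ivp

# illustrative parameter values; the sign/index conventions for the migration rates are assumptions
r1, r2, k1, k2 = 1.0, 0.8, 10.0, 6.0       # growth rates / carrying capacities of p
s1, s2, h1, h2 = 0.9, 1.1, 8.0, 9.0        # growth rates / carrying capacities of q
a1, a2, b1, b2 = 0.05, 0.04, 0.06, 0.03    # interspecific competition rates
m12, m21, n12, n21 = 0.2, 0.1, 0.15, 0.25  # migration rates of p and q between the patches

def rhs(t, x):
    p1, q1, p2, q2 = x
    dp1 = r1 * p1 * (1 - p1 / k1) - a1 * p1 * q1 + m21 * p2 - m12 * p1
    dq1 = s1 * q1 * (1 - q1 / h1) - b1 * p1 * q1 + n21 * q2 - n12 * q1
    dp2 = r2 * p2 * (1 - p2 / k2) - a2 * p2 * q2 + m12 * p1 - m21 * p2
    dq2 = s2 * q2 * (1 - q2 / h2) - b2 * p2 * q2 + n12 * q1 - n21 * q2
    return [dp1, dq1, dp2, dq2]

sol = solve_ivp(rhs, (0.0, 200.0), [2.0, 2.0, 1.0, 1.0], dense_output=True)
print(sol.y[:, -1])    # long-run population levels
```

the long - run values produced in this way can be compared directly with the equilibria listed in table [ tab:1 ] and with the stability conditions derived below .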
for the stability analyses we will need the jacobian of ( [ sistema ] ) , where and denote the generic equilibrium point and we will now show that the solutions of ( [ sistema ] ) are always bounded . we shall explain the proof of this assertion for the complete model , but the same method can be used on each particular case , with obvious modifications . let us set . boundedness of implies boundedness for all the populations , since they have to be non - negative . adding up the system equations , we obtain a differential equation for , the right hand side of which can be bounded from above as follows let substituting in we find if we set we find let us now set and let be the solution of the cauchy problem by means of the generalized grönwall inequality we have that for all , and so this implies at once that is bounded , and thus the boundedness of the system 's populations as desired . observe that the boundedness result obtained here for this minimal model is easily generalized to meta - populations living in patches . here we assume that the population can not migrate between the two environments . this may be due to the fact that it is weaker , or that there are natural obstacles that prevent it from reaching the other environment , while these obstacles instead can be overcome by the population . thus each subpopulation and is segregated in its own patch . this assumption corresponds therefore to setting into ( [ sistema ] ) . in this case we will denote the system 's equilibria by , with . it is easy to show that equilibria , , , do not satisfy the first equilibrium equation , and , , , do not satisfy the third one , so that all these points are excluded from our analysis since they are unfeasible . at the origin , , the jacobian ( [ jac ] ) has the eigenvalues and , , from which its instability follows . the point is unconditionally feasible , but the eigenvalues of ( [ jac ] ) evaluated at turn out to be together with , , so that also is unconditionally unstable . the point is always feasible . two eigenvalues for ( [ jac ] ) are easily found , , . the other ones come from a quadratic equation , for which the routh - hurwitz conditions reduce to for parameter values satisfying these conditions then , is stable . equilibrium is always feasible , and the jacobian ( [ jac ] ) has eigenvalues again with , so that in view of the positivity of the last eigenvalue , is always unstable . existence for the equilibrium can be established as an intersection of curves in the phase plane . the equations that define them describe the following two convex parabolae , \pi_2 : \quad p_1(p_2)\equiv \frac 1{m_{21}}\left [ r_2p_2(1-\frac{p_2}{k_2})-m_{12}p_2 \right ] . both cross the coordinate axes at the origin and at another point , namely respectively for and for . now by drawing these curves it is easily seen that they always intersect in the first quadrant , independently of the position of these points , except when both have negative coordinates . the latter case needs to be scrutinized more closely .
to ensure a feasible intersection , we need to look at the parabolae slopes at the origin .thus , the feasible intersection exists if ^{-1}<1 $ ] or , explicitly when however , coupling this condition with the negativity of the coordinates of the above points and , intersections of the parabolae with the axes , the condition for the feasibility of becomes simply which is exactly the assumption that the coordinates of the points and be negative .hence it is automatically satisfied .further , in the particular case in which one or both such points coalesce into the origin , i.e. for either or , is it easily seen that the corresponding parabola is tangent to the origin and a feasible always exists . in conclusion ,the equilibrium is always feasible . by using the routh - hurwitz criterionwe can implicitly obtain the stability conditions as \left[r_2\left(1-\frac{2}{k_2}p_2\right)-m_{12}\right ] > m_{12}m_{21}.\end{aligned}\ ] ] numerical simulations reveal that the stability conditions are a nonempty set , we obtain for the parameter values , , , , , , , , , , , , , , , .for the equilibrium point we can define two parabolae in the plane by solving the equilibrium equation for : .\end{aligned}\ ] ] the first parabola intersects the axis at the point , it always has two real roots , one of which is positive and the other negative , and has the vertex with abscissa .the second parabola intersects the axis at the points given that the two parabolae always have one intersection on the boundary of the first quadrant , we can formulate a certain number of conditions ensuring their intersection in the interior of the first quadrant .these conditions arise from the abscissa of the vertex of , of the leading coefficient of and by the relative positions of the roots of . by denoting as mentioned by the abscissa of vertex of , by the leading coefficient of and by the ordinate of , we have explicitly sets of conditions : 1 . , , : the feasibility condition reduces just to the intersection between and the axis being larger than the positive root of ; explicitly , together with either or 2 . : the feasibity condition is that the slope of at the point be smaller than that of at the same point .but the value of the population in this case would be negative , thus this condition is unfeasible ; 3 . :the feasibity condition requires the slope of at the point to be smaller than that of at the same point .but the value of the population would then be negative , so that this condition is unfeasible ; 4 . : in general there is no intersection point ; 5 . : the feasibity condition states that the slope of at the point be smaller than that of at the same point ; explicitly 6 . : for feasibility , the intersection between and the axis must be larger than the positive root of ; in other words 7 . : there can be no intersection point ; 8 . : for feasibity the slope of at the point must be smaller than that of at the same point . in this case, explicitly we have the feasibility conditions the stability conditions given by the routh - hurwitz criterion can be stated as together with and finally where the population values are those at equilibrium .also in this case the simulations show that this equilibrium can be achieved for the parameter values , , , , , , , , , , , , , , , . 
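statements of the kind `` simulations show that this equilibrium is stably achieved for suitable parameter values '' can be checked with a few lines of code : starting from the right - hand side ` rhs ` sketched earlier , one locates an equilibrium with a root finder and inspects the eigenvalues of a finite - difference jacobian . the starting guess and the step size are arbitrary choices , and the routine only confirms local stability for the particular parameter set in use .

```python
import numpy as np
from scipy.optimize import fsolve

def jacobian(x, eps=1e-6):
    """finite-difference Jacobian of the vector field rhs at the point x."""
    f0 = np.array(rhs(0.0, x))
    J = np.zeros((4, 4))
    for j in range(4):
        xp = np.array(x, dtype=float)
        xp[j] += eps
        J[:, j] = (np.array(rhs(0.0, xp)) - f0) / eps
    return J

eq = fsolve(lambda x: rhs(0.0, x), x0=[2.0, 2.0, 1.0, 1.0])
print(eq, np.linalg.eigvals(jacobian(eq)))   # locally stable if every eigenvalue has negative real part
```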
for the equilibrium the same above analysis can be repeated , with only changes in the parabolae and in the subscripts of the above explicit feasibility conditions .the details are omitted , but the results provide a set of feasibility conditions and the following stability conditions given by the routh - hurwitz criterion together with and finally with population values evaluated at equilibrium . again , the whole set of conditions can be satisfied to lead to a stable configuration for the following parameter choice : , , , , , , , , , , , , , , , , with initial conditions .the equilibrium coordinates are .the coexistence equilibrium has been deeply investigated numerically .it has been found to be always feasible , but never stable for all the sets of parameters used .in this case , we assume that it is not possible to migrate from patch 2 back into patch 1 , so that the coefficients and vanish . the reasons behind this statement can be found in natural situations .for instance it can be observed that freshwater fishes swim downstream much more easily than upstream . in particular obstacles like dams and waterfalls may hinder the upstream migrations .in any case the overcoming of these obstacles requires a sizeable effort , for which sufficient energy must be allocated .this however may not always be available .we denote the equilibria here by , .equilibria , , , , , , are found to be all infeasible . the origin has two positive eigevalues and , so that it is unstable . the points and are feasible . for the former , the eigenvalues of the jacobian are , , , , giving the stability conditions for the latter instead , the eigenvalues are , , , , with the following conditional stability conditions equilibrium is feasible for either one of the two alternative sets of inequalities the eigenvalues are , , , where \ ] ] in case ( [ tildee4_feas_a ] ) holds , we find so that is unstable . in case instead of ( [ tildee4_feas_b ] )the stability conditions are and simulations show that this point is indeed stably achieved for the parameter values , , , , , , , , , , , , , , , , giving the equilibrium .the next points come in pairs .they are where and with respective conditions for the non - negativity of their first components given by note further that if ( [ tildee6_feas ] ) and ( [ tildee11_feas ] ) hold , then .but then and , so that and have the second component negative , i.e. they are infeasible .the feasibility conditions for and are then respectively given by ( [ tildee6_feas ] ) and ( [ tildee11_feas ] ) .the eigenvalues for are and giving the stability conditions where we used ( [ tildee6_feas ] ). eigenvalues of are and from which the stability conditions follow having again used ( [ tildee11_feas ] ) . for the next two equilibria , we are able only to analyse feasibility .we find with ^{1/2 } \right\}\\ b=&\frac{1}{2s_2(a_2 b_2 h_2 k_2 r_1 - r_1 r_2 s_2 ) } \left\ { h_2 s_2 - \frac{a_2 b_2 h_2 ^ 2 k_2 r_1 s_2+b_2 h_2 k_2 r_1 r_2 s_2 } { 2 ( a_2 b_2 h_2 k_2 r_1 - r_1 r_2 s_2)}\right.\\ & - \left [ b_2 h_2 ( -4 ( k_1 k_2 m_{21}^2 s_2 - k_1 k_2 m_{21 } r_1 s_2 ) ( -a_2 b_2 h_2 k_2 r_1 + r_1 r_2 s_2 ) \right.\\ & \left .\left . + ( a_2 h_2 k_2 r_1 s_2 - k_2 r_1 r_2 s_2)^2\right]^{1/2}\right\},\end{aligned}\ ] ] and where ^{1/2}\right\}\\ c=&\frac{1}{2 r_2 ( b_2 a_2 k_2 h_2 s_1 - s_1 s_2 r_2 ) } \left\ { k_2 r_2 - \frac{b_2 a_2 k_2 ^ 2 h_2 s_1 r_2+a_2 k_2 h_2 s_1 s_2 r_2 } { 2 ( b_2 a_2 k_2 h_2 s_1 - s_1 s_2 r_2 ) } \right . 
\\& - \left [ a_2 k_2 ( -4 ( h_1 h_2 n_{21}^2 r_2 - h_1 h_2 n_{21 } s_1 r_2 ) ( -b_2 a_2 k_2 h_2 s_1 + s_1 s_2 r_2 ) \right .\\ & \left .+ ( b_2 k_2 h_2 s_1 r_2 - h_2 s_1 s_2 r_2)^2\right]^{1/2}\right\}.\end{aligned}\ ] ] feasibility for is ensured by while for by numerical simulations show in fact their stability , respectively for the parameter values , , , , , , , , , , , , , , , , giving equilibrium and for the parameter values , , , , , , , , , , , , , , , , giving .the coexistence equilibrium has been numerically investigated for the parameter values , , , , , , , , , , , , , , , . from which its stability under suitable parameter values is shown .note that the parameters have been chosen in a very peculiar way , the reproduction rates all coincide , as do all the carrying capacities , the competition rates and the migration rates .however , numerical experiments reveal that by slightly perturbing these values , the stability of this equilibrium point is immediately lost .we conclude then that the coexistence equilibrium can be achieved at times , but is generically unstable .we consider now the full system ( [ sistema ] ) in this case , the points , , , , , , , , , , , are seen to be all infeasible . at the origin , the characteristic polynomial factors to give the two quadratic equations and stability conditions are then ensured by the routh - hurwitz conditions , which explicitly become these conditions are nevertheless incompatible , since from the second one we have and similarly , contradicting thus the first one . the origin is therefore always unstable .the points and may be studied by the same means of and therefore are always feasible .the stability of is given implicitly by \left[r_2\left(1-\frac{2}{k_2}p_2\right)-m_{12}\right ] > m_{12}m_{21},\end{aligned}\ ] ] whereas for the equilibrium we have the conditions \left[s_2\left(1-\frac{2}{h_2}q_2\right)-n_{12}\right ] > n_{12}n_{21}.\end{aligned}\ ] ] simulations were carried out to demonstrate that the stability conditions of these points can be satisfied .the equilibrium is stably achieved for the parameter values , , , , , , , , , , , , , , , .equilibrium is attained with the choice , , , , , , , , , , , , , , , , with initial conditions . for the coexistence equilibrium we have similar results as for the one of the one - migration only case .it exists and is stable for the very specific parameter values , , , , , , , , , , , , , , , .its stability however is easily broken under slight perturbations of the system parameters . again , thus , the coexistence equilibrium is not generically stable .the metapopulation models of competition type here considered show that only a few populations configurations are possible at a stable level .first of all , in virtue of our assumptions , all these ecosystems will never disappear .table [ tab:1 ] shows that equilibria , , , , can not occur in any one of the models considered here . of these , and are the most interesting ones .they show that one competitor can not survive solely in one patch , while the other one thrives alone in the second patch .thus it is not possible to reverse the outcome of a superior competitor in one patch in the other patch .further , in the first patch the two populations can coexist only in the model in which only one population is allowed to migrate back and forth into the other patch , equilibrium . 
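The stability claims reported from simulation can also be checked directly: locate an equilibrium with a root finder and inspect the eigenvalues of a finite-difference Jacobian. The sketch below does this for a hypothetical parameter set of the unidirectional-migration case (return rates set to zero); the model form and all numbers are assumptions, so it illustrates the procedure rather than the paper's exact values.

```python
import numpy as np
from scipy.optimize import fsolve

r1, r2, k1, k2 = 1.0, 1.0, 10.0, 10.0
s1, s2, h1, h2 = 1.0, 1.0, 10.0, 10.0
a1, a2, b1, b2 = 0.05, 0.05, 0.2, 0.2   # q is assumed the weaker competitor here
m21, n21 = 0.3, 0.3                     # migration only from patch 1 into patch 2
m12 = n12 = 0.0

def f(x):
    p1, p2, q1, q2 = x
    return [r1*p1*(1 - p1/k1) - a1*p1*q1 - m21*p1 + m12*p2,
            r2*p2*(1 - p2/k2) - a2*p2*q2 - m12*p2 + m21*p1,
            s1*q1*(1 - q1/h1) - b1*p1*q1 - n21*q1 + n12*q2,
            s2*q2*(1 - q2/h2) - b2*p2*q2 - n12*q2 + n21*q1]

def jac(x, eps=1e-6):
    x = np.asarray(x, dtype=float)
    f0 = np.asarray(f(x))
    J = np.empty((4, 4))
    for j in range(4):
        xp = x.copy(); xp[j] += eps
        J[:, j] = (np.asarray(f(xp)) - f0) / eps
    return J

eq = fsolve(f, [8.0, 9.0, 0.0, 0.0])       # guess near a p-only state
eigs = np.linalg.eigvals(jac(eq))
print("equilibrium:", np.round(eq, 3))
print("max Re(eigenvalue):", round(float(eigs.real.max()), 4),
      "->", "stable" if eigs.real.max() < 0 else "unstable")
```

Perturbing the parameters slightly and re-running the same check is a convenient way to confirm the reported fragility of the coexistence equilibrium.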
in that case, the migrating population thrives also alone in the second environment .the coexistence of all populations in both environments is `` fragile '' , it occurs only under very limited assumptions .coexistence in the second patch can occur instead with the first one empty at , only in the following two cases .for the one - directional migration model , with immigrations into the second patch , the first patch is left empty . when the first patch is instead populated by one species only , at equilibria for both the one - population and unidirectional migrations models and at , again for the one - directional migrations model .the equilibria in which one population is wiped out from the ecosystem instead , and , occur in all three models .finally , the three remaining equilibria contain only one population in just one patch . at , only for the unidirectional migration model , the migrating population survives in the arrival patch . at is the residential , i.e. the non - migrating , population that survives in its own patch , only for the one - population migrations model . at for both particular cases instead, the residential population survives in the `` arrival '' patch of the other migrating population .the model with unrestricted migration possibilities allows the survival of either one of the competing populations , in both patches , and . coupling this result with the fact that the interior coexistence has been numerically shown to be stable just for a specific parameter choice , but it is generally unstable , this result appears to be an extension of the classical competitive exclusion principle , , to metapopulation systems , in agreement with the classical literature in the field , e.g. .it is apparent here , as well as in the classical case , that an information on how the basins of attraction of the two mutually exclusive boundary equilibria is important in assessing the final outcome of the system , based on the knowledge of its present state . to this end, relevant numerical work has been performed for two dimensional systems , .an extension to higher dimensions is in progress .for the model in which only one population can migrate , two more equilibria are possible in addition to those of the full model , i.e. the resident , non - migrating , population can survive just in one patch with the migrating one , and the patch can be either one of the two in the model , equilibria and . the resident population can not outcompete the migrating one , since the equilibria and are both unconditionally unstable .thus , when just one population migrates , the classical principle of competitive exclusion does not necessarily hold neither at the wider metapopulation level , nor in one of the two patches , as shown by the nonvanishing population levels of patch 2 in equilibrium and in patch 1 in equilibrium .the coexistence in one of the two patches appears to be possible since the weaker species can migrate to the other competitor - free environment , thrive there and migrate back to reestablish itself with the competitor in the original environment .but the principle of competitive exclusion can in fact occur also in this model , since the numerical simulations reveal it , consider indeed the equilibrium .however , restrictions in the interpatch moving possibilities of one population might prevent its occurrence .the coexistence of all the populations appears to be always impossible in view of the instability of the equilibrium . 
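Since the basins of attraction decide which of the two mutually exclusive boundary equilibria is reached, a crude but serviceable alternative to the separatrix-approximation algorithm cited above is brute-force classification of a grid of initial conditions. The sketch below does this for a bistable single-patch competition pair, a 2-D stand-in for the full 4-D metapopulation model; all parameter values are assumptions chosen so that the interior equilibrium is a saddle.

```python
import numpy as np
from scipy.integrate import solve_ivp

r, s, k, h, a, b = 1.0, 1.0, 10.0, 10.0, 0.15, 0.15   # hypothetical, bistable regime

def rhs(t, x):
    p, q = x
    return [r*p*(1 - p/k) - a*p*q,
            s*q*(1 - q/h) - b*p*q]

def winner(p0, q0):
    sol = solve_ivp(rhs, (0.0, 400.0), [p0, q0], rtol=1e-8)
    p_end, q_end = sol.y[:, -1]
    return "p" if p_end > q_end else "q"

grid = np.linspace(0.2, 12.0, 15)
basin = [[winner(p0, q0) for p0 in grid] for q0 in grid]
# the separatrix is the boundary between the "p" and "q" regions of this map
print("\n".join("".join(row) for row in reversed(basin)))
```

Increasing one population's emigration rate and repeating the scan shows directly how its basin shrinks or grows, which is how the separatrix figures discussed next can be read.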
for this model where just one population is allowed to migrate , keeping the following demographic parameters fixed , and using the following migration rates we have respectively the following stable equilibria , .the separatrices are pictured in the top row of figure [ fig : separ_onemigr ] , the right frame containing patch 1 and the left one patch 2 . if we change the migration rates , allowing a faster return toward patch 1 , the second equilibrium remains unchanged , but we find instead that the point has moved toward higher and lower population values .the separatrices are plotted in the bottom row of figure [ fig : separ_onemigr ] .it is also clear that the basins of attraction in patch 1 hardly change , while in patch 2 the basin of attraction of the population appears to be larger with a higher emigration rate from patch 2 . correspondingly , the one of becomes smaller in patch 2 , according to what intuition would indicate .when migrations are allowed from patch 1 into patch 2 only , a number of other possible equilibria arise , in part replacing some of the former ones . grantedthat coexistence is once again forbidden for its instability , three new equilibria arise , containing either one or both populations in the patch toward which migrations occur , leaving the other one possibly empty .the principle of competitive exclusion in this case may still occur at the metapopulation level , but apparently coexistence at equilibrium might be possible in the patch toward which populations migrate if the stability conditions ( [ tildee4_stab ] ) coupled with the feasibility conditions ( [ tildee4_feas_b ] ) are satisfied .this appears to be also an interesting result . again exploiting the algorithm of , we investigated also the change in shape of the basins of attraction of the two equilibria and , for this unidirectional migrations model .using once again the demographic parameters ( [ param_demog ] ) , we take at first the migration rates as follows obtaining equilibria and . this result is shown in the top row of figure [ fig : separ_unidir ] , again patch 1 in the right frame and patch 2 in the left one . instead with the choice allowing a faster rate for the population , we again find that the second equilibrium is unaffected , but the first one lowers its population values , becoming , see bottom row of figure [ fig : separ_unidir ] . in this casethe basins of attraction seem to have opposite behaviors . with a higher migration rate for , its basin of attraction in patch 2gets increased , while in patch 1 becomes smaller .this result is in agreement with intuition , in patch 1 the population become smaller and larger instead in patch 2 .we briefly discuss also the model bifurcations for the unidirectional migration model . if and , the only feasible equilibria are , , which are stable under the additional conditions and .when crosses the value and similarly , the two previous equilibria become unstable , and transcritical bifurcations give rise respectively to the equilibria and .the equilibrium may coexist with each one of the previous equilibria , but in this case and must be unstable , whereas and may be stable if their stability conditions hold . in the two particular cases above discussed , of just one population allowed to migrate and of unidirectional migrations , our analysis shows that the standard assumptions used to study configurations in patchy environments may not always hold . 
under suitable conditions, competing populations may coexist if only one migrates freely, or if migrations for both populations are allowed in the same direction and not backwards. this appears to be an interesting result, which might open up new research directions.

, _ approximation of dynamical systems' separatrix curves _ , in t. simos, g. psihoylos, ch. tsitouras, z. anastassi (eds.), numerical analysis and applied mathematics icnaam 2011, aip conf. proc. * 1389 * (2011) 1220-1223; doi: 10.1063/1.3637836.

, competition and species coexistence in a metapopulation model: can fast asymmetric migration reverse the outcome of competition in a homogeneous environment?, journal of theoretical biology * 266 * (2010) 256-263.

[figure fig:separ_onemigr: separatrices of the basins of attraction of the boundary equilibria lying on the axes for the model in which only one population is able to migrate. demographic parameters as in ([param_demog]); right column: patch 1, left column: patch 2; top and bottom rows correspond to the two sets of migration rates discussed in the text.]

[figure fig:separ_unidir: separatrices of the basins of attraction of the boundary equilibria lying on the axes for the unidirectional-migration model. demographic parameters as in ([param_demog]); right column: patch 1, left column: patch 2; top and bottom rows correspond to the two sets of migration rates discussed in the text.]
|
in this paper we present and analyse a simple two-population model with migrations between two different environments. the populations interact by competing for resources. equilibria are investigated. a proof of the boundedness of the populations is provided. a kind of competitive exclusion principle for metapopulation systems is obtained. at the same time we show that the competitive exclusion principle at the local patch level may be prevented from holding by the migration phenomenon, i.e., two competing populations may coexist, provided that only one of them is allowed to move freely or that migrations for both occur in just one direction. * keywords * : populations, competition, migrations, patches, competitive exclusion * ams msc 2010 * : 92d25, 92d40
|
we consider a differential problem of advective viscous flow in a domain , and let a computational grid on with an initial grid step .we consider a sistem of cartesian coordinates for the representation of the nodes . in 1dthe grid is a linear set of _ nodes _ , in 2d is a plane surface where nodes are vertices of squares with edge length equal to , in 3d is a set of adjacent cubes with squares of edge as faces ( see e.g. , ) . the grid is said _ adaptive _ if , during the evolution in time , its points are clustered in regions of high flow - field gradients , so that the step can change ( see e.g. ) .if * x * is the vector of the cartesian coordinates of a node and is the time variable , then for an adaptive grid we have . + we consider a real positive _ grid - field _ on defined on the nodes , i.e. if is an index parameter of the -th node and is the total number of nodes , we have .assume that this field represents a physical or logical property associated to the flow , e.g. the number of fluid particles that interact by molecular forces with the particle on -th node , or the number of possible nodes where a particle can move from time to time using the physical rules of the flow .as other example , in the particular matter of numerical weather prediction the topography of ground can influence heavly the computation of metheorological variables ; its geometry is modelled by special grid using _ terrain - following _ coordinates ( see e.g. ) .another interesting possible interpretation for is the number of values of the velocity field , computed at time on nodes of index , that we could use in a numerical scheme for computing the value on -th node at time with the desidered accuracy ( see ) .+ from these examples we can assume that has high values in the regions of where the flow is turbulent or in general not laminar ; in these cases , e.g. , for a good accuracy of a numerical approximation of the velocity field * u * at time , the number of values , computed at time , for computing the value at -th node can be greater then the number necessary for computing * u * at a node placed , at time , in a region of laminar flow ( see e.g. ) .hence , where the grid step has small values , the grid field can have high values .+ we apply to field the methods of a _ scale - free network _ , known in scientific literature as _ barabsi networks _ ( ) , assuming that the flow at time on a node of the grid depends on the flow computed at time at -th node with a _ probability function _so defined : using a _ continuum approach _ , barabsi and albert ( ) assume that is a continuous real variable and the rate at which it changes is proportional to : in the factor of proportionality is supposed in general constant .but if we want to describe an adaptive grid as a network changing in space and time , we suppose that the parameter is in general a function depending on the node and on time variable : in this case the grid becomes a scale - free network of nodes with _ fitness _ , using the terminology of bianconi and barabsi ( ) . also , the dependence on the grid variable * x * suggests to consider the possibility of a law like ( [ rate1 ] ) with the gradient at first member .+ the analytical dependence of from and is determined by the physical and logical properties of the flow , such as viscosity , density , geometry of domain and the velocity field * u * itself .hence in general we suppose that the mathematical expression of is determined by a _constitutive relation _ such as , e.g. 
, where is the dynamical viscosity of the fluid . in the next sectionwe ll present some possible examples of choices for the analytical shape of the function .+ consider , .from ( [ rate1 ] ) and the fact that by hypothesis the field is positive , we can write if is the set of initial ( ) values of , we have and therefore the previous equation gives the relation between the value of the grid field on the -th node and the variable parameter . we can write it as a field equation in this way : now we state the relation between the variable grid step and the grid field .let and two _ contiguous _ nodes of , e.g. in 1d , or the nodes are two vertices of the same edge of a geometrical cube in 3d .if is the value of grid step between the nodes , according to the previous discussion we assume that depends on the inverse of the arithmetic mean of the grid field values : where is a real positive function .if the grid is very fine , for every exists a topological neighbour such that , . in this casewe can assume and , using ( [ kexp2 ] ) , it is convenient to use the following definition : all the next applications we assume constant and with constant . we discuss some possible constitutive relation for the function .+ _ case 1_. let for all nodes and for every .then , from ( [ kexp1 ] ) , follows that .therefore , from ( [ step1 ] ) , we have , that is the grid is constant in time but it can be etherogeneous . in particular , if , then the grid is homogeneous .this is the case appropriate for a laminar flow , for which , in general , an adaptive grid is not strictly necessary .+ _ case 2_. let . in this case and let . in this casethe values of grid step at time are smaller than the corresponding values at time , so we can compute , as previously discussed , the velocity field of a flow which has a complexity growing in time .the flow is asintotically turbulent in all points of the domain .+ let . in this casethe values of grid step at time are greater than the corresponding values at time .the flow is asintotically laminar in all points of the domain .+ we can consider the particular case for which and . in this case , therefore the grid step is homogeneous in and \ ] ] _ case 3_. according to adaptive mesh refinement methods ( e.g. ) , grid step values can strictly depend on flow velocity gradient .also , if the shape of domain is constant in time and the viscosity of the fluid is a variable in space ( and in time ) , the grid should be refined in the regions of turbulence , and in particular where has small values ( ) .for these reasons we can assume , as first approximation , that the parameter is constant in time and depends only on the ratio , under the hypothesis .using a formal taylor expansion , we write where are constants in all the grid .+ if , then and we obtain the situations described in the previous cases .the flow becomes laminar or turbulent , depending on the sign of .+ in the general case , assuming constant in time , from ( [ kexp2 ] ) we have }\right)\ ] ] let . using the definitions , , and remembering that the coefficients are constants, we can expand to first order the exponential expression and obtain from ( [ step2 ] ) the grid step is consider a fixed . 
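The two ingredients described above, the fitness-weighted growth of the grid field k_i and the constitutive map from the field to the step, can be prototyped in a few lines. Since the exact attachment probability, rate law and function F were all stripped from the source, the sketch below assumes the standard Barabási form Pi(k_i) = k_i / sum_j k_j, the rate law dk_i/dt = eta_i * Pi(k_i), and a power-law F(x) = c*x**gamma applied to x = 1/k_i; all of these, and the numbers, are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
N, T, dt = 200, 5.0, 0.01
k = rng.uniform(1.0, 2.0, N)          # initial grid-field values k_i(0)
k0 = k.copy()
eta = rng.uniform(0.5, 1.5, N)        # node fitnesses eta_i, constant in time here

for _ in range(int(T / dt)):
    Pi = k / k.sum()                  # attachment probability Pi(k_i) = k_i / sum_j k_j
    k = k + dt * eta * Pi             # assumed rate law dk_i/dt = eta_i * Pi(k_i)

def grid_step(field, c=1.0, gamma=1.0):
    """Step from the grid field, assuming F(x) = c * x**gamma with x = 1/k."""
    return c * (1.0 / np.asarray(field, dtype=float)) ** gamma

h = grid_step(k)
rel_growth = np.log(k / k0)           # each node's growth exponent, proportional to eta_i
print("relative growth ranks follow the fitness ranks:",
      bool(np.all(np.argsort(rel_growth) == np.argsort(eta))))
print("grid-step range:", round(float(h.min()), 4), "to", round(float(h.max()), 4))
```

With a constant positive eta this reproduces case 2 (the field grows in time and the step shrinks), while letting eta depend on the local viscosity-to-step ratio gives the behaviour sketched in case 3.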
if , then the grid step for computing the solution at time becomes in general , if , for those regions in where at time the viscosity has very small values , from ( [ case3h1 ] ) follows that this result is coherent with usual conditions of numerical stability for differential problems where the advective terms are dominant ( see e.g. ) .in this section we show how , at least in a simple differential problem where the theorical solution is known , is possible to determine the grid field .+ we consider the one - dimensional diffusion - transport problem ( ) where and are positive constants .if , case of dominant transport , the theorical solution is well approximated by the function which presents a boundary layer effect at ( see fig.1 ) .[ expfig1 ] for a good accuracy of a numerical resolution based on a possible numerical scheme , as finite differences or finite elements , we should impose a condition on the _ local pclet number _ , that is .for the dominant transport situation , the condition implies that a constant grid step could be very small , with large computational cost for the numerical resolution . from fig.1we can see that the theorical solution has significant positive values only for , therefore for the resolution of the boundary layer effect we might construct a grid with a variable step , where is decreasing on , e.g. , $ ] .+ because we want ever smaller when , on the basis of the _ case 2 _ of the preceding section , we suppose that the entities and are constant , and , but from next considerations follows that should be sufficient the ratio be constant .therefore , from ( [ kexp1 ] ) , considering the unique variable of the problem : we suppose for simplicity . then , and from ( [ step2 ] ) follows that , so let , , the node from which we impose the condition . from definition of pclet numberwe have and , with no restriction , we can impose for the -th node . in this way we obtain the required value for : for our aim , the argument of the logarithmic function must to be greater then , so the value of can be choosen so that . notethat , from ( [ bvp1 ] ) , , so that we can consider the equation ( [ m0s ] ) as a constitutive relation of kind ( [ constitutive1 ] ) : using ( [ h ] ) , the adaptive grid nodes greater than can be so determined ( see fig.2 ) : [ expfig2 ] in the numerical case of fig.[plotfig3 ] , we have therefore the number of nodes for this heterogeneous grid is only few greater than the number of nodes for the homogeneous grid with and , as shown in fig.[plotfig3 ] , using the same numerical scheme the solution in the first case is more accurate than the solution in the latter .[ plotfig3 ]we have developed a model of adaptive grid based on a constituive relation between the geometrical step and other physical variables of a differential problem .the basic mathematical hypothesis is a possibile relation connecting the rate or gradient of the step and a probabilistic function which describes the physics or the geometry of the problem . + further developments can regard optimization of the technical scheme and theorical treatment of the accuracy of numerical solutions obtained using this type of adaptive grids .b. becker , m. braack and r. rannacher , _ adaptive finite elements methods for flow problems _ , in foundations of computational mathematics , london mathematical society lecture note series , * 284 * , cambridge university press ( 2001 ) .
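The boundary-layer experiment can be reproduced with a short finite-difference script. The equation itself was stripped in extraction, so the sketch assumes the standard model problem -nu*u'' + b*u' = 0 on (0,1) with u(0)=0, u(1)=1, which has the described layer at the right end; likewise, the paper's exponential grading law is replaced by a simple power-law grading clustered near x = 1. All parameter values are illustrative only.

```python
import numpy as np

nu, b = 0.01, 1.0                              # dominant transport
exact = lambda x: (np.exp(b*x/nu) - 1.0) / (np.exp(b/nu) - 1.0)

def solve_fd(x):
    """Central finite differences for -nu*u'' + b*u' = 0, u(0)=0, u(1)=1, on mesh x."""
    n = len(x)
    A = np.zeros((n, n)); rhs = np.zeros(n)
    A[0, 0] = A[-1, -1] = 1.0; rhs[-1] = 1.0
    for i in range(1, n - 1):
        hm, hp = x[i] - x[i-1], x[i+1] - x[i]
        A[i, i-1] = -nu*2.0/(hm*(hm + hp)) - b/(hm + hp)
        A[i, i]   =  nu*2.0/(hm*hp)
        A[i, i+1] = -nu*2.0/(hp*(hm + hp)) + b/(hm + hp)
    return np.linalg.solve(A, rhs)

N = 41
uniform = np.linspace(0.0, 1.0, N)                    # local Peclet number b*h/(2*nu) > 1 here
graded = 1.0 - (1.0 - np.linspace(0.0, 1.0, N))**3    # nodes clustered near the layer at x = 1

for name, x in (("uniform", uniform), ("graded", graded)):
    err = float(np.max(np.abs(solve_fd(x) - exact(x))))
    print(f"{name:8s} mesh ({N} nodes): max error = {err:.2e}")
```

With the same number of nodes, the graded mesh keeps the local Péclet number below one inside the layer and should therefore give a visibly smaller error than the oscillation-prone uniform mesh, which is the qualitative comparison made in fig.[plotfig3].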
|
in this paper we present a possible model of adaptive grids for the numerical resolution of differential problems, using physical or geometrical properties such as the viscosity or the velocity gradient of a moving fluid. the relation between the values of the grid step and these quantities is based on the mathematical scheme offered by the model of scale-free networks, due to Barabási, so that the step can be connected to the other variables by a constitutive relation. some examples and an application are discussed, showing that this approach can be further developed for the treatment of more complex situations.
|
synchronization is an important phenomenon that occurs in many biological and physical systems . in the brain , the rhythmic oscillations of concerted electrical activity , which are representative for the synchronous firing of neurons ,have been observed in different regions , including the neocortex , hippocampus and thalamus . sinceneural oscillations are associated with many high - level brain functions , the pertinent research has attracted considerable attention in the past decades .it has been proposed that these oscillations not only carry information by themselves , but that they may also regulate the flow of information and assist by its storage and retrieval in neural circuits . in the brain , fast - spiking interneuronsare mutually connected by both inhibitory chemical synapses as well as electrical synapses ( gap junctions ) .the evidence is mounting that networks composed of fast - spiking interneurons could provide synchronization mechanisms by means of which important rhythmic activities , such as the gamma ( : 25 - 100 hz ) rhythm and the mixed theta ( : 4 - 12hz ) and gamma rhythm , can be generated .computational studies indicate that inhibitory and electrical synapses play an important role by the generation of these oscillations .for example , it has been shown that interneuronal networks with solely inhibitory synapses can produce gamma oscillations , but that adding gap junctions to the network can further increases their stability and coherence .it has also been reported that interneuronal networks coupled by both fast and slow inhibitory synapses can produce the mixed theta and gamma rhythmic activity .in addition , several theoretical studies have been performed to provide a deeper understanding of how the inhibitory and electrical synapses promote synchronization amongst coupled neurons .information transmission delays , which are due to the finite propagation speeds and due to time lapses occurring by both dendritic and synaptic processing , are also an inherent part of neuronal dynamics . in particularthe transmission delays of chemical synapses are not to be neglected .physiological experiments have revealed that they can be up to several tenths of milliseconds in length . on the other hand ,the transmission delays introduced by electrical synapses are comparably short , usually not exceeding 0.05 milliseconds , so they are often not taken explicitly into account . in terms of dynamical complexity , the existence of time delays makes a nonlinear system with a finite number of degrees of freedom become an infinite - dimensional one , which may enrich the dynamics , enhance synchronization , and facilitate spatiotemporal pattern formation .although existing studies attest clearly to the fact that information transmission delays have a significant impact on the synchronization of interneuronal networks , to the best of our knowledge the focus has always been on considering only short inhibitory synaptic delays .however , since as noted above , the delays of chemical synapses may be substantial , it is also of interest to consider long inhibitory synaptic delays . 
given that previous studies have shown that different time delay lengths may have rather different effects on the synchronization of coupled nonlinear oscillators , we anticipate that the consideration of long inhibitory synaptic delays may lead to new and inspiring results .to resolve this , we here study the synchronization in an interneuronal network that is coupled by both delayed inhibitory as well as fast electrical synapses , focusing specifically on the effects of inhibitory synaptic delays covering a wide window of values .our simulations reveal that the delayed inhibition not only plays an important role in network synchronization , but that it can also lead to different oscillatory patterns .the comparatively fast gap - junctional coupling , on the other hand , contributes solely to the synchronization of the network but does not affect the emergence of oscillatory patterns .most interestingly , we show that a sufficiently long inhibitory synaptic delay induces a rapid transition from the one - frequency to the two - frequency state , thus leading to the occurrence of a mixed oscillatory pattern .moreover , we also show that the unreliability of inhibitory synapses has a significant impact on both the synchronization and the emergence of oscillatory patterns .our findings thus add to the established relevance of time delays in neuronal networks and highlight the importance of synaptic mechanisms for the generation of synchronized neural oscillations .the reminder of this paper is organized as follows . in section [ sec:2 ] , we present the mathematical model and introduce the synchronization measure .main results are presented in section [ sec:3 ] , while in section [ sec:4 ] we summarize our work and briefly discuss potential biological implications of our findings .we consider a network composed of fast - spiking interneurons .neurons in the network are randomly connected by inhibitory and electrical synapses with probability and , respectively . for simplicity ,all synapses are bidirectional .we do not allow a neuron to be coupled to another neuron more than once by using the same type of synaptic coupling , or a neuron to be coupled with itself .we assume that all electrical synapses are fast , thus considering delays only by the inhibitory synapses .this assumption is reasonable , as we have argued in the introduction .the dynamics of fast - spiking interneurons is described by the wang - buzsaki model .it has a form similar to the classical hodgkin - huxley model , with details as follows : where the three gating variables obey the following equations,\\ \frac{dh_i}{dt}&=\phi\left[\alpha_{h_i}(v_i)(1-h_i)-\beta_{h_i}(v_i)h_i\right],\\ \frac{dn_i}{dt}&=\phi\left[\alpha_{n_i}(v_i)(1-n_i)-\beta_{n_i}(v_i)n_i\right ] .\end{split } \label{eq:2}\ ] ] here is the neuron index , denotes the membrane potential of neuron , and the six rate functions are : , , , , , and . is the synaptic current of neuron due to the interactions with other neurons within the network ( also referred to as internal synaptic current in this paper ) , and is an externally applied current representing the collective effect of inputs coming from the outside of the network . 
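For readers who want to reproduce the single-neuron behaviour of fig. 1, the sketch below integrates one Wang–Buzsáki neuron with a fourth-order Runge–Kutta step. The parameter values and rate functions were stripped from the text above, so the commonly published ones from the original Wang–Buzsáki model are used; the -20 mV spike-detection threshold, the initial state and the drive I0 are assumptions, and the activation variable m is integrated here rather than set to its steady state.

```python
import numpy as np

# Commonly published Wang-Buzsaki parameters (the values in the text were stripped).
C, gNa, ENa, gK, EK, gL, EL, phi = 1.0, 35.0, 55.0, 9.0, -90.0, 0.1, -65.0, 5.0

def rates(V):
    am = -0.1*(V + 35.0) / (np.exp(-0.1*(V + 35.0)) - 1.0)
    bm = 4.0*np.exp(-(V + 60.0)/18.0)
    ah = 0.07*np.exp(-(V + 58.0)/20.0)
    bh = 1.0 / (np.exp(-0.1*(V + 28.0)) + 1.0)
    an = -0.01*(V + 34.0) / (np.exp(-0.1*(V + 34.0)) - 1.0)
    bn = 0.125*np.exp(-(V + 44.0)/80.0)
    return am, bm, ah, bh, an, bn

def deriv(state, I_ext):
    V, m, h, n = state
    am, bm, ah, bh, an, bn = rates(V)
    I_Na = gNa * m**3 * h * (V - ENa)
    I_K  = gK * n**4 * (V - EK)
    I_L  = gL * (V - EL)
    return np.array([(-I_Na - I_K - I_L + I_ext) / C,
                     am*(1.0 - m) - bm*m,
                     phi*(ah*(1.0 - h) - bh*h),
                     phi*(an*(1.0 - n) - bn*n)])

def rk4_step(state, I_ext, dt):
    k1 = deriv(state, I_ext)
    k2 = deriv(state + 0.5*dt*k1, I_ext)
    k3 = deriv(state + 0.5*dt*k2, I_ext)
    k4 = deriv(state + dt*k3, I_ext)
    return state + dt*(k1 + 2.0*k2 + 2.0*k3 + k4)/6.0

dt, T, I0 = 0.01, 1000.0, 1.0                 # ms, ms, muA/cm^2 (I0 is an assumption)
state, spikes, V_prev = np.array([-64.0, 0.05, 0.6, 0.1]), 0, -64.0
for _ in range(int(T/dt)):
    state = rk4_step(state, I0, dt)
    if V_prev < -20.0 <= state[0]:            # upward crossing of the assumed -20 mV threshold
        spikes += 1
    V_prev = state[0]
print("firing rate:", spikes / (T/1000.0), "Hz")
```

Sweeping I0 and recording the firing rate produces a type-I f–I curve of the kind shown in fig. 1.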
in this work , we model the externally applied current as , where is the mean current, is an independent gaussian white noise with zero mean and unit variance , and is the intensity of stochastic fluctuations .the parameters of the wang - buzsaki model assume standard values : / , ms/ , mv , ms/ , mv , ms/ , mv , and .a spike is detected whenever the membrane potential exceeds the threshold of mv .figure [ fig:1 ] shows the firing rate curve of the wang - buzsaki model when the later is driven solely by the externally applied current ( ) .( color online ) firing rate curve of the wang - buzsaki model in dependence on the externally applied current in the absence of additional inputs.,width=264 ] for each neuron , the internal synaptic current consists of two terms , which are + \sum\nolimits_{k}g_{ik}[v_k - v_i ] .\end{split } \label{eq:3}\ ] ] in this equation , the first and second outer sums run over all inhibitory and electrical synapses onto neuron , is the inhibitory synaptic strength from neuron to neuron , is the corresponding inhibitory synaptic variable , mv is the reversal potential for inhibitory synapses , and is the electrical synaptic strength from neuron to neuron . for inhibitory synapses ,once a presynaptic neuron emits a spike , the corresponding is updated after a fixed spike transmission delay , according to .otherwise decays exponentially with a fixed time constant . for simplicity , we set , and throughout this paper , implying that the coupling is identical for the same type of synapses . to characterize the synchronization within the network , a dimensionless synchronization measure is introduced , following .we first compute the time fluctuations of the average membrane potential according to where the sign denotes the average over time and is the average membrane potential at time .subsequently , the population - averaged variance of the activity of each individual neuron is determined according to finally , the synchronization measure is computed as from this it follows that the larger the value of the better the synchronization in the network . the described mathematical model is integrated numerically using the fourth - order runge - kutta algorithm with a fixed time step of ms .this is sufficiently small to ensure an accurate simulation of the wang - buzsaki model . for each set of parameters ,the initial membrane potentials of neurons are uniformly distributed between -70 and 30 mv .the network size is , while and .we always generate both types of synapses , but use to denote the network without the fast gap - junctional coupling .the two parameters that determine the externally applied current are / and ms/ . under these conditions ,the mean firing rate of the wang - buzsaki model is approximately 80 hz in the absence of the internal synaptic current .we perform all simulations up to 3000 ms , and collect the data from 1000 to 3000 ms for further statistical analysis .the reported results , except for the spike raster diagrams , are averages over 30 independent runs .( color online ) spike raster diagrams for different values of the inhibitory synaptic delay .( a ) without fast electrical synapses ( ) .( b ) with fast electrical synapses ( ms/ ) . in all cases we set ms/ and ms . 
from top to bottom , = 0 , 7 , 13 , 18 and 30 ms , respectively .the red rectangle in the middle panel of ( a ) illustrates that , without the fast gap - junctional coupling , the synchronous firing of the last group in each periodic cycle is weak when is near the transition point .the black double arrow line in the bottom panel of ( b ) denotes one periodic cycle for the mixed oscillatory pattern.,width=332 ] we first show elementary simulation results that reveal how the delayed inhibitory and fast gap - junctional coupling influence the synchronous behavior of the considered interneuronal network . in figs .[ fig:2](a ) and [ fig:2](b ) , several typical spike raster diagrams for different values of the inhibitory synaptic delay , without ( ) and with ( ) the fast electrical synapses , are plotted , respectively .presented results show clearly that delayed inhibitory as well as fast electrical synapses play an important role by the synchronization of the network . without the fast gap - junctional coupling , the neuronal firings at are rather disordered .essentially this is because the considered network ( ) is sparse , i.e. , there are not much more links between the neurons ( on average ) than there are neurons constituting the network , and hence can not be easily synchronized in the absence of additional mechanisms that promote the onset of synchronization . by introducing inhibitory synaptic delays , an improvement in the synchronization of the network can be observed .however , it can also be observed that this depends significantly on the length of the inhibitory synaptic delay .only suitable delays can help the network to maintain a high level of synchronization ( compare results obtained with , 18 and 30 ms , and ms in fig .[ fig:2](a ) ) .we demonstrate this quantitatively in fig .[ fig:3](a ) , where also a near periodic oscillatory behavior in can be detected , which may be related to the matching of the inherent neuronal time scales with the duration of the delay , as proposed in ( we will discuss this near - periodic oscillatory behavior further shortly ) . furthermore , the results presented in fig . [ fig:2](b ) suggest that the fast electrical synapses provide a strong mechanism for fostering synchronization . withthe fast gap - junctional coupling turned on , the high - quality synchronization may be observed even at . as the strength of the electrical synaptic coupling ( )is increased , the neuronal firings become more and more synchronized ( see fig . [ fig:3](b ) ) .for sufficiently strong , the measure approaches 1 , indicating that the synchronization is almost perfect . indeed , several previous studies have concluded that gap - junctional coupling is more effective than chemical coupling in leading to highly synchronized states .one possible mechanism for this is that chemical synapses only act while the presynaptic neuron is spiking , whereas the electrical synapses are more efficient and can transmit the membrane potentials of presynaptic neurons to the corresponding postsynaptic neurons at all times .( color online ) ( a ) dependence of the synchronization measure on the delay for different values of the inhibitory synaptic strength . here( b ) dependence of on for different values of the electrical synaptic strength . here ms/ . 
in all cases we use ms .the units of parameters and shown in panels ( a ) and ( b ) are ms/ .the positions of the vertical dashed lines are : , 25 and 37.5 ms , respectively.,width=302 ] ( color online ) dependence of the synchronization measure on the inhibitory synaptic time constant for different values of the inhibitory synaptic strength .( a ) without the fast electrical synapses ( ) , and ( b ) with the fast electrical synapses ( ms/ ) . in all caseswe set ms . the values of the inhibitory synaptic strength considered here are , 0.02 , and 0.05 ms/ , respectively.,width=302 ] results presented in fig .[ fig:2 ] reveal also that the oscillatory pattern is largely influenced by the inhibitory synaptic delay . for suitably short values of a regular oscillatory pattern can be observed .interestingly however , if the delay is sufficiently long , we can observe the emergence of a mixed oscillatory pattern , which implies that there are two main oscillation frequencies present in the network .one is the low frequency of the whole mixed oscillatory pattern and the other is the high frequency of the fast oscillations within each periodic cycle ( see fig .[ fig:2 ] ) . theoretically , the mixed oscillatory pattern appears only for sufficiently long inhibitory synaptic delay . under this condition ,neurons in the synchronized or near - synchronized network have enough time to fire more than once during a whole periodic cycle , before the inhibitory synaptic currents caused by the first synchronous spiking group within the same periodic cycle start to suppress their firing .obviously , the longer the inhibitory synaptic delay , the more groups of synchronous spikes might be contained in each periodic cycle ( see and 30 ms in figs .[ fig:2](a ) and [ fig:2](b ) ) .moreover , once the inhibitory synaptic currents caused by the first synchronous spiking group start to have effect , these currents tend to decrease the membrane potentials of neurons and prolong their firing period . in this case , the following inhibitory synaptic bombardments due to one or more synchronous spiking groups from the same periodic cycle will further suppress neuronal firings , and therefore the neurons in the network can fire again only after these inhibitory effects wear down or become fully absent .this provides a viable mechanism for the emergence of the low - frequency component in the mixed oscillatory pattern . while performing additional simulations , we have discovered that without the long delayed inhibitory synapses , networks of interneurons can not generate the mixed oscillatory pattern even if we consider the non - physiological case of long electrical synaptic delays ( data not shown ) .the above results thus indicate that the inhibitory synaptic delay serves as an important control parameter for the selection of the oscillatory pattern in the network , and that long inhibitory synaptic delays provide a stable mechanism for the emergence of the mixed oscillatory pattern in interneuronal networks .we now return to fig .[ fig:3](a ) and further discuss the near - periodic oscillatory behavior in .it can be determined that the frequency of this near - periodic behavior matches with the oscillation frequency of the high - frequency component quite well .note that we will show later that the oscillation frequency of the high - frequency component is mainly influenced by the parameter . 
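The quantity plotted in the two figures above is the synchrony measure, so a minimal implementation may help; its formula was stripped from the text, and the sketch below assumes the standard Golomb–Rinzel form (variance of the population-averaged potential over the mean single-neuron variance, square-rooted). The synthetic traces at the end are only a sanity check.

```python
import numpy as np

def synchronization_measure(V):
    """V has shape (N_neurons, N_timesteps); returns a value in [0, 1],
    equal to 1 for perfectly synchronous traces (Golomb-Rinzel form, assumed)."""
    mean_V = V.mean(axis=0)                 # population-averaged potential at each time
    sigma_pop = mean_V.var()                # fluctuations of the population average
    sigma_ind = V.var(axis=1).mean()        # population-averaged single-neuron variance
    return float(np.sqrt(sigma_pop / sigma_ind))

rng = np.random.default_rng(1)
t = np.linspace(0.0, 2.0, 2000)
common = np.sin(2.0*np.pi*40.0*t)           # shared 40 Hz oscillation
for noise in (0.1, 1.0, 5.0):
    V = common + noise*rng.standard_normal((100, t.size))
    print(f"noise = {noise}: measure = {synchronization_measure(V):.3f}")
```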
in the narrow regions of where the minima of appear (note that these regions also correspond to the transition points for the number of synchronously spiking groups contained in each periodic cycle ) , only a limited amount of neurons will participate in the last group of synchronous firings within each periodic cycle ( see the red rectangle in the middle panel of fig .[ fig:3](a ) ) . this can be attributed to the matching between the synaptic delay and the high - frequency component , which ultimately causes the near - periodic behavior in the synchronization measure .we have also performed additional numerical simulations by using other models of neuronal dynamics , such as the model by izhikevich with fast - spiking dynamics and the standard hodgkin - huxley model , and we have observed qualitatively identical results , thus confirming the generality of this phenomenon ( see also ) .on the other hand , our results also reveal that the fast gap - junctional coupling tends to suppress the occurrence of such near - periodic oscillations , which may be attributed to the overall promotion of synchronization ( see figs .[ fig:3](b ) ) .indeed , if the electrical coupling is sufficiently strong this phenomenon disappears altogether because then the neurons in the network are perfectly synchronized .in addition to the time delay , we also find that the synchronization depends significantly on the other two important inhibitory synaptic parameters , which are the strength and the time constant .figures [ fig:4](a ) and [ fig:4](b ) depict the synchronization measure as a function of for different values of , without ( ) and with ( ) the fast electrical synapses , respectively . in the absence of fast gap - junctional coupling, there exists an optimal region of in each depicted dependence of , which implies that the network can support synchronization optimally only for intermediate values of ( see also for related results ) . in this case, a strong inhibitory synaptic strength can drive the network towards a high - level of synchronization at the corresponding optimal value of .however , with the increasing of , it can also be observed that the top plateau region of the curve becomes narrower and shifts to the left ( the direction of short ) .an explanation why longer can no longer produce high values of is as follows .due to the heterogeneity of connectivity , some neurons in the considered network will have more inhibitory synaptic inputs than others . for long , the slowly decaying synaptic inhibition accumulates in time and thus may lead to a tonic level of hyperpolarizing currents that cancel the external depolarizing currents by neurons with more inhibitory inputs .this will suppress or even fully disable the firing of such neurons , which in turn means that if the synaptic time constant is too long , the synchronization will deteriorate significantly . at a fixed ,a large value of will introduce more inhibition to the network , and thus a relatively shorter will be needed to impair synchronization . as a result , although strong can enhance the synchronization in the corresponding optimal region , they may also reduce the size of this region . 
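The network frequencies discussed here and in the following paragraphs can be read off from the pooled spike train; one simple estimator, sketched below, takes the power spectrum of a 1-ms population-rate histogram and reports the dominant non-DC peak together with its band. The band edges follow the theta/beta/gamma ranges given in the text; the synthetic 40 Hz burst train at the end is only a check of the estimator, not data from the model.

```python
import numpy as np

def dominant_frequency(spike_times, t_max, bin_ms=1.0):
    """Dominant network frequency (Hz) from pooled spike times (in ms)."""
    edges = np.arange(0.0, t_max + bin_ms, bin_ms)
    rate, _ = np.histogram(spike_times, bins=edges)
    rate = rate - rate.mean()
    power = np.abs(np.fft.rfft(rate))**2
    freqs = np.fft.rfftfreq(rate.size, d=bin_ms/1000.0)
    return float(freqs[1:][np.argmax(power[1:])])      # skip the DC bin

def band(f):
    if 4.0 <= f < 12.0:
        return "theta"
    if 12.0 <= f < 25.0:
        return "beta"
    if 25.0 <= f <= 100.0:
        return "gamma"
    return "outside the bands considered here"

rng = np.random.default_rng(2)
bursts = np.arange(0.0, 2000.0, 25.0)                   # one population burst every 25 ms
spikes = np.repeat(bursts, 100) + rng.normal(0.0, 2.0, bursts.size*100)
f = dominant_frequency(spikes, t_max=2000.0)
print(f"{f:.1f} Hz -> {band(f)}")
```

For the mixed pattern, inspecting the two largest peaks of the same spectrum separates the slow envelope from the fast intra-cycle component.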
on the other hand , adding the fast gap - junctional coupling to the network , as expected ,will enhance the synchronous firing of neurons .this enhancement is quite remarkable , even if the gap - junctional coupling is rather weak , as shown in fig .[ fig:4](b ) .further increasing the strength of gap - junctional coupling can lead to the perfect synchronization ( data not shown ) .several previous experiments have shown that some of the inhibitory synapses between interneurons can have rather slow synaptic kinetics .therefore , to some extent , our results suggest that the fast electrical synapses might be essential for taming desynchronization if an interneuronal network contains a considerable number of slow inhibitory synapses .( color online ) dependence of the oscillation frequency on the delay for different levels of inhibition . in all cases we set ms/ .the units of parameters and shown here are ms/ and ms , respectively .the oscillation frequency is divided into three bands : theta band ( : 4 - 12 hz ) , beta band ( : 12 - 25 hz ) , and gamma band ( : 25 - 100 hz ) .different colors ( shades of gray ) refer to different numbers of synchronously spiking groups contained in each periodic cycle.,width=332 ] figure [ fig:5 ] shows how the oscillation frequency depends on the inhibitory synaptic delay for different levels of inhibition .as can be observed , the oscillation frequency of the considered interneuronal network is determined by both the inhibitory synaptic delay as well as the inhibition . in the short region ,the oscillations are characterized by a single frequency . in this case , increasing and can reduce the oscillation frequency from the band to band ( see fig .[ fig:5 ] ) .once exceeds a critical time delay , we observe that the network oscillations transit from the one - frequency state to the two - frequency state , indicating the emergence of the mixed oscillatory pattern .this transition is rapid and stable with the aid of fast gap - junctional coupling . for the mixed oscillatory pattern ,our results show that both and influence the low - frequency component , but only has a significant effect on the high - frequency component .when is short , the inhibitory synaptic currents from one periodic cycle decay fast , so that they may completely vanish before the firing of neurons enters into the next periodic cycle and almost do not influence the neuronal firing in the next periodic cycle . thus , in this case , the critical time delay is approximately 12.5 ms , and based on the same reason , the number of synchronous spiking groups contained in each periodic cycle is also increased once about every 12.5 ms .the above analysis suggests that the oscillation frequency of the high - frequency component is around 80 hz ( in the band ) for short ( see ms in fig [ fig:5 ] ) , corresponding to the firing rate of a single wang - buzsaki neuron that is driven solely by the considered externally applied current .if is sufficiently long , the inhibitory synaptic currents from one periodic cycle can persist to a certain extent even after the neuronal firing enters into the next periodic cycle .these remaining inhibitory synaptic currents will suppress the neuronal firing in the next periodic cycle , and thus increase the firing interval between the first and the second synchronous spiking groups in the next periodic cycle .therefore , the firing intervals of the high - frequency component are not perfectly identical , i.e. 
, the first firing interval is slighter larger than the other firing intervals .this in turn yields a relatively smaller average frequency of the high - frequency component . as a result ,for long the system needs a relatively longer to generate the mixed oscillatory pattern , and it also exhibits a slightly smaller frequency in the high - frequency component of its output ( see ms in fig [ fig:5 ] ) .moreover , our results also show that the frequency of the whole mixed oscillatory pattern is quite low , even in the case of weak inhibition ( small and short values of ) . for sufficiently long values of oscillation frequency can be maintained in the band quite efficiently .the mixed theta and gamma rhythm is believed to play an important role in brain cognitive functions .the traditional viewpoint is that interneuronal networks with fast and slow inhibitory synaptic dynamics are the basic neural circuits to generate this special type of neural oscillations .our results provide a new insight related to this , which is that long transmission delays of inhibitory synapses may also lead to the mixed theta and gamma rhythm in interneuronal networks .we note that this mechanism is still functional even if only some ( not all ) of the inhibitory synapses are considered to have long transmission delays , provided only that the delayed inhibitory synaptic currents are strong enough .( color online ) effects of the time constant of the recovered variable on the synchronization and the emergence of oscillatory pattern in the interneuronal network .( a ) without the fast electrical synapses ( ) .( b ) with the fast electrical synapses ( ms/ ) . in all caseswe set ms , ms/ , ms , ms , and .left panel : from top to bottom , , 120 , 250 , and 400 ms , respectively .right panel : from top to bottom , , 150 , 320 , and 600 ms , respectively.,width=332 ] finally , we examine how the unreliability of inhibitory synapses influences the synchronization and oscillatory patterns in the studied interneuronal network .this investigation is carried out because the synaptic transmission through real chemical synapses is indeed to a degree unreliable , and also because several previous studies have advocated that the unreliable synapses may play important functional roles in neural computation . in principle , the unreliability of chemical synapses can be explained by the phenomenon of probabilistic transmitter release , which has been confirmed by biological experiments . typically , the synaptic unreliability is associated with synaptic depression , which can be simulated by a well - established phenomenological model proposed in . in this model , three parameters , , and , which denote the fractions of synaptic resources in the recovered , active , and inactive states , are employed and their dynamical equations are given by : , , and . here is the dirac delta function , gives the timing of presynaptic spikes , is the time constant of the inactive variable , is the time constant of the recovered variable , and describes the utilization of synaptic efficacy . herewe apply this model to modulate the updating of synaptic conductance as follows : whenever a presynaptic neuron fires a spike , the corresponding postsynaptic conductances are increased instantaneously after a fixed spike transmission delay , according to ; otherwise decays exponentially with a fixed time constant . 
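A compact way to see how the depression model degrades synaptic reliability is to drive it with a fixed presynaptic train and record how much resource is released at each spike. The paper's values of the time constants and of the utilization parameter were stripped, so tau_in = 3 ms and u = 0.5 below are assumptions, as is the 80 Hz drive; the kinetics follow the usual Tsodyks–Markram scheme with recovered (x), active (y) and inactive (z) fractions.

```python
import numpy as np

tau_in, u, dt = 3.0, 0.5, 0.01          # ms, utilization, Euler step (assumed values)

def released_per_spike(spike_times, tau_rec, t_max):
    """Resources released (u*x) at each presynaptic spike, Tsodyks-Markram kinetics."""
    x, y, z = 1.0, 0.0, 0.0             # recovered, active, inactive fractions (x+y+z = 1)
    pending = list(spike_times)
    out, t = [], 0.0
    while t < t_max:
        if pending and t >= pending[0]:
            pending.pop(0)
            rel = u * x                 # spike-triggered jump: recovered -> active
            x -= rel
            y += rel
            out.append(rel)
        dx = z/tau_rec                  # continuous kinetics: active -> inactive -> recovered
        dy = -y/tau_in
        dz = y/tau_in - z/tau_rec
        x, y, z = x + dt*dx, y + dt*dy, z + dt*dz
        t += dt
    return out

train = np.arange(5.0, 105.0, 12.5)     # 80 Hz presynaptic train
for tau_rec in (100.0, 400.0):
    amps = released_per_spike(train, tau_rec, t_max=110.0)
    print(f"tau_rec = {tau_rec:5.0f} ms: first/last release = {amps[0]:.3f}/{amps[-1]:.3f}")
```

A longer tau_rec leaves fewer recovered resources between spikes, so the later releases (and hence the conductance increments they would drive) are smaller, i.e. the stronger depression that the text identifies with a less reliable synapse.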
in the following simulations , we set ms and , and change the variable to control the synaptic depression .a longer corresponds to a stronger synaptic depression , and therefore denotes a lower level of synaptic reliability .( color online ) dependence of the oscillation frequency on for different values of . in all cases we set ms/ , ms/ , ms , ms , and .the values of considered here are , 15 , and 25 ms , respectively.,width=313 ] in figs .[ fig:6](a ) and [ fig:6](b ) , we present several typical spike raster diagrams for different values of , without ( ) and with ( ) the fast electrical synapses , respectively .we choose the inhibitory synaptic delay to equal ms , ensuring that the network will be in the mixed oscillatory pattern when ms .results presented in fig . [ fig:6 ] demonstrate that the unreliability of inhibitory synapses has a great impact on both the network synchronization and the emergence of oscillatory patterns . in the absence of fast gap - junctional coupling, the synchronization reduces markedly with increasing . in this case, high synaptic unreliability ( long ) leads to insufficient synaptic information interaction , which largely deteriorates network synchronization and causes the neural oscillations to disappear completely ( see ms in fig .[ fig:6](a ) ) . with the fast gap - junctional coupling incorporated , we find that the network synchronization can be maintained in the majority of the region . again, this is because the gap - junctional coupling itself can provide an effective mechanism for network synchronization .however , our results also show that the considered network needs a certain level of inhibitory synaptic reliability for the mixed oscillatory pattern to be preserved . for sufficiently long , it can be observed that the mixed oscillatory pattern transforms to the regular oscillatory pattern due to the lack of inhibition ( see ms in fig .[ fig:6](b ) ) .the transitions in the oscillatory patterns can be observed more clearly from the data presented in fig .[ fig:7 ] .these findings suggest that the unreliability of inhibitory synapses might also provide a flexible mechanism for controlling the switch between different oscillatory patterns in interneuronal networks .in summary , we have employed a computational approach with the aim of investigating the complex synchronous behavior in interneuronal networks that are coupled by delayed inhibitory and fast electrical synapses .we have shown that these two types of synaptic coupling play an important role in warranting network synchronization .in particular , the considered network can achieve a high level of synchronization either by means of a suitable tuning of the inhibitory synaptic delay , by enhancing the strength of electrical synapses , or by means of both . on the other hand ,our simulations have revealed that only delayed inhibition significantly influences the emergence of oscillatory patterns , while electrical synapses play at most a side role by this phenomenon . in particular , we have shown that short inhibitory delays evoke regular oscillatory patterns , while sufficiently long delays can lead to an abrupt emergence of mixed oscillatory pattern . 
by analyzing the oscillation frequencies, we found that the considered interneuronal network can generate both types of oscillations in physiologically relevant frequency bands , such as the gamma rhythm and the mixed theta and gamma rhythm .this fact might have biological implications as these rhythmic activities are frequently associated with fast - spiking interneurons , and are also believed to play prominent functional roles in cognitive tasks .lastly , we have also demonstrated that the unreliability of inhibitory synapses plays an important role by the synchronization of the network as well as by the emergence of oscillatory patterns .more precisely , we have shown that high levels of unreliability destroy synchronization , and that a minimal level of reliability is needed for the emergence and stability of the mixed oscillatory pattern .we hope that the presented results will improve our understanding of the synaptic mechanisms that are responsible for the generation of synchronous oscillations in the neural tissue . indeed, our findings suggest that delayed inhibitory synapses are a viable candidate for controlling the emergence of oscillatory patterns . depending on the actual biological circumstances , the same interneuronal ensembles may produce neural oscillations with different patterns in an adaptive way through the modulation of synaptic transmission .we also hope that this study will inspire further research on this topic , in particular by taking into account additional physiological properties of neuronal networks , such as the anatomical connectivity and distance - dependent synaptic information transmission delays .this research was supported by the national natural science foundation of china ( grant no .11172017 ) and the slovenian research agency ( grant no .j1 - 4055 ) .d. g. acknowledges the financial support from the university of electronic science and technology of china .a. t. winfree , _ the geometry of biological time _ ( springer , new york , 1980 ) ; a. pikovsky , m. rosenblum , and j. kurths , _ synchronization : a universal concept in nonlinear sciences _ ( cambridge university press , cambridge , 2001 ) ; a. arenas , a. daz - guilera , j. kurths , y. moreno , and c. zhou , phys .469 , 93 ( 2008 ) .j. j. hopfield , nature 376 , 33 ( 1995 ) ; g. buzsaki and j. j. chrobak , curr .neurobiol . 5 , 504 ( 1995 ) ; m. whittington , r. d. traub , n. kopell , b. ermentrout , and e. h. buhl , j. neurosci .38 , 315 ( 2000 ) ; a. b. l. tort , r. w. komorowski , j. r. manns , n. kopell , and h. eichenbaum , proc .usa 106 , 20942 ( 2009 ) .s. ostojic , n. brunel , and v. hakim , j. comput .26 , 369 ( 2009 ) ; s. coombes , siam j. appl . dyn .syst . 7 , 1101 ( 2008 ) ; n. brunel and v. hakim , neural comput .11 , 1621 ( 1999 ) ; f. skinner , n. kopell , and e. marder , j. comput .neurosci . 1 , 69 ( 1994 ) .w. gerstner and w. m. kistler , _ spiking neuron models : single neruons , populations , plasticity _ ( cambridge university press , cambridge , 2002 ) ; e. kandel , j. schwartz , and t. jessell , _ principles of neural science _( elsevier , amsterdam , 1991 ) .h. a. swadlow , j. neurophysiol .54 , 1346 ( 1985 ) ; h. a. swadlow , j. neurophysiol .68 , 605 ( 1992 ) ; a. roxin , n. brunel , and d. hansel , phys .94 , 238103 ( 2005 ) ; c. masoller , m. c. torrent , and j. garca - ojalvo , phys .e 78 , 041907 ( 2008 ) ; c. masoller , m. c. torrent , and j. garca - ojalvo , phil .r. soc . a 367 , 3255 ( 2009 ); t. perez , v. eguluz , and a. 
arenas , chaos 21 , 025111 ( 2011 ) ; v. eguluz , t. perez , j. borge - holthoefer , and a. arenas , phys . rev .e 83 , 056113 ( 2011 ) .m. dhamala , v. k. jirsa , and m. ding , phys .92 , 074104 ( 2004 ) ; q. wang , g. chen , and m. perc , plos one 6 , e15851 ( 2011 ) ; q. wang , m. perc , z. duan , and g. chen , phys .e 80 , 026206 ( 2009 ) ; o. v. popovyh , s. yanchuk , and p. a. tass , phys .107 , 228102 ( 2011 ) ; z. wang , h. fan , and k. aihara , phys .e 83 , 051905 2011 .i. franovic and v. miljkovic , chaos , solitons fractals 44 , 122 ( 2011 ) ; o. dhuys , i. fischer , j. danckaert , and r. vicente , phys .e 83 , 046223 ( 2001 ) ; j. zhou and z. liu , phys .e 77 , 056213 ( 2008 ) .r. maex and e. de schutter , j. neurosci .23 , 10503 ( 2003 ) ; m. bartos , i. vida , m. frotscher , a. meyer , h. monyer , j. r. p. geiger , and p. jonas , proc .usa 99 , 13222 ( 2002 ) ; n. brunel and x. j. wang , j. neurophysiol .90 , 415 ( 2003 ) .r. t. canolty , e. edwards , s. s. dalal , m. soltani , s. s. nagarajan , h. e. kirsch , m. s. berger , n. m. barbaro , and r. t. knight , science 313 , 1626 ( 2006 ) ; o. jensen , neurosci .139 , 237 ( 2006 ) ; k. m. kendrick , y. zhan , h. fischer , a. u. nicol , x. zhang , and j. feng , bmc neurosci . 12 , 55 ( 2011 ) . b. katz , _ the release of neural transmitter substances _ ( liverpol university press , liverpool , 1969 ) ; m. abeles , _ corticonics : neural circuits of the cerebral cortex _( cambridge university press , new york , 1991 ) ; d. k. smetters and a. zador , current biology 6 , 1217 ( 1996 ) .m. tsodyks and h. markram , proc .usa 94 , 719 ( 1997 ) ; m. tsodyks , k. pawelzik , h. markram , neural comput .10 , 821 ( 1998 ) ; a. morrison , m. diesmann , and w. gerstner , biol .98 , 459 ( 2008 ) .
|
networks of fast - spiking interneurons are crucial for the generation of neural oscillations in the brain . here we study the synchronous behavior of interneuronal networks that are coupled by delayed inhibitory and fast electrical synapses . we find that both coupling modes play a crucial role in the synchronization of the network . in addition , delayed inhibitory synapses affect the emerging oscillatory patterns . by increasing the inhibitory synaptic delay , we observe a transition from regular to mixed oscillatory patterns at a critical value . we also examine how the unreliability of inhibitory synapses influences the emergence of synchronization and the oscillatory patterns . we find that low levels of reliability tend to destroy synchronization , and moreover , that interneuronal networks with long inhibitory synaptic delays require a minimal level of reliability for the mixed oscillatory pattern to be maintained .
|
the increasingly important role played by information technology and by the ubiquity and success of web - based retail shops is rapidly transforming our lives and buying patterns , and is producing a huge quantity of detailed data sets about customers' preferences and habits . the availability of such data sets has made it possible to study in a quantitative way how people select items in several different scenarios such as , for instance , how they choose movies to watch , books to read , or food to eat . in most of the cases , the number of different items available on an online retail shop is so large that it is extremely difficult to have a clear idea of the specific products that would better fit the taste of each customer . hence the necessity to devise intelligent automatic systems that provide useful recommendations , based for instance on the knowledge of previous purchases made by users . given its practical importance , the study of recommendation systems is nowadays a very active research topic , with relevant contributions from different fields including computer science , economics , sociology , complex networks , and engineering . the natural framework to represent selection or purchasing patterns is by means of a bipartite graph , namely a graph consisting of two distinct classes of nodes ( associated respectively with users and objects ) in which two nodes belonging to different classes are connected by an edge if the corresponding user has chosen or purchased that particular object . within this framework , a recommendation is no more than the suggestion of a ( relatively small ) set of objects in which a specific user might be interested , and corresponds to a set of new potential edges in the bipartite graph . in many cases , an object is recommended to a user based on her similarity with other users , so that the definition of appropriate similarity measures is crucial for the development of efficient _ personalized recommendation systems_. various recommendation systems and algorithms have been proposed over the years , such as collaborative filtering ( cf ) , methods based on diffusion across the user - object network , and hybrid ( parametric ) combinations of different algorithms . in most of the cases , the quantification of similarity between two users is based on the number of objects which have been chosen by both users in the past . however , it is also possible to define a similarity between two objects based on the number of users who have chosen them . in the literature , recommendations based on user similarity have not always been properly distinguished from those based on object similarity , and the predictions provided by these two types of recommendation systems have usually been compared while disregarding the different nature of the similarity measures involved .
the question of which similarity measure is the most reliable in providing tailored and accurate recommendations is still a matter of open debate , and it is not yet clear how to choose one similarity definition or another for a specific recommendation task . this paper provides a contribution in this direction . in particular , we focus here on the duality user - similarity versus object - similarity , showing that it is possible to improve the quality of recommendation by making a combined use of the two classes of similarity . we start by proposing two new definitions of similarity based on heuristic arguments , and then we compare the accuracy of recommendations based on these definitions with the accuracy of other methods proposed in the literature . the first measure we propose takes into account the popularity of objects and the heterogeneity of user selection patterns , while the second one is based on the concept of pearson correlation generalized to the case of binary vectors . we then show that any definition of similarity between users induces a definition of similarity between objects , and vice versa . this fact actually increases the number of different possible similarity definitions , and allows us to consider recommendation methods which combine user - similarities with object - similarities . we also test the robustness of different similarity measures against the presence of noise , by adding an increasing percentage of random edges to the actual bipartite graph , and we show that the measure based on generalised pearson correlation proposed in the paper is able to filter out noise more effectively than most of the other existing similarity measures . the paper is organized as follows . in section [ measures ] we review different similarity measures proposed in the literature , we introduce two new definitions of similarity , and we show how to associate a recommendation score to an object starting from a given similarity definition . in section [ validation ] we provide a brief description of three data sets corresponding to user / object associations in different contexts , and we validate the performance of different recommendation strategies on the corresponding bipartite networks . in section [ sec : specular ] we show how a measure of similarity between users can be transformed into an analogous measure of similarity between objects . we then investigate recommendation methods which combine the recommendation scores obtained from similarities between users and between objects . finally , in section [ randomization ] we study how the presence of spurious information in the data sets can affect the performance of recommendation , and we show that some recommendation strategies actually perform better than others in noisy data sets . let us consider a set of users and objects , where each user is associated with a subset of the objects she has expressed a preference for . this is for instance the case of users buying objects from an online retail shop , where each user is associated with each of the items she has bought from that website . such systems can be naturally represented by means of bipartite graphs , where users and objects are considered as two distinct classes of nodes . a bipartite graph can be described by an adjacency matrix whose entry is equal to 1 if and only if user is associated with object ( for instance , because has bought that object ) , and otherwise . notice that in a bipartite graph each edge always connects one user with one object .
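as a minimal illustration of this representation ( with a toy selection list of our own , not one of the data sets analysed below ) , the adjacency matrix and the two degree sequences can be built as follows :

```python
import numpy as np

# toy user-object selections: (user index, object index) pairs
edges = [(0, 0), (0, 1), (1, 1), (1, 2), (2, 0), (2, 1), (2, 3)]
n_users, n_objects = 3, 4

a = np.zeros((n_users, n_objects), dtype=int)   # a[i, alpha] = 1 iff user i collected object alpha
for i, alpha in edges:
    a[i, alpha] = 1

k_user = a.sum(axis=1)     # number of objects collected by each user
k_obj  = a.sum(axis=0)     # number of users that collected each object

print(a)
print("user degrees:", k_user, " object degrees:", k_obj)
```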
in the following, latin subscripts are associated to users , whereas greek ones are associated to objects .the total number of objects collected by a user is equal to the number of edges incident on the corresponding node of the graph , i.e. to the degree .similarly , the degree of object , defined as , is equal to the total number of users that have collected that object . within this framework , making a _ recommendation _ for user corresponds to compiling a list of objects which have not already been chosen ( or bought ) by user but to which might be interested . in other words ,a recommendation is just a proposal of new potential edges of the bipartite graph whose one endpoint is node .the main hypothesis on which almost all recommendation systems rely is that the set of objects actually collected ( or bought ) by user represents a sample of her tastes and preferences , and can therefore be used to compile a profile of user and to predict which kind of objects might be interested .consequently , each recommendation systems relies on some measure of similarity . in general , it is possible to define a similarity for the ( ordered ) pair of users and and also a similarity between the pair of objects and , and various different definitions have been proposed in the literature ._ similarity measures . _ a very simple way of quantifying the similarity between user and user is by counting the number of objects that they have in common : one of the limitations of is that it does nt take into account the differences in the total number of objects collected by each user .this problem can be somehow alleviated by using the so - called jaccard similarity , defined as the ratio between the number of items collected by both users and , and the sum of the degrees of the two users : another widely used similarity measure is the one of the _ collaborative filtering _ ( cf ) approach , which is defined as in eq .( [ s - min ] ) , the similarity measure is proportional to the number of objects users and have in common , and inversely proportional to the smallest of the two degrees , i.e. to . in this way , if user has collected exactly one object , which has also been collected by who instead has degree , then , i.e. the similarity between two users is effectively determined by the user with the smallest degree .the jaccard and the collaborative filtering similarity do not take into account another type of heterogeneity , namely the fact that not all objects have the same popularity .intuitively , objects that have been collected by a relative large number of users ( in a limiting case , by all users ) , do not provide useful information for a personalised recommendation , for the simple reason that they are common to too many users , and therefore the fact that one user has collected them does not tell much about her tastes .hence , it might be a good idea to discount the contribution of an object to the similarity between two users by a function of the degree of the object . 
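before moving to degree - discounted measures , the three similarities introduced so far can be computed directly from the binary adjacency matrix ; the sketch below follows the verbal definitions given above ( in particular , the jaccard - type measure is normalized by the sum of the two degrees , as stated in the text , although the more common convention divides by the size of the union ) .

```python
import numpy as np

def user_similarities(a):
    """common-neighbour, jaccard-type and min-degree (cf) similarities between users."""
    a = np.asarray(a, dtype=float)
    k = a.sum(axis=1)                      # user degrees
    m = a @ a.T                            # m[i, j] = number of objects shared by users i and j
    # jaccard-type normalisation as stated in the text: shared objects divided by the
    # sum of the two degrees (the usual convention divides by k_i + k_j - m_ij instead)
    den_j = k[:, None] + k[None, :]
    s_jac = np.divide(m, den_j, out=np.zeros_like(m), where=den_j > 0)
    # collaborative-filtering normalisation: divide by the smaller of the two degrees
    den_cf = np.minimum(k[:, None], k[None, :])
    s_cf = np.divide(m, den_cf, out=np.zeros_like(m), where=den_cf > 0)
    return m, s_jac, s_cf

a = np.array([[1, 1, 0, 0],
              [0, 1, 1, 0],
              [1, 1, 0, 1]])
m, s_jac, s_cf = user_similarities(a)
print("shared objects:\n", m)
print("jaccard-type similarity:\n", s_jac.round(3))
print("cf (min-degree) similarity:\n", s_cf.round(3))
```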
the so called network - based inference ( nbi ) recommendation method is based on a measure of similarity which takes into account the heterogeneity of users and objects ( this recommendation strategy is also called _ probabilistic spreading _ in a subsequent paper ) .in this case , the similarity measure is defined as this is a similarity between objects , where the contribution of the user which collects the two objects and is discounted by the degree of that user , and the whole sum is divided by the degree of one of the two objects , according to the _ resource - allocation _ procedure defined by the authors in ref . .it is worth noting that this definition of similarity , like the analogous one investigated in , is asymmetric , meaning that , and .though asymmetry is not in general an issue for the recommendation task , it has been shown that better performance can be achieved by using symmetrized versions of these measures .nevertheless , nbi has proved to be a quite reliable recommendation method , and in the following we will consider it as a reference to quantify the effectiveness of the recommendation strategies we propose .the two new recommendation methods we propose in this paper are based on symmetric similarity measures . specifically , the first measure we propose is we call it maximum degree weighted ( mdw ) similarity because the total number of objects collected by both and is weighted by the maximum of the degrees of the two users .moreover , the contribution of object is weighted by its degree .we note here that some recent studies have investigated the effect of a tunable power - law function of the degree , i.e. of similarity measures in which the contribution due to object is divided by . in the following, we will briefly explain the rationale behind eq .( [ s - mdw ] ) .first of all , the contribution to the similarity measure of each object collected by both users is weighted with the degree of the object . in this way, popular objects will provide smaller contributions to the similarity between users .in particular , the value in the denominator allows to obtain a maximum contribution to similarity ( exactly equal to 1 ) if and only if and .this takes into account the very special case in which and are the only two users who have collected a certain object .secondly , the similarity measure is divided by the maximum of the degrees of the two users .as we explained above , this choice allows to properly take into account the existing heterogeneity in the number of selection made by each user .for instance , if we consider the similarity defined in eq .( [ s - min ] ) and we assume that users and have degree and and have exactly one object in common , then the contribution of that object to the similarity between the two users would be equal to .however , the contribution of the only object in common between and is equal to also when , and , despite one would argue that in the latter case the two users are more similar than in the former . by dividing for the maximum of the degrees of the two users , the similarity measure given in eq .( [ s - mdw ] ) assigns a higher value of similarity to the two users in the latter case ( when and we get ) than in the former case ( i.e. , when and , for which we obtain ) . 
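since the defining formula of the mdw similarity is not reproduced above , the following sketch implements our reading of the verbal description : each shared object contributes a weight 1/(k_alpha - 1) , and the weighted count is divided by the larger of the two user degrees . this reproduces the limiting cases discussed in the previous paragraph , but the exact normalisation remains an assumption on our part .

```python
import numpy as np

def s_mdw(a):
    """maximum-degree-weighted user similarity (our reading of the definition above)."""
    a = np.asarray(a, dtype=float)
    k_user = a.sum(axis=1)
    k_obj = a.sum(axis=0)
    # per-object weight 1/(k_alpha - 1); objects collected by a single user
    # cannot be shared by two users, so they are simply given zero weight here
    w = np.where(k_obj > 1, 1.0 / np.maximum(k_obj - 1, 1), 0.0)
    shared = (a * w) @ a.T                       # weighted count of shared objects
    norm = np.maximum(k_user[:, None], k_user[None, :])
    return np.where(norm > 0, shared / norm, 0.0)

a = np.array([[1, 1, 0, 0],
              [0, 1, 1, 0],
              [1, 1, 0, 1]])
print(s_mdw(a).round(3))
```

with this convention the similarity of two degree - one users sharing their unique object equals 1 , while the same shared object contributes much less when one of the two users has a large degree , in line with the argument above .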
a second similarity measure we propose here is based on the pearson correlation coefficient between binary vectors , and is defined as follows : this measure , which is denoted in the following as the binary pearson ( bp ) similarity , is based on the fact that the row of the adjacency matrix represents the profile vector of user , i.e. the set of objects selected by the user . if we have two users , and , who have collected and objects in total , respectively , and we consider the corresponding profile vectors and , then we have : where is the number of objects collected by both and . therefore the pearson correlation coefficient between the two vectors is : which is identical to eq . ( [ s - p ] ) . this similarity measure has some remarkable properties . first of all , it is invariant with respect to a rescaling of the system , i.e. with respect to a transformation , , , and , where is a positive integer . furthermore , can be interpreted in terms of the hypergeometric distribution . indeed , the mean value of is and the variance is . therefore is proportional to the standard score associated with observation according to the hypergeometric distribution : this equation reveals that is conceptually different from all the other similarity measures introduced above . indeed , according to , the similarity between two users does not depend only on the number of objects selected by both users , , but it depends on the difference between and the number of shared objects that is expected under the hypothesis that the two users have picked the objects at random . therefore can also take negative values , and this fact influences the way in which a personalized recommendation value is obtained , as discussed in the remainder of this section . _ constructing recommendation lists . _ once we have assigned a similarity value to each possible pair of users in the system , using a certain similarity measure , we need an algorithm to construct a recommendation list , i.e. a list of suggested objects which have not yet been collected by a certain user . the simplest way of constructing a recommendation list is the _ global ranking method _ ( grm ) . it consists of creating a user recommendation list by considering all the objects not collected by user in decreasing order of their degree . this method is not personalized , except for the fact that objects already collected by that user are excluded from the corresponding list . a more effective and widely used basic procedure is _ collaborative filtering _ ( cf ) , which is based on the similarity measure given in eq . ( [ s - min ] ) . the similarity scores between user and all the other users in the system are used to construct a personalized recommendation value , that is , an estimation of how much user might be interested in object : the presence of the absolute value in the denominator is not necessary for many of the similarities presented above , since their values are always positive . the only exception is the bp similarity , which may take both positive and negative values , thus requiring a proper normalization to avoid possible divergences .
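the bp similarity and the personalized recommendation value described above can be written compactly in terms of the adjacency matrix ; the sketch below uses the pearson correlation of the binary profile vectors and the absolute - value normalization of the scores , and masks objects already collected so that they never enter the recommendation list . the toy matrix is our own example .

```python
import numpy as np

def s_bp(a):
    """Pearson correlation between the binary profile vectors of all user pairs."""
    a = np.asarray(a, dtype=float)
    n_obj = a.shape[1]
    k = a.sum(axis=1)                      # user degrees
    m = a @ a.T                            # shared objects
    num = n_obj * m - np.outer(k, k)
    den = np.outer(np.sqrt(k * (n_obj - k)), np.sqrt(k * (n_obj - k)))
    return np.divide(num, den, out=np.zeros_like(num, dtype=float), where=den > 0)

def recommend(a, s):
    """scores v[i, alpha] = sum_j s_ij a[j, alpha] / sum_j |s_ij| (j != i)."""
    a = np.asarray(a, dtype=float)
    s = s.copy()
    np.fill_diagonal(s, 0.0)                       # exclude the self-similarity term
    norm = np.abs(s).sum(axis=1, keepdims=True)    # needed because s_bp can be negative
    v = np.divide(s @ a, norm, out=np.zeros_like(a), where=norm > 0)
    return np.where(a == 1, -np.inf, v)            # never recommend already-collected objects

a = np.array([[1, 1, 0, 0],
              [0, 1, 1, 0],
              [1, 1, 0, 1]])
v = recommend(a, s_bp(a))
print(v.round(3))     # sorting the finite entries of each row gives the recommendation list
```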
in the nbi recommendation method ,the recommendation value is defined in a quite different way in fact the computation of the recommendation value includes self similar terms ( ) which are not taken into account in eq .( [ eq : vu ] ) , and does not include any additional normalization factor .we considered three classical data sets of user - object associations , namely the movielens ( http://www.grouplens.org/node/73 ) database , where users have rated the movies , the jester jokes database ( http://eigenstate.berkeley.edu/dataset/ ) where we find records of users who have rated jokes , and the fine foods database ( https://snap.stanford.edu/data/web-finefoods.html ) , containing amazon reviews of fine foods .in all these databases users have rated items with an ordinal attribute . in our studywe will perform recommendation procedures on the adjacency matrix of the corresponding bipartite network . for this reason , according to a similar choice done in other studies ( see ref . ) , we assume that a user has collected an object if and only if he has rated the object with a score higher or equal to a certain preselected threshold . in table[ table : db ] we report some information about the size of each data set and the values of the thresholds considered for the definition of the corresponding bipartite network ._ distributions of similarity values . _ as a preliminary investigation , we evaluated the heterogeneity of the degree of users and objects in the three databases . in fig .[ db_f ] we show the degree distributions for the three databases used . with the only exception of the degree distribution of jokes ( objects ) in the jester jokes database , all the distributions exhibit relatively broad tails .this suggests that similarity measures which properly take into account degree heterogeneities should indeed provide better recommendations ..summary statistics of the three databases used in the paper [ cols="<,^,^,^,^,^ " , ] [ table : r ] by analyzing the results summarized in table [ table : r ] we see that the best results are obtained by different methods in different databases .moreover the two indicators and always single out a different method as the best one .however , an overall analysis shows that the best recommendation methods are nbi ( the best method according to in the movielens database and the best method according to in the fine foods database ) , mdw ( the best method according to to in the movielens and fine foods databases ) , and bp ( the best method according to in the jester jokes and in the fine foods databases ) .they clearly overcome the results obtained by grm , cf , and jaccard ( j ) .the most important difference between the recommendation methods compared in table [ table : r ] is that while nbi is based on a definition of similarity among objects , all the other methods make use of similarity measures defined between users . 
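since nbi serves as the reference method in the comparisons above , a compact implementation is useful ; the resource - allocation weights and the unnormalized scores below follow the standard probabilistic - spreading construction , which is our reading of the definition , as the explicit formulas are not reproduced in the text . the sketch assumes every user and object has at least one link .

```python
import numpy as np

def nbi_scores(a):
    """network-based inference / probabilistic-spreading recommendation scores."""
    a = np.asarray(a, dtype=float)
    k_user = a.sum(axis=1)
    k_obj = a.sum(axis=0)
    # resource-allocation weights between objects:
    # w[alpha, beta] = (1 / k_beta) * sum_i a[i, alpha] * a[i, beta] / k_user[i]
    b = a / k_user[:, None]
    w = (a.T @ b) / k_obj[None, :]
    # score of object alpha for user i: sum_beta w[alpha, beta] * a[i, beta]
    # (self-similar terms are kept and no further normalisation is applied, as noted above)
    v = a @ w.T
    return np.where(a == 1, -np.inf, v)    # exclude objects already collected

a = np.array([[1, 1, 0, 0],
              [0, 1, 1, 0],
              [1, 1, 0, 1]])
print(nbi_scores(a).round(3))
```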
in general , it is possible to define a transformation rule to obtain a similarity score between users starting from a similarity between objects , and vice versa . in fact , the similarity between objects can be obtained from the similarity between users by appropriately swapping latin indexes with greek ones , and quantities defined for users with the analogous ones defined for objects : the transformation rule is valid in both directions , from users to objects and from objects to users . we propose to define new recommendation scores by using the dual similarity measures obtained with the above - defined transformation . for example , the recommendation value , which is the dual of eq . ( [ eq : vu ] ) and is valid for objects instead of users , is obtained as : whereas the dual recommendation score of the nbi algorithm is . it is interesting to note that , according to the definition of the nbi , we have . this relation can be verified by replacing ( eq . [ s - nbi ] ) with into the equations ( [ eq : fo ] ) and ( [ eq : fu ] ) , respectively . hence , nbi is invariant under the transformation rules of eq . ( [ spec ] ) . it is interesting to investigate how the user / object similarity duality affects the quality of recommendation . to this aim , we propose to define a recommendation value which is the result of the convex combination of the two recommendation values and obtained from the similarity between users and between objects , respectively . in formula : where the relative weight of the user and object recommendation values is controlled by the parameter $ ] , so that when we recover the recommendation score induced by the similarity between users , while for we have the recommendation score corresponding to the similarity between objects . our hypothesis , which is validated in the following , is that better recommendations can be obtained by appropriately tuning the value of . the mean values for different recommendation methods are reported in fig . [ fig : sym ] , where the three panels show the results obtained in the three data sets . it is worth noting that the nbi algorithm is independent of . in fact , by using eq . ( [ eq : f ] ) one verifies that . in fig . [ fig : sym ] we notice that the cf recommendation method performs poorly for almost all the values of , in all the three data sets . in the case of movielens , three recommendation methods ( mdw , nbi and bp ) perform in a similar way when only the user similarity measure is taken into account ( ) , as we already noticed in the results summarized in table [ table : r ] . on the other hand , for , i.e. , when only the object similarity measure is taken into account , the mdw method performs better than the others . in the case of the fine foods data set , the bp similarity performs slightly better than the others for and for a relatively large range of values . when , the recommendation with the mdw measure performs slightly better . finally , in the jester jokes data set the bp similarity clearly outperforms all the others when , while for all the methods provide similar results , with the only exception of the cf recommendation , whose performance is much worse .
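the dual construction and the convex combination can be implemented by simply evaluating a chosen user similarity on the transposed adjacency matrix ; in the sketch below the weight lam interpolates between the two scores , and the convention that lam = 1 corresponds to the purely user - based score is our own choice , since the parametrisation above leaves the labelling of the two endpoints implicit .

```python
import numpy as np

def s_min(mat):
    """min-degree (collaborative-filtering) similarity between the rows of mat."""
    mat = np.asarray(mat, dtype=float)
    k = mat.sum(axis=1)
    m = mat @ mat.T
    d = np.minimum(k[:, None], k[None, :])
    return np.divide(m, d, out=np.zeros_like(m), where=d > 0)

def score_user(a, sim):
    """v[i, alpha] obtained from similarities between users (rows of a)."""
    s = sim(a); np.fill_diagonal(s, 0.0)
    norm = np.abs(s).sum(axis=1, keepdims=True)
    return np.divide(s @ a, norm, out=np.zeros_like(a, dtype=float), where=norm > 0)

def score_object(a, sim):
    """the dual score: the same similarity evaluated on objects (rows of a.T)."""
    s = sim(a.T); np.fill_diagonal(s, 0.0)
    norm = np.abs(s).sum(axis=1, keepdims=True)
    return np.divide(s @ a.T, norm, out=np.zeros_like(a.T, dtype=float), where=norm > 0).T

def combined_score(a, sim, lam):
    """convex combination: lam = 1 -> purely user-based, lam = 0 -> purely object-based."""
    return lam * score_user(a, sim) + (1.0 - lam) * score_object(a, sim)

# toy example; already-collected objects would be removed from the ranking in practice
a = np.array([[1, 1, 0, 0],
              [0, 1, 1, 0],
              [1, 1, 0, 1]], dtype=float)
for lam in (0.0, 0.5, 1.0):
    print("lambda =", lam, "\n", combined_score(a, s_min, lam).round(3))
```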
the richness of profiles observed in fig . [ fig : sym ] suggests that the performance of a recommendation method depends both on the specific database and on the specific linear combination of user and object recommendation values adopted . quite often , the best recommendation is not the one corresponding to or . some methods perform better at the user limit ( ) , others at the object limit ( ) , some of them for an intermediate value of . moreover , the specific shape of as a function of actually depends on the database . we would like to stress two interesting aspects of these results . first , some methods exhibit a convex profile of as a function of , where the minimum indicates the best linear combination of user and object recommendation values . second , the variability of the values of obtained by different recommendation systems is much higher for than for . in this section we analyse the robustness of recommendation systems against the presence of different sources of noise in the data sets . we consider three different kinds of randomization . in the first scenario we add a certain amount of random edges to the bipartite graph , mimicking erroneously reported user selections . in the second case we rewire a given percentage of the edges of the bipartite network while maintaining the degrees of users unaltered ( while the degree distribution of objects is in general modified ) . finally , in the third case we rewire a fraction of the edges of the graph while keeping both the user and object degree distributions unaltered . for the sake of simplicity , we show the results obtained for the three randomizing methods only for the movielens database . in fig . [ fig : m_rand ] we show the average rank quality index for the different methods with as a function of the percentage of edges randomly added or rewired .
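the first and third noise models can be implemented in a few lines , as sketched below ( toy sizes and seeds , not the actual data ) ; the second model , which preserves only the user degrees , amounts to reassigning the object endpoint of each rewired edge uniformly at random .

```python
import numpy as np
rng = np.random.default_rng(0)

def add_random_edges(a, fraction):
    """add a given fraction (of the existing edges) of new random user-object links."""
    a = a.copy()
    n_new = int(round(fraction * a.sum()))
    empty = np.argwhere(a == 0)
    picks = rng.choice(len(empty), size=min(n_new, len(empty)), replace=False)
    a[tuple(empty[picks].T)] = 1
    return a

def rewire_preserving_degrees(a, fraction, max_tries=10000):
    """swap the object endpoints of random edge pairs, keeping all degrees fixed."""
    a = a.copy()
    edges = [tuple(e) for e in np.argwhere(a == 1)]
    n_swaps = int(round(fraction * len(edges) / 2))
    done = tries = 0
    while done < n_swaps and tries < max_tries:
        tries += 1
        (i1, o1), (i2, o2) = (edges[k] for k in rng.choice(len(edges), 2, replace=False))
        # the swap (i1,o2), (i2,o1) is valid only if it creates no duplicate edge
        if i1 != i2 and o1 != o2 and a[i1, o2] == 0 and a[i2, o1] == 0:
            a[i1, o1] = a[i2, o2] = 0
            a[i1, o2] = a[i2, o1] = 1
            edges[edges.index((i1, o1))] = (i1, o2)
            edges[edges.index((i2, o2))] = (i2, o1)
            done += 1
    return a

a = (rng.random((20, 30)) < 0.15).astype(int)      # toy bipartite adjacency matrix
noisy = add_random_edges(a, 0.10)
rewired = rewire_preserving_degrees(a, 0.20)
print("degrees preserved:", np.array_equal(a.sum(1), rewired.sum(1)),
      np.array_equal(a.sum(0), rewired.sum(0)))
```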
as expected , is an increasing function of the percentage of noise , signalling a degradation of the recommendation performance . however , the actual profile of depends on the specific recommendation method used . in fact , several curves cross at different values of the induced randomness . this is clearly observed for the first and second kinds of randomization . we performed the same analysis also on the jester jokes and fine foods databases , and we report in fig . [ fig : j_rand ] the results corresponding to the first type of randomization ( addition of a fraction of random edges ) . the results show a prominent role of the bp similarity measure , which seems the most robust in dealing with noisy data sets . our findings suggest that the bp similarity measure is a good candidate to provide good and robust recommendations in databases where there is a high degree of uncertainty about the validity of records . in fact , while the use of the bp similarity does not give substantially better recommendation predictions in databases like movielens and fine foods , its performance is consistently higher in the case of jester jokes . we have considered three real - world user / item bipartite networks , we have investigated the performance of several traditional recommendation methods recently presented in the literature , and we have proposed two new similarity measures which take into account the heterogeneity of user and object degrees . we showed that these two new similarity indexes can outperform traditional recommendation systems in most of the cases , even if there is a clear dependence of the results on the structural characteristics of the data set under study . then , we focused on hybrid recommendation systems based on the convex combination of the recommendation scores induced by the similarity between users and objects , parametrised by a coefficient . we showed that different outcomes can be obtained in personalized recommendation methods by using similarity between users , or between objects , or a combination of the two . in some cases , the quality of recommendation as measured by the average rank quality index is a convex function of the parameter . this means that the combination of different recommendation scores might actually provide better performance than the employment of user or object similarities alone and , more importantly , that depending on the data set at hand , the quality of recommendation can actually be optimised through an appropriate tuning of . conversely , for some similarity measures we observed a monotonically decreasing dependence of on , so that the best recommendation is obtained by using an object - based similarity . we finally investigated the robustness of recommendation systems to the addition and rewiring of edges , and the results suggested that the binary pearson correlation similarity can consistently outperform the other similarity measures in noisy data sets . although we do not observe a specific recommendation method outperforming all the others in all conditions and for all the data sets considered , it seems that recommendations based on mdw and bp are able to produce better results than those using other similarity measures . however , our results show that the performance of the recommendation methods depends both on the specific database investigated and on the way similarities between users and objects are used to derive recommendation scores .
this work is partially supported by the epsrc project gale ep / k020633/1 .this research utilised queen mary s midplus computational facilities , supported by qmul research - it and funded by epsrc grant ep / k000128/1 .b. sarwar , g. karypis , j. konstan , and j. riedl .item - based collaborative filtering .www10 , acm 1 - 58113 - 348 - 0/01/0005 , 285 ( 2001 ) d. goldberg , d. nichols , b.m .oki , d. terry , commun .acm * 35 * 61 ( 1992 ) j.b .schafer , d. frankowski , j. herlocker , s. sen , _ collaborative filtering recommender systems . in : the adaptive web _ springer 291 ( 2007 ) .c . zhang , m. blatter , and y .- k .yu , _ physlett . _ * 99 * 154301 ( 2007 ) .
|
we propose here two new recommendation methods , based on the appropriate normalization of already existing similarity measures , and on the convex combination of the recommendation scores derived from similarity between users and between objects . we validate the proposed measures on three relevant data sets , and we compare their performance with several recommendation systems recently proposed in the literature . we show that the proposed similarity measures allow us to attain an improvement in performance of up to 20% with respect to existing non - parametric methods , and that the accuracy of a recommendation can vary widely from one specific bipartite network to another , which suggests that a careful choice of the most suitable method is highly relevant for an effective recommendation on a given system . finally , we study how an increasing presence of random links in the network affects the recommendation scores , and we find that one of the two recommendation algorithms introduced here can systematically outperform the others in noisy data sets .
|
random mutations on different scales of the genome introduce non - deterministic genetic diversity to an evolving population , opening up new pathways for exploration of the genotypic space . at the same time selection restricts the number of possible evolutionary trajectories in a deterministic manner . from the interplay between these two contrary forces arises the question of whether evolution as a whole is predictable and reproducible . in an environment of strong selective pressure and weak mutation rates and/or small population size , possible steps towards higher fitness are largely limited by the structure of the fitness landscape on which adaptation takes place . in this _ strong selection weak mutation ( sswm ) _ regime populations cannot overcome fitness valleys by generating multiple mutants . rather , each single mutation , introduced one at a time , has to prove beneficial , resulting in an uphill walk on the fitness landscape . on a fully additive landscape , where each genetic locus contributes independently to the overall fitness , beneficial mutations can occur in any order , which implies many possible mutational pathways . however , often the fitness contributions of different loci are not independent . mutations whose effect depends on the state of other loci ( the _ genetic background _ ) are known as epistatic . cases in which not only the magnitude of the fitness change but also its sign ( beneficial or deleterious ) depends on the state of other loci are known as sign - epistatic . landscapes with sign - epistatic interactions tend to be rugged and may have multiple local optima . recent empirical evidence suggests that sign epistasis is common in biological entities ranging from single proteins to entire organisms , see for review . as part of the general problem of understanding possible evolutionary outcomes and pathways , we here focus on the question : how does epistasis influence the accessibility of the global fitness maximum in the sswm regime ? in recent work , this question has been addressed for several well - known models of fitness landscapes , in particular the house - of - cards / random energy model , the rough mt . fuji model and kauffman's nk - model . in the nk - model each genetic locus interacts with a neighborhood of other loci , and different genetic architectures can be realized through different ways of choosing the neighbors . despite its simplicity and lack of biological detail the nk - model has proven to be useful for parametrizing empirical fitness landscapes , thus providing a quantitative characterization of the strength and type of epistatic interactions in these data sets . the versatility of the model can be further increased by considering linear superpositions of nk - landscapes with different values of .
here we will focus on fitness landscapes that have a modular structure , in that the genetic loci are divided into disjoint sets , called blocks , which contribute independently to the overall fitness . such a model was first introduced by perelson and macken , and it can be viewed as a special case of kauffman's nk - model . we will see that the block structure significantly facilitates analytic calculations , to the extent that a detailed characterization of the full probability distribution of the number of accessible mutational pathways becomes possible . surprisingly , the exact expression for the mean number of accessible paths , similar to the mean number of optima derived in , turns out to closely match the numerical estimates obtained for other versions of the nk - model . at the same time the fluctuations in these quantities show a strong dependence on the genetic architecture , leading in particular to a very low evolutionary accessibility of the block model landscape compared to the nk - model with random ( non - modular ) interactions studied previously . in the next section we explain the basic mathematical concepts required for the description of genotype spaces and fitness landscapes , and introduce the models of interest . our results on the evolutionary accessibility of modular landscapes are presented in sect . [ results ] , and the paper concludes with a summary and an outlook in sect . [ summary ] . in the sswm regime the genetic variability in a population is small and it can be assumed that all individuals have the same genotype most of the time , apart from the transient appearance of single new mutations . the genotype of a population can be modeled as a binary sequence of length , where each is either or , representing two different alleles at locus or a wild type and a mutated type . the space of all possible genotypes is then the binary hypercube , which we extend into a normed space by introducing the hamming norm and the induced hamming metric . this metric represents the number of loci in which two genotypes differ and hence the minimal number of point mutations needed to reach one from the other . for future reference we define the _ antipodal _ or reversal sequence of a genotype through . a genotype and its antipodal sequence are maximally distant from each other , for all . since we only consider point mutations , we define the mutation operator which mutates locus as ; we can extend this notion to simultaneous mutations at several loci . let be the set of loci that are to be mutated . we then denote the group mutation operator as . a fitness landscape on the space of sequences of length is a mapping from into the real numbers . we use the notation to refer to the change in fitness obtained by mutating all loci in starting from genotype . by applying each single locus mutation to each genotype on the fitness landscape we generate an -dimensional real vector field on the genotype space , . this field determines the effect of every possible mutation at each point of the fitness landscape . it defines the fitness landscape uniquely up to a constant . therefore all relevant properties of the fitness landscape are determined by . however not all mappings are valid mutation fields of a fitness landscape .
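for concreteness , genotypes can be handled as binary tuples ; the sketch below implements the hamming distance , the single - locus and group mutation operators and the antipodal sequence ( the function names are ours ) .

```python
from itertools import product

def hamming(sigma, tau):
    """number of loci in which two genotypes differ"""
    return sum(s != t for s, t in zip(sigma, tau))

def mutate(sigma, locus):
    """point mutation: flip the allele at a single locus"""
    return tuple(1 - s if k == locus else s for k, s in enumerate(sigma))

def mutate_set(sigma, loci):
    """group mutation: flip every locus in the set `loci`"""
    return tuple(1 - s if k in loci else s for k, s in enumerate(sigma))

def antipode(sigma):
    """reversal sequence, at maximal hamming distance L from sigma"""
    return tuple(1 - s for s in sigma)

L = 3
genotypes = list(product((0, 1), repeat=L))        # the binary hypercube {0,1}^L
sigma = (0, 1, 0)
assert hamming(sigma, antipode(sigma)) == L
assert mutate_set(sigma, {0, 1, 2}) == antipode(sigma)
print(len(genotypes), "genotypes; neighbours of", sigma, ":",
      [mutate(sigma, k) for k in range(L)])
```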
in the following weintroduce the fitness landscape models of interest in this work .they are random field models in the sense of and bear a close resemblance to spin glass models of statistical physics .a common way of quantifying the ruggedness of such fitness or energy landscapes is through the number of local maxima , and we compile some known results for this quantity for the different models below . in the house - of - cards ( hoc ) model every fitness value is drawn identically and independently from a real - valued probability distribution . since only the sign of fitness change is relevant to accessibility , it is sufficient to consider the hoc model as a random rank order on the genotype space .the properties discussed here therefore do not depend on the chosen probability distribution .up to a change of sign the hoc landscape is equivalent to the energy landscape in derrida s random energy model ( rem ) of spin glasses . for completenesswe note that also the rem in an external magnetic field has an evolutionary analogue in the rough mt .fuji ( rmf ) model . the mean number of local maxima of the hoc landscape can be obtained from a simple argument .a given genotype is a local maximum if its fitness value exceeds that of its neighbors , which is true with probability by symmetry .since there is a total of genotypes , the expected value of the number of optima is the corresponding variance is which implies that the coefficient of variation tends to zero for large , i.e. the distribution of becomes increasingly localized near its mean .in fact asymptotically the distribution is normal . for small full distribution can be obtained by exact enumeration , see table [ table1 ] ..[table1]distribution of the number of fitness maxima in the hoc and constrained hoc models for and .note that the largest possible number of maxima on the -dimensional hypercube is .[ cols="^,^,^,^ " , ] consider a block landscape with blocks of size .a mutation mutating a locus in block will only change the fitness contribution of this block , a subsequent mutation in a different block generates the fitness change which , since does belong to block , simplifies to hence the order in which two loci are mutated is irrelevant to the accessibility if the two loci are not part of the same block and are mutated directly one after another . introducing the indicator function this property reads .consider now a path .switching two adjacent elements of the path will not change the accessibility if they do not share a block .it is therefore possible to reorder the path in the form such that for all and . 
for each such ordered paththere are original paths reducing to it in the way described .the number of accessible paths on the block landscape therefore has to be an integer multiple of .note that this feature of the block model does not depend on the blocks consisting of hoc landscapes .the combinatorial factor is only determined by the block structure and will be present in all fitness landscapes composed of independent sets of loci .the ordered path can be divided into subpaths operating on each block seperately .steps in other blocks do not influence the accessibility of the subpaths in a given block .it is thus possible to write the number of paths on the block landscape as the product of the number of paths in each block , in close analogy to the corresponding relation ( [ nopt_block ] ) for the number of maxima .the end point of a subpath is also the global maximum of the block landscape , since is the sum of independent blocks .therefore the distribution of the number of paths to the global maximum can be derived from the distribution of the number of paths to the global maximum of the blocks according to where is the set of all ordered decompositions of the non - negative integer into a product of non - negative integer factors and is the number of accessible paths in a hoc landscape of size . from this general relationtogether with the result ( [ hoc_paths ] ) for the hoc model the following expressions for the statistics of accessible paths in the block model emerge : ^b - 1\right),\ ] ] ^b - 1},\ ] ] all of these results easily carry over to variations in which the block landscapes are not of hoc type , however in the following we continue to assume hoc blocks .it follows from ( [ block_paths_access ] ) that the accessibility in the block model always tends to zero , so block landscapes with high almost surely do not have any path to the global maximum . in this regardthere is no difference to the hoc model .however in the block model accessibility tends to zero much faster . for fixed block size the decrease is exponential in , whereas for a fixed number of blocks the hoc asymptotics ( [ access_hoc ] ) implies that , which is smaller than ( [ access_hoc ] ) for any . since , eq .( [ block_paths_access ] ) implies that accessibility at constant is governed by the quantity {\mathbb{p}(n_\mathrm{p}^{\mathrm{hoc}(m ) } > 0)}\ ] ] defined such that . by construction , andaccording to the asymptotics ( [ access_hoc ] ) approaches unity from below for large because .it follows that is minimal at an intermediate block size , which turns out to be , see fig .[ fig : accesshocroot ] . at block model thus displays minimal accessibility .accessibility for the nk model with random neighborhood ( rn , circles ) , adjacent neighborhood ( an , triangles ) and block neighborhood ( bn , crosses ) .a ) accessibility as a function of at fixed block number . for models are equivalent . for block model shows monotonically decreasing accessibility which falls below the hoc value ( ) with increasing , whereas for the rn and bn models accessibility displays a minimum and increases for large .b ) accessibility for fixed as a function of .block model data show a minimum at , whereas rn and an models display a maximum . 
for rn and an data are essentially indistinguishable . c ) accessibility as a function of for fixed . block model data decrease monotonically while the rn model displays a transition between decreasing accessibility for to increasing accessibility for . d ) same as c ) for . results were obtained from simulations of landscape realizations per data point . ] figure [ fig : compaccess ] shows the comparison of evolutionary accessibility for the bn , an and rn models . for constant [ fig . [ fig : compaccess ] a ) ] there is a significant difference between the behavior of hoc / bn models and an / rn models . while the accessibility in the hoc model and block model is monotonically falling , both the rn and an models exhibit a minimum in the accessibility followed by an increase for large . for constant the block model's minimal accessibility at is recognizable in fig . [ fig : compaccess ] b ) . interestingly , the an and rn models display a reversed behavior with a maximum accessibility at intermediate . this figure also shows that the accessibility values for the rn and an models are numerically indistinguishable for while important differences arise for smaller , see also figs . [ fig : compaccess ] c ) and d ) . compared to the hoc and block model the an and rn models are surprisingly accessible even for high . while it is virtually impossible to find a block landscape with accessible paths for , the an and rn landscapes of that size have a chance of more than 50% to be accessible for suitable values of . the comparison of different models at constant in figs . [ fig : compaccess ] c ) , d ) shows that the rn and an models behave qualitatively similarly to the block model for , but differ strongly from the block model and from each other for . while the an data generally seem to display a maximum followed by decreased accessibility for larger , the accessibility in the rn model remains nearly independent of for and increases monotonically with for . the transition in accessibility at for the rn model was already observed and discussed in , but here we see that the behavior in the an model appears to be qualitatively different . mean number of accessible paths for the nk model with random neighborhood ( rn , circles ) , adjacent neighborhood ( an , triangles ) and block neighborhood ( bn , crosses ) shown a ) as a function of for different values of , b ) as a function of for different values of , c ) as a function of for and d ) as a function of for . results were obtained from simulations of ( ) landscape realizations per data point for ( ) . ] the mean number of paths ( [ block_paths ] ) in the block model equals its first non - vanishing path count greater than zero , which is a property inherited from the hoc model . asymptotically for large the mean behaves as \[\begin{split} m = \text{const.}:\;\; & \mathbb{E}(N_\mathrm{p}^{\mathrm{BN}}) \approx \sqrt{2\pi L}\left(\frac{L}{e}\right)^{L}\left(\sqrt[m]{m!}\right)^{-L} , \\ b = \text{const.}:\;\; & \mathbb{E}(N_\mathrm{p}^{\mathrm{BN}}) \approx \left(\frac{1}{\sqrt{2\pi L}}\right)^{b-1} b^{L+\frac{b}{2}} . \end{split}\] for constant block size the mean increases asymptotically faster than for constant block number . nonetheless , even for constant the mean path number on the block landscape increases nearly exponentially and therefore much faster than the mean on hoc landscapes conditioned to be accessible , see eq . ( [ hoc_paths_ifaccess ] ) . this behavior does not appear to be unique to the block model . in fact , simulation results shown in fig . [ fig : comppathsmean ] suggest that the mean number of accessible paths in all versions of the nk - model is rather similar .
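a direct way to check these statements numerically is to enumerate , for each random landscape , the fitness - monotonic shortest paths from the antipode of the global maximum to the global maximum ; the monte carlo sketch below does this for a small block landscape built from independent uniform hoc blocks ( block sizes , number of realizations and the uniform fitness distribution are illustrative choices ) and also verifies that every nonzero path count is a multiple of the combinatorial factor derived above .

```python
import itertools
import random

def count_accessible_paths(fitness, L):
    """fitness-monotonic shortest paths from the antipode of the global maximum
    to the global maximum, for a fitness dictionary on {0,1}^L."""
    top = max(fitness, key=fitness.get)
    start = tuple(1 - s for s in top)

    def paths_from(sigma):
        if sigma == top:
            return 1
        total = 0
        for k in range(L):
            if sigma[k] != top[k]:                     # only steps towards the optimum
                nxt = sigma[:k] + (1 - sigma[k],) + sigma[k + 1:]
                if fitness[nxt] > fitness[sigma]:      # every step must increase fitness
                    total += paths_from(nxt)
        return total

    return paths_from(start)

def block_landscape(b, m, rng):
    """sum of b independent house-of-cards blocks of size m (L = b*m loci)."""
    blocks = [{g: rng.random() for g in itertools.product((0, 1), repeat=m)}
              for _ in range(b)]
    L = b * m
    return {g: sum(blocks[i][g[i * m:(i + 1) * m]] for i in range(b))
            for g in itertools.product((0, 1), repeat=L)}, L

rng = random.Random(1)
b, m, runs = 2, 3, 2000
factor = 20                                            # L!/(m!)^b for L = 6, m = 3, b = 2
counts = []
for _ in range(runs):
    f, L = block_landscape(b, m, rng)
    counts.append(count_accessible_paths(f, L))

print("P(at least one accessible path) ~", round(sum(c > 0 for c in counts) / runs, 3))
print("mean number of accessible paths ~", round(sum(counts) / runs, 2))
print("all counts are multiples of", factor, ":", all(c % factor == 0 for c in counts))
```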
the formula ( [ block_paths ] ) derived abovemight therefore be useful for estimating the mean for these other variants of the nk model .a consistent ordering between the an , rn and bn models is however not recognizable : while for small the mean for the block model is highest , it becomes lowest in the regime of large and .coefficient of variation of the number of paths for the nk model with random neighborhood ( rn , circles ) , adjacent neighborhood ( an , triangles ) and block neighborhood ( bn , crosses ) .a ) as a function of at fixed block size ; data are plotted on double - logarithmic scales to facilitate the comparison with the asymptotic prediction ( [ block_paths_cv_asym ] ) .b ) as a function of at fixed ; note that all models coincide for ( additive fitness landscape ) and ( hoc model ) .c ) as a function of at fixed ; d ) same as c ) for .results were obtained from simulations of landscape realizations per data point . ] to characterize the fluctuations in the number of accessible paths we consider the coefficient of variation .for the block model , the relation ( [ block_paths_cv ] ) shows that increases exponentially with for fixed , while for constant the asymptotic result ( [ hoc_paths_var ] ) for the hoc model implies that for large . although the distribution of paths becomes increasingly broader with increasing also in the hoc model , the increase of is thus seen to be much faster in the block model , especially for constant .the simulation results for displayed in figure [ fig : comppathscv ] a ) show that the asymptotics ( [ block_paths_cv_asym ] ) is attained only for sequence lengths substantially larger than , which are beyond the reach of our simulations .the coefficient of variation for the block model is seen to increase faster with for larger , but even for the ordering of the data points is not yet consistent with the asymptotic behavior , in that is slightly larger for than for .the path number fluctuations in the rn and an models are generally smaller than in the bn model , with the exception of , where the block model is very close to the value for the rn model , see fig .[ fig : comppathscv ] b ) .this figure shows that the dependence of on is generally non - monotonic , with a maximum attained at an intermediate value of .the -dependence of at fixed is shown in figs .[ fig : comppathscv ] c ) and d ) .while all models behave similarly for small , at larger the increase of is markedly steeper for the block model than for the other models . at larger values of rn and an curves develop a minimum in which is followed by a rapid increase ( not shown ) . for is feasible to explicitly examine all possible rank orders over the hypercube for their number of accessible paths , and thus to find the exact path number distributions for the hoc and choc models , see table [ table2 ] . using these probabilities the exact distribution of the number of accessible paths for the block modelcan be calculated by applying eq .( [ block_paths_full ] ) for and small ( fig .[ fig : blockpathsfull ] ) .in particular for the distribution simplifies to where is the probability density function of the symmetric binomial distribution with samples .this means that the logarithm of the scaled number of paths on _ accessible _ block landscapes ( conditioned on ) with is distributed according to the symmetric binomial distribution [ fig .[ fig : blockpathsfulllog ] a ) , b ) ] . 
for larger distribution becomes more complex and more difficult to write down explicitly , however for the distribution of the logarithm of number of paths seems again to be similar to a symmetric , single - peaked distribution [ fig .[ fig : blockpathsfulllog ] c ) , d ) ] .this indicates that for block landscapes that do possess at least one accessible path , the number of paths is roughly log - normally distributed .exact distribution of the number of accessible paths for the block model with a ) and , b ) and , c ) and , d ) and . ] same as fig .[ fig : blockpathsfull ] with the number of paths in logarithmic scales , and conditioned on . ]we have shown in this paper that imposing a modular block structure on the set of genetic loci substantially changes the behavior of fitness landscapes . while mean values for the number of optima as well as for the number of accessible paths are similar between block landscapes and other types of nk landscapes , there is a qualitative difference between the overall structure of the distributions of these topographic features . in both casesthe distributions show higher variability for large in the block model than in the an and rn models and also display strong discreteness effects .the most pronounced difference is observed in the overall evolutionary accessibility , defined here as the probability for the existence of at least one accessible path to the global fitness maximum , which decreases very fast with on block landscapes .together with the rapid increase of the expected number of accessible pathways this implies that , while in most instances there is no path to the global maximum , _ if _ the landscape is accessible there are many possible paths . on such untypical landscapesthe global maximum is then relatively likely to be the end result of the evolutionary process , but the pathway itself is hard to reconstruct . although we used a specific model of modular fitness landscapes our main results hold qualitatively for a broader variety of landscapes with modules of independent sets of loci .more precisely , the values of the block fitness functions in ( [ fblock ] ) may be chosen in any way rather than being independent and identically distributed random variables , as long as all functions are constructed independently from the same ensemble . also the operation connecting the may be any operation that is monotonic in both operands instead of summation ( e.g. , multiplication ) . under these broader conditionsthe number of accessible paths will still be the product of the accessible paths on the modules and basic results such as the exponential decrease in accessibility for constant block sizes will still hold . this way it would also be possible to apply our results to modular fitness landscapes that incorporate other biologically important properties , such as neutral mutations .the strict conditions of the sswm regime may also be lifted .as long as the maximal allowed number of mutations present in the population at any time is limited to a value below the size of blocks it will be impossible for the population to skip over an entire module and thus any block will still have to be crossable on its own .the number of accessible paths is then still the product of accessible paths on the single blocks .our results suggest that the choice of neighborhoods in the nk model and , more generally , the architecture of genetic interactions is an important aspect to consider when relating fitness landscape models to real world data . 
assuming that the genetic architecture itself is , in some sense , under evolutionary selection , the low accessibility of modular landscapes would seem to favor connected genetic interaction networks , as unconnected block structures make it impossible to reach the global optimum in the sswm regime . on the other hand, we have also seen that the rare realizations that contain at least one path tend to have many paths . if each module could evolve independently towards high accessibility , block landscapeswould therefore prove advantageous by allowing many routes to the optimal genotype .interestingly , in the presence of recombination the modular structure appears to facilitate rather than impede evolutionary adaptation , and to elucidate the interplay of recombination and genetic architecture is a promising direction for future research .we can make use of the findings of the present paper to revisit the observation , first reported in , that rn model landscapes are rather inaccessible for small values of , in particular for ( see fig .[ fig : compaccess ] ) .this is surprising because ruggedness is generally expected to increase with , such that landscapes should be quite smooth .however , at low the random graph of interactions between loci is sparse ( compare to fig . [fig : nk ] ) , and the likelihood for the graph being disconnected , thus effectively giving rise to a modular landscape of low accessibility , is increased .inspection of individual instances of the rn model indeed indicates a negative correlation between the accessibility and the number of components of the interaction graph . however , comparison with the an model , which by construction has a connected interaction graph but displays even lower accessibility than the rn model ( fig .[ fig : compaccess ] ) , shows that graph connectivity can not be the main factor determining the accessibility of these landscapes .further investigations are therefore needed to clarify the mechanisms governing evolutionary accessibility in generic versions of the nk model .we acknowledge useful discussions with peter hegarty , anders martinsson , johannes neidhart , stefan nowak and ivan szendro , and support by dfg within sfb 680 and spp 1590 .jk takes this opportunity to thank herbert spohn for many years of guidance , encouragement and inspiration .spmpsci travisano , m. , mongold , j.a . ,bennett , a.f ., lenski , r.e . : experimental tests of the roles of adaptation , chance , and history of evolution .science * 267 * , 8790 ( 1995 ) hall , b.g .: predicting evolution by in vitro evolution requires determining evolutionary pathways .agents chemother . *46 * , 30353038 ( 2002 ) jain , k. , krug , j. : deterministic and stochastic regimes of asexual evolution on rugged fitness landscapes .genetics * 175 * , 12751288 ( 2007 ) conway morris , s. : evolution : like any other science it is predictable .b * 365 * , 133145 ( 2010 ) lobkovsky , a.e . ,koonin , e.v . : replaying the tape of life : quantification of the predictability of evolution .frontiers in genetics * 3 * , 246 ( 2012 ) szendro , i.g . ,franke , j. , de visser , j.a.g.m ., krug , j. : predictability of evolution depends nonmonotically on population size .sci . * 110 * , 571576 ( 2013 ) gillespie , j.h .some properties of finite populations experiencing strong selection and weak mutation .* 121 * , 691708 ( 1983 ) macken , c.a . ,perelson , a.s . : protein evolution on rugged landscapes .usa * 86 * , 61916195 ( 1989 ) macken , c.a . ,hagan , p. 
, perelson , a.s .: evolutionary walks on rugged landscapes .siam j. appl.math .* 51 * , 799827 ( 1991 ) flyvbjerg , h. , lautrup , b. : evolution in a rugged fitness landscape .a * 46 * , 67146723 ( 1991 ) orr , h.a . : the population genetics of adaptation : the adaptation of dna sequences .evolution * 56 * , 13171330 ( 2002 ) neidhart , j. , krug , j. : adaptive walks and extreme value theory . physical review letters * 107 * , 178102 ( 2011 ) phillips , p.c . :epistasis - the essential role of gene interactions in the structure and evolution of genetic systems .* 9 * , 855-867 ( 2008 ) weinreich , d.m . ,watson , r.a . ,chao , l. : perspective : sign epistasis and genetic constraints on evolutionary trajectories .evolution * 59 * , 11651174 ( 2005 ) poelwijk , f.j ., kiviet , d.j . ,weinreich , d.m ., tans , s.j . :empirical fitness landscapes reveal accessible evolutionary paths .nature * 445 * , 383386 ( 2007 ) kvitek , d.j . , sherlock , g. : reciprocal sign epistasis between frequently experimentally evolved adaptive mutations causes a rugged fitness landscape .plos genet .* 7 * , e1002056 ( 2011 ) poelwijk , f.j ., tnase - nicola , s. , kiviet , d.j ., tans , s.j . :reciprocal sign epistasis is a necessary condition for multi - peaked fitness landscapes .. biol . * 272 * , 141144 ( 2011 ) crona , k. , greene , d. , barlow , m. : the peaks and geometry of fitness landscapes . j. theor . biol .* 317 * , 110 ( 2013 ) weinreich , d.m. , delaney , n.f . , depristo , m.a . , hartl , d.m .: darwinian evolution can follow only very few mutational paths to fitter proteins .science * 312 * , 111114 ( 2006 ) franke , j. , klzer , a. , de visser , j.a.g.m . ,krug , j. : evolutionary accessibility of mutational pathways .plos comput .* 7 * , e1002134 ( 2011 ) szendro , i.g ., schenk , m.f . ,krug , j. , de visser , j.a.g.m .: quantitative analyses of empirical fitness landscapes. j. stat .mech . : theory exp .p01005 ( 2013 ) klzer , a. : nk fitness landscapes .diploma thesis , university of cologne ( 2008 ) carneiro , m. , hartl , d.l . : adaptive landscapes and protein evolution .usa * 107 * , 17471751 ( 2010 ) franke , j. , krug , j. : evolutionary accessibility in tunably rugged fitness landscapes .. phys . * 148 * , 705722 ( 2012 ) hegarty , p. , martinsson , a. : on the existence of accessible paths in various models of fitness landscapes .arxiv:1210.4798 ( 2012 ) . to appear in ann .nowak , s. , krug , j. : accessibility percolation on n - trees .epl * 101 * , 66004 ( 2013 ) berestycki , j. , brunet , . , shi , z. : how many evolutionary histories only increase fitness ?preprint arxiv:1304.0246 ( 2013 ) roberts , m.i . ,zhao , l.z . : increasing paths in trees .preprint arxiv:1305.0814 ( 2013 ) kingman , j.f.c . : a simple model for the balance between mutation and selection .* 15 * , 112 ( 1978 ) kauffman , s. , levin , s. : towards a general theory of adaptive walks on rugged landscapes .. biol . * 128 * , 11-45 ( 1987 ) aita , t. , uchiyama , h. , inaoka , t. , nakajima , m. , kokubo , t. , _ et al ._ : analysis of a local fitness landscape with a model of the rough mt .fuji - type landscape : application to protyl endopeptidase and thermolysis .biopolymers * 54 * , 64-79 ( 2000 ) franke , j. , wergen , g. , krug , j : records and sequences of records from random variables with a linear drift . j. statp10013 ( 2010 ) kauffman , s.a . ,weinberger , e.d . 
: .the nk model of rugged fitness landscapes and its application to maturation of the immune response .* 141 * , 211245 ( 1989 ) kauffman , s.a .: the origins of order .oxford university press ( 1993 ) neidhart , j. , szendro , i.g . , krug , j. : exact results for amplitude spectra of fitness landscapes . j. theor . biol . * 332 * , 218227 ( 2013 ) perelson , a.s . ,macken , c.a . : protein evolution on partially correlated landscapes .usa * 92 * , 86579661 ( 1995 ) stadler , p.f . , happel , r. : random field models for fitness landscapes . j. math* 38 * , 435478 ( 1999 ) mzard , m. , parisi , g. , virasoro , m. : spin glass theory and beyond .world scientific ( 1987 ) bovier , a. : statistical mechanics of disordered systems : a mathematical perspective .cambridge university press ( 2006 ) derrida , b. : random - energy model : limit of a family of disordered models .* 45 * , 7982 ( 1980 ) derrida , b. : random - energy model : limit of a family of disordered systems .b * 24 * , 26132626 ( 1981 ) baldi , p. , rinott , y. : asymptotic normality of some graph - related statistics .* 26 * , 171175 ( 1989 ) haldane , j.b.s . : a mathematical theory of natural selection .part viii .metastable populations .cambridge philos .* 27 * , 137142 ( 1931 ) weinberger , e.d .: local properties of kauffman s n - k model : a tunably rugged energy landscape .a * 44 * , 63996413 ( 1991 ) fontana , w. , stadler , p.f ., bornberg - bauer , e.g. , griesmacher , t. , hofacker , i.l . , tacker , m. , tarazona , p. , weinberger , e.d . ,schuster , p. : rna folding and combinatory landscapes .e * 47 * , 20832099 ( 1993 ) altenberg , l. : nk fitness landscapes . in : bck t , fogel db , michalewicz z ( eds . ) , handbook of evolutionary computation .iop publishing ltd and oxford university press ( 1997 ) campos , p. , adami , c. , wilke , c. : optimal adaptive performance and delocalization in nk fitness landscapes .physica a * 304 * , 495506 ( 2002 ) .ibid . _ * 318 * , 637 ( 2003 ) evans , s.n . ,steinsaltz , d. : estimating some features of nk fitness landscapes .* 12 * , 12991321 ( 2002 ) durrett , r. , limic , v. : rigorous results for the nk model .31 * , 17131753 ( 2003 ) limic , v. , pemantle , r. : more rigorous results on the kauffman - levin model of evolution .prob . * 32 * , 21492178 ( 2004 ) gokhale , c.s . ,iwasa , y. , nowak , m.a . , traulsen , a. : the pace of evolution across fitness valleys .* 259 * , 613620 ( 2009 ) alon , n. , spencer , j : the probabilistic method ( 2nd edition ) .wiley ( 2000 ) .watson , r.a . ,weinreich , d.m . ,wakeley , j. : genome structure and the benefits of sex .evolution * 65 * , 523-536 ( 2010 )
|
a fitness landscape is a mapping from the space of genetic sequences , which is modeled here as a binary hypercube of dimension , to the real numbers . we consider random models of fitness landscapes , where fitness values are assigned according to some probabilistic rule , and study the statistical properties of pathways to the global fitness maximum along which fitness increases monotonically . such paths are important for evolution because they are the only ones that are accessible to an adapting population when mutations occur at a low rate . the focus of this work is on the block model introduced by a.s . perelson and c.a . macken [ proc . natl . acad . sci . usa 92:9657 ( 1995 ) ] , where the genome is decomposed into disjoint sets of loci ( ` modules ' ) that contribute independently to fitness , and fitness values within blocks are assigned at random . we show that the number of accessible paths can be written as a product of the path numbers within the blocks , which provides a detailed analytic description of the path statistics . the block model can be viewed as a special case of kauffman's nk model , and we compare the analytic results to simulations of the nk model with different genetic architectures . we find that the mean number of accessible paths in the different versions of the model is quite similar , but the distribution of the path number is qualitatively different in the block model due to its multiplicative structure . a similar statement applies to the number of local fitness maxima in the nk models , which has been studied extensively in previous works . the overall evolutionary accessibility of the landscape , as quantified by the probability of finding at least one accessible path to the global maximum , is dramatically lowered by the modular structure .
|
the eigenvalue problem of the biharmonic equation ( biharmonic eigenvalue problem ) is one of the fundamental model problems in linear elasticity , and can find applications in , e.g. , modelling the vibration of thin plates .there has been a long history on developing the finite element methods of the biharmonic eigenvalue problem , and many schemes have been proposed for discretization , computation of guaranteed upper and lower bounds , and adaptive method and its convergence analysis .this paper is devoted to studying the multi - level efficient method of the biharmonic eigenvalue problem .specifically , we present a discretization scheme which preserves the nested essence on nested grids , and then construct a multi - level algorithm based on the scheme .the cost of the multi - level algorithm versus the intrinsic accuracy of the scheme is asymptotically optimal .as well known , the multi - level algorithm based on nested essence has been a key tool in computational mathematics and scientific computing fields . for the eigenvalue problem ,many multi - level algorithms have been designed and implemented .for example , there are several successful methods for the poisson eigenvalue problem .the two - grid method has been proposed and analyzed by xu - zhou in .the idea of the two - grid method is related to the ideas in [ 23 , 24 ] for nonsymmetric or indefinite problems and nonlinear elliptic equations . since then , many numerical methods for solving eigenvalue problems based on the idea of the two - grid method are developed ( see , e.g., ) .a type of multi - level correction scheme is presented by lin - xie and xie .the method is a type of operator iterative method ( see , e.g , ) . besides, xie presents a multi - level correction scheme , and the guaranteed lower bounds of the eigenvalues can be obtained . the correction method for eigenvalue problems in these papersare based on a series of finite element spaces with different approximation properties related to the multi - level method ( cf . ) . with the proposed methods ,the eigenvalue problem is transformed to an eigenvalue problem on the coarsest grid and a series of source problem on the fine grids .the scheme can be proved asymptotically optimal .the same strategy can be implemented on the stokes equation , and similar asymptotic optimality is constructed .these works mentioned above have indeed presented a framework of designing multi - level schemes which works well for the elliptic eigenvalue problem and stable saddle point problem , provided a series of subproblems with intrinsic nestedness .in contrast to the second order problem , the multi - level method for the biharmonic eigenvalue problem has seldom been discussed , due to the lack of nested subproblems .indeed , when we consider the primal formulation of the biharmonic problem , the high stiffness of the sobolev space makes it difficult to construct nested discretizations . 
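as a rough illustration of the multi - level correction strategy mentioned above ( one eigenvalue solve on the coarsest grid followed by a source - problem solve on each finer level ) , the following python - style sketch shows the control flow only ; the three solver callbacks are hypothetical placeholders for the actual finite element solves , and the concrete correction step used in this paper may differ in detail .

\begin{verbatim}
def multilevel_correction(levels, solve_coarse_eigenproblem,
                          solve_source_problem, solve_small_eigenproblem):
    """Schematic multi-level correction loop for an eigenvalue problem.

    levels  : list of nested discretization levels, coarsest first
    returns : approximate eigenpair (lam, u) on the finest level
    """
    # one (comparatively cheap) eigenvalue solve on the coarsest level
    lam, u = solve_coarse_eigenproblem(levels[0])

    for level in levels[1:]:
        # linear source problem on the finer level:
        #   a(u_tilde, v) = lam * b(u, v)  for all v in the finer space
        u_tilde = solve_source_problem(level, lam, u)

        # small eigenvalue problem on the coarsest space enriched by u_tilde,
        # which updates the eigenpair approximation at low cost
        lam, u = solve_small_eigenproblem(levels[0], u_tilde)

    return lam, u
\end{verbatim}

the dominant cost per iteration is thus the linear solve on the current fine level , which is what makes the overall computational cost asymptotically optimal when the source problems are solved by optimal - order methods .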
besides spline - type elements, the rectangular bfs element is the only element which can form nested finite element spaces on nested grids ; a multi - level algorithm has been designed based on bfs element for fourth order problems on rectangular grids .moreover , elements that are able to form nested spaces are proved to be conforming ones ; therefore , people can not obtain guaranteed lower bounds of eigenvalues with these elements .one way for this situation is to loose the stiffness of the finite element spaces .mixed element method is then frequently used , and several schemes for the biharmonic eigenvalue problem with polynomials of low degree have been designed . also , some discretization schemes of mixed type for boundary value problems can be naturally utilized for the eigenvalue problem ; we refer readers to for related discussion. however , we have to remark that the order - reduced nestedness discretizations is still not straightforward .for example , the ciarlet - raviart formulation admits us to discretize the biharmonic operator with piecewise continuous linear polynomials .however , as this formulation is stable on the space pair , the inheritance of the _ topology _ onto the finite element space is an issue , and the finite element spaces on nested grids are not _ topologically _ nested .the same problem is encountered for some other mixed formulations which introduce direct auxiliary variables , such as .more discussion can be found in .these may explain why few multi - level scheme is discussed for the biharmonic eigenvalue problem . in this paper, we seek to implement multi - level strategy by constructing amiable nested finite element discretization for the biharmonic eigenvalue problem .we first introduce a mixed formulation whose corresponding source problem is discussed in and .this mixed formulation is stable on sobolev spaces of zero and first orders ( cf .lemma [ eq : contiiso ] below ) .as the stiffness is loosened , polynomials of low degree are enough for its discretization , and optimal accuracy can be expected .therefore , it admits discretizations that are nested algebraically and _ topologically_. secondly , we construct a family of multi - level schemes for the mixed formulation of the eigenvalue problem .the multi - level algorithms for biharmonic eigenvalue problem possess optimal accuracy and optimal computational cost . for the proposed algorithms , both theoretical analysis and numerical verificationare given .we remark that , though the multi - level strategy is essentially the same as the one used by lin - xie , the theoretical analysis is not directly by the same virtue .actually , if we separate the primal variables " from lagrangian multipliers " , we will find the skeleton bilinear form is not coercive on the primal variables nor on the lagrangian multipliers .this makes the classical theory of the spectral approximation of the saddle - point problems ( cf . ) not directly usable in the present paper .a precise discussion can be found in remark [ rem : vsstokes ] . 
meanwhile , because of the saddle - point - type essence , the problem is also different from the steklov eigenvalue problem discussed in .we therefore construct different theory framework and interpret the eigenvalue problem in mixed formulation as the eigenvalue problem of a generalized symmetric operator rather than a self - adjoint one , and accomplish the theoretical analysis .the differences between our theory and the existing theory for elliptic or saddle point problems include : ( 1 ) we represent some existing results which are originally in variational formulation into operator formulation , and then present error estimation in that context ; the operator formulation can bridge the gap between the biharmonic problem and the classical theory of spectral approximation , and can avoid complicated appearance especially for the mixed formulation ; ( 2 ) we figure out some properties of generalized symmetric operators which are not necessarily self - adjoint ; and ( 3 ) in our theory , we do not try to interpret the problem as a restrained problem on primal variables or one on lagrangian multipliers , which is usually done for saddle - point problem ; this makes the algorithm construction and theoretical analysis more straightforward .the remaining of the paper is organized as follows . in section[ sec : sa ] , we present the theory of spectral approximation of the generalized symmetric operators .some existing results are restated and re - proved , and some new results are presented . in section [ sec : mm ] , we present a mixed formulation of the biharmonic eigenvalue problem , and construct its ( single - level ) discretization schemes . a multi - level algorithm is then constructed accordingly . both the single- and multi - level algorithms are optimal in accuracy , and the multi - level one also possesses optimal computational cost .the theoretical proof is obtained under the framework discussed in section [ sec : mm ] .numerical examples are then given in section [ sec : ne ] with respect to both single- and multi - level methods .finally , in section [ sec : cr ] , some concluding remarks and further discussion are given .in this section , we present some known and new results , including * an estimate of spectral projection operator ( lemma [ lem : proj ] ) ; * an multi - level algorithm ( algorithm [ alg : mlalg ] ) and its convergence estimate ( theorem [ thm : cmla ] ) ; * spectral approximation of generalized symmetric operator ( lemmas [ lem : listast],[lem : listastdis ] and [ lem : estgsev ] ) ; * corresponding results in variational form ( lemma [ lem : listtevvf ] , algorithm [ alg : variational ] and theorem [ thm : cmlavf ] ) . 
some bibliographic comments are given around .in this subsection , we collect some preliminaries from chapter ii of .let be a hilbert space , and be a compact operator on .let be a nonzero eigenvalue of with algebraic multiplicities .denote the eigenspace .let be a circle on the complex plane centered at which encloses no other points of .let be a family of compact operators that converges to in norm .then for sufficiently small , there exist eigenvalues of , counting multiplicities , located inside .denote them by .let be the eigenvectors of with respect to .denote .then is the approximation of , measured by the * gap * between them .a * gap * between two closed subspaces and of a banach space is defined by [ lem : delta]( ) if , then ^{-1} ] , right of figure [ fig : initmesh ] ) .the initial meshes with mesh size are given in both of the figures , the finest mesh is obtained by five bisection refinements .we run series of numerical experiments on the these two domains , and test the accuracies of both the single - level and multi - level finite element schemes .two kinds of finite element triples of lowest degree are tested , they are triple a : : the reduced lagrangian type triples ; triple b : : the lagrangian type triples . on each domain , we construct a series of nested grids and construct finite element triples thereon with some specific finite elements .particularly , we will set the grid sizes . on each series of meshes, we will run the single - level and multi - level algorithms , to generate two series of approximated eigenvalues and , and two series of approximated eigenfunctions and .the convergence order is computed by from all these numerical results , we observe 1 ) both the schemes provide convergent discretization to the eigenvalue problem ; their accuracy may depend on the regularity of the eigenfunctions , and essentially the domain ; 2 ) the multi - level algorithm construct the same performance as the single - level scheme , but less computation cost if both of them use the finest mesh ; 3 ) for * triple a * , the convergence rate of eigenfunction is higher than the estimation ; and 4 ) for both single- and multi - level methods , the computed eigenvalues can provide upper or lower bounds for the eigenvalues by different triples on convex domain .figure [ fig : singlesqa ] gives the convergence rates of the eigenvalues and eigenfunctions for the square with finite element * triple a * , we give the errors for the first six eigenvalues and eigenfunctions , all the rates are almost 2 , here we obtain the lower bound of the eigenvalues , the errors are given by , the convergence rates of the eigenfunctions are better than the theoretical result , the errors are given by .figure [ fig : singlesqb ] gives the convergence rates of the the first six eigenvalues and eigenfuctions for the square with finite element * triple b * , all the convergence rates of eigenvalues are almost 4 , here we obtain the upper bound of the eigenvalues , the errors are given by .all the convergence rates of eigefunctions are almost 2 which is consistent with the theoretical result .figure [ fig : singlelsa ] gives the convergence rates of the first six eigenvalues and eigenfuctions for the l - shape domain with finite element * triple a * , all the convergence rates of the eigenvalues are almost 2 , here we obtain the lower bound of the eigenvalues , the errors are given by .the convergence rates of the eigenfunctions are almost 2 which is better than the theoretical result .table [ tab : 
p2p1_direct_lshape ] gives the convergence rates of the the first six eigenvalues and eigenfunctions for the l - shape domain with finite element * triple b * , the change of the eigenvalues is not monotone ..[tab : p2p1_direct_lshape]the performance of * triple b * on l - shape domain with single - level scheme . [ cols="^,^,^,^,^,^,^,^",options="header " , ]in this paper , we construct a multi - level mixed scheme for the biharmonic eigenvalue problem .the algorithm possesses both optimal accuracy and optimal computational cost .we remark that , the mixed formulation given in the present paper is equivalent to the primal one ; namely , at continuous level , no spurious eigenvalue is brought in . by the mixed formulation presented in this paper, the biharmonic eigenvalue problem can be discretized with low - degree lagrangian finite elements .discretized poisson equation and stokes problems also play roles in the implementation of the multi - level algorithm , which can reduce much the computational work . both theoretical analysis and numerical verificationare given .for the theoretical analysis , we reinterpret the mixed formulation as an eigenvalue problem of a generalized symmetric operator on an augmented space .this view of point may take hint to the research on other topics of these saddle - point problems ; these will be discussed in future .aiming at the multi - level algorithm , in this paper , we only discuss the conforming cases that .the nonconforming cases that can also be used as a single - level algorithm lonely . also , the utilization to biharmonic equation with other boundary condition and eigenvalue problems with other types can be expected .it is observed that both the single- and multi - level algorithms tend to be able to provide upper or lower bounds of the eigenvalues , at least when the domain is convex .the theoretical verification and further utilization of this phenomena will be meaningful .actually , the computation of the guaranteed bounds with the mixed formulation is not that trivial , as the operator associated is not adjoint in the hilbert space .some new techniques may have to be turned to for the theoretical analysis .also , once we can get the guaranteed bounds , the multi - level algorithms can be improved in both its design and performance .the guaranteed computation of the upper and lower bounds will be discussed in future works . because the mixed formulation admits nested discretization , the combination and interaction between the multi - level algorithm and the adaptive algorithm seem expected .this will also be discussed in future .+ 2 a. andreev , r. lazarov , and m. racheva , _ postprocessing and higher order convergence of mixed finite element approximations of biharmonic eigenvalue problem , _ j. comput. appl . math .* 182 * ( 2005 ) , 333349 .i. babuka and j. osborn , _ eigenvalue problems _ , in finite element methods handbook of numerical analysis , vol .2 , edited by p. g. ciarlet and j. l. lions .elsevier science publisher ( north holland ) , ( 1991 ) .f. bogner , r. fox , and l. schmidt , _ the generation of interelement compatible stiffness and mass matrices by the use of interpolation formula _, proc . of the conference on matrix methods in structural mechanics , wright paterson air force base , ohio , ( 1965 ) .p. ciarlet and p. raviart , _ a mixed finite element method for the biharmonic equation _ , in mathematical aspects of finite elements in partial differential equations , academic press , new york , ( 1974 ) , 125145 .
|
in this paper , we discuss the approximation of the eigenvalue problem of the biharmonic equation . we first present an equivalent mixed formulation which admits amiable nested discretizations . then , we construct multi - level finite element schemes by implementing the algorithm as in to the nested discretizations on a series of nested grids . the multi - level mixed scheme for the biharmonic eigenvalue problem possesses an optimal convergence rate and optimal computational cost . both theoretical analysis and numerical verifications are presented .
|
six years after their first detection by the cobe satellite ( smoot et al . 1992 ) , it is now well appreciated that cosmic microwave background ( cmb ) temperature fluctuations contain rich information concerning virtually all the fundamental cosmological parameters of the big bang model ( bond et al .1994 ; knox 1995 ; jungman et al .new observations from a variety of experiments , ground based and balloon borne , as well as the two planned satellite missions , map and planck surveyor , are and will be supplying a constant stream of ever more precise data over the next decade .it is in fact already possible to extract interesting information from the existing data set , consisting of almost 20 different experimental results ( lineweaver et al .1997 ; bartlett et al .1998a , b ; bond & jaffe 1998 ; efstathiou et al .1998 ; hancock et al . 1998 ; lahav & bridle 1998 ; lineweaver & barbosa 1998a , b ; lineweaver 1998 ; webster et al . 1998 ; lasenby et al . 1999 ) .these experimental results are most often given in the literature as power estimates within a band defined over a restricted range of spherical harmonic orders . our compilation , similar to those of lineweaver et al .( 1997 ) and hancock et al .( 1998 ) , is shown in figure 1 and may be accessed at our web site .the band is defined either directly by the observing strategy , or during the data analysis , e.g. , the electronic differencing scheme introduced by netterfield et al .this permits a concise representation of a set of observations , reducing a large number of pixel values to only a few band power estimates , and for this reason the procedure has been referred to as `` radical compression '' ( bond et al .if the sky fluctuations are gaussian , as predicted by inflationary models , then little or nothing has been lost by the reduction to band powers ( tegmark 1997 ) .this is extremely important , because the limiting factor in statistical analysis of the next generation of experiments , such as , e.g. , boomeranglgg / boom / boom.html ] , maxima , and archeops , is calculation time .working with a much smaller number of band powers , instead of the original pixel values , will be essential for such large data sets .the question then becomes how to correctly treat the statistical problem of parameter constraints starting directly with band power estimates .[ fig_fignew ] standard approaches to parameter determination , whether they be frequentist or bayesian , begin with the construction of a likelihood function . for gaussian fluctuations , the only kind we consider here , this is a multivariant gaussian in the pixel temperature values , where the covariance matrix is a function of the model parameters ( see below ) .the likelihood is then used as a function of the parameters , but as just mentioned , the large number of pixels makes this object very computationally cumbersome. it would be extremely useful to be able to define a likelihood function starting directly with the power estimates in figure 1 .this is the concern of this _ paper _ , where we develop an approximation to the the full likelihood function which requires only band power estimates and very limited experimental details .as always in such procedures , it is worth emphasizing that the likelihood function , and therefore all derived constraints , only applies within the context of the particular model adopted . 
in our discussion , we shall focus primarily on inflationary scenarios , whose theoretical predictions have become easily calculable thanks to the development of fast boltzmann codes , such as cmbfast ( seljak & zaldarriaga 1996 ; zaldarriaga et al .1998 ) .much of the recent work on parameter determination has relied on the traditional technique . as is well known ,this amounts to a likelihood approach for observables with a gaussian probability distribution .power estimates do not fall into this category ( knox 1995 ; bartlett et al .1998c ; bond et al .1998 ; wandelt et al . 1998 ) they are not gaussian distributed variables , not even in the case of underlying gaussian temperature fluctuations .the reason is clear : power estimates represent the _ variance _ of gaussian distributed pixel values ( the sky temperature fluctuations ) , and they therefore have a distribution more closely related to the .we begin , in the following section , by a general discussion of the likelihood approach applied to cmb observations . in the context of an ideally simple situation, we find the _ exact _ analytic form for the likelihood function of a band power estimate .reflections concerning the likelihood function in the context defined by actual experiments motivates us to propose this analytic form as an approximation , or ansatz , in the more general case .it is extremely easy to use , requiring little information in order to be applied to an experimental setup , because it contains only two adjustable parameters .these can be completely determined if one is given two confidence intervals , say the 68% and 95% confidence intervals , of the true , underlying likelihood distribution ( notice that here we see the non gaussian nature of the likelihood a gaussian function would only require one confidence interval , besides the best power estimate , to be completely determined ) .we ask that in the future at least two confidence intervals be given when reporting experimental band power estimates ( more would be better , say for adjusting more complicated functional forms ) .an important limitation of the approach is the inability at present to account for more than one , correlated band powers , as will be discussed further below .we quantitatively test the accuracy of the approximation in section 3 by comparison to several experiments for which we have calculated the full likelihood function .the approximation works remarkably well , and it can represent a substantial improvement over both single and `` 2winged '' gaussian forms commonly used in standard ; and it is as easy to use as the latter .the proposed likelihood approximation , the main result of this _ paper _ , is given in eqs .( [ eq : approx ] ) ( [ eq : ninty5 ] ) .we plan to maintain a web page with a table of the best fit parameters required for its use .detailed application of the approximate likelihood function to parameter constraints and to tests of the gaussianity of the observed fluctuations is left to future papers . other , similar work has been performed by bond et al . (1998 ) and wandelt et al .temperature anisotropies are described by a 2dimensional _ random _ field , where is a unit vector on the sphere .this means we imagine that the temperature at each point has been randomly selected from an underlying probability distribution , characteristic of the mechanism generating the perturbations ( e.g. 
, inflation ) .it is convenient to expand the field in spherical harmonics : for inflation generated perturbations , the coefficients are _ gaussian random variables _ with zero mean and covariance this latter equation defines the _ power spectrum _ as the set of .the indicated averages are to be taken over the theoretical ensemble of all possible anisotropy fields , of which our observed cmb sky is but one realization .since the harmonic coefficients are gaussian variables and the expansion is linear , it is clear that the temperature values on the sky are also gaussian , and they therefore follow a multivariate gaussian distribution ( with an uncountably infinite number of variables , one for each position on the sky ) .the covariance of temperatures separated by an angle on the sky is given by the _ correlation function _ where is the legendre polynomial of order and .the form of this equation , which follows directly from eq .( [ eq : defcl ] ) , is dictated by the statistical isotropy of the perturbations the two point correlation function can only depend on separation .observationally , one works with sky brightness integrated over the experimental beam where is the beam profile and gives the position of the beam axis .the beam profile may or may not be a sole function of , i.e. , of the separation between sky point and beam axis ; if it is , then this equation is a simple convolution on the sphere , and we may write for the beam smeared correlation function , or covariance between experimental beams separated by . the beam harmonic coefficients , , are defined by with .for example , for a gaussian beam , and . given these relations and a cmb map , it is now straightforward to construct the likelihood function , whose role is to relate the observed sky temperatures , which we arrange in a _ data vector _ with elements , to the model parameters , represented by a _parameter vector _ .as advertised , for _ gaussian _ fluctuations ( with gaussian noise ) this is simply a multivariate gaussian : the first equality reminds us that the likelihood function is the probability of obtaining the data vector given the model as defined by its set of parameters . in this expression, is the pixel covariance matrix : where the expectation value is understood to be over the theoretical ensemble of all possible universes realisable with the same parameter vector .the second equality separates the model s pixel covariance , , from the noise induced covariance , . according to eq .( [ eq : cbeam ] ) , .the parameters may be either the individual ( or band powers , discussed below ) , or the fundamental cosmological constants , , etc ... in the former case , eq .( [ eq : cbeam ] ) shows how the parameters enter the likelihood ; in the latter situation , the parameter dependence enters through detailed relations of the kind ] .this may be written in terms of multipoles as \right\}\ ] ] identifying the diagonal elements of as the expression in curly brackets .notice that the power in this variance is localized in , being bounded towards large by the beam smearing and towards small by the difference .the off diagonal elements of depend on the relative positions and orientations of the differences on the sky ; in general these elements are not expressible as simple legendre series .band powers are defined via eq .( [ eq : defw ] ) .one reduces the set of contained within the window to a single number by adopting a spectral form . 
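as a small illustration of the quantities introduced in this section , the following sketch ( python with numpy ; the function names and the gaussian beam are our own choices for illustration ) evaluates the beam - smeared correlation function as a legendre series over a given power spectrum and assembles the gaussian pixel log - likelihood from a model and a noise covariance matrix .

\begin{verbatim}
import numpy as np
from numpy.polynomial import legendre


def beam_smeared_correlation(cl, sigma_beam, cos_theta):
    """C_B(theta) = (1/4pi) sum_l (2l+1) C_l B_l^2 P_l(cos theta),
    assuming a Gaussian beam B_l = exp(-l(l+1) sigma_beam^2 / 2).
    `cl` is an array of C_l values starting at l = 0."""
    ell = np.arange(len(cl))
    bl = np.exp(-0.5 * ell * (ell + 1) * sigma_beam ** 2)
    coeffs = (2 * ell + 1) / (4 * np.pi) * cl * bl ** 2
    return legendre.legval(cos_theta, coeffs)


def gaussian_log_likelihood(data, cov_model, cov_noise):
    """ln L for Gaussian pixel data d with covariance C = C_model + C_noise:
    ln L = -0.5 * (d^T C^{-1} d + ln det C + N ln 2pi)."""
    cov = cov_model + cov_noise
    sign, logdet = np.linalg.slogdet(cov)
    chi2 = data @ np.linalg.solve(cov, data)
    return -0.5 * (chi2 + logdet + len(data) * np.log(2.0 * np.pi))
\end{verbatim}

in practice the model covariance matrix is built by evaluating the beam - smeared ( and , for differencing strategies , differenced ) correlation function at every pair of pixel positions , which is precisely the step that becomes expensive for large maps .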
the so called _ flat band power _ , ,is established by using ] .thus , our problem is reduced to finding an expression for , but as we have seen , this is a complicated function of , requiring use of all the measured pixel values and the full covariance matrix with noise the very thing we are trying to avoid .our task then is to find an approximation for . in order to better understand the general form expected for , we shall proceed by first considering a simple situation in which we may find an exact analytic expression for this function .we are guided by the observation that the covariance matrix may always be diagonalized around an adopted fiducial model .although this remains strictly applicable only for this model , we imagine that the likelihood function could be approximated as a simple product of one dimensional gaussians near this point in parameter space .if we further suppose that the diagonal elements of the covariance matrix ( its eigenvalues ) are all identical , we can find a very manageable analytic expression for the likelihood in terms of the best power estimate .we will then pose this general form as an ansatz for more realistic situations , one which we shall test in the following section .we return to these remarks after developing the ansatz .consider , then , a situation in which the band temperatures ( that is , generalized pixels which are the elements of the general data vector ) are independent random variables ( is diagonal ) and that the experimental noise is spatially uncorrelated and uniform : where is the model predicted variance and is the constant noise variance . for simplicity ,we assume that all diagonal elements of are the same , implying that is a constant , independent of .we discuss shortly the nature of such a data vector in actual observational set ups .this situation is identical to one where values are randomly selected from a single parent distribution described by a gaussian of zero mean and variance .power we wish to estimate is proportional to the model predicted variance according to ( i.e. , eq .[ eq : t ] ) ( independent of ) , and we know that in this situation the maximum likelihood _ estimator _ for the model predicted variance is simply ^ 2 = \frac{1}{{n_{\rm pix } } } \sum_{i=1}^{{n_{\rm pix } } } d_i^2 - { \sigma_n}^2 \equiv [ { \hat{\delta t_{\rm fb}}}]^2 { \cal r}_{band}\ ] ] as follows from maximizing the likelihood function ^{{n_{\rm pix}}/2 } } e^{-\frac{{n_{\rm pix}}({\hat{\sigma}_m}^2+{\sigma_n}^2)}{2({\sigma_m}^2+{\sigma_n}^2)}}\ ] ] notice that this is _ a function of _ , which peaks at the best estimate , and whose form is specified by the parameters , and . 
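for the idealized situation just described , the estimator and the likelihood can be written in a few lines ; the sketch below ( python , with our own variable names , not a tool for realistic scan strategies ) is only meant to make the dependence on the best estimate , the noise variance and the number of pixels explicit .

\begin{verbatim}
import numpy as np


def band_power_variance_estimate(d, sigma_noise):
    """Maximum-likelihood estimate of the model variance for independent,
    identically distributed Gaussian pixels with known uniform noise:
    sigma_m_hat^2 = (1/N) sum_i d_i^2 - sigma_n^2  (can fluctuate negative)."""
    return np.mean(d ** 2) - sigma_noise ** 2


def ideal_log_likelihood(sigma_m2, d, sigma_noise):
    """ln L(sigma_m^2), up to an additive constant, for the ideal case:
    -0.5 * N * [ ln(sigma_m^2 + sigma_n^2)
                 + (sigma_m_hat^2 + sigma_n^2) / (sigma_m^2 + sigma_n^2) ];
    it peaks at sigma_m^2 equal to the estimator above."""
    n = len(d)
    s2 = sigma_m2 + sigma_noise ** 2
    s2_hat = np.mean(d ** 2)            # = sigma_m_hat^2 + sigma_n^2
    return -0.5 * n * (np.log(s2) + s2_hat / s2)
\end{verbatim}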
to obtain the likelihood function for the band power, we simply treat this as a function of , using eq .( [ eq : rband ] ) , parameterized by , and : ^{{n_{\rm pix}}/2 } } \\\nonumber & & \hspace*{2 cm } \times e^{-\frac{{n_{\rm pix}}({\hat{\delta t_{\rm fb}}}^2{\cal r}_{band}+{\sigma_n}^2 ) } { 2({\delta t_{\rm fb}}^2{\cal r}_{band}+{\sigma_n}^2)}}\\ \nonumber \\\nonumber & \equiv & g({\delta t_{\rm fb}};{\hat{\delta t_{\rm fb}}},{\sigma_n},{n_{\rm pix}})\end{aligned}\ ] ] it clearly peaks at .thus , in this ideal case , we have a simple band power likelihood function , with corresponding best estimator , , given by eq .( [ eq : esigm ] ) .although not immediately relevant to our present goals , it is all the same instructive to consider the _ distribution _ of .this is most easily done by noting that the quantity is with degrees of freedom .we may express the maximum likelihood estimator for the band power in terms of this quantity as \ ] ] from , we see immediately that the estimator is unbiased its variance is explicitly demonstrating the influence of sample / cosmic variance ( related to ) . all the above relations are _ exact _ for the adopted situation eq . ( [ eq : liketfb ] ) is the _ complete _ likelihood function for the band power defined by the _generalized _ pixels satisfying eq .( [ eq : simplecov ] ) .such a situation could be practically realized on the sky by observing well separated generalized pixels to the same noise level ; for example , a set of double differences scattered about the sky , all with the same signal to noise .this is rarely the case , however , as scanning strategies must be concentrated within a relatively small area of sky ( one makes maps ! ) .this creates important off diagonal elements in the theory covariance matrix , representing correlations between nearby pixels due to long wavelength perturbation modes .in addition , the noise level is quite often not uniform and sometimes even correlated , adding off diagonal elements to the noise covariance matrix .thus , the simple form proposed in eq .( [ eq : simplecov ] ) is never achieved in actual observations .nevertheless , as mentioned , even in this case one could adopt a fiducial theoretical model and find a transformation which diagonalizes the full covariance matrix , thereby regaining one important simplifying property of the above ideal situation .the diagonal elements of the matrix are then its eigenvalues .because of the correlations in the original matrix , we expect there to be fewer significant eigenvalues than generalized pixels ; this will be relevant shortly .one could then work with a reduced matrix consisting of only the significant eigenvalues , an approach reminiscent of the signal to noise eigenmodes proposed by bond ( 1995 ) , and also known as the karhunen - loeve transform ( bunn & white 1997 , tegmark et al .there remain two technical difficulties : the covariance matrix does not remain diagonal as we move away from the adopted fiducial model by varying only when this band power corresponds to the fiducial model is the matrix really diagonal .the second complicating factor is that the eigenvalues are not identical , which greatly simplified the previous calculation .all of this motivates us to examine the possibility that a likelihood function of the form ( [ eq : liketfb ] ) could be applied , with appropriate redefinitions of and .we therefore proceed by renaming these latter and , respectively , and treating them as parameters to be adjusted to best fit the full likelihood 
function .thus , given an actual band power estimate , ( i.e. , an experimental result ) , _ we propose as an ansatz for the band power likelihood function , with parameters and _ : + & \equiv & \frac{([{\delta t_{\rm fb}}^{(o)}]^2 + \beta^2)}{([{\delta t_{\rm fb}}]^2 + \beta^2)}\nu \\ \nonumber\end{aligned}\ ] ] we have only two parameters and to determine in order to apply the ansatz .this can be done if two confidence intervals of the complete likelihood function are known in advance .for example , suppose we were given both the 68% ( & ) and 95% ( & ) confidence intervals ; then we could fix the two parameters with the equations \ ; { \cal l}({\delta t_{\rm fb } } ) } { \int_0^\infty d[{\delta t_{\rm fb}}]\ ; { \cal l}({\delta t_{\rm fb } } ) } \\ \nonumber \\ \label{eq : ninty5 } 0.95 & = & \frac{\int_{{\delta t_{\rm fb}}^{(o)}-\sigma^-_{95}}^{{\delta t_{\rm fb}}^{(o)}+\sigma^+_{95 } } d[{\delta t_{\rm fb}}]\ ; { \cal l}({\delta t_{\rm fb } } ) } { \int_0^\infty d[{\delta t_{\rm fb}}]\ ; { \cal l}({\delta t_{\rm fb } } ) } \\ \nonumber\end{aligned}\ ] ] we shall see in the next section ( figures 27 ) that this produces excellent approximations .this is the main result of this _paper_. [ fig_saskcomp1 ] [ fig_saskcomp2 ] unfortunately , most of the time only the 68% confidence interval is reported along with an experimental result ( we hope that in the future authors will in fact supply at least two confidence intervals ) .is there any way to proceed in this case ?for example , one could try to judiciously choose and then adjust with eq .( [ eq : sixty8 ] ) . the most obvious choice for would be , although from our previous discussion , we expect this to be an upper limit to the number of significant degrees of freedom ( the significant eigenvalues of ) , due to correlations between pixels .the comparisons we are about to make in the following section show that a smaller number of effective pixels ( i.e. , value for ) is in fact required for a good fit to the true likelihood function .one could try other games , such as setting ( scan length)/(beam fwhm ) for unidimensional scans .this also seems reasonable , and certainly this number is less than or equal to the actual number of pixels in the data set , but we have found that this does not always work satisfactorily .the availability of a second confidence interval permits both parameters , and , to be unambiguously determined and in such a way as to provide the best possible approximation with the proposed ansatz .bond et al .( 1998 ) have recently examined the nature of the likelihood function and discussed two possible approximations .the form of the ansatz just presented is in fact identical to one of their proposed approximations , parameterized by and .these parameters are simply related to our and as follows : and .notice that the above development and motivation for the ansatz essentially follow for a single band power .a set of uncorrelated power estimates is then easily treated by simple multiplication .however , the approximation as proposed does not simultaneously account for several _ correlated _ band powers , and it s accuracy is therefore limited by the extent to which such inter band correlations are important in a given data set . as a further remark along these lines ,we have noted that flat band estimates of any kind , be it from a complete likelihood analysis or not , do not always contain all relevant experimental information , ( douspis et al . 
2000 ) ; any method based on their use is then fundamentally limited by nature of the lost information . the only way to test the ansatz is , of course , by direct comparison to the full likelihood function calculated for a number of experiments . if it appears to work for a few such cases , then we may hope that it s general application is justified .we now turn to this issue .in order to quantitatively test the proposed ansatz , we have calculated the complete likelihood function for several experiments .our aim will be to compare the true likelihoods to the approximation .figures 25 summarize our comparisons with the saskatoon and max data sets . for the saskatoon and max experiments ,we compare the approximation directly to the band power likelihood functions . in all cases ,the complete likelihood functions have been calculated as outlined in section 2 above .[ fig_maxcomp ] the first comparison will be made to the saskatoon q band 1995 4point and 10point differences ( experimental information can be found in netterfield et al . 1997 ; all relevant information concerning the experiment can be found on the group s web pagecmb / skintro/ + sask_intro.html ] ; for useful and detailed information on a number of experiments , see caltech s web page ) .this particular choice of window functions was arbitrary .the approximation , applied using the constraints ( [ eq : sixty8 ] ) and ( [ eq : ninty5 ] ) , is shown in figures 2 and 3 as the dashed ( red ) curve .we see that it provides a good representation of the complete likelihood functions , traced by the solid ( black ) curves in each figure ; in fact , the fit is truly spectacular for the 10point difference . taking as a benchmark the rule of thumb that 1 , 2 and 3 confidence intervals may be estimated by , 4 and 9 , respectively , we see that the approximation reproduces almost perfectly all of these , and more .consider now setting and , for the 4point and 10point differences , respectively , and then adjusting to the 68% confidence interval .in so doing , we obtain the dot dashed ( blue ) curves , which in fact are not too bad in both cases .these values of should be compared to the values of and found previously by adjusting to two confidence intervals .thus , we see that the effective number of degrees of freedom describing these saskatoon likelihood functions is indeed , as we expected from the above discussion . finally , the 3dot dashed ( green ) curves show `` 2winged '' gaussians with separate positive and negative going variances , sometimes employed in traditional .this is also a fare representation of the two likelihood functions , although the proposed ansatz does perform slightly better .we will return to this point , but we should not be too surprised that the gaussian works reasonably well when , as here , becomes large ( all the same , notice that the curves are not symmetric and that a single gaussian , with a single , would not fare particularly well ) .comparison to the max experiment is shown in figure 4 for the region i d ( experimental details can be found in tanaka et al .1996 ) ; we have combined all three frequency channels to construct the complete likelihood function . the scan strategy consisted in taking single differences aligned along a unidimensional scan . once again , the approximation , applied using eqs .( [ eq : sixty8 ] ) and ( [ eq : ninty5 ] ) , supplies an excellent representation of the likelihood function , down to values well below `` '' ( 0.01 of the peak ) . 
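for completeness , here is a minimal numerical sketch ( python with scipy , reflecting our own implementation choices ) of how the two constraints ( [ eq : sixty8 ] ) and ( [ eq : ninty5 ] ) can be solved for the two parameters of the ansatz ; the likelihood is written up to its normalization by combining the definition of the rescaled variable given above with the ideal - case form , and the integration cutoff and starting guesses are arbitrary .

\begin{verbatim}
import numpy as np
from scipy import integrate, optimize


def log_ansatz(dT, dT_obs, beta, nu):
    """Log of the band-power likelihood ansatz, up to a constant:
    ln L = (nu/2) ln x - x/2 with x = nu (dT_obs^2 + beta^2)/(dT^2 + beta^2),
    so that the likelihood peaks at dT = dT_obs."""
    x = nu * (dT_obs ** 2 + beta ** 2) / (dT ** 2 + beta ** 2)
    return 0.5 * nu * np.log(x) - 0.5 * x


def interval_mass(dT_obs, beta, nu, lo, hi, cutoff=50.0):
    """Fraction of the likelihood mass over dT in [0, infinity) lying in [lo, hi]."""
    like = lambda t: np.exp(log_ansatz(t, dT_obs, beta, nu)
                            - log_ansatz(dT_obs, dT_obs, beta, nu))
    num, _ = integrate.quad(like, max(lo, 0.0), hi)
    den, _ = integrate.quad(like, 0.0, cutoff * dT_obs)
    return num / den


def fit_ansatz_parameters(dT_obs, sig68, sig95):
    """Solve for (beta, nu) such that the ansatz reproduces the quoted 68% and
    95% intervals; sig68 and sig95 are (lower, upper) error bars about dT_obs."""
    def residuals(p):
        beta, nu = np.exp(p)            # enforce positivity
        return [interval_mass(dT_obs, beta, nu,
                              dT_obs - sig68[0], dT_obs + sig68[1]) - 0.68,
                interval_mass(dT_obs, beta, nu,
                              dT_obs - sig95[0], dT_obs + sig95[1]) - 0.95]

    sol = optimize.root(residuals, x0=np.log([0.5 * dT_obs, 10.0]))
    beta, nu = np.exp(sol.x)
    return beta, nu
\end{verbatim}

when only one interval is available , the same machinery can be used with one of the two parameters held fixed , which is exactly the guessing game described in the text .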
the effective number of degrees of freedom is , demonstrating again that . here, the difference is rather large , due to the significant overlap between adjacent pixels along the scan , and we see that the ansatz with does not produce a good approximation .[ figsaskbin3 ] could there a way to proceed if only one confidence interval is given ?this would require a choice for one of the parameters , say , based on some knowledge of the scan strategy .we have just seen that for max leads to a bad representation of the likelihood function .one might be tempted to try instead ( scan length)/(beam fwhm) , which is in fact very close to the best value of found from adjusting to two confidence intervals .although this is successful in this case , it is nevertheless guess work , the problem being that it is really not clear if there is a unique rule for judiciously choosing .for saskatoon , worked reasonably well , while here it does not , something much less being required because of the significant redundancy in the scan .we have found that it is difficult to justify a priori a general rule for choosing when lacking two confidence intervals .the most sure way of finding the effective number of degrees of freedom to be used in the ansatz remains the use of two confidence intervals , via eqs .( [ eq : sixty8 ] ) and ( [ eq : ninty5 ] ) .a noteworthy aspect of this max likelihood function is its asymmetry , i.e. , it is manifestly non gaussian .even a `` 2winged '' gaussian is clearly a very bad representation . as the number of statistically independent elements entering the power estimation increases, we should expect the likelihood function to approach a gaussian distribution .the question is , what is meant by _ statistically independent elements _ ?it is obviously * not * something like , for max covers multipoles near 100 ; rather , as we have argued above , it is really the parameter which measures this , what we have been calling the effective number of degrees of freedom .the fact that tells us that the number of generalized pixels is an _ upper limit _ to this number degrees of freedom determining the non gaussian nature of the likelihood function .we make the connection to the familiar only when we have full sky coverage and bands consisting of single multipoles ; then , the number of generalized pixels defining each ( single multipole ) band corresponds to . 
in the general case ,it is more useful and correct to reason with the number of pixels ( really , ) .we may also conclude from this that although experiments with relatively large sky coverage should provide gaussian likelihood functions on scales much smaller than the survey area , band power estimates on scales approaching the survey area will always be non gaussian .the proposed ansatz represents a substantial improvement over either a single or `` 2winged '' gaussian in such cases .these comparisons focus on simple cases where the power over a single band defined by the observing strategy is to be estimated , although in the max case the analysis did include three frequency channels simultaneously .a more subtle test of the approximation is its extension to a power estimate over several bands defined by _different _ window functions .such is the situation presented by the five standard saskatoon power bins .each _ bin _ comprises several _ bands _ , of the type considered above , and the bin power is estimated using the joint likelihood of the contributing bands , including all band band correlations .one could worry that the information carried by several bands might not be adequately incorporated by the two parameters of the ansatz .in figure 5 we compare the approximation to the likelihood function of a combination of 10 , 11 and 12point differences .included are the k band 1994 and q band 1994 and 1995 cap data .the true likelihood function for this bin is calculated from the complete covariance matrix accounting for all correlations , and the approximation was fit using two confidence intervals . even in this more complicated situationwe see that the ansatz continues to work quite well , once the appropriate best power estimate and errors for the complete bin are used to find and .it is on the basis of such comparisons that we believe the proposed ansatz and method of application produces acceptable likelihood functions .besides the comparisons shown here , we have also tested the approximation against 11 other complete likelihood functions , all kindly provided by k. ganga ; these comparisons may be viewed on our web page .the approximation works well in all cases .we emphasize again that the particular value of the proposed ansatz resides in its simplicity we obtain very good approximations with little effort .study of cmb temperature fluctuations have over the short interval of time since their discovery become the cosmological tool with the greatest potential for determining the values of the fundamental cosmological constants .the present data set is already capable of eliminating some regions of parameter space , and this is only a fore - taste of what is to come .experimental results are often quoted as band power estimates , and for _ gaussian _ sky fluctuations , these represent a complete description of an observation . because there are far fewer band powers than pixel values for any given experiment , the reduction to band powers has been called `` radical compression '' ( bond et al .1998 ) ; and as the number of pixels explodes with the next instrument generations , this kind of compression will become increasingly important in any systematic analysis of parameter constraints . 
for these reasons , it is extremely useful to develop statistical methods which take as their input power estimates .since most standard methods use as a starting point the likelihood function , one would like to have a simple expression for this quantity given a power estimate one that does not require manipulation of the entire observational pixel set .one difficulty is that even for gaussian sky fluctuations , the band power likelihood function is not gaussian , most fundamentally because the power represents an estimate of the _ variance _ of the pixel values . for any fiducial model, the data covariance matrix can be diagonalized and the likelihood function near this point in parameter space expressed as a product of individual gaussians in the data elements ( this is strictly speaking only possible for the model in question ) .this consideration lead us to examine the ideal situation where the eigenvalues of were all identical , for which we can analytically find the exact form of the likelihood function in terms of the best power estimate . using this as motivation, we have proposed the same functional form for band power likelihood functions , eq .( [ eq : approx ] ) , as an ansatz in more general cases .it contains two free parameters , and , which may be uniquely determined if two confidence intervals of the full likelihood function ( the thing one is trying to fit ) are known ; for example , the 68% and 95% confidence intervals ( eqs .[ eq : sixty8 ] and [ eq : ninty5 ] ) .we have seen that the resulting approximate distributions match remarkably well the complete likelihood functions for a number of experiments those discussed here as well as 11 others ( calculated by k. ganga and b. ratra ) .all of these comparisons may be viewed at our web site , where we also plan to provide and continually up date the appropriate parameter values and for each published experiment .although at least one confidence interval is normally given in the literature ( usually at 68% ) , a second confidence interval is rarely quoted . to aid the kind of approach proposed here, we would ask that in the future experimental band power estimates be given with at least two likelihood based confidence intervals ( additional intervals , such as 99.8% , would allow one to fit other functional forms with 3 free parameters ) .this remains the surest way of finding the effective number of degrees of freedom of the likelihood , .an otherwise a priori choice for this number appears difficult , among other things because it depends on the nature of the scan strategy .we have noted in this light that , precisely because of correlations between pixels , which depend on the scan geometry . 
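the statement that the effective number of degrees of freedom falls below the number of pixels when pixels are correlated can also be illustrated numerically: simulate correlated gaussian "pixels", form the naive mean-square power estimate, and match its first two moments to a scaled chi-squared. this moment-matching rule is only an illustration, not the procedure advocated above (which fixes the parameter from two confidence intervals), and the gaussian correlation model is an arbitrary choice.

```python
import numpy as np

rng = np.random.default_rng(0)
n_pix, n_sims = 64, 20000

def nu_eff(corr_len):
    """moment-matched effective dof of the naive power estimate."""
    i = np.arange(n_pix)
    cov = np.exp(-0.5 * ((i[:, None] - i[None, :]) / corr_len) ** 2)
    cov += 1e-8 * np.eye(n_pix)                    # numerical jitter
    pix = rng.multivariate_normal(np.zeros(n_pix), cov, size=n_sims)
    power = (pix ** 2).mean(axis=1)                # naive band-power estimate
    return 2.0 * power.mean() ** 2 / power.var()   # chi-squared: var = 2 mean^2 / nu

for corr_len in [0.01, 1.0, 3.0, 10.0]:
    print(f"correlation length {corr_len:5.2f} pixels -> nu_eff ~ {nu_eff(corr_len):5.1f}")
```

with essentially uncorrelated pixels the estimate returns roughly the number of pixels, and it drops steadily as the correlation length grows, mirroring the redundancy argument made above for heavily oversampled scans.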
one important aspect of the approximate nature of the proposed method is its inability to account for correlations between several band powers .when analyzing a set of band powers , one is obliged to simply multiply together their respective approximate likelihood functions .the accuracy of the approximation is thus limited by the extent to which inter band correlations are important .although one s desire is to give experimental results as independent power estimates , this is not always possible .furthermore , and as discussed in douspis et al .( 2000 ) , the very use of flat band powers may lead to a loss of relevant experimental information otherwise contained in the original pixel data .the accuracy of any method based on their use is thus additionally limited by the importance of this lost information .these limitations define in practice the approximate nature of the proposed method .another important point to make is that the approximation is extremely easy to use , as easy as the ( inappropriate ) method ; and for experiments with a small number of significant degrees of freedom , it represents a substantial improvement over the latter .this is the case , for example , with the max i d likelihood function , and it will always be the case when estimating power on the largest scales of a survey . when the effective number of degrees of freedom becomes large , a gaussian becomes an acceptable approximation , and the gain in using the proposed ansatz is less significant .nevertheless , the approximation s facile applicability promotes its use even in these cases . in the future, we will apply the proposed approximation in a systematic study of parameter constraints and for a test of the gaussianity of the cmb fluctuations .we are very grateful to k. ganga and b. ratra for so kindly providing us with an additional 11 likelihood functions with which to test the approximation ; and we also thank d. barbosa for supplying much information concerning current experimental results .bartlett j.g . , blanchard a. , le dour m. , douspis m. & barbosa d. 1998a , in : fundamental parameters in cosmology ( moriond proceedings ) , eds .j. trn thanh vn et al .( editions frontires : paris , france ) , astro ph/9804158 bartlett j.g . ,blanchard a. , douspis m. & le dour m. 1998b , to be published in : evolution of large scale structure : from recombination to garching ( munich , germany ) , astro ph/9810318 bartlett j.g . , blanchard a. , douspis m. & le dour m. 1998c , to be published in : the cmb and the planck mission ( santander , spain ) , astro ph/9810316 bond j.r ., jaffe a.h . &knox l. 1998 , astro ph/9808264 bond j.r .& jaffe a.h .1998 , to appear in philosophical transactions of the royal society of london a , 1998 .`` discussion meeting on large scale structure in the universe , '' royal society , london , march 1998 , astro ph/9809043 bond j.r .1995 , phys .74 , 4369 bond j.r ., crittenden r. , davis r.l . , efstathiou g. & steinhardt p.j .1994 , phys .72 , 13 bunn e.f . &white m. 1997 , apj 480 , 6 efstathiou g. , bridle s.l ., lasenby a.n ., hobson m.p . &ellis r.s .1998 , astro ph/9812226 hancock s. , rocha g. , lasenby a.n .& gutierrez c.m . 1998 , mnras 294 , l1 jungman g. , kamionkowski m. , kosowsky a. & spergel d. 1996 , phys .d54 , 1332 knox l. 1995 , phys .d52 , 4307 lahav o. & bridle s.l .1998 , to be published in : evolution of large scale structure : from recombination to garching ( munich , germany ) , astro ph/9810169 lasenby a.n ., bridle s.l .& hobson m.p . 
1999 , to be published in : the cmb and the planck mission ( santander , spain ) , astro ph/9901303 lineweaver c. , barbosa d. , blanchard a. & bartlett j.g .1997 , a&a 322 , 365 lineweaver c.h .& barbosa d. 1998a , a&a 329 , 799 lineweaver c.h . &barbosa d. 1998b , apj 496 , 624 lineweaver c.h .1998 , apj 505 , 69 netterfield c.b . ,devlin m.j . , jarolik n. , page l. & wollack e.j .1997 , apj 474 , 47 seljak u. & zaldarriaga m. 1996 , apj469 , 437 smoot g.f ., bennett c.l ., kogut a. et al .1992 , apj 396 , l1 tanaka s.t ., clapp a.c ., devlin m.j .1996 , apj 468 , l81 tegmark m. , taylor a.n .& heavens a.f . 1997 , apj 480 , 22 tegmark m. 1997 , phys rev .d55 , 5895 wandelt b.d ., hivon e. & grski k.m .1998 , astro ph/9808292 webster m. , bridle s.l . ,hobson m.p . ,lasenby a.n ., lahav o. & rocha g. 1998 , astro ph/9802109 zaldarriaga m. , seljak u. & bertschinger e. 1998 , apj 494 , 491
|
band power estimates of cosmic microwave background fluctuations are now routinely used to place constraints on cosmological parameters . for this to be done in a rigorous fashion , the full likelihood function of band power estimates must be employed . even for gaussian theories , this likelihood function is not itself gaussian , for the simple reason that band powers measure the _ variance _ of the random sky fluctuations . in the context of gaussian sky fluctuations , we use an ideal situation to motivate a general form for the full likelihood function from a given experiment . this form contains only two free parameters , which can be determined if the 68% and 95% confidence intervals of the true likelihood function are known . the ansatz works remarkably well when compared to the complete likelihood function for a number of experiments . for application of this kind of approach , we suggest that in the future both 68% and 95% ( and perhaps also the 99.7% ) confidence intervals be given when reporting experimental results .
|
there has been a great deal of interest in multi - modal artificial intelligence research recently , bringing together the fields of computer vision and natural language processing .this interest has been fueled in part by the availability of many large - scale image datasets with textual annotations .several vision+language tasks have been proposed around these datasets .image captioning and visual question answering have in particular attracted a lot of attention .the performances on these tasks have been steadily improving , owing much to the wide use of deep learning architectures .a central theme underlying these efforts is the use of natural language to identify how much visual information is perceived and understood by a computer system .presumably , a system that understands a visual scene well enough ought to be able to describe what the scene is about ( thus `` captioning '' ) or provide correct and visually - grounded answers when queried ( thus `` question - answering '' ) . in this paper ,we argue for directly measuring how well the semantic representations of the visual and linguistic modalities align ( in some abstract semantic space ) . for instance , given an image and two captions a correct one and an incorrect yet - cunningly - similar one can we both qualitatively and quantitatively measure the extent to which humans can dismiss the incorrect one but computer systems blunder ? arguably , the degree of the modal alignment is a strong indicator of task - specific performance on any vision+language task .consequentially , computer systems that can learn to maximize and exploit such alignment should outperform those that do not .we take a two - pronged approach for addressing this issue .first , we introduce a new and challenging dual machine comprehension ( dmc ) task , in which a computer system must identify the most suitable textual description from several options : one being the target and the others being `` adversarialy''-chosen decoys .all options are free - form , coherent , and fluent sentences with _ high degrees of semantic similarity _ ( hence , they are `` cunningly similar '' ) .a successful computer system has to demonstrate comprehension beyond just recognizing `` keywords '' ( or key phrases ) and their corresponding visual concepts ; they must arrive at a coinciding and visually - grounded understanding of various linguistic elements and their dependencies .what makes the dmc task even more appealing is that it admits an easy - to - compute and well - studied performance metric : the accuracy in detecting the true target among the decoys .second , we illustrate how solving the dmc task benefits related vision+language tasks . 
to this end, we render the dmc task as a classification problem , and incorporate it in a multi - task learning framework for end - to - end training of joint objectives .our work makes the following contributions : ( 1 ) an effective and extensible algorithm for generating decoys from human - created image captions ( section [ sec : creation ] ) ; ( 2 ) an instantiation of applying this algorithm to the coco dataset , resulting in a large - scale dual machine - comprehension dataset that we make publicly available ( section [ sec : mcic - coco ] ) ; ( 3 ) a human evaluation on this dataset , which provides an upper - bound on performance ( section [ sec : human_eval ] ) ; ( 4 ) a benchmark study of baseline and competitive learning approaches ( section [ sec : results ] ) , which underperform humans by a substantial gap ( about 20% absolute ) ; and ( 5 ) a novel multi - task learning model that simultaneously learns to solve the dmc task and the image captioning task ( sections [ sec : seq+ffnn ] and [ sec : lambda_gen ] ) .our empirical study shows that performance on the dmc task positively correlates with performance on the image captioning task .therefore , besides acting as a standalone benchmark , the new dmc task can be useful in improving other complex vision+language tasks .both suggest the dmc task as a fruitful direction for future research .image understanding is a long - standing challenge in computer vision .there has recently been a great deal of interest in bringing together vision and language understanding . particularly relevant to our workare image captioning ( ic ) and visual question - answering ( vqa ) . both have instigated a large body of publications ,a detailed exposition of which is beyond the scope of this paper .interested readers should refer to two recent surveys . in ic tasks ,systems attempt to generate a fluent and correct sentence describing an input image .ic systems are usually evaluated on how well the generated descriptions align with human - created captions ( ground - truth ) .the language generation model of an ic system plays a crucial role ; it is often trained such that the probabilities of the ground - truth captions are maximized ( mle training ) , though more advanced methods based on techniques borrowed from reinforcement learning have been proposed . to provide visual grounding ,image features are extracted and injected into the language model . notethat language generation models need to both decipher the information encoded in the visual features , and model natural language generation . in vqa tasks ,the aim is to answer an input question correctly with respect to a given input image . in many variations of this task ,answers are limited to single words or a binary response ( `` yes '' or `` no '' ) .the visual7w dataset contains anaswers in a richer format such as phrases , but limits questions to `` wh-''style ( what , where , who , etc ) . the visual genome dataset , on the other hand , can potentially define more complex questions and answers due to its extensive textual annotations . our dmc task is related but significantly different . in our task ,systems attempt to discriminate the best caption for an input image from a set of captions all but one are decoys . 
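once a model assigns a score to every candidate caption of an image (for instance the log-likelihood of the caption under an image-conditioned captioner), the accuracy metric of the dmc task is immediate to compute. the scoring rule and the toy numbers below are assumptions for illustration, not one of the paper's baselines.

```python
import numpy as np

def dmc_accuracy(option_scores, target_index):
    """option_scores: (n_images, n_options) model scores; target_index: (n_images,)."""
    picked = np.argmax(option_scores, axis=1)
    return float(np.mean(picked == np.asarray(target_index)))

# toy usage: 4 images with 5 options each (one target and four decoys)
rng = np.random.default_rng(0)
scores = rng.normal(size=(4, 5))
targets = [0, 2, 1, 4]
print(f"accuracy on the toy batch: {dmc_accuracy(scores, targets):.2f}")
```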
arguably , it is a form of vqa task , where the same default ( thus uninformative ) question is asked : _ which of the following sentences best describes this image ?_ however , unlike current vqa tasks , choosing the correct answer in our task entails a deeper `` understanding '' of the available answers .thus , to perform well , a computer system needs to understand both complex scenes ( visual understanding ) and complex sentences ( language understanding ) , _ and _ be able to reconcile them .the dmc task admits a simple classification - based evaluation metric : the accuracy of selecting the true target .this is a clear advantage over the ic tasks , which often rely on imperfect metrics such as bleu , rouge , meteor , cider , or spice .related to our proposal is the work in , which frames image captioning as a ranking problem .while both share the idea of selecting captions from a large set , our framework has some important and distinctive components .first , we devise an algorithm for smart selection of candidate decoys , with the goal of selecting those that are sufficiently similar to the true targets to be challenging , and yet still be reliably identifiable by human raters .second , we have conducted a thorough human evaluation in order to establish a performance ceiling , while also quantifying the level to which current learning systems underperform .lastly , we show that there exists a positive correlation between the performance on the dmc task and the performance on related vision+language tasks by proposing and experimenting with a multi - task learning model .our work is also substantially different from their more recent work , where only one decoy is considered and its generation is either random , or focusing on visual concept similarity ( `` switching people or scenes '' ) instead of our focus on both linguistic surface and paragraph vector embedding similarity .we propose a new multi - modal machine comprehension task to examine how well visual and textual semantic understanding are aligned . given an image , human evaluators or machinesmust accurately identify the best sentence describing the scene from several decoy sentences .accuracy on this task is defined as the percentage that the true targets are identified .it seems straightforward to construct a dataset for this task , as there are several existing datasets which are composed of images and their ( multiple ) ground - truth captions , including the popular coco dataset .thus , for any given image , it appears that one just needs to use the captions corresponding to other images as decoys .however , this nave approach could be overly simplistic as it is provides no control over the properties of the decoys .specifically , our desideratum is to recruit _ challenging _ decoys that are sufficiently similar to the targets .however , for a small number of decoys , e.g. 4 - 5 , randomly selected captions could be significantly different from the target .the resulting dataset would be too `` easy '' to shed any insight on the task .since we are also interested in human performance on this task , it is thus impractical to increase the number of decoys to raise the difficulty level of the task at the expense of demanding humans to examine tediously and unreliably a large number of decoys . 
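the decoy-selection step described above can be sketched in a few lines: candidates drawn from the neighbourhood of the target in the embedding space are scored by combining a bleu-like surface similarity (without brevity penalty) and the cosine similarity of their embeddings, and a candidate that is too close to the target on the surface level is discarded by returning a zero score. the convex combination with weight `alpha`, the cut-off `tau`, and the random stand-in embeddings below are assumptions, since the exact weighting of the score function and the paragraph-vector model itself are not reproduced here.

```python
from collections import Counter
import numpy as np

def ngram_precision(cand, ref, n):
    c = Counter(zip(*[cand[i:] for i in range(n)]))
    r = Counter(zip(*[ref[i:] for i in range(n)]))
    overlap = sum(min(v, r[g]) for g, v in c.items())
    return overlap / max(1, sum(c.values()))

def surface_sim(cand, ref, max_n=2):
    """simplified bleu-like surface similarity (no brevity penalty)."""
    cand, ref = cand.lower().split(), ref.lower().split()
    precs = [ngram_precision(cand, ref, n) for n in range(1, max_n + 1)]
    return float(np.exp(np.mean(np.log(np.clip(precs, 1e-9, None)))))

def cosine(u, v):
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

def decoy_score(cand, target, embed, alpha=0.3, tau=0.85):
    s = surface_sim(cand, target)
    if s >= tau:                      # too close to the target: not usable as decoy
        return 0.0
    e = cosine(embed[cand], embed[target])
    return alpha * s + (1.0 - alpha) * e

# toy usage with random stand-in embeddings (hypothetical)
rng = np.random.default_rng(1)
caps = ["a man riding a horse on a beach",
        "a person rides a horse along the shore",
        "a bowl of fruit on a table"]
embed = {c: rng.normal(size=16) for c in caps}
target = caps[0]
ranked = sorted(caps[1:], key=lambda c: decoy_score(c, target, embed), reverse=True)
print(ranked)
```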
in short ,we need an _ automatic procedure to reliably create difficult sets of decoy captions _ that are sufficiently similar to the targets .we describe such a procedure in the following .while it focuses on identifying decoy captions , the main idea is potentially adaptable to other settings .the algorithm is flexible in that the difficulty " of the dataset can be controlled to some extent through the algorithm s parameters .the main idea behind our algorithm is to carefully define a `` good decoy '' .the algorithm exploits recent advances in paragraph vector ( pv ) models , while also using linguistic surface analysis to define similarity between two sentences .due to space limits , we omit a detailed introduction of the pv model .it suffices to note that the model outputs a continuously - valued embedding for a sentence , a paragraph , or even a document .the pseudo - code is given in algorithm [ amcic ] ( the name mc - ic stands for `` machine - comprehension for image & captions '' ) .as input , the algorithm takes a set of pairs , as those extracted from a variety of publicly - available corpora , including the coco dataset .the output of the algorithm is the set . + + + + concretely , the mc - ic algorithm has three main arguments : a dataset where is an image and is its ground - truth caption ; an integer which controls the size of s neighborhood in the embedding space defined by the paragraph vector model ; and a function which is used to score the items in each such neighborhood .the first two steps of the algorithm tune several hyperparameters .the first step finds optimal settings for the model given the dataset .the second finds a weight parameter given , dataset , and the function .these hyperparameters are dataset - specific .details are discussed in the next section .the main body of the algorithm , the outer loop , generates a set of ( 4 here ) decoys for each ground - truth caption .it accomplishes this by first extracting candidates from the neighborhood of the ground - truth caption , excluding those that belong to the same image . in the inner loop, it computes the similarity of each candidate to the ground - truth and stores them in a list .if enough candidates are generated , the list is sorted in descending order of score .the top captions are marked as `` decoys '' ( * false * ) , while the ground - truth caption is marked as `` target '' ( * true * ) .the score function is a crucial component of the decoy selection mechanism .its definition leverages our linguistic intuition by combining linguistic surface similarity , , with the similarity suggested by the embedding model , : where the common argument is omitted .the higher the similarity score , the more likely that is a good decoy for . note that if the surface similarity is above the threshold , the function returns 0 , flagging that the two captions are too similar to be used as a pair of target and decoy . in this work , computed as the bleu score between the inputs ( with the brevity penalty set to 1 ) .the embedding similarity , , is computed as the cosine similarity between the two in the pv embedding space .[ cols= " < " , ] the results in table [ table : lambda_gen ] illustrate one of the main points of this paper . that is , the ability to perform the comprehension task ( as measured by the accuracy metric ) positively correlates with the ability to perform other tasks that require machine comprehension , such as caption generation. 
at , the model not only has a high accuracy of detecting the ground - truth option , but it also generates its own captions given the input image , with an accuracy measured on at 0.9890 ( dev ) and 0.9380 ( test ) cider scores . on the other hand , at an accuracy level of about 59% ( on test , at ) , the generation performance is at only 0.9010 ( dev ) and 0.8650 ( test ) cider scores .we note that there is an inherent trade - off between prediction accuracy and generation performance , as seen for values above 4.0 .this agrees with the intuition that training a model using a loss with a larger means that the ground - truth detection loss ( the first term of the loss in eq.[eq : mloss ] ) may get overwhelmed by the word - generation loss ( the second term ) .however , our empirical results suggest that there is value in training models with a multi - task setup , in which both the comprehension side as well as the generation side are carefully tuned to maximize performance .we have proposed and described in detail a new multi - modal machine comprehension task ( dmc ) , combining the challenges of understanding visual scenes and complex language constructs simultaneously .the underlying hypothesis for this work is that computer systems that can be shown to perform increasingly well on this task will do so by constructing a visually - grounded understanding of various linguistic elements and their dependencies .this type of work can therefore benefit research in both machine visual understanding and language comprehension .the architecture that we propose for addressing this combined challenge is a generic multi - task model .it can be trained end - to - end to display both the ability to choose the most likely text associated with an image ( thus enabling a direct measure of its `` comprehension '' performance ) , as well as the ability to generate a complex description of that image ( thus enabling a direct measure of its performance in an end - to - end complex and meaningful task ) .the empirical results we present validate the underlying hypothesis of our work , by showing that we can measure the decisions made by such a computer system and validate that improvements in comprehension and generation happen in tandem .the experiments presented in this work are done training our systems in an end - to - end fashion , starting directly from raw pixels .we hypothesize that our framework can be fruitfully used to show that incorporating specialized vision systems ( such as object detection , scene recognition , pose detection , etc . )is beneficial .more precisely , not only it can lead to a direct and measurable impact on a computer system s ability to perform image understanding , but it can express that understanding in an end - to - end complex task .m. abadi , a. agarwal , p. barham , e. brevdo , z. chen , c. citro , g. corrado , a. davis , j. dean , m. devin , s. ghemawat , i. goodfellow , a. harp , g. irving , m. isard , y. jia , r. jozefowicz , l. kaiser , m. kudlur , j. levenberg , d. man , r. monga , s. moore , d. murray , c. olah , m. schuster , j. shlens , b. steiner , i. sutskever , k. talwar , p. tucker , v. vanhoucke , v. vasudevan , f. vigas , o. vinyals , p. warden , m. wattenberg , m. wicke , y. yu , and x. zheng .: large - scale machine learning on heterogeneous systems , 2015 .software available from tensorflow.org .s. banerjee and a. lavie . : an automatic metric for mt evaluation with improved correlation with human judgments . 
in _ proceedings of the acl workshop on intrinsic and extrinsic evaluation measures for machine translation and/or summarization _ , 2005 .r. bernardi , r. cakici , d. elliott , a. erdem , e. erdem , n. ikizler - cinbis , f. keller , a. muscat , and b. plank .automatic description generation from images : a survey of models , datasets , and evaluation measures ., 55 , 2016 .k. cho , b. van merrienboer , .glehre , d. bahdanau , f. bougares , h. schwenk , and y. bengio . learning phrase representations using rnn encoder - decoder for statistical machine translation . in _ proceedings of emnlp ,october 25 - 29 , 2014 , doha , qatar _ , pages 17241734 , 2014 .j. donahue , l. a. hendricks , s. guadarrama , m. rohrbach , s. venugopalan , k. saenko , and t. darrell .long - term recurrent convolutional networks for visual recognition and description . in _ proc .of ieee conference on computer vision and pattern recognition ( cvpr ) _ , 2014 .h. fang , s. gupta , f. iandola , r. srivastava , l. deng , p. dollr , j. gao , x. he , m. mitchell , j. platt , et al . from captions to visual concepts and back . in _ proc . of ieee conference on computer vision and pattern recognition ( cvpr ) _ , 2015 .r. krishna , y. zhu , o. groth , j. johnson , k. hata , j. kravitz , s. chen , y. kalantidis , l .- j .li , d. a. shamma , m. bernstein , and l. fei - fei . : connecting language and vision using crowdsourced dense image annotations . 2016 .i. sutskever , o. vinyals , and q. v. v. le .sequence to sequence learning with neural networks . in _ advances in neural information processing systems 27_ , pages 31043112 .curran associates , inc . , 2014 .k. xu , j. ba , r. kiros , a. courville , r. salakhutdinov , r. zemel , and y. bengio .show , attend and tell : neural image caption generation with visual attention . in _ proc .of the 32nd international conference on machine learning ( icml ) _ , 2015 .b. yao and f .- f .li . modeling mutual context of object and human pose in human - object interaction activities . in _ proceedings of the 2010 ieee conference on computer vision and pattern recognition ( cvpr ) _ , 2010 .
|
we introduce a new multi - modal task for computer systems , posed as a combined vision - language comprehension challenge : identifying the most suitable _ text _ describing a scene , given several similar options . accomplishing the task entails demonstrating comprehension beyond just recognizing `` keywords '' ( or key - phrases ) and their corresponding visual concepts . instead , it requires an alignment between the representations of the two modalities that achieves a visually - grounded `` understanding '' of various linguistic elements and their dependencies . this new task also admits an easy - to - compute and well - studied metric : the accuracy in detecting the true target among the decoys . the paper makes several contributions : an effective and extensible mechanism for generating decoys from ( human - created ) image captions ; an instance of applying this mechanism , yielding a large - scale machine comprehension dataset ( based on the coco images and captions ) that we make publicly available ; human evaluation results on this dataset , informing a performance upper - bound ; and several baseline and competitive learning approaches that illustrate the utility of the proposed task and dataset in advancing both image and language comprehension . we also show that , in a multi - task learning setting , the performance on the proposed task is positively correlated with the end - to - end task of image captioning .
|
testing statistical hypotheses is an important step in building statistical models .often it is checked whether the data deviate significantly from a null model . in point process statistics , typical null modelsare complete spatial randomness ( csr ) , independent marking or some fitted model . unlike in classical statistics , where null models are typically represented by a single hypothesis, the hypotheses in spatial statistics have a spatial dimension and therefore a multiple character .usually a summary function is employed in the test , where is a distance variable .a typical example is ripley s -function .the tests are based on the differences of empirical and theoretical values of , which are called `` residuals '' in the following .a problem is how to handle the residuals for different values of .one possibility is construction of envelopes around the theoretical summary function and to look if the empirical summary function is completely between the envelopes .this very popular method , which goes back to , has a difficult point : to guarantee a given significance level and to determine -values , see the discussion in and .an alternative approach was proposed by , who introduced statistics which compress information from the residuals for intervals of -values to a scalar .this approach has analogues in classical statistics , namely the kolmogorov - smirnov and von mises tests . in the present paper tests in diggle s spiritare called _ deviation tests_. though diggle s procedure is accepted as a standard in spatial point process statistics , to our knowledge there are no studies which explore its properties systematically . several power comparisons for different forms of deviation tests have been reported ( e.g. * ? ? ?* ; * ? ? ?* ; * ? ? ?* ; * ? ? ?* ; * ? ? ?* ; * ? ? ?* ; * ? ? ?* ) , but these investigations concern only specific issues . in the present paper , we consider the construction of deviation tests in more detail and systematically and give general recommendations for their use , for stationary as well as for finite point processes. recall that a classical deviation test in point process statistics is based on a summary function for the null model and its unbiased estimator .if there would be _ a priori _ a value of distance which is of main interest , then one could proceed as in classical tests by comparing the empirical value with for this , i.e. to consider the residual .however , since usually such a single special distance is not given , one would like to consider the residuals simultaneously for all distances in some interval ] denote a point process and a marked point process , respectively .the are the points , while is the mark of point . in the present paper planar pointprocesses are considered , but the main ideas hold true also for point processes in for . if and are stationary , they have an intensity which is denoted by .the mark distribution function is denoted by .its mean and variance are and . the statistical analysis is based on observations in a window , which is a compact convex subset of . in the planar casethe window is often a rectangle .when distributional hypotheses for point processes have to be tested , deviation tests as suggested by are a popular tool .we consider these tests here in a generalised form .such a test is based on a test function that characterises in some way the spatial arrangement of the points and/or marks in the window .there are many possibilities for such functions . 
in the classical case ,a common choice for is an unbiased estimator of some summary function , see e.g. , and for examples .popular functions for stationary processes in the case without marks are ripley s -function , the nearest neighbour distance distribution function ( -function ) and the empty space function / spherical contact distribution ( -function ) . for stationary marked point processes ,various mark correlation functions including the mark - weighted -functions are available . for non - stationary or finite processesanalogues of stationary - case characteristics can be used .since deviation tests are in essence monte carlo tests , one needs to be able to generate spatial point patterns for the tested null model in and to calculate for data and each simulated pattern .the function for data is denoted below by and for simulations the corresponding functions are for .these functions are then compared with the expectation of for the null model in . for this a global deviation measure used that summarises the discrepancy between and into a single number for all .the deviation measure is calculated for the data ( ) and for the simulated patterns from the null model ( ) .the rank of among the is the basis of the monte carlo test .there are various ways to obtain .the classical case is that of a stationary point process , an unbiased estimator of a summary function and the known form of for the null model .then it is simply and .usually edge - corrected estimators are then needed . also in some other cases , the expectation is analytically known , as for the example considered in section [ sec : simulation_study_setup ] .otherwise , has to be determined statistically based on simulations of the null model in the window .a simple estimator of is the mean of functions obtained from another independent set of simulations of the null model in . in order to save computing time, the same samples can be used for determining the and . suggested to use for each simulation , ( is data ) , its own mean value . in this casethe statistics are exchangeable and , under the null hypothesis , all rankings of among the are equiprobable as in the case where is obtained analytically or from another set of simulations .this section discusses in detail how the global deviation test statistic is constructed from a test function of a pattern observed in a window and its expectation under the null hypothesis .the raw residual is simply all raw residuals for ] used by and , e.g. , for a third order analogue of ripley s function for planar processes . for - and -functionsthe aitkin - clayton variance stabilising transformation may be useful .the present paper uses the well - established square root transformation , also in the context of marked point processes . note that , in the general case of , the -function is defined by the transformation {\cdot / b_d} ] . for each simulatedmarked point pattern , we performed tests based on random permutations of marks .the tests were made with 1 . ( with the translational edge correction ) with the mark test functions given in table [ table : summaries ] , 2 .the transformation , 3 .raw , studentised , quantile and directional quantile residuals , 4 .deviation measures and , and 5 .three intervals of -values : ] and ] .since the contributions of raw residuals of for different are approximately equal on , the test based on these residuals can be used without any transformations or scalings . 
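the machinery just described can be condensed into a short numerical sketch: the test function is tabulated on a grid of distances for the data and for the simulated patterns from the null model, each curve is compared with the mean of the remaining curves so that the statistics are exchangeable, the residuals are optionally studentised or quantile-scaled, and the monte carlo p-value follows from the rank of the data deviation among the simulated ones. the exact scaling formulas used later in the paper may differ in detail, and the synthetic curves are only for illustration.

```python
import numpy as np

def deviation_test(t_data, t_sims, measure="sup", scaling="raw", q=0.95):
    """t_data: (m,) tabulated test function for the data;
    t_sims: (s, m) tabulated test functions for null-model simulations."""
    curves = np.vstack([t_data, t_sims])          # row 0 holds the data
    n = curves.shape[0]
    u = np.empty(n)
    for i in range(n):
        others = np.delete(curves, i, axis=0)
        resid = curves[i] - others.mean(axis=0)   # raw residuals
        if scaling == "studentised":
            scale = others.std(axis=0, ddof=1)
        elif scaling == "quantile":
            scale = np.quantile(np.abs(others - others.mean(axis=0)), q, axis=0)
        else:
            scale = np.ones_like(resid)
        resid = resid / np.where(scale > 0, scale, 1.0)
        u[i] = np.max(np.abs(resid)) if measure == "sup" else np.sum(resid ** 2)
    p_value = np.sum(u >= u[0]) / n               # rank-based monte carlo p-value
    return u[0], p_value

# toy usage: 999 simulated curves with distance-dependent variance and a data
# curve shifted away from the null at the low-variance distances
rng = np.random.default_rng(2)
sd = np.linspace(0.2, 2.0, 50)
t_sims = rng.normal(scale=sd, size=(999, 50))
t_data = rng.normal(scale=sd) + np.where(np.arange(50) < 10, 0.8, 0.0)
for scaling in ["raw", "studentised"]:
    u0, p = deviation_test(t_data, t_sims, scaling=scaling)
    print(f"{scaling:12s} sup-deviation = {u0:6.3f}, p = {p:.4f}")
```

in the toy run the shift sits at the low-variance distances, so the studentised version tends to detect it while the raw sup-deviation is dominated by the high-variance part of the interval, the same effect that the simulation study below attributes to scaling.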
on this narrow interval , also differences between the deviation measures and are negligible and , thus , figure [ fig_simexp_marktestfs ] shows the results only for .the power for the various summary functions ( or mark test functions ) depends on the alternative model : 1 . the function leads to powers at least as high as for any other function for the seqnimpp , expnimcp and gnimcp models , while , for the random field model gncp , its power is low . for the exppimcp model ,the power related to is slightly lower than that for .the function is approximately as powerful as for the seqnimpp , exppimcp and gnimcp models , where the marks tend to be either smaller or larger ( depending on model parameters ) than the mean mark for points close together . on the other hand , in the expnimcp model , the marks at points close together are not clearly smaller than the mean mark and , therefore , does not have high power .for the gncp model , the power is similarly low .the function leads to a powerful test for the gncp model , because of its sensibility to similar marks at short distances .note that the marks of points close together also tend to be similar also in the gnimcp models , but for these models and lead to much more powerful test than for the reasons explained above .we conclude that indeed the power of deviation test depends on the choice of the summary function , which should be adapted to the alternative model .functions with different mark test functions for the models of table [ table : simexp_models ] ( top panel ) using the test based on the supremum deviation measure applied to the raw residuals of on ] . ]figures [ fig_simexp_scalings_kbased ] and [ fig_simexp_scalings_lbased ] show power curves for the deviation measures and applied to the raw , studentised , quantile and directional quantile residuals of the test function and its transformation , respectively .we found the following : 1 .always the power of the tests based on raw residuals is lowest .the results with the studentised and quantile scalings are very similar .3 . for the expnimcp and gncp models ( for the latter only for ), the directional scaling improves further the power , while for the exppimcp the scaling is counter productive .the problem of asymmetry plays a role for these models : the expimcp models have an asymmetric mark distribution , and we found that also the empirical distributions of residuals are asymmetric .moreover , we found that the residual distributions are more asymmetric for than for and , which explains the result for the gncp model .note that the improvements of power by scalings are smaller for than for , since the residuals of the transformed test function are more uniform than the raw residuals . with the most powerful mark test functions ( top panel ) using the deviation measures and ( right panel ) applied to the different residuals of on ] . ]figure [ fig_simexp_scalings_vsl ] compares the tests based on the scaled residuals of to those of .we observe the following : 1 . for the seqnimpp and gnimcp models ,the differences between the powers of the tests based on scaled residuals of or are very small , and these tests have clearly higher power than the test based on the raw residuals of the transformed test function .2 . for the expnimcp and gncp models , the power for the test based on the quantile residuals of is lower than for the test based on the corresponding residuals of ( and , for the measure , even lower than for the test based on raw residuals of ) . 
for the exppimcp model ,the opposite occurs for the measure .3 . similarly , applying first the square root transformation and then the scaling results in more powerful tests than applying pure scaling to for the expnimcp and gncp models , and the other way around for the exppimcp model .4 . for the gncp model, the scaling has no advantage over the scaling if the transformation is employed first .the result 1 above indicates that the transformation , prior to scaling , is unnecessary for the seqnimpp and gnimcp models , whereas the results 2 - 3 show that it is useful in the case of the expnimcp and gncp models . as pointed out already above , for the latter models the asymmetry plays a role .clearly the scalings and do not decrease asymmetry in the distribution of the residuals , but , in this particular case , it appears that the square root transformation does , as does the scaling .thus , a transformation can lead to a change in the form of the distribution of residuals .in general , it can reduce as well as induce asymmetry .however , typically it will not make the distribution of residuals completely uniform for all distances on and scalings lead to further improvements in power , see figures [ fig_simexp_scalings_lbased ] and [ fig_simexp_scalings_vsl ] .we conclude that a good strategy appears to be to use a suitable transformation , if available , and then scaling to further reduce inhomogeneity of residuals . with the most powerful mark test functions ( top panel ) using the deviation measures and ( right panel ) on ] . for the seqnimpp model ,for which the deviance from the null model is reasonably sharp , the supremum measure leads to a clearly higher power than the integral measure . for the other models , the measures andlead approximately to the same power and , thus , do not play an important role . and integral deviation measures applied to the residuals of on ] , ] for the models of table [ table : simexp_models ] with the most powerful mark test functions ( top panel ) using the supremum deviation measure applied to the raw residuals of and and scaled residuals of ( right panel ) . ] at the end of this section we mention briefly the two further results .the first concerns the problem of edge - correction .as discussed already , the estimator stems from the stationary process context . in the random labelling testthere is no need of edge - correction since we deal with a fixed point pattern in the bounded window .so we also made the tests on for the processes in table [ table : simexp_models ] without edge - correction ( i.e. 
) and observed that the power was a little higher than for the estimator with edge - correction if no scalings were used .we also made a corresponding simulation experiment as reported above based on points in a window of size \times[0,200]$ ] .the results with were analogous to those with , except the values of power were larger , showing the consistency property empirically .this paper considers the construction of deviation tests for point processes in a general form .the tests are applicable also for non - stationary and finite point processes , since they are based on a test function and its expectation under the null hypothesis , which can be estimated from simulations from the null model if it is not known analytically .thus , there is no need to assume stationarity and to use unbiased estimators of summary functions in the test .the paper demonstrates the tests for finite and stationary point processes in the case of the random labelling hypothesis .the main point in constructing a powerful deviation test is to choose a suitable test function . for the random labelling test we used mark - weighted functions , where the choice of the mark test function is essential .which mark test functions will lead to the most powerful test depends on properties of the point pattern under analysis .if a researcher has an alternative model in mind , this model may suggest the mark test function to be used . when the test function is chosen , the next problem is search for a suitable transformation .the simulation study shows that transformations of summary functions can increase the power of deviation tests , since they allow to make the distribution of residuals more uniform over distances .there are some classical transformations , and perhaps new transformations can be developed .finally , scalings of residuals can further improve the power of tests based on transformed summary functions , because they lead to approximately even variances of residuals for different distances , unlike transformation only .while transformations may reduce or induce asymmetry in the distribution of residuals , scaling is a simple way to make distributions of residuals more uniform . as shown by the simulation study, scalings also act in some form of balance with the choice of the length of the interval , i.e. an appropriate scaling can make the choice of unimportant . in some sense, the role of transformations and scalings is similar to that of prior distributions in bayesian statistics .if there is not an a priori interesting distance , then it makes sense to give similar importance to all residuals in the chosen interval of distances .thus , the residual distributions for different distances should be made uniform , which , in the context of deviation tests , can be done by means of transformations and scalings .as the toy examples and the simulation study demonstrated , such transformations and scalings typically , but not necessarily , improve the power of the deviation test .m.m . has been financially supported by the academy of finland ( project number 250860 ) and p.g . by rfbr grant ( project 12 - 04 - 01527 ) .the authors thank tom ' a mrkvika ( university of south bohemia ) for his comments on an earlier version of the article .32 natexlab#1#1 aitkin , m. and clayton , d. ( 1980 ) .the fitting of exponential , weibull and extreme value distributions to complex censored survival data using glim . _ applied statistics _ * 29 * , 156163 .baddeley , a. j. , kerscher , m. , schladitz , k. and scott , b. t. 
( 2000 ) . estimating the function without edge correction .neerl . _ * 54 * , 315328 .barnard , g. a. ( 1963 ) .discussion of professor bartlett s paper ._ j. r. stat .methodol . _ * 25 * , 294 .besag , j. and diggle , p. j. ( 1977 ) .simple monte carlo tests for spatial pattern ._ j. r. stat .soc . ser ._ * 26 * , 327333 .besag , j. e. ( 1977 ) .comment on ` modelling spatial patterns ' by b. d. ripley ._ j. r. stat .methodol . _ * 39 * , 193195 .bretz , f. , hothorn , t. and westfall , p. ( 2010 ) ._ multiple comparisons using r_. chapman and hall / crc , 1st edn .cressie , n. a. c. ( 1993 ) ._ statistics for spatial data _ , revised edn .wiley , new york .diggle , p. j. ( 1979 ) .on parameter estimation and goodness - of - fit testing for spatial point patterns ._ biometrics _ * 35 * , 87101 .diggle , p. j. ( 2003 ) ._ statistical analysis of spatial point patterns _ , 2nd edn .arnold , london .gignoux , j. , duby , c. and barot , s. ( 1999 ) .comparing the performances of diggle s tests of spatial randomness for small samples with and without edge - effect correction : application to ecological data ._ biometrics _ * 55 * , 156164 .grabarnik , p. and chiu , s. n. ( 2002 ) . goodness - of - fit test for complete spatial randomness against mixtures of regular and clustered spatial point processes ._ biometrika _ * 89 * , 411421 .grabarnik , p. , myllymki , m. and stoyan , d. ( 2011 ) .correct testing of mark independence for marked point patterns ._ ecological modelling _ * 222 * , 38883894 .grabarnik , p. and srkk , a. ( 2009 ) . modelling the spatial structure of forest stands by multivariate point processes with hierarchical interactions ._ ecological modelling _ * 220 * , 12321240 .guan , y. ( 2005 ) .tests for independence between marks and points of a marked point process. _ biometrics _ * 62 * , 126134 .ho , l. p. and chiu , s. n. ( 2006 ) . testing the complete spatial randomness by diggle s test without an arbitrary upper limit ._ j. stat .simul . _ * 76 * , 585591 .ho , l. p. and chiu , s. n. ( 2009 ) . using weight functions in spatial point pattern analysis with application to plant ecology data .simulation comput . _* 38 * , 269287 .ho , l. p. and stoyan , d. ( 2008 ) .modelling marked point patterns by intensity - marked cox processes ._ statist .lett . _ * 78 * , 11941199 .illian , j. , penttinen , a. , stoyan , h. and stoyan , d. ( 2008 ) ._ statistical analysis and modelling of spatial point patterns_. john wiley & sons , ltd , chichester .loosmore , n. b. and ford , e. d. ( 2006 ) . statistical inference using the g or k point pattern spatial statistics ._ ecology _ * 87 * , 19251931 .mller , j. and berthelsen , k. k. ( 2012 ) . transforming spatial point processes into poisson processes using random superposition ._ adv . in appl .* 44 * , 4262 .myllymki , m. ( 2009 ) ._ statistical models and inference for spatial point patterns with intensity - dependent marks_. thesis , university of jyvskyl , jyvskyl .myllymki , m. and penttinen , a. ( 2009 ) .conditionally heteroscedastic intensity - dependent marking of log gaussian cox processes .* 63 * , 450473 .penrose , m. d. and shcherbakov , v. ( 2009 ) .maximum likelihood estimation for cooperative sequential adsorption . _ adv . in appl .probab . _ * 41 * , 9781001 .penttinen , a. and stoyan , d. ( 1989 ) . statistical analysis for a class of line segment processes .j. stat . _* 16 * , 153168 .ripley , b. d. ( 1976 ) .the second - order analysis of stationary point processes ._ j. appl .* 13 * , 255266 .ripley , b. d. 
( 1977 ) .modelling spatial patterns ._ j. r. stat .methodol . _ * 39 * , 172212 .ripley , b. d. ( 1979 ) .tests of randomness for spatial point patterns ._ j. r. stat .methodol . _* 41 * , 368374 .schladitz , k. and baddeley , a. j. ( 2000 ) .a third order point process characteristic ._ * 27 * , 657671 .schlather , m. ( 2001 ) . on the second - order characteristics of marked point processes ._ bernoulli _ * 7 * , 99117 .schlather , m. , ribeiro jr . , p. j. and diggle , p. j. ( 2004 ) . detecting dependence between marks and locations of marked point processes ._ j. r. stat .soc . ser .methodol . _ * 66 * , 7993 .thnnes , e. and van lieshout , m .- c .( 1999 ) . a comparative study on the power of van lieshout and baddeley s j function . _ biom . j. _ * 41 * , 721734 . van lieshout , m. n. m. and baddeley , a. j. ( 1996 ) . a nonparametric measure of spatial interaction in point patterns_ * 50 * , 344361 .let , , .then follows the folded normal distribution with cumulative distribution function where is the cumulative distribution function of the standard normal distribution .further , the distribution function of is if , the distribution is called half - normal distribution and it simplifies to the critical value for the null hypothesis that for all can be obtained by solving , where is the significance level and is given in .thereafter the power of the unscaled test for the alternative hypothesis : for , where is a subset of , can be obtained from the power of the scaled test can be obtained similarly , because for it holds and the distribution of is assume the random variable has the distribution and . since ,the distribution of is for ( 0 otherwise ) .then the distribution of is similarly as .let then , where and are weights .the distribution of is and the distribution of is obtained similarly as : the powers of the tests based on and can then be calculated in the same way as in the case of the toy example 1 above .that is , for , solve first the critical value for the null hypothesis that for all from , and then calculate for the alternative hypothesis with where for , and similarly for .
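the appendix calculation can be reproduced numerically: for independent gaussian residuals the sup-type deviation measure has a distribution given by the product of folded-normal cdfs, the critical value solves that product equal to 1 - alpha, and the power under a mean-shift alternative follows from the same product evaluated at the shifted means. the particular standard deviations and the shift below are hypothetical.

```python
import numpy as np
from scipy.stats import norm
from scipy.optimize import brentq

sigma = np.array([0.5, 1.0, 1.5, 2.0])     # std devs of the raw residuals
shift = np.array([1.5, 0.0, 0.0, 0.0])     # alternative: shift at one distance
alpha = 0.05

def folded_cdf(t, mu, s):
    """P(|X| <= t) for X ~ N(mu, s^2), t >= 0."""
    return norm.cdf((t - mu) / s) - norm.cdf((-t - mu) / s)

def critical_value(scales):
    g = lambda t: np.prod(folded_cdf(t, 0.0, scales)) - (1.0 - alpha)
    return brentq(g, 1e-8, 100.0)

def power(scales, mus):
    t_crit = critical_value(scales)
    return 1.0 - np.prod(folded_cdf(t_crit, mus, scales))

print("unscaled test power:", round(power(sigma, shift), 3))
print("scaled test power  :", round(power(np.ones(4), shift / sigma), 3))
```

with the shift placed at the low-variance distance, the scaled test comes out far more powerful than the unscaled one, illustrating why the scaling of residuals matters.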
|
the deviation test belongs to the core tools of point process statistics , where hypotheses are typically tested by considering differences between an empirical summary function and its expectation under the null hypothesis , both of which depend on a distance variable . this test is a classical device to overcome the multiple comparison problem , which appears because the functional differences have to be considered for a range of distances simultaneously . the test has three basic ingredients : ( i ) choice of a suitable summary function , ( ii ) transformation of the summary function or scaling of the differences , and ( iii ) calculation of a global deviation measure . we consider in detail the construction of such tests both for stationary and finite point processes and show , by two toy examples and a simulation study for the case of the random labelling hypothesis , that the points ( i ) and ( ii ) have a great influence on the power of the tests . _ key words _ : deviation test ; marked point process ; marking model ; mark - weighted -function ; monte carlo test ; multiple comparison ; random labelling ; simulation study
|
there exist several approaches in the study of chaotic behavior of dynamical systems using the concepts such as ( 1 ) entropy and dynamical entropy , ( 2 ) chaitin s complexity , ( 3 ) lyapunov exponent ( 4 ) fractal dimension ( 5 ) bifurcation ( 6 ) ergodicity .but these concepts are rather independently used in each field . in 1991 , one of the authorsproposed information dynamics ( i d for short ) to try to treat such chaotic behavior of systems from a common standing point .then a chaos degree to measure the chaos in dynamical systems is defined by means of two complexities in id . in particular , among several chaos degrees , the entropic chaos degree was introduced in and it is applied to some dynamical systems . recently ,semiclassical properties and chaos degree for quantum baker s map has been considered in . in this paper , we give a new treatment of quantum chaos by introducing the chaos degree for quantum transition dynamics , and we prove some fundamental properties for non - chaotic maps .moreover we show , as an example , that our chaos degree well describes chaotic behavior of spin systems .in order to contain more general dynamics such as one in continuous systems , we define the entropic chaos degree in c*-algebraic terminology .this setting will be too general in the sequel discussions , but for mathematical completeness including both classical and quantum systems , we start from the c*-algebraic setting .let be an input c * system and be an output c * system ; namely , is a c * algebra with unit and is the set of all states on .we assume in the sequel for simplicity . fora weak * compact convex subset ( called the reference space ) of , take a state from the set and let be an extremal orthogonal decomposition of in , whose measure describes a certain degree of mixture of in the reference space .the measure is not uniquely determined unless is the choquet simplex , so that the set of all such measures is denoted by the entropic chaos degree with respect to and a channel a map from to , is defined by where is the mixing entropy of a state in the reference space , hence it becomes von neumann entropy when is the set of all density operaotrs , or it does shannon entropy when is the set of all probability distributions .this contains the classical chaos degree and the quantum one .now in the case of we simply denote by we use this degree to judge whether the dynamics causes a chaos or not as follows : for a give state a dynamics causes chaos iff , and it does cause chaos ( i.e. , may be called stable ) iff . in usual quantum system including classical discrete system , is the set of all bounded operators on a hilbert space and is the set of all density operators , in which an extremal decomposition of is a schatten decomposition ( i.e. , are one dimensional orthogonal projections with so that the entropic chaos degree is written as where the infimum is taken over all possible schatten decompositions and is von neumann entropy .note that in classical discrete case , the schatten decomposition is unique with the delta measure and the entropic chaos degree is written by where is the probability distribution of the orbit obtained from a dynamics of a system , and that dynamics generates the channel whose details are discussed in section 2 . 
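for the classical discrete case just mentioned, one standard way to compute the entropic chaos degree of a one-dimensional map is to coarse-grain the orbit by an equidistant partition, estimate the one-step joint distribution of consecutive cells, and take the shannon entropy of the next cell conditioned on the current one. the sketch below does this for the logistic map; the partition size, the map parameters and the orbit length are arbitrary choices, and the conditional-entropy form is the commonly used finite-partition version of the degree rather than a quotation of the formula above.

```python
import numpy as np

def entropic_chaos_degree(orbit, n_cells, lo=0.0, hi=1.0):
    cells = np.minimum(((orbit - lo) / (hi - lo) * n_cells).astype(int), n_cells - 1)
    joint = np.zeros((n_cells, n_cells))
    for a, b in zip(cells[:-1], cells[1:]):       # one-step joint distribution
        joint[a, b] += 1.0
    joint /= joint.sum()
    p = joint.sum(axis=1)                         # distribution of the current cell
    ecd = 0.0
    for i in range(n_cells):
        if p[i] > 0:
            cond = joint[i] / p[i]                # one-step conditional distribution
            nz = cond[cond > 0]
            ecd += p[i] * (-(nz * np.log(nz)).sum())
    return ecd

def logistic_orbit(a, x0, n):
    x = np.empty(n)
    x[0] = x0
    for t in range(1, n):
        x[t] = a * x[t - 1] * (1.0 - x[t - 1])
    return x

for a in [3.2, 3.5, 3.9, 4.0]:                    # periodic vs chaotic parameters
    orb = logistic_orbit(a, 0.3456, 60000)[1000:]     # drop a transient
    print(f"a = {a:>4}: chaos degree ~ {entropic_chaos_degree(orb, 100):.3f}")
```

for the periodic parameter values the degree comes out essentially zero, while for the chaotic ones it is clearly positive, in agreement with the criterion that a positive degree signals chaos.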
before closing this section we remark that in the case when a certain decomposition of the state is fixed , say , the entropic chaos degree ( ecd in the sequel ) becomes without the infimum .let us consider a map on ^{\mathbf{n}}\subset \mathbf{r}^{\mathbf{n}}$ ] with ( a difference equation ) , .take a finite partition of with the state at time determined by the difference equation is the probability distribution of the orbit , that is , where is the characteristic function and .when the initial value is distributed due to a measure on the above is given as the joint probability distribution between the time and is defined by or then the channel at is defined by and the chaos degree at time is given by therefore once we find a suitable partition such that becomes positive , we conclude that the dynamics produces chaos .this entropic chaos degree has been applied to several dynamical maps such logistic map , baker s transformation and tinkerbel map , and it could explain their chaotic characters .this chaos degree has several merits to usual measures such as lyapunov exponent .let us consider von neumann - liouville equation \label{3.1}\ ] ] with the initial condition the solution of ( [ 3.1 ] ) is given in the form where and it follows from ( [ 3.4 ] ) and ( [ 3.3 ] ) that the relations and hold . from ( [ 3.5 ] ) and( [ 3.6 ] ) one finds that and that is , the relation between and is linear one .let us put then one finds the time dependence of is generally very complicated .one can consider as an example the following case .let be selfadjoint operators such that are linearly independent .suppose that where are solutions of the equations with initial conditions and it is assumed that the equations ( [ 3.13 ] ) can lead to chaos .however its discrete version is quite complicated .another example can be given in the form and suppose that are determined as follows or more explicitly where and are as above .thus the channel describing a discrete dynamics of a quantum systems as in the above examples is written as and it follows from the above examples that operators and consequently maps may inherit some chaotic properties of the equations ( [ 3.13 ] ) or ( [ 3.15 ] ) .however it seems that the only way to investigate the properties of is to calculate expectation values of some observables .the simple choice is to consider one observable .let be an observable and be an state of the system .let us consider the sequence where and in the special case ( [ 3.14 ] ) one has the sequence characterize the changes of .let and be fixed and take a proper .let be in this case , the interval in section 2 can be written by .\label{3.20}\ ] ] using the same way as we mentioned in section 3 , one can calculate the entropic chaos degree .one can generalize the chaos degree taking the set of observables .the quantum entropic chaos degree is applied to analyze quantum spin systems and quantum baker s map , and we could measure the chaos of these systems .in this section we explain some properties of the entropic chaos degree for quantum dynamics .we have the following theorem . for any , , , we have : * let be a unitary operator . if , then , * if , then , * let be a fixed state on . if , then , * let be one dimensional projections such that . 
if , then * proof : * let be an observable , be real numbers given by ( [ 3.19 ] ) , be large natural numbers and for .\ ] ] \(1 ) by a direct calculation , we obtain let a domain including the point .then we have because is independent of .one finds that ( [ 4.1 ] ) and ( [ 4.2 ] ) imply that for any partition hence \(2 ) note that for .let be a domain including the point .then we have because is independent of .one finds that one can show that this equation ( [ 4.3 ] ) implies \(3 ) a direct calculation yields for .let be a domain including the point .then we have because is independent of .it follows that similarly ( 2 ) , we obtain \(4 ) by a direct calculation , we have for .let be a domain including the point .then we have because is independent of .one can show that this equation ( [ 4.4 ] ) implies us study a spin 1/2 system in external magnetic field .the hamiltonian of the system has the form and in what follows we will proceed according to the second example ( [ 3.15 ] ) one has denoting one has in this case instead of ( [ 5.4 ] ) we can rewrite , \quad \vec{e}_{n}=\left ( e_{n}^{\left ( 1\right ) } , e_{n}^{\left ( 2\right ) } , e_{n}^{\left ( 3\right ) } \right ) .\label{5.5}\ ] ] any observable can be written in the form , then one finds e_{n}^{\left ( 1\right ) } -2\left ( \sin \theta \right ) e_{n}^{\left ( 3\right ) } e_{n}^{\left ( 2\right ) } \\ e_{n+1}^{\left ( 2\right ) } & = & \left [ -1 + 2\left ( 1-\cos \theta \right ) \left ( e_{n}^{\left ( 3\right ) } \right ) ^{2}\right ] e_{n}^{\left ( 2\right ) } -2\left ( \sin \theta \right ) e_{n}^{\left ( 3\right ) } e_{n}^{\left ( 1\right ) } \\ e_{n+1}^{\left ( 3\right ) } & = & \left ( 1-a\right ) e_{n}^{\left ( 3\right ) } + a\left ( e_{n}^{\left ( 3\right ) } \right ) ^{2}\end{aligned}\ ] ] choosing and we have \cos \omega \tau + \left ( e_{n}^{\left ( 3\right ) } \right ) ^{2 } \\ & = & \cos \omega \tau + \left ( e_{n}^{\left ( 3\right ) }\right ) ^{2}\left ( 1-\cos \omega \tau \right ) .\end{aligned}\ ] ] it is clear that in order to investigate the properties of the sequence , it is enough to consider the sequence
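the one - dimensional recursion for the third spin component obtained above can be explored with the same machinery . the snippet below is again only an illustrative sketch : the closed form e_{n+1} = cos(omega*tau) + (1 - cos(omega*tau)) * e_n^2 is read off from the relations displayed above ( parts of which were lost in this extraction , so the coefficients should be checked against the original equations ) , the parameter values are arbitrary , and it reuses the entropic_chaos_degree routine from the earlier sketch .

```python
import numpy as np

def spin_e3_orbit(omega_tau, e0=0.7, n=20000, burn=1000):
    """Iterate the third spin component e^(3) under the assumed closed recursion
    e_{n+1} = cos(omega*tau) + (1 - cos(omega*tau)) * e_n**2, which maps [-1, 1] into itself."""
    b = np.cos(omega_tau)
    e, es = e0, []
    for k in range(n + burn):
        e = b + (1.0 - b) * e * e
        if k >= burn:
            es.append(e)
    return es

# reuse entropic_chaos_degree from the previous sketch, now on the interval [-1, 1]
for omega_tau in (0.5, 2.0, np.pi):
    orbit = spin_e3_orbit(omega_tau)
    print(omega_tau, entropic_chaos_degree(orbit, lo=-1.0, hi=1.0))
# for small omega_tau the orbit settles on a fixed point and the chaos degree is zero;
# near omega_tau = pi the map reduces to the chaotic Chebyshev map e -> 2*e**2 - 1.
```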
|
a measure describing the chaos of a dynamics was introduced by means of two complexities in information dynamics , and it is called the chaos degree . in particular , the entropic chaos degree has been used to characterize several dynamical maps , such as the logistic , baker's , and tinkerbell maps , in classical or quantum systems . in this paper , we give a new treatment of quantum chaos by defining the entropic chaos degree for quantum transition dynamics , and we prove that every non - chaotic quantum dynamics , e.g. , dissipative dynamics , has zero chaos degree . a quantum spin 1/2 system is studied with our chaos degree , and it is shown that this degree well describes the chaotic behavior of the spin system .
|
the most important observable phenomena for the investigation of the solar activity are the surface magnetic fields , in particular the longest observed features : the solar active regions , sunspots , and sunspot groups .for this reason reliable documentation of the sunspot activity is of basic importance for the understanding of the solar dynamo and also for any other components of the solar activity .two parallel approaches have been established in solar research .one of them provides a single value for each day with no regard for structural properties of sunspot groups .this approach resulted in the series of international sunspot number ( issn : ) and group sunspot number .they are based on full - disc solar drawings , which can be extended back to the first telescopic observations in 1610 .these data are indispensable for the long - term studies of solar activity of the last four centuries .the issn is widely used as the main index of solar activity ; this is the reason for the recent cooperative efforts to ensure the homogeneity of this dataset .the task of the other approach is the recording of the observable properties of all active regions , primarily their sizes and positions .these databases are based on white - light full - disc photographic observations , and they play an important role in the study of spatial and temporal distributions of sunspots and active - region development . the first detailed sunspot catalog was the greenwich photoheliographic results ( gpr : ) described by and recently .the volumes of gpr contain the position and area data of all observable sunspot groups on a daily basis . in the first decades, the data of some ( not all ) individual sunspots were also published , and it also contained white - light facular data until 1955 . when the gpr program was terminated , the international astronomical union charged the debrecen heliophysical observatory ( dezs , 1982 ) with the continuation of the program from 1977 onwards .the present article describes the current state and services of the debrecen sunspot - data programs .the core program is the debrecen photoheliographic data ( dpd ) , the first sunspot catalog containing the position and area data of all observable sunspots and sunspot groups on a daily basis ; this is the formal continuation of the gpr with higher complexity .the dpd has recently reached the total coverage of the post - gpr era , and it has been unified with the gpr . 
now the gpr and the dpd constitute a homogeneous dataset with three overlapping years , and all of the on - line tools of dpd have been extended for the gpr . the dpd team also extended the catalog work to space - borne full - disc continuum observations . this team created sunspot and facular datasets based on the _ michelson doppler imager _ ( mdi ) of the _ solar and heliospheric observatory _ ( soho ) , called soho / mdi debrecen data ( sdd , 1996 - 2010 ) , and the datasets based on the _ helioseismic and magnetic imager _ ( hmi ) of the _ solar dynamics observatory _ ( sdo ) , called sdo / hmi debrecen data ( hmidd , 2010 - 2014 ) . the space - borne catalogs are even more detailed databases containing the magnetic - field data of all spots with a temporal resolution of 1 - 1.5 hours . the included magnetic data greatly extend the possibilities for investigations because of the higher cadence and the distinction of leading and following polarities . at present , the most detailed ground - based catalog is the dpd , providing area and position data for each observable sunspot on a daily basis along with images of sunspot groups , full - disc scans and magnetograms . the dpd is mainly compiled by using white - light full - disc observations taken at dho and its gyula observing station with an archive containing more than 150,000 photoheliograms observed since 1958 . observations of a number of other observatories around the world help in making the catalog complete . the numerical part of the dpd contains the area and position of each spot , the total areas and the mean positions of the sunspot groups , and the daily sums of the area of groups . these three kinds of data relating to spots , sunspot groups , and daily sums are organized into three kinds of rows of data . the first character of the row indicates the type of the row . if the first letter is , it means that the row contains data for a spot . if the first character is , the row contains the total areas and the mean positions of a sunspot group . if the first character is , the row contains daily data . the following data are available for each spot : time of observation , the noaa number of its group , the measured ( projected ) and the corrected ( for foreshortening ) areas of umbrae [ and the whole spot [ , formerly , latitude [ , longitude [ , distance in longitude from the central meridian [ , position angle [ , and distance from disc center [ expressed in solar radii . several kinds of numerical data are presented in ascii files : yearly tables for daily sums of area data ; time series of daily data ( rows only ) ; tables containing the whole area of sunspot groups and their mean position data ( rows only ) ; tables for the sunspot area and position data ( rows only ) ; combined datasets containing all three kinds of rows . the dpd contains data of all the observable sunspots , and it also contains data for umbrae .
if there is a darker part within a spot , this part can be identified as an umbra based on its intensity exclusively .the corrected area of the whole spot [ ] and the umbral area [ are measured in millionth of solar hemisphere [ msh ] .if the observed area is smaller than 0.5 msh , it is indicated with 0 msh at the measured position .a zero may mean that the derived is smaller than 0.5 msh , or the observable structure of the spot does not allow one to identify any internal pattern .if an umbra is identified within a spot , the position of the umbra is published in the row .otherwise , the centroid of the spot determines the position of the spot .if there is more than one umbra within a penumbra , this fact is also published . in such a case , the is published in the row of one of these umbrae and the rows of the other umbrae contain the ordinal number of this umbra with negative sign , which indicates that the given umbra shares a penumbra with the umbra of the ordinal number indicated with negative sign .( _ e.g. _ as can be seen in figure 3 , the umbra number 2 shares the penumbra with umbra number 1 .this fact is indicated in such a way that there is -1 in the columns of in the row of the umbra number 2 .it is indicated in a similar way that the umbra number 4 shares its penumbra with lots of other umbrae . )scans of sunspot groups are appended showing the spots numbered as in the numerical dataset .full - disc white - light images and magnetic observations are appended to provide the morphological and polarity information available concerning the sunspots .all of the data and images are accessible by ftp to provide an easy bulk download , but the entire material is also provided in a user - friendly interactive graphical presentation .the daily page of the on - line presentation of the data contains a schematic full - disc drawing created from the spot data of dpd and a magnetic observation . below the schematic full - disc drawing ,there is a link to open the jpg version of the original full - disc white - light observation in a pop - up window .all of the information on a sunspot group can be reached by clicking on the group number .the days can be surveyed by turning the pages both at the daily pages and at the group pages .figures 1 and 2 show the full - disc images available for a day at the website of dpd and figure 3 shows an example for a page of sunspot data of dpd here .there is also an on - line mysql query at the website of the catalogs , which makes possible a quick and easy selection of the numerical data ( figure 4 ) . 
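for users who download the ascii files , rows of the kind described above can be processed with a few lines of code . the fragment below is a minimal sketch and not part of the dpd software : the row marker , the column positions and the file layout are assumptions made for illustration ( the markers for spot , group and daily rows were not legible in this extraction ) , so they must be checked against the format description distributed with the catalog before use .

```python
from collections import defaultdict

# assumed layout of an individual-spot row: marker, date, time, NOAA number, projected
# and corrected umbral areas, projected and corrected whole-spot areas (msh), latitude,
# longitude, distance from the central meridian, position angle, radial distance;
# the marker and the column indices below are hypothetical.
SPOT_MARKER = "s"
NOAA_COL, UMBRA_CORR_COL, SPOT_CORR_COL = 3, 5, 7

def corrected_areas_by_group(path):
    """Sum the corrected umbral and whole-spot areas (msh) per NOAA group in one daily file."""
    umbra = defaultdict(float)
    whole = defaultdict(float)
    with open(path) as fh:
        for line in fh:
            fields = line.split()
            if not fields or fields[0] != SPOT_MARKER:
                continue                       # skip group rows and daily-sum rows
            noaa = fields[NOAA_COL]
            umbra[noaa] += float(fields[UMBRA_CORR_COL])
            whole[noaa] += float(fields[SPOT_CORR_COL])
    return dict(umbra), dict(whole)
```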
( figure caption : right panel : magnetogram or polarity drawing available for the given day . )
the dpd is available starting in 1974 . some of its volumes are still in a preliminary format and need a further quality check . the instrumental background for dpd catalog compilation has changed over the years . the software package called sunspot automatic measurement ( sam ) ( gyri 1998 , 2005 ) was developed to handle the scans of the ground - based photographic observations . after that , it was further developed to handle the ground - based and space - borne ccd fits images . the rate of space - borne observations included in the dpd increased in recent years . however , the ground - based observations remain essential to contribute to the completeness of the whole material . detailed analysis of the precision of the data can be found in the articles by . the soho / mdi - debrecen data ( sdd ) catalog is based on the soho / mdi continuum intensity images and magnetograms . this catalog is similar to that of dpd in its data format , image products , and on - line tools , but the temporal cadence is about one hour depending on the availability of mdi observations . the novelty is that the sdd contains magnetic information because the sam is suitably modified to determine the mean line - of - sight magnetic field of umbral and penumbral parts of spots from the quasi - simultaneous magnetogram . the software automatically finds the sunspots in the solar - disc images of 1024 x 1024 pixels , it draws their internal ( umbra ) and external ( penumbra ) contours , and it determines their positions , areas , and mean magnetic - field strength . the full - disc version of the sdd catalog ( fdsdd ) is based on these data . the spots are numbered by the computer program on the basis of their longitude , and they are not assigned to sunspot groups . in the next step , the arrangement of spots into sunspot groups was made automatically by using the sunspot - group data of the pre - existing dpd catalog ; finally , the arrangement was checked and corrected by human assistance . the procedure of checking and improving is time consuming so it is not yet complete . in spite of its partially preliminary state , this synoptic dataset makes it possible to investigate the internal dynamics and evolution of the active regions with high time resolution .
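the magnetic step mentioned above , attaching a mean line - of - sight field to the umbral and penumbral parts of each spot from the quasi - simultaneous magnetogram , reduces to a simple masked average once the contours are known . the following sketch assumes the contours are already available as boolean pixel masks co - aligned with the magnetogram ; it is an illustration of the idea , not the sam code .

```python
import numpy as np

def mean_los_field(magnetogram, umbra_mask, penumbra_mask):
    """Mean line-of-sight magnetic field over the umbral pixels and over the purely
    penumbral pixels of one spot; magnetogram and masks are 2-d arrays of equal shape."""
    penumbra_only = penumbra_mask & ~umbra_mask
    b_umbra = float(magnetogram[umbra_mask].mean()) if umbra_mask.any() else float("nan")
    b_penumbra = float(magnetogram[penumbra_only].mean()) if penumbra_only.any() else float("nan")
    return b_umbra, b_penumbra
```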
the unique level of detail of sdd can be demonstrated with a series of images for the sunspot group noaa 10486 in figure 5 .this group produced a powerful x17.2-class flare on 28 october 2003 .figure 5 shows the last three sets of data in sdd before this flare .the schematic polarity drawing of the sunspot group in the middle column of figure 5 illustrates the data content of sdd .each image in the middle is created from the position and area data of spots derived from the observations alongside it .the spots ( and pores ) are represented by ellipses outlining approximately the spot roughly as a projection of a circle on a sphere onto a plane .the centroid of the ellipse is at the position of the visible centroid of the spot .the area of the ellipse is the projected whole spot area .the projected umbral area is represented by a smaller ellipse within the ellipse of the spot .the colors of the ellipses of umbra and penumbra show the polarities of their mean magnetic field derived from the magnetogram ( light- gray penumbra and white umbra correspond to positive polarity , while dark - gray penumbra and black umbra correspond to negative polarity ) .at least a small part of the spot is always colored in white or black ( even if =0 ) to emphasize the polarity information . in the case of several umbrae in common penumbra , there is an umbral ellipse for each umbra at its position . in the case of mixed polarities ,the colors of umbra and penumbra show opposite polarities .the amount of data for this group is more than two orders of magnitude larger than that of dpd ; the sdd contains more than 10,000 independent ( non - redundant ) position , area , and magnetic data of spots in this group for this day ( 15 images per day ; about 120 spots in this group per image ; two position data , two area data , and two magnetic data per spot ). 19 heliographic degrees.,width=453 ]for the hmidd , the data structure and the on - line tools are very similar to those of sdd .the difference comes from the fact that the spatial resolution of hmi is larger than mdi . the 4096 pixels high - quality images allow measuring much smaller features with higher precision in the images . on one hand , this results in large data files with somewhat different format from that of dpd and sdd . on the other hand, it results in sunspot - group images where the numbers and indicating lines of spots are too crowded . because of the large amount of data , the presentation of the data with a tool showing detailed information has a greater importance .the website of hmidd is extended with extra pages at the links see sunspots with tool " below the images in the pages of sunspot groups .this simple tool is useful to study positions and polarities of sunspots by completing the sunspot - group images in that page with additional images in high resolution and polarity drawings . in the upper panel of these pages ,the white - light image or the magnetic observation of a sunspot group can be seen depending on the viewer choice by the radio button on the right while the lower panel shows the schematic polarity drawing of sunspot group reconstructed from the hmidd data . if one moves the mouse cursor over a sunspot ( or umbra ) near its centroid in the upper or lower panel , the actual related data row of that sunspot will pop up on the row between the upper and lower panels as the data of spot ( umbra ) no .26 can be seen in figure 6 . 
to help browsing data , the full - disc schematic drawings reconstructed from the data are also published ( _ e.g. _ figure 7 is created from position , area , and magnetic - field strength data of 566 sunspots derived from the continuum image and magnetogram for 20140706t20:00:41ut ) .the huge amount of data in the hmidd requires that the assignment of spots to sunspot groups is processed with an automatic method exclusively .this method is mainly based on the information on sunspots and sunspot groups listed in dpd .that spot , which is not included into dpd , is assigned to the closest group of dpd if the distance is smaller than five heliographic degrees .if there are no dpd groups within this distance , the spot or the cluster of nearby spots is assigned to a newly created sunspot group with a name created from an existing noaa active - region number by adding a previously unused letter to it .the quick - look version of dpd is also based on the hmi observations . to provide daily sunspot data as soon as possible, one hmi image per day is regularly evaluated starting the workdays with this task and publishing the data within a few hours . 9 heliographic degrees in this case .before creating these images , the hmi observations are enlarged from 4096 pixels to 6600 pixels.,width=453 ]the direction of the line connecting the leading and following portions of a bipolar sunspot groups is usually tilted with respect to the solar equator .the white - light images only allow a simple method for the estimation of this angle but more reliable tilt - angle data can be determined by distinguishing between the polarities of spots . the tilt angle is traditionally defined to range between and to be positive if the absolute value of the heliographic latitude of the leading part is smaller than that of the following part but other definitions are also possible ( see , _e.g. _ , and ) . considering the diagnostic importance of this angle ,all catalogs of dho have appendices containing the tilt angles of the sunspot groups .the dpd lacks magnetic information ; in this case the method of had to be used , in which area - weighted positions of the umbrae of leading and following portions were derived in the portions located to the west and east of the area - weighted centroid of the entire sunspot group .this procedure has also been carried out by using the whole spot area as the weight .these resulted in two sets of columns of data .the novelty of sdd and hmidd tilt - angle datasets is that they are based on two new different tilt definitions including the information of magnetic polarity of spots in addition to the traditional data . in this cases, the tilt angles also range between degrees but it is the polarity of the spot that determines its leading or following role instead of the position with respect to the centroid of the group . in the case of space - borne datasets , we have four tilt data values for each group , which makes it possible to select the most unambiguous cases in which these values are closer to each other than a pre - selected criterion . these tilt - angle data and their methodology are described in detail by .figure 8 shows the on - line query available for dpd and sdd tilt data showing the options for selection criteria .the gpr contained facular data besides the spot data but the dpd did not include this additional task . 
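returning briefly to the tilt angles : the traditional white - light estimate described above can be written down in a few lines . the sketch below is our own flat ( small - group ) approximation of that recipe , not the catalog code ; it assumes longitudes increase toward the solar west , so that the leading portion of the group lies at larger longitude , and the sign convention should be adapted if another definition is used .

```python
import numpy as np

def tilt_angle_deg(lats, lons, areas):
    """Traditional tilt angle of one sunspot group (degrees), from per-spot heliographic
    latitudes, longitudes and areas (umbral or whole-spot areas used as weights)."""
    lats, lons, areas = (np.asarray(a, dtype=float) for a in (lats, lons, areas))
    w = areas / areas.sum()
    lat_c, lon_c = (w * lats).sum(), (w * lons).sum()   # area-weighted group centroid
    lead = lons >= lon_c                                 # western (leading) portion
    foll = ~lead
    if not lead.any() or not foll.any():
        return float("nan")                              # no clear bipolar structure
    def centroid(sel):
        ws = areas[sel] / areas[sel].sum()
        return (ws * lats[sel]).sum(), (ws * lons[sel]).sum()
    lat_l, lon_l = centroid(lead)
    lat_f, lon_f = centroid(foll)
    dlat = lat_f - lat_l
    dlon = (lon_l - lon_f) * np.cos(np.deg2rad(lat_c))   # leading-following separation
    tilt = np.degrees(np.arctan2(dlat, dlon))
    # positive when the leading part is closer to the equator than the following part
    return -tilt if lat_c < 0.0 else tilt
```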
however , the quality of the space - borne continuum images allowed the derivation of detailed facular data in a similar way to that in which spot data are measured .the full - disc facular data both in sdd and hmidd have a similar format to that of full - disc sunspot data .the main difference is that the columns of umbral - area data are filled with zeros .the first column of magnetic data contains the line - of - site magnetic - field value at the brightest pixel within the facular contour in the intensity image . concerning the comparison of sdd and hmidd facular data ,see .figure 9 shows an example for a page showing facular data .the dho hosts solar - image databases containing heritages of two former hungarian observatories .the first set of drawings was observed between 1872 and 1891 at the gyalla observatory ( now hurbanovo , slovakia ) founded by mikls konkoly - thege ( 18421916 ) .the other set of solar drawings was observed at the haynald observatory in kalocsa between 1880 and 1919 .the full set of drawings is available at the site of hungarian historical solar drawings ( fenyi.solarobs.unideb.hu/hhsd.html ) , but the selected images are also included in an interactive on - line presentation combined with the data of gpr .these drawings , which are rich in details , help us to reveal the structure of sunspots and sunspot groups in those years when there are no photographic images available .the drawings may also contain information for faculae drawn with colors different from those of spots . in a few cases ,the features above the photosphere may also be seen at the limb observed by using a spectrohelioscope .two examples for these drawings can be seen in figure 10 .the gpr contains the heliographic coordinates and total areas of the sunspot groups on a daily basis in each year between 1874 and 1976 . in the beginning, they also included the data of some individual spots but later this was abandoned .thus , only the group data were suitable to produce a homogeneous dataset for the whole gpr era .the greenwich books are available in pdf format at the site of the uk solar system data centre ( ukssdc : www.ukssdc.ac.uk/ ) .they have two types of sections of sunspot group data between 1874 and 1955 : the ledgers " of sunspot groups and the daily measures " of sunspots and faculae .the basic dataset ( 18741976 ) consists of group data in files with extension gpr " .it is the electronic version of gpr published at noaa national geophysical data center ( ngdc : www.ngdc.noaa.gov ) based on the ledgers .that database was compiled by ward , usaf afcrl , in the 1960s and later updated by hoyt in the 1990s .the additional dataset of solar white light faculae contains detailed daily lists of sunspots ( 18741915 ) or daily lists of sunspot groups ( 19161955 ) based on the tables of measures " .the summaries of total daily data are available for 19561976 with extension sum " . 
concerning the digitized datasets of gpr , see .the method of revision was mainly based on the comparison of basic files with the extension gpr " and the additional files called saf " ( spots and faculae ) .these datasets were compared to eliminate or decrease the discrepancies because of the various typographic errors of the printed or digitized versions .the comparison of various types of published or derived position and area data resulted in lists of errors , which were usually corrected after checking them against the books , or they were recalculated from the data reckoned as correct .if the discrepancy could not be resolved by checking the books , the most probable error was corrected to achieve a consistency within the given threshold . in some cases ,the correction was made after checking the data against the hhsd or mount wilson ( mw ) drawings .the main goal of our work was to improve the gpr " files because that dataset was suitable to be unified with dpd .however , the saf " files were also corrected in many cases to achieve agreement between the two types of files .we list below the types of errors or problems detected during the quality check of the data and the method used to solve the problem : if a redundant row was found in the gpr " file , it was deleted ( _ e.g. _ the row was a duplication , or it contained false or deviating data for a group ) . if there was a group in the saf " with no pair in the gpr " , the data of the missing group were entered into gpr " .if the observational date of a given group was different in the saf " and in the gpr " , the date was checked against the book and the erroneous file was corrected .if the name of the group in the saf " differed from its name in the gpr " , a similar method of correction was used .the name was different in a number of cases because the symbols ( * , * * , # , # # ) added to the group number in the book and in the saf " files were transformed to additional numbers in the original version of gpr " .these symbols were transformed to letters a , b , c , d in the revised version of gpr " for the years 18741915 .the names created from the carrington rotation remained unchanged in the years 19561976 .in this way , each group had a unique name in the revised gpr " dataset .we have also searched for outliers in the position and areal data . if the difference between the position of the group derived from the saf " and that in the gpr " was larger than one heliographic degree , we checked the error against the books .the same was made if the difference between the corrected area of a group in the saf " and in the gpr " was larger than 5 msh .we also checked whether the projected or corrected total daily area data computed from gpr " data were different from those contained in sum " files .we corrected two types of errors after checking the internal consistency of the gpr " data .if the difference between the published heliographic coordinates and and those values of and that are derived from the polar coordinates and is larger than a radius - dependent threshold , the polar coordinates were recalculated from and . if the difference between the published projected area ( or ) and the projected area derived from the corrected area is larger than 10 msd and 10% , the projected area was recalculated from corrected area and position data . 
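a tiny sketch of the kind of cross - check described above is given below ; it only compares group positions and corrected areas already loaded into memory and flags the rows that exceed the stated thresholds ( one heliographic degree in position , 5 msh in corrected area ) . the in - memory representation , keyed by date and group name , is an assumption made for illustration and has nothing to do with the actual file handling of the revision .

```python
def flag_discrepancies(gpr_rows, saf_rows, pos_tol=1.0, area_tol=5.0):
    """Compare group records from the two digitized datasets.

    Both arguments map (date, group_name) -> (latitude, longitude, corrected_area_msh).
    Returns the keys whose position differs by more than pos_tol heliographic degrees
    in either coordinate, the keys whose corrected area differs by more than area_tol msh,
    and the keys present in only one of the two datasets."""
    position_errors, area_errors = [], []
    only_in_saf = sorted(set(saf_rows) - set(gpr_rows))
    only_in_gpr = sorted(set(gpr_rows) - set(saf_rows))
    for key in sorted(set(gpr_rows) & set(saf_rows)):
        lat_g, lon_g, area_g = gpr_rows[key]
        lat_s, lon_s, area_s = saf_rows[key]
        if abs(lat_g - lat_s) > pos_tol or abs(lon_g - lon_s) > pos_tol:
            position_errors.append(key)
        if abs(area_g - area_s) > area_tol:
            area_errors.append(key)
    return position_errors, area_errors, only_in_saf, only_in_gpr
```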
by using this method , we achieved that a number of errors and outliers were filtered from the revised version of the gpr , and its level of internal consistency was increased .we documented the changes at the page of list of data modifications " ( fenyi.solarobs.unideb.hu/gpr/modifications/ ) where both the original and the new data can be seen .the improved saf " files are also available at the ftp site of dho ( ftp://fenyi.solarobs.unideb.hu:2121/pub/gpr/saf/ ) in the same format as they are created at ngdc .the transitional versions of these files ( fin.tmp " , fin.new " ) are also published here ( ftp://fenyi.solarobs.unideb.hu + : 2121/pub / gpr / fin/ ) , which may help in , _e.g. _ , handling the temporal data in different formats or the lines of individual spot data in case they are needed .the revised version of gpr " files were converted into dpd format . as a result ,there are no individual spot data in the rows of sunspot data in the converted files ; only the rows of groups are repeated in them to follow the dpd format .these converted files are used to create the web pages of gpr at fenyi.solarobs.unideb.hu / gpr/. the graphical on - line presentation of the data is completed with the solar drawings of hhsd and the mount wilson observatory .figure 11 shows four screenshots showing the possible use of the interactive tool of gpr .the first snapshot shows a page with a drawing from gyalla .the gyalla drawings are east - west side - reversed as it is earlier mentioned , thus two options are available for displaying them .the original view is suitable for reading the text in the image ( date or group number ) .the side - corrected view is suitable for comparing the gpr data .if someone moves the mouse over the file names below the image , the versions of the image can be swapped .if one clicks on a file name , the image opens in a pop - up window in which the image can be enlarged ( see the right - hand side of the first snapshot reported in figure 11 ) .if someone wants to compare the observation with the schematic drawing of gpr data , it has to be taken into account that the orientation of the original observation differs from that of the schematic drawing . in the gyalla images ,the terrestrial north is at the top while in the schematic gpr drawings the solar north is at the top .the small orientation figure in the middle of the page between the original and schematic drawings helps in comparing the orientation of the two images showing the position angle of the solar north ( n in red ) measured eastwards from the terrestrial north point ( n in black ) of the solar disc . in the second screenshot ,a web page can be seen with a drawing from kalocsa . in this caseit is somewhat more difficult to compare the original and schematic drawings because the orientation and its marks in the kalocsa drawings were not standardized .the terrestrial ew and/or solar ew are usually indicated with lines crossing the disc and/or marks at the limb ( the relative position of the terrestrial and solar coordinate systems is the same as that of the black and red ones in the small orientation drawing but they may be rotated with a quantifiable angle in the observational drawing ) . if the orientation is indicated ambiguously in a drawing , it can be reconstructed by comparing the various data presented in the related web page . 
if there are two original observations for a day , both of them are listed with links to the images in the web page .the third screenshot shows that the user may select two different images to be shown in the main page and in the pop - up window .in this screenshot there is a kalocsa drawing in the main page and there is an gyalla drawing in the pop - up window .the fourth screenshot shows a page with a mw polarity drawing .the mw drawings are also east - west side - reversed observations , thus the correct position of spots and the readable text of magnetic information can not be seen at the same time .the screenshot shows that the user may select two different views of a drawing to be shown in the main page and in the pop - up window .now that the gpr and dpd data are unified , it is edifying to summarize briefly their common history by using historical resources , the volumes of gpr , a few recent documents ( e. g. volumes of iau transactions ) and personal communications : 1843 : the german amateur astronomer heinrich schwabe discovered the 11-year cycles of solar activity .1845 : the first clear image of the sun was a daguerreotype taken by a.h.l .fizeau and j.b.l . foucault .1852 : edward sabine announced that the schwabe s sunspot cycle was correlated very closely with the earth s 11-year geomagnetic cycle .astronomers became interested in observing the sun .1854 : john herschel argued the importance of obtaining daily photographic pictures of the sun s disc , the kew observatory committee of the british association took the matter up , the royal astronomical society decided to support the building a photoheliograph for kew .1857 : warren de la rue produced the design for the kew photoheliograph , the first telescope specifically built to photograph the sun .1858 : the systematic photographic observations started at kew observatory , where sabine controlled the geomagnetic and meteorological research , and he secured funds for the sunspot record .1859 : on 1september , the magnetometers at kew recorded a brief but very noticeable jump in the earth s magnetic field at exactly the same time as two amateur astronomers , r.c .carrington and r. hodgson , were the first to observe a flare on the sun .it was the first observation of a space - weather event .1860 : the photoheliograph was briefly removed from kew to a site in spain , where de la rue used it to take the first good pictures of a total solar eclipse .186172 : the photoheliograph returned to kew , where the observers gathered 2778 white - light full - disc photographic observations for a full solar cycle .the observatory gained renown and was well regarded for its three - fold activities ( solar physics , geomagnetism , and meteorology ) .george b. airy , who was the astronomer royal at the royal observatory in greenwich ( rgo ) , regarded kew as a rival over this decade . finally , airy achieved the transfer of kew s photoheliograph to greenwich in august 1872 .1873 : the daily photoheliographic observations and their evaluation started at rgo . 
the goal was to construct a homogeneous and precise solar dataset and to provide information on solar activity for the magnetic observatory of rgo .18741913 : it was the golden era " of gpr based on new instruments , increasing network of contributing observatories , and publications of detailed sunspot and facular data in addition to sunspot group data , and new types of tables ( _ e.g._for recurrent sunspot groups ) .the data were published within one to three years after observations .191436 : because of the first world war , the backlog of publication increased to four to five years . in 1916 , the observers decreased the published information : the daily data only contained the mean position and the whole area of sunspot groups but sunspot data were no longer published .this resulted in a quicker publication after 1921 ; the data were published within one to two years again .193766 : the second world war caused several problems in data production starting with the data for 1937 .after the war , the photoheliograph was moved to herstmonceux from greenwich to achieve better seeing conditions .these two things together caused a large lack in various resources .thus , the publication of the gpr suffered from a large delay ( 914 years , average : 12 years ) . to decrease the time - lag ,the observers decreased the information content of gpr again , starting at 1956 the daily detailed facular data were not published any longer and the tables of sunspot groups with projected area data were omitted .after that , the delay of publication was three to nine years ( average : 5.9 years ) .196776 : in 1967 , the magnetic observatory became officially separated from rgo . in this way , the direct interest of rgo in solar observations based on research of the relationship between solar activity and geomagnetic storms ceased .however , at this time the publication of gpr was regarded as an international duty of rgo ; thus , the gpr project was continued but it had only a very low priority . 
in 1971 , the fundamental reorganization of rgo and the rearrangement of resources to research groups at universities started .because of these changes , the rgo had to decide to finish a number of its research programs .1976 : the termination of gpr was announced at the iau general assembly ( ga ) in grenoble ; commission 10 ( solar activity ) accepted this decision .the last volume of gpr for 197276 was published in 1980 .the dho of the hungarian academy of sciences ( has ) had already a long tradition of high - quality photoheliographic observations and precise measurement of sunspot position at this time .commission 10 encouraged the dho to undertake this task .197778 : the dho and the has formally assumed responsibility for this program in january 1977 and established collaborative work with pulkovo , kislovodsk , kodaikanal , and rgo to ensure a continuous daily sunspot record .197981 : the debrecen photoheliograph program was approved at the iau xvii ga in 1979 .the team of debrecen photoheliographic results ( dpr ) wanted to provide information on sunspots similar to or even exceeding the great content of gpr in the golden era " .the planned procedure was very time consuming .19821992 : in 1982 , the dho became a department of the konkoly observatory ( ko ) , budapest .the dpr1977 was only published in 1987 and dpr1978 in 1995 .19932014 : the solar community encouraged dho to speed up the process and therefore a separate project was launched to produce the dpd .its team has left out the most time - consuming components of the dpr - procedure , thus the information content of dpd got closer to that of the early gpr but it was still much more detailed , while the speed of publication of dpd gradually increased .42 volumes of dpd have been published during 23 years .the revised version of gpr has been converted to dpd format and they were published in a unified form .2015 : it was announced at the iau xxix ga that the authors of the present article , the permanent members of the dpd team , fulfilled the undertaking on sunspot database with more extended content and services than was required .2016 : at the beginning of the year , the director of the konkoly observatory decided to close the dho and its gyula observing station by the end of the year . the future of the dpd catalog is unclear at present but some kind of continuation of the catalog work is planned in the framework of the konkoly observatory .( this may mean that the links published in this article will change in the future . )it may be useful to demonstrate the research potential of the databases .their unique features enabled several recent studies that have been carried out by exploiting these advantages .data of both sunspots and sunspot groups in the same tables enable one to track the internal dynamics of the active regions . a detailed study of addresses the variation of the distances of leading following subgroups , the leading following asymmetry of compactness , as well as of the rates of development . a possible new activity parameter is proposed by ; this method also needs the magnetic and areal data of each spot within the groups .the high temporal resolution in the last two decades is a powerful tool to refine the internal processes of active regions , all of the above - mentioned studies exploit it .the internal dynamics of sunspot groups leading to flares has been studied by by focusing on the mutual displacements of spots of opposite polarities . 
by combining various space - borne spot andflare data , a new tool can be developed to study the position of flares within sunspot groups .the position and size of sunspot groups can be useful to reveal the spatial and temporal distribution of small flares before major flares .the long homogeneous series of sunspot data is indispensable for studies of long - term processes .this has been exploited by in tracking the migration of solar active longitudes . after comparing the positions of flares and active longitudes, showed that the most flare - productive active regions tend to be located in or close to the active longitudinal belt .the phase lags of solar hemispheric cycles studied by also needed a dataset covering more than a century .the availability of tilt - angle data has allowed the publication of a number of articles recently .the comparison of tilt angles derived from white - light images with those of magnetograms by revealed that the latter include the contribution of facular areas , which tend to result in greater axial inclinations than the adjacent sunspots .the large amount of tilt data allowed the refinement of joy s law : showed that the refined diagram of joy s law presents a plateau in the domain around the mean latitude of the active - region belt .the debrecen data were used by as reference datasets to obtain a statistical mapping of sunspot umbral areas derived from historical solar drawings and they contributed to the validation of the historical tilt - angle data . used the dpd for investigating the north south asymmetry of the solar dynamo by estimating poloidal - field formation from sunspot data . examined the anti - hale groups by means of the dpd website . compared the spatial distribution of sunspot butterfly diagram constructed from the dpd sunspot catalog , the dpd tilt angles , and the radial magnetic field to study polar magnetic - field reversal and surface flux transport . by combining the dpd data with other types of data , give examples to demonstrate that the regular polar - field build - up is disturbed by surges of leading polarities that resulted from violations of joy s law at lower latitudes . investigated how tilt angle and footpoint separation of bipolar sunspot regions vary during emergence and decay by using hmidd .the debrecen data helped in developing a new method of image patch analysis of sunspots and active regions . in the studies of the solar irradiance climate data record by , the dpd will be used as an independent data source that is accessed for quality assurance of the sunspot darkening .these examples show that there is a wide range of scientific topics that can be studied in an efficient way by exploiting the data and tools presented in this article .the article presents an overview about the most complex ensemble of sunspot databases edited by the heliophysical observatory , debrecen , hungary . 
the aim of the team was to compile sunspot datasets containing all relevant data for studies of sunspot activity . the developing observational techniques allowed varying data contents in different time intervals , as summarized in table 1 , but the team endeavored to obtain the maximum information from all types of observations . the users of the datasets can easily carry out investigations of sunspots on large statistical samples . while the earlier types of sunspot datasets only enabled one to investigate the sunspot activity globally , the present databases also make it possible to study the internal dynamics of sunspot groups . the presentation of the data is user friendly and provides several tools to make the search and selection easy . the high - level data are compiled in tables but all involved images of observations are appended , so the sunspot development can be tracked by stepping through the consecutive observations , while the images have links to the numerical data by clicking on the images of spots . the search is also facilitated by a mysql query . thus the database ensemble is not only the most detailed and complete documentation of the sunspot activity but also the most versatile tool to find the necessary data . table 1 : the contents of sunspot catalogs available at the debrecen observatory . table 1 summarizes the main features of the mentioned catalogs to overview the information content of the materials of debrecen accessible at fenyi.solarobs.unideb.hu/deb_obs_en.html . we cite the resolution of the iau ( iau transactions xvib , 107 , 1977 ) on the responsibilities of dho concerning the continuation of gpr :
- to carry out direct photographic observations at debrecen
- to organize cooperation between other observatories willing to contribute to such a project
- with the assistance of the greenwich observatory to ensure a homogeneous continuity of the gathering , reduction and publication of such data
- to ensure the archiving of the original photographs and their access to interested scientists from around the world .
the present article shows that the existing materials exceed the above requirements . now the gpr and the dpd constitute a homogeneous dataset that is supported by various on - line tools . the debrecen catalogs provide a new window of enhanced resolution on solar activity .
the dpd team would highly appreciate if the users of the tools and data presented in this article acknowledged its long - term efforts by referring to this article and to the article on the method of evaluation by in their publications .this work was supported by the european community s seventh framework program ( fp7 sp1-cooperation ) under grant agreement no .284461 ( eheroes ) .the various tasks related to the databases and tools described in this paper were supported during the past 23 years by the following grants : european community s seventh framework programme ( fp7/20072015 ) under grant agreement no .284461 ( eheroes , mar .2012 feb .2015 ) and no .218816 ( soteria , nov .2008 oct .2011 ) ; esa pecs contracts no .98017 ( 20042007 ) and no .c98081 ( 20092012 ) ; national development agency under grant agreement no .bonus_hu_08/2009 - 003 ( 201011 ) and tmop 4.2.2.c-11/1/konv/20120015 ( 201213 ) ; u.s .- hungarian joint fund for science and technology under contract no .95a-524 ( 19961998 ) ; scostep supplemental step grant ( 1995 ) ; hungarian ministry of cultural heritage under millenium program grant agreement no .szp422 ( 19992001 ) ; grants of the hungarian national foundation for scientific research nos .otka t037725 ( 20022005 ) , t025640 ( 19982000 ) , t019829 ( 19961999 ) , t014036 ( 19941996 ) , t007422 ( 19931996 ) , f4142 ( 19921995 ) , p31104 ( 1998 ) , and u21342 ( 1996 ) .we thank the referees and grant providers for supporting our proposals .we express our deepest gratitude to the colleagues at the collaborating observatories for participating in the daily routine observations and putting the necessary material at our disposal .the contributing observatories taking white - light full - disc and/or magnetic observations were : abastumani astrophysical observatory ( georgia ) , astronomical observatory of ural state university ( russia ) , inaf - catania astrophysical observatory ( italy ) , crimean astrophysical observatory ( russia ) , ebro observatory ( spain ) , helwan observatory ( egypt ) , huairou solar observing station of national astronomical observatories of cas ( china ) , institute of geophysics and astronomy of cuba ( cuba ) , kanzelhhe solar observatory ( austria ) , kiev university observatory ( ukraine ) , pulkovo observatory and its kislovodsk observing station ( russia ) , kodaikanal observatory ( india ) , mauna loa solar observatory ( usa ) , mount wilson observatory ( usa ) , san fernando observatory ( usa ) , solar observatory of national astronomical observatory of japan ( japan ) , rome astronomical observatory ( italy ) , royal observatory of belgium ( uset data / image of uccle / brussels , belgium ) , royal greenwich observatory ( uk ) , sayan observatory of institute of solar - terrestrial physics of siberian department of ras ( russia ) , tashkent observatory ( uzbekistan ) , ussuriysk astrophysical observatory of far - eastern branch of the ras ( russia ) , valask mezi observatory ( czech republic ) .the mount wilson white - light full - disc scans are available thanks to the mt .wilson solar photographic archive digitization project supported by the national science foundation ( nsf ) under grant no .the magnetic database includes data from the synoptic program at the 150-foot solar tower of the mount wilson observatory . 
the mt .wilson 150-foot solar tower is operated by ucla , with funding from nasa , onr and nsf , under agreement with the mt .wilson institute .the observations of kanzelhhe solar observatory are available by courtesy of the central european solar archives ( cesar ) .michelson doppler imager _( mdi ) data are used by courtesy of the soho / mdi research group at stanford university ._ solar and heliospheric observatory _ ( soho ) is a mission of international cooperation between esa and nasa .the sdo / hmi images are available by courtesy of nasa / sdo and the aia , eve , and hmi science teams .nso / kitt peak magnetic data used here are produced cooperatively by nsf , nasa / gsfc , and noaa / sel .we acknowledge the courtesy of editors of solnechnie dannie solar catalog , who permit the use of magnetic - polarity drawings observed by several contributing observatories .this work utilizes data obtained by the global oscillation network group ( gong ) program , managed by the national solar observatory , which is operated by aura , inc . under a cooperative agreement with the nsf .the data were acquired by instruments operated by the big bear solar observatory , high altitude observatory , learmonth solar observatory , udaipur solar observatory , instituto de astrofsica de canarias , and cerro tololo inter - american observatory .data used here from mees solar observatory , university of hawaii , are produced with the support of nasa grant nng06ge13 g .we acknowledge the courtesy of yunnan astronomical observatory ( ynao ) for permitting the use of magnetic - polarity drawings published in publications of yunnan observatory .the images of precision solar photometric telescope ( pspt ) at mauna loa are available by courtesy of the mauna loa solar observatory , operated by the high altitude observatory , as part of the national center for atmospheric research ( ncar ) .ncar is supported by the nsf .we appreciate the long - term work of noaa / ngdc providing a wide range of scientific products and services for solar physics , and publishing the volumes of solar - geophysical data ( sgd ) .we thank norbert nagy , who was a programmer mathematician at dho , for playing an important role in development of the on - line tools and data pipeline .we are grateful to our colleagues at dho and at the collaborating institutes who helped the data evaluation and participated in the observations during the last decades .the authors declare that they have no conflicts of interest .clette , f. , berghmans , d. , vanlommel , p. , van der linden , r.a.m . ,koeckelenbergh , a. , wauters , l. : 2007 , from the wolf number to the international sunspot index : 25 years of sidc , _ adv .space res ._ * 40 * , 919 .ads : , doi : .erwin , e.h . ,coffey , h.e ., denig , w.f . , willis , d.m . ,henwood , r. , wild , m.n . : 2013 , the greenwich photo - heliographic results ( 1874 - 1976 ) : initial corrections to the printed publications , _ solar phys . _* 288 * , 157 .ads : , doi : .gyenge , n. , ballai , i. , baranyi , t. : 2016 , statistical study of spatio - temporal distribution of precursor solar flares associated with major flares , _ mon . not .soc . _ * 459 * , 3532 .ads : , doi : .moon , k.r . ,delouille , v. , li , j.j . ,de visscher , r. , watson , f. , hero , a.o . : 2016 , image patch analysis of sunspots and active regions ii .clustering via matrix factorization , _ j. space weather space clim . _* 6 * , a3 .ads : , doi : .willis , d.m . ,coffey , h.e ., henwood , r. , erwin , e.h . , hoyt , d.v . 
, wild , m.n . ,denig , w.f . : 2013 , the greenwich photo - heliographic results ( 1874 - 1976 ) : summary of the observations , applications , datasets , definitions and errors , _ solar phys . _ * 288 * , 117 .ads : , doi : .willis , d.m . ,henwood , r. , wild , m.n . ,coffey , h.e ., denig , w.f . ,erwin , e.h . ,hoyt , d.v . : 2013 , the greenwich photo - heliographic results ( 1874 - 1976 ) : procedures for checking and correcting the sunspot digital datasets , _ solar phys . _ * 288 * , 141 .ads : , doi : .willis , d.m . , wild , m.n . , appleby , g.m . , macdonald , g.m .: 2016 , the greenwich photo - heliographic results ( 1874 - 1885 ) : observing telescopes , photographic processes , and solar images , _ solar phys . _
|
the primary task of the debrecen heliophysical observatory ( dho ) has been the most detailed , reliable , and precise documentation of the solar photospheric activity since 1958 . this long - term effort resulted in various solar catalogs based on ground - based and space - borne observations . a series of sunspot databases and on - line tools were compiled at dho : the debrecen photoheliographic data ( dpd , from 1974 ) , the dataset based on the _ michelson doppler imager _ ( mdi ) of the _ solar and heliospheric observatory _ ( soho ) called soho / mdi debrecen data ( sdd , 1996 - 2010 ) , and the dataset based on the _ helioseismic and magnetic imager _ ( hmi ) of the _ solar dynamics observatory _ ( sdo ) called sdo / hmi debrecen data ( hmidd , from 2010 ) . user - friendly web - presentations and on - line tools were developed to visualize and search the data . as a last step of compilation , the revised version of the greenwich photoheliographic results ( gpr , 1874 - 1976 ) catalog was converted to dpd format , and a homogeneous sunspot database covering more than 140 years was created . the database of images for the gpr era was completed with the full - disc drawings of the hungarian historical observatories gyalla and kalocsa ( 1872 - 1919 ) and with the polarity drawings of mount wilson observatory . we describe the main characteristics of the available data and on - line tools .
|
the kuramoto model was originally motivated by the phenomenon of collective synchronisation whereby a system of coupled oscillating nodes will sometimes lock on to a common frequency despite differences in the natural frequencies of the individual vertices .this model has since been applied to a wide variety of application fields including , biological , chemical , engineering , and social systems ( see the survey articles including , , , and recently ) . while kuramoto studied the infinite complete network it is natural to consider finite networks of any given topology .this would correspond to a notion of coupling that is not universal across all node pairs , but rather applies to a subset of all possible links .for example the work patterns of human individuals in an organisation might enjoy a coupling effect in relation to pairs of individuals that have a working relationship . in this articlewe explain the significance of stable fixed point solutions to the model , and explore the algorithmic complexity of finding such solutions .each node has an associated phase angle , as well as its own natural frequency .the basic governing equation is the differential equation : where is the adjacency matrix of the network and is a coupling constant which determines the strength of the coupling .note that each is understood to denote a function of time .an important special case is where all the natural frequencies are equal .we shall call this case _ homogeneous _ and the general case as _inhomogeneous_. typically is a ( 0,1)-matrix , however it may also be any real positive symmetric matrix reflecting coupling constants that vary for each edge .this case is of particular interest to us and we specify this by referring to it as the _ weighted _ kuramoto model . it is well known that results for unique stable fixed points can be obtained for restricted phase angles , say in the range ] , shown in figure 2 , with corresponding adjacency matrix and show that partition is satisfied if and only if there is a non - zero stable fixed point for the corresponding homogeneous kuramoto model .let .set the vertex set of to be .for each there is a link of weight between and , a link of weight between and , and a link of weight between and .the graph ] at , ] at and ] .for large this means that -\theta^*[y] ] is close to ) and for small certainly -\theta^*[y]| \leq 1/t ] corresponding to ] as follows .a vertex is adjacent to nodes , and a vertex is adjacent to nodes . additionally for each both and are adjacent to separate complete graphs on nodes .this is illustrated in figure 5 .the graph ] .thus .\ ] ] comparing equations ( 33 ) and ( 34 ) we have . also by the stability condition ( 10 ) .it follows that . if on the other hand then which contradicts the stability condition ( 10 ) .thus the nodes of every clique in must also have the same phase angle . in order to relate the non - zero stable phase angles to nodes of let be at angle =0 ] , at ] , and the clique of nodes adjacent to and at ] and -\theta^*[u_{i}]) ] , ] are between and ] and .thus the phase angle geometry must follow the structure of figure 6 where we assume by symmetry that .the fixed point conditions then take the form of equations ( 26)-(30 ) .finally we note that can not all be in or all in . if for example then for the phase angle differences between and , and between and can be shown to be greater than so that the stability inequality ( 10 ) is violated by taking .the stability condition is therefore equivalent to the condition . 
by equations ( 28 ) and ( 29 ) this is satisfied if and only if is at least where . thus the stability condition is equivalent to inequality ( 31 ) . we have now established the conditions ( 26)-(32 ) . by setting , equation ( 17 ) follows by substitution from ( 26)-(29 ) into ( 30 ) , while ( 18 ) follows from ( 31 ) and ( 32 ) . we therefore have a solution to kuramoto partition . thus we have shown that kuramoto partition is satisfied if and only if there is a non - zero stable fixed point for the corresponding unweighted homogeneous kuramoto model . the essence of the difficulty of kuramoto partition lies in the use of square roots with typically infinite decimal expansions . to emphasise this we provide a more generic but related partition problem that may be of interest to complexity and number theorists and conjecture that this problem is np - hard . we note that if the square roots are omitted in the following then the problem is well known to have a polynomial solution .
* surd partition *
instance : a positive integer and a collection of positive integers such that .
question : is there a subset of in which ?
in this paper we demonstrated the fundamentally complex relationship between the network topology and the solution space of the kuramoto model by showing that determining the possibility of multiple stable fixed points from the network topology is np - hard for the weighted kuramoto model . specifically we show that the np - complete partition problem is satisfied if and only if there is a non - zero stable fixed point solution to a related weighted homogeneous kuramoto model . in the case of the unweighted kuramoto model we show that a particular real partition problem is satisfied if and only if there is a non - zero stable fixed point solution to a related unweighted homogeneous kuramoto model . a simplified variant of this partition problem that may be of interest to complexity and number theorists is given and conjectured to be np - hard . as a consequence we conclude that it is unlikely that stable fixed points of the kuramoto model can be characterized in terms of easily computable network invariants .
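as a practical aside , even though the decision problem studied here is np - hard in general , the existence of a non - zero stable fixed point for a given small network can still be probed numerically by relaxing the homogeneous dynamics from many random initial phases . the sketch below is such a heuristic search , written by us for illustration ( the coupling constant is absorbed into the edge weights ) ; a successful run only gives evidence for a non - trivial stable fixed point , not a certificate . the example graph is a cycle on eight nodes , where non - uniform phase - locked ( twisted ) states are known to exist .

```python
import numpy as np

def relax(adj, theta0, dt=0.05, steps=20000, tol=1e-9):
    """Forward-Euler relaxation of the homogeneous Kuramoto dynamics
    d(theta_i)/dt = sum_j adj[i, j] * sin(theta_j - theta_i); returns the final phases."""
    theta = np.array(theta0, dtype=float)
    for _ in range(steps):
        vel = (adj * np.sin(theta[None, :] - theta[:, None])).sum(axis=1)
        theta += dt * vel
        if np.abs(vel).max() < tol:
            break
    return theta

def is_nontrivial(theta, tol=1e-3):
    """True if the locked state is not the fully synchronized (all phases equal) one."""
    rel = np.mod(theta - theta[0] + np.pi, 2.0 * np.pi) - np.pi
    return bool(np.abs(rel).max() > tol)

rng = np.random.default_rng(0)
n = 8
ring = np.zeros((n, n))
for i in range(n):                      # unweighted cycle graph on n nodes
    ring[i, (i + 1) % n] = ring[(i + 1) % n, i] = 1.0
hits = sum(is_nontrivial(relax(ring, rng.uniform(0.0, 2.0 * np.pi, n))) for _ in range(50))
print(f"non-trivial phase-locked states reached in {hits} of 50 random runs")
```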
dekker a h , and taylor r 2013 synchronization properties of trees in the kuramoto model , _ siam journal on applied dynamical systems _ , 12 ( 2 ) , pp .596 - 617 .dorfler f , and bullo f 2013 synchronization in complex networks : a survey , _ automatica _ , in review .dorogotsev s n , goltsev a v , and mendes j f f 2008 critical phenomena in complex networks , _ rev .modern phys ._ , 80 , pp .1275 - 1353 .garey m r , and johnson d s 1979 _ computers and intractability - a guide to the theory of np - completeness _ , w h freemean , new york .n , motee n. , and barahona m 2004 on the stability of the kuramoto model of coupled nonlinear oscillators , _ proceedings of the american control conference , washington dc , usa _ , 5 , pp . 4296 - 4301 .kalloniatis a 2008 a new paradigm for dynamical modelling of networked c2 processes , _ proceedings of the 13th iccrts , bellevue , washington , usa_. kalloniatis a 2010 from incoherence to synchronicity in the network kramoto model , _ phys ., 82 , 066202 .karmarker n , and karp r 1982 the differencing method of set partitioning , _ technical report ucb / csd _ , 82/113 .n , karp r m , lueker g s , and odlyzko a m 1986 probabilistic analysis of optimum partitioning , _ journal of applied probability _ , 23 , pp . 626 - 645 . kuramoto y 1975 self - entrainment of a population of of coupled nonlinera oscillators , _ int . symp . on mathematical problems in theoretical physics - lecture notes in physcis, 39 , pp .420 - 422 .kuramoto y 1984 _ chemical oscillations , waves , and turbulence _ ,springer , new york .monzon p a , and paganini f 2005 global considerations on the kuramoto model of sinusoidal coupled oscillators , _proceedings 44th ieee conf .decision and control , control conf .seville , spain , dec .12 - 15 _ , pp .3923 - 3928 .ochab j and gora p f 2010 synchronization of coupled oscillators in a local one - dimensional kramoto model , _ summer solstice 2009 : int .conf . on discrete models of complex systems - acta phys , pol ._ , 3 , pp .453 - 462 .rohn j h 1994 checking positive definiteness or stability of symmetric interval matrices is np - hard , _ comment ._ , 35 , 4 , pp .795 - 797 .strogatz s h , and mirollo r e 1988 phase - locking and critical phenomena in lattices of coupled non - linear oscillators with random intrinsic frequencies , _ phys ., 31 , pp .143 - 168 .strogatz s h 2000 from kuramoto to crawford : exploring the onset of synchronization in populations of coupled oscillators , _ phys ., 143 , pp . 1 - 20 .strogatz s h 2003 _ sync : the emerging science of spontaneous order _ , hyperion , new york .taylor r 2012there is no non - zero stable fixed point for dense networks in the kuramoto model , _ journal of physics a : mathematical and theoretical _ , 45 , 055102 .winfree a t 2003 _ the geometry of biological time _ , springer , new york .
|
the kuramoto model when considered over the full space of phase angles [0 , 2\pi ) can have multiple stable fixed points which form basins of attraction in the solution space . in this paper we illustrate the fundamentally complex relationship between the network topology and the solution space by showing that determining the possibility of multiple stable fixed points from the network topology is np - hard for the weighted kuramoto model . in the case of the unweighted model this problem is shown to be at least as difficult as a number partition problem , which we conjecture to be np - hard . we conclude that it is unlikely that stable fixed points of the kuramoto model can be characterized in terms of easily computable network invariants .
|
yurii fedorovich smirnov passed away in 2008 and marcos moshinsky in 2009 .they were two famous physicists with common interests in nuclear physics , atomic and molecular physics , and mathematical physics .more generally , both of them were at the origin of significant achievements in symmetry methods in physics .they actively participated in several symmetries in science symposia in bregenz .these two giants had parallel centers of interest in the sense that they developed separately some complementary works in nuclear physics ( that led in particular to the concept of moshinsky - smirnov coefficients ) , dealt with some related problems in atomic , molecular and mathematical physics , and , finally , combined efforts to produce a beautiful and very useful book on the applications of the harmonic oscillator system to various areas of physics and chemistry .it is not the purpose of these notes to extensively list and analyse the numerous papers by marcos and yurii .i shall focus on some particular facets of their works .i had the opportunity to meet marcos and yurii several times in bregenz and on several other occasions , and to discuss with them about wigner - racah algebras for finite groups , lie groups and quantum groups .i had also a chance to collaborate with yurii smirnov .therefore , i shall devote the main part of these notes to some specific domains of importance to yurii and marcos and to some more personal reminiscences on yurii .marcos moshinsky was a mexican physicist .he was born in kiev ( ukraine ) in 1921 .he arrived as a refugee in mexico when he was three years old and obtained mexican citizenship in 1942 .he received a bachelor s degree in physics from the _ universidad nacional autnoma de mxico _ ( u.n.a.m . ) and a ph.d .degree in theoretical physics , under the guidance of eugene p. wigner , from princeton university .marcos was also the recipient a post - doctoral fellowship at the _institut henri poincar _ in paris .afterwards , he returned to mexico and pursued a brillant career at the u.n.a.m . in mexico city .professor marcos moshinsky had important responsabilities as the president of the _ sociedad mexicana de fsica _ , as a member of _ el colegio nacional _ , and as a member of the editorial board of several international scientific reviews .he produced and/or co - produced more than 200 scientific papers and four books among which the most well - known are the one written in collaboration with thomas a. brody on transformation brackets for nuclear shell - model calculations and the one with yurii f. smirnov on the applications of the harmonic oscillator in various fields of physics and quantum chemistry .he received several prizes , namely the _premio nacional de ciencias y artes _ in 1968 , _premio luis elizondo _ in 1971 , _. de ciencias exactas _ in 1985 , _ premio prncipe de asturias de investigacin cientfica y tcnica _ in 1988 , and the prestigious unesco science prize in 1997 for his work in nuclear physics .he also received the wigner medal in 1998 .a first seminal paper by marcos concerned the transient dynamics of particle wavefunctions , a phenomenon that gives rise to diffraction in time . 
however , most of his scientific work dealt with collective models of the nucleus , canonical transformations in quantum mechanics , and group theoretical methods in physics , with a special emphasis on symplectic symmetry in nuclear , atomic and molecular physics .these themes were of interest to yurii too .following the pioneering work of talmi ( who prepared his ms.s .thesis with guilio racah , his doctorate thesis with wolfgang pauli and who was a post - doctoral fellow with eugene p. wigner ) , both marcos and yurii were interested in the description of pairs of nucleons in a harmonic - oscillator potential . in 1959 , moshinsky developed a formalism to connect wavefunctions in two different coordinate systems for two particles ( with identical masses ) in a harmonic - oscillator potential . in this formalism ,any two - particle wavefunction , expressed in coordinates with respect to the origin of the harmonic - oscillator potential , is a linear combination of wavefunctions , expressed in relative and centre - of - mass coordinates of the two particles . the so - called transformation brackets make it possible to pass from one coordinate system to the other .moshinsky gave an explicit expression of these coefficients in the case and derived recurrence relations that can be used to obtain the coefficients for and from those for .along this vein , brody and moshinsky published extensive tables of transformation brackets . at the end of the fifties ,smirnov worked out a similar problem , viz .the calculation of the talmi coefficients for unequal mass nucleons , and gave solution for the case and .( indeed , the transformation brackets and the talmi coefficients are connected via a double clebsch - gordan transformation . )the coefficients , called _ transformation brackets _ by moshinsky and _ total talmi coefficients _ by smirnov , are now referred to as moshinsky - smirnov coefficients .both the moshinsky - smirnov coefficients and the talmi coefficients were revisited at the end of the seventies in terms of generating functions in the framework of the approaches of julian s. schwinger and valentine bargmann to the harmonic - oscillator bases .( the work by mehdi hage hassan , who prepared his state doctorate thesis at the _ institut de physique nuclaire de lyon _ and conducted his career in beyrouth , constitutes a very deep and original approach to the talmi coefficients and moshinsky - smirnov coefficients . )it should be noted that the transformation brackets or moshinsky - smirnov coefficients are also of importance for atoms and molecules as shown by marcos and yurii in their book written during the time yurii was a visiting professor at the _ instituto de fsica _ of the _ universidad nacional autnoma de mxico_. a second area of common interest to both marcos and yurii concerns the many - body problem considered from the point of view of unitary and symplectic groups and the use of nonlinear and nonbijective canonical transformations . in this vein ,moshinsky and some of his collaborators introduced the concept of an ambiguity group , a group required for taking into account the nonbijectivity of certain canonical transformations .indeed , this concept is closely related to the one of lie algebra under constraint , which in turn is connected to nonbijective transformations like hopf fibrations on spheres and hopf fibrations on hyperbolids . 
among the nonbijective transformations ,one may mention the kustaanheimo - stiefel transformation and the levi - civita transformation as well as their various extensions .in particular , the kustaanheimo - stiefel transformation allows one to pass from a four - dimensional harmonic oscillator subjected to a constraint to the three - dimensional hydrogen atom ( see for instance ) .this subject was of interest to yurii and he revisited the hydrogen - oscillator connection with corrado campigotto . the harmonic oscillator is a central ingredient in numerous studies by smirnov and moshinsky .many applications of the nonrelativistic and relativistic harmonic oscillators to modern physics ( from molecules , atoms and nuclei to quarks ) were pedagogically exposed in the book by marcos and yurii with a special attention paid to the -body problem ( in the hartree - fock approximation ) , the nuclear collective motion and the interacting boson model .but their common interests were not limited to transformation brackets and harmonic oscillators .let us briefly mention that both of them were also interested in group theoretical methods and symmetry methods in physics and also contributed to several fields of mathematical physics including , for instance , the state labelling problem , special functions , and generating functions ( see , for example , ) .yurii fedorovich smirnov was a russian physicist .he was born in the city of ilinskoe ( yaroslavl region , russia ) in 1935 .he graduated from moscow state university .subsequently , he completed his doctorate thesis at the same university under the guidance of yurii m. shirokov and benefited from fruitful contacts with other distinguished physicists including yakov a. smorodinsky .he pursued his career in the ( skobeltsyn ) institute of nuclear physics and in the physics department of ( lomonosov ) moscow state university with many stays abroad .the last fifteen years of his life were shared between moscow and mexico city where he was a visiting professor and later a professor at the _ instituto de fsica _ and at the _ instituto de ciencias nucleares _ of the u.n.a.m .( he spent almost 11 years in mexico ) .he received prestigious awards : the k.d .sinelnikov prize of the ukrainian academy of sciences in 1982 and the m.v .lomonosov prize in 2002 .he was also a member of the academy of sciences of mexico .yurii smirnov authored and/or co - authored eleven books and more than 250 scientific articles .he also translated several scientific books into russian .he translated , for example , a book on the harmonic oscillator written by marcos moshinsky in 1969 , precisely the book that was a starting point for their common book on the same subject , published in 1996 .he was a member of the editorial board of several journals and a councillor of the scientific councils of the skobeltsyn institute of nuclear physics and of the chemistry department of moscow state university , as well as of the institute for theoretical and experimental physics ( itep ) in moscow .my first contact with the work of yurii smirnov dates back to 1978 when my colleague j. 
patera showed me , on the occasion of a nato advanced study institute organised in canada by j.c .donini , a beautiful book written by d.t .sviridov and yu.f .smirnov .this book dealt with the spectroscopy of ions in inhomogeneous electric fields ( part of a disciplinary domain known as crystal- and ligand - field theory in condensed matter physics and explored via the theory of level splitting from a theoretical point of view ) . in 1979 , b.i .zhilinski , while visiting dijon and lyon in france in the context of an exchange programme between the ussr and france , gave me another interesting book , on ions in crystalline fields , written by d.t .sviridov , yu.f . smirnov and v.n .tolstoy . at that time, the references for mathematical aspects of crystal- and ligand - field theory were based on works by y. tanabe , s. sugano and h. kamimura from japan , j.s .griffith from england , and tang au - chin and his collaborators from china ( see also some contributions by the present author ) .the two above - mentioned books by smirnov and his colleagues shed some new light on the mathematical analysis of spectroscopic and magnetic properties of partly filled shell ions in molecular and crystal surroundings . in particular , special emphasis was put on the derivation of the wigner - racah algebra of a finite group of molecular and crystallographic interest from that of the group .my second ( indirect ) contact with yurii is related to an invitation to participate in the fifth workshop on _ symmetry methods in physics _ in obninsk in july 1991 .unfortunately , i did not get my visa on time reducing my participation to a paper in the proceedings of the workshop edited by yu.f .smirnov and r.m .asherova . in the beginning of the 1990 s, i had a chance to discover another facet of yurii s work . in 1989 , a russian speaking student from switzerland , c. campigotto , spent one year in the group of prof .he started working on the kustaanheimo - stiefel transformation , an transformation associated with the hopf fibration with compact fiber .( such a transformation makes it possible to connect the kepler - coulomb system in to the isotropic harmonic oscillator in . ) then , campigotto ( well - prepared by smirnov and his team , especially andrey m. shirokov and valeriy n. tolstoy ) came to lyon to prepare a doctorate thesis ( partly published in ) .he defended his thesis in 1993 with george s. pogosyan ( representing yu.f .smirnov ) as a member of the jury .a fourth opportunity to work with yurii stemmed from our mutual interest in quantum groups and in nuclear and atomic spectroscopy .i meet him for the first time in dubna in 1992 .we then started a collaboration ( partly with r.m .asherova ) on - and -boson calculus in the framework of hopf algebras associated with the lie algebras and .in addition , we pursued a group - theoretical study of the coulomb energy averaged over the states with a definite spin .we also had fruitful exchanges in nuclear physics .indeed , prof .smirnov and his colleagues d. bonatsos ( from greece ) , s.b .drenska , p.p .raychev and r.p .roussev ( all from bulgaria ) developed a model based on a one - parameter deformation of for dealing with rotational bands of deformed nuclei and rotational spectra of molecules ( see also ) .along the same line , a student of mine , r. 
barbier , developed in his thesis a two - parameter deformation of with application to superdeformed nuclei in mass region and ( partly published in ) .it was a real pleasure to receive yurii in lyon on the occasion of the defence of the barbier thesis in 1995 .indeed , from 1992 to 1995 , yurii made four stays in lyon ( one with his wife rita and one with his daughter tatyana ) and we jointly participated in several meetings , one in clausthal in germany ( organised by h .- d .doebner , v.k .dobrev and a.g .ushveridze ) and two in bregenz in austria ( organised by b. gruber and m. ramek ) .i can not do justice to all of the fields in which yurii was recognized as a superb researcher .it is enough to say that he contributed to many domains of mathematical physics ( e.g. , finite groups embedded in compact or locally compact groups , lie groups and lie algebras , quantum groups , special functions ) and theoretical physics ( e.g. , nuclear , atomic and molecular physics , crystal- and ligand - field theory ) .let me mention , among other fields , that he achieved alone and with collaborators significant advances in the theory of clustering of shell - model ( nuclear ) systems , in projection operator techniques for simple lie groups , in the theory of heavy ion collisions , and in the so - called -matrix formalism for quantum scattering theory ( see and references therein ) .the -matrix formalism requires the solution of three - term recurrence relations ( or second - order difference equations ) ; along this line , yurii and some of his collaborators published several works ( see for instance ) . as another major contribution , at the end of the sixties he proposed in collaboration with vladimir g. neudatchin a method , the so - called ( e,2e ) method ( an analog of the ( p,2p ) method used in nuclear physics ) , for the experimental investigation of the electronic structure of atoms , molecules and solids ; this method was successfully tested in many laboratories around the world ( see and references therein ) .yurii was also an exceptional teacher .it was very pleasant , profitable and inspiring to be taught by him .i personally greatly benefited from discussions with yurii smirnov .yurii and marcos had many students who are now famous physicists .they interacted with many collaborators in their countries and abroad , and had an influence on many scientists .marcos moshinsky and yurii fedorovich smirnov will remain examples for many of us .we shall not forget them .99 kibler m and labastie p 1989 on some transformations generalizing the levi - civita , kustaanheimo - stiefel , and fock transformations _ group theoretical methods in physics _eds y saint - aubin and l vinet ( singapore : world scientific ) hage hassan m and kibler m 1990 non - bijective quadratic transformations and theory of angular momentum _ selected topics in statistical mechanics _ eds a a logunov , n n bogolubov , jr , v g kadyshevsky and a s shumovsky ( singapore : world scientific ) hage hassan m and kibler m 1991 on hurwitz transformations _le problme de factorisation de hurwitz : approche historique , solutions , applications en physique _ eds a ronveaux and d lambert ( namur : fundp ) ( _ preprint _hepth/9409051 ) vonsovski c v , grimailov c v , tcherepanov v i , meng a n , sviridov d t , smirnov yu f and nikiforov a e 1969 _ crystal - field theory and optical spectra of partly filled shell transition ions _( moscow : nauka ) tang au - chin , sun chia - chung , kiang yuan - sun , deng zung - hau , liu jo - 
chuang , chang chain - er , yan guo - sen , goo zien and tai shu - shan 1979 _ theoretical method of the ligand field theory _ ( peking : science press ) kibler m and daoud m 1992 symmetry adaptation and two - photon spectroscopy of ions in molecular or solid - state finite symmetry _symmetry methods in physics _ eds yu f smirnov and r m asherova ( obninsk : russian federation ministry of atomic energy institute of physics and power engineering ) kibler m , campigotto c and smirnov yu f 1994 recursion relations for clebsch - gordan coefficients of and _ symmetry methods in physics _eds a n sissakian , g s pogosyan and s i vinitsky ( dubna : joint institute for nuclear research ) barbier r and kibler m 1995 a system of interest in spectroscopy : the -rotor system _ finite dimensional integrable systems _ eds a n sissakian and g s pogosyan ( dubna : joint institute for nuclear research ) barbier r and kibler m 1995 on the use of quantum algebras in rotation - vibration spectroscopy _ modern group theoretical methods in physics _ eds j bertrand , m flato , j - p gazeau , d sternheimer and m irac - astaud ( dordrecht : kluwer ) smirnov yu f , shirokov a m , lurie yu a and zaytsev s a 1995 harmonic oscillator representation in the theory of scattering and nuclear reactions _harmonic oscillators _ eds d han and k b wolf ( washington : nasa )
|
some particular facets of the numerous works by marcos moshinsky and yurii fedorovich smirnov are presented in these notes . emphasis is placed on some of the common interests of yurii and marcos in physics , theoretical chemistry , and mathematical physics . these notes also contain some more personal memories of yurii smirnov .
|
numerical analysis of periodic materials and structures has proven to be an extremely powerful tool .it provides detailed information , such as failure initiation sites and stress - strain at smaller scales ( meso / micro ) it has been successfully used to determine homogenised properties , study the detailed stress - strain fields at nano- and microscopic scales to obtain structural damage initiation conditions and sites , as well as to simulate damage development and associated deterioration of the homogenised mechanical properties .several works can be found discussing the application of periodic boundary conditions to representative regions , e.g. . for periodic structures ,the unit cell ( uc ) is used as the representative region , and the analysis is performed by applying periodic displacement boundary conditions .the topological complexity of many ucs found in practice , such as in typical woven composites , often leads to unpractical modelling and analysis times .for this reason , internal symmetries of the ucs must whenever possible be exploited to reduce the analysis domain further ( provided the appropriate boundary conditions are applied ) , thus reducing both modelling and analysis time . a comprehensive study on the determination of reduced unit cells ( rucs ) for ud and particle reinforced composites was performed by and .different rucs , loading cases and correspondent boundary conditions were determined and presented in detail .applied to textile composites , proposed a general framework for determining rucs . in the first part of the present work , the derivation of the framework proposed in revisited andsome of its building blocks redefined , resulting in a different , formally defined and more concise formulation .the framework proposed by requires the distinction of two different cases of equivalence between subcells : ( i ) equivalence is obtained by a symmetry operation or a translation , and ( ii ) equivalence is obtained by the combination of a symmetry operation and a translation . in the second case an additional vector of constants ( see ) needs to be considered when applying the boundary conditions .the non - zero components of this vector are tabulated for different cases and are determined by the fem as part of the solution .the formulation derived in the present work is more generic , in that no cases need to be treated separately , and mathematically complete in that no vector needs to be determined from tabulated data .all terms in the equation that assigns the periodic boundary conditions for a ruc are fully defined , simplifying the formulation and consequently their use . in the second part of this paper ,the application of the formulation developed and its potential is illustrated through two practical examples : 3d woven composites and honeycombs .additionally , particular attention is given to offset - reduced unit cells as they allow the domain reduction without load restrictions .in this section , the equivalence framework is formally defined .it is based on four concepts : physical equivalence , load equivalence , periodicity and load admissibility . in the following sections each of these concepts is detailed .consider a domain in space and within it a sub - domain .the latter has a defined boundary , local coordinate system ( lcs ) , and a certain spatial distribution of physical properties with .each of these physical properties are expressed as a tensor written in the lcs of , i.e. 
.two distinct sub - domains and are _ physically equivalent _ , denoted:_ _ if for every point in there is a point in such that , for each physical property , and vice - versa . in eq .[ eq : physicalcond ] , and are the coordinate vectors of and given in the lcs and associated with and respectively , fig .[ fig : geomrel ] .the points and for which eq . [ eq : physicaleq ] is verified are designated as physically equivalent points .geometrical relation between equivalent points . ] across the literature , different definitions can be found for periodic structure and uc . in the present work , periodic structure and uc are defined based on the concept of physical equivalence .a domain is periodic if it can be reconstructed by tessellation of , non - overlapping , physically equivalent sub - domains with parallel lcs , i.e. if for all : the smallest sub - domain verifying the periodicity definition is designated as an unit cell .the concept of load equivalence ( see ) provides a relation between physically - equivalent sub - domains , once the structure they are part of is loaded .let us consider a periodic structure as defined in the previous section .load equivalence between two physically equivalent points and is verified if the strains and stresses at these points , given in the lcs of the sub - domains , can be related by: where the load reversal factor , , is used to enforce the equivalence between fields of physically equivalent sub - domains . for eqs .[ eq : loadeqstrain ] and [ eq : loadeqstress ] to hold , the length scale of the loading variation must be larger than the length scale of the sub - domains , such that an approximately periodic variation of the strains and stress fields is assured .if a structure is entirely composed by load equivalent sub - domains , its response can be obtained by analysing one of these domains alone , instead of analysing the entire structure .however , the appropriate boundary conditions have to be applied .these guarantee that the sub - domain , although isolated , has the same response it would have if it was embedded in the structure .not all physically equivalent sub - domains can be used to analyse the response of a periodic structure under all loading conditions .the use of sub - domains smaller than the uc to analyse the response of a periodic structure is restricted by the relations between the lcs of these sub - domains . the sufficient and necessary condition for admissibility of a sub - domain to be used in the analysis of a periodic structureis derived below . for convenience, the strain field of a sub - domain at a point is decomposed as the sum of a volume average and a fluctuation term , see fig.[fig : fluct_avg_1d]: where is the volume average operator over the volume , and is the fluctuation term , see .it is possible to find the displacements at a given point by integration of eq .[ eq : average_fluct_strain ] . assuming small displacements and no rigid body rotations, the displacement relative to the origin of a lcs , attached to the subdomain , comes as : idealised relation between the fluctuation and average fields of strain and displacement in a periodic structure . ] knowing that the coordinate vectors of two equivalent points and given in their lcss are identical , eq . [ eq : physicalcond ] , they can be related in the lcs of the sub - domain by: where is the transformation matrix between the lcss of and , and is the position vector of the origin of the lcs of the sub - domain given in the lcs of the sub - domain , fig . 
[ fig : geomrel ] . similarly , using eq .[ eq : loadeqstrain ] , the strains at two equivalent points can be related in the lcs of by: the relation between the volume average of the strain of the equivalent sub - domains and , in the lcs of the first , can be obtained directly by integrating eq .[ eq : strainglobala_ahat]: where the lower index of refers to the coordinate system , and the upper index to the domain over which the volume average was taken . decomposing the strain field in eq .[ eq : strainglobala_ahat ] into its average and fluctuation parts and using eq .[ eq : strain_avg_global ] , the relation between the strain fluctuations field of two equivalent points is obtained: in general , the displacement of a point can be obtained from: where , and is the coordinate vector of point . considering the structure has no rigid body motion , nor rotation , the displacement of a point can be simply obtained by: the strain fluctuations of two equivalent points belonging to the sub - domains and are related in the lcs of by ( see also eq . [eq : strain_fluct_global ] ) : knowing that two equivalent points are related by ( see also eq .[ eq : positionglobala_ahat ] ) : and that the displacement at the origin is equal to zero , integrating eq .[ eq : strain_fluct ] it is possible to obtain: equation [ eq : disp_fluct_1 ] provides a relation between the displacement perturbations of two equivalent points given in the lcs of one of the sub - domains .apart from , all variables are known ; below it is shown that . according to the definition of periodicity, a periodic structure can be reconstructed from physically equivalent sub - domains with parallel coordinate systems .the strain fields at two equivalent points belonging to different sub - domains are related by: if we consider that the sub - domains are ucs , since the coordinate systems are parallel , the matrix will be equal to the identity matrix. moreover , all equivalent sub - domains will be admissible and have a load reversal factor .equation [ eq : strainglobala_ahat_ap ] can then be simply written as: if the two ucs being considered are adjacent , a vector can be defined and eq .[ eq : strainglobala_ahat_apuc ] can be generalized for any point of the structure: where is commonly named the periodicity vector , and corresponds to the period of the function .the integral of a periodic function of period can always be written as: where is also a periodic function of period , is the average the periodic function , and is a constant . using eq .[ eq : periodicfprep ] it is possible to write: the average term that would appear in eq .[ eq : dispfluct_def ] is zero since by definition has zero average .additionally , knowing that at the origin is equal to zero , one can conlude that will also be zero and thus will be a periodic function with zero average . 
using the above result , and knowing that the integration over a period of a periodic function with zero average is equal to zero , one can integrate both sides of eq .[ eq : disp_fluct_1 ] over a period: obtaining: substituting eq .[ eq : disp_fluct_or_zerofinal ] in eq .[ eq : disp_fluct_1 ] , the relation between the displacement perturbations at two equivalent points can be finally obtained: for a sub - domain to be admissible , the volume average ( homogenised ) strain calculated for this sub - domain on a given reference system must equal that volume average on any other sub - domain ( on the same reference system ) , as the volume average is a homogenised entity , hence independent of the sub - domain where it was calculated . from load equivalence ,the strains at physically - equivalent points are related ( eq .11 is obtained by simply integrating this relation over the sub - domain , but does not enforce directly that the volume average strain is a macroscopic entity independent of the particular sub - domain . for the sub - domain to be admissible , the following condition must be verified: as , for a sub - domain to be admissible , the homogenised strain on a given reference system ( in this case ) must be the same for any sub - domain ( in this case and ) .therefore , eq . [ eq : strain_avg_global ] with eq .[ eq : volavgeq ] lead to the condition of sub - domain admissibility , as defined below .a given sub - domain is admissible for the analysis of a periodic structure under a given loading , if and correspondent to any other sub - domain are such that , for all : equation [ eq : sub_domain_adm_gen ] can be used to , for a given loading , determine the load reversal factors associated with each of the sub - domains .the admissibility of a subdomain for structural analysis leads to the definition of a ruc .a reduced unit cell is a domain , smaller than the unit cell , that can be used to determine the response of a periodic structure to a given loading .the condition to be verified by a reduced unit cell in structural analysis is defined by eq .[ eq : sub_domain_adm_gen ] .to ensure the response of a periodic structure under a given loading can be determined from the response of a ruc , the appropriate boundary conditions that must applied to the latter need to be determined . in this section ,the equivalence framework , presented previously , is used to derive the periodic boundary conditions for the analysis of rucs .consider two adjacent sub - domains and that are physically and load equivalent .if a point belonging to is chosen to be at the boundary of the sub - domain , then its equivalent point is also be at the boundary of .since both points and belong to , the displacement at each point can be obtained using eq .[ eq : average_fluct_disp]: all quantities in eqs .[ eq : dispa ] and [ eq : dispahat ] are written in the lcs of , and the volume average is taken over the sub - domain ( the subscript will be omitted hereafter for convenience ) .since both points are equivalent , their positions are related by eq .[ eq : positionglobala_ahat ] leading to: knowing that the displacement fluctuations at two equivalent points are related by eq .[ eq : disp_fluct_global_ap ] , if eq .[ eq : dispahat ] is multiplied by and then subtracted to eq .[ eq : dispa ] , the displacement fluctuations cancel , leading to: provided the sub - domain is admissible , see definition 4 , the term is zero . using this result , eq . 
[ eq : dispconstrain2 ] can be simplified to eq .[ eq : dispconstrainfinal ] , which is the main outcome of this analysis and can be used directly to apply periodic boundary conditions to a sub - domain: once a displacement constraint equation is associated to all points at the boundary of the sub - domain , loading can be applied by prescribing a volume average strain .it is relevant to notice that the displacement constraint equation traditionally used to impose periodic boundary conditions on a uc , see for example , is a particular case of eq .[ eq : dispconstrainfinal ] where the matrix is equal to the identity matrix , since the lcss of the ucs are parallel by definition and consequently , from the sub - domain admissibility evaluation , the load reversal factor is equal to one .it is important to highlight the differences between the result obtained above and the one obtained in ; as referred before , eq .[ eq : dispconstrainfinal ] is completely generic and self sufficient : no distinction needs to be made , in the current formulation , between the type of operation needed to achieve equivalence between subdomains .moreover , all terms in eq .[ eq : dispconstrainfinal ] are fully defined , and can therefore be readily used to prescribe periodic boundary conditions to a given subdomain . according to the periodicity definition given in [ sub : per_def ] , a uc is the smallest sub - domain that allows a periodic structure to be reconstructed by tessellation of sub - domains that are physically equivalent to the uc and have parallel lcs .nevertheless , in most applications the uc is defined such that the lcs are not only parallel but orthogonally translated , fig .[ fig : unit cell ] .however , smaller ucs can in general be defined if non - orthogonal translations are considered , fig .[ fig : oruc ] .although , according to the definition , the representative sub - domains obtained through non - orthogonal translation are in fact ucs , in the present paper they are referred to as offset - reduced unit cells , since they lead to a reduction in the domain of the traditionally defined ucs , fig .[ fig : unit cell vs oruc ] . an important feature of orucs is that all loading combinations are admissible .this key feature has been identified by and used in the derivation of pbcs for rucs of ud composites , and cracked laminates .it is relevant to highlight that using the present formulation this feature comes as a natural result : since the lcs of all sub - domains are parallel , they relate to each other by the identity matrix , i.e. , as a consequence eq .[ eq : sub_domain_adm_gen ] is always verified and therefore all loading cases are admissible .in the present section two applications of the formulation presented previously are illustrated .the ucs of 3d woven composites can be significantly larger than their 2d counterparts , mostly due to the more intricate reinforcement architecture and 3d nature .therefore , the domain reduction enabled by the use of rucs can be very significant .figure [ fig : ruc3d3d ] , shows an uc , an oruc and a ruc of a given 3d woven architecture , highlighting the domain reduction achieved : oruc and ruc reduce the analysis domain to and of the uc , respectively . 
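( before working out the specific relations for this ruc , it may help to illustrate the classical uc special case noted above , in which the transformation matrix is the identity and the load reversal factor equals one : the constraint then reduces to prescribing , for every pair of periodically equivalent boundary points , a displacement jump equal to the average strain applied to their separation vector . the python sketch below simply evaluates these jumps for a hypothetical set of paired nodes and a prescribed average strain ; it illustrates the form of the constraint equations only , not any particular finite element implementation . )

```python
import numpy as np

# sketch of the classical uc special case of the displacement constraint
# (identity transformation matrix, load reversal factor 1): for each pair of
# periodically equivalent boundary points, u[slave] - u[master] equals the
# prescribed average strain applied to their separation vector.
# node coordinates, pairings and the average strain below are hypothetical.

def pbc_jumps(node_pairs, coords, avg_strain):
    """return, for each (slave, master) pair, the prescribed displacement jump
    avg_strain @ (x_slave - x_master)."""
    return {(s, m): avg_strain @ (coords[s] - coords[m]) for s, m in node_pairs}

if __name__ == "__main__":
    coords = {0: np.array([0.0, 0.0]),                # corner nodes of a 1 x 1 2d unit cell
              1: np.array([1.0, 0.0]),
              2: np.array([0.0, 1.0]),
              3: np.array([1.0, 1.0])}
    node_pairs = [(1, 0), (2, 0), (3, 0)]             # periodically paired nodes
    avg_strain = np.array([[0.010, 0.005],            # prescribed macroscopic strain
                           [0.005, 0.000]])
    for pair, jump in pbc_jumps(node_pairs, coords, avg_strain).items():
        print("pair", pair, "-> displacement jump", np.round(jump, 5))
```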
to define the periodic boundary conditions for the analysis of the ruc of fig .[ fig : ruc3d3d ] , eq .[ eq : dispconstrainfinal ] , the geometric relations between equivalent points at the ruc boundary need to be found .these are obtained by applying eq .[ eq : positionglobala_ahat ] to the equivalent domains at the boundary of the ruc and are given in table [ tab : ruc3dgeom ] and illustrated in fig .[ fig : ruc3d_geom_rel ] . since in general, , the load admissibility needs to be evaluated and , for each adjacent subdomain , determined .this is performed evaluating eq .[ eq : sub_domain_adm_gen ] , and summarized in table [ tab : ruc3d&admissibleloadings ] . given a certain loading and using the data from tables [ tab : ruc3dgeom ] and [ tab : ruc3d&admissibleloadings ] , eq .[ eq : dispconstrainfinal ] can be fully defined and periodic boundary conditions prescribed to the ruc .uc , oruc and ruc of a 3d woven reinforcement architecture ; representation of the reduced unit cell ( ruc ) and adjacent sub - domains .] geometrical relations between equivalent points at the boundary of the ruc . ]> p0.55incccccc & & & & & & + & ] & ] & ] + & ] & ] & ] + & ] & ] & ] + & ] & ] & ] + > p1inc > p1 in & & admissible loading + & ] + & ] + honeycombs are other example of an extensively used periodic structure for which uc modelling and analysis can be simplified by the use of rucs .figure [ fig : honeyuc&ruc ] shows the uc and ruc for a honeycomb structure . followingthe procedure described previously , the geometrical relations between equivalent points at the boundary are first determined , table [ tab : ruchoney&geom ] , and figs [ fig : ruchoney_adj_sub ] and [ fig : ruchoney_geom_rel ] . the load admissibility is evaluated and the load reversal factors found , table [ tab : ruchoney&admissibleload ] . unit cell ( uc ) and reduced unit cell ( ruc ) of a honeycomb structure . ] .[tab : ruchoney&geom]geometrical relations between equivalent points at the boundary of the honeycomb ruc . , are respectively , the length and width of the ruc . [ cols="^,^,^,^",options="header " , ]a theoretical framework leading to a sound derivation of pbcs for the analysis of domains smaller then the unit cells ( ucs ) , named reduced unit cells ( rucs ) , by exploiting non - orthogonal translations and symmetries whithin the uc was developped .the investment in defining the problem formally resulted in a simple and readily usable formulation .the method is applied to two different periodic structures illustrating the potential of the ruc concept .offset reduced unit cells are highlighted as a particular case with interesting features , allowing the analysis of domains smaller than the uc without any load restrictions .lomov , d.s .ivanov , i. verpoest , m. zako , t. kurashiki , h. nakai , and s. hirosawa .meso - fe modelling of textile composites : road map , data flow and algorithms ._ composites science and technology _ , 670 ( 9):0 18701891 , july 2007 .z. xia , y. zhang , and f. ellyin . a unified periodical boundary conditions for representative volume elements of composites and applications ._ international journal of solids and structures _ , 400 ( 8):0 19071921 , april 2003 .z. xia , c. zhou , q. yong , and x. wang . 
on selection of repeated unit cell model and application of unified periodic boundary conditions in micro - mechanical analysis of composites ._ international journal of solids and structures _ , 430 ( 2):0 266 278 , 2006 .sun and r.s .prediction of composite properties from a representative volume element ._ composites science and technology _ , 560 ( 2):0 171179 , 1996 . s. li .boundary conditions for unit cells from periodic microstructures and their implications ._ composites science and technology _ , 680 ( 9):0 1962 1974 , 2008 .s. li . on the unit cell for micromechanical analysis of fibre - reinforced composites ._ proceedings : mathematical , physical and engineering sciences _ , 4550 ( 1983):0 pp .815838 , 1999 . s. li .general unit cells for micromechanical analyses of unidirectional composites ._ composites part a : applied science and manufacturing _ , 320 ( 6):0 815 826 , 2001 .s. li and a. wongsto .unit cells for micromechanical analyses of particle - reinforced composites ._ mechanics of materials _ , 360 ( 7):0 543 572 , 2004 .x. tang and j.d .general techniques for exploiting periodicity and symmetries in micromechanics analysis of textile composites ._ journal of composite materials _ , 370 ( 13):0 11671189 , 2003 .p. suquet .elements of homogenization theory for inelastic solid mechanics . in sanchez - palencia e. and zaouia. , editors , _ homogenization techniques for composite media _ , lecture notes in physics , pages 194275 .springer - verlag , berlin . , 1987 .ellermeyer and d.g .integrals of periodic functions . _ mathematics magazine _ , 740 ( 5):0 393396 , 2001 .s li , c.v .singh , and r. talreja . a representative volume element based on translational symmetries for fe analysis of cracked laminates with two arrays of cracks ._ international journal of solids and structures _ , 460 ( 7 - 8):0 1793 1804 , 2009 .
|
a theoretical framework is developed leading to a sound derivation of periodic boundary conditions ( pbcs ) for the analysis of domains smaller than the unit cells ( ucs ) , named reduced unit cells ( rucs ) , by exploiting non - orthogonal translations and symmetries . a particular type of uc , the offset - reduced unit cell ( oruc ) , is highlighted . orucs enable the reduction of the analysis domain of the traditionally defined ucs without any loading restriction . the relevance of the framework and its applicability to any periodic structure is illustrated through two practical examples : 3d woven composites and honeycombs .
|
two recent papers ( yang and rannala 2005 ; lewis , holder and holsinger 2005 ) highlighted a phenomenon that occurs when sequences evolve on a tree that contains a polytomy - in particular a three - taxon unresolved rooted tree . as longer sequences are analysed using a bayesian approach , the posterior probability of the trees that give the different resolutions of the polytomy do not converge on relatively equal probabilities - rather a given resolution can sometimes have a posterior probability close to one . in response kolaczkowski and thornton ( 2006 ) investigated this phenomena further , providing some interesting simulation results , and offering an argument that seems to suggest that for very long sequences the tendency to sometimes infer strongly supported resolutions suggested by the earlier papers would disappear with sufficiently long sequences . as part of their case the authors use the expected site frequency patterns to simulate the case of infinite length sequences , concluding that with infinite length data , posterior probabilities give equal support for all resolved trees , and the rate of false inferences falls to zero . " of course these findings concern sequences that are effectively infinite , and , as is well known in statistics , the limit of a function of random variables ( in this case site pattern frequencies for the first sites ) does not necessarily equate with the function of the limit of the random variables . accordingly kolaczkowski and thornton offer this appropriate cautionary qualification of their findings : analysis of ideal data setsdoes not indicate what will happen when very large data sets with some stochastic error are analyzed , but it does show that when infinite data are generated on a star tree , posterior probabilities are predictable , equally supporting each possible resolved tree . "yang and rannala ( 2005 ) had attempted to simulate the large sample posterior distribution , but ran into numerical problems and commented that it was `` unclear '' what the limiting distribution on posterior probabilities was as became large . in particular ,all of the aforementioned papers have left open an interesting statistical question , which this short note formally answers - namely , does the bayesian posterior probability of the three resolutions of a star tree on three taxa converge to 1/3 as the sequence length tends to infinity ?that is , does the distribution on posterior probabilities for ` very long sequences ' converge on the distribution for infinite length sequences ?we show that for most reasonable priors it does not .thus the ` star paradox ' does not disappear as the sequences get longer . 
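( a quick numerical illustration of this kind of behaviour is given by the simpler coin - tossing analogue mentioned in the next section : for a fair coin and a uniform prior on the bias , the posterior probability that the bias exceeds 1/2 keeps fluctuating as the number of tosses grows , so apparently well - resolved posteriors keep occurring with non - vanishing probability . the python sketch below , with arbitrary sample sizes and replicate counts , estimates how often that posterior exceeds 0.95 ; this is an analogue only and not the phylogenetic model analysed in this note . )

```python
import numpy as np
from scipy.stats import beta

# coin-tossing analogue (illustration only, not the phylogenetic model of this
# note): toss a fair coin n times and, under a uniform prior on the bias p,
# compute the posterior probability that p > 1/2.  the fraction of data sets
# for which this posterior exceeds 0.95 does not vanish as n grows.

def posterior_bias_above_half(heads, n):
    # posterior of p is beta(heads + 1, n - heads + 1); return p(p > 1/2 | data)
    return beta.sf(0.5, heads + 1, n - heads + 1)

def fraction_strongly_resolved(n, reps=20000, threshold=0.95, seed=0):
    rng = np.random.default_rng(seed)
    heads = rng.binomial(n, 0.5, size=reps)          # data generated by a fair coin
    return float(np.mean(posterior_bias_above_half(heads, n) > threshold))

if __name__ == "__main__":
    for n in [100, 10_000, 1_000_000]:               # arbitrary sample sizes
        print(n, fraction_strongly_resolved(n))      # stays near 0.05 for all n
```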
as noted by ( yang and rannala 2005; lewis , holder and holsinger 2005 ) one can demonstrate such phenomena more easily for related simpler processes such as coin tossing ( particularly if one imposes a particular prior ) .here we avoid this simplification to avoid the criticism that such results do not rigorously establish corresponding phenomena in the phylogenetic setting , which in contrast to coin tossing involves considering a parameter space of dimension greater than 1 .we also frame our main result so that it applies to a fairly general class of priors .note also that it is not the purpose of this short note to add to the on - going debate concerning the implications of this ` paradox ' for bayesian phylogenetic analysis , we merely demonstrate its existence .some further comments and earlier references on the phenomenon have been described in the recent review paper by alfaro and holder 2006 ( pp .35 - 36 ) .[ starfig ] [ overview ]on tree ( in fig .1 ) let , denote the probabilities of the four site patterns ( , respectively ) under the simple symmetric markov process ( the argument extends to more general models , but it suffices to demonstrate the phenomena for this simple model ) . from eqn . ( 2 ) of ( yang and rannala 2005 ) we have and it follows by elementary algebra that for , and thus with strict inequality unless ( or in the limit as tends to infinity ) . to allow maximal generality we make only minimal neutral assumptions about the prior distribution on trees and branch lengths .namely we assume that the three resolved trees on three leaves ( trees in fig .1 ) have equal prior probability and that the prior distribution on branch lengths is the same for each tree , and has a continuous joint probability density function that is everywhere non - zero .this condition applies for example to the exponential and gamma priors discussed by yang and rannala ( 2005 ) .any prior that satisfies these conditions we call _reasonable_. note that we do not require that and be independent .let be the counts of the different types of site patterns ( corresponding to the same patterns as for the s ) .thus is the total number of sites ( i.e. the length of the sequences ) . given a prior distribution on for the branch lengths of ( for ) let ] could be close to 1 or whether the chance of generating data with this property goes to zero as the sequence length gets very large .we show that in fact the latter possibility is ruled out by our main result , namely the following : [ main ] consider sequences of length generated by a star tree on three taxa with strictly positive edge length and let be the resulting data ( in terms of site pattern counts ) .consider any prior on the three resolved trees and their branch lengths that is reasonable ( as defined above ) .for any , and each resolved tree ( ) , the probability that has the property that does not converge to as tends to infinity ._ proof of theorem [ main ] _ consider the star tree with given branch lengths ( as in fig .1 ) . 
let denote the probability of the four types of site patterns generated by with these branch lengths .note that and so ) .suppose we generate sites on this tree , and let be the counts of the different types of site patterns ( corresponding to the s ) .let and for let for a constant , let denote the event : \mbox { and } \delta_0 \in [ -c , c].\ ] ] notice that implies ] by ( [ identities1 ] ) and ( [ identities2 ] ) we have ; \mbox { and } { \mathbb p}(t_3|{\bfn } ) = p({\bf n})^{-1 } \times { { \mathbb e}}_1[p_0^{n_0}p_1^{n_3}p_2^{n_1+n_2}]\ ] ] where again expectation is taken with respect to the prior for on .consequently , }{{{\mathbb e}}_1[y]},\ ] ] where as will be shown later , it suffices to demonstrate that the ratio in ( [ eqfrac ] ) can be large with nonvanishing probability in order to obtain the conclusion of the theorem .in order to do so we use the following lemma , whose proof is provided in the appendix .[ lem3 ] let be non - negative continuous random variables , dependent on a third random variable that takes values in an interval ] . to apply this lemma , select a value so that , and let ] and ._ claim : _ for sufficiently large , and conditional on the data satisfying : * \geq { { \mathbb e}}_1[y|p_0 \in i_1] ] .the proofs of these two claims is given in the appendix .applying lemma [ lem3 ] to the claims ( i ) and ( ii ) we deduce that conditional on satisfying and being sufficiently large , }{{{\mathbb e}}_1[y ] } \geq 6c^2 \cdot { \mathbb p}(p_0 \in i_0).\ ] ] select ( this is finite by the assumption that the prior on is everywhere non - zero ) . as stated before , the probability that satisfies is at least for sufficiently large .then , and so by ( [ cbound ] ) , }{{{\mathbb e}}_1[y ] } > \frac{2}{\epsilon}. ] .note that ( [ eqin2 ] ) implies that \geq b \cdot { { \mathbb e}}[y|\lambda \in i_0] ] above .let .now , conditional on satisfying we have where and , and denotes kullback - leibler distance .now , ( the first inequality is a standard one in probability theory ) .in particular , if , then .moreover , \leq \max\{y(t_0 , t_1 ) : p_0(t_0 , t_1 ) \in i_1\}.\ ] ] summarizing , \leq \max\{y(t_0 , t_1 ) : p_0(t_0 , t_1 ) \in i_1\ } < \mu(n)e^{-\frac{1}{2}s^2n + o(n)}.\ ] ] in the reverse direction , we have : \geq a(n)b(n)\ ] ] where \times [ t_1 ^ 0 , t_1 ^ 0+n^{-1 } ] \right\}\ ] ] and \times [ t_1 ^ 0 , t_1 ^ 0 + n^{-1}]).\ ] ] now , now , the first term of this product converges to a constant as grows ( because and are each of order ) while the condition ensures that the second term decays no faster than for a constant .thus , for a positive constant .the term is asymptotically proportional to .summarizing , for a constant ( dependent just on ) \geq c_3 \mu(n)n^{-2}e^{-c_1\sqrt{n}},\ ] ] which combined with ( [ eqo1 ] ) establishes claim ( i ) for sufficiently large . let .then for each there exists a value that depends continuously on so that the following holds . for any continuous random variable on ] , we have - { { \mathbb e}}[z^{k+1}])}{{{\mathbb e}}[z^k ] } \geq \frac{1}{2}\ ] ] for all . [ lem1 ] _ proof ._ let .then = \int_{0}^{t_k } t^k f(t)dt + \int_{t_k}^1 t^k f(t)dt.\ ] ] now , where denotes asymptotic equivalence ( i.e. iff ) . using integration by parts , now , provided we have and so the absolute value of the second term on the right is at most consequently , - \frac{f(1)}{k+1}\right| ] is bounded above by times a term of order , and the lemma now follows by some routine analysis . 
write ] .note that , for any , = p_0^{n_0}{{\mathbb e}}_1[p_1^{r}p_2^{s}|p_0] ] as claimed .this completes the proof of claim ( ii ) .
|
the ` star paradox ' in phylogenetics is the tendency for a particular resolved tree to be sometimes strongly supported even when the data is generated by an unresolved ( ` star ' ) tree . there have been contrary claims as to whether this phenomenon persists when very long sequences are considered . this note settles one aspect of this debate by proving mathematically that there is always a chance that a resolved tree could be strongly supported , even as the length of the sequences becomes very large . keywords : phylogenetic trees , bayesian statistics , star trees
|
the problem of designing efficient coding schemes for the additive white gaussian ( awgn ) channel has been studied for a very long time .shannon showed that random codes can achieve the capacity of the awgn channel .showing that structured codes can achieve capacity remained open till it was shown in , and later in , that lattice codes can achieve capacity with maximum likelihood ( ml ) decoding .erez and zamir then showed that nested lattice codes can achieve capacity with closest lattice point decoding .lattice codes have been shown to be optimal for several other problems such as dirty paper coding , gaussian multiple access channels , quantization , and so on .they have also been used in the context of physical layer network coding and physical layer security .we refer the reader to the book by zamir for an overview of the applications of lattices for channel coding and quantization .a drawback with the proposed nested lattice schemes is that there are no known polynomial - time algorithms for encoding and decoding .a notable exception is the polar lattice scheme which can achieve the capacity of the awgn channel with an encoding / decoding complexity of . ), the encoding / decoding complexity of polar lattices is .] however , the probability of error goes to zero as for any .there are also lattice constructions with low decoding complexity that have been empirically shown to achieve rates close to capacity .however , it is still an open problem to theoretically show that these codes achieve the capacity of the awgn channel .low density construction - a ( lda ) lattices are a class of lattices obtained from low - density parity - check codes and have been shown to achieve capacity with lattice decoding .simulation results suggest that they can approach capacity with low - complexity belief propagation decoding , but we still do not have a theoretical proof of the same .concatenated codes were introduced by forney as a technique for obtaining low - complexity codes that can achieve the capacity of discrete memoryless channels .concatenating an inner random linear code with an outer reed - solomon code is a simple way of designing good codes .using this idea , joseph and barron proposed the capacity - achieving sparse regression codes for the awgn channel , having quadratic ( in the blocklength ) encoding / decoding complexity .they used a concatenated coding scheme with an inner sparse superposition code and an outer reed - solomon code .the probability of decoding error goes to zero exponentially in .recently , proposed an approximate message passing scheme having complexity that grows roughly as for decoding sparse regression codes , and showed that the new decoder guarantees a vanishingly small error probability as for all rates less than the capacity .however , does not provide any guarantees for the rate of decay of the probability of error .the objective of this article is to show that using the technique of concatenation , we can reduce the asymptotic decoding complexity of nested lattice codes that operate at rates close to capacity .we start with a sequence of nested lattice codes having rate , where denotes the capacity of the awgn channel , and is a small positive constant that denotes the gap to capacity .these codes typically have exponential encoding / decoding complexity . 
by concatenating these with suitable linear codes, we obtain a sequence of concatenated codes that have transmission rate at least , but whose encoding / decoding complexity scales polynomially in the blocklength . we show that concatenating an inner nested lattice code with an outer reed - solomon code yields a capacity - achieving coding scheme whose encoding / decoding complexity is quadratic in the blocklength .furthermore , the probability of error decays exponentially in . replacing the reed - solomon code with an expander code yields a capacity - achieving coding scheme with decoding complexity and encoding complexity .to the best of our knowledge , this is the first capacity - achieving coding scheme for the awgn channel whose encoding and decoding complexities are polynomial , and the probability of error decays exponentially in the blocklength .the techniques that we use are not new , and we use results of and to prove our results .an attractive feature of this technique is that it can also be used to reduce the complexity of nested lattice codes for several other gaussian networks .it can be used as a tool to convert any nested lattice code having exponential decoding complexity to a code having polynomial decoding complexity .this comes at the expense of a minor reduction in performance ( in terms of error probability ) of the resulting code .furthermore , we are able to give guarantees only for large blocklengths . as applications ,we show how these ideas can be used to obtain a capacity - achieving scheme for the gaussian wiretap channel and to reduce the decoding complexity of the compute - and - forward protocol for gaussian networks .more recently , these techniques have also been used to obtain polynomial - time lattice coding schemes for secret key generation from correlated sources . throughout this article , we measure complexity in terms of the number of binary operations required for decoding / encoding , and we are interested in how this complexity scales with the blocklength .we assume that arithmetic operations on real numbers are performed using floating - point arithmetic , and that each real number has a -bit binary representation , with being independent of the blocklength .the value of would depend on the computer architecture used for computations ( typically 32 or 64 bits ) .in essence , we assume that each floating - point operation has complexity . the rest of the paper is organized as follows .we describe the notation used in the paper and recall some concepts related to lattices in section [ sec : notation ] .we then describe the concatenated coding scheme for the awgn channel in section [ sec : codingscheme ] , with theorem [ thm : main_rs ] summarizing the main result . 
in section [ sec :parallelconcatenation ] , we use an outer expander code to reduce the decoding complexity to .this is summarized by theorem [ thm : main_expander ] .the performance of the two concatenated coding schemes are compared with polar lattices and sparse superposition codes in table [ table : comparison ] .we make some remarks on extending these ideas to the gaussian wiretap channel and the compute - and - forward protocol in section [ sec : discussion ] .we also indicate how the same technique can be used to reduce decoding complexity and improve the probability of error of lda lattices and polar lattices .we conclude the paper with some final remarks in section [ sec : concluding ] .the proof of lemma [ lemma : cf_lowcomplexity ] is provided in appendix a.for a detailed exposition on lattices and their applications in several communication - theoretic problems , see .we denote the set of integers by and real numbers by .the set of nonnegative real numbers is denoted by . for a prime number and positive integer ,we let denote the field of characteristic containing elements . for and , we define to be the set . given and , we say that if .we use the standard big - o and little - o notation to express the asymptotic relationships between various quantities .if is an full - rank matrix with real entries , then the set is called a _ lattice _ in .we say that is a _ generator matrix _ for .let denote the lattice _quantizer _ that maps a point in to the point in closest to it . for , we define \bmod \l ] , which is transmitted across the channel .this process of translating the message by modulo prior to transmission is called _dithering_. the encoder satisfies a maximum transmit power constraint given by . upon receiving ,the receiver uses a decoder to estimate , which does the following .it computes \bmod\lcn ] .let .erez and zamir showed that there exist nested lattices with which we can approach the capacity of the awgn channel .specifically , for every , there exists a sequence of nested construction - a lattice pairs such that for all sufficiently large , the maximum transmit power is the transmission rate is and the probability of error decays exponentially in for all , i.e. , there exists a function so that for every and all sufficiently large , we have \leq e^{-n e(r^{(n)})}.\ ] ] furthermore , the quantity is positive for all .[ lemma : erezresult ] the decoding involves solving two closest lattice point problems , which are the and operations .therefore , the decoding complexity is .if the encoder uses a look - up table to map messages to codewords , the complexity would also be .let us now give a brief description of the concatenated coding scheme .see for a more detailed exposition and application to the discrete memoryless channel .the code has two components : * inner code : a nested construction - a lattice code with the fine lattice obtained from a linear code over .* outer code : an linear block code ( where is the minimum distance of the code ) over .the message set has size , and each message can be represented by a vector in .the outer code maps this vector to a codeword in in a bijective manner .let us call this ^t ] , is at most for all sufficiently large .furthermore , these lattices are obtained from linear codes over for prime ( which is a function of ) .let us fix an large enough so that where denotes the binary entropy function . 
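Before turning to the expander construction, the following toy sketch illustrates the two lattice primitives used repeatedly in this section: the quantizer Q_Lambda and the reduction [x] mod Lambda = x - Q_Lambda(x). The generator matrix and the numbers are made up, and Babai rounding is used in place of true closest-point search (it coincides with exact nearest-point quantization only for special generator matrices such as the rectangular one chosen here); none of this reproduces the construction-A lattices of the coding theorems.

```python
import numpy as np

def babai_round(B, x):
    # Approximate nearest-lattice-point quantizer Q_Lambda(x) for the lattice
    # {B z : z integer}, via Babai rounding: exact for rectangular generator
    # matrices, only an approximation in general (true closest-point search is hard).
    z = np.rint(np.linalg.solve(B, x))
    return B @ z

def mod_lattice(B, x):
    # [x] mod Lambda = x - Q_Lambda(x): representative of x in the fundamental region.
    return x - babai_round(B, x)

B_coarse = 4.0 * np.eye(2)          # toy 2-D coarse lattice (4Z)^2
x = np.array([5.3, -2.2])
print(babai_round(B_coarse, x))     # -> [ 4. -4.]
print(mod_lattice(B_coarse, x))     # -> [ 1.3  1.8]
```

For this rectangular coarse lattice the mod-Lambda output always falls inside a box of side 4 centred at the origin, which plays the role of the fundamental region used for shaping.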
for a fixed , the parameters and will remain constant , and we will let only grow to infinity .we use an outer expander code whose construction is similar to the one in .this has two components : * a sequence of -regular bipartite expander graphs with vertex set and edge set , with . here, denotes the set of left vertices and denotes the set of right vertices , with , where is a large constant independent of . the graph is chosen so that the second - largest eigenvalue of its adjacency matrix , denoted , is at most . explicit constructions of such graphs can be found in the literature ( see for a stronger result ) . this graph is a normal factor graph for the outer expander code .we choose a sufficiently large so that the inequality holds . *a linear code over having blocklength and dimension . for convenience ,let us choose to be a reed solomon code over ( assuming that ) with .the minimum hamming distance of is .let us define let us order the edges of in any arbitrary fashion , and for any , let denote the set of edges incident on , where according to the order we have fixed .we define the expander code as follows : the codeword entries are indexed by the edges of .a vector is a codeword of the expander code iff for every , we have that is a codeword in . following , we will call this the code .the code has blocklength and dimension at least .zmor proposed an iterative algorithm for decoding expander codes .suppose that the received ( possibly erroneous ) vector is .the vector is initialized with for all , and iteratively updated to obtain an estimate of . in every odd - numbered iteration ,the algorithm replaces ( for all ) by a nearest - neighbour codeword ( to ) in . in every even - numbered iteration , every for replaced by a nearest - neighbour codeword .this is repeated till converges to a codeword in the expander code , or a suitably defined stopping point is reached .since forms a disjoint partition of the edge set , the nearest - neighbour decoding can be done in parallel for all the s in .the same holds for the vertices in .we direct the interested reader to for more details about the code and the iterative decoding algorithm .let be fixed .the iterative decoding algorithm can be implemented in a circuit of size and depth that always returns the correct codeword as long as the number of errors is less than .[ lemma : zemorconcatenated ] since and , we see from lemma [ lemma : zemorconcatenated ] that the decoder can recover the transmitted outer codeword as long as the fraction of errors is less than . although lemma [ lemma : zemorconcatenated ] was proved in for binary expander codes , it can be verified that the result continues to hold in the case where the expander code is defined over , provided that is a constant independent of . for every , there exists a sequence of concatenated codes with inner nested lattice codes and outer expander codes that satisfies the following for all sufficiently large : * rate , * maximum transmit power * the probability of error is at most , * the encoding complexity grows as , and * the decoding complexity grows as .[ thm : main_expander ] recall that the overall blocklength , where is a sufficiently large constant .the probability that an inner ( lattice ) codeword is recovered incorrectly is at most .let us fix and define , the fraction of errors that the outer expander code is guaranteed to correct according to lemma [ lemma : zemorconcatenated ] . 
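The alternating local decoding just described can be sketched in a few lines. The helper `nearest_local_cw` (a brute-force nearest-codeword search in the small local code C_0) and the fixed-point stopping rule are assumptions of this sketch; the actual algorithm runs for a prescribed number of rounds and is realised as a circuit with the size and depth stated in the lemma above.

```python
import numpy as np

def iterative_expander_decode(y, left_edges, right_edges, nearest_local_cw, max_iters=20):
    # y: received word indexed by the edges of the d-regular bipartite graph.
    # left_edges[v] / right_edges[v]: indices of the d edges incident on vertex v.
    # nearest_local_cw: maps a length-d word to the nearest codeword of the small
    # local code C0 (brute force is fine, since d is a constant independent of n).
    x = np.array(y, copy=True)
    for it in range(max_iters):
        vertex_side = left_edges if it % 2 == 0 else right_edges
        changed = False
        for edges in vertex_side:            # these local decodings are independent
            local = x[edges]                 # of one another, so they can run in parallel
            decoded = nearest_local_cw(local)
            if not np.array_equal(decoded, local):
                x[edges] = decoded
                changed = True
        if not changed:                      # simple fixed-point stopping rule,
            break                            # a simplification of the actual algorithm
    return x
```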
from our choice of parameters ,this quantity is at least .the probability of error of the concatenated code can be upper bounded as follows : for all sufficiently large , we can say that where the error exponent , since , we have by our choice of in ( [ eq : peinner_condition ] ) .let us now inspect the encoding and decoding complexity . recall that each floating - point operation has a complexity of .since is a constant , encoding / decoding each inner ( nested lattice ) codeword requires floating - point operations , and there are many codewords , leading to a total complexity of .since the outer code is linear , encoding requires operations in .since is a constant , the outer code has an encoding complexity of .decoding the outer code requires operations in .we can therefore conclude that the decoding the concatenated code requires a complexity of , and encoding requires a complexity of .this completes the proof of theorem [ thm : main_expander ] .( ) * & * encoding complexity * & * error probability * & * error probability as a function of * + polar lattice & & & & + & & & for any & for any + sparse regression codes & & & & + & & & & + & & & & +the approach used in the previous sections can be used as a recipe for reducing the decoding complexity of optimal coding schemes for gaussian channels . a nested lattice scheme that achieves a rate over a gaussian channelcan be concatenated with a high - rate outer reed - solomon code or expander code to achieve any rate arbitrarily close to .the only requirement is that the nested lattice code has a probability of error which decays exponentially in its blocklength .this procedure helps us bring down the decoding complexity to a polynomial function of the blocklength while ensuring that the probability of error continues to be an exponential function of the blocklength . as an application ,consider the gaussian wiretap channel .tyagi and vardy gave an explicit scheme using 2-universal hash functions that converts any coding scheme of rate over the point - to - point awgn ( main ) channel to a coding scheme that achieves a rate over the wiretap channel while satisfying the strong secrecy constraint .this `` conversion '' adds an additional decoding complexity which is polynomial in the blocklength .we can therefore use this result with theorem [ thm : main_rs ] or theorem [ thm : main_expander ] to conclude that we can achieve the secrecy capacity of the gaussian wiretap channel with polynomial time decoding / encoding .the compute - and - forward protocol was proposed by nazer and gastpar for communication over gaussian networks .let us begin by describing the setup .we have source nodes , having independent messages respectively .the messages are chosen from for some prime number and positive integers .let denote the addition operator in .these messages are mapped to -dimensional real vectors respectively and transmitted across a gaussian channel to a destination which observes where are real - valued channel coefficients and is awgn with mean zero and variance .the destination must compute , where are integers .we assume that each source node must satisfy a maximum power constraint of .we only consider symmetric rates here , i.e. 
, all sources have identical message sets .the rate of the code is .this problem is relevant in many applications such as exchange of messages in bidirectional relay networks , decoding messages over the gaussian multiple access channel , and designing good receivers for mimo channels to name a few .the basic idea is that instead of decoding the messages one at a time and using successive cancellation , it may be more efficient to decode multiple linear combinations of the messages . if we have linearly independent such combinations , then we can recover all the individual messages .we can extend the scheme of to a concatenated coding scheme that achieves the rates guaranteed by , but now with encoders and decoders that operate in polynomial time . recall that the messages are chosen from .we say that a rate is achievable if for every , there exists a sequence of encoders and decoders so that for all sufficiently large blocklengths , we have the transmission rate , and the probability of error is less than .we can show the following : consider the problem of computing from ( [ eq : cforward_rec ] ) .any rate where is achievable with encoders and decoders whose complexities grow as using an outer reed - solomon code , and a decoder whose complexity grows as with an outer expander code . for transmission rates less than , the probability that the decoder makes an error goes to zero exponentially in .[ lemma : cf_lowcomplexity ] see appendix a. the technique of concatenation can also be used to improve the error performance of other lattice codes that achieve the capacity of the awgn channel .for example , polar lattices have an error probability that decays as for any , and lda lattices have an error probability that behaves as .it is easy to show that if denotes the probability of error of the inner nested lattice code , then the probability of error of the ( both rs and expander ) concatenated code is for some . in any case, the probability of error goes to zero as irrespective of whether we use polar or lda lattices .we can therefore conclude that the probability of error decays as for the corresponding reed - solomon concatenated ( polar / lda ) lattice code and for the corresponding expander concatenated ( polar / lda ) lattice codes . the decoding complexities would grow as ( for rs concatenated codes ) and ( for expander concatenated codes ) respectively .we have seen that concatenation can be a very powerful tool in reducing the asymptotic decoding complexity of nested lattice codes . however , it must be noted that achieving good performance using this scheme would require very large blocklengths .although the probability of error decays exponentially in , and the decoding / encoding complexities are polynomial in , this is true only for very large values of .the fact that is at least exponential in the blocklength of the inner code is a major reason for this .nevertheless , the concatenated coding approach shows that it is possible to obtain polynomial - time encoders and decoders for which the probability of error decays exponentially in the blocklength .the exponential decay is under the assumption that the gap between the transmission rate and capacity , , is kept fixed . for a fixed error probability ,the blocklength required by the concatenated coding scheme to achieve rate and error probability does not scale polynomially with . 
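As a rough numerical illustration of why the guarantees require very large blocklengths, the sketch below bounds the probability that more than a correctable fraction of inner blocks fail, using the generic Chernoff (KL-divergence) bound for independent inner-block errors. The inner blocklength, error exponent and correctable fraction are invented values, and the bound is not the paper's exact exponent.

```python
import math

def concatenated_error_bound(p_in, n_out, frac_correctable):
    # Upper bound on the probability that more than frac_correctable * n_out
    # inner blocks are decoded incorrectly, when each inner block fails
    # independently with probability p_in (standard Chernoff / KL bound).
    t = frac_correctable
    if p_in >= t:
        return 1.0
    kl = t * math.log(t / p_in) + (1 - t) * math.log((1 - t) / (1 - p_in))
    return math.exp(-n_out * kl)

# toy numbers: inner lattice code of blocklength n with error probability exp(-n*E)
n, E = 200, 0.05                 # assumed inner blocklength and error exponent
p_in = math.exp(-n * E)          # roughly 4.5e-5
for n_out in (10**3, 10**4, 10**5):
    print(n_out, concatenated_error_bound(p_in, n_out, frac_correctable=0.01))
```

Even with these optimistic toy numbers, a meaningful guarantee only emerges once n_out, and hence the overall blocklength N = n * n_out, is large, which is the practical caveat raised above.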
for a fixed error probability , we would like the complexity to not grow too fast as the rate approaches .it has been recently shown that polar codes have this property for binary memoryless symmetric channels . designing codes for the gaussian channel whose decoding / encoding complexitiesare also polynomial in for a fixed probability of error still remains an open problem .the authors would like to thank prof .sidharth jaggi for a discussion that led to this work .the work of the first author was supported by the tata consultancy services research scholarship program , and that of the second author by a swarnajayanti fellowship awarded by the dept . of science and technology ( dst ) , govt . of india .the technique used to prove lemma [ lemma : cf_lowcomplexity ] is a simple extension of the coding scheme of using the methods described in section [ sec : codingscheme ] . for completeness, we will briefly describe the scheme . for more details regarding the compute - and - forward protocol ,see .we use the concatenated coding scheme of section [ sec : concat_scheme_awgn ] .the inner code is obtained from nested construction - a lattices .suppose that is constructed using a linear code over .the outer code is an reed - solomon code , with and to be specified later .the transmission rate is .the messages are chosen from .let the message at the user be ^{t} ] .each is then encoded to using the inner code and then transmitted .recall that there exists a group isomorphism from to . for and , let be the representative of in .independent dither vectors are generated at the sources .transmitter successively sends \bmod\lcn ] .the estimate of \bmod\lcn ] . recall the definition of in ( [ eq : cforward_rate ] ) .nazer and gastpar showed in that there exists a sequence of nested construction - a lattices with for which the probability that the decoder makes an error in estimating the desired linear combination decays as , where is some function which is positive for all .as we did before for the awgn channel , we choose . assuming that fewer than inner codewords are in error, the decoder can recover \bmod\lcn,\ldots,\big[\sum_{l}a_l\x_{\nout}^{(l)}\big]\bmod\lcn\big]^t$ ] without error . due to the existence of a group isomorphism between and ,this implies that the decoder can recover , and hence , . arguing as in section [ sec : codingscheme ] , the probability that the decoder makes an error goes to zero exponentially in , and the decoding / encoding complexities grow as .the same arguments can be used to show that using an outer expander code , we can have the encoding complexity to be and decoding complexity to be .m. mondelli , s.h .hassani , and r. urbanke , `` unified scaling of polar codes : error exponent , scaling exponent , moderate deviations , and error floors , '' 2015 .[ online ] .available : http://arxiv.org/abs/1501.02444 . c. rush ,a. greig , and r. venkataramanan , `` capacity - achieving sparse regression codes via approximate message passing decoding , '' in _ proc. 2015 ieee int .information theory ( isit ) , _ hong kong , 2015 , pp .20162020 .s. vatedka and n. kashyap , `` a lattice coding scheme for secret key generation from gaussian markov tree sources , '' _ 2016 ieee int .information theory ( isit ) , _ barcelona , spain , 2016 , pp .y. yan , l. liu , c. ling , and x. wu , `` construction of capacity - achieving lattice codes : polar lattices , '' 2014 .[ online ] .available : http://arxiv.org/abs/1411.0187 .
|
a fundamental problem in coding theory is the design of an efficient coding scheme that achieves the capacity of the additive white gaussian ( awgn ) channel . the main objective of this short note is to point out that by concatenating a capacity - achieving nested lattice code with a suitable high - rate linear code over an appropriate finite field , we can achieve the capacity of the awgn channel with polynomial encoding and decoding complexity . specifically , we show that using inner construction - a lattice codes and outer reed - solomon codes , we can obtain capacity - achieving codes whose encoding and decoding complexities grow as , while the probability of error decays exponentially in , where denotes the blocklength . replacing the outer reed - solomon code by an expander code helps us further reduce the decoding complexity to . this also gives us a recipe for converting a high - complexity nested lattice code for a gaussian channel to a low - complexity concatenated code without any loss in the asymptotic rate . as examples , we describe polynomial - time coding schemes for the wiretap channel , and the compute - and - forward scheme for computing integer linear combinations of messages .
|
recent years have seen the proliferation of graphics processing units ( gpus ) as application accelerators in high performance computing ( hpc ) systems , due to the rapid advancements in graphic processing technology over the past few years and the introduction of programmable processors in graphics processing units ( gpus ) , which is also known as gpgpu , or general - purpose computation on graphic processing units . as a result ,a wide range of hpc systems have incorporated gpus to accelerate applications by utilizing the unprecedented floating point performance and massively parallel processor architectures of modern gpus , which can achieve unparalleled floating point performance in terms of flops ( floating - point operations per second ) up to the teraflop barrier .such systems range from clusters of compute nodes to parallel supercomputers . while examples of gpu - based computer clusters can be found in academia for research purpose , such as and .the contemporary offerings from supercomputer vendors have begun to incorporate professional gpu computing cards into the compute blades of their parallel computer products ; example include the latest cray xk7 and sgi altix uv supercomputers .yet another notable example is the titan supercomputer currently ranking the 2^nd^ in the top 500 supercomputer list .titan is equipped with 18,688 nvidia tesla gpus and is thereby able to achieve a sustained 17.59 pflops linpack performance ..gpu - based supercomputers in the top 30 list [ cols="<,<,<,<",options="header " , ] our previous execution models depict inter - process parallelism and overlapping under the sharing scheme achieved through gpu virtualization . as we use two kernel cases to analyze the overlapping behaviors in the execution model , herewe first utilize two extreme benchmark cases ( highly compute - intensive and highly i / o - intensive ) as experimental evaluation of potential overlapping behavior .the purpose is to demonstrate the different overlapping behavior for c - i and io - i kernel cases and compare the actual performance gain with non - virtualization solution .the i / o - intensive application we use is a very large vector addition benchmark while the compute - intensive benchmark is the gpu version of ep ( problem size : m=30 ) from nas parallel benchmarks ( npb ) .the ep kernel grid size is designed small merely to show the effectiveness of concurrency under virtualization , while the actual grid size decides the overlapping and concurrency extent in real applications .we list all benchmark kernel profiles used in this section in table [ tb : apps ] . our experiment with ep(m30 ) andvecadd primarily focuses on evaluating process turnaround time by emulating a process - level parallel spmd program for both benchmarks , while launching multiple processes with the same benchmark kernel simultaneously . 
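A minimal harness for this kind of emulation is sketched below; `run_kernel` is a hypothetical stand-in for one benchmark process (the real benchmarks launch CUDA kernels on the test node), and the point is only how the simultaneous process-level launch and the turnaround-time measurement are organised. The same harness can then be pointed at the native CUDA runtime or at the virtualization layer and the two turnaround times compared.

```python
import time
from multiprocessing import Process

def run_kernel():
    # Hypothetical placeholder for one SPMD benchmark process
    # (e.g. the CUDA EP or vector-addition kernel), not the actual benchmark code.
    time.sleep(0.1)

def turnaround_time(n_processes):
    # Launch n identical benchmark processes simultaneously and measure the
    # wall-clock time until all of them have finished (the turnaround time).
    procs = [Process(target=run_kernel) for _ in range(n_processes)]
    start = time.perf_counter()
    for p in procs:
        p.start()
    for p in procs:
        p.join()
    return time.perf_counter() - start

if __name__ == "__main__":
    for n in range(1, 9):        # 1 process up to the 8 cores of the node
        print(n, round(turnaround_time(n), 3))
```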
as an spmd program generally requires __ n~process~__~processor~ _ and our computing node consists of 8 microprocessor cores , we varied the number of spmd parallel processes from 1 to the maximum of 8 .figure [ fig : vecadd ] and [ fig : ep ] demonstrate the effectiveness of gpu virtualization in reducing the process turnaround time with the increasing number of processes for both c - i and io - i cases .for the i / o - intensive benchmark in figure [ fig : vecadd ] , when the number of parallel processes increases , without virtualization , the turnaround time increase sharply due to the zero overlapping and context - switch overheads .with virtualization , the turnaround time still increases but much slowly comparatively .this is because i / o - intensive application can not achieve complete overlapping as explained in the model earlier , but can still partially overlap i / o as well as eliminate context - switch and initialization overheads . for the compute - intensive benchmark in figure [ fig : ep ] , with virtualization ,the turnaround time increases very little with the increasing number of processes , which clearly shows that our gpu virtualization implementation can achieve the expected execution concurrencies for smaller kernels only using a portion of the gpu resource in the case of compute - intensive kernel .cuda s current concurrent kernel execution support heavily depends on kernel profiles .in other words , blocks from multiple kernels are concurrently executed on separated sms inside gpu to achieve the concurrency when cuda streams are used .thus small kernels ( small number of blocks ) can achieve better kernel execution concurrency compared with large kernels . in the previous modeling analysis ,we assume kernel execution overlapping is complete since we are focused on studying overlapping behaviors . thus in order to verify our previous modeling analysis, we utilize ep(m24 ) and vecmult shown in table [ tb : apps ] as the benchmark kernels to verify c - i and io - i models , respectively . for both kernels ,we carry out initial profiling analysis to empirically derive _t~data_in~ _ , _ t~comp~ _ and _ t~data_out~_. as both execution models are to estimate the total execution time of _n~process~ _ kernels sharing the gpu under virtualization .the theoretical time can be derived using equation and respectively with the profiling results .experimentally , we here launch the emulated spmd kernel programs while varying _n~process~ _ from 1 to 8 .instead of measuring process turnaround time , we here only measure the time all kernels spend on sharing the gpu inside the gvm of our virtualization infrastructure .thus , we are able to avoid bringing unnecessary virtualization overheads into the model validation . comparisons of experimental results and modeling results are shown in figure [ fig : ep_model ] for c - i model and in figure [ fig : vecm_model ] for io - i model , both of which demonstrate accurate modeling results .we also note that for c - i model validation , utilizing ep(m24 ) with kernel size of one is merely to guarantee that all kernels are executed on separated sms ( up to 8 kernels in our case ) . 
in other words , complete overlapping of actual kernel computationcan be achieved with kernels executed on separated sms .the comparisons in both figures validate our previous execution model results with an average model deviation of 0.42% for ep(m24 ) and 4.76% for vecmult .considering the virtualization infrastructure is an add - on layer , possible overhead can be added on top of the theoretical modeling results .as our implementation mainly uses posix shared memory and message queue , the vast majority overhead comes from data movement and message synchronization between the api layer and base layer .we here conduct another micro benchmark using the i / o - intensive vecadd benchmark with multiple data sizes .we measure the overhead by launching a single process and compare the time purely spent on the gpu in the base layer with the process turnaround time .as shown in figure [ fig : overhead ] , the overhead , which is the differences between the turnaround time and pure gpu time , increases with the size of data being transfered as expected . even in the case when the data size is very large ( 400 mb in our case ) , the virtualization overhead is measured around 20% , which demonstrates that our virtualization implementation incurs comparatively low overhead , especially considering that an add - on virtualization layer generally brings much more overhead . as a further step, we conduct several additional benchmarks to demonstrate the efficiency of the proposed virtualization approach in addressing real - life applications with different profiles .as table [ tb : apps ] shows , mm refers to the 2048x2048 single precision floating - point matrix multiplication .mg and cg refer to gpu versions of npb kernel mg and cg , respectively , with the problem size of class s. black scholes is a european option pricing benchmark used in financial area , adapted from nvidia s cuda sdk .we set option prices over 512 iterations as default .electrostatics refers to fast molecular electrostatics algorithm as a part of the molecular visualization program vmd and we set the problem size to be 100k atoms with 25 iterations . by evaluating i / o and computing time ratio , we further profile the class of each benchmark as shown in table [ tb : apps ] .experimentally , we emulate process - level spmd execution of each benchmark kernel with multiple processes and compare the process turnaround time between virtualization and non - virtualization scenario .figure [ fig : mm ] , [ fig : mg ] , [ fig : bs ] , [ fig : cg ] and [ fig : es ] respectively compare the turnaround time with and without gpu virtualization .it is worth mentioning that the performance improvement using one process is due to the elimination of initialization overheads by the virtualization implementation , even with the add - on virtualization overhead . since mm is profiled as intermediate and the grid size is large enough to occupy the whole gpu , it partially benefits from both i / o and kernel computing overlapping with virtualization .both mg and cg are compute - intensive benchmarks and class s problem sizes ( small kernel sizes ) only make mg and cg utilize partial gpu resource .thus mg and cg can achieve more overlapping by concurrent kernel execution under virtualization . with the default problem size and a grid size of 480 ,a single black scholes benchmark can utilize full gpu resource and can hardly be concurrently executed under virtualization . 
since it is also i / o - intensive application , it is only able to achieve limited overlapping between the i / o and kernel - computing as described earlier . as for electrostatic benchmark ,since it is compute - intensive while the grid size of 288 making it occupy the whole gpu , the overlapping potential is small using virtualization .however , it still benefits from zero context - switch and initialization overhead due to virtualization .therefore , within the five application benchmarks , as each achieves certain amount of performance gain through virtualization due to overlapping and elimination of overheads , mg and cg achieve better performance gains .figure [ fig : sp ] summarizes an example speedup comparison scenario utilizing all available system processors ( 8 processes ) . including ep(m30 ) andvecadd as two extreme cases along with the five real - life benchmarks , all seven benchmarks we conduct achieve speedups from 1.4 to 7.4 with 8 process - level parallelism under gpu virtualization . therefore , while all benchmarks can achieve certain amount of performance gains , the efficiency of the virtualization approach also depends on the profiles of the applications , including the i / o and computing time ratio as well the gpu resource usage . to summarize from figure [ fig : sp ] ,small compute - intensive kernels can achieve the best performance improvement as ep(m30 ) , mg and cg show .intermediate kernels can achieve reasonable speedups with partial i / o and compute overlapping as shown from mm .i / o - intensive kernels ( bs and vecadd ) can only achieve i / o overlapping , while large compute - intensive kernels ( es ) can overlap i / o ( small portion ) and very limited kernel execution .thus they achieve relatively less performance gain .however , the elimination of context - switch and initialization overhead plus well - increased gpu utilization from gpu virtualization allow considerable speedup for application in general .in fact , our virtualization experimental results show a good agreement with the proposed analytical model , and demonstrate that our gpu virtualization implementation is an effective approach allowing multi - processes to share the gpu resource efficiently under spmd model , while incurring comparatively low overhead .in this paper , we proposed a gpu virtualization approach which enables efficient sharing of gpu resources among microprocessors in heterogeneous hpc systems under spmd execution model . in achieving the desired objective of makingeach microprocessors effectively utilize shared resources , we investigated the concurrency and overlapping potentials that can be exploited on the gpu device - level .we also analyzed the performance and overheads of direct gpu access and sharing from multiple microprocessors as a comparison baseline .we further provided an analytical execution model as a theoretical performance estimate of our proposed virtualization approach .the analytical model also provided us with better understanding of the methodologies in implementing our virtualization concept .based on these concepts and analyses , we implemented our virtualization infrastructure as a run - time layer running in the user space of the os .the virtualization layer manages requests from all microprocessors and provides necessary gpu resources to the microprocessors .it also exposes a vgpu view to all the microprocessors as if each microprocessor has its own gpu resource . 
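As a compact recap of where these gains come from, the sketch below contrasts back-to-back exclusive execution with a simple pipelined sharing model in which host-device transfers are serialized while small kernels overlap on separate SMs. The functional form and all timings are assumptions chosen for illustration; they are not the paper's model equations or measured profiles.

```python
def serial_time(n, t_in, t_comp, t_out, t_overhead=0.0):
    # No sharing: n processes take turns on the GPU, each paying its own
    # context-switch / initialization overhead.
    return n * (t_in + t_comp + t_out + t_overhead)

def virtualized_time(n, t_in, t_comp, t_out):
    # Assumed pipelined sharing model: transfers of the n processes are
    # serialized on the copy engine, while small kernels run concurrently on
    # separate SMs, so only one compute term remains.
    return n * (t_in + t_out) + t_comp

# toy profiles (seconds): compute-intensive vs i/o-intensive kernels
profiles = {"compute-intensive": (0.01, 1.0, 0.01),
            "i/o-intensive":     (0.4, 0.05, 0.4)}
for name, (t_in, t_comp, t_out) in profiles.items():
    for n in (1, 4, 8):
        s = serial_time(n, t_in, t_comp, t_out, t_overhead=0.05)
        v = virtualized_time(n, t_in, t_comp, t_out)
        print(f"{name:18s} n={n}: speedup ~ {s / v:.1f}")
```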
inside the virtualization layer, we managed to eliminate unnecessary overheads and achieve overlapping and concurrency of executions. in the experiments, we used our nvidia fermi gpu computing node as the test bed, with initial i/o-intensive and compute-intensive benchmarks as well as application benchmarks analysed from several angles. the experimental results showed that we were able to achieve considerable performance gains, in terms of speedup, with our virtualization infrastructure at low overhead, and they agree with our theoretical analysis. proposed as a solution to microprocessor resource underutilization by providing a virtual spmd execution scenario, our approach proves to be effective and efficient and can be deployed on any heterogeneous gpu cluster with imbalanced cpu/gpu resources.
|
the high performance computing ( hpc ) field is witnessing a widespread adoption of graphics processing units ( gpus ) as co - processors for conventional homogeneous clusters . the adoption of prevalent single - program multiple - data ( spmd ) programming paradigm for gpu - based parallel processing brings in the challenge of resource underutilization , with the asymmetrical processor / co - processor distribution . in other words , under spmd , balanced cpu / gpu distribution is required to ensure full resource utilization . in this paper , we propose a gpu resource virtualization approach to allow underutilized microprocessors to efficiently share the gpus . we propose an efficient gpu sharing scenario achieved through gpu virtualization and analyze the performance potentials through execution models . we further present the implementation details of the virtualization infrastructure , followed by the experimental analyses . the results demonstrate considerable performance gains with gpu virtualization . furthermore , the proposed solution enables full utilization of asymmetrical resources , through efficient gpu sharing among microprocessors , while incurring low overhead due to the added virtualization layer . gpu , virtualization , resource sharing , spmd , heterogeneous computing , high performance computing
|
it is known that many animal and insect species are capable of sensing extremely weak magnetic fields .of particular interest amongst biologists , chemists , and physicists is the problem of how migrating species of birds use the earth s magnetic field for navigation .the mechanism granting birds the ability to use the geomagnetic field for guidance is known as the avian compass and there is now substantial evidence that certain species ( e.g. the european robin ) do indeed possess this compass . to date , the most promising model of the avian compass is known as the radical - pair mechanism , a chemical reaction which takes place inside a photoreceptor molecule in the bird s eye known as cryptochrome .the radical - pair model has been a platform on which many interesting theoretical investigations have been carried out since it was first proposed as a candidate explanation for the avian compass . one interesting problem , which has also been the subject of recent debate , is the form of its reaction operator .we will use quantum walks , which is essentially an elaborate theory of kraus maps , to shed some light on this topic .this illustrates that quantum walks is a suitable framework for describing coherent chemical kinetics .the description of radical - pair reactions has conventionally been that of haberkorn s from 1976 .this approach is phenomenological and based on arguing which of two existing differential equations for the radical - pair state should be preferred .the first is proposed by johnson and merrifield , and evans _ et al .the second is due to pedersen and freed .we shall refer to refs . collectively as the johnson evans equation for mere convenience .haberkorn chose the johnson evans equation instead of the one due to pedersen and freed by showing that pedersen and freed s equation leads to negative eigenvalues for the radical - pair state whereas the johnson and evans version does not .both differential equations for the radial - pair state can now be seen to contain terms making up what is known as the lindblad form of master equations , though neither is actually in lindblad form .haberkorn s solution was to simply consider what the system state would look like if propagated using the two proposed state derivatives .the nontriviality in distinguishing the two types of state evolution lies in the fact that it is not immediately obvious how to interpret the proposed state derivatives as opposed to the state itself . in more general and formal language, haberkorn made his argument by referring to the map which takes a system state from to , rather than the map s generator , which is related to the map by .hence the generator hence defines the derivative of the state , i.e. . 
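A small numerical illustration of this map-versus-generator distinction: the sketch below builds an arbitrary two-level decay generator L as a matrix acting on the column-stacked density matrix, exponentiates it to obtain the map exp(L t), and inspects the propagated state (the object Haberkorn's argument refers to) rather than the derivative itself. The generator is a generic Lindblad dissipator chosen only for illustration; it is not the radical-pair reaction operator discussed below.

```python
import numpy as np
from scipy.linalg import expm

# Arbitrary two-level decay operator c (|2> -> |1> at rate 0.5), used only to
# build some generator L; vec(rho) is column-stacked, so vec(A X B) = (B^T kron A) vec(X).
c = np.sqrt(0.5) * np.array([[0, 1], [0, 0]], dtype=complex)
I = np.eye(2, dtype=complex)
cdc = c.conj().T @ c
L = np.kron(c.conj(), c) - 0.5 * (np.kron(I, cdc) + np.kron(cdc.T, I))

rho0 = np.array([[0.5, 0.5], [0.5, 0.5]], dtype=complex)   # (|1> + |2>)/sqrt(2)

def propagate(rho, t):
    # The map Phi(t) = exp(L t) applied to rho: this is what is inspected when
    # arguing about positivity, rather than the derivative L rho.
    return (expm(L * t) @ rho.reshape(-1, order="F")).reshape(2, 2, order="F")

for t in (0.0, 1.0, 5.0):
    rho = propagate(rho0, t)
    print(t, np.round(np.linalg.eigvalsh(rho), 4))   # eigenvalues remain >= 0
```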
in the context of chemical kinetics , the superoperator is referred to as the reaction operatorof course , today , the lindblad form is well understood ( at least in the quantum optics and quantum information community ) , so the problem pointed out by haberkorn may not be perceived by some to be significant .however , there is still something to be learnt from haberkorn s work , which is that when considering non - unitary evolution the map may be a more intuitive quantity to consider than the state derivative because is given explicitly by but only implicitly by .one could in fact argue that this is also the reason why lindblad s result on the form of master equations is nontrivial .this motivates us to use instead of in this paper .haberkorn s preferred reaction operator went unchallenged until recently , bringing the debate about its form into the limelight again .this has resulted in one side arguing in defence of haberkorn and is now referred to as the conventional , phenomenological , or haberkorn approach to radical - pair reactions .a separate camp , called the quantum - measurement approach to radical pairs has proposed two new reaction operators , due separately to kominis and jones and hore .of particular interest to us is the paper by jones and hore who derived their reaction operator using kraus maps .the jones hore equation predicts a different singlet - triplet dephasing rate to the conventional approach of haberkorn s and has been the subject of a recent experiment aimed at discriminating the two models .this experiment showed the jones hore equation to be inconsistent with the measured dephasing rate .in this paper we also use kraus maps to describe the radical - pair kinetics but we obtain a dephasing rate that is consistent with ref .a key factor in our approach is the recognition that any intermediate transition in a multistate reaction involve only two states at once .although this seems trivial , it is what separates our paper from the work of jones and hore because it implies that one can derive the map for a multistate reaction by composing two - state maps only .maps for multistate reactions derived in this way will thus be correct by virtue of the method ( provided that we have the correct two - state maps ) .two - state transitions are particularly well understood in quantum information theory since qubits , which are essentially two - state systems , are the central object of study .the toolbox provided by quantum information theory allows us to construct maps for multistate reactions that are robust to modelling errors .our approach to the radical - pair reaction kinetics views the reaction as simply a system evolving between a discrete set of states in a probabilistic manner .since such systems are analogous to random walks , our approach to the problem is one of _ quantum _ walks .we review quantum walks below and point out the sense in which our version of quantum walks differ from those in the quantum - walk literature .quantum walk was first introduced in the early 90s by aharanov and coworkers .they sought to generalise the idea of a classical walker who can only move left or right in discrete units along one spatial dimension to the quantum case .they managed to define a quantum walk as the analogue of a classical random walk by correlating the system s spatial coordinate to its internal degree of freedom such as spin , generically called a coin . 
the coin s ability to be in a superposition of statescan be seen to give rise to quantum walks although the use of a coin is actually not necessary .since then quantum walks have proven to be useful in quantum information where they have found a variety of algorithmic applications in hitting and searching .quantum walks were first introduced for closed systems which follow unitary evolution , but have recently been generalised to open systems which follow nonunitary evolution , called open quantum walks .such evolution is described by a map that is completely positive and trace preserving , and like their unitary counterpart , the maps defined in refs . include changes in the internal degrees of freedom of the open system . to model the radical - pair reaction we propose an evolution map which makes no reference to any internal degrees of freedom .in this sense our model of the radical - pair reaction is similar to ref . with the exception that we allow for nonunitary evolution .the rest of the paper is organised as follows . in sec .[ reactopinlit ] we define the standard radical - pair reaction and review the different reaction operators that have so far been proposed .these results form the backdrop against which our approach to radical - pair kinetics should be considered .we then introduce kraus maps in sec .[ reactopfromqw ] which writes maps in a particular form known as the operator - sum representation . heremaps describing processes known as amplitude - damping , dephasing , and unitary evolution are explained .we give a detailed exposition of the amplitude - damping map in appendix [ appad ] to illustrate how the operator - sum representation provides insight to the process which would not have come by so easily if the process was described using a lindblad - form master equation .this forms our toolbox for describing radical - pair kinetics and is used in secs .[ reactopstdrpm ] and [ moregeneralqws ] to derive a reaction operator for the standard radical - pair reaction . in sec .[ reactopstdrpm ] we focus on reaction operators which can be described with only amplitude - damping maps .this corresponds to the reaction operators reviewed in sec .[ reactopinlit ] where only recombination processes are assumed to occur .an interesting result here is the radical - pair density operator obtained from a partial trace over the chemical products .this gives a new state in the radical - pair basis which has been overlooked in previous models . in sec .[ moregeneralqws ] we generalise the results of sec .[ reactopstdrpm ] to include dephasing and unitary dynamics and comment on the relation between our quantum - walk approach and previous work .finally we summarise our results in sec .[ conclusion ] and discuss its connection with a sequel paper and other relevant literature .the body of literature discussed here refers to the standard radical - pair reaction shown in fig .[ stdrpmrates ] .this is often referred to as a spin - selective recombination reaction .it is spin selective because the reaction product depends on the spin state of the reactants , i.e. 
the radical pair , while the term recombination refers to the physical process by which the chemical products are obtained : the radical pair is typically formed through the transfer of an electron from one molecule , called the donor , to another molecule , the acceptor , creating a spatially separated pair of entangled spins .the formation of the chemical products usually involve a back - transfer of this electron from the acceptor to the donor a recombination of the electron with the vacancy left on the donor molecule ( see e.g. refs . for a more complete account of the radical - pair mechanism ) .the standard radical - pair reaction .the spin state of the radical pair oscillates coherently between singlet and triplet states under the hyperfine interaction with neighbouring nuclei spins .this oscillation can be modulated by an external magnetic field and shown to be sensitive to the magnetic field s direction .this effectively modulates the amount of time the radical pair spends in the singlet state versus the triplet .since each spin state decays to their own unique product , information about the magnetic field direction is then encoded in the concentrations of the singlet and triplet products.,scaledwidth=35.0% ] in line with previous works we assume a minimal basis for the reaction where denotes the singlet state and the triplet state with zero total spin .physically this corresponds to the high - field limit where the triplet states with nonzero total spin are sufficiently far away in energy so that they can be safely neglected .we label the singlet and triplet product states as and .the and transitions are then characterised by the respective rates and .we have circled the singlet and triplet states in fig .[ stdrpmrates ] with a dashed line to emphasise that the system comprises of only the and states . allknown reaction operators are expressed in terms of these states so it will be convenient to define the green dashed boundary in fig .[ stdrpmrates ] then serves to remind us that in the present discussion , and sums to the identity , i.e. . this may be useful to keep in mind for later as our approach extends the system hilbert space to include the product states so that and no longer form a complete set .the problem is to determine the appropriate reaction operator which describes the changes brought upon due solely to the spin - selective recombination taking the spin states to product states . in particular, this recombination contributes to the singlet - triplet dephasing and we would like to determine what exactly this contribution is .all other effects such as spin relaxation or effects of molecular motion are ignored .spin - dependent interactions such as the zeeman , hyperfine , exchange , or dipolar are also ignored . 
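For later reference in the numerical sketches, the minimal basis and the projectors Q_S and Q_T can be written down explicitly; the ordering |S>, |T> is a convention of this sketch, and the product states are appended in a later sketch.

```python
import numpy as np

S = np.array([1.0, 0.0])     # singlet state in the minimal {|S>, |T>} basis
T = np.array([0.0, 1.0])     # triplet state with zero total spin
Q_S = np.outer(S, S)
Q_T = np.outer(T, T)
assert np.allclose(Q_S + Q_T, np.eye(2))   # Q_S + Q_T = 1 on the {S, T} subspace
```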
including additional processes other than the spin - selective recombination will tend to increase the dephasing rate .this will be assumed in all of the reaction operators which are reviewed next , which all describe fig .[ stdrpmrates ] but without the coherent evolution .beginning with haberkorn , the reaction operator preferred by his positivity - preserving argument for the radical - pair state is given by \equiv { } & - \frac{1}{2 } \ ; { k_{\rm s}}\big [ { \hat{q}_{\rm s}}\ , \rho(t ) + \rho(t ) \ , { \hat{q}_{\rm s}}\big ] { \nonumber}\\ & - \frac{1}{2 } \ ; { k_{\rm t}}\big [ { \hat{q}_{\rm t}}\ , \rho(t ) + \rho(t ) \ , { \hat{q}_{\rm t}}\big ] \;.\end{aligned}\ ] ] note that this is a non - trace - preserving equation for , the effect of which is to describe leakage of the singlet and triplet populations over time at the rates and respectively . equation also predicts a damping of the singlet - triplet coherence at the rate of .this can be seen by calculating the time derivative of the off - diagonal elements of : where .this reaction operator was first questioned by kominis who argued that the radical - pair reaction is analogous to two coupled quantum dots under continuous measurement by a point contact .kominis then derived a reaction operator using similar methods as refs . which gives we have written in terms of to emphasise that the difference between and lies in the terms and .the addition of these terms puts kominis s result in linblad form making the evolution trace preserving .this means that does not describe the loss of singlet or triplet populations as in .kominis therefore augments his description of the radical - pair kinetics by an additional equation in which the radical - pair population is given by \;,\end{aligned}\ ] ] where dt \ ; , \quad p_{\rm t } = { k_{\rm t}}{\rm tr}\big [ { \hat{q}_{\rm t}}\ , \rho(t ) \big ] dt \end{aligned}\ ] ] are the respective probabilities of a transition from either to , or to in the infinitesimal interval .equation does however predict the same dephasing rate as .this is obvious from since and do not contribute anything to due to the orthogonality of and . spurred on by the measurement analogy made by kominis , jones and hore attempted a derivation of the radical - pair reaction operator using the operator - sum representation ( see sec . [ reactopfromqw ] ) .their result is given by \equiv { } & - ( { k_{\rm s}}+ { k_{\rm t } } ) \rho(t ) { \nonumber}\\ & + { k_{\rm s}}\ , { \hat{q}_{\rm t}}\ , \rho(t ) \ , { \hat{q}_{\rm t}}+ { k_{\rm t}}\ , { \hat{q}_{\rm s}}\ , \rho(t ) \ , { \hat{q}_{\rm s}}\;.\end{aligned}\ ] ] similar to the phenomenological approach given by , this equation also does not preserve the trace of .as already mentioned , this is attributed to the loss of the singlet and triplet populations to the reaction products .it can be seen that gives this loss rate at and for the singlet and triplet states respectively as expected .however , the jones hore reaction operator gives a dephasing rate which is the sum of the recombination rates , i.e. , rather than the average predicted by ( or ) . despite being perceived by jones and hore to be too small a difference to be detectable in an experiment , maeda and colleagueshave recently managed to place the two models under experimental scrutiny using pulsed electron paramagnetic resonance spectroscopy .we briefly review some key elements of this experiment below .the experiment reported in ref . 
uses radical pairs in a modified version of the carotenoid - porphyrin - fullerene triad molecule of ref .this system has previously been demonstrated to exhibit the anisotropic chemical response to earth - strength magnetic fields required for the avian compass .this model system also minimises the singlet - triplet dephasing due to processes other than recombination and has a much smaller than for some temperatures ( between 200240k ) .this means the dephasing rate in this temperature regime is approximately according to , and according to .the rate was then measured for this temperature range .it was shown that for certain temperatures ( a range of about 30k ) , the dephasing rate from the jones hore model lied above the measured values while the rate from haberkorn s model always remained below . in practicethere will be other uncontrollable processes which tend to increase the dephasing rate so the measured values of dephasing will not come solely from the recombination .this means that any reaction operator must produce a dephasing rate which lies below the measured values of for all temperatures and therefore suggests the recombination kinetics according to to be incorrect . in the next sectionwe show how the idea of quantum walks can be used to derive a reaction operator with a dephasing rate consistent with the experimental data of ref . .kraus published his theory of general state changes in quantum mechanics in which he asked what form must a superoperator take if it is to map a physically valid state to another physically valid state .note that we have not actually mentioned time so can be the state of a quantum system after some abstract operation , not necessarily a state at a particular instant in the future of ( although we will use it to propagate the system in time ) .the answer to the question just posed is that must be of the kraus form , also known as the operator - sum representation of .kraus s result therefore has the power to describe a large class of state transitions without referring to the underlying physics which makes the theory operational .this is what gives the kraus formalism its versatility . to describe time evolution we simply associate with the system state at some arbitrary time , and at some later time .the operator - sum representation can then be stated as {^\dagger}\;,\end{aligned}\ ] ] where the set of kraus operators satisfy the condition {^\dagger}\ , \hat{k}^{(n)}({\delta t } ) = \hat{1 } \;.\end{aligned}\ ] ] the theory also gives a prescription for calculating the probability that event is observed , given by {^\dagger}\ , \hat{k}^{(n)}({\delta t } ) \ , \rho(t ) \big\ } \;.\end{aligned}\ ] ] the sum in can be understood to be over states conditioned on events ( indexed by ) that may be observed in an interval , and hence its connection to measurements .condition is then equivalent to the conservation of probability expressed by note that is simply the norm of the post - measurement state so that may also be written as an average over normalised conditioned states : where {^\dagger}/ \ , \wp^{(n)}({\delta t } ) \;.\end{aligned}\ ] ] the first idea that we will borrow from quantum information theory to describe the radical - pair reaction is the amplitude - damping kraus map .a derivation of this map can be found in ref . 
but is given in terms of a photon incident on a beam splitter .this is an example of a process that can be described by the amplitude - damping map but we believe an explanation involving only ideas from probability theory and basic quantum mechanics to be more direct and fitting for developing the reaction operator .we have thus included such a detailed exposition in appendix [ appad ] and provide only a sketch of the amplitude - damping map below .assuming first for simplicity that we have a two - dimensional system described by and . aside from its hilbert space dimension , the system is otherwise general , and the states and are arbitrary .we now wish to describe the change in the system state over some time interval , say from to , allowing for the possibility that a transition from to may occur during this interval .this is schematically shown in fig .[ basicprocesses ] ( a ) . to .this map transfers population from one state to another .( b ) dephasing between and .the map tends to localize the random walker onto or ( i.e. turn a superposition of and into a mixture ) .the triangle can be thought of as a wedge driven into a line that connects the two states being dephased .( c ) coherent oscillations between states and .the interconversion rate between states and is given by [ see ] .,width=264 ] we can think of fig . [ basicprocesses ] ( a ) as describing a possible change of state for a single molecule out of an ensemble of identical molecules .thus some fraction of the molecules will jump from state to over some finite time interval . by regarding the transition as a random event we can characterise the process by a probability , , for a given molecule to go from state to in time .if we prepare all molecules in state and observe them for a time of , then the fraction of molecules that make the transition to state is given by . we should then expect that the longer we observe the process the greater the number of molecules to jump to state . in the long - time limit ,all molecules end up in state so we expect as .conversely if the process is only observed for a very short interval then we would not expect many molecules to have jumped to state .we thus expect for .we denote the map describing such a process by .it has the operator - sum representation with two kraus operators : = { } & \hat{m}^{(1)}_{21}({\delta t } ) \ , \rho(t ) \big [ \hat{m}^{(1)}_{21}({\delta t } ) \big]{^\dagger}{\nonumber}\\ & + \hat{m}^{(2)}_{21}({\delta t } ) \ , \rho(t ) \big [ \hat{m}^{(2)}_{21}({\delta t } ) \big]{^\dagger}\;,\end{aligned}\ ] ] where its effect on an arbitrary state can be seen most directly by calculating the matrix representation of in the basis .this gives the matrix where we have defined .the population transfer from state to is apparent on the diagonal terms represent the occupation probabilities to be in each of the basis states . the actual number of molecules occupying state at time is given by . assuming the total number of molecules to be conserved , and differ only by a factor of . ] in : a fraction has been subtracted from and added to .note that the off - diagonal terms of are also affected by this process . 
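The sketch below spells out this two-state map numerically. The Kraus matrices used are the standard amplitude-damping pair, assumed here to coincide with M^(1)_{21} and M^(2)_{21}; the assert reproduces the completeness condition, and the output shows both the population transfer and the sqrt(1-p) scaling of the coherence discussed above.

```python
import numpy as np

def amplitude_damping_kraus(p):
    # Kraus pair for a jump |2> -> |1> occurring with probability p during the
    # interval (basis ordering |1>, |2>); standard amplitude-damping form.
    M1 = np.array([[1.0, 0.0], [0.0, np.sqrt(1.0 - p)]])
    M2 = np.array([[0.0, np.sqrt(p)], [0.0, 0.0]])
    assert np.allclose(M1.T @ M1 + M2.T @ M2, np.eye(2))   # completeness condition
    return M1, M2

def apply_map(rho, kraus_ops):
    return sum(K @ rho @ K.conj().T for K in kraus_ops)

rho = 0.5 * np.ones((2, 2))        # equal superposition of |1> and |2>
p = 0.2
print(apply_map(rho, amplitude_damping_kraus(p)))
# populations become 0.5 + 0.5p and 0.5(1 - p); the coherence is scaled by sqrt(1 - p)
```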
unless the population transfer will simultaneously reduce the coherence between and .this can be seen from appendix [ appad ] where we argued about the form of and without ever referring to the system coherences .the decay of the off - diagonal elements in should thus be taken as a consequence of the population transfer .this is a crucial difference between our formulation of the reaction operator and that of ref . where the decay of coherences was put into the system evolution by hand .when represents a state of higher energy than , is said to describe a dissipative process ( hence the name amplitude damping ) . in this casecaptures the well - known idea from open - systems theory that dissipation implies decoherence .we now generalise the amplitude - damping map to a system with states . by essentially the same argument as in appendix [ appad ] , the map describing a transition from state to is simply given by {^\dagger}{\nonumber}\\ & + { \hat{m}}^{(2)}_{jk}({\delta t } ) \ , \rho(t ) \ , \big [ { \hat{m}}^{(2)}_{jk}({\delta t } ) \big]{^\dagger}\;,\end{aligned}\ ] ] with the kraus operators where ] .note that only the coherences ( the off - diagonal terms in ) are damped .the generalisation to a system with states can be stated simply in kraus form as {^\dagger}{\nonumber}\\ & + { \hat{v}}^{(2)}_{jk}({\delta t } ) \ , \rho(t ) \ , \big[{\hat{v}}^{(2)}_{jk}({\delta t } ) \big]{^\dagger}\;,\end{aligned}\ ] ] where as with amplitude damping , we can work with the rate of dephasing rather than with . denoting the rate of dephasing between states and as , we can write the evolution under the dephasing map can then be expressed by the differential equation , \end{aligned}\ ] ] which defines the generator for : the map describes pure decoherence and is useful for modelling processes which counter act any coherent evolution of the system that may occur in the basis .the singlet - triplet interconversion in the radical - pair mechanism is one such process .we depict coherent evolution between two states graphically by using a green two - way arrow as shown in fig .[ basicprocesses ] ( c ) . in general, coherent oscillations between states and can be generated by a hamiltonian of the form ( for ) where is the expectation value of in the state .the coupling between states and is denoted by .note that for to be hermitian must be real and symmetric with respect to its indices , i.e. unitary evolution can be understood as a special case of the kraus decomposition with only one kraus operator : where we have set for convenience . the actual evolution over a time of is then effected by the map the differential form of has the familiar commutator form : \;.\end{aligned}\ ] ] we can also express in terms of as just as we parameterised the amplitude - damping and dephasing maps by their probability of occurrence , we can similarly parameterise unitary evolution by this is the probability of making a transition to after time assuming the system was initially in . if we wish to relate to the transition rate under unitary evolution then an explicit expression for is required .this can be shown to be \;,\end{aligned}\ ] ] where we have defined equation tells us that can be defined as the frequency at which the system oscillates between states and .note that this depends on both the coupling between and ( i.e. 
) as well as their separation ( given by ) .we can also see from that increasing lowers the peak of the transition probability between and .the proof of [ and hence ] will be presented in a sequel paper where it is actually used to simulate an example quantum walk .here it is sufficient to see how is related to the rate of the process and the effect of varying and .the quantum - walk formalism visualises state transitions in a quantum system as a network of nodes ( representing states ) connected by edges ( representing transitions ) , called graphs .such models have a wide applicability because the nodes can represent abstract degrees of freedom , such as a spatial coordinate , or in our case , the state of a molecule .we therefore begin our quantum - walk model of the reaction outlined in fig .[ stdrpmrates ] by simply representing the different radical - pair and product states as nodes on a graph labelled according to : while the rates are taken as the standard radical - pair reaction without coherent evolution represented as a graph .black arrows represent population transfer , which here is associated with the recombination reaction of the radical pair.,scaledwidth=19.0% ] the states in and are assumed to represent distinct stages of the radical - pair reaction , therefore we take to be an orthornormal basis .this is shown in fig .[ stdrpmprob ] . as explained in sec .[ reactopinlit ] , here we concentrate only on the recombination processes which take the system from the singlet to singlet product ( ) , and from the triplet to the triplet product ( ) . extensions to include dephasing and coherent evolution will be covered in sec .[ moregeneralqws ] .our goal is to derive a reaction operator .this means that we should express the evolution of in differential form . as before , we can simply obtain such an equation by propagating over an infinitesimal interval , except now there is more than one process happening at a time .this can easily be dealt with by using a single map composed from a series of maps .each map in represents a particular process in the system . attributing each recombination process to an amplitude - damping map, we describe the evolution of by where on using we have note the second equality follows because is infinitesimal .this can be seen by expanding the exponentials and neglecting terms on the order of .this also means that reversing the order of and does not change the second equality of .since and are independent of time , shows that is the generator of for any finite [ otherwise it is only the generator of ] .this is simply equivalent to we can now apply to to obtain { \nonumber}\\ & \quad + k_{43 } \bigg [ \hat{q}_{43 } \ , \rho(t ) \ , \hat{q}{^\dagger}_{43 } - \frac{1}{2 } \ : \hat{q}_{3 } \ ,\rho(t ) - \frac{1}{2 } \ : \rho(t ) \ : \hat{q}_{3 } \bigg ] \;.\end{aligned}\ ] ] a few remarks can be made about our result directly from the form of but it will be easier to refer to its matrix representation in the site basis . 
this is a set of rate equations which includes the coherences between sites as well .we follow the standard convention of writing matrices where denotes the element at the row and column of .the matrix form of can then be easily verfied to be given by it is obvious from the matrix form of that the spin - selective recombination reduces all coherences except for the coherences of the two product states , given by and to be zero [ recall and ] .of special importance is the decay of the singlet - triplet coherence .this is given by in which can be seen to be consistent with the experiment of ref .note that because is hermitian it follows that is also hermitian , so referring to is the same as referring to . adding the diagonal elements of we see that is trace preserving .in particular we see that that is to say , the rate at which singlet - state radical pairs are lost due to recombination is exactly balanced by the rate of increase of the singlet product .recall from sec .[ reactoplitreview ] that previous treatments on the radical - pair kinetics use trace - decreasing reaction operators which refer only to and [ see and ] .it has been noted in ref . that such reaction operators produce singlet and triplet populations which satisfy and thus poses no problem. however , a description in the minimal basis still fails to account for coherences between the radical pair and product states .jones and hore have argued that such coherent superpositions between the radical pair and products decohere very quickly and is therefore consistent with a model in which they are neglected .however , on accepting , we see that coherences between the radical pair and products in fact decay at a rate less than the radical - pair dephasing ( e.g. compared with ) so the remark by jones and hore is not actually correct . nevertheless , a model in which the product states are neglected is still permissible so long as the radical - pair populations and coherences do not depend on the populations and coherences of the products .this is clearly true from the matrix form of so we can write down such a reaction operator directly . for ease of comparison with previous resultswe express this reaction operator in the notation of sec .[ reactoplitreview ] [ recall , and ] : we have used an overbar on to indicate that it is no longer trace preserving .we can simply read off , , and from to get the reader may have already noticed that , , and are in fact the same as those given by the haberkorn reaction operator given in , which means that and should in fact be the same .this can be shown by collecting terms porportional to as one group and terms proportional to as one group in : - { k_{\rm t}}\ , \big [ { \hat{q}_{\rm t}}\ , \rho(t ) \ , { \hat{q}_{\rm t}}{\nonumber}\\ & + \frac{1}{2 } \ : { \hat{q}_{\rm t}}\ , \rho(t ) \ , { \hat{q}_{\rm s}}+ \frac{1}{2 } \ : { \hat{q}_{\rm s}}\ , \rho(t ) \ , { \hat{q}_{\rm t}}\big ] \\[0.2 cm ] \label{barlqw3 } = { } & - \frac{1}{2 } \ : { k_{\rm s}}\big [ { \hat{q}_{\rm s}}\ , \rho(t ) + \rho(t ) \ , { \hat{q}_{\rm s}}\big ] { \nonumber}\\ & - \frac{1}{2 } \ : { k_{\rm t}}\big [ { \hat{q}_{\rm t}}\ , \rho(t ) + \rho(t ) \ , { \hat{q}_{\rm t}}\big ] \;. 
\end{aligned}\ ] ] the second equality is obtained by using and in the terms proportional to and respectively in .note the identity operator carries a subscript 2 because it is now only an identity on the subspace spanned by the singlet and triplet states .the resolution of the full identity operator , , requires all four states of the radical pair and products .this is why we have circled the singlet and triplet states in fig .[ stdrpmrates ] in sec .[ stdradicalpairreact ] .we have thus derived the conventional spin - selective recombination operator using the operational and systematic treatment of quantum walks .writing in the form of also makes the comparison with the jones hore reaction operator easier .this is because , as argued by jones and hore , an alternative derivation of their result given by begins with { \nonumber}\\ & \quad - { k_{\rm t}}\ , \big [ { \hat{q}_{\rm t}}\ , \rho(t ) \ , { \hat{q}_{\rm t}}+ { \hat{q}_{\rm t}}\ , \rho(t ) \ , { \hat{q}_{\rm s}}+ { \hat{q}_{\rm s}}\ , \rho(t ) \ , { \hat{q}_{\rm t}}\big ] \;.\end{aligned}\ ] ] the difference between and can thus be seen in the coefficient of the cross terms [ i.e. and . the singlet - triplet dephasing rate was thus incorrectly posited at the outset in their argument .alternatively , this can also be attributed to an incorrect formulation of the kraus operators in the minimal basis .our approach on the other hand begins with all four states of the radical pair and products .this allows us to focus on describing the and transitions with the dephasing between and occurring as a consequence .since the and transitions are identical we in fact only have to find the correct operator - sum representation for one of them and apply it twice to to obtain the map for the full reaction as explained in . this makes our quantum - walk approach less prone to modelling errors .next we illustrate this point further by using the quantum - walk idea to obtain the appropriate map in the minimal basis by ignoring the chemical products .if we are interested in deriving the kraus map for only the radical - pair state then we need some way of capturing the effect of the decay from the radical pair to the products but without including the products in .the above treatment of first deriving and then reading off the equations of motion for the radical pair provides one way of achieving this result .here we show an alternative method which is also based on quantum walks .the approach here is to find the operator - sum representation for the radical - pair state with the chemical products ignored .although it may sound contradictory , we will begin by including the product states in .however , we will lump the states and into a single state which we denote as .this is shown in fig .[ nochemprodgraph ] .the singlet and triplet states are defined as before in , but instead of and we now have a simplified graph sufficient for deriving the kraus operator corresponding to a trace - decreasing reaction operator . 
due to the decoupling of the matrix elements in , this graph can in fact be used to simulate the partial trace provided that we ignore the coherences between the product and radical pair and that we understand the distinction between and [ see sec .[ maintextpartialtrace ] , especially the discussion between and ] .,scaledwidth=18.0% ] the reason for introducing fig .[ nochemprodgraph ] is twofold : first , it is easier to consider kraus operators for a trace - preserving map .the evolution corresponding to can then be extracted by using just one of the kraus operators and hence will be trace decreasing . for this purposeit is sufficient to introduce only one additional state .this will then allow us to use the quantum - walk approach by composing two trace - preserving maps corresponding to the transitions shown in fig .[ nochemprodgraph ] . the second reason for introducing fig .[ nochemprodgraph ] has to do with what is meant by `` ignoring the products '' . in the language of quantum probability ,any unobserved degrees of freedom in a system can be traced over to give a so - called reduced state .this procedure is called a partial trace and is known to be a trace - preserving operation .we will find that this gives a reduced state for the radical pair that seems equivalent to fig .[ nochemprodgraph ] but is in fact subtly different .it will thus be convenient to refer to fig .[ nochemprodgraph ] for comparison .we discuss these two ideas below .the operator - sum representation of can be derived by considering the quantum walk shown in fig .[ nochemprodgraph ] , which can easily be described by it can be shown in general that a composition of two kraus maps is another kraus map .this means that we can define the kraus operators for the total map by substituting the operator - sum representation of and into .this gives us {^\dagger}\;,\end{aligned}\ ] ] where we have defined using and we find that which means that it is redundant and we need only three kraus operators to describe fig . [ nochemprodgraph ] . using and the binomial expansion to first order in in [ see ] we find we should note that could have also been obtained by generalising the argument in appendix [ appad ] to three states . in this casewe would actually just write down three kraus operators directly .it is simple to check that satisfy the condition to be a valid set of kraus operators [ see from sec .[ reactopfromqw ] ] : {^\dagger}\hat{m}^{(n)}(dt ) = \hat{1 } \;.\end{aligned}\ ] ] the advantage of using is that it decomposes into a sum of states {^\dagger}\;,\end{aligned}\ ] ] conditioned on the outcome of a measurement performed at time .the measurement is devised to give us information about the transitions occurring in the system so that we associate the outcome with the transition , with , and with no transitions .it is easy to see that describes a jump from state to giving the conditional state note that is the trace of so that upon normalisation the evolved state is simply . similarly applying weget which is seen to describe a jump from state to .this leaves which gives dt { \nonumber}\\ & - \frac{1}{2 } \ : k_{23 } \big [ \hat{q}_3 \ , \rho(t ) + \rho(t ) \ , \hat{q}_3 \big ] dt \;.\end{aligned}\ ] ] note that this is the evolution given by haberkorn s equation. 
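the composition just described is easy to prototype numerically . the following python / numpy sketch is purely illustrative : the basis ordering , the rate values and all variable names are our own choices and are not taken from the text . it builds the three kraus operators for the three - node graph to first order in dt , checks the completeness condition , and verifies that the no - jump operator reproduces the haberkorn - type anticommutator evolution of the radical pair up to terms of order dt squared .

```python
import numpy as np

# Basis ordering assumed here: 0 = singlet, 1 = triplet, 2 = lumped product state.
kS, kT, dt = 1.0, 2.0, 1e-5
QS = np.diag([1.0, 0.0, 0.0])            # projector onto the singlet
QT = np.diag([0.0, 1.0, 0.0])            # projector onto the triplet

# Three Kraus operators, to first order in dt (the jump / no-jump decomposition).
M1 = np.sqrt(kS * dt) * np.array([[0, 0, 0], [0, 0, 0], [1, 0, 0]])   # singlet -> product
M2 = np.sqrt(kT * dt) * np.array([[0, 0, 0], [0, 0, 0], [0, 1, 0]])   # triplet -> product
M3 = np.diag([np.sqrt(1 - kS * dt), np.sqrt(1 - kT * dt), 1.0])       # no recombination

# Completeness: sum_n M_n^dag M_n = identity (conservation of probability).
assert np.allclose(M1.conj().T @ M1 + M2.conj().T @ M2 + M3.conj().T @ M3, np.eye(3))

# Radical pair prepared in an equal singlet-triplet superposition.
psi = np.array([1.0, 1.0, 0.0]) / np.sqrt(2.0)
rho = np.outer(psi, psi)

# Jump probabilities and the (unnormalised) no-jump state.
pS = np.real(np.trace(M1.conj().T @ M1 @ rho))       # ~ kS*dt/2
pT = np.real(np.trace(M2.conj().T @ M2 @ rho))       # ~ kT*dt/2
rho_nojump = M3 @ rho @ M3.conj().T
print(pS, pT)

# To first order in dt the no-jump evolution is the Haberkorn anticommutator form.
haberkorn = rho - 0.5 * dt * (kS * (QS @ rho + rho @ QS) + kT * (QT @ rho + rho @ QT))
print(np.max(np.abs(rho_nojump - haberkorn)))        # O(dt^2)
```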
the effect of can be seen directly by applying it to different initial states .letting we find where is the trace of .a similar result can also be seen by letting .if the system is in the product state then we expect it to remain in the product state forever since there is no process to take the system out of . indeed , setting we find the evolution described byis thus conditioned on the absence of recombinations .that describes radical - pair evolution conditioned on no recombinations is not surprising in view of appendix [ appad ] where it can be seen to be so by construction .the kraus operator consistent with the conventional description of radical - pair kinetics in the minimal basis is thus given by and is the measurement approach that jones and hore sought after in ref .that such an evolution equation is given by conditioning on unrecombined radical pairs has also been noted in ref . , but with the operator - sum formalism incorrectly applied as the experiment of ref . has now shown . herewe would like to derive an equation of motion for the radical pair from tracing over the products in .the partial trace is a formal procedure for obtaining a density operator with the products ignored and is quite different to simply discarding the products .note that the partial trace is not simply the sum , but rather an operation defined only on systems with a tensor product structure . as such , we require that the hilbert space of in be of the form where is the hilbert space for the radical pair and is the space for the products. we can then derive the state of the radical pair with the products ignored from \equiv \sum_{k } \ ; { \langle{\chi_k}| } \rho(t ) { |{\chi_k}\rangle } \;,\end{aligned}\ ] ] where is any basis of . for this reason, we will represent each site with a two - dimensional hilbert space ( ) with basis indicating the presence ( ) or absence ( ) of a random walker .since denotes a random walker at site , we can write the site basis as this method of defining sites can be viewed as defining a four - mode state in quantum optics with each mode containing at most one photon ( or equivalently a four - mode fermionic state ) .note that this expresses in the form of with it will be convenient to introduce a short - hand notation in which the tensor product is omitted .in general , we will write the partial trace of over sites 2 and 4 is then given by we have noted the expectation of with respect to the state is identically zero since there can be at most one particle ( or walker ) in the system .the calculation of is a little bit involved so we leave the details to appendix [ partialtrace ] .the result is however simple to state . 
noting that now refers only to , we will simply denote a state with and as and write \ , { |{0,0}\rangle \langle{0,0}| } - k_{43 } \ , \rho_{33}(t ) \ , { |{0,1}\rangle \langle{0,1}| } - k_{21 } \ , \rho_{11}(t ) \ , { |{1,0}\rangle \langle{1,0}| } { \nonumber}\\ & - \frac{1}{2 } \ ; \big ( k_{21 } + k_{43 } \big ) \ , \rho_{13}(t ) \ , { |{1,0}\rangle \langle{0,1}| } - \frac{1}{2 } \ ; \big ( k_{21 } + k_{43 } \big ) \ , \rho_{31}(t ) \ , { |{0,1}\rangle \langle{1,0}| } \;.\end{aligned}\ ] ] as before , this equation is perhaps easier to read from its matrix representation , given by this gives the correct rates for dephasing and population transfer for the radical pair except now we have one additional state , , whose effect is to drain the populations out of and .it would be tempting to compare the dynamics given by to the graph of fig .[ nochemprodgraph ] and define , , and .however , the partial trace does not correspond to this identification , because as we have just shown above , the evolution defined by fig .[ nochemprodgraph ] is given by [ or equivalently by ] which has the matrix representation note that we have used and and reordered the matrix elements for ease of comparison with .the difference between and is clear .equation has coherences between the product and radical - pair states whereas does not .the two matrices do not even refer to the same basis we are correct to equate to , and to in , but we would be mistaken to identify with .the reason is because conveys no other information except the absence of the random walker from sites one and three .it does not say where the walker is .thus should be regarded as a radical - pair state because it gives us only information about the radical pair , namely that it is in neither the singlet nor triplet state , consistent with the fact that we have traced over the products in .in contrast , says exactly which site the random walker is at .it is represented by a node on the graph in fig .[ nochemprodgraph ] whereas is not . 
for notational consistencywe will write where n may stand for `` neither '' , `` none '' , or `` null '' .this distincition between and is an important and interesting one because it suggests that is another radical - pair state that we should consider and therefore extend the minimal basis from to .previous treatments on the radical - pair reaction operator have been to use either a trace - decreasing without products , or a trace - preserving with products .the partial trace has the advantage that it is both trace - preserving and excludes the products .it achieves this by regarding as just another radical - pair state which has not been considered ( or taken seriously ) before .we summarise the previous approaches to radical - pair kinetics alongside the partial - trace method schematically in fig .[ summary ] .finally , we make a couple of observations of the model defined by and : * we first note some similarities and differences between the partial - trace approach and the conventional haberkorn model .the two models are essentially equivalent in that the former description uses a matrix with unit trace , whereas the latter uses a matrix plus one scalar equation : we have used to express the product populations on the right - hand side in terms of and .however , the right - hand side of explicitly refers to product populations whereas the trace of does not .this is because is a radical - pair state so that \frac{1}{2}\frac{1}{2}\frac{1}{2}\frac{1}{2}\frac{1}{2}\frac{1}{2}\frac{1}{2}\frac{1}{2} ] .the probability of not seeing a transition is therefore the sum of the probabilities for each of these scenarios . that describes a combination of these two scenarioscan also be seen from its form , which we can understand by a simple analogy to .consider first the case where the system is in state and remains in . instead of taking to as in , we now take to itself .this means that we simply replace the transition operator in by the projector .we would also have to replace the under the square root in by since now we are concerned with the case where the system stays in .doing so gives us , but we know this is not the complete description yet as we have not considered the contribution due to the system being in and staying there .if the system is already in , the process should take to itself because no other processes are present which can take the system out of .the probability that the system remains in given that it was in is thus simply 1 .therefore we simply add the projector ( with coefficient 1 ) to to arrive at the resultant form of .it is trivial to show that produces the correct state by letting be and in turn . for the mathematically inclined reader, we note that can be derived directly by using and the constraint {^\dagger}\hat{m}^{(1)}_{21}({\delta t } ) + \big [ \hat{m}^{(2)}_{21}({\delta t } ) \big]{^\dagger}\hat{m}^{(2)}_{21}({\delta t } ) = \hat{1 } \;.\end{aligned}\ ] ] by resolving the identity on the right - hand side in the site basis , gives {^\dagger}\hat{m}^{(2)}_{21}({\delta t } ) = { } & \big [ 1-\gamma_{21}({\delta t } ) \big ] \ , { |{\psi_1}\rangle \langle{\psi_1}| } { \nonumber}\\ & + { |{\psi_2}\rangle \langle{\psi_2}| } \;.\end{aligned}\ ] ] note that is simply the operator square root of this equation .since is diagonal , we arrive at on taking the sqaure root of the coefficients of and .this derivation of is simple but does contain the insight provided above . 
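the two - operator construction of this appendix can also be checked numerically . the sketch below is illustrative only : the value of gamma and the exponential choice gamma(dt) = 1 - exp(-k dt) are assumptions made for the example , not taken from the text . it verifies the completeness condition , the population transfer by gamma , the damping of the coherence by sqrt(1-gamma) , and the fact that two short applications of the map compose into one longer application .

```python
import numpy as np

def amp_damp_kraus(gamma):
    """Kraus operators for a |psi_1> -> |psi_2> jump with probability gamma."""
    M1 = np.array([[0.0, 0.0], [np.sqrt(gamma), 0.0]])          # jump occurred
    M2 = np.array([[np.sqrt(1.0 - gamma), 0.0], [0.0, 1.0]])    # no jump
    return M1, M2

def apply_map(rho, gamma):
    M1, M2 = amp_damp_kraus(gamma)
    return M1 @ rho @ M1.conj().T + M2 @ rho @ M2.conj().T

# Completeness (conservation of probability) holds for any gamma in [0, 1].
g = 0.4
M1, M2 = amp_damp_kraus(g)
assert np.allclose(M1.conj().T @ M1 + M2.conj().T @ M2, np.eye(2))

# A superposition of |psi_1> and |psi_2>: populations move, the coherence shrinks.
rho0 = 0.5 * np.ones((2, 2))
rho1 = apply_map(rho0, g)
print(np.diag(rho1))                           # [0.5*(1-g), 0.5 + 0.5*g]
print(abs(rho1[0, 1]) / abs(rho0[0, 1]))       # sqrt(1-g): damping of the coherence

# With gamma(dt) = 1 - exp(-k*dt) the map composes consistently in time:
k, t1, t2 = 2.0, 0.3, 0.7
g1, g2, g12 = (1 - np.exp(-k * t) for t in (t1, t2, t1 + t2))
lhs = apply_map(apply_map(rho0, g1), g2)
rhs = apply_map(rho0, g12)
print(np.max(np.abs(lhs - rhs)))               # ~0: two short steps equal one long step
```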
now that we have the necessary kraus operators the evolution of the system follows directly by forming the sum .notice from the above that and are essentially time - evolution operators but conditioned on our knowledge of whether the system underwent a jump or not . equations and are in fact quantum analogues of the classical bayes rule .conditioning requires that we monitor the system for the entire duration of . what the operator - sum representation of describes is how the state should evolve without us having to monitor the system continuously , or in the language of probability theory, it describes the unconditioned state .this can be understood as follows :if one does not monitor the system then all we can say is that with probability the system will be in the state , and with probability the system will be in the state . from probability theory, we would say that the state in the absence of such monitoring at time ( i.e. the unconditioned state ) is therefore a weighted sum of the conditioned states and , substituting in and we obtain exactly the result of kraus . in practice oneoften has an ensemble of particles and all we know is the fraction of particles that underwent a state transition during . in this case is simply the fraction of particles that jumped and the fraction that did not .note also the difference between and former is the probability of observing a transition from to without assuming that we know which state the system is in at time , whereas the latter does , is the probability of jumping conditioned on the system being in state at time . we have now described a simple one - way population transfer completely as a probabilistic process . instead of expressing the system evolution as a sum over conditioned states we can express it in the form of a differential equation .such an equation can be derived by considering the evolution of over an infinitesimal interval dt . in this caseit is more appropriate to refer to the rate at which the system jumps from to over some interval rather than the probability .if we denote the fraction of particles that jump from to per second by , the probability is then related to by .this means that the probability of a jump in an infinitesimally small time interval is also an infinitesimal .the evolution of the system state is now given by {^\dagger}{\nonumber}\\ & + \hat{m}^{(2)}_{21}(dt ) \ , \rho(t ) \big [ \hat{m}^{(2)}_{21}(dt ) \big]{^\dagger}\;,\end{aligned}\ ] ] using the binomial expansion we have we can then write the kraus operators for infinitesimal evolution using and as substituting and into and neglecting terms on the order of we arrive at \;.\end{aligned}\ ] ] this is the master equation corresponding to the amplitude damping map and can be put in the lindblad form if one wishes by using the property .we have derived this equation rather simply by applying the kraus formalism , hence it can be regarded as merely a restatement of the operator - sum representation of in differential form .it is only a matter of preference whether one wants to use a map or a differential equation to simulate the system dynamics but the kraus formalism provides a simple way to understand the essential physics of the process by using only basic probability ideas .here we show how to obtain an equation of motion for the radical pair where the products are ignored by using the partial trace operation on . 
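before working through the algebra of the appendix , it may help to see the partial trace itself as a short numerical routine . the sketch below is the generic textbook operation on a tensor - product space , not the specific four - mode bookkeeping used here ; the bell - state example and all names are illustrative .

```python
import numpy as np

def partial_trace(rho, dims, keep):
    """Trace out every tensor factor not listed in `keep`.

    rho  : density matrix on a tensor-product space of subsystems with dimensions `dims`
    keep : indices of the subsystems to retain (e.g. the radical-pair modes)
    """
    n = len(dims)
    rho = np.asarray(rho).reshape(dims + dims)     # one axis per ket and per bra factor
    ket = list(range(n))
    bra = list(range(n, 2 * n))
    for i in range(n):
        if i not in keep:
            bra[i] = ket[i]                        # equal labels -> contracted (traced) axis
    out = [ket[i] for i in keep] + [bra[i] for i in keep]
    reduced = np.einsum(rho, ket + bra, out)
    d = int(np.prod([dims[i] for i in keep]))
    return reduced.reshape(d, d)

# Example: trace the second qubit out of a Bell pair -> maximally mixed reduced state.
bell = np.zeros(4)
bell[0] = bell[3] = 1.0 / np.sqrt(2.0)
rho_ab = np.outer(bell, bell)
print(partial_trace(rho_ab, [2, 2], keep=[0]))     # 0.5 * identity, trace preserved
```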
we have already argued in sec .[ maintextpartialtrace ] that this is given by where { \nonumber}\\ & + k_{43 } \bigg [ \hat{q}_{43 } \ , \rho(t ) \ , \hat{q}{^\dagger}_{43 } - \frac{1}{2 } \ : \hat{q}_{3 } \ , \rho(t ) - \frac{1}{2 } \ : \rho(t ) \ : \hat{q}_{3 } \bigg ] \;,\end{aligned}\ ] ] and we have written the site basis using as [ recall and ] the sum is most easily calculated by first rewriting as the following identities will therefore prove useful \label{traceid2 } { } & { \langle{n_2=0,n_4=1\,}|{\psi_m}\rangle } { \nonumber}\\ { } & = ( 1-\delta_{m2 } ) \, \delta_{m4 } \ , { |{n_1=\delta_{m1},n_3=\delta_{m3}}\rangle } \ ; , \\[0.2 cm ] \label{traceid3 } { } & { \langle{n_2=1,n_4=0\,}|{\psi_m}\rangle } { \nonumber}\\ { } & = \delta_{m2 } \ , ( 1-\delta_{m4 } ) \ , { |{n_1=\delta_{m1},n_3=\delta_{m3}}\rangle } \;. \end{aligned}\ ] ] we will also use the summation convention where a repeated index is summed over . using the first term incan thus be calculated as follows . where we have noted on using that while the first term in is { } & = \rho_{1 m } \ , ( 1 - \delta_{m4 } - \delta_{m2 } - \cancel{\delta_{m2 } \ , \delta_{m4 } } ) { \nonumber}\\ & \quad \times { |{n_1=1,n_3=0}\rangle \langle{n_1=\delta_{m1},n_3=\delta_{m3}}| } { \nonumber}\\[0.2 cm ] { } & = { |{n_1=1,n_3=0}\rangle } \big ( \rho_{1 m } \ , { \langle{n_1=\delta_{m1},n_3=\delta_{m3}}| } { \nonumber}\\ & \quad - \delta_{m4 } \ , \rho_{1 m } \ , { \langle{n_1=\delta_{m1},n_3=\delta_{m3}}| } { \nonumber}\\ & \quad - \delta_{m2 } \ , \rho_{1 m } \ , { \langle{n_1=\delta_{m1},n_3=\delta_{m3}}| } \ , \big ) { \nonumber}\\[0.2 cm ] { } & = { |{n_1=1,n_3=0}\rangle } \big ( \rho_{11 } \ , { \langle{n_1=1,n_3=0}| } { \nonumber}\\ & \quad + \cancel{\rho_{12 } \ , { \langle{n_1=0,n_3=0}| } } + \rho_{13 } \ , { \langle{n_1=0,n_3=1}| } { \nonumber}\\ & \quad + \cancel{\rho_{14 } \ , { \langle{n_1=0,n_3=0}| } } - \cancel{\rho_{14 } \ , { \langle{n_1=0,n_3=0}| } } { \nonumber}\\ & \quad - \cancel{\rho_{12 } \ , { \langle{n_1=0,n_3=0}| } } \ , \big ) { \nonumber}\\[0.2 cm ] \label{a2 } { } & = \rho_{11 } \ , { |{n_1=1,n_3=0}\rangle \langle{n_1=1,n_3=0}| } { \nonumber}\\ & \quad - \rho_{13 } \ , { |{n_1=1,n_3=0}\rangle \langle{n_1=0,n_3=1}| } \;.\end{aligned}\ ] ] this also gives us the second term in since it is just the hermitian conjugate of , { } & = \rho_{11 } \ , { |{n_1=1,n_3=0}\rangle \langle{n_1=1,n_3=0}| } { \nonumber}\\ & \quad - \rho_{31 } \ , { |{n_1=0,n_3=1}\rangle \langle{n_1=1,n_3=0}| } \;.\end{aligned}\ ] ] the third term in can be calculated in exactly the same manner with replacing , and replacing . by inspection of the above workingwe see that this amounts to making the following replacements in ( bras remain unchanged ) : we thus obtain and taking the hermitian conjugate , { } & = \rho_{13 } \ , { |{n_1=1,n_3=0}\rangle \langle{n_1=0,n_3=1}| } { \nonumber}\\ & \quad - \rho_{33 } \ , { |{n_1=0,n_3=1}\rangle \langle{n_1=0,n_3=1}| } \;.\end{aligned}\ ] ] substituting , , , and into and collecting like terms we arrive at the remaining two terms in are much easier to calculate with the help of and .the second term in is given by where we have noted from that and similarly , the third term in is where we have noted from that collecting , , and , we finally have the state which has the products traced out .the final form of is then where we have used the fact that must be spanned by to omit writing out and explicitly in .j. kempe ._ discrete quantum walks hit exponentially faster_. in s. arora , k. jansen , j. d. p. 
rolim , and a. sahai ( eds . ) , _ approximation , randomization , and combinatorial optimization ( proceedings of the 6th international workshop on approximation algorithms for combinatorial optimization problems , approx 2003 , and the 7th international workshop on randomization and approximation techniques in computer science , random 2003 , princeton , nj , usa , august 24 - 26 , 2003 ) _ , lecture notes in computer science , page 354 ( springer , 2003 ) .
|
classical chemical kinetics use rate - equation models to describe how a reaction proceeds in time . such models are sufficient for describing state transitions in a reaction where coherences between different states do not arise , or in other words , a reaction which contains only incoherent transitions . a prominent example of a reaction containing coherent transitions is the radical - pair model . the kinetics of such reactions is defined by the so - called reaction operator which determines the radical - pair state as a function of intermediate transition rates . we argue that the well - known concept of quantum walks from quantum information theory is a natural and apt framework for describing multisite chemical reactions . by composing kraus maps that act only on two sites at a time , we show how the quantum - walk formalism can be applied to derive a reaction operator for the standard avian radical - pair reaction . our reaction operator predicts a recombination dephasing rate consistent with recent experiments [ j. chem . phys . * 139 * , 234309 ( 2013 ) ] , in contrast to previous work by jones and hore [ chem . phys . lett . * 488 * , 90 ( 2010 ) ] . the standard radical - pair reaction has conventionally been described by either a normalised density operator incorporating both the radical pair and reaction products , or by a trace - decreasing density operator that considers only the radical pair . we demonstrate a density operator that is both normalised and refers only to radical - pair states . generalisations to include additional dephasing processes and an arbitrary number of sites are also discussed .
|
recently , there is no doubt about the cosmological origin of the gamma - ray bursts ( hereafter grbs ) . then , assuming a large scale isotropy for the universe , one expects the same property for the grbs as well .another property , which is also expected to occur that grbs should appear fully randomly , i.e. if a burst is observed it does not give any information about the place of the next one .if both properties are fulfilled , then the distribution is called completely random ( for the astronomical context of spatial point processes see ) .there are several tests for checking the complete randomness of point patterns , however , these procedures do not always give information for both properties simultaneously .there are increasing evidence that all the grbs do not represent a physically homogeneous group .hence , it is worth investigating that the physically different subgroups are also different in their angular distributions . in the last years the authors provided several different tests probing the intrinsic isotropy in the angular sky - distribution of grbs collected in batse catalog . shortly summarizing the results of these studiesone may conclude : a. the long subgroup ( ) seems to be distributed isotropically ; b. the intermediate subgroup ( ) is distributed anisotropically on the % significance level ; c. for the short subgroup ( ) the assumption of isotropy is rejected only on the % significance level ; d. the long and the short subclasses , respectively , are distributed differently on the % significance level .( about the definition of subclasses see ; is the duration of a grb , during which time the % of the radiated energy is received . ) independently and by different tests , confirmed the results a. , b. and c. with one essential difference : for the intermediate subclass a much higher - namely % - significance level of anisotropy is claimed .again , the short subgroup is found to be `` suspicious '' , but only the % significance level is reached .the long subclass seems to be distributed isotropically ( but see ) . found significant angular correlation on the scale for grbs with durations . reported a correlation between the locations of previously observed short bursts and the positions of galaxies in the local universe , indicating that between 10 and 25 per cent of short grbs originate at low redshifts ( ) .it is a reasonable requirement to continue these tests using more sophisticated procedures in order to see whether the angular distribution of grbs is completely random or has some sort of regularity .this is the subject of this article .new tests will be presented here .mainly the clarification of the short subgroup s behaviour is expected from these tests . in this paper , similarly to the previous studies , the _ intrinsic _ randomness is tested ; this means that the non - uniform sky - exposure function of batse instrument is eliminated .the paper is organized as follows . 
in section [ mat ] the three new tests are described . this section does not contain new results , but - because the methods are not widely familiar - this minimal survey may be useful . section [ tests ] contains the statistical tests on the data . section [ disc ] summarizes the results of the statistical tests , and section [ conc ] presents the main conclusions of the paper . the voronoi diagram - also known as dirichlet tesselation or thiessen polygons - is a fundamental structure in computational geometry and arises naturally in many different applications . generally , this diagram provides a partition of a point pattern ( `` point field '' , also `` point process '' ) according to its spatial structure , which can be used for analyzing the underlying point process . assume that there are points ( ) scattered on a sphere surface with a unit radius . one says that a point field is given on the sphere . the voronoi cell of a point is the region of the sphere surface consisting of points which are closer to this given point than to any other point of the sample . this cell forms a polygon on the sphere . every such cell has its area ( ) given in steradian , its perimeter ( ) given by the length of the boundary ( one great - circle arc of the boundary curve is also called a `` chord '' ) , its number of vertices ( ) given by a positive integer , and its inner angles ( ) . this method is completely non - parametric , and therefore may be sensitive to various point - pattern structures in the different subclasses of grbs . note that the behaviour of this tesselation method on the sphere surface is quite different from that on the infinite plane . this follows from the fact that the areas of the polygons are not independent of each other , because the total surface of the sphere is fixed in steradian . hence , the spherical voronoi tesselation is not affected by border effects , and the voronoi diagram becomes a closed set of convex polygons ( a short numerical sketch of this construction is given below ) . the points on the sphere may be distributed completely randomly or non - randomly ; the non - random distribution may have different characters ( clustering , filaments , etc . ; for a survey of these non - random behaviours see , e.g. , ) . random and some regular patterns have distributions with one characteristic maximum ( unimodal ) but with different variances . multimodality means several characteristic maxima , indicating a hierarchical ( cluster ) structure ; the number of modes is determined by the number of scales in the sample . the vt method is able both to detect the non - randomness and to describe its form ( for more details see and for the astronomical context ) . [ figure : peak - flux range in galactic coordinates ; the peak - flux is given in photon/( ) . ] contrary to vt , this method considers the distances ( edges ) among the points ( vertices ) . clearly , there are distances among points . a spanning tree is a system of lines connecting all the points without any loops . the minimal spanning tree ( mst ) is a system of connecting lines , where the sum of the lengths is minimal among all the possible connections between the points .
in this paper the spherical version of msfis used following the original prim s paper .the separate connecting lines ( edges ) together define the minimal spanning tree .the statistics of the lengths and the angles between the edges at the vertices can be used for testing the randomness of the point pattern .the mst is widely used in cosmology for studying the statistical properties of galaxy samples ..,width=355 ] let denote the probability for finding a point in an area of radius . if scales as ( i.e. ) , then is called the local fractal dimension ( e.g. for a completely random process on the plane ) . in the case of a monofractal independent on the position .a multifractal ( mfr ) on a point process can be defined as unification of the subsets of different ( fractal ) dimensions .one usually denotes with the fractal dimension of the subset of points at which the local fractal dimension is in the interval of .the contribution of these subsets to the whole pattern is not necessarily equally weighted , practically it depends on the relative abundances of subsets . the functional relationship between the fractal dimension of subsets and the corresponding local fractal dimensionis called the mfr or hausdorff spectrum . in the vicinity of -th point ( )one can measure from the neighbourhood structure a local dimension ( `` rnyi dimension '' ) .this measure approximates the dimension of the embedding subset , giving a possibility to construct the mfr spectrum which characterizes the whole pattern ( for more details see ) .if the maximum of this convex spectrum is equal to the euclidean dimension of the space , then in classical sense the pattern _ is not a fractal _ , but the spectrum remains sensitive to the non - randomness of the point set . there is a wide variety of astronomical phenomena , where the concept of fractal and/or multifractal can be successfully applied ., corresponding to the completely random 2d euclidean case.,width=355 ]the three procedures outlined in section [ mat ] enable us to derive several stochastic quantities well suited for testing the non - randomness of the underlying point patterns .up to the present the most comprehensive all - sky survey of grbs was done by the batse experiment on board of the cgro satellite in the period of 1991 - 2000 . in this periodthe experiment collected 2704 well justified burst events and the data are available in the current batse catalog .since there are increasing evidence ( and references therein ) that the grb population is actually a mixture of astrophysically different phenomena , we divided the grbs into three groups : ( ) , ( ) and ( ) . to avoid the problems with the changing detection threshold we omitted the grbs having a peak flux .this truncation was proposed by .the bursts may emerge at very different distances in the line of sight and it may happen that the stochastic structure of the angular distribution depends on it .therefore , we also made tests on the bursts with in the short and long population , separately .table [ test ] defines the 5 samples to be studied here ..tested samples of batse grbs . [ cols="<,<,<,^ " , ] we assigned to every mc simulated sample 13 values of the test variables and , consequently , a point in the 13d parameter space . completing 200 simulations in all of the subsamples we get in this way a 13d sample representing the joint probability distribution of the 13 test - variables . 
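the geometric quantities entering the test - variables can be prototyped with standard scientific - python tools . the sketch below is illustrative only : it assumes scipy >= 1.5 for the spherical voronoi areas , ignores the sky - exposure function , and the summary statistics shown are examples rather than the actual 13 test - variables of the paper . it computes voronoi cell areas , vertex counts and perimeters , and the minimal spanning tree of the angular separations , for a fully random sample on the unit sphere .

```python
import numpy as np
from scipy.spatial import SphericalVoronoi
from scipy.sparse.csgraph import minimum_spanning_tree

rng = np.random.default_rng(0)

# Fully random reference sample on the unit sphere (isotropic directions).
n = 500
xyz = rng.normal(size=(n, 3))
xyz /= np.linalg.norm(xyz, axis=1, keepdims=True)

# --- Voronoi tesselation: cell areas (steradian), vertex counts, perimeters ---
sv = SphericalVoronoi(xyz, radius=1.0)
sv.sort_vertices_of_regions()
areas = sv.calculate_areas()                     # sums to 4*pi over the whole sphere
nvert = np.array([len(r) for r in sv.regions])

def perimeter(region):
    v = sv.vertices[region]
    w = np.roll(v, -1, axis=0)
    return np.arccos(np.clip(np.sum(v * w, axis=1), -1, 1)).sum()

perim = np.array([perimeter(r) for r in sv.regions])

# --- Minimal spanning tree built on the great-circle separations ---
ang = np.arccos(np.clip(xyz @ xyz.T, -1, 1))
np.fill_diagonal(ang, 0.0)
mst = minimum_spanning_tree(ang)                 # sparse matrix holding the n-1 edges
edges = mst.data

# A few candidate test variables (means / variances of the geometric quantities).
print(areas.sum() / (4 * np.pi))                 # ~1: the cells tile the sphere
print(areas.mean(), areas.var(), nvert.mean(), perim.mean())
print(edges.mean(), edges.var(), edges.max())
```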
using a suitably chosen measure of distance of the points from the sample mean we can get a stochastic variable characterizing the deviation of the simulated points from the mean only by chance . an obvious choice would be the squared euclidean distance . in the case of a gaussian distribution with unit variances and without correlations this would result in a distribution of 13 degrees of freedom . the test - variables in our case are correlated and have different scales . before computing squared euclidean distances we therefore transformed the test - variables into non - correlated ones with unit variances . due to the strong correlation between some of the test - variables we may assume that the observed quantities can be represented by non - correlated variables of smaller number . factor analysis ( fa ) is a suitable way to represent the correlated observed variables with non - correlated variables of smaller number . since our test - variables are stochastically dependent , following we attempted to represent them by fewer non - correlated hidden variables , supposing that in the above equation mean the test - variables ( in our case ) , the hidden variables and a noise - term , respectively . equation ( [ fac ] ) represents the basic model of fa . after making some reasonable assumptions , can be constrained by the following inequality : which gives in our case . factor analysis is a common ingredient of professional statistical software packages ( bmdp , sas , s - plus , spss , etc . ) . the default solution for identifying the factor model is to perform principal component analysis ( pca ) . we kept as many factors as were meaningful with respect to equation ( [ constr ] ) . taking into account the constraint imposed by equation ( [ constr ] ) we retained 8 factors . in this way we projected the joint distribution of the test - variables in the 13d parameter space into an 8d one defined by the non - correlated hidden variables . the hidden variables in equation ( [ fac ] ) are non - correlated and have zero means and unit standard deviations . using these variables we defined the following squared euclidean distance from the sample mean : if the variables had strictly gaussian distributions equation ( [ eudist ] ) would define a variable of degrees of freedom . [ figure : distribution of the squared euclidean distances of the hidden variables ( factors ) in the 8d parameter space ; there are altogether 1000 simulated points ; the full line marks a distribution of 8 degrees of freedom , normalized to the sample size ; the distances of the batse samples are also indicated ; the departures of samples and exceed all those of the simulated points ; the probabilities that these deviations are non - random equal 99.9% and 99.98% , respectively . ] in addition to the significance obtained by the binomial test in subsection [ bnom ] , using the distribution of the squared euclidean distances defined by equation ( [ eudist ] ) one can get further information on whether a batse sample , represented by a point in the parameter space of the test - variables , deviates only by chance or significantly differs from the fully random distribution ( a small numerical sketch of this procedure is given below ) .
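a minimal sketch of the factor projection and of the squared euclidean distance statistic is given below . it is illustrative only : the input matrix is a placeholder for the real 200 x 13 table of test - variables , pca with whitening is used as the default factor - analysis solution , and the significance is the empirical fraction of simulations lying at least as far from the mean .

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(1)

# Placeholder input: 200 simulated catalogues x 13 correlated test variables,
# plus one "observed" row; real values would come from the VT/MST/MFR measures.
sims = rng.normal(size=(200, 13)) @ rng.normal(size=(13, 13))
observed = sims.mean(axis=0) + 3.0 * sims.std(axis=0)       # a deliberately deviant sample

mu, sd = sims.mean(axis=0), sims.std(axis=0)
z_sims = (sims - mu) / sd
z_obs = ((observed - mu) / sd).reshape(1, -1)

# Project onto 8 non-correlated, unit-variance hidden variables
# (principal components as the default factor-analysis solution).
pca = PCA(n_components=8, whiten=True).fit(z_sims)
f_sims = pca.transform(z_sims)
f_obs = pca.transform(z_obs)

d2_sims = np.sum(f_sims**2, axis=1)          # ~chi^2 with 8 degrees of freedom if Gaussian
d2_obs = float(np.sum(f_obs**2))

# Empirical significance: fraction of simulations at least as distant from the mean.
p_value = np.mean(d2_sims >= d2_obs)
print(d2_obs, p_value, 1.0 - p_value)
```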
in all categories ( , , , , ) we made 200 , altogether 1000 , simulations .we calculated the squared distances for all simulations and compared them with those of the batse samples in table [ test ] .figure [ hist ] shows a histogram of the simulated squared distances along with those of the batse samples .full line represent a distribution of degree of freedom .figure [ hist ] clearly shows that the departures of samples and exceed all those of the simulated points . the probabilities , that these deviations are non - random , equal 99.9% and 99.98% , respectively . the full randomness of the angular distribution of the long grbs , in contrast to the regularity of the short and in some extent to the intermediate ones , points towards the differences in the angular distribution of their progenitorsthe recent discovery of the afterglow in some short grbs indicates that these events are associated with the old stellar population accounted probably for the mergers of compact binaries , in contrast to the long bursts resulting from the collapse of very massive stellar objects in young star forming regions .the differences in progenitors reflects also the differences between the energy released by the short and long grbs .unfortunately , little can be said on the physical nature of the intermediate class . the statistical studies ( and the references therein ) suggest the existence of this subgroup - at least from the purely statistical point of view .also the non - random sky distribution is occurring here .but its physical origin is fully open yet .we made additional studies on the degree of the randomness in the angular distribution of samples selected from the batse catalog . according to the durations and peak fluxes of the grbs in the catalog we defined five groups : ( s & ) , ( s & ) , ( & ) , ( s & ) and ( s & ) . to characterize the statistical properties of the point patterns , given by the samples , we defined 13 test - variables based on the voronoi tesselation ( vt ) , minimal spanning tree ( mst ) and multifractal spectra .for all five grb samples defined we made 200 numerical simulations assuming fully random angular distribution and taking into account the batse exposure function .the numerical simulations enabled us to define empirical probabilities for testing the null hypothesis , i.e. the assumption that the angular distributions of the batse samples are fully random .since we performed 13 single tests simultaneously on each subsamples the significance obtained by calculating it separately for each test can not be treated as a true indication for deviating from the fully random case . atfirst we supposed that the test - variables were independent and making use the binomial distribution computed the probability of obtaining significant deviation in at least one of the variables only by chance .in fact , some of the test - variables are strongly correlated . to concentrate the information on the non - randomness experienced by the test - variables, we assumed that they can be represented as a linear combination of non - correlated hidden factors of less in number .actually , we estimated as the number of hidden factors . making use the hidden factors we computed the distribution of the squared euclidean distances from the mean of the simulated variables . 
_comparing the distribution of the squared euclidean distances of the simulated and the batse samples we concluded that the short1 and short2 groups deviate significantly ( 99.90% , 99.98% ) from full randomness , but this is not the case for the long samples . for the intermediate group the squared euclidean distances also give a significant deviation ( 98.51% ) ._ this study was supported by otka grant no . t048870 , by a bolyai scholarship ( i.h . ) , by the research program msm0021620860 of the ministry of education of the czech republic , and by gauk grant no . 46307 ( a.m. ) . we are indebted to an anonymous referee for his valuable comments and suggestions .
|
we studied the complete randomness of the angular distribution of gamma - ray bursts ( grbs ) detected by batse . since grbs seem to be a mixture of objects of different physical nature we divided the batse sample into 5 subsamples ( short1 , short2 , intermediate , long1 , long2 ) based on their durations and peak fluxes and studied the angular distributions separately . we used three methods , voronoi tesselation , minimal spanning tree and multifractal spectra , to search for non - randomness in the subsamples . to investigate the eventual non - randomness in the subsamples we defined 13 test - variables ( 9 from the voronoi tesselation , 3 from the minimal spanning tree and one from the multifractal spectrum ) . assuming that the point patterns obtained from the batse subsamples are fully random we made monte carlo simulations taking into account the batse sky - exposure function . the mc simulations enabled us to test the null hypothesis , i.e. that the angular distributions are fully random . we tested the randomness by a binomial test and by introducing squared euclidean distances in the parameter space of the test - variables . we concluded that the short1 and short2 groups deviate significantly ( 99.90% , 99.98% ) from full randomness in the distribution of the squared euclidean distances , but this is not the case for the long samples . for the intermediate group the squared euclidean distances also give a significant deviation ( 98.51% ) .
|
in contrast to information - losing or lossy data compression , the lossless compression of data , the central problem of information theory , was essentially opened and closed by claude shannon in a 1948 paper .shannon showed that the entropy formula ( introduced earlier by gibbs in the context of statistical mechanics ) establishes a lower bound on the compression of data communicated across some channel - no algorithm can produce a code whose average codeword length is less than the shannon information entropy .if the probability of codeword symbol is : this quantity is the amount of information needed to invoke the axiom of choice and sample an element from a distribution or set with measure ; any linear measure of choice must have its analytic form of expected log - probability .this relies on the knowledge of a probability distribution over the possible codewords . without a detailed knowledge of the process producing the data , or enough data to build a histogram, the entropy may not be easy to estimate .in many practical cases , entropy is most readily measured by using a general - purpose data compression algorithm whose output length tends toward the entropy , such as lempel - ziv . when the distribution is uniform , the shannon / gibbs entropy reduces to the boltzmann entropy function of classical thermodynamics ; this is simply the logarithm of the number of states .the entropy limit for data compression established by shannon applies to the exact ( lossless ) compression of any type of data . as such, shannon entropy corresponds more directly to written language , where each symbol is presumably equally important , than to raw numerical data , where leading digits typically have more weight than trailing digits . in general , an infinite number of trailing decimal points must be truncated from a real number in order to obtain a finite , rational measurement .since some bits have much higher value than others , numerical data is naturally amenable to information - losing ( lossy ) data compression techniques , and such algorithms have become routine in the digital communication of multimedia data . for the case of a finite - precision numerical datum , rather thanthe shannon entropy , a more applicable complexity measure might be chaitin s algorithmic prefix complexity which measures the irreducible complexity of the leading digits from an infinite series of bits .the algorithmic prefix complexity is an example of a kolmogorov complexity , the measure of minimal descriptive complexity playing a central role in kolmogorov s formalization of probability theory .prior to the twentieth century , this basic notion of a probability distribution function ( pdf ) had not changed significantly since the time of gauss .after analysis of the brownian motion by einstein and others , building on the earlier work of markov , the stochastic process became a popular idea .stochastic processes represent the fundamental , often microscopic , actions which lead to frequencies tending , in the limit , to a probability density .stochastic partial differential equations ( for example , the fokker - planck equation ) generate a pdf as their solution , as do the master equations from whence they are derived ; such pdfs may describe , for instance , the evolution of probabilities over time .they were used notably by langevin to separate dynamical systems into a deterministic classical part and a random stochastic component or statistical model . 
given empirical data from such a system , the langevin approach may be combined with the maximum likelihood method or bayesian inference ( maximum posterior method ) to identify the most likely parameters for an unknown noise function . in practice, langevin s approach either posits the form of a noise function or fits it to data ; it does not address whether or not data is stochastic in the first place .kolmogorov addressed this issue , refining the notion of stochastic processes and probability in general .some objects , a solid black image , for example , are almost entirely regular .other data seem totally random ; for instance , geiger counters recording radioactive decays .stochastic objects lay between these two extremes ; as such , they exhibit both deterministic and random behavior .kolmogorov introduced a technique for separating a message into random and nonrandom components .first , however , he defined the kolmogorov complexity , . is the minimum amount of information needed to completely reconstruct some object , represented as a binary string of symbols , x . recursive function f may be regarded as a particular computer and p is a program running on that computer .the kolmogorov complexity is the length of the shortest computer program that terminates with x as output on computer f. in this way , it symbolizes perfect data compression . for various reasons ( such as the non - halting of certain programs ), it is usually impossible to prove that non - trivial representations are minimal . on the other hand, a halting program always exists , the original string , so a minimal halting program also exists , even if its identity ca nt be verified . in practice , the kolmogorov complexity asymptotically approaches the shannon entropy , and the complexity of typical objects may be readily approximated using the length of an entropic code .often , a variant of kolmogorov complexity is used - chaitin s algorithmic prefix complexity which considers only self - delimiting programs that do not use stop symbols . sincea program may be self - delimiting by iteratively prefixing code lengths , returning to the separation of signal and noise , we now define stochasticity as it relates to the kolmogorov structure function . for natural numbers and , we say that a string x is -stochastic if and only if there exists a finite set such that : the deviation from randomness , , indicates whether x is a typical or atypical member of a. this is minimized through the kolmogorov structure function , : the minimal set minimizes the deviation from randomness , , and is referred to as the kolmogorov minimal sufficient statistic for x given n. the kolmogorov structure function specifies the bits of additional entropy ( shannon entropy reduces to the logarithm function for a uniform distribution ) necessary to select the element x from a set described with k or fewer bits . for a regular object ,the structure function has a slope less than -1 .specifying another bit of k reduces the entropy requirement by more than a bit , resulting in compression . beyond a critical threshold , corresponding to the minimal sufficient statistic , stochastic objects become random . beyond this point , specifying another bit of k increases the entropy by exactly one bit , so the slope of the structure function reaches its maximum value of -1 . 
for this reason, kolmogorov identified the point at which the slope reaches -1 as the minimal sufficient statistic. the kolmogorov minimal sufficient statistic represents the amount of information needed to capture all the regular patterns in the string x without literally specifying the value of random noise. while conceptually appealing, there are practical obstacles to calculating the kolmogorov minimal sufficient statistic. first, since the kolmogorov complexity is not directly calculable, neither is this statistic. approximations may be made, however, and when using certain common data compression algorithms, the point having slope -1 is actually a reasonable estimate of the onset of noise. when certain data are compressed more fully, however, this point may not exist. for example, consider a color photograph of black-and-white static on an analog tv set. the pattern of visible pixels emerges from nearly incompressible entropy: chaos resulting from the machine's attempt to choose values from a nonexistent signal. since a color photograph has three channels and the static is essentially monochromatic, the channels are correlated with one another and hence contain compressible mutual information. as such, the noise in the color photograph, though emergent from essentially pure entropy, is intrinsically compressible; hence the compression ratio never reaches 1:1 and the kolmogorov minimal sufficient statistic does not exist. instead of the parameter value where the compression ratio reaches 1:1, which may not exist, one often seeks the parameter value which provides the most information about the object under consideration. the problem of determining the most informative parameters in a model was famously addressed by the statistician r. a. fisher. the fisher information quantifies the amount of information expected to be inferred in a local neighborhood of a continuously parameterizable probability distribution. it quantifies information at specific values of the parameters, that is, the informativeness of a local observation. if the probability density of x is parameterized along some path by t, f(x;t), then the fisher information at some value of t is the expected square of the score, the derivative of the (negative) hartley information $-\log f$: $i(t) = \mathrm{e}\!\left[\left(\tfrac{\partial}{\partial t}\log f(x;t)\right)^{2}\right]$. the fisher information quantifies the convexity (the curvature) of an entropy function at a specific point in parameter space, provided sufficient regularity and differentiability. in the case of multiple parameters, the fisher information becomes the fisher information metric (or fisher information matrix, fim), the expected covariance of the score: $i_{jk}(t) = \mathrm{e}\!\left[\tfrac{\partial}{\partial t_j}\log f(x;t)\;\tfrac{\partial}{\partial t_k}\log f(x;t)\right]$. the fisher-rao metric is simply the average of the metric implied by the hartley information over a parameterized path. the space described by this metric has distances that represent differences in information or entropy. the differential geometry of this metric is sometimes called _information geometry_. we seek the parameter values maximizing the fisher-rao metric, for variations in these values lead to the largest possible motions in the metric space of information. if we take $p(x)$ to be the universal probability of obtaining a string x from a randomly generated program on a universal computer, this probability is typically dominated by the shortest possible program, implying that $p(x) \approx 2^{-k(x)}$, where the prefix complexity k is used instead of the plain complexity so that the sum over all x converges to unit probability.
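as a brief numerical aside, the continuous-parameter definition above can be checked directly; the following sketch (a minimal illustration in python/numpy, with illustrative helper names that are not part of any standard api) estimates the fisher information of a unit-variance gaussian location family by averaging the squared score, approximating the score with a central finite difference. the exact value for this family is 1.

```python
import numpy as np

def fisher_information_mc(sample, logpdf, theta, eps=1e-4, n=200_000, seed=0):
    """Monte Carlo estimate of I(theta) = E[(d/dtheta log f(X; theta))^2]."""
    rng = np.random.default_rng(seed)
    x = sample(theta, n, rng)
    score = (logpdf(x, theta + eps) - logpdf(x, theta - eps)) / (2 * eps)
    return float(np.mean(score ** 2))

# unit-variance Gaussian location family: the exact Fisher information is 1
sample_normal = lambda mu, n, rng: rng.normal(mu, 1.0, n)
logpdf_normal = lambda x, mu: -0.5 * (x - mu) ** 2   # additive constants cancel in the score
print(fisher_information_mc(sample_normal, logpdf_normal, theta=0.3))   # ~1.0
```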
if we make the string a function of some parameter t, x(t), then $p(x(t)) \approx 2^{-k(x(t))}$, the hartley information $-\log p(x(t))$ is simply $k(x(t))$ bits, and its associated fisher-rao metric is $i(t) = \mathrm{e}\!\left[\left(\tfrac{\partial}{\partial t}\,k(x(t))\right)^{2}\right]$. since the spaces we consider are generally discrete, we will consider paths from one parameter value to the next and evaluate partial differences in place of the partial derivatives. the one-dimensional fisher information of a path from $n-1$ to $n$ is, replacing the continuous differentials with finite differences and ignoring the expectation operator (which becomes the identity operator since the expectation covers only one point), $i(n) = \left(k(x_n) - k(x_{n-1})\right)^{2}$. maximizing this quantity is equivalent to maximizing $\delta k(n) = k(x_n) - k(x_{n-1})$, which is also the denominator in the slope of the kolmogorov structure function. for incompressible data, the numerator (the number of additional bits erased by the less descriptive representation beyond those erased by the more descriptive one) also takes on this value. since the parameter in the kolmogorov structure function corresponds to bits of description length, and the literal description corresponding to each subsequent parameter value differs in length by a constant, minimizing the slope of the kolmogorov structure function is equivalent to maximizing $\delta k(n)$, and hence the fisher information. the minimal parameter that maximizes the fisher information is the kolmogorov minimal sufficient statistic. sometimes, rather than considering the point at which a phase transition is complete, we wish to consider the critical point at which it proceeds most rapidly. for this, we use the expectation of the hessian of the hartley information: $j_{jk}(t) = \mathrm{e}\!\left[\tfrac{\partial^{2}}{\partial t_j \partial t_k}\bigl(-\log f(x;t)\bigr)\right]$. this is in contrast to the expectation of the hartley information (the entropy) and to the expected curvature of the hartley information (the fisher information). when this function is maximized, the fisher information (or slope of the kolmogorov structure function) is changing as rapidly as possible. this means that the phase transition of interest is at its critical point and proceeding at its maximum rate. beyond this point, the marginal utility of each additional bit decreases as the phase transition proceeds past criticality to completion at the minimal sufficient statistic. the derivatives in the fisher information were approximated using backward differences in complexity; a forward second difference may be applied subsequently to complete the hessian. the net result is a central difference approximation to the second derivative of complexity, $k(x_{n+1}) - 2k(x_n) + k(x_{n-1})$. the maximum resulting from this approximation lies strictly between the minimum and maximum values of the parameter.
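once a computable proxy for k is chosen, this finite-difference recipe can be written down directly. the sketch below is a minimal illustration in python (gzip standing in for k; the function names are our own, not an established api); its tolerance argument anticipates the discussion of nearly-maximal sets in the next paragraph.

```python
import gzip

def k_proxy(data: bytes) -> int:
    """Crude stand-in for K(x): size in bits of a gzip code for x."""
    return 8 * len(gzip.compress(data, 9))

def critical_points(truncations, tol_bits=64):
    """truncations[n] is the literal encoding of the object truncated to parameter value n.
    Returns (first_order, second_order): the smallest n whose backward difference of
    complexity (the one-point Fisher proxy) is within tol_bits of the maximum, and
    likewise for the central second difference (the Hessian proxy)."""
    k = [k_proxy(t) for t in truncations]
    d1 = {n: k[n] - k[n - 1] for n in range(1, len(k))}
    d2 = {n: k[n + 1] - 2 * k[n] + k[n - 1] for n in range(1, len(k) - 1)}

    def pick(d):
        best = max(d.values())
        return min(n for n, v in d.items() if v >= best - tol_bits)

    return pick(d1), pick(d2)
```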
in practice, since we cannot calculate $k(x)$ exactly, it is helpful to treat any value of the fisher information (or of the slope of the kolmogorov structure function) within some tolerance of the maximum as a member of a nearly-maximal set, and to select the element of this set having the fewest bits. usually the representation having the lowest complexity is the one with the lowest bit depth or resolution, but not always: when lossless compression is applied to highly regular objects, the lossless representation may be simpler than any two-part or purely lossy code. this statistic represents all the bits of signal that can be described before additional bits require a nearly maximal description; it quantifies the minimum complexity needed to complete a phase transition from a low-complexity periodic signal to a high-complexity chaotic one. the same treatment applies to the maximum of the second derivative considered above: any value of the hessian within some tolerance of the maximum is considered part of a nearly-maximal set, and the simplest element of this set is selected as the critical point. the sufficiency of a statistic was also defined by fisher, in 1921. if a statistic is sufficient, then no other statistic provides any additional information about the underlying distribution. fisher also demonstrated the relationship between maximum likelihood and sufficient statistics. the fisher-neyman factorization theorem says that for a sufficient statistic t(x), the probability density factors into terms dependent on and independent of the parameter: $f(x;\theta) = g(t(x);\theta)\,h(x)$. the maximum likelihood function for the parameter therefore depends only on the sufficient statistic. as a result, a sufficient statistic is ideal for determining the parameters of a distribution using the popular method of maximum likelihood estimation (mle). the most efficient possible articulation of a sufficient statistic is a minimal sufficient statistic: a sufficient statistic t is minimal if and only if, for every sufficient statistic s, there exists a function f such that $t = f(s)$. partitioning the information content of a string into the complexity of its signal and the entropy of its noise is a nuanced idea that takes on several important forms; one such form is the algorithmic entropy of a string, defined in its most basic form as $s(z) = k(z) + n$. in this context, z is a description of a macroscopic observation constructed by truncating a microscopic state x to a bit string of length m.
k(z) is the algorithmic prefix complexity of this representation, and n is the boltzmann entropy divided by its usual multiplicative constant k, the boltzmann constant, and by ln 2, since we are using bits. n is the logarithm of the multiplicity or volume of truncated states carrying the universal recursive measure, so the algorithmic entropy is $s(z) = k(z) + n$. this function is also known as the martin-löf universal randomness test and plays a central role in the theory of random numbers. the partitioning of a string into signal and noise also allows the determination of the limit to its lossy compression, relative to a particular observer. if p(x) is the set of strings which some macroscopic observer p cannot distinguish from string x, then the complexity of the simplest string from this set is the minimal description equivalent to x: $k_p(x) = \min\{\,k(y) : y \in p(x)\,\}$. we refer to this complexity as the macrostate complexity, or macrocomplexity, since its criterion of indistinguishability corresponds to the definition of a macrostate in classical thermodynamics: a macrostate is a set of indistinguishable microstates. likewise, its entropy function has the form (logarithm of cardinality) of the boltzmann entropy. it may be shown that, if the probability of the class is dominated by the shortest program in the class, the macrocomplexity admits a simple first-order approximation; this first-order approximation to macrocomplexity is close to the effective complexity of gell-mann and lloyd. the effective complexity, y, is summed with the shannon entropy, i, or with an even more general entropy measure, such as the rényi entropy, to define an information measure, the total information, that is typically within a few bits of k(x). critical data compression codes the most significant bits of an array of data losslessly, since they are typically redundant, and either fits a statistical model to the remaining bits or compresses them using lossy data compression techniques. upon decompression, the significant bits are decoded and added to a noise function which may be either sampled from a noise model or decompressed from a lossy code. this results in a representation of the data similar to the original. this type of scheme is well suited to the representation of noisy or stochastic data. attempting to find short representations of the specific states of a system which has high entropy or randomness is generally futile, as chaotic data is incompressible. as a result, any operation significantly reducing the size of chaotic data must discard information, and this is why such a process is colloquially referred to as lossy data compression. today, lossy compression is conventionally accomplished by optionally preprocessing and/or partitioning data and then decomposing data blocks onto basis functions. this procedure, canonicalized by the fourier transform, is generally accomplished by an inner product transformation projecting the signal vector onto a set of basis vectors.
however , this is not an appropriate mathematical operation for stochastic data .stochastic variables are not generally square - integrable , meaning that their inner products do not technically exist .though a discrete fourier transform may be applied to a stochastic time series sampled at some frequency , the resulting spectrum of the sample will not generally be the spectrum of the process , as parseval s theorem need not apply in the absence of square - integrability .worse , fourier transforms such as the discrete cosine transform do not properly describe the behavior of light emitted from complex geometries .a photograph is literally a graph of a cross - section of a solution to maxwell s equations .the first photographs were created by the absorption of photons on silver chloride surface , for instance .while it is true that the solution to maxwell s equations in a vacuum take the form of sinusoidal waves propagating in free space , a photograph of a vacuum would not generally be very interesting and , furthermore , the resolution of macroscopic photographic devices is nowhere close to the sampling frequency needed to resolve individual waves of visible light , which typically have wavelengths of a few hundred nanometers . in a limited number of circumstances ,this is appropriate - for example , a discrete cosine transformation would be ideal to encode photons emerging from a diffraction grating with well - defined spatial frequencies . in general , however , most photographs are sampled well below the nyquist rate necessary to reconstruct the underlying signal , meaning that microscopic detail is being lost to the resolution of the optical device used .if a photographic scene contains multiple electrons or other charged particles , the resulting wavefront will no longer be sinusoidal , instead being a function of the geometry of charges .though the sine and cosine functions are orthogonal , they are complete in the sense that they may be used as a basis to express any other function . however , since sinusoids do not generally solve maxwell s equations in the presence of boundary conditions , the coefficients of such an expansion do not correspond to the true modes that carry energy through the electromagnetic field .the correct set of normal modes - which solve maxwell s equations and encode the resulting light - will be eigenfunctions or green s functions of the geometry and acceleration of charges .for example , when designing waveguides ( for example , fiber optics ) the choice of a circular or rectangular cross - section is crucially important as this geometry determines whether the electric or the magnetic field is allowed to propagate along its transverse dimension . calculating the fourier cosine transform of these fields produces a noisy spectrum ; however , expanding over transverse electric and magnetic modes could produce an idealized spectrum that has all of its intensity focused into a single mode and no amplitude over the other modes .the proper , clean spectrum is appropriate for information - losing approximations - since the ( infinite ) spectrum contains no energy beyond the modes under consideration , it can be truncated without compromising accuracy . for the complex electronic geometries and motions that comprise interesting real - world photograph , these modes may be difficult to calculate , but they still exist as solutions of maxwell s equations . 
attempting to describe them using sinusoids that do not solve maxwell's equations leads to incompressible noise and to spectra that cannot be approximated accurately. for audio, however, the situation is somewhat different. audio signals have much lower frequency than visible light, so they are sampled above the nyquist rate. 44,100 hz is a typical sampling rate, which faithfully reconstructs sinusoids having frequency components below 22,050 hz; this includes the vast majority of audible frequencies. auditory neurons will phase-lock directly to sinusoidal stimuli, making audio perception amenable to fourier-domain signal processing. if compressed in the time domain, the leading bits (which exhibit large, rapid oscillations) often appear more random to resource-limited data compressors than the leading bits of a fourier spectrum. at the same time, the less important trailing bits are often redundant given these leading bits, owing to vibrational modes which vary slowly compared to the sampling rate. this reverses the trend observed in most images, whose most significant bits are usually smoother and more redundant than their trailing bits. in the strictly positive domain of fourier-transformed audio, however, the leading bits become smoother, due to the use of an appropriate sampling rate. for macroscopic photographic content, by contrast, individual waves cannot be resolved, making fourier-domain optical processing less effective. nonetheless, scientists and engineers in all disciplines all over the world successfully calculate fourier transforms of all sorts of noisy data, and a large fraction (if not the vast majority) of all communication bandwidth is devoted to their transmission. jpeg images use discrete cosine transforms, a form of discrete fourier transform, as do mp3 audio and most video codecs. other general-purpose transformations, such as wavelets, are closely related to the fourier transform and still suffer from the basic problem of projecting stochastic data onto a basis: the inner products do not technically exist, resulting in a noisy spectrum. furthermore, since the basis used does not generally solve maxwell's equations, finite-order approximations that truncate the spectrum will not translate into results that are accurate to the desired order. as such, we seek an alternative to expanding functions over a generic basis set. the limit of lossy data compression is the kolmogorov complexity of the macrostate perceived by the observer.
explicitly describing the macrostates perceptually coded by a human observer is prohibitively difficult, which makes optimal compression for a human observer intractable even if the complexity were exactly calculable. however, the truncation of data provides a very natural means of splitting two-part codes; the prefix complexity of a truncated string appears in the definition of the algorithmic entropy, which is a special case of macrostate complexity. truncation of amplitude data provides a simple but universal model of data observation: an observer should regard the most significant bits of a datum as being more important than its least significant bits. the codes described in this paper are the sum of a truncated macrostate, which we call the signal, and a lossy approximation of the bits that were truncated from this signal, which we will refer to as that signal's residual noise function. this is in contrast to the algorithmic entropy, which combines a truncated macrostate with all the information needed to recover its microstate. if samples are truncated from q bits to n bits, the boltzmann entropy is proportional to q - n per sample and the algorithmic entropy is k of the truncated string plus this boltzmann term; however, since only the leading n bits of each sample are stored using lossless compression, the savings resulting from a two-part code (compared to a lossless entropic code) can approach the boltzmann entropy in the case of a highly compressed lossy representation. first, the bits of each datum are reordered in the string from most significant to least significant. this simplifies truncation and its correspondence to the conditional prefix complexity. the resulting string is truncated to various depths, and the compressibility of the resulting string is evaluated. the point beyond which the object attains maximum incompressibility also maximizes the fisher information associated with the distribution. as discussed in the previous section, this phase transition proceeds at its maximum rate when the expected hessian of the hartley information is maximal. since the phase transition between periodicity and chaos is generally somewhat gradual, several possible signal depths could be used, to varying effect. following ehrenfest's categorization of critical points by the derivative which is discontinuous, we will also refer to critical points by the order of derivatives. due to the discrete nature of our analysis, our difference approximations never become infinite; instead, we seek the maxima of various derivatives of the information function. in the first-order approximation to the universal probability, the hartley information is simply the complexity; we therefore classify critical points by the order of the difference of complexity that is extremal there. the first of these, the first difference of complexity, corresponds to the fisher-rao metric, and its maxima correspond to sufficient statistics. if the object is chaotic beyond some level of description, then this level is also the kolmogorov minimal sufficient statistic. the second-order critical point, where the second difference is maximal, is the point at which the fisher information increases most rapidly and hence the point beyond which additional descriptive complexity yields diminishing returns. higher-order critical points may be considered as well, but they become progressively more difficult to determine reliably in the presence of imperfect complexity estimates, so we will analyze only the first two orders.
the choice of a first order critical point ( ) or a second order critical point ( ) as a cutoff for data compression will reflect a preference for fidelity or economy , respectively .other considerations may lead to alternative signal depths - the mean squared errors of images having various cutoffs are considered in the examples section , for instance .regardless of the critical point chosen , the redundant , compressible signal component defined by the selected cutoff point is isolated and compressed using a lossless code .ideally , an accurate statistical model of the underlying phenomenon , possibly incorporating psychological or other factors , would be fit to the noise component using maximum likelihood estimation to determine the most likely values of the parameters of its distribution . instead of literally storing incompressible noise ,the parameters of the statistical model are stored .when the code is decompressed , the lossless signal is decompressed , while the noise is simulated by sampling from the distribution of the statistical model . the signal and simulated noiseare summed , resulting in an image whose underlying signal and statistical properties agree with the original image . since statistical modeling of general multimedia data may be impractical , lossy data compression methods may be applied to the noise function .a successful lossy representation may be regarded as an alternate microstate of the perceived noise macrostate ; it is effectively another sample drawn from the set of data macroscopically equivalent to the observed datum . as such , in the absence of an appropriate model , the noise function is compressed using lossy methods , normalized such that the resulting intensities do not exceed the maximum possible amplitude .this representation will be decompressed and added to the decompressed signal to reconstruct the original datum .this has several advantages .first , the signal is relatively free of spurious artifacts , such as ringing , which interfere with the extraction of useful inferences from this signal . artifacts from lossy compression can not exceed the amplitude of the noise floor , and higher levels of lossy compression may be used as a result of this fact .furthermore , lossy compression algorithms tend to compress high frequencies at a higher level than lower frequencies .the eyes and ears tend to sense trends that exhibit change over broader regions of space or time , as opposed to high - frequency oscillations .the compressibility of signal and noise leads to an information - theoretic reason for this phenomenon - the former naturally requires less of the nervous system s communication bandwidth than the latter .the compression ratios afforded by such a scheme can be dramatic for noisy data . 
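before turning to examples, an end-to-end toy sketch of this scheme may be useful. the fragment below is only an illustration under simplifying assumptions (8-bit samples, a gaussian noise model, gzip for the lossless part); none of these names belong to a standard api, and a real implementation would use the critical-depth selection and masking procedures developed later in the paper.

```python
import gzip
import numpy as np

def compress_two_part(samples: np.ndarray, n_bits: int):
    """Two-part code for 8-bit samples: lossless leading bits plus a statistical noise model."""
    mask = np.uint8((0xFF << (8 - n_bits)) & 0xFF)
    signal = samples & mask                          # n most significant bits, kept exactly
    noise = samples - signal                         # truncated residue, modeled statistically
    return {
        "shape": samples.shape,
        "n_bits": n_bits,
        "signal": gzip.compress(signal.tobytes(), 9),
        "noise_model": (float(noise.mean()), float(noise.std())),
    }

def decompress_two_part(code, seed=None):
    """Reconstruct a statistically equivalent datum: exact signal plus simulated noise."""
    rng = np.random.default_rng(seed)
    signal = np.frombuffer(gzip.decompress(code["signal"]),
                           dtype=np.uint8).reshape(code["shape"])
    mu, sigma = code["noise_model"]
    floor = 2 ** (8 - code["n_bits"]) - 1            # largest value the residue can take
    noise = np.clip(rng.normal(mu, sigma, code["shape"]), 0, floor).astype(np.uint8)
    return np.clip(signal.astype(int) + noise, 0, 255).astype(np.uint8)
```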
as a trivial example, consider a string containing only random noise, such as the result of independently distributed bernoulli trials having probability 1/2, such as a coin flip. lossless entropic compression cannot effectively compress such a string. decomposing such a string into basis functions, such as the fourier amplitudes or wavelets used in the jpeg algorithms, inevitably results in a mess of spurious artifacts with little resemblance to the original string. the critical compression scheme described here, however, easily succeeds in reproducing noise that is statistically indistinguishable from (though not identical to) the original string. furthermore, all that needs to be stored to sample from this distribution is the probability of the bernoulli trial, which has complexity o(1). the observer for which this scheme is optimal makes statistical inferences of amplitude in a manner similar to a physical measurement. the observer records the statistics of the data, e.g. mean, variance, etc., rather than encoding particular data, which could introduce bias. if the data is a waveform sampled at a frequency exceeding its effective nyquist rate, such as an audio recording sampled at more than twice the highest frequency a listener's ear can resolve, then its spectrum may be analyzed rather than its time series. this makes the data smoother and non-negative, resulting in better compression of the leading bits. in practical terms, this means that we may compress audio by compressing a one-dimensional image which is a graph of its spectrum, or of the spectrum of some portion of the time series. hence, we will develop the method using images as a canonical example, with the understanding that audio may be compressed, for example, using 1-d images of fourier transforms, and that video may be compressed using an array having a third dimension of time, or by embedding information into lower-dimensional arrays. for many photographic and video applications, it is conventional to rotate a pixel's rgb color space to a color space such as y'cbcr, which more naturally reflects the eye's increased sensitivity to brightness as compared to variations in color. this is normally done in such a way as to take into account the perceived variations in brightness between the different phosphors, inks, or other media used to represent color data. the y' or luma channel is a black-and-white version of the original image which contains most of the useful information about the image, both in terms of human perception and in terms of measurements of numerical error. the luma channel could be said to be the principal component (or factor) of an image with respect to perceptual models. the blue and red chroma channels (cb and cr, respectively) effectively blue-shift and/or red-shift white light of a particular brightness; they are signed values encoding what are typically slight color variations from the luma channel. it is conventional for the luma channel to receive a greater share of bandwidth than the less important chroma channels, which are often downsampled or compressed at a lower bitrate. as an alternative to consistently using a perceptual model optimized for the output of, for example, particular types of monitors or printers, one could use a similar approach to determine the principal components of color data as encoded rather than as perceived. principal components analysis, also called factor analysis, determines the linear combinations of inputs which have the most influence over the output.
in principal components analysis, n samples of m-channel data are placed in the columns of an m-by-n matrix a, and the matrix product $a a^{t}$ is constructed to obtain an m-by-m matrix. the eigenvectors of $a a^{t}$ having the largest eigenvalues are the most influential linear combinations of the data; the magnitude of these eigenvalues (sometimes called factor weights) reflects the importance of a particular combination. applying principal components analysis to photographic content leads to a customized color space whose principal component is a luma channel whose channel weights correspond to the eigenvector having the largest eigenvalue. this channel is best communicated at a higher bitrate than the secondary and tertiary components, which are effectively chroma channels. in the appendix, we compare the results of critically compressing photographs in rgb format against compression using a critical luma channel with lossy chroma channels. for most of these photographs, a critically compressed luma channel leads to more efficient representations than using only lossy wavelet transformations. in general, perceived output may be optimized by analyzing the principal components of perceived rather than raw data. in contrast, directly applying principal components analysis (or factor analysis) to the raw data leads to a universal coordinate system for sample space which has improved compressibility, albeit optimized for a particular instance of data rather than for the perceived output of a particular medium. in addition to improved compression, another advantage of this approach is that it applies to a wide variety of numerical data, which facilitates a general approach to lossy data compression. the critical bit depth determines the level of compressible content of a signal. we now determine expressions for the first and second order critical depths. this will allow us to separate signal from noise for audio, image, and video data by determining an appropriate bit depth. if higher compression ratios are desired, a supercritical signal may be used, meaning that more bits may be truncated, at the cost of destroying compressible information and potentially impeding inference. on the other hand, a signal retaining nearly all of its bits would necessarily be similar to the original. for a string which encodes the outcome of a series of independent bernoulli trials (coin flips, for instance) as zeros and ones, each bit constitutes the same amount of information: one bit is one sample. for a string comprised of a series of numeric samples at a bit depth greater than one, this is not usually the case. in the traditional representation of numerals, leading bits are generally more informative than trailing bits, so an effective lossy data compression scheme should encode leading bits at a higher rate. from the viewpoint of compressibility, on the other hand, the smooth, redundant leading bits of a typical stochastic process are more compressible than its trailing bits. since the leading bits of multi-bit samples are often both more compressible and more significant than the trailing bits, they are candidates for exact preservation using lossless data compression. since the trailing bits are generally less important and also less compressible, lossy compression can greatly reduce their descriptive complexity without perceptible loss.
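returning to the principal components construction described above, the following numpy sketch (illustrative names; it assumes an h x w x 3 rgb array) extracts a data-driven luma channel as the projection onto the leading eigenvector of the channel covariance:

```python
import numpy as np

def principal_color_components(image: np.ndarray):
    """Project an H x W x 3 RGB image onto the eigenvectors of its 3 x 3 channel
    covariance; the first output channel is a data-driven 'luma', the rest 'chroma'."""
    pixels = image.reshape(-1, 3).astype(float)       # one pixel sample per row
    centered = pixels - pixels.mean(axis=0)
    cov = centered.T @ centered / len(centered)       # 3 x 3 channel covariance (A A^T / n)
    eigvals, eigvecs = np.linalg.eigh(cov)            # eigh returns ascending eigenvalues
    order = np.argsort(eigvals)[::-1]                 # largest factor weight first
    components = centered @ eigvecs[:, order]
    return components.reshape(image.shape), eigvals[order]
```

the eigenvalues returned alongside the rotated channels play the role of the factor weights mentioned above, and suggest how bandwidth might be apportioned between the derived luma and chroma channels.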
since images will be easy to illustrate in this medium , and provide a middle ground as compared to one or three dimensions for audio or video , respectively, we will treat the two - dimensional case first .we will then generalize to data having any number of dimensions .we will refer to the matrices ( rank-2 tensors ) as images , since this is a canonical and intuitive case , but these expressions apply generally to all two - dimensional arrays of binary numbers .let represent a tensor of rank three ( a tensor of rankn is an n - dimensional array ) representing one channel of a bitmap image .subscript indices and represent and coordinates in the image , and the superscript indexes the bits encoding the amplitude of pixel ( i , j ) in the channel , ordered from most significant to least significant .let the set contain all the images whose n leading bits agree with those of : this set can be represented as the original image channel with bit depth truncated from to .the implied observer sees n significant bits of ( learnable ) signal and insignificant bits of ( non - learnable ) noise .for reference , the algorithmic entropy of the truncated string is : the literal length of the reduced image is , and most of this will be saved in a critical compression scheme , as noise can be coded at a high loss rate .if is the lossy representation , the complexity of the critically compressed representation is : we may now consider the fisher information ( and hence the minimal sufficient statistic ) of . the fisher information of a path from to is , replacing the continuous differentials with finite differences , and ignoring the expectation operator ( which becomes equivalent to the identity operator ) : the first order bit depth parameterizing the minimal sufficient statistic is the argument n that maximizes the change in complexity , : the first order bit depth of the image channel represented by is .that is , the first most significant bits in each amplitude encode the signal ; the remaining bits are noise .the noise floor of the image is .the second order depth , on the other hand , determines the point of diminishing returns beyond which further description has diminished utility .it is the maximum of the expected hessian of hartley information , $ ] , so it becomes : this minimizes .the signal having bits per sample has the high - value bits and the residual noise function contains the bits determined to have diminishing utility . 
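the two depths just defined can be estimated directly from compressed sizes. the sketch below is a minimal python/numpy illustration (gzip standing in for k; the function names are ours, not a standard api): it truncates a single 8-bit channel to each candidate depth n and returns the depths maximizing the backward first difference and the central second difference of complexity.

```python
import gzip
import numpy as np

def truncate(channel: np.ndarray, n: int, bit_depth: int = 8) -> np.ndarray:
    """Keep the n most significant bits of every amplitude (the representation x_n)."""
    mask = (0xFF << (bit_depth - n)) & 0xFF
    return channel & np.uint8(mask)

def critical_bit_depths(channel: np.ndarray, bit_depth: int = 8):
    """Estimate the first- and second-order critical depths of a 2-D uint8 channel."""
    k = [8 * len(gzip.compress(truncate(channel, n, bit_depth).tobytes(), 9))
         for n in range(bit_depth + 1)]
    d1 = [k[n] - k[n - 1] for n in range(1, bit_depth + 1)]             # Fisher proxy
    d2 = [k[n + 1] - 2 * k[n] + k[n - 1] for n in range(1, bit_depth)]  # Hessian proxy
    return 1 + int(np.argmax(d1)), 1 + int(np.argmax(d2)), k
```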
when considering multiple channels at once , which allows data compression to utilize correlations between these channels , we simply consider a superset of that is the union of the for each channel , .if all the channels have the same bit depth , for instance , this superset becomes : its corresponding representation is the union of the truncated channels , traditionally , an image would have three of these .given this new definition , the calculation of first - order depth proceeds as before .its fisher information is still , which takes its maximum at the minimal sufficient statistic , the first order depth maximizing .the second order depth , as before , maximizes .it is also possible to take the union of channels having different bit depths .the first - order critical parameters ( bit depths ) are best determined by the maximum ( or a near - maximum ) of the fisher - rao metric .the second - order critical parameters are determined by the maximum ( or a near - maximum ) of the hessian of hartley information .if stochastic data is not ergodic , that is , if different regions of data have different statistical properties , these regions may have different critical bit depths .such data can be compressed by separating regions having different bit depths .this phenomenon occurs frequently in photographs since brighter regions tend to be encoded using more significant bits , requiring fewer leading bits .brighter regions thus tend to have lower critical depth than darker regions whose signal is encoded using less significant bits .darker regions require a greater number of leading bits , but their leading zeros are highly compressible .the simplest way to accomplish this separation , perhaps , is to divide the original data into rectangular blocks ( see the example ) and evaluate each block s critical depth separately .this is suboptimal for a couple of reasons - one , regions of complex data having different bit depths are rarely perfect rectangles ; two , normalization or other phenomena can lead to perceptible boundary effects at the junctions of blocks . for this reason, we develop a means of masking less intense signals for subsequent encoding at a higher bit depth . 
in this way , the notion of bit depth will be refined - by ignoring leading zeros , the critical bit depth of the data becomes the measure of an optimal number of significant figures ( of the binary fraction ) for sampled amplitudes .ideally , we would like to encode the low - amplitude signals at a higher bit depth ( given their higher compressibility ) while we make a lossy approximation of the fourier transform of the periodic noise .if a statistical model is available for this approximation , we use this model for the lossy coding , otherwise , a lossy data compression algorithm is employed .given the original data , it is easy to distinguish low - amplitude signals from truncated noise - if the original amplitude is greater than the noise floor , a pixel falls into the latter category , otherwise , the former .this allows us to create a binary mask function associated with bit depth d , it is 0 if the amplitude of the original data sample exceeds the noise floor , and 1 otherwise : this mask acts like a diagonal matrix that left - multiplies a column vector representation of the image .the resulting signal preserves regions of non - truncated low - intensity signal while zeroing all all other amplitudes .its complement , not acts on the noise function to preserve regions of periodic truncated noise while zeroing the low - intensity signal .it is also helpful to consider the literal description length of the samples contained in this region , its bit depth times the number of ones appearing in : , as well as the complexity of the entire signal , , which includes the shape of the region , since this is additional information that needs to be represented .we may now describe an algorithm that calculates critical depths while separating regions of low - intensity signal .this procedure truncates some number ( the bit depth ) of trailing digits from the original signal , separates truncated regions having only leading zeros , and calculates the complexity of the reduced and masked signal plus the complexity ( calculated via recursion ) of the excised low - intensity signal at its critical depth .once this is done , it proceeds to the next bit depth , provided that the maximum depth has not yet been reached . starting with shallow representations having only the most significant bits ( n=0 or 1 , typically ) , we truncate the data to depth n , resulting in the truncated representation of the signal , as well as its truncated residual ( previously noise ) function , which we will call . at this point , the initial truncated signal is compressed using lossless methods , while the mask and its complement are applied to .this results in a residual signal , ( for these pixels , agrees with the original data ) as well as a complementary residual periodic noise function .since it contains only residual noise , taken modulo some power of two , the noise is compressed using lossy methods that are typically based on fourier analysis .the residual signal becomes input for the next iteration .the procedure iterates using a new that is a truncation of the masked signal , as opposed to the original image .let the notation represent an operator that truncates amplitudes to bit depth n. in this notation , , and its residual function is .a new mask is determined from .using the new mask , we produce a new residual signal , , and a new residual noise , . 
is compressed and stored using lossless methods , while is compressed and stored using lossy methods , and the procedure iterates to the next value of n , using as the new signal provided that additional bits exist .if the maximum bit depth has been reached , there can be no further iteration , so is stored using lossy methods .though the separation of signal and noise is now iterative , the criterion for critical depth has not changed , only the that appears in their definition .if is nearly maximal - its largest possible value is the literal length , - the first - order depth has been reached ; if is nearly maximal , the second - order critical depth has been reached .once the desired depth is reached , the iteration may break , discarding and and storing using lossy methods . if higher compression ratios are desired , more bits can be truncated and modeled statistically or with lossy methods .however , the signal introduced to the noise function in such a manner might not fit simple statistical models , and the loss of compressible signal tends to interfere with inference .since it relates an image s most significant bits to important theoretical notions such as kolmogorov minimal sufficient statistic and algorithmic entropy , critical bit depth is the canonical example of critical data representation .one could reorder the image data to define a truncated critical scale , simply sampling every nth point , however , this discards significant bits , which tends to introduce aliasing artifacts dependent on the sampling frequency and the frequency of the underlying signal .these considerations are the topic of the celebrated nyquist - shannon sampling theorem - essentially , the sampling frequency must be at least twice the highest frequency in the signal .this is known as the nyquist rate , as it was stated by nyquist in 1928 before finally being proved by shannon in 1949 . as a result of the nyquist - shannon theorem, a downsampling operation should incorporate a low - pass filter to remove elements of the signal that would exceed the new nyquist rate .this should occur prior to the sampling operation in order to satisfy the sampling theorem . to complicate matters further ,idealized filters ca nt be attained in practice , and a real filter will lose some amount of energy due to the leakage of high frequencies . a low - pass filter based on a discrete fourier transform will exhibit more high - frequency leakage than one based on , for example , polyphase filters . since sampling is not the topic of this paper , we will simply refer to an idealized sampling operator that applies the appropriate low - pass filters to resample data in one or more dimensions .the ability to perform complexity - based inference on a common scale is important since it allows the identification of similar objects , for instance , at different levels of magnification .critical scale is useful as it provides another degree of freedom along which critical points may be optimized , analogous to phase transitions in matter that depend on both temperature and pressure .occasionally , the two objectives may be optimized simultaneously at a triple point that is critical for both bit depth and scale .we now define the critical scale , which we define in terms of operators which resample the image instead of truncating it . 
for some data, a minimal sufficient statistic for scale cannot be reliably selected or interpreted, and hence a critical scale cannot be determined. consider the critical scale of an image at a particular bit depth, which may or may not be the original bit depth of the image. let the linear operator $d_{r,s}$ represent a resampling operation, as described above, applied to an image with a spatial period of r in the x dimension and a period of s in the y dimension. its action on the image is a matrix of resampled amplitudes whose dimensions are reduced by factors of r and s. this operator gives us two possible approaches to scaling. on one hand, given divisibility of the appropriate dimensions, we may vary r and s to resample the image linearly. on the other hand, we may also apply the operator repeatedly to resample the image geometrically, given divisibility of the dimensions by powers of r and s. the former may identify the scale of vertical and horizontal components separately, while the latter identifies the overall scale of the image. we will consider first the overall scale, using the iterated operator, and then the horizontal and vertical components. $d_{r,s}^{m}$, then, is the result of applying this operator m times. let the set used by the structure function contain all the images which are preimages of the resampled image, that is, all images which map to the same reduced image under $d_{r,s}^{m}$. note that in this case, unlike the critical bit depth, the reduced image is an averaging which is not necessarily a prefix of the original image; the original image retains all of its bits, while the reduced image has a factor of $(rs)^{m}$ fewer samples. we may now write the first-order critical scale parameterizing the minimal sufficient statistic; the expression is unchanged from the previous case, the scale maximizing the backward difference of complexity between successive scales. if higher compression ratios are needed, additional signal may be discarded, as described previously. the expression for the second-order critical scale, the maximum of the central second difference of complexity, is also unchanged. as mentioned previously, the repeated application of averaging operators is not always appropriate or possible. we will consider linear scaling parameterized along the horizontal axis, with the understanding that the same operations may be applied to the vertical axis, or to any other index. as such, we equate the scale parameter with the horizontal factor r. the set then contains all the images which are preimages of the horizontally resampled image; note that this set may not be defined for every value of the parameter, owing to divisibility. given this set, the expressions for the maximum fisher information (the minimal sufficient statistic) and for the maximum of the hessian of the hartley information (the critical point) do not change. if a signal is to be compressed at some scale other than its original scale, then it will need to be resampled to its original scale before being added to its decompressed (lossy) noise function. note that in this case the resampled signal is not generally lossless. this smoothing of data may be acceptable, however, since it enables analysis at a common scale. we now consider critical bit depths and scales of multidimensional data. instead of a two-dimensional array containing the amplitudes of an image, we consider an array with an arbitrary number of dimensions. as noted earlier, monophonic audio may be represented as a one-dimensional array of scalar amplitudes, and video data may be represented as a three-dimensional array which also has three channels. this generalizes the results of the previous two sections, which used the case of a two-dimensional image for illustrative purposes. let x represent a tensor of arbitrary rank; its subscripts index coordinates in a multidimensional array whose values are vectors of channels.
each vector component is a fixed-width binary number, and the superscripts index the bits in these numbers, ordered from most significant to least significant. we will first determine the tensor's critical bit depths and then its critical scales. let the set used by the structure function contain all possible tensors whose leading bits agree with those of each channel of the original tensor, up to the chosen depth for that channel. since there are multiple parameters involved in determining the first-order bit depth, a general tensor requires the use of the full fisher-rao metric rather than a single fisher information, where the expectation value in the definition of the metric becomes the identity, as before. for any particular parameter, the metric takes on a maximum with respect to some value of that parameter; this value is the critical depth of the corresponding channel. however, this does not necessarily indicate that the set of per-channel critical depths globally maximizes the metric: the global maximum occurs at some vector of parameter values. if channels having different depths are inconvenient, a single depth may be selected as before, using the one-parameter fisher information of the union of the truncated channels. the second-order critical depth proceeds in a similar manner: the expected hessian of the hartley information becomes a matrix of second differences of complexity, and again the maximum occurs at a vector of parameter values. we will now consider the critical scale of the tensor. let the linear operator represent an idealized resampling of the tensor by a factor of r along each dimension. this operator applies a low-pass filter to eliminate frequency components that would exceed half the new sampling frequency (the nyquist rate) prior to sampling at the reduced frequency. let the set used by the structure function contain all the tensors which are preimages of the rescaled tensor. given this new definition of the set, the definitions of first and second order critical depth do not change: the first-order critical scales maximize (or approximately maximize) the fisher information, and likewise the second-order critical scales maximize the expected hessian of the hartley information. alternately, a single scale parameter could be chosen in each case. hence, the definitions of the first and second order critical scales of a general tensor are identical to their definitions for rank-two images. the above relations are idealized, assuming that k can be evaluated using perfect data compression. since this is not generally the case in practice, as discussed previously, the maxima of the fisher information and of the hessian may be discounted by some tolerance factor to produce the threshold of a set of effectively maximal parameter values; the maximum or minimum values within this set may be chosen as critical parameters. in addition to multimedia data such as audio (m=1), images (m=2), and video (m=3), these relations enable critical compression and decompression, pattern recognition, and forecasting for many types of data. we now have an approach for separating significant signals from noise. encoding the resulting signal follows traditional information theory and lossless compression algorithms. encoding the noise is a separate problem of statistical inference that could assume several variants depending on the type of data involved as well as on its observer. regardless of the details of the implementation, the program is as follows: a statistical model is fit to the noise, its parameters are compressed and stored, and upon decompression a sample is taken from the model. a lossless data compression scheme must encode both a signal and its associated noise.
however, noise is presumed ergodic and hence no more likely or representative than any other noise sampled from the same probability distribution. hence, a lossy perceptual code is free to encode a statistical model (presuming that one exists) for the noisy bits, as their particular values do not matter. this dramatically reduces the complexity of storing incompressible noise; its effective compression ratio may approach 100% by encoding relatively simple statistical models. the most significant bits of the signal are compressed using a lossless entropic code; when the image is decompressed, samples are taken from the model distribution to produce an equivalent instance of noise, and this sampled noise is then added to the signal to produce an equivalent image. as noted in the introduction, maximum likelihood estimates correspond to sufficient statistics. maximum likelihood estimation (mle) has been one of the most celebrated approaches to statistical inference in recent years. there are a wide variety of probability distribution functions and stochastic processes to fit to data, and many specialized algorithms have been developed to optimize the likelihood. a full discussion of mle is beyond the scope of this paper, but the basic idea is quite simple. having computed a minimal sufficient statistic, we wish to fit a statistical model having some finite number of parameters (such as moments of the distribution) to the noisy data. the parameter values leading to the most probable data are selected to produce the most likely noise model. our data compression scheme stores these parameter values, compressed losslessly if their length is significant, in order to sample from the statistical model when decompression occurs. since different regions of data may have different statistical properties, rather than fitting a complex model to the entire noisy string it may be advantageous to fit simpler models to localized regions of data. the description length of the optimal parameters tends to be proportional to the number of parameters stored; if a model contains too many parameters, the size of their representation can approach the size of the literal noise function, reducing the advantage of critical compression. when the image is decompressed, a sample is drawn from the statistical model stored earlier. the problem of sampling has also received considerable attention in recent years, owing to its importance to so-called monte carlo methods. the original monte carlo problem, first solved by metropolis and hastings, dealt with the estimation of numerical integrals via sampling. since then, monte carlo has become somewhat of a colloquial term, frequently referring to any algorithm that samples from a distribution. due to the importance of obtaining a random sample in such algorithms, monte carlo sampling has become a relatively developed field. the box-muller transform is a simple option for converting uniform samples into normally distributed ones, and inverse transform sampling maps uniform samples onto any distribution with a known cumulative distribution function. the ziggurat algorithm is a popular, high-performance sampling algorithm for a broad class of distributions; in all of these methods the quality of the samples rests on an underlying pseudorandom number generator whose sequence repeats only after a very large number of iterations. psychological modeling should be incorporated into the statistical models of the noise component rather than of the signal, since this is where the loss occurs in the compression algorithm.
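the sampling step itself can be illustrated compactly. the sketch below assumes only numpy and uses illustrative names; it shows inverse transform sampling from whatever noise model was stored, together with the maximum likelihood fit for an exponential noise model (whose mle rate is simply the reciprocal of the sample mean), as one hypothetical choice of model.

```python
import numpy as np

def sample_from_cdf(inverse_cdf, shape, seed=None):
    """Inverse transform sampling: map uniform variates through the model's quantile
    function (inverse CDF) to draw noise from the stored distribution."""
    rng = np.random.default_rng(seed)
    return inverse_cdf(rng.random(shape))

def fit_exponential(noise):
    """MLE for an exponential noise model: the rate parameter is 1 / sample mean."""
    return 1.0 / float(np.mean(noise))

# fit the model to observed residual noise, store only lam, and resample at decompression
lam = fit_exponential(np.random.default_rng(0).exponential(0.5, 10_000))
simulated = sample_from_cdf(lambda u: -np.log(1.0 - u) / lam, shape=(256, 256))
```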
since noise cant be learned efficiently , different instances of noise from the same ergodic source will typically appear indistinguishable to a macroscopic observer who makes statistical inferences . furthermore, certain distinct ergodic sources will appear indistinguishable .a good psychological model will contain parameters relevant to the perception of the observer and allow irrelevant quantities to vary freely .it may be advantageous to transform the noisy data to an orthogonal basis , for example , fourier amplitudes , and fit parameters to a model using this basis . the particular model used will depend on the type of data being observed and , ultimately , the nature of the observer .for example , a psychoacoustic model might describe only noise having certain frequency characteristics .the lossy nature of the noise function also provides a medium for other applications , such as watermarking .maximum likelihood estimation applies only when an analytic model of noise is available . in the absence of a model , the noise functionmay be compressed using lossy methods , as described previously , and added to the decompressed signal to reconstruct the original datum .the most obvious obstacle to this procedure is the fact that lossy data compression algorithms do not necessarily respect the intensity levels present in an image .for example , high levels of jpeg compression produces blocking artifacts resembling the basis functions of its underlying discrete cosine transformation . fitting these functions to datamay result in spurious minima and maxima , often at the edges or corners of blocks , which frequently exceeds the maximum intensity of the original noise function by nearly a factor of two . withthe wavelet transforms used in jpeg2000 , the spurious artifacts are greatly diminished compared to the discrete cosine transforms of the original jpeg standard , but still present , so a direct summation will potentially exceed the ceiling value allowed by the image s bit depth . in order to use the lossy representation , the spurious extrema must be avoided .it is not appropriate to simply truncate the lossy representation at the noise floor , as this leads to clipping effects - false regularities in the noise function that take the form of artificial plateaus . directly normalizingthe noise function does not perform well , either , as globally scaling intensities below the noise floor leads to an overall dimming of the noise function relative to the reconstructed signal .a better solution is to upsample the noise function , normalizing it to the maximum amplitude , and storing the maximum and minimum intensities of the uncompressed noise . upon decompression ,the noise function image is de - normalized ( downsampled ) to its original maximum and minimum intensities before being summed with the signal .this process preserves the relative intensities between the signal and noise components .therefore , when constructing codes with both lossless and lossy components , a normalized ( upsampled ) noise function is compressed with lossy methods , encoded along with its original bit depth . 
upon decompression, the noise function is decompressed and re-normalized (downsampled) back to its original bit depth before being added to the decompressed signal. last, but not least, it is worth noting that separating a signal from noise generally improves machine learning and pattern recognition. this is especially true in compression-based inference. complexity theory provides a unified framework for inference problems in artificial intelligence; that is, data compression and machine learning are essentially the same problem of knowledge representation. these sorts of considerations have been fundamental to information theory since its inception. in recent years, compression-based inference experienced a revival following rissanen's 1986 formalization of minimum description length (mdl) inference. though true kolmogorov complexities (or algorithmic prefix complexities) cannot be calculated in any provable manner, the length of a string after entropic compression has been demonstrated to be a proxy sufficient for statistical inference. today, data compression is a central component of many working data mining systems. though it has historically been used to increase effective network bandwidth, data compression hardware has improved in recent years to meet the high-throughput needs of data mining; hardware solutions capable of 1 gbps or more of lempel-ziv compression throughput can be implemented using today's fpga architectures. in spite of its utility in text analysis, compression-based inference remains largely restricted to highly compressible data such as text. the noise intrinsic to stochastic real-world sources is, by definition, incompressible, and this tends to confound compression-based inference. essentially, incompressible data is non-learnable data; removing this data can improve inference beyond the information-theoretic limits associated with the original string. by isolating the compressible signal from its associated noise, we have removed this obstacle to inference using the complexity of stochastic data. given improved compressibility, the standard methods of complexity-based inference and minimum description length may be applied to greater effect. by comparing the compressibility of signal components to the compressibility of their concatenations, we may identify commonalities between signals. a full treatment of complexity-based inference is beyond the scope of this paper (the reader is referred to the literature, particularly the book by li and vitányi), but we reproduce some useful definitions for completeness. given signal components a and b, and their concatenation ab, we may define the conditional prefix complexity k(b|a), which by the chain rule is approximately k(ab) - k(a). this quantity is analogous to the kullback-leibler divergence or relative entropy, and it allows us to measure a (non-symmetric) distance between two strings. closely related to the problem of inference is the problem of induction, or forecasting, addressed by solomonoff. briefly, if the complexity of a string x is k(x), then the universal prior probability is typically dominated by the shortest program, implying $p(x) \approx 2^{-k(x)}$. if x is, for example, a time series, then the relative frequency of two subsequent values may be expressed as a ratio of their universal probabilities; for example, the relative probability that the next bit is a 1 rather than a 0 is $p(x1)/p(x0) \approx 2^{k(x0)-k(x1)}$. clearly, the evaluation of universal probabilities is crucially dependent on compressibility.
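a compressor-based estimate of this conditional quantity is straightforward to sketch. the fragment below (python, with gzip as the complexity proxy; the names are ours) uses the chain-rule approximation k(b|a) ≈ k(ab) − k(a), and subtracts the compressor's fixed header cost, measured on empty input, in the spirit of the overhead correction discussed in the examples section.

```python
import gzip

OVERHEAD = len(gzip.compress(b"", 9))        # fixed header cost of this particular 'computer'

def k_bits(x: bytes) -> int:
    """Complexity proxy: gzip code length in bits, less the compressor's fixed overhead."""
    return 8 * (len(gzip.compress(x, 9)) - OVERHEAD)

def conditional_k(b: bytes, a: bytes) -> int:
    """Chain-rule estimate of K(b | a) ~= K(ab) - K(a)."""
    return k_bits(a + b) - k_bits(a)

def most_similar(a: bytes, search_space):
    """Return the element of the search space minimizing the conditional complexity of the
    query texture a given that element (smaller means more similar)."""
    return min(search_space, key=lambda b: conditional_k(a, b))
```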
as such, separating signal from noise in the manner described also improves the forecasting of stochastic time series. sometimes, one wishes to consider not only the complexity of transforming a into b, but also the difficulty of transforming b into a. in the example presented, an image search algorithm, this is not the case: the asymmetric distances produce better and more intuitive results. however, if one wishes to consider symmetrized distances, the most obvious might be the symmetrized sum of the conditional complexities k(a|b) and k(b|a). this averaging loses useful information, though, and many authors suggest using the max-distance or picture distance instead. when implementing information measures, the desire for the mathematical convenience of symmetric distance functions should be carefully checked against the nature of the application. for example, scrambling an egg requires significantly fewer interactions than unscrambling and subsequently reassembling that egg, and a reasonable complexity measure should reflect this. for these considerations, as well as its many convenient mathematical properties, the conditional prefix complexity is often the best measure of the similarity or difference of compressible signals. empirical evidence suggests that the conditional prefix complexity outperforms either the symmetrized mutual information or the max-distance for machine vision. the reverse transformation should not usually be considered, since it tends to overweight low-complexity signals. we will demonstrate an example sliding-window algorithm which calculates conditional prefix complexities between a search texture, a, and elements b from a search space. the measure is closely related to the universal log-probability that a will occur in a sequence given that b has already been observed. its minimum identifies the most similar search element. it is invariant with respect to the size of elements in the search space and does not need to be normalized. estimating the complexities of signal components from the search space and the complexities of their concatenations with signal components from the texture, we arrive at an estimate of k(a|b) which may be effectively utilized in a wide variety of artificial intelligence applications. the first-order critical point represents the level at which all the useful information is present in a signal. at this point, the transition from order to chaos is essentially complete. for the purposes of artificial intelligence, we want enough redundancy to facilitate compression-based inference, but also to retain enough specific detail to differentiate between objects. the second-order critical point targets the middle of the phase transition between order and chaos, where the transition is proceeding most rapidly and both of these objectives can be met. in the examples section, we will see that visual inference at the second-order critical depth outperforms inference at other depths. for this reason, second-order critically compressed representations perform well in artificial intelligence applications. having described the method, we now apply it to some illustrative examples.
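before turning to those examples, the sliding-window search just described can be sketched in a few lines. this is a rough one-dimensional illustration only: it uses lzma compression length as the complexity proxy, byte strings instead of image rows, and helper names of our own choosing; the experiments reported later use stronger compressors such as paq8.

```python
import lzma

def c(data: bytes) -> int:
    # compressed length as a stand-in for prefix complexity
    return len(lzma.compress(data, preset=9))

def k_cond(a: bytes, b: bytes) -> int:
    # estimate k(a|b) as c(ba) - c(b): the cost of describing a once b is known
    return c(b + a) - c(b)

def sliding_window_search(texture: bytes, haystack: bytes, window: int, step: int = 1):
    """Return the offset whose window minimizes the estimated k(texture | window)."""
    best_offset, best_score = None, None
    for offset in range(0, len(haystack) - window + 1, step):
        b = haystack[offset:offset + window]
        score = k_cond(texture, b)
        if best_score is None or score < best_score:
            best_offset, best_score = offset, score
    return best_offset, best_score

haystack = b"\x00" * 400 + b"abcabcabcabc" * 10 + b"\x00" * 400
texture = b"abcabcabcabc" * 5
print(sliding_window_search(texture, haystack, window=len(texture), step=20))
```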
we will start with trivial models and household data compression algorithms, proceeding to more sophisticated implementations that demonstrate the method's power and utility. these examples should be regarded as illustrations of a few of the many possible ways to implement and utilize two-part codes, rather than as specific limitations of the method. one caveat to the application of complexity estimates is the fact that real-world data compressors produce nonzero sizes for strings which contain no data, as a result of file headers, etc. we will treat these headers as part of the complexity of compressed representations when the objective of the compression is data compression, as they are needed to reconstruct the data. when calculating derivatives of complexity to determine critical points, this complexity constant does not affect the result, as the derivative of a constant is zero. nor does it affect the conditional prefix complexity, since it cancels when taking a difference of complexities. for certain applications in artificial intelligence, however, small fluctuations in complexity estimates can lead to large differences in estimates of probabilities. when the quality of complexity estimates is important, as is the case for inference problems, we first compress empty data to determine the additive constant associated with the compression overhead, and this constant is subtracted from each estimate of k(x). formally, the data compression algorithm is regarded as a computer and its overhead is absorbed into this computer's turing equivalence constant. this results in more useful estimates of complexity, which improve our ability to resolve low-complexity objects under certain complexity measures. first-order critical data compression represents a level of detail at which a signal is essentially indistinguishable from the original when viewed at a macroscopic scale. we will show that second-order critical data compression produces representations which are typically slightly more lossy but significantly more compact. as mentioned earlier, any depth could potentially be used to split a two-part code, subject to the constraints of the intended application. since images may be displayed in a paper more readily than video or audio, we first consider the second-order critical depth of a simple geometric image with superimposed noise. this is a 256x256 pixel grayscale image whose pixels have a bit depth of 8. the signal consists of a 128x128 pixel square having intensity 15 which is centered on a background of intensity 239. to this signal we add, pixel by pixel, a noise function whose intensity is one of 32 values uniformly sampled between -15 and +16. starting from this image, we take the n most significant bits of each pixel's amplitude to produce a family of truncated images, where n runs from 0 to the bit depth, 8. these images are visible in figure 1, and the noise functions that have been truncated from these images are showcased in figure 2. to estimate the complexity of each truncated signal, which is needed to evaluate the critical depth, we will compress the signal using the fast and popular gzip compression algorithm and compress its residual noise function into the ubiquitous jpeg format. we will then progress to more accurate estimates using slower but more powerful compression algorithms, namely paq8 and jpeg2000. the results are tabulated below, with n in the left column and the size of gzip's representation of the truncated signal, our estimate of its complexity, in the right column.
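a minimal version of this experiment can be reproduced with standard python tools. the sketch below builds a synthetic square-plus-noise image as described above and reports the gzip-compressed size of the n most significant bits at each depth; the exact sizes will differ from those tabulated, since they depend on compressor settings and on the particular noise sample.

```python
import gzip
import numpy as np

rng = np.random.default_rng(1)

# 256x256, 8-bit image: dark 128x128 square on a light background, plus uniform noise
signal = np.full((256, 256), 239, dtype=np.int32)
signal[64:192, 64:192] = 15
noise = rng.integers(-15, 17, size=signal.shape)          # 32 levels in [-15, 16]
image = np.clip(signal + noise, 0, 255).astype(np.uint8)

def truncate_to_msbs(img: np.ndarray, n: int) -> np.ndarray:
    """Keep only the n most significant bits of each 8-bit pixel."""
    mask = (0xFF << (8 - n)) & 0xFF if n > 0 else 0
    return (img & mask).astype(np.uint8)

for n in range(0, 9):
    truncated = truncate_to_msbs(image, n)
    size = len(gzip.compress(truncated.tobytes(), compresslevel=9))
    print(f"depth {n}: gzip size {size} bytes")
```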
(table: depth n versus the size of the gzip representation of the truncated signal.) if we allow a tolerance factor within the maximum set, as before, we see that the first-order critical point is at depth 6 and the second-order critical point is at depth 4. we will see that inference at depth 4 outperforms inference at lower or higher bit depths, as predicted. a sliding window is applied to the image to produce string b, and the conditional prefix complexity is calculated. this is done for signals having bit depths of 1 through 8. for visibility, the resulting filter is applied to the image four times and the resulting image is normalized in intensity. the results are shown in figures 24-31: pattern search at depths 1 through 8 (panels for depths 1 and 2, 3 and 4, 5 and 6, and 7 and 8). depth 4 has fewer false positives than the other depths, as its greater economy translates into superior inference. at lower depths, more false matches occur, since more of the image looks similar at those depths. the full image at depth 8 has strong false matches and inferior performance even though it contains all available information, because it gives too much weight to bits which do not contain useful information about the signal. this tends to overweight, for instance, short and possibly irrelevant literal matches. signals at depths 5-8 also exhibit this phenomenon to a lesser extent. a critical depth (or other parameter, such as scale) represents critical data in two senses of the word: on one hand, it measures the critical point of a phase transition between noise and smoothness; on the other, it quantifies the essential information content of noisy data. such a point separates lossless signals from residual noise, which is compressed using lossy methods. the basic theory of using such critical points to compress numeric data has now been developed. this theory applies to arrays of any dimension, so it applies to audio, video, and images, as well as many other types of data. furthermore, we have demonstrated that this hybridization of lossless and lossy coding produces competitive compression performance for all types of image data tested. whereas lossy transformation standards such as jpeg2000 sometimes include options for separate lossless coding modes, a two-part code adapts to the data and smoothly transitions between the two types of codes. in this way two-part codes are somewhat unique in being efficient for compressing both low-entropy and high-entropy sources. the optional integration of maximum likelihood models and monte-carlo-type sampling is a significant departure from deterministic algorithms for data compression and decompression. if sampling is employed, the decompression algorithm becomes stochastic and non-deterministic, potentially producing a different result each time decompression occurs. the integration of statistical modeling into such an algorithm enables two-part codes which are engineered for specific applications. this can lead to much higher levels of application-specific compression than can be achieved using general-purpose compression, as has been illustrated using a simple image corrupted by noise. the test images presented use a bit depth of 8 bits per channel, as is standard in most of today's consumer display technology. however, having more bits per sample (as in the proposed hdr image standard, for instance) means that the most significant bits represent a smaller fraction of the total data.
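the critical depths quoted above can be read off a size-versus-depth table mechanically. the sketch below encodes one plausible reading of that procedure: the first-order point is taken as the shallowest depth whose first difference lies within a tolerance of the maximum first difference, and the second-order point is defined analogously with second differences; the numeric sizes are placeholders of ours, not measured values.

```python
def critical_depths(sizes, tolerance=0.95):
    """Estimate (first-order, second-order) critical depths from per-depth compressed sizes.

    sizes[n] is the compressed size of the signal truncated to its n most significant bits.
    """
    d1 = [sizes[n + 1] - sizes[n] for n in range(len(sizes) - 1)]   # cost of adding bit n+1
    d2 = [d1[n + 1] - d1[n] for n in range(len(d1) - 1)]

    first = min(n for n, v in enumerate(d1) if v >= tolerance * max(d1)) + 1
    second = min(n for n, v in enumerate(d2) if v >= tolerance * max(d2)) + 1
    return first, second

# illustrative byte counts for depths 0..8 -- placeholder numbers, not measured values
sizes = [35, 60, 150, 600, 2600, 9500, 17500, 25600, 33700]
print(critical_depths(sizes))   # -> (6, 4) with these placeholder values
```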
with more bits per sample, the utility of a two-part code is therefore increased at the higher bit depth, since more of the less significant bits can be highly compressed using a lossy code, while the significant bits still use lossless compression. likewise, high-contrast applications will benefit from the edge-preserving nature of a two-part code. frequency-based methods suffer from the gibbs phenomenon, or ringing, which tends to blur high-contrast edges. in the approach described, this phenomenon is mitigated by the limited use of such methods. a two-part code should therefore perform well in applications in which the fidelity of high-contrast regions is important. as suggested previously, a two-part code can outperform lossy transforms by many orders of magnitude for computer-generated artwork, cartoons, and most types of animation. the low algorithmic complexity intrinsic to such sources leads to efficiently coded signals. in most test cases, critical compression also outperforms jpeg2000 by orders of magnitude for black-and-white images. without the advantage of separating color data into chroma subspaces, the jpeg algorithms seem much less efficient. for this reason, two-part codes seem to outperform jpeg coding in many monochrome applications. jpeg2000 does perform well for its intended purpose: generating a highly compressed representation of color photographs. for most of the color test photographs, a two-part code overtakes jpeg2000 at high quality levels. the point at which this occurs (if it does) varies by photograph. at relatively low bitrates, jpeg2000 usually outperforms a two-part code, but usually by less than an order of magnitude. all examples presented to this point were directly coded in the rgb color space. since the theory of two-part codes applies to any array of n-bit integers, we could have just as easily performed the analysis in the ycbcr color space, like the jpeg algorithms, which often improves the redundancy apparent in color data. in the second part of the appendix, ycbcr-space critical compression will be compared to rgb-space critical compression for color photographs. one unique aspect of two-part data compression is its ability to code efficiently over a wide variety of data. it can efficiently code both algorithmically generated regular signals and stochastic signals from empirical data. the former tend to be periodic, and the random aspects of the latter tend to exhibit varying degrees of quasiperiodicity or chaos. however, the creation of periodicity or redundancy is essential to the comparison operation: the prefix complexity involves concatenation, which becomes similar to repetition if the concatenated objects are similar. concatenation can create periodicity from similarities, even if the objects being concatenated have no significant periodicity within themselves, as may be the case with data altered by noise. the inferential power of critical compression derives from its ability to compress periodicity which would otherwise be obscured by noise.
in spite of its ultimately incalculable theoretical underpinnings, the human eye intuitively recognizes a critical bit depth from a set of truncated images. the mind's eye intuitively recognizes the difference between noisy, photographic, "real world" signals and smooth, cartoon-like, artificial ones. human visual intelligence can also identify the effective depth from the noisy bits: it is the depth beyond which features of the original image can be discerned in the noise function. conversely, given a computer to calculate the critical point of an image, we can determine its critical information content. since noise cannot be coded efficiently due to its entropy, an effective learner, human or otherwise, will tend to preferentially encode the critical content. this leads directly to more robust artificial intelligence systems which encode complex signals in a manner more appropriate for learning. this work was funded entirely by the author, who would like to acknowledge his sister, elizabeth scoville, and his parents, john and lawana scoville. a patent related to this work is pending. t. acharya and p. tsai, _jpeg2000 standard for image compression_, wiley, hoboken, nj, 2005. g. j. chaitin, _algorithmic information theory_, cambridge university press, 1987. t. m. cover and j. a. thomas, _elements of information theory_, wiley, 1991. r. a. fisher, _on the mathematical foundations of theoretical statistics_, phil. trans. of the royal society of london, series a 222 (1921), 309-368. m. gell-mann and s. lloyd, _information measures, effective complexity, and total information_, complexity 2/1 (1996), 44-52. r. v. l. hartley, _transmission of information_, bell system technical journal (july 1928), 535-563. j. d. jackson, _classical electrodynamics_, third ed., john wiley and sons, new york, 1999. d. knuth, _the art of computer programming, vol. 2_, addison-wesley, reading, ma, 1998. m. li and p. vitányi, _an introduction to kolmogorov complexity and its applications_, second ed., springer-verlag, new york, 1997. g. marsaglia and w. w. tsang, _the ziggurat method for generating random variables_, journal of statistical software 5 (2000). h. nyquist, _certain topics in telegraph transmission theory_, trans. aiee 47 (1928), 617-644. j. scoville, _on macroscopic complexity and perceptual coding_, arxiv:1005.1684. c. e. shannon, _the mathematical theory of communication_, bell labs tech. j. 27 (1948), 379-423, 623-656. c. e. shannon, _communication in the presence of noise_, proc. institute of radio engineers 37 (1949), 10-21. w. h. zurek, _algorithmic randomness and physical entropy_, physical review, ser. a 40(8) (1989), 4731-4751.
the image repository maintained by the university of waterloo's fractal coding and analysis group contains 32 test images. the collection includes a wide variety of content, with photographic and computer-generated content in both color and black and white. two-part coding dominates direct lossy image coding for the majority of these images, demonstrating the power and versatility of critical data compression using two-part codes. the images which perform better with direct lossy coding are generally color photographs, with jpeg2000 having its greatest advantage at low quality levels. two-part codes seem to have an advantage for the other images, sometimes by multiple orders of magnitude. the method described was applied to 24 uncompressed 24-bit photographic images from a sample kodak photo cd. we compare critical compression at various bit depths in the rgb color space using paq8l and jpeg2000, as before, against a ycbcr-space encoding which critically compresses a luma (y) channel at various bit depths using paq8l and the chroma channels (cb and cr) using jpeg2000. for this transformation, the two chroma weighting parameters are both equal to 1/3, making the luma channel a simple average of the corresponding red, green, and blue color values. the results (with the rgb-space curves above and the ycbcr-space curves below) show that while jpeg2000 retains the advantage at low to moderate quality levels, the critical luma / lossy chroma scheme is usually more efficient at moderate to high quality levels than direct jpeg2000 coding or critical compression in rgb space.
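the particular luma / chroma split used here, with equal chroma weights so that the luma channel is simply the per-pixel mean of the red, green and blue values, is easy to express directly. the sketch below is our own illustration of one possible forward and inverse transform (chroma stored as unscaled offsets from the luma, for simplicity); it is not the paper's code, and in the actual scheme the luma channel would be critically compressed while the chroma channels go through a lossy coder.

```python
import numpy as np

def rgb_to_avg_luma_chroma(rgb):
    """Forward transform with equal weights: y = (r + g + b) / 3, chroma = offsets from y."""
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    y = (r + g + b) / 3.0
    cb, cr = b - y, r - y
    return y, cb, cr

def avg_luma_chroma_to_rgb(y, cb, cr):
    """Inverse transform: recover r and b from the chroma offsets, then g from the mean."""
    r, b = cr + y, cb + y
    g = 3.0 * y - r - b
    return np.stack([r, g, b], axis=-1)

rgb = np.random.default_rng(2).uniform(0, 255, size=(4, 4, 3))
y, cb, cr = rgb_to_avg_luma_chroma(rgb)
assert np.allclose(avg_luma_chroma_to_rgb(y, cb, cr), rgb)  # lossless round trip
```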
a new approach to data compression is developed and applied to multimedia content. this method separates messages into components suitable for both lossless coding and lossy or statistical coding techniques, compressing complex objects by separately encoding signals and noise. this is demonstrated by compressing the most significant bits of data exactly, since they are typically redundant and compressible, and either fitting a maximally likely noise function to the residual bits or compressing them using lossy methods. upon decompression, the significant bits are decoded and added to a noise function, whether sampled from a noise model or decompressed from a lossy code. this results in compressed data similar to the original. signals may be separated from noisy bits by considering derivatives of complexity in a manner akin to kolmogorov's approach or by empirical testing. the critical point separating the two represents the level beyond which compression using exact methods becomes impractical. since redundant signals are compressed and stored efficiently using lossless codes, while noise is incompressible and practically indistinguishable from similar noise, such a scheme can enable high levels of compression for a wide variety of data while retaining the statistical properties of the original. for many test images, a two-part image code using jpeg2000 for lossy compression and paq8l for lossless coding produces less mean-squared error than an equal-length jpeg2000 representation. for highly regular images, the advantage of such a scheme can be tremendous. computer-generated images typically compress better using this method than through direct lossy coding, as do many black-and-white photographs and most color photographs at sufficiently high quality levels. examples applying the method to audio and video coding are also demonstrated. since two-part codes are efficient for both periodic and chaotic data, concatenations of roughly similar objects may be encoded efficiently, which leads to improved inference. such codes enable complexity-based inference in data for which lossless coding performs poorly, enabling a simple but powerful minimal-description-based approach to audio, visual, and abstract pattern recognition. applications to artificial intelligence are demonstrated, showing that signals using an economical lossless code have a critical level of redundancy which leads to better description-based inference than signals which encode either insufficient data or too much detail.
given the rapid current and expected growth in hspa and lte - based networks and in the number of mobile devices that use such networks to download data - intensive , multimedia - rich content , the need for qos - enabled connection management is vital .however , the non - uniform distribution of users and consequent imbalance in usage of radio resources leads to an existence of local areas of under- and over - utilization of these resources in the network .this phenomenon results in challenging network management issues .load balancing is an important technique that attempts to solve such issues , and occurs when a centralized network controller intelligently distributes connections from highly congested cells to neighboring cells which are less occupied .this allows for an increase in network subscriber satisfaction , as more subscribers meet their qos requirements .furthermore , it allows for an increase in overall channel utilization .load balancing is popular among the academic community and has been explored for many years as described , for example , in earlier works such as , as well as more recently in , and many other papers . for simplicity , in the previous treatment of load balancing, it has been assumed that the connection admission process can be neglected . in cellular networks, however , each new connection needs to first send a request to the serving base station ( bs ) through some predefined control channel . in 3gpp standards , this control channel is the physical random access channel ( * ? ? ?2.4.4.4 ) , ( * ? ? ?the success of a connection request by a user is dependent on multiple factors including the number of requesting users ; the pairwise channel quality between the user and the serving base station ( measured , for example , in ber or outage probability ) ; and the actual control channel access technique itself .until now the impact of random access overhead on load balancing performance is not well understood .more specifically , the quantitative relationship between the random access phase length and the user blocking probability in a load balancing - enabled cellular system is unknown .the present paper aims to provide a framework to address this gap .it provides a set of associated results that describe the benefits of load balancing in cellular networks in high detail , using realistic scenarios and network configurations . at the highest level, the approach is based on a markov model in which the state at each point in time provides a full description of user connectivity , and the transitions are determined by a range of factors described below .while many previous studies have also applied markov methods to model network traffic , the approach described here is differentiated from previous work in multiple respects .first , as noted , it incorporates the call admission process , i.e. 
the process of requesting a connection before the actual assignment of a bs resource .second , specific consideration is given to channel quality information , which in turn has a direct effect on the number of successful connections and terminations thereby having an impact on load balancing performance metrics .third , it incorporates priority differentiation in the load balancing process .we use the prioritization approach , where users that do not have access to multiple base stations get a higher priority in using available cell resources than users that do .more generally , the model allows substantial flexibility with respect to traffic types , admission control , densities of user population , and other parameters , but is also specifically constructed with the function of considering optimal network policy decisions with respect to load balancing .the rest of the paper is organized as follows .the system model is introduced in section [ sec : system_model ] , while the analytical model is introduced in section [ sec : numerical_analysis ] .the numerical results are presented in section [ sec : numerical_results ] .lastly , the paper is concluded in section [ sec : conclusions ] .we consider a cellular system where two base stations ( bss ) are positioned such that they create a region of overlap in coverage . naturally , while cellular systems typically have far more than two bss , a reduction to a two - bs system for analytical reasons enables a tractable analytical framework while still allowing exploration of a large number of microscopic parameters to use in optimizing network performance . as our modelis of microscopic nature , the extension to a multi - bs system is beyond the scope of this paper .we strongly emphasize that consideration of load balancing in the context of a two - bs system only has been used extensively and successfully in previous treatments , e.g. .cell 1 has available basic bandwidth units , as referred to in or more commonly referred to as channels , and cell 2 has available channels .the throughput of every channel in each cell is the same and equal to / second .we assume that channels are mutually orthogonal and that there is no interference in the set of channels belonging to cells 1 and 2 .each bs emits a signal using omnidirectional antennas and we assume a circular contour signal coverage model , in which full signal strength is received within a certain radius of the bs , and no signal is available beyond that radius , as used in e.g. .each mobile terminal , referred to as user equipment ( ue ) , remains at fixed positions following an initial ue placement process , with each ue located in one of three non - overlapping regions as shown in fig .[ fig : system_model ] . ues are in group 1 and have access only to one bs , ues are in group 2 and have access only to the second bs , and ues are in group 3 and can potentially access either bs .such a non - homogenous ue placement has been considered , for example in , which allows for a tractable analysis of the considered system and considers all important groups participating in the load balancing process .the non - homogenous case of ue distribution is the most widespread because ues are generally distributed non - uniformly over a cellular area .ues in group 3 are in the region of overlap in coverage between the two cells , also known as the traffic transferable region ( ttr ) . 
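the three-group placement just described follows directly from the circular contour coverage model: a ue that falls inside both coverage circles is in the ttr, otherwise it belongs to the single cell that covers it. the fragment below is a small illustration of that classification with placeholder coordinates and radii of our own choosing.

```python
import math

def classify_ue(ue_xy, bs1_xy, bs2_xy, r1, r2):
    """Assign a ue to group 1, 2 or 3 under the circular contour coverage model.

    group 1: covered only by bs 1; group 2: only by bs 2; group 3: covered by both (ttr).
    Returns None if the ue is outside both coverage areas.
    """
    in1 = math.dist(ue_xy, bs1_xy) <= r1
    in2 = math.dist(ue_xy, bs2_xy) <= r2
    if in1 and in2:
        return 3
    if in1:
        return 1
    if in2:
        return 2
    return None

# two overlapping cells of radius 500 m whose centres are 800 m apart
print(classify_ue((100, 0), (0, 0), (800, 0), 500, 500))   # -> 1
print(classify_ue((400, 0), (0, 0), (800, 0), 500, 500))   # -> 3 (ttr)
print(classify_ue((700, 0), (0, 0), (800, 0), 500, 500))   # -> 2
```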
since only one ue can occupy a channel at a time , a maximum of ues can be connected to both cells in the system at any given time .we assume that ues from groups 1 and 3 are initially registered to cell 1 ( serving as the overloaded cell ) , while ues in group 2 are initially registered to cell 2 .time is slotted and the minimum time unit is a frame length of seconds .connections and channel conditions are assumed to remain constant for the duration of a frame , though they will in general vary from frame to frame .moreover , we assume that the connection and termination processes for ues are directly dependent on the channel states experienced between each group of ues and the bss they are connected to .we also assume that channel states are binary and independent from slot to slot , much as occurs in , and that all of the ues in each group experience the same channel quality to a given bs .therefore , in any given time slot a ue is either experiencing a good state with probability , ( denotes the particular pair - wise connection between group of ues and associated bs , and denotes the downlink and uplink , respectively ) , or a bad state , with probability .the value of is dependent on the distance between ues in group and bs , which is denoted as . in our analysiswe use the distance as an input to a combined path loss and shadowing propagation model . because of the strict boundaries between groups of ues, we assign priorities on a per group basis . a single ,higher priority is given to all ues in groups 1 and 2 because there is no flexibility to reassign them to a different bs ; group 3 ues can potentially be reassigned and thus given a lower priority . a very similar priority model has been used in other treatments of load balancing .for example , in , newly arriving connections in the non - ttr are given first priority to acquire channels from their serving bss , while the connections from the ttr are assigned to the remaining channels .our model allows for the implementation of a wide range of scenarios that require such traffic prioritization .one potential application is for the modeling of networks where load balancing traffic originating from ues in the ttr has lower priority than non - balanced connections due to several factors including a lower average channel quality , qos requirements , non - uniform spatial distribution of traffic classes , or cell dwell time . furthermore , it allows for modeling integrated hybrid cellular / wlan / ad hoc networks as discussed in , where non - cellular terminals in the ttr have a lower priority than cellular ues , and hierarchical cellular systems , where members of different tiers have independent priorities .finally , it enables the modeling of femtocell traffic prioritization , where ues in groups 1 and 3 are those in the closed subscriber group ( csg ) , while ues in the ttr are neighboring ues outside of the csg . 
in the connection process a ue first attempts to connect to the bs it is initially registered to by requesting a connection through a random access channel .we assume a frequency division duplex transmission mode , where control and data traffic are transmitted and received simultaneously .each ue generates a connection request with probability .a connection is requested randomly in one of non - overlapping , time slotted control resources , unique to group of ues .in other words , each group has a unique set of sub slots within a frame during which ues may , but are not required to , request a connection .the random access phase length is equal to slot length .collisions between connection requests from ues in the same group are possible .the random access procedure considered in this work shares features of the 3gpp - based cellular network standards , which use the physical random access channel ( prach ) , mapped on a one - to - one basis to the logical random access channel ( rach ) .rach uses the s - aloha protocol and , in relation to the priorities assumed in this paper , allows the prioritization of connection requests based on active service classes ( asc ) which are unique to each ue , and can be adapted by the 3gpp - mac layer once the ue are in connected mode ( * ? ? ?2.4.2.6 ) .the bs advertises itself to the ues within range through the broadcast channel using signatures ( 3gpp release 99 , e.g. umts ) , subcarriers ( 3gpp release 8 , e.g. lte ) , or time slots , which each asc can in - turn use for connection requests on rach . the adaptation of the asc is performed in the time intervals predefined by the operator . for the purpose of our paperwe assume that the bss collectively , through the radio network controller , map the received signal from every registered ue to an associated asc .we assume a zero - persistence protocol , i.e. a collision during a connection request implies that connections are lost , and also ues do not retry to generate another dependent connection . due to this assumption a power ramping process , i.e. feedback from the ue to the bs on an unsuccessful connection request ( * ? ? ?ii - b ) , is redundant . to isolate the impact of each group of ues on collision rates ,we assume mutually exclusive rach resources assigned to each asc ( * ? ? ?analysis of prach performance in isolation can be found in .a connection request is granted during the connection arrangement process if a good channel state occurs between the ue and its associated bs at the moment of the request , and if no collisions occur between multiple requests from different ues .once a connection is established , the bs randomly selects a channel and assigns the connected ue to it .the ue then begins to receive downlink data .ues terminate a connection with probability , where is the average connection transfer size and is the average packet size given in bits .we assume that the transfer size is at least one frame long .a connection terminates either when a transmission completes , or when the channel is in a bad state during transmission . in the case of a ue in group 3 ,if a connection request is successful and there are no resources available in cell 1 , we assume that the radio network controller performs load balancing by transferring the call from cell 1 to cell 2 . 
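the zero-persistence random access step described above is easy to simulate. the sketch below is a monte carlo illustration of our own making: each idle ue requests with probability p, picks one of l sub slots uniformly at random, and a request survives only if it is alone in its slot and the channel toward the serving bs happens to be good. for simplicity the channel state is drawn per request here, whereas the model assumes one shared state per group per frame, and none of the parameter values are taken from the paper.

```python
import random
from collections import Counter

def simulate_successes(n_idle, p_request, n_slots, p_good, trials=100_000, seed=3):
    """Empirical distribution of the number of granted requests in one frame."""
    rng = random.Random(seed)
    counts = Counter()
    for _ in range(trials):
        # each idle ue independently decides to request and picks a sub slot
        slots = [rng.randrange(n_slots) for _ in range(n_idle) if rng.random() < p_request]
        occupancy = Counter(slots)
        # zero persistence: a request succeeds only if its slot is uncontended
        # and the channel toward the serving bs is in a good state
        successes = sum(1 for s in slots
                        if occupancy[s] == 1 and rng.random() < p_good)
        counts[successes] += 1
    return {k: v / trials for k, v in sorted(counts.items())}

print(simulate_successes(n_idle=10, p_request=0.3, n_slots=5, p_good=0.8))
```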
to avoid overloading cell 2 and protecting ues that are already registered to it , ues in group 3 can access a maximum of channels from cell 2 , where .ues in group 3 have access to an additional shared channels , therefore they have access to a total of channels . using the nomenclature of ( * ? ? ?* sec . 2 and 3 )this load balancing scheme belongs to a class of direct load balancing schemes .it is closest in operation to direct retry .since we allow at most available channels to offload traffic from cell 1 to cell 2 ( as in simple borrowing scheme ( * ? ?2 ) ) we denote our scheme as _ direct retry with truncated offloading channel resource pool _ ( abbreviated as dr ) . with scheme reduces to classical direct retry , while for it reduces to a system in which no load balancing takes place . in our model we do not use a take - back process .that is , once connections from group 3 are offloaded onto cell 2 , they remain connected to cell 2 during the transmission despite whether or not resources have been freed in cell 1 . in the authors remark that the take - back process , although more fair to cell 2 because it minimizes blocking at cell 2 , is not always advantageous to the network due to the high signaling load that accompanies it . additionally , as in , we do not use queuing , so there is no consideration of a call give - up process .moreover , we do not allow preemption of connections from the ttr by connections that have access to channels only from their respective bss .note that all variables introduced in this section , as well as all other variables used in this paper are summarized in table [ tab : variables ] ..summary of the variables used in the paper : functions ( top ) and variables ( bottom ) [ cols=">,<,^",options="header " , ] [ tab : variables ]let denote a state of a markov system , where denotes the number of resources used by group 1 ues , and denote the number of resources used by group 3 ues associated with cell 1 and 2 , respectively , and denotes the number of resources used by group 2 ues .then the steady state probabilities can be denoted as .note that , , , and .these conditions govern what states are possible in the transition probability matrix .we define the state transition probability as where subscripts and denote the current and the previous time slots , respectively .this allows for computation of the transition probability matrix required to obtain , which is in - turn used to compute the performance metrics of the load balancing - enabled cellular system . in the subsequent sections we describe the process of deriving the transition probability .we begin by explaining the computation process for the channel quality , and then focus on the derivation of the functions that support ( [ eq : rabcd ] ) . in the downlink ,the probability of a ue belonging to group and receiving a good signal from bs , is defined as where is the signal reception threshold for the downlink , expressed as the minimum required received power of the received signal , and is the distribution of the signal received at group , which is at a distance of from bs . as an example , we consider an environment with path loss and shadowing , where is expressed in ( * ? ?2.25 ) as where is a unit - less constant dependent on both the antenna characteristics and an average channel attenuation , given as ( * ? ? ?* eq . 
2.41 ), is the bs transmitted power ( which is assumed to be the same for both base stations ) , is the distance of the ue in group located furthest from bs ; is the reference distance for the bs antenna far - field ; is the log - normal shadowing variance given in db ; is the path loss exponent ; and the function is defined in the usual manner as in the uplink , the probability that a good signal is received by bs from a ue in group is , and is expressed in the same manner as equations ( [ eq : optage ] ) and ( [ eq : wxy ] ) , by replacing all variables having superscript with , where denotes the ue transmission power ; denotes the constant for the ue antenna ; is the reference distance for the ue antenna far - field ; and is the signal reception threshold for the uplink . the downlink and uplink channel quality information governs the success rate of the connection admission process , as well as the duration of a downlink transmission . an important feature of the model is the consideration of connection admission in the load - balancing process .this process is a function of the total number of ues , the number of ues receiving data from their respective serving bss , the pairwise channel quality between the ues and its serving bss , and the underlying random access algorithm .the probability that new connections have successfully requested downlink data given currently active connections from group , with a random access channel consisting of time slots is where is the probability of a connection request by an individual ue in group ; and is the probability that among ues requesting a connection , were successful in obtaining a resource .note that the reference to in is omitted due to space constraints , keeping in mind that for , , for , , for , , and for , . for consistency with cellular networks such as 3gpp, we consider a prach - like control channel , for which can be described in the manner of ( * ? ? ?( 3 ) ) note that depending on the assumption of how collisions are resolved , different definitions of in equation ( [ eq : prnk ] ) can be applied when calculating the connection arrangement probability according to ( [ eq : sij ] ) .once a ue successfully requests a connection from the serving bs , a downlink transmission is started provided that at least one free channel is available for the ue . the connection terminates when the bs finishes transmitting data to the ue or when the downlink signal received by the ue is in outage .the probability that connections from active connections at group terminate is where is the inverse of the average packet length , accounting for truncation of some packets due to a bad channel quality .again , the indices have been omitted for notational simplicity in the symbol , assuming that the same relationship between and as given in section [ sec : connection_arrangement ] holds . using the definitions of the arrangement and termination probabilities , expressed in ( [ eq : sij ] ) and ( [ eq : tij ] ) , respectively , we can finally introduce the state transition probabilities for the complete model .the transition probability is constructed using the termination and arrangement probability definitions and the respective relationship between the variables of these two definitions ( which are dependent on the start and end states of the transition ) . due to the complexity of the derivation we begin with a highly simplified example . 
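before turning to that simplified example, it is worth noting that the good-state probability in ( [ eq : optage ] ) is simply the probability that the shadowed received power exceeds the reception threshold, which under the combined path loss and log-normal shadowing model reduces to a single q-function evaluation of the gap between the mean received power and the threshold. the fragment below is a numerical illustration with placeholder parameter values of our own choosing, not the values used in the numerical results section.

```python
import math

def q_function(x: float) -> float:
    """Gaussian tail probability Q(x) = P(Z > x) for a standard normal Z."""
    return 0.5 * math.erfc(x / math.sqrt(2))

def good_state_probability(p_tx_dbm, k_db, gamma, d, d0, sigma_db, p_min_dbm):
    """P(received power >= threshold) under simplified path loss plus log-normal shadowing.

    Mean received power (dBm) = p_tx + k_db - 10*gamma*log10(d/d0); shadowing is a
    zero-mean Gaussian with standard deviation sigma_db on the dB scale.
    """
    mean_rx_dbm = p_tx_dbm + k_db - 10.0 * gamma * math.log10(d / d0)
    return q_function((p_min_dbm - mean_rx_dbm) / sigma_db)

# placeholder numbers: 1 W transmit power, free-space-like constant, urban path loss exponent
print(good_state_probability(p_tx_dbm=30.0, k_db=-31.5, gamma=3.5,
                             d=300.0, d0=1.0, sigma_db=8.0, p_min_dbm=-100.0))
```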
to facilitate understanding the derivation of the complete state transition probabilities, we first consider a network in which no load balancing occurs and all of the ues are in group 1, such that and . the state of the markov chain then simplifies to and the transition probability becomes , where in ( [ eq : rat-1at ] ) the case , denotes the transition from a higher to a lower channel occupancy, subject to the constraint that the number of channels occupied in the end state cannot exceed the total bs capacity. the number of terminating ues is set to compensate for the ues that generate successful connections. the case , denotes the transition from a lower to a higher channel occupancy (given, again, that the number of occupied channels is less than the total bs capacity). in this case ues from group 1 need to generate exactly as many connections as given by the end state, while also generating enough connections to compensate for the total number of terminations. lastly, for the case of , the end state equals the total channel capacity. the first term in the definition of this transition probability includes exactly the number of connections needed to reach the end state, once again compensating for terminations. the second term accounts for all successful connections that exceed those needed to reach the end state, and which therefore will not be admitted. as expected, the current state of the ues in each group has a strong impact on the allowable future states in all groups. for example, newly generated connections from ues in group 3 are initially assigned to cell 1 and can only be offloaded to cell 2 once all channels on cell 1 are occupied. therefore, in the process of constructing the complete set of transition probabilities, not only is it necessary to enumerate all conditions that govern these possible transitions, as in ( [ eq : rat-1at ] ), but it is also important to consider the relationship between the possible combinations of termination and arrangement probabilities. before presenting the general solution, we introduce supporting functions that simplify the description of the transition probabilities. first, because the system is composed of three groups, where groups 1 and 2 have the same level of priority, we define a function that governs the transition probabilities for these groups, i.e. where when and , otherwise . variables and are supporting parameters that will be replaced by the respective variables of ( [ eq : rabcd ] ), once we derive general formulas for the transition probabilities.
the ranges of and will be defined in the respective transition probabilities shown later .note that ( [ eq : alpha ] ) resembles ( [ eq : rat-1at ] ) , except for the introduction of the indicator function .for the remaining groups , we define the following supporting functions which denote the possible termination probabilities for group 3 note that the termination probabilities for group 3 in ( [ eq : theta ] ) are composed of individual termination function as given in ( [ eq : tij ] ) .one reason for this is that different channel qualities are experienced by ues in the ttr , resulting in unequal termination probabilities , depending on whether these ues are connected to bs 1 or bs 2 .lastly , we define which denotes the possible arrangement probabilities for group 3 .note that the variables , and in ( [ eq : alpha ] ) , ( [ eq : theta ] ) and ( [ eq : xi ] ) are the enumerators .given the above , we can identify two major states of the system as follows : when all channels are occupied on both cells ( called an edge state and having the same meaning as the last condition in ( [ eq : rat-1at ] ) ) ; and the remaining states .we start by describing the edge state conditions .here we list the following sub - cases . for , , , or , , , , have equation ( [ eq:13 ] ) holds when the number of connections from group 3 ues to both bs 1 and bs 2 increases .the first term in the brackets enumerates all the possible cases of terminations and generations in group 3 given a certain starting state . the second term in the brackets accounts for the edge case .this condition is similar in nature to the third case in ( [ eq : rat-1at ] ) .lastly , the remaining terms account for the possible transitions in group 1 and 2 .the indicator function used in the last condition of ( [ eq : alpha ] ) is a function of the termination and connection enumerators in group 3 . that is , depending on how many connections are admitted in bs 1 and bs 2 in a previous frame , a certain number of ues from group 1 and 2 that request connections will not be admitted . for , , or , , or , , or , , we have in this case the number of connections from group 3 to both bs 1 and bs 2 decreases , orthe number of connections in any one of the bss remains the same , while the other decreases .the construction of the transition probability is the same as in ( [ eq:13 ] ) , respectively replacing with .note that the definition of in ( [ eq : theta ] ) defines the number of freed connections at bs 1 and bs 2 because of terminations of group 3 ues .since the number of connections have to be maintained at full cell occupancy for cell 1 and cell 2 , the respective functions for bs 1 and bs 2 , are used to compensate for the possible number of terminations at each cell due to group 3 ues in order to maintain full system - wide occupancy , i.e. to remain at the edge state . 
for , , or , , , this case describes the situation where the number of connections from group 3 ues to bs 1 strictly decreases , while those from group 3 ues to bs 2 strictly increases .the transition probability represented in ( [ eq:13 ] ) can account for this by replacing with .the definition of from ( [ eq : theta ] ) describes the case when the number of terminations at bs 1 from group 3 ues account for the decrease in the number of connections , while the number of terminations at bs 2 account only for any additional number of generations .the respective function for bs 1 and bs 2 , again , must compensate for the changes in connections from group 3 to bs 1 and bs 2 in order to maintain full - cell occupancy .lastly , for , , the above case is the opposite of that in ( [ eq:15 ] ) . here , the number of connections from group 3 to bs 1 strictly increases , while those from group 3 to bs 2 strictly decreases .again , respective expressions for in ( [ eq:13 ] ) need to be replaced by .the explanation for and given in ( [ eq : theta ] ) is equivalent to the explanation for ( [ eq:15 ] ) .the second major group of cases refers to the situation in which the number of connections at bs 1 or bs 2 is less than or equal to the maximum capacity .this obviously involves more cases to consider than those explained in section [ sec : a+b = m ] .we start by denoting conditions under which a transition from one state to another is not possible .that is , for , , , or , , , for , , , the above case is partially equivalent to ( [ eq:13 ] ) and considers the situation where the number of connections of group 3 ues connected to bs 1 increases and those to bs 2 stay the same .also , the number of new connections at bs 1 is less than its maximum capacity .since this is not an edge case for the system , an additional third summation term is not needed as seen in ( [ eq:13 ] ) .note that this condition only contains one summation because the number of terminations from ues connected to bs 2 can not exceed the resultant connection state .this is because if they do exceed the desired number of terminations , ues that generate connections to compensate for additional terminations will instead choose to connect to bs 1 ( the bs they are registered to ) thereby changing the resultant connection state .now , for , , , this case is an extension of the case described in ( [ eq:18 ] ) .however , full - cell occupancy now occurs at bs 1 , i.e. all channels of bs 1 are occupied after the transition , and bs 2 operates at less than its maximum capacity .an additional summation is used as compared to ( [ eq:18 ] ) because full - cell occupancy on bs 1 allows for terminations to occur on bs 2 from group 3 ues without changing their resultant connection number .now , for , , , , or , , , or , , , the case described by ( [ eq:20 ] ) is a direct extension of ( [ eq:14 ] ) .similar to ( [ eq:18 ] ) , terminations from group 3 ues to bs 2 can not be considered to achieve the resultant connection state .next , for , , , or , , , or , , , the above case is an extension of the transition probability described in ( [ eq:20 ] ) .it considers the situation when full - cell occupancy occurs only at bs 1 .next , for , , , the above case is an extension of the case described by ( [ eq:15 ] ) .however , only full cell occupancy at bs 1 is considered .next , for , , , the above case is an extension of ( [ eq:16 ] ) . 
in the case of ( [ eq:23 ] )full - cell occupancy does not occur in any of the cells , therefore the respective summation terms from ( [ eq:16 ] ) accounting for the edge case are removed .also , there is only one summation because as in ( [ eq:18 ] ) and ( [ eq:20 ] ) terminations from group 3 ues to bs 2 can not be considered to achieve the desired end state .lastly , for , , , the final case is an extension of the case described by ( [ eq:23 ] ). however , it considers the case when full cell occupancy occurs only at bs 1 .given the complete description of the system , we are able to derive important metrics that would describe the efficiency of the load balancing process involving connection admission . while there are many performance metrics that can be extracted given the above framework , we focus on three primary metrics : ( i ) the blocking probability , which describes the probability that at least one ue which requests a connection from a particular group will be denied access to a channel , ( ii ) the channel utilization , which expresses the fraction of the available channels are being used , and ( iii ) the collision probability on a control channel , which provides the probability that at least one requesting connection will be lost due to a collision with another ue . as used here, blocking occurs when at least one ue requests a new connection , but can not be admitted to any bs due to lack of available channels .since each group has access to a different number of channels and can follow a different connection strategy , it is necessary to define separate blocking probability metrics for ues in groups 1 and 2 , as contrasted with ues in group 3 . for groups 1 and 2 ,the blocking probability is defined according to where for , , and for , , . for group3 ues , the blocking probability is given as where is defined separately for and .for , where and is defined as ( [ eq : g_a ] ) replacing with , with , with , with and with . for , where is defined as ( [ eq : g_a ] ) and where .we briefly explain the above equations . the derivation of the blocking probability for group 3 ues is more complicated than for those ues in groups 1 and 2 because this group can access channels from both cellstherefore , the number of generations for group 3 ues that leads to blocking has to account for the terminations within the same group , and also for the possible changes in the number of connections of ues in groups 1 and 2 . the blocking for group 3 uescan be analyzed in two separate cases .the first case accounts for the number of generations needed to occupy all the channels in cell 1 ( ) , while the second case accounts for the number of generations needed to occupy all available channels in cell 2 ( ) .the first case is simpler to analyze because group 3 ues need only generate as many connections as there are available resources on cell 1 . when , group 3 ues can only access a maximum of channels on cell 2. 
therefore , extra conditions are added for the situation in which group 3 ues are blocked when exceeding connections in cell 2 .if there are less than available free channels after terminations of group 3 ues connected to cell 2 , the function represents the number of connections to cause blocking by generating the exact number of connections to occupy all available channels .the function is used to lower bound the necessary number of connections for blocking .this is because the number of connections on cell 2 , in general , is not restricted to and can thus have more than current occupancies resulting in a possibly negative value for . on the other hand ,if there are more than available channels after terminations , exactly channels are used by group 3 ues to cause blocking .the overall channel utilization is which refers to the fraction of the collective capacity that has been used by all ues in all groups .the average total system throughput is obtained by multiplying ( [ eq : u ] ) by .because the model uses a random access channel for connection requests , it is necessary to compute the collision probability of the system .the collision probability for ues in group is : where for group , respectively , and for group .since our model incorporates a very large number of parameters , in the interest of clarity and brevity we focus our study on certain scenarios that are the most important in the context of our model .first , we present results that demonstrate the impact of a varying channel quality on the load balancing efficiency .second , we examine the influence of the random access phase on load balancing efficiency .lastly , we provide insight on the optimal channel sharing policy between bs 1 and bs 2 . to confirm the correctness of the analytical model , we created a simulation environment for verifying the analytical results .the results in sections [ sec : cqi ] and [ ses : varying_p ] are obtained using both the analytical and simulation approaches to confirm correctness , while those in section [ sec : varying_k ] are obtained using only simulation . in this simulation , we model , among others , the call admission , termination , and load balancing processes exactly as described in our system model . as an example, we consider a scenario in which two identical cells are positioned such that they form a small ttr . for simplicity , we assume that , where and .this particular analysis represents the effect of an increasing on the overall system - wide channel utilization , while setting for all , assuming reciprocal uplink and downlink conditions .this is equivalent to varying from a location that is out of range to being right next to bs 1 , and setting m .we use the pathloss model with the following parameters : , , , , m andlastly .furthermore , we assume an average channel capacity of per channel with an average packet size kb and slot length .this yields an average packet length of frames , or equivalently 250ms .the channel throughput represents a typical value used in radio access network planning calculations ( * ? ? ?* table 8.17 ) .the packet size represents a realistic packet length sent over the internet , where packets are distributed between a minimum value of 40b ( transport control protocol acknowledgement packet ) and a maximum transmission unit , which for ipv6 equals 1.268 kb , for ieee 802.3 equals 1.492 kb , for ethernet ii equals 1.5 kb , and for ieee 802.11 equals 2.272 kb . .two extremes of shared channels , i.e. 
( no load balancing ) and ( all of cell 2 s channels used in load balancing ) are shown .the figure shows good agreement between the results from the analytical model and from simulation . ] in order to determine the best performance of the load - balancing scheme , represented by the proposed dr algorithm , for a varying channel quality we determine the level of traffic intensity which results in maximum channel utilization .[ fig : chan_util ] expresses the channel utilization as a function of increasing traffic intensity for two extreme values , i.e. ( when no load - balancing is used ) and ( all of cell 2 s channels may be used for load balancing ) .as expected , the channel utilization increases with more traffic intensity because an increase in results in more frequent connection requests from ues in all groups leading to a higher probability of successful connections . the increase in channel utilization tails off as the system reaches saturation , i.e. close to 100% channel utilization .similarly , an increase in results in a higher channel utilization as more ues from group 3 , that are blocked from cell 1 , are offloaded onto cell 2 where they have access to an additional channels .we observe that there is a decreasing rate of gain in channel utilization with an increase in the number of shared channels . as fig .[ fig : chan_util ] shows , a traffic intensity of results in the most improvement in channel utilization between the two extremes of , and . with the knowledge of decreasing gains in channel utilization with increasing , theremay exist an intermediate value of that not only leads to an improvement in total channel utilization , but also maximizes improvement with respect to the overall ue experience .this value of is explored in section [ sec : varying_k ] . in the current sectionwe continue our investigation using and explore the impacts of channel quality on performance . for two extreme values of .as improves , more group 3 ues generate successful connections to cell 1 resulting in more ues that connect to bs 1 and consequently are offloaded onto cell 2 , resulting in an overall increase in channel utilization . ][ fig : chan_util_w31 ] illustrates an increase in channel utilization with an increase in for two extreme values of . increasing results in group 3 ues having more successful requests for receiving downlink transmissions because the average channel quality , in which requests are granted for group 3 ues , improves .therefore , ignoring the channel effects by assuming perfect channel conditions ( also done by setting in our model ) in the analysis of load - balancing schemes , even for one particular group of ues , produces a non - trivial difference in the channel utilization and leads to an exaggerated improvement in performance due to load balancing . by selecting a reasonable scheme to determine the channel quality , as presented in section [ sec : outage_probability ] , we are able to provide a more realistic evaluation of the improvements of load balancing .note that the average channel utilization significantly increases as more channels can be borrowed from bs 2 .when increases , the difference in channel utilization between cases and becomes more profound .this proves that with the low channel quality system - wide improvement from load balancing might not be as significant as in the case of perfect channel conditions . 
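the utilization curves discussed in this section are obtained by building the transition matrix of section [ sec : numerical_analysis ], solving for its steady state, and evaluating ( [ eq : u ] ). the fragment below is a self-contained toy version of that pipeline for the simplified single-cell chain only, with simple binomial placeholders standing in for the termination and arrangement functions; blocking and collision probabilities would be read off the same stationary vector using their respective definitions.

```python
from math import comb

def binom(n, k, p):
    return comb(n, k) * p**k * (1 - p)**(n - k)

def transition_matrix(m, n_ues, p_term, p_succ):
    """Simplified single-cell chain: state a = number of occupied channels, 0..m.

    Terminations and successful requests use binomial placeholders; admissions
    beyond the remaining capacity are lost (the edge case of the chain).
    """
    p = [[0.0] * (m + 1) for _ in range(m + 1)]
    for a in range(m + 1):
        for terms in range(a + 1):                  # active ues finishing this frame
            for succ in range(n_ues - a + 1):       # idle ues granted a channel
                nxt = min(a - terms + succ, m)
                p[a][nxt] += binom(a, terms, p_term) * binom(n_ues - a, succ, p_succ)
    return p

def stationary(p, iters=100_000, tol=1e-12):
    """Power iteration for the stationary distribution of a row-stochastic matrix."""
    n = len(p)
    pi = [1.0 / n] * n
    for _ in range(iters):
        nxt = [sum(pi[i] * p[i][j] for i in range(n)) for j in range(n)]
        if max(abs(x - y) for x, y in zip(pi, nxt)) < tol:
            return nxt
        pi = nxt
    return pi

def utilization(pi):
    """Expected number of busy channels divided by the total number of channels."""
    m = len(pi) - 1
    return sum(a * pi[a] for a in range(m + 1)) / m

P = transition_matrix(m=4, n_ues=8, p_term=0.2, p_succ=0.1)
print(utilization(stationary(P)))
```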
in fig .[ fig : incr_w31 ] we examine , , , and , as a function of for .the scenario used in this result is identical to the one used previously . by comparing an increasing value of in fig .[ fig : incr_w31_k=0]fig .[ fig : incr_w31_k=3 ] we observe the impact of an increasing number of shared channels on the considered performance benchmarks .the first interesting observation is that irrespective of the value of the blocking probability for group 1 ues , , is relatively stable .this means that the quality of service requirements for ues not involved in load balancing will be met , even with load balancing enabled .second , as increases , blocking probability for group 3 ues , , significantly decreases , which proves the effectiveness of load balancing in this context .furthermore , the collision probability for ues in group 3 , , also reduces because with more shared channels there are fewer unconnected ues to request connections .note , however , that the difference in collision probability for an increasing is not as significant as observed for the blocking probability .finally , increasing only slightly increases blocking probability because these ues have priority in connecting to any of cell 2 s free channels . focusing on fig .[ fig : incr_w31_k=3 ] only , where load balancing is enabled , we note that as increases , all curves experience an increase . this can be explained as follows : with an increase in , more group 3 ues are able to generate successful connections to bs 1 resulting in an increase in the contention for sub slots during admission control , and hence an increase in .also , there is an accompanied increase in because as more ues generate successful connections , an increasing number of ues contend for free channels on both cell 1 ( where load balancing does not occur ) and cell 2 ( where load balancing occurs ) .consequently , this also results in an increase in and .although these trends are obvious , the exact degradation in ue experience for each group is not .for example , in this specific scenario , fig .[ fig : incr_w31 ] illustrates that is always the primary factor in the degradation of the group 3 ue experience as compared to .this knowledge is significant as the network operator can determine whether an increase in , or an increase will be more beneficial to group 3 ues .observe that fig .[ fig : chan_util_w31 ] and fig .[ fig : incr_w31 ] show an extremely good match between the analytical result and simulation . in this sectionwe present results on the effect of random access phase on the performance of load balancing .the results are presented in fig .[ fig : access_effect ] .all network parameters are set identically to the network considered in section [ sec : cqi ] , except for , where .we begin by investigating the impact of different ue distributions on the performance of load balancing .we perform three experiments and denote each experiment as a specific case . in the first case we set the number of ues , such that more ues are distributed in groups 1 and 2 , than in group 3 , i.e. , . in the second casewe set the number of ues equal in each group , i.e. . and finally , in the third case we set the number of ues in group 3 larger than in the other two groups , i.e. 
, .the metric that is studied in the three cases described above is the total network - wide blocking probability , calculated as , as a function of channel access probability for .this metric is used in order to give a simple overall indication of the blocking suffered by ues in all groups .results are presented in fig .[ fig : blocking_tot_p ] . the most interesting observation from fig .[ fig : blocking_tot_p ] is that with an increase in the number of ues in the ttr , the total blocking probability becomes smaller for moderate values of .surprisingly , the blocking probability starts to drop sharply as the value of continues to increase .this phenomenon occurs because as increases , so do collisions on the random access channel , which in - turn limits the blocking probability because there are fewer ues that successfully access available channels .it has to be kept in mind that for each case presented in fig .[ fig : blocking_tot_p ] the length of the random access phase remains the same .what is important to note is that for moderate values of , the difference between blocking probabilities for each case is small , i.e. less than 5% ( please compare values of blocking probability for each case in the range of ) .however , as becomes very large , the curves with a higher number of group 3 ues drop off faster because they experience a substantial increase in the number of collisions . therefore, a certain value of can maximize the channel utilization achieved through load balancing and also maintain the blocking probability at approximately the same level ( given negligible changes in ue distribution ) .we now move our focus to the impact of random access phase length on the performance of load balancing .the results are presented in fig .[ fig : varying_l ] .the set of parameters remain the same as in the earlier experiment in this section , however . as an example , three network metrics are evaluated as a function of number of slots in the random access phase , : ( i ) total channel utilization in both cells , , ( ii ) collision probability at group 3 , , and ( iii ) blocking probability at group 3 , . for clarity , the number of slots is set equal among each group of ues . obviously , as the number of random access slots increase the collision probability decreases for group 3 ues , and the overall channel utilization increases .however , as the collision probability decreases the blocking probability , within the same group of ues , becomes larger .this is in line with the results presented in fig .[ fig : blocking_tot_p ] .recall , that as more ues gain access to the bs , the probability that channels become unavailable increases .the results shown in fig . [ fig : varying_l ] further demonstrate the fundamental tradeoff between the delay caused by random access and overall network utilization .apart from this tradeoff , we demonstrate that the network operator has a powerful tool , i.e. random access phase length , through which network metrics can be easily regulated .it is obvious that the operator has no control over the channel access probability , , of individual ues .however , the operator is able to set a higher value of to the asc of interest in order to maintain an expected access delay for each ue against a required blocking probability .we consider a macrocell scenario in which the distribution of ues in groups 1 , 2 and 3 follow the relationship given by and , where .let and .we consider a symmetric system where each cell has channels , i.e. 
, and the distances between each group of ues and their respective serving bss are identical , i.e. m .once again , we assume an average channel capacity of per channel with an average packet size kb .a frame duration duration of is used .we assume a simplified path loss model with identical parameters as in section [ sec : cqi ] except with , which is more appropriate for outdoor channel conditions . .as increases , there is an obvious improvement in channel utilization , however , there are decreasing gains experienced per additional . ] in fig .[ fig : ch_util_change ] we explore the system - wide improvement in channel utilization ( represented as a percentage on the vertical axis ) as a function of the number of shared channels ( represented on the horizontal axis ) .note that for the remainder of our study we fix traffic intensity for all groups to in order to determine the maximum gain in channel utilization for every value of .the line with circle markers represents the total percentage improvement experienced as a function of , while the line with triangle markers represents the improvement experienced in channel utilization per shared channel .as increases , there is a decreasing improvement in channel utilization per additional channel , which indicates that the extra cost of sharing more channels for load balancing may outweigh the added benefit of serving a greater number of ues . , and blocking probability per shared channel , , where . in this scenario, we observe that the decrease in is always greater than the corresponding increase in for all , suggesting that there is an overall improvement in the ue experience .furthermore , it is seen that at we have the greatest difference between the decrease in and the corresponding increase in suggesting that this is the optimal number of shared channels to use in order to gain the best ue experience per . ]although there is an overall improvement in the system - wide channel utilization , the exact effect of the load - balancing scheme on the ue experience is unknown .obviously , decreases with an increase in because group 3 has access to more channels .in contrast , increases because more ues in group 3 access channels belonging to cell 2 , which are of course also accessible to ues in group 2 .although these general trends are obvious , the exact relationship between the amount of decrease in versus the amount of increase in is unknown . 
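once blocking probabilities are available for every number of shared channels (from the analytical model, or from a simulation such as the sketch given earlier), the per-channel trade-off just described can be scanned programmatically. the helper below is a hypothetical illustration: the arrays of blocking probabilities are made-up numbers, meant only to show the shape of the search for the value of k that maximizes the net improvement per shared channel.

```python
def best_shared_channels(pb2, pb3):
    """pb2[k], pb3[k]: blocking probabilities of groups 2 and 3 when k channels
    are shared (k = 0 .. k_max).  returns the k > 0 maximizing the decrease in
    group-3 blocking minus the increase in group-2 blocking, per shared channel."""
    best_k, best_gain = 0, float("-inf")
    for k in range(1, len(pb3)):
        gain = ((pb3[0] - pb3[k]) - (pb2[k] - pb2[0])) / k
        if gain > best_gain:
            best_k, best_gain = k, gain
    return best_k, best_gain

# made-up blocking probabilities, for illustration only
pb3 = [0.40, 0.33, 0.25, 0.19, 0.17, 0.16]   # group 3: drops quickly, then flattens
pb2 = [0.05, 0.055, 0.06, 0.08, 0.12, 0.18]  # group 2: rises as channels are lent out
print(best_shared_channels(pb2, pb3))        # an intermediate k wins with these numbers
```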
in fig .[ fig : block_prob ] we examine this relationship in more detail , where the decrease in ( solid line with triangle markers ) is plotted with the consequent increase in ( solid line with circles ) as a percentage on the vertical axis with increasing on the horizontal axis .we observe that in this particular scenario the total decrease in is always more than the total increase in , suggesting that the overall ue experience improves with the proposed load - balancing scheme .this reaffirms the increase in channel utilization with an increase in for dr , which is previously observed in fig .[ fig : ch_util_change ] .the total improvement in overall ue blocking probability demonstrates the effectiveness of the load balancing scheme .however , from a network operator viewpoint , knowledge of the changes in ue experience per additional shared channel is also very important .[ fig : block_prob ] examines the effect of an increase in on both the decrease in ( dashed line with triangles ) , and the consequent increase in ( dashed line with circles ) .we observe , that for this particular scenario , a decrease in is always more than the increase in , which suggests that the ues in group 3 experience more of an improvement in performance than the performance degradation experienced by ues in group 2 per additional shared channel .this allows direct evaluation of the effectiveness of the load - balancing scheme on the overall ue experience per additional shared channel . in fig .[ fig : block_prob ] , we note that reaches a maximum at an intermediate value of , i.e. , because the system reaches a balance between the number of ues requesting connections and those that are already connected through load balancing .our model allows for the direct observation of this system state because of the combined modeling of a finite number of ues together with a detailed call admission process . with the use of fig .[ fig : block_prob ] , we are able to determine the best to improve the overall ue experience on a per shared channel basis , and then find the corresponding improvement in the overall channel utilization using fig .[ fig : ch_util_change ] . for this particular scenario ,the maximum difference between the increase in and decrease in occurs at and corresponds to an overall improvement in channel utilization of 12.6% . in summary, we can construct the following optimization function .given fundamental descriptors of the network considered in section [ sec : numerical_analysis ] , i.e. 
, , , , , , and , the network operator should find where and are the required maximum blocking and collision probabilities , respectively , for group .the developed analytical model provided in section [ sec : numerical_analysis ] allows solving the optimization function ( [ eq : optimization ] ) , since each metric , , and is given in closed form .the optimization formula allows obtaining the value of in order to obtain the maximum utilization per shared channel , such that all considered quality of service metrics required by the operator , and , are met .note that finding the optimal solution to ( [ eq : optimization ] ) is beyond the scope of this paper .we have presented a new analytical model to assess the load balancing process in cellular networks .the model differs in many respects from previous work on load balancing analysis in several important ways .in particular , it incorporates a detailed call admission procedure , addresses traffic prioritization in the connection admission process , and also allows exploration of the impact of channel quality on load balancing efficiency .we have presented a variety of results derived from this framework , and in particular explored the tradeoffs in terms of channel utilization , blocking probability , and collision probability when traffic is transferred from a highly congested cell to a less - loaded neighboring cell .w. cooper , j. r. zeidler , and s. mclaughlin , `` performance analysis of slotted random access channels for w - cdma systems in nakagami fading channels , '' _ ieee trans . veh ._ , vol .51 , no . 3 , pp .411424 , may 2002 .h. jiang and s. s. rappaport , `` prioritized channel borrowing without locking : a channel sharing strategy for cellular communications , '' _ ieee / acm trans .networking _ ,vol . 4 , no . 2 , pp . 163172 , apr .1996 .a. gotsis , d. komnakos , and p. constantionou , `` dynamic subchannel and slot allocation for ofdma networks supporting mixed traffic : upper bound and a heuristic algorithm , '' _ ieee commun ._ , vol . 13 , no . 8 , pp . 576578 , aug . 2009 .f. a. cruz - perz , j. l. vzquez - vila , g. hernndez - valdez , and l. ortigoza - guerrero , `` link quality - aware call admission strategy for mobile cellular networks with link adaptation , '' _ ieee trans . wireless commun . _ ,vol . 5 , no . 9 ,24132425 , sept .w. li , h. ang chen , and d. p. agrawal , `` performance analysis of handoff schemes with preemptive and nonpreemptive channel borrowing in integrated wireless cellular networks , '' _ ieee trans .wireless commun ._ , vol . 4 , no . 3 , pp .12221233 , may 2005 .chung and j .- c .lee , `` performance analysis and overflowed traffic characterization in multiservice hierarchical wireless networks , '' _ ieee trans .wireless commun ._ , vol . 4 , no . 3 , pp .904918 , may 2005 .d. choi , p. monajemi , s. kang , and j. villasenor , `` dealing with loud neighbors : the benefits and tradeoffs of adaptive femtocell access , '' in _ proc .ieee globecom _ , new orleans , la , usa , nov . 30 dec . 4 , 2008 .p. zhou , h. hu , h. wang , and h .- h .chen , `` an efficient random access scheme for ofdma systems with implicit message transmission , '' _ ieee trans .wireless commun ._ , vol . 7 , no . 7 , pp27902797 , july 2008 .
|
we present an analytical framework for modeling a priority - based load balancing scheme in cellular networks based on a new algorithm called direct retry with truncated offloading channel resource pool ( dr ) . the model differs from previous works on load balancing in several important ways . foremost , it incorporates the call admission process through random access . specifically , the proposed model implements the physical random access channel used in 3gpp network standards . furthermore , the proposed model allows users to be differentiated based on their priorities . the quantitative results illustrate that , for example , cellular network operators can control the manner in which traffic is offloaded between neighboring cells simply by manipulating the length of the random access phase . our analysis allows us to quantitatively determine the blocking probability that individual users will experience for a given length of the random access phase . furthermore , we observe that the improvement in blocking probability per shared channel for load - balanced users using dr is maximized at an intermediate number of shared channels , rather than at the maximum number of these shared resources . this occurs because a balance is achieved between the number of users requesting connections and those that are already admitted to the network .
|
the question of whether all of us , living humans , descend exclusively from an anatomically modern african population which completely replaced archaic populations in other continents , or whether africans could have interbred with these local hominids , has been the subject of a long - lasting and interesting debate . the first of these possibilities , known as the out of africa model , is based mainly on genetic evidence , further supported by paleontological and archaeological findings . the latter , known as the multiregional model , has on the contrary been supported mostly by morphological studies , but recently it has also been found consistent with genetic data . a third , intermediate possibility , known as the assimilation model , suggests that africans may have interbred with local archaic hominids to a limited extent . deciding which model correctly describes the origin of _ homo sapiens _ is obscured by the intricacies of the statistical methods proposed for evaluating the models themselves . examples of such intricate methods , their conflicting conclusions and the subsequent debate are given in . in this paper we describe by a simple and realistic model the dynamics of two subpopulations africans and neanderthals interbreeding at a slow rate . in particular , we quantitatively determine the frequency of interbreeding events which is necessary in order that non - african living humans have between 1 and nuclear dna of neanderthal origin , according to the discovery of green _ et al _ . among other important achievements , the recent seminal paper by green _ et al _ provides the first direct evidence of interbreeding of modern humans with archaic hominids , neanderthals in this case . by direct evidence we mean having sequenced neanderthal nuclear dna and shown that this dna is more similar to the nuclear dna of living non - africans than to the nuclear dna of living africans . of course , the findings of green _ et al _ anxiously await replication by the scientific community . improvements in the resolution of the genome sequencing , in the comparison with present - day individuals , and dna sequencing of other fossils classified as neanderthals , _ h. erectus _ , _ h. floresiensis _ and modern humans are most welcome . based on their findings and on archaeological evidence , it was suggested in that interbreeding between anatomically modern africans and neanderthals might have occurred in the middle east before the expansion of modern africans into eurasia , at a time when both coexisted there . this hypothesis is assumed in this paper , allowing inference of the only parameter in the model , the rate of exchange of individuals between africans and neanderthals , and giving some idea of the size of the total population involved in the interbreeding . the model will be fully explained in the next section , but we anticipate here its main features . the total population size is supposed fixed , but the african and neanderthal subpopulation sizes fluctuate according to the neutral ( i.e. africans and neanderthals are supposed to have the same fitness ) wright - fisher model for two alleles at a single locus .
we also assume no biological barriers to interbreeding and make no strong hypotheses on the initial composition of the population . gene flow between subpopulations is implemented by assuming that a fixed number of pairs of individuals per generation is exchanged between them . the model is characterized by a deterministic component , a system of two linear ordinary differential equations ( odes ) , and a stochastic component , a realization of the wright - fisher drift process to be introduced as an external function in the odes . the odes are exactly solvable , up to definite integrals depending on the stochastic part . the stochastic part can be dealt with by simple simulations . assuming a random initial fraction of africans , our main result is the conditional probability density distribution for the exchange parameter , illustrated in fig . [ novafig ] . the condition to be satisfied is that , after interbreeding with neanderthals , a fraction of 1 to of neanderthal genes , as suggested by , will be present in the african population . fig . [ novafig ] shows this condition is attained with maximum probability for , i.e. one pair of individuals is exchanged between the two subpopulations every 77 generations . the mean value of is , which corresponds to one pair of individuals exchanged every 12 generations . such conclusions are based on a solvable mathematical model and simple simulations , avoiding statistical methods in favor of probabilistic ones . application of probabilistic methods reminiscent of statistical mechanics to biological problems has been abundant in the literature of the physics and mathematics communities , but penetration into biology and anthropology has proved more difficult . in particular , both authors of this paper have previously and separately anticipated that evidence based on mitochondrial dna ( mtdna ) could not rule out the possibility of interbreeding among modern humans and other archaic forms . we hope that the direct experimental proof of such interbreeding provided by can be the occasion for better acceptance of methods such as the ones we will discuss . while writing the present paper a new report concerning the interbreeding of modern humans with another archaic hominid group was published . its results were obtained by studying fossil nuclear dna extracted from the finger of a single individual previously known only from its mtdna . the individual is considered a representative of an archaic group of hominids ( denisovans ) different both from moderns and neanderthals . according to the authors , denisovan nuclear dna is present in living melanesians in a proportion of about . very little is known about the morphology of denisovans , as complete fossils belonging to this group are not yet known . although we still have no data concerning the size of the populations and the duration of their coexistence , the model described in this paper might be used to describe the interbreeding between modern humans and denisovans . consider a population of constant size equal to individuals . we suppose that the population is divided into two subpopulations , which we call 1 and 2 , that generations are non - overlapping , and that the number of generations is counted from past to future . reproduction is sexual and diploid . we also suppose that the subpopulations have lived isolated from each other for a long time before they meet . at generation , when the subpopulations meet , the total population thus consists of two groups , each consisting of individuals of a pure race .
starting at this time subpopulations will share a common environment for a long period .we do not suppose that the numbers and of individuals at generation in each of the two subpopulations are constant , although their sum is . instead , and are random variables which can be determined by wright - fisher rule i.e. , any of the individuals of generation independently chooses to belong to subpopulation 1 with probability and to subpopulation 2 with probability .after that , both father and mother of an individual in generation are uniformly randomly chosen among all males and females of generation in the subpopulation he / she has chosen .with such a reproduction mechanism the numbers and fluctuate as generations pass until one of the subpopulations becomes extinct .this stochastic process is the same as in the simplest version of the neutral , i.e. no selective advantage for any of the alleles , wright - fisher model for two alleles at a single locus .the time for extinction is random as well as which of the two subpopulations becomes extinct .if is the initial fraction of individuals of subpopulation 1 , then subpopulation 1 will survive with probability and the mean number of generations until extinction is $ ] ( see ) . as the mean number of generations for extinction of onesubpopulation scales with , it is reasonable to measure time not in generation units , but in generations divided by . from hereon , we will refer to simply as _ time _ and we will refer to in a realization of the above stochastic process as the _ history _ of the wright - fisher drift process . in the previously described dynamics no mechanism of gene admixture between subpopulations was present and we add it as follows .we assume that at each generation a number of random individuals from subpopulation 1 migrates to subpopulation 2 and vice - versa the same number of random individuals from subpopulation 2 migrates to subpopulation 1 .in other words , _ pairs _ per generation are exchanged .we strongly underline that is a number of order 1 , not of order .migrants will contribute with their genes for the next generation just like any other individual in their host subpopulation .their offspring , if any , is considered as normal members of the host subpopulation . the parameter introduced above may be non - integer and also less than 1 . in such caseswe interpret it as the average number of pairs of exchanged individuals per generation . by the hypothesis of isolation between subpopulations for a long time before , we may suppose that in many _loci _ the two subpopulations will have different and characteristic alleles .therefore , we can assume that there exists a large set of alleles which are exclusive of subpopulation 1 and the same for subpopulation 2 .we will refer to these alleles respectively as _ type 1 _ and _ type 2_. at any time any individual will be characterized by his / her fractions of type 1 and type 2 alleles .we define then as the _ mean fraction of type 1 alleles in subpopulation 1 at time _ and _ as the mean fraction of type 1 alleles in subpopulation 2 at time . 
the _ mean _ here is due to the fact that individuals in subpopulation 1 in general have different allelic fractions , but is calculated by averaging allelic fractions among all individuals in subpopulation 1 , and similarly for .of course and .similar quantities might have been defined for type 2 alleles , but they are easily related to and and thus unnecessary .it is now possible to derive the basic equations relating the mean allelic fractions at generation with the mean allelic fractions at generation . in doingso we will make the assumption that the individuals of subpopulation 1 migrating to subpopulation 2 all have an allelic fraction equal to .the analogous assumption will be made for all the individuals of subpopulation 2 migrating to subpopulation 1 .of course the above assumption of exchanged individuals all having the mean allelic fractions in their subpopulations is a very strong one and it is not strictly true .nonetheless , it is indeed a very good approximation if is much smaller than .in fact , is the number of generations between two consecutive exchanges of individuals . as the typical number of generations for genetic homogenization in a population of individuals with diploid reproduction and random mating is ,see , the condition that is much smaller than makes sure that subpopulations 1 and 2 are both rather homogeneous at the exchange times .the allelic fraction will be equal to plus the contribution of type 1 alleles from the immigrating individuals of subpopulation 2 and minus the loss of type 1 alleles due to emigration .we remind that these loss and gain terms are both proportional to and inversely proportional to the number of individuals in subpopulation 1 .similar considerations apply to . in symbols : the above equations , after taking the limit , become a system of linear odes we stress here that we think of as a stochastic function obtained by realizing the wright - fisher drift , but eqs .( [ eqdif ] ) and ( [ odes ] ) still hold if is any description of the history of the size of subpopulation 1 , be it stochastic or deterministic .for example , the possibility of individuals in subpopulation 1 being fitter than individuals in subpopulation 2 has been explored , still using ( [ eqdif ] ) , in another work .( [ odes ] ) can be exactly solved up to integrals depending on .although such integrals can not be calculated in general , the exact solution can be used to give a qualitative view of the behaviour of functions and .it turns out that is a decreasing function , whereas increases .the decrease and increase rates are larger when is large and , despite symmetry in our immigration assumption , gene flow between subpopulations is in general asymmetrical .such features are shown in appendix a. moreover , eqs .( [ eqdif ] ) lend themselves to simple and rapid numerical solution for quantitative purposes . in appendix b, we address the question of comparing numerical solutions of eqs .( [ eqdif ] ) and direct simulation of all stochastic processes involved .we see that there is good agreement between simulations and numerical solutions of eqs .( [ eqdif ] ) . in all that follows ,unless explicitly stated , we will use results obtained by numerically solving eqs .( [ eqdif ] ) , because the computer time for numerical solution is much smaller than for simulation .we know that neanderthals were extinct and , according to , before disappearing they interbred with modern humans . 
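since the displayed equations are partly garbled in this version of the text, the sketch below makes our reading of the model explicit: a wright-fisher history for the fraction of subpopulation 1 is generated by binomial sampling, and the mean type-1 allele fractions are then updated, at every generation, by an exchange term proportional to the number of exchanged pairs and inversely proportional to the current size of each subpopulation, as stated in the text. parameter names (n, m, x0) are ours, and the exact update rule is an assumption to be checked against eqs. (1) of the original paper.

```python
import random

def wright_fisher_history(n, x0, seed=None):
    """one realization of the neutral drift: each of the n individuals of a
    generation independently joins subpopulation 1 with probability equal to
    the fraction of subpopulation 1 in the previous generation."""
    rng = random.Random(seed)
    history = [x0]
    while 0.0 < history[-1] < 1.0:
        k = sum(rng.random() < history[-1] for _ in range(n))
        history.append(k / n)
    return history            # ends when one subpopulation is extinct

def allelic_fractions(history, n, m):
    """iterate the (assumed) difference equations for the mean fractions p1, p2
    of type-1 alleles in subpopulations 1 and 2 along a given drift history;
    m is the average number of pairs exchanged per generation."""
    p1, p2 = 1.0, 0.0                     # pure subpopulations at first contact
    for x in history[:-1]:                # the last entry is the absorbing state
        n1, n2 = x * n, (1.0 - x) * n
        d = p2 - p1
        p1, p2 = p1 + (m / n1) * d, p2 - (m / n2) * d
    return p1, p2

hist = wright_fisher_history(n=500, x0=0.6, seed=7)
print(len(hist) / 500, hist[-1], allelic_fractions(hist, n=500, m=0.1))
```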
despite comparisons between nuclear dna of neanderthals and living humanshaving been limited up to now by a sample of only 3 neanderthals and 5 living humans , the authors of observed that all three non - africans in their sample are equally closer to the neanderthals than the two africans .they estimate that non - african living humans possess to of their nuclear dna derived from neanderthals .supposing that africans are subpopulation 1 in our model , this means that the final value of should lie between and in order to comply with their experimental conclusions .we will refer in the following to the interval between 0.96 and 0.99 as the _ experimental interval _ for the final value of . as we do not know the composition of the total population at the time the two subpopulations met, we will take the initial fraction of africans as a random number . with this hypothesis ,the only free parameter is the exchange rate . as can be seen in fig .s1 the value of largely influences the final value of .furthermore , in both figs . s1 and s2 it can be seen that with or the final values of tend to be too small to be compatible with the experimental interval .we stress that these figures are based only on two realizations of the history and a single value . in order to produce estimates of must produce a large number of histories with many values of and for any of these simulated histories recursively solve eqs .( [ eqdif ] ) in order to determine the associated final value of .the inset in fig .[ novafig ] is realized by producing 400,000 wright - fisher drift histories with random uniformly distributed between 0 and 1 .for all these histories we compute the final theoretical value of by solving eqs .( [ eqdif ] ) using the three values , and .therefore , for each of the three values of we have about 200,000 data which allow inference of the probability density for the final value of .the data plotted in the inset of fig .[ novafig ] show that for the probability that the final value of lies in the experimental interval is approximately equal to .for the corresponding probability is approximately of and for it is approximately of . in all three casesthe density of the final values of is rather thick , meaning that there is a large probability that the final value of does not lie in the experimental interval .the above information shows that the experimental data are better explained by values of much smaller than 1 . by fig .[ novafig ] we see that the value of which explains with largest probability the experimental data is . in order to produce that plot , we simulated a large number of wright - fisher histories with random uniformly extracted between 0 and 0.8 and random values for uniformly distributed between 0 and 2 . 
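a minimal sketch of the monte carlo procedure just described, reusing the two functions sketched after the model section above; the population size and number of trials are kept small so that the pure-python version runs quickly, and the tested values of the exchange parameter are illustrative (0.013 and 0.083 correspond to one exchanged pair every 77 and every 12 generations, the two figures quoted in the introduction).

```python
import random

def prob_in_experimental_interval(m, n=200, trials=500, seed=3):
    """fraction of drift histories, with the initial african fraction drawn
    uniformly on (0, 0.8), in which subpopulation 1 survives and its final
    type-1 allele fraction lies in the experimental interval [0.96, 0.99].
    relies on wright_fisher_history and allelic_fractions defined above."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        x0 = rng.uniform(0.0, 0.8)
        hist = wright_fisher_history(n, x0, seed=rng.random())
        if hist[-1] == 1.0:                      # africans survived
            p1, _ = allelic_fractions(hist, n, m)
            hits += 0.96 <= p1 <= 0.99
    return hits / trials

for m in (0.013, 0.083, 0.5):
    print(m, prob_in_experimental_interval(m))
```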
from these data we selected the histories in which subpopulation 2 was extinct and such that the final theoretical value of lay in the experimental interval . in this way we can empirically determine the probability that the final value of lies in the experimental interval as a function of . we also see that the probability density for is rather asymmetrical around , with values contributing with large probability . this asymmetry is reflected in the fact that the mean value is , much larger than . a technical detail in producing fig . [ novafig ] is that the random values for are chosen with uniform distribution in the interval , avoiding values either close to or inside the experimental interval . such a choice is related to the assumption of _ slow _ rather than _ rapid _ interbreeding between africans and neanderthals . see appendix c and fig . s3 for a more detailed explanation of that choice . o. bar - yosef compares the occupation of the middle east by neanderthals and africans with a long football game . the occupants of the caves of skhul and kafzeh in israel alternated between africans and neanderthals several times over a period of more than 130,000 years . although the model described before becomes independent of the total population , we may obtain some hints on the size of if we accept the constraint that for at least 130,000 years neanderthals had not been extinct in the middle east . by taking random values for between 0 and 0.8 and between 0 and 2 we obtained a sample of 790 events such that neanderthals were extinct and lay in the experimental interval . for each of these events we recorded the time it took for the extinction of neanderthals and we found that the mean extinction time was 0.58 . if we take this mean value as the typical value , suppose that one generation is 20 years and equate it to 130,000 years , we get individuals . the whole distribution of extinction times in the above sample is shown in fig . s4 . in fig . s3 we plotted the same sample of events in the plane . we see that smaller values of are correlated with smaller values of and also that the events such that lies in the experimental interval are concentrated around the largest values of . the mean value of for the whole sample is 0.64 . using the same sample we may also explore the values of at the time neanderthals were extinct , i.e. the fraction of african dna in the last neanderthals which interbred with africans . fig . s5 shows a histogram of the values for the events in the sample . observe that typical values of are much larger than the values of , which range from 0.01 to 0.04 . this is due to the fact that in most events such that falls within the experimental interval , africans were the majority of the population for most of the time . according to the explanation in appendix a , this implies that , despite symmetry in the number of exchanged individuals , the transfer of african alleles to neanderthals will be larger than the transfer of neanderthal alleles to africans .
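the population-size estimate discussed above follows from equating the mean dimensionless extinction time of the sample (0.58, in units of n generations) with 130,000 years at 20 years per generation. the arithmetic, under exactly these assumptions, is spelled out below; the rounding of the resulting figure is ours.

```python
years_of_coexistence = 130_000      # minimum coexistence in the middle east (see text above)
years_per_generation = 20
mean_extinction_time = 0.58         # sample mean, in units of n generations

generations = years_of_coexistence / years_per_generation      # 6,500 generations
n_estimate = generations / mean_extinction_time
print(round(n_estimate))            # roughly 11,000 individuals under these assumptions
```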
by simulating the complete reproduction and individual exchange process described in appendix b we were also able to empirically determine the conditional probability that the most recent common ancestors of the population in the maternal ( mtdna ) and paternal ( y chromosome ) lineages are both african , the condition being that the fraction of african dna in africans is in the experimental interval . we ran several simulations with populations of 100 individuals , random values of uniformly distributed between 0.01 and 0.2 , and random constrained to be smaller than 0.8 . in each simulation we waited until all male individuals had the same paternal ancestor and all female individuals had the same maternal ancestor . we selected those simulations in which subpopulation 1 survived and lay in the experimental interval . out of 96 simulations satisfying the above criteria , in only 7 of them were the surviving y chromosome and mtdna lineages not both from ancestors belonging to subpopulation 1 . therefore , according to our interbreeding model , the conditional probability of an african origin of both mtdna and y chromosome can be estimated to be of order . for the same 790 histories such that lies in the experimental interval as in fig . [ x0alpha ] we show the correlation between and the time for neanderthal extinction ; the mean extinction time in the sample is 0.58 . ] the main result of our paper is the probability density distribution for the interbreeding rate shown in fig . other interesting results are illustrated here . in order to produce them we simulated a large number of wright - fisher drift histories with random constrained to be smaller than 0.8 and random smaller than 2 . for each history we numerically obtained the values of and by iterating eq . we then obtained a set of 790 events such that the final value of lies in the experimental interval . fig . [ x0alpha ] was produced with the above mentioned sample . with the same sample we may also study the questions of extinction times , see fig . [ extinctimes ] , and the final values of , i.e. how much african dna was transmitted to neanderthals before they were extinct in the middle east , see fig . [ africandnaneand ] . comments on these figures were made in the main text . for the same 790 histories such that lies in the experimental interval as in fig . [ x0alpha ] we plot the probability density distribution for the final values of ; the mean final value of in the sample is 0.33 . ]
before any data on neanderthal nuclear dna was available ,both authors of this paper had separately anticipated that the above facts are all compatible with anatomically modern africans and neanderthals being part of a single interbreeding population at the times they coexisted .some further details about these claims are given in appendix e. in the framework of the model proposed in this article we could infer that the 1 to 4% fraction of neanderthal dna in present day non - africans can be explained with maximum probability by assuming that the african and neanderthal subpopulations exchanged only 1 pair of individuals in about 77 generations . but the mean value of the exchange parameter in the model corresponds to a larger frequency of about 1 pair of individuals exchanged in about 12 generations .we also estimated the mean number of generations for neanderthal extinction in the middle east to be approximately .together with the fact that neanderthals and africans seem to have coexisted in the middle east for at least 130,000 years , this allows us to estimate the total population in the model to be of order individuals .although green _ et al _ have observed in gene flow from neanderthals into africans , they have not observed the reverse flow .this fact is also compatible both with our results and the fact that living europeans are as close to neanderthals as living asians or oceanians .the explanation is that the neanderthal specimens which had their dna sequenced in were all excavated in european sites .it seems that only a part of the total neanderthal population took part in the interbreeding process in middle east , the other part of the population remaining in europe .the descendants of these neanderthals which have never left europe did not interbreed later with africans when they came into europe , or this interbreeding was very small . on the contrary , according to our model , see fig .s5 , we expect to find a larger fraction of african dna in late middle east neanderthal fossils than the 1 to 4% neanderthal fraction of present non - africans .thus , dna sequencing of one such fossil would be a good test for the present model .neanderthals are implicitly considered in this work as a group within the _ homo sapiens _ species and we renounce the strict out of africa model for the origin of our species , in which anatomically modern africans would have replaced without gene flow other hominids in eurasia . in particular , our model is neutral in the sense that we assign the same fitness to neanderthals and africans .our results show that neither strong sexual isolation between africans and neanderthals or else some kind of neanderthal cognitive or reproductive inferiority , are necessary to explain both their extinction and the small fraction of their dna in most living humans .in fact , within the assumptions of the model , if two subpopulations coexist in the same territory for a sufficiently long time , only one of them survives . the fact that neanderthals were the extinct subpopulation is then a random event . although we do not intend to back up any kind of superiority for neanderthals , our neutrality hypothesis is at least supported by recent results by j. zilho _ et al _ , which claim that neanderthals in europe already made use of symbolic thinking before africans arrived there .current knowledge about denisovans morphology and life style is much less than what we know about neanderthals . 
in particularwe do not know whether denisovans lived only in siberia , where up to now the only known fossils have been found , or elsewhere .where and when this people made contact with the african ancestors of present day melanesians is still a mystery . nevertheless ,if such a contact occurred for a sufficiently long time in a small geographical region , then the present model can be straightforwardly applied . as we now know of our neanderthal and denisovan inheritances , it is time to ask whether they were the only hominids that africans mated .we believe that the future may still uncover lots of surprises when denisovans will be better studied and nuclear dna of many more neanderthal and other hominid fossils will become available . 10 r. l. cann , m. stoneking , and a. c. wilson .mitochondrial dna and human evolution ., 325:31 , 1987 . c. stringer . human evolution: out of ethiopia . , 423:692695 , 2003 .s. mcbrearty and a. s. brooks .the revolution that was nt : a new interpretation of the origin of modern human behavior ., 39(5):453 563 , 2000 .a. g. thorne and m. h. wolpoff .the multiregional evolution of humans , revised paper . , 13(2 ) , 2003 .a. r. templeton .haplotype trees and modern human origins ., 48:3359 , 2005 .n. r. fagundes , n. ray , m.beaumont , s. neuenschwander , f. m. salzano , and l. bonatto , s. l. excoffier .statistical evaluation of alternative models of human evolution ., 104(45):1761417619 , 2007 .a. r. templeton .coherent and incoherent inference in phylogeography and human evolution ., 107(14):63766381 ; , 2010 .r. e. green , j. krause , et al . a draft sequence of the neandertal genome ., 328:710722 , 2010 .o. bar - yosef .neandertals and modern humans in western asia . in t.akazawa , k. aoki , and o. bar - yosef , editors , _ the chronology of the middle paleolithic of the levant _ , pages 3956 .plenum , new york , 1999 .biomathematics ( berlin ) .springer - verlag , 1979 . a. g. m. neves and c. h. c. moreira .applications of the galton - watson process to human dna evolution and demography . , 368:132 , 2006 .a. g. m. neves and c. h. c. moreira .the mitochondrial eve in an exponentially growing population and a critique to the out of africa model for human evolution . in r. p. mondaini and r. dilo , editors , _biomat 2005_. world scientific , 2005 .a. g. m. neves and c. h. c. moreira . the number of generations between branching events in a galton - watson tree and its application to human mitochondrial dna evolution . in r.p. mondaini and r. dilao , editors , _biomat 2006_. world scientific , 2006 .lack of self - averaging and family trees ., 332:387 393 , 2004 .m. serva . on the genealogy of populations : trees , branches and offspring ., 2005(07):p07011 , 2005 .mitochondrial dna replacement versus nuclear dna persistence . , 2006(10):p10013 , 2006 .d. reich , r. e. green et al. genetic history of an archaic hominin group from denisova cave in siberia . , 468(7327):10531060 , 2010 .j. krause , q. fu , j. m. good , b. viola , m. v. shunkov , a. p. derevianko , and s. pbo .the complete mitochondrial dna genome of an unknown hominin from southern siberia ., 464:894897 , 2010 .b. derrida , s. c. manrubia , and d. h. zanette .statistical properties of genealogical trees ., 82(9):19871990 , mar 1999 .b. derrida , s. c. manrubia , and d. h. zanette .distribution of repetitions of ancestors in genealogical trees ., 281(1 - 4):1 16 , 2000 .b. derrida , s. c. manrubia , and d. h. zanette . 
on the genealogy of a population of biparental individuals ., 203(3):303 315 , 2000 .j. t. chang .recent common ancestors of all present - day individuals . , 31(4):10021026 , 1999 .a. g. m. neves .interbreeding conditions for explaining neandertal dna in living humans : the nonneutral case .preprint ( 2011 ) submitted to biomat 2011 , available at http://arxiv.org/ps_cache/arxiv/pdf/1105/1105.6069v1.pdf b. harder .did humans and neandertals battle for control of the middle east ?, march 8 2002 .p. a. underhill , li jin , a. a. lin , s. q. mehdi , t. jenkins , d. vollrath , r. w. davis , l. l. cavalli - sforza , and p. j. oefner .detection of numerous y chromosome biallelic polymorphisms by denaturing high - performance liquid chromatography ., 7:9961005 , 1997 .m. krings , c. capelli , f. tschentscher , h. geisert , s. meyer , a. von haeseler , k. grossschmidt , g. possnert , m. paunovic , and s. pbo .a view of neandertal genetic diversity . , 90(26):144 146 , 2000 .m. currat and l. excoffier .modern humans did not admix with neanderthals during their range expansion into europe ., 2(12):e421 , 2004 .j. zilho , d. e. angelucci , et al . . ,107(3):10231028 , 2010 .k. wong and j. zilho ., 302(6):7275 , 2010 .by introducing the auxiliary functions and and taking into account the initial conditions , , we may solve odes ( 2 ) in the main text of the paper , obtaining \ ] ] and where in eq .( [ z2 ] ) , is given by eq .( [ z1 ] ) .the same path could be followed for the direct solution of the difference equations eq .( 1 ) in the main text , but formulae corresponding to eqs . ( [ z1 ] ) and ( [ z2 ] ) hereare more involved and , more importantly , the limit will be appropriate for our further analysis .of course eqs .( [ z1 ] ) and ( [ z2 ] ) may be trivially used to derive explicit expressions for and , but we think the result is clearer in the form given by eqs .( [ z1 ] ) and ( [ z2 ] ) . in general, is a complicated function obtained by realizing the wright - fisher drift . in the limit ,it is a solution of the stochastic ode where is standard brownian motion , i.e. and . as a consequence, we can not explicitly compute the integrals in eqs .( [ z1 ] ) and ( [ z2 ] ) .anyway , eqs .( [ z1 ] ) and ( [ z2 ] ) can be used to give a qualitative description of the solutions to eq .( 2 ) in the main text and , if necessary , integrals may be easily numerically computed . as the integrand in the exponent of eq .( [ z1 ] ) is positive , it shows that the difference between and is positive and steadily decreasing .moreover , this information , when plugged into eqs .( 2 ) shows that in fact decreases and increases .( [ z2 ] ) on the other hand shows that gene flow from one subpopulation into the other is generally not symmetric .in fact , measures the difference between the fraction of type 1 alleles in subpopulation 2 and type 2 alleles in subpopulation 1 . by eq .( [ z2 ] ) , this difference decreases at times in which and increases when .moreover , it shows that gene flow is more effective at initial times , when values are larger .with the purpose of illustrating the qualitative behavior of the solutions of eqs .( 2 ) , see appendix a , we show in fig . s1 plots of and numerically obtained in the case of two deterministic histories which illustrate typical situations occurring in the wright - fisher drift . it can be seen that all qualitative features of the solutions to eqs .( 2 ) are present .it should also be noticed in fig . s1that the final values of and , i.e. 
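the stochastic ode mentioned above is the standard diffusion limit of the wright-fisher drift; since the displayed formula is garbled here, we write it in the usual form dx = sqrt(x(1-x)) dw, with time measured in units of n generations and standard brownian noise, and stress that this exact form is our reading rather than a quotation. the euler-maruyama sketch below integrates one path of this diffusion; step size and seed are arbitrary.

```python
import math
import random

def wf_diffusion_path(x0, dt=1e-4, seed=0):
    """euler-maruyama integration of dx = sqrt(x(1-x)) dw (time in units of n
    generations), stopped when the path is absorbed at 0 or 1."""
    rng = random.Random(seed)
    x, t = x0, 0.0
    while 0.0 < x < 1.0:
        x += math.sqrt(max(x * (1.0 - x), 0.0) * dt) * rng.gauss(0.0, 1.0)
        x = min(max(x, 0.0), 1.0)          # clip at the absorbing boundaries
        t += dt
    return x, t

print(wf_diffusion_path(0.5, seed=42))     # absorbed state and absorption time
```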
their values at the time of extinction of one of the subpopulations , do depend very much on the history and on the value of .for two different histories and two different values of we plot the solutions of eqs .( 2 ) . in both plots, the black dotted curve represents .the left plot corresponds to a situation in which subpopulation 2 is rapidly extinct , while the right plot to a situation in which extinction of population 2 occurs after an initial period of oscillating populations . in both pictureswe represent a situation with ( full lines ) and another with ( dashed lines ) . in each picturethe upper ( red ) lines correspond to and the lower ( blue ) lines to .notice that in these examples the allelic fractions of the subpopulations become the same before extinction.,scaledwidth=90.0% ] the final values of and are the most important outputs of the model , because they can be compared with experimental data .as stated above , these values are expected to heavily depend on the particular realization of and on .therefore , although the qualitative behavior of and , as outlined in appendix a , is quite well - understood , it is necessary to simulate the model by a computer program to obtain quantitative information on their final values .we first simulate the history .this part begins by choosing a value for the total population size and a value for the initial fraction of individuals in subpopulation 1 .some comments on the choice of are made in appendix c. the choice of is not so relevant if it is large enough so that agreement between the solutions of eqs .( 1 ) and eqs . ( 2 )is good . in all results shownwe have taken , which produced a good agreement .then , individuals in generation independently and randomly choose the subpopulation to which they belong , being the probability of choosing subpopulation 1 .the fraction of individuals which expressed the choice for subpopulation 1 produces the value . in general , individuals in generation randomly and independently choose their subpopulation , being the probability of choosing subpopulation 1 .this procedure is repeated many times and it generates a realization of the wright - fisher drift , i.e. a sequence of values , until attains either the value 0 or the value 1 .if a realization of is directly plugged into the difference equations ( 1 ) , the _ theoretical _ values of and can be easily obtained .these theoretical values have been used in producing e.g. results shown in fig .1 , which represents the core of the paper .the second part of the program concerns the processes of diploid reproduction and exchange of individuals between subpopulations . 
in order to obtain the _ simulated _ values of and ,it is necessary to numerically run the stochastic processes of reproduction and individual exchange .the former consists in the choice of the parents of each individual in the next generation and the latter in the random extraction of individuals to be exchanged between subpopulations .we will explain later the details of them .the purpose of this second part is twofold : it is necessary to check , see fig .s1 , the accuracy of the approximations made in deducing eqs .( 1 ) , and also to obtain information concerning the common ancestors of all individuals in the population in paternal and maternal lineages .although this information has no relevance for the the final values of and , it is necessary in order to check whether or not the common ancestors of the whole population in paternal and maternal lines belong to the ancestors of the surviving subpopulation . in the second part of the program , at all time stepswe suppose that half the number of individuals in any subpopulation are males and the other half females . the process of individuals exchange is simulated by randomly picking individuals of subpopulation 1 and individuals of subpopulation 2 and exchanging their subpopulation affiliation . in the more interesting case in which is less than 1, we promote the exchange of 1 random individual of each subpopulation each generations .the reproduction process is simulated as follows : each individual of subpopulation 1 at time makes a random choice of both his / her parents among male and female individuals in subpopulation 1 at time , migrants included .the analogous procedure is followed by individuals in subpopulation 2 . at each generationwe keep track of the entire genealogy of each of the individuals by counting the number of times each one of the ancestors ( individuals of the founding population which lived at time 0 , before interbreeding started ) appears in his / her genealogical tree . then we proceed to computing the simulated value of the fractions and .we first consider a single individual at time in subpopulation 1 and we count the number of times the ancestors belonging to subpopulation 1 appear in his genealogy , then we divide this number by , and finally we average this value with respect to all individuals of subpopulation 1 at time .the result is the _ simulated _ value of .an analogous calculation produces the simulated value of . for each male individual at each generation we also keep track of his ancestor by paternal line in generation 0 . for the female individuals we do the same for the maternal ancestor in generation 0 . the left graph in fig .s2 shows the result of one such simulation , in which we compare the theoretical and simulated values for and using the same wright - fisher drift history .it should be noted that , although not complete , agreement between simulated and theoretical quantities is good .we remind here that the simulated allelic fractions are subject to statistical fluctuations due to the random processes of exchange of individuals and diploid reproduction . for a single wright - fisher drift history plotted with brown full dots and we compare the theoretical and simulated values of and . 
in both plots ,the theoretical values are shown in full lines .the upper ( red ) line corresponds to and the lower ( blue ) line corresponds to .the corresponding simulated values are shown respectively as red open dots and blue crosses .the left graph shows the simulated values obtained by a single simulation , whereas the right graph shows the averages of 100 simulations.,scaledwidth=90.0% ] indeed , we believe that the randomness in the diploid reproduction process accounts for the largest part of the difference between theoretical and simulated values .in fact , as shown in , with diploid reproduction the contribution of each single individual to the gene pool some generations later is highly variable . on the other hand ,if is much less than , randomness in the process of exchange of individuals is not so important since at the time of exchanges the individuals in each subpopulation are already highly homogeneous from the point of view of allelic fractions .we have directly checked this fact while producing the data shown in fig .it should also be noticed that agreement between theoretical and simulated values is worst for when subpopulation 2 is close to extinction . in this case , in fact , given the small size of subpopulation 2 , even a small number of migrants induces large fluctuations in . the right graph in fig .s2 shows the average of the simulated values and over 100 simulations with the same history .notice that the difference between theoretical and average simulated values is accordingly smaller .as remarked in the main text , in producing fig . 1 , we have taken random values of uniformly distributed between 0 and 0.8 .the reason why we avoided larger values of is that they are too close to the experimental interval or inside that interval .we now explain why this must be done .first we observe that if is in the experimental interval , then the final values of and will necessarily also lie in the experimental interval provided that is large enough .the free mating situation , in which subpopulations interact as if there were no differences among their members , is a particular case of this large regime .free mating , in the infinite population limit , is in fact described " by eqs .( 2 ) with an infinite value for . in this casethe solution to the equations is straightforward : both and become instantaneously equal to .the conclusion is that if lies in the experimental interval , then the model would fail to predict any upper bound to , as easy or free mating situations are allowed .nevertheless , we do not believe that either of these situations were likely to have occurred in reality , since distinct subpopulations coexisted for thousands of years .therefore , the experimental interval has to be excluded in the choice of .if we take instead values of outside the experimental interval , but still close to its boundaries , simulations show that both and take very large values , such values tending to infinity as gets closer to the experimental interval .this is illustrated in fig .[ x0alpha ] .with typical values of become comparable to ( with the value we used and also with of the order of tens of thousands as it could have been in the real events in middle east ) or larger . 
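the free-mating limit invoked above can also be checked numerically: under the exchange update rule assumed in the earlier sketches, the quantity n1 p1 + n2 p2 is conserved when the subpopulation sizes are held fixed, so with frequent exchange both allelic fractions converge to the common value x0. the snippet below illustrates this with the sizes frozen and an exchange rate kept small enough per generation for the difference equations to remain meaningful; all numbers are illustrative.

```python
def mixing_with_fixed_sizes(x0=0.97, m=2.0, n=500, generations=2000):
    """iterate the (assumed) exchange equations with subpopulation sizes frozen
    at x0*n and (1-x0)*n; both allelic fractions approach x0, which is why
    initial african fractions inside or near the experimental interval have to
    be excluded from the sampling."""
    p1, p2 = 1.0, 0.0
    n1, n2 = x0 * n, (1.0 - x0) * n
    for _ in range(generations):
        d = p2 - p1
        p1, p2 = p1 + (m / n1) * d, p2 - (m / n2) * d
    return p1, p2

print(mixing_with_fixed_sizes())        # both values close to 0.97
```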
as already commented , for such large values of , eqs . ( 1 ) or ( 2 ) do not describe accurately the interbreeding process . the reason is that the assumption that all individuals in each subpopulation are genetically homogeneous , necessary to deduce eqs . ( 1 ) , fails . [ fig . [ x0alpha ] caption : correlation between and for 790 histories such that lies in the experimental interval . the histories were produced with random and random subject to . the number of histories with in the experimental interval increases with , as do the corresponding values of . ] our choice in fig . 1 of taking the values of limited to 0.8 is thus a reasonable consequence of the mathematical characteristics of the model . it is also a reasonable choice from a historical point of view , because we are assuming that the neanderthal subpopulation was comparable to the african one ; it might have been smaller , but not extremely smaller . mitochondrial dna and the y chromosome are both inherited in a haploid way . furthermore , mtdna is not subject to recombination , and recombination seems to be negligible for the y chromosome . it is also believed that large portions of both are selectively neutral . these facts allow an easier mathematical treatment of their statistical properties . from the experimental point of view , large samples of mtdna and y chromosomes in living humans have been sequenced . the small variation among living humans is compatible with a single ancestor woman ( mtdna ) and a single ancestor man ( y chromosome ) , probably both of african origin and living about 100 - 200 thousand years ago . these facts have been interpreted as proofs of the out of africa model . more recently , the whole mtdna of a few neanderthal fossils became available . the average number of pairwise differences in mtdna between a neanderthal and a living human is significantly larger than the average number of pairwise differences in mtdna among living humans . this has been considered as a further confirmation of the claim that neanderthals belong to a separate species , see e.g. , and also for the out of africa model . both authors of this paper have separately claimed that the above facts are all compatible with anatomically modern africans and neanderthals being part of a single interbreeding population at the times they coexisted . in , using kingman 's coalescent , it was shown that the probability distribution of genealogical distances in a population of fixed size and haploid reproduction is random even in the limit when the population size is infinite . the random distribution typically allows large genealogical distances among subpopulations . in , an important fact was statistically described : in a population of fixed size and haploid reproduction one of the two main subpopulations will become extinct at random times with exponential distribution . when such an extinction occurs , average genealogical distances among individuals in the population have a sudden drop . finally , in , it was shown that mtdna may be completely replaced in a population by the mtdna of another neighboring population , whereas some finite fraction of nuclear dna persists . these facts imply that the large genealogical distances between living humans and neanderthals , as seen in mtdna , are not uncommon in an interbreeding population .
on the contrary, they turn out to be very likely if the correct statistics is used .furthermore , these facts imply that these distances may have been much larger at the time of neanderthals extinction than they are nowadays .they also imply that extinction of neanderthals mtdna is compatible with the survival of their nuclear dna .exactly the same reasoning can be applied to the mitochondrial and nuclear dnas of the fossil bones found in siberia , later described as the new population of denisovans .the fact that denisovans differ significantly both from neanderthals and living humans in their mtdna does not imply that they could not interbreed with either of them .indeed , nuclear dna proved that they have interbred at least with some anatomically modern populations . in the authors examined the question of survival of mtdna and y chromosome lineages in a population subject to exponential stochastic growth ( supercritical galton - watson branching process ) .it was shown that exponential growth is compatible with the survival of a single mtdna or y chromosome lineage only if the growth rate is in a narrow interval .thus , even if neanderthals and anatomically modern africans belonged to the same interbreeding population and even if this population was allowed to grow exponentially with a small rate , the more probable outcome would still be all humans being descendants either of a single woman ( mtdna ) or a single man ( y chromosome ) . in ,the number of generations between successive branchings in the galton - watson process was computed .it was found that in the slightly supercritical regime , in which the survival of a single lineage is expected , trees typically have very long branches of the size of the whole tree along with shorter branches of all sizes .thus , trees are qualitatively similar to those of the coalescent model and , as a consequence , the phenomenon of sudden drops in genealogical distances , described in is also present in this model .
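as a toy complement to the branching - process results quoted above , the following sketch ( ours , with arbitrary parameter values ; it does not reproduce the computations of the cited works ) tracks haploid founder labels in a slightly supercritical galton - watson population , which makes it easy to explore how often only a few founding lineages survive after many generations .

```python
# a small illustrative sketch (not from the cited works): haploid founder-label
# bookkeeping in a galton-watson population, to explore how many founding
# lineages (e.g. mtdna or y-chromosome ancestors) persist.  the founder count,
# offspring mean m and horizon are arbitrary, chosen small enough to run fast.
import numpy as np

rng = np.random.default_rng(1)

def surviving_lineages(n_founders=200, m=1.005, generations=1000):
    labels = np.arange(n_founders)                    # one haploid label per founder
    for _ in range(generations):
        offspring = rng.poisson(m, size=labels.size)  # galton-watson reproduction
        labels = np.repeat(labels, offspring)         # children inherit the founder label
        if labels.size == 0:                          # whole population died out
            return 0
    return np.unique(labels).size

counts = [surviving_lineages() for _ in range(20)]
print(counts)   # number of founder lineages persisting in each run
```

with a growth rate this close to one , lineage survival is expected to be rare , whereas a strongly supercritical rate would preserve many founder labels ; this is the regime dependence referred to in the text .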
|
considering the recent experimental discovery of green _ et al _ that present day non - africans have 1 to of their nuclear dna of neanderthal origin , we propose here a model which is able to quantify the interbreeding events between africans and neanderthals at the time they coexisted in the middle east . the model consists of a solvable system of deterministic ordinary differential equations containing as a stochastic ingredient a realization of the neutral wright - fisher drift process . by simulating the stochastic part of the model we are able to apply it to the interbreeding of the african and neanderthal subpopulations and estimate the only parameter of the model , which is the number of individuals per generation exchanged between subpopulations . our results indicate that the amount of neanderthal dna in non - africans can be explained with maximum probability by the exchange of a single pair of individuals between the subpopulations every 77 generations , but larger exchange frequencies are also allowed with sizeable probability . the results are compatible with a total interbreeding population of order individuals and with all living humans being descendants of africans both for mitochondrial dna and the y chromosome .
|
compartmental epidemic models have been considered widely in the mathematical literature since the pioneering works of kermack , mckendrick and many others .investigating fundamental properties of the models with analytical tools allows us to get insight into the spread and control of the disease by gaining information about the solutions of the corresponding system of differential equations .determining steady states of the system and knowing their stability is of particular interest if one thinks of the long term behavior of the solution as final epidemic outcome .+ in the great majority of the deterministic models for communicable diseases , two steady states exist : one disease free , meaning that the disease is not present in the population , and the other one is endemic , when the infection persists with a positive state in some of the infected compartments .in such situation the basic reproduction number ( ) usually works as a threshold for the stability of fixed points : typically the disease free equilibrium is locally asymptotically stable whenever this quantity , defined as the number of secondary cases generated by an index infected individual who was introduced into a completely susceptible population , is less than unity , and for values of greater than one , the endemic fixed point emerging at takes stability over by making the disease free state unstable .this phenomenon , known as forward bifurcation at , is in contrary to some other cases when more than two equilibria coexist in certain parameter regions .backward bifurcation presents such a scenario , when there is an interval for values of to the left of one where there is a stable and an unstable endemic fixed point besides the unique disease free equilibrium .such dynamical structure of fixed points has been observed is several biological models considering multiple groups with asymmetry between groups and multiple interaction mechanisms ( for an overview see , for instance , and the references therein ) .however , examples can also be found in the literature where the coexistence of multiple non - trivial steady states is not due to backward transcritical bifurcation of the disease free equilibrium ; in the age - structured sir model analyzed by franceschetti _ endemic equilibria arise through two saddle - node bifurcations of a positive fixed point , moreover wang found backward bifurcation from an endemic equilibrium in a simple sir model with treatment . + in case of forward transcritical bifurcation , the classical disease control policy can be formulated : the stability of the endemic state typically accompanied with the persistence of the disease in the population as long as the reproduction number is larger than one , while controlling the epidemic in a way such that decreases below one successfully eliminates the infection , since every solution converges to the disease free equilibrium when is less than unity . 
on the other hand , the presence of backward bifurcation with a stable non - trivial fixed point for means that bringing the reproduction number below one is only necessary but not sufficient for disease eradication .nevertheless , multiple endemic equilibria have further epidemiological implications , namely that stability and global behavior of the models that exhibit such structure are often not easy to analyze , henceforth little can be known about the final outcome of the epidemic .+ multi - city epidemic models , where the population is distributed in space over several discrete geographical regions with the possibility of individuals mobility between them , provide another example for rich dynamics . in the special case when the cities are disconnected the model possesses numerous steady states , the product of the numbers of equilibria in the one - patch models corresponding to each city . however , the introduction of traveling has a significant impact on steady states , as it often causes substantial technical difficulties in the fixed point analysis and , more importantly , makes certain equilibria disappear . some works in the literature deal with models wherethe system with traveling exhibits only two steady states , one disease free with the infection not being present in any of the regions , and another one , which exists only for , corresponding to the situation when the disease is endemic in each region ( see , for instance , arino , arino and van den driessche ) . other studies which consider the spatial dispersal of infecteds between regions ( gao and ruan , wang and zhao and the references therein ) do nt derive the exact number for the steady states but show the global stability of a single disease free fixed point for and claim the uniform persistence of the disease for with proving the existence of at least one ( componentwise ) positive equilibrium .+ the purpose of this study is to investigate the impact of individuals mobility on the number of equilibria in multiregional epidemic models .a general deterministic model is formulated to describe the spread of infectious diseases with horizontal transmission .the framework enables us to consider models with multiple susceptible , infected and removed compartments , and more significantly , with several steady states .the model can be extended to an arbitrary number of regions connected by instantaneous travel , and we investigate how mobility creates or destroys equilibria in the system .first we determine the exact number of steady states for the model in disconnected regions , then give a precise condition in terms of the reproduction numbers of the regions and the connecting network for the persistence of equilibria in the system with traveling .the possibilities for a three patch scenario with backward bifurcations ( i.e. 
, when two endemic states are present for local reproduction numbers less than one ) are sketched in figure [ fig : schematic ] ( cf .corollary [ cor : summaryee ] ) .+ the paper is organized as follows .a general class of compartmental epidemic models is presented in section [ sec : model ] , including multigroup , multistrain and stage progression models .we consider regions which are connected by means of movement between the subpopulations and use our setting as a model building block in each region .section [ sec : dfe ] concerns with the unique disease free equilibrium of the multiregional system with small volumes of mobility , whilst in sections [ sec : ee ] , [ sec : irred ] and [ sec : nodirect ] we consider the endemic steady states of the disconnected system and specify conditions on the connection network and the model equations for the persistence of fixed points in the system with traveling .we finish sections [ sec : ee]-[sec : nodirect ] with corollaries that summarize the achievements .the results are applied to a model for hiv transmission in three regions with various types of connecting networks in section [ sec : hiv ] , then this model is used for the numerical simulations of section [ sec : richdyn ] to give insight into the interesting dynamics with multiple stable endemic equilibria , caused by the possibility of traveling .we consider an arbitrary ( ) number of regions , and use upper index to denote region , . let , and represent the set of infected , susceptible and removed ( by means of immunity or recovery ) compartments , respectively , for .the vectors , and are functions of time .we assume that all individuals are born susceptible , the continuous function models recruitment and also death of susceptible members .it is assumed that is times continuously differentiable .the matrix describes the transitions between infected classes as well as removals from infected states through death and recovery .it is reasonable to assume that all non - diagonal entries of are non - positive , that is , has the z sign pattern ; moreover the sum of the components of should also be nonnegative for any .it is shown in that for such a matrix it holds that it is a non - singular m - matrix , moreover .furthermore we let be a diagonal matrix whose diagonal entries denote the removal rate in the corresponding removed class .+ disease transmission is described by the matrix function , assumed on , an element represents transmission between the susceptible class and the infected compartment .the term thus has the form , . 
for each pair define a non - negative -vector which distributes the term into the infected compartments ; it necessarily holds that . henceforth individuals who enter the -th infected class when turning infected are represented by , which allows us to interpret the inflow of newly infected individuals into as with , . recovery of members of the disease compartment into the removed class is denoted by the -th entry of the nonnegative matrix . + in case of disconnected regions we can formulate the equations describing disease dynamics in region , , as due to its general formulation our system is applicable to describe a broad variety of epidemiological models in the literature . this is illustrated with some simple examples . _ multigroup models . _ epidemiological models where , based on individual behavior , multiple homogeneous subpopulations ( groups ) are distinguished in the heterogeneous population are often called multigroup models . the different individual behavior is typically reflected in the incidence function since , for instance , for sexually transmitted diseases the probability of becoming infected depends on the number of contacts the individual makes , which is closely related to his / her sexual behavior . in terms of our system ( [ basismodel ] ) , such a model is realized if holds and the vector is defined such that its component is one with all other elements zero , meaning that individuals who are in the susceptible group go into the infected class when contracting the disease . a simple sir - type model with constant recruitment into the susceptible class , and and as natural mortality rate of the subpopulation and recovery rate of individuals in , , becomes a multigroup model if its ode system reads see also the classical work of hethcote and van ark for epidemic spread in heterogeneous populations . _ stage progression models . _ these models are designed to describe the spread of infectious diseases where all newly infected individuals arrive in the same compartment and then progress through several infected stages until they recover or die . if we let for every then ( [ basismodel ] ) becomes a stage progression model . the example provides such a framework with one susceptible and one removed class . the more general model presented by hyman _ et al . _ in considers different infected compartments to represent the phenomenon of changing transmission potential throughout the course of the infectious period . _ multistrain models . _ considering more than one infected class in an epidemic model might be necessary because of the coexistence of multiple disease strains . individuals infected by different subtypes of the pathogen belong to different disease compartments , and a new infection induced by a strain always arises in the corresponding infected class . using the interpretation of in ( [ basismodel ] ) this can be modeled with the choice of , , , however it is not hard to see that the model described by the system also exhibits such a structure . van den driessche and watmough refer to several works for multistrain models in section 4.4 of , and they also provide a system with two strains and one susceptible class as an example ; we point out , though , that their model incorporates the possibility of `` super - infection '' , which is not considered in our framework .
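as an illustration of the multigroup structure , here is a minimal two - group sir sketch ; the parameter values and the bilinear incidence are our own arbitrary choices , not taken from the works cited above .

```python
# a minimal sketch (illustrative values, not from the paper) of a two-group
# sir model of the type described above: constant recruitment lam, natural
# mortality mu, recovery gamma, and a transmission matrix beta[k, j] coupling
# susceptibles of group k to infecteds of group j.
import numpy as np
from scipy.integrate import solve_ivp

lam = np.array([10.0, 5.0])          # recruitment into the susceptible classes
mu = np.array([0.02, 0.02])          # natural mortality
gamma = np.array([0.1, 0.2])         # recovery rates
beta = np.array([[2e-4, 5e-5],       # transmission coefficients between groups
                 [5e-5, 3e-4]])

def rhs(t, y):
    s, i, r = y[:2], y[2:4], y[4:]
    force = beta @ i                  # force of infection acting on each group
    ds = lam - mu * s - force * s
    di = force * s - (mu + gamma) * i
    dr = gamma * i - mu * r
    return np.concatenate([ds, di, dr])

y0 = np.array([500.0, 250.0, 1.0, 0.0, 0.0, 0.0])   # one initial case in group 1
sol = solve_ivp(rhs, (0.0, 2000.0), y0, rtol=1e-8, atol=1e-10)
print(sol.y[2:4, -1])   # infected classes at the end of the run
```

the same skeleton extends to stage progression or multistrain variants by changing which compartments receive the newly infected individuals .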
after describing our general disease transmission model in separated territories , we connect the regions by means of traveling , with the assumption that travel occurs instantaneously . we denote the matrices of movement rates from region to region , , , of infected , susceptible and removed individuals by , and , respectively , which have the form , and , where all entries are nonnegative . for connected regions , our model in region reads . in the absence of traveling , i.e. , when , , for all , the equations for a given region are independent of the equations of the other regions . we assume that for each the equation has a unique solution ; this yields that there exists a unique disease free equilibrium in region since and the third equation of ( [ basismodel ] ) implies . we also suppose that all eigenvalues of the derivative have negative real part , which establishes the local asymptotic stability of in the disease free system . when system ( [ basismodel ] ) is close to the disease free equilibrium , the dynamics in the infected classes can be approximated by the linear equation where we use the notation . the transmission matrix represents the production of new infections while describes transition between and out of the infected classes . clearly is nonnegative , which together with implies the non - negativity of . we recall that the spectral radius of a matrix is the largest real eigenvalue of ( according to the frobenius perron theorem such an eigenvalue always exists for non - negative matrices , and it dominates the modulus of all other eigenvalues ) . we define the _ local _ reproduction number in region as and obtain the following result . the point is locally asymptotically stable in ( [ basismodel ] ) if , and unstable if . the stability of the disease free fixed point is determined by the eigenvalues of the jacobian of ( [ basismodel ] ) evaluated at the equilibrium . linearizing the system at yields where it holds that has negative real eigenvalues , and by assumption the eigenvalues of have negative real part . the special structure of implies that determines the stability of the disease free equilibrium . + it is known that all eigenvalues of the matrix have negative real part if and only if , and there is an eigenvalue with positive real part if and only if . since was defined as the spectral radius of , one obtains the statement of the proposition . if the regions are disconnected , the basic ( global ) reproduction number arises as the maximum of the local reproduction numbers , hence we arrive at the following simple proposition . [ prop : stabalpha0 ] the system has a unique disease free equilibrium , which is locally asymptotically stable if and is unstable if , where we define . let us suppose that all movement rates admit the form , , , where the non - negative constants , and represent connectivity potential and we can think of as the general mobility parameter . using the notation makes , . with this formulation we can control all movement rates at once , through the parameter ; moreover it allows us to rewrite systems in the compact form with and , where , and are defined as the right hand side of the first , second and third equation , respectively , of system ( [ genmodel ] ) , . we note that is an times continuously differentiable function on , and for ( [ compt ] ) gives system . + as pointed out in proposition [ prop : stabalpha0 ] , the point is the unique disease free equilibrium of .
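numerically , the local reproduction number defined above is just the spectral radius of the next - generation matrix built from the transmission and transition blocks ; a minimal sketch with made - up matrices ( one region above and one below the threshold ) :

```python
# a small numerical sketch of the next-generation computation described above:
# the local reproduction number of a region is the spectral radius of
# f @ inv(v), and for disconnected regions the global threshold is the maximum
# of the local values.  the matrices below are arbitrary illustrative examples.
import numpy as np

def reproduction_number(f, v):
    """spectral radius of the next-generation matrix f v^{-1}."""
    ngm = f @ np.linalg.inv(v)
    return max(abs(np.linalg.eigvals(ngm)))

# two fictitious regions with two infected classes each
f1 = np.array([[0.4, 0.3], [0.0, 0.0]])      # new infections
v1 = np.array([[0.5, 0.0], [-0.2, 0.4]])     # transitions / removals (m-matrix structure)
f2 = np.array([[0.2, 0.1], [0.0, 0.0]])
v2 = np.array([[0.6, 0.0], [-0.1, 0.5]])

local = [reproduction_number(f1, v1), reproduction_number(f2, v2)]
print(local, max(local))   # local values and the global threshold max over regions
```

with this threshold in hand we return to the disease free equilibrium of the connected system .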
since this system coincides with for , it holds that , this is , is a disease free steady state of when , and it is unique .the following theorem establishes the existence of a unique disease free equilibrium of this system for small positive -s .[ th : iftdfe ] assume that the matrix is invertible .then , by means of the implicit function theorem it holds that there exists an , an open set containing , and a unique times continuously differentiable function such that and for .moreover , can be defined such that is the unique disease free equilibrium of system on . the existence of , the continuous function which satisfies the fixed point equations of ( [ compt ] ) for small -s , is straightforward so it remains to show that it defines a disease free steady state when is sufficiently close to zero .+ we consider the following system for the susceptible classes of the model with traveling the jacobian evaluated at the disease free equilibrium and reads , its non - singularity follows from the assumption made earlier in this section that all eigenvalues of , , have negative real part .we again apply the implicit function theorem and get that in the absence of the disease the susceptible subsystem obtains a unique equilibrium for small values of .more precisely , there is an times continuously differentiable function , which satisfies the steady - state equations of ( [ eq : suscsub ] ) whenever is in with close to zero , and it also holds that . on the other hand, we note that the point is an equilibrium solution of system , and by uniqueness it follows that , and necessarily , for . by continuityit is clear from , , that can be defined such that is nonnegative , and thus , it is a disease free fixed point of which is biologically meaningful .if is locally asymptotically stable in system then has only eigenvalues with negative real part , and therefore is invertible . by continuity of the eigenvalues with respect to parameters all eigenvalues of negative real part if is sufficiently small .similarly , if is unstable and has no eigenvalues on the imaginary axis then , for -s close enough to zero , has an eigenvalue with positive real part and thus , is unstable .we have learned from proposition [ prop : stabalpha0 ] that works as a threshold for the stability of the disease free steady state for , and now we obtain that this is not changed when traveling is introduced with small volumes into the system .there exists an such that is locally asymptotically stable on if , and in case and , can be chosen such that it also holds that is unstable for .next we examine endemic equilibria , , of system ( [ basismodel ] ) .we assume that the functions and matrices defined for the model are such that either or holds for , that is , in region if any of the infected ( susceptible ) ( removed ) compartments are at positive steady state then so are the other infected ( susceptible ) ( removed ) classes .endemic fixed points thus admit , which implies and . indeed , the equilibrium condition for system ( [ basismodel ] ) and , gives if , so our assumption above implies that is at positive steady state in endemic equilibria . 
on the other hand , would make , so using the non - singularity of and the first equation of ( [ basismodel ] ) , contradicts .endemic equilibria of the regions can thus be referred to as positive fixed points .+ without connections between the regions , let region have positive fixed points , .then the disconnected system admits endemic equilibria of the form , , and , the disease free steady state . in the sequel we will use the general notation , where for an means . the upper index ` ' in stands for .we note that holds with defined for system ( [ compt ] ) .+ the implicit function theorem is also applicable for any of the endemic equilibria under the assumption that the jacobian of system ( [ compt ] ) evaluated at the fixed point and has nonzero determinant .we remark that whenever is asymptotically stable , that is , is asymptotically stable in ( [ basismodel ] ) for all , then has no eigenvalues on the imaginary axis and thus , is nonsingular .[ th : iftendemic ] assume that the matrix is invertible .then , by means of the implicit function theorem it holds that there exists an , an open set containing , and a unique times continuously differentiable function such that and for . by continuity of eigenvalues with respect to parameters implies for -s sufficiently small , thus on an interval it holds that is a locally asymptotically stable ( unstable ) steady state of whenever is locally asymptotically stable ( unstable ) in .the last theorem means that , under certain assumptions on our system , it holds that for every equilibrium of the disconnected system there is a fixed point , , of close to when is sufficiently small .if has only positive components then so does , so we arrive to the following result .[ th : posee ] if is a positive equilibrium of then in theorem [ th : iftendemic ] can be chosen such that holds for .this means that the equilibrium of the disconnected system is preserved for small volumes of movement by a unique function which depends continuously on . on the other hand , it is possible that the has some zero components when there is a region , , where and hold , that is , the fixed point is on the boundary of the nonnegative cone of ; nevertheless we recall that is an endemic equilibrium so there exists a , , such that . in the sequelsuch fixed points will be referred to as _ boundary _ endemic equilibria .the biological interpretation of such a situation is that , when the regions are disconnected , the disease is endemic in some regions but is not present in others . 
in this case may move out of the nonnegative cone of as increases , which means that , though is a fixed point of system , it is not biologically meaningful .henceforth it is essential to describe under which conditions is fulfilled .this will be done in the following two lemmas but before we proceed let us introduce a definition to facilitate notations and terminology .consider an endemic equilibrium of system .+ if there is a region which is at a disease free steady state in then we say that region is dfat ( disease free in the absence of traveling ) in the endemic equilibrium , that is , .+ if there is a region which is at an endemic ( positive ) steady state in then we say that region is eat ( endemic in the absence of traveling ) in the endemic equilibrium , that is , .[ lem : suffforpos ] consider a boundary endemic equilibrium of system .for the function defined in theorem [ th : iftendemic ] to be nonnegative for small -s it is necessary and sufficient to ensure that holds for all -s such that in , that is , is dfat .we recall that in an endemic equilibrium holds by assumption for any , thus for an with the positivity of for small -s follows from and the continuity of . from ( [ genmodel ] )we derive the fixed point equation where is defined as all non - diagonal elements of this matrix are non - positive , thus it has the z sing pattern , moreover we also note that in each column the diagonal element dominates the absolute sum of all non - diagonal entries since , . then , we can apply theorem 5.1 in where the equivalence of properties 3 and 11 claims that is invertible with the inverse nonnegative . using the non - negativity of , , and equation ( [ eq : fxfz ] )we get that for all whenever the vector is nonnegative .if in a region , meaning that the region is endemic in the absence of traveling , then for -s close to zero it holds that since is continuous and .it is therefore enough ( though , clearly , also necessary as well ) to guarantee the nonnegativity of for each region where , that is , the region is dfat .[ lem : firstder ] consider a boundary endemic equilibrium of system . if is satisfied for the function defined in theorem [ th : iftendemic ] whenever region is dfat in , then is positive for -s sufficiently small . on the other hand if there is a region , which is dfat and for which has a negative component then there is no interval for to the right of zero such that is nonnegative .the derivative arises as the solution of the equation we consider a region where , this is , is a dfat region in . using the equilibrium condition we obtain where we remark that is differentiable at fixed points since and when . evaluating ( [ eq : deralpha ] ) at gives where we used that , and for and .note that is an equilibrium in ( [ basismodel ] ) and , since its component for the infected classes is zero , it equals the unique disease free equilibrium .this makes , so applying the definition of in section [ sec : dfe ] the above equations reformulate as before we investigate the solutions of equation ( [ eq : derx ] ) let us point out a few things . when introducing traveling a fixed point of moves along the continuous function . 
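the continuation of a fixed point in the mobility parameter can also be followed numerically ; the sketch below does this for a deliberately simple two - patch sis caricature ( one infected class per patch , arbitrary parameter values ) rather than for the general model , starting from a boundary equilibrium of the disconnected system .

```python
# a toy illustration (not the general model of the paper) of continuing an
# equilibrium of the disconnected system in the mobility parameter alpha:
# two patches with simple sis dynamics
#   i_k' = beta_k*i_k*(n_k - i_k)/n_k - gamma_k*i_k
#          + alpha*(sum_j travel[j,k]*i_j - sum_j travel[k,j]*i_k).
# all parameter values below are arbitrary.
import numpy as np
from scipy.optimize import fsolve

beta = np.array([0.3, 0.1])      # patch 1 endemic without travel, patch 2 not
gamma = np.array([0.2, 0.2])
n = np.array([1000.0, 1000.0])
travel = np.array([[0.0, 1.0],   # travel[j, k]: connectivity from patch j to k
                   [1.0, 0.0]])

def f(i, alpha):
    move = alpha * (travel.T @ i - travel.sum(axis=1) * i)
    return beta * i * (n - i) / n - gamma * i + move

# boundary endemic equilibrium of the disconnected system: patch 1 positive, patch 2 at zero
x = np.array([n[0] * (1 - gamma[0] / beta[0]), 0.0])
for alpha in [0.0, 0.01, 0.05, 0.1]:
    x = fsolve(f, x, args=(alpha,))
    print(alpha, x)              # the fixed point traced as alpha grows
```

with these values the patch that is disease free in the absence of traveling has local reproduction number 0.5 , and the continued fixed point stays nonnegative for small mobility , in line with the dichotomy discussed next .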
in the case when there are regions where the disease is not present without traveling and the fixed point has zeros for , it is possible that is non - positive for small positive -s .the epidemiological implication of such a situation is that boundary equilibria of the disconnected system might disappear when traveling is introduced .+ considering a boundary endemic equilibrium , lemmas [ lem : suffforpos ] and [ lem : firstder ] describe when such a case is realized and give condition for the non - negativity of , , for small positive -s .the equation ( [ eq : derx ] ) is derived for an for which holds ; the right hand side of ( [ eq : derx ] ) is a nonnegative -vector with the component having the form .it is clear that is positive if and only if there exists a , such that and , or with words , there is a region where the infected class is in a positive steady state in , and there is a connection from that class toward the infected class of region ( we remark that implies , yielding that the region is eat ) .we state two theorems .[ th : r0<1 ] assume that there is a region , , which is dfat in the boundary endemic equilibrium of system .then for the function defined in theorem [ th : iftendemic ] it is satisfied that if . furthermore , if we assume that , then it follows that . from the properties of described in section [ sec : model ] and the non - negativity of we get that holds for , hence has the z sign pattern .theorem 5.1 in says that is invertible and if and only if all eigenvalues of have positive real part ( properties 11 and 18 are equivalent ) ; or analogously , is invertible and if and only if all eigenvalues of have negative real part .we follow and and claim that , for all eigenvalues of to have negative real part it is necessary and sufficient that the spectral radius of which is the local reproduction number is less than unity .+ we conclude that if holds then the equality derived from ( [ eq : derx ] ) shows that is nonnegative .if the sum on the right hand side is strictly positive ( which is possible since is an endemic equilibrium hence there is a region , , where ; furthermore the matrix is also nonnegative ) , then yields .the proof is complete .[ th : r0>1 ] assume that there is a region , , which is dfat in the endemic equilibrium of system . if , then for the function defined in theorem [ th : iftendemic ] it is satisfied that has a non - positive component . furthermore ,if we assume that , then it holds that has a strictly negative component .theorems 5.3 and 5.11 in state that if is a square matrix which satisfies for and if there exists a vector such that , then it holds that every eigenvalue of has nonnegative real part .it is known that all eigenvalues of the matrix have negative real part if and only if , the maximum real part of the eigenvalues is zero if and only if , and there is an eigenvalue with strictly positive real part if and only if . 
hence, using the above result from with and the non - negativity of the right hand side of ( [ eq : derx ] ) we get that if then there exists no positive vector such that since has an eigenvalue with negative real part .this implies the first statement of the theorem .+ theorem 5.1 in yields that there is no such that ; it follows from the equivalence of properties 1 and 18 of theorem 5.1 that for the existence of such all eigenvalues of should have positive real part .if we now suppose that the last assumption of our statement holds , which ensures the positivity of the right hand side of ( [ eq : derx ] ) , then we get that should satisfy an inequality of the form , which in the light of the argument above is only possible if has a negative component .theorems [ th : r0<1 ] and [ th : r0>1 ] together with lemmas [ lem : suffforpos ] and [ lem : firstder ] give conditions for the persistence of endemic equilibria in system for small volumes of travel .if the fixed point is a boundary endemic equilibrium of system with a dfat region ( that is , ) but , once traveling is introduced , to every infected class in there is an inflow from another region which is eat ( i.e. , if the right hand side of equation ( [ eq : derx ] ) is positive ) , then , , leaves the nonnegative cone of if , since has a negative component and hence , so does for small -s . on the other hand , if for every dfat region , , it holds that the local reproduction number is less than one , and to each infected class there is an inflow from an eat region by means of individuals movement , then for each such implies that the endemic equilibrium is preserved in system when is small .+ we understand that there is a limitation in applying the results of the above stated theorems : to decide whether an endemic steady state of the disconnected system continues to exist in the system with traveling , we need to know the structure of the connecting network and require the pretty restrictive property that for each with , for each there exists a , such that and . in the next sectionwe turn our attention to the case when this property does nt hold , that is , there is a region which is dfat and the right hand side of ( [ eq : derx ] ) is not positive ( nevertheless we emphasize that , considering the biological interpretation of the sum , it is always nonnegative ) .this section is closed with a corollary which summarizes our findings .the result covers the special case when the connecting network of all infected classes is a complete network .[ cor : summaryee ] consider a boundary endemic equilibrium of system .assume that is satisfied whenever , , is a dfat region in ; we note that this condition always holds if the constant is positive for every and , meaning that all possible connections are established between the infected compartments of the regions .then , in case holds in all dfat regions we get that is preserved for small volumes of traveling by a unique function which depends continuously on .if there exists a region which is eat and where then moves out of the feasible phase space when traveling is introduced .knowing the steady states of the disconnected system , we are interested in the effect of incorporating the possibility of individuals movement on the equilibria . 
the differential system of connected regions reduces to when the general mobility parameter equals zero , thus whenever the jacobian of evaluated at an equilibrium of and , , is nonsingular , the existence of a fixed point , , in is guaranteed for small -s by the implicit function theorem .theorem [ th : posee ] implies that if is a positive steady state of then so is in . on the other hand in case is a boundary endemic equilibrium and holds for some , meaning that region is at disease free state ( dfat ) when the system is disconnected , the continuous dependence of on allows that the fixed point might move out of the feasible phase space as becomes positive .+ in section [ sec : ee ] we gave a full picture of the behavior of for small -s in the case when the condition holds for each region which is dfat ( for a summary , see corollary [ cor : summaryee ] ) .if this condition is not satisfied , then theorem [ th : r0<1 ] yields that the derivative is nonnegative but may have some zero components if , and though following theorem [ th : r0>1 ] it can not be positive if , it might happen that it is still nonnegative . following this argumentit is clear that the problematic case is when and either the derivative is identically zero , or it has both positive and zero components . in both situations lemmas [ lem : suffforpos ] and [ lem : firstder ] through equation ( [ eq : derx ] ) do nt provide enough information to decide whether the boundary endemic equilibrium will be preserved once traveling is incorporated .+ in this section we investigate the question of under what conditions can the derivative be nonnegative but non - positive , and we recall that this can only happen if the right hand side of ( [ eq : derx ] ) is not positive .it is convenient to work with the general equation where , which gives ( [ eq : derx ] ) for and .the statement of the next proposition immediately follows from the z sign pattern property of .[ prop : vu ] if is a nonnegative solution of with , then implies , .[ lem : reduc ] if is a solution of with such that is nonnegative and has both zero and positive components , then the matrix is reducible .if consists of zero and positive components then , without loss of generality we can assume that there are , such that can be represented as with and .we decompose as with the and dimensional matrices and , and derive the equation from . according to proposition [ prop : vu ] from follows that , thus the last equation reduces to which , considering that and , immediately implies and thus the reducibility of .the last lemma has an important implication on equation , as it excludes certain solutions .we will also see that it enables us to answer the question posed at the beginning of this section , namely that the derivative in ( [ eq : derx ] ) can not have both positive and zero but no negative components if is irreducible .[ lem : uveq ] assume that is irreducible . if , then has a unique positive solution if , and it holds that if . in the casewhen , is the only solution if , and for it holds that either or has a negative component . in the proof of theorem[ th : r0<1 ] we have seen that if , which implies the uniqueness of in . 
if then trivially , and we use lemma [ lem : reduc ] to get that when .similar arguments as in the proof of theorem [ th : r0>1 ] yield that has a non - positive component if , but lemma [ lem : reduc ] again makes only and possible .however is a solution of if and only if , otherwise must have a negative component .the following theorem and proposition are immediate from lemma [ lem : uveq ] .we remark that parts of the results of the theorem are to be found in theorem 5.9 , that is , if is irreducible then equation ( [ eq : derx ] ) has a positive solution .[ th : irred ] assume that there is a region , , which is dfat in the endemic equilibrium of system , and is irreducible . if , then for the function defined in theorem [ th : iftendemic ] it is satisfied that if , and if .[ prop : rhszero ] assume that there is a region , , which is dfat in the endemic equilibrium of system , and is irreducible . if , then is the only solution if , and in the case when the derivative is either zero or has a negative component .we summarize our findings as follows .we consider every region , , which is dfat in a boundary endemic equilibrium of .if the derivative in equation ( [ eq : derx ] ) has some zero but no negative components then lemmas [ lem : suffforpos ] and [ lem : firstder ] are insufficient to decide whether the fixed point , for which , will be biologically meaningful in the system of connected regions . in the casewhen ( with words , some infected classes of region have inflow of individuals from eat regions ) , the statement of theorems [ th : r0<1 ] and [ th : r0>1 ] can be sharpened if the extra assumption of being irreducible holds : as pointed out in theorem [ th : irred ] , the derivative in equation ( [ eq : derx ] ) is positive if , and has a negative component if .applying the results of lemmas [ lem : suffforpos ] and [ lem : firstder ] , this means that if every dfat region has inflow from an eat region and is irreducible in all such regions then , , is a positive steady state of if , and is not a biologically meaningful equilibrium if there is a region where and the local reproduction number is greater than one . for conclusion we state a corollary which is similar to the one at the end of section [ sec : ee ] .[ cor : summaryirred ] consider a boundary endemic equilibrium of system .let us assume that is satisfied whenever , , is a dfat region in ; we remark that this situation is realized if each dfat region has at least one infected class with connection from an eat region . in addition we also suppose that is irreducible for dfat regions. then , in case holds in all regions which are dfat we get that is preserved for small volumes of traveling by a unique function which depends continuously on .if there exists a region which is dfat and where then moves out of the feasible phase space when traveling is introduced. 
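the irreducibility assumption appearing in the corollary can be checked numerically : a nonnegative square matrix is irreducible exactly when the directed graph on its indices , with an edge from i to j wherever the ( i , j ) entry is nonzero , is strongly connected . a short sketch ( the example matrices are arbitrary ) :

```python
# a quick numerical check (a sketch, not part of the paper) of irreducibility:
# build the directed graph of the nonzero pattern and test strong connectivity.
import numpy as np
from scipy.sparse import csr_matrix
from scipy.sparse.csgraph import connected_components

def is_irreducible(m):
    graph = csr_matrix((np.asarray(m) != 0).astype(int))
    n_comp, _ = connected_components(graph, directed=True, connection='strong')
    return n_comp == 1

# a 3 x 3 matrix with a decoupled block (reducible) and a cyclic one (irreducible)
reducible = np.array([[1.0, 0.5, 0.0],
                      [0.2, 0.3, 0.0],
                      [0.0, 0.0, 0.4]])
irreducible = np.array([[0.0, 1.0, 0.0],
                        [0.0, 0.0, 1.0],
                        [1.0, 0.0, 0.0]])
print(is_irreducible(reducible), is_irreducible(irreducible))   # False True
```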
an square matrix is called reducible if the set can be divided into two disjoint nonempty subsets and such that holds whenever and .an equivalent definition is that , with simultaneous row and/or column permutations , the matrix can be placed into a form to have an zero block .when an infectious agent is introduced into a fully susceptible population in some region then as pointed out in section [ sec : dfe ] the matrix describes disease propagation in the early stage of the epidemic since the change in the rest of the population can be assumed negligible during the initial spread .if is reducible then without loss of generality we can assume that it can be decomposed into where , the dimensions of the sub - matrices are indicated in lower indexes and is the zero matrix .this means that there are infected classes in region which have no inflow induced by the other infected classes of region in the initial stage of the epidemic ( by the expression `` inflow induced by an infected class '' we mean either transition from the class described by matrix , or the arrival of new infections generated by the infected class , described by ) .+ in the sequel we will assume that such dynamical separation of the infected classes is not realized in any of the regions , or with other words for each the matrices and are defined in the model such that is irreducible .the biological consequence of this assumption is that whenever a single infected compartment of a dfat region imports infection via a link from the corresponding -class of an eat region then the disease will spread in _ all _ infected classes of the dfat region , not only in the one which has connection from the eat region .furthermore we note that the irreducibility of also ensures by means of lemma [ lem : uveq ] that the fixed point equation of system ( [ basismodel ] ) has only componentwise positive solutions besides the disease free equilibrium , which is in conjunction with the assumption made for the equilibria in section [ sec : ee ] .+ the criterion on being irreducible is satisfied in a wide range of well - known epidemiological models , however we remark that our results obtained in sections [ sec : dfe ] and [ sec : ee ] also hold in the general case , i.e. , when the matrix is reducible .we consider an endemic equilibrium of system , our aim is to investigate the solution of the fixed point equations of system , for which , when is small but positive .the case of positive fixed points has been treated in theorem [ th : posee ] . if is boundary endemic equilibrium, then we assume that the matrix is irreducible for every dfat region ; if for each such it holds that then corollary [ cor : summaryirred ] describes precisely under what conditions is a nonnegative steady state . it remains to handle the scenario when there exists a region which is dfat but , that is , the region is disease free in the disconnected system and so are all the regions which have a direct connection to the infected classes of in .we emphasize here that under _`` direct connection from a region to '' _ we does nt necessarily mean that _ all _ infected classes of have an inbound link from ; in the sequel we will use this term to describe the case when , that is , there is an infected compartment of which is connected to .see figure [ fig : directcon ] which further illustrates the definition .+ henceforth we proceed with the case when there is a region which is dfat in and has no direct connection from any eat regions . 
for such -s proposition [ prop : rhszero ] yields that our approach of investigating the non - negativity of using lemma [ lem : firstder ] and the first derivative from equation ( [ eq : derx ] ) fails . however , we assume that holds for all dfat regions where and , since if the derivative has a negative component then , as pointed out in corollary [ cor : summaryirred ] , moves out of the feasible phase space when increases and no further examination is necessary . first we state a few results for later use . [ prop : derxn ] for any positive integer , , it holds that whenever region , , is dfat in the boundary equilibrium , and for every . [ fig : directcon caption : a three - region example ( , ) . every infected class of region 2 has an inbound link from region 3 ( green arrows ) , so region 2 has a direct connection from 3 ; region 3 also has a direct connection from 2 since , that is , there are links from the second and third infected classes of region 2 to the corresponding compartments of region 3 ( blue arrows ) . region 1 has no direct connection from either 2 or 3 , and there is a direct connection from region 1 to 2 ( red arrow ) but not to 3 . on the other hand , 3 is reachable from 1 because there is a path from 1 to 3 via region 2 . region 1 is not reachable from any of the other two regions . ] in case , the equation in the proposition reads as ( [ eq : derx ] ) . let us assume that and . we return to equation ( [ eq : deralpha ] ) to obtain the derivative of the equation of in ( [ genmodel ] ) as . as , it is satisfied by assumption that is times continuously differentiable at the respective point . clearly whenever , moreover , so if holds for all then ( [ eq : deralphan ] ) at reads since and . our interpretation of the term _ `` direct connection from a region to the infected classes of '' _ can be extended to the expression _ `` path from a region to the infected classes of '' _ , representing a chain of direct connections via other regions , starting at and ending in . figure [ fig : directcon ] provides an example for three regions , where there is a path from region 1 to 3 via 2 ( that is , ) . we note , however , that the path does not necessarily consist of the same type of infected classes in the regions : in terms of the above example , infection imported to region 2 via the link from to spreads in the other infected classes of region 2 as well by means of the irreducibility of ( represented by dashed arrows in the figure ) , enabling the disease to reach region 3 via the links from to and from to . we also remark that the notion `` path from a region to the infected classes of '' includes the special case when the path consists of and only , i.e. , there is a direct connection from to . we now define the shortest distance from eat regions to a dfat region . [ def : mi ] consider a region which is dfat in the boundary endemic equilibrium . we define as the least nonnegative integer such that in system there is a path starting at an eat region , ending at region and containing regions in - between . if there is no such path then let . if there is a direct connection from an eat region to the infected classes of then this definition implies . we also note that always holds whenever the path described above exists .
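the distance just defined can be computed for a given region - level connection structure with a breadth - first search ; the sketch below uses an arbitrary three - region example ( it is not the configuration of fig . [ fig : directcon ] ) and follows the convention of the definition , so a direct connection from an eat region yields the value 0 .

```python
# a small illustrative sketch (not the authors' code) of the distance defined
# above: for each dfat region, find the least number of intermediate regions
# on a directed path that starts at an eat region and ends at that region.
from collections import deque

# adj[i][j] = True if some infected class of region j receives individuals
# directly from region i (a "direct connection from i to j"); arbitrary example
adj = [
    [False, True,  False],   # region 0 -> region 1
    [False, False, True],    # region 1 -> region 2
    [False, True,  False],   # region 2 -> region 1
]
eat = {0}                    # regions endemic in the absence of traveling

def m_values(adj, eat):
    n = len(adj)
    dist = {r: 0 for r in eat}               # bfs started from all eat regions at once
    queue = deque(eat)
    while queue:
        u = queue.popleft()
        for v in range(n):
            if adj[u][v] and v not in dist:
                dist[v] = dist[u] + 1
                queue.append(v)
    # the distance is the number of regions strictly between an eat region and
    # region i, i.e. one less than the bfs edge distance; infinite if unreachable
    return {i: (dist[i] - 1 if i in dist else float("inf"))
            for i in range(n) if i not in eat}

print(m_values(adj, eat))    # {1: 0, 2: 1} for this example
```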
in the sequel we omit the words `` infected classes '' from the expression `` direct connection ( path ) for to '' for convenience .clearly infection from endemic regions to disease free territories are never imported via links between non - infected compartments of different regions , so to decide whether the disease arrives to a region it is enough to know the graph connecting infected compartments .[ lem : derivativezero ] assume that is satisfied on an interval whenever a region , , is dfat in the boundary endemic equilibrium .then for any dfat region , , it holds that for . the inequality is satisfied in every region with .the case when is trivial , so we consider a region for which , and using that we derive which is similar to equation ( [ eq : derx ] ) . for every such that it follows from that , thus the right hand side is zero .lemma [ lem : uveq ] yields that is either zero ( in case this is the only possibility ) or has a negative component ( this can be realized only if ) .nevertheless , the derivative having a negative component together with contradicts the assumption that for small -s , this observation makes the only possible case . + next consider a region where and .we have since , so proposition [ prop : derxn ] yields the equation we note that each region for which is dfat since .thus , for it follows that , henceforth holds by induction , and the right hand side of the last equation is zero . using lemma [ lem : uveq ] there are again two possibilities for ,namely that it is either zero or has a negative component ; but , and would imply the existence of an such that for which is impossible .we conclude that holds for all regions where .+ the continuation of these procedure yields that for any region where holds .this proves the lemma .we say that _region is reachable from region _ if there is a path from ( the infected classes of ) to ( the infected classes of ) .directly connected regions are clearly reachable .now we are in the position to prove one of the main results of this section .[ th : pluszeroro>1 ] assume that in the boundary endemic equilibrium there is a region which is dfat and for which holds , furthermore is reachable from an eat region .then there is an such that has a negative component for , meaning that moves out of the feasible phase space when traveling is introduced .the proof is by contradiction .we assume that is such that there are regions and where , , and is reachable from , moreover there exists an such that for , this is , the equilibrium of the disconnected system remains biologically meaningful in the system with traveling .this also means that for all with it necessarily holds that .+ if regions and , as described above , exist then there is a minimal distance between such regions , this is , there exists a least nonnegative integer such that there is a path ( connecting infected compartments of regions ) from an eat region via regions to a region which is dfat in . in the casewhen theorem [ th : irred ] immediately yields contradiction , so we can assume that .we label the regions which are part of the minimal - length path by , , , , where , , moreover note that and hold for . 
see the path depicted in figure [ fig : lpath ] in the appendix .+ the fact that gives by proposition [ prop : derxn ] .the equation has a non - zero right hand side since , so lemma [ lem : uveq ] and imply .a similar equation follows from .we note that , where was defined in definition [ def : mi ] , hence holds for every such that .the zero right hand side , lemma [ lem : uveq ] and yield , so we can apply proposition [ prop : derxn ] to derive if there is a such that and then would mean that for small -s has a negative component and , , is not in the nonnegative cone , which violates our assumption that for sufficiently small .thus each such derivative is necessarily nonnegative , moreover we have showed that is satisfied , which makes the right hand side of the last equation positive ; this , with the use lemma [ lem : uveq ] , implies since .+ next we consider region , where . for any region which it holds that , thus and hold by lemma [ lem : derivativezero ] and the assumption that for small -s . thus , the right hand side of equation is zero , from and lemma [ lem : uveq ] it follows that and thus proposition [ prop : derxn ] yields we get again that since , as we have seen above , all derivatives in the right hand side are zero and also holds , so lemma [ lem : uveq ] makes the second derivative of zero .finally , using that for , we derive where and .if there is a , , for which has a negative component then so does and for small -s since and , which is a contradiction .otherwise the right hand side of the last equation is positive ( it holds that ) , thus the positivity of follows from and lemma [ lem : uveq ] .+ following these arguments one can prove that for ( we remark that for this reads ) , and that for any fixed and it holds that . we note that , which according to lemma [ lem : derivativezero ] also means that for since holds for small -s by assumption. henceforth we can apply proposition [ prop : derxn ] and derive implies for any for which , hence is satisfied for .the assumption for small -s yields for any region with , so is impossible ; this together with results in the positivity of the right hand side of the above equation .as holds , it follows from lemma [ lem : uveq ] that has a negative component , but we showed that when , so for small -s follows , a contradiction .the proof is complete .theorem [ th : pluszeroro>1 ] ensures that , for a boundary endemic equilibrium of , the point defined by theorem [ th : iftendemic ] with will not be a biologically meaningful fixed point of system if there is a dfat region in which has local reproduction number greater than one and is reachable from another region which is eat in .the question , whether the condition is crucial , comes naturally .we need the following result which is similar to lemma [ lem : derivativezero ] . [lem : derivzeroro<1 ] assume that in the boundary endemic equilibrium there is no dfat region for which and .then for a region which is dfat it holds that for .if is disease free for and the region is not reachable from any region with ( that is , ) , then does nt import any infection by means of traveling and hence we have for all .this also means that holds for all .the case when is trivial , for we use the method of induction .+ we claim that for any it holds that whenever a region is such that , and .if so , the statement of the lemma follows for region with the choice of for . 
for a region where , and , we get from and lemma [ lem : uveq ] since the right hand side is zero because of .let us assume that there exists an such that the statement holds for all .we consider a region where , and , here clearly so holds and thus proposition [ prop : derxn ] yields for any with it holds that the region is dfat and , thus makes the right hand side zero , and using lemma [ lem : uveq ] we get that since .the next theorem is the key to answer the question stated earlier , that is , an endemic equilibrium of will persist in the system of connected regions via the uniquely defined function , , for small volumes of traveling if holds in all dfat regions of which are reachable from an eat region . in what follows ,we prove that has a positive derivative whenever region is dfat with local reproduction number less than one , and reachable from a region which is eat .then , with the help of lemma [ lem : derivzeroro<1 ] , the statement yields that is positive for small -s , and thus so is by lemma [ lem : suffforpos ] .[ th : pluszeroro<1 ] assume that in the boundary endemic equilibrium there is no dfat region for which and .then for a dfat region where , it holds that if .the proof is by induction . for any such that , and , theorem [ th : irred ] yields .whenever is satisfied in a region where and , lemma [ lem : derivzeroro<1 ] implies , so using proposition [ prop : derxn ] we derive for every with it holds that ( we remark that is well - defined for such regions because implies that all such -s are dfat regions ) ; if either ( this always holds if ) or then lemma [ lem : derivzeroro<1 ] gives , and whenever then necessarily so holds by induction .nevertheless , the positivity of the right hand side of the last equation is guaranteed because we know from that there must exist a with and , hence the inequality follows using lemma [ lem : uveq ] . + we assume that the statement of the theorem holds for an , , that is , if , and .we take a region , , and , and obtain the equation using lemma [ lem : derivzeroro<1 ] and proposition [ prop : derxn ] . makes for each where , and by examining the derivatives on the right hand side of this equation we get from lemma [ lem : derivzeroro<1 ] that for each , , whenever .the case when is only possible if , and for all such -s the inequality holds by induction .hence , the right hand side of the last equation is positive because all the derivatives in it are nonnegative and implies there is a with .we apply lemma [ lem : uveq ] to get that , which completes the proof .let us now summarize what we have learned about steady states of system for small volumes of traveling ( represented by the parameter ) between the regions . with some conditions on the model equations described in theorems [ th : iftdfe ] and [th : iftendemic ] , for every equilibrium of the disconnected system there exists a unique continuous function of on an interval to the right of zero , which satisfies the fixed point equations of .as discussed in theorems [ th : iftdfe ] and [ th : posee ] , corresponding to the unique disease free equilibrium of defines a disease free fixed point for , moreover if is positive then holds for sufficiently close to zero . 
with other words the connected system admits a single infection - free equilibrium and also several positive fixed points for small -s , regardless of the connections between the regions .+ on the other hand , the structure of the connection network plays an important role when considering boundary endemic equilibria , i.e. , when some regions are disease free for .if there are regions and such that is reachable from then , by increasing the fixed point moves out of the nonnegative cone whenever is such that , , and , this is , is an eat region and is a dfat region with local reproduction number greater than one .however , a boundary equilibrium of the disconnected system will persist through for small volumes of traveling in if the local reproduction number is less than one in all dfat regions which are reachable from eat regions .these last conclusions are stated below in the form of a corollary as well .[ cor : summarypluszero ] consider a boundary endemic equilibrium of system .assume that there is a dfat region in with , and is reachable from a region which is eat .then moves out of the feasible phase space when traveling is introduced . on the other hand ,if there is no such region in the system , then is preserved for small volumes of traveling , and given by a unique function which depends continuously on .human immunodeficiency virus infection / acquired immunodeficiency syndrome ( hiv / aids ) is one of the greatest public health concerns of the last decades worldwide .unaids , the joint united nations programme on hiv / aids reports an estimated 35.3 ( 32.238.8 ) million people living with hiv in 2012 . though the data of 2.3 ( 1.9-2.7 ) million infections acquired in 2012 show a decline in the number of new cases compared to 2001 , enormous effort is devoted to halt and begin to reverse the epidemic .developing vaccine which provides partial or complete protection against hiv infection remains a striking challenge of modern times .iavi the international aids vaccine initiative believes that the earlier results on combining the two major approaches of stimulating antibody production and hiv infection clearance in the human body provides grounds for optimism and confidence in designing hiv vaccines .+ there are several compartmental models ( see , for instance , ) which deal with the mathematical modeling of hiv infection dynamics .the following model for the transmission of hiv with differential infectivity was given by sharomi _et al . _ where the population is divided into the disjoint classes of unvaccinated ( ) and vaccinated ( ) susceptibles , unvaccinated infected individuals with high ( ) and low ( ) viral load , vaccinated infected individuals with high ( ) and low ( ) viral load , and individuals in aids stage of infection ( ) .note that instead of the notation and of the unvaccinated and vaccinated susceptible classes applied in we use and to avoid confusion with the matrix and vector used in section [ sec : dfe ] .the total population of individuals not in the aids stage is denoted by , .disease transmission is modeled by standard incidence , with transmission coefficients and in the infected classes with high and low viral load , the force of infection arises as . relative infectiousness of members of the and compartments is represented by and .parameter is the constant recruitment rate into the population , while stands for natural mortality .susceptible individuals are immunized by vaccination with probability , and is the rate of waning immunity . 
in the classes of infected individuals with high and low viral load the progression of the diseaseis modeled by and , modification parameters and are used to account for the reduction of the progression rates in and . the disease - induced mortality rate is introduced into the equation of , the individuals in the aids stage .all model parameters are assumed positive .+ it holds that the system ( [ hivmod ] ) has a unique disease free equilibrium with , and , , which is globally asymptotically stable in the disease free subspace , moreover by lemma 3 is a locally asymptotically stable ( unstable ) steady state of ( [ hivmod ] ) if ( ) , where the reproduction number is defined by with and , .it easily follows from the model equations that in an equilibrium an infected compartment is at a positive steady state if and only if all components of the fixed point are positive . according to theorem 4 system ( [ hivmod ] ) has a unique endemic equilibriumif , nevertheless positive fixed points can exist for as well ; under certain conditions on the parameters the model exhibits backward bifurcation at , that is , a critical value can be defined such that there are two distinct positive equilibria for values of in ( see for details ) .+ we consider patches and investigate the dynamics of hiv infection by incorporating the possibility of traveling into model ( [ hivmod ] ) . in each regionthe same model compartments as in the one - patch model can be defined , upper index ` ' is used to label the classes of region , . in terms of our notations in system ( [ genmodel ] ) , , , and we let , , . the equalities , and put the multiregional hiv model into the form of system , moreover arises as by introducing parameter to represent the connectivity potential from class to , and , , as general mobility parameter , system can be extended to in the same way as described in section [ sec : model ] to get an epidemic model with hiv dynamics in regions connected by traveling .we recall that system ( [ hivmod ] ) has a single disease free fixed point with , which is locally asymptotically stable if and unstable if .this also means that the system of the regions connected with traveling admits a single disease free steady state when the general mobility parameter equals zero .we now show that in case of the hiv model the connected system has a disease free equilibrium for every as well .[ th : dfehiv ] the connected system of regions with hiv dynamics admits a unique disease free equilibrium for any .it also holds that the classes of individuals in the aids stage are at zero steady state .when the infected classes are at zero steady state in the hiv model we obtain the fixed point equations with similarly as discussed in section [ sec : ee ] for the matrix , theorem 5.1 implies that the inverses of , and exist and are nonnegative .it immediately follows that , and , , hold for the unique solution of ( [ eq : msvmsma ] ) . in section [ sec : ee ] we required ( that is , with both zero and positive components not possible ) for endemic steady states , we recall that this is fulfilled in the hiv model since the model parameters are assumed positive . 
at positive fixed points and defined in ( [ eq : hivvari ] ) are infinitely many times continuously differentiable , hence it is possible to derive equations ( [ eq : derx ] ) and ( [ eq : derxn ] ) .the analysis in section [ sec : nodirect ] has been carried out with the extra condition that the matrix is irreducible , which is indeed the case by the hiv model .+ theorem [ th : iftendemic ] contains condition on the non - singularity of the jacobian of the system evaluated at an endemic fixed point and .the matrix has block diagonal form with the block corresponding to region , where we denote and .this gives , so we conclude that the jacobian of the system of regions is non - singular at a fixed point if and only if holds in each region .it is not hard to see that the matrix gives the jacobian of without traveling , that is , it suffices to consider the steady state components in each region separately .the jacobian evaluated at a stable equilibrium has only eigenvalues with negative real part , which guarantees the non - singularity of the matrix ; although in the case when the fixed point is unstable we only know that the determinant has an eigenvalue with positive real part , which does nt exclude the existence of an eigenvalue on the imaginary axis .+ it is conjectured from an example of that if in the one - patch hiv model then the positive fixed point is locally asymptotically stable and the disease free equilibrium is unstable , furthermore in case the model exhibits backward bifurcation , one of the endemic steady states is locally asymptotically stable whilst the other one is unstable for . as noted above , the matrix is always invertible at stable equilibria , and we use the same set of parameter values as the example in to illustrate a case when the determinant of is non - zero at unstable fixed points .the continuous dependence of the determinant on parameters implies that the situation when the jacobian is singular is realized only in isolated points of the parameter space .in fact , for , , , , , , , , , , , , , , , and , the condition for backward bifurcation holds and , moreover the positive equilibria with and are unstable and stable , respectively , with the jacobian evaluated at non - singular .letting makes and the disease free steady state is unstable with no eigenvalues of the jacobian having zero real part .+ in the sequel we assume that the model parameters are set such that at the fixed points and thus the conditions of theorem [ th : iftendemic ] hold .then , as discussed above , all the assumptions made throughout sections [ sec : model ] , [ sec : dfe ] , [ sec : ee ] , [ sec : irred ] and [ sec : nodirect ] are satisfied and we conclude that the results obtained in these sections for the general model are applicable for the multiregional hiv model with traveling .we use this model to demonstrate our findings in the case when .let us assume that the necessary conditions for backward bifurcation are satisfied in all three regions .then each region can have one ( the case when ) , three ( the case when ) or two ( the case when ) equilibria , including the disease free steady state .thus , without traveling the united system of three regions with hiv dynamics has a disease free equilibrium , and endemic steady states where for the integers and it holds that and ; it is easy to check that the possibilities for the number of equilibria are 1 , 2 , 3 , 4 , 6 , 8 , 9 , 12 , 18 and 27 .+ theorem [ th : dfehiv ] guarantees the existence and uniqueness of the disease 
free fixed point when traveling is incorporated into the system .theorem [ th : posee ] and corollary [ cor : summarypluszero ] give a full picture about the ( non-)persistence of endemic steady states : a boundary endemic equilibrium of the disconnected system , where there is a dfat region with which is reachable from an eat region , will not be preserved in the connected system for any small volumes of traveling , however all other endemic fixed points of the disconnected system will exist if the mobility parameter is small enough .it is obvious that the movement network connecting the regions plays an important role in deriving the exact number of steady states of the system with traveling ; in what follows we give a complete description of the possible cases .first we consider the case when each region is reachable from any other region , that is , the graph consisting of nodes as regions and directed edges as direct connections from ( the infected classes of ) one region to ( the infected classes of ) another region , is irreducible . such network is realized if we think of the nodes as distant territories and the edges as one - way air travel routes .note that the irreducibility of the network does nt mean that each region is directly accessible from any other one ; as experienced by the global airline network of the world , some territories are linked to each other via the correspondence in a third region .figure [ fig : irredrednetwork ] is presented to give examples of irreducible an reducible connection networks .if the network connecting three regions with hiv dynamics is irreducible then the number of fixed points of the disconnected system , which persists in for small volumes of traveling , can be 1 , 2 , 3 , 4 , 9 , 10 or 27 , depending on the local reproduction numbers in the regions .as pointed out in theorem [ th : dfehiv ] the unique disease free equilibrium always exists in .we distinguish four cases on the number of regions with local reproduction number greater than one .+ case 1 : no regions with . + this case is easy to treat : if in all three regions it holds that the local reproduction number is less than one , then theorem [ th : pluszeroro<1 ] implies that all fixed points of the disconnected system of three regions are preserved for some small positive -s . if for , the system may have 1 ( if for ) , 3 ( if , and , ) , 9 ( if and , , ) or 27 ( if for ) equilibria .+ case 2 : exactly one region with .+ let this region be labeled by , system has a disease free and a positive fixed point . 
by theorem[ th : pluszeroro>1 ] and the assumption that is reachable from both other regions , we get that no endemic equilibrium of , where is dfat , persists with traveling .it follows that besides the disease free equilibrium ( when none of the regions is endemic ) , only fixed points with will exist for small volumes of traveling , which makes the total number of equilibria 2 ( 1 disease free + 1 endemic if , ) , 4 ( 1 disease free + 3 endemic if either and , or and ) or 10 ( 1 disease free + 9 endemic if , ) .+ case 3 : exactly two regions with .+ we let the reader convince him- or herself that if and ( ) hold then a total number of 2 or 4 fixed points of the disconnected regions may persist in system for small -s .the proof can be led in a similar way as by case 2 , considering the two possibilities and for the local reproduction number of the third region .one again gets that the equilibrium where all the regions are disease free will exists , moreover it is worth recalling that no region with can be dfat while another region is eat .+ case 4 : all three regions with . + we apply theorems [ th : pluszeroro>1 ] and [ th : pluszeroro<1 ] to get that if any of the regions is dfat then so should be the other two for an equilibrium to persist and small .this implies that only 2 fixed points of , the disease free and the endemic with all three regions at positive steady state , will be preserved once traveling is incorporated . to summarize our findings, we note that the introduction of traveling via an irreducible network into never gives rise to situations when precisely 6 , 8 , 12 and 18 fixed points of the disconnected system continues to exist with traveling .nevertheless evidence has been showed that new dynamical behavior ( namely , the case when 10 equilibria coexist ) can occur when connecting the regions by means of small volume traveling .we conjecture that lifting the irreducibility restriction on the network results in even more new scenarios .this is proved in the next subsection .it is clear that , with the help of theorems [ th : pluszeroro>1 ] and [ th : pluszeroro<1 ] , the number of fixed points in the disconnected system which persist with traveling can be easily determined for any given ( not necessarily irreducible ) connecting network .the next theorem discusses all the possibilities on the number of equilibria .examples are also provided to illustrate the cases .depending on the local reproduction numbers and the connections between the regions , the system of three regions for hiv dynamics with traveling preserves 17 , 9 , 10 , 12 , 18 or 27 fixed points of the disconnected system for small volumes of traveling .as pointed out in theorem [ th : dfehiv ] the unique disease free equilibrium always exists in . the existence of the unique disease free steady state is guaranteed by theorem [ th : dfehiv ] .the proof will be done in the following steps : 1 .we show that there is no travel network which results in the persistence of 13 - 17 or 19 - 26 equilibria .we prove that the system of three regions with traveling can not have 8 or 11 fixed points .we demonstrate through examples that all other numbers of equilibria up to 27 can be realized .step 1 : + we note that if either holds in any of the regions , or there are two or more regions where , then the number of fixed points does nt exceed 12 .thus , to have at least 13 equilibria there must be two regions with and . 
if the third region also has three fixed points , that is , then there is no region with local reproduction number greater than one , and thus theorem [ th : pluszeroro<1 ] yields the existence of 27 steady states . otherwise is greater than one and region has two equilibria , one disease free and one endemic .in this case by theorem [ th : iftendemic ] there are 9 fixed points where , all of which preserved for small volumes of traveling .the possible number of equilibria with , which exist with traveling , are one ( if is reachable from both regions ) , 3 ( if is reachable from only one of them ) and 9 ( if is unreachable ) .we conclude that there are only two values greater than 12 for the possible number of fixed points in the travel system , which are 18 and 27 .+ step 2 : + we distinguish 5 cases to consider : a. for ; b. , , , ; c. , , , ; d. for ; e. there is an such that , . in case ( i )each region has three equilibria , hence the connected system obtains 27 fixed points for small -s .we have seen in step 1 that there are 10 , 12 or 18 equilibria in a network with the regions such that case ( ii ) holds .+ let us assume that case ( iii ) is realized , and henceforth the system has maximum 12 fixed points .if neither nor is reachable from then the persistence of an equilibrium for small -s is independent of the steady state value , thus the number of possible fixed points is a multiple of three , which does nt hold for any of 8 and 11 . on the other handif there is a connection from to any of and then some equilibria may vanish once traveling is incorporated .more precisely , let be reachable from . by theorem[ th : pluszeroro>1 ] , steady states where region is dfat and is eat do nt exist in the connected system , which means that the connection from to destroys fixed points out of the maximum 12 ( note that in region there are two positive equilibria , and the steady state value of does nt change the non - persistence of fixed points of the type , ) .this immediately makes 11 equilibria impossible . by means of the above arguments ,either or must be satisfied for each fixed point which persists , and their number can be maximum 8 .in particular the equilibria , where , and where , , , are such fixed points . persists with traveling only if the network is such that is unreachable from , so in this case there must be a path from to due to the connectedness of the network ( recall that we assumed that is reachable from , so any link from would make reachable from ). however this structure makes the persistence of for small -s impossible , and we get that there can not be 8 steady states in the case when and . + the maximum number of equilibria by cases ( iv ) and ( v ) are 8 and 9 , respectively , which observation finishes the investigation of the persistence of precisely 11 steady states in the system with traveling . by case ( iv ) some of the 8 fixed points obtained in the disconnected system clearly wo nt persist in the connected system if , for instance , there is a link from to then the equilibrium where and wo nt be preserved for positive -s . 
if case ( v )is realized and there is a region with local reproduction number greater than one then the system can not have more than 6 steady states .otherwise holds for all in case ( v ) and theorem [ th : pluszeroro<1 ] yields that all fixed points of the disconnected system continues to exist once traveling is incorporated .it is not hard to check that the number of equilibria is never 8 .+ step 3 : + any network where is satisfied for all exhibits only one , the disease free equilibrium .it is straightforward to see that the complete network of three regions has 2 fixed points when for , and if there is one , two or three region(s ) where whilst holds in the remaining region(s ) then , independently of the connections , the connected system preserves 3 , 9 or 27 equilibria , respectively , of the disconnected system from small volume traveling .+ any network where , , and is reachable from both other regions works as a suitable example for the case of 10 fixed points , since this way the disease free equilibrium coexists with 9 steady states where . a way to obtain 12 and 18fixed points has been described in step 1 , and we use the case when and to construct examples for 4 , 5 , 6 and 7 steady states .figure [ fig:4567 ] depicts one possibility for the network of each case , though it is clear that there might be several ways to get the same number of equilibria .+ if both regions 2 and 3 are reachable from 1 , then fixed points where are preserved with traveling only if and also hold . on the top of these 2 positive equilibria , there surely exists the disease free steady state plus 1 , 2 or 3 non - zero fixed point(s ) with , depending on whether region 2 is reachable from 3 and vice versa , as illustrated in figure [ fig:4567 ] ( a ) , ( b ) and ( c ) .if region 3 is reachable from both regions 1 and 2 then is only possible in the disease free equilibrium ; although all 6 fixed points where region 3 is at the endemic steady state persist for small volumes of traveling if there is no connection from 1 to 2 ( figure [ fig:4567 ] ( d ) shows such a situation ) .the dynamics of the hiv model in connected regions is worth investigating in more depth , although this is beyond the scope of this study .however the numerical simulations presented in the next section reveal some interesting behavior of the model .this section is devoted to illustrate the rich dynamical behavior in the model .the epidemiological consequence of the existence of multiple positive equilibria in one - patch models is that the epidemic can have various outcomes , because solutions with different initial values might converge to different steady states .stable fixed points are of particular interest as they usually attract solutions starting in the neighborhood of other ( unstable ) steady states . for instance , in case of backward bifurcation the presence of a stable positive equilibrium for makes it possible that the disease sustains itself even if the number of secondary cases generated by a single infected individual in a fully susceptible population is less than one. however , considering multiple patches with connections from one to another deeply influences local disease dynamics , since the travel of infected agents induces outbreaks in disease free regions .the inflow of infected individuals might change the limiting behavior when pushing a certain solution into the attracting region of a different steady state , and it also may modify the value of fixed points . 
+ henceforth , knowing the stability of equilibria in the connected system of regions is of key importance . for small volumes of traveling not only the number of fixed points but also their stability can be determined : whenever a steady state of the disconnected regions continues to exists in the system with traveling by means of the implicit function theorem , its stability is not changed on a small interval of the mobility parameter .this means that equilibria of , which have all components stable in the disconnected system , are stable ; although every steady state which contains an unstable fixed point as a component is unstable when and thus , also for small positive . in this paperthe conditions for the persistence of steady states with the introduction of small volume traveling has been described : by a continuous function of , all fixed points of will exist in the connected system but those for which there is a dfat region with local reproduction number greater than one , and to which the connecting network establishes a connection from an eat territory .however , infection - free steady states are typically unstable for , thus the above argument yields that incorporating traveling with low volumes preserves all stable fixed points of the disconnected system , since the equilibria which disappear when exceeds zero are unstable .+ the dependence of the dynamics on movement is illustrated for the hiv model . to focus our attention to how influences the fixed points , their stability and the long time behavior of solutions, we let all model parameters but the local reproduction numbers in the three regions to be equal . in the figures which we present in the appendix , the evolution of four solutions with different initial conditions were investigated as increases from zero through small volumes to larger values .+ if all three regions exhibit backward bifurcation , and the local reproduction numbers are set such that besides the disease free fixed point , there are two positive equilibria , , then , as described in section [ sec : hiv ] , 27 steady states exist for small -s . assuming that the conjectures of section [ sec : hiv ] about the stability of the disease free equilibrium and the steady state with , and the instability of the positive fixed point with hold in each region, we get that system with hiv dynamics exhibits 8 stable and 19 unstable steady states on an interval for to the right of zero .this is confirmed by figures [ fig : hivrl1irred ] and [ fig : hivrl1red ] , where two cases of irreducible and reducible travel networks were considered ( see the appendix for more detailed description of the networks ) , and holds . introducing low volume traveling ( e.g. , letting in our examples ) effects neither the stability of steady states nor the limiting behavior of solutions , however the difference in the type of the connecting network manifests for larger movement rates , as the conditions for disease eradication clearly change along with the equilibrium values ( see figures [ fig : hivrl1irred ] and [ fig : hivrl1red ] ( c ) and ( d ) where were chosen as and , respectively ) .+ when there are regions with local reproduction numbers larger than one in the network , certain fixed points of the disconnected system disappear with the introduction of traveling ; this phenomenon is reasonably expected to have an impact on the final outcome of the epidemic . 
for all three networks used for the simulations in figures [ fig : hivrg1nofb ] , [ fig : hivrg1feedback ] and [ fig : hivrg1full ] , presented in the appendix , the number of infected individuals takes off in regions with for small , regardless of the initial conditions ( see figures ( b ) where and , in particular , the cases when ) .the results for larger travel volumes ( in the simulations the two settings of and were considered ) further support the conjecture that solutions converge to positive steady states in regions with reproduction number greater than one .however , the case when regions 2 and 3 , , are not reachable from each other and a single endemic equilibrium seems to attract all solutions ( illustrated in figure [ fig : hivrg1nofb ] ) is in contrary to the situation experienced in figure [ fig : hivrg1feedback ] , since we see that establishing a path from region 2 to 3 via region 1 results in the emergence of another positive ( possibly locally stable ) steady state in region 3 .nevertheless , comparing figures [ fig : hivrg1nofb ] and [ fig : hivrg1feedback ] with reducible networks to figure [ fig : hivrg1full ] , where a complete connecting network was considered , highlights the role of the irreducibility of the network on the dynamics .whereas in case of reducible networks , the final epidemic outcome in a region with strongly depends on initial conditions and connections to other regions , making each region reachable from another sustains the epidemic in region 1 ( where ) by giving rise to a single positive steady state of the system .this has an implication on the long term behavior of solutions in regions with as well , since direct connections seem to stabilize only one endemic equilibrium in region 2 and region 3 , and exclude the existence of other steady states .+ in summary , the theoretical analysis performed throughout the paper is in accordance with the numerical simulations for small values of the general mobility parameter , but more importantly , it provides full information about the fixed points of : it determines their ( non-)persistence , along with their stability , in the system with traveling incorporated . on the other hand ,little is known about the solutions of the model when the travel volume is larger , as the structure of the connecting network and initial values deeply influence the dynamics .in this paper a general class of differential epidemic models with multiple susceptible , infected and removed compartments was considered .we provided examples of multigroup , multistrain and stage progression models to illustrate the broad range of applicability of our framework to describe the spread of infectious diseases in a population of individuals .the model setup allows us to investigate disease dynamics models with multiple endemic steady states .such models have been considered in various works in the literature including studies which deal with the phenomenon of backward bifurcation .we extended our framework to an arbitrary number of regions and incorporated the possibility of mobility of individuals ( e.g. 
, traveling ) between the regions into the model .motivated by well known multiregional models , where the exact number of steady states have not been explored , our aim in this work was to reveal the implication of mobility between the regions on the structure of equilibria in the system .+ we introduced a parameter to express the general mobility intensity , whilst differences in the connectedness of the regions were modeled by constants , each describing the relative connectivity of one territory to another .considering the model equations of the connected system as a function of the model variables and , the implicit function theorem enabled us to represent steady states as continuous functions of the mobility parameter .we showed that the unique disease free equilibrium of the disconnected system along with all componentwise positive fixed points continues to exist in the system with traveling for small , with their stability unchanged . on the other hand ,boundary equilibria of the system with no traveling ( this is , steady states with some regions without infection and others endemic for ) may disappear when becomes positive , as they might move out of the nonnegative cone along the continuous function established by means of the implicit function theorem , and thus , become no longer biologically meaningful .+ throughout the analysis performed in the paper we gave necessary and sufficient condition for the persistence of such equilibria in the system with traveling for various types of the connecting network .it turned out that the local reproduction numbers and the structure of the graph describing connections between the infected compartments of the regions play an important role . if each infected compartment is connected to every other infected class of the same type of other regions , implying that the connecting network includes every possible link , then a boundary equilibrium of the disconnected system wo nt persist with traveling if and only if there is a component of the fixed point corresponding to a disease free region with local reproduction number greater than one .assuming an extra condition on the infected subsystem in each region we showed that the same statement holds in the case when the connection network of infected classes is not complete but is still irreducible , meaning that each region is reachable from any other one via a series of links between any of the infected classes see figure [ fig : directcon ] which illustrates such a situation .the result also extends to the most general case of arbitrary connection network of the infected classes : it was proved that steady states of the disconnected system which have a disease free region with disappear from the system if the possibility of mobility establishes a connection to this region ( maybe via several other regions ) from a territory where the disease is endemic ; nevertheless all other equilibria of the system without traveling continue to exist for small values of the mobility parameter .the epidemiological implication of this behavior is that , even for small volumes of traveling , all regions with local reproduction number greater than one will be invaded by the disease unless they are unreachable from endemic territories .direct or indirect connections from regions with positive disease state make the inflow of infecteds possible and then the imported cases spread the disease in the originally disease free region due to . 
+ in the most common situation of forward transcritical bifurcation of the disease free equilibrium at ,when the disease can not be sustained for values of less than one , our results yield that only connections from regions with have impact on the equilibria of the disconnected system .if a region with local reproduction number greater than one is susceptible in the absence of traveling then isolating it from endemic territories keeps the region free of infection , so a successful intervention strategy can be to deny all connections from regions with .however , the dynamics becomes more complicated when small volume traveling is incorporated into a system of multiple regions with some exhibiting the phenomenon of backward bifurcation : in a case when endemic equilibria exist for as well , protecting a region with from the disease by denying the entrance of individuals from areas where the reproduction number is greater than one is no longer sufficient ( though still necessary ) to prevent the outbreak .such a situation was illustrated by an hiv transmission model for three regions where , under certain conditions , the dynamics undergoes backward bifurcation in each region .we calculated the possible number of steady states of the disconnected system which persist with the introduction of traveling with small volumes into the system , and illustrated by several examples on the network structure and model parameter setting that mobility of individuals between the regions gives rise to various scenarios for the limiting behavior of solutions , and thus makes the outcome of the epidemic difficult to predict .dhk acknowledges support by the european union and the state of hungary , co - financed by the european social fund in the framework of tmop 4.2.4 .a/2 - 11 - 1 - 2012 - 0001 `` national excellence program '' .gr was supported by the european union and the european social fund through project futurict.hu ( grant tmop4.2.2.c-11/1/konv-2012 - 0013 ) , european research council stg nr .259559 , and hungarian scientific research fund otka k109782 . 99999 , _ diseases in metapopulations _ ,modeling and dynamics of infectious diseases , vol .11 of ser .cam , higher ed .press , bejing ( 2009 ) pp .64122 . , _ a final size relation for epidemic models _ , math .4(2 ) , ( 2007 ) pp . 159175 . , _ a multi - city epidemic model _ , math . popul . stud ., 10 ( 2003 ) , pp . 175193 . , _aids : modeling epidemic control _ , science 267 , ( 1995 ) pp ., _ theoretical assessment of public health impact of imperfect prophylactic hiv-1 vaccines with therapeutic benefits _68 ( 2006 ) 577 - 614 ., _ special matrices and their applications in numerical mathematics _ , springer , 1986 . , _ multiple endemic states in age - structured sir epidemic models _ , math . biosci .9(3 ) ( 2012 ) pp . 577599 . , _ an sis patch model with variable transmission coefficients _ , math .232(2 ) ( 2011 ) pp . 110115 . , _ causes of backward bifurcations in some epidemiological models _ , j. math . anal .( 2012 ) pp . 355365 ., _ epidemiological models for heterogeneous populations : proportionate mixing , parameter estimation , and immunization programs _ , math .84(1 ) , ( 1987 ) pp . 85118 . ,_ the differential infectivity and staged progression models for the transmission of hiv _155(2 ) , ( 1999 ) pp . 77109 . ,global report : unaids report on the global aids epidemic 2013 , `` unaids / jc2502/1/e '' , november 2013 , _ modeling the impact of imperfect hiv vaccines on the incidence of the infection _ , math .comput . 
model .34 , ( 2001 ) pp . 345351 ., imperfect vaccines and herd immunity to hiv , proc .b 253 , ( 1993 ) pp .913 . , _ role of incidence function in vaccine - induced backward bifurcation in some hiv models _ , math . biosci .210 ( 2 ) ( 2007 ) pp . 436463 . ,http://www.iavi.org/pages/default.aspx , _ reproduction numbers and subthreshold endemic equilibria for compartmental models of disease transmission _ ,180 , ( 2002 ) pp . 2948 . , _backward bifurcation of an epidemic model with treatment _ , math .201(1 ) ( 2006 ) pp . 5871 . , _ an epidemic model in a patchy environment _ , math190(1 ) ( 2004 ) pp . 97112 .the following figure illustrates the path of regions considered in the proof of theorem [ th : pluszeroro>1 ] . , , , and , having the property that regions and , , are dfat , and for , furthermore region is eat.,width=264 ] we present the results of the simulations considered in section [ sec : richdyn ] in the following figures , which depict solutions of system with hiv dynamics for four different sets of initial values . + for figures [ fig : hivrl1irred ] and [ fig : hivrl1red ] initial values were chosen as , , , for , and , , , , , ( blue curve ) , , , , , , ( red curve ) , , , , , , ( black curve ) , , , , , , ( green curve ) .+ for figures [ fig : hivrg1nofb ] , [ fig : hivrg1feedback ] and [ fig : hivrg1full ] initial values were chosen as , , , for , and , , , , , ( blue curve ) , , , , , , ( red curve ) , , , , , , ( black curve ) , , , , , , ( green curve ) .
|
we show that disease transmission models in a spatially heterogeneous environment can have a large number of coexisting endemic equilibria . a general compartmental model is considered to describe the spread of an infectious disease in a population distributed over several patches . for disconnected regions , many boundary equilibria may exist with mixed disease free and endemic components , but these steady states usually disappear in the presence of spatial dispersal . however , if backward bifurcations can occur in the regions , some partially endemic equilibria of the disconnected system move into the interior of the nonnegative cone and persist with the introduction of mobility between the patches . we provide a mathematical procedure that precisely describes in terms of the local reproduction numbers and the connectivity network of the patches , whether a steady state of the disconnected system is preserved or ceases to exist for low volumes of travel . our results are illustrated on a patchy hiv transmission model with subthreshold endemic equilibria and backward bifurcation . we demonstrate the rich dynamical behavior ( i.e. , creation and destruction of steady states ) and the presence of multiple stable endemic equilibria for various connection networks . differential equations , large number of steady states , compartmental patch model , epidemic spread . + * ams subject classification : * primary 92d30 ; secondary 58c15 .
|
deep convolutional neural networks ( dcnns ) have brought about breakthroughs in many domains , and their application to online handwritten chinese character recognition ( hccr ) has persistently yielded state - of - the - art results in recent years .these previous work have effectively improved hccr performance , e.g. application of the path signature theory , usage of comprehensive domain knowledge and more advanced training methods , etc .however , several challenges still need to be addressed .the main challenge presented by hccr results from varied handwriting styles and the large number of chinese character classes .a large - scale dataset is vital to hccr performance , especially when using a dcnn .however , data acquisition with high quality ground - truth is a tedious job .one possible solution to this problem may be to apply character distortion to generate artificial samples .such kind of approach enables the generation of a large number of training samples to improve the performance .however , in previous studies , character distortion is usually applied at a fixed degree throughout the entire training process .even though a fixed high - degree distortion reduces overfitting by generating varied samples , it may lead to a distribution that deviates from the underlying data distribution , whereas a fixed low - degree distortion would achieve the opposite .hence , more proper distortion strategies should be investigated .another challenge is to find a good feature representation for online characters .although dcnn is good at capturing visual concepts from raw inputs , prior knowledge can be encoded into the inputs for a dcnn to improve the performance . in online hccr ,8-directional features , or path signature can be regarded as prior knowledge that enhances a dcnn , but it needs further study to determine how these feature representations could be improved .rather than incorporating dynamic information into dcnn , used recurrent neural network ( rnn ) to directly recognize and draw online chinese characters , and showed that rnn is also able to learn complex representation of online characters . in this paper , we focus on dcnn based models and explore several techniques to address these challenges .motivated by the great performance of the path signature in hccr , we utilize path signature as features for online characters .different from previous usage of path signature features for hccr , we add a time dimension to the online characters in accordance with the writing order to enable extraction of more expressive features .our dcnn architecture is carefully designed to enable more effective learning from the signature features , with spatial stochastic max - pooling layers performing feature map distortion and model averaging .the _ dropdistortion _ training strategy , which gradually lowers the character distortion degree during training , is proposed to address the drawbacks of a fixed degree of distortion .1 pictorially illustrates this concept . in early training epochs , a high degree of distortionprovides improved generalization , whereas in later epochs , a decreasing degree of distortion gradually reveals more genuine data distribution .experiments on the casia - olhwdb 1.0 , casia - olhwdb 1.1 , and the icdar2013 online hccr competition dataset achieve state - of - the - art accuracies of 97.67% , 97.30% and 97.99% , respectively . is lowered after certain epochs at the training stage , in order to gradually reveal the genuine data distribution . 
] the remaining part of this paper is organized as follows .section provides a detailed analysis of the proposed _dropdistortion _ method .section introduces the path signature .section describes our dcnn architecture and design pipeline .section presents the experimental results and its detailed analysis . finally , section concludes the paper .character distortion is widely used in hccr to generate artificial training samples .the basic structures of the distorted samples are the same as those of the originals .2 illustrates some rotated samples of a chinese character . in previous studies ,however , character distortion is applied upto a certain extent in an empirical way . in this paper , we study the influence of character distortion in detail and propose a simple but novel method , namely _ dropdistortion _ , to help enhance hccr performance .we use affine transformation to distort input characters .let denote the degree of character distortion , _denote a random number drawn from uniform distribution _u(- , ) _ , and [ _ * x * _ _ * y * _ ] denote the coordinate series of a character stroke , a matrix of size _ _ n__ where n indicates the number of data points. then we can apply affine transformation to distort online handwritten chinese characters : \leftarrow[\emph{\textbf{x}}\ \emph{\textbf{y}}]\cdot\left [ \begin{array}{cc } 1+\emph{ } & 0 \\ 0 & 1+\emph{ } \\\end{array } \right],\ ] ] \leftarrow[\emph{\textbf{x}}\ \emph{\textbf{y}}]\cdot\left [ \begin{array}{cc } 1 & \emph{ } \\ 0 & 1 \\ \end{array } \right],\ ] ] \leftarrow[\emph{\textbf{x}}\ \emph{\textbf{y}}]\cdot\left [ \begin{array}{cc } 1 & 0 \\ \emph{ } & 1 \\ \end{array } \right],\ ] ] \leftarrow[\emph{\textbf{x}}\ \emph{\textbf{y}}]\cdot\left [ \begin{array}{cc } \cos(\emph{ } ) & -\sin(\emph{ } ) \\ \sin(\emph{ } ) & \cos(\emph{ } ) \\\end{array } \right],\ ] ] where ( 1 ) stretches or shrinks the stroke , ( 2 ) and ( 3 ) slant the stroke and ( 4 ) performs rotational distortion . a character is distorted by simply applying one or more of the above equations to all its strokes , with _ _ in the same equation fixed within a character .we also randomly translate the characters to achieve a better distortion diversity .first , consider a simple example with two handwritten characters , the arabic numeral `` 1 '' and the chinese character``one '' ( fig .we use _ a _ and _ b _ to denote them respectively for convenience .when written by hand _a _ and _ b _ look quite similar except for their orientations .in other words , they have the same topological structure . let _ _ denote the angle between their orientations and the horizontal direction , and assume _ _ obeys a gaussian distribution : _ _(/2, ) _ , _ _ (0,)_. then from the bayesian perspective , we have where _f(a ) _ and _ f(b ) _ are probabilities predicted by a classifier , _f( ) _ and _ f( ) _ model the orientation information , _f(a ) _ and _ f(b ) _ model the topological structure information , and _f( ) _ is a normalized constant determined by the training set and can be safely removed : if the topological structure of _ a _ and _ b _ is nearly the same , we have although _ _f(a)__(b ) _ , the classifier is still able to correctly classify these two characters according to _ _ due to the difference between _f( ) _ and _ f()_. if rotational distortion is applied to _ a _ and _ b _ with drawn from the uniform distribution _ u(- , ) _ , then the distribution of_ _ is shown in fig .4 . if rotational distortion is applied to an extreme , i.e. 
, = , then _ _ obeys a uniform distribution as well . under these circumstances , the classifier is unable to distinguish between _ a _ and _ b _ because _ _ f()__=_f( ) _ and _ _ f(a)__(b)_. however , when it comes to hccr , the situation is somewhat different .a rotated chinese character will rarely resemble other chinese characters .a high degree of rotational distortion actually removes the conditional term _f( ) _ and forces the classifier to predict _f(c ) _ more accurately , i.e. , to achieve improved learning of the topological structure of the character _ c_. by preventing co - adaptation of _f(c ) _ and _ f( ) _ , a high degree of rotational distortion reduces overfitting .other distortions such as stretching can be analyzed in the same way . hence using character distortion to generate artificial samplesis a reasonable and effective way to enhance hccr performance .although effective in improving hccr performance , character distortion changes the data distribution .specially , it confuses some similar characters inevitably .for example , the chinese characters `` the sun '' ( fig .3 ( c ) ) and `` say '' ( fig .3 ( d ) ) would be indistinguishable if they were to be highly stretched .the change of distribution should be considered to give further improvements , but no such efforts have been reported in previous studies , where the character distortion is carried out upto a fixed degree during the entire training process . the proposed _ dropdistortion _method is a novel strategy that is designed to take the change of distribution into consideration .it s based on a simple idea that the dcnn should be fine - tuned with more genuine samples , i.e. , low - degree or non distorted samples . the proposed _algorithm is given in algorithm 1 .* input : * training set =( , ) , i=1, ... ,m of k classes . +* initialize : * index ; distortion degree . + * output : * dcnn parameters . +* begin : * sample distortion at degree update w through back propagation algorithm [ alg_lirnn ] in the proposed method , a high degree of distortion is used in the early training epochs to generate varied samples to help the dcnn learn effective features and reduce the risk of overfitting . in the later epochs ,the degree of distortion decreases gradually to allow a subsequent finer adjustment of the dcnn with more genuine samples .the only extra complexity _ dropdistortion _ introduces is to monitor the training loss and decrease the distortion rate if a certain condition is fulfilled . in practice ,_ dropdistortion _ can be simply implemented in a multi - step way , and in this paper it is implemented in a three - step way as explained in fig . 1 .in mathematics , a path signature is a collection of iterated integrals of a path .t_] _ _ denote a continuous path of bounded variation , mapping from time interval [ 0 , _ t _ ] to space _ . then the _ k_-th iterated integral of path _ x _ is where represents the tensor product . by convention, is the number one .the signature of path _ x _ is the collection of all the iterated integrals of _ x _ , denoted by _s(x)_. since this is an infinite series , in practice one often considers the first __ m__th - order integrals , namely truncated path signature : as is often the case , _ x _ is sampled and approximated by a set of discrete points. 
then the iterated integrals can be approximated by using some simple tensor algebra .online handwriting can be seen as a path mapping from time interval [ 0 , _ t _ ] to .graham first introduced truncated path signature features to online handwritten character recognition as inputs for a dcnn and achieved remarkable performance . some previous work experimented with truncated level _ m _ and found that the accuracy increases with the increase of _ m _ and gradually saturates . in our work , _ m _ is empirically set as 4 because integrals of a higher order typically characterize more trivial details of a path and do not lead to further improvement .the signature is invariant to translations and time re - parameterization of the path , and uniquely characterizes the path if it contains no part that exactly retraces itself .however , in handwritten characters , some local parts do retrace themselves due to joined - up writing .for example , path ( [ 0,0 ] , [ 1.5,2.5 ] , [ 3,3 ] , [ 1.5,2.5 ] ) has the same signature with path ( [ 0,0 ] , [ 1.5,2.5 ] ) because ( [ 1.5,2.5 ] , [ 3,3 ] , [ 1.5,2.5 ] ) exactly retraces itself . to handle this problem, we introduce a monotone time dimension to the original handwriting sequences , i.e. , the _ _ n__ matrix [ _ * x * _ _ * y * _ ] is appended by a column vector _ _* t*__=^t$ ] .this concept is demonstrated in fig .5 . the time dimension ensures the uniqueness for the path signature , hence features extracted from the _ _ n__ matrix [ _ * t * _ _ * x * _ _ * y * _ ] would be more expressive . in practice , if _ _m__=4 , then _ _n__=31 for a 2d character and _ _n__=121 for a 3d character , where _ n _ denotes the number of input channels .deep convolutional neural networks ( dcnns ) have shown great success in computer vision and pattern recognition , and different architectures of dcnn have been explored .however , these designs did not evaluate the max - pooling ( mp ) operation , which incorporates a degree of invariance with respect to translations and elastic distortions into the dcnn .max - pooling operates in a sliding window method , and conventionally the pooling region slides with a fixed integer stride .usually the stride is two , hence the size of feature map is reduced by a factor of two . compared to traditional mp layers , fractional max - pooling ( fmp ) layers reduce the feature map size by a factor of with as a fraction . if 1 , then fmp reduces the size in a slower manner than mp . as is a fraction ,the pooling stride can not be a fixed integer . in this case , the pooling region slides with a pre - calculated stride series .we prefer to call fmp as spatial stochastic max - pooling ( ssmp ) , which describes the pooling process more precisely . let _n _ and _ n _ denote the input and output feature map size respectively. then is renewed above due to the rounding effect .the stride series is a vector for each dimension of the output image .then for the i - th ( i=0,1, ... ) position of the stride series , where is the stride series , is the accumulated stride series with and ( the pooling size is 2 ) , and is a randomly drawn threshold to round up or down . 
in practice , can be set independently at different position i , or set only once and shared across different positions as a constant .we denote these two strategies as ssmp and ssmp respectively .different to which employs ssmp , we propose a new strategy that is set independently at each position i in early training epochs and set only once in the last epochs .we call this strategy as ssmp . during the pooling process, ssmp introduces a certain degree of randomness into the pooling regions by a random choice of the feasible stride series . fig .6 illustrates this concept vividly .the colored 2 squares are the pooling regions chosen . by setting as 1.5 ,the feature map size is reduced from 6 to 4 with multiple feasible pooling choices ; hence every forward path may have a slightly different output .moreover , ssmp achieves elastic distortion of the feature maps of the previous layer , which implicitly distorts the input characters and improves the generalization ability of a dcnn . in test phase , running the network for multiple times and averaging the outputs achieve the same effect as an ensemble of similar networks . in our dcnn, we use 3 or 2 convolutional filters , which also makes the network deeper compared to larger filter size .the path signature features are extracted from online handwriting data sequences , and then padded into _ _n__ bitmaps embedded in larger grids ( pad zeros around the characters ) .the first pooling layer is max - pooling rather than ssmp , as we find max - pooling works better than ssmp when dealing with the input , partly due to the input noise introduced by padding a continuous handwriting path onto a discrete bitmap .we apply the leaky relu activation function with _ _a__=0.333 , which outperforms relu in terms of convergence speed and representation capability .the datasets we used are casia - olhwdb 1.0 ( db 1.0 ) , casia - olhwdb 1.1 ( db 1.1 ) and icdar2013 competition dataset . db 1.0contains 3740 chinese character classes in standard level-1 set of gb2312 - 80 ( gb1 ) and is obtained from 420 writers ( 336 samples for training , 84 for testing ) .db 1.1 contains 3755 classes in gb1 and is obtained from 300 writers ( 240 samples for training , 60 for testing ) .the database for the icdar2013 online hccr competition consists of three datasets containing isolated chinese characters , namely , casia - olhwdb 1.0.2 ( db 1.0.2 ) .the test set was released after the competition , which contains 3755 classes in gb1 . in the experiments , we use nesterov momentum with momentum =0.9 .the learning rate is set as 0.003 initially , and halved after the first iteration , and then exponentially decreases to 0.00001 .the training mini - batch size is 96 .we trained the dcnn for 70 epochs . ''denotes the reduction of feature map size .the value following drop " means the dropout ratio . ]we first conducted experiment on db1.1 to evaluate the effectiveness of our proposed method .i.e , _ dropdistortion _ , ssmp and 3d signature .we designed a baseline dcnn with simple 0 - 1 bitmaps as input , and fixed the distortion rate .we evaluated the three techniques individually through the variable - controlling approach .the architecture of the baseline dcnn is input - 32c3 - mp2 - 64c3 - 96c3 - mp2 - 128c3 - 160c3 - mp2 - 192c3 - 224c3 - mp2 - 256c3 - output . in order to evaluate the _ dropdistortion _, the baseline dcnn is trained with _dropdistortion_. in order to evaluate the 3d signature features ,the baseline dcnn is trained with 2d signature and 3d signature . 
for the evaluation of the ssmp and the drawing methods as described in section 4.1 , i.e. ssmp , ssmp and ssmp , two mp layers in the baseline dcnn is replaced with four ssmp layers with =1.5 .refer to table 1 .the last convolutional layer is 256c2 instead of 256c3 because . in the test phase , repeatedly running the network produce a slightly different output each time because of ssmp .averaging these outputs can improve the accuracy .m0.4 cm < m0.4 cm < m0.4 cm < m0.4 cm < m0.8 cm < m0.4 cm < m0.8 cm < m0.4 cm < m0.8 cm < m0.4 cm < m0.8 cm < m0.4 cm < m0.4 cm < m0.4 cm< dimension&32&&64&96&&128&&160&&192&&224&&256 + baseline dcnn & c3 & mp2 & c3 & c3 & * mp2 * & c3 & ( ) & c3 & * mp2 * & c3 & ( ) & c3 & * mp2 * & c3 + ssmp dcnn & c3 & mp2 & c3 & c3 & * ssmp1.5 * & c3 & * ssmp1.5 * & c3 & * ssmp1.5 * & c3 & * ssmp1.5 * & c3 & * mp2 * & c2 + the experimental results are presented in table 2 . the employed three techniques , namely _ dropdistortion _, 3d signature and ssmp , all improve the performance over the baseline and their corresponding counterparts .the 2d signature substantially improves the performance , while the 3d signature gives further improvement .ssmp greatly decreases the error rates by averaging 10 test scores , and the ssmp we proposed is better than ssmp and ssmp ._ dropdistortion _ decreases the error rate with little extra cost . as _ dropdistortion _ and ssmp can be used jointly with path signature , we conducted further experiments to evaluate the joint effects ..evaluation of the proposed three techniques : _ dropdistortion _ , ssmp and path signature ( test error rates , % ) [ cols="^,^,^",options="header " , ] to achieve state - of - art results , we extended the baseline model to be deeper and wider . the overall architecture is shown in fig .the second ( 48c2 ) , fourth ( 64c2 ) and sixth layer ( 96c2 ) reduce the dimension in lower layers and act as regularization .the experimental results are presented in table 3 , where we can see _ dropdistortion_ always achieves the best performance , and a higher degree distortion is usually better than a lower degree distortion ._ dropdistortion _ brings about an obvious improvement when using bitmaps as inputs , as bitmaps only contains structural information and _ dropdistortion _ helps to learn the structure .compared to rendering online characters as bitmaps , the path signature greatly improves the recognition accuracy , which proves its effectiveness in information extraction . andthe 3d signature also gives consistent improvement over the 2d signature .the ssmp averaging results are given in table 4 ( _ dropdistortion _ + 3d signature + ssmp ) . 
by averaging 10 test scores of the network, the test error rates decrease significantly on both db 1.1 and the icdar2013 competition dataset. the best results for db 1.1 and for the icdar2013 competition dataset are both produced by jointly using _dropdistortion_, the 3d signature, and ssmp, which yields relative error reductions of 36.3% (= (4.24 - 2.70)/4.24 * 100%) and 37.2% (= (3.20 - 2.01)/3.20 * 100%), respectively.

table 3. test error rates (%) of the deeper network under different distortion strategies:
dataset   | feature      | fixed 0.1 | fixed 0.3 | dropdistortion 0.3/0.2/0.1
db 1.1    | bitmap       | 4.24      | 4.18      | 4.08
db 1.1    | 2d signature | 3.10      | 3.02      | 2.99
db 1.1    | 3d signature | 3.01      | 2.95      | 2.92
icdar2013 | bitmap       | 3.20      | 3.25      | 3.13
icdar2013 | 2d signature | 2.32      | 2.27      | 2.24
icdar2013 | 3d signature | 2.30      | 2.26      | 2.21

table 4. test error rates (%) when averaging multiple ssmp test passes (_dropdistortion_ + 3d signature + ssmp):
database  | 1 test | 2 tests | 3 tests | 4 tests | 5 tests | 6 tests | 7 tests | 8 tests | 9 tests | 10 tests
db 1.1    | 2.92   | 2.82    | 2.78    | 2.75    | 2.75    | 2.73    | 2.71    | 2.70    | 2.70    | 2.70
icdar2013 | 2.21   | 2.11    | 2.09    | 2.08    | 2.05    | 2.05    | 2.03    | 2.03    | 2.03    | 2.01

we conducted further experiments on db 1.0, and in table 5 we compare our method with some state-of-the-art results achieved for hccr in previous studies. it can be seen from table 5 that our method achieves the highest recognition accuracies on all three datasets, showing the effectiveness of the proposed approach. our dcnn architecture in fig. 7 follows that of deepcnet and of the fmp network, e.g., 2x2 convolutional kernels, increasing kernel numbers, and stacked ssmp layers, but it is no deeper than that network. domain knowledge and _dropsample_ could be incorporated into our approach to give further improvement, but this is beyond the scope of this paper. the 8-directmap contains directional information, which is equivalent to the first level of the path signature; the higher levels of the path signature used here contain richer information and can further improve the performance. fig. 8 shows some randomly chosen misclassified samples, from which we can discern that the misclassified samples are usually illegible; some are even mislabeled or wrongly written.

table 5. comparison with previously reported results (recognition accuracy, %; blank entries were not available):
method                | db 1.0 | db 1.1 | icdar2013 competition
deepcnet              |        | 96.42  | 97.39
domain knowledge      | 97.20  | 96.87  | 97.20
fmp ensemble          |        | 97.03  |
dropsample            | 97.33  | 97.06  | 97.51
directmap + convnet   |        |        | 97.55
our method (1 test)   | 97.5   | 97.08  | 97.79
our method (10 tests) | 97.67  | 97.30  | 97.99

this paper presents several new techniques for online hccr that effectively boost the recognition accuracy. we proposed a simple but effective character distortion method called _dropdistortion_, which improves the recognition accuracy with little additional computational cost. the path signature acts as an effective feature representation for online characters, and the ssmp layers in our dcnn perform feature map distortion and model averaging. experiments on casia-olhwdb 1.0, casia-olhwdb 1.1, and the icdar2013 online hccr competition dataset achieved new state-of-the-art accuracies of 97.67%, 97.30%, and 97.99%, respectively.
compared with the previous best result on the icdar2013 competition dataset, our method achieves an error reduction of 17.9%, showing the effectiveness of the proposed approach. although we mainly focus on online hccr, the proposed method is expected to serve as a general technique in many other machine learning tasks, such as image classification. the _dropdistortion_ training strategy was applied in a simple three-step way in this paper, and in future work it would be worthwhile to investigate self-adaptive _dropdistortion_, where the distortion degree could be adjusted automatically.
|
this paper presents an investigation of several techniques that increase the accuracy of online handwritten chinese character recognition ( hccr ) . we propose a new training strategy named _ dropdistortion _ to train a deep convolutional neural network ( dcnn ) with distorted samples . _ dropdistortion _ gradually lowers the degree of character distortion during training , which allows the dcnn to better generalize . path signature is used to extract effective features for online characters . further improvement is achieved by employing spatial stochastic max - pooling as a method of feature map distortion and model averaging . experiments were carried out on three publicly available datasets , namely casia - olhwdb 1.0 , casia - olhwdb 1.1 , and the icdar2013 online hccr competition dataset . the proposed techniques yield state - of - the - art recognition accuracies of 97.67% , 97.30% , and 97.99% , respectively . + online handwritten chinese character recognition , deep convolutional neural network , spatial stochastic max - pooling , character distortion , path signature ,
|
we investigate the simulation of the following special case of the time - independent schrdinger equation where , is a bounded domain , is a piecewise two times continuously differentiable function and is of the form where , for each , is a non - negative constant , is a domain such that for , and this class of functions given by is motivated by the phenomenon called quantum tunneling in particle physics . also , the form can be motivated by being an approximation of general positive .since we are considering the case where is discontinuous , we can not obtain classical solution .instead , we consider weak solutions , i.e. , is a solution of if for all .let be standard -dimensional brownian motion .let be the probability measure such that , -almost surely , .let be the expectation associated with the measure .let i.e. , is the first exit time of the brownian motion from the domain .we assume that the domain , and all the sub - domains , are ( wiener ) regular , i.e. , = 1 \quad\mbox{for all } y\in \partial d.\ ] ] then , by ( * ? ? ?* theorem 4.7 ) , a bounded weak solution to the boundary value problem where is a bounded borel function , is given by .\ ] ] moreover , if is continuous , then is the unique solution satisfying . in the case where is of the form the formula takes the form ,\ ] ] where is the total time the brownian motion spends in the sub - domain before exiting the domain at time .since the brownian motion has the strong markov property and the `` discounting term '' can be considered as exponential killing , we can analyze the terms in the sum independently .this means that on the stochastic set we can consider the corresponding yukawa equation , or brownian motion with exponential killing .this can be analyzed by using the panharmonic measures as in .now we recall some facts of panharmonic measures .the _ harmonic kernel _ is \frac{{\mathrm{d}}{\mathbb{p}}^x}{{\mathrm{d}}t}\left[\tau_d\le t\right].\ ] ] in it was shown that the harmonic kernel exists for all regular domains and that the harmonic measure can be written as the _ -panharmonic measure _ is it was shown in that all bounded solutions to boundary value problem to the yukawa equation , i.e. with constant , can be represented via the -panharmonic measure as it was also shown in that all the panharmonic measures , are equivalent .consequently , there exists radon nikodym derivatives such that the harmonic measure corresponding to is a probability measure .indeed , .\ ] ] the panharmonic measure corresponding to case is a sub - probability measure . indeed , \le 1.\ ] ] this radon nikodym derivative and the classical harmonic measure play a central part in constructing the killing walk on spheres ( kwos ) simulation algorithm in the next section .the algorithm is an extension of the classical walk of spheres algorithm ( wos ) due to muller .the general formula for the schrdinger equation provides an obvious way to solve the boundary value problem by simulating brownian particles . 
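the representation above suggests a brute-force monte carlo scheme: simulate brownian paths on a fine time mesh, accumulate the discount factor exp(-\int_0^{\tau_D} V(W_s)\,ds) along each path, and evaluate the boundary data at the exit point, which is the standard feynman-kac form of the expectation described above. the sketch below is a minimal illustration of that scheme and not the implementation used for the examples of this paper; the unit-disk domain, the two-region potential, and the boundary function are invented for the illustration.

```python
import numpy as np

def feynman_kac_mc(x0, V, f, in_domain, dt=1e-3, n_paths=500, rng=None):
    """Brute-force estimate of u(x0) = E^x0[ exp(-int_0^tau V(W_s) ds) f(W_tau) ]:
    Brownian paths are simulated on a fine time mesh until they leave the
    domain, and the discounting factor is accumulated along each path."""
    rng = np.random.default_rng() if rng is None else rng
    d, total = len(x0), 0.0
    for _ in range(n_paths):
        w, discount = np.array(x0, dtype=float), 1.0
        while in_domain(w):
            discount *= np.exp(-V(w) * dt)
            w = w + np.sqrt(dt) * rng.standard_normal(d)
        total += discount * f(w)          # boundary data at the (overshot) exit point
    return total / n_paths

# illustrative data: unit disk, killing rate 50 inside a small inner disk,
# boundary value 1 on the upper half of the circle and 0 on the lower half
in_domain = lambda w: np.dot(w, w) < 1.0
V = lambda w: 50.0 if np.dot(w, w) < 0.25**2 else 0.0
f = lambda w: 1.0 if w[1] > 0 else 0.0
print(feynman_kac_mc((0.3, 0.0), V, f, in_domain))
```

with a small time step the estimate converges to the feynman-kac expectation, but the cost of the fine mesh is exactly what the killing construction developed below is designed to avoid.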
more precisely, let , be independent simulated brownian trajectories starting from with time-mesh , each being an independent varying-length vector , where the s are independent standard -dimensional normal random variables, and is the first index such that . set . then, by the strong law of large numbers and by the dominated convergence theorem, for all with -probability 1 as and . the obvious problem with the general simulation method described above is that it is computationally very heavy. however, since the solution of the schrdinger equation depends on the entire path of the brownian motion, there seems to be no other way than to take the time-mesh to be very small. consequently, finding an efficient simulation algorithm for the general schrdinger equation by using the brownian motion seems a very challenging problem indeed. let us then consider the special case of the schrdinger equation where is of the form . this means that we have the yukawa equation with different parameters in different regions. in this case, we can sometimes avoid simulating the brownian particles on a fine time-mesh by killing the particle in the region with an exponential clock. the idea of the simulation comes from the following representation of the solutions, which follows from ( * ? ? ?* corollary 2.11 ), the strong markov property of the brownian motion and the memoryless property of the exponential distribution. let be given by . then the function in can be given as , where is an independent exponential clock with intensity on the sub-domain . note that this formula depends on the entire path of the brownian motion only through the killing time. our estimator for is , where , are independent simulations of the trajectories of brownian particles starting from the point , the set contains the particles that are not killed, and is the termination step of the algorithm. the optimistic idea for how individual trajectories are generated is as follows: set and suppose are generated. suppose , and that the particle is not near the boundary. suppose the -panharmonic radon-nikodym derivative and the harmonic measure for the domain are known (this is the optimistic part). generate on the boundary by using the harmonic measure. now . kill the particle with probability . if the particle dies, the algorithm terminates and the particle adds 0 to the sum. if the particle survives and , then the algorithm terminates and is added to the sum. if , then we have to simulate the particle with a fine time-mesh in order to allow the particle to enter well inside the domain . once the particle is well inside a domain, we can generate a new particle by using the corresponding harmonic measure and the -panharmonic radon-nikodym derivative as before. the harmonic measures and panharmonic radon-nikodym derivatives are not known for arbitrary domains. this limits the applicability of the idea above. however, for balls the radon-nikodym derivative is well known. indeed, by the rotational symmetry of the brownian motion the radon-nikodym derivative is independent of , and by the self-similarity of the brownian motion , where the function is the laplace transform of the first hitting time of a bessel process on the level , starting from . this is well known; see e.g. ( * ? ?
?* theorem 2 ) for : where is the modified bessel function of the first kind of order .the harmonic measure for balls is , due to the rotational symmetry of brownian motion , simply the uniform distribution .this leads to the following simulation algorithm : [ alg : kwos ] 1 .set and suppose are generated .2 . generate as follows : 1 . if and if start generating the brownian path with fine time - mesh : generate where an -vector of independent standard normal random variables . generate exponential killing : with probability .if killing occurs , the algorithm terminates and adds 0 to the sum .2 . if and if generate , uniformly .kill the particle with probability , where is given by .if killing occurs , the algorithm terminates and adds 0 to the sum .if let be the projection of to the boundary .in this case the algorithm terminates and adds to the sum .in this section , we present examples to motivate our algorithm and to illustrate its potential applications in one and two - dimensional settings . these examples were computed by using a simple implementation algorithms on mathematica , and they mainly chosen from the point of view of visualisation .it should be noted that our approach is obviously more much attractive in higher dimensions , where many other methods for solving such are not available , or lead into excessive computation times .next we consider the equation in the case where \to { \mathbb{r}} ] is a real interval , i.e. , , and is as in .then we obtain the impulsive differential equation where , is fixed , ) ] .one should note that the theory of impulsive differential equations is much richer than the corresponding theory of differential equations without impulse effects .for example , initial value problems of such equations may not , in general , possess any solutions at all even when the corresponding differential equation is smooth enough , fundamental properties such as continuous dependence relative to initial data may be violated , and qualitative properties like stability may need a suitable new interpretation .moreover , a simple impulsive differential equation may exhibit several new phenomena such as rhythmical beating , merging of solutions , and noncontinuability of solutions .see e.g. for more information about impulsive differential equations .however , these problems do not arise in the case of as , the boundary value problem has a continuous weak solution , and it coincides piecewise with the classical solution .the following algorithm is for the special case with and to efficiently simulate the particles we combine the gambler s ruin ( i.e. , the harmonic measure ) outside the killing zone and the kwos algorithm [ alg : kwos ] inside the killing zone .[ alg : gamblers ] 1 .set and suppose are generated .2 . generate as follows : 1 .if then it will exit the domain in with probability .the algorithm terminates and gives to the sum .otherwise and we are entering the killing zone .2 . if then it will exit the domain in with probability .the algorithm terminates and gives to the sum .otherwise an we are entering the killing zone .3 . if $ ] we are entering or in the killing zone . in this case usekwos algorithm [ alg : kwos ] .the above algorithm can further be refined by using the harmonic measure and the -harmonic radon nikodym derivative in all the domains , ( away from boundary ) .indeed , all the required formulas can be founded e.g. in .note that in the one - dimensional case , the function of can be obtained from ( * ? ? 
?* ( 3.0.1 ) ) : = \frac{\cosh ( ( b+a-2x)\sqrt{\lambda/2 } ) } { \cosh ( ( b - a)\sqrt{\lambda/2 } ) } , \ ] ] where and is the one - dimensional brownian motion by using parameters , , , and .[ ex1 ] let , and consider the boundary value problem for the equation with , and .then the solution of the problem is given by the solution , and approximate solution obtained through gr - kwos algorithm [ alg : gamblers ] , are illustrated in figure [ ex1fig ] .of the bvp of example [ ex1 ] ( dashed ) and its approximation obtained by using gr - kwos algorithm.,width=302 ] next we consider examples where the two dimensional potential function of the equation is computed by using algorithm [ alg : kwos ] .we start by computing the pure yukawa case .potential theory of the yukawa equation has been studied by duffin .stochastic methodology for studying yukawa potentials was developed in .( yukawa equation ) [ ex2 ] let we solve the yukawa equation , i.e. , the equation with constant with the boundary conditions given by .the approximate solution of , where , and a chain of balls ( disks ) used in wos style simulation of the path of one brownian particle , are illustrated in figure [ ex2fig ] .of the bvp of example [ ex2 ] with kwos algorithm ( left ) , and chain of disks used by kwos algorithm in simulation of an individual path ( right).,title="fig:",width=309 ] of the bvp of example [ ex2 ] with kwos algorithm ( left ) , and chain of disks used by kwos algorithm in simulation of an individual path ( right).,title="fig:",width=151 ] ( mixed laplace yukawa equation ) [ ex3 ] consider the problem on the domain where is as in , the boundary values are given by , and .the approximate solution of , is illustrated in figure [ ex3fig ] .of the bvp of example [ ex3 ] with kwos algorithm.,width=377 ]as a conclusion , the killing walk on spheres ( kwos ) algorithm is a very simple tool to compute a solution to the schrdinger equation with piecewise constant positive potential .the algorithm is based on the classical walk on spheres .it avoids estimating the exit time from the ball by using the interpretation of killing .if the potential has negative values , then the killing interpretation is no longer available and one needs to estimate the exit time by using , e.g. , the recent walk on moving spheres ( woms ) algorithm due to deaconu et al .
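for concreteness, a minimal two-dimensional sketch of the kwos step of algorithm [alg:kwos] follows. the unit-disk domain, the piecewise-constant potential, the dirichlet data, and the stopping tolerance are invented for the illustration, and for brevity the potential is frozen at its value at the current particle position over each jump, whereas the paper's algorithm switches to a fine time-mesh when the particle approaches the killing region. the per-jump survival weight used below is the planar (d = 2) form 1/I_0(r\sqrt{2\lambda}) of the bessel-function expression quoted in the text.

```python
import numpy as np
from scipy.special import i0

def kwos_disk(x0, lam, f, eps=1e-3, n_particles=5000, rng=None):
    """Killing walk on spheres on the unit disk: at each step the particle
    jumps to a uniform point on the largest disk centred at its current
    position and contained in the domain, surviving the jump with
    probability 1/I0(r*sqrt(2*lambda)) for the local killing rate lambda."""
    rng = np.random.default_rng() if rng is None else rng
    total = 0.0
    for _ in range(n_particles):
        x = np.array(x0, dtype=float)
        while True:
            r = 1.0 - np.linalg.norm(x)            # distance to the boundary
            if r < eps:                            # close enough: project and stop
                total += f(x / np.linalg.norm(x))
                break
            lam_here = lam(x)
            if lam_here > 0 and rng.random() > 1.0 / i0(r * np.sqrt(2.0 * lam_here)):
                break                              # killed: contributes 0
            phi = rng.uniform(0.0, 2.0 * np.pi)
            x = x + r * np.array([np.cos(phi), np.sin(phi)])
    return total / n_particles

# illustrative mixed Laplace-Yukawa-type data: killing rate 20 inside a small
# disk, zero elsewhere, boundary value 1 on the upper half of the circle
lam = lambda x: 20.0 if np.linalg.norm(x - np.array([0.3, 0.0])) < 0.2 else 0.0
f = lambda x: 1.0 if x[1] > 0 else 0.0
print(kwos_disk((0.0, -0.4), lam, f))
```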
|
in this paper we introduce a new method for the simulation of a weak solution of the schrdinger - type equation where the potential is piecewise constant and positive . the method , called killing walk on spheres algorithm , combines the classical walk of spheres algorithm with killing that can be determined by using panharmonic measures .
|
this review treats probe diffusion and related methods of investigating polymer dynamics . in a probe diffusion experiment , a dilute dispersion of mesoscopic probe particlesis mixed into a polymer solution . the motions or relative motions of the probe particlesare then measured . in most experiments discussed here ,probe motions arise from diffusion ; the small literature on driven motions of mesoscopic probes through polymer solutions is also reviewed here . in some systems ,probe motions involve multiple relaxations whose time dependences can be independently determined . in others ,a single relaxation determined a probe diffusion coefficient . is sensitive to the probe radius , polymer molecular weight and concentration , solution viscosity , solvent viscosity , and other variables .the dependence of on these and other variables is used to infer how polymers move in solution .the remainder of this section presents a historical background .section ii remarks briefly on the theory underlying major experimental methods for studying probe diffusion .section iii presents the experimental phenomenology .section iv discusses the systematics of that phenomenology .the literature on probe diffusion studies of polymer solutions tends to be divided into three parts , namely ( i ) _ optical probe diffusion _ studies , largely with quasi - elastic light scattering spectroscopy ( qelss ) , fluorescence recovery after photobleaching ( frap ) , and forced rayleigh scattering ( frs ) , of the thermal motion of dilute probe particles , ( ii ) _ microrheology _ studies in which an inferred mean - square particle displacement and a generalized stokes - einstein relation are used to compute dynamic moduli of the solution , and ( iii ) _ particle - tracking _ studies in which the detailed motions of individual particles are recorded . historically ,brown used particle tracking to study the motion now called _brownian_. optical probe diffusion as a method for studying polymer solutions dates back to turner and hallett , who in 1976 examined polystyrene spheres diffusing through dextran solutions .the microrheology approach to interpreting probe diffusion stems from early work on diffusing wave spectroscopy ( dws ) , e.g. , the 1993 chapter by weitz and pines .particle tracking methods were early applied to observing the motion of labelled tags in cell membranes as reviewed , e.g. , by saxton and jacobson .the experimental literatures in these three areas are less than entirely communicating. the optical probe diffusion literature has not recently been reviewed systematically . in 1985 ,phillies , et al. re - examined extant optical probe studies of solutions of bovine serum albumin , polyethylene oxide , and polyacrylic acid , primarily from their own laboratory .a uniform stretched - exponential concentration dependence of was found . a dependence noted over a range of matrix species . for probes in bsa solutions complies with this dependence if bsa is assigned an effective corresponding to its radius .phillies and streletzky presented in 2001 a short review ( 38 references ) of the literature on optical probe diffusion .they treat primarily systems studied with qelss . in some cases follows the stokes - einstein equation evaluated using the solution viscosity .in other cases , diffusion is appreciably faster than would be expected from the solution viscosity . 
in a few casesone finds re - entrant behavior in which the stokes - einstein equation fails but only over a narrow band of concentrations .finally ( taking probes in hydroxypropyl - cellulose as an examplar ) , in some cases the spectral mode structure is too complicated to be characterized with a single diffusion coefficient .phillies and streletzky s review does not contact the microrheology literature .a series of reviews treat microrheology : diffusing wave spectroscopy was reviewed ( 49 references ) by harden and viasnoff .the primary emphases are on two - cell light scattering measurements , in which light scattered by the sample of interest is rescattered by a second scattering cell before being collected , and on ccd methods , which allow one to collect simultaneously the light scattered into a substantial number of coherence areas .solomon and lu review ( 54 references ) _ microrheology _ studies of probe diffusion in complex fluids , in particular uses of the general stokes - einstein equation and its range of validity , correlations in the motion of pairs of large particles , data analysis methods , and possible paths for extending their methods to smaller probe particles .the reference list has very limited contact with the optical probe diffusion literature .mackintosh and schmidt discuss ( 60 references ) studies on the diffusion of microscopic probe particles through polymer solutions by means of particle tracking and diffusing wave spectroscopy , as performed under the cognomen _ microrheology _ , as well as studies of viscoelasticity using atomic force microscopy and the driven motion of mesoscopic particles .the excellent list of references lacks contact with the qelss / frap - based optical probe diffusion literature .mukhopadhay and granick review ( 41 references ) experimental methods of driving and measuring the displacement of mesoscopic particles , such as dws and optical tweezer techniques .they obtain the complex modulus of the fluid is via a generalized stokes - einstein equation .the discussion and references do not contact the optical probe diffusion literature . among reviews of particle tracking methods ,note : saxton and jacobson treat ( 105 references ) experimental methods for tracking single particles as they move in cell membranes .motion of membrane proteins and other probes is complex , because the motion may be diffusive , obstructed , include nondiffusive convective motion , or involve trapping .correlations between single particle tracking and frap are considered , including the frap - determined nominal fraction of immobile particles .tseng , et al. review ( 52 references ) particle tracking methods as applied to solutions of f - actin and other cytoskeletal proteins .while mention is made of measuring mean - square displacements of diffusing particles , this is in the context of a technique that actually does measure particle positions and displacements . 
a review ( 56 references ) of microscopy techniques by habdas and weeks emphasizes video microscopy , imaging crystals , glasses , and phase transitions , and measurement of particle interactions with laser tweezer methods .the focus is the application of these techniques to colloidal particles in solution .many methods have been applied to study the motion of mesoscopic probe particles through polymer solutions , including qelss , frs , frap , dws , particle tracking , and interferometry .optical probe methods refer to a special case of scattering from a general three - component probe : matrix polymer : solvent system . in the special case , one of the components , the _ probe _ ,is dilute yet nonetheless dominates the scattering process , even though the other solute component , the _ matrix _ , may be far more concentrated than the probe .most of these methods , in particular qelss , frs , and frap , have also been used to study motion of dilute labelled polymer chains in polymer solutions , leading to measurements of polymer self- and tracer- diffusion coefficients .light scattering spectroscopy is sensitive to the dynamic structure factor of the scattering particles .the theoretical basis for applying qelss , frs , and frap to three - component solutions appears in our prior review and need not be repeated here in detail . for dilute probe particles diffusing in a non - scattering matrix, reduces to here labels the probe particles whose locations at time are the . is determined by the particle displacements during .the scattering vector , with determines the distance a particle must diffuse to have a significant effect on . here is the solution index of refraction , is the illuminating wavelength _ in vacuo _ , and is the scattering angle . in observing a polymer coil with qelss ,complications arise if the polymer coil radius is comparable with , because coil internal modes enter the spectrum if is not .however , the rigid probes considered here have no significant internal modes , so light scattering spectroscopy measures rigid - probe center - of - mass motion regardless of . for dilute monodisperse rigid probes _ in a simple newtonian solvent _, probe motion is described to good approximation by the langevin equation . in this case, the dynamic structure factor reduces to a simple exponential in which is the probe diffusion coefficient . in one dimension , is related to particle displacement by the stokes - einstein equation relates for spheres of radius to other parameters , namely here is boltzmann s constant , is the absolute temperature , and is the experimentally - measured macroscopic solution viscosity . when and or are known , this equation may be inverted to yield a _ or an _ apparent hydrodynamic radius _ , respectively , viz . and for probes in polymer solutions, may differ markedly from the macroscopically measured viscosity , typically with .the may be much less than the experimentally - determined physical radius .we denote and as _ stokes - einsteinian _ behavior . the contrary case ( and ) we describe as _ non - stokes - einsteinian _ behavior . in more complex systems ,the dynamic structure factor is non - exponential . remains monotonic , with laplace transform here is the relaxation distribution function . in some cases, is qualitatively well - described by a sum of relatively separated peaks , commonly termed modes. 
this terminology does not imply that there is necessarily a 1-to-1 correspondence between individual modes and individual physical relaxation processes , though such a correspondence often exists .an arbitrary physical can be generated by scattering light from an appropriately - chosen mixture of dilute brownian probes in a simple fluid .for each brownian species , the mean - square displacement increases linearly in time .correspondingly , every physical corresponds to a system in which the mean - square particle displacement increases linearly in time .however , a non - exponential can also arise from systems in which has more complex behavior .it is impossible to distinguish these possibilities by examining spectra taken at a single scattering angle or from spectra uniformly averaged over a range of .thus , it is impossible to determine from one light scattering spectrum of an unknown system .recently , a misinterpretation of the historical literature on light scattering spectra has emerged .the starting point is the excellent book of berne and pecora , in particular their treatment of light scattering spectra of brownian particles that follow the langevin equation .berne and pecora s treatment is without error , but the restrictions on the range of validity of their results are not uniformly recognized . the langevin equation is a model for an isolated particle floating in a simple solvent .the particle is assumed to be subject to two forces , namely a drag force and a random force .here is the drag coefficient and is the time - dependent particle velocity .the random force is uncorrelated with the drag force . also, the random forces at any distinct pair of times are uncorrelated , i.e. , for .the particle s motion follows from newton s second law .the model only specifies statistical properties of , so only statistical properties of particle displacements can be obtained from the langevin model . under these conditions ,doob showed in 1942 that : ( i ) is a gaussian in , with mean - square displacement in one dimension satisfying ; ( ii ) the spectrum reduces to and ( iii ) as an absolutely rigorous mathematical consequence , is a single exponential , namely .these results describe , e.g. , the light scattering spectrum of dilute polystyrene spheres in pure water , which was an early target for experimental studies .spectra of probe particles in viscoelastic fluids commonly are not simple exponentials . in the literature misinterpretation , eq . [ eq : sqtaureduceddp ] is incorrectly claimed under the cognomen gaussian assumption or gaussian approximationto be uniformly applicable to light scattering spectra , in particular to spectra that do not show a single - exponential decay .is this realistic ? in a viscoelastic liquid , the elastic moduli are frequency - dependent .correspondingly , the stress fluctuations in the liquid create random forces with non - zero correlation times , i.e. , for a range of .correspondingly , the displacement distribution function is not a gaussian , and shows non - diffusive behavior that is not characterized by a diffusion coefficient . a contrapositive statement of doob s theorem shows that non - exponential spectra correspond necessarily to particleswhose motion is not described by eq .[ eq : sqtaureduceddp ] .an explicit calculation that correctly expresses in terms of all moments , , has been made . reflects all moments of . 
except under special conditions not satisfied by probe particles in complex fluids, the higher moments make non-trivial contributions to . and qelss single- and multiple-scattering spectra are in general not determined by . equivalently, there are pairs of systems having the same , but very different time dependences for . articles based on the gaussian approximation are considered in section iii.f. is the spatial fourier transform of the probability distribution for finding a probe displacement during . while a single measurement of cannot be used to compute , determination of the complete functional dependence of on could in principle be used to determine . to the authors' knowledge, this has only been done in the trivial case of simple brownian motion. particle tracking methods yield directly, and can be used to compute complex many-time displacement cross-correlations, such as , where is the probe displacement between and . these cross-correlations, which have not yet been intensively exploited, are substantially inaccessible to conventional light scattering spectroscopy. this section describes experiments on the diffusion of rigid probes through polymer solutions and related systems, and related measurements using other techniques, including particle tracking measurements and true microscopic rheological measurements. the major subsections of section iii are (a) probe diffusion in polymer solutions, (a.1) probe diffusion in hydroxypropylcellulose (hpc) solutions, (b) rotational diffusion of probes, (c) particle tracking methods, (d) true microrheological measurements, (e) probes in polyacrylamide gels, protein gels, and in vivo and in other structured systems, and (f) probe spectra interpreted with the gaussian displacement assumption. for simplicity, within each section descriptions of the primary literature are sorted alphabetically, except that studies on hpc : water are ordered chronologically. measurements on probes in solutions of rigid-rod polymers and in colloidal systems will be discussed elsewhere. in generating the results of this section, experimental findings were taken from tabulated numbers in the original papers, or were recovered from the original papers by scanning original figures and digitizing the images. fitting curves were generated by non-linear least-squares analysis based on the simplex method, leading to parameters in tables i-v and the smooth curves in the figures. figures were generated _de novo_ from digitized numbers and our fitting curves. most smooth curves represent fits to stretched exponentials in concentration, namely ; here is a scaling exponent, is a scaling prefactor, and reflect probe behavior at very low polymer concentration. this section treats experiments that measured probe translational diffusion. (figure: fast and slow modes in spectra of (a) 100 nm sulfate latex spheres in polyacrylamide solution, and (b) 33 nm hematite particles in sodium polyacrylate solutions, ph 10, 0.1 m nano, both as functions of polymer concentration, after bremmell, et al.) bremmell, et al.
used qelss to examine the diffusion of positively - charged 34 nm and negatively - charged 100 nm diameter polystyrene latex ( psl ) spheres and 33 nm diameter hematite particles .matrix solutions included water : glycerol , aqueous 3 mda polyacrylamide ( paam ) , and aqueous high - molecular weight sodium polyacrylate ( napaa ) .spectra were fit to a sum of exponentials .psl spheres in water : glycerol showed single - exponential relaxations whose scaled linearly with as the temperature and glycerol concentration were varied .polyacrylamide solutions show marked shear thinning . as seen in figure [ figurebremm2001adp ] , in polyacrylamide solutions : except at very low polymer concentrations , psl sphere spectra are bimodal , usefully characterized by fast and slow diffusion coefficients and . with increasing polyacrylamide concentration , of the 200 nm spheres falls 30-fold while increases by only 20-fold . does not depend strongly on . shows re - entrance : it first increases with increasing polymer and then decreases to below its zero- value . at elevated , increases profoundly , and more rapidly than linearly , with increasing . with 68 nm spheres , at smaller increases with increasing polymer , though less than the increase with 200 nm spheres , while falls with increasing . with hematite particles in sodium polyacrylate solutions , with concentrations increasing up to 1 wt% , and _ both _ increase , by ten - fold and by at least three - fold .bremmell , et al. refer to speculation that elasticity ( solution viscoelasticity ) is related to the observed hyperdiffusivity .fast ( open points ) and slow ( filled points ) diffusion coefficients of 100 nm polystyrene sulfate spheres in solutions of polyacrylic acid - co - acrylamide in the presence of 0.001 ( ) or 1.0 ( ) m nano , from data of bremmell and dunstan . ] bremmel and dunstan examined the diffusion of 100 nm radius psl spheres in 3 mda poly(acrylic acid - co - acrylamide ) at a range of ionic strengths and polymer concentrations .inverse laplace transformation of qelss spectra found a bimodal distribution of relaxation rates .figure [ figurebremm2002adp ] shows representative results from the smallest and largest ionic strengths examined . in 1 m nano , and both increase with increasing . in 0.001 mnano , shows re - entrant behavior , while simply decreases with increasing . and both show a complex ionic - strength - dependent dependence on .brown and rymden used qelss to examine the diffusion of psl spheres through carboxymethylcellulose ( cmc ) solutions .the primary interest was to treat adsorption by this polymer to the latex spheres .brown and rymden conclude that cmc goes down on the surface in a relatively flat conformation .cmc is a weak polyelectrolyte .the extend of its binding to polystyrene latex is complexly influenced by factors including salt concentration , ph , and probe size and surface chemistry .brown and rymden used qelss to determine the diffusion of 160 nm radius silica spheres and polystyrene random - coil - polymer fractions through polymethylmethacrylate ( pmma ) in chcl , as seen in fig .[ figurebrown1988c1 ] .they also measured the solution viscosity .pmma molecular weights for the sphere diffusion study were 101 , 163 , 268 , and 445 kda .qelss spectra were uniformly quite close to single exponentials . for silica spheres , was to good approximation a universal function of ] , the overlap concentration was = 0.73 ] .figure [ figureonyen1993adp1]b shows and fits to stretched exponentials in . 
at larger scattering angles , a second , much faster relaxation interpreted as polymer scattering was found .onyenemezu , et al ., assert that within experimental error the stokes - einstein equation is always closely obeyed in their systems ; cf .figure [ figureonyen1993adp2 ] for the modest non - stokes - einsteinian behaviors that their data reveal .phillies reports on the diffusion of bovine serum albumin through solutions of 100 kda and 300 kda polyethylene oxides . depended measurably on the probe concentration . at elevated polymer and low protein concentration , was as much as a third faster than expected from the -dependent solution fluidity . with increasing protein concentration, fell toward values expected from the macroscopic .this study pushed the technical limits of then - current light - scattering instrumentation . of 20 nm ( open points ) and 230 nm ( filled points ) radius probes , and ( , ) , with solutions of 70 ( squares ) and 500 ( circles ) kda aqueous dextran , and fits to stretched exponentials ( lines ) , after phillies , et al. . ]phillies , et al. observed probe diffusion in aqueous dextran .dextran concentrations covered / l , using 9 different dextran samples .probes were polystyrene spheres with radii 21 and 230 nm ; solution viscosities were obtained using thermostatted capillary viscometers .multiple tests confirmed that probe scattering completely dominated matrix polymer scattering .representative measurements of and for two polymer molecular weights appear in fig .[ figurephillies1989bdp1 ] , together with fits to stretched exponentials in . except for the smaller spheres in solutions of the very largest dextran , stokes - einsteinian behaviorwas uniformly observed , and having very nearly the same concentration dependences .this paper represented the first examination of the dependence of and on for probe diffusion through an extensive series of homologous polymers .phillies , et al. report three studies of probe diffusion in polyelectrolyte solutions .two examine probes in dilute and concentrated solutions of high and low partially - neutralized poly - acrylic acid .one examines probe diffusion in solutions of not - quite - dilute polystyrene sulfonate . in ref ., phillies , et al .examined probes ( nm ) diffusing through aqueous non - neutralized and neutralized low - molecular - weight ( kda ) poly - acrylic acid .the parameter space is quite large , with possible effects of concentration , molecular weight , and degree of neutralization , to to mention solution ionic strength .phillies , et al. 
were only able to sample the behaviors encountered in these systems .in 2/3 neutralized 5 kda paa , fell exponentially with increasing polymer concentration , but was independent of salt concentration ( m nacl ) and very nearly independent of probe radius .probes in 150 kda paa showed a more complex dependence on these parameters : for polymer concentrations 0.1 - 20 g / l , was relatively independent from for at higher salt concentrations ( m ) , but fell by a third to two - thirds as was reduced from 0.01 m toward zero added salt , the decline being larger at elevated polymer concentration .probes in largely - neutralized 150 kda have stretched - exponential dependences on polymer to good accuracy , for concentrations out to 10g / l and as small as 0.25 , except that at very low ( ) polymer concentration is larger than expected from a fit of a stretched exponential to the higher - concentration measurements .careful analysis revealed that at very low concentrations and similarly for .also , the non - stokes - einsteinian behaviors found by lin and phillies in non - neutralized paa are also seen in partially - neutralized paas . of 20 nm radius probes in 596 kda paa , 60 % neutralized , at ionic strengths 0 ( ) , 0.01 ( ) , 0.02 ( ) , and 0.1 ( ) m , after phillies , et al. , and fits to stretched exponentials ( lines ) . ] of 20 nm probes in 1 g / l 596 kda paa , at neutralizations 60% ( ) , 85% ( ) , and 100% ( ) , as functions of ionic strength , and fits to stretched exponentials in , after phillies , et al. . ] in a separate study , phillies , et al. examined the diffusion of 21 , 52 , 322 , and 655 nm radius carboxylate - modified polystyrene spheres through 596 kda partially - neutralized poly - acrylic acid , using light scattering to determine .the dependences of and the solution viscosity on , solution ionic strength ( m ) , and fractional neutralization of the polymer ( ) were determined . has a stretched - exponential dependence on ( figure [ figurephillies1987adp1 ] ) and a stretched - exponential dependence on for .probes generally diffuse faster than predicted by the stokes - einstein equation , the prediction of the stokes - einstein equation being approached more nearly with lower polymer neutralization , larger solvent ionic strength , and larger probes .the dependence of the apparent hydrodynamic radius of the probes on solution ph ( and , implicitly , polymer neutralization ) is much more pronounced for the smaller 21 and 52 nm spheres than for the larger 322 and 655 nm spheres .if one believed that the diffusion of probe particles , whose sizes exceed all of the length scales of the polymer solution , were governed by the macroscopic viscosity , then for this large- polymer the longest length scale is at least 52 nm and perhaps larger than 322 nm . of 7 ( ) , 34 ( ) , and 95 ( ) nm radius polystyrene spheres in solutions of 178 ( open points ) and 1188 ( filled points ) kda polystyrene sulfonate , and fits to simple exponentials in , after phillies , et al. . ]phillies , et al. examined polystyrene spheres , radii 7 , 34 , and 95 nm , diffusing through aqueous polystyrene sulfonate .the purpose of the experiments was to determine the initial slope of against polymer for various polymer .polymers had 7 molecular weights with kda . to minimize low - salt polyelectrolyte anomalies ,the solvent included 0.2 m nacl .carboxylate - modified polystyrene spheres are charge - stabilized . 
to prevent aggregation on the time scale of the experiments ,the solvent included 1 mm naoh and 0.1 or 0.32 g / l sodium dodecyl sulfate .figure [ figurephillies1997bdp2 ] shows representative data , namely for each sphere size in 178 and 1188 kda polystyrene sulfonate , and fits to simple exponentials in .these results are interpreted below . of 20 nm radius probes as a function of in 2/3 neutralized 5kda polyacrylic acid at polymer concentrations 0 ( ) , 5 ( ) , 25 ( ) , 50 ( ) , and 100 ( )g / l , and linear fits at each showing simple walden s rule behavior , after phillies , et al. . ] of 20 nm radius probes as a function of in 2/3 neutralized 596 kda polyacrylic acid:0.1 m nacl : 0.1 wt% triton x-100 at concentrations 0 ( ) , 3 ( ) , 6 ( ) , 8 ( ) , and 15 ( )g / l and fits at each to the vogel - fulcher - tamman equation using the same at every , after phillies , et al. . ]phillies , et al. report studies of the temperature dependence of probe diffusion through various polymer solutions .these studies constitute a response to a critique , made to the author at several conferences , of using the stretched - exponential concentration dependence .critics complained that data had not been `` reduced relative to the glass temperature '' . to examine this issue , phillies , et al. measured and for probes in various solutions at a series of temperatures .the systems studied included ( i ) 20.4 nm radius carboxylate - modified polystyrene sphere probes in 2/3 neutralized low - molecular - weight 5 and 150 kda poly - acrylic acid with and without added 0.1 m nacl , ( ii ) the same probes in 2/3 neutralized intermediate - molecular - weight 596 kda poly - acrylic acid with or without added 0.1 m nacl or added surfactant , and ( iii ) 34 nm nominal radius polystyrene spheres in solutions of dextrans having kda .polymer concentrations reached 100 , 45 , or 20 g / l , respectively , for the three poly - acrylic acids and 300g / l for the dextrans . for probes in solutions of low - molecular weight polyacrylic acid , the temperature dependence of of seen in figure [ figurephillies1991adp4 ]is entirely explained by the temperature dependence of the solvent viscosity and walden s rule .furthermore , at each polymer concentration is very nearly independent from .this result includes data for 2 - 65 c and fourteen polymer concentrations at two polymer molecular weights .there was no residual temperature dependence of to be explained by glass temperature issues .the notion that is sensitive to other than through is rejected by this data . on the other hand ,if hydrodynamic interactions between the polymer chains and the probes were primarily responsible for controlling , it would be reasonable for to depend linearly on the solvent viscosity , as is observed .note that the chain monomer mobility is also reasonable expected to scale linearly with , so the observations of reference are equally consistent with reptation - type dynamic models . of 34 nm radius probes against in aqueous 200 g / l 83.5 kda dextran ( ) and in 100 g / l 542 kda dextran ( ) , showing the linear dependence of on , after phillies , et al. . ] in ref ., phillies , et al .examined probe diffusion through solutions of intermediate - molecular - weight poly - acrylic acid , comparing the temperature dependence of and with walden s rule and with the vogel - fulcher - tamman equation . 
for the solution viscosity , this equation is here , , and are material - dependent parameters ; is sometimes replaced with .walden s rule was followed accurately by all of these data .the vft equation uniformly describes the temperature dependence of the measurements accurately .fits of measurements covering all studied to eq .[ eq : dpvogel ] , using the same but different at different , gave excellent results as seen in figure [ figurephillies1992cdp1 ] .these data thus reject the possibility of adjusting at different for a hypothesized variation in with .separately , phillies , et al. also fit their data at each polymer concentration to both with forced and with as an additional free parameter . adding as a further free parameter had no effect on the rms error in the fits , indicating that there was no non - linear dependence of on .phillies , et al .conclude that their data were consistent with hydrodynamic models in which solvent - mediated forces are the dominant forces between polymer chains , and were equally consistent with reptation - type models in which the solvent viscosity determines the monomer friction constant .phillies and quinlan used light scattering spectroscopy to obtain the -dependence of of 34 nm radius carboxylate - modified polystyrene spheres in solutions of various dextran fractions , kda at concentrations up to 300solution viscosities were also measured .representative measurements appear in figure [ figurephillies1992edp1 ] .measurements of were fit to a modified vogel - fulcher - tamman equation and to eq [ eq : dptetapoly ] .a very weak deviation from walden s rule was observed , increasing with increasing faster than expected from the viscosity .the deviations , which are at most twice the random scatter in the measurements , increase with increasing but are independent of .phillies and quinlan note that the deviations are consistent with a slight change in solvent quality with increasing dextran monomer concentration .equation [ eq : dpvogelmod ] , with the same for all solutions , fits almost all measurements to within 2% rms error .the apparent glass temperature of water : dextran solutions , as inferred from the modified vft equation , is independent of dextran molecular weight and dextran concentration .the measurements here serve to exclude any hypothesis that the strong concentration dependence of arises from a strong concentration dependence of , namely ( i ) is independent of , and , alternatively , ( ii ) after removing the temperature dependence of , there is next - to - no remnant -dependence of available to be interpreted as a variation in .roberts , et al. used pfgnmr to examined a model liquid filled - polymer system , formed from silica nanoparticles suspended in monodisperse poly(dimethylsiloxane ) .silica particles had diameters 0.35 and 2.2 nm ; polymers had molecular weights 5.2 and 12.2 kda , with of 1.07 and 1.03 , respectively .these are actually not probe measurements ; the volume fractions of probes and matrix polymers were both always substantial . of the small silica particles and the 5.2 kda polymer in a mixture both fall linearly with increasing polymer volume fraction for polymer volume fraction in the range 0.2 to 0.95 .in contrast , in a mixture of the larger polymer and larger spheres , of the polymer rises and of the mixture decreases with increasing sphere concentration .shibayama , et al. 
examine the diffusion of polystyrene probe particles in poly(n - isopropylacrylamide ) gels and solutions using qelss .the solution spectra are bimodal , with both relaxation rates scaling linearly in .a faster , relatively concentration - independent mode corresponds to motion of the polymer chains , while a slower mode corresponds to probe diffusion . as a result of the preparation method , solutions at each concentration involved polymers having a different average molecular weight . over the observed range of polymer concentrations , a reduced probe diffusion coefficient changed by several orders of magnitude .shibayama , et al .propose that the probe diffusion coefficient should scale with concentration and polymer molecular weight as , based on the corresponding prediction in shiwa for polymer self - diffusion .spectra of probes in true crosslinked gels were also bimodal , but with increasing crosslinker concentration the amplitude of the probe mode is suppressed to zero . of ( a ) 93 ( ) , 183( ) , and 246 ( ) nm butadiene spheres in aqueous 2 mda dextran , and ( b ) 246 nm spheres in solutions of 20 ( ) , 70 ( ) , 150 ( ) , 500 ( ) , and 2000 ( ) kda dextran , all as functions of dextran concentration , following turner and hallett.,title="fig : " ] of ( a ) 93 ( ) , 183( ) , and 246 ( ) nm butadiene spheres in aqueous 2 mda dextran , and ( b ) 246 nm spheres in solutions of 20 ( ) , 70 ( ) , 150 ( ) , 500 ( ) , and 2000 ( ) kda dextran , all as functions of dextran concentration , following turner and hallett.,title="fig : " ] turner and hallett used qelss to measure the diffusion of carboxylated styrene butadiene spheres , diameters 93 , 183 , 246 nm , in solutions of dextrans having nominal molecular weights 20 , 70 , 150 , 500 , and 2000 kda at polymer concentrations up to 20 g / l .as seen in figure [ figureturner1976adp ] , the normalized diffusion coefficient was very nearly insensitive to probe diameter , but changed more than three - fold as polymer concentration was increased .to good approximation , . at fixed depends appreciably on dextran molecular weight , increasing with increasing .the microviscosity inferred from agreed well with the viscosity measured with a rotating drum viscometer .ullmann and phillies made a study of the diffusion of polystyrene latex spheres through polyethylene oxide : water .these data are discussed with the following paper .they used the macroscopic to compute effective hydrodynamic radii for their psl sphere probes ( radii 21 - 655 nm ) , finding that had a complex dependence on polymer concentration .addition of 0.01% of the nonionic surfactant triton x-100 , which suppresses polymer binding , eliminated the complexity .probe spheres in surfactant - containing mixtures showed a much simpler behavior , namely the apparent hydrodynamic radii of the probes fell smoothly with increasing polymer .the degree of failure of the stokes - einstein equation increased markedly with increasing probe radius . by using measurements on probes in peo : triton x-100 to supply calibrating factors , ullmann and phillies quantitated the substantial degree of polymer adsorption by probes in surfactant - free solutions . of aqueous 7.5 ( ) , 18.5 ( ) , 100 ( ) , and 300 ( ) kda peo and stretched - exponential fits , after ullmann , et al. . 
] of ( a ) 20.8 , ( b ) 51.7 , ( c ) 322 , and ( d ) 655 nm psl in aqueous 7.5 ( ) , 8 ( ) , 100 ( ) , and 300 ( ) kda peo and stretched - exponential fits , after ullmann , et al..,title="fig : " ] of ( a ) 20.8 , ( b ) 51.7 , ( c ) 322 , and ( d ) 655 nm psl in aqueous 7.5 ( ) , 8 ( ) , 100 ( ) , and 300 ( ) kda peo and stretched - exponential fits , after ullmann , et al..,title="fig : " ] ullmann , et al. extended ullmann and phillies to study viscosity ( figure [ figureullmann1985adp1 ] ) and diffusion ( figure [ figureullmann1985adp2 ] ) of 20.8 , 51.7 , 322 , and 655 nm diameter carboxylate - modified psl spheres in solutions of 7.5 , 18.5 , 100 , and 300 kda polyethylene oxides . is substantially modified by the addition of triton x-100 , which is believed to suppress polymer binding by the spheres . follows a stretched exponential in polymer concentration , except that 655 nm diameter spheres in the 18.5 kda polymer show reentrant behavior , with first climbing 50% above and then declining markedly .the relationship between and is complex in the lower- polymers ; at large , follows a stretched exponential in polymer concentration . depends relatively weakly on probe radius .solutions of bovine serum albumin , showing their viscosity ( ) and self - diffusion coefficient ( ) ( from refs . ) , and of 322 ( ) and 655 ( ) nm polystyrene spheres , and fits ( lines a - d , respectively ) to stretched exponentials in protein , from data of ullmann , et al. . ]ullmann , et al. studied with qelss the diffusion of 52 , 322 , and 655 nm radius polystyrene spheres in solutions of bovine serum albumin ( bsa ) in 0.15 m nacl , ph 7.0 .they compared with as determined using capillary viscometers , as seen in figure [ figureullmann1985bdp ] . of the two larger spheres had a stretched - exponential dependence on .the stokes - einstein relation failed , of the 322 and 655 nm spheres increasing with increasing . of the 52 nm spheres showed re - entrant behavior , at first increasing above its value in pure solvent and then returning to the values expected from . atvery small , of the 52 nm spheres showed a local minimum .dilution experiments showed that the minimum arose from aggregation of partially - protein - coated spheres due to the bsa in solution .when spheres were diluted from concentrated protein solution to more dilute protein solution , of the 52 nm spheres returned linearly to its zero- value , with no sign of aggregation effects .probe diffusion coefficient of bovine serum albumin in dna solutions containing 0.01 ( ) or 0.1 ( ) m nacl , and fits to stretched exponentials , after wattenbarger , et al. . ] wattenbarger , et al. used frap to examine the diffusion of bovine serum albumin ( bsa ) through solutions of a 160 base - pair dna at dna concentrations 2 - 63 g / l and nacl concentrations 0.01 and 0.1 m .dna molecules had lengths ca .56 nm , identified as being approximately 1 persistence length . as seen in figure [ figurewattenbarger1992adp1 ] , was approximately exponential in .increasing the solution ionic strength increases , especially at larger dna concentrations .diffusion coefficient of 200 nm polystyrene spheres in 1.3 mda pvme : toluene and fit of filled points to a stretched exponential , after won , et al. . ] won , et al. 
won, et al. report on the diffusion of 200 nm radius psl spheres in solutions of 1.3 mda poly(vinylmethylether) : toluene using qelss, as shown in figure [figurewon1994adp1]. additional measurements of $D_p$ made with forced rayleigh scattering were in very good agreement with the qelss data. pvme concentrations reached up to 100 g/l, i.e., well into the nondilute regime; stokes-einstein behavior was observed, except for the concentrations at which a re-entrant departure from stokes-einstein behavior is found. tracer diffusion of linear and star polystyrene molecules through pvme solutions has also been studied extensively.

[figure: diffusion coefficient of 160 nm silica spheres in solutions of 57, 95, 182, 610, and 1900 kda polyisobutylene, and fits to stretched exponentials, after zhou and brown.]

zhou and brown measured $D_p$ of stearic-acid-coated silica spheres in polyisobutylene (pib) : chloroform using qelss, as seen in figure [figurezhou1989adp1]. comparison was made with pfgnmr determinations of polymer self-diffusion and qelss measurements of polymer mutual diffusion. polymer molecular weights ranged from 57 kda to 4.9 mda. sphere motion was diffusive, with a $q^{2}$-dependent linewidth, leading to the diffusion coefficients seen in figure [figurezhou1989adp1]. inverse laplace transformation of the spectra clearly resolved a weak peak due to polymer mutual diffusion from an intense peak taken to reflect probe motion. probe diffusion coefficients very nearly tracked the solution viscosity, the product $D_p \eta$ increasing very slightly over the range of concentrations studied. many of the above studies compare $D_p$ with the macroscopically measured viscosity. a series of studies confirm that in viscous simple liquids the $D_p$ measured with light scattering tracks $\eta$, no matter whether $\eta$ is changed by varying the temperature or the composition of the liquid. phillies and fernandez confirmed that $D_p$ of polystyrene spheres in water depends linearly on $T/\eta$, showing that psl spheres do not swell or contract with changing temperature. phillies measured $D_p$ of bovine serum albumin and of 45.5 nm radius psl in water : glycerol, and of the same psl spheres in water : sorbitol, for temperatures 5-50 c and concentrations as high as 86 wt% glycerol and 65 wt% sorbitol. phillies found that probe diffusion in these mixtures followed accurately the stokes-einstein equation, even with $\eta$ as large as 1000 cp. wiltzius and van saarloos, using 19, 45.5, and 107.5 nm polystyrene spheres in 99.5% glycerol over a range of temperatures, found to high precision that the temperature dependence of $D_p$ was independent of probe size, contrary to an earlier literature report. phillies and clomenil examined the possibility that this disagreement in the literature was related to temperature-dependent changes in the shape of the spectrum. they made high-precision measurements of the spectral first and second cumulants of optical probes in water : erythritol, water : glycerol, and neat glycerol, their new result being that in these systems the reduced line shape was independent of temperature. their measurements serve to reject any hypothesis that the earlier literature disagreement arose because different spectral fitting processes accommodated differently to temperature-dependent variations in a not-quite-exponential spectral line shape.
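the cumulant analysis invoked here is the standard one; as a restatement (not a result from phillies and clomenil), the qelss field correlation function is expanded as
\[
\ln g^{(1)}(q,t) = - K_{1} t + \frac{K_{2}}{2!} t^{2} - \cdots ,
\]
where the first cumulant $K_{1}$ gives the mean relaxation rate ($K_{1} = q^{2} D_p$ for simple probe diffusion) and the reduced second cumulant $K_{2}/K_{1}^{2}$ measures the departure of the line shape from a single exponential; a temperature-independent $K_{2}/K_{1}^{2}$ is what is meant above by a line shape that does not change with temperature.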
from these simple-liquid studies, failures of the stokes-einstein equation in viscoelastic polymer solutions cannot be ascribed simply to the large zero-shear viscosity of those solutions. the translational diffusion of probes in hydroxypropylcellulose has been studied very extensively. earlier work was assisted by the practical matters that hpc samples are available with a wide range of molecular weights, dissolve well in water, and have interesting thermodynamic properties, including a transition from good to theta solvent behavior with increasing temperature as well as a liquid-to-liquid-crystal transition at extremely high polymer concentration. much of the later work was motivated by the viscosity measurements of phillies and quinlan, who measured $\eta$ for solutions of 300 kda, 1 mda, and 1.3 mda hpc over wide concentration ranges. phillies and quinlan found an unusual viscometric transition not observed in most polymers studied with optical probe diffusion. at first, work focused on finding evidence corroborating the reality of the transition. later work focused on the search for a physical interpretation of this transition. phillies and quinlan established that up to a transition concentration $c^{+}$ the concentration dependence of $\eta$ of hpc solutions follows accurately a stretched exponential, $\eta = \eta_{0} \exp(\alpha c^{\nu})$; at concentrations $c > c^{+}$, the concentration dependence of $\eta$ follows equally accurately a power law, $\eta = \bar{\eta}\, c^{x}$. here $\alpha$ is a scaling prefactor, $\nu$ and $x$ are scaling exponents, and $\eta_{0}$ and $\bar{\eta}$ are dimensional prefactors. to avoid model-dependent phrasings, phillies and quinlan termed the $c < c^{+}$ and $c > c^{+}$ domains the solutionlike and meltlike regimes. at the transition, $\eta$ is continuous. furthermore, the transition is analytic: both the functions and their first derivatives are continuous. there is no crossover regime: one form or the other describes $\eta$ at every concentration. systematic reviews of the literature showed that such transitions happen in some but not all polymer solutions. in natural units the transition concentration $c^{+}[\eta]$, rather than taking the more typical value $c[\eta] \approx 35$, was never larger than 6, but one certainly does not see the scaling predicted by reptation treatments at the concentrations and molecular weights reached here. lau, et al. use video tracking to measure the two-particle cross-diffusion coefficients of naturally-occurring particles in the intracellular medium. comparison is made between one-particle and two-particle measurements, each giving a time course of a mean-square displacement, in the same cell at the same time. cell interiors are characterized by extensive chemical reactions, and are not equilibrium systems. the observed behaviors are more complex than would be obtained in a simple viscoelastic medium; for example, in these cells the relative displacement of two particles shows an anomalous dependence on time.
mason, et al. examine a single 520 nm diameter sphere diffusing through 3 wt% 5 mda polyethylene oxide : water and through dna solutions, using laser deflection particle tracking to determine the motion of the single probe particle on which a weak laser beam has been focused. the technique is directly responsive to motion in a plane, variations in the intensity of the light scattered in various directions acting as an optical lever to amplify the time-dependent particle motions. they compare mechanical and dws measurements of $G'(\omega)$ and $G''(\omega)$ in the same solutions. direct measurement from their graph indicates the two methods gave values for viscoelastic parameters that agree to within perhaps 30%, and sometimes better.

[figure: microviscosity of polystyrene sulfonate combs from particle tracking of 274 nm radius polystyrene latex probes, with forced fits to stretched exponentials; the polystyrene sulfonates differ in structure, namely (a) a 2.1 mda chain with 24 side branches, (b) the same backbone as (a) but 3/8 as many side chains, (c) double the backbone length of (a) but 5/8 as many side chains, and (d) double the backbone length of (a) with a different number of side chains; after papagiannapolis, et al.]

papagiannapolis, et al. used diffusing wave spectroscopy and video particle tracking to observe the diffusion of polystyrene latex spheres through solutions of fully-neutralized polystyrene sulfonate comb polymers. probes had 548 nm diameter for the particle tracking measurements and 112 nm diameter for the dws measurements. the combs differed roughly two-fold in their main-chain length, in the number of branches, or in the length of the branches. video tracking observed motion over times during which the mean-square displacement increased linearly in time, permitting determination of $D_p$ and thence the solution viscosity at different polymer concentrations. from figure [figurepapag2005bdp], changing the number of sidechains at fixed concentration has a very limited effect on the microviscosity, while doubling the length of the backbone at fixed monomer concentration very markedly increases it. fits of the microviscosity to stretched exponentials in concentration are quite unsatisfactory at low concentration, the measured microviscosity being indistinguishable from the solvent viscosity up to polymer concentrations at which the best-fit stretched exponential predicts that it should be readily distinguishable from the solvent value. schnurr, et al. demonstrate a novel interference microscopy method for studying probe motion. in their technique, mesoscopic beads are suspended in a solution or gel, their images are identified through a microscope, and optical interferometry through the microscope stage, taking each bead to be one arm of the interferometer, is used to track bead motion. the power spectral density of the displacements and the kramers-kronig relation are applied to infer the imaginary and real parts of the complex shear modulus of the gels.
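the inversion used by schnurr, et al. can be summarized with the standard passive-microrheology relations; the following is a sketch of those relations, written here for orientation, and may differ in prefactor conventions from the expressions in the original paper. the one-sided power spectral density $S_{x}(\omega)$ of the bead displacement gives the imaginary part of the bead's response function through the fluctuation-dissipation theorem, the real part follows from the kramers-kronig relation, and a generalized stokes relation converts the response function into the complex shear modulus:
\[
\alpha''(\omega) = \frac{\omega\, S_{x}(\omega)}{2 k_{B} T} , \qquad
\alpha'(\omega) = \frac{2}{\pi}\, {\rm P} \int_{0}^{\infty} \frac{\zeta\, \alpha''(\zeta)}{\zeta^{2} - \omega^{2}}\, d\zeta , \qquad
G^{*}(\omega) = \frac{1}{6 \pi R\, \alpha(\omega)} ,
\]
with $R$ the bead radius and $\alpha = \alpha' + i \alpha''$.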
heresilica beads with radii 0.25 - 2.5 m were studied in f - actin and polyacrylamide gels .tseng and wirtz used particle tracking based on video light microscopy to measure the motion of 485 nm radius polystyrene spheres in solutions of f - actin and -actinin .mean - square displacements were determined as a function of diffusion time and characterized statistically .storage and loss moduli and phase angle were computed from the displacements .the distribution of displacements generally did not have a gaussian form ; its mean square displacement at long times increased less rapidly than linearly in time .the observed displacement distributions depend in a complex way on the f - actin and -actinin concentrations .valentine , et al. used single particle and two - particle tracking methods to examine the diffusion of carboxylate - modified polystyrene spheres in fibrin - f - actin - scruin networks . the objective was to study the effect of modifying the probe surface chemistry .different spheres were uncoated , heavily coated with bovine serum albumin , or surface modified by systematic coupling of large numbers of short methoxy - poly(ethylene glycol ) groups .particle motion was examined using video - tracking microscopy .valentine , et al ., found that the dynamic behavior of the particles changed very substantially as surface treatments were changed .however , particles whose single - particle displacement distributions differ considerably due to different surface coatings can have very similar two - particle displacement correlations .xu , et al. used mechanical rheology , particle tracking microrheology , and dws to characterize the rheological properties of actin filaments .experimental interests included the concentration dependence of the elastic modulus and the frequency dependence at high frequency of the magnitude of the complex viscoelastic modulus .dws spectra were interpreted by invoking the gaussian assumption .xu , et al. used video tracking microscopy to observe the diffusion of 970 nm diameter fluorescent polystyrene spheres through water : glycerol and aqueous wheat gliadin solutions . increasing the gliadin concentration slows probe diffusion . in glycerol ,the distribution of particle displacements was a well - behaved gaussian . for probes in gliadin suspensions : at low gliadin concentrations , measurements of the distribution of mean - square displacements against time found that different particles all had the same .in contrast , in concentrated gliadin solutions for different particles showed a wide range of different time dependences .correspondingly , at low ( 250g / l ) gliadin concentrations at fixed time is a gaussian , but at large ( 400g / l ) gliadin concentrations is extremely non - gaussian .this section considers true microrheological measurements . 
in a true rheological measurement, external forces or displacements are imposed, and the consequent displacements and/or forces are measured. true microrheological measurements, not to be confused with microrheology studies of brownian motion, differ from classical macroscopic rheological studies in that the probes or apparatus parts function on a mesoscopic length scale. if one believes that probe diffusion measurements could be inverted to obtain $\eta$, $G'(\omega)$, or $G''(\omega)$, it becomes interesting to compare those viscosities and moduli both with the corresponding quantities measured in classical instruments and also with the same quantities measured with true microrheological instruments built on the same size scale as the diffusing probes. amblard, et al. used video microscopy and magnetic tweezers to study probe motion in viscoelastic f-actin systems. the probes were polystyrene spheres having radii of 75, 760, and 750 nm. video microscopy determines particle positions; the tweezers can apply a constant force to a particle. the f-actin filaments at concentration 0.1 g/l had an estimated length of 20 microns, a persistence length of ca. 14 microns, and a mesh size on the micron scale. video microscopy determined the mean-square displacement as a function of time. the magnetic tweezers allowed application of a fixed force, permitting determination of the displacement under the influence of an applied force, also as a function of time. with small beads (radius less than the mesh size), the mean-square distance travelled during diffusion followed simple diffusion, increasing linearly in time. for large beads (radius larger than the mesh size), over a range of times 0.03-2.0 s, amblard, et al., found sublinear power-law behavior, with an exponent near 3/4 both for motion with an external driving force and for free diffusion. for driven motion, the apparent drag on the probe increases after the probe has travelled 10-20 microns. use of probe surface coatings including streptavidin, surfactant, or bovine serum albumin had no effect on the other results; specific surface interactions were concluded not to affect these findings greatly. here we see a direct demonstration that the drag processes for driven motion and for thermal motion can be the same: the mean-square displacements for both processes scale approximately as $t^{3/4}$. the time dependence of the thermal motion is not linear in $t$, so the underlying thermal particle motion is not simple brownian diffusion. bishop, et al. studied the driven rotational motion of small spheres inside a model for the intracellular medium. the spheres were 1-10 micron birefringent vaterite (calcium carbonate) crystals. the driving torque is provided by illuminating the probes with circularly polarized light; the transferred torque is determined by measuring the degree of polarization of the light after it has passed through the sample, and the rotation rate is obtained by examining the transmission of one linear polarization of the incident light. bishop, et al., examined rotational motion of their probes within a drop of hexane and in bulk water, finding that their measured microviscosities were in good agreement with the viscosities measured macroscopically. hough and ou-yang report using optical tweezers to drive the motion of a single 1.58 micron silica microsphere through solutions of 85 kda end-capped (hydrophobically modified) polyethylene oxide in water. the tweezer position was driven with piezoelectrically controlled mirrors at frequencies as large as 40,000 rad/s. measurements of the magnitude and phase of the sphere oscillations as functions of frequency were inverted, using the model that the sphere is a forced damped harmonic oscillator, to obtain $G'(\omega)$ and $G''(\omega)$. the dynamic moduli were ``... quite different from those obtained by a macroscopic rheometer, and are sensitive to surface treatment of the bead.''
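a minimal way to write the forced-damped-oscillator inversion used by hough and ou-yang, stated here as a sketch under the assumptions of negligible inertia and a harmonic trap of stiffness $k$ (their working expressions may differ in detail): if the trap center oscillates as $x_{t}(\omega) e^{i \omega t}$ and the bead responds as $x_{b}(\omega) e^{i \omega t}$, the force balance
\[
6 \pi R\, G^{*}(\omega)\, x_{b}(\omega) = k \left[ x_{t}(\omega) - x_{b}(\omega) \right]
\qquad \mbox{gives} \qquad
G^{*}(\omega) = \frac{k}{6 \pi R} \left( \frac{x_{t}(\omega)}{x_{b}(\omega)} - 1 \right) ,
\]
so the measured amplitude ratio and phase lag of the bead relative to the trap determine $G'(\omega)$ and $G''(\omega)$ directly.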
keller, et al. introduce an oscillatory magnetic bead rheometer, in which 2.25 micron latex beads containing embedded iron oxide are placed in an oscillating magnetic field, and the bead positions are tracked with video microscopy. the amplitude and phase shift of the bead motion are determined, allowing calculation of $G'(\omega)$ and $G''(\omega)$. the paper was a proof of principle demonstrating a method for making measurements at higher frequencies (up to 40 hz), so only a single solution of nondilute f-actin was examined. schmidt, et al. compare microscopic and macroscopic measurements of the storage and loss moduli of f-actin and gelsolin solutions. schmidt, et al. examined f-actin solutions over a range of concentrations in g/l, with controlled average filament lengths of several microns, all for frequencies 0.004-4 hz. the microscopic particles were 4.5 micron paramagnetic beads. these results are from true microrheological measurements: the microspheres were subject to a known external force, and the amplitude and phase shift of their motions relative to the driving force were obtained. the macroscopic probe was a rotating disk rheometer. with a viscous small-molecule liquid, microrheology and macrorheology agree as to the measured viscosity. moduli of f-actin solutions measured microscopically were substantially smaller than moduli measured with a rotating disk rheometer. the network relaxation time estimated from the microscopic data is the same as or substantially larger than the time measured with the macroscopic instrument. the frequency dependences from microrheological and from macrorheological measurements are somewhat similar, but are clearly not the same: the microrheological measurements typically show stronger dependences of $G'$ and $G''$ on $\omega$.

[figure: ratio of macroscopically to microscopically measured moduli of f-actin solutions, including pure water, 2 g/l f-actin solutions with random filament lengths, and 2 g/l f-actin solutions with controlled filament length, using data from schmidt, et al., showing that true moduli measured macroscopically and microscopically are unequal.]

figure [figureschmidt2000adp1] shows representative parts of schmidt, et al.'s measurements of this ratio. over a wide range of frequencies, the ratio of the macroscopically to the microscopically measured modulus for f-actin solutions is in the range 2-8. schmidt, et al. cite maggs as predicting similar results. maggs examined spherical nanoparticles diffusing through intertwined actin filaments in solution. maggs's model treats probes bending filaments and distorting the local mesh, using scaling arguments and the presence of two independent length scales, namely the persistence length and the mesh size. the model does not include hydrodynamic interactions between the probes and the mesh. the apparent storage and loss moduli are seen to be sensitive to the apparatus length scale. schmidt, et al. studied the rheological properties of solutions of fd-virus, using classical mechanical and magnetic tweezers rheometry to determine $G'(\omega)$ and $G''(\omega)$. macroscopic and microscopic measurements are reported to be in reasonable agreement.
comparison was made with some modern theoretical calculations ; the observed frequency dependence of at low frequency was much weaker than predicted .comparison was also made with studies of actin solutions , permitting separation of fundamental physical properties from peculiarities of particular chemical systems .schmidt , et al ., observe that actin aggregation is extremely sensitive to a wide variety of proteins and other environmental factors .they propose that a practical way to avoid these challenges to the use of f - actin solutions as a model for testing theories of polymer solution dynamics is to use alternative physical systems , such as the fd - virus that they studied , as sound models for testing theories of polymer solution dynamics . from the published literature , in small - molecule liquidstrue microrheology and classical measurements give the same results . in many thoughnot all polymer solutions , true microrheology and classical rheometry do not agree , the viscosity measured with true microrheology generally being smaller than the viscosity determined by a classical , macroscopic rheometer .particle tracking gives direct access to correlation functions .these functions in turn give access to the memory functions for the langevin - equation random force in ways that light scattering spectroscopy does not .bandwidth for video tracking presently limits that technique , but bandwidth improvements are a matter of time , and alternative paths to determining displacements exist .true gels are not solutions . however , some models of polymer solution dynamics invoke analogies with polymer motion through gels , at least on favorable time scales , so it is clearly worthwhile to compare probe motion through polymer solutions with polymer motion through true gels .much work has focused on ( 1 ) cross - linked polyacrylamide gels and ( 2 ) cross - linked actin and other protein gels , but there are also results on ( 3 ) probes in the complex interior of living cells and other gelling systems .allain , et al. studied the diffusion of 0.176 m diameter polystyrene spheres through solutions of acrylamide / n , n-methylene bisacrylamide during irreversible gel formation .qelss was used to measure the relaxation spectra of the probes , which dominated the scattering by their host solutions .the probe diffusion time was obtained from a second order cumulant fit to the spectra . during the gelation process , of the probes increased 120-fold with increasing duration of the gelation process . over the same time interval ,the macroscopic solution viscosity increased only 12-fold , so the probe diffusion time was not simply proportional to the macroscopic viscosity .matsukawa and ando used pfg nmr to study polyethylene glycol and water diffusing through fully swollen cross - linked polyacrylamide gels .peg molecular weights were 4.25 , 10.89 , and 20 kda . was taken to be proportional to , where is a gel length scale that could be varied by changing the degree of swelling .nishio , et al. used qelss to monitor the diffusion of polystyrene latex spheres in polyacrylamide : water .probe radii were 25 and 50 nm .probes were added to the solutions prior to adding the ammonium persulphonate polymerization initiator .the fraction concentration of bisacrylamide crosslinker was varied from zero ( leading to linear polyacrylamide solutions ) to 5% ( leading to a strong gel ) . 
under fixed optical conditions , the extent to which the correlation function decayed at long times decreased with increasing crosslinker concentration and with decreasing scattering angle , indicating that above 1.6% bisacrylamide more and more particles are confined , especially over longer distances .nishio , et al .make an inversion of their data to determine the distribution of effective pore lengths for different probe radii ( and , implicitly , pore diameters ) showing that the distribution is quite wide .park , et al. used holographic relaxation spectroscopy to measure of a dye and a labelled protein through polyacrylamide gels as a function of polyacrylamide concentration . the holographic method measures diffusion over distances orders of magnitude larger than any structure in the gel .qelss was used to infer a nominal correlation length for the gels from their apparent diffusion coefficient via where is boltzmann s constant , is the absolute temperature , and is the solvent viscosity . on comparison with literature data on diffusion by d , sucrose , and urea through the same medium , park ,et al find from their measurements that the probe diffusion coefficient depends on and probe radius via for and .that is , park , et al . found that real polyacrylamide gels are size filters that selectively retard the diffusion of larger probe particles .park , et al . also find from qelss measurements on gels that , leading them to note that is to good approximation a function of the single variable .reina , et al. used qelss to measure the diffusion of 25 and 50 nm radius psl probe particles in polyacrylamide solutions and cross - linked polyacrylamide gels over a range of polymer concentrations and scattering angles . in simple solutions , the probe spectrum is close to a single exponential , whose decay rate falls with increasing polymer until the gel threshold is reached . above the threshold ,a second slow mode appears ; the relaxation rates of both modes then increase with increasing polymer concentration . at the threshold ,particle trapping becomes evident from the lack of complete relaxation of the scattering spectrum . above the threshold ,a complex -dependence of the spectrum determined by the probe diameter and the gel mesh spacing is observed .suzuki and nishio examined 60 nm radius psl spheres in polyacrylamide gels , determining the extent to which the spectrum decays toward zero as , and the dependence of the spectrum on monomer concentration and scattering angle .spectra were polymodal .qualitative properties of spectra were interpreted from physical models .fadda , et al. used qelss and static light scattering to monitor the diffusion of 225 nm radius polystyrene latex probes through gelatin solutions during the gelation process .static light scattering determined the particle radius .the presence of a deep first minimum in showed that the particles were highly monodisperse .the spectrum of the probes was monitored as a function of time after quenching from high to low temperature .the spectrum was approximated as a pure exponential at small and a stretched exponential in at large . 
when gelation set in, the short-time decay became faster, the long-time decay became very long, and a normalized measure of the spectral intensity fell. comparison was made with particular models of gel formation.

[figure: diffusion of ficoll probes through solutions of overlapped f-actin chains at actin concentrations 1, 3, 5, 8, and 12 g/l, and a fit to a single joint stretched exponential in probe radius and actin concentration.]

hou, et al. used probe diffusion measured with frap to examine several physical models of the intracellular medium. while simpler model systems containing either globular particles or long chains did not reproduce the behavior of probes moving _in vivo_, probes diffusing through a mixture of globular and long-chain proteins did show most physical properties seen with _in vivo_ probe diffusion studies. the probe particles were a series of fluorescently-tagged, size-fractionated ficolls. the background matrix included concentrated globular particles (ficolls or bovine serum albumin (bsa)) and/or heavily overlapped f-actin filaments. the experimental focus was studying nine probes in each of a few solutions, so concentration dependences of $D_p$ were examined only imprecisely. probes diffusing through solutions of globular particles showed a $D_p$ that was only weakly dependent on probe radius. for probes in these solutions, $D_p$ was always larger than the stokes-einstein prediction based on the macroscopic viscosity, the discrepancy increasing with increasing solution viscosity. in contrast, $D_p$ of probes diffusing through f-actin solutions falls markedly with increasing probe radius, but extrapolates to the unretarded $D_0$ as the probe radius goes to zero. figure [figurehou1990adp2] shows the nine probes diffusing through heavily overlapped actin matrix solutions. a joint fit of all measurements to a single stretched exponential in probe radius and actin concentration describes the data, as shown. hou, et al. compared their results with findings of luby-phelps, et al. on probes in the cytoplasm of living cells. in living cells, $D_p$ depends strongly on probe radius but extrapolates to a value well below $D_0$ as the probe radius goes to zero. in contrast to results on _in vivo_ probes, probes in simple ficoll and bovine serum albumin solutions have a $D_p$ that is substantially independent of probe radius. probes in f-actin solutions have $D_p$ approaching $D_0$ in the limit of small probe radius. however, probes in a mixture of f-actin and concentrated (7-10%) ficoll or concentrated (7%) bsa show the same properties as probes in cytoplasm, showing that probe diffusion in cytoplasm is plausibly governed by the simultaneous presence of a network phase and concentrated globular macromolecules. madonia, et al. prepared nongelling and gelling hemoglobin solutions and used qelss to watch the diffusion through them of 43 and 250 nm radius probe particles. at the onset of gelation, $D_p$ of the 250 nm probes falls rapidly. under identical experimental conditions, as gelation proceeds $D_p$ of the 43 nm spheres increases slightly. the gel structure functions as a size filter, trapping the larger particles but not obstructing significantly the motions of the smaller particles. newman, et al.
used qelss to monitor the diffusion of polystyrene latex spheres through f - actin solutions .four probe species with radii of 55 , 110 , 270 , and 470 nm were employed .actins were polymerized with mgcl , kcl , or cacl ; however , all samples studied were liquids , not gels .prior to polymerization , in the actin solutions was not significantly different from of the same probe in pure water .on addition of salt to start the polymerization process , of the latex spheres began to fall , and the second cumulant of the probe spectra began to increase , showing that the distribution of probe displacements is no longer a gaussian .changes in spectral parameters stop after several hours . was determined for all probes , actin concentrations m , and several scattering angles . in solutions polymerized with mg or ca followed a stretched exponential for and .in contrast , for probes in solutions polymerized with k , one finds .newman , et al. used qelss to examine the diffusion of psl spheres ( radii 55 , 105 , and 265 nm ) through solutions of polymerized actin .the actin concentration was fixed at 0.65g / l throughout the experiments .addition of gelsolin , a protein that interacts with actin to shorten the actin filaments , was used to determine the effect of actin filament length on probe diffusion at constant total actin concentration .analysis of qelss spectra indicated that if little gelsolin was present the actin network trapped many probe particles , at least over long distance scales .trapping was more effective for the larger probe particles .on shortening the actin filament length by adding gelsolin , the probe diffusion coefficient increased and the trapping vanished .the inferred microviscosity fell from 5 - 20 times that of water when actin filaments were very long ( no added gelsolin ) down to twice that of water at the largest gelsolin concentration examined .after addition of gelsolin , changing the probe radius five - fold had little effect on the inferred microviscosity .addition of gelsolin changed modestly the probe spectrum line shape : the second spectral cumulant was large at very low gelsolin concentration , when some particles were trapped , but dropped a great deal when all probes were free to diffuse .schmidt , et al. , as part of an extremely systematic study of actin dynamics , examined the diffusion of 35 , 50 , and 125 nm radius latex spheres through fully polymerized actin networks .the actin networks were also characterized using qelss , frap , and electron microscopy .sphere diffusion through the networks followed with prefactor , and exponents and .for actin concentrations mg / ml , 125 nm radius spheres were retarded by up to sixfold , while the concentration dependence of of the smaller spheres was weak . overthe range of radii studied , namely comparable with the mesh spacing seen in an electron micrograph , heavily interlaced networks of actin molecules act as size filters , selectively retarding the motion of the larger probe particles .stewart , et al. used holographic relaxation spectroscopy ( hrs ) to monitor the diffusion of dye and tagged proteins through fibrin gels . in an hrs experiment , a pair of crossed laser beams are used to bleach a holographic pattern in solution , creating an index of refraction grating formed from unbleached molecules , and perhaps separate gratings formed from bleached , photochromically modified , or photochemically bound molecules .stewart , et al . 
found that small dye molecules diffused through fibrin gels as if they were in pure water .the diffusion of labelled bsa molecules was retarded by the gel , though less so if the gel was formed in the presence of ca , while in contrast the photobleaching reaction caused some labelled immunoglobulin g molecules to bind to the fibrin gel and become immobile .wong , et al. study diffusion of probe particles in f - actin networks , using video tracking microscopy to observe particle motion .mean - square displacements were measured directly .they were not diffusive : was not .examination of the motions of individual particles showed that particle dynamics were heterogenous .while the motion of some particles showed complete trapping , the particle never moving far from its starting point , the motions of other particles showed only partial trapping : particles alternately were trapped within restricted regions and made rare saltatory jumps to other trapping regions .diffusion by 250 , 320 , and 500 nm radius polystyrene spheres was compared with the diffusive exponent of for networks at a series of f - actin concentrations , showing that is a universal function of , with probe radius and network mesh size .arrio - dupont used a fluorescence recovery technique to measure the diffusion of a series of fluorescently - labeled proteins through the interior of cultured muscle cells .probe radii extended from 1.3 to 7.2 nm .cellular interiors are size filters , selective obstructing the diffusion of the larger probes .the retardation of the diffusion of larger globular proteins by the cell interiors is substantially more extensive than the retardation of similarly large dextrans by the same interiors , perhaps because dextrans are flexible and globular proteins are inflexible .kao , et al. used fluorescence recovery after photobleaching and picosecond resolution fluorescence polarization measurements to determine translational and rotational diffusion coefficients and fractional fluorescence recovery levels of fluorescein derivatives in swiss 3t3 fibroblasts and viscous solutions .careful analysis permitted separate determination of the fluid - phase viscosity , the level of probe binding , and the inhibition of probe diffusion due to the cell volume occupied by the relatively immobile cytomatrix .the combination of these effects accounts for the retardation of small - molecule solutes by cellular interiors .luby - phelps and collaborators used frap to study the diffusion of a series of fractionated ficolls , both in the cytoplasm of 3t3 cells and in concentrated ( 10 - 26% ) protein solutions .ficoll fractions had hydrodynamic diameters 60 - 250 . of ficolls in protein solutions was independent of the size of the ficoll molecules .that is , protein solutions are not size filters ; they retard to approximately the same fractional extent the diffusion of small and large probe particles .in contrast , cell cytoplasm is a size filter . of ficolls in cytoplasm falls sixfold as the ficoll diameter is increased from 60 to 500 . for the largest ficoll , frap finds that a third of the ficoll particles are trapped by cytoplasm ; only 2/3 of the particles are able to diffuse through substantial distances .luby - phelps , et al .propose that objects larger than 500 or so are mechanically prevented from diffusing through cell cytoplasms .luby - phelps , et al. 
describe a fluorometric method using two homologous dyes for determination of the local solvent viscosity , and apply the technique to determine the viscosity of water in the cytoplasm of a cell .the fluorometric behavior of one dye ( but not the other ) is sensitive to the local solvent mobility , while both dyes are approximately equally responsive to other environmental influences .the dyes were bound to ficoll for microinjection purposes .it was found that the solvent viscosity is not greatly different from the viscosity of bulk water , resolving a long and well - known literature dispute .seksek , et al. studied the diffusion of labelled dextrans and ficolls within fibroblasts and epithelial cells using frap .dextrans had molecular weights 4 , 10 , 20 , 40 , 70 , 150 , 580 , and 200 kda .ficolls were size - fractionated , four fractions being used for further studies .probes were microinjected into cells . with large probes ,the cytoplasm and nucleus can be studied separately .small probes pass through the nuclear membrane , so nucleus and cytoplasm had to be studied simultaneously . in mdck cells , did not depend on the size of the probes .seksek found that under some conditions processes other than translational diffusion could lead to photobleaching recovery .probe diffusion measurements are often said to be related to the phenomena that control the release of medical drugs from semirigid gels .shenoy and rosenblatt provide an example in which release times are measured directly .they examined succinylated collagen and hyaluronic acid matrices , separately and in mixtures , using bovine serum albumin and dextran as the probes whose diffusion was to be measured .this section notes a series of papers , sometimes linked under the cognomen _ microrheology _ , that rely on the assertion that the incoherent structure factor for the diffusion of dilute probes through a viscoelastic matrix may in general be rewritten via , the so - called gaussian assumption. as noted above , this assumption is incorrect except in a certain very special case .the special case in which this assertion is correct is the case in which the incoherent structure factor is characterized by a single pure exponential , so that is linear in .interpretations based on the gaussian assumption , when applied to non - exponential spectra , are therefore highly suspect .bellour , et al. used dws to measure probe diffusion of 0.27 , 0.5 , 1 , and 1.5 m radius polystyrene spheres in solutions of cetyltrimethylammonium ( cta ) bromide and sodium hexane sulphonate , in some cases after ion exchange removal of the br .cta salts in aqueous solutions form giant extended flexible micelles . at elevated surfactant concentrations , the micelles are nondilute and potentially entwine .however , unlike regular polymers , the micelles have a lifetime for a mid - length scission process that effectively permits micelles to pass through each other .at elevated surfactant concentrations , the dws spectrum had a bimodal relaxation form .dasgupta , et al. used dws and qelss to study polystyrene spheres diffusing through 200 and 900 kda polyethylene oxide solutions . 
a wide range of polymer concentrations , was examined , taking of the two polymers to be 0.48 and 0.16 wt% , respectively .polystyrene probes had radii 230 , 320 , 325 , 485 , and 1000 nm , the psl being carboxylate - modified except for the 325 nm probes , which were sulphate modified .qelss spectra were interpreted by invoking the gaussian assumption , even though the nominal was clearly not linear in .qelss spectra were truncated and only reported for times longer than 10 ms .gisler and weitz used diffusing wave spectroscopy to examine the motion of polystyrene spheres through f - actin solutions .the gaussian assumption was invoked to convert dws spectra to nominal mean - square particle displacements .the inferred displacements were then used to infer viscoelastic properties of f - actin gels .heinemann , et al. used diffusing wave spectroscopy to examine 720 nm polystyrene latex spheres moving through aqueous solutions of potato starch , using -dodecalactone to induced aggregation of the starch molecules .dws spectra were inverted by invoking the gaussian approximation .inverted spectra were used to generate nominal values for the storage and loss moduli as functions of frequency .kao , et al. used diffusing wave spectroscopy , in a specialized instrument in which the digital correlator is replaced by a michelson interferometer , to examine the diffusion of 103 - 230 nm radius colloidal spheres over the first 20 ns of their displacements . the gaussian approximation for particle displacements in quasielastic scatteringwas invoked .kaplan , et al. use dws to study structure formation in alkyl ketene dimer emulsions .spectra were interpreted by invoking the gaussian assumption .these systems are used in paper manufacture as sizing agents , but lead to technical difficulties if they gel .the scattering particles were intrinsically present in the system .the time evolution of spectra over a period of weeks was observed . within a few days , systems that were going to gelcould readily be distinguished from those that would remain stable .it is important to emphasize that even if a measurable is not simply related to underlying physical properties , the measurable may still represent a valuable practical analytical tool for industrial purposes .knaebel , et al. used dws , static light scattering , small - angle neutron scattering , and mechanical rheometry to characterize an alkali - soluble emulsion system .these solutions exhibit shear thinning and thixotropic behavior .the low - shear viscosity increases dramatically with increasing concentration , up to a limiting concentration beyond which increases more slowly with increasing .the dws measurements were analyzed by invoking the gaussian assumption .lu and solomon use dws to measure the diffusion of 0.2 to 2.2 m polystyrene spheres and 0.25 m colloidal silica beads in solutions of hydrophobically - modified ethoxylated urethane , which is an associating polymer , at polymer concentrations 0 - 4 wt% .mechanical rheology measurements were made on the same systems .dws measurements were analyzed using the gaussian displacement assumption . 
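for reference, the assumption at issue throughout this subsection is conventionally written as
\[
g^{(1)s}(q,t) = \exp\!\left( - \frac{q^{2} \langle (\Delta r(t))^{2} \rangle}{6} \right) ,
\]
relating the single-particle (incoherent) field correlation function to the mean-square displacement; this is the standard statement of the gaussian assumption rather than a formula taken from any one of the cited papers. the relation is exact only when the displacement distribution is gaussian at every time, for example when the spectrum is a single exponential with $\langle (\Delta r(t))^{2} \rangle = 6 D_p t$.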
from the reported nominal ,the underlying single - scattering spectrum was not a pure exponential .mason and weitz applied dws to study the diffusion of 420 nm polystyrene spheres , at large concentration in ethylene glycol ( ) , at lower concentration ( ) , and in a 15 wt % solution of 4 mda polyethylene oxide .the gaussian assumption was used to convert to a mean square particle displacement and thence to a time - dependent diffusion coefficient. a generalized stokes - einstein equation translated the diffusion coefficient into microscopic storage and loss moduli .narita , et al. used dws and qelss to study probe diffusion in solutions of 95 kda polyvinylalcohol ( pva ) and crosslinked pva gels .the probes were 107 and 535 nm psl spheres .qelss spectra were visibly bimodal at all polymer concentrations studied .the relaxation time of the fast qelss mode was diffusive .the slow mode showed a stronger - than- dependence of its linewidth .dws spectra lacked visible long - time shoulders , and were interpreted using the gaussian assumption .nisato , et al. use qelss and dws to examine 85 and 107 nm polystyrene latex probe spheres in polyacrylic acid gels that had been chemically - crosslinked by addition of methylene bis - acrylamide .dws spectra were reported for forward and backward scattering geometries , nominally giving determinations of motion on two different length scales .qelss spectra had a long - time limit that was far above the baseline calculated from the mean scattering intensity .the gaussian assumption was used to infer from the long - time limit of the dynamic structure factor a long - time limit of the mean displacement ; the assumption was also applied to interpret dws spectra .palmer , et al. used dws to study the motions of 480 nm radius latex spheres in actin and actin:-actinin solutions .they reported spectra , nominal mean - square displacements , time - dependent diffusion coefficients , and magnitude of the complex modulus as inferred from a generalized stokes - einstein equation .a gaussian assumption was used to interpret spectra .palmer , et al. used dws to examine the diffusion of 480 nm radius polystyrene beads through f - actin , concentrations 0.42 - 6.89the gaussian assumption was invoked to analyze spectra in terms of mean - square displacements and inferred frequency dependent viscoelastic moduli .the authors also measured and with a mechanical rheometer , and compare with the dws data .pine , et al. presented the first experimental demonstration of dws as applied to complex fluids , based on the earlier study of maret and wolf on dynamic light scattering in the multiple - scattering limit .the gaussian assumption was implicit in their use of the earlier work of maret and wolf .dws was first applied to a 0.01 volume fraction solution of 497 nm diameter psl spheres diffusing freely in water , showing excellent agreement between the measured spectrum and the predicted theoretical form for the dws spectrum , if the photon transport mean free path was used as a fitting parameter .pine , et al ., report that from their dynamic measurements was consistently smaller than inferred from static backscattering measurements .comparison was made with mixtures of 312 and 497 nm diameter spheres ( volume fractions 0.01 , 0.04 , respectively ) that had been deionized to produce a colloidal glass .the dws spectrum changed markedly on formation of the glass .popescu , et al. 
present a novel qelss apparatus , based on their earlier analysis , that uses extremely - short - coherence - length visible light in its measurements . in popescu , et al.s method , light from a superluminescent diode with a coherence length of 30 m was sent perpendicular to the window surface into a scattering volume .light , backscattered from particles that are located within a few coherence lengths of the surface , combines coherently with light back - reflected by the window , allowing measurement in heterodyne mode of the qelss spectrum , even of highly scattering samples .analysis invokes the gaussian assumption for times much shorter than the decaying time of the autocorrelation function. sohn , et al. present a theoretical analysis of popescu , et al. s microvolume qelss apparatus , based on invoking the gaussian assumption .the possibility of working directly in frequency domain , and describing the spectrum as a sum of lorentzians rather than a sum of exponentials , is examined .rojas - ochoa , et al. make a systematic study of dws spectra of monodisperse interacting hard sphere systems .small - angle neutron scattering was used to determine the sphere size and the static structure factor , which was significantly perturbed by interparticle interactions .the photon transport mean free path was determined directly by measuring the optical transmittance of samples of various thicknesses .the hydrodynamic radius of the spheres was obtained via qelss applied to dilute samples .dws was used to measure particle motions over a few to a few hundred microseconds .the solvent was newtonian . under these conditions , which would not arise in a viscoelastic matrix solution , the gaussian assumption is applicable .comparison of this no - free - parameter determination of with predictions of orthodox theory for the concentration dependence of found excellent agreement .thus , under the very restrictive conditions under which the theoretical models for and diffusing wave spectroscopy are applicable , the model gives good results .romer , et al. used dws to study gel formation in colloid preparations .gelation was induced by varying the solution ionic strength , using enzymatic degradation of a neutral organic compound to create a slow change uniform across the sample in the ionic strength of the solution .the gaussian assumption was invoked to convert dws spectra to determinations of the mean - square displacement of individual particles , even though particles in gelling systems have long - time correlations in the forces on them , so that the nominal from dws for identical particles in a gelling system is not linear in time .romer , et al. present additional dws and classical mechanical rheology measurements on the same system , showing that dws and rheometric properties show dramatic changes after similar elapsed times , the elapsed time being characteristic of the system s sol - gel transition .rufener , et al. used dws to study concentrated g - actin that had been systematically polymerized into f - actin .the gaussian assumption was invoked to interpret spectra of polystyrene sphere probes .spectra of small probes decayed to the expected baseline .spectra of large probes decayed only part way to the expected baseline , leading to the inference that large but not small spheres were trapped by the gel network .van der gucht , et al. 
used qelss to study the diffusion of 125 and 250 nm radius modified silica particles in solutions of the self - assembling monomer bis(ethylhexylureido)toluene .they also measured the low - concentration viscosity , dynamic shear moduli , and static light scattering intensity .concentration - driven self - association gives a peculiar form : is nearly constant out to a crossover concentration / l monomer , and then increases as a power law in .qelss spectra were relatively unimodal at low polymer concentration , but gain a slow mode at larger .the concentration at which the slow mode appears is nearly an order of magnitude larger than .light scattering spectra were interpreted using the gaussian assumption even when spectra were clearly not single exponentials .van zanten , et al. used dws to examine 195 , 511 , 739 , 966 , and 1550 nm diameter psl spheres diffusing in 330 kda polyethylene oxide : water solutions for peo concentrations wt% , the gaussian assumption being invoked .a cone and plate viscometer was used to study transient creep and dynamic oscillatory responses .here we have examined the literature on the diffusion of probes through polymer solutions . nearly 200 probe size :polymer molecular weight combinations were examined at a range of polymer concentrations .there is a solid but not extremely extensive body of work on the temperature dependence of probe diffusion in polymer solutions .a half - dozen studies of probe rotational motion and more than a dozen reports based on particle tracking are noted , along with a half - dozen sets of true microrheological measurements , in which mesoscopic objects perform driven motion in polymer solutions .two dozen studies of diffusion in which spectra are interpreted using the `` gaussian approximation '' are identified .as itemized in tables i - v , probes have included polystyrene spheres , divinylbenzene - styrene spheres , silica spheres , tobacco mosaic virus , bovine serum albumin , ovalbumin , starburst dendrimers , fluorescein , hematite particles , unilamellar vesicles , micelles , ficols , pblg rods , and low - molecular weight dextrans .polystyrene spheres , ficols , and dextrans were used in the bulk of the probe measurements .matrix polymer chains include a wide range of water - soluble polymers .only limited data exist on probes in solutions of the organophilic polymers that form the staple for the remainder of the polymer literature .rotational motion measurements require the use of orientationally anisotropic objects .rodlike particles include tobacco mosaic virus , collagen , actin fragments , and pblg .optically anisotropic spheres , the anisotropy arising from oriented internal domains or magnetite inclusions , were examined by cush , et al. and sohn , et al. .results on probe diffusion fall into three phenomenological classes . in the first two classes ,diffusion is usefully characterized by a single relaxation time and hence a well - defined probe diffusion coefficient . 
in the third class ,light scattering spectra are more complex .the classes are : \(i ) systems in which decreases as is increased , with monotonically growing more negative with increasing .\(ii ) systems showing re - entrant concentration behavior , in which over some concentration range first increases with increasing and perhaps then decreases again .\(iii ) systems whose spectra have bimodal or trimodal relaxations , corresponding to the relaxation of probe concentration fluctuations via several competing modes .we begin with systems having a well - defined . in systems in which the solvent quality does not change strongly with temperature , scales with temperature as , being the ( temperature - dependent ) solvent viscosity .this result applies equally to probes in solutions containing no polymer , in which probe - solvent interactions govern diffusion , and to probes in concentrated polymer solutions , in which probe - polymer interactions however mediated must dominate the diffusion process .studies confirming this result include bremmell , et al. on probes in water : glycerol and phillies , et al. on probes in aqueous polyacrylic acids and aqueous dextrans .phillies and quinlan observed slight deviations from simple behavior for probes in water : dextran , which they interpret in terms of a temperature dependence of the solvent quality . because in general simply tracks the temperature dependence of , one infers that solvent - based hydrodynamic forces play a dominant role in probe diffusion .unfortunately , this inference says very little about the nature of polymer solution dynamics , because models for polymer solution dynamics largely agree on predicting an inverse dependence of relaxation rates on the solvent viscosity : changing the solvent viscosity changes the mobility of individual polymer subunits , the so - called _ monomer mobility _, thus changing the rate at which polymer chains can form or release hypothesized chain entanglements . changing the solvent viscosityequally changes the strength of probe - chain hydrodynamic interactions , thus changing the forces coupling the motion of a probe and of nearby chains .temperature dependence studies do rule out one entire class of approaches , namely approaches referring to reduction relative to a glass temperature .analyses of uniformly compare measurements made at fixed , and not at fixed . is claimed to depend very strongly on , so measurements at fixed and different are claimed to have very different .it was proposed that comparisons should be made at fixed , not fixed , and comparisons made at fixed are not valid . to determine , thereby allowing comparisons at fixed albeit different , phillies , et al. measured at fixed over a range of . after removing from the dependence of on , the remnant -dependence of have revealed how depends on .comparing results at fixed would eliminate any -dependence of arising from the -dependence of .phillies , et al. actually found that after removing from the dependence of on , there is no remnant -dependence of to interpret .correspondingly , either is independent of , or the notion reduction relative to the glass temperature is fundamentally inapplicable to probe diffusion in polymer solutions . 
in either case ,the data serve to reject the use of glass temperature corrections as a part of interpreting .changes in temperature may also change the solvent quality and hence the radius of each polymer coil .clomenil and phillies examined probes in hpc : water at different temperatures , including temperatures at which water is a good solvent for hpc and temperatures approaching the pseudotheta transition .corresponding to the change in solvent quality with temperature , the dependence of on polymer concentration is markedly temperature dependent .unfortunately , as detailed in ref . ,the observed dependence is equally consistent with most models of polymer solutions , so these results do not sort between different models of polymer dynamics .the relationship between and the solution viscosity was a primary motivation for early studies of probe diffusion . in some systems though not others , does not scale inversely as the solution viscosity measured macroscopically , so that depends strongly on polymer concentration and polymer molecular weight .in almost all systems showing deviations from stokes - einsteinian behavior , increases with increasing , sometimes by large factors .probe particles generally diffuse faster than expected from the solution viscosity . from the experimental standpoint , this sign for the discrepancy between and leads to a marked simplification , because the obvious experimental artifacts such as probe aggregation and polymer adsorption all lead to probes that diffuse too slowly , not too swiftly as actually observed . in simple fluids ,the probe diffusion coefficient follows the stokes - einstein equation with .if the stokes - einstein equation remained valid in polymer solutions , would accurately track the solution viscosity , so that measuring would simply be a replacement for classical rheological measurements .indeed , the early work of turner and hallet found that inferred from is very nearly equal to measured classically with a rotating drum viscometer .in contrast , a substantial motivation driving lin and phillies to extend their probe diffusion work was that in their systems was very definitely not equal to the macroscopic , even for very large probes .the differences between the works of turner and hallet , and of lin and phillies , apparently reflect differences between chemical systems .probes in small- polymer solutions generally show stokes - einsteinian behavior with , including 160 nm spheres in 101 - 445 kda pmma : chcl for up to 10 , 20 and 230 nm probes in aqueous 20 kda dextran , and 20 - 1500 nm spheres in aqueous 50 kda polyacrylic acid . on the other hand ,49 and 80 nm probes in aqueous 90 kda poly - l - lysine show small -independent deviations from stokes - einsteinian behavior .stokes - einsteinian behavior is also found in some large- systems .onyenemezu , et al. find stokes - einsteinian behavior within experimental accuracy for 1100 kda polystyrene solutions having as large as 100 .turner and hallett and phillies , et al. reveal in dextran solutions even with as large as 2 mda .in most non - dilute polymer solutions , non - stokes - einsteinian behavior is found , often leading at large to and .a particularly conspicuous example of non - stokes - einsteinian behavior appears in fig .[ figurelin1984adp1 ] , based on lin and phillies , involving 20 - 620 nm probes in aqueous 1mda polyacrylic acid . 
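the comparison that underlies this discussion can be phrased as a simple calculation; the following sketch uses placeholder numbers, not data from any cited study, and the computed ratio equals one when the stokes-einstein equation holds:

```python
# hypothetical illustration: testing stokes-einstein behavior by comparing the
# measured probe diffusion coefficient D_p with the value expected from the
# macroscopic solution viscosity.  all numbers below are placeholders.
import numpy as np

c = np.array([0.0, 2.0, 5.0, 10.0, 20.0, 40.0])                   # polymer concentration, g/L
Dp = np.array([2.2e-8, 1.9e-8, 1.5e-8, 1.0e-8, 5.5e-9, 2.0e-9])   # measured D_p, cm^2/s
eta = np.array([0.0089, 0.012, 0.020, 0.045, 0.15, 0.90])         # measured viscosity, Pa s

D0, eta_s = Dp[0], eta[0]                 # dilute-solution reference values
ratio = (Dp * eta) / (D0 * eta_s)         # 1.0 if the stokes-einstein equation holds
for ci, ri in zip(c, ratio):
    print(f"c = {ci:5.1f} g/L   D_p*eta / (D_0*eta_s) = {ri:5.2f}")
```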
much early work on the relationship between and referenced the pioneering study of langevin and rondelez on sedimentation in polymer solutions . langevin and rondelez propose that small sedimenting particles experience the solvent viscosity , but adequately large sedimenting particles experience a much larger drag proportional to the macroscopic solution viscosity . the distinction between small and large particles is determined by a ratio , being the probe radius and being a hypothesized characteristic correlation length `` ... dependent on concentration ... but independent of the molecular weight . '' langevin and rondelez cite unpublished results of degennes as predicting for the sedimentation coefficient . here is a molecular - weight - independent constant and is a scaling exponent . is the distance between the hypothesized entanglement points of the transient statistical lattice seen in the degennes model for polymer solutions . for small particles , this form predicts , while for large particles the exponential is dominated by , even for , so that is determined by the solution s macroscopic viscosity . the langevin - rondelez equation is a special case of general assertions that : polymer solutions have a longest distance scale , properties measured over distances necessarily reflect macroscopic system properties , and therefore of large ( ) probes must follow the stokes - einstein equation . if this proposal were correct , the polyacrylic acid : water systems in which the stokes - einstein equation fails badly would be in this regime , implying a surprising of hundreds of nanometers . langevin and rondelez report sedimentation experiments leading to , being consistent with limited measurements , and in the limit of low probe concentration . langevin and rondelez do not treat diffusion explicitly . the same drag coefficient determines and , leading to , implying that eq . [ eq : slangevindp ] should also govern probe diffusion . bu and russo examine the diffusion of extremely small probes ( 0.5 - 55 nm ) in hpc : water . in one of the few published explicit tests of eq . [ eq : slangevindp ] as applied to diffusion , bu and russo find the dependence of on to match langevin and rondelez s prediction . small probes diffuse much more rapidly than expected from ; for larger probes approaches . a result for is entirely inconsistent with the physical model invoked by langevin and rondelez in their _ ansatz _ for their equation . however , in large numbers of probe : polymer systems depends strongly on . it is at present unclear whether the strong -dependence of would disappear with sufficiently small probes , in which case eq . [ eq : slangevindp ] could be correct , or whether is determined by an equation similar to [ eq : slangevindp ] but in which depends on . as seen above , in almost but not quite every system whose relaxations can be characterized by a single diffusion coefficient , the concentration dependence of is described by a stretched exponential in polymer concentration . the fitting parameters and of the stretched exponentials are found in tables i - v . it is apparent that varies over a substantial range , and that varies over a considerably narrower range . whenever and were varied over a sufficient range to support a credible log - log plot of against ( e.g.
, figures [ figurefurukawa1991adp1]b , [ figurelin1984bdp6 ] , [ figurelin1984adp5 ] , and [ figurewon1994adp1 ] ) , the apparent slope decreases monotonically with increasing , precisely as expected for data that follow a stretched exponential in . if followed a power law in , a plot of against would reveal a straight line . it is possible on the aforementioned figures to draw straight lines tangent to the observed curves , but such straight lines manifestly are simply local tangents . furthermore , a log - log plot of a stretched exponential in , no matter its parameters , always shows a smooth curve , while on the same plot a power - law is a straight line . measurements that actually follow a power law in therefore can not be described accurately with a force - fitted stretched exponential , and vice versa . ( figure caption : parameters and from fits of of 20.4 ( ) and 230 ( ) nm spheres in dextran solutions , and of those solutions , to , against dextran , showing with decreasing from 1.0 toward 0.5 over the observed molecular weight range . ) ( figure caption : , the initial slope of against for 7 ( ) , 34 ( ) , and 95 ( ) nm radius polystyrene spheres in aqueous polystyrene sulfonate : 0.2 m nacl , and ( line ) the no - free - parameter prediction for in this system , after phillies , et al. ) relationships between , , and for probes in an extensive series of homologous polymers were studied by phillies , et al. ( psl in dextran : water ) and by phillies , et al. ( psl in polystyrenesulfonate : water ) . representative results on these systems appear as figures [ figurephillies1989bdp1 ] and [ figurephillies1997bdp2 ] , respectively . figure [ figurephillies1989bdp5 ] shows and for probes in dextran : water . over the observed molecular weight range , . with increasing , decreases from 1.0 to slightly more than 0.5 . the authors determined the polymer molecular weight distributions using aqueous size - exclusion chromatography ; measured polydispersities ranged from 1.16 to 2.17 . no effect of polydispersity on probe diffusion was identified . these results show that depends strongly on , and serve to reject any suggestion that polymer dynamics are strongly sensitive to the detailed shape of the polymer molecular weight distribution , as opposed to being determined by averaged molecular weights . phillies , et al. sought to determine the initial slope of for probes in a series of of homologous monodisperse polymers . the intent was to test the phillies - kirkitelos hydrodynamic calculation of . measurements were fit to straight lines , simple exponentials , and stretched exponentials in , which describe these data out to progressively larger . the phillies - kirkitelos calculation , which parallels hydrodynamic calculations of the concentration dependence of of hard spheres , has no free parameters . the predicted depends on probe radius , chain thickness , and . so long as , the calculated is very nearly insensitive to and . as seen in figure [ figurephillies1997bdp3 ] , for kda good agreement is found between the measurements and the no - free - parameter calculation . for kda , the experimental is too large , a deviation interpreted by phillies , et al. as appearing because short polystyrene sulfonate chains are not well - approximated as random coils .
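because much of the analysis above rests on fitting to a stretched exponential in concentration and on distinguishing that form from a power law on a log - log plot , a small fitting sketch may be useful . this is an illustrative sketch only : the synthetic data , parameter values , and variable names below are assumptions chosen for demonstration , not values taken from the cited experiments .

```python
import numpy as np
from scipy.optimize import curve_fit

def stretched_exponential(c, D0, alpha, nu):
    """D_p(c) = D0 * exp(-alpha * c**nu), the functional form used in the text."""
    return D0 * np.exp(-alpha * c**nu)

# Synthetic "measurements" with hypothetical parameters and a little noise.
rng = np.random.default_rng(0)
c = np.linspace(0.5, 50.0, 25)                          # g/L, illustrative concentrations
D_true = stretched_exponential(c, 2.0e-8, 0.30, 0.80)   # hypothetical parameter values
D_obs = D_true * (1.0 + 0.03 * rng.standard_normal(c.size))

# Weighted nonlinear least squares; p0 is a rough initial guess.
popt, pcov = curve_fit(stretched_exponential, c, D_obs,
                       p0=(D_obs[0], 0.1, 1.0),
                       sigma=0.03 * D_obs, absolute_sigma=True)
D0_fit, alpha_fit, nu_fit = popt
perr = np.sqrt(np.diag(pcov))
# The off-diagonal covariance exhibits the strong alpha-nu anticorrelation
# that is noted later in the text.
corr_alpha_nu = pcov[1, 2] / (perr[1] * perr[2])
print(f"D0={D0_fit:.3e}  alpha={alpha_fit:.3f}  nu={nu_fit:.3f}  "
      f"corr(alpha,nu)={corr_alpha_nu:+.2f}")

# Log-log diagnostic: a power law would give a constant slope, whereas a
# stretched exponential gives a local slope d ln D / d ln c = -alpha*nu*c**nu
# that grows steadily more negative with increasing concentration.
slope = np.gradient(np.log(D_obs), np.log(c))
print("local log-log slopes:", np.round(slope[::6], 2))
```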
( figure caption : and of as functions of , for mesoscopic probes in solutions of pmma ( ) , dextran ( ) ( ) , polyacrylic acid ( ) , polystyrene ( ) , polyisobutylene ( ) , and bovine serum albumin ( ) ( plotted based on its hydrodynamic radius ) . the lines are , based on probes in dextran : water , and based on fig. 65 of ref . ) accurate determination of and from non - linear least squares fits is challenging , because errors in these parameters are strongly anticorrelated . to determine and accurately , one needs accurate measurements of at quite small , not just large , concentrations , as well as over a wide range of . only a limited number of studies seem to attain these conditions . furthermore , the systematic review above reveals systems with multimodal spectra , systems in which exhibits re - entrant behavior , and some measurements on polyelectrolyte and rodlike polymer solutions , none of which appear exactly comparable to mesoscopic probes diffusing in solutions of near - random - coil neutral polymers . with these exclusions , figures [ figuredataprobedp1 ] exhibit and as functions of for probes in a range of polymer solutions . the solid line is taken from figure 65 , ref . [ figurephillies1989bdp5 ] ; it represents a best fit to the measurements on dextran solutions . with few exceptions , is seen to be closely correlated with the molecular weight of the matrix polymer over two orders of magnitude in its dependent and independent variables . from figure [ figuredataprobedp1 ] , one sees at lower polymer molecular weight , and more scattered values at larger . probes in polyisobutylene are noteworthy for retaining up to the largest examined . the solid line in figure [ figuredataprobedp1 ] corresponds to the dependence of found for the not entirely physically dissimilar polymer self - diffusion coefficient . as seen in figure 65 of ref . , from is at lower , declining to 0.5 by kda . and are thus seen to be correlated with matrix molecular weight , a correlation that must be predicted by a valid physical model for these parameters . a few systems show re - entrant behavior in which a relaxation rate increases rather than decreasing with increasing . most observations of reentrant behavior involve systems showing bimodal spectra . for example , bremmell , and dunstan and stokes report for low - ionic - strength solutions a slow probe mode that slows with increasing and a fast probe mode whose relaxation rate first increases and then decreases with increasing . with the same polymer and elevated ( 0.1 , 1.0 m ) ionic strengths , the relaxation rates of both modes increase and perhaps plateau with increasing . ullmann , et al. do report a probe in polyethylene oxide : water whose single relaxation rate has a true maximum with increasing . however , ullmann , et al. s data were from an old - style linear correlator , so their spectra might have been bimodal , the second peak passing unseen .
in a very few systems , an apparent low- plateau is observed , in which is nearly constant over a range of low concentrations and then begins to fall above some low - concentration limiting . good examples of this behavior are supplied by yang and jamieson for probes in hpc : water , and by papagiannapolis , et al. ( who find the plateau in rather than ) . the evidence above on the effect of probe size reduces to two general statements with a qualification . first , true gels , e.g. , polyacrylamide gels , protein gels , or cellular interiors , are indeed size filters , as witness the results of , e.g. , luby - phelps , et al. . true gels are substantially more effective at retarding the diffusion of larger probes than at retarding the diffusion of smaller probes . second , to first approximation polymer solutions are relatively weak size filters . as witness the results of hou , et al. , polymer solutions are only slightly more effective at retarding the diffusion of larger probes than at retarding the diffusion of smaller probes . however , the approximation that polymer solutions are weak size filters breaks down for very small probes , as studied by bu and russo . in a polymer solution , a very small probe may be considerably more mobile , relative to its size , than a large probe would be . it is legitimate to ask if rheology , and probe diffusion using sufficiently large albeit mesoscopic probes , should measure the same viscosity . it is noteworthy that the few true microrheological measurements on forced motion of micron - scale probes indicate that true microrheology and classical rheology do not always measure the same quantities . in simple liquids and in rigid - rod fd - virus solutions , macroscopic and true microscopic measurements of viscosity are consistently in good agreement . on the other hand , in low - molecular - weight 85 kda polyethylene oxide solutions , dynamic moduli measured with a 1.6 µm probe are quite different from those obtained by a macroscopic rheometer . in f - actin solutions , true microrheological moduli are substantially smaller than their macrorheological counterparts , typically by factors of 2 - 8 . only very limited true microrheological measurements exist for neutral random - coil polymers in good solvents , so generalization appears hazardous . because microscopic and classical macroscopic rheological instruments sometimes give different values for , there is no reason to expect that from will agree with from a classical macroscopic instrument , rather than agreeing with the very different from a microscopic instrument . phillies , brown , and zhou re - examine results of brown and collaborators on probe diffusion by silica spheres and tracer diffusion of polyisobutylene ( pib ) chains through pib : chloroform solutions . these comparisons are the most precise available in the literature , in the sense that all measurements were made in the same laboratory using exactly the same matrix polymer samples . comparisons were made between silica sphere probes and polymer chains having very nearly equal and in the absence of pib . for each probe sphere and probe chain , the concentration dependence of the single - particle diffusion coefficient is accurately described by a stretched exponential in .
for large probes ( 160 nm silica spheres , 4.9 mda pib ) in solutions of a small ( 610 kda ) pib , remains very nearly independent of as falls 100-fold from its dilute solution limit . in contrast , with small probes ( 32 nm silica spheres , 1.1 mda pib ) in solutions of a large polymer ( 4.9 mda pib ) , depends very strongly on . between 0 and 6 g / l matrix pib , falls nearly 300-fold , while over the same concentration range falls no more than 10-fold . was followed out to the considerably larger concentration at which it , too , had fallen 300-fold from its small - matrix - concentration limit . this result is somewhat surprising relative to some theories of polymer dynamics , because in non - dilute polymer solutions polymer chains are predicted to have available modes of motion , such as reptation , that are denied to rigid spheres . that is , in solutions of a very large ( 4.9 mda ) polymer , at concentrations large enough that was reduced 300-fold , the matrix polymer was far more effective at obstructing the motions of rigid probes than at obstructing the motion of long flexible chains . one could propose that at still larger the trend in reverses with increasing , but there is absolutely no sign of such a phenomenon in the published experiments . in the 4.9 mda system , was only taken out to 2 or 6 , so it could reasonably be argued that the observed phenomenology only refers to non - dilute non - entangled systems , and that a different phenomenology might be encountered in entangled polymer solutions . while multimodal probe spectra have been reported in several systems , only in hydroxypropylcellulose : water is there a highly systematic examination of the full range of experimental parameters . because only one system showing multimodal spectra has been studied systematically , one can not be certain to what extent one is examining specific chemical effects rather than a general phenomenology . a few specific points seem most worth repeating : probe and probe - free hpc solution spectra are both bimodal or trimodal , with modes on the same time scales . however , the probe and polymer modes are not the same : the concentration dependences of a probe mode and the corresponding polymer mode can be opposite in sign . a wide range of phenomena indicate that these solutions have a dominant characteristic length scale , being approximately the size of a polymer chain , and not some shorter distance . in particular , while small and large probes both show bimodal spectra , probes having and show very different concentration dependences for their spectral parameters . the observed characteristic length is approximately , the diameter of a random chain coil . depends weakly or not at all on polymer concentration . in contrast , the hypothesized characteristic length scale between chain entanglements is a distance , smaller than a polymer chain , that depends strongly on polymer concentration .
taken together these resultsserve to reject interpretations of the viscoelastic solutionlike : meltlike transition in this system as representing a transition to reptation dynamics .however , it is not certain if chain dynamics in hpc : water are representative of generic polymer dynamics , or if special chemical effects unique to hpc are important , so it appears premature to generalize these interpretations to all polymer solutions .the experimental literature on the motion of mesoscopic probe particles through polymer solutions is systematically reviewed .the primary focus is the study of diffusive motion of small probe particles .comparison is made with literature data on solution viscosities .a coherent description was obtained , namely that the probe diffusion coefficient generally depends on polymer concentration as .one finds that the scaling prefactor depends on polymer molecular weights as .the scaling exponent appears to have large- and small- values with a crossover linking them .the probe diffusion coefficient does not simply track the solution viscosity ; instead , typically increases markedly with increasing polymer concentration and molecular weight . in some systems , e.g. ,hydroxypropylcellulose : water , the observed probe spectra are bi- or tri - modal .extended analysis of the full probe phenomenology implies that hydroxypropylcellulose solutions are characterized by a single , concentration - independent , length scale that is approximately the size of a polymer coil . in a very few systems , one sees re - entrant or low - concentration - plateau behaviors of uncertain interpretation . from their rarity , these behaviors are reasonably interpreted as corresponding to specific chemical effects .true microrheological studies examining the motion of mesoscopic particles under the influence of a known external force are also examined .the viscosity determined with a true microrheological measurement is in many cases substantially smaller than the viscosity measured with a macroscopic instrument .g. d. j. phillies , g. s. ullmann , k. ullmann and t .- h .phenomenological scaling laws for semidilute macromolecule solutions light scattering by optical probe particles ._ j. chem ._ , * 82 * ( 1985 ) , 5242 - 5246 . g. d. j. phillies and k. a. streletzky .microrheology of complex fluids via observation of tracer microparticles ._ recent res ._ , * 5 * ( 2001 ) , 269 - 285. j. l. harden and v. viasnoff .recent advances in dws - based micro - rheology ._ curr . op .interf , sci ._ , * 6 * ( 2001 ) , 438 - 445 .a. mukhopaday and s. granick .micro- and nanorheology ._ curr . op .interf ._ , * 6 * ( 2001 ) , 423 - 429 . y. tseng , t. p. kole , s .- h .j. lee and d. wirtz .local dynamics and viscoelastic properties of cell biological systems . _ curr . op ._ , * 8 * ( 2002 ) , 210 - 217 .p. habdas and e. r. weeks .video microscopy of colloidal suspensions and colloidal crystals ._ curr . op ._ , * 7 * ( 2002 ) , 196 - 203 .george d. j. phillies .self and tracer diffusion of polymers in solution , arxiv : cond - mat/0403109 ( 3 march 2004 ) .b. j. berne and r. pecora ._ dynamic light scattering _( new york , ny : wiley , 1976 ) , especially chapter 5 .j. l. doob .the brownian movement and stochastic equations . _ annals math ._ , * 43 * ( 1942 ) , 351 - 369 . g. d. j. phillies. interpretation of light scattering spectra in terms of particle displacements , _j. chem ._ , * 122 * ( 2005 ) , 224905 1 - 8 .w. brown and r. 
rymden .interaction of carboxymethylcellulose with latex spheres studied by dynamic light scattering ._ macromolecules _ , * 20 * ( 1987 ) , 2867 - 2873 .w. brown and r. rymden .comparison of the translational diffusion of large spheres and high molecular weight coils in polymer solutions ._ macromolecules _ , * 21 * ( 1988 ) , 840 - 846 . z. bu and p. s. russo . diffusion of dextran in aqueous hydroxypropylcellulose ._ macromolecules _ , * 27 * ( 1994 ) , 1187 - 1194 .d. langevin and f. rondelez .sedimentation of large colloidal particles through semidilute polymer solutions ._ polymer _ , * 19 * ( 1978 ) , 875 - 882 .y. cheng , r. k. prudhomme and j. l. thomas .diffusion of mesoscopic probes in aqueous polymer solutions measured by fluorescence recovery after photobleaching ._ macromolecules _ , * 35 * ( 2002 ) , 8111 - 8121 .s. c. de smedt , a. lauwers , j. demeester , y. engelborghs , g. de mey and m. du . structural information on hyaluronic acid solutions as studied by probe diffusion experiments ._ macromolecules _ , * 27 * ( 1994 ) , 141 - 146 . s. c. de smedt , p. dekeyser , v. ribitsch , a. lauwers and j. demeester . viscoelastic and transient network properties of hyaluronic acid as a function of the concentration ._ biorheology _ , * 30 * ( 1994 ) , 31 - 42 .s. c. de smedt , t. k. l. meyvis , j. demeester , p. van oostveldt , j. c. g. blonk and w. e. hennink . diffusion of macromolecules in dextran methacrylate solutions and gels as studied by confocal scanning laser microscopy ._ macromolecules _ , * 30 * ( 1997 ) , 4863 - 4870. i. delfino , c. piccolo and m. lepore. experimental study of short- and long - time diffusion regimes of spherical particles in carboxymethylcellulose solutions .j. _ , * 41 * ( 2005 ) , 1772 - 1780 d. e. dunstan and j. stokes . diffusing probe measurements in polystyrene latex particles in polyelectrolyte solutions : deviations from stokes - einstein behavior . _macromolecules _ , * 33 * ( 2000 ) , 193 - 198 .r. furukawa , j. l. arauz - lara and b. r. ware .self - diffusion and probe diffusion in dilute and semidilute solutions of dextran ._ macromolecules _ , * 24 * ( 1991 ) , 599 - 605 . d. gold , c. onyenemezu and w. g. miller . effect of solvent quality on the diffusion of polystyrene latex spheres in solutions of poly(methylmethacrylate ) ._ macromolecules _ , * 29 * ( 1996 ) , 5700 - 5709 . c. konak , b. sedlacek and z. tuzar .diffusion of block copolymer micelles in solutions of a linear polymer ._ makromol ., rapid commun ._ , * 3 * ( 1982 ) , 91 - 94 . t .- h .lin and g. d. j. phillies .translational diffusion of a macroparticulate probe species in salt - free poly(acrylic ) acid : water ._ j. phys ._ , * 86 * ( 1982 ) , 4073 - 4077 . tlin and g. d. j. phillies .probe diffusion in poly(acrylic acid ) : water .effect of probe size ._ macromolecules _ , * 17 * ( 1984 ) , 1686 - 1691 .lin and g. d. j. phillies .probe diffusion in polyacrylic acid : water effect of polymer molecular weight ._ j. coll ._ , * 100 * ( 1984 ) , 82 - 95 . tdiffusion of tio particles through a poly(ethylene oxide ) melt ._ makromol ._ , * 187 * ( 1986 ) , 1189 - 1196 .o. a. nehme , p. johnson and a. m. donald .probe diffusion in poly - l - lysine solution ._ macromolecules _ , * 22 * ( 1989 ) , 4326 - 4333 . c. n. onyenemezu , d. gold , m. roman , and w. g. miller .diffusion of polystyrene latex spheres in linear polystyrene nonaqueous solutions ._ macromolecules _ , * 26 * ( 1993 ) , 3833 - 3837 .g. d. j. 
phillies .diffusion of bovine serum albumin in a neutral polymer solution ._ biopolymers _ , * 24 * ( 1985 ) , 379 - 386 .g. d. j. phillies , j. gong , l. li , a. rau , k. zhang , l .- p .yu and j. rollings .macroparticle diffusion in dextran solutions ._ j. phys.chem._ , * 93 * ( 1989 ) , 6219 - 6223 .g. d. j. phillies , t. pirnat , m. kiss , n. teasdale , d. maclung , h. inglefield , c. malone , l .- p .yu and j. rollings .probe diffusion in solutions of low - molecular - weight polyelectrolytes . _ macromolecules _ , * 22 * ( 1989 ) , 4068 - 4075 .g. d. j. phillies , c. malone , k. ullmann , g. s. ullmann , j. rollings and l .- p .probe diffusion in solutions of long - chain polyelectrolytes ._ macromolecules _ , * 20 * ( 1987 ) , 2280 - 2289 .g. d. j. phillies , m. lacroix and j. yambert , probe diffusion in sodium polystyrene sulfonate - water : experimental determination of sphere - chain binary hydrodynamic interactions ._ journal of physical chemistry _ ,* 101 * ( 1997 ) , 5124 - 5130 . g. d. j. phillies and p. c. kirkitelos , higher - order hydrodynamic interactions in the calculation of polymer transport properties , _ journal of polymer science b : polymer physics _, * 31 * ( 1993 ) , 1785 - 1797 .g. d. j. phillies , a. saleh , l. li , y. xu , d. rostcheck , m. cobb and t. tanaka . temperature dependence of probe diffusion in solutions of low - molecular - weight polyelectrolytes ._ macromolecules _ , * 24 * ( 1991 ) , 5299 - 5304 .g. d. j. phillies and c. a. quinlan .glass temperature effects effects in probe diffusion in dextran solutions ._ macromolecules _ , * 25 * ( 1992 ) , 3110 - 3316 . c. roberts , t. cosgrove , r. g. schmidt and g. v. gordon. diffusion of poly(dimethylsiloxane ) mixtures with silicate nanoparticles ._ macromolecules _ , * 34 * ( 2001 ) , 538 - 543 .m. shibayama , y. isaka and i. shiwa .dynamics of probe particles in polymer solutions and gels ._ macromolecules _ , * 32 * ( 1999 ) , 7086 - 7092. y. shiwa .hydrodynamic screening and diffusion in entangled polymer solutions ._ , * 58 * ( 1987 ) , 2102 - 2105 . g. s. ullmann and g. d. j. phillies. implications of the failure of the stokes - einstein relation for measurements with qelss of polymer adsorption by small particles . _macromolecules _ , * 16 * ( 1983 ) , 1947 - 1949 . g. s. ullmann , k. ullmann , r. m. lindner and g. d. j. phillies. probe diffusion of polystyrene latex spheres in poly(ethylene oxide):water ._ j. phys ._ * 89 * ( 1985 ) , 692 - 700 .k. ullmann , g. s. ullmann and g. d. j. phillies .optical probe study of a nonentangling macromolecule solution bovine serum albumin : water ._ j. coll ._ , * 105 * ( 1985 ) , 315 - 324 .m. r. wattenbarger , v. a. bloomfield , b. zu and p. s. russo .tracer diffusion of proteins in dna solutions ._ macromolecules _ , * 25 * ( 1992 ) , 5263 - 5265 .j. won , c. onyenemezu , w. g. miller and t. p. lodge .diffusion of spheres in entangled polymer solutions : a return to stokes - einstein behavior ._ macromolecules _ , * 27 * ( 1994 ) , 7389 - 7396 .t. p. lodge and l. m. wheeler .translational diffusion of linear and 3-arm - star polystyrenes in semidilute solutions of linear poly(vinylmethylether ) ._ macromolecules _ , * 19 * ( 1986 ) , 2983 - 2986 .t. p. lodge and p. markland .translational diffusion of 12-arm star polystyrenes in dilute and concentrated poly(vinyl methyl ether ) solutions ._ polymer _ , * 28 * ( 1987 ) , 1377 - 1384 .t. p. lodge , p. markland and l. m. wheeler . 
tracer diffusion of 3-arm and 12-arm star polystyrenes in dilute , semidilute , and concentrated poly(vinylmethylether ) solutions ._ macromolecules _ , * 22 * ( 1989 ) , 3409 - 3418 .l. m. wheeler , t. p. lodge , b. hanley and m. tirrell .translational diffusion of linear polystyrenes in dilute and semidilute solutions of poly(vinylmethylether ) ._ macromolecules _ , * 20 * ( 1987 ) , 1120 - 1129 .j. won and t. p. lodge .tracer diffusion of star - branched polystyrenes in poly(vinylmethylether ) gels ._ j. polym .phys . ed ._ , * 31 * ( 1993 ) , 1897 - 1907 .zhou pu and w. brown .translational diffusion of large silica spheres in semidilute polymer solutions ._ macromolecules _ , * 22 * ( 1989 ) , 890 - 896 .a. c. fernandez and g. d. j. phillies .temperature dependence of the diffusion coefficient of polystyrene latex spheres ._ biopolymers _ , * 22 * ( 1983 ) , 593 - 595 . g. d. j. phillies .translational diffusion coefficient of macroparticles in solvents of high viscosity ._ j. phys ._ , * 85 * ( 1981 ) , 2838 - 2843 . g. d. j. phillies and d. clomenil .lineshape and linewidth effects in optical probe studies of glass - forming liquids ._ j. phys.chem._ , * 96 * ( 1992 ) , 4196 - 4200 . g.d. j. phillies and c. a. quinlan .analytic structure of the solutionlike - meltlike transition in polymer solution dynamics . _ macromolecules _ , * 28 * ( 1995 ) , 160 - 164 .g. d. j. phillies .range of validity of the hydrodynamic scaling model ._ j. phys ._ , * 96 * ( 1992 ) , 10061 - 10066 .g. d. j. phillies .quantitative prediction of in the scaling law for self - diffusion ._ macromolecules _ , * 21 * ( 1988 ) , 3101 - 3106 .w. brown and r. rymden . diffusion of polystyrene latex spheres in polymer solutions studied by dynamic light scattering ._ macromolecules _ , * 19 * ( 1986 ) , 2942 - 2952 . t. yang and a. m. jamieson .diffusion of latex spheres through solutions of hydroxypropylcellose in water ._ j. coll ._ , * 126 * ( 1988 ) , 220 - 230 . p. s. russo , m. mustafa , t. cao and l. k. stephens .interactions between polystyrene latex spheres and a semiflexible polymer , hydroxypropylcellulose ._ j. coll ._ , * 122 * ( 1988 ) , 120 - 137 .m. mustafa and p. s. russo .nature and effects of nonexponential correlation functions in probe diffusion experiments by quasielastic light scattering ._ j. coll ._ , * 129 * ( 1989 ) , 240 - 253 . g. d. j. phillies and d. clomenil .probe diffusion in polymer solutions under and good conditions ._ macromolecules _ , * 26 * ( 1993 ) , 167 - 170 .a. r. altenberger , m. tirrell , and j. s. dahler , _ j. chem_ , * 84 * ( 1986 ) , 5122 . g. d. j. phillies .dynamics of polymers in concentrated solution : the universal scaling equation derived . _macromolecules _ , * 20 * ( 1987 ) , 558 - 564 .g. d. j. phillies and p. peczak .the ubiquity of stretched - exponential forms in polymer dynamics . _ macromolecules _ , * 21 * ( 1988 ) , 214 - 220 .g. d. j. phillies , c. richardson , c. a. quinlan and s. z. ren .transport in intermediate and high molecular weight hydroxypropylcellulose / water solutions ._ macromolecules _ , * 26 * ( 1993 ) , 6849 - 6858 .m. b. mustafa , d. l. tipton , m. d. barkley , p. s. russo . andf. d. blum .dye diffusion in isotropic and liquid crystalline aqueous ( hydroxypropyl)cellulose . _ macromolecules _ , * 26 * ( 1993 ) , 370 - 378 . k. l. ngai and g. d. j. phillies . coupling model analysis of polymer dynamics in solution : probe diffusion and viscosity ._ j. chem ._ , * 105 * ( 1996 ) , 8385 - 8397 . k. l. ngai . 
in_ disorder effects in relaxation processes _ ed .r. richert and a. blumen , ( berlin , germany : springer - verlag ( 1994 ) ) .g. d. j. phillies and m. lacroix .probe diffusion in hydroxypropylcellulose water : radius and line - shape effects in the solutionslike regime . _ j. phys .b _ , * 101 * ( 1997 ) , 39 - 47 . k. a. streletzky and g. d. j. phillies .translational diffusion of small and large mesoscopic probes in hydroxypropylcellulose - water in the solutionlike regime ._ j. chem ._ , * 108 * ( 1998 ) , 2975 - 2988 . k. a. streletzky and g. d. j. phillies .relaxational mode structure for optical probe diffusion in high molecular weight hydroxypropylcellulose ._ j. polym .b _ , * 36 * ( 1998 ) , 3087 - 3100 .k. a. streletzky and g. d. j. phillies .confirmation of the reality of the viscoelastic solutionlike - meltlike transition via optical probe diffusion ._ macromolecules _ , * 32 * ( 1999 ) , 145 - 152 . k. a. streletzky and g. d. j. phillies. coupling analysis of probe diffusion in high molecular weight hydroxypropylcellulose ._ j. phys .b _ , * 103 * ( 1999 ) , 1811 - 1820 .k. streletzky and g. d. j. phillies .optical probe study of solution - like and melt - like solutions of high molecular weight hydroxypropylcellulose . in _ scattering from polymers _b. s. hsiao , ( washington , d.c .2000 ) vol .739 , 297 - 316 .g. d. j. phillies , r. oconnell , p. whitford and k. a. streletzky .mode structure of diffusive transport in hydroxypropylcellulose : water . _j. chem ._ , * 119 * ( 2003 ) , 9903 - 9913 .r. oconnell , h. hanson , and g. d. j. phillies .neutral polymer slow mode and its rheological correlate ._ j. polym .b. polym ._ , * 43 * ( 2005 ) , 323 - 333 . s. a. kivelson , x. zhao , d. kivelson , t. m. fischer and c. m. knobler .frustration - limited clusters in liquids . _ j. chem ._ , * 101 * ( 1994 ) , 2391 - 2397 .b. camins and p.s .following polymer gelation by depolarized dynamic light scattering from optically and geometrically anisotropic latex particles ._ langmuir _ , * 10 * ( 1994 ) , 4053 - 4059 .z. cheng and t. g. mason .rotational diffusion microrheology ._ , * 90 * ( 2003 ) , 018304 1 - 4 .r. cush , d. dorman and p. s. russo . rotational and translational diffusion of tobacco mosaic virus in extended and globular polymer solutions ._ macromolecules _ , * 37 * ( 2004 ) , 9577 - 9584 .r. cush , p. s. russo , z. kucukyavuz , z. bu , d. neau , d. shih , s. kucukyavuz and h. ricks .rotational and translational diffusion of a rodlike virus in random coil polymer solutions ._ macromolecules _ , * 30 * ( 1997 ) , 4920 - 4926 .t. jamil and p. a. russo .interactions between colloidal poly(tetrafluoroethylene ) latex and sodium poly(styrenesulfonate ) . _ langmuir _ , * 14 * ( 1998 ) , 264 - 270 . g. h koenderink , s. sacanna , d. g. a. l. aarts and a. p. philipse. rotational and translational diffusion of fluorocarbon tracer spheres in semidilute xanthan solutions .e _ , * 69 * ( 2004 ) , 021804 1 - 12 . l. le goff , o. hallatschek , e. frey and f. amblard .tracer studies on f - actin fluctuations ._ , * 89 * ( 2002 ) , 258101 1 - 4 . j. g. phalakornkul , a. p. gast and r. pecora .rotational dynamics of rodlike polymers in a rod / sphere mixture ._ j. chem ._ , * 112 * ( 2000 ) , 6487 - 6494 .d. sohn , p. s. russo , a. davila , d. s. poche and m. l. mclaughlin .light scattering study of magnetic latex particles and their interaction with polyelectrolytes ._ j. coll ._ , * 177 * ( 1996 ) , 31 - 44. j. apgar , y. tseng , e. federov , m. b. herwig , s. c. almo and d. 
wirtz .multiple - particle tracking measurements of heterogeneities in solutions of actin filaments and actin bundles ._ biophys .j. _ , * 79 * ( 2000 ) , 1095 - 1106 .d. t. chen , e. r. weeks , j. c. crocker , m. f. islam , r. verna , j. gruber , a. j. levine , t. c. lubensky and a. g. yodh .rheological microscopy : local mechanical properties from microrheology ._ , * 90 * ( 2003 ) , 108301 1 - 4 .a. j. levine and t. c. lubensky .two - point microrheology and the electrostatic analogy .e _ , * 65 * ( 2001 ) , 011501 1 - 13 .j. c. crocker , m. t. valentine , e. r. weeks , t. gisler , p. d. kaplan , a. g. yodh and d. a. weitz .two - point microrheology of inhomogeneous soft materials ._ , * 85 * ( 2000 ) , 888 - 891 . m. a. dichtl and e. sackmann. colloidal probe study of short time local and long time reptational motion of semiflexible macromolecules in entangled networks ._ new j. physics _ , * 1 * ( 1999 ) , 18.1 - 18.11. m. l. gardel , m. t. valentine , j. c. crocker , a. r. bausch and d. a. weitz .microrheology of entangled f - actin solutions ._ , * 91 * ( 2003 ) , 158302 1 - 4 .f. gittes , b. schnurr , p. d. olmstead , f. c. mackintosh and c. f. schmidt .microscopic viscoelasticity : shear moduli of soft materials determined from thermal fluctuations ._ , * 79 * ( 1997 ) , 3286 - 3289 .a. goodman , y. tseng and d. wirtz .effect of length , topology , and concentration on the microviscosity and microheterogeneity of dna solutions ._ j. mol ._ , * 323 * ( 2002 ) , 199 - 215 .a. w. c. lau , b. d. hoffman , a. davies , j. c. crocker and t. c. lubensky .microrheology , stress fluctuations , and active behavior of living cells ._ , * 91 * ( 2003 ) , 198101 1 - 4 . t. g. mason , k. ganesan , j. h. van zanten , d. wirtz and s. c. kuo .particle tracking microrheology of complex fluids ._ , * 79 * ( 1997 ) , 3282 - 3285 .a. papagiannopolis , c. m. ferneyhough and t. a. waigh .the microrheology of polystyrene sulfonate combs in aqueous solution ._ j. chem ._ , * 123 * ( 2005 ) , 214904 1 - 10 . b. schnurr , f. gittes , f. c. mackintosh , and c. f. schmidt .determining microscopic viscoelasticity in flexible and semiflexible polymer networks from thermal fluctuations ._ macromolecules _ , * 30 * ( 1997 ) , 7781 - 7792 . y. tseng and d. wirtz .mechanics and multiple - particle tracking microheterogeneity of -actinin - cross - linked actin filament networks ._ biophys .j. _ , * 81 * ( 2001 ) , 1643 - 1656. m. t. valentine , z. e. perlman , m. l. gardel , j. h. shin , p. matsudaira , t. j. mitchison and d. a. weitz .colloid surface chemistry critically affects multiple particle tracking measurements of biomaterials ._ biophys .j. _ , * 86 * ( 2004 ) , 4004 - 4014 . j. xu , a. palmer and d. wirtz .rheology and microrheology of semiflexible polymer solutions : actin filament networks ._ macromolecules _ , * 31 * ( 1998 ) , 6486 - 6492. j. xu , v. viasnoff and d. wirtz .compliance of actin network filaments measured by particle - tracking microrheology and diffusing wave spectroscopy ._ rheol .acta _ , * 37 * ( 1998 ) , 387 - 398 . j. xu , y. tseng , c. j. carriere and d. wirtz . microheterogeneity and microrheology of wheat gliadin suspensions studied by multiple - particle tracking ._ biomacromolecules _ , * 3 * ( 2002 ) , 92 - 99 . f. amblard , a. c. maggs , b. yurke , a. n. pargellis and s. leibler . subdiffusion and anomalous local viscoelasticity in actin networks . _ phys. rev ._ , * 77 * ( 1996 ) , 4470 - 4473 . a. i. bishop , t. a. nieminen , n. r. heckenberg and h. rubinsztein - dunlop. 
optical microrheology using rotating laser - trapped particles ._ , * 92 * ( 2004 ) , 198104 1 - 4 . l. a. hough and h. d. ou - yang .a new probe for mechanical testing of nanostructures in soft materials. _ j. nanoparticle research _ * 1 * ( 1999 ) , 495 - 499 . m. keller , j. schilling and e. sackmann .oscillatory magnetic bead rheometer for complex fluid microrheometry ._ , * 72 * ( 2001 ) , 3626 - 3634 . f. g. schmidt , b. hinner , and e. sackmann .microrheometry underestimates the values of the viscoelastic moduli in measurements on f - actin solutions compared to macrorheometry .e , * 61 * ( 2000 ) , 5646 - 5653 .a. c. maggs , micro - bead mechanics with actin filaments .e _ , * 57 * ( 1998 ) , 2091 - 2094 . f. g. schmidt , b. hinner , e. sackmann and j. x. tang .viscoelastic properties of semiflexible filamentous bacteriophage fd .e _ , * 62 * ( 2000 ) , 5509 - 5517 . c. allain , m. drifford and b. gauthier - manuel . diffusion of calibrated particles during the formation of a gel ._ polymer communications _ , * 27 * ( 1986 ) , 177 - 180 . s. matsukawa and i. ando . a study of self - diffusion of molecules in polymer gel by pulsed - field - gradient nmr ._ macromolecules _ , * 29 * ( 1996 ) , 7136 - 7140 . i. nishio , j. c. reina and r. bansil .quasielastic light scattering study of the movement of particles in gels .59 * ( 1987 ) , 684 - 687 . i. h. park , c. s. johnson , jr . , and d. a. gabriel. probe diffusion in polyacrylamide gels as observed by means of holographic relaxation methods : search for a universal equation ._ macromolecules _ , * 23 * ( 1990 ) , 1548 - 1553 .j. c. reina , r. bansil , and c. konak .dynamics of probe particles in polymer solutions and gels ._ polymer _ , * 31 * ( 1990 ) , 1038 - 1044 .y. suzuki and i. nishio . quasielastic - light - scattering study of the movement of particles in gels : topological structure of pores in gels .b _ , * 45 * ( 1992 ) , 4614 - 4619 .g. c. fadda , d. lairez and j. pelta .critical behavior of gelation probed by the dynamics of latex spheres .e _ , * 63 * ( 2001 ) , 061405 1 - 9 . l. hou , f. lanni and k. luby - phelps .tracer diffusion in f - actin and ficoll mixtures . toward a model for cytoplasm ._ biophys .j. _ , * 58 * ( 1990 ) , 31 - 43 .k. luby - phelps , d. l. taylor and f. lanni .probing the structure of cytoplasm ._ j. cell ._ , * 102 * ( 1986 ) , 2015 - 2022 .k. luby - phelps , p. e. castle , d. l. taylor and f. lanni . hindered diffusion of inert tracer particles in the cytoplasm of mouse 3t3 cells ._ , * 84 * ( 1987 ) , 4910 - 4913 . f. madonia , p. l. san biagio , m. u. palma , g. schiliro , s. musumeci and g. russo .photon scattering as a probe of microviscosity and channel size in gels such as sickle haemoglobin ._ nature _ , * 302 * ( 1983 ) , 412 - 415 . j. newman , n. mroczka and k. l. schick .dynamic light scattering measurements of the diffusion of probes in filamentatious actin solutions ._ biopolymers _ , * 28 * ( 1989 ) , 655 - 666 . j. newman , g. gukelberger , k. l. schick and k. s. zaner . probe diffusion in solutions of filamentatious actin formed in the presence of gelsolin ._ biopolymers _ , * 31 * ( 1991 ) , 1265 - 1271 . c. f. schmidt , m. baermann , g. isenberg , and e. sackmann .chain dynamics , mesh size , and diffusive transport in networks of polymerized actin . a quasielastic light scattering and microfluorescence study ._ macromolecules _ , * 22 * ( 1989 ) , 3638 - 3649 .u. a. stewart , m. s. bradley , c. s. johnson , jr . , and d. a. 
gabriel .transport of probe molecules through fibrin gels as observed by means of holographic relaxation methods ._ biopolymers _ , * 27 * ( 1988 ) , 173 - 185 . i. m. wong , m. l. gardel , d. r. reichman , e. r. weeks , m. t. valentine , a. r. bausch and d. a. weitz .anomalous diffusion probes microstructure dynamics of entangled f - actin networks ._ , * 92 * ( 2004 ) , 178101 1 - 4 . m. arrio - dupont , g. foucault , m. vacher , p. f. devaux and s. cribier .translational diffusion of globular proteins in the cytoplasm of cultured muscle cells ._ biophys .j. _ , * 78 * ( 2000 ) , 901 - 907 . m. arrio - dupont , s. cribier , j. foucault , p. f. devaux and a. dalbis. diffusion of fluorescently labelled macrolecules in cultured muscle cells ._ biophys .j. _ , * 70 * ( 1996 ) , 2327 - 2332 .h. p. kao , j. r. abney and a. s. verkman .determinants of the translational mobility of a small solute in cell cytoplasm ._ j. cell biol ._ , * 120 * ( 1993 ) , 175 - 184 . k. luby - phelps , s. mujumdar , r. b. mujumdar , l. a. ernst , w. galbraith and a. s. waggoner. a novel fluorescence ratiometric method confirms the low solvent viscosity of the cytoplasm ._ biophys .j. _ , * 65 * ( 1993 ) , 236 - 242 . o. seksek , j. biwersi , and a. s. verkman .translational diffusion of macromolecule - sized solutes in cytoplasm and nucleus ._ j. cell biology _ ,* 138 * ( 1997 ) , 131 - 142 .v. shenoy and j. rosenblatt .diffusion of macromolecules in collagen and hyaluronic acid , rigid - rod flexible polymer , composite matrices ._ macromolecules _ , * 28 * ( 1995 ) , 8751 - 8758 . m. bellour , m. skouri , j .-p . munch and p. hebraud .brownian motion of particles embedded in a solution of giant micelles .* 8 * ( 2002 ) , 431 - 436 . b. r. dasgupta , s .- y .tee , j. c. crocker , b. j. frisken and d. a. weitz .microrheology of polyethylene oxide using diffusing wave spectroscopy and single scattering .e _ , * 65 * ( 2002 ) , 051505 1 - 10 . c. heinemann , f. cardinaux , f. scheffold , p. shurtenberger , f. escher and b. conde - petit .tracer microrheology of -dodecalactone induced gelation of aqueous starch dispersions . _ carbohydrate polymers _ , * 55 * ( 2004 ) , 155 - 161 . m. h. kao , a. g. yodh and d. j. pine . observation of brownian motion on the time scale of hydrodynamic interactions ._ , * 70 * ( 1993 ) , 242 - 245 . p. d. kaplan , a. g. yodh and d. f. townsend. noninvasive study of gel formation in polymer - stabilized dense colloids using multiply scattered light ._ j. coll ._ , * 155 * ( 1993 ) , 319 - 324 .a. knaebel , r. skouri , j. p. munch and s. j. candau .structural and rheological properties of hydrophobically modified alkali - soluble emulsion solutions ._ j. polymer sci .b _ , * 40 * ( 2002 ) , 1985 - 1994 . d. j. pine , d. a. weitz , p. m. chaikin and e. herbolzheimer . diffusing - wave spectroscopy ._ , * 60 * ( 1988 ) , 1134 - 1137 .g. maret and p. e. wolf .multiple light - scattering from disordered media .the effect of brownian motion of scatterers ._ z. phys .b _ , * 65 * ( 1987 ) , 409 - 413 . g. popescu , a. dogariu and r. rajagopalan .spatially resolved microrheology using localized coherence volumes .e _ , * 65 * ( 2002 ) , 041504 1 - 8 . g. popescu and a. dogariu .dynamic light scattering in localized coherence volumes . _optics letters _ , * 26 * ( 2001 ) , 551 - 553 . i. s. sohn , r. rajagopalan and a. c. dogariu . spatially resolved microrheology through a liquid / liquid interface ._ j. coll ._ , * 269 * ( 2004 ) , 503 - 513 . l. f. rojas - ochoa , s. romer , f. 
scheffold and p. schurtenberger . diffusing wave spectroscopy and small - angle neutron scattering from concentrated colloidal suspensions .e _ , * 65 * ( 2002 ) , 051403 1 - 8 . s. romer , f. scheffold and p. schurtenberger .sol - gel transition of concentrated colloidal suspensions ._ , * 85 * ( 2000 ) , 4980 - 4983. s. romer , c. urban , h. bissig , a. stradner , f. scheffold and p. schurtenberger .dynamics of concentrated colloidal suspensions : diffusion , aggregation , and gelation . _london a _ , * 359 * ( 2001 ) , 977 - 984 .k. rufener , a. palmer , j. xu and d. wirtz .high - frequency dynamics and microrheology of macromolecular solutions probed by diffusing wave spectroscopy : the case of concentrated solutions of f - actin . _j. non - newtonian fluid mech ._ , * 82 * ( 1999 ) , 303 - 314 . j. van der gucht , n. a. m. besseling , w. knoben , l. bouteiller and m. a. cohen stuart .brownian particles in supramolecular polymer solutions .e _ , * 67 * ( 2003 ) , 051106 1 - 10 . j. h. van zanten , s. amin and a. a. abdala .brownian motion of colloidal spheres in aqueous peo solutions ._ macromolecules _ , * 37 * ( 2004 ) , 3874 - 3880 .
|
the experimental literature on the motion of mesoscopic probe particles through polymer solutions is systematically reviewed . the primary focus is the study of diffusive motion of small probe particles . comparison is made with measurements of solution viscosities . a coherent description was obtained , namely that the probe diffusion coefficient generally depends on polymer concentration as . one finds that depends on polymer molecular weights as , and appears to have large- and small- values with a crossover linking them . the probe diffusion coefficient does not simply track the solution viscosity ; instead , typically increases markedly with increasing polymer concentration and molecular weight . in some systems , e.g. , hydroxypropylcellulose : water , the observed probe spectra are bi- or tri - modal . extended analysis of the full probe phenomenology implies that hydroxypropylcellulose solutions are characterized by a single , concentration - independent , length scale that is approximately the size of a polymer coil . in a very few systems , one sees re - entrant or low - concentration - plateau behaviors of uncertain interpretation ; from their rarity , these behaviors are reasonably interpreted as corresponding to specific chemical effects . true microrheological studies examining the motion of mesoscopic particles under the influence of a known external force are also examined . viscosity from true microrheological measurements is in many cases substantially smaller than the viscosity measured with a macroscopic instrument .
|
in -dimensional euclidean space , a hard hypersphere ( i.e. -dimensional sphere ) packing is an arrangement of hyperspheres in which no two hyperspheres overlap . the _ packing density _ or _ packing fraction _ is the fraction of space in covered by the spheres , which for identical spheres of radius , the focus of the paper , is given by : where is the number density and is the volume of a -dimensional sphere of radius and is the gamma function .sphere packings are of importance in a variety of contexts in the physical and mathematical sciences .dense sphere packings have been used to model a variety of many - particle systems , including liquids , amorphous materials and glassy states of matter , granular media , suspensions and composites , and crystals .the densest sphere packings are intimately related to the ground states of matter and the optimal way of sending digital signals over noisy channels .finding the densest sphere packing in for is generally a notoriously difficult problem .kepler s conjecture , which states that there is no other three - dimensional arrangement of identical spheres with a density greater than that of face - centered cubic lattice , was only recently proved .the densest sphere packing problem in the case of congruent spheres has not been rigorously solved for , although for and the and leech lattices , respectively , are almost surely the optimal solutions . understanding the high - dimensional behavior of disordered sphere packings is a fundamentally important problem , especially in light of the recent conjecture that the densest packings in sufficiently high dimensions may be disordered rather than ordered . indeed ,ref . provides a putative exponential improvement on minkowski s lower bound on the maximal density among all bravais lattices : where is the riemann zeta function . 
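the packing - fraction definition given above involves only the volume of a -dimensional ball . the short python sketch below ( my own notation , not the paper 's ) assumes the standard formula pi^{d/2} r^d / gamma(1 + d/2) for that volume and evaluates the resulting packing fraction for a given number density , using the known fcc value in three dimensions as a sanity check .

```python
import math

def ball_volume(d, R=1.0):
    """Volume of a d-dimensional ball of radius R: pi^(d/2) * R^d / Gamma(1 + d/2)."""
    return math.pi ** (d / 2) * R ** d / math.gamma(1 + d / 2)

def packing_fraction(rho, d, R):
    """phi = rho * v1(R): fraction of space covered by non-overlapping spheres of
    radius R at number density rho (sphere centers per unit volume)."""
    return rho * ball_volume(d, R)

# Sanity check in d = 3: the FCC lattice of unit-diameter spheres (R = 1/2) has
# number density sqrt(2), giving phi = pi / sqrt(18) ~ 0.7405.
phi_fcc = packing_fraction(math.sqrt(2.0), 3, 0.5)
print(f"FCC packing fraction in d=3: {phi_fcc:.4f}")

# Ball volumes for unit-diameter spheres across the dimensions studied here.
for d in range(2, 9):
    print(d, f"v1(R=1/2) = {ball_volume(d, 0.5):.4e}")
```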
for large values of , the asymptotic behavior of the minkowski s lower bound is controlled by .interestingly , any saturated packing density satisfies the following so - called `` greedy '' lower bound : a saturated packing of congruent spheres of unit diameter and density in has the property that each point in space lies within a unit distance from the center of some sphere .thus , a covering of the space is achieved if each center is encompassed by a sphere of unit radius and the density of this covering is which proves the lower bound ( [ greedy_bound ] ) .note that it has the same dominant exponential term as in inequality ( [ minkowski_bound ] ) .the packing density of can also be exactly achieved by _ ghost random sequential addition _packings , an unsaturated packing less dense than the standard random sequential addition ( rsa ) packing in some fixed dimension , implying that the latter will have a superior dimensional scaling .additionally , the effect of dimensionality on the behavior of equilibrium hard - sphere liquids and of maximally random jammed spheres have been investigated .sphere packings are linked to a variety of fundamental characteristics of point configurations in , including the _ covering radius _ and the _ quantizer error _ , which are related to properties of the underlying voronoi cells .the covering and quantizer problems have relevance in numerous applications , including wireless communication network layouts , the search of high - dimensional data parameter spaces , stereotactic radiation therapy , data compression , digital communications , meshing of space for numerical analysis , coding , and cryptography .it has recently been shown that both of these quantities can be extracted from the _ void exclusion probability _ , which is defined to be the probability of finding a randomly placed spherical cavity of radius empty of any points .it immediately follows that is the expected fraction of space not covered by circumscribing spheres of radius centered at each point .thus , if is identically zero for for a point process , then there is a covering associated with the point process with covering radius . finally , for a point configuration with positions , a quantizer is a device that takes as an input a position in and outputs the nearest point of the configuration to .assuming is uniformly distributed , one can define a mean square error , called the _ scaled dimensionless quantizer error _ , which can be obtained from the void exclusion probability via the relation : it is noteworthy that the optimal covering and quantizer solutions are the ground states of many - body interactions derived from .the rsa procedure , which is the focus of the present paper , is a time - dependent process to generate disordered hard - hypersphere packings in . 
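the void exclusion probability , covering radius , and quantizer error discussed above can all be estimated numerically for any point configuration by sampling random test points and finding their nearest configuration point . the following is a generic monte carlo sketch in python , not the authors ' code ; the normalization used for the scaled dimensionless quantizer error is one common convention and is marked as an assumption in the comments .

```python
import numpy as np
from scipy.spatial import cKDTree

def void_statistics(points, box_length, n_samples=200_000, seed=1):
    """Monte Carlo estimates for a periodic point configuration in a cubic box:
      - E_V(r): probability that a random point lies farther than r from every point,
      - covering radius: largest nearest-point distance among the samples,
      - scaled dimensionless quantizer error  G ~ rho**(2/d) * <r^2> / d
        (this particular normalization is an assumed convention, not taken from the text)."""
    points = np.asarray(points)
    n, d = points.shape
    rho = n / box_length ** d
    tree = cKDTree(points, boxsize=box_length)  # periodic nearest-neighbor queries

    rng = np.random.default_rng(seed)
    test = rng.uniform(0.0, box_length, size=(n_samples, d))
    dist, _ = tree.query(test, k=1)

    covering_radius = dist.max()
    G = rho ** (2.0 / d) * np.mean(dist ** 2) / d

    def E_V(r):
        # Fraction of test points whose nearest configuration point is farther than r.
        return float(np.mean(dist > r))

    return E_V, covering_radius, G

# Example with a Poisson (ideal-gas) configuration; an RSA configuration's
# coordinates could be passed in exactly the same way.
rng = np.random.default_rng(0)
pts = rng.uniform(0.0, 1.0, size=(2000, 3))
E_V, Rc, G = void_statistics(pts, box_length=1.0)
print(f"covering radius ~ {Rc:.3f},  quantizer error G ~ {G:.4f},  E_V(0.05) ~ {E_V(0.05):.3f}")
```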
starting with a large , empty region of of volume , spheres are randomly and sequentially placed into the volume subject to a nonoverlap constraint : if a new sphere does not overlap with any existing spheres , it will be added to the configuration ; otherwise , the attempt is discarded .one can stop the addition process at any time , obtaining rsa configurations with various densities up to the maximal saturation density that occurs in the infinite - time limit .besides identical d - dimensional spheres , the rsa packing process has also been investigated for polydisperse spheres and other particle shapes , including squares , rectangles , ellipses , spheroids , superdisks , sphere dimers , and sphere polymers in , and for different shapes on lattices and fractals . the rsa packing process in the first three space dimensions has been widely used to model the structure of cement paste , ion implantation in semiconductors , protein adsorption , polymer oxidation , and particles in cell membranes .the one - dimensional case , also known as the `` car - parking '' problem , has been solved analytically and its saturation density is .however , for , the saturation density of rsa spheres has only been estimated through numerical simulations . in general, generating exactly saturated ( infinite - time limit ) rsa configurations in is particularly difficult because infinite computational time is not available .the long - time limit of rsa density behaves as : previous investigators have attempted to ascertain the saturation densities of rsa configurations by extrapolating the densities obtained at large , finite times using the asymptotic formula ( [ rsa_infinite_time_density ] ) . in order to describe more efficient ways of generating nearly - saturated and fully - saturated rsa configurations, we first need to define two important concepts : the _ exclusion sphere _ and the _available space_. the _ exclusion sphere _ associated with a hard sphere of diameter ( equal to ) is the volume excluded to another hard sphere s center due to the impenetrability constraint , and thus an exclusion sphere of radius circumscribes a hard sphere .the _ available space _ is the space exterior to the union of the exclusion spheres of radius centered at each sphere in the packing .a more general notion of the available space is a fundamental ingredient in the formulation of a general canonical -point distribution function .an efficient algorithm to generate nearly - saturated rsa configurations was introduced in ref . .this procedure exploited an economical procedure to ascertain the available space ( as explained in the subsequent section ) .although a huge improvement in efficiency can be achieved , this and all other previous algorithms still require extrapolation of the density of nearly - saturated configurations to estimate the saturation limit .in this paper , we present an improvement of the algorithm described in ref . in order to generate saturated ( i.e. , infinite - time limit ) rsa packings of identical spheres in a finite amount of computational time . 
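before turning to the improved algorithm , it may help to see the standard rsa process itself in code . the following is a deliberately naive , illustrative python sketch with periodic boundary conditions ; it is not the procedure used in this paper , and it becomes very inefficient near saturation , which is precisely the problem the voxel - based methods address .

```python
import numpy as np

def rsa_packing(d=2, radius=0.05, box_length=1.0, n_attempts=200_000, seed=0):
    """Naive RSA of equal spheres (diameter D = 2*radius) in a periodic cube.
    A trial center is accepted only if it lies at least D away from every
    previously accepted center (minimum-image distance)."""
    rng = np.random.default_rng(seed)
    D = 2.0 * radius
    centers = np.empty((0, d))
    for _ in range(n_attempts):
        trial = rng.uniform(0.0, box_length, size=d)
        if centers.shape[0]:
            delta = np.abs(centers - trial)
            delta = np.minimum(delta, box_length - delta)   # minimum image
            if np.min(np.einsum("ij,ij->i", delta, delta)) < D * D:
                continue                                    # overlap: discard attempt
        centers = np.vstack([centers, trial])
    return centers

centers = rsa_packing(d=2, radius=0.05)
v1 = np.pi * 0.05 ** 2                    # area of one disk
phi = centers.shape[0] * v1 / 1.0 ** 2    # packing fraction in the unit box
print(f"{centers.shape[0]} disks placed, phi ~ {phi:.3f} (saturation is about 0.547 in d=2)")
```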
using this algorithm , we improve upon previous calculations of the saturation packing and covering densities , pair correlation function , structure factor , void exclusion probability , and quantizer error in dimensions 2 through 8 . the rest of the paper is organized as follows : in sec . [ algorithm ] , we describe the improved algorithm ; in sec . [ results ] , we present the packing and covering densities , pair correlation function , structure factor , void exclusion probability , and quantizer error of saturated rsa configurations ; and in sec . [ conclusion ] , we conclude with some discussions of extending this method to generate saturated rsa packings of objects other than congruent spheres . reference introduced an efficient algorithm to generate nearly saturated rsa configurations of hard -dimensional spheres . specifically , a hypercubic simulation box is divided into small hypercubic `` voxels '' with side lengths much smaller than the diameter of the spheres . at any instant of time , spheres are sequentially added to the simulation box whenever there is available space for that sphere . each voxel can be probed to determine whether or not it may contain any available space in which to add another sphere . by tracking all of the voxels that can contain some portion of the available space , one can make insertion attempts only inside these `` available voxels '' and save computational time . this enables one to achieve a huge improvement in computational efficiency over previous methods . however , this and all other previous algorithms still require extrapolation of the density of nearly - saturated configurations to estimate the saturation limit . ( figure [ process ] caption , final panels : the available sub - voxels constitute a new voxel list ; returning to the third step with the new available voxel list , two additional disks are inserted ; the program then subdivides each voxel into four subvoxels , all subvoxels are identified as unavailable , and the program finishes . ) the improved algorithm reported in the present paper differs from the original voxel method by dividing the undetermined voxels ( voxels that are not included in any exclusion sphere after a certain amount of insertion trials ) into smaller subvoxels . repeating this voxel subdivision process with progressively greater resolution enables us to track the available space more and more precisely . eventually , this allows us to discover _ all _ of the available space at any point in time and completely consume it in order to arrive at saturated configurations . the improved algorithm consists of the following steps , which are illustrated in figure [ process ] : 1 . starting from an empty simulation box in , the cartesian coordinates of a sphere of radius are randomly generated . this sphere is added if it does not overlap with any existing sphere in the packing at that point in time ; otherwise , the attempt is discarded . this addition process is repeated until the success rate is sufficiently low . the acceptance ratio of this step equals the volume fraction of the available space inside the simulation box : where is the acceptance ratio of this step , is the volume fraction of the available space , is the volume of the available space and is the volume of the simulation box with side length . 2 . when the fraction of the available space is low , we improve the acceptance ratio by avoiding insertion attempts in the unavailable space .
to do this ,the simulation box is divided into hypercubic voxels , with side lengths comparable to the sphere radius .each voxel is probed to determine whether it is completely included in any of the exclusion spheres or not . if not, the voxel is added to the available voxel list .thus we obtain an `` available voxel list '' .a voxel in this list may or may not contain available space , but the voxels not included in this list are guaranteed to contain no available space .3 . since some unavailable space is excluded from the voxel list , we can achieve a higher success rate of insertion by selecting a voxel randomly from the available voxel list , generate a random point inside it , attempt to insert a sphere and repeat this step .the acceptance ratio of this step is equal to the volume fraction of the available space inside voxels from the available voxel list : + + where is the acceptance ratio of this step , is the volume fraction of the available space inside the voxel list , is the volume of the available space , is the number of voxels in the available voxel list and is the volume of a voxel .4 . in the previous step, spheres were inserted into the system , thus the volume of the available space will decrease . eventually , is very low and is also low .thus we improve the efficiency again by dividing each voxel in the voxel list into sub - voxels , each with side length equal to a half of that of the original voxel .each sub - voxel is checked for availability according to the rule described in step 2 .the available ones constitute the new voxel list .return to step 3 with the new voxel list and repeat steps 3 to 5 until the number of voxels in the latest voxel list is zero .since we only exclude a voxel from the voxel list when we are absolutely sure that it does not contain any available space , we know at this stage that the entire simulation box does not contain any available space and thus the configuration is saturated .we have used the method described in sec .[ algorithm ] to generate saturated configurations of rsa packings of hyperspheres in dimensions two through eight in a hypercubic ( -dimensional cubic ) box of side length under periodic boundary conditions . in each dimension ,multiple sphere sizes are chosen .the relative sphere volume is represented by the ratio of a sphere s volume to the simulation box s volume , where is the sphere radius and is the volume of the hypercubic simulation box . for each sphere size ,multiple configurations are generated .the number of spheres contained in these configurations fluctuate around some average value inversely proportional to .the relative sphere volume and number of configurations generated for each sphere radius in each dimension is given in table [ nbrconfig ] .the mean density and its standard error for each sphere radius is calculated .subsequently , we plot the mean density and its standard error versus a quantity proportional to , namely ^{1/2}$ ] .we then perform a weighted linear least squares fit to this function in each dimension in order to extrapolate to the infinite - system - size [ limit .the weight is given by where is the standard error of the mean density for spheres with radius ..dimensionless sphere size and number of configurations generated for each dimension .[ cols="^,^,^,^,^,^ " , ] [ quantizer_error ]we have devised an efficient algorithm to generate exactly saturated , infinite - time limit rsa configurations in finite computational time across euclidean space dimensions . 
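As an aside on the finite-size extrapolation described in the results above, the weighted linear least-squares fit of the mean density against a quantity proportional to the square root of the relative sphere volume, with weights equal to the inverse variance of each mean, can be carried out as in the sketch below; the numbers in the usage lines are purely illustrative and are not data from this work.

```python
import numpy as np

def extrapolate_density(x, phi_mean, phi_err):
    """Weighted linear least-squares fit phi(x) ~ a + b*x with weights 1/phi_err**2;
    returns the extrapolated infinite-system value a = phi(0) and its standard error."""
    x, y, s = map(np.asarray, (x, phi_mean, phi_err))
    w = 1.0 / s ** 2
    A = np.vstack([np.ones_like(x), x]).T                  # design matrix [1, x]
    AtWA = A.T @ (w[:, None] * A)                          # normal equations of WLS
    cov = np.linalg.inv(AtWA)                              # parameter covariance
    a, slope = cov @ (A.T @ (w * y))
    return a, np.sqrt(cov[0, 0])

# illustrative numbers only
x = np.array([0.02, 0.03, 0.05, 0.08])                    # proportional to N**(-1/2)
phi = np.array([0.5468, 0.5465, 0.5461, 0.5452])
err = np.array([2e-4, 2e-4, 3e-4, 4e-4])
print(extrapolate_density(x, phi, err))
```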
with the algorithm ,we have improved previous results of the saturation density and extended them to a wider range of dimensions , i.e. , up through dimension eight .the associated covering density , pair correlation function , structure factor , void exclusion probability , and quantizer error have also been improved . in particular , we found appreciable improvement for near contact and in the limit , which are especially sensitive to whether or not very small fragments of the available space are truly eliminated as the saturation state is approached .we observed that as increases , the degree of hyperuniformity " ( the magnitude of the suppression of infinite - wavelength density fluctuations ) increases and appears to be consistent with our results also supports the `` decorrelation principle '' , which in turns lends further credence to a conjectural lower bound on the maximal sphere packing density that provides the putative exponential improvement on minkowski s lower bound .it is noteworthy that the rsa packing in has relevance in the study of high - dimensional scaling of packing densities .for example , ref . suggested that since rsa packing densities appear to have a similar scaling in high dimensions as the best lower bound on bravais lattice packings densities , the density of disordered packings might eventually surpass that of the densest lattice packing beyond some large but finite dimension .our improvements to the saturation densities , as well as a previous investigation , support this conjecture . converting a packing into a covering by replacing each sphere with its exclusion sphereis rigorous only if the packing is exactly saturated . by guaranteeing that the packings that we generated are saturated , we rigorously met this condition ( in a large finite simulation box ) .although the best known lattice covering and lattice quantizer perform better than their rsa counterparts in low dimensions , rsa packings may outperform lattices in sufficiently high dimensions , as suggested in ref . .it is useful here to comment on the ability to ascertain the high - dimensional scaling of rsa packing densities from low - dimensional data .we have fitted our data of the saturation densities as a function of for using a variety of different functions .the best fit we find is the following form : where , , and are parameters .however , it is not clear how accurate this form is for .in fact , this form is likely not correct in high dimensions , where it has been suggested from theoretical considerations that high - dimensional scaling may be given by the asymptotic form where , , and are constants .it is noteworthy that ( [ fit1 ] ) provides a fit that is very nearly as good as ( [ fit2 ] ) .nonetheless , for , the estimates of the saturation densities obtained from ( [ fit2 ] ) and ( [ fit1 ] ) differ by about 20% , which is a substantial discrepancy and indicates the uncertainties involved in applying such dimensional scalings for even moderately - sized dimensions .when is very large , extrapolations based on fits of low - dimensional data is even more problematic . 
in this limit , eq .( [ fit2 ] ) is dominated by the term , which can be significantly larger than the dominating term in eq .( [ fit1 ] ) , although it is safe to say that the saturation density grows at least as fast as .therefore , caution should be exercised in attempting to ascertain the precise high- asymptotic behavior of rsa saturation densities from our data in relatively low dimensions .the same level of caution should be employed in attempting to determine high- scaling behavior by extrapolating low - dimensional packing densities for other types of sphere packings .for example , it may useful to revisit the high - dimensional scalings that have been ascertained or tested for the maximally random jammed densities . in summary , it is nontrivial to ascertain high- scalings of packing densities from low - dimensional information .in contrast , in the study of the dimensional dependence of continuum percolation thresholds , it is possible to obtain exact high- asymptotics and tight upper and lower bounds that apply across all dimensions .rsa packings of spheres with a polydispersity in size have also been investigated previously .our algorithm can easily be extended to generate saturated rsa packings of polydisperse spheres in by constructing a -dimensional auxiliary space for the associated radius - dependent available space and voxels , where the additional dimension is used to represent the radius of a sphere that could be added in the rsa process .rsa packings of nonspherical particles have also been studied , including squares , rectangles , ellipses , spheroids , and superdisks . while packings of polyhedra have received recent attention , rsa packings of such shapes have not been considered to our knowledge .our algorithm can also be extended to treat these situations by constructing auxiliary spaces for the associated orientation - dependent available space and voxels .the dimension of such an auxiliary space is determined by the total number of degrees of freedom associated with a particle , i.e. , translational and rotational degrees of freedom .the extensions of the methods devised here to generate saturated packings of polydisperse spheres and nonspherical particles is an interesting direction for future research .we are very grateful to tienne marcotte , yang jiao , and adam hopkins for many helpful discussions and yang jiao , steven atkinson , and tienne marcotte for their careful reading of the manuscript .this work was supported by the materials research science and engineering center program of the national science foundation under grant no .dmr-0820341 and by the division of mathematical sciences at the national science foundation under award no .this work was partially supported by a grant from the simons foundation ( grant no .231015 to salvatore torquato ) . in the implementation of our algorithm ,the criteria for `` sufficiently low '' is less than 3 spheres inserted in trials .the optimal choice of depends on the dimension , ranging from for to for .
|
the study of the packing of hard hyperspheres in -dimensional euclidean space has been a topic of great interest in statistical mechanics and condensed matter theory . while the densest known packings are ordered in sufficiently low dimensions , it has been suggested that in sufficiently large dimensions , the densest packings might be disordered . random sequential addition ( rsa ) time - dependent packing process , in which congruent hard hyperspheres are randomly and sequentially placed into a system without interparticle overlap , is a useful packing model to study disorder in high dimensions . of particular interest is the infinite - time _ saturation _ limit in which the available space for another sphere tends to zero . however , the associated saturation density has been determined in all previous investigations by extrapolating the density results for near - saturation configurations to the saturation limit , which necessarily introduces numerical uncertainties . we have refined an algorithm devised by us [ s. torquato , o. uche , and f. h. stillinger , phys . rev . e * 74 * , 061308 ( 2006 ) ] to generate rsa packings of identical hyperspheres . the improved algorithm produce such packings that are guaranteed to contain no available space using finite computational time with heretofore unattained precision and across the widest range of dimensions ( ) . we have also calculated the packing and covering densities , pair correlation function and structure factor of the saturated rsa configurations . as the space dimension increases , we find that pair correlations markedly diminish , consistent with a recently proposed `` decorrelation '' principle , and the degree of hyperuniformity " ( suppression of infinite - wavelength density fluctuations ) increases . we have also calculated the void exclusion probability in order to compute the so - called quantizer error of the rsa packings , which is related to the second moment of inertia of the average voronoi cell . our algorithm is easily generalizable to generate saturated rsa packings of nonspherical particles .
|
in the last few decades it has become apparent that many problems in information theory , and coding problems in particular , can be mapped onto ( and interpreted as ) analogous problems in the area of statistical physics of disordered systems , most notably , spin glass models .such analogies are useful because physical insights , as well as statistical mechanical tools and analysis techniques ( like the replica method ) , can be harnessed in order to advance the knowledge and the understanding with regard to the information theoretic problem under discussion ( and conversely , information theoretic approaches to problems in physics may sometimes prove useful to physcists as well ) .a very small , and by no means exhaustive , sample of works along this line includes references [ 1][25 ] . in particular ,sourlas , was the first to observe that there are strong analogies and parallisms between the behavior of ensembles of error correcting codes and certain spin glass models with quenched parameters , like the glass model and derrida s random energy model ( rem ) ,, at least as far as the mathematical formalism goes .in particular , the rem is an especially attractive model to adopt in this context , as it is , on the one hand , exactly solvable , and on the other hand , rich enough to exhibit phase transitions . as noted in ( * ? ? ? * chap .6 ) and , ensembles of error correcting codes ` inherit ' these phase transitions from the rem when viewed as physical systems whose phase diagram is defined in the plane of the coding rate vs. decoding temperature . in topic was further investigated and ensemble performance figures of error correcting codes ( random coding exponents ) were related to the free energies in the various phases of the phase diagram .while the above described relation takes place between _ pure _ channel coding and the rem _ without _ any external magnetic field , in this work , we demonstrate that there are also intimate relationships between _ combined source / channel coding _ and the rem _ with _ such a magnetic field . in particular, it turns out that typical patterns of erroneously decoded messages in the source / channel coding problem have `` magnetization '' properties that are analogous to those of the rem in certain phases , where the non uniformity of the distribution of the source in the joint source channel coding system , plays the role of an external magnetic field applied to the spin glass modeled by the rem .we also relate the ensemble performance ( random coding exponents ) of joint source channel codes to the free energy of the rem in its different phases .the outline of this paper is as follows . in section 2, we provide some background , both on the information theoretic aspect of this work , which is the problem of joint source channel coding , and the statistical mechanical aspect , which is the rem and its magnetic properties . in section 3, we present the phase diagram pertaining to finite temperature decoding of an ensemble of joint source channel codes and characterize the free energies in the various phases . finally ,in section 4 , we derive random coding exponents pertaining to this emsemble and demonstrate their relationships to the free energies .in this section , we give some very basic background which will be needed in the sequel . 
in subsection 2.1 , we provide a brief overview of shannon s fundamental coding theorems , the skeleton of information theory : the source coding theorem , the channel coding theorem , and finally the joint source channel coding theorem . in subsection 2.2, we review a few models of spin glasses , with special emphasis on the rem .suppose we wish to compress a sequence of bits , , drawn from a stationary memoryless binary source , i.e. , each bit is drawn independently , where .shannon s _ source coding theorem _( see , e.g. , ( * ? ? ? * chap . 5 ) ) tells that if we demand that the source sequence would be perfectly reconstructable from the compressed data , then the best achievable compression ratio ( i.e. , the smallest average ratio between the compressed message length and the original source message length ) , at the limit of large , is given by the entropy of the source , which in the binary memroyless case considered here , is given by : many practical coding algorithms are known to achieve asymptotically , e.g. , huffman coding , shannon coding , arithmetic coding , and lempel ziv coding , to name a few .shannon s celebrated _ channel coding theorem _ ( see ,e.g. , ( * ? ? ?* chap . 7 ) ) is about reliable transmission of digital information across a noisy channel : suppose we wish to transmit a binary messsage of bits , indexed by ( ) , through a noisy binary symmetric channel , which flips the transmitted bit with probability or conveys it unaltered , with probability .if we wish to convey the message via the channel reliably ( i.e. , with very small probability of error ) , then before we transmit the message via the channel , we have to encode it , i.e. , map it in a sophisticated manner into a longer binary message of length ( ) and then transmit the encoded message .the ratio is called the _ coding rate_. it measures how efficiently the channel is used , i.e. , how many information bits are conveyed per one channel use .the corresponding channel output sequence , ( with some of the bits flipped by the channel ) , is received at the decoder . the optimum decoder , in the sense of minimum probability of error , estimates the message by the _ maximum a posteriori _ ( map ) decoder , i.e., it selects the message which maximizes posterior probability given , that is , , or equivalently , it maximizes the product , where the prior probability of message and is the conditional probability of the observed given that was transmitted . in the important special case where all messages are _ a - priori _equiprobable , that is , for all , the map decoding rule boils down to the maximization of , which is the _ maximum likelihood _ ( ml )decoding rule .channel capacity is defined as the supremum of all coding rates for which there still exist encoders and decoders which make the probability of error arbitrarly small provided that is large enough ( keeping fixed ) .shannon s channel coding theorem provides a formula of the channel capacity , which in the binary case considered here , is given by one of the mainstream efforts in the information theory literature has evolved around devising practical coding and decoding schemes , in terms of computational complexity and storage , with rates close to capacity . finally , we consider the problem of _ joint source channel coding _ ( see ,e.g. , ( * ? ? ?* sect . 
7.13 ) ) : suppose we have a binary memoryless source , as in the first paragraph above , and a binary memoryless channel , as in the second paragraph above .we assume that by the time that the source generates symbols , the channel can transmit bits ( is fixed ) . a joint source channel code maps the source sequence of length into a channel input sequence of length .the decoder , that receives the channel output vector , estimates either by the _ symbol map _ decoder , which minimizes the symbol error probability ( or the bit error probability ) or the _ word map _ decoder , which as mentioned earlier , minimizes the word error probability .the word map decoder works similarly to the above described map decoder for a channel code : it estimates the source sequence as a whole by seeking the vector that maximizes , where is the probability of the source vector .the symbol map decoder , on the other hand , estimates each bit of the source separately by seeking the symbol that maximizes , .these two decoders can be thought of as two special cases of a more general class of decoders , referred to as _ finite temperature decoders _ . a finite temperature decoder estimates the symbol by ^\beta,\ ] ] where the parameter can be thought of as an inverse temperature parameter .the choice corresponds to the symbol map decoder , whereas gives us the word map decoder ( * ? ? ?* chap . 6 ) .the joint source channel coding theorem asserts that a necessary and sufficient condition for the existence of codes , that for large enough and ( with fixed ) , can be decoded with aribrarily small probability of error ( both wordwise and symbolwise ) is given by one approach to achieve reliable communication , whenever this condition holds , is to apply separate source coding and channel coding : first compress the source to essentially bits per symbol , resulting in a binary compressed message of length about bits , as described in the first paragraph above , and then use a reliable channel code of rate to convey the compressed message , as described in the second paragraph .the decoder will first decode the message by the corresponding channel decoder and then decompress the resulting message .another approach is to map directly to a channel input vector . it can be shown ( * ? ? ?* exercise 5.16 , p. 534 ) that by a random selection of a code from the uniform ensemble ( i.e. , by generating each codeword , , independently by a sequence of fair coin tosses ) , the average probability of error , over this ensemble of codes , tends to zero as the block length goes to infinity , as long as the above necessary and sufficient condition holds .consider a spin glass with spins , designated by a binary vector , , .the simplest model of this class is that of a _ paramagnetic _ solid , namely , the one where the only effect is that of the external magnetic field , whereas the effect of interactions is negligible ( cf .3 ) ) . 
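As a concrete illustration of the finite-temperature decoder introduced earlier in this subsection, the brute-force sketch below enumerates all source words of a toy joint source-channel code, weights each by its posterior raised to the power beta, and estimates every source bit from the resulting marginals. The block lengths, source and channel parameters, and the random codebook are illustrative choices; real block lengths would of course be far too large to enumerate.

```python
import numpy as np
from itertools import product

rng = np.random.default_rng(1)
N, n, q, p, beta = 4, 8, 0.2, 0.1, 2.0      # toy sizes and parameters (illustrative)

# random codebook: every source word gets an i.i.d. fair-coin codeword of length n
codebook = rng.integers(0, 2, size=(2 ** N, n))
sources = np.array(list(product([0, 1], repeat=N)))

def prior(x):                               # memoryless Bernoulli(q) source
    k = x.sum()
    return q ** k * (1 - q) ** (N - k)

def likelihood(y, c):                       # memoryless binary symmetric channel
    dist = np.count_nonzero(y != c)
    return p ** dist * (1 - p) ** (n - dist)

# one transmission: draw a source word, send its codeword, flip bits with probability p
x_true = (rng.random(N) < q).astype(int)
idx = int("".join(map(str, x_true)), 2)     # row index of x_true in `sources`
y = codebook[idx] ^ (rng.random(n) < p).astype(int)

# finite-temperature decoding: weight each source word by [P(x) P(y|x)]^beta and
# estimate each bit from the resulting marginals
w = np.array([(prior(x) * likelihood(y, codebook[i])) ** beta
              for i, x in enumerate(sources)])
x_hat = np.array([int(w[sources[:, i] == 1].sum() > w[sources[:, i] == 0].sum())
                  for i in range(N)])
print(x_true, x_hat)
```

Setting beta = 1 recovers the symbol MAP rule, while letting beta grow large approaches the word MAP rule, consistent with the discussion above.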
assuming that the spin directions are all either parallel or antiparallel to the direction of the external magnetic field , the energy associated with a configuration is given ( in the appropriate units ) by : which means ( according to the boltzmann distribution ) that each spin is independently oriented upward ( + 1 ) with probability ] .this means that the average ( net ) magnetic moment is and so the average internal energy per particle is and the free energy per particle is ] , pertaining to spin configuration , is given by .equivalently , , and then ^{n/2}\left(\frac{q}{1-q}\right)^{nm({\mbox{\boldmath }}))/2}\nonumber\\ & = & [ q(1-q)]^{n/2}e^{nm({\mbox{\boldmath }})h}\end{aligned}\ ] ] where is defined as above . by the same token , for the binary symmetric channel we have : where and is the hamming distance between and , namely , the number of places where .thus , ^{n\beta/2}\sum_m\left[\sum_{{\mbox{\boldmath }}({\mbox{\boldmath }}):~m({\mbox{\boldmath }})=m } e^{-\beta\ln[1/p({\mbox{\boldmath }}|{\mbox{\boldmath }}({\mbox{\boldmath }}))]}\right]e^{n\beta mh}\nonumber\\ & = & [ q(1-q)]^{\beta n/2}(1-p)^{n\beta}\sum_m\left[\sum_{{\mbox{\boldmath }}({\mbox{\boldmath }}):~m({\mbox{\boldmath }})=m } e^{-\beta bd_h({\mbox{\boldmath }}({\mbox{\boldmath }}),{\mbox{\boldmath }})}\right]e^{\beta nmh}\nonumber\\ & { \stackrel{\delta } { = } } & [ q(1-q)]^{n\beta/2}(1-p)^{n\beta}\sum_m\zeta(\beta , m ) e^{\beta nmh}\end{aligned}\ ] ] where the resemblance to eq .( [ rempart ] ) is self evident , with being redefined as the second bracketed term . in analogy to the above analysis of the rem , here behaves like in the rem without a magnetic field , namely , it contains exponentially terms , with the random energy levels of the rem being replaced now by random hamming distances that are induced by the random selection of the code .is also random , but this randomness does not play any essential role here .this discussion applies as well for every given .] using the same considerations as with the rem ( see also ) , can be represented as , where is the number of vectors with and . since is the sum of many i.i.d .binary random variables of the form ( again , with randomness induced by the random selection of ) , each with expectation given by } ] , for all such that .defining now the gilbert varshamov distance ( * ? ? ?6 ) as the solution to the equation , the condition is equivalent to the condition .thus , for a typical randomly selected code , } \left[\frac{1}{\theta}h\left(\frac{1+m}{2}\right)+h(\delta)-\ln 2-\beta b\delta\right]\nonumber\\ & = & \left\{\begin{array}{ll } \frac{1}{\theta}h\left(\frac{1+m}{2}\right)-\ln 2 + h(p_\beta)-\beta bp_\beta & p_\beta \ge \delta_{gv}\left(\frac{1}{\theta}h\left(\frac{1+m}{2}\right)\right ) \\-\beta b \delta_{gv}\left(\frac{1}{\theta}h\left(\frac{1+m}{2}\right)\right ) & p_\beta <\delta_{gv}\left(\frac{1}{\theta}h\left(\frac{1+m}{2}\right)\right ) \end{array}\right . \end{aligned}\ ] ] where . the condition is equivalent to the condition .\ ] ] the exponential order of , as a function of is then = \max_m[\theta\phi(\beta , m)+\beta mh].\ ] ] for small enough , the dominant value of is the one that maximizes ] is increasing . at , whereas . as , whereas , provided that .thus , for , there must be a unique solution , which we shall denote by , where the subscript `` pg '' stands for the fact that this is the boundary curve between the paramagnetic phase and the glassy phase .since is decreasing with , is decreasing in , i.e. 
, the temperature is increasing in , as before ( see fig . [ gen3 ] ) .as for the case , for , we have .\ ] ] for , , namely , , which means that there is no phase transition as the behavior is paramagnetic at all temperatures . in the same manner , it is easy to see that for all , which is another case where there are no phase transitions , but this time , it is a glassy behavior at all temperatures . as long as , we have on the other hand , for , the system is in the glassy phase . in this case ,\ ] ] thus , the maximizing depends only on but not on . in this case, we have and so \nonumber\\ & = & \beta\left[h\tanh(\beta_{pg}(h)\cdot h)- b\theta p_{\beta_{pg}(h)}\right].\end{aligned}\ ] ] the free energy density associated with erroneous messages is therefore given by -\theta\ln(1-p)-\frac{\psi(\beta , h)}{\beta}\ ] ] i.e. , where -\theta\ln(1-p)-\frac{1}{\beta}\left [ h\left(\frac{1+\tanh(\beta h)}{2}\right)-\theta(\ln 2-h(p_\beta))\right ] + \theta bp_\beta - h\tanh(\beta h)\ ] ] and -\theta\ln(1-p)- h\tanh(\beta_{pg}(h)\cdot h)+ b\theta p_{\beta_{pg}(h)}.\ ] ] the boundary between the ferromagnetic phase ( where is the dominant term in ) and the glassy phase is the vertical line ( see fig . [ gen3 ] ) , where is the solution to the equation -\ln(1-p)- \frac{h\tanh(\beta_{pg}(h)h)}{\theta}+bp_{\beta_{pg}(h)}\ ] ] which after rearranging terms becomes whose solution in turn is achieved when , i.e. , which is nothing but the boundary of reliable communication ( [ relcond ] ) .thus , where is the inverse of the function in the range where the argument is in y u u y x u y y y e e y e y e y y ] .the second expectation is handled as follows . using the above derived relation : ^\beta= [ q(1-q)]^{n\beta/2}(1-p)^{n\beta}\sum_m\zeta(\beta , m ) e^{\beta nmh},\ ] ] we get ^\beta\right)^\rho&= & \left([q(1-q)]^{n\beta/2}(1-p)^{n\beta}\sum_m\zeta(\beta , m)e^{\beta nmh}\right)^\rho\nonumber\\ & { \stackrel{\cdot } { = } } & [ q(1-q)]^{n\beta\rho/2}(1-p)^{n\beta\rho}\sum_m\zeta^\rho(\beta , m)e^{\beta\rho nmh}\end{aligned}\ ] ] and so , assuming e$}}\{\zeta(\beta , m)\}]^\rho e^{\beta\rho mhn}\nonumber\\ & { \stackrel{\cdot } { = } } & [ q(1-q)]^{n\beta\rho/2}(1-p)^{n\beta\rho } \sum_m\sum_\delta\exp\left\{n\rho\left [ \frac{1}{\theta}h\left(\frac{1+m}{2}\right)+h(\delta)-\ln 2- \beta b\delta\right]\right\}\cdot e^{\beta\rho mhn}\nonumber\\ & { \stackrel{\cdot } { = } } & [ q(1-q)]^{n\beta\rho/2}(1-p)^{n\beta\rho}\times\nonumber\\ & & \exp\left\{n\rho\left [ \frac{1}{\theta}h\left(\frac{1+\tanh(\beta h)}{2}\right)+h(p_\beta)-\ln 2-\beta bp_\beta + \frac{\beta h}{\theta}\tanh(\beta h)\right]\right\}\end{aligned}\ ] ] we see that the magnetization that dominates the gallager bound is the paramagnetic magnetization . 
by plugging this expression back into the bound on , we get the error exponent : +\theta[\gamma(1-\rho\beta)-\ln 2 ] -\frac{\beta\rho}{2}\ln[q(1-q)]-\beta\rho\theta\ln(1-p)-\nonumber\\ & & \rho\left[h\left(\frac{1+\tanh(\beta h)}{2}\right)+\theta[h(p_\beta)-\ln 2- \beta bp_\beta ] + \beta h\tanh(\beta h)\right]\nonumber\\ & = & -\ln [ q^{1-\rho\beta}+(1-q)^{1-\rho\beta}]+ \theta[\gamma(1-\rho\beta)-\ln 2]+\rho\beta f_p(\beta , h)\nonumber\\ & = & -\ln \{[p^{1-\rho\beta}+(1-p)^{1-\rho\beta}]^\theta\cdot[q^{1-\rho\beta}+(1-q)^{1-\rho\beta}]\ } + \rho\beta f_p(\beta , h)\end{aligned}\ ] ] here , unlike in the computation of the correct decoding exponent , there is a mismatch between the phase in the plane at which the decoder operatively works , and the phase at which is analyzed : while the former is ferromagnetic , the latter is paramagnetic regardless of the temperature .t. c. dorlas and j. r. wedagedera , `` phase diagram of the random energy model with higher order ferromagnetic term and error correcting codes due to sourlas , '' _ phys ._ , vol .83 , no .21 , pp . 44414444 ,november 1999 .a. e. allakhverdyan and d. b. saaskyan , `` finite volume corrections to the magnetization in the spin glass phase of the derrida model , '' _ theoretical and mathematical physics _ ,109 , no . 3 , pp .15741577 , 1996 .n. merhav , `` relations between random coding exponents and the statistical physics of random codes , '' submitted to_ ieee trans . inform .theory _ , august 2007 .available on line at : [ http://www.ee.technion.ac.il/people/merhav/papers/p117.pdf ] .n. merhav , `` error exponents of erasure / list decoding revisited via moments of distance enumerators , '' submitted to _ieee trans . inform . theory _ ,november 2007 . also , available on line at : [ http://www.ee.technion.ac.il/people/merhav/papers/p119.pdf ] .
|
we demonstrate that there is an intimate relationship between the magnetic properties of derrida s random energy model ( rem ) of spin glasses and the problem of joint source channel coding in information theory . in particular , typical patterns of erroneously decoded messages in the coding problem have `` magnetization '' properties that are analogous to those of the rem in certain phases , where the non uniformity of the distribution of the source in the coding problem , plays the role of an external magnetic field applied to the rem . we also relate the ensemble performance ( random coding exponents ) of joint source channel codes to the free energy of the rem in its different phases . + _ keywords _ : spin glasses , rem , phase transitions , magnetization , information theory , joint source channel codes . department of electrical engineering + technion - israel institute of technology + haifa 32000 , israel +
|
biochemical systems with small numbers of interacting components have increasingly been studied in the recent years .examples include the phage lysis - lysogeny decision circuit , circadian rhythms and cell cycle .it is this small number of interacting components that makes the appropriate mathematical framework for describing these systems a stochastic one . in particular , the kinetics of the different species is accurately described , under appropriate assumptions , by a continuous - time discrete - space markov chain .the theory of stochastic processes allows the association of the markov chain with an underlying master equation , which is a set of ordinary differential equations ( odes ) , possible of infinite dimensions , that describe , at each point in time , the probability density of all the different possible states of the system . in the context of biochemical systemsthis equation is known as the _ chemical master equation _ ( cme ) .the high dimensionality of the cme makes it intractable to solve in practice .in particular , with the exception of some very simple chemical systems analytic solutions of the cme are not available . one way to deal with this issue is to resort to stochastic simulation of the underlying markov chain . the stochastic simulation algorithm ( ssa ) developed by gillespie exactly simulates trajectories of the cme as the system evolves in time . the main idea behind this algorithm , is that at each time point , one samples a waiting time to the next reaction from an appropriate exponential distribution , while another draw of a random variable is then used to decide which of the possible reactions will actually occur . for suitable classes of chemically reacting systems, one can sometimes use exact algorithms which , although equivalent to the gillespie ssa are less computationally intensive .examples include the gibson - bruck next reaction method and the optimized direct method .these algorithms can be further accelerated by using parallel computing , for example , on graphics processing units .all the methods described above can only go so far in terms of speeding up the simulations , since even with the all possible speed ups running the ssa can be computationally intensive for realistic problems .one approach to alleviate the computational cost is to employ different approximations on the level of the description of the chemical system .for example , in the limit of large molecular populations , the waiting time becomes , on average , very small and under the law of mass action the time evolution of the kinetics is described by a system of odes .this system is known as the _ reaction rate equation _ which describes , approximately , the time evolution of the mean of the evolving markov chain .an intermediate regime between the ssa and the reaction rate equation is the one where stochasticity is still important , but there exist a sufficient number of molecules to describe the evolving kinetics by a continuous model .this regime is called the chemical langevin equation ( cle ) , which is an it stochastic differential equation ( sde ) driven by a multidimensional wiener process . 
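For reference, the SSA just described amounts to repeatedly drawing an exponential waiting time with rate equal to the total propensity and then selecting a reaction with probability proportional to its propensity. A minimal sketch follows; the toy production-decay system and its rate constants are illustrative and are not one of the examples considered later.

```python
import numpy as np

def gillespie_ssa(x0, nu, propensities, t_final, seed=0):
    """Direct-method SSA: nu is an R x N matrix of stoichiometric vectors and
    propensities(x) returns the R reaction propensities at state x."""
    rng = np.random.default_rng(seed)
    t, x = 0.0, np.array(x0, dtype=int)
    times, states = [t], [x.copy()]
    while t < t_final:
        a = propensities(x)
        a0 = a.sum()
        if a0 == 0.0:                     # no reaction can fire: absorbing state
            break
        t += rng.exponential(1.0 / a0)    # waiting time ~ Exp(total propensity)
        r = rng.choice(len(a), p=a / a0)  # reaction index with probability a_r / a0
        x = x + nu[r]
        times.append(t); states.append(x.copy())
    return np.array(times), np.array(states)

# toy production-decay system with illustrative rates: 0 -> A at rate k1*V, A -> 0 at rate k2*A
k1, k2, V = 1.0, 0.1, 100.0
nu = np.array([[1], [-1]])
prop = lambda x: np.array([k1 * V, k2 * x[0]])
t, xs = gillespie_ssa([0], nu, prop, t_final=50.0)
print(xs[-1])
```

The CLE regime introduced above replaces this event-by-event simulation with a stochastic differential equation, which is what makes it attractive when copy numbers are large.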
in this casethe corresponding master equation for the cle is called the chemical fokker - planck equation ( cfpe ) which is a -dimensional parabolic partial differential equation , where is the number of the different chemical species present in the system .the fact that stochasticity is still present in the description of the chemical system , combined with the fact that the underlying cfpe is more amenable to rigorous analysis than the cme , has made the cle equation a very popular regime used in applications . however , while there are benefits to working with the cle / cfpe , this approximation is only valid in the limit of large system volume and provides poor approximations for systems possessing one or more chemical species with low copy numbers .furthermore , unlike the ssa / cme which ensures that there is always a positive ( or zero ) number of molecules in the system , the cfpe and cle can give rise to negative concentrations , so that the chemical species can attain negative copy numbers .this can have serious mathematical implications , since the cfpe equation might break down completely , due to regions in which the diffusion tensor is no longer positive definite , which makes the underlying problem ill - posed . on the level of the cfpe , one way to deal with such positivity issues is to truncate the domain and artificially impose no flux - boundary conditions along the domain boundary , which will have a negligible effect on the solution when it is concentrated far away from the boundary .when all chemical species exist in sufficiently high concentration , dirichlet boundary conditions can also be used if one solves the stationary cfpe as an eigenvalue problem .however , as shown in , these artificial boundary conditions can result in significant approximation errors when the solution is concentrated near the boundary .other alternatives have been proposed to overcome the behaviour of the cle close to the boundary , either by suppressing reaction channels which may cause negativity near the boundary , or by extending the domain of the process to allow exploration in the complex domain . in the later approachthe resulting process , called the complex cle will have a positive definite diffusion tensor for all time , thus avoiding such breakdowns entirely . however, this method does not accurately capture the cme behaviour near the boundary , and in areas where the cle is a poor approximation to the cme , the corresponding complex cle will suffer equally .these issues have motivated a number of hybrid schemes which have been obtained by treating only certain chemical species as continuous variables and the others as discrete . 
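To make the positivity issue concrete, the sketch below is a plain Euler-Maruyama discretisation of the CLE (not the weak trapezoidal integrator used later in this paper); clipping the propensities at zero keeps the noise amplitude real, but nothing prevents the state itself from leaving the positive orthant, which is exactly the failure mode discussed above.

```python
import numpy as np

def cle_euler_maruyama(x0, nu, propensities, t_final, dt, seed=0):
    """Euler-Maruyama discretisation of the chemical Langevin equation
        dX = sum_r nu_r a_r(X) dt + sum_r nu_r sqrt(a_r(X)) dW_r.
    Propensities are clipped at zero before the square root is taken; this keeps
    the noise amplitude real but does NOT keep the state non-negative."""
    rng = np.random.default_rng(seed)
    x = np.array(x0, dtype=float)
    n_steps = int(t_final / dt)
    traj = np.empty((n_steps + 1, x.size)); traj[0] = x
    for k in range(n_steps):
        a = np.maximum(propensities(x), 0.0)
        dW = rng.normal(0.0, np.sqrt(dt), size=a.size)
        x = x + (a * dt) @ nu + (np.sqrt(a) * dW) @ nu
        traj[k + 1] = x
    return traj
```

This is one motivation for the hybrid schemes just mentioned, which retain discrete dynamics precisely in the regions where such excursions matter.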
by doing so ,such schemes are able to benefit from the computational efficiency of continuum approximations while still taking into account discrete fluctuations when necessary .typically such schemes involve partitioning the reactions into `` fast '' and `` slow '' reactions , with the fast reactions modelled using a continuum approximation ( cle or the reaction rate equation ) , while using markov jump process to simulate the discrete reactions .chemical species which are affected by fast reactions are then modelled as continuous variables while the others are kept discrete .since the reaction rate depends on the state , it is possible that some fast reactions become slow and vice versa .this is typically accounted for by periodically repartitioning the reactions .based on this approach , a number of hybrid models have been proposed , such as , which couple deterministic reaction - rate equations for the fast reactions with markov jump dynamics for the slow , resulting in a piecewise - deterministic markov process for the entire system .error estimates for such systems , in the large volume limit , were carried out in .similar methods have been proposed , such as and more recently .other hybrid schemes also involve a similar partition into slow and fast species , however the evolution of the slow species is obtained by solving the cme directly , coupled to a number of reaction - rate equations for the fast reactions .the hybrid system is thus reduced to a system of odes .an error analysis of these schemes was carried out in . in this paper, we propose a hybrid scheme which uses langevin dynamics to simulate fast reactions coupled with jump / ssa dynamics to simulate reactions in which the discreteness can not be discounted .thus , unlike the previously proposed models , both the continuous and discrete parts of the model are described using a stochastic formulation .moreover , our scheme does not explicitly keep track of fast and slow reactions , but rather , the process will perform langevin dynamics in regions of abundance , jump dynamics in regions in where one of the involved chemical species are in small concentrations , and a mixture of both in intermediate regions .the resulting process thus becomes a jump - diffusion process with poisson distributed jumps .the preference of jump over langevin dynamics is controlled for each individual reaction by means of a _ blending _ function which is chosen to take value in regions of low concentration , in regions where all involved chemical species are abundant , and smoothly interpolates in between .the choice of the blending regions will depend on the constants of the propensity and are generally chosen so that the propensity is large in the continuum region , and small in the discrete region .hybrid models for chemical dynamics involving both jump and diffusive dynamics have been previously studied in various contexts .recently , a method based on a similar coupling of ssa and langevin dynamics was proposed .the authors introduce a partition of reactions into fast and slow reactions , applying the diffusion approximation to the fast reactions to obtain a jump - diffusion process .based on an a - posteriori error estimator the algorithm periodically repartitions the species accordingly . by introducing the blending region our approachno longer requires periodic repartitioning .other works which have considered hybrid schemes based on jump - diffusion dynamics include . 
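Returning to the blending functions themselves, the sketch below uses a cubic smoothstep between two copy-number thresholds as one admissible choice: equal to one at low copy numbers (pure jump dynamics) and zero at high copy numbers (pure Langevin dynamics). The functional form and the threshold names lo and hi are assumptions of this sketch rather than the specific form used in the numerical examples below, and the error analysis may require more smoothness than this C^1 interpolant provides.

```python
import numpy as np

def smoothstep(u):
    """C^1 interpolation: 0 for u <= 0, 1 for u >= 1, 3u^2 - 2u^3 in between."""
    u = np.clip(u, 0.0, 1.0)
    return u * u * (3.0 - 2.0 * u)

def blending(x, lo, hi):
    """One possible per-reaction blending function: 1 below lo (discrete/SSA regime),
    0 above hi (continuous/Langevin regime), smooth and monotone in between."""
    return 1.0 - smoothstep((x - lo) / (hi - lo))
```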
in a hybrid scheme based on a similar domain decomposition ideawas proposed for simulating spatially - extended stochastic - reaction diffusion models . in one part of the domain a sdewas used to simulate the position of the particles and on the other part a compartment - based jump process for diffusion was used .these two domains were separated by a sharp interface , where corrections to the transition probabilities at the interface were applied to ensure that probability mass was transferred between domains .while such a direct matching between continuum and discrete fluxes at the interface can accurately simulate systems having only reactions with unit jumps , for systems possessing jumps of length or higher , such a direct coupling would cause non - physical results .this scenario is analogous to _ ghost forces _ which arise in quasi - continuum methods used in the multiscale modelling of materials .overlap regions are also necessary for coupling brownian dynamics ( sdes ) with mean - field partial differential equations .the paper is organised as follows . in section [ sec : prelims ] after reviewing the cme / ssa and cle / cfpe formalisms we introduce blending functions and the hybrid jump - diffusion formalism . in section [ sec : derivation ] we derive weak error bounds for the hybrid scheme in the limit of large volume , and in particular show that the hybrid scheme does not perform worse than the cle in this regime . in section [ sec : algorithm ] we describe three possible discretisations of the process , which can be used in practise to simulate the jump - diffusion process .a number of numerical experiments which demonstrates the use of the hybrid scheme are detailed in sections [ sec : lotka_volterra ] , [ sec : dimer_steady_state ] and [ sec : exit_time ] .consider a biochemical network of chemical species interacting via reaction channels within an isothermal reactor of fixed volume .for , denote by the number of molecules of species at time , and let . under the assumption that the chemical species are well - mixed itcan be shown that is a continuous time markov process .when in state , the -th reaction gives rise to a transition with exponentially distributed waiting time with inhomogeneous rate , where and denote the propensity and stoichiometric vector corresponding to the -th reaction , respectively .more specifically , each reaction is of the form where and , for .let us denote and .the stoichiometric vectors are then given by and describe the net change in molecular copy numbers which occurs during the -th reaction .under the assumption of mass action kinetics , the propensity function for the -th reaction is assuming that if , to simplify notation . within the interval , we update the process can thus be expressed as the sum of poisson processes with inhomogeneous rates . as noted in , can be expressed as a random time change of unit rate poisson processes , where are independent unit - rate poisson processes .this is a continuous time markov process with infinitesimal generator the classical method for sampling realisations of is the gillespie ssa .given the current state at time , the time of next reaction and state are sampled as follows : 1 .let .sample , where ] . clearly ,as the process evolves , the -th reaction will then occur at satisfying this provides the basis of the next reaction method .suppose we are time , the next reaction will occur at for where ] .4 . set and for each .5 . set for each . 
6 .set and let be the reaction for which this minimum is realised . 7 .set and update the number of each molecular species according to reaction . 8 . for each k , set , and for the reaction , let ] , see . clearly , the lifting of to is not unique , and different extensions will give rise to different diffusion approximations .however , as we shall see in section [ sec : derivation ] , in the classical large volume rescaling , the dynamics of the process will be largely determined by the value of the propensities on the rescaled grid , and indeed , subject to the extension satisfying a number of assumptions , different extensions will lead to the diffusion approximation having weak error of the same order . in this section, we introduce a jump - diffusion process which provides an approximation which is intermediate between the gillespie ssa and cle by introducing a series of _ blending _functions which are used to blend the dynamics linearly between the ssa jump process and the cle .more specifically , given smooth functions ]is almost surely constrained within a bounded subset of .we note that the domain will depend on the initial condition .as it stands this assumption will not hold for general chemical reacting systems . under suitable conditions on the propensities it is possible to replace this assumption with a localization result showing that the probability of escaping the bounded set is exponentially small .we shall avoid this approach for simplicity , simply noting that one can always ensure this assumption by setting propensities to zero outside a fixed bounded region .chemical langevin dynamics are only a valid approximation of in the large volume limit .to study this regime , we introduce a system size which can be viewed as the ( dimensionless ) volume of the reactor . writing , we then rewrite the molecular copy number as where will be the vector of concentrations of each chemical species .we shall assume that each rate constant satisfies given this scaling assumption , we can always write the propensity for the -th reaction , , as where is with respect to . using ( [ notrescaledgenerator ] ) , the generator of the rescaled process is given as follows : we now introduce the hybrid jump - diffusion scheme .to do so we must extend propensities from the discrete lattice to on the continuous space .we shall make the following assumptions on the extension .[ ass : extension_assumptions ] the following properties hold for the extended propensities : 1 .they are non - negative , and lie in .2 . they are bounded , uniformly with respect to , and the same applies for their mixed derivatives up to order .3 . for each , is zero outside a bounded domain which contains .a extension of the propensities satisfying assumption is always possible . indeed , for each , set to be zero in . then one can extend the value of the propensities to by transfinite interpolation , see .such an extension may result in propensities which differ from the `` standard '' propensities typically used for the cle .in particular , propensities of the form must be modified so as to remain non - negative .such an explicit construction of extended propensities for unimolecular and bimolecular reactions of a single species can be found in ( * ? ? 
?* example 4.7 - 4.8 ) .using the extended propensities , one can extend the markov jump process to take initial conditions .the infinitesimal generator of is the natural extension of ( [ eq : generator_epsilon ] ) , also denoted by defined by , \quad \text{for all } \ f \in c_0(\mathbb{r}^n).\ ] ] for a fixed observable , define the value function \times\mathbb{r}^n \rightarrow \mathbb{r} ] follows immediately .moreover , if is locally bounded , then so is .clearly , for such that for all we have , for all ] define the following scalar quantities then, there exists constants , and and , , and independent of such that for ] , for all . given the extended propensities , we can apply the same large - volume rescaling to the hybrid process ( [ eq : hybrid ] ) to obtain a jump - diffusion given by where and are standard wiener and poisson processes , respectively , all mutually independent , and is the closest in to , or equivalently .the generator of this process is given by \\ + & \sum_{r=1}^r \left ( 1 - \beta_r(\mathbf{x } ) \right ) \, \widetilde{\lambda}_r^\varepsilon(\mathbf{x } ) \ , \boldsymbol{\nu}_r\cdot\nabla f(\mathbf{x } ) \\ + & \frac{\varepsilon}{2 } \sum_{r=1}^r \left ( 1 - \beta_r(\mathbf{x } ) \right ) \ , \widetilde{\lambda}_r^\varepsilon ( \mathbf{x } ) \ , ( \boldsymbol{\nu}_r\otimes\boldsymbol{\nu}_r ) : \nabla\nabla f(\mathbf{x}).\end{aligned}\ ] ] we now obtain a weak error estimate between the processes and in the large volume limit as .let blending functions satisfy assumption .let , then there exists a constant , independent of , such that - \mathbb{e}_{\mathbf{x}}\left[{g(\mathbf{z}^\varepsilon(t ) ) } \right ] \big\rvert \leq c \varepsilon^2 , \quad \mbox { for } t \in [ 0,t],\ ] ] where .let ] .we wish to obtain a bound on .then taking the derivative with respect to and using the fact that for all , we obtain .\end{aligned}\ ] ] since is in , we can apply taylor s theorem up to the second order on to obtain where for some lying on the line between and .from ( [ eq : ereq ] ) and the fact that , it follows that where is the semigroup operator corresponding to .applying the uniform bound ( [ eq : uniform_bound ] ) we thus have that .\ ] ] the remainder term in equation ( [ eq : ereq ] ) characterises the local error at the point .we note that it is non - zero only in regions where .intuitively we would expect this to imply that is a superior approximation to the standard cle .however , the global error estimate we derived is too coarse to capture the distinction between the two diffusion approximations , and thus we have only shown that the two approximations are consistent : in that the hybrid scheme does no worse than the cle in the large - volume limit .equation ( [ eq : hybrid ] ) provides a general framework for simulating chemical systems which can capture both the discrete and continuum nature of a biochemical system . any numerical scheme which can generate realisations of a jump - diffusion process with inhomogeneous jump rates with deterministic jump sizescan be used to simulate ( [ eq : hybrid ] ) . 
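As an illustration of what a discretisation of the hybrid process looks like in its crudest form, the step below freezes the propensities over each time interval and uses Euler-Maruyama for the continuous part; both simplifications are deliberate and make this a rougher scheme than any of the three algorithms described next, which account for the variation of the propensities along the Langevin path.

```python
import numpy as np

def hybrid_step(x, t, nu, propensities, blending, dt, rng):
    """One step of a simplified hybrid update: jump dynamics carry the weighted
    propensities beta_r*lambda_r, Langevin dynamics carry (1-beta_r)*lambda_r,
    and all rates are held constant over the step (a crude approximation)."""
    lam = propensities(x)
    beta = blending(x)
    jump_rates = beta * lam
    a0 = jump_rates.sum()
    tau = rng.exponential(1.0 / a0) if a0 > 0 else np.inf   # next candidate jump
    h = min(dt, tau)
    cont = (1.0 - beta) * lam                               # Langevin propensities
    dW = rng.normal(0.0, np.sqrt(h), size=lam.size)
    x = x + (cont * h) @ nu + (np.sqrt(cont) * dW) @ nu
    if tau <= dt:                                           # a discrete reaction fires
        r = rng.choice(len(lam), p=jump_rates / a0)
        x = x + nu[r]
    return x, t + h
```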
for illustrative reasons we describe three different possible numerical schemes , the first based on the gillespie ssa and the second based on the modified next reaction method proposed in , and discussed in section [ sec : prelims ]finally , we describe an alternative scheme based on adaptive thinning which works well for systems with bounded blending regions .the main difference between the purely jump case ( where these algorithms have been applied before ) and is the fact that in the blending region the propensity functions do not remain constant between two consecutive reactions . for the sake of clarity ,given the propensities and blending functions define pseudocodes of each approach are given as algorithms [ alg : hybrid ] , [ alg : hybrid1 ] and [ alg : hybrid3 ] .they all have the same input , namely propensities , blending functions , the stoichiometric matrix , the final time of simulation , time steps for the cle and ( here , ) and initial state .set .+ the steps to simulate the jump - diffusion process based on an extension of the gillespie ssa are described in algorithm [ alg : hybrid ] . as we can see in regions where are , the scheme reduces to the standard gillespie ssa , and thus simulates the discrete dynamics exactly .analogously , in regions where are all zero one can use a larger time - step to evolve cle since it is not necessary to approximate the solutions of in such regions , which can only be done with accuracy . in the intermediate regime for the continuous part of the dynamics the following cleis used if during the cle time step a discrete event is occurring at time we simulate the cle up to that time and then add the discrete event . to simulate the diffusion part of the hybrid scheme we make use of the weak trapezoidal method described in .given the current state we perform the following two steps to obtain : ^{+ } } \boldsymbol{\nu}_j \,\xi'_j,\end{aligned}\ ] ] where ^{+ } = \max\left(0 , a\right) ] random numbers , . +set .set and , for each .+ compute the weighted propensities .+ in the special case where one can bound the value of the weighted propensities , it is possible to use a third method , based on standard thinning methods for sampling inhomogeneous poisson processes , see for example and more recently . the main advantage of this scheme would be that it avoids the necessity to approximate the solution of the first passage time problem associated with the modified next reaction method , thus being potentially more efficient . while one can find such bounds for many chemical systems , the added caveat is that the bounds must be known a priori , and choosing them too looselywill severely degrade the performance of the scheme . to this end, we shall assume that there exist constants , where these constants will form the additional input of algorithm [ alg : hybrid3 ] .suppose that for some . to compute the next jump time of the -th reaction, we sample from a dominating homogeneous process with rate , so that the next jump time for the -th reaction occurs at time where .\ ] ] suppose that is the first jump occurring after the current time . to determine whether the reaction will occur at time , we sample $ ] and perform the reaction only if where is the state of the process after simulating the langevin dynamics from time to .this thinning approach can be integrated into the gillespie ssa , demonstrated in algorithm [ alg : hybrid3 ] . at each timestep, three cases can occur . if ( i.e. 
the process is a pure jump process ) we use the standard gillespie ssa .if ( i.e. the process is purely diffusive ) , then we perform a `` macro - step '' of the cle dynamics of size .the final case is where there is both diffusion and jumps , we simulate the homogeneous dominating process with rate , and accept / reject according to condition ( [ eq : thinning_condition ] ) .the main advantage is that we avoid the error arising from the piecewise constant approximation of integral .in particular , one can use higher order methods for simulating the langevin dynamics within the blending region to obtain a better weak order of convergence in .the drawback of this approach is the necessity to know _ a priori _ the upper bounds , assuming such bounds exist .care must be taken so that the bounds are not too pessimistic , otherwise the dominating homogeneous poisson process will fire very rapidly when the system lies within a blending region . in such casesthe bounds can be tuned by running exploratory simulations and keeping track of the acceptance rate for each reaction .set .we illustrate the main features of the hybrid framework described in section [ subsec : stoc_fram ] and demonstrate the use of algorithms [ alg : hybrid ] , [ alg : hybrid1 ] and [ alg : hybrid3 ] by considering three numerical examples .each of these examples were implemented in the programming language julia . as a first example , we consider a chemical system consisting of two reacting chemical species and undergoing the following reactions the chemicals and can be considered to be in a predator - prey " relationship with and as prey and predator , respectively .the reaction - rate equations corresponding to reactions ( [ eq : lotka ] ) would then be the standard lotka - volterra model .we choose the dimensionless parameters and the initial condition is chosen to be and .a histogram generated from independent ssa simulations of this system up to time is shown in figure [ fig : lotka1 ] . the dashed line depicts the evolution of the deterministic reaction rate equation starting from the same initial point .one sees that the nonequilibrium dynamics force the system to spend time in both low and high concentration regimes . due to the time spent in states with high propensity ,the ssa is computationally expensive to simulate .it is clear that away from the boundary , using an approximation such as the cle would be computationally beneficial .the cle corresponding to ( [ eq : lotka ] ) , choosing the multiplicative noise as described in , is given by where , and , are three standard independent brownian motions .for a non - negative initial condition , the process will remain nonnegative , however , this will not be the case for fixed - timestep discretisation .in particular , an euler discretisation ( will contain a term of the form , where is a standard gaussian random variable , which can cause the discretised process to cross the axis if the process sufficiently close to this line . thus , it is essential that reflective boundary conditions are imposed to ensure positivity .however , even if positivity is guaranteed , there is no reason to believe that the cle will correctly approximate the dynamics near the axes .this motivates the use of the hybrid model to efficiently simulate this chemical system . 
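The accept/reject rule at the heart of the thinning variant can be isolated as follows: a candidate firing time is drawn from the dominating homogeneous Poisson process of rate lam_bar, the Langevin state is advanced to that time, and the firing is accepted with probability beta_r(x)*lambda_r(x)/lam_bar, which is why the a priori bound on the weighted propensity is needed. The helper below is a sketch of that single decision, with the continuous evolution delegated to a user-supplied callback; the function and argument names are illustrative.

```python
import numpy as np

def thinning_step(t, lam_bar, weighted_propensity, advance_langevin, rng):
    """One thinning decision for a single reaction channel: lam_bar must bound
    beta_r(x)*lambda_r(x) from above, and advance_langevin(t_cand) must return
    the continuously evolved state at the candidate time."""
    t_cand = t + rng.exponential(1.0 / lam_bar)             # dominating Poisson process
    x_cand = advance_langevin(t_cand)                       # Langevin state at t_cand
    accept = rng.random() * lam_bar <= weighted_propensity(x_cand)
    return t_cand, x_cand, accept
```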
to simulate the hybrid model we use algorithm [ alg : hybrid ] , choosing blending functions , and as described in ( [ eq : multi_species_blending ] ) .we simulate the langevin dynamics in the blending region using a timestep of size , and a timestep outside the blending regions . in figure[ fig : lotka_timedep](a ) , we plot at times , generated from independent realisations of each model .error bars denote confidence intervals .the hybrid models were simulated for different values of and , , however , the results were not plotted as the monte carlo error was too large to distinguish between the schemes . the hybrid scheme displayed in figure [ fig : lotka_timedep](a ) has blending regions with parameters and .while all models agree approximately at small times , for larger the averages generated from ssa and cle differ significantly . indeed , for ( corresponding to a single period of the deterministic system ) the means fromthe ssa and cle differ by three orders of magnitude . on the other hand ,the hybrid scheme remains in good agreement with the ssa . in figure[ fig : lotka_timedep](b ) we compare the average computational ( cpu ) time to simulate each model up to time , averaged over realisations .this was measured in seconds , using the standard julia functions tic and toc .the computational cost of the hybrid scheme was plotted for three different choices of blending regions namely , and , respectively . for small ssa and hybrid schemes require a comparable amount of computational effort .however , as increases , the computational cost of the ssa scheme dramatically increases , while the cost of the hybrid scheme remains approximately constant . to compute the average value at time ,the hybrid scheme was on average orders of magnitude cheaper to run .as expected , the computational effort is smaller when the blending region is closer to the boundary .however , relative to the computational cost of the ssa and cle , varying the blending region does not significantly alter performance .for these simulations , the value of and where chosen manually by computing the error for a number of short exploratory runs. a more sophisticated implementation of the hybrid model would require an adaptive scheme for the langevin part of the process .ssa simulations of up to time starting from and . the dashed line is the solution of the corresponding deterministic reaction rate equations._,scaledwidth=75.0% ] as a function of for the ssa , cle and hybrid schemes ; _ ( b ) _ the corresponding average computational ( cpu ) time in seconds as a function of ._,title="fig:",scaledwidth=46.0% ] as a function of for the ssa , cle and hybrid schemes ; _ ( b ) _ the corresponding average computational ( cpu ) time in seconds as a function of ._,title="fig:",scaledwidth=46.0% ] -4.6 cm as a second example , we consider a chemical system consisting of two species and in a reactor of volume .the species are subject to the following system of four chemical reactions : this corresponds to a jump process having stoichiometric vectors with corresponding propensities ( depending on the volume ) : the dimensionless reaction rates are given by , , and .as a first numerical experiment , we compute the evolution of the distribution of over time .we assume that , and that the initial distribution is a `` discrete '' gaussian mixture , namely the gaussian mixture restricted to the lattice . 
for each scheme ( ssa , cle and hybrid ) , the distribution is approximated by a histogram generated from independent realisations of the process . the cle was simulated using the weak second order trapezoidal scheme described in . to ensure positivity of the cle , reflective boundary conditions were imposed at the boundary of the orthant . the hybrid scheme was simulated using the hybrid next reaction scheme detailed in algorithm [ alg : hybrid1 ] . the timestep was chosen to be . the blending functions for the hybrid scheme were chosen according to ( [ eq : multi_species_blending ] ) , with and . in figure [ fig : dimer_timedep ] we plot the distribution approximated using each scheme at times and . as expected , when the concentrations of and remain abundant , all three models agree . as the distribution approaches the low concentration regions , the discrete nature of the chemical system becomes important , and the cle is no longer able to correctly capture the dynamics . indeed , at time one observes a significant difference between the ssa and cle distributions . on the other hand , the hybrid scheme provides a good approximation to the ssa at all times , while benefiting from a computational advantage in the large concentration regimes . [ figure fig : dimer_timedep : distribution of computed using the ssa , cle and the hybrid scheme , starting from the initial distribution given above . ] the corresponding markov jump process can be shown to possess a unique stationary steady state . we use all three models to compute the first two moments and of the stationary distribution , for decreasing values of . the moments were approximated using ergodic averages of the discretised schemes , i.e. , where is the value of the discretised process at time and are the jump times of the process . each process was simulated up to time . for the hybrid and cle schemes a timestep of was used throughout . the blending region was chosen as in the previous example . the first and second moments are plotted in figure [ fig : dimer_stat ] for where . while there is good agreement between all three schemes for large , the cle consistently overestimates the moments when is small . on the other hand , the hybrid scheme remains robust to this rescaling . [ figure fig : dimer_stat : with parameters as in the previous figure ; ( a ) comparison of the first moments and ( b ) comparison of the second moments as a function of . ] as a final example , we consider the problem of computing the mean extinction time ( met ) for a one - dimensional birth - death process , namely a system of two reactions for one chemical species : by assuming that for all , the state of the system lies within the finite domain . for , the birth - death process will hit the extinction state with probability . we denote by the met of the process starting from . following directly the approach of ( section 2.1 of ) ( also see ) , we obtain . the corresponding cle is given by , for standard independent brownian motions and . the mean first time of reaching starting from can be calculated explicitly as , where is the potential . since is a unique solution of , the stochastic birth - death process will fluctuate around for a long time before eventually going extinct .
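the ergodic averages mentioned above weight each visited state by its holding time ; a small helper of the following kind ( ours , since the formula in the text is elided ) recovers the first two stationary moments from a piecewise - constant trajectory .

```python
import numpy as np

def ergodic_moments(jump_times, values, t_end):
    # values[k] is the (scalar) state held on [jump_times[k], jump_times[k+1]);
    # the final state is held until t_end.
    times = np.append(np.asarray(jump_times, dtype=float), t_end)
    dt = np.diff(times)                      # holding time of each state
    w = dt / (t_end - times[0])              # normalised time weights
    values = np.asarray(values, dtype=float)
    return np.sum(w * values), np.sum(w * values**2)   # first and second moments
```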
in general , we expect that any approximation which correctly describes the extinction time behaviour of the birth - death process must accurately capture the behaviour of the process , particularly near and ( and possibly all points in between ) . if is large , then a gaussian approximation ( e.g. the cle or the system size expansion ) would accurately capture the fluctuations around the quasi - equilibrium . however , as observed in , such approximations would suffer close to . this suggests that the hybrid scheme with a blending region supported between and would be a good candidate for an approximation to the process . similar observations have been made in more general chemical systems , where it is observed that diffusion approximations of jump processes are not able to correctly capture rare events , even when the system is in a regime where the cle correctly captures both the transient and stationary dynamics of the process . to test the hybrid scheme for met problems , we consider the above birth - death process with , so that . we compute the mean time to extinction of the birth - death process starting from . following the discussion at the end of section [ sec : prelims ] , the hybrid scheme is considered extinct when the process satisfies . we choose the blending functions according to ( [ eq : multi_species_blending ] ) , simulating the process for different values of and . since the blending region is bounded , we can use the thinning - based algorithm [ alg : hybrid3 ] to simulate the jump - diffusion process within the blending region , choosing and . a timestep of is chosen throughout . in figure [ fig : mfpt_bd ] , we plot the met of the hybrid scheme for varying , each point generated from independent realisations , and for different choices of blending regions . the met for the cme and cle , computed directly from ( [ eq : bd_mfpt ] ) and ( [ eq : bd_cle_mfpt ] ) , respectively , are shown for comparison . it is evident from the numerical experiments that the hybrid scheme provides a better approximation for the met compared to the cle . however , the improvement is not uniform over all timescales : the region in which the jump process is simulated must be increased to correctly capture rare events . figure [ fig : mfpt_bd ] suggests that the width of the blending region also plays a role in the simulation . indeed , a blending region with appears to be sufficient to accurately estimate the met up to , although it is likely that this approximation will break down if is increased further . the necessity of tuning the blending region to capture the escape time dynamics is a disadvantage . nonetheless , the hybrid scheme provides us with an approach for improving the met estimate obtained from the cle , at the `` cost '' of having to simulate discrete jumps in ( increasingly large ) regions of the domain .
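for reference , the exact met of a finite birth - death chain can be computed by the standard backward recursion for the increments of the mean hitting time ; the sketch below assumes a chain on { 0 , ... , n } with 0 absorbing and birth rates that vanish at the upper boundary , and takes user - supplied rate arrays , since the specific propensities used in the text are not reproduced here .

```python
import numpy as np

def met_birth_death(lam, mu, m):
    # mean extinction time T_m for a birth-death chain on {0, ..., N} with 0
    # absorbing; lam[n], mu[n] are the birth/death rates in state n, and lam[N]
    # is assumed to be zero.  backward recursion for the increments
    # d_n = T_n - T_{n-1}:  d_N = 1/mu[N],  d_n = (1 + lam[n]*d[n+1]) / mu[n],
    # and T_m = d_1 + ... + d_m.
    N = len(mu) - 1
    d = np.zeros(N + 1)
    d[N] = 1.0 / mu[N]
    for n in range(N - 1, 0, -1):
        d[n] = (1.0 + lam[n] * d[n + 1]) / mu[n]
    return float(d[1:m + 1].sum())
```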
, the corresponding cle and the hybrid scheme .the parameters used are and and the process is started from .we use : _ ( a ) _ blending regions of width ; _ ( b ) _ blending regions of width ._ , title="fig:",scaledwidth=46.0% ] , the corresponding cle and the hybrid scheme .the parameters used are and and the process is started from .we use : _ ( a ) _ blending regions of width ; _ ( b ) _ blending regions of width ._ , title="fig:",scaledwidth=46.0% ] -4.6 cmin this paper we have introduced a jump - diffusion model for simulating multiscale reaction systems efficiently while still accounting for the discrete fluctuations where necessary .fast reactions are simulated using the cle , while the standard discrete description is used for slow ones .our approach involves the introduction of a set of blending functions ( [ eq : multi_species_blending ] ) which allow one to make explicit in which regions the continuum approximation should be expected to hold .based on the representation of the markov jump process as a time changed poisson process , we described three different schemes , based on to numerically simulate the jump - diffusion model in the three different regimes ( discrete , continuous and hybrid ) . to demonstrate the efficacy of the schemes , we simulated equilibrium distributions of chemical systems and computed extinction times of chemical species for illustrative chemical systems .the results suggest that the proposed algorithm is robust , and is able to handle multiscale processes efficiently without the breakdown associated when using the cle directly .the research leading to these results has received funding from the european research council under the _european community _ s seventh framework programme ( _ fp7/2007 - 2013_)/erc _ grant agreement _n 239870 .radek erban would also like to thank the royal society for a university research fellowship and the leverhulme trust for a philip leverhulme prize .andrew duncan was also supported by the epsrc grant ep / j009636/1 .10 url # 1`#1`urlprefixhref # 1#2#2 # 1#1 g. klingbeil , r. erban , m. giles , p. maini , fat vs. thin threading approach on gpus : application to stochastic simulation of chemical reactions , ieee transactions on parallel and distributed systems 23 ( 2 ) ( 2012 ) 280 - 287 .r. erban , s.j .chapman , i. kevrekidis , t. vejchodsk , analysis of a stochastic chemical system close to a sniper bifurcation of its mean - field model , siam journal of applied mathematics 70 ( 3 ) ( 2009 ) 9841016 .l. ferm , p. ltstedt , p. sjberg , adaptive , conservative solution of the fokker - planck equation in molecular biology , technical report , available as http://www.it.uu.se/research/reports/2004-054 ( 2004 ) .b. mlykti , k. burrage , k. zygalakis , fast stochastic simulation of biochemical reaction systems by alternative formulation of the chemical langevin equation , journal of chemical physics 132 ( 16 ) ( 2010 ) 164109 . c. doering , k. sargsyan , l. sander , extinction times for birth - death processes : exact results , continuum asymptotics , and the failure of the fokker - planck approximation , multiscale modeling & simulation 3 ( 2 ) ( 2005 ) 283299 .
|
stochasticity plays a fundamental role in various biochemical processes , such as cell regulatory networks and enzyme cascades . isothermal , well - mixed systems can be modelled as markov processes , typically simulated using the gillespie stochastic simulation algorithm ( ssa ) . while easy to implement and exact , the computational cost of using the gillespie ssa to simulate such systems can become prohibitive as the frequency of reaction events increases . this has motivated numerous coarse - grained schemes , where the `` fast '' reactions are approximated either using langevin dynamics or deterministically . while such approaches provide a good approximation when all reactants are abundant , the approximation breaks down when one or more species exist only in small concentrations and the fluctuations arising from the discrete nature of the reactions becomes significant . this is particularly problematic when using such methods to compute statistics of extinction times for chemical species , as well as simulating non - equilibrium systems such as cell - cycle models in which a single species can cycle between abundance and scarcity . in this paper , a hybrid jump - diffusion model for simulating well - mixed stochastic kinetics is derived . it acts as a bridge between the gillespie ssa and the chemical langevin equation . for low reactant reactions the underlying behaviour is purely discrete , while purely diffusive when the concentrations of all species is large , with the two different behaviours coexisting in the intermediate region . a bound on the weak error in the classical large volume scaling limit is obtained , and three different numerical discretizations of the jump - diffusion model are described . the benefits of such a formalism are illustrated using computational examples . chemical master equation , chemical langevin equation , jump - diffusion process , hybrid scheme .
|
the two fundamental characteristics of the wireless medium , namely _ broadcast _ and _ superposition _ , present different challenges in ensuring secure and reliable communications in the presence of adversaries .the broadcast nature of wireless communications makes it difficult to shield transmitted signals from unintended recipients , while superposition can lead to the overlapping of multiple signals at the receiver . as a result ,adversarial users are commonly modeled either as ( 1 ) a passive _ eavesdropper _ that tries to listen in on an ongoing transmission without being detected , or ( 2 ) a malicious transmitter ( _ jammer _ ) that tries to degrade the signal quality at the intended receiver .two distinct lines of research have developed to analyze networks compromised by either type of adversary , as summarized below . a network consisting of a transmitter - receiver pair and a passive eavesdropper is commonly referred to as the _ wiretap _ channel .the information - theoretic aspects of this scenario have been explored in some detail . in particular , this work led to the development of the notion of _ secrecy capacity _ , which quantifies the maximal rate at which a transmitter can reliably send a secret message to the receiver , without the eavesdropper being able to decode it .ultimately , it was shown that a non - zero secrecy capacity can only be obtained if the eavesdropper s channel is of lower quality than that of the intended recipient .the secrecy capacity metric for the multiple - input multiple - output ( mimo ) wiretap channel , where all nodes may possess multiple antennas , has been studied in - , for example .there are two primary categories of secure transmission strategies for the mimo wiretap channel , depending on whether the instantaneous channel realization of the eavesdropper is known or unknown at the transmitter . in this workwe assume that this information is not available , and thus the transmitter incorporates an `` artificial interference '' signal - along with the secret message in an attempt to degrade the eavesdropper s channel , as elaborated on in section [ sec : model ] .the impact of malicious jammers on the quality of a communication link is another problem of long - standing interest , especially in mission - critical and military networks .a common approach is to model the transmitter and the jammer as players in a game - theoretic formulation with the mutual information as the payoff function , and to identify the optimal transmit strategies for both parties - .recent work has extended this technique to compute the optimal spatial power allocation for mimo and relay channels with various levels of channel state information ( csi ) available to the transmitters - . in this paper , we consider a mimo communication link in the presence of a more sophisticated and novel adversary , one with the dual capability of either passively eavesdropping or actively jamming any ongoing transmission , with the objective of causing maximum disruption to the ability of the legitimate transmitter to share a secret message with its receiver .the legitimate transmitter now faces the dilemma of establishing a reliable communication link to the receiver that is robust to potential jamming , while also ensuring confidentiality from interception . 
since it is not clear _ a priori _what strategies should be adopted by the transmitter or adversary per channel use , a game - theoretic formulation of the problem is a natural solution due to the mutually opposite interests of the agents . unlike the jamming scenarios mentioned above that do not consider link security , the game payoff function in our applicationis chosen to be the ergodic _mimo secrecy rate _ between the legitimate transmitter - receiver pair .related concurrent work on the active eavesdropper scenario has focused on single - antenna nodes without the use of artificial interference , possibly operating together with additional ` helping ' relays .the single - antenna assumption leads to a much more restrictive set of user strategies than the mimo scenario we consider .the contributions of the paper are as follows : ( 1 ) we show how to formulate the mimo wiretap channel with a jamming - capable eavesdropper as a two - player zero - sum game , ( 2 ) we characterize the conditions under which the strategic version of the game has a pure - strategy nash equilibrium , ( 3 ) we derive the optimal mixed strategy profile for the players when the pure - strategy nash equilibrium does not exist , and ( 4 ) we study the extensive or stackelberg version of the game where one of the players moves first and the other responds , and we also characterize the various equilibrium outcomes for this case under perfect and imperfect information .these contributions appear in the paper as follows .the assumed system model and csi assumptions are presented in the next section .the strategic formulation of the wiretap game is described in section [ sec : strats ] , where the two - player zero - sum payoff table is developed , the conditions for existence of pure - strategy nash equilibria are derived , and the optimal mixed strategy formulation is discussed .the extensive version of the wiretap game with perfect and imperfect information where the players move sequentially is detailed in section [ sec : extensive ] .outcomes for the various game formulations are studied via simulation in section [ sec : sim ] , and conclusions are presented in section [ sec : concl ] ._ notation _ : we will use to denote a circular complex gaussian distribution with zero - mean and unit variance .we also use to denote expectation , for mutual information , for the transpose , for the hermitian transpose , for the matrix inverse , for the trace operator , to denote the matrix determinant , is the ordered eigenvalue of , and represents an identity matrix of appropriate dimension .we study the mimo wiretap problem in which three multiple - antenna nodes are present : an -antenna transmitter ( alice ) , an -antenna receiver ( bob ) , and a malicious user ( eve ) with antennas , as shown in fig .[ fig_mimowiretap ] .we assume that alice does not have knowledge of the instantaneous csi of the eavesdropper , only the statistical distribution of its channel , which is assumed to be zero - mean with a scaled - identity covariance .the lack of instantaneous eavesdropper csi at alice precludes the joint diagonalization of the main and eavesdropper channels . 
instead , as we will show , alice has the option of utilizing all her power for transmitting data to bob , regardless of channel conditions or potential eavesdroppers , or optimally splitting her power and simultaneously transmitting the information vector and an `` artificial interference '' signal that jams any unintended receivers other than bob .the artificial interference scheme does not require knowledge of eve s instantaneous csi , which makes it suitable for deployment against passive eavesdroppers , .eve also has two options for disrupting the secret information rate between alice and bob : she can either eavesdrop on alice or jam bob , under a half - duplex constraint .when eve is in passive eavesdropping mode , the signal received by bob is where is the signal vector transmitted by alice , is the channel matrix between alice and bob with i.i.d elements drawn from the complex gaussian distribution , and is additive complex gaussian noise .when eve is not jamming , she receives where is the channel matrix between alice and eve with i.i.d elements drawn from the complex gaussian distribution , and is additive complex gaussian noise .the background noise at all receivers is assumed to be spatially white and zero - mean complex gaussian : , where indicates bob or eve , respectively .the receive and transmit channels of the eavesdropper have gain factors and respectively .these scale factors may be interpreted as an indicator of the relative distances between eve and the other nodes . on the other hand ,when eve decides to jam the legitimate channel , bob receives where is the gaussian jamming signal from eve and is the channel matrix between eve and bob with i.i.d elements distributed as . due to the half - duplex constraint , eve receives no signal when she is jamming ( ) .alice s transmit power is assumed to be bounded by : and similarly eve has a maximum power constraint of when in jamming mode . to cause maximum disruption to alice and bob s link ,it is clear that eve will transmit with her full available power when jamming . in the most general scenario where alice jams eve by transmitting artificial interference, we have where are the , precoding matrices for the information vector and uncorrelated jamming signal respectively . to ensure that the artificial interference does not interfere with the information signal , a common approach taken in the literature , is to make these signals orthogonal when received by bob .if alice knows , this goal can be achieved by choosing and as disjoint sets of the right singular vectors of .note that if the users have only a single antenna , the effect of the artificial interference can not be eliminated at bob , and it will degrade the snr of both bob and eve .this makes it unlikely that alice will employ a non - zero artificial interference signal when she has only a single transmit antenna , which significantly restricts alice s transmission strategy .the matrix may be expressed as where are the covariance matrices associated with and , respectively . if we let denote the fraction of the total power available at alice that is devoted to the information signal , then and . 
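the precoder construction described above ( disjoint sets of right singular vectors of the main channel , so that the received data and artificial interference occupy orthogonal subspaces at bob ) can be sketched as follows ; the matrix names follow the text , while the helper name and the choice of the d strongest directions for the data streams are our assumptions .

```python
import numpy as np

def an_precoders(H_ba, d):
    # split the right singular vectors of the main channel H_ba into a data
    # precoder V1 (first d columns) and an artificial-interference precoder V2
    # (remaining columns).
    _, _, Vh = np.linalg.svd(H_ba)
    V = Vh.conj().T
    return V[:, :d], V[:, d:]
```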
due to the zero - forcing constraint ,it is clear that any power available to alice that is not used for the desired signal will be used for jamming , so between the signal and artificial interference , alice will transmit with full power .the covariance matrices of the received interference - plus - noise at bob and eve are where is the covariance of the jamming signal transmitted by eve , . note that we have assumed that alice s jamming signal ( if any ) is orthogonal to the information signal received by bob , and hence , from the point of view of mutual information , can be ignored in the expression for . for our purposes, we assume that alice splits her transmit power between a stochastic encoding codebook and artificial interference for every channel use in _ all _ scenarios , while bob employs a deterministic decoding function .firstly , this ensures that the general encoding and decoding architecture of the alice - bob link remains fixed irrespective of eve s actions .secondly , for a point - to - point channel without an eavesdropper ( _ i.e. , _ when the eavesdropper is jamming and not listening ) , using a stochastic codebook does not offer any advantage over a conventional codebook , but it does not hurt either , i.e. , the receiver still reliably decodes the transmitted codeword . given the signal framework introduced above , we are ready to discuss the important issue of csi . we have already indicated that alice knows in order to appropriately precode the jamming and information signals via and , conceivably obtained by public feedback from bob after a training phase . at the receiver side, we will assume that eve knows the channel from alice and the covariance of the interference and noise , and similarly we will assume that bob knows and .all other csi at the various nodes is assumed to be non - informative ; the only available information is that the channels are composed of independent random variables .this implies that when eve jams bob , her lack of information about and the half - duplex constraint prevents her from detecting the transmitted signal and applying correlated jamming .consequently , she will be led to uniformly distribute her available power over all transmit dimensions , so that .similarly , when alice transmits a jamming signal , it will also be uniformly distributed across the available dimensions : . while in principle alice could use her knowledge of to perform power loading , for simplicity and robustnesswe will assume that the power of the information signal is also uniformly distributed , so that .given the above assumptions , equations ( [ qt])-([eq : qb_qe ] ) will simplify to where we have defined . 
the mimo secrecy capacity between alice and bob is obtained by solving where are the random variable counterparts of the realizations .given the csi constraints discussed above , such an optimization can not be performed since alice is unaware of the instantaneous values of all channels and interference covariance matrices .consequently , we choose to work with the lower bound on the mimo ergodic secrecy capacity based on gaussian inputs and uniform power allocation at all transmitters : where we define .this serves as a reasonable metric to assess the relative security of the link and to explain the behavior of the players .recall that we assume alice has instantaneous csi for the link to bob and only statistical csi for eve , and the achievability of an ergodic secrecy rate for such a scenario was shown in .using ergodic secrecy as the utility function for the game between alice and eve implies that a large number of channel realizations will occur intermediate to any changes in their strategy .that is , the physical layer parameters are changing faster than higher ( _ e.g. , _ application ) layer functions that determine the user s strategy .thus , the expectation is taken over all channel matrices ( including ) , which in turn provides alice and eve with a common objective function , since neither possesses the complete knowledge of that is needed to compute the instantaneous mimo secrecy rate .eve must decide whether to eavesdrop or jam with an arbitrary fraction of her transmit power .alice s options include determining how many spatial dimensions are to be used for data and artificial interference ( if any ) , and the appropriate fraction that determines the transmit power allocated to them .as described in , there are several options available to alice for choosing and depending upon the accuracy of her csi , ranging from an exhaustive search for optimal values to lower - complexity approaches based on fixed - rate assumptions .numerical results from this previous work have indicated that the achievable secrecy rate is not very sensitive to these parameters , and good performance can be obtained for a wide range of reasonable values .the general approach of this paper is applicable to essentially any value for and , although the specific results we present in the simulation section use a fixed value for and find the optimal value for based on under the assumption that the eavesdropper is in fact eavesdropping , and not jamming . in section [ sec : strats ]we show that it is sufficient to consider a set of two strategies for both players without any loss in optimality .in particular , we show that alice need only consider the options of either transmitting the information signal with full power , or devoting an appropriate amount of power and signal dimensions to a jamming signal .on the other hand , eve s only reasonable strategies are to either eavesdrop passively or jam bob with all her available transmit power .we will denote eve s set of possible actions as to indicate either `` eavesdropping '' or `` jamming , '' while alice s will be expressed as to indicate `` full - power '' devoted to the information signal , or a non - zero fraction of the power allocated to `` artificial interference . ''the secrecy rates that result from the resulting four possible scenarios will be denoted by , where and . 
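the ergodic secrecy - rate lower bound used as the payoff can be estimated by monte carlo over channel realisations ; the sketch below assumes i.i.d. cn(0,1) channel entries , uniform power allocation as described above , and eve in passive eavesdropping mode , with all names and defaults purely illustrative rather than the values used in the paper .

```python
import numpy as np

def ergodic_secrecy_rate(na, nb, ne, P_a, phi, d, g1=1.0, sigma2=1.0, trials=2000, rng=None):
    rng = np.random.default_rng() if rng is None else rng
    cn = lambda m, n: (rng.standard_normal((m, n)) + 1j * rng.standard_normal((m, n))) / np.sqrt(2.0)
    rb, re = 0.0, 0.0
    for _ in range(trials):
        H_ba, H_ea = cn(nb, na), cn(ne, na)
        _, _, Vh = np.linalg.svd(H_ba)
        V = Vh.conj().T
        V1, V2 = V[:, :d], V[:, d:]                        # data / artificial-noise precoders
        Qa = (phi * P_a / d) * (V1 @ V1.conj().T)          # information-signal covariance
        Qj = ((1.0 - phi) * P_a / max(na - d, 1)) * (V2 @ V2.conj().T)
        Ke = sigma2 * np.eye(ne) + g1 * (H_ea @ Qj @ H_ea.conj().T)   # interference + noise at eve
        rb += np.log2(np.real(np.linalg.det(np.eye(nb) + (H_ba @ Qa @ H_ba.conj().T) / sigma2)))
        re += np.log2(np.real(np.linalg.det(np.eye(ne) + np.linalg.solve(Ke, g1 * H_ea @ Qa @ H_ea.conj().T))))
    return max((rb - re) / trials, 0.0)                    # lower bound, clipped at zero
```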
assuming gaussian inputs and , the mimo secrecy rate between alice and bob when eve is in eavesdropping mode is whereas the secrecy rate when eve is jamming reduces to where denotes the transmission strategies available to alice .we refer to ( [ eq : payoff_j ] ) as a secrecy rate even though there is technically no eavesdropper , since eve s mutual information is identically zero and alice still uses a stochastic encoder ( cf .[ sec : model ] ) .therefore , when evaluating the secrecy rate definition ( 11 ) for the case where eve chooses to jam , the second term is zero which yields and in ( [ eq : payoff_j ] ) as the effective secrecy rate .recall that the definition of the secrecy rate is the maximum transmission rate which can be reliably decoded by bob while remaining perfectly secret from eve , which is still satisfied by the rates in ( [ eq : payoff_j ] ) .note also that when alice employs artificial interference , a choice for and must be made that holds regardless of eve s strategy .therefore , the values of and that are numerically computed to maximize in ( [ eq : payoff_e2 ] ) remain unchanged for in ( [ eq : payoff_j ] ) .when alice transmits with full power , then , where , and the precoder consists of the right singular vectors of corresponding to the largest singular values . while alice uses the same type of encoder regardless of eve s strategy , achieving the rates in ( [ eq : payoff_e2])-([eq : payoff_j ] ) requires adjustments to the code rate that _ will _ depend on eve s actions .for example , if alice is transmitting with full power ( strategy ) , the code rate needed to achieve either or in ( [ eq : payoff_e2 ] ) or ( [ eq : payoff_j ] ) will be different .thus , we assume that alice can be made aware of eve s strategy choice , for example through feedback from bob , in order to make such adjustments)-([eq : payoff_j ] ) will be valid . ] .such behavior is not limited to just alice and bob ; eve also makes adjustments based on alice s choice of strategy .in particular , when eve is eavesdropping , her method of decoding alice s signal will depend on whether or not alice is transmitting artificial interference .we do not consider adjustments such as these as part of alice or eve s strategy _ per se _ , which in our game theory framework is restricted to the decision of whether or not to use artificial interference .we assume that minor adaptations to the coding or decoding algorithm for alice and eve occur relatively quickly , and that any resulting transients are negligible due to our use of ergodic secrecy rate as the utility function .the more interesting question is whether or not alice and eve decide to change strategies based on the actions of the other is addressed in section [ sec : extensive ] . in the game - theoretic analysis of the next two sections, we will utilize the following general properties of the mimo wiretap channel : 1 . 2 . 
the validity of ( _ _ p__2 ) is obvious ; if alice employs artificial interference , it reduces the power allocated to the information signal , which in turn can only decrease the mutual information at bob .since eve is jamming , her mutual information is zero regardless of alice s strategy , so can never be larger than .the validity of ( _ _ p__1 ) can be established by recalling that alice chooses a value for that maximizes , assuming eve is eavesdropping .since is an available option and corresponds to , alice can do no worse than in choosing the optimal for strategy .in this section we construct the zero - sum model of the proposed wiretap game .we define the payoff to alice as the achievable mimo secrecy rate between her and bob . modeling the strategic interactions between alice and eveas a strictly competitive game leads to a zero - sum formulation , where alice tries to maximize her payoff and eve attempts to minimize it .formally , we can define a compact strategy space for both alice and eve : alice has to optimize the pair , where is chosen from the unit interval ] , where zero jamming power corresponds to the special case of passive eavesdropping .in other words , each player theoretically has a continuum of ( pure ) strategies to choose from , where the payoff for each combination of strategies is the corresponding mimo secrecy rate . in the following discussion ,let represent the choice of alice s parameters that maximizes the ergodic secrecy rate .the complete set of mixed strategies for player is the set of borel probability measures on .let be the set of all probability measures that assign strictly positive mass to every nonempty open subset of .the optimal mixed strategy for player must belong to , since any pure strategies that are assigned zero probability in equilibrium can be pruned without changing the game outcome .furthermore , as in the case of finite games , the subset of pure strategies included in the optimal mixed strategy must be _ best responses _ to particular actions of the opponent .consider alice : when eve chooses the action of eavesdropping , is alice s corresponding best response pure strategy since by definition it offers a payoff at least as great as _ any _ other possible choice of [ cf . ( _ _ p__1 ) ] .similarly , when eve chooses to jam with any arbitrary power , alice s best response pure strategy is [ cf .( _ _ p__2 ) ] . 
therefore , these two pure strategies are alice s best responses for any possible action by eve , and it is sufficient to consider them alone in the computation of the optimal mixed strategy since all other pure strategies are assigned zero probability .a similar argument holds for eve with her corresponding best responses of and .therefore , it is sufficient to consider the following strategy sets for the players : alice chooses between transmitting with full power for data ( _ f _ ) or devoting an appropriate fraction of power to jam eve ( _ a _ ) , described as .eve must decide between eavesdropping ( _ e _ ) or jamming bob with full power ( _ j _ ) at every channel use , represented by .the strategic form of the game where alice and eve move simultaneously without observing each other s actions can be represented by the payoff matrix in table [ table : game ] .our first result establishes the existence of nash equilibria for the strategic game ._ proposition 1 _ : for an arbitrary set of antenna array sizes , transmit powers and channel gain parameters , the following unique pure - strategy saddle - points or nash equilibria ( ne ) exist in the proposed mimo wiretap game : ( x^ * , y^ * ) = [ eq : prop1 ] r_ae & + r_fj & ._ proof _ : of the 24 possible orderings of the four rate outcomes , only six satisfy both conditions ( _ _ p__1)-(__p__2 ) of the previous section .furthermore , it is easy to check that only two of these six mutually exclusive outcomes results in a pure ne .if , then assumptions ( _ _ p__1 ) and ( _ _ p__2 ) imply the following rate ordering in this case , represents an ne since neither alice nor eve can improve their respective payoffs by switching strategies ; _ i.e. , _ the secrecy rate will decrease if alice chooses to transmit the information signal with full power , and the secrecy rate will increase if eve decides to jam .similarly , when , then ( _ _ p__1)-(__p__2 ) result in the rate ordering and will be the mutual best response for both players .evidently only one such ordering can be true for a given wiretap gamescenario. proposition 1 establishes that there is no single pure strategy choice that is always optimal for either player if the inequalities in ( [ eq : pureneorder1])-([eq : pureneorder2 ] ) are not satisfied .this occurs in four of the six valid rate orderings of the entries of that satisfy conditions ( _ _ p__1)-(__p__2 ) .therefore , since the minimax theorem guarantees that any finite zero - sum game has a saddle - point in randomized strategies , in such scenarios alice and eve should randomize over ; that is , they should adopt mixed strategies .let and , represent the probabilities with which alice and eve randomize over their strategy sets and , respectively .in other words , alice plays with probability , while eve plays with probability .alice obtains her optimal strategy by solving while eve optimizes the corresponding minimax problem . 
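proposition 1 amounts to checking for a saddle point of the 2x2 payoff table ; a small sketch ( rows indexed by alice s strategies { a , f } , columns by eve s { e , j } , function name ours ) :

```python
import numpy as np

def pure_saddle_point(R):
    # R[i, j] = secrecy-rate payoff to alice when she plays row i and eve plays
    # column j; a pure-strategy NE exists iff maximin equals minimax.
    maximin = R.min(axis=1).max()
    minimax = R.max(axis=0).min()
    if np.isclose(maximin, minimax):
        i = int(R.min(axis=1).argmax())       # alice's equilibrium row
        j = int(R[i].argmin())                # eve's equilibrium column
        return (i, j), float(maximin)
    return None, None                         # no pure NE: the players must randomise
```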
for the payoff matrix in table [ table : game ], the optimal mixed strategies and unique ne value of the game can be easily derived as [ eq : mixed ] where .the mixed ne above is unique according to the classic properties of finite matrix games , since the optimization in has a unique solution .a graphical illustration of the saddle - point in mixed strategies as and are varied for a specific wiretap channel is shown in fig .[ fig_mixedstrats_3d ] .for the specified parameters , , the rate ordering turns out to be , which results in a mixed ne with optimal mixing probabilities and value .alice s bias towards playing more frequently is expected since that guarantees a secrecy rate of at least 2.85 , whereas playing risks a worst - case payoff of zero .eve is privy to alice s reasoning and is therefore biased towards playing more frequently since she prefers a game value close to .the _ repeated _ wiretap game is a more sophisticated strategic game model in which alice and eve play against each other repeatedly over multiple stages in time . at each stage , the set of player strategies and payoff function representation is identical to the single - stage zero - sum game in table [ table : game ] . in our context , the single - stage game can be considered to represent the transmission of a single codeword , with the repeated game spanning the successive transmission of multiple codewords .let the payoff to alice at stage be denoted as } ] at each stage , which is achieved by playing as dictated by proposition 1 or at each stage .if the game is played over a finite number of stages instead , the players will continue to play their single - stage game ne strategies by the same argument .the concepts developed in sec .[ sec : imperfectinfo ] are applicable to the more involved repeated game scenario where alice and eve have imperfect observations of each other s actions .given the strategic game analysis of the previous section , we can now proceed to analyze the actions of a given player in response to the opponent s strategy . here , one player is assumed to move first , followed by the opponent s response , which can then lead to a strategy ( and code rate ) change for the first player , and so on . accordingly , in this section we examine the sequential or _ extensive form _ of the mimo wiretap game , which is also known as a stackelberg game .the standard analysis of a stackelberg game is to cast it as a dynamic or extensive - form game and elicit equilibria based on backward induction .we begin with the worst - case scenario where alice moves first by either playing _ f _ or _ a _ , which is observed by eve who responds accordingly .it is convenient to represent the sequential nature of an extensive - form game with a rooted tree or directed graph , as shown in fig .[ fig : extensive ] .the payoffs for alice are shown at each terminal node , while the corresponding payoffs for eve are omitted for clarity due to the zero - sum assumption . in this section ,we explore extensive - form games with and without perfect information , and the variety of equilibrium solution concepts available for them . assuming that eve can distinguish which move was adopted by alice , and furthermore determine the exact jamming power if she is being jammed by alice , then the extensive game is classified as one of _perfect information_. in the sequel , we will make use of the notions of an _ information state _ and a _ subgame_. 
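the closed - form mixed equilibrium of a 2x2 zero - sum game is classical ; written out for the payoff table above ( alice maximising , with the denominator non - zero exactly when no pure saddle point exists ) it reads as follows , with the variable names ours .

```python
def mixed_ne_2x2(R_AE, R_AJ, R_FE, R_FJ):
    # indifference conditions for the 2x2 zero-sum game of table [table:game]:
    # p = prob. alice plays A (artificial interference), q = prob. eve plays E
    # (eavesdrop), v = value of the game.
    D = R_AE - R_AJ - R_FE + R_FJ
    p = (R_FJ - R_FE) / D
    q = (R_FJ - R_AJ) / D
    v = (R_AE * R_FJ - R_AJ * R_FE) / D
    return p, q, v
```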
a player s information state represents the node(s ) on the decision tree at which she must make a move conditioned on her knowledge of the previous move of the opponent . for the case of perfect information in fig .[ fig : extensive ] , alice has a single information state , while eve has two information states ( each with a single node ) based on alice s choice , since she has perfect knowledge of alice s move .a subgame is a subset ( subgraph ) of a game that starts from an information state with a single node , contains all of that node s successors in the tree , and contains all or none of the nodes in each information state .next , we analyze _ subgame - perfect equilibria _ ( spe ) of the extensive game , which are a more refined form of ne that eliminate irrational choices within subgames .it is well known that in extensive games with perfect information , a sequential equilibrium in pure strategies is guaranteed to exist ( * ? ? ?* theorem 4.7 ) .the equilibrium strategies can be obtained by a process of backward induction on the extensive game tree , as shown below . _ proposition 2 _ : in the extensive form wiretap game with perfect information where alice moves first , the unique subgame - perfect equilibrium rate with pure strategies is determined by the following : ( ^e,1)= r_a , e & + r_f , j & + & _ proof _ : the extensive game tree for this problem , depicted in fig . [fig : extensive ] , is comprised of three subgames : the two subgames at eve s decision nodes , and the game itself with alice s decision node as the root .consider the scenario . under this assumption ,eve always plays in the lower - left subgame of fig . [fig : extensive ] , whereas eve picks in the lower - right subgame . bybackward induction , alice then chooses the larger of ] at her decision node . 
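the backward induction underlying proposition 2 , and the corresponding outcome when eve moves first , reduce to nested min / max operations over the payoff table ; a sketch ( function names ours ) :

```python
def spe_alice_first(R_AE, R_AJ, R_FE, R_FJ):
    # alice commits to A or F, eve observes the move and minimises the rate;
    # alice anticipates this and picks the better branch.
    return max(min(R_AE, R_AJ), min(R_FE, R_FJ))

def spe_eve_first(R_AE, R_AJ, R_FE, R_FJ):
    # eve commits to E or J, alice observes and maximises the rate;
    # eve anticipates this and picks the branch with the smaller payoff.
    return min(max(R_AE, R_FE), max(R_AJ, R_FJ))
```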
note that in the scenario where alice moves first , she chooses her coding parameters based on the assumption that eve acts rationally and adopts the equilibrium strategy in proposition 2 .we see from both propositions that , when conditions for one of the pure - strategy nes hold , the outcome of both and will be the corresponding ne .this is also true of an extensive game with more than 2 stages ; if an ne exists , the overall spe outcome will be composed of repetitions of this constant result .we now consider extensive wiretap games with imperfect information , where the player moving second has an imperfect estimate of the prior move made by her opponent .let and denote the games where alice and eve move first , respectively .the game tree representation of can be drawn by connecting the decision nodes of eve in fig .[ fig : extensive ] to indicate her inability to correctly determine alice s move in the initial phase of the game .thus , in this case , eve effectively only possesses a single information state .while no player has an incentive to randomize in the game with perfect information in section [ sec : perfectinfo ] , mixed strategies enter the discussion when the game is changed to one of imperfect information .the subgame perfect equilibrium solution is generally unsatisfactory for such games , since the only valid subgame in this case is the entire game itself .therefore , _ sequential equilibrium _ is a stronger solution concept better suited for extensive games of imperfect information .an extreme case of imperfect information in is the scenario where it is common knowledge at all nodes that eve is _ completely unable _ to determine what move was made by alice in the first stage of the game .let eve then assign the _ a priori _ probabilities to alice s moves over for some and , while eve herself randomizes over with probabilities . therefore , eve s left and right decision nodes are reached with probability and , respectively .there are three possible supports for eve s moves at her information state : pure strategies or exclusively , or randomizing over . 
in the general scenario where eve randomizes over with probabilities , her expected payoff can be expressed as + \left ( { \alpha - 1 } \right)\left [ { \gamma r_{ae } + \left ( { 1 - \gamma } \right)r_{aj } } \right].\ ] ] using a probabilistic version of backward induction , it is straightforward to compute the sequential equilibrium of , which in fact turns out to be identical to the mixed - strategy ne in ( [ eq : mixed ] ) .a similar argument holds for with no information at alice , which arises if no feedback is available from bob .it is much more reasonable to assume that the player moving second is able to form some estimate of her opponent s move , known as the _ belief _ vector .an example of how such a scenario may play out is described here .consider the game , where alice s belief vector represents the posterior probabilities of eve having played \{e } and \{j } in the first stage .assume that bob collects signal samples and provides alice with an inference of eve s move via an error - free public feedback channel .the competing hypotheses at bob are = { { \mathbf{h}}_{ba}}{{\mathbf{x}}_a}\left [ n \right ] + { { \mathbf{n}}_b}\left [ n \right ] } \\ { { \mathcal{h}_1}:}&{{{\mathbf{y}}_b}\left [ n \right ] = { { \mathbf{h}}_{ba}}{{\mathbf{x}}_a}\left [ n \right ] + \sqrt { { g_2 } } { { \mathbf{h}}_{be}}{{\mathbf{x}}_e}\left [ n \right ] + { { \mathbf{n}}_b}\left [ n \right]\ : , } \end{array}\ ] ] for where the null hypothesis corresponds to eve listening passively andthe alternative hypothesis is that she is jamming bob . here, the channels are assumed to be constant over the sensing interval and known to bob since he possesses local csi .aggregating the samples into a matrix } & \ldots & { { { \mathbf{y}}_b}\left [ { m - 1 } \right ] } \end{array } } \right]} ] follows the distributions under and under , where the mpe test at eve thus simplifies to where , and is the ratio of worst - case prior probabilities based on . by the equivalence of equilibrium payoffs ,eve s best response based on her computed posterior probabilities is since alice has no means of estimating the beliefs possessed by eve , alice plays her maximin strategy as specified by when she moves first .in this section , we present several examples that show the equilibrium secrecy rate payoffs for various channel and user configurations .all displayed results are based on the actual numerically computed secrecy rates with 5000 independent trials per point .ne rates are depicted using a dashed red line where applicable . in all of the simulations ,the noise power was assumed to be the same for both bob and eve : . for the strategic game in fig .[ fig_mix ] we set and eve s power is larger than alice s : .the optimal choice for the signal dimension in this scenario is .prior to the cross - over , a pure strategy ne in is the game outcome since the rate ordering is that of ( [ eq : pureneorder1 ] ) , whereas after the cross - over it is optimal for both players to play mixed strategies according to ( [ eq : mixed ] ) . 
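the exact statistic of the mpe test above is not reproduced in this extract ; a generic gaussian map detector for the two hypotheses ( treating the received samples as zero - mean complex gaussian with known covariances under listening and jamming , which is an assumption on our part ) would look like the following sketch .

```python
import numpy as np

def map_jamming_detector(Y, Sigma0, Sigma1, prior0=0.5):
    # Y: nb x M matrix of received samples; Sigma0 / Sigma1: per-sample
    # covariance under H0 (eve listening) / H1 (eve jamming).  decides H1 when
    # the log-likelihood ratio exceeds the log prior ratio (minimum probability
    # of error rule for known priors).
    M = Y.shape[1]
    A = np.linalg.inv(Sigma0) - np.linalg.inv(Sigma1)
    quad = np.real(np.einsum('in,ik,kn->', Y.conj(), A, Y))     # sum_n y_n^H A y_n
    llr = M * np.log(np.real(np.linalg.det(Sigma0) / np.linalg.det(Sigma1))) + quad
    return 1 if llr > np.log(prior0 / (1.0 - prior0)) else 0
```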
in this case, randomizing strategies clearly leads to better payoffs for the players as eve s jamming power increases , compared to adopting a pure strategy .the optimal mixing probabilities are shown in fig .[ fig_mix_b ] with a clear division between pure and mixed strategy ne regions .the pure ne is lost as increases since grows more quickly than .this is because increasing under both improves bob s rate and reduces eve s rate , since more power is available for both signal and jamming .for aj , increasing can only improve bob s rate since eve is not impacted by the artificial interference ( any power devoted to artificial interference is wasted ) .for the case of equal transmit powers and parameters , the outcomes of the strategic game as the ratio of eavesdropper to transmitter antennas varies is shown in fig .[ fig_antratio ] .we observe that a similar dichotomy exists between a pure - strategy saddle - point region and a mixed - strategy equilibrium in terms of ( with the transition roughly at marked by the dashed red line ) .next , the spe outcomes of the two extensive - form games and over a range of transmit power ratios are shown in fig . [ fig_subgame ] .the red and blue dashed lines represent the subgame - perfect outcomes of the game where alice moves first or second , respectively , as defined in proposition 2 and corollary 1 . in the extensive form game, alice could adjust her transmission parameters ( , etc . ) in addition to her overall strategy ( or ) in response to eve s move . for simplicity , and to allow us to present the main result in a single figure , we have assumed instead that the transmission parameters are chosen independently of eve s actions , as described for the strategic game .observe that prior to the crossover point of and , both equilibria are equal as determined by proposition 2 , since a pure - strategy ne results .we see that it is always beneficial for alice to move second especially as eve s jamming power increases , which agrees with intuition . finally , in fig .[ fig_extensiveimperf ] we compare the equilibrium outcomes of the extensive - form games with perfect and imperfect information as a function of , with .the no - information lower bound is given by the strategic game mixed - strategy ne .for the given choice of parameters , alice is not significantly disadvantaged when she moves first in the idealized scenario of perfect information . 
in sharp contrast , a carefully designed hypothesis test allows alice to significantly improve her payoff in given a noisy observation of eve s move , as compared to the no - information case .since in this example , an increase in alice s transmit power also implies an increase in eve s power , which aids the hypothesis test at bob and thus alice has a better estimate of eve s move .on the other hand , eve s hypothesis test does not show the same improvement as increases since the ratio between data and artificial noise power remains virtually the same .we have formulated the interactions between a multi - antenna transmitter and a dual - mode eavesdropper / jammer as a novel zero - sum game with the ergodic mimo secrecy rate as the payoff function .we derived conditions under which nash equilibria exist and the optimal user policies in both pure and mixed strategies for the strategic version of the game , and we also investigated subgame - perfect and sequential equilibria in the extensive forms of the game with and without perfect information .our numerical results showed that a change in a single parameter set while others remain constant can shift the equilibrium from a pure to a mixed ne outcome or vice versa .i. csiszr and j. krner , broadcast channels with confidential messages , " _ ieee trans .inf . theory _339 - 348 , may 1978 .f. oggier and b. hassibi , the secrecy capacity of the mimo wiretap channel , " _ ieee trans .inf . theory _ ,8 , pp . 4961 - 4972 , aug .j. li and a. petropulu , on ergodic secrecy capacity for gaussian miso wiretap channels , " _ ieee trans .wireless commun .1176 - 1187 , apr .t. liu and s. shamai , a note on the secrecy capacity of the multiple - antenna wiretap channel , " _ ieee trans .inf . theory _ ,55 , no . 6 , pp .2547 - 2553 , june 2009 . s. goel and r. negi , guaranteeing secrecy using artificial noise , " _ ieee trans .wireless commun_. , vol . 7 , no . 6 , pp .2180 - 2189 , june 2008 .a. khisti and g. wornell , secure transmission with multiple antennas i : the misome wiretap channel " , _ ieee trans .inf . theory _ ,56 , no . 7 , pp . 3088 - 3104 , july 2010 .a. khisti and g. wornell , secure transmission with multiple antennas ii : the mimome wiretap channel " , _ ieee trans .inf . theory _ ,11 , pp . 5515 - 5532 ,a. mukherjee and a. l. swindlehurst , robust beamforming for secrecy in mimo wiretap channels with imperfect csi , " _ ieee trans . signal process .1 , pp . 351 - 361 , jan .w. e. stark and r. j. mceliece , on the capacity of channels with block memory , " _ ieee trans .inf . theory _322 - 324 , mar .m. mdard , capacity of correlated jamming channels , " in _ proc .35th allerton conf ._ , pp . 1043 - 1052 , 1997 . s. n. diggavi and t. m. cover , the worst additive noise under a covariance constraint , " _ ieee trans .inf . theory _ ,3072 - 3081 , nov . 2001 .a. kashyap , t. baar , and r. srikant , correlated jamming on mimo gaussian fading channels , " _ ieee trans .inf . theory _ ,2119 - 2123 , sep . 2004 .a. bayesteh , m. ansari , and a. k. khandani , effect of jamming on the capacity of mimo channels , " in _ proc .42nd allerton conf_. , pp .401 - 410 , oct . 2004 .s. shafiee and s. ulukus , mutual information games in multi - user channels with correlated jamming , " _ ieee trans .inf . theory _ , vol .4598 - 4607 , oct .t. wang and g. b. giannakis , mutual information jammer - relay games , " _ ieee trans .forensics security _ , vol .290 - 303 , june 2008 .g. amariucai and s. 
wei , half - duplex active eavesdropping in fast fading channels : a block - markov wyner secrecy encoding scheme , " submitted to _ ieee trans .inf . theory _, 2010 , available : arxiv:1002.1313 .m. yksel , x. liu , and e. erkip , a secure communication game with a relay helping the eavesdropper , " _ ieee trans .forensics security _ , vol . 6 , no .3 , pg . 818 - 830 , sep . 2011 .a. l. swindlehurst , fixed sinr solutions for the mimo wiretap channel , " in _ proc .ieee icassp _ ,2437 - 2440 , 2009 .a. mukherjee and a. l. swindlehurst , fixed - rate power allocation strategies for enhanced secrecy in mimo wiretap channels , " in _ proc .ieee spawc _ ,344 - 348 , perugia , june 2009 .x. zhou and m. r. mckay , secure transmission with artificial noise over fading channels : achievable rate and optimal power allocation , " _ ieee trans . veh .tech_. , vol .59 , no . 8 , pp . 3831 - 3842 , oct .lin , t .- h .chang , y .- l. liang , y .- w .p. hong , and c .- y .chi , on the impact of quantized channel feedback in guaranteeing secrecy with artificial noise - the noise leakage problem , _ ieee trans .wireless commun_. , vol .901 - 915 , mar .a. mukherjee and a. l. swindlehurst , equilibrium outcomes of dynamic games in mimo channels with active eavesdroppers , " in _ proc .ieee icc _ , cape town , south africa , may 2010 .a. mukherjee and a. l. swindlehurst , optimal strategies for countering dual - threat jamming / eavesdropping - capable adversaries in mimo channels , " in _ proc .ieee milcom _ , san jose , ca , nov . 2010 .s. shafiee and s. ulukus , `` achievable rates in gaussian miso channels with secrecy constraints , '' in _ proc .ieee isit _ , 2007 .l. a. petrosjan and n. a. zenkevich , _game theory_. world scientific , 1996 .d. fudenberg and j. tirole , _game theory_. mit press , 1991 .r. myerson , _ game theory : analysis of conflict_. harvard university press , 1997 .y. wu , b. wang , k. j. r. liu , and t. c. clancy , repeated open spectrum sharing game with cheat - proof strategies , " _ ieee trans .wireless commun ._ , vol . 8 , no .1922 - 1933 , apr . 2009 .a. taherpour , m. nasiri - kenari , and s. gazor , multiple antenna spectrum sensing in cognitive radios , " _ ieee trans .wireless commun ._ , vol . 9 , pp . 814 - 823 , feb .s. m. kay , _ fundamentals of statistical signal processing vol .ii- detection theory_. prentice hall , 1998 .
|
this paper investigates reliable and covert transmission strategies in a multiple - input multiple - output ( mimo ) wiretap channel with a transmitter , receiver and an adversarial wiretapper , each equipped with multiple antennas . in a departure from existing work , the wiretapper possesses a novel capability to act either as a passive eavesdropper or as an active jammer , under a half - duplex constraint . the transmitter therefore faces a choice between allocating all of its power for data , or broadcasting artificial interference along with the information signal in an attempt to jam the eavesdropper ( assuming its instantaneous channel state is unknown ) . to examine the resulting trade - offs for the legitimate transmitter and the adversary , we model their interactions as a two - person zero - sum game with the ergodic mimo secrecy rate as the payoff function . we first examine conditions for the existence of pure - strategy nash equilibria ( ne ) and the structure of mixed - strategy ne for the strategic form of the game . we then derive equilibrium strategies for the extensive form of the game where players move sequentially under scenarios of perfect and imperfect information . finally , numerical simulations are presented to examine the equilibrium outcomes of the various scenarios considered . physical layer security , mimo wiretap channel , game theory , jamming , secrecy rate , nash equilibria .
|
how many theoretical probabilists walk away from a tenured faculty position at a top university and set out to make their living as consultants ?how many applied consultants get hired into senior faculty positions in first - rate research universities ?how many professors with a fine reputation in their field , establish an equally fine reputation in a _ different _field , _ after _ retirement ?leo breiman did all of these things and more .he was an inspiring speaker and a convincing writer , doing both with seemingly boundless enthusiasm , in an unpretentious , forthright manner that he called his `` casual , homespun way . ''he was intelligent and thought deeply about research .but there are a number of bright , talented statisticians .what made breiman different ?for one thing , he was willing to take risks . by and large ,statisticians are not great risk - takers .we tend not to stray too far from what we know , tend not to tackle problems for which we have no tools , tend to adopt or adapt existing ideas instead of coming up with completely new ones . linked to this willingness to take risks was breiman s unusual creativity .it was not a wild , off - the - wall creativity it was grounded in a sound knowledge of theoretical principles and directed by an intuition gained by working intensively with data , along with a generous dose of common sense .breiman was driven by challenging and important real - data problems that people cared about .he did nt spend time publishing things just because he could , filling the gaps just because they were there .lastly , he was tenacious. he would not give up on a problem until he , or someone else , got to the bottom of what was going on .some of breiman s ideas have advanced the field in and of themselves ( e.g. , bagging , random forests ) while others have contributed more indirectly ( e.g. , breiman s nonnegative garrote inspired the lasso ) . although his joint work tree - based methods [ ] was arguably his most important contribution to science , he viewed random forests as the culmination of his work .i consider myself privileged to have been able to work with leo breiman for almost 20 years , as his student , collaborator and friend , and i m honored to have been asked to write this review of his contributions to applied statistics .i have divided the paper into roughly chronological sections , but these have considerable overlap and are intended to be organizational rather than definitive .i kept biographical details to a minimum ; those interested in a biography are referred to .i do not feel qualified to discuss breiman s work on the 1991 census adjustment [ ] and have omitted a few other isolated pieces of work such as ; ; ; .breiman was born in new york city in 1928 and educated in california , receiving his ph.d . in mathematics from uc berkeley in 1954 .he earned tenure as a probabilist in the ucla mathematics department but soon after , he `` got tired of doing theory and wanted something that would be more exciting '' ( personal communication ) so he resigned .at this time , breiman was already interested in classification , co - authoring a paper on the convergence properties of a `` learning algorithm '' .curiously , the paper had only two references , one of which was to some early work by seymour papert , who was later to become one of the pioneers of artificial intelligence and co - author of an influential ( and controversial ) book on perceptrons . 
after resigning, the first thing breiman did was to write his probability book and then , with no formal statistical training , he proceeded to spend the next 13 years as a consultant . as well as some work in transportation, he worked for william meisel s division of technology services corporation , doing environmental studies and unclassified defense work .it s difficult to imagine making such a transition today , but one can speculate that it was in part , _ because _ he did not have a background in applied statistics that breiman was so successful at consulting .certainly the prediction problems on which he worked , some of which are mentioned in and section 3 of , would have been a challenge for the tools and computers of the time . in , he acknowledges meisel for helping him `` make the transition from probability theory to algorithms . ''one of the early problems breiman worked on as a consultant was to classify ship types from the peaks of radar profiles .the observations had different numbers of peaks and the number of peaks and their locations depended on the angle the ship made with the radar .after `` a lot of head - scratching and a lot of time just thinking '' the idea of a classification tree came to him `` out of the blue . '' after this , meisel s research team began using trees regularly .charles stone was brought on board , became interested in trees , and worked with breiman to improve accuracy . in the early to mid-1970s ,breiman and stone came up with the breakthrough idea of using cross validation to prune large trees .it s difficult to obtain published work from breiman s consulting years , but by 1976 , breiman and meisel published an early version of regression trees which broke down the data space into regions and fitted a linear regression in each region .regions were split using a randomly oriented plane and an f - ratio was used to determine if the split had significantly reduced the residual sum of squares ; if not , another random split was tried .in retrospect , the idea of using randomly chosen splits seems a good 20 years ahead of its time .the statement `` many typical data analytic problems are characterized by their high dimensionality and the lack of any a priori identification of a natural and appropriate family of regression functions '' was a clear indicator of breiman s future research directions . in 1976 , breiman met jerome friedman , a high - energy particle physicist , and soon friedman was also working as a consultant for tsi . both friedman and stone had connections to richard olshen , and the four started to collaborate .apparently , they decided to publish their research as a book because they believed the work was unlikely to be published in the standard statistical journals . in 1980 ,stone and breiman joined the uc berkeley statistics department , and the group experimented with different splitting criteria , refined the cross - validation approach , and came up with the idea of surrogate splits .several things set this work apart from other early work on trees .first , they did painstaking experiments .as they report in , `` in the course of the research that led to cart , almost two years were spent experimenting with different stopping rules .each stopping rule was tested on hundreds of simulated data sets with different structures . ''second , they kept applications in the foreground of their work , due in part to breiman s years as a consultant .third , they had what breiman referred to as `` some beautiful and complex theory . 
''the book , priced low to make it accessible , was published in 1984 .i once heard charles stone express regret that the cart group had not written a follow up book of `` things we tried that did nt work . ''i expect such a book could have prevented a number of researchers from reinventing the wheel , but few would want to read such a book , much less write it .in fact , after completing , breiman admitted to being `` completely fed up with thinking about trees . '' breiman and friedman continued to talk , because both were interested in high - dimensional data analysis , and soon they came up with the alternating conditional expectations ( ace ) algorithm . for predictor variables and response , ace defines and to minimize ^ 2\ ] ] under the constraint . estimates and obtained using an iterative optimization procedure involving ( nonlinear ) smoothing to estimate each of the transformations while holding the others fixed .this was an application of the gauss seidel algorithm of numerical linear algebra . a simpler version ,taking as the identity , is the familiar `` backfitting '' algorithm [ , ] .ace was the first in a series of papers breiman wrote on smoothing and additive models . compared four scatterplot smoothers using an extensive simulation .building on the spline models used in , breiman s method , with the colorful acronym `` pimple , '' fit additive models of products of ( univariate ) cubic splines . hinging hyperplanes an additive function of hyperplanes , continuously joined along a line called a `` hinge . ''according to , while ace provided the `` first available method for fitting additive models to data , '' it had some difficulties .for small sample sizes , the results were `` noisy and erratic . ''the nonlinearity of the smoother combined with the iterative algorithm led to results that were `` difficult to analyze and sometimes mildly unstable . ''so breiman went back to the drawing board , adapting a spline - based method using stepwise deletion of knots , resulting in .this paper contains early thoughts on using cross - validation to measure instability : `` if transformations change drastically when one or a few cases are removed , then they do not reflect an overall pattern in the data . '' these early ideas of instability ultimately led to some of breiman s most influential work .while all breiman s work was multivariate , some was more clearly affiliated with traditional multivariate techniques . in 1984 ,breiman and ihaka released a technical report describing a nonlinear , smoothing - based version of discriminant analysis .the work was never published but it motivated the work on `` flexible discriminant analysis '' by . in his consulting days ,one of the problems breiman studied was next - day ozone prediction .one of his ideas was to represent each day as a mixture of `` extreme '' or `` archetypal '' days .for example , an archetypal sunny day would be as sunny as possible , an archetypal rainy day would have as much rain as possible , an archetypal foggy day would have fog for as long as possible , and so on .most days would not be archetypal they would fall in between the archetypes , resembling each to a greater or lesser extent . for data , the problem was to find archetypal points to minimize subject to the constraints , while also constraining the s to fall on or inside the convex hull of the data . 
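the backfitting cycle described above is short enough to sketch directly . the python sketch below fits y ~ alpha + sum_j f_j ( x_j ) by cycling gauss - seidel style over the coordinates , with a crude gaussian - kernel smoother standing in for whatever smoother one prefers ; the smoother , the data and the stopping rule are illustrative assumptions , and the full ace problem ( which also transforms the response ) is not attempted here .

```python
import numpy as np

def smooth(x, r, bandwidth=0.3):
    """crude kernel smoother: local weighted average of residuals r against x."""
    out = np.empty_like(r)
    for i, xi in enumerate(x):
        w = np.exp(-0.5 * ((x - xi) / bandwidth) ** 2)
        out[i] = np.sum(w * r) / np.sum(w)
    return out

def backfit(X, y, n_iter=20):
    """additive model y ~ alpha + sum_j f_j(x_j), fitted by backfitting."""
    n, p = X.shape
    alpha = y.mean()
    f = np.zeros((n, p))
    for _ in range(n_iter):
        for j in range(p):
            partial = y - alpha - f.sum(axis=1) + f[:, j]   # residual without f_j
            f[:, j] = smooth(X[:, j], partial)
            f[:, j] -= f[:, j].mean()                       # keep components centered
    return alpha, f

# toy additive data
rng = np.random.default_rng(0)
X = rng.uniform(-2, 2, size=(300, 2))
y = np.sin(X[:, 0]) + 0.5 * X[:, 1] ** 2 + 0.1 * rng.standard_normal(300)
alpha, f = backfit(X, y)
print("fitted intercept:", alpha)
```

returning to the archetype problem stated just above :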
the problem can be solved using an alternating least squares algorithm .archetypes have been used as an alternative to cluster analysis or principal components in numerous disciplines .the final method in this section is a paper on multivariate regression , whimsically called `` curds and whey '' . to predict correlated responses , breiman and friedman considered predicting each response by a linear combination of the ordinary least squares ( ols ) predictors rather than the ols predictors themselves .the method worked by transforming into canonical coordinates , shrinking , then transforming back .cross - validation was used to choose the amount of shrinkage .breiman had a longstanding interest in submodel selection in linear regression , revealing itself in , which used an early version of a regression tree to estimate the `` intrinsic variability '' of the data , with the goal of effectively ranking the predictive capabilities of subsets of independent variables . looked at determining the optimal number of regressors to minimize mean squared prediction error .again , using prediction error as the gold standard , and contained careful and thorough simulation studies for the -fixed and -random situations . as mentioned , leo s `` openness to new ideas whatever their source '' was an attractive feature of his work .one example of this openness was that in the early 1990s , leo got interested in neural nets and started participating in the neural information processing systems ( nips ) conference and workshops .neural nets were not really a new idea , but they were enjoying new popularity among computer scientists , physicists and engineers , who in leo s view were turning out `` thousands of interesting research papers related to applications and methodology '' . to this active community , leo brought his considerable statistical background , experience with trees and subset selection , and perspective from years of dealing with real data and thinking about how to do it better .this led to leo s most productive years , in part facilitated by his retirement from the uc berkeley statistics department in early 1993 , about which he said , `` so far retirement has meant that i ve got more time to spend on research '' ( personal communication ) . the first work to appear from this period , stacking , was stimulated by and first appeared as a technical report in 1992 . in , he said , `` in past statistical work , all the focus has been on selecting the `` best '' single model from a class of models .we may need to shift our thinking to the possibility of forming combinations of models . '' in the case of stacking , this was a linear combination of predictors .each predictor was based on what wolpert called the `` level 1 data '' . a family of models indexed by .for example , might be the number of variables in a subset selection method or might index a collection of shrinkage parameters for ridge regression . for data , each of the predictors were fit to the data with observation omitted ( leave - one - out cross validation ) to give predictions of , namely , which were the `` level 1 data . 
'' the `` stacked '' predictor was where , were chosen to minimize breiman considered stacked subsets and stacked ridge regressions and concluded that both were better than the existing method ( choosing a single model by cross - validation ) .however , stacking improved subsets more than it improved ridge , which breiman suggested was due to the greater instability of subset selection .building on stacking and using some of his experiences from and , breiman introduced the nonnegative garrote . for data as before and original ols coefficients , the nonnegative garrote chose to minimize subject to the constraints and .this was a much simpler idea than stacking because it did not use wolpert s `` level 1 data '' and ranged over the predictor variables instead of denoting the size of a subset or the value of a shrinkage parameter .breiman found that the garrote had consistently lower prediction error than subset selection , and sometimes better than ridge regression .breiman s ideas about instability , first mentioned in , led him to characterize of ridge regression as stable , subset selection unstable , and the garrote intermediate .breiman remarked that `` the more unstable a procedure is , the more difficult it is to accurately estimate pe ( prediction error ) '' and speculated about finding a `` numerical measure of stability . '' showed some interesting results for the garrote in a boosting context .however , the largest impact of the garrote was that it inspired the lasso [ ] , which is currently the method of choice , in part because of garrote s dependence on . breiman s notions of stability were further explored in .he compared ridge regression , subset selection and two versions of garrote and stated , `` unstable procedures can be stabilized by perturbing the data , getting a new predictor sequence and then averaging over many such predictor sequences . ''the types of perturbation he considerd are leave - one - out cross - validation , leave - ten - out cross - validation and adding random noise to the response variable .he stated `` we do not know yet what the best stabilization method is . ''breiman released an early version of in june 1994 , but by september of the same year he released yet another technical report in which he had already resolved some of the questions raised in .he called the report `` bagging predictors '' and it was to be published as .the name comes from `` bootstrap aggregating '' because in bagging , the data were perturbed by taking bootstrap samples and the resulting predictors were averaged ( aggregated ) to give the `` bagged estimate . ''the classification version aggregates by voting the predictors . in november 1994, breiman presented bagging as part of a tutorial at the nips conference , where it was immediately embraced by the neural net community . according to google scholar ,citations of already exceed 6300 , slightly higher than efron s 1979 bootstrap paper .the simplicity and elegance of bagging made it appealing in a community where new ideas tended to be technically complex . in bagging ,each predictor was fit to a bootstrap sample , so roughly 37% of the observations were not included in the fit ( `` out - of - bag '' ) . 
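a minimal sketch of bagging a regression tree with an out - of - bag error estimate is given below ; scikit - learn trees and synthetic data are arbitrary modern stand - ins ( breiman s own implementations were in fortran ) .

```python
import numpy as np
from sklearn.tree import DecisionTreeRegressor

def bagged_trees(X, y, n_boot=100, seed=0):
    rng = np.random.default_rng(seed)
    n = len(y)
    oob_pred = np.zeros(n)       # accumulated out-of-bag predictions
    oob_count = np.zeros(n)      # how often each case was out-of-bag
    trees = []
    for _ in range(n_boot):
        idx = rng.integers(0, n, n)                # bootstrap sample
        oob = np.setdiff1d(np.arange(n), idx)      # the ~37% left out
        tree = DecisionTreeRegressor().fit(X[idx], y[idx])
        trees.append(tree)
        oob_pred[oob] += tree.predict(X[oob])
        oob_count[oob] += 1
    mask = oob_count > 0
    oob_mse = np.mean((y[mask] - oob_pred[mask] / oob_count[mask]) ** 2)
    return trees, oob_mse

rng = np.random.default_rng(1)
X = rng.standard_normal((400, 5))
y = X[:, 0] ** 2 + X[:, 1] + 0.3 * rng.standard_normal(400)
trees, oob_mse = bagged_trees(X, y)
print("out-of-bag mse:", oob_mse)   # error of the aggregated (bagged) predictor
```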
in an unpublished technical report described how to use these for estimating node probabilities and generalization error .although bagging trees improved the accuracy of trees , breiman liked the simple , understandable structure of individual trees and was not ready to give up on them .noting that trees have `` the disadvantage that the splits get noisier as you go down '' ( personal communication ) , he worked with nong shang to try to improve the stability of trees by estimating the joint density of the data and basing the splits on this density estimate instead of directly on the data .one of the problems of this method was that density estimates depended on numerous parameters and breiman referred to it later as a `` complex and unwieldy procedure . ''another attempt , described in , was to generate new `` pseudo - data '' by randomly choosing an existing data point and moving its predictor variables a small step towards a second randomly - chosen data point .the new predictor values , together with the response for the original data point , gave the pseudo - data .the step size was chosen to be uniform on the interval where was a parameter of the method .although the results appeared promising , the method did not give improvements on large datasets and the paper was never published .breiman tried to improve upon bagging in a number of other ways .his `` iterated '' or `` adaptive '' bagging was designed to reduce the bias of bagged regressions by successively altering the output values using the out - of - bag data .naturally , this biases the out - of - bag generalization error estimates , but breiman found that for the purpose of bias reduction it worked well . in a similar vein , provided an alternative to bagging by combining predictors fit to data for which only the output variables have been perturbed .it s not clear whether these ideas would have endured because breiman did not release code and they were discarded once he discovered random forests .while breiman developed bagging , freund and schapire worked on adaboost [ , , ] .breiman referred to the adaboost algorithm as `` the most accurate general purpose classification algorithm available '' . like bagging , adaboost combined a sequence of predictors .unlike bagging , each predictor was fit to a sample from the training data , with larger sampling weights given to observations that had been misclassified by earlier predictors in the sequence .the predictions were combined using performance weights . in a personal communication ,breiman wrote , `` some of my latest efforts are to understand adaboost better .its really a strange algorithm with unexpected behavior .its become like searching for the holy grail ! ! '' in his quest , breiman produced a series of papers [ ( ) ] .he noted in that if adaboost `` is run far past the point at which the training set error is zero , it gives better performance than bagging on a number of real data sets . ''this was a great mystery and breiman was determined to get to the bottom of it . in , breiman constructed a more general class of algorithms `` arcing , '' of which adaboost , ( `` arc - fs '' ) was a special case. 
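for concreteness , here is a compact sketch of the adaboost reweighting scheme with stumps , written in the `` weighted classifier '' form rather than the weighted - resampling form ( a distinction taken up in the next sentence ) ; labels are coded as -1/+1 and the data are synthetic .

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

def adaboost(X, y, n_rounds=50):
    """discrete adaboost with stumps; y must be coded as -1/+1."""
    n = len(y)
    w = np.full(n, 1.0 / n)
    stumps, alphas = [], []
    for _ in range(n_rounds):
        stump = DecisionTreeClassifier(max_depth=1).fit(X, y, sample_weight=w)
        pred = stump.predict(X)
        err = np.clip(np.sum(w[pred != y]) / np.sum(w), 1e-10, 1 - 1e-10)
        alpha = 0.5 * np.log((1 - err) / err)   # performance weight
        w *= np.exp(-alpha * y * pred)          # up-weight misclassified cases
        w /= w.sum()
        stumps.append(stump)
        alphas.append(alpha)
    return stumps, np.array(alphas)

def predict(stumps, alphas, X):
    votes = sum(a * s.predict(X) for s, a in zip(stumps, alphas))
    return np.sign(votes)

rng = np.random.default_rng(2)
X = rng.standard_normal((500, 2))
y = np.where(X[:, 0] ** 2 + X[:, 1] ** 2 > 1.4, 1, -1)
stumps, alphas = adaboost(X, y)
print("training error:", np.mean(predict(stumps, alphas, X) != y))
```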
one contribution of was that breiman removed the randomness of boosting by using a weighted version of the classifier instead of sampling weights .focusing on bias and variance , he concluded that `` arcing does better than bagging because it does better at variance reduction '' , but gave examples in which the main effect of adaboost was to reduce bias and proposed their own reasons for why boosting worked so well .breiman thought the explanation was incomplete .breiman s work on half and half bagging was stimulated by one of the referees of , who commented that the probability weight at a given step was equally divided between the points misclassified , and those correctly classified , at the previous step . in breimandivided the data into two parts , one containing `` easy '' points , the other `` hard '' points , based on previous classifiers in the sequence .he randomly sampled an equal number of cases from both groups and fitted a classification tree .for the first time , the tree was grown deep ( one example per terminal node ) , which he later carried over to random forests . in , he showed that adaboost is a `` down - the - gradient '' method for minimizing an exponential function of the error .independently , presented `` the statistical view of boosting . '' about his `` infinity theory '' paper , breiman stated in august 2000 : `` i ve been compulsively working on a theory paper about tree ensembles which i got sick and tired of but knew that if i did nt keep going it would never get finished . ''the paper was released as a technical report , cited by and , among others .a later version was published as and in this paper breiman showed that the population version of adaboost was bayes - consistent .in the meantime , several publications , including , suggested that adaboost could overfit in the limit and showed that in the finite sample case , adaboost was only bayes - consistent if it was regularized .in the light of boosting , breiman spent a lot of time trying to improve individual trees [ , ] and bagged trees [ ( ) ] .he also worked very hard to understand what was going on with boosting [ ( ) ] .however , he never seriously produced a boosting algorithm for practical use , and i believe the reason was that he wanted a method that could give meaningful results for data analysis , not just prediction , and he did nt think he could get this by combining dependent predictors . the culmination of his work on bagging and how to improve it , and his work trying to understand boosting , was a method breiman called `` random forests '' ( rf ) .random forests fit trees to independent bootstrap samples from the data .the trees were grown large ( for classification ) and at each node independently , predictors were chosen out of the available , and the best possible split on these predictors was used . 
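the recipe just described is what modern implementations expose directly . a minimal usage sketch with scikit - learn is given below , where max_features="sqrt" plays the role of the random subset of predictors tried at each node and oob_score reuses the out - of - bag cases ; the dataset is synthetic and the settings are illustrative .

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

X, y = make_classification(n_samples=1000, n_features=20, n_informative=5,
                           random_state=0)
rf = RandomForestClassifier(
    n_estimators=500,      # many trees, each grown on its own bootstrap sample
    max_features="sqrt",   # random subset of predictors tried at each node
    oob_score=True,        # reuse the ~37% out-of-bag cases for error estimation
    random_state=0,
).fit(X, y)                # trees are grown deep by default (max_depth=None)

print("out-of-bag accuracy:", rf.oob_score_)
print("most important variables:", np.argsort(rf.feature_importances_)[::-1][:5])
```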
as a default for classification , breiman settled on choosing . in rf we see a synthesis of the bagging ideas ( bootstrapping ) , along with ideas that came from boosting ( growing large trees ) , and breiman s understanding of how to increase instability ( randomly choosing predictors at each node ) to get more accurate aggregate predictions . once he came up with rf , breiman stopped working on new algorithms and started work on how to get the most out of the rf results . he developed measures of variable importance and proximities between observations . together , we developed a program for visualizing and interpreting rf results ( available from http://www.math.usu.edu/~adele/forests/cc_graphics.htm ) . chao chen and andy liaw worked with breiman on ways to adjust rf for unbalanced classes . vivian ng worked with him on detecting interactions . in his last technical report , breiman showed consistency for a simple version of rf [ ] . but the work on rf did not stop when breiman died . several extensions have been published ; for example , developed a variable selection procedure , introduced quantile regression forests , and , considered forests for survival analysis . although theory is still thin on the ground , showed that rf behaves like a nearest neighbor classifier with an adaptive metric and biau , devroye and lugosi made some progress on consistency in a paper dedicated to breiman s memory . numerous applied articles have appeared and even a number of youtube videos . i believe breiman would be truly delighted at the popularity of the method . leo developed his own code , invariably in fortran . i collaborated with him on the random forests fortran code and documentation http://www.math.usu.edu/~adele/forests/cc_home.htm . andy liaw and matt wiener developed an interface to r . although leo supported the r release and admired the free - software philosophy of r , he regarded r as a tool for `` ph.d . statisticians '' and he wanted his code to also be available with an easy to use graphical user interface ( gui ) . gui - driven software for classification and regression trees and random forests is available from salford systems . versions of trees , random forests and archetypes are available in r ( packages rpart , randomforests , and archetypes ) . in addition to his papers , breiman wrote three textbooks [ ( ) ] , the first of which is in siam s `` classics of mathematics '' series . perhaps even more impressive is the fact that other scholars are now writing texts that refer extensively to breiman s work , including trees , bagging and random forests [ see , and ] . breiman passionately believed that statistics should be motivated by problems in data analysis . comments such as `` if statistics is an applied field and not a minor branch of mathematics , then more than 99% of the published papers are useless exercises '' show how deeply he believed that statistics needed a change of direction .
when he heard that was to be published with discussion in _ the annals of statistics _ , he commented that `` it would sure liven things up maybe get some blood moving in the statistical main stream of asymptopia '' ( personal communication ) . although it is not widely cited , i believe breiman s `` two cultures '' paper is one of his most widely read , at least among statisticians . the paper contained breiman s views about where the field was going and what needed to be done . to conclude , he said : `` the roots of statistics , as in science , lie in working with data and checking theory against data . i hope in this century our field will return to its roots . there are signs that this hope is not illusory . over the last ten years , there has been a noticeable move toward statistical work on real world problems and reaching out by statisticians toward collaborative work with other disciplines . i believe this trend will continue and , in fact , has to continue if we are to survive as an energetic and creative field . ''
breiman , l. ( 1995b ) . reflections after refereeing papers for nips . in _ the mathematics of generalization : proceedings of the sfi / cnls workshop on formal approaches to supervised learning , volume 1992 _ ( d. h. wolpert , ed . ) . westview press , boulder , co.
freund , y. and schapire , r. ( 1996 ) . experiments with a new boosting algorithm . in _ machine learning : proceedings of the thirteenth international conference _ . morgan kaufmann , san francisco , ca .
|
leo breiman was a highly creative , influential researcher with a down - to - earth personal style and an insistence on working on important real world problems and producing useful solutions . this paper is a short review of breiman s extensive contributions to the field of applied statistics .
|
leukocyte traffiking plays a central role in the immune response of vertebrates .leukocytes constantly circulate in the cardiovascular system and enter into tissue and lymph through a multi - step process involving rolling on the endothelium , activation by chemokines , arrest , and transendothelial migration .a key molecule in this process is l - selectin , a leukocyte - expressed adhesion receptor which is localized to tips of microvilli and binds to glycosylated ligands on the endothelium .its properties are optimized for initial capture and rolling under physiological shear , as confirmed by recent experimental data and computer simulations .in contrast to tethering through other receptor systems like p - selectin , e - selectin or integrins , appreciable tethering through l - selectin and subsequent rolling only occurs above a threshold in shear , even in cell - free systems .downregulation by low shear is unique for l - selectin tethers and might be necessary because l - selectin ligands are constitutively expressed on circulating leukocytes , platelets and subsets of blood vessels .the dissociation rate of single molecular bonds is expected to depend exponentially on an externally applied steady force ( _ bell equation _ ) . quantitative analysis with regular video camera ( time resolution of 30 ms ) of l - selectin tether kinetics in flow chambers above the shear threshold resulted in first - order dissociation kinetics , with a force dependence which could be fit well to the bell equation , resulting in a force - free dissociation constant of hz .these findings have been interpreted as signatures of single l - selectin carbohydrate bonds .however , recent experimental evidence suggests that l - selectin tether stabilization involves multiple bonds and local rebinding events .evans and coworkers used the biomembrane force probe to measure unbinding rates for single l - selectin bonds as a function of loading force . modeling bond rupture as thermally activated escape over a sequence of transition state barriers increasingly lowered by rising force , these experiments revealed two energy barriers along the unbinding pathwaythe inner barrier corresponds to ca-dependent binding through the lectin domain and explains the high strength of l - selectin mediated tethers required for cell capture from shear flow .extracting barrier properties from the dynamic force spectroscopy data allows to convert them into a plot of dissociation rate as a function of force . in this way, results from dynamic force spectroscopy and flow chamber experiments can be compared in a way which is independent of loading rate . 
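as a preview of that comparison , the sketch below evaluates the bell relation k ( f ) = k_0 exp ( f / f_beta ) for two illustrative force scales , one strongly force - sensitive and one weakly force - sensitive ; the numbers are assumptions chosen only to show the kind of contrast at stake , not fitted values from either experiment .

```python
import numpy as np

def bell_rate(force_pn, k0_hz, f_beta_pn):
    """bell relation: k(F) = k0 * exp(F / F_beta)."""
    return k0_hz * np.exp(force_pn / f_beta_pn)

forces = np.array([0.0, 50.0, 100.0, 200.0])   # pN

# assumed parameters for illustration only: a force scale of ~30 pN gives a
# ~1000-fold rise over 200 pN, a force scale of ~100 pN only a ~7-fold rise
for label, f_beta in [("single-bond-like", 30.0), ("tether-like", 100.0)]:
    k = bell_rate(forces, k0_hz=10.0, f_beta_pn=f_beta)
    print(label, "fold increase from 0 to 200 pN:", k[-1] / k[0])
```

with these invented numbers the two parameter sets differ by roughly two orders of magnitude in how strongly force accelerates dissociation , which is the kind of discrepancy described next .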
in detail , evans and coworkers found a 1000-fold increase in dissociation rate as force rises from 0 to 200 pn , in marked contrast to tether dissociation kinetics as measured in flow chamber experiments , which increases at most 10-fold over this range .therefore additional stabilization has to be involved with leukocyte tethers mediated by l - selectin .dwir and coworkers used flow chambers to study tethering of leukocytes transfected with tail - modified mutants of l - selectin .they found that tether dissociation increases with increased tail truncation , possibly since tail truncation leads to decreased cytoskeletal anchorage and increased mobility .more recently , dwir and coworkers found with high speed video camera ( time resolution of 2 ms ) that l - selectin tethers form even below the shear threshold at shear rate hz , albeit with a very fast dissociation rate of hz , undetectable with regular camera .thus the shear threshold results from insufficient tether stabilization at low shear . using systematic changes in viscosity ( which changes shear stress , butnot shear rate ) , dwir and coworkers were able to show that at the shear threshold , tether lifetime is prolongued by a factor of due to shear - mediated cell transport over l - selectin ligand .they suggested that sufficient transport might be needed for formation of additional bonds . with more than one bond being present , rebinding then could provide the tether stabilization observed experimentally . in this paper, we present a theoretical model for the interplay between bond rupture , l - selectin mobility and ligand rebinding within small clusters of l - selectin bonds , which interprets the recent experimental results in a consistent and quantitative way . traditionally , tether dissociation at low ligand density has been interpreted as single molecule rupture due to the observed first order dissociation kinetics and a shear dependence which can be fit well to the bell equation .here we demonstrate that the same features result for small clusters of multiple bonds with fast rebinding .our results suggest that the shear threshold corresponds to the formation of multiple contacts and that single l - selectin bonds decay too rapidly as to provide functional leukocyte tethers .determined from kinetic analysis of flow chamber experiments plotted as function of shear rate .solid line with circles : wildtype . dashed line with diamonds :tail - deleted mutant .dotted line with squares : wildtype with 6 percent of ficoll , which changes viscosity and thus shear stress ( but not shear rate ) by a factor of 2.6 .these data suggest that the shear threshold is a transport - related rather than a force - related issue , and that the shear threshold is not about ligand recognition , but about tether stabilization . ]our experimental procedures have been described before elsewhere .three variants of human l - selectin were stably expressed in the mouse 300.19 pre b cell line .wildtype , tail - truncated mutant and tail - deleted mutant have the same extracellular domains and differ only in their cytoplasmic tails .l - selectin mediated tethering was investigated in a parallel plate flow chamber . the main ligand used was pnad , the major l - selectin glycoprotein ligand expressed on endothelium . 
for immobilization in the flow chamber ,the ligand was diluted in such a way that no rolling was supported at shear rates lower than 100 hz ( dilution 10 ng / ml in the coating solution , which corresponds to an approximate scaffold density of ) .single tethers were monitored with video microscopy at 2 ms resolution and the microkinetics were analyzed by single cell tracking as described previously .the logarithm of the number of cells which pause longer than time is plotted as a function of and usually gives a straight line indicative of an effectively first order dissociation process .the slope is the tether dissociation rate and is plotted as a function of shear rate in fig . 1 .this plot shows that below the shear threshold of 40 hz , the dissociation rate is 250 hz , independent of tail mutations and viscosity of the medium . above the shear threshold ,the dissociation rate becomes force - dependent , with a dependence on shear stress which can be fit well to the bell equation . here is the force - free dissociation rate and is the bond s internal force scale .the force on an undeformed 300.19 lymphocyte with radius m follows from stokes flow around a sphere close to a wall . taking into account the lever arm geometry provided by the tether holding the cell at an angle of 50 , the force acting on the l - selectin bonds can be calculated to be pn per dyn / cm of shear stress .fitting the bell equation to the wildtype data from fig .1 gives similar values as obtained in earlier studies , namely a force - free dissociation constant of hz and an internal force scale pn ( corresponding to a reactive compliance of ) . at the shear threshold ,we find 14-fold and 7-fold reduction in dissociation rate for wildtype and tail - truncated mutant , respectively .adding 6 volume percent of the non - toxic sugar ficoll increases viscosity from 1 cp to 2.6 cp .thus shear stress is increased by a factor of 2.6 , whereas shear rate is unchanged . at the shear threshold , this increases wildtype dissociation 3-fold , roughly as expected from the fit to the bell equation .most importantly , there is no shift of the shear threshold as a function of shear rate .this indicates that the shear threshold results from shear - mediated transport , rather than from a force - dependent process .* shear - mediated transport . * at the shear threshold at shear rate hz ( corresponding to shear stress dyn / cm for viscosity cp ) and for small distance between cell and substrate , a cell with radius m will translate with hydrodynamic velocity / ms and at the same time rotate with frequency hz .therefore the cell surface and the substrate surface will move relatively to each other with an effective velocity nm / ms . in averagethere is no normal force which pushes the cell onto the substrate , but since it moves in close vicinity to the substrate , it can explore it with this effective velocity . thus there exists a finite probability for a chance encounter between a l - selectin receptors on the tip of a microvillus and a carbohydrate ligand on the substrate .here we focus on the case of diluted ligands , with a ligand density of .then the average distance between single ligands is nm , that is larger than the lateral extension of the microvilli , which is nm . therefore the first tethering event is very likely to be a single molecular bond ( compare fig .if this first bond has formed , the microvillus will be pulled straight and the cell will slow down .it will come to a stop on the distance of order m ( e.g. 
the rest length of a microvillus is m ) .this takes the typical time ms . during this time, the cell can explore an additional distance of the order of nm . the experimental data presented in fig .1 suggest that this is the minimal transport required to establish a second microvillar contact which is able to contribute to tether stabilization ( compare fig .* single bond loading . *if tether duration was much longer than the time over which the cell comes to a stop , the single bond dissociation rate below the shear threshold should increase exponentially with shear rate according to the bell equation .however , this assumption is not valid in our case , because tether duration and slowing down time are both in the millisecond range .3 shows that indeed the bell equation ( dotted line ) does not describe the wildtype data from fig . 1 ( dashed line with circles ) . in order to model a realistic loading protocol , we assume that the force on the bond rises linear until time and then plateaus at the constant force arising from shear flow . note that initial loading rate scales quadratically with shear rate , because and .the dissociation rate for this situation can be calculated exactly .the result is given in the supplemental material and is plotted as dash - dotted line in fig .it is considerably reduced towards the experimentally observed plateau .agreement is expected to increase further if initial loading is assumed to be sub - linear .a scaling argument shows the main mechanism at work . for the case of pure linear loading ,the mean time to rupture is , where is the exponential integral .there are two different scaling regimes for slow and fast loading , which are separated by the critical loading rate . for slow loading , ,a large argument expansion gives , that is the bond decays by itself before it starts to feel the effect of force .for fast loading , a small argument expansion gives , which is also found for the most frequent time of rupture in this regime . in our case , hz , pn and pn / s .at the shear threshold , pn / s and we are still in the regime of slow loading , .this suggests that tethers below the shear threshold correspond to single l - selectin carbohydrate bonds which decay before the effect of force becomes appreciable .this does not imply that the bonds do not feel any force ( after all the cell is slowed down ) , but that we are in a regime in which as a function of shear does not change appreciably , as observed experimentally . as a function of shear rate compared to experimentally measured wildtype data from fig . 1 ( dashed line with circles ) . dotted line : the single bond dissociation rate with force - free dissociation rate hz and constant instantaneous loading increases exponentially according to the bell equationdash - dotted line : it is reduced towards the experimentally observed plateau below the shear threshold at hz by including the effect of finite loading rates .solid lines from top to bottom : cluster dissociation rate for two - bonded tether with rebinding rate and . above the shear threshold , the two - bonded tether with agrees well with the experimentally measured data . ] * single bond rebinding . 
*single bond rupture is a stochastic process according to the dissociation rate given by the bell equation .if ligand and receptor remain in spatial proximity after rupture , rebinding becomes possible .we define the single molecule rebinding rate to be the rate for bond formation when receptor and ligand are in close proximity .if bond formation was decomposed into transport - determined formation of an encounter complex and chemical reaction of the two partners , then would correspond to the on - rate for reaction .it has the dimension of 1/s and should not be confused with two- or three - dimensional association rates , which have dimensions of m/s ( equivalently m / ms ) and m/s ( equivalently 1/ms ) , respectively . should depend mainly on the extracellular side of the receptor . in the following, it will therefore be assumed to be the same for wildtype and mutants .there are two mechanisms which might prevent rebinding within an initially formed cluster : the single receptor might escape from the rebinding region due to lateral mobility , or the receptor might be carried away from the ligand because the cell is carried away by shear flow .2c shows schematically the interplay between rupture , rebinding and mobility for single l - selectin receptors .we start with the first case , that is diminished rebinding due to lateral receptor mobility .since increased tail truncation decreases interaction with the cytoskeleton , lateral mobility increases from wildtype through tail - truncated to tail - deleted mutant . for each receptor type , we assume an effective diffusion constant . the conditional probability for rebinding depends on absolute time since rupture .we approximate it by the probability that a particle with two - dimensional diffusion , but without capture is still within a disc with capture radius at time , .thus the time scale for the diffusion correction is set by , the time to diffuse the distance of the capture radius .the diffusion constant for the wildtype can be estimated to be cm/s , with the one for the tail - deleted mutant being at least one order of magnitude larger .a typical value for the capture radius is nm . then the time to diffuse this distance is and for wildtype and tail - deleted mutant , respectively . for smaller times , , plateaus at its initial value . for larger times , , it decays rapidly towards zero .the single molecule behavior is governed by the dimensionless number , which is the ratio of timescales set by diffusion and rebinding .diffusion does not interfere with rebinding as long as .our theory therefore predicts that for wildtype with diffusion constant cm/s and capture radius nm , hz . for the tail - deleted mutant , mobility does interfere with rebinding and we must have .if we assume that in this case is smaller by one order of magnitude , then hz .thus we can conclude that should be of the order of hz .* tether stabilization through multiple bonds . * we now turn to the possibility that spatial proximity required for rebinding is established by multiple contacts .tethers above the shear threshold are modeled as clusters of bonds , which in practice are expected to be distributed over at least two microvilli . 
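before the cluster picture introduced in the last two sentences is developed further , the single - receptor competition between rebinding and diffusive escape described above can be checked with a few lines of arithmetic . the sketch below uses one reading of the stated approximation , namely that the probability of still being inside a disc of radius r_0 after free two - dimensional diffusion for a time t is 1 - exp ( - r_0 ^ 2 / 4 d t ) ; all numerical values are illustrative assumptions rather than the fitted values of the paper .

```python
import numpy as np

def p_inside_disc(t_s, d_cm2_s, r0_cm):
    """free 2d diffusion from the origin, no capture: probability of still
    being inside a disc of radius r0 at time t, i.e. 1 - exp(-r0^2 / (4 D t))."""
    return 1.0 - np.exp(-r0_cm ** 2 / (4.0 * d_cm2_s * t_s))

r0 = 10e-7      # capture radius: 10 nm expressed in cm (assumed)
k_on = 1.0e4    # single-molecule rebinding rate in Hz (assumed)
for label, d in [("anchored (slow diffusion)", 1e-10), ("mobile (fast diffusion)", 1e-9)]:
    t_diff = r0 ** 2 / d                    # time to diffuse the capture radius
    print(label,
          " t_diff = %.1e s," % t_diff,
          " rebinding attempts before escape ~ k_on * t_diff = %.0f," % (k_on * t_diff),
          " P(still inside) after 1/k_on = %.2f" % p_inside_disc(1.0 / k_on, d, r0))
```

with that single - molecule estimate in hand , the cluster of bonds introduced above can be described state by state :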
at any timepoint ,each of the bonds is either closed or open .the way force is shared between the closed bonds depends on the details of each tether realization .however , we expect that only those realizations will contribute significantly to the long - lived tethers above the shear threshold in which different bonds share force more or less equally .this most likely corresponds to two microvilli being bound with similar latitude in regard to the direction of shear flow . with this assumption, the force used in the single molecule dissociation rate has to be overall force divided by the number of closed bonds .if one bond ruptures , force is redistributed among the remaining bonds .open bonds can rebind with the rebinding rate .if rebinding occurs , force again is redistributed among the closed bonds . in general , in the absence of diffusion cluster lifetime , but not the full cluster dissociation probability function can be calculated exactly .we first discuss the case without loading or diffusion , thus focusing on the role of rebinding .as argued in the supplemental material , for small rebinding rate , , cluster lifetime scales logarithmically rather than linear with cluster lifetime . this weak increase in with results because different bonds decay not one after the other , but on the same time scale .the exact treatment shows that for clusters of , , , and bonds without rebinding , lifetime is prolongued by , , , and , respectively . in order to achieve 14-fold stabilization as observed experimentally at the shear threshold , one needs the astronomical number of bonds . in practice , for the case of dilute ligand discussed here , only very few bonds are likely .therefore even in the presence of multiple bonds , rebinding is essential to provide tether stabilization . and ( right ) and hz ( left ) .dashed lines : the same with pn .dotted lines : , and mobility parameter ( right ) and ( left ) , respectively .inset : l - selectin mediated tethers show bell - like shear dependence even in the presence of l - selectin mobility .solid lines from bottom to top : no mobility , and . ] in general , fast rebinding is much more efficient for tether stabilization than large cluster size .our calculations predict that in order to obtain 14-fold stabilization for the cases , and , one needs , and hz , respectively .the value hz obtained for the case is surprisingly close to the estimate hz obtained above via a completely different route , namely the competition of rebinding and diffusion for a single molecule .therefore in the following we restrict ourselves to the simple case of two bonds being formed above the shear threshold ( most probably by two microvilli ) . in this case, cluster lifetime can be calculated to be : a derivation of this result is given in the supplemental material . in fig .3 we use to plot the dissociation rate for the two - bonded tether ( identified with the inverse of cluster lifetime ) as a function of shear rate for different values of rebinding .the shear threshold at hz corresponds to .it follows from that for this value of , 14-fold stabilization in comparision with the force - free single bond lifetime is achieved for .for hz , this corresponds to a rebinding rate of hz .thus again we arrive at the same order of magnitude estimate , hz .3 shows that with this value for , agreement between theory and experimental wild - type data above the shear threshold is surprisingly good .* relation to biacore . 
*we now discuss how our estimate relates to biacore data for l - selectin . in this experiment ,l - selectin was free in solution and glycam-1 immobilized on the sensor surface , which makes it a monovalent ligand .for the equilibrium dissociation constant , the authors found m .this unusually low affinity results from a very large dissociation rate , which they estimated to be hz .the results presented in fig . 1 seem to suggest that the real dissociation rate hz .however , surface anchorage of both counter - receptors often reduces bond lifetime by up to two orders of magnitude .this has been demonstrated experimentally for several receptor - ligand systems and might result from the reduction in free enthalpy of the anchored bond .thus it might well be that the dissociation rate hz found for surface anchored bonds might be reduced down to hz for free l - selectin binding to surface - bound ligand . then the association rate 1/ms . for a capture radius nm and a three - dimensional diffusion constant cm/s ,the diffusive forward rate in solution is 1/ms . because , the receptor - ligand binding in solution is reaction - limited , as it usually is .as explained above , can be identified with the rate with which an encounter complex tranforms into the final product . since bond formation is reaction - limited , hz , where is the dissociation constant of diffusion .thus this estimate agrees well with the two other estimates derived above .in order to obtain effective dissociation rates in the presence of diffusion , one has to use computer simulations . for each parameterset of interest , we used monte carlo simulations to simulate 5,000 realizations according to the rates given above .more details are described in the supplemental material . in general , our simulations show that for strong rebinding , that is , the effective dissociation kinetics of small clusters is first order . in fig . 4 , this is demonstrated for the case .the plot shows the logarithm of the simulated number of tethers lasting longer than time for different parameter values of interest .all curves are linear , even in the presence of mobility , and the slopes can be identified with the dissociation rates .for example , hz and pn yields the same effective first order dissociation rate as hz and , thus rebinding can rescue the effect of force .our simulations also show that cluster dissocation rate as a function of force fits well to the bell equation for .in particular , this holds true in the presence of l - selectin mobility , as shown in the inset of fig .4 . for and without mobility ( vanishing diffusion constant ) , lifetime at the shear threshold is 12-fold increased compared with single bond dissociation . with our estimate for wildtype mobility ( ) , 10-fold stabilization takes place . for tail - deleted mobility ( ), only 1.5-fold stabilization occurs .this effect is more dramatic than observed experimentally , where stabilization for wildtype and tail - deleted mutant are 14-fold and 7-fold , respectively . 
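a stripped - down , time - discretized monte carlo sketch in the spirit of these simulations is given below : a two - bond cluster in dimensionless units ( time in units of the force - free single bond lifetime , force in units of the bell force scale ) , with bell - type rupture , equal load sharing , rebinding , and the rebinding rate damped by the same crude disc - escape factor used above . the parameter values are illustrative , and the number of runs is far smaller than the 5,000 realizations used for the published figures .

```python
import numpy as np

rng = np.random.default_rng(3)

def tether_lifetime(f, gamma, delta, dt=1e-3, t_max=50.0):
    """one realization of a two-bond tether in dimensionless units.
    f: total force / F_beta, shared equally by the closed bonds
    gamma: rebinding rate / k_off(0)
    delta: r0^2 * k_off(0) / (4 D); delta = 0 switches the mobility penalty off
    """
    closed, t, t_open = 2, 0.0, 0.0
    while t < t_max:
        t += dt
        if closed == 2:
            if rng.random() < 2.0 * np.exp(f / 2.0) * dt:   # one of the two bonds breaks
                closed, t_open = 1, t
        else:  # the single remaining bond carries the full force
            if rng.random() < np.exp(f) * dt:
                return t                                    # tether dissociates
            near = 1.0 if delta == 0 else 1.0 - np.exp(-delta / (t - t_open + 1e-12))
            if rng.random() < gamma * near * dt:
                closed = 2                                  # the open bond rebinds
    return t_max

f, gamma = 2.0, 40.0
print("single-bond mean lifetime at this force:", np.exp(-f))
for delta in (0.0, 10.0, 0.001):   # no penalty, slow escape, fast escape
    times = [tether_lifetime(f, gamma, delta) for _ in range(1000)]
    print("delta = %6.3f   mean two-bond lifetime = %.2f" % (delta, np.mean(times)))
```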
in practice ,the mobility scenario is certainly more complicated and is expected to smooth out the threshold effect arising from our modeling .in this paper , we have presented biophysical modeling of l - selectin tether stabilization in shear flow based on recently published flow chamber data with high temporal resolution .our analysis suggests that the 14-fold stabilization observed at the shear threshold results from formation of multiple contacts and a single molecule rebinding rate of the order of hz , which is remarkably faster than the force - independent dissociation rate hz observed below the shear threshold . using computer simulations , we showed that for such strong rebinding , the experimentally observed first order dissociation and bell - like shear force dependence follow from the statistics of small clusters of bonds . despite the good quantitative agreement achieved here between experimental data and our model , it is important to state that it can not be expected to predict all details of the experimental results . in practice ,the formation of bonds is a stochastic process and there will be a statistical mixture of differently sized and differently loaded clusters , involving different microvilli and different scaffolds of l - selectin ligands .cytoskeletal anchorage of the different ligand - occupied l - selectin molecules might also change in time in a complex way .nevertheless , by focusing on the case of two bonds ( possibly on two different microvilli ) with shared loading and mobility - dependent rebinding , we obtained quantitative explanations for many conflicting observations from flow chamber experiments and biomembrane force probes , which have not been interpreted in a consistent way before .several explanations have been proposed for the shear threshold effect before .chang and hammer suggested that faster transport leads to increased probability for receptor ligand encounter . yetthe new high resolution data from flow chamber experiments indicate that below the shear threshold , the issue is insufficient stabilization rather than insufficient ligand recognition . 
chen and springer suggested that increased shear helps to overcome a repulsive barrier , possibly resulting from negative charges on the mucin - like l - selectin ligands .however , dwir and coworkers showed that small oligopeptide ligands for l - selectin presented on non - mucin avidin scaffolds exhibit the same shear dependence as their mucin counterparts .evans and coworkers have argued that increased shear leads to cell flattening and bond formation .however , dwir and coworkers found that fixation of psgl-1 presenting neutrophils does not change the properties of tethers formed on low density immobilized l - selectin , while they do destabilize psgl-1 tethers to immobilized p - selectin ( dwir and alon , unpublished data ) .these data suggest that cell deformation as well as stretching and bending of microvilli do not play any significant role in l - selectin tether stabilization .recently , the unusual molecular property of catch bonding has been suggested as explanation for the shear threshold .however , the data by dwir and coworkers suggests that force - related processes do not account for the shear threshold of l - selectin mediated tethering .our interpretation of the shear threshold as resulting from multiple bond formation is supported by experimental evidence that increased ligand density both rescues the diffusion defect and abolishes the shear threshold .the diffusion defect can also be rescued by anchoring of cell - free tail mutants of l - selectin to surfaces , allowing them to interact with leukocytes expressing l - selectin ligands .on all ligands tested , the tail - truncated and more so the tail - deleted l - selectin mutants support considerably shorter tethers , consistent with a role for anchorage in these local stabilizaton events .one possible explanation is that cytoskeletal anchorage prevents uprooting of l - selectin from the cell .however , uprooting from the plasma membrane of neutrophils has been shown to take place on the timescale of seconds .the tail - truncated l - selectin mutant has still two charged residues in the tail , which makes it impossible to extract it from the membrane in milliseconds .receptor uprooting from the cytoskeleton only should lead to microvillus extension , which however is a slow process and has been shown to stabilize the longer - lived p - selectin mediated tethers rather than l - selectin mediated tethers .here we postulated another possibility for cytoskeltal regulation , namely restriction of lateral mobility .it has been argued before for integrin - mediated adhesion that increased receptor mobility due to unbinding from the cytoskeletal is used to upregulate cell adhesion . 
indeed increased receptor mobility is favorable for contact _ formation _ , but herewe show that it is unfavorable for contact _ maintenance _ , since it reduces the probability for rebinding .our analysis suggests that the smallest functional tethers are mediated by a least two l - selectin bonds , each on a different microvillus , working cooperatively as one small cluster .our model does not explain from which configuration a broken bond rebinds , but it suggests that this configuration is neither collapsed ( otherwise rebinding , which implies spatial proximity , was not possible ) nor strongly occupied ( otherwise diffusive escape was not possible ) .we can only speculate that complete rupture is a multi - stage process , and that the rebinding discussed here starts from some partially ruptured state .we also can not exclude that the rebinding events described here involve different partners than the dissociated ones , because both l - selectins and their carbohydrate ligands might be organized in a dimeric way .moreover , cytoplasmic anchorage might proceed in multiple steps , including some weak pre - ligand binding anchorage , which is strengthened by l - selectin occupancy with ligand .coupling between ligand binding and cytoplasmic anchorage is well - known for integrins and might also be at work with selectins. the mechanisms discussed in this paper could be effective also with other vascular counterreceptors specialized to operate under shear flow .as argued here , the exceptional capacity of l - selectin to promote functional adhesion in shear flow might not only result from fast dissociation and high strength under loading , but more so from a fast rebinding rate .indeed other vascular adhesion receptors specialized to capture cells share on - rates similar to that of l - selectin .shear flow may also promote multi - contact formation for shear - promoted platelet tethering to von willebrand factor .it may also enhance formation of multivalent and lfa-1 integrin tethers to their respective ligands .the importance of cytoskeletal anchorage in local rebinding processes of these and related adhesion receptors has not been experimentally demonstrated to date .however , the lesson drawn here from the role of l - selectin anchorage in millisecond tether stabilization may apply to these receptors as well .future studies will help confirm this hypothesis .they will also shed light on the specialized structural features acquired by these receptors and their ligands through evolution , allowing them to operate under the versatile conditions of vascular shear flow .* acknowlegdments : * we thank oren dwir , thorsten erdmann , evan evans , stefan klumpp , rudolf merkel , samuel safran and udo seifert for helpful discussions .r.a . is the incumbent of the tauro career development chair in biomedical research .is supported by the german science foundation through the emmy noether program .* model . *we consider a cluster with a constant number of parallel bonds under constant force . 
at any time , bonds are closed and bonds are open ( ) .closed bonds are assumed to rupture according to the bell equation : for convenience , we introduce dimensionless variables : dimensionless time , dimensionless dissociation rate and dimensionless force .the closed bonds are assumed to share force equally , that is each closed bond is subject to the force .thus the dimensionless dissociation rate is .as long as the receptors are held in proximity to the ligands , rebinding of open bonds can occur .therefore we assume that single open bonds rebind with the force independent association rate .the dimensionless rebinding rate is defined as .the stochastic dynamics of the bond cluster can be described by a one - step master equation p_i\ ] ] where is the probability that closed bonds are present at time .the reverse and forward rates between the different states follow from the single molecule rates as once the completely dissociated state is reached , the cell will be carried away by shear flow and the cluster can not regain stability .this corresponds to an absorbing boundary at , which can be implemented by setting .cluster lifetime is identified with the mean time to reach the absorbing state .* lifetime of two bonded cluster . * for a cluster with two bonds , , cluster lifetime can be calculated exactly in the following way . at ,the cluster starts with the initial condition .next it moves to state with probability , after the mean time .from there , it rebinds to state with probability , or dissociates with probability .the mean time for this part is .thus after two steps the system has reached state with probability or returned to state with probability , with . in detail , the probabilities and mean times for both processes are different paths to dissociation only differ in the number of rebinding events to state : we first check normalization : and then calculate cluster lifetime : this formula is given in dimensional form as eq . 1 in the main text . *cluster size versus rebinding . * for arbitrary cluster size , cluster lifetime can be obtained from the adjoint master equation . in the case of vanishing force , , the solutioncan also be found by using laplace transforms : for , this equation reduces to where are the harmonic numbers .an expansion for large gives here is the euler constant .this formula is rather good already for small values of .is easy to understand : for , dissociation is simply a sequence of poisson decays with mean times .the overall mean time for dissociation is the sum of the mean times of the subprocesses .we conclude that for vanishing rebinding , grows only weakly ( logarithmically ) with and very large cluster sizes are required to achieve long lifetimes .therefore rebinding is essential to achieve stabilization for small cluster sizes .* effect of finite loading rate .* loading and dissociation of single l - selectin bonds occur on the same time scale . 
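to make the structure of the one - step master equation concrete, the mean cluster lifetime can be computed numerically as a mean first - passage time to the absorbing state. the short python sketch below solves the linear system obeyed by the mean first - passage times; the dimensionless rates used here, r(i) = i exp(f/i) for rupture under shared load and g(i) = gamma (N - i) for rebinding, are standard bell - type choices standing in for the exact expressions elided above, so they should be read as an assumption rather than as the precise model of the paper. for two bonds at zero force the numerical value can be checked against the closed form 3/2 + gamma/2 that follows from the same assumptions, and without rebinding the lifetime grows only like the harmonic numbers, as stated above. (this illustration refers to the cluster model just described; the effect of finite loading rates is taken up again immediately after it.)

import numpy as np

def mean_cluster_lifetime(N, f, gamma):
    # mean first-passage time from the fully closed state i = N to the
    # absorbing state i = 0 of a one-step master equation with
    #   rupture rate    r(i) = i * exp(f / i)   (bell kinetics, shared load)
    #   rebinding rate  g(i) = gamma * (N - i)  (assumed forms, see text above)
    # the times tau(i) obey (r + g) tau(i) = 1 + g tau(i+1) + r tau(i-1), tau(0) = 0
    A = np.zeros((N, N))
    b = np.ones(N)
    for i in range(1, N + 1):
        r = i * np.exp(f / i)
        g = gamma * (N - i)          # g(N) = 0 closes the chain at the top state
        A[i - 1, i - 1] = r + g
        if i < N:
            A[i - 1, i] = -g
        if i > 1:
            A[i - 1, i - 2] = -r
    return np.linalg.solve(A, b)[N - 1]

# two-bond cluster at zero force: compare with the closed form 3/2 + gamma/2
for gamma in (0.0, 1.0, 10.0):
    print(gamma, mean_cluster_lifetime(2, 0.0, gamma), 1.5 + 0.5 * gamma)

# without rebinding the lifetime grows only like the harmonic numbers
print([round(mean_cluster_lifetime(N, 0.0, 0.0), 3) for N in (1, 2, 5, 10, 25)])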
as a cellis captured from shear flow and comes to a stop , force rises from cero and plateaus at a finite value .we model the initial rise as linear , with loading rate .therefore until time , followed by constant loading , where is dimensionless loading rate .since , there are only two independent parameters , and .the mean lifetime can be calculated in the usual way .we find where is the exponential integral .for , we find the result for constant loading , .for , we find the result for linear loading , .is used in the section on single bond loading and for the plot of the dash - dotted line in fig .* simulations . * in the presence of diffusion with diffusion constant , the single molecule association rate becomes a function of the time which has passed since unbinding . in this paper , we use the approximation where is the capture radius .since anaytical solutions are intractable in this case , the master equation has to be solved numerically . the standard method to do soare monte carlo simulations .unfortunately , the gillespie algorithm for exact stochastic simulations can not be used in this case , because it does not track the identity of different bonds .therefore we simulate the master equation by discretizing time in small steps . for each time step ,random numbers are drawn in order to decide how the system evolves according to the rates defined for the different processes . in detail , in the time interval ] . in fig . 4 , we plot the natural logarithm of the fraction of tethers which last longer than dimensional time as a function of , as it is common for the analysis of experimental data .the slope of this curve is identified with the dissociation rate .although this procedure involves numerical integration of the probability distribution for dissociation , and therefore leads to loss of information , its smoothing effect is essential to obtain reliable estimates for the dissociation rate in the presence of noisy data . in the inset of fig .4 , the dissociation rates obtained in this way are plotted as function of shear rate ( that is force ) and diffusion constant ( which determines the dimensionless parameter ) .firth , c. a. j. m. & bray , d. ( 2001 ) stochastic simulation of cell signaling pathways , in _ computational modeling of genetic and biochemical networks _ , eds .bower , j. m. & bolouri , h. ( mit , boston ) , pp .
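returning to the discretized - time simulation described in the * simulations * paragraph above, the procedure can be sketched in a few lines of python. every bond is tracked individually so that the rebinding rate of an open bond may depend on the time elapsed since its own rupture, which is the reason why the standard gillespie algorithm is not applicable here. the decaying rebinding rate used below, gamma0 / (1 + (D/R^2) t), is only a placeholder for the elided diffusion - limited expression, and the bell - type rupture rate is the same assumption as in the previous sketch; with the decay switched off the simulated mean lifetime should approach the master - equation result.

import numpy as np

rng = np.random.default_rng(0)

def rebinding_rate(t_open, gamma0, d_over_r2):
    # placeholder for the elided diffusion-limited rebinding rate: it decays
    # as the free receptor diffuses away from its ligand after rupture
    return gamma0 / (1.0 + d_over_r2 * t_open)

def simulate_cluster(N, f, gamma0, d_over_r2, dt=1e-3, t_max=1e4):
    # discretized-time monte carlo of an N-bond cluster; each bond is tracked
    # individually, returns the time at which the last closed bond ruptures
    closed = np.ones(N, dtype=bool)
    t_open = np.zeros(N)                         # time since rupture, per open bond
    t = 0.0
    while t < t_max:
        n_closed = closed.sum()
        if n_closed == 0:
            return t                             # absorbing state: tether dissociated
        r = np.exp(f / n_closed)                 # per-bond bell rupture rate (shared load)
        u = rng.random(N)
        rupture = closed & (u < r * dt)
        rebind = (~closed) & (u < rebinding_rate(t_open, gamma0, d_over_r2) * dt)
        closed = (closed & ~rupture) | rebind
        t_open = np.where(closed, 0.0, t_open + dt)
        t += dt
    return t_max

# immobile receptors (no decay of the rebinding rate): the mean lifetime should
# be close to the master-equation value for the same parameters
lifetimes = [simulate_cluster(2, 0.5, 1.0, 0.0) for _ in range(500)]
print(np.mean(lifetimes))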
|
l - selectin mediated tethers result in leukocyte rolling only above a threshold in shear . here we present biophysical modeling based on recently published data from flow chamber experiments ( dwir et al . , j. cell biol . 163 : 649 - 659 , 2003 ) which supports the interpretation that l - selectin mediated tethers below the shear threshold correspond to single l - selectin carbohydrate bonds dissociating on the time scale of milliseconds , whereas l - selectin mediated tethers above the shear threshold are stabilized by multiple bonds and fast rebinding of broken bonds , resulting in tether lifetimes on the timescale of seconds . our calculations for cluster dissociation suggest that the single molecule rebinding rate is of the order of hz . a similar estimate results if increased tether dissociation for tail - truncated l - selectin mutants above the shear threshold is modeled as diffusive escape of single receptors from the rebinding region due to increased mobility . using computer simulations , we show that our model yields first order dissociation kinetics and exponential dependence of tether dissociation rates on shear stress . our results suggest that multiple contacts , cytoskeletal anchorage of l - selectin and local rebinding of ligand play important roles in l - selectin tether stabilization and progression of tethers into persistent rolling on endothelial surfaces .
|
it is well known that real thermal engines can not achieve a perfect carnot cycle . in a perfect carnot cycle , the two reversible isothermal stages must be infinitely long and hence the carnot engine has zero power output .although the carnot thermal machine is impractical , it gives an upper limit on the efficiency of all thermal engines .real thermal engines work at finite cycle times and lose a finite amount of energy due to irreversible cycles and other mechanisms such as mechanical friction , heat leak and dissipative processes , etc .searching for real thermal engines which operate with optimal cycles has caught a lot of attention . here`` optimal '' refers to different optimizations of the heat engine , such as maximum efficiency , maximum power , maximum entropy production and maximum work , etc .of all these optimizations , the efficiency of thermal engines at maximum output power is a very practical problem and has been extensively studied in the literature .the efficiency of a quantum thermal engine operating at maximum power has also recently been studied .one of the most important results addressing the efficiency of a thermal engine at maximum power was given by curzon and ahlborn in 1975 .here we briefly review their result first .they made the assumption that during the time that the working medium is in contact with the hot(cold ) reservoir , the amount of heat exchanged is proportional to the temperature difference between the working medium and the reservoirs , and also to the time duration of the processes . during the heating process , which lasts time , the amount of heat absorbed by the system is where is the temperature of the heat source , the temperature of the working medium and the heat transfer coefficient of the heating process .similarly , for the cooling process which lasts time , the working medium releases heat here is the temperature of the cold source , the temperature of the working substance and the heat transfer coefficient of the cooling process .the reversibility of the adiabatic stages requires this leads to a relationship between and . by maximizing the power output of the system, they derived the famous curzon - ahlborn ( ca ) formula for the efficiency of the thermal engines at maximum power as : the ca formula describes the thermal engines of power plants very well and all the parameters here have clear physical meanings .however , as pointed out by ref . , the ca formula is neither exact nor universal , and it gives neither an upper bound nor a lower bound . in ref . , the authors considered a carnot thermal engine performing finite - time cycles .they assume that the amount of heat absorbed by the system per cycle from the hot(cold ) reservoir is given by and where is the temperature of the hot(cold ) reservoir and the time during which the thermal machine is in contact with the hot(cold ) reservoir .the second terms of eqs.([eq.1 ] ) and ( [ eq.2 ] ) give the extra entropy production per cycle when the system deviates from the reversible regime . by maximizing the power ,the efficiency of the engine can be derived .the upper and lower bounds of the thermal efficiency at maximum power are derived when the ratio approaches 0 and respectively .esposito et al s result agrees well with the observed efficiencies of thermal plants . however , why the working medium releases less heat for longer contact times with the cold reservoir as indicated by eq.([eq.2 ] ) was not explained .also , in both ref. and , only carnot engines were studied . 
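the curzon - ahlborn result reviewed above is easy to reproduce numerically. the sketch below maximizes the output power of the endoreversible cycle over the two working - medium temperatures, with the split of the cycle time between the two contacts fixed by the entropy balance of the working medium; the efficiency at the numerical optimum should coincide with 1 - sqrt(Tc/Th) independently of the two heat transfer coefficients. the grid search and the chosen numbers are purely illustrative.

import numpy as np

def ca_efficiency_at_max_power(Th, Tc, alpha=1.0, beta=1.0, n=300):
    # endoreversible (curzon-ahlborn) cycle: heat exchange proportional to the
    # temperature difference and to the contact time, with the entropy balance
    # Qh/Thw = Qc/Tcw of the working medium closing the cycle
    best_P, best_eta = -np.inf, None
    for Thw in np.linspace(Tc, Th, n, endpoint=False)[1:]:
        for Tcw in np.linspace(Tc, Thw, n, endpoint=False)[1:]:
            A = alpha * (Th - Thw) / Thw
            B = beta * (Tcw - Tc) / Tcw
            x = B / (A + B)                      # fraction of the cycle spent on the hot contact
            Qh = alpha * x * (Th - Thw)          # heat in, per unit total cycle time
            Qc = beta * (1.0 - x) * (Tcw - Tc)   # heat out, per unit total cycle time
            P = Qh - Qc
            if P > best_P:
                best_P, best_eta = P, 1.0 - Qc / Qh
    return best_eta

Th, Tc = 500.0, 300.0
print(ca_efficiency_at_max_power(Th, Tc))        # numerical maximum-power efficiency
print(1.0 - np.sqrt(Tc / Th))                    # curzon-ahlborn formula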
in this paper, we study a more general and realistic thermal engine model and derive its efficiency bounds at maximum power . in both ref. and ,the temperature of the working medium does not change during heat transferring processes which is not true for either a realistic system , such as thermal plants , or for other heat engine models , such as the otto , joule - brayton , diesel , and akinson engines .instead , we simply assume that heat transfer by a thermal engine is described by newton s law of cooling , thus it does not have to be isothermal anymore .furthermore , we also take into account the fact that the thermal capacities of the working medium in realistic systems usually could be quite different at high and low temperatures .this is also be motivated by heat engines , such as the diesel and akinson type , for which the thermal capacities are different at the two different thermal stages . with these two modifications, we argue that our model is not only more realistic , but also more general .since the efficiency and its bounds are derived by considering heat exchange processes during which the temperature of the working medium could be close to or far away from isothermal , our model could simulate heat transferring not only in carnot engines , as in ref. and , but also some other engines such as otto , joule - brayton , diesel , and akinson as described in ref. .the organization of the paper is as follows , we will first describe newton s law of cooling and derive the corresponding entropy and heat formulas , and then study the thermal efficiency at maximum power for two limiting cases .we assume heat transferred by thermal engines in contact with a heat source is described by newton s law of cooling : where is the heat capacity , medium mass , medium temperature , heat source temperature , heat transfer coefficient , and contact area . for convenience ,we denote by .though newton s law of cooling is quite simple , many other heat transfer laws can be simplified to it if the temperatures of the objects are high while the temperature difference between them is small .based on this assumption , we consider a thermal engine working between hot and cold reservoirs at temperatures and respectively , and the initial temperature of the working medium is ( ) at the beginning of the heating(cooling ) stage . the solution to eq.([eq.3 ] ) gives the temperature of the working medium at time : where .assuming that the time during which the working medium is in contact with the high temperature source is , the entropy produced during the heating process can be evaluated straightforwardly as : where .the heat exchanged between the working medium and the high temperature source is given by here and from now on we take the convention that means absorbing and releasing heat .similarly , the entropy production and heat exchange of the working medium during the cooling process are given by where and .after a thermodynamic cycle , the system returns to its initial state , and the total entropy change of the working medium should be zero , which leads to }=0.\ ] ] by noting that , and by defining , eq.([eq.4 ] ) is reduced to the power output and the efficiency of the thermal engine are given by generally , the efficiency at maximum power output can be derived using the constraint of eq.([eq.18 ] ) .however , eq.([eq.18 ] ) is a transcendental equation which can not be solved analytically . 
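the elementary relations for a single contact stage can be written down and cross - checked numerically: under newton s law of cooling the medium temperature relaxes exponentially towards the reservoir temperature, the exchanged heat follows from the temperature change, and the entropy change of the medium is c m ln(T(t)/T0) for a constant heat capacity, the same assumption made in the text. the short sketch below verifies the entropy formula against a direct integration of dQ/T; all numbers are arbitrary illustrative values.

import numpy as np

def contact_stage(T0, Tres, k, c_m, t):
    # newton's law of cooling dT/dt = -k (T - Tres) gives
    #   T(t) = Tres + (T0 - Tres) exp(-k t)
    # heat absorbed by the medium and its entropy change after a contact time t
    T = Tres + (T0 - Tres) * np.exp(-k * t)
    Q = c_m * (T - T0)                  # negative if the medium releases heat
    dS = c_m * np.log(T / T0)           # entropy change of the working medium
    return T, Q, dS

# consistency check: dS must equal the integral of dQ/T along the stage
T0, Tres, k, c_m, t_end = 300.0, 500.0, 0.8, 2.0, 1.7
T, Q, dS = contact_stage(T0, Tres, k, c_m, t_end)
ts = np.linspace(0.0, t_end, 20001)
Ts = Tres + (T0 - Tres) * np.exp(-k * ts)
dQdt = c_m * k * (Tres - Ts)            # instantaneous heat flow into the medium
integrand = dQdt / Ts
print(dS, np.sum(0.5 * (integrand[1:] + integrand[:-1]) * np.diff(ts)))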
in what follows we will focus our discussions on two special casesin this case , the contact time is long enough that the working medium can exchange heat sufficiently with the reservoirs .therefore the final temperature of the working medium is close to the heat reservoir and quite different from its initial temperature .numerically , when , we have and .thus , when is sufficiently large , which is supposed to be the case studied in ref. , can be safely ignored , and eq.([eq.18 ] ) is reduced to by plugging eq.([eq.17 ] ) into eq.([eq.10 ] ) and ( [ eq.12 ] ) , using eq.([eq.19])then the output power is given by }{\tau_{h}+\tau_{c}}.\ ] ] let , is maximized when .\ ] ] therefore , the efficiency at maximum power is given by =1-\frac{1}{\gamma}[\frac{\eta_{c}}{1-(1-\eta_{c})^{\frac{1}{1+\gamma}}}-1].\ ] ] from the above expression we see that decreases as increases . for the symmetric dissipation in which , becomes interestingly , the ca efficiency is recovered though the situation is quite different .expanding in series of , we have the coefficient of the second order term lies between and , while in ref. this term is between and which indicates a tighter bound here .the lower and upper bounds of in this case are given by a comparison of the upper and lower bounds for the long contact time limit between this work and results derived in ref. is shown as fig.[fig.scattering ] .we see that the limits derived here give much tighter bounds than those derived in ref. .we emphasize again that our model does not only apply to carnot engines . since the final temperature of the working medium after heat exchange can be quite different from its initial temperature , it is not necessarily an isothermal process and thus the engine does not need to be a carnot type engine .it can also simulate the engines described in ref. , if we take , it is the otto engine , , the joule - brayton engine , , the diesel engine and , the akinson engine .we can recover all the thermal efficiencies at maximum power derived in ref. .correspondingly , the bounds derived in this section should apply to those four types of engines mentioned above in practical conditions . in this case , the heating and cooling processes are both short .therefore the final temperature of the working medium after transferring heat is very close to its initial temperature .this is approximately what was studied in ref. where temperature of the working medium does not change during heat transfers .numerically , one can estimate that if , then and .we solve eq.([eq.18 ] ) by expanding it as series of the infinitesimal variable ( ) and matching both sides of the equation order by order ( we always keep the same order of and ) .as , to the zeroth order of , eq.([eq.18 ] ) simply gives which is trivial . to the first order of , eq.([eq.18 ] ) gives now the amounts of heat exchanged by the system during the heating and cooling processes are given by the above equations agree with the fundamental equations listed at the beginning of ref .thus if we continue our straightforward calculation , we simply recover the same results in ref . 
, including the ca efficiency .now , we continue to expand eq.([eq.18 ] ) to the second order of , we obtain another simple relation combining this with eq.([eq.20 ] ) , we get to obatian the expression for the power output , plug eqs.([eq.23 ] ) and ( [ eq.24 ] ) into eq.([eq.19 ] ) , and expand the expressions of and to the first order of again , we have is maximized by letting .note and , the unique allowed solution of is given by {h}}{\frac{\gamma\sigma_{c}-\sigma_{h}}{\sigma_{h}+\sigma_{c}}}.\ ] ] therefore the thermal efficiency at maximum power is given by by defining , and using the relation , can be expressed as (\beta+\gamma)}-\sqrt{[1+\gamma(1-\eta_{c})](1+\gamma)}}{\beta\sqrt{[1+\gamma(1-\eta_{c})](1+\gamma)}-\sqrt{[\beta+\gamma(1-\eta_{c})](\beta+\gamma)}}.\ ] ] moreover , can be expanded in a series of as the coefficient of the first order term of is , and the coefficient of the second order term lies in the range between and . in the symmetric casewhere and , we have when expanding as a series in , the coefficient of the second order term is .those results agree with the expansion of ca efficiency . to the third order of ,the difference of from is . a comparison between and the ca efficiency and our result for the symmetric case is shown in fig.[fig.1 ] .now , we estimate the bounds of . in the limits or while is finite , we recover the lower and upper limits of given in ref . interestingly our bounds on are obtained in the short contact time limit ( ) while the same results were obtained in the long contact time limit ( ) in ref .it is easy to verify that decreases as increases , but increases as increases .this means the larger the ratio between heat capacities of the working medium at the hot and cold reservoirs , the lower the efficiency at maximum output power .in summary , we presented an analysis of thermal efficiency and its bounds at maximum power for thermal engines for which the heat transferring processes are described by newton s law of cooling . in the long contact time limit , ca efficiency is recovered for symmetric thermal capacity and two tighter bounds on the thermal efficiency are derived .the model can simulate otto , joule brayton , diesel and atkinson engines in the long contact time limit . in the short contact time limit, we recover the famous ca efficiency in the first order calculation .when we proceed to the second order calculation , we derived a different efficiency formula and recovered the efficiency bounds at maximum power given by espositi , et al . in both limits ,the thermal efficiency is found to decrease as increases.this might be helpful for choosing a suitable working medium and working temperatures when designing a thermal engine whose heat transfer can be approximated by newton s law of cooling .other cases such as those associated with intermediate thermal contact time and different heat transfer laws are being investigated further .this work was supported by u.s .department of energy , office of science under grant de - fg02 - 03er46093 .h.yan thanks professor m.w .snow for support .we thank dr .changbo fu , e.smith and zhaowen tang for stimulating discussions .one of the referees for the previous version of this paper provided us very instructive suggestions and valuable references , we acknowledge it and thank him or her .
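as a small numerical cross - check of the two limiting cases discussed above: in the long contact time limit the efficiency formula derived earlier reduces to the curzon - ahlborn value for gamma = 1 and decreases monotonically with gamma, while in the short contact time limit the recovered bounds are, according to esposito and coworkers, eta_c/2 from below and eta_c/(2 - eta_c) from above; the explicit bound expressions are quoted here from the literature since they are not reproduced above. the python lines below evaluate these quantities for an illustrative eta_c.

import numpy as np

def eta_long_contact(eta_c, gamma):
    # efficiency at maximum power in the long contact time limit,
    # gamma = ratio of the heat capacities during heating and cooling
    return 1.0 - (eta_c / (1.0 - (1.0 - eta_c) ** (1.0 / (1.0 + gamma))) - 1.0) / gamma

eta_c = 0.4
print(eta_long_contact(eta_c, 1.0), 1.0 - np.sqrt(1.0 - eta_c))   # gamma = 1: curzon-ahlborn
for gamma in (0.01, 0.1, 1.0, 10.0, 100.0):
    print(gamma, eta_long_contact(eta_c, gamma))                   # decreases with gamma

# short contact time limit: bounds quoted from esposito et al., reached for
# extreme ratios of the dissipation coefficients; curzon-ahlborn lies in between
print(eta_c / 2.0, 1.0 - np.sqrt(1.0 - eta_c), eta_c / (2.0 - eta_c))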
|
we study a thermal engine model in which heat transfer obeys newton s law of cooling . the thermal efficiency and its bounds at maximum output power are derived and discussed . this model , though quite simple , can be applied not only to carnot engines but also to four other types of engines . in the long thermal contact time limit , new bounds that are tighter than those previously known are obtained ; in this case the model can simulate otto , joule - brayton , diesel , and atkinson engines . in the short contact time limit , which corresponds to the carnot cycle , the same efficiency bounds as those of esposito et al are recovered . in both limits , the thermal efficiency decreases as the ratio between the heat capacities of the working medium during the heating and cooling stages increases . this may provide guidance for the design of real engines whose heat transfer is well approximated by newton s law of cooling .
|
besides fundamental interest , the reconstruction of the photon distribution of an optical signal , plays a major role in high - rate quantum communication schemes based on light beams , and is required for implementations of linear - optics quantum computing .effective photon counters have been indeed developed , though their current operating conditions are still extreme . at present ,the most convenient method to infer photon distribution is perhaps quantum tomography , which have been applied to several quantum states with reliable statistics .however , the tomography of a state needs the implementation of homodyne detection , which in turn requires the appropriate mode matching of the signal with a suitable local oscillator at a beam splitter . as a matter of fact, quantum tomography has been developed to gain a complete characterization of the signal under investigation , and may not be suited in case where we are interested only in obtaining partial information , as for example the photon distribution .an alternative approach , based on an array of avalanche _ on / off _ photodetectors with different quantum efficiencies has been suggested and demonstrated with fiber - loop detectors . in this scheme ,repeated preparations of the signal are revealed at different quantum efficiencies , and the resulting _ on / off _ statistics is then used to reconstruct the photon distribution through maximum - likelihood estimation .the statistical reliability of the method has been also analyzed in details . in this paperwe want to further reduce the experimental requirements .we assume that avalanche photodetection may be performed only with low values of the quantum efficiency and , in addition , that only few of those values are available .then we analyze whether the photon distribution may be inferred from this scarce piece of information .we found that the use of maximum entropy principle , together with maximum likelihood estimation of moments of the distribution , provides an effective method to obtain the full photon distribution . in section [ s : due ] we describe in details the two - step maxlik - maxent method , whereas in section [ s : tre ] results from numerically simulated experiments on coherent and number states are reported .section [ s : out ] closes the paper with some concluding remarks .maximum entropy ( me ) principle is a powerful tool to infer the density matrix of a quantum state when only a _ partial _ knowledge about the system has been obtained from measurements .let us call the set of independent operators summarizing the measurements performed on a given systems . is called the _ observation level _ gained from the measurements .the me principle states that the density matrix compatible with a given is that maximizing the von neumann entropy while reproducing the known values of the operators in . in order to enforce this conditionthe inferred density operator will depend on a number of parameters whose values must be properly determined .me principle is a way of filling our ignorance about the system , without assuming more than what we have learned from measurements .the actual form of the inferred state heavily depends on the observation level : it can range from a ( nearly ) complete knowledge of the state , as it happens for quantum tomography , to the ( nearly ) complete ignorance .an example of the latter case is the knowledge of the mean photon number alone , for which me principle yields a thermal state . 
in general , the me density operator that estimates the signal for a given observation level is given by where $ ] is the partition function , and the coefficients are lagrange multipliers , to be determined by the constraints \equiv -\partial_{\lambda_{\nu}}\log{z } = \langle \mathrm{o}_{\nu } \rangle\,,\ ] ] where are the expectation values obtained from the measurements .suppose now that we would like to measure the photon statistics of a given signal using only avalanche photodetectors with efficiencies , , _i.e. _ we perform _ on / off _ measurements on the signal with different values of the quantum efficiency .the statistics of the measurements is described by the probability of the _ off _ events when the quantum efficiency is \nonumber \\ & = & \sum_{n=0}^{\infty}(1-\eta_{\nu})^n \rho_{n}\:.\end{aligned}\ ] ] in eq .( [ p_nu_def ] ) and , where the probability measure ( povm ) of the measurements is given by from the me principle we know that the best state we can infer is given by ( [ rhomaxent ] ) that , in this case , reads as follows explicit equations for the are obtained expanding the above formulas in the fock basis ; we have eq .( [ x1 ] ) can be solved numerically in order to determine the coefficients and in turn the me density operator . in the followingwe consider situations where the experimental capabilities are limited .we suppose that _ on / off _ measurements can be taken only at _ few _ and _ low _ values of the quantum efficiencies . in this casethe statistics ( [ p_nu_def ] ) can be expanded as summing the series we have where ,\end{aligned}\ ] ] are moments of the photon distribution . by inversion of eq .( [ p_nu_approx ] ) , upon a suitable truncation , we retrieve the first moments of the distribution from the _ on / off _ statistics at low quantum efficiency .the most effective technique to achieve the inversion of the above formula is maximum - likelihood .the likelihood of the _ on / off _ measurement is given by where and are respectively the number of _ off _ events , and the total number of measurement when using a detector with efficiency .in practical calculations it is more convenient to use the logarithm of ( [ likelyhood ] ) , that reads as follows : without loss of generality we can set , and divide ( [ loglik ] ) by , obtaining : where are the experimental frequencies of the _ off _ events .substituting ( [ p_nu_approx ] ) in ( [ logliknorm ] ) we find an expression for the renormalized likelihood as a function of the moments .maximization of ( [ logliknorm ] ) over the parameters leads to the following set of optimization equations : if we stop the expansion in ( [ p_nu_approx ] ) at the second order ( see below ) the derivatives in ( [ grad ] ) take the form : the system ( [ grad ] ) can be easily solved numerically given the _ on / off _ statistics as well as the number and the values of the quantum efficiencies used during the experiments .after having determined the first moments , the density matrix of the signal , according to maximum entropy principle , is given by with to be determined as to satisfy \nonumber \\ & = & \frac { \sum_{n=0}^{\infty } n^k \: \exp \left\ { -\sum_{\nu=1}^{2 } \lambda_{\nu } n^\nu \right\ } } { \sum_{n=0}^{\infty } \exp \left\ { -\sum_{\nu=1}^{2 } \lambda_{\nu } n^\nu \right\}}\:.\end{aligned}\ ] ] notice that the unknowns are contained also in the denominator and that a suitable truncation should be adopted ( which can be easily determined by imposing normalization on the me density matrix ) .we have 
performed simulated experiments of the whole procedure on coherent and number states . for this kind of signals the first two moments are sufficient to obtain a good reconstruction via me principle because their mandel parameter is less than or equal to the average photon number , while squeezed states , for which , are ruled out , requiring the knowledge of a considerably larger set of moments ( in principle , all of them ) .a number of values of the quantum efficiency ranging from to have been enough in order to achieve good reconstructions .our results are summarized in figs .[ f : fig1]-[f : fig3 ] .notice that a faithful photon statistics retrieval needs a sufficiently accurate knowledge of the s : our results are obtained using observations for each .this condition can be relaxed if we increase the number of probabilities measured , but their range of values should be kept narrow because eq .( [ p_nu_approx ] ) must hold .( black ) , and comparison with the theoretical values ( light gray ) .simulated _ on / off _ measurements have been performed with values of ranging between 1% and 5% .the number of experimental measures is for each .fidelity of the reconstruction is larger than .,scaledwidth=35.0% ] finally in fig .[ f : fig4 ] we check the robustness of the me inference , against errors in the knowledge of the parameters and that may come from ml estimation in the first step .the quality of the reconstruction has been assessed through fidelity between the inferred and the true photon distribution . as it is apparent from the plotthe reconstruction s fidelity remains large if the relative errors on both parameters are about .( black ) and comparison with the theoretical values ( light gray ) .simulations parameters are the same as in fig .[ f : fig1 ] .fidelity of the reconstruction is larger than .,scaledwidth=35.0% ] ( black ) and comparison with the theoretical values ( light gray ) .simulations parameters are the same as in fig .[ f : fig1 ] .fidelity of the reconstruction is .,scaledwidth=35.0% ]in this paper we have shown that the photon distribution of a light signal , with mandel parameter lower than or equal to its average photon number , can be reconstructed using few measurements collected by a low efficiency avalanche photodetector .the _ on / off _ statistics is used in a two steps algorithm , consisting in retrieving the first two moments of the photon distribution via a maximum - likelihood estimation , and than inferring the diagonal entries of using of the maximum entropy principle .the last step implies the solution of a nonlinear equation in order to shape the statistic to reproduce exactly the moments obtained in the first estimation .though this last process may be delicate , we showed with simulated experiments that it yields sound results when applied to coherent and number states .finally we demonstrated that the method exhibits a sufficient robustness against errors deriving from the maximum - likelihood estimation ., as a function of the average photon number and its second moment .the true values for the two parameters are printed in italics .simulations parameters are the same as in fig .[ f : fig1 ] .notice that the first two entries in the top line are set to 0 because , in these cases , the mandel parameter is smaller than -1 , so they are not physical states.,scaledwidth=35.0% ]mgap is research fellow at _ collegio alessandro volta_. arr wishes to thank s. olivares for fruitful discussions and helpful suggestions .99 c. m. caves and p. d. 
drummond , rev .phys . * 66 * , 481 ( 1994 ) .e. knill , r. laflamme , and g. j. milburn , nature * 409 * , 46 ( 2001 ) .j. kim , s. takeuchi , y. yamamoto , and h. h. hogue , appl .. lett . * 74 * , 902 ( 1999 ) ; c. kurtsiefer , s. mayer , p. zarda , and h. weinfurter , phys .lett . * 85 * , 290 ( 2000 ) ; m. pelton , c. santori , j. vukovic , b. zhang , g. s. solomon , j. plant , and y. yamamoto , phys .lett . * 89 * , 233602 ( 2002 ) .g. m. dariano , m. g. a. paris and m. f. sacchi , advances in imaging and electron physics * 128 * , 205 ( 2003 ) .m. munroe et al . , phys .a * 52 * , r924 ( 1995 ) m. raymer and m. beck in _ quantum states estimation _ , m. g. a. paris and j. ehek eds ., lect . not* 649 * ( springer , heidelberg , 2004 ) , at press . g. m. dariano and m. g. a. paris , phys . lett .a * 233 * 49 ( 1997 ) .d. mogilevtsev , opt . comm . * 156 * , 307 ( 1998 ) ; acta phys . slov . *49 * , 743 ( 1999 ) .j. , z. hradil , o. haderka , j. pe , jr . , and m. hamar , phys . rev .a * 67 * , 061801(r ) ( 2003 ) ; o. haderka , m. hamar and j. pe , eur .d * 28 * , ( 2004 ) .a. r. rossi , s. olivares , m. g. a. paris , phys . rev .a * 70 * , 055801 ( 2004 ) .e. t. jaynes , phys . rev . *106 * , 620 ( 1957 ) ; * 108 * , 171 ( 1957 ) .v. bu and g. adam , ann . phys . *245 * , 37 ( 1996 ) ; v. bu in _ quantum states estimation _ , m. g. a. paris and j. ehek eds ., lect . not* 649 * ( springer , heidelberg , 2004 ) , p. 189
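as a self - contained numerical illustration of the two - step procedure described in this paper, the python sketch below simulates on/off frequencies for a coherent state (for which the off probability is exactly exp(-eta |alpha|^2)), retrieves the first two moments by maximizing the binomial log - likelihood of the second - order expansion of p(eta), and then obtains the maximum - entropy distribution by minimizing the convex dual of the entropy - maximization problem. the truncation of the expansion at second order, the fock - space cutoff and all numerical values are illustrative choices, and the particular optimizers used here are not necessarily those of the original work.

import numpy as np
from scipy.optimize import minimize
from scipy.special import logsumexp
from scipy.stats import poisson

rng = np.random.default_rng(1)

# hypothetical measurement settings: few, low quantum efficiencies
etas = np.array([0.01, 0.02, 0.03, 0.04, 0.05])
n_runs = 10**6
mu = 4.0                                          # coherent state |alpha|^2, unknown to the estimator
p_true = np.exp(-etas * mu)                       # exact off-probability of a coherent state
f_off = rng.binomial(n_runs, p_true) / n_runs     # simulated off frequencies

# step 1: maximum-likelihood estimate of the first two factorial moments F1, F2
# from the truncated expansion p(eta) ~ 1 - eta F1 + eta^2 F2 / 2
def neg_loglik(m):
    F1, F2 = m
    p = 1.0 - etas * F1 + 0.5 * etas**2 * F2
    if np.any(p <= 0.0) or np.any(p >= 1.0):
        return np.inf
    return -n_runs * np.sum(f_off * np.log(p) + (1.0 - f_off) * np.log(1.0 - p))

F1, F2 = minimize(neg_loglik, x0=[1.0, 1.0], method="Nelder-Mead").x
n1, n2 = F1, F2 + F1                              # <n> and <n^2>

# step 2: maximum-entropy distribution rho_n ~ exp(-l1 n - l2 n^2); the lagrange
# multipliers minimize the convex dual  log Z + l1 <n> + l2 <n^2>
n = np.arange(200)                                # fock-space cutoff

def dual(l):
    return logsumexp(-l[0] * n - l[1] * n**2) + l[0] * n1 + l[1] * n2

lam = minimize(dual, x0=[0.0, 0.1], method="Nelder-Mead").x
w = np.exp(-lam[0] * n - lam[1] * n**2)
rho_me = w / w.sum()

rho_exact = poisson.pmf(n, mu)
print("estimated <n>, <n^2>:", n1, n2)
print("fidelity with the true distribution:", np.sum(np.sqrt(rho_me * rho_exact)))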
|
a method based on the maximum - entropy ( me ) principle to infer the photon distribution from _ on / off _ measurements performed with only _ few _ and _ low _ values of the quantum efficiency is addressed . the method consists of two steps : first , some moments of the photon distribution are retrieved from the _ on / off _ statistics using maximum - likelihood estimation ; then the me principle is applied to infer the quantum state and , in turn , the photon distribution . results from simulated experiments on coherent and number states are presented .
|
consider the underactuated mechanical control system where is the configuration vector with either a displacement in or an angular variable in {t_i} ] be a regular parameterization of . then letting ,the dynamics on the set in are globally described by where {{t_1}}\times { \mathbb r} ] .system is lagrangian if and only if the functions and in are -periodic , in which case the lagrangian function is given by .an immediate consequence of the foregoing result is that , when the reduced dynamics are lagrangian , the orbits of are characterized by the level sets of the energy function moreover , almost all orbits of the reduced dynamics are closed , and they belong to two distinct families , defined next .a closed orbit of the reduced dynamics is said to be a * rotation of * if is homeomorphic to a circle {t } \times { \mathbb r } : \dot{\theta}=\text{constant}\} ] via a homeomorphism of the form . in (* proposition 4.7 ) , it is shown that if the assumptions of proposition [ prop : red_dyn ] hold , then almost all orbits of are either oscillations or rotations .oscillations and rotations are illustrated in figure [ fig : closed_orbits ] .it is possible to give an explicit regular parameterization of rotations and oscillations which will be useful in what follows .[ fig : closed_orbits ] .the orbit is an oscillation , while is a rotation . ]if is a rotation with associated energy value , then we may solve for obtaining with plus sign for counterclockwise rotation , and minus sign for clockwise rotation .thus a rotation is the graph of a function , which leads to the natural regular parameterization {{{t_1 } } } \to [ { \mathbb r}]_{{{t_1}}\times { \mathbb r}} ] : the mechanical control system with dofs and controls .let be a regular vhcof order , and assume that is diffeomorphic to .as before , let {{{t_1}}}\to { \mathcal q} ] . in the latter case ,the sum is to be understood as sum modulo . ] of by the vector ( see figure [ fig : stab : dynvhc ] ) .if {{{t_1}}}\rightarrow { \mathcal q} ] represents global coordinates for . using these coordinates , and with a slight abuse of notation , the closed orbit in is given by {{{t_1 } } } \times { \mathbb r}\times { \mathbb r}\times { \mathbb r } : e(\theta,\dot \theta ) = e_0 , \ , s=\dot s=0\}.\ ] ] this is the set we will stabilize next .the objective now is to design the control input in the extended reduced dynamics so as to stabilize the closed orbit in .we will do so by adopting the philosophy of hauser et al . in that relies on an implicit representation of the closed orbit to derive the so - called transverse linearization along . roughly speaking ,this is the linearization along of the components of the dynamics that are transversal to .hauser s approach generalizes classical results of hale ( * ? ? ?* chapter vi ) , who require a moving orthonormal frame .the insight in is that orthogonality is not needed , transversality is enough .this insight allowed hauser et al . in to derive a normal form analogous to that in ( * ? ? 
?* chapter vi ) , but calculated directly from an implicit representation of the orbit .we shall use the same idea in the theorem below .we begin by enhancing the results of in two directions .first , while require the knowledge of a periodic solution , we only require a parameterization of ( something that is readily available in the setting of this paper , while the solution is not ) .second , while deals with dynamics without inputs , we provide a necessary and sufficient criterion for the exponential stabilizability of the orbit . * a general result .* our first result is a necessary and sufficient condition for a closed orbit to be exponentially stabilizable .this result is of considerable practical use , and is of independent interest . consider a control - affine system with state , where is a closed embedded submanifold of , and control input .a closed orbit is * exponentially stabilizable * for if there exists a locally lipschitz continuous feedback such that the set is exponentially stable for the closed - loop system , i.e. , there exist such that for all such that , the solution of the closed - loop system satisfies for all .note that if is exponentially stable , then is asymptotically stable .let be a positive real number . a linear -periodic system , where is a continuous and -periodic matrix - valued function , is * asymptotically stable * if all its characteristic multipliers lie in the open unit disk .a linear -periodic control system where and are continuous and -periodic matrix - valued functions , is * stabilizable * ( or the pair is stabilizable ) if there exists a continuous and -periodic matrix - valued function such that is asymptotically stable . in this case , we say that the feedback * stabilizes * system .the notion of stabilizability can be characterized in terms of the characteristic multipliers of ( see , e.g. ) .[ thm : transverse_linearization ] consider system , where is a vector field and is locally lipschitz continuous on .let be a closed orbit of the open - loop system , and let , \to { \mathcal x} ] , with a neighborhood of in and such that , the feedback exponentially stabilizes the closed orbit for .the proof of theorem [ thm : transverse_linearization ] is found in the appendix .concerning the existence of the function in the theorem statement , since closed orbits of smooth dynamical systems are diffeomorphic to the unit circle , it is always possible to find a function satisfying the assumptions of the theorem .this well - known fact is shown , e.g. , in ( * ? ? ?* proposition 1.2 ) .as for the existence of the function ] , , with , given by and if is a rotation ; and given by and if is an oscillation . 
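the characteristic multipliers that appear in these definitions are the eigenvalues of the monodromy matrix, i.e. of the state transition matrix over one period, and they are easy to compute numerically. the sketch below does this for a hypothetical damped mathieu - type system; it is meant only to illustrate how asymptotic stability of a periodic linear system can be checked in practice and is not tied to the specific systems considered in this paper.

import numpy as np
from scipy.integrate import solve_ivp

def monodromy(A, T, n):
    # integrate the matrix ode  Phi' = A(t) Phi,  Phi(0) = I,  over one period;
    # the eigenvalues of Phi(T) are the characteristic (floquet) multipliers
    def rhs(t, phi_flat):
        return (A(t) @ phi_flat.reshape(n, n)).reshape(-1)
    sol = solve_ivp(rhs, (0.0, T), np.eye(n).reshape(-1), rtol=1e-10, atol=1e-12)
    return sol.y[:, -1].reshape(n, n)

# hypothetical example: x'' + 0.2 x' + (1 + 0.3 cos t) x = 0, period T = 2 pi
T = 2.0 * np.pi
A = lambda t: np.array([[0.0, 1.0], [-(1.0 + 0.3 * np.cos(t)), -0.2]])
mults = np.linalg.eigvals(monodromy(A, T, 2))
print(np.abs(mults))   # the periodic system is asymptotically stable iff all moduli are < 1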
applying theorem [ thm : transverse_linearization ] to system ,we get the following -periodic linear system where \big|_{{z}_2=0 } , \\ & a_{13}(t ) = \eta(t ) m(\varphi_1(t ) ) \varphi_2 ^ 2(t ) \psi_3 ^ 0(\varphi_1(t ) ) , \\ & b_1(t ) = \eta(t ) m(\varphi_1(t ) ) \varphi_2(t ) \psi_5 ^ 0(\varphi_1(t ) ) , \\ & \eta(t ) = \frac{\varphi_1 ^ 2(t)+\varphi_2 ^ 2(t)}{\varphi_1'(t)\varphi_2(t ) +\varphi_2'(t ) [ \psi_1(\varphi_1(t ) ) + \psi_2(\varphi_1(t ) ) \varphi_2 ^ 2(t ) ] } .\end{aligned}\ ] ] assuming that system is stabilizable , then we may find the unique positive semidefinite solution of the periodic riccati equation to get the matrix - valued function in .theorem [ thm : transverse_linearization ] guarantees that the controller exponentially stabilizes the orbit in for the extended reduced dynamics .it remains to find an explicit expression for the map .if is a rotation , then in light of the parameterization , we may set else , if is an oscillation , using we set where is the four - quadrant arctangent function such that for all .in section [ sec : stab : dynconstr ] , we designed the feedback in to asymptotically stabilize the constraint manifold associated with the dynamic vhc . in section[ sec : stab : s_control ] , we designed the feedback in for the double integrator rendering the closed orbit exponentially stable relative to ( i.e. , when initial conditions are on ) .there are two things left to do in order to solve the vhc - based orbital stabilization problem .first , in order to implement the feedback in , we need to relate the variables to the state .second , we need to show that the asymptotic stability of and the asymptotic stability of relative to imply that is asymptotically stable . to address the first issue, we leverage the fact that , since is a closed embedded submanifold of , by ( * ? ? ?* proposition 6.25 ) there exists a neighborhood of in and a smooth retraction of onto , i.e. , a smooth map such that is the identity on .define ] such that . using the function , we now define an extension of from to a neighborhood of as follows we are now ready to solve the vhc - based orbital stabilization problem .[ thm : main_result ] consider system and let be a regular vhcof order .let {{t_1}}\to { \mathcal q} ] , consider one of the regular parametrizations {{{t_2 } } } \mapsto [ { \mathbb r}]_{{t_1}}\times { \mathbb r}\times { \mathbb r}\times { \mathbb r} ] [ -0.25cm] ] be as in the theorem statement .we claim that there exists a neighborhood of in such that the map \times { \mathbb r}^{n-1} ] .the first property was proved in ( * ? ? ?* proposition 1.2 ) . for the second property, we observe that is a diffeomorphism \times \{0\} ] . the smooth inverse of is thus \times { \mathbb r}^{n-1} ] , we have that ) \\varphi'({\vartheta } ) = \frac { 1}{\rho ( { \vartheta})}f(\varphi({\vartheta})),\ ] ] and is bounded away from zero .we now represent the control system in coordinates .the development is a slight variation of the one presented in the proof of ( * ? ? ?* proposition 1.4 ) , the variation being due to the fact that , in , it is assumed that . for the -dynamics , we have {x = f^{-1}({\vartheta},{z})}.\ ] ] we claim that the restriction of the drift term to is . 
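in practice the stabilizing T - periodic solution of the periodic riccati equation can be approximated by integrating the riccati differential equation backward in time over several periods: when the periodic pair is stabilizable (and the state weight detectable) the backward solution converges to the unique periodic stabilizing one, from which the gain K(t) = R^{-1} B(t)' P(t) is read off. the sketch below implements this idea for a generic small example; the particular system, weights and tolerances are placeholders and are not those of the pvtol application discussed later.

import numpy as np
from scipy.integrate import solve_ivp

def periodic_riccati_gain(A, B, Q, R, T, n, n_periods=40, n_samples=200):
    # approximate the stabilizing T-periodic solution of
    #   -dP/dt = A(t)' P + P A(t) - P B(t) R^{-1} B(t)' P + Q
    # by integrating backward in time (s = -t) until the terminal transient has
    # decayed, then sample the gain K(t) = R^{-1} B(t)' P(t) over one period
    Rinv = np.linalg.inv(R)
    def rhs(s, p_flat):                  # dP/ds = A'P + PA - P B Rinv B' P + Q, at t = -s
        t = -s
        P = p_flat.reshape(n, n)
        At, Bt = A(t), B(t)
        return (At.T @ P + P @ At - P @ Bt @ Rinv @ Bt.T @ P + Q).reshape(-1)
    s_grid = np.linspace((n_periods - 1) * T, n_periods * T, n_samples)
    sol = solve_ivp(rhs, (0.0, n_periods * T), np.zeros(n * n),
                    t_eval=s_grid, rtol=1e-8, atol=1e-10, max_step=T / 50)
    gains = []
    for s, p_flat in zip(sol.t, sol.y.T):
        P = p_flat.reshape(n, n)
        t_mod = (-s) % T
        gains.append((t_mod, Rinv @ B(t_mod).T @ P))
    return sorted(gains, key=lambda item: item[0])   # (t mod T, K(t)) over one period

# hypothetical usage: a parametrically excited second-order system with one input
A = lambda t: np.array([[0.0, 1.0], [1.0 + 0.5 * np.sin(t), 0.0]])
B = lambda t: np.array([[0.0], [1.0]])
gains = periodic_riccati_gain(A, B, np.eye(2), np.array([[1.0]]), 2.0 * np.pi, 2)
print(gains[0][0], gains[0][1])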
indeed , using and , we have {x = f^{-1}({\vartheta},0 ) } = l_f \pi(\varphi({\vartheta } ) ) = d\pi_{\varphi({\vartheta } ) } f(\varphi({\vartheta } ) ) = \rho({\vartheta } ) d\pi_{\varphi({\vartheta } ) } \varphi'({\vartheta } ) = \rho({\vartheta}).\ ] ] the last equality is due to the fact that , so that .thus we may write where .the derivation of the dynamics is essentially the same as in ( * ? ? ?* proposition 1.4 ) so we present their form without proof .the control system in coordinates has the form where and satisfy , , . letting , we have that because is bounded away from zero .consider the partial coordinate transformation \to [ { \mathbb r}]_{{{\tilde t}}} ] . thus , for all ] has the same property . referring to and using ( * ? ? ?* proposition 1.5 ) , we conclude that the closed orbit is exponentially stable for the closed - loop system and hence also for system with feedback . * acknowledgments * a. mohammadi and m. maggiore were supported by the natural sciences and engineering research council ( nserc ) of canada .a. mohammadi was partially supported by the university of toronto doctoral completion award ( dca ) .
|
this article investigates the problem of enforcing a virtual holonomic constraint ( vhc ) on an underactuated mechanical system while simultaneously stabilizing a closed orbit on the constraint manifold . this problem , which to date is open , arises when designing controllers to induce complex repetitive motions in robots . in this paper , we propose a solution which relies on the parameterization of the vhcby the output of a double integrator . while the original controls are used to enforce the vhc , the control input of the double - integrator is designed to asymptotically stabilize the closed orbit and make the state of the double - integrator converge to zero . the proposed design is applied to the problem of making a pvtol aircraft follow a circle on the vertical plane with a desired speed profile , while guaranteeing that the aircraft does not roll over . virtual holonomic constraints ( vhcs ) have been recognized to be key to solving complex motion control problems in robotics . there is an increasing body of evidence from bipedal robotics , snake robot locomotion , and repetitive motion planning that vhcsconstitute a new motion control paradigm , an alternative to the traditional reference tracking framework . the key difference with the standard motion control paradigm of robotics is that , in the vhcframework , the desired motion is parameterized by the states of the mechanical system , rather than by time . grizzle and collaborators ( see , e.g. , ) have shown that the enforcement of certain vhcson a biped robot leads , under certain conditions , to the orbital stabilization of a hybrid closed orbit corresponding to a repetitive walking gait . the orbit in question lies on the constraint manifold , and the mechanism stabilizing it is the dissipation of energy that occurs when a foot impacts the ground . in a mechanical system without impacts , this stabilization mechanism disappears , and the enforcement of the vhcalone is insufficient to achieve the ultimate objective of stabilizing a repetitive behavior . some researchers have addressed this problem by using the vhcexclusively for motion planning , i.e. , to find a desired closed orbit . once a suitable closed orbit is found , a time - varying controller is designed by linearizing the control system along the orbit . in this approach , the constraint manifold is not an invariant set for the closed - loop system , and thus the vhcis not enforced via feedback . to the best of our knowledge , for mechanical control systems with degree of underactuation one , the problem of simultaneous enforcement of a vhcand orbital stabilization of a closed orbit lying on the constraint manifold is still open . * contributions of the paper . * this paper presents the first solution of the simultaneous stabilization problem just described . leveraging recent results in , we consider vhcsthat induce lagrangian constrained dynamics . the closed orbits on the constraint manifold are level sets of a `` virtual '' energy function . we make the vhcdynamic by parametrizing it by the output of a double - integrator . we use the original controls of the mechanical system to enforce the dynamic vhc , while we use the double - integrator input to asymptotically stabilize the selected orbit . because the output of the double - integrator acts as a perturbation of the original constraint manifold , we also make sure that the state of the double - integrator converges to zero . 
to achieve these objectives , we develop a novel theoretical result giving necessary and sufficient conditions for the exponential stabilizability of a closed orbit for a control - affine system . the benefit of simultaneously enforcing a vhcand stabilizing a closed orbit is that it offers a superior control over the transient behavior of the system . this is illustrated in an example at the end of the paper , in which a pvtol vehicle performs a repetitive maneuver while guaranteeing that it does not undergo full revolutions along its longitudinal axis . * relevant literature . * previous work employs vhcsto stabilize desired repetitive behaviors for underactuated mechanical systems . canudas - de - wit and collaborators propose a technique to stabilize a desired closed orbit that relies on enforcing a virtual constraint and on dynamically changing its geometry so as to impose that the reduced dynamics on the constraint manifold match the dynamics of a nonlinear oscillator . in , canudas - de - wit , shiriaev , and collaborators employ vhcsto aid the selection of closed orbits corresponding to desired repetitive behaviors of underactuated mechanical systems . it is demonstrated that an unforced second - order system possessing an integral of motion describes the constrained motion . assuming that this unforced system has a closed orbit , a linear time - varying controller is designed that yields exponential stability of the closed orbit . with the exception of , the papers above do not guarantee the invariance of the vhcfor the closed loop system . the idea of event - triggered dynamic vhcshas appeared in the work by morris and grizzle in where the authors construct a hybrid invariant manifold for the closed - loop dynamics of biped robots by updating the vhcparameters after each impact with the ground . this approach is similar in spirit to the one presented in this paper . in section [ sec : stab : shir ] , we discuss the differences between the method presented in this article and the ones in . we also discuss the conceptual similarities between the method presented in this article and the one in . * organization . * this article is organized as follows . we review preliminaries in section [ sec : stab : prelim ] . the formal problem statement and our solution strategy are presented in section [ sec : stab : prob_stat ] . in section [ sec : stab : dynconstr ] we present dynamic vhcs . in section [ sec : stab : s_control ] we design the input of the double - integrator to stabilize the closed orbit relative to the constraint manifold , and in section [ sec : stab : sol ] we present the complete control law solving the vhc - based orbital stabilization problem . in section [ sec : stab : shir ] we discuss the differences between the method presented in this article and the ones in . finally , in section [ sec : motex ] we apply the ideas of this paper to a path following problem for the pvtol aircraft . * notation . * if and , then modulo is denoted by ] is denoted by ] . if , we denote , and . if is a smooth map between smooth manifolds , and , we denote by the derivative of at ( in coordinates , this is the jacobian matrix of evaluated at ) , and if has dimension 1 , then we may use the notation in place of . if are smooth manifolds and is a smooth function , then denotes the derivative of the map at . if is a vector field on and is , then is defined as . for a function , we denote by . if has full row - rank , we denote by the right - inverse of , . 
given a scalar function , we denote by its hessian matrix .
|
fabry - prot interferometers have advanced to very high precision wavelength sensors . in a fizeau type setup and combined with an absolute frequency reference ( typically an helium - neon laser ) they are used as precision optical wavelength meters . by means of an additional feedback , stabilization and tuning of the laser wavelength can be accomplished . however , due to the comparatively long signal processing times , the feedback bandwidth of these systems is low and limiting the use for laser frequency stabilization . in this paperwe present the optimization and calibration of an existing fizeau type quadrature interferometer that is used for laser frequency stabilization and allows a very wide tuning range of a stabilized laser .several orders of magnitude in precision over a wide wavelength range of about nm are gained in comparison to an uncalibrated setup .the achieved accuracy with an rms deviation of is state - of - the - art interferometric laser frequency stabilization at high feedback bandwidth . in order to characterize and compensate the main observed frequency deviations occurring in the wavelength range nm we use a fiber laser based frequency comb .we take these frequency deviations into account by a correction of the wavelength dependence of the refractive index up to second polynomial order .this allows for future calibration of the setup by employing only three interpolation points . for these measurements a doppler - free saturation spectroscopyis easily used as an absolute frequency reference .pd : photo detector , quad : quadrature , norm : normalization .see for details . ]prior to the description of the calibration of the interferometer , we only briefly review it s functional principle , as it is explained in detail elsewhere .fig.[iscan_device ] shows a sketch of the interferometer setup .a test laser beam enters the device through an optical fiber and is split into two by means of a wedged beam splitter .the beams pass an etalon under a slight relative angle .two pairs of photo diodes , one pair for each beam , are used to electronically construct two normalized periodic interferometer signals , and , from the beams transmitted and reflected by the etalon .the angle between the beams results in a relative phase shift between the interferometer signals . due to the etalon s low finesse ,their frequency dependence can be approximated by sine and cosine functions : hence , and can be used as quadrature signals describing a circular path in the x - y plane . here , the interferometer phase depends on the laser frequency , and the thickness and the refractive index of the etalon .the latter together constitute the etalon s wavelength dependent optical path length . here, is the speed of light in vacuum .we call the frequency interval corresponding to a phase change of the etalon s free spectral range ( fsr ) , which is approximately 2ghz for the described setup . in order to obtain an error signal for frequency stabilization of a laser ,the interferometer signals are compared with electronically generated set signals that depend on a single chosen set phase .the difference between and is used as a frequency error signal that varies with constant slope throughout the free spectral range .this allows to use the etalon for frequency stabilization of the laser at any point within the fsr .further , tuning of a stabilized laser can be accomplished over a large wavelength range including many fsr by alteration of the generated . 
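the signal chain just described is simple to emulate numerically. in the sketch below the interferometer phase is written as phi = 4 pi n d nu / c, which corresponds to a free spectral range c/(2 n d); the exact numerical factor depends on the geometry of the etalon and should be read as an assumed convention. likewise, the error signal with constant slope over one fsr is obtained here by reconstructing the phase with a four - quadrant arctangent and wrapping the difference to (-pi, pi], which may differ from the analogue construction used in the actual instrument.

import numpy as np

C = 299792458.0                                  # speed of light in vacuum, m/s

def interferometer_phase(nu, n_of_nu, d):
    # assumed convention: phi = 4 pi n(nu) d nu / c, i.e. fsr = c / (2 n d)
    return 4.0 * np.pi * n_of_nu(nu) * d * nu / C

def error_signal(s1, s2, phi_set):
    # low-finesse quadrature signals s1 ~ sin(phi), s2 ~ cos(phi); the wrapped
    # phase difference has a constant slope across one free spectral range
    phi_meas = np.arctan2(s1, s2)
    return (phi_meas - phi_set + np.pi) % (2.0 * np.pi) - np.pi

# example: a bk7-like etalon with n ~ 1.5 and d = 5 cm gives an fsr of ~2 ghz
n_of_nu = lambda nu: 1.5
d = 0.05
nu = np.linspace(384.0e12, 384.0e12 + 4.0e9, 2001)
phi = interferometer_phase(nu, n_of_nu, d)
err = error_signal(np.sin(phi), np.cos(phi), phi_set=phi[0] + 1.0)
print((nu[1] - nu[0]) * 2.0 * np.pi / (phi[1] - phi[0]))   # recovered fsr in hz, ~2e9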
as the interferometer signals are periodic functions of the optical frequency , they are ambiguous . hence , for stabilization to a specific target frequencyone has to predetermine the laser s frequency to an accuracy of better than one half of an fsr .this can be achieved by means of a standard wavemeter .further , is subject to a drift of .we use a reference laser of precisely known frequency to keep track of and correct for small optical path length changes , that result in an offset of the measured phase at the given frequency . in practice , as typical interferometer drift rates are on the order of / min , this measurement has to be carried out about every minutes , which can be done automatically within approximately one to two seconds . with the help of these additional measures , each frequency is unambiguously assigned a phase , and continuous scans over an almost arbitrary amount of fsrs can be carried out .to reach an absolute frequency precision on the mhz level over a wavelength range of several tens of nanometers , three calibration steps are performed .firstly , a systematic deviation due to the approximation of the exact airy function describing the interferometer signal as a sinusoidal function of the laser frequency in eq.([phi_dependence ] ) has to be considered .this is in practice done by calibrating the set phase for a given target wavelength throughout one fsr .the calibration is achieved by comparing the frequency response of the interferometer with that of an etalon made of an optical fiber . with a fiber length of 2.5 mthe fsr of 60mhz is much smaller than the fsr of the interferometer .therefore , the fiber etalon s fringe pattern varies much faster when the laser frequency is tuned as compared to the interferometer signals and .this allows to use the fringes of the fiber etalon as equidistant markers for frequency intervals . in this waythe deviations from the linear frequency dependence of the interferometer phase can be measured .the maximum nonlinearity found for the quadrature interferometer is on the order of 2% of one fsr , corresponding to a maximum of deviation in target frequencies within one etalon fsr of 2ghz . in order to compensate this systematic deviation in practice ,the recorded data are electronically represented in a look - up - table ( lut ) .the lut is used to modify the set phase for frequency tuning of the stabilized laser . with a set phase corrected by use of the lut we find a remaining nonlinearity of 0.05% using the same method .this corresponds to an accuracy of better than within one fsr .it is important to note that this procedure is easily repeated and an exact lut can be created when the interferometer is used in a different wavelength range at a different refractive index of the etalon .however we found that one lut remained valid when changing the frequency of the laser by several hundred fsr .thz ( dfb diode laser , tem messtechnik ) , saturation spectroscopy ( cosy , tem messtechnik ) , wavemeter : high finesse ws/7 , fi : faraday isolator , s : shutter , bs : beam splitter , pd : photodiode , of : polarization maintaining optical fiber , fp : fiber port . 
] in the following we describe a second calibration step that enables precise tuning over nanometers throughout a wavelength range as wide as nm .over such a broad wavelength range the dispersion of the etalon s medium is significant .the index of refraction can be described by the sellmeier equation with the specific material parameters for the bk7 etalon used in this case . however , in order to be able to calculate based on eq.[phi_dependence ] with a precision corresponding to 1mhz , has to be determined with a relative error on the order of . in this calibration stepwe obtain a precise measurement of the optical path with the measurement setup shown in fig.[comb_measurement_setup ] . for an absolute frequency reference we use a reference laser at the d2 optical transition line of at .this laser is stabilized to the cross - over signal between the transitions by means of a saturation spectroscopy . for a precise determination of the frequency of the application laser controlled by the interferometer setupwe employ a frequency comb .the frequency is inferred by recording the beat note of the application laser with the comb , in combination with wavelength measurement carried out by a high resolution wavemeter . in order to collect data over a wide wavelength rangethe following steps have to be repeatedly performed : * the application laser is frequency stabilized to the interferometer with a given phase at a given wavelength . *the fiber etalon is used to create an lut as explained in the first step .* the reference laser is used to perform a measurement of the interferometer phase offset . * a precise measurement of the application laser frequencyis carried out by means of the frequency comb for various around the given wavelength .the results of this procedure for different wavelengths are summarized in figs .[ wavelength_scan](a ) and [ zoom - in ] . to obtain the remaining frequency deviation after applying the previous calibration step we process the data in the following way : we first use eq.[phi_dependence ] to convert the measured frequencies to interferometer phases and compare these to the set phases . this resulting wavelength dependent phase deviation can be expressed as a frequency deviation by scaling with the etalon s fsr .the resulting data exhibit systematic deviations in the frequency control of the application laser on the order of several ten mhz over a wavelength range of 45 nm .this corresponds to a relative error on the order of . over the whole accessible wavelength range .circles and squares are separate datasets taken on two consecutive days .one second order polynomial fit to both data sets is shown .the measurements for the frequency ranges 1 and 2 are shown enlarged in fig.[zoom - in ] .( b ) residual of a fit to both datasets shown in ( a ) .a vertical line marks the reference laser frequency in both graphs . ]thz within one fsr of the etalon are shown .circles and squares are taken using different offset phase measurements but the same lut .for the triangles a new lut was generated and the offset phase was newly measured .a linear drift of / min obtained from repetitive measurements at the reference wavelength has been subtracted from all datasets for clearer visibility of the frequency - dependent error .( b ) : frequency range 1 .two datasets close to the reference laser frequency ( vertical line ) are shown . 
to record this data , the reference laser ( dfb diode laser )was controlled by the quadrature interferometer and interchanged with the application laser ( toptica dlpro ) . ] in order to further characterize the performance of the device we show in fig.[zoom - in ] the same data resolved for the two wavelength ranges that are indicated in fig.[wavelength_scan](a ) : fig.[zoom - in](a ) shows the frequency deviation occurring within one fsr .this gives a measure on how well we are able to calibrate the relation between interferometer phase and frequency by means of the lut . for a single scanwe find small rms deviations .however , for repeated calibrations and subsequent scans over one fsr the precision of the device is limited by the reproducibility to an rms value of 0.9mhz .this can be attributed to frequency offsets consistent with the linewidth of the reference laser .fig.[zoom - in](b ) demonstrates the capability of precise frequency tuning over a range as broad as of fsrs . for this measurementa distributed feedback diode is used as an application laser , which was continuously stabilized throughout the scan . again using the dataset shown as squares for calibration, we find an rms deviation of for the dataset represented with circles .thus , the calibration step described above proofs valid for the complete wavelength range over approximately 1 nm . to calculate the interferometer phase from the measured frequency in the above mentioned analysis we use the refractive index as described by the sellmeier equation with coefficients specified for the bk7 in use , together with an estimated etalon length .the optical path length has been estimated such that the observed deviations are minimal throughout the full wavelength range shown in fig.[wavelength_scan](a ) . as a constraintthe optical path length was chosen such that the phase offset measured at the wavelength of the reference laser was reproduced . with this the second calibration step , in which is determined to a very high precision ,is achieved .the characterization obtained with the data shown in fig.[wavelength_scan](a ) can be utilized for a third and last calibration step . for practical purposes we use a best fit to the data by a second order polynomial that serves as a calibration curve andis shown in fig.[wavelength_scan](a ) .the residuals of this fit are shown in fig.[wavelength_scan](b ) . based on the obtained curvewe can implement the calibration of the interferometer setup by applying a wavelength dependent correction for the refractive index up to second order .this effectively changes the etalon s optical path length used in our calculation of the interferometer phase for a target frequency . to further demonstrate the effectiveness of this calibration we use a different second order polynomial fit function that has been obtained from the datasetrepresented as circles in fig.[wavelength_scan](a ) . in this mannerwe find a maximum frequency deviation of ( rms ) in the dataset represented as squares for the fully calibrated interferometer .comparison of two datasets obtained on consecutive days ( circles , squares in fig.[wavelength_scan ] ) indicate a small systematic drift that is not further investigated within this work .it can be seen in the data presented in fig.[wavelength_scan](a ) , that the frequency deviation is found to be dominantly of second order .this could occur due to a second order error in the wavelength dependence of the refractive index described with the specified sellmeier parameters . 
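The dispersion model and the calibration curve discussed above can be pieced together in a short sketch. The Sellmeier coefficients below are commonly quoted catalog values for BK7 rather than numbers from this work, and the direct subtraction of the fitted deviation is a simplification: in the actual setup the second-order correction is folded into the wavelength-dependent refractive index, and hence into the optical path length used for the phase-to-frequency conversion.

```python
import numpy as np

# Commonly quoted Sellmeier coefficients for BK7 (wavelength in micrometres);
# catalog values, assumed here only for illustration.
B_COEF = (1.03961212, 0.231792344, 1.01046945)
C_COEF = (0.00600069867, 0.0200179144, 103.560653)

def n_bk7(lambda_um):
    """Refractive index from the Sellmeier equation
    n^2 = 1 + sum_i B_i * lam^2 / (lam^2 - C_i)."""
    lam2 = np.asarray(lambda_um, dtype=float) ** 2
    return np.sqrt(1.0 + sum(b * lam2 / (lam2 - c) for b, c in zip(B_COEF, C_COEF)))

def phase_deviation_to_frequency(delta_phi, fsr=2.0e9):
    """Express a phase deviation (rad) as a frequency deviation by scaling
    with the etalon's free spectral range, as done in the analysis above."""
    return delta_phi / (2.0 * np.pi) * fsr

def fit_calibration_curve(wavelengths_nm, freq_dev_hz):
    """Second-order polynomial fit of the measured frequency deviation versus
    wavelength; this plays the role of the calibration curve."""
    return np.poly1d(np.polyfit(wavelengths_nm, freq_dev_hz, deg=2))

def calibrated_frequency(raw_frequency_hz, wavelength_nm, curve):
    """Remove the systematic deviation predicted by the calibration curve
    (illustrative; the device instead corrects the refractive-index model)."""
    return raw_frequency_hz - curve(wavelength_nm)

# Toy example: deviations of a few tens of MHz over roughly 45 nm, as in the text.
wl = np.array([750.0, 762.0, 774.0, 786.0, 795.0])
dev = np.array([41e6, 17e6, 4e6, 1e6, 12e6])
curve = fit_calibration_curve(wl, dev)
print(n_bk7(0.780))            # about 1.511 near 780 nm
print(curve(768.0))            # interpolated systematic deviation at 768 nm
```

Because the fit is quadratic, three well-spaced reference wavelengths are in principle enough to pin it down, which is what makes the three-point recalibration mentioned below practical.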
in the second calibration step ,we have chosen an optical path length which minimizes the overall frequency deviation in the whole wavelength range and leads to zero frequency deviation at the reference laser frequency . with this choice of ,we compensate the zeroth order error in the description of the refractive index and obtain corrections in higher orders to the wavelength dependence of the refractive index .we find relative errors of in the first order and in the second order .these deviations are consistent with a relative uncertainty of the refractive index described by the sellmeier parameters of .as we have shown that a correction of the refractive index up to second order serves very well to calibrate the setup , it will from now on only be necessary to measure the deviation at three roughly equally spaced frequencies within the wavelength range . the calibration is then obtained by using a second order polynomial fit to these data points in the way as described above . in the examined wavelength interval , doppler - free spectroscopies of optical transitions in rubidium and potassiumare readily available and can serve as absolute frequency references .this eliminates the need for a frequency comb to carry out the last calibration step .in addition , employing data at just three wavelengths allows for efficient recalibration in order to compensate possible slow systematic drifts on the time scale of days .in this paper we describe a calibration procedure for an interferometric laser frequency stabilization in the wavelength range nm .several calibration steps are taken resulting in an exceptional precision with an rms deviation of .the maximum observed deviation of 5.7mhz corresponds to a relative accuracy of .while the accuracy within one fsr is increased by approximately one order of magnitude by the first calibration step , a calibration over a wide frequency range allows the accurate determination of the interferometer s optical path length . only this increases the precision of the device from hundreds of mhz to the mhz level over the wide frequency range . the calibration method presented hereis expected to be applicable throughout the nm wavelength range of the interferometer setup , which is limited by the choice of the single mode optical fibers used in the system .further , continuous high bandwidth laser frequency stabilization over a wide tuning range is supported .this makes the presented laser frequency stabilization an optimal choice for the spectroscopy of molecules over a large wavelength range .the accuracy of the device is enhanced , if frequency stabilization in a comparatively smaller wavelength range of approximately 1 nm is required , as shown in fig.[zoom - in](b ) .if in this reference range a reference laser is available , the third calibration step can be omitted .if for the respective wavelength range no laser spectroscopy is available as a reference , a beat note with a known frequency comb mode can be used as a reference .this allows then to continuously frequency stabilize the application laser while tuning over a range spanning hundreds of etalon fsrs of 2ghz . in comparison ,a direct beat note of the application laser with the comb mode can be applied continuously only throughout a fraction of the narrow mode spacing of the comb .we thank th . udem and t. w. hnsch at the max - planck - institute of quantum optics and ronald holzwarth at menlo systems gmbh for discussions and for making a frequency comb available .t. j. scholl , s. j. rehse , r. a. 
holt , and s. d. rosner , `` broadband precision wavelength meter based on a stepping fabry - pérot interferometer , '' review of scientific instruments * 75 * , 3318 - 3326 ( 2004 ) . m. mack , f. karlewski , h. hattermann , s. höckh , f. jessen , d. cano , and j. fortágh , `` measurement of absolute transition frequencies of to and rydberg states by means of electromagnetically induced transparency , '' phys . rev . a * 83 * , 052515 ( 2011 ) . g. p. barwood , p. gill , and w. r. c. rowley , `` frequency measurements on optically narrowed rb - stabilised laser diodes at 780 nm and 795 nm , '' applied physics b : lasers and optics * 53 * , 142 - 147 ( 1991 ) .
|
we report on a calibration procedure that enhances the precision of an interferometer - based frequency stabilization by several orders of magnitude . for this purpose , the frequency deviations of the stabilization are measured precisely by means of a frequency comb . this allows us to implement several calibration steps that compensate for different systematic errors . the resulting frequency deviation is shown to be less than ( rms ) over the whole wavelength interval nm . wide tuning of a stabilized laser at this exceptional precision is demonstrated .
|
object detection is one of the long - standing and important problems in computer vision .motivated by the recent success of deep learning on visual object recognition tasks , significant improvements have been made in the object detection problem .most notably , proposed the `` regions with convolutional neural network '' ( r - cnn ) framework for object detection and demonstrated state - of - the - art performance on standard detection benchmarks ( e.g. , pascal voc , ilsvrc ) with a large margin over the previous arts , which are mostly based on deformable part model ( dpm ) .there are two major keys to the success of the r - cnn .first , features matter . in the r - cnn , the low - level image features ( e.g. , hog ) are replaced with the cnn features , which are arguably more discriminative representations .one drawback of cnn features , however , is that they are expensive to compute .the r - cnn overcomes this issue by proposing a few hundreds or thousands candidate bounding boxes via the selective search algorithm to effectively reduce the computational cost required to evaluate the detection scores at all regions of an image . despite the success of r - cnn ,it has been pointed out through an error analysis that inaccurate localization causes the most egregious errors in the r - cnn framework . for example , if there is no bounding box in the close proximity of ground truth among those proposed by selective search , no matter what we have for the features or classifiers , there is no way to detect the correct bounding box of the object . indeed , there are many applications that require accurate localization of an object bounding box , such as detecting moving objects ( e.g. , car , pedestrian , bicycles ) for autonomous driving , detecting objects for robotic grasping or manipulation in robotic surgery or manufacturing , and many others . in this work ,we address the localization difficulty of the r - cnn detection framework with two ideas .first , we develop a fine - grained search algorithm to expand an initial set of bounding boxes by proposing new bounding boxes with scores that are likely to be higher than the initial ones . by doing so ,even if the initial region proposals were poor , the algorithm can find a region that is getting closer to the ground truth after a few iterations .we build our algorithm in the bayesian optimization framework , where evaluation of the complex detection function is replaced with queries from a probabilistic distribution of the function values defined with a computationally efficient surrogate model .second , we train a cnn classifier with a structured svm objective that aims at classification and localization simultaneously .we define the structured svm objective function with a hinge loss that balances between classification ( i.e. , determines whether an object exists ) and localization ( i.e. , determines how much it overlaps with the ground truth ) to be used as the last layer of the cnn . in experiments, we evaluated our methods on pascal voc 2007 and 2012 detection tasks and compared to other competing methods .we demonstrated significantly improved performance over the state - of - the - art at different levels of intersection over union ( iou ) criteria .in particular , our proposed method outperforms the previous arts with a large margin at higher iou criteria ( e.g. 
, iou = ) , which highlights the good localization ability of our method .overall , the contributions of this paper are as follows : 1 ) we develop a bayesian optimization framework that can find more accurate object bounding boxes without significantly increasing the number of bounding box proposals , 2 ) we develop a structured svm framework to train a cnn classifier for accurate localization , 3 ) the aforementioned methods are complementary and can be easily adopted to various cnn models , and finally , 4 ) we demonstrate significant improvement in detection performance over the r - cnn on both pascal voc 2007 and 2012 benchmarks .the dpm and its variants have been the dominating methods for object detection tasks for years .these methods use image descriptors such as hog , sift , and lbp as features and densely sweep through the entire image to find a maximum response region . with the notable success of cnn on large scale object recognition , several detection methods based on cnns have been proposed . following the traditional sliding window method for region proposal, proposed to search exhaustively over an entire image using cnns , but made it efficient by conducting a convolution on the entire image at once at multiple scales .apart from the sliding window method , used cnns to regress the bounding boxes of objects in the image and used another cnn classifier to verify whether the predicted boxes contain objects . proposed the r - cnn following the `` recognition using regions '' paradigm , which also inspired several previous state - of - the - art methods . in this framework ,a few hundreds or thousands of regions are proposed for an image via the selective search algorithm and the cnn is finetuned with these region proposals .our method is built upon the r - cnn framework using the cnn proposed in , but with 1 ) a novel method to propose extra bounding boxes in the case of poor localization , and 2 ) a classifier with improved localization sensitivity .the structured svm objective function in our work is inspired by , where they trained a kernelized structured svm on low - level visual features ( i.e. , hog ) to predict the object location . integrated a structured objective with the deep neural network for object detection , but they adopted the branch - and - bound strategy for training as in . in our work , we formulate the linear structured objective upon high - level features learned by deep cnn architectures , but our negative mining step is very efficient thanks to the region - based detection framework .we also present a gradient - based optimization method for training our architecture .there have been several other related work for accurate object localization . incorporated the geometric consistency of bounding boxes with bottom - up segmentation as auxiliary features into the dpm . used the structured svm with color and edge features to refine the bounding box coordinates in dpm framework . used the height prior of an object . these auxiliary features to aid object localization can be injected into our framework without modifications .localization refinement can be also taken as a cnn regression problem . extracted the middle layer features and linearly regressed the initially proposed regions to better locations . refined bounding boxes from a grid layout to flexible locations and sizes using the higher layers of the deep cnn architecture . 
conducted classification and regression in a single architecture .our method is different in that 1 ) it uses the information from multiple existing regions instead of a single bounding box for predicting a new candidate region , and 2 ) it focuses only on maximizing the localization ability of the cnn classifier instead of doing any regression from one bounding box to another .let denote a detection score of an image at the region with the box coordinates .the object detection problem deals with finding the local maximum of with respect to of an unseen image .as it requires an evaluation of the score function at many possible regions , it is crucial to have an efficient algorithm to search for the candidate bounding boxes .a sliding window method has been used as a dominant search algorithm , which exhaustively searches over an entire image with fixed - sized windows at different scales to find a bounding box with a maximum score . however , evaluating the score function at all regions determined by the sliding window approach is prohibitively expensive when the cnn features are used as the image region descriptor .the problem becomes more severe when flexible aspect ratios are needed for handling object shape variations .alternatively , the `` recognition using regions '' method has been proposed , which requires to evaluate significantly fewer number of regions ( e.g. , few hundreds or thousands ) with different scales and aspect ratios , and it can use the state - of - the - art image features with high computational complexity , such as the cnn features .one potential issue of object detection pipelines based on region proposal is that the correct detection will not happen when there is no region proposed in the proximity of the ground truth bounding box . to resolve this issue, one can propose more bounding boxes to cover the entire image more densely , but this would significantly increase the computational cost . in this section , we develop a fine - grained search ( fgs ) algorithm based on bayesian optimization that sequentially proposes a new bounding box with a higher _ expected _ detection score than previously proposed bounding boxes without significantly increasing the number of region proposals .we first present the general bayesian optimization framework ( section [ sec - general - bayesian - framework ] ) and describe the fgs algorithm using gaussian process as the prior for the score function ( section [ sec - gp - regression ] ) .we then present the local fgs algorithm that searches over multiple local regions instead of a single global region ( section [ sec - l - fgs ] ) , and discuss the hyperparameter learning of our fgs algorithm ( section [ sec - gp - param ] ) .let be the set of solutions ( e.g. , bounding boxes ) . in the bayesian optimization framework , is assumed to be drawn from a probabilistic model : where and .here , the goal is to find a new solution that maximizes the chance of improving the detection score , where the chance is often defined as an acquisition function .then , the algorithm proceeds by recursively sampling a new solution from , and update the set to draw a new sample solution with an updated observation .bayesian optimization is efficient in terms of the number of function evaluation , and is particularly effective when is computationally expensive . 
when is much less expensive than to evaluate , and the computation for requires only a few function evaluations , we can efficiently find a solution that is getting closer to the ground truth .a gaussian process ( gp ) defines a prior distribution over the function . due to this property , a distribution over is fully specified by a mean function and a positive definite covariance kernel , i.e. , .specifically , for a finite set , the random vector {1\leq j \leq { n}} ] . a random gaussian noise with precision is usually added to each independently in practice .here , we used the constant mean function and the squared exponential covariance kernel with automatic relevance determination ( seard ) as follows : where is a diagonal matrix whose diagonal entries are . these form a -dimensional gp hyperparameter to be learned from the training data . transforms the bounding box coordinates into a new form : ,\ ] ] where and denote the center coordinates , denotes the width , and denotes the height of a bounding box .we introduce a latent variable to make the covariance kernel scale - invariant .are scaled down by a certain factor , we can keep invariant by properly setting .] we determine in a data - driven manner by maximizing the marginal likelihood of , or the gp regression ( gpr ) problem tries to find a new argument given observations that maximizes the value of acquisition function , which , in our case , is defined with the expected improvement ( ei ) as : where .the posterior of given follows gaussian distribution : with the following mean function and covariance kernels : {1\leq j\leq { n}}\right ) , \\\sigma^{2}(y_{{n}+1}|\mathcal{d}_{{n } } ) & = k_{n+1}-\mathbf{k}_{{n}+1}^{\top}\mathbf{k}_{{n}}^{-1}\mathbf{k}_{{n}+1 } , \\k_{n+1 } & = \beta^{-1}+ k(y_{{n}+1},y_{{n}+1}),\\ \mathbf{k}_{{n}+1 } & = \big[k(y_{{n}+1},y_{j})\big]_{1\leq j\leq { n}},\\ \mathbf{k}_{{n } } & = \big[k(y_{i},y_{j})\big]_{1\leq i , j\leq { n}}+\beta^{-1}\mathbf{i}\;.\end{aligned}\ ] ] we refer for detailed derivation . by plugging in , where . is the cumulative distribution function of standard normal distribution . in this section ,we extend the gpr - based algorithm for global maximum search to local fine - grained search ( fgs ) .the local fgs steps are described in figure [ fig : structured - rcnn ] .we perform the fgs by pruning out easy negatives with low classification scores from the set of regions proposed by the selective search algorithm and sorting out a few bounding boxes with the maximum scores in local regions .then , for each local optimum ( red boxes in figure [ fig : structured - rcnn ] ) , we propose a new candidate bounding box ( green boxes in figure [ fig : structured - rcnn ] ) .specifically , we initialize a set of local observations for from the set given by the selective search algorithm , whose localness is measured by an iou between and region proposals ( yellow boxes is not a rectangular region around local optimum since we use iou to determine it . ] in figure [ fig : structured - rcnn ] ) . is used to fit a gp model , and the procedure is iterated for each local optimum at different levels of iou until there is no more acceptable proposal .we provide a pseudocode of local fgs in algorithm [ alg : fgs ] , where the parameters are set as : , .in addition to the capability of handling multiple objects in a single image , better computational efficiency is another factor making local fgs preferable to global search . 
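A minimal sketch of the proposal step described above is given below: a GP posterior with a squared-exponential ARD kernel is fitted to the scored boxes of one local observation set, and the candidate with the largest expected improvement over the current best score is returned. The box parameterization, the constant-mean treatment, and the hyperparameter handling are simplified stand-ins for the quantities used in the paper, so this should be read as an assumption-laden illustration rather than the exact algorithm.

```python
import numpy as np
from scipy.stats import norm

def seard_kernel(A, B, log_ell, log_sf):
    """Squared-exponential kernel with ARD length-scales, one per box coordinate.
    Rows of A and B are boxes in whatever (transformed) parameterization is used."""
    ell = np.exp(log_ell)                         # (d,)
    diff = (A[:, None, :] - B[None, :, :]) / ell  # pairwise scaled differences
    return np.exp(2.0 * log_sf) * np.exp(-0.5 * np.sum(diff ** 2, axis=-1))

def gp_posterior(Y, f, Ycand, log_ell, log_sf, beta=100.0, mean=0.0):
    """Posterior mean and variance of the detection score at candidate boxes
    Ycand, given observed boxes Y with scores f and noise precision beta."""
    K = seard_kernel(Y, Y, log_ell, log_sf) + np.eye(len(Y)) / beta
    Ks = seard_kernel(Ycand, Y, log_ell, log_sf)
    kss = np.exp(2.0 * log_sf) + 1.0 / beta
    sol = np.linalg.solve(K, f - mean)
    mu = mean + Ks @ sol
    var = kss - np.sum(Ks * np.linalg.solve(K, Ks.T).T, axis=1)
    return mu, np.maximum(var, 1e-12)

def propose_box_ei(Y, f, Ycand, log_ell, log_sf):
    """Pick the candidate with the largest expected improvement over the best
    score observed so far in this local region."""
    mu, var = gp_posterior(Y, f, Ycand, log_ell, log_sf)
    sd = np.sqrt(var)
    best = f.max()
    z = (mu - best) / sd
    ei = (mu - best) * norm.cdf(z) + sd * norm.pdf(z)
    return Ycand[np.argmax(ei)]
```

The returned box is then scored by the CNN, appended to the observation set, and the step is repeated, which is the loop behind the per-iteration results reported later.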
as a kernel method , the computational complexity of gpr increases cubically to the number of observations . by restricting the observation set to the nearby region of a local optimum ,the gp fitting and proposal process can be performed efficiently . in practice, fgs introduces only computational overhead compared to the original r - cnn .please see the appendices , which are also available in our technical report , for more details on its practical efficiency ( appendix [ sup : fgs - efficiency ] ) . as we locally perform the fgs, the gp hyperparameter also needs to be trained with observations in the vicinity of ground truth objects . to this end , for an annotated object in the training set , we form a set of observations with the structured labels and corresponding classification scores of the bounding boxes that are close to the ground truth bounding box . such an observation setis composed of the bounding boxes ( given by selective search and random selection ) whose iou with the ground truth exceed a certain threshold .finally , we fit a gp model by maximizing the joint likelihood of such observations : where is the index set for positive training samples ( i.e. , with ground truth object annotations ) , and is a ground truth annotation of an image . for handling multiple objects in training . ]we set , where consists of the bounding boxes given by selective search on , is a random subset of , and is the overlap threshold .the optimal solution can be obtained via l - bfgs .our implementation relies on the gpml toolbox .this section describes a training algorithm of r - cnn for object detection using structured loss .we first revisit the object detection framework with structured output regression introduced by in section [ sec - str_detect ] , and extend it to r - cnn pipeline that allows training the network with structured hinge loss in section [ sec - finetuning ] .let be the set of training images and be the set of corresponding structured labels .the structured label is composed of 5 elements ; when , and denote the top - left and bottom - right coordinates of the object , respectively , and when , it implies that there is no object in , and there is no meaning on coordinate elements .note that the definition of is extended from section [ sec - gp ] to indicate the presence of an object as well as its location when exists . when there are multiple objects in an image , we crop an image into multiple positive ( ) images , each of which contains a single object , and a negative image ( ) that does nt contain any object .let represent the feature extracted from an image for a label with . in our case , denotes the top - layer representations of the cnn ( excluding the classification layer ) at location specified by , at location given by to a fixed size ( e.g. , 224 ) to compute the cnn features . ] which are fed into the classification layer .the detection problem is to find a structured label that has the highest score : where note that includes a trick for setting the detection threshold to .the model parameter is trained to minimize the structured loss between the predicted label and the ground - truth label : for the detection problem , the structured loss is defined in terms of intersection over union ( iou ) of two bounding boxes defined by and as follows: where . in general , the optimization problem is difficult to solve due to the complicated form of structured loss . 
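The localization-aware loss sketched above can be written down compactly; the snippet below is one plausible reading of it, with labels of the form (object present, x1, y1, x2, y2), a loss of 1 - IoU for matched objects, 1 for an existence mismatch, and 0 when both labels are background. The exact constants used in the paper are not reproduced here.

```python
def iou(box_a, box_b):
    """Intersection over union of two boxes given as (x1, y1, x2, y2)."""
    ax1, ay1, ax2, ay2 = box_a
    bx1, by1, bx2, by2 = box_b
    iw = max(0.0, min(ax2, bx2) - max(ax1, bx1))
    ih = max(0.0, min(ay2, by2) - max(ay1, by1))
    inter = iw * ih
    union = (ax2 - ax1) * (ay2 - ay1) + (bx2 - bx1) * (by2 - by1) - inter
    return inter / union if union > 0.0 else 0.0

def structured_loss(y_pred, y_true):
    """Localization-aware loss: zero for a perfect overlap with a true object,
    one when one label says 'object' and the other says 'background', and
    1 - IoU in between. Labels are (exists, x1, y1, x2, y2)."""
    pred_obj, true_obj = y_pred[0], y_true[0]
    if true_obj == 0 and pred_obj == 0:
        return 0.0
    if true_obj != pred_obj:
        return 1.0
    return 1.0 - iou(y_pred[1:], y_true[1:])
```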
instead , we formulate the surrogate objective in structured svm framework as follows : using and , the constraint is written as follows : where , and denote the set of indices for positive and negative training examples , respectively , and . to learn the r - cnn detector with structured loss , we propose to make several modifications to the original structured svm formulation .first , we restrict the output space of example to regions proposed via selective search .this results in a change in notation for every in and of example to .second , the constraints ( [ eq : stsvm_const_reform_pos_pos ] , [ eq : stsvm_const_reform_pos_neg ] , [ eq : stsvm_const_reform_neg_pos ] ) should be transformed into hinge loss to backpropagate the gradient to lower layers of cnn . specifically , the objective function is reformulated as follows : where , are given as : note that we use different values for positive and negative examples . in experiments , and .structured svm objective may cause a slow convergence in parameter estimation since it utilizes at most one instance among a large number of instances in the ( restricted ) output space , whose size varies from few hundreds to thousands . to overcome this issue , we alternately perform a gradient - based parameter estimation and hard negative data mining that effectively adapts the number of training examples to be evaluated for updating the parameters ( appendix [ sup : hard - mining ] ). for model parameter estimation , we use l - bfgs to first learn parameters of the classification layer only .we found that this already resulted in a good detection performance .then , we optionally use stochastic gradient descent to finetune the whole cnn classifiers ( appendix [ sup : finetuning ] ) .we applied our proposed methods to standard visual object detection tasks on pascal voc 2007 and 2012 . in all experiments ,we consider r - cnns as baseline models .following , we used the cnn models pretrained on imagenet database with object categories , and finetuned the whole network using the target database by replacing the existing softmax classification layer to a new one with a different number of classes ( e.g. , classes for voc 2007 and 2012 ) .we provide the learning details in appendix [ sup : learning - details ] .our implementation is based on the caffe toolbox . setting the r - cnn as a baseline method, we compared the detection performance of our proposed methods , such as r - cnn with fgs ( r - cnn + fgs ) , r - cnn trained with structured svm objective ( r - cnn + structobj ) , and their combination ( r - cnn + structobj + fgs ) . since our goal is to localize the bounding boxes more accurately at the object regions , we also consider the iou of for an evaluation criterion , which only counts the detection results as correct when the overlap between the predicted bounding box and the ground truth is greater than .this is more challenging than common practices ( e.g. 
, iou ) , but will be a good indicator for a better localization of an object bounding box if successful .c < > c c < > c c < > c c < > c c < > c & & & & & & & & & + & & & & + & & & & & & & & & + & & & & + & & & & & & & & & + & & & & + & & & & & & & & & + & & & & + before reporting the performance of the proposed methods in r - cnn framework , we demonstrate the efficacy of fgs algorithm using an oracle detector .we design a hypothetical oracle detector whose score function is defined as , where is a ground truth annotation for an image .the score function is ideal in the sense that it outputs high scores for bounding boxes with high overlap with the ground truth and vice versa , overall achieving 100% map .we summarize the results in figure [ fig : oracle - gp ] .we report the performance on the voc 2007 test set at different levels of iou criteria ( ) for the baseline selective search ( ss ; `` fast mode '' in ) , selective search with objectness ( ss + objectness ) , selective search with extended super - pixel similarity measurements ( ss extended ) , `` quality mode '' of selective search ( ss quality ) , local random search , and the proposed fgs method with the baseline selective search . for low values of iou ( ) ,all methods using the oracle detectors performed almost perfectly due to the ideal score function .however , we found that the detection performance with different region proposal schemes other than our proposed fgs algorithm start to break down at high values of iou .for example , the performance of ss , ss + objectness , ss extended , and local random search methods , which used around bounding boxes per image in average , significantly dropped at iou .ss quality method kept pace with the fgs method until iou of , but again , the performance started to drop at iou .on the other hand , the performance of fgs dropped in map at iou of by only introducing approximately new bounding boxes per image .given that ss quality requires region proposals per image , our proposed fgs method is much more computationally efficient ( less bounding boxes ) while localizing the bounding boxes much more accurately .this provides an insight that , if the detector is accurate , our bayesian optimization framework would limit the number of bounding boxes to a manageable number ( e.g. , few thousands per image on average ) to achieve almost perfect detection results. we also report similar experimental analysis for the real detector trained with the proposed structured objective in appendix [ sup : map - diff - pr ] . in this section ,we demonstrate the performance of our proposed methods on pascal voc 2007 detection task ( comp4 ) , a standard benchmark for object detection problem .similarly to the training pipeline of r - cnn , we finetuned the cnn models ( with softmax classification layer ) pretrained on imagenet database using images from both train and validation sets of voc 2007 and further trained the network with linear svm ( baseline ) or the proposed structured svm objective .we evaluated on the test set using the proposed fgs algorithm . 
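The oracle analysis described at the beginning of this subsection boils down to scoring each box by its best overlap with the ground truth and asking what fraction of objects any proposal set covers at a given IoU threshold. A small sketch of that evaluation, reusing the iou() helper from the earlier snippet, is given below; the thresholds and data structures are illustrative and not tied to a particular toolbox.

```python
import numpy as np

def oracle_score(box, gt_boxes):
    """Idealized detector used in the analysis above: the score of a box is
    its best IoU with any ground-truth object, so ranking by score is perfect."""
    return max(iou(box, gt) for gt in gt_boxes)   # iou() from the earlier sketch

def recall_at_iou(proposals, gt_boxes, thresholds=(0.5, 0.6, 0.7, 0.8, 0.9)):
    """Fraction of ground-truth objects covered by at least one proposal at
    each IoU threshold; no proposal-based detector can exceed this ceiling."""
    best = [max(iou(p, gt) for p in proposals) for gt in gt_boxes]
    return {t: float(np.mean([b >= t for b in best])) for t in thresholds}
```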
for post - processing , we performed nms and bounding box regression .figure [ fig-2007examples ] shows representative examples of successful detection using our method .for these cases , our method can localize objects accurately even if the initial bounding box proposals do nt have good overlaps with the ground truth .we show more examples ( including the failure cases ) in appendix [ sup : example - improvement ] , [ sup : example - fp ] , [ sup : example - rand ] .the summary results are in table [ tab - voc2007-iou5 ] with iou criteria of and table [ tab - voc2007-iou7 ] with .we report the performance with the alexnet and the vggnet ( 16 layers ) , a deeper cnn model than alexnet that showed a significantly better recognition performance and achieved the best performance on object localization task in ilsvrc 2014 .first of all , we observed the significant performance improvement by simply having a better cnn model .building upon the vggnet , the fgs improved the performance by and in map without and with bounding box regression ( table [ tab - voc2007-iou5 ] ) .it becomes much more significant when we consider iou criteria of ( table [ tab - voc2007-iou7 ] ) , improving upon the baseline model by and in map without and with bounding box regression .the results demonstrate that our fgs algorithm is effective in accurately localizing the bounding box of an object .further improvement has been made by training a classifier with structured svm objective ; we obtained map in iou criteria of , which , to our knowledge , is higher than the best published results , and map in iou criteria of with fgs and bounding box regression by training the classification layer only ( structobj " ) . by finetuning the whole cnn classifiers ( `` structobj - ft '' ) , we observed extra improvement for most cases ; for example , we obtained map in iou criteria of , which improves by in map over the method without finetuning .however , for iou.5 criterion , the overall improvement due to finetuning was relatively small , especially when using bounding box regression . in this case, considering the high computational cost for finetuning , we found that training only the classification layer is practically a sufficient way to learn a good localization - aware classifier .we provide in - depth analysis of our proposed methods in the appendices .specifically , we report the precision - recall curves of different combinations of the proposed methods ( appendix [ sup : pr - curves ] ) , the performance of fgs with different gp iterations ( appendix [ sup : per - iter - fgs ] ) , the analysis of localization accuracy ( appendix [ sup : loc - distr ] ) , and more detection examples .we also evaluate the performance of the proposed methods on pascal voc 2012 . 
asthe data statistics are similar to voc 2007 , we used the same hyperparameters as described in section [ exp - voc2007 ] for this experiment .we report the test set map over 20 object categories in table [ tab - voc2012-iou5 ] .our proposed method shows improvement by with r - cnn + structobj and with r - cnn + fgs over baseline r - cnn using vggnet .finally , we obtained map by combining the two methods , which significantly improved upon the baseline r - cnn model and the previously published results on the leaderboard .in this work , we proposed two complementary methods to improve the performance of object detection in r - cnn framework with 1 ) fine - grained search algorithm in a bayesian optimization framework to refine the region proposals and 2 ) a cnn classifier trained with structured svm objective to improve localization .we demonstrated the state - of - the - art detection performance on pascal voc 2007 and 2012 benchmarks under standard localization requirements .our methods showed more significant improvement with higher iou evaluation criteria ( e.g. , iou ) , and hold promise for mission - critical applications that require highly precise localization , such as autonomous driving , robotic surgery and manipulation .this work was supported by samsung digital media and communication lab , google faculty research award , onr grant n00014 - 13 - 1 - 0762 , china scholarship council , and rackham merit fellowship .we also acknowledge nvidia for the donation of gpus . finally , we thank scott reed , brian wang , junhyuk oh , xinchen yan , ye liu , wenling shang , and roni mittelman for helpful discussions .contents of appendicesthe model parameters are updated via gradient descent .the gradient , for example , with respect to the cnn parameters for positive examples is given as follows : where .similarly , the gradient for negative examples can be computed as follows : where .the gradient with respect to the parameters of all layers of cnn can be computed efficiently using backpropagation .when finetuning the entire network , the parameter updated in the hard mining procedure illustrated by algorithm [ alg : hard - mining ] is done by replacing with the cnn parameters .the active set consisting of the hard training instances are updated in two steps during the iterative learning process .first , we include instances to the active set when they are likely to be active , i.e. , affect the gradient : second , once new parameters are estimated , we exclude instances from the current active set when they are likely to be inactive , i.e. , have no effect on the gradient : in our experiments , we used and .the values of are the same as those for the svm training in r - cnn .we did not observe a noticeable performance fluctuation due to different values .algorithm [ alg : hard - mining ] summarizes the hard - mining procedure .initial parameters , maximum epoch number , training images , positive and negative index final parameters .the active set , s.t . , for , update the classifier / network parameters on s.t . , for our experiments on pascal voc 2007 and voc 2012 , we first finetune the cnn pretrained on imagenet by stochastic gradient descent with a 21-way softmax classification layer , where 20 ways are for the 20 object categories of interest , and the rest 1 way is for the background . 
in this step ,the sgd learning rate starts at 0.0003 , and decreases by 0.3 every 15000 iterations with a mini - batch size of 48 .we set the momentum to 0.9 , and the weight decay to 0.0005 for all the layers .after that , we replace the softmax loss layer with a 20-way structured loss layer , where each way is a binary classifier , and the hinge loss for different category are simply summed up . for classification layer only learning , l - bfgs is adopted , as batch gradient descent for a single layer .each category has an associated active set for hard negative mining .the classifier update happens independently for each category when ( in algorithm [ alg : hard - mining ] ) new hard examples are added to the active set .it is worth mentioning that , in the beginning of the hard negative mining , significantly more positive images are present than the negative images , resulting in serious unbalance of the active training samples . as a heuristic to avoid this problem , we limit the number of positive image to the number of the active negative images when classifier update happens in the first epoch .we run the hard negative mining for 2 epochs in total .the first epoch is for initializing the active set with the above heuristic , and the rest is for learning with the all the training data .compared to the linear svm training in r - cnn , our l - bfgs based solution to the structured objective costs longer time .however , it turns out to be significantly more efficient than svm^struct^ . for the entire network finetuning , we initialize the structured loss layer with the weights obtained bythe classification - layer - only learning .the whole network is then finetuned by backpropagating the gradient from the top layer with a fixed sgd learning rate of . for implementation simplicity , we keep updating the active sets until the end of an epoch , and update the classifiers per epoch ( i.e. , in algorithm [ alg : hard - mining ] ) . likebefore , each category still has one particular active set .however , the network parameters ( except for the classifier ) are shared across all the category so that the feature extraction time is not scaled up with the number of categories . in practice , we found one epoch was enough for both hard negative mining and sgd in the entire network finetuning case . running moreepochs did not make noticeable improvement on the final detection performance on pascal voc 2007 test set , but cost a significantly larger amount of training time .in this section , we provide more details on the local fgs presented in algorithm [ alg : fgs ] of the main text .* gpr practical efficiency : * for the initial proposals given by selective search , usually turns out to be 20 to 100 , and line [ alg - ln : latent],[alg - ln : best ] can be efficiently solved in around 9 and 6 l - bfgs iterations for structobj , respectively . * gpu parallelism for cnn : * one image can have multiple search regions ( e.g. , line 6 ) , and 20 object categories together yield more regions .fgs proposes one extra box per iteration for every search region .these boxes are fed into the cnn together to utilize gpu parallelism . for vggnet, we use the batch size of 8 for computing the cnn features within the fgs procedure . 
* time overhead : * for pascal voc 2007 , fgs ( ) induced only total overhead compared to initial time cost , which mostly consists of cnn feature extraction from bounding boxes proposed by selective search ( ss ) .specifically , of the overhead is caused by cnn feature extraction from the newly proposed boxes ( line 11 ) ; the rest is caused by gpr ( line9 , 10 ) , nms ( line 5 ) , and pruning ( line 12 ) .each gp iteration ( line 2 - 16 ) counts for with respect to the initial time cost , and gp iterations were sufficient for convergence .figure [ fig : gp - iter - time ] shows the trends of the accumulated time overhead introduced by fgs per iteration . the time overhead due to fgs may vary with different datasets ( e.g. , voc 2012 ) , but in general, it is compared to initial time cost .we evaluated the map at each gp iteration using r - cnn(vgg)+structobj+fgs+bboxreg .the maps from 0 to 8 gp iterations are reported in table [ tab : gp - stepwise ] .map increases rapidly in the first 4 iterations , and becomes stable in the following iterations .
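The per-iteration behaviour just described corresponds to an outer loop wrapped around the expected-improvement proposal step sketched earlier. A rough, assumption-heavy version is given below; cnn_score and candidate_sampler are placeholders for the detector evaluation and the local candidate generation, and the stopping rule is a simplification of the pruning used in the actual algorithm.

```python
import numpy as np

def fine_grained_search(local_boxes, local_scores, cnn_score, candidate_sampler,
                        log_ell, log_sf, max_iter=8, min_improve=1e-3):
    """Outer loop of the local fine-grained search: starting from the boxes in
    one local region, repeatedly fit the GP, propose the box with maximal
    expected improvement, score it with the CNN, and stop when proposals stop
    helping or the iteration budget is exhausted. cnn_score(box) and
    candidate_sampler(boxes) are hypothetical hooks, not functions from the paper."""
    Y = np.array(local_boxes, dtype=float)
    f = np.array(local_scores, dtype=float)
    for _ in range(max_iter):
        cand = candidate_sampler(Y)
        new_box = propose_box_ei(Y, f, cand, log_ell, log_sf)  # earlier sketch
        new_score = cnn_score(new_box)
        Y = np.vstack([Y, new_box])
        f = np.append(f, new_score)
        if new_score < f[:-1].max() + min_improve:
            break  # the proposal brought no acceptable improvement
    return Y[np.argmax(f)], float(f.max())
```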
|
object detection systems based on deep convolutional neural networks ( cnns ) have recently made ground - breaking advances on several object detection benchmarks . while the features learned by these high - capacity neural networks are discriminative for categorization , inaccurate localization is still a major source of error for detection . building upon high - capacity cnn architectures , we address the localization problem by 1 ) using a search algorithm based on bayesian optimization that sequentially proposes candidate regions for an object bounding box , and 2 ) training the cnn with a structured loss that explicitly penalizes the localization inaccuracy . in experiments , we demonstrate that each of the proposed methods improves the detection performance over the baseline method on the pascal voc 2007 and 2012 datasets . furthermore , the two methods are complementary and significantly outperform the previous state - of - the - art when combined .
|
recently , simultaneous wireless information and power transfer ( swipt ) becomes appealing by essentially providing a perpetual energy source for the wireless networks .moreover , the swipt system offers great convenience to mobile users , since it realizes both useful utilizations of radio signals to transfer energy as well as information .therefore , swipt has drawn an upsurge of research interests .varshney first proposed the idea of transmitting information and energy simultaneously in assuming that the receiver is able to decode information and harvest energy simultaneously from the same received signal .however , this assumption may not hold in practice , as circuits for harvesting energy from radio signals are not yet able to decode the carried information directly .two practical schemes for swipt , namely , time switching ( ts ) and power splitting ( ps ) , are proposed in . with ts applied at the receiver , the received signalis either processed by an energy receiver for energy harvesting ( eh ) or processed by an information receiver for information decoding ( i d ) . with psapplied at the receiver , the received signal is split into two signal streams with a fixed power ratio by a power splitter , with one stream to the energy receiver and the other one to the information receiver .swipt for multi - antenna systems has been considered in . in particular , studied the performance limits of a three - node multiple - input multiple - output ( mimo ) broadcasting system , where one receiver harvests energy and another receiver decodes information from the signals sent by a common transmitter . extended the work in by considering imperfect channel state information ( csi ) at the transmitter for a multiple - input single - output ( miso ) system .a miso swipt system without csi at the transmitter was considered in , where a new scheme that employs random beamforming for opportunistic eh was proposed . studied a miso broadcasting system that exploits near - far channel conditions , where `` near '' users are scheduled for eh , while `` far '' users are scheduled for i d .swipt that exploits flat - fading channel variations was studied in , where the receiver performs dynamic time switching ( dts ) or dynamic power splitting ( dps ) to coordinate between eh and i d .swipt in multi - antenna interference channels was considered in using ps and ts , respectively .swipt with energy / information relaying has been studied in , where an energy - constrained relay harvests energy from the received signal and uses that harvested energy to forward the source information to the destination .two relaying protocols , i.e. , the ts - based relaying ( tsr ) protocol and the ps - based relaying ( psr ) protocol , are proposed in . in the tsr protocol, the relay spends a portion of time for eh and the remaining time for information processing . in the psr protocol, the relay spends a portion of the received power for eh and the remaining power for information processing .networks that involve wireless power transfer were studied in . in ,the authors studied a hybrid network which overlays an uplink cellular network with randomly deployed power beacons that charge mobiles wirelessly . under an outage constraint on the data links ,the tradeoffs among the network parameters were derived . 
in ,the authors investigated a cognitive radio network powered by opportunistic wireless energy harvesting , where mobiles from the secondary network either harvest energy from nearby transmitters in a primary network , or transmit information if the primary transmitters are far away . under an outage constraint for coexisting networks, the throughput of the secondary network was maximized .orthogonal frequency division multiplexing ( ofdm ) is a well established technology for high - rate wireless communications , and has been adopted in various standards , e.g. , ieee 802.11n and 3gpp - long term evolution ( lte ) .however , the performance may be limited by the availability of energy in the devices for some practical application scenarios .it thus motivates our investigation of swipt in ofdm - based systems .swipt over a single - user ofdm channel has been studied in assuming that the receiver is able to decode information and harvest energy simultaneously from the same received signal .it is shown in that a tradeoff exists between the achievable rate and the transferred power by power allocation in the frequency bands : for sufficiently small transferred power , the optimal power allocation is given by the so - called `` waterfilling ( wf ) '' allocation to maximize the information transmission rate , whereas as the transferred power increases , more power needs to be allocated to the channels with larger channel gain and finally approaches the strategy with all power allocated to the channel with largest channel gain . however , due to the practical limitation that circuits for harvesting energy from radio signals are not yet able to decode the carried information directly , the result in actually provides only an upper bound for the rate - energy tradeoff in a single - user ofdm system .power control for swipt in a multiuser multi - antenna ofdm system was considered in , where the information decoder and energy harvester are attached to two separate antennas . in ,each user only harvests the energy carried by the subcarriers that are allocated to that user for i d , which is inefficient in energy utilization , since the energy carried by the subcarriers allocated to other users for i d can be potentially harvested .moreover , focuses on power control by assuming a predefined subcarrier allocation . in this paper, we jointly optimize the power allocation strategy as well as the subcarrier allocation strategy . considered swipt in a multiuser single - antenna ofdm system , where ps is applied at each receiver to coordinate between eh and i d . in , it is assumed that the splitting ratio can be different for different subcarriers .however , in practical circuits , ( analog ) power splitting is performed before ( digital ) ofdm demodulation .thus , for an ofdm - based swipt system , _ all _ subcarriers would have to be power split with the same power ratio at each receiver even though only a subset of the subcarriers contain information for the receiver .in contrast , for the case of a single - carrier system , a receiver simply harvests energy from all signals that do not contain information for this receiver . as an extension of our previous work in for a single - user narrowband swipt system , in this paper , we study a multiuser ofdm - based swipt system ( see fig .[ fig : system ] ) , where a fixed access point ( ap ) with constant power supply broadcasts signals to a set of distributed users . 
unlike the conventional wireless network where all the users contain only information receiver anddraw energy from their own power supplies , in our model , it is assumed that each user contains an additional energy receiver to harvest energy from the received signals from the ap .for the information transmission , two conventional multiple access schemes are considered , namely , time division multiple access ( tdma ) and orthogonal frequency division multiple access ( ofdma ) . for the tdma - based information transmission ,since the users are scheduled in nonoverlapping time slots to decode information , each user should apply ts such that the information receiver is used during the time slot when information is scheduled for that user , while the energy receiver is used in all other time slots . for the ofdma - based information transmission, we assume that ps is applied at each receiver .as mentioned in the previous paragraph , we assume that all subcarriers share the same power splitting ratio at each receiver . under the tdma scenario, we address the problem of maximizing the weighted sum - rate over all users by varying the power allocation in time and frequency and the ts ratios , subject to a minimum harvested energy constraint on each user and a peak and/or total transmission power constraint . by an appropriate variable transformationthe problem is reformulated as a convex problem , for which the optimal power allocation and ts ratios are obtained by the lagrange duality method . for the ofdma scenario, we address the same problem by varying the power allocation in frequency , the subcarrier allocation to users and the ps ratios . in this case, we propose an efficient algorithm to iteratively optimize the power and subcarrier allocation , and the ps ratios at receivers until the convergence is reached .furthermore , we compare the rate - energy performances by the two proposed schemes , both numerically by simulations and analytically for the special case of single - user system setup .it is revealed that the peak power constraint imposed on each ofdm subcarrier as well as the number of users in the system play key roles in the rate - energy performance comparison by the two proposed schemes .the rest of this paper is organized as follows .section [ sec : system model ] presents the system model and problem formulations .section [ sec : solution - single ] studies the special case of a single - user ofdm - based swipt system .section [ sec : solution ] derives the resource allocation solutions for the two proposed schemes in the multiuser ofdm - based swipt system .finally , section [ sec : conclusion ] concludes the paper .as shown in fig . [ fig : system ] , we consider a downlink ofdm - based system with one transmitter and users .the transmitter and all users are each equipped with one antenna .the total bandwidth of the system is equally divided into subcarriers ( scs ) .the sc set is denoted by .the power allocated to sc is denoted by .assume that the total transmission power is at most .the maximum power allocated to each sc is denoted by , i.e. , , where .the channel power gain of sc as seen by the user is denoted by .we consider a slow - fading environment , where all the channels are assumed to be constant within the transmission scheduling period of our interest . for simplicity , we assume the total transmission time to be one .moreover , it is assumed that the channel gains on all the scs for all the users are known at the transmitter . 
at the receiver side, each user performs eh in addition to i d .it is assumed that the minimum harvested energy during the unit transmission time is for user .we first consider the case of tdma - based information transmission with ts applied at each receiver .it is worth noting that for a single - user swipt system with ts applied at the receiver , the transmission time needs to be divided into two time slots to coordinate the eh and i d processes at the receiver .thus , in the swipt system with users , we consider time slots without loss of generality , where the additional time slot , which we called the _ power _ slot , may be allocated for all users to perform eh only .in contrast , in conventional tdma systems without eh , the power slot is not required .we assume that slot is assigned to user for transmitting information , while slot is the power slot . with total time duration of slots to be atmost one , the ( normalized ) time duration for slot is variable and denoted by the ts ratio , with and .in addition , the power allocated to sc at slot is specified as , where .the average transmit power constraint is thus given by consider user at the receiver side , user decodes its intended information at slot when its information is sent and harvests energy during all the other slots .the receiver noise at each user is assumed to be independent over scs and is modelled as a circularly symmetric complex gaussian ( cscg ) random variable with zero mean and variance at all scs .moreover , the gap for the achievable rate from the channel capacity due to a practical modulation and coding scheme ( mcs ) is denoted by .the achievable rate in bps / hz for the information receiver of user is thus given by assuming that the conversion efficiency of the energy harvesting process at each receiver is denoted by , the harvested energy in joule at the energy receiver of user is thus given by an example of the energy utilization at receivers for the ts case in a two - user ofdm - based swipt system is illustrated in fig .[ fig : pa - ts ] . as shown in fig .[ fig : pa - ts](a ) for user 1 , the received energy on all scs during slot 1 is utilized for i d ; while the received energy on all scs during slot 2 and slot 3 is utilized for eh . in fig .[ fig : pa - ts](b ) for user 2 , the received energy on all scs during slot 2 is utilized for i d ; while the received energy on all scs during slot 1 and slot 3 is utilized for eh .our objective is to maximize the weighted sum - rate of all users by varying the transmission power in the time and frequency domains jointly with ts ratios , subject to eh constraints and the transmission power constraints .thus , the following optimization problem is formulated . where denotes the non - negative rate weight assigned to user .problem ( p - ts ) is feasible when all the constraints in problem ( p - ts ) can be satisfied by some .from ( [ eq : ek - ts ] ) , the harvested energy at all users is maximized when , while for , i.e. , all users harvest energy during the entire transmission time .therefore , problem ( p - ts ) is feasible if and only if the following linear programming ( lp ) is feasible . is easy to check the feasibility for the above lp .we thus assume problem ( p - ts ) is feasible subsequently .problem ( p - ts ) is non - convex in its current form .we will solve this problem in section [ sec : solution - ts ] .next , we consider the case of ofdma - based information transmission with ps applied at each receiver . 
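Before moving to the OFDMA/PS case, the TDMA/TS quantities just defined can be evaluated with a short sketch. Since the displayed rate and harvested-energy expressions were lost in extraction, the code below reconstructs them from the verbal description: user k decodes during its own slot at the usual gap-penalized log rate and harvests, with conversion efficiency zeta, the energy received in all other slots. The default constants, toy numbers, and flat power allocation are assumptions made only for illustration.

```python
import numpy as np

def ts_rate_and_energy(alpha, p, h, w, sigma2=1e-9, gamma_gap=1.0, zeta=0.5):
    """Weighted sum-rate and per-user harvested energy for the TDMA/TS scheme.

    alpha : (K+1,) time ratios; slot k carries user k's data, slot K+1 is the power slot
    p     : (K+1, N) power on subcarrier n in slot i
    h     : (K, N)   channel power gain of subcarrier n seen by user k
    w     : (K,)     rate weights
    """
    K, N = h.shape
    rates = np.array([
        alpha[k] * np.sum(np.log2(1.0 + h[k] * p[k] / (gamma_gap * sigma2)))
        for k in range(K)
    ])
    # User k harvests during every slot except its own information slot.
    energy = np.array([
        zeta * sum(alpha[i] * np.sum(h[k] * p[i]) for i in range(K + 1) if i != k)
        for k in range(K)
    ])
    return float(w @ rates), energy

# Toy instance: two users, four subcarriers, flat power, total transmit energy 10.
h = np.array([[1.0, 0.8, 0.5, 0.3], [0.4, 0.9, 0.7, 0.2]]) * 1e-6
alpha = np.array([0.4, 0.4, 0.2])
p = np.full((3, 4), 2.5)          # sum_i alpha_i * sum_n p_{i,n} = 10 here
wsr, e_harv = ts_rate_and_energy(alpha, p, h, w=np.ones(2))
print(wsr, e_harv)
```

Checking such a candidate allocation against the harvested-energy targets is exactly the feasibility question formalized by the linear program mentioned above.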
as is standard in ofdma transmissions , each sc is allocated to at most one user in each slot , i.e. , no sc sharing is allowed .we define a sc allocation function , i.e. , the sc is allocated to user .the total transmission power constraint is given by at the receiver , the received signal at user is processed by a power splitter , where a ratio of power is split to its energy receiver and the remaining ratio is split to its information receiver , with .the achievable rate in bps / hz at sc assigned to user is thus with energy conversion efficiency , the harvested energy in joule at the energy receiver of user is thus given by an example of the energy utilization at receivers for the ps case in a two - user ofdm - based swipt system is illustrated in fig .[ fig : pa - ps ] . as shown in fig .[ fig : pa - ps ] , the received signals at all scs share the same splitting ratio at each user .it is worth noting that only of the power at each of the scs allocated to user 2 for i d is harvested by user 1 , the remaining of power at those scs is neither utilized for eh nor i d at user 1 , similarly as for user 2 with ps ratio . with the objective of maximizing the weighted sum - rate of all users by varying the transmission power in the frequency domain , the sc allocation , jointly with the ps ratios at receivers , subject to a given set of eh constraints and the transmission power constraints , the following optimization problem is formulated . from ( [ eq : ek - ps ] ) , the harvested energy at all users is maximized when , i.e. , all power is split to the energy receiver at each user . therefore , problem ( p - ps ) is feasible if and only if problem ( p - ps ) with is feasible .it is worth noting that problem ( p - ps ) and problem ( p - ts ) are subject to the same feasibility conditions as given by problem ( [ prob : feasibility ] ) .it can be verified that problem ( p - ps ) is non - convex in its current form .we will solve this problem in section [ sec : solution - ps ] .an upper bound for the optimization problems ( p - ts ) and ( p - ps ) can be obtained by assuming that each receiver is able to decode the information in its received signal and at the same time harvest the received energy without any implementation loss .we thus consider the following optimization problem . note that problem ( p - ub ) , as well as problem ( p - ts ) and problem ( p - ps ) are subject to the same feasibility conditions as given by problem ( [ prob : feasibility ] ) .also note that any infeasible problem ( p - ub ) can be modified to become a feasible one by increasing the transmission power or by decreasing the minimum required harvested energy at some users . in the sequel, we assume that all the three problems are feasible , thus optimal solutions exist .the solution for problem ( p - ub ) is obtained in section [ sec : solution - ps ] ( see remark [ remark : p - ub ] ) .to obtain tractable analytical results , in this section , we consider the special case that , i.e. , a single - user ofdm - based swipt system . for brevity , , , and are replaced with , , and , respectively . without loss of generality , we assume that and . with , problem ( p - ts ) and problem ( p - ps ) are then simplified respectively as follows to obtain useful insight , we first look at the two extreme cases , i.e. 
, and .we shall see that the peak power constraint plays an important role in the performance comparison between the ts and ps schemes .note that implies the case of no peak power constraint on each sc ; and implies the case of only peak power constraint on each sc , since the total power constraint is always satisfied and thus becomes redundant .given and , the maximum rates achieved by the ts scheme and the ps scheme are denoted by and , respectively .for the case of , we have the following proposition for the ts scheme .we recall that is the ts ratio for the power slot .[ proposition-1 ] assuming , in the case of a single - user ofdm - based swipt system with , the maximum rate by the ts scheme , i.e. , , is achieved by and .clearly , we have ; otherwise , no energy is harvested , which violates the eh constraint .thus , . to maximize the objective function subject to the eh constraint, it can be easily shown that the optimal and should satisfy and .it follows that the minimum transmission energy consumed to achieve the harvested energy is given by , i.e. , .thus , in problem ( [ prob : ts - single ] ) , the achievable rate is given by maximizing subject to and .let , the above problem is then equivalent to maximizing subject to and . for given ,the objective function is an increasing function of ; thus , is maximized when .it follows that , which completes the proof .[ remark : peak - infty ] by proposition [ proposition-1 ] , to achieve with , the portion of transmission time allocated to eh in each transmission block should asymptotically go to zero .for example , let denote the number of transmitted symbols in each block , by allocating symbols for eh in each block and the remaining symbols for i d results in as , which satisfies the optimality condition provided in proposition [ proposition-1 ] .it is worth noting that is achieved under the assumption that the transmitter and receiver are able to operate in the regime of infinite power in the eh time slot due to . for a finite , a nonzero time ratio should be scheduled to the power slot to collect sufficient energy to satisfy the eh constraint .moreover , we have the following proposition showing that the ps scheme performs no better than the ts scheme for the case of .[ proposition-2 ] in the case of a single - user ofdm - based swipt system with , the maximum rate achieved by the ps scheme is no larger than that achieved by the ts scheme , i.e. , . for the ps scheme , from the eh constraint , it follows that must hold .thus , is upper bounded by maximizing subject to and .let , the above problem is then equivalent to maximizing subject to and . since , it follows that .note that according to proposition [ proposition-1 ] , is obtained ( with ) by maximizing subject to .therefore , we have . for the other extreme casewhen , we have the following proposition . [ proposition-3 ] in the case of a single - user ofdm - based swipt system with ,the maximum rate achieved by the ts scheme is no larger than that achieved by the ps scheme , i.e. , . with ,the total power constraint is redundant for both ts and ps .thus , the optimal power allocation for ts is given by .it follows that .then we have the optimal . is thus given by .on the other hand , the optimal power allocation for ps is given by .it follows that . 
is thus given by .due to the concavity of the logarithm function , we have , which completes the proof .in fact , from the proof of proposition [ proposition-3 ] , we have provided that equal power allocation ( not necessarily equals to ) over all scs are employed for both ts and ps schemes .note that for a single - user ofdm - based swipt system with , the performance comparison between the ts scheme and the ps scheme remains unknown analytically . from proposition [ proposition-2 ] and proposition [ proposition-3 ] , neither the ts scheme northe ps scheme is always better .it suggests that for a single - user ofdm - based swipt system with sufficiently small peak power , the ps scheme may be better ; with sufficiently large peak power , the ts scheme may be better .for the special case that , i.e. , a single - carrier point - to - point swipt system , the following proposition shows that : for , the ts and ps schemes achieve the same rate ; for a finite peak power , the ts scheme performs no better than the ps scheme .[ proposition-4 ] in the case of a single - carrier point - to - point swipt system with , we have , with equality if .since , we remove the sc index in the subscripts of , , and . for the ps scheme , to satisfy the eh constraint, we have ; thus , with , the maximum rate by the ps scheme is given by .for the ts scheme , we have to satisfy the eh constraint .it follows that .therefore , , and the equality holds if . by proposition [ proposition-1 ] , is achieved by ; thus , the above equality holds if , which completes the proof . figs . [fig : rate - k1 ] and [ fig : rate - k1n1 ] show the achievable rates by different schemes versus different minimum required harvested energy . for fig .[ fig : rate - k1 ] , the total bandwidth is assumed to be , which is equally divided as subcarriers .the six - tap exponentially distributed power profile is used to generate the frequency - selective fading channel . for fig .[ fig : rate - k1n1 ] with , i.e. , a single - carrier point - to - point swpt system , the bandwidth is assumed to be .for both figures , the transmit power is assumed to be (w ) or 30dbm .the distance from the transmitter to the receiver is 1 meter(m ) , which results in path - loss for all the channels at a carrier frequency with path - loss exponent equal to 3 .for the energy receivers , it is assumed that .for the information receivers , the noise spectral density is assumed to be / hz . the mcs gap is assumed to be . in both fig .[ fig : rate - k1 ] and fig .[ fig : rate - k1n1 ] , it is observed that for both ts and ps schemes , the achievable rate decreases as the minimum required harvested energy increases , since the available energy for information decoding decreases as increases . in fig .[ fig : rate - k1 ] with , it is observed that there is a significant gap between the achievable rate by ts with and that by ts with ; moreover , the gap increases as increases .this is because that with , all transmission time can be utilized for information decoding by letting ( c.f . proposition [ proposition-1 ] ) ; whereas for a finite , a nonzero transmission time needs to be scheduled for energy harvesting . for the ps scheme , this performance gap due to finite peak power constraint is only observed when is sufficiently large . comparing the ts and ps schemes in fig .[ fig : rate - k1 ] , it is observed that ts outperforms ps when ; however , for sufficiently small , e.g. , , ps outperforms ts . 
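the single-carrier comparison reported in fig. [fig:rate-k1n1] (discussed next) can be reproduced by brute force. the sketch below follows the closed-form steps in the proofs of propositions 1 and 4 as read from the text above -- the smallest splitting ratio meeting the eh target for ps, and the minimum eh transmit energy for ts -- with placeholder parameter values; it is an illustration of the trend, not the paper's simulation code.

```python
import numpy as np

h, sigma2, Gamma, zeta = 1e-3, 1e-9, 2.0, 0.5   # channel gain, noise, SNR gap, EH efficiency (assumed)
P, Ebar = 1.0, 1e-4                              # average power budget and EH target (assumed)

def rate(p):
    return np.log2(1 + h * p / (Gamma * sigma2)) if p > 0 else 0.0

def best_ps(P_peak):
    """PS: transmit at the full allowed power, split just enough power for the EH target."""
    p = min(P, P_peak)
    rho = Ebar / (zeta * h * p)
    return rate((1 - rho) * p) if rho <= 1 else 0.0

def best_ts(P_peak, grid=4000):
    """TS: grid over the ID slot length a; the EH slot receives the minimum transmit
    energy Ebar/(zeta*h), and the rest of the budget goes to the ID slot."""
    e_tx = Ebar / (zeta * h)
    best = 0.0
    for a in np.linspace(1e-4, 1 - 1e-4, grid):
        p_eh = e_tx / (1 - a)
        if p_eh > P_peak or e_tx > P:
            continue
        p_id = min(P_peak, (P - e_tx) / a)
        best = max(best, a * rate(p_id))
    return best

for P_peak in (np.inf, 2 * P, P / 2):
    print(P_peak, round(best_ts(P_peak), 3), round(best_ps(P_peak), 3))
# expected trend: nearly equal rates for P_peak = inf, PS ahead of TS for finite peak power
```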
in fig .[ fig : rate - k1n1 ] with , it is observed that when , the achievable rate by the ts scheme is the same as that by the ps scheme ; when , the achievable rate by the ts scheme is no larger than that by the ps scheme , which is in accordance with proposition [ proposition-4 ] .in this section , we consider the general case of an ofdm - based swipt system with multiple users .we derive the optimal transmission strategies for the two schemes proposed in section [ sec : system model ] , and compare their performances .we first reformulate problem ( p - ts ) by introducing a set of new non - negative variables : .moreover , we define at to keep continuity at .( p - ts ) is thus equivalent to the following problem : ) , for the case of , we allow while by letting . ] after finding the optimal and for problem ( [ prob : ts-1 ] ) , the optimal power allocation for problem ( p - ts ) is given by provided that . from the constraint , we have if and .thus , if and , the allocated power will be , since no information / power transmission is scheduled at slot . for the extreme case of ,if , then the allocated power will be ; if and , then we have .[ lemma:1 ] function is jointly concave in and , where please refer to appendix [ appendix : proof 1 ] . from lemma [ lemma:1 ] , as a non - negative weighted sum of , the new objective function of problem ( [ prob : ts-1 ] )is jointly concave in and . since the constraints are now all affine , problem ( [ prob : ts-1 ] ) is convex , and thus can be optimally solved by applying the lagrange duality method , as will be shown next .the lagrangian of problem ( [ prob : ts-1 ] ) is given by where , , and are the non - negative dual variables associated with the corresponding constraints in ( [ prob : ts-1 ] ) .the dual function is then defined as the optimal value of the following problem . dual problem is thus defined as .first , we consider the maximization problem in ( [ eq : dual func - ts ] ) for obtaining with a given set of , , and .we define , as shown in ( [ eq : lk ] ) at the top of this page . then for the lagrangian in ( [ eq : lagrangian - ts ] ) , we have thus , for each given , the maximization problem in ( [ eq : dual func - ts ] ) can be decomposed as we first study the solution for problem ( [ prob : ts - lk ] ) with given . from ( [ eq : lk ] ) , we have given , the that maximizes can be obtained by setting to give where . for given , it appears that there is no closed - form expression for the optimal that maximizes .however , since is a concave function of with given , can be found numerically by a simple bisection search over . to summarize , for given , problem ( [ prob : ts - lk ] )can be solved by iteratively optimizing between and with one of them fixed at one time , which is known as block - coordinate descent method .next , we study the solution for problem ( [ prob : ts - lk ] ) for , i.e. , for the power slot , which is a lp .define the set . 
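the inner block-coordinate step just described -- a closed-form update of the slot powers for fixed slot duration, followed by a one-dimensional search over the duration, which is concave -- can be sketched as below before continuing with the power-slot lp. the per-slot lagrangian term (eq:lk) is not reproduced in this copy, so the expression used here is an assumed water-filling-like form, and the constants are illustrative.

```python
import numpy as np

def per_slot_update(h, k, w_k, betas, lam, mu, alpha0=0.5, n_outer=30,
                    sigma2=1e-9, Gamma=2.0, zeta=0.5):
    """Block-coordinate sketch for one information slot k of the dual problem:
    closed-form update of q = alpha*p for fixed alpha, then a golden-section
    search over alpha (the objective is concave in alpha for fixed q).
    Assumed per-slot Lagrangian (the paper's (eq:lk) is stripped in this copy):
      L_k = w_k*alpha*mean_n log2(1 + h[k,n] q_n/(alpha*Gamma*sigma2))
            + sum_n c_n q_n - lam*sum_n q_n - mu*alpha,
      c_n = zeta*sum_{j != k} betas[j]*h[j,n]   (EH credit earned at the other users).
    """
    K, N = h.shape
    c = zeta * ((betas[:, None] * h).sum(axis=0) - betas[k] * h[k])
    alpha = alpha0
    for _ in range(n_outer):
        # stationarity in q for fixed alpha: a water-filling-like level per subcarrier
        level = w_k * alpha / (np.log(2) * N * np.maximum(lam - c, 1e-12))
        q = np.maximum(level - alpha * Gamma * sigma2 / h[k], 0.0)

        def L(a):
            return (w_k * a * np.mean(np.log2(1 + h[k] * q / (a * Gamma * sigma2)))
                    + c @ q - lam * q.sum() - mu * a)

        lo, hi = 1e-6, 1.0
        for _ in range(50):                      # 1-D search over the slot duration
            m1, m2 = lo + 0.382 * (hi - lo), lo + 0.618 * (hi - lo)
            lo, hi = (lo, m2) if L(m1) > L(m2) else (m1, hi)
        alpha = 0.5 * (lo + hi)
    return alpha, q

rng = np.random.default_rng(2)
h = rng.exponential(1.0, (3, 16))
alpha, q = per_slot_update(h, k=0, w_k=1.0, betas=np.full(3, 0.1), lam=5.0, mu=0.1)
print(round(alpha, 3), q[:4])
```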
from ( [ eq : lk ] ) , to maximize we have and after obtaining with given , , and , the minimization of over , , and can be efficiently solved by the ellipsoid method .a subgradient of this problem required for the ellipsoid method is provided by the following proposition .[ proposition - subgradient ] for problem ( [ prob : ts-1 ] ) with a dual function , the following choice of is a subgradient for : where and is the solution of the maximization problem ( [ eq : dual func - ts ] ) with given and .please refer to appendix [ appendix : proof 2 ] .note that the optimal and are obtained at optimal , , and .given , the objective function in problem ( [ prob : ts-1 ] ) is an increasing function of .thus , the optimal s , satisfy ; otherwise , the objective can be improved by increasing some of the s , .then , the optimal is given by . with , ,problem ( [ prob : ts-1 ] ) becomes a lp with variables .the optimal values of are obtained by solving this lp . to summarize , one algorithm to solve ( p - ts ) is given in table [ table1 ] .for the algorithm given in table [ table1 ] , the computation time is dominated by the ellipsoid method in steps i)-iii ) and the lp in step v ) .in particular , the time complexity of steps 1)-3 ) is of order , step 4 ) is of order , step 5 ) is of order , and step 6 ) is of order .thus , the time complexity of steps 1)-6 ) is of order , i.e. , .note that step ii ) iterates to converge , thus the time complexity of stepsi)-iii ) is .the time complexity of the lp in step v ) is .therefore , the time complexity of the algorithm in table [ table1 ] is . ' '' '' * initialize , and . ** repeat * * * initialize . * * * repeat * * * * for , compute by ( [ eq : pa - ts ] ) . * * * for , obtain that maximizes with given by bisection search . * * * until * improvement of converges to a prescribed accuracy .* * compute and by ( [ eq : q_k+1,n ] ) and ( [ eq : alpha - k+1 ] ) . * * compute the subgradient of by ( [ eq : subgradient - ts ] ) . * * update , and according to the ellipsoid method . * * until * , and converge to a prescribed accuracy . *set , and . *obtain by solving problem ( [ prob : ts-1 ] ) with , . * for and , if , set ; if and , set ; if and , set . ' '' '' [ table1 ] similar with the single - user case , we have the following proposition . in the case of a multiuser ofdm - based swipt system with and ,the maximum rate by the ts scheme , i.e. , , is achieved by or . in the equivalent problem ( [ prob : ts-1 ] ) with , the eh constraints and the total power constraint are independent of .the objective in problem ( [ prob : ts-1 ] ) is an increasing function of for given .thus , the maximum achievable rate is obtained by minimizing the time allocated to the power slot , i.e. , ( when ) or ( when for some ) .it is worth noting that for the multiuser system with and , it is possible that the maximum rate by the ts scheme is achieved by , in which case no additional power slot is scheduled and all users simply harvest energy at the slots scheduled for other users to transmit information . in contrast , for the single - user case , the power slot is always needed if .[ remark : conventional tdma ] in problem ( [ prob : ts-1 ] ) , when and , the system becomes a conventional tdma system without eh constraints .assume that the harvesting energy at each user by the optimal transmission strategy for this system is given by . 
then for a system with , the same rate as that by the conventional tdma system can be achieved .since problem ( p - ps ) is non - convex , the optimal solution may be computationally difficult to obtain .instead , we propose a suboptimal algorithm to this problem by iteratively optimizing and with fixed , and optimizing with fixed and .note that ( p - ps ) with given and is a convex problem , of which the objective function is a nonincreasing function of .thus , the optimal power splitting ratio solution for ( p - ps ) with a given set of feasible and is obtained as next , consider ( p - ps ) with a given set of feasible s , i.e. , where and can be viewed as the equivalent channel power gains for the information and energy receivers , respectively .the problem in ( [ prob : ps - rho ] ) is non - convex , due to the integer sc allocation .however , it has been shown that the duality gap of a similar problem to ( [ prob : ps - rho ] ) without the harvested energy constrains converges to zero as the number of scs , , increases to infinity .thus , we solve problem ( [ prob : ps - rho ] ) by applying the lagrange duality method assuming that it has a zero duality gap . with , the duality gap of problem ( [ prob : ps - rho ] )is observed to be negligibly small and thus can be ignored . ]the lagrangian of problem ( [ prob : ps - rho ] ) is given by where s and are the non - negative dual variables associated with the corresponding constraints in ( [ prob : ps - rho ] ) .the dual function is then defined as the dual problem is thus given by .consider the maximization problem in ( [ eq : dual func ] ) for obtaining with a given set of and .for each given sc , the maximization problem in ( [ eq : dual func ] ) can be decomposed as note that for the lagrangian in ( [ eq : lagrangian ] ) , we have from ( [ eq : ln ] ) , we have thus , for any given sc allocation , the optimal power allocation for problem ( [ eq : ln ] ) is obtained as thus , for each given , the optimal sc allocation to maximize can be obtained , which is shown in ( [ eq : subcarrier ] ) at the top of next page . note that ( [ eq : subcarrier ] ) can be solved by exhaustive search over the user set . after obtaining with given and , the minimization of over and can be efficiently solved by the ellipsoid method .a subgradient of this problem required for the ellipsoid method is provided by the following proposition . for problem ( [ prob : ps - rho ] ) with a dual function , the following choice of is a subgradient for : where is the solution of the maximization problem ( [ eq : dual func ] ) with given and .the proof is similar as the proof of proposition [ proposition - subgradient ] , and thus is omitted .[ remark : p - ub ] the optimal solution for ( p - ub ) can be obtained by setting in problem ( [ prob : ps - rho ] ) .hence , the above developed solution is also applicable for problem ( p - ub ) . for ( p - ps ) with given , the optimal and obtained by ( [ eq : pa ] ) and ( [ eq : subcarrier ] ) , respectively .define the corresponding optimal value of problem ( [ prob : ps - rho ] ) as , where ^t$ ]. 
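a compact sketch of the per-subcarrier inner step just derived -- a closed-form power for each candidate user followed by an exhaustive search over the user set, for fixed splitting ratios and dual variables -- is given below; the outer update of the splitting ratios by (eq:rho) continues right after this sketch. the per-subcarrier lagrangian used here is an assumed form, since the displayed equations are stripped in this copy, and the numbers are illustrative.

```python
import numpy as np

def inner_ps_step(h, rho, w, mus, lam, sigma2=1e-9, Gamma=2.0, zeta=0.5):
    """Per-subcarrier dual decomposition for the PS scheme with fixed splitting
    ratios rho and dual variables (mus for the EH constraints, lam for total power).
    Assumed per-SC Lagrangian for candidate user k on subcarrier n:
      w[k]*log2(1 + (1-rho[k])*h[k,n]*p/(Gamma*sigma2))
      + sum_j mus[j]*zeta*rho[j]*h[j,n]*p - lam*p
    giving a water-filling-like power and an exhaustive search over users.
    lam must exceed the EH credit for the subproblem to stay bounded."""
    K, N = h.shape
    pi = np.zeros(N, dtype=int)     # SC-to-user assignment
    p = np.zeros(N)                 # power per subcarrier
    for n in range(N):
        eh_credit = zeta * np.sum(mus * rho * h[:, n])
        denom = max(lam - eh_credit, 1e-12)
        best_val = -np.inf
        for k in range(K):
            cand = max(w[k] / (np.log(2) * denom)
                       - Gamma * sigma2 / ((1 - rho[k]) * h[k, n]), 0.0)
            val = (w[k] * np.log2(1 + (1 - rho[k]) * h[k, n] * cand / (Gamma * sigma2))
                   + (eh_credit - lam) * cand)
            if val > best_val:
                best_val, pi[n], p[n] = val, k, cand
    return pi, p

rng = np.random.default_rng(1)
h = rng.exponential(1.0, (4, 32))
pi, p = inner_ps_step(h, rho=np.full(4, 0.3), w=np.ones(4), mus=np.full(4, 0.05), lam=8.0)
print(pi[:8], p[:8])
```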
then can be increased by optimizing s by ( [ eq : rho ] ) .the above procedure can be iterated until can not be further improved .note that problem ( [ prob : ps - rho ] ) is guaranteed to be feasible at each iteration , provided that the initial s are feasible , since at each iteration we simply decrease s to make all the harvested energy constraints tight .thus , with given initial , the iterative algorithm is guaranteed to converge to a local optimum of ( p - ps ) when all the harvested energy constraints in ( [ prob : ps - rho ] ) are tight .note that the above local optimal solution depends on the choice of initial . to obtain a robust performance ,we randomly generate feasible as the initialization steps , where is a sufficiently large number . for each initialization step ,the iterative algorithm is applied to obtain a local optimal solution for ( p - ps ) .the final solution is selected as the one that achieves the maximum weighted sum - rate from all the solutions . to summarize , the above iterative algorithm to solve ( p - ps ) is given in table [ table3 ] .for the algorithm given in table [ table3 ] , the computation time is dominated by the ellipsoid method in steps a)-c ) . in particular , in step b ) ,the time complexity of step a ) is of order , step b ) is of order , and step c ) is of order .thus , the time complexity of steps a)-c ) is of order , i.e. , .note that step b ) iterates to converge , thus the time complexity of the ellipsoid method is .considering further the number of initialization steps , the time complexity of the algorithm in table [ table3 ] is . ' '' '' * randomly generate feasible as different initialization steps .* for each initialization step : * * initialize . * * * repeat * * * * compute and .initialize and . ** * * repeat * * * * * compute and by ( [ eq : pa ] ) and ( [ eq : subcarrier ] ) with given and . * * * * compute the subgradient of by ( [ eq : subgradient - ps ] ) . * * * * update and according to the ellipsoid method . * * * * until * and converge to a prescribed accuracy . ** * update by ( [ eq : rho ] ) with fixed and . * * * until * , where controls the algorithm accuracy .* select the one that achieves the maximum weighted sum - rate from the solutions . ' '' '' [ table3 ] we provide simulation results under a practical system setup . for each user, we use the same parameters as the single - user case with in section [ sec : solution - single ] .the channels for different users are generated independently .in addition , it is assumed that , i.e. , sum - rate maximization is considered .the minimum harvested energy is assumed to be the same for all users , i.e. , .the number of initialization steps is set to be 100. figs .[ fig : re - k4-nopeak ] and [ fig : re - k4-peak ] show the achievable rates versus the minimum required harvested energy by different schemes with .we assume in fig .[ fig : re - k4-nopeak ] , and in fig .[ fig : re - k4-peak ] . fig .[ fig : alpha ] shows the time ratio of the eh slot versus minimum required harvested energy for the ts scheme in fig .[ fig : re - k4-peak ] . in fig .[ fig : re - k4-nopeak ] with , it is observed that when , the achievable rates by both ts and ps are less than the upper bound . for the ts scheme ,the maximum rate is achieved when is less than 150 ( c.f .remark [ remark : conventional tdma ] ) ; when is larger than 150 , the achievable rate decreases as increases . 
for the ps scheme , the achievable rate decreases as increases , since for larger more power needs to be split for eh at each receiver . comparing the ts and ps schemes ,it is observed that for sufficiently small ( ) , the achievable rate by ps is larger than that by ts .this is because that when the required harvested energy is sufficiently small , only a small portion of power needs to be split for energy harvesting , and the ps scheme may take the advantage of the frequency diversity by subcarrier allocation . for sufficiently large ( ), it is observed that the achievable rate by ts is larger than that by ps . in fig .[ fig : re - k4-peak ] with , it is observed that when is sufficiently large , the ts scheme becomes worse than the ps scheme .this is because that for a finite peak power constraint on each sc , as becomes sufficiently large , the ts scheme needs to schedule a nonzero eh slot to ensure all users harvest sufficient energy ( see fig .[ fig : alpha ] ) , the total information transmission time then decreases and results in a degradation of achievable rate. however , for , in which case the system achieves large achievable rate ( larger than of ub ) while each user harvests a reasonable value of energy ( about to of the maximum possible value ) , the ts scheme still outperforms the ps scheme .[ fig : rateoverk ] shows the achievable rates versus the number of users by different schemes under fixed minimum required harvested energy and . in fig .[ fig : rateoverk ] , it is observed that for both ts and ps schemes , the achievable rate increases as the number of users increases , and the rate tends to be saturated due to the bandwidth and the transmission power of the system is fixed . in particular , for the ts scheme , the achievable rate with is much larger ( about ) than that with .this is because that for the case , one of the user decodes information when the other user is harvesting energy ; however , for the single - user case , the transmission time when the user is harvesting energy is not utilized for information transmission .it is also observed in fig .[ fig : rateoverk ] that for a general multiuser system with large , the ts scheme outperforms the ps scheme .this is intuitively due to the fact that as the number of users increases , the portion of energy discarded at the information receiver at each user after power splitting also becomes larger ( c.f .[ fig : pa - ps ] ) , hence using ps becomes inefficient for large .this paper has studied the resource allocation optimization for a multiuser ofdm - based downlink swipt system .two transmission schemes are investigated , namely , the tdma - based information transmission with ts applied at each receiver , and the ofdma - based information transmission with ps applied at each receiver . in both cases , the weighted sum - rate is maximized subject to a given set of harvested energy constraints as well as the peak and/or total transmission power constraint .our study suggests that , for the ts scheme , the system can achieve the same rate as the conventional tdma system , and at the same time each user is still able to harvest a reasonable value of energy . when the harvested energy required at users is sufficiently large , however , a nonzero eh slot may be needed .this in turn degrades the rate of the ts scheme significantly .hence , the ps scheme may outperform the ts scheme when the harvested energy is sufficiently large . 
from the view of implementation , the ts scheme is easier to implement at the receiver side by simply switching between the two operations of eh and i d .moreover , in practical circuits the power splitter or switcher may introduce insertion loss and degrade the performance of the two schemes .this issue is unaddressed in this paper , and is left for future work .to prove the concavity of function , it suffices to prove that for all , , and the convex combination with , we have . with , we consider the following four cases for .\1 ) and : in this case , we have .since is a concave function of , it follows that its perspective is jointly concave in and for . therefore , we have .\2 ) and : in this case , we have , , and thus , we have .\3 ) and : similar as case 2 ) , we have .\4 ) and : in this case , we have . therefore , . from the above four cases, we have for all and , and thus is concave , which completes the proof .for any , we have h. ju and r. zhang , `` a novel mode switching scheme utilizing random beamforming for opportunistic energy harvesting , '' in _ proc .ieee wireless communications and networking conference ( wcnc ) _ , apr . 2013 . j. xu , l. liu , and r. zhang , `` multiuser miso beamforming for simultaneous wireless information and power transfer , '' in _ proc .ieee international conference on acoustics , speech , and signal processing ( icassp ) _ , may 2013 .a. a. nasir , x. zhou , s. durrani , and r. a. kennedy , `` relaying protocols for wireless energy harvesting and information processing , '' _ ieee trans .wireless commun .3622 - 3636 , july 2013 .k. huang and v. k. n. lau , `` enabling wireless power transfer in cellular networks : architecture , modeling and deployment , '' to appear in _ ieee trans .wireless commun ._ , available on - line at arxiv:1207.5640 .
|
in this paper , we study the optimal design for simultaneous wireless information and power transfer ( swipt ) in downlink multiuser orthogonal frequency division multiplexing ( ofdm ) systems , where the users harvest energy and decode information using the same signals received from a fixed access point ( ap ) . for information transmission , we consider two types of multiple access schemes , namely , time division multiple access ( tdma ) and orthogonal frequency division multiple access ( ofdma ) . at the receiver side , due to the practical limitation that circuits for harvesting energy from radio signals are not yet able to decode the carried information directly , each user applies either time switching ( ts ) or power splitting ( ps ) to coordinate the energy harvesting ( eh ) and information decoding ( i d ) processes . for the tdma - based information transmission , we employ ts at the receivers ; for the ofdma - based information transmission , we employ ps at the receivers . under the above two scenarios , we address the problem of maximizing the weighted sum - rate over all users by varying the time / frequency power allocation and either ts or ps ratio , subject to a minimum harvested energy constraint on each user as well as a peak and/or total transmission power constraint . for the ts scheme , by an appropriate variable transformation the problem is reformulated as a convex problem , for which the optimal power allocation and ts ratio are obtained by the lagrange duality method . for the ps scheme , we propose an iterative algorithm to optimize the power allocation , subcarrier ( sc ) allocation and the ps ratio for each user . the performances of the two schemes are compared numerically as well as analytically for the special case of single - user setup . it is revealed that the peak power constraint imposed on each ofdm sc as well as the number of users in the system play key roles in the rate - energy performance comparison by the two proposed schemes . simultaneous wireless information and power transfer ( swipt ) , energy harvesting , wireless power , orthogonal frequency division multiplexing ( ofdm ) , orthogonal frequency division multiple access ( ofdma ) , time division multiple access ( tdma ) , time switching , power splitting . [ section ] [ section ] [ section ] [ section ] [ section ] [ section ] [ section ] [ section ]
|
quantum key distribution ( qkd ) is a protocol to share the secret key between two authenticate parties ( alice and bob ) with negligible leakage of its information to an eavesdropper ( eve ) .the advantage of employing qkd is that it can achieve the unconditional security , which is the security against any possible attack allowed by the law of quantum mechanics under some assumptions on the devices used by alice and bob .the first qkd protocol was proposed by bennett and brassard at 1984 ( the protocol is called bb84 protocol ) . since this proposal ,many works have been devoted to prove the unconditional security , and some works take into account practical imperfections of the devices used by alice and bob .it is important that a mathematical model of alice and bob s devices is needed to prove the security of a qkd protocol , and thus the model should reflect the actual imperfections of the devices for the realization . in this paper , we consider the effect of the losses in phase modulators and the unbalance in the transmission rate of beam splitters in the phase encoded bb84 .this practical imperfection is firstly pointed out by ferenczi , _ et . , and this scheme is referred to as the unbalanced bb84 .since the actual phase modulators have inevitable losses and the transmission rate of actual beam splitters can not be exactly 50% , it is important to analyze the security of the protocol in order to fit the theory to the actual situation .the security of this protocol has been analyzed by ferenczi , _ et . al . _ based on the security proof . in their proof , however , it is not clear whether blindly applying the post - processing for the standard ( balanced ) bb84 would lead to an insecure key or not .the purpose of this paper is to provide the unconditional security proof of the unbalanced bb84 by showing that any security proof of the balanced bb84 where eve is allowed to distribute alice and bob a basis - independent state , for instance the security proof based on complementary scenario or shor - preskill type proof , can be directly applied to the unbalanced bb84 .this means that we can safely perform the data processing for the key distillation as if there were no unbalance and the unbalance only changes experimental data .moreover , a natural consequence of our security proof is that as long as the unbalances are basis - independent , our conclusion holds even if the unbalance in alice and bob is unknown and fluctuate in time .in order to see the performance of the unbalanced bb84 , we simulate the resulting key generation rate as a function of the distance between alice and bob . following the work by ferenczi , _ et .al . _ , we consider two cases : the first case is that we employ the unbalanced bb84 as it is and the second one is that we apply additional attenuations to alice and bob s devices in order to balance the intensities of the double pulses and to eliminate the effect of the unbalance in bob s measurement .we call the second case as the bb84 with the hardware fix or the hardware fix scenario , and this case is essentially the same as standard bb84 with additional losses . by simulating the key generation rates for the two cases , we have obtained almost the same threshold distance of the key generation in and confirmed that the key rate of the bb84 with the hardware fix is lower than that of the unbalanced bb84 .the organization of this paper is as follows , in sec .[ protocol ] , we briefly review the protocol of the unbalanced bb84 . 
in sec .[ security proof ] , we first briefly review the security proof of the balanced bb84 and prove the unconditional security of the unbalanced bb84 . in sec .[ simulation ] , by assuming the use of the decoy state , we simulate the key generation rates of the unbalanced bb84 and the bb84 with the hardware fix .the key rates are plotted as a function of the distance between alice and bob by taking experimental data from gys experiment .finally , we summarize this paper in sec .[ summary ] .in this section , we introduce the unbalanced bb84 .the experimental setup of the unbalanced bb84 is depicted in fig .[ composition ] .the unbalanced bb84 is the standard phase encoded bb84 protocol using phase randomized weak coherent pulses where the intensities of the double pulses are not the same because of the imperfections of the phase modulators and/or the beam splitters . for the simplicity of the discussion , we assume that the phase modulators possessed by alice and bob have the same transmission rate and we do not explicitly consider the unbalance of the beam splitters , i.e. , all the relevant beam splitters are assumed to have 50% of transmission rate , but the generalization of our analysis is trivial . in alices side , a phase randomized coherent pulse from her laser source splits into two arms a1 and a2 by a balanced beam splitter bs1 .alice randomly applies phase modulation to the pulse passing a1 by the imperfect phase modulator pma . here, the phase is defined as the bit value in basis ( for the later convenience , we use to refer the basis ) . the pair of the pluses come from a1 and a2 is sent to bob via another balanced beam splitter bs2 .because of the imperfection of pma , the state of the pulses sent by alice becomes , where subscripts s and r respectively denote the signal pulse passed through a1 and reference pulse passed through a2 , and is the random phase chosen between 0 and . .note that our proof is valid even if the unbalances in alice and bob s sides are different and unknown . ] in bob s side , the optical length difference between b1 and b2 is adjusted to the same as the one between a1 and a2 .thanks to his interferometer , the pair of the incoming pulses is finally separated into three pulses which can be distinguished by the detection time of the detectors , and we only consider the pulses arriving at the intermediate time .bob applies the phase modulation randomly chosen from to the pulse passing through b1 by the imperfect phase modulator pmb . here , the phase modulation is defined as basis in bob s measurement . as the result of bob s phase modulation ,the state of the pulses arriving at the beam splitter bs4 becomes , and state at the detector d0 ( d1 ) becomes .bob records the bit value 0 ( 1 ) when d0 ( d1 ) clicks , and bob randomly assigns a random bit to the double click event if the double click event occurs due to noises such as the dark counting of the detectors or misalignment . after bob s measurement, bob broadcasts his basis , and alice and bob keep the data with the bases matched .one can see that bob can obtain the same bit value with alice when there are no noises .we note that we make the following assumption on bob s detection device : the povm element corresponding to the failure detection of the bit value in basis is basis - independent , i.e. , therefore , we consider bob s basis measurement is constructed by a basis - independent filter followed by bob s two - outcome , i.e. 
, bob s bit value , basis measurement .we note that the squash model is not necessary in our proof .we remark that since each of the signal and reference pulses of our interest passes through the phase modulator only once , we do not need to equalize the intensities of the pulses in order to suppress the bit errors .however , just for the comparison , we consider the case where we fix the unbalance by implementing the beam splitters with the transmission rate of to the paths of a2 and b2 ( see also fig .[ fix ] ) .we call this scenario as the bb84 with hardware fix or the hardware fix scenario , and we can regard this case as the ideal bb84 with the additional attenuation. we will compare the key generation rates of both of the cases by the key generation simulations in sec .[ simulation ] , and we confirm that the unbalanced bb84 protocol has larger key generation rate than the bb84 with the hardware fix . preceded by the lossless phase modulator ( pm ) . ( b )is equivalent to ( c ) . ]in this section , we prove the unconditional security of the unbalanced bb84 protocol based on complementarity scenario .we consider only the single - photon part ( ) since we may not be able to or may have very little chance to generate the key from multi - photon part due to the so - called photon - number splitting ( pns ) attacks . in order to treat the photon number separately , we apply the argument by gllp or by koashi .for the simplicity of the discussion , we assume that the key is generated from the states of basis , and therefore the states of basis are only spent in the parameter estimation , i.e. , the so - called phase error estimation . before the proof, we would like to discuss alice s source .thanks to the phase randomization , the density matrix of the pulses sent by alice becomes the mixture of the eigenstates of the photon number ( we rewrite the eigenstates as ) as follows where and are the basis and bit value alice chooses for sending the pulses ( and in right - hand side corresponds to in the left - hand side ) , is defined as , and the subscript represents the system to be sent to bob . the single - photon part ( ) of described as and , where we define and . for the later convenience ,we define the relationships of the eigenstates of qubit states as follows : and where . in the following two paragraphs , we briefly review the essential point of the security proof of the _ balanced _ bb84 . in gllpargument or the argument by koashi in , we ask whether the privacy amplification succeeds if it is applied only to the qubits associated with the single - photon emission , and it is shown that we can generate the secret key with asymptotic key generation rate as follow .\,.\end{aligned}\ ] ] here , is basis fictitious bit error rate that would have been obtained if alice had sent a single - photon in basis and alice and bob had employed basis for the measurement , is the bit error rate from the basis measurements when alice sends a pulse in basis , is the rate of bob s detection in basis ( is the part of where alice sends a single photon ) , is the inefficiency of the error correcting code , and .note that what the actual experiment gives alice and bob is the bit error rate in basis when bob chooses basis and alice sends a pulse in basis rather than a single - photon in basis . 
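the asymptotic key rate formula itself did not survive in this copy; before continuing with how the single-photon contribution is estimated, here is a minimal numerical sketch of the standard gllp / koashi-type expression that the text describes (privacy amplification credited only to the single-photon detections against the phase error rate, error correction paid on all detections). the functional form is an assumption recovered from the surrounding definitions, not a quotation of the stripped equation.

```python
from math import log2

def h2(x):
    """binary entropy"""
    return 0.0 if x <= 0.0 or x >= 1.0 else -x * log2(x) - (1 - x) * log2(1 - x)

def key_rate(gamma, gamma1, e_ph, e_bit, f=1.22):
    """Assumed GLLP-type asymptotic rate per pulse: gamma1 is the single-photon
    detection rate with phase error rate e_ph; error correction with inefficiency f
    is paid on all sifted detections gamma at the measured bit error rate e_bit."""
    return max(gamma1 * (1.0 - h2(e_ph)) - gamma * f * h2(e_bit), 0.0)

print(key_rate(gamma=1e-3, gamma1=8e-4, e_ph=0.03, e_bit=0.02))
```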
by using gllp argument or combining gllp argument with the decoy state idea , we can estimate the lower bound of , which is the contribution from alice s single - photon emission in , but this quantity is different from . in the _ balanced _bb84 , it turns out that these quantities match ( ) .the most important point to derive in the balanced bb84 is the basis independence of the state alice prepares and of bob s detection eq .thanks to this independence , eve can not behave differently between and basis so that we have . inwhat follows , we prove that is also the case for the unbalanced bb84 , which means that we can perform the data processing in the unbalanced bb84 as if there were no unbalance . for the security proof ,we first consider a virtual protocol where we change the method to determine alice s bit value . in the virtual protocol, she firstly generates with the probability where the subscripts and respectively denote alice s and bob s system , which is mathematically expressed by hilbert space . after generating the state, she conducts basis measurement on her qubit , and she records the measurement outcome as her bit value in basis .finally , she sends the system to bob .we can confirm the state sent to bob is equivalent to the actual protocol . in the virtual protocol, it does not matter if alice delays her measurement after sending the state , and hence we assume this delay hereafter . from the definition of and , one can easily confirm that , i.e. , the state of the single - photon in the virtual protocol is basis - independent . by combining this independence with the basis - independence of bob s measurement , we have .this ends the proof . as an alternative proof( also see fig .[ virtual ] ) , we furthermore modify the single - photon part of the virtual protocol that is equivalent to the single photon part of the unbalanced bb84 protocol .let be kraus operator corresponding to the successful event of her filtering operation . in the modified protocol, alice first prepares a basis - independent joint state , performs the filtering operation , and then she keeps only the successfully filtered state .we can confirm that the filtered state satisfies , which means that the post - selected state is also basis - independent .this ends the alternative proof . in summary, we can prove the security of the unbalanced bb84 protocol by directly confirming the basis - independence together with the basis - independence of bob s measurement .alternatively , we consider the virtual protocol for the single - photon part , which is constructed by the preparation of the bell state followed by each side of basis - independent filtering operation with and , the post - selection , and each side of the two outcome measurements of basis .the two outcome measurements consist of in bob s side and projective basis qubit measurement in alice s side ( depicted in fig . [ virtual ] ) .this virtual protocol is composed only by basis - independent operations so that we have .basis which consists of ( ) following the post - selection by each side of the filters .note that the filtering operations and are basis - independent , and as a result we can safely apply the security proof of the standard bb84 to the filtered state . 
] from the above discussion , we conclude that the key generation rate of the unbalanced bb84 protocol is written as \,. this formula is completely equivalent to the rate of the balanced bb84 . therefore , we can perform the data processing in the unbalanced bb84 as if there were no unbalance . the unbalance affects only the realization of the experimental data as we will see in the next section , and it never affects the key formula itself . we remark that we have used only the basis independence of the unbalance in our security proof . thus , it follows that as long as the unbalances are basis - independent , our conclusion holds even if the unbalance of the sending pulses and that of the measurement are unknown and fluctuate in time . it also follows that if we use the so - called squash operators for bob s measurement satisfying eq . ( [ basis independent failure ] ) , which is shown to exist in , then one can draw the same conclusion even when one uses any proof technique of the balanced bb84 , including shor - preskill type security proof , where eve is regarded as the sender of a basis - independent quantum state to alice and bob . in this section , we simulate the key generation rate by using the typical experimental parameters taken from the gobby - yuan - shields ( gys ) experiments . in the simulation , we assume the use of infinite number of decoy states to obtain and . we also assume that the bit error only stems from the dark counting of the detectors , and therefore , we ignore the probability of the error stemming from the misalignment and other imperfections of the devices . in order to ensure the basis - independent detection ( eq . ( [ basis independent failure ] ) ) , we assume that the quantum efficiencies of the two detectors are the same and the inefficiency of the detector can be modeled by a beam splitter preceded by a detector with unit quantum efficiency . moreover , we assign a random bit value to the double click event . with all the assumptions , we may have the following experimental data :
\begin{aligned}
\gamma_x &= [1-(1-p_{d})e^{-\kappa \beta}](1-p_{d}) + (1-p_{d})e^{-\kappa \beta}p_{d} + [1-(1-p_{d})e^{-\kappa \beta}]p_{d}\\
e_x &= \{(1-p_{d})e^{-\kappa \beta}p_{d}+\tfrac{1}{2}[1-(1-p_{d})e^{-\kappa \beta}]p_{d}\}/\gamma_x\\
p^{(1)} &= e^{-(1+\kappa)\alpha}\alpha(1+\kappa)\\
\gamma_x^{(1)} &= \{[1-(1-p_{d})(1-\eta \tfrac{\kappa}{1+\kappa})](1-p_{d}) + (1-p_{d})(1-\eta \tfrac{\kappa}{1+\kappa})p_{d} + [1-(1-p_{d})(1-\eta \tfrac{\kappa}{1+\kappa})]p_{d}\}p^{(1)}\\
e_y^{(1)} &= \{(1-p_{d})(1-\eta \tfrac{\kappa}{1+\kappa})p_{d}+\tfrac{1}{2}[1-(1-p_{d})(1-\eta \tfrac{\kappa}{1+\kappa})]p_{d}\}p^{(1)}/\gamma_x^{(1)}
\end{aligned}
here , is the dark count rate of each of the detector d0 and d1 , is the overall transmission rate , is the quantum efficiency of the detectors d1 and d2 , is the channel transmission rate , and is the distance between alice and bob . we also simulate the case of the hardware fix scenario .
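a quick numerical sketch of the unbalanced-case expressions above is given below (the hardware-fix variants described next follow the same pattern). since the symbol definitions were stripped in this copy, the identification of the parameters -- dark count probability p_d, modulator transmittance kappa, pulse intensity alpha, detector efficiency eta, and beta as the mean photon number reaching bob after channel loss -- is an assumption, and the gys-like numbers are the commonly quoted ones rather than values read from this paper.

```python
import numpy as np

def rates_unbalanced(p_d, kappa, alpha, eta, beta):
    """Evaluate the detection rate, bit error rate, single-photon emission probability,
    single-photon detection rate and single-photon error rate written above."""
    s = np.exp(-kappa * beta)                                  # probability of no signal-induced click
    gamma_x = (1 - (1 - p_d) * s) * (1 - p_d) + (1 - p_d) * s * p_d + (1 - (1 - p_d) * s) * p_d
    e_x = ((1 - p_d) * s * p_d + 0.5 * (1 - (1 - p_d) * s) * p_d) / gamma_x
    p1 = np.exp(-(1 + kappa) * alpha) * alpha * (1 + kappa)
    u = (1 - p_d) * (1 - eta * kappa / (1 + kappa))
    gamma_x1 = ((1 - u) * (1 - p_d) + u * p_d + (1 - u) * p_d) * p1
    e_y1 = (u * p_d + 0.5 * (1 - u) * p_d) * p1 / gamma_x1
    return gamma_x, e_x, gamma_x1, e_y1

def h2(x):
    return 0.0 if x <= 0 or x >= 1 else -x * np.log2(x) - (1 - x) * np.log2(1 - x)

p_d, kappa, eta_det, loss_db_km, f = 8.5e-7, 0.5, 0.045, 0.21, 1.22   # assumed GYS-like values
for L in (0, 20, 40):                                                 # distance in km
    eta = eta_det * 10 ** (-loss_db_km * L / 10)
    alpha = 0.5                                                       # pulse intensity (not optimized here)
    beta = eta * alpha                                                # assumed: mean photons arriving at Bob
    g, e, g1, e1 = rates_unbalanced(p_d, kappa, alpha, eta, beta)
    R = max(g1 * (1 - h2(e1)) - g * f * h2(e), 0.0)                   # GLLP-type rate (assumed form)
    print(L, g, round(e, 5), R)
```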
in this case , some values of the parameters change as follows :
\begin{aligned}
\gamma_x &= [1-(1-p_{d})e^{-\kappa^2 \beta}](1-p_{d}) + (1-p_{d})e^{-\kappa^2 \beta}p_{d} + [1-(1-p_{d})e^{-\kappa^2 \beta}]p_{d}\\
e_x &= \{(1-p_{d})e^{-\kappa^2 \beta}p_{d}+\tfrac{1}{2}[1-(1-p_{d})e^{-\kappa^2 \beta}]p_{d}\}/\gamma_x\\
p^{(1)} &= e^{-2\kappa \alpha}2\kappa \alpha\\
\gamma_x^{(1)} &= \{[1-(1-p_{d})(1-\tfrac{\eta \kappa}{2})](1-p_{d}) + (1-p_{d})(1-\tfrac{\eta \kappa}{2})p_{d} + [1-(1-p_{d})(1-\tfrac{\eta \kappa}{2})]p_{d}\}p^{(1)}\\
e_y^{(1)} &= \{(1-p_{d})(1-\tfrac{\eta \kappa}{2})p_{d}+\tfrac{1}{2}[1-(1-p_{d})(1-\tfrac{\eta \kappa}{2})]p_{d}\}p^{(1)}/\gamma_x^{(1)}
\end{aligned}
we take the following parameters from the gys experiments : , , [db / km ] , and . the result of the simulation is shown in fig . [ kgrgys ] where we have optimized the intensity of the pulse ( ) , and the optimum intensity is depicted in fig . we obtain almost the same transmission distances as those in . furthermore , we can confirm that the hardware fix scenario causes the decrease in the key generation rate ( the same tendency is also obtained by the work by ferenczi , _ et . al . _ ) . this decrease is due to the additional loss in the hardware fix scenario . [ fig . [ kgrgys ] caption ( fragment ) : the thin line is the unbalanced bb84 ( ) , and dashed line is the bb84 with the hardware fix ( ) . ] [ figure caption ( fragment ) : the optimal intensity that results in each of the optimal key generation rate in fig . [ kgrgys ] . ] in summary , we have proved the unconditional security of the unbalanced bb84 protocol . for the security proof , we have considered the virtual protocol that is equivalent to the unbalanced bb84 protocol . in the proof , we have confirmed that the single photon part of the virtual protocol is basis - independent or this part is constructed by the preparation of the bell state followed by each side of the basis - independent filtering operations , and each side of the two outcome measurements . thanks to the basis - independence in the virtual protocol , we have concluded that we can apply the method of security proofs of the balanced bb84 . therefore , we can conduct the data processing for the key distillation as if there were no unbalance and the unbalance has influence only through the realization of the experimental data . we note that a natural consequence of our security proof is that as long as the unbalances are basis - independent , our conclusion holds even if the unbalance of the sending pulses and that of the measurement are unknown and fluctuate in time . finally , by the simulation , we have also compared the key generation rates of the unbalanced bb84 and the bb84 with the hardware fix , and confirmed that the hardware fix scenario causes the decrease in the key generation rate and the transmission distance . we thank koji azuma , go kato , william j. munro , norbert lütkenhaus , and agnes ferenczi for valuable discussions and comments . this research is in part supported by the project `` secure photonic network technology '' as part of `` the project uqcc '' by the national institute of information and communications technology ( nict ) of japan , in part by the japan society for the promotion of science ( jsps ) through its funding program for world - leading innovative r&d on science and technology ( first program ) . c. h. bennett and g. brassard , _ proceedings of ieee international conference on computers , systems and signal processing _ , 175 ( 1984 ) . d. mayers , j. acm * 48 * ( 3 ) , pp . 351 - 406 ( 2001 ) . h .- k . lo and h. f. chau , science * 283 * , 2050 ( 1999 ) . p. w. shor and j. preskill , phys . rev . lett . * 85 * , pp . 441 - 444 ( 2000 ) . m.
koashi , new j. phys .* 11 * no 4 ( april 2009 ) 045018 .k. tamaki , m. koashi , and n. imoto , phys .lett . * 90 * , 167904 ( 2003 ) h. inamori , n. ltkenhaus and d. mayers , arxiv : quant - ph/0107017 ( 2001 ) .m. koashi , and j. preskill , phys.rev.lett .* 90 * , 057902 , ( 2003 ) .f. fung , k. tamaki , b. qi , h - k .lo , x. ma , quantum information and computation , vol .0131 - 0165 ( 2009 ) .d. gottesman , h .- k .lo , n. ltkenhaus , and j. preskill , quantum information and computation * 5 * , 325 ( 2004 ) .a. ferenczi , v. narasimhachar , n. ltkenhaus , phys .a * 86 * 042327 ( 2012 ) .i. devetak and a. winter , proc . of the roy .soc . of londonseries a , * 461 * ( 2053):207235 ( 2005 ) .b. kraus , n. gisin , and r. renner , phys .lett , * 95 * 080501 ( 2005 ) .hwang , phys .. lett . * 91 * , 057901 ( 2003 ) .lo , x. ma , k. chen , phys .lett . * 94 * , 230504 ( 2005 ) .wang , phys .lett . * 94 * , 230503 ( 2005 ) . c. gobby , z. l. yuan , and a. j. shields , appl. phys.lett . * 84 * , 3762 ( 2004 ) .t. tsurumaru and k. tamaki .a , * 78*:032302 , ( 2008 ) .n. j. beaudry , t. moroder , and n. ltkenhaus .lett . * 101*:093601 , ( 2008 ) .b. huttner , n. imoto , n. gisin , and t. mor , phys .rev . a * 51 * , 1863 ( 1995 ) .m. koashi , arxiv : quant - ph/0609180 ( 2006 ) . for the simplicity, we neglect the contribution of the vacuum emission part to the key generation and the noisy processing to increase the rate .since the vacuum emission part is trivially basis - independent , direct application of our proof shows the positive contribution in the unbalanced bb84 .lo , quantum information and computation vol * 5 * , no .4&5 413 - 418 ( 2005 ) .r. renner , n. gisin , and b. kraus , phys .a * 72 * , 012332 ( 2005 ) .b. kraus , n. gisin , and r. renner , phys .lett . * 95 * , 080501 ( 2005 ) . j. m. renes , and g. smith . phys .* 98 * , 020502 ( 2007 ) .k. tamaki and g. kato , phys .a 81 , 022316 ( 2010 ) .
|
for the realization of quantum key distribution , it is important to investigate its security based on a mathematical model that captures properties of the actual devices used by the legitimate users . recently , ferenczi , _ et . al . _ ( phys . rev . a * 86 * 042327 ( 2012 ) ) pointed out potential influences that the losses in phase modulators and/or the unbalance in the transmission rate of beam splitters may have on the security of the phase - encoded bb84 and analyzed the security of this scheme , which is called the unbalanced bb84 . in this paper , we ask whether blindly applying the post - processing of the balanced bb84 to the unbalanced bb84 would lead to an insecure key or not , and we conclude that we can safely distill a secure key even with this post - processing . it follows from our proof that as long as the unbalances are basis - independent , our conclusion holds even if the unbalances are unknown and fluctuate in time .
|
fourier - based algorithms , or `` fft '' methods for short , are an efficient approach for computing the mechanical response of composites . initially restricted to linear - elastic media ,fft tools are nowadays employed to treat more involved problems , ranging from viscoplasticity to crack propagation . in fft methods ,the microstructure is defined by 2d or 3d images and the local stress and strain tensors are computed along each pixel or `` voxel '' in the image . coupled with automatic or semi - automatic image segmentation techniques , this allows for the computation of the mechanical response of a material directly from experimental acquisitions , like focused ion beam or 3d microtomography techniques .the latter often deliver images containing billions of voxels , for which fft methods are efficient .this allows one to take into account representative volume elements of materials which are multiscale by nature such as concrete or mortar . from a practical viewpoint ,the simplicity of fft methods is attractive to researchers and engineers who need not be experts in the underlying numerical methods to use them .nowadays , fft tools are available not only as academic or free softwares but also as commercial ones . in the past years, progress has been made in the understanding of fft algorithms .vondejc and co - workers have recently shown that the original method of moulinec and suquet corresponds , under one technical assumption , to a particular choice of approximation space and optimization method ( see also ) .this property allows one to derive other fft schemes that use standard optimization algorithms , such as the conjugate gradient method . in this regard , making use of variational formulations , efficient numerical methods have been proposed that combine ffts with an underlying gradient descent algorithm .different approximation spaces or discretization methods have also been proposed , where , contrary to the original scheme , the fields are not trigonometric polynomials anymore .brisard and dormieux introduced `` energy - based '' fft schemes that rely on galerkin approximations of hashin and shtrikman s variational principle and derived modified green operators consistent with the new formulation .they obtained improved convergence properties and local fields devoid of the spurious oscillations observed in the original fft scheme . in the context of conductivity , accurate local fields and improved convergence rateshave also been obtained from modified green operators based on finite - differences .these results follow earlier works where continuum mechanics are expressed by centered or `` forward and backward '' finite differences .this work focuses on the effect of discretization in fft methods .it is organized as follows .we first recall the equations of elasticity in the continuum ( sec .[ sec : problem ] ) .we give the lippmann - schwinger equations and the `` direct '' and `` accelerated '' fft schemes in sec .( [ sec : greencont ] ) . in sec .( [ sec : disc ] ) , a general formulation of the green operator is derived that incorporates methods in , and a new discretization scheme is proposed .the accuracy of the local stress and strain fields are examined in sec .( [ sec : accuracy ] ) whereas the convergence rates of the various fft methods are investigated in sec .( [ sec : rate ] ) . 
we conclude in sec .( [ sec : conclusion ] ) .we are concerned with solving the equations of linear elasticity in a square or cubic domain ^d ] around the corner of the inclusion ( fig .[ fig : accuracy2d ] ) . at low resolution ,numerical methods predict values as large as in a few pixels , because of the singularity of the stress field at the corner . to highlight the field patterns , we threshold out the values above , which amount to % of the pixels . using the same color scale for all images , the smallest stress value , equal to , is shown in dark blue whereas the highest , equal to , is in dark red .green , yellow and orange lie in - between .as expected , in the limit of very fine resolution , all methods tend to the same local stress field , as shown by the similar field maps obtained at resolution .however , use of the green operator leads to spurious oscillations along the interfaces of the inclusion , up to resolutions as big as pixels , a side - effect noticed in in conductivity .the oscillations do not disappear after computing local averages of the fields ( not shown ) .strong oscillations are produced by schemes using as well , not only in the quasi - rigid inclusion , but also in the matrix .we observe checkerboard patterns in the former , and vertical and horizontal alignments in the latter , at resolution .these oscillations are greatly reduced by the use of .still , due to the non - symmetric nature of , the stress is not correctly estimated along a line of width pixel oriented upward from the inclusion corner .similar patterns are observed , in other directions , along the three other corners of the inclusion ( not shown ) .these issues are solved when using which produces a stress field that respects the symmetries of the problem .furthermore , use of greatly reduces oscillations compared to and .elementary periodic domain ^ 2 $ ] containing a square inclusion with elastic moduli , ( top - left , shown in white ) embedded in a matrix ( shown in gray ) with elastic moduli , . ,width=151 ] [ cols="<,^,^,^,^ " , ] contrary to the previous sections , we now consider a microstructure without singularities ( edges or corners ) and focus on the effect of the green operator discretization on the effective elastic properties . in the rest of this section ,the elementary domain contains one spherical inclusion of volume fraction % , so that the material is a periodic array of spheres .the spheres are very soft with contrast of properties .we compute the effective elastic modulus produced by either or at increasing resolutions , , , and .again , we use the accelerated scheme and iterations are stopped when the stress field maximum variation over two iterations in any pixel is less than .results are shown in fig .( [ fig : eff ] ) and are compared with the analytical estimate in .when the resolution increases , the effective elastic modulus increases up to a limit value that we estimate to about , for all schemes .as observed in other studies , very large systems are needed to compute this estimate at a high precision .this is especially true of the green operator which has the slowest convergence with respect to the system size . 
at fixed resolution ,the error on the predictions given by is about times larger than the one provided by , which , among all methods , gives the best estimate .the operators and stand in - between .this is another indication of the benefits of the operator .apparent elastic modulus estimated by fft methods using the green operators and ( black and red ) , at increasing resolution .orange : estimate in ; violet : estimate of the asymptotic effective modulus using fft data ., width=340 ]in this section , we estimate the rates of convergence of the direct and accelerated schemes ds , ds , as and as , that use the various green operators .all schemes enforce stress equilibrium at convergence only , therefore we follow and consider a criterion based on the -norm : where is the precision and the normalizing factor is the frobenius norm : in ( [ eq : criterion ] ) we set for the schemes using and when using the green operator , so that is the divergence of the stress field in the fourier domain , estimated according to the various discretization schemes .we now estimate the convergence rates on a random microstructure . in the following ,the domain is a ( periodized ) boolean model of spheres of resolution and volume fraction % , below the percolation threshold of the spheres of about % . to obtain meaningful comparisons, we use the same randomly - generated microstructure for all schemes .this particular configuration contains spheres of diameter voxels , about times smaller than the size of .taking for the reference poisson ratio , we compute numerically the number of iterations required to reach the precision , for varying reference young moduli , in the range .we consider the boolean model of spheres with contrast and the various accelerated schemes as and as ( fig .[ fig : convref0 ] ) . within the range , the number of iterations is about the same for all accelerated schemes .when , however , is a strongly increasing function of for scheme as , contrary to the other schemes as . for the latter, decreases with up to a local minimum , beyond which variations are much less sensitive to .one unique local minimum around is found for scheme as , whereas the schemes as and as exhibit two local minima .the effect of the poisson ratio is also investigated numerically .we let and for various values of with and observe a strong increase of the number of iterations , for the schemes as and as .the same behavior is observed for the direct scheme ds and ds with .therefore , in the following , we fix the poisson ratio to for the reference tensor , for all schemes and all contrast of properties .this leaves one parameter , , to optimize on .we use the gradient descent method to determine a local minimum of for arbitrary contrast and scheme ds , ds , as and as . as above, is the number of iterations necessary to reach .we choose for schemes ds , ds and for schemes as and as as initial guess for , suggested by ( [ eq : ref ] ) and ( [ eq : refacc ] ) . at each step , we determine if is to be increased or decreased , by comparing with where .it frequently happens that . in that case, we compare the values of the precision after iterations and follow the direction that minimizes .iterations are stopped whenever is unchanged after two descent steps .the gradient descent method determines a local minimum rather than the global minimum , which is sub - optimal . 
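the one - parameter descent over the reference young modulus described above can be sketched as follows ; the callable iterations_to_converge , the initial guess and the relative step are assumptions standing in for an actual run of the chosen fft scheme .

```python
def optimize_reference(iterations_to_converge, e0_init, eta=0.05, max_steps=50):
    """One-parameter descent over the reference Young modulus E0.

    `iterations_to_converge(E0)` is assumed to run the chosen FFT scheme with
    reference modulus E0 and return (number of iterations needed to reach the
    target precision, residual attained after a fixed iteration budget).
    The initial guess and the relative step `eta` are illustrative."""
    e0 = e0_init
    current = iterations_to_converge(e0)
    unchanged = 0
    for _ in range(max_steps):
        candidates = [e0 * (1.0 - eta), e0 * (1.0 + eta)]
        results = [iterations_to_converge(c) for c in candidates]
        best = min(range(2), key=lambda i: results[i])   # ties broken by the residual
        if results[best] < current:
            e0, current = candidates[best], results[best]
            unchanged = 0
        else:
            unchanged += 1
        if unchanged >= 2:     # stop when E0 is unchanged after two descent steps
            break
    return e0, current
```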
to check the validity of the results ,further numerical investigations are carried out for , and schemes ds and ds .the method predicts the global minimum in these cases .this also holds for schemes as and as with , , but not for scheme with .however , in this case the number of iterations is very similar at the two local minima , as shown in fig .( [ fig : convref0 ] ) . in the following ,the results given by the gradient descent method are used as - is .number of iterations required to achieve convergence , as a function of the reference young modulus , for the accelerated schemes as and as using various green operators .convergence is achieved when the precision is reached .the microstructure is a boolean model of quasi - porous spheres with ., width=340 ] results for the optimal reference are indicated in fig .( [ fig : convref ] ) . for the direct scheme, the optimal reference follows ( [ eq : ref ] ) with , independently of the green operator used .values of smaller than lead to non - converging schemes . for the accelerated schemes , the situation is less simple , and differs depending on the green operator in use . for the scheme as with green operator ,the choice ( [ eq : refacc ] ) is optimal except in the region where the value of tends to a small constant of about .similar behavior is found for the schemes as for which : with for and for .similar behavior has been observed numerically in , in the context of conductivity . for the scheme that uses , the optimal choice for follows the same pattern as above with except in the region .this behavior is an effect of the presence of two local minima , similar to that shown in fig .( [ fig : convref0 ] ) for .convergence rates , computed with optimized reference , are represented in fig .( [ fig : convrate ] ) as a function of the contrast , in log - log scale .results for ( strictly porous media ) have been included in the same graph ( left point ) . as is well - established , the number of iterations in the direct scheme ds scales as when and when .for the accelerated scheme as , the number of iterations is smaller and follows when and when , with one exception . at very high contrast of properties , including at , convergence is reached after a finite number of iterations , about .this particular behavior is presumably sensitive to the value chosen for the requested precision .when , the schemes ds and as , that use , converge after a number of iterations not exceeding .as shown in fig .( [ fig : convrate ] ) , the number of iterations is nearly constant in the range .as expected , the accelerated schemes as are faster than the direct schemes ds , with scheme as proving the fastest . with this scheme and when , the number of iterations is at most .again , these results are qualitatively similar to those given in in the context of conductivity. for rigidly - reinforced media ( ) , the number of iterations of schemes ds and as follows the same power - law behaviors , with respect to , as that of ds or as . in all considered schemes, the number of iterations continuously increases with the contrast .differences are observed between the various accelerated schemes as and as , with as the fastest . the use of green operators associated with the problem for the strain field , as undertaken here , results in convergence properties that are worse in the region than when . in this respect , benefits are to be expected from the use of dual green operators , associated with the problem for the stress fields.
optimal reference young modulus as a function of the contrast of properties , for the various fft methods , in log - log scale .direct schemes : black and red ( nearly superimposed on one another ) ; accelerated schemes : blue and green . results for a porous material ( ) are indicated at the left of the graph . the material is a boolean model of spheres with volume fraction % . , width=340 ] number of iterations as a function of the contrast of properties , for the various fft methods , in log - log scale .direct schemes : black and red ; accelerated schemes : blue and green .results for a porous material ( ) are indicated at the left of the graph .the material is a boolean model of spheres with volume fraction % ., width=340 ] in this section , we focus on the accelerated schemes as and as and examine the rate of convergence of the various schemes with respect to the effective elastic moduli .we consider the same boolean microstructure as given in sec .( [ sec : rate ] ) but discretized on higher resolution grids of and voxels .the volume fractions of the spheres are respectively % and % . for simplicity , the contrast of properties takes on two values and , so that the spheres are quasi - porous or quasi - rigid .we perform iterations of the schemes as and as using the optimized reference moduli found in the previous section , on the lower resolution grid .we apply the macroscopic strain loading : at each iteration and for each scheme , we compute the elastic modulus , derived from the mean of the stress component .the convergence rate toward the elastic modulus is represented in fig .( [ fig : itporous ] ) for quasi - porous spheres with and in fig .( [ fig : itrigid ] ) for quasi - rigid spheres with . in fig .( [ fig : itporous ] ) , for the sake of clarity , the elastic moduli are represented by symbols once every iterations , except for the first five iterations of the scheme as which are all represented .dotted lines are guides to the eye . in the porous case , much better convergence is obtained with scheme as than with schemes as and as , as shown in fig .( [ fig : itporous ] ) .the estimate for predicted by as and as presents strong oscillations that are much reduced with as .after about iterations , the estimate given by as is valid to a relative precision of . to achieve the same precision , more than iterations are needed for schemes as and as .the situation is notably different for quasi - rigid spheres ( fig . [fig : itrigid ] ) . for all schemes ,a much higher number of iterations is required to determine the elastic modulus with a precision of .the slower convergence rate follows that observed in sec .( [ sec : rate ] ) , where convergence is much poorer for than for , and where as is less advantageous compared to the other schemes . nevertheless , in this case also , as shown in fig .( [ fig : itrigid ] ) , smaller oscillations are observed in the estimate for when using as rather than schemes as or as .estimate of the elastic modulus as a function of the number of iterations performed , for a 3d boolean model of quasi - porous spheres .black symbols : accelerated schemes as , as , as ; red : scheme as ( orange : hashin and shtrikman upper bound ) .
, width=340 ] estimate of the elastic modulus as a function of the number of iterations performed , for a 3d boolean model of quasi - rigid spheres .black lines : accelerated schemes as , as , as ; red : scheme as ., width=340 ]in this work , a novel discretization method has been proposed in 2d and 3d for use in fourier - based schemes .the core of the proposed scheme is a simple modification of the green operator in the fourier domain .the results obtained confirm those achieved in the context of conductivity . compared to schemes using trigonometric polynomials as approximation space , or to other finite - differences methods , superior convergence rates have been observed in terms of local stress equilibrium , but also in terms of effective properties .more importantly , the solution for the local fields predicted by the new discretization scheme is found to be more accurate than that of other methods , especially in the vicinity of interfaces .this property is important when applying fft methods to solve more complex problems like large strain deformation .the new method also provides better estimates for the effective elastic moduli .furthermore , its estimates do not depend on the reference medium , because the scheme is based on a finite - differences discretization of continuum mechanics .although not explored in this work , the modified green operator can be used with most other fft iterative solvers , like the `` augmented lagrangian '' or with fft algorithms that are less sensitive to the reference , leading to the same local fields .c. dunant , b. bary , a. b. giorla , c. péniguel , j. sanahuja , c. toulemonde , a. b. tran , f. willot , and j. yvonnet . a critical comparison of several numerical methods for computing effective properties of highly heterogeneous materials ., 58:1 - 12 , 2013 .j. escoda , f. willot , d. jeulin , j. sanahuja , and c. toulemonde .estimation of local stresses and elastic properties of a mortar sample by fft computation of fields on a 3d image ., 41(5):542 - 556 , 2011 .s. brisard and l. dormieux .combining galerkin approximation techniques with the principle of hashin and shtrikman to derive a new fft - based numerical method for the homogenization of composites ., 217(220):197 - 212 , 2012 .f. willot , b. abdallah , and y .- p . pellegrini .fourier - based schemes with modified green operator for computing the electrical response of heterogeneous media with accurate local fields . , 98(7):518 - 533 , 2014 .f. willot and y .- p . pellegrini .fast fourier transform computations and build - up of plastic deformation in 2d , elastic - perfectly plastic , pixelwise - disordered porous media . in _ d. jeulin , s. forest ( eds ) , `` continuum models and discrete systems cmds 11 '' _ , pages 443 - 449 , paris , 2008 .école des mines .n. lahellec , j .- c .michel , h. moulinec , and p. suquet .analysis of inhomogeneous materials at large strains using fast fourier transforms . in _ proc .iutam symposium on computational mechanics of solid materials at large strains _ , pages 247 - 258 .kluwer academic publishers , 2001 .
|
we modify the green operator involved in fourier - based computational schemes in elasticity , in 2d and 3d . the new operator is derived by expressing continuum mechanics in terms of centered differences on a rotated grid . use of the modified green operator leads , in all systems investigated , to more accurate strain and stress fields than using the discretizations proposed by moulinec and suquet ( 1994 ) or willot and pellegrini ( 2008 ) . moreover , we compared the convergence rates of the `` direct '' and `` accelerated '' fft schemes with the different discretizations . the discretization method proposed in this work allows for much faster fft schemes with respect to two criteria : stress equilibrium and effective elastic moduli . fft methods , homogenization , heterogeneous media , linear elasticity , computational mechanics , spectral methods
|
to prolong the operating lifetime of many battery - powered commercial and tactical wireless ( e.g. , sensor ) networks , energy - efficiency has emerged as a critical issue .energy - efficient resource allocation strategies were extensively pursued in , where the goal is to minimize the transmission energy expenditure subject to average rate or delay constraints .such an energy minimization is carried out over an infinite horizon and does not directly translate into quality - of - service ( qos ) guarantees over finite time intervals. for qos provisioning over finite time intervals , considered minimizing the transmission energy for bursty packet arrivals with a single strict deadline .it was shown that a so - called lazy scheduling is the most energy - efficient by properly selecting minimum transmission rates for arriving packets under the causality constraints . generalizing the lazy - scheduling ,a calculus approach was proposed to find the optimal data departure curve ( thus the optimal rate schedule ) for packet arrivals with individual delay constraints , by the trajectory of a string tied at its two ends and pulled taut between the data arrival and minimum departure curves .the approaches in only apply to packet transmissions over time - invariant channels . assuming a one - packet - per - slot arrival process and the same delay requirement for all packets , a recursive `` constrained flowright '' algorithm was developed to find the energy - efficient scheduling over time - varying fading channels in . for an arbitrary packet arrival process and delay constraints, an efficient algorithm was put forth to find the optimal rate control strategy over time - varying wireless channels with a low computational complexity .all the works assumed an ideal ( negligible ) circuit - power model .this holds for typical long - range transmissions .however , for short - range wireless ( sensor ) networks , non - ideal circuit power consumption due to signal processing ( filters , dsp , oscillators , converters , etc . )needs to be taken into account ; yet , there are few studies on the effects of the non - ideal circuit power on energy - efficient transmission policies for delay - limited data packets . in a different yet relevant context , investigated sum - throughput maximization for packet transmissions over time - invariant channels subject to the causality and battery - capacity constraints due to an energy harvesting ( arrival ) process .however , these algorithms are inapplicable to addressing the critical issue of optimizing the energy efficiency for transmissions of delay - sensitive packets in general situations where energy harvesting does not take place and batteries are the only source of energy . in this paper , we develop a novel unified approach to obtaining energy - efficient transmission schedules for bursty data packets with strict deadlines under the non - ideal circuit power consumption . assuming that full knowledge of channel states , packet arrivals and deadlines is available a priori , we consider the optimal ( offline ) policies that minimize the total energy consumption .
through a judicious convex formulation and the resultant optimality conditions, we reveal the structure of the optimal schedule .specifically , we show that the optimal transmission between any two consecutive data or channel state changing instants ( referred to as an epoch ) can only take one of the three ( `` off '' , `` on - off '' , `` on '' ) strategies : ( i ) no transmission , ( ii ) transmission with the energy - efficiency ( ee ) maximizing rate over a portion of the epoch , ( iii ) transmission with a rate over the whole epoch . based on this structure, we propose an efficient `` clipped string - tautening '' algorithm to find the optimal transmission policy with a low computational complexity for a time - invariant channel .interestingly , it is shown that the calculus approach in can be modified to find the optimal policy ; namely , the optimal data departure for the general non - ideal circuit - power case can be obtained by simply adjusting the ideal - case data departure in accordance to the ee - maximizing rate value .the proposed approach is then generalized to time - varying channels . in this case, it is shown that the optimal transmit - power allocation admits a multi - level water - filling form , where the water - levels can be obtained by a `` clipped water - tautening '' procedure .our approach provides the optimal benchmarks for practical schemes designed for transmissions of delay - limited data arrivals over time - invariant and time - varying channels .it can be also employed to develop efficient online scheduling schemes which require only causal knowledge of channel states , data arrivals and deadline requirements .the rest of the paper is organized as follows .section ii describes the system models .section iii and iv present the proposed approaches to energy - efficient transmissions of delay - limited bursty data packets over time - invariant and time - varying channels , respectively .section v provides the numerical results to evaluate the proposed schemes , followed by a conclusion in section vi .consider a wireless link with complex - valued baseband equivalent channel coefficient .for simplicity , all nearby devices are supposed to use orthogonal channels so that interferences from other links are negligible .assume without loss of generality ( w.l.o.g . ) that the noise at the receiver is a circularly symmetric complex gaussian ( cscg ) random variable with zero mean and unit variance .given a transmit - rate , we adopt the well - known shannon - capacity formula as the minimum required transmit - power function : note that the shannon formula is only used for specificity .it has been shown that with many modulation and coding schemes , transmit - power is an increasing and strictly convex function of the transmission rate .our approach applies generally to any of these power functions . consider a wireless link with data packets transmitted from a transmitter to a receiver .we say that the data state changes when new data packets arrive or a data deadline is reached . 
as shown in fig .[ model ] , over the entire transmission interval ] ( we let for convenience ) .let , with and .let where .the total number of data packets arriving and transmitted over time interval ] denote the efficiency of the rf chain , and the circuit power consumed during the `` off '' mode .the total power consumed by a transmitter is then : in practical systems , is usually much smaller compared to and thus can be neglected for simplicity .hence , we can assume w.l.o.g .the circuit - power during the `` on '' and `` off '' modes to be watts and watt , respectively .we further assume w.l.o.g since is only a scaling constant .consider first a static channel with time - invariant channel coefficient . due to the non - ideal circuit power consumption, the transmission can be turned on for only a portion of an epoch and turned off afterwards to save energy .let collect the `` on '' periods with length in the epoch .given that the power function is convex , it was proved that the transmit - rate over the `` on '' period of each epoch should remain unchanged in the optimal policy .let collect such invariant transmit - rates over the `` on '' period of each epoch .for a bursty data arrival process modeled by ( , ) , the energy - efficient transmission schedule is to select an optimal set of such that the total energy consumed for delivery of the arriving data packets ahead of deadlines is minimized ; i.e. , we wish to solve : and can be transformed into a similar form , where the objective }} ] .hence , our results readily carry over to such cases by simply involving . ]}\\ \text{s.t . } & \text{(c1 ) : } \;\ ; \displaystyle\sum_{n=1}^{\alpha_i}{(r_n l_n^{on } ) \leq \sum_{k=0}^{i-1}{a_k } } , & i=1 , \ldots , a,\\ & \text{(c2 ) : } \;\ ; \displaystyle\sum_{n=1}^{\delta_j}{(r_n l_n^{on } ) \geq \sum_{k=1}^{j}{d_k } } , & j=1 , \ldots , d,\\ & { \text{(c3 ) : } \;\ ; \displaystyle r_n \geq 0 , \;\ ; 0 \leq l_n^{on } \leq l_n } , & n=1 , \ldots , n. \end{array}\ ] ] here , in addition to the trivial constraints ( c3 ) , ( c1 ) presents the causality constraints : the number of packets transmitted before the arrival time instant must not exceed the number of available packets in the transmit buffer .( c2 ) presents the deadline constraints : the number of packets transmitted before the deadline should be no less than the required number of packets . in the ideal circuit - power( ) case , it was shown that the transmitter is always on ( i.e. , ) in the optimal policy .the optimal transmission schedule then reduces to an optimal rate control problem . with as the only optimization variable ,is a convex program as long as is convex .however , in the general non - ideal circuit - power case , is also a variable to be optimized .since both and are neither concave nor convex in , the problem is non - convex .yet , we next show that it can be reformulated into a convex program through a change of variables . define .with , we rewrite as : }\\ \text{s.t . } & \displaystyle\sum_{n=1}^{\alpha_i}{\phi_n } \leq \sum_{k=0}^{i-1}{a_k } , & i=1 , \ldots , a,\\ & \displaystyle\sum_{n=1}^{\delta_j}{\phi_n } \geq \sum_{k=1}^{j}{d_k } , & j=1 , \ldots , d,\\ & \displaystyle \phi_n \geq 0 , \quad 0 \leq l_n^{on } \leq l_n , & n=1 , \ldots , n , \end{array}\ ] ] where we define if . 
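for concreteness , the change of variables just introduced can be written out with the shannon power function of the system model ; the explicit form below is a sketch ( unit - variance noise and channel gain written out explicitly ; the symbols r_n , l_n^{on } , phi_n and rho follow the lagrangian displayed further below ) :

```latex
% illustrative notation: r_n (rate), l_n^{on} (on-duration of epoch n),
% rho (circuit power), phi_n = r_n l_n^{on}; p(.) is the Shannon power
% function with unit-variance noise and channel gain |h|^2.
\phi_n \;=\; r_n\, l_n^{on}, \qquad
p(r) \;=\; \frac{2^{r}-1}{|h|^{2}}, \qquad
\big(p(r_n)+\rho\big)\, l_n^{on}
\;=\; l_n^{on}\Big[\, p\!\Big(\frac{\phi_n}{l_n^{on}}\Big)+\rho \Big]
\;=\; l_n^{on}\,\frac{2^{\phi_n/l_n^{on}}-1}{|h|^{2}} \;+\; \rho\, l_n^{on}.
```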
for any convex , called its perspective , which is a jointly convex function of .since the constraints are all linear , it then readily follows that is a convex problem .let where and denote the lagrange multipliers associated with the causality and deadline constraints , respectively .the lagrangian of is given by : } + \sum_{i=1}^{a}{\lambda_i(\sum_{n=1}^{\alpha_i}{\phi_n- \sum_{k=0}^{i-1}{a_k } } ) } -\sum_{j=1}^{d}{\mu_j(\sum_{n=1}^{\delta_j}{\phi_n}-\sum_{k=1}^{j}{d_k})}\\ \displaystyle = & \mathcal{c}(\boldsymbol{\lambda})+\sum_{n=1}^{n}{[(p(\frac{\phi_n}{l_n^{on}})+\rho)l_n^{on}-(\sum_{j = j_n}^{d}{\mu_j}-\sum_{i = i_n}^{a}{\lambda_i})\phi_n]}\\ \end{aligned}\ ] ] where we define , , and .let denote the optimal solution for and the optimal lagrange multiplier vector for its dual problem .upon defining , we can derive from the karush - kuhn - tucker ( kkt ) optimality conditions that : , }\\ \text{s.t . }\quad & \phi_n \geq 0 , \quad 0 \leq l_n^{on } \leq l_n.\\ \end{aligned}\ ] ] in addition , the non - negative lagrange multipliers and satisfy the complementary slackness conditions : let if , and take an arbitrary non - negative value when , .it is obvious that is the optimal solution to . from , the sufficient and necessary optimality conditions for are : ^{on}}\\ \text{s.t . }\quad & r_n \geq 0 , \;\ ; 0 \leq l_n^{on } \leq l_n;\\ \end{split } \right .\;\ ; \forall n.\ ] ] next , we develop an efficient algorithm to find the optimal satisfying . let denote the first derivative of . for any , we can derive from that }.\ ] ] as is strictly convex and increasing , this is equivalent to : . substituting it into implies : }.\ ] ] now we consider a bits - per - joule ee - maximizing rate : note that since is a ( convex - over - linear ) quasi - convex function , it has a unique minimizer , which can be efficiently obtained by a simple bisectional search .interestingly , we can rely on to show that the optimal schedule depends on the ee - maximizing rate : _ the optimal transmission policy for ( [ eq3 ] ) can only adopt one of the following three ( `` off '' , `` on - off '' and `` on '' ) strategies per epoch : ( i ) , ( ii ) , , or ( iii ) , ._ see appendix a. lemma 1 dictates that any transmit - rate should not be adopted in the optimal policy .in fact , since maximizes the bits - per - joule ee , we can show that a transmission strategy with an over an epoch is always dominated by an on - off transmission with , which can use less energy to deliver the same data amount .only when the data deadlines are strict ( i.e. , no further delay is allowed ) should we adopt an ; in this case , the transmitter should be always on , i.e. , , over epoch .let denote the inverse function of .we can obtain from that }:=p'^{-1}(w_n ) = \log(|h|^2 w_n ) \ ] ] which is an increasing function of .using this fact and the complementary slackness conditions , we can then establish that : _ in the optimal policy , the rate can only change at or where the causality or deadline constraints are met with equality ; specifically , the rate increases after a where , and it decreases after a where . _see appendix b. lemma 2 reveals that the optimal rate control policy follows a specific pattern . due to the convexity of rate function , a constant transmit - rate should be maintained whenever possible , to minimize the total energy consumption . in the optimal policy, the rate needs to be changed only when the data causality or deadline constraints become active .a causality constraint is active , i.e. 
, all available data is cleared up at when the amount of data arrivals so far is small ; as a result , a lower rate is maintained before than after .similarly , a deadline constraint is active at when the deadline requirements are strict , thus a higher rate is maintained before than after .this is in the same spirit with the `` string tautening '' calculus approach developed in .based on the rules revealed in lemmas 12 , we then put forth an -clipped `` string tautening '' procedure in algorithm 1 to construct the optimal policy . 1.13 ' '' '' height 0.1pt depth 0.3pt * algorithm 1 * -clipped `` string tautening '' ' '' '' height 0.1pt depth 0.3pt , and , ; = firstchanger( ) ; find a set of satisfying ; , ; ; update ; sort , and , together in ascending order into a vector ; , , , ; ; , , ; ; , , ; return , , ; return , , ; ' '' '' height 0.1pt depth 0.3pt the key component in algorithm 1 is the function firstchanger , which relies on lemmas 12 to determine the first rate - changing time and the invariant rate used before it in the optimal policy for the system . in this function , and denote the epoch indices for the two candidate first rate - changing time instants , whereas and denote the candidate rates that are maintained over ] .suppose that a constant transmit - rate , , is maintained in the optimal policy such that the corresponding causality constraint is met with equality at , i.e. , .by lemma 1 , holds , and an renders , .this implies that the packets can only be delivered at by either ( i ) a transmission with over the `` on '' periods of a total length , or ( ii ) a transmission with a rate over the entire interval ] is available .when a - priori knowledge of the future packet arrivals is not available in practice , we can develop a heuristic online scheme based on the proposed optimal offline policy .the idea is to schedule the packet transmissions according to the optimal rate control policy based on the current packet arrivals , and reschedule when new packets arrive .for instance , suppose that packets arrive at time instant with ( different ) deadline requirements .we can construct the set in accordance to the deadline requirements , and let the set , where is determined by the largest deadline .with such a , we run the proposed algorithm 1 to find the optimal transmission policy until new packets arrive at . then we treat the current time instant as new `` '' instant , and update the set by subtracting all by , removing the past deadlines ( i.e. with negative after subtraction ) , and then including the deadline requirements for the newly arriving packets .the set also needs to be updated .note that we always have , where is updated as the sum of the remaining packets in the buffer and the newly arriving packets , and is determined by the last deadline in the updated .algorithm 1 is run for the new , and the subsequent packet transmissions follow the resultant new policy .this process continues until all the packets are delivered .in this section , we generalize the proposed approach to a time - varying wireless channel , where the channel state in general changes with time . 
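the ee - maximizing rate used by the `` on - off '' strategy of lemma 1 minimizes the ratio of consumed power to rate ; since this ratio is quasi - convex , a simple bisection on its ( increasing ) optimality condition finds it . a minimal sketch , assuming the shannon - type power function with a normalized channel gain ; the numerical values are illustrative :

```python
import math

def ee_maximizing_rate(rho, h2=1.0, tol=1e-9):
    """Bisection for the rate minimizing (p(r) + rho) / r, with the Shannon-type
    power function p(r) = (2**r - 1) / h2 (unit-variance noise).  The channel
    gain h2 and the circuit power rho are illustrative inputs."""
    p  = lambda r: (2.0 ** r - 1.0) / h2
    dp = lambda r: (2.0 ** r) * math.log(2.0) / h2
    # the minimizer solves g(r) = r * p'(r) - p(r) - rho = 0, with g increasing in r
    g = lambda r: r * dp(r) - p(r) - rho
    lo, hi = 1e-12, 1.0
    while g(hi) < 0.0:          # grow the bracket until g changes sign
        hi *= 2.0
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if g(mid) < 0.0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# example: 0.2 W of circuit power, unit channel gain
print(ee_maximizing_rate(0.2))
```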
with a little abuse of notation , here we redefine an epoch as the interval between two consecutive channel or data state changing instants .again , over the entire transmission interval ] , or for a certain ] , over epochs ] .we can then construct a set of lagrange multipliers as follows : let , where the inequality is due to the strict increase of , leading to positivity of .let , if for a certain and , or let , if for a certain and , .we have shown that the rate if the causality constraint is tight at , or if the deadline constraint is tight at . recalling that is increasing in , it readily follows that or , depending on which type of constraint is tight at . except for these positive and or , all other lagrange multipliers in are set to zero .with such a , the complementary slackness conditions clearly hold . using such a also leads to , ] .in addition , the construction of ensures when , and computes a feasible set of when in each phase .this guarantees that each pair of satisfies ; thus , follows the optimal structure in lemma 1 .we have proven that yielded by algorithm 1 and the lagrange multipliers constructed accordingly satisfy the sufficient and necessary optimality conditions for .it then readily follows that is a global optimal policy for . in the search for the rate - changing points and the associated rates in algorithm 1 ,we only need to go through the data arrival or deadline time instants as shown in fig .2 ; hence , the algorithm has a complexity .z. nan , t. chen and x. wang , `` energy - efficient transmission of delay - limited bursty data packets over time - varying channels under non - ideal circuit power , '' in _ proc .chinasip _ , xian , china , july 2014 .x. wang and z. li , `` energy - efficient transmissions of bursty data packets with strict deadlines over time - varying wireless channels , '' _ ieee trans .wireless commun .5 , pp . 2533 - 2543 , may 2013 .
|
this paper develops a novel approach to obtaining energy - efficient transmission schedules for delay - limited bursty data arrivals under non - ideal circuit power consumption . assuming a - prior knowledge of packet arrivals , deadlines and channel realizations , we show that the problem can be formulated as a convex program . for both time - invariant and time - varying fading channels , it is revealed that the optimal transmission between any two consecutive channel or data state changing instants , termed epoch , can only take one of the three strategies : ( i ) no transmission , ( ii ) transmission with an energy - efficiency ( ee ) maximizing rate over part of the epoch , or ( iii ) transmission with a rate greater than the ee - maximizing rate over the whole epoch . based on this specific structure , efficient algorithms are then developed to find the optimal policies that minimize the total energy consumption with a low computational complexity . the proposed approach can provide the optimal benchmarks for practical schemes designed for transmissions of delay - limited data arrivals , and can be employed to develop efficient online scheduling schemes which require only causal knowledge of data arrivals and deadline requirements . * keywords : * energy efficiency , bursty arrivals , strict deadlines , non - ideal circuit power , convex optimization .
|
recently , there has been an upsurge of interest in radio signal enabled simultaneous wireless information and power transfer ( swipt ) - .a typical swipt system of practical interest is shown in fig .[ fig1 ] , where a fixed access point ( ap ) with constant power supply broadcasts wireless signals to a set of user terminals ( uts ) , among of which some decode information from the received signals , thus referred to as information receivers ( irs ) , while the others harvest energy , thus called energy receivers ( ers ) . to meet the practical requirement that irs and ers typically operate with very different power sensitivity ( e.g. , for irs versus for ers ) , a receiver - location - based scheduling for information andenergy transmission has been proposed in , , where ers need to be in more proximity to the transmitter than irs .however , this gives rise to a new information security issue since ers , which have better channels than irs from the transmitter , can easily eavesdrop the information sent to irs . to tackle this challenging problem , in this paperwe propose the use of multiple antennas at the transmitter to achieve the secret information transmission to irs and yet maximize the energy simultaneously transferred to ers , by properly designing the beamforming vectors and their power allocation at the transmitter .for the purpose of exposition , we consider a multiple - input single - output ( miso ) swipt system with a multi - antenna transmitter , one single - antenna ir , and single - antenna ers , as shown in fig .[ fig1 ] . to prevent the ers from eavesdropping the information sent to the ir, we study the joint information and energy transmit beamforming design to maximize the weighted sum - energy transferred to ers subject to a secrecy rate constraint for the ir .this problem is shown to be non - convex , but we solve it globally optimally by reformulating it into a two - stage optimization problem .first , we fix the signal - to - interference - plus - noise ratio ( sinr ) at the ir and obtain the optimal transmit beamforming solution by applying the semidefinite relaxation ( sdr ) technique .next , the original problem is solved by a one - dimension search for the optimal sinr value at the ir .furthermore , we present two suboptimal designs of lower complexity , for which the directions of information and energy beams are separately optimized with their power allocation . finally , we compare the performance of the proposed optimal algorithm with that of the two suboptimal schemes by simulations .it is worth noting that in , a similar miso swipt system as in this paper has been studied , but without the secret information transmission . 
as a result, it was shown in that to maximize the weighted sum - energy transferred to ers while meeting given sinr constraints at the irs , the optimal strategy is to employ only information beams without any dedicated energy beam .however , in this paper we will show that with the newly introduced secrecy rate constraint at the ir , energy beams are in general needed in the optimal solution .the reason is that energy beams in this paper will also play an important role of artificial noise ( an ) to facilitate the secret information transmission to the ir by interfering with the ers that may eavesdrop the ir s message .it is also worth noting that an - aided secrecy communication has been extensively studied in the literature , where a fraction of the transmit power is allocated to send artificially generated noise signals to reduce the information eavesdropped by the eavesdroppers . since in practice ,eavesdroppers channels are in general unknown at the transmitter , an isotropic approach was proposed in where the power of an is uniformly distributed in the null space of the legitimate receiver s channel , while the performance of this practical approach has been shown to be nearly optimal at the high signal - to - noise ratio ( snr ) regime in . furthermore , the miso beamforming design problem for the an - aided secrecy transmission under the assumption that eavesdroppers channels are known at the transmitter has been studied in .notice that this assumption is not practically valid if the eavesdroppers are passive devices .however , for the swipt system of our interest , since ers need to feed back their individual channels to the transmitter for it to deliver the required energy , it is reasonable to assume that ers channels are known at the transmitter .the rest of this paper is organized as follows .section [ sec : system model ] presents the miso swipt system model .section [ sec : problem formulation ] formulates the weighted sum - energy maximization problem subject to the secrecy rate constraint .section [ sec : optimal solution ] derives the optimal beamforming solution to this problem .section [ sec : suboptimal solution ] presents two suboptimal algorithms with lower complexity . section [ sec : numerical results ] provides numerical results on the performance of the proposed schemes . 
finally , section [ sec : conclusion ] concludes the paper .we consider a multiuser miso downlink system for swipt over a given frequency band as shown in fig .it is assumed that there is one single ir and ers denoted by the set , where the ir is more distant away from the transmitter ( tx ) than all ers to meet their different received power requirements .suppose that tx is equipped with antennas , while each ir / er is equipped with one single antenna .we assume linear transmit beamforming at tx and the ir is assigned with one dedicated information beam , while the ers are in total allocated to energy beams without loss of generality .therefore , the baseband transmitted signal of tx can be expressed as and denote the information beamforming vector and the energy beamforming vector , , respectively ; denotes the transmitted signal for the ir , while s , , denote the energy - carrying signals for energy beams .it is assumed that is a circularly symmetric complex gaussian ( cscg ) random variable with zero mean and unit variance , denoted by .furthermore , s , , are in general any arbitrary random signals each with unit average power .since in this paper we consider secret information transmission to the ir , s , , also play the role of an to reduce the information rate eavesdropped by the ers . as a result ,similarly to - , we assume that s are independent and identically distributed ( i.i.d . )cscg random variables denoted by , , since the worst - case noise for the eavesdropping ers is known to be gaussian .suppose that tx has a transmit sum - power constraint ; from ( [ eqn : signal3 ] ) , we thus have =\|{\mbox{\boldmath{ } } } _ 0\|^2+\sum_{i=1}^d\|{\mbox{\boldmath{ } } } _i\|^2\leq \bar{p} ] in the first suboptimal scheme in order to eliminate the information leaked to ers , but to the same direction as in the second suboptimal scheme to maximize the ir s sinr .note that the first suboptimal scheme is only applicable when since otherwise the null space of is empty . in the following ,we present the two suboptimal schemes with more details . supposing that , then the first suboptimal scheme aims to solve problem ( p1 ) with the additional constraints , , and , .consider first the information beam .let the singular value decomposition ( svd ) of be denoted as ^h,\end{aligned}\]]where and are unitary matrices , i.e. , , , and is a rectangular diagonal matrix .furthermore , and consist of the first and the last right singular vectors of , respectively .it can be shown that with forms an orthogonal basis for the null space of .thus , to guarantee that , must be in the following form : denotes the transmit power of the information beam , and is an arbitrary complex vector of unit norm .it can be shown that to maximize the ir s sinr , should be aligned to the same direction as the equivalent channel , i.e. , .since the energy beams are all aligned to the null space of ( to be shown later ) , the secrecy rate of the ir in this scheme is that the ers can not harvest any energy from the information beam ; thus , to maximize the weighted sum - energy transferred to the ers , should be as small as possible , i.e. , .it thus follows that summarize , in this scheme , we have next , consider the energy beams s .define the projection matrix as . without loss of generality, we can express , where satisfies .it can be shown that forms an orthogonal basis for the null space of .thus , to guarantee that , must be in the following form : is an arbitrary complex vector . 
in this case , the energy harvested at is thus , where . to find the optimal s , we need to solve the following problem .let and denote the maximum eigenvalue and its corresponding unit - norm eigenvector of the matrix , respectively .similar to problem ( p1-noi ) , it can be shown that the optimal value of problem ( p1-sub1 ) is , which is achieved by , , for any set of s satisfying . in practice , it is preferred to send only one energy beam to minimize the complexity of beamforming implementation at the transmitter , i.e. , the second suboptimal scheme aims to solve problem ( p1 ) with the additional constraints and , , where denotes the transmit power of the information beam . similar to ( [ eqn : sub1 wi ] ) , it can be shown that the optimal energy beams should be in the following form , we derive the optimal power allocation . it can be shown that the secrecy rate of the ir in this scheme is given by the set of feasible power allocation as .then let and denote the minimal and maximal elements in the set , respectively .thus , to maximize the weighted sum - energy transferred to ers subject to the secrecy rate constraint of the ir , the optimal power allocation can be expressed as this section , we provide numerical examples to validate our results .it is assumed that tx is equipped with antennas , and there are ers .we assume that the signal attenuation from tx to all ers is corresponding to an identical distance of meter , i.e. , , , and that to the ir is corresponding to a distance of meters , i.e. , .the channel vectors s and are randomly generated from i.i.d .rayleigh fading with the respective average power values specified as above .we set ( w ) or , , and , .we also set , ; thus , the sum - energy harvested by all ers is considered .similar to , we use the rate - energy ( r - e ) region , which consists of all the achievable ( secrecy ) rate and harvested sum - energy pairs for a given sum - power constraint , to compare the performance of the optimal and suboptimal schemes proposed in sections [ sec : optimal solution ] and [ sec : suboptimal solution ] .specifically , the r - e region is defined as and are given in ( [ eqn : secrecy rate ] ) and ( [ eqn : harvested energy ] ) , respectively .note that by solving problem ( p1 ) with each scheme for all feasible s , we can characterize the boundary of the corresponding r - e region .[ fig3 ] shows a set of r - e regions achieved by different beamforming schemes .it is observed that the optimal beamforming scheme achieves the best r - e trade - off .moreover , the suboptimal scheme works better than suboptimal scheme since its achieved r - e region is closer to that achieved by the optimal scheme , especially when the secrecy rate target for the ir is large .however , it is worth noting that the suboptimal scheme is of the lowest complexity .notice that for this suboptimal scheme , the closed - form expressions of the optimal information / energy beamforming vectors and their power allocation are given in ( [ eqn : sub1 v0 ] ) and ( [ eqn : sub1 wi ] ) .furthermore , since no information leakage to ers is achieved by the designed information beamforming , i.e. , , , there is no need to design any special codebook for the secrecy information signal at tx .this paper is an initial attempt to address the important issue of physical layer security in the emerging simultaneous wireless information and power transfer ( swipt ) system . 
under a miso setup ,the joint information and energy beamforming is investigated to maximize the weighted sum - energy harvested by multiple ers subject to a given secrecy rate constraint at one single ir .we solve this non - convex optimization problem by a two - step algorithm and show that the technique of sdr yields the optimal beamforming solution .two suboptimal beamforming schemes of lower complexity are also presented , and their performances are compared with that of the optimal scheme in terms of the achievable ( secrecy ) rate - energy region . j. xu , l. liu , and r. zhang , `` multiuser miso beamforming for simultaneous wireless information and power transfer , '' in _ proc .ieee international conference on acoustics , speech , and signal processing ( icassp ) _ , 2013 .w. c. liao , t. h. chang , w. k. ma , and c. y. chi , `` qos - based transmit beamforming in the presence of eavesdroppers : an artificial - noise - aided approach , '' _ ieee trans . signal process .1202 - 1216 , mar .
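the two suboptimal designs discussed in the paper reduce to standard linear - algebra operations : the information beam is the projection of the ir channel onto the null space of the er channels ( so no information leaks to the ers ) , and the energy beam is the dominant eigenvector of the weighted sum of er channel outer products . a minimal sketch with illustrative dimensions , randomly drawn channels and unit energy weights :

```python
import numpy as np

rng = np.random.default_rng(1)
nt, k = 4, 2                     # transmit antennas, number of ERs (requires nt > k)
h = rng.standard_normal(nt) + 1j * rng.standard_normal(nt)            # IR channel
G = rng.standard_normal((k, nt)) + 1j * rng.standard_normal((k, nt))  # ER channels (rows)

# information beam: project h onto the null space of G, so the ERs hear nothing
_, _, vh = np.linalg.svd(G)
null_basis = vh[k:].conj().T                 # columns span null(G)
w_info = null_basis @ (null_basis.conj().T @ h)
w_info = w_info / np.linalg.norm(w_info)     # unit-norm direction

# energy beam: dominant eigenvector of the (unit-weight) sum of ER channel outer products
Q = sum(np.outer(G[i], G[i].conj()) for i in range(k))
eigvals, eigvecs = np.linalg.eigh(Q)
w_energy = eigvecs[:, -1]                    # eigenvector of the largest eigenvalue

print("info leakage to ERs:", np.abs(G @ w_info))          # ~0 by construction
print("energy-beam gain at ERs:", np.abs(G @ w_energy) ** 2)
```

the optimal design is obtained , as described in the paper , by fixing the sinr at the ir , solving a semidefinite relaxation ( sdr ) , and then searching over that sinr in one dimension . the sketch below sets up one such inner problem in cvxpy ; the channels , the unit energy weights , and in particular the mapping from the secrecy - rate target to a per - er sinr cap are illustrative assumptions rather than the paper's exact expressions .

```python
import numpy as np
import cvxpy as cp

rng = np.random.default_rng(0)
nt, k = 4, 3                           # transmit antennas, number of ERs
h = rng.standard_normal(nt) + 1j * rng.standard_normal(nt)
g = [rng.standard_normal(nt) + 1j * rng.standard_normal(nt) for _ in range(k)]
p_max, sigma2, r0 = 1.0, 1.0, 1.0      # power budget, noise power, secrecy-rate target
gamma_ir = 10.0                        # fixed IR SINR (swept by the outer 1-D search)
gamma_er = (1.0 + gamma_ir) / (2.0 ** r0) - 1.0   # assumed per-ER SINR cap

W0 = cp.Variable((nt, nt), hermitian=True)   # information covariance
We = cp.Variable((nt, nt), hermitian=True)   # energy / artificial-noise covariance
H = np.outer(h, h.conj())
G = [np.outer(gi, gi.conj()) for gi in g]

constraints = [W0 >> 0, We >> 0,
               cp.real(cp.trace(W0 + We)) <= p_max,
               cp.real(cp.trace(H @ W0)) >= gamma_ir * (cp.real(cp.trace(H @ We)) + sigma2)]
for Gi in G:   # cap the SINR each (potentially eavesdropping) ER can attain
    constraints.append(cp.real(cp.trace(Gi @ W0))
                       <= gamma_er * (cp.real(cp.trace(Gi @ We)) + sigma2))

# weighted sum-energy harvested by the ERs (unit weights here)
harvested = sum(cp.real(cp.trace(Gi @ (W0 + We))) for Gi in G)
prob = cp.Problem(cp.Maximize(harvested), constraints)
prob.solve(solver=cp.SCS)
print("harvested sum-energy at this IR SINR:", prob.value)
```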
|
the dual use of radio signals for simultaneous wireless information and power transfer ( swipt ) has recently drawn significant attention . to meet the practical requirement that energy receivers ( ers ) operate with much higher received power than information receivers ( irs ) , ers need to be deployed closer to the transmitter than irs . however , due to the broadcast nature of wireless channels , one critical issue is that the messages sent to irs can not be eavesdropped by ers , which possess better channels from the transmitter . in this paper , we address this new secrecy communication problem in a multiuser multiple - input single - output ( miso ) swipt system where a multi - antenna transmitter sends information and energy simultaneously to one ir and multiple ers , each with a single antenna . by optimizing transmit beamforming vectors and their power allocation , we maximize the weighted sum - energy transferred to ers subject to a secrecy rate constraint for the information sent to the ir . we solve this non - convex problem optimally by reformulating it into a two - stage problem . first , we fix the signal - to - interference - plus - noise ratio ( sinr ) at the ir and obtain the optimal beamforming solution by applying the technique of semidefinite relaxation ( sdr ) . then the original problem is solved by a one - dimension search over the optimal sinr value for the ir . furthermore , two suboptimal low - complexity beamforming schemes are proposed , and their achievable ( secrecy ) rate - energy ( r - e ) regions are compared against that by the optimal scheme . [ section ] [ section ] [ section ] [ section ] [ section ] [ section ] [ section ]
|
an important problem that often arises in signal processing , machine learning , and statistics is sparse recovery .it is in general formulated in the standard form {0.00,0.00,0.00}}{\|{\bf x}\|_0}\ \\textrm{subject to}\ \ { \bf ax}={\bf y}\ ] ] where the sensing matrix has more columns than rows and the `` norm '' {0.00,0.00,0.00}}{\|{\bf x}\|_0} ] , is a necessary and sufficient condition such that for any -sparse and , is the unique solution of minimization .therefore , is a tight quantity in indicating the performance of minimization in sparse recovery .however , it has been shown that calculating is in general np - hard , which makes it difficult to check whether the condition is satisfied or violated . despite this , properties of are of tremendous help in illustrating the performance of minimization , e.g. , non - decrease of in ] ,* is finite if and only if ; * for , there exist with and such that see section [ ssec : prf_lem_nsp ] .first , we show the strict increase of in .[ thm_nsp_k ] suppose .then for ] , we can define a set of all positive integers that every -sparse can be recovered as the unique solution of minimization with .according to theorem [ thm_nsp_k ] , contains successive integers starting from to some integer and is possibly empty .if , then . therefore , if , . for with identical column norms , if and , then .to show this , we only need to prove that .first , for any and , since , where is the column of . since with equality holds only when are all on the same ray , which can not be true since . since has identical column norms , holds , which leads to because of lemma [ lem_nsp].2 ) .now we turn to the properties of as a function of .the following result reveals the continuity of in .[ thm_nsp_p2 ] suppose .then for , is a continuous function in ] , the inverse image of the open interval is also an open interval of ] with probability one .see section [ ssec : prf_thm_nsp_p3 ] .it needs to be noted that there exists such that is a constant number for all ], it is easy to check that for all ] , according to lemma 4.5 in , , and implies due to the compactness of , the sequence has a convergent subsequence , and its limit also lies in . then implies for , i.e. , contains a -sparse element .this contradicts the assumption that .\2 ) if , for any with and any , it holds that on the other hand , since , contains an -sparse signal with as its support set . for any with , , and therefore holds .if ] and , according to lemma [ lem_nsp].2 ) , there exist with and such that since is at least -sparse , there exists an index such that .let , then and hence recalling and the equivalent definition , we can get and complete the proof .according to theorem 5 in , is non - decreasing in ] , we prove the one - sided limit from the negative direction satisfies according to lemma [ lem_nsp].2 ) , there exist with and satisfying according to the definition of , it is easy to show that and then holds obviously .second , for any , we prove the one - sided limit from the positive direction satisfies since , there exists such that .then for , lemma [ lem_nsp].2 ) reveals that there exist with and such that since there are only finite different satisfying , there exists with such that an infinite subsequence of is associated with . due to the compactness of , this subsequence has a convergent subsequence , and its limit also lies in . 
according to the definition of and , andconsequently .since is non - decreasing in , and is proved .first , we show that with probability 1 .let denote the -dimensional vector space of real matrices .for any , let denote the subset of consisting of matrices of rank .it can be proved that is an embedded submanifold of dimension in .consequently , for matrices with i.i.d .entries drawn from a continuous distribution , the -dimensional volume of the set of singular matrices is zero . in other words ,any , or fewer , random vectors in with i.i.d .entries drawn from a continuous distribution are linearly independent with probability 1 . on the other hand ,more than vectors in are always linearly dependent .therefore , with probability 1 . next , with the equivalent definition, we prove that for and , holds with probability 1 . according to lemma [ lem_nsp].2 ), there exist with and such that suppose has nonzero entries with as its support set , then with probability 1 .it is obvious that , and for any and any , . since , and therefore summing with in and in , we can obtain which is equivalent to since , it is easy to check that the equality in holds only when for all and all , i.e. , the nonzero entries of have the same magnitude .we prove that contains such with probability 0 , which together with imply that holds with probability 1 . to this end , let denote the -dimensional vector space of real matrices . for fixed with , it can be easily shown that the subset is an -dimensional subspace in . therefore , for with i.i.d .entries drawn from a continuous probability distribution , contains with probability 0 . in , the number of vectorswhose nonzero entries have the same magnitude is which is a finite number .therefore , with probability 0 , contains a vector which makes the equality in hold .that is , is strictly increasing in $ ] with probability 1 .in characterizing the performance of minimization in sparse recovery , null space constant can be served as a necessary and sufficient condition for the perfect recovery of all -sparse signals .this letter derives some basic properties of in and .in particular , we show that is strictly increasing in and is continuous in , meanwhile for random , the constant is strictly increasing in with probability 1 .possible future works include the properties of in , for example , the requirement of number of measurements to guarantee with high probability when is randomly generated .j. wright , a. yang , a. ganesh , s. sastry , and y. ma , `` robust face recognition via sparse representation , '' _ ieee transactions on pattern analysis and machine intelligence _ ,31 , no . 2 , pp . 210 - 227 ,2009 .e. cands , j. romberg , and t. tao , `` robust uncertainty principles : exact signal reconstruction from highly incomplete frequency information , '' _ ieee transactions on information theory _ ,52 , no . 2 , pp .489 - 509 , feb . 2006 .r. saab , r. chartrand , and o. yilmaz , `` stable sparse approximations via nonconvex optimization , '' _ ieee international conference on acoustics , speech , and signal processing _ , pp .3885 - 3888 , mar .s. foucart and m. lai , `` sparsest solutions of underdetermined linear systems via -minimization for , '' _ applied and computational harmonic analysis _ , vol .26 , no . 3 , pp . 395 - 407 , may 2009 .r. gribonval and m. nielsen , `` highly sparse representations from dictionaries are unique and independent of the sparseness measure , '' _ applied and computational harmonic analysis _ , vol .22 , no . 3 , pp . 
335 - 355 , may 2007 .a. tillmann and m. pfetsch , `` the computational complexity of the restricted isometry property , the nullspace property , and related concepts in compressed sensing , '' _ ieee transactions on information theory _ , vol .60 , no . 2 , pp .1248 - 1259 , feb .d. donoho and m. elad , `` optimally sparse representation in general ( nonorthogonal ) dictionaries via minimization , '' _ proceedings of the national academy of sciences _ , vol . 100 , no . 5 , pp .2197 - 2202 , mar .d. malioutov , m. cetin , and a. willsky , `` optimal sparse representations in general overcomplete bases , '' _ ieee international conference on acoustics , speech , and signal processing _ , vol . 2 , pp .793 - 796 , 2004 .g. fung and o. mangasarian , `` equivalence of minimal - and -norm solutions of linear equalities , inequalities and linear programs for sufficiently small , '' _ journal of optimization theory and applications _151 , no . 1, pp . 1 - 10 , 2011 .
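Although computing the null space constant is NP-hard, its qualitative behaviour is easy to probe on toy instances. The following is a minimal illustrative sketch, assuming one common formulation of the constant as the largest ratio between the ℓ_p energy of a null-space vector on a size-k support and on its complement: it samples the null space of a small Gaussian matrix and enumerates supports explicitly, so it only yields a lower bound, and the matrix sizes, sampling budget, and helper name are arbitrary choices.

```python
# Illustrative Monte-Carlo lower bound on a null space constant h(p, k).
# Exact computation is NP-hard; this sketch only samples the null space of a
# small random sensing matrix, so it under-estimates the true constant.
import itertools
import numpy as np
from scipy.linalg import null_space

def nsc_lower_bound(A, p, k, num_samples=500, seed=0):
    """Sampled lower bound on max over null-space z != 0 and |S| <= k of ||z_S||_p^p / ||z_{S^c}||_p^p."""
    rng = np.random.default_rng(seed)
    basis = null_space(A)                    # orthonormal basis of N(A), shape (n, n - rank)
    n = A.shape[1]
    best = 0.0
    for _ in range(num_samples):
        z = basis @ rng.standard_normal(basis.shape[1])   # random null-space vector
        mag = np.abs(z) ** p
        total = mag.sum()
        for S in itertools.combinations(range(n), k):
            on_support = mag[list(S)].sum()
            off_support = total - on_support
            if off_support > 1e-12:
                best = max(best, on_support / off_support)
    return best

if __name__ == "__main__":
    rng = np.random.default_rng(42)
    m, n, p = 6, 10, 0.7
    A = rng.standard_normal((m, n))          # Gaussian sensing matrix (continuous distribution)
    for k in range(1, 5):
        print(f"k = {k}: sampled lower bound on h({p}, {k}) = {nsc_lower_bound(A, p, k):.3f}")
```

By construction the estimates are non-decreasing in k, and sweeping p on the same matrix produces smooth curves, consistent with the monotonicity and continuity results above; a sampled lower bound, of course, certifies neither property.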
|
the literature on sparse recovery often adopts the `` norm '' )$ ] as the penalty to induce sparsity of the signal satisfying an underdetermined linear system . the performance of the corresponding minimization problem can be characterized by its null space constant . in spite of the np - hardness of computing the constant , its properties can still help in illustrating the performance of minimization . in this letter , we show the strict increase of the null space constant in the sparsity level and its continuity in the exponent . we also indicate that the constant is strictly increasing in with probability when the sensing matrix is randomly generated . finally , we show how these properties can help in demonstrating the performance of minimization , mainly in the relationship between the exponent and the sparsity level . * keywords : * sparse recovery , null space constant , minimization , monotonicity , continuity .
|
interference management has been taken into account as one of the most challenging issues to increase the throughput of cellular networks serving multiple users . in multiuser cellular environments ,each receiver may suffer from intra - cell and inter - cell interference .interference alignment ( ia ) was proposed by fundamentally solving the interference problem when there are multiple communication pairs .it was shown that the ia scheme can achieve the optimal degrees - of - freedom ( dof ) in the multiuser interference channel with time - varying channel coefficients .subsequent studies have shown that the ia is also useful and indeed achieves the optimal dof in various wireless multiuser network setups : multiple - input multiple - output ( mimo ) interference channels and cellular networks .in particular , ia techniques for cellular uplink and downlink networks , also known as the interfering multiple - access channel ( imac ) or interfering broadcast channel ( ibc ) , respectively , have received much attention .the existing ia framework for cellular networks , however , still has several practical challenges : the scheme proposed in requires arbitrarily large frequency / time - domain dimension extension , and the scheme proposed in is based on iterative optimization of processing matrices and can not be optimally extended to an arbitrary downlink cellular network in terms of achievable dof . in the literature , there are some results on the usefulness of fading in single - cell downlink broadcast channels , where one can obtain multiuser diversity gain along with user scheduling as the number of users is sufficiently large : opportunistic scheduling , opportunistic beamforming , and random beamforming .scenarios exploiting multiuser diversity gain have been studied also in ad hoc networks , cognitive radio networks , and cellular networks .recently , the concept of opportunistic ia ( oia ) was introduced in for the -cell uplink network ( i , e ., imac model ) , where there are one -antenna base station ( bs ) and users in each cell .the oia scheme incorporates user scheduling into the classical ia framework by opportunistically selecting ( ) users amongst the users in each cell in the sense that inter - cell interference is aligned at a pre - defined interference space .it was shown in that one can asymptotically achieve the optimal dof if the number of users in a cell scales as a certain function of the signal - to - noise - ratio ( snr ) . for the -cell downlink network ( i.e. , ibc model ) assuming one -antenna base station ( bs ) and per - cell users , studies on the oia have been conducted in .more specifically , the user scaling condition for obtaining the optimal dof was characterized for the -cell multiple - input single - output ( miso ) ibc , and then such an analysis of the dof achievability was extended to the -cell mimo ibc with receive antennas at each user full dof can be achieved asymptotically , provided that scales faster than , for the -cell mimo ibc using oia . in this paper, we propose an _ opportunistic downlink ia ( odia ) _ framework as a promising interference management technique for -cell downlink networks , where each cell consists of one bs with antennas and users having antennas each .the proposed odia jointly takes into account user scheduling and downlink ia issues .in particular , inspired by the precoder design in , we use two cascaded beamforming matrices to construct our precoder at each bs . 
to design the first transmit beamforming matrix, we use a user - specific beamforming , which conducts a linear zero - forcing ( zf ) filtering and thus eliminates intra - cell interference among spatial streams in the same cell . to design the second transmit beamforming matrix, we use a predetermined reference beamforming matrix , which plays the same role of random beamforming for cellular downlink and thus efficiently reduces the effect of inter - cell interference from other - cell bss . on the other hand ,the receive beamforming vector is designed at each user in the sense of minimizing the total amount of received inter - cell interference using _ local _ channel state information ( csi ) in a decentralized manner .each user feeds back both the effective channel vector and the quantity of received inter - cell interference to its home - cell bs . the user selection andtransmit beamforming at the bss and the design of receive beamforming at the users are completely decoupled .hence , the odia operates in a non - collaborative manner while requiring no information exchange among bss and no iterative optimization between transmitters and receivers , thereby resulting in an easier implementation .the main contribution of this paper is four - fold as follows .* we first show that the minimum number of users required to achieve dof ( ) can be fundamentally reduced to by using the odia at the expense of acquiring perfect csi at the bss from users , compared to the existing downlink ia schemes requiring the user scaling law , implies that . ] where denotes the number of spatial streams per cell .the interference decaying rate with respect to for given snr is also characterized in regards to the derived user scaling law .* we introduce a limited feedback strategy in the odia framework , and then analyze the required number of feedback bits leading to the same dof performance as that of the odia assuming perfect feedback , which is given by .* we present a user scheduling method for the odia to achieve optimal multiuser diversity gain , i.e. , per stream even in the presence of downlink inter - cell interference . * to verify the odia schemes , we perform numerical evaluation via computer simulations .simulation results show that the proposed odia significantly outperforms existing interference management and user scheduling techniques in terms of sum - rate in realistic cellular environments .the remainder of this paper is organized as follows .section [ sec : system ] describes the system and channel models .section [ sec : oia ] presents the overall procedure of the proposed odia . in section [ sec : achievability ] , the dof achievablility result is shown .section [ sec : oia_limited ] presents the odia scheme with limited feedback . in section [ sec : threhold_odia ] , the achievability of the spectrally efficient odia leading to a better sum - rate performance is characterized .numerical results are shown in section [ sec : sim ] .section [ sec : conc ] summarizes the paper with some concluding remarks .we consider a -cell mimo ibc where each cell consists of a bs with antennas and users with antennas each . the number of selected users in each cell is denoted by .it is assumed that each selected user receives a single spatial stream . to consider nontrivial cases , we assume that , because all inter - cell interference can be completely canceled at the receivers ( i.e. 
, users ) otherwise .moreover , the number of antennas at the users is in general limited due to the size of the form factor , and hence it is more safe to assume that is relatively small compared to .the channel matrix from the -th bs to the -th user in the -th cell is denoted by }\in \mathbb{c}^{l \times m} ] is assumed to be independent and identically distributed ( i.i.d . ) according to .in addition , quasi - static frequency - flat fading is assumed , i.e. , channel coefficients are constant during one transmission block and change to new independent values for every transmission block .the -th user in the -th cell can estimate the channels } ] denotes additive noise , each element of which is independent and identically distributed complex gaussian with zero mean and the variance of . the average snr is given by }\mathbf{s}_i \right\|^2 \right]}/{\mathbb{e}\left [ \left\| \mathbf{z}^{[i , j]}\right\|^2 \right ] } = { 1}/{n_0} ] , where is an for and .if the reference beamforming matrix is generated in a pseudo - random fashion , i.e. , it changes based on a certain pattern as if it changes randomly and the pattern is known by the bss as well as the users , bss do not need to broadcast them to users .then , the -th user in the -th cell obtains }_{k} ] denote the unit - norm weight vector at the -th user in the -th cell , i.e. , } \right\|^2 = 1 ] , the -th user in the -th cell can compute the following quantity while using its receive beamforming vector } ] , is defined as the sum of }_{k} ] is not exactly the amount of the generating interference from the -th bs to the -th user in the -th cell due to the absence of , it decouples the design of the user - specific precoding matrix from the user scheduling metric calculation , i.e. , }_{k} ] even with excluded and that the optimal dof can be achieved .at this point , it is worthwhile to note that the role of is two - fold .first , it determines the dimension of the effective received channel according to given parameter . by multiplying to the channel matrix , the dimension of the effective channelis reduced to rather than , which results in reduced number of inter - cell interference terms as well as reduced average interference level for each interference term .we shall show in the sequel that plays a role in the end of rendering the user scaling law dependent on the parameter .second , separates the user scheduling procedure from the user - specific precoding matrix design of and also from the receiver beamforming vector design of . by employing the cascaded precoding matrix design ,the scheduling metric in ( 1 ) becomes independent of or , and can be obtained as a function of only } ] , where } \in \mathbb{c}^{s \times 1} ] , where } \right|^2=1/s ] , the transmit signal vector at the -th bs is given by , and the received signal vector at the -th user in the -th cellis written as } & = \mathbf{h}_i^{[i , j]}\mathbf{p}_i \mathbf{v}_i \mathbf{x}_i + \sum_{k=1 , k\neq i}^{k } \mathbf{h}_k^{[i , j]}\mathbf{p}_k \mathbf{v}_k \mathbf{x}_k + \mathbf{z}^{[i , j ] } \nonumber \\ & = \underbrace{\mathbf{h}_i^{[i , j]}\mathbf{p}_i \mathbf{v}^{[i , j ] } x^{[i , j]}}_{\textsf{desired signal } } + \underbrace{\sum_{s=1 , s\neq j}^{s } \mathbf{h}_i^{[i , j]}\mathbf{p}_i \mathbf{v}^{[i , s ] } x^{[i , s]}}_{\textsf{intra - cell interference } } \nonumber \\ & + \underbrace{\sum_{k=1 , k\neq i}^{k } \mathbf{h}_k^{[i , j]}\mathbf{p}_k \mathbf{v}_k \mathbf{x}_k}_{\textsf{inter - cell interference } } + \mathbf{z}^{[i , j]}. 
% \underbrace{\sum_{j=1}^{s}\mathbf{h}_{i}^{[i , j ] } \mathbf{w}^{[i , j]}x^{[i , j]}}_{\textrm{desired signal } } + \underbrace{\sum_{k=1 , k\neq i}^{k } \sum_{m=1}^{s } \mathbf{h}_{i}^{[k , m]}\mathbf{w}^{[k , m]}x^{[k , m]}}_{\textrm{inter - cell interference } } + \mathbf{z}_i,\end{aligned}\ ] ] the received signal vector after receive beamforming , denoted by } = { \mathbf{u}^{[i , j]}}^{\mathsf{h } } \mathbf{y}^{[i , j]} ] . by selecting users with small } ] tends to be orthogonal to the receive beamforming vector } ] in ( [ eq : rec_vector_after_bf ] ) also tend to be orthogonal to } ] denotes a normalization factor for satisfying the unit - transmit power constraint for each spatial stream , i.e. , } = 1/\left\| \mathbf{p}_i \mathbf{v}^{[i , j ] } \right\| ] . using ( [ eq : data_rate_single_user ] ) , the achievable total dof can be defined as }}{\log \textsf{snr}}.\ ] ]in this section , we characterize the dof achievability in terms of the user scaling law with the optimal receive beamforming technique . to this end , we start with the receive beamforming design that maximizes the achievable dof . for given channel instance , from ( [ eq : data_rate_single_user ] ) , each user can attain the maximum dof of 1 if and only if the interference }}^{\mathsf{h}}\mathbf{v}^{[k , s]}\big|^2 \cdot \mathsf{snr} ] can be bounded as } \!\ !\ge \!\ ! \log_2 \!\ ! \left(\!\!1 + \frac { \gamma^{[i , j ] } } { \frac{s}{\mathsf{snr}}+ \sum_{k=1 , k\neq i}^{k } \sum_{s=1}^{s } \left\| \mathbf{f}_{k}^{[i , j]}\right\|^2 \left\| \mathbf{v}^{[k , s]}\right\|^2 } \!\!\ !\right ) \label{eq : data_rate_single_user_bound } \\ & \ge \log_2 \left ( 1 + \frac { \gamma^{[i , j ] } } { \frac{s}{\mathsf{snr}}+ \sum_{k\neq i}^{k } \sum_{s=1}^{s } \left\| \mathbf{f}_{k}^{[i , j ] } \right\|^2 \left\| \mathbf{v}^{(\max)}_{i}\right\|^2 } \right ) \label{eq : data_rate_single_user_bound2}\\ & = \log_2\left ( \mathsf{snr}\right ) + \log_2 \left(\frac{1}{\mathsf{snr}}+ \frac { \frac{\gamma^{[i , j]}}{\left\| \mathbf{v}^{(\max)}_{i}\right\|^2 } } { \frac{s}{\left\| \mathbf{v}^{(\max)}_{i}\right\|^2}+ i^{[i , j ] } } \right ) , \label{eq : data_rate_single_user_bound3}\end{aligned}\ ] ] where in ( [ eq : data_rate_single_user_bound2 ] ) is defined by }\right\|^2 : i'\in \mathcal{k}\setminus i , j'\in \mathcal{s}\bigg\},\end{aligned}\ ] ] , and } ] is determined by } ] through receive beamforming at the users . since } = \sum_{s=1}^{s } \eta^{[i , j ] } \mathsf{snr} ] at the users can reduce the sum of received interference .therefore , each user finds the beamforming vector that minimizes } ] by } = \boldsymbol{\omega}^{[i , j]}\boldsymbol{\sigma}^{[i , j]}{\mathbf{q}^{[i , j]}}^{\mathsf{h } } , \displaybreak[0]\ ] ] where }\in \mathbb{c}^{(k-1)s\times l} ] consist of orthonormal columns , and } = \textrm{diag}\left ( \sigma^{[i , j]}_{1 } , \ldots , \sigma^{[i , j]}_{l}\right) ] .then , the optimal } ] is the -th column of } ] is i.i.d .complex gaussian with zero mean and unit variance .[ remark : decoupled ] in general , the conventional scheduling metric such as snr or sinr in the ibc is dependent on the precoding matrices at the transmitters , which makes the joint optimization of the precoder design and user scheduling difficult to be separated from each other and implemented with feasible signaling overhead and low complexity .the previous schemes for the ibc only consider the design of the precoding matrices and receive filters without any consideration of user scheduling . 
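As a concrete illustration of the per-user computation described above, the following minimal sketch builds the stacked interference matrix seen through the reference precoders, takes the right singular vector associated with the smallest singular value as the receive beamformer, and reports the corresponding leakage as the scheduling metric. The dimensions, the QR-based reference precoders, and all variable names are assumptions made for the example, not the paper's exact parameterization.

```python
# Illustrative per-user computation in the ODIA: SVD-based receive beamforming
# that minimizes the leakage of inter-cell interference, plus the resulting
# scheduling metric.  A sketch only; dimensions and names are arbitrary.
import numpy as np

rng = np.random.default_rng(1)
K, M, L, S = 3, 4, 2, 2                 # cells, BS antennas, user antennas, streams per cell

def reference_precoder(rng, M, S):
    """Pseudo-random M x S reference beamforming matrix with orthonormal columns."""
    G = rng.standard_normal((M, S)) + 1j * rng.standard_normal((M, S))
    Q, _ = np.linalg.qr(G)
    return Q[:, :S]

P = [reference_precoder(rng, M, S) for _ in range(K)]       # one per BS, known network-wide

def receive_bf_and_metric(H_from_bs, P, home):
    """H_from_bs[k]: L x M channel from BS k to this user.  Returns (u, eta)."""
    rows = []
    for k, Pk in enumerate(P):
        if k == home:
            continue
        eff = H_from_bs[k] @ Pk                  # L x S effective cross-link channel
        rows.append(eff.conj().T)                # one row per interfering stream
    G = np.vstack(rows)                          # (K-1)S x L stacked interference matrix
    _, sing, Vh = np.linalg.svd(G)
    u = Vh.conj().T[:, -1]                       # right singular vector of the smallest singular value
    eta = sing[-1] ** 2                          # leakage: sum over other cells of ||u^H H_k P_k||^2
    return u, eta

# one user in cell 0: draw its channels from every BS and evaluate locally
H = [(rng.standard_normal((L, M)) + 1j * rng.standard_normal((L, M))) / np.sqrt(2) for _ in range(K)]
u, eta = receive_bf_and_metric(H, P, home=0)
f = u.conj() @ (H[0] @ P[0])                     # effective home-cell row to be fed back
print(f"scheduling metric eta = {eta:.6f}, effective home channel f = {np.round(f, 3)}")
```

Only the scalar metric and the short effective home-channel row computed at the end would be reported to the home base station; everything else stays local to the user.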
with the cascaded precoding matrix design , however , the proposed scheme decouples the user scheduling metric calculation and the user - specific precoding matrix , as shown in ( [ eq : eta_tilde ] ) .in addition , the receive beamforming vector design can also be decoupled from as shown in ( [ eq : u_design ] ) .a similar cascaded precoding matrix design was used in for some particular cases of the antenna configuration without the consideration of user scheduling .however , the proposed scheme applies to an arbitrary antenna and channel configuration , where the inter - cell interference is suppressed with the aid of opportunistic user scheduling .in addition , we shall show in the sequel that the optimal dof can be achievable under a certain user scaling condition for an arbitrary antenna configuration without any iterative optimization procedure between the users and bss .[ line : remark1:end ] [ remark : noiteration ] note that although it is assumed in the proposed scheme that each user feeds back the -dimensional vector } ] , denoted by , can be written as for , where means , and is a constant determined by , , and .we further present the following lemma for the probabilistic interference level of the odia .[ lemma : cdf_scaling ] the sum - interference remains constant with high probability for increasing snr , that is , } \le \epsilon \bigg\}=1 \displaybreak[0]\end{aligned}\ ] ] for any , if see appendix [ app : lemma2 ] .now , the following theorem establishes the dof achievability of the proposed odia . [ theorem : dof ] the proposed odia scheme with the scheduling metric ( [ eq : lif_beamforming_simple ] ) achieves the optimal dof for given with high probability if if the sum - interference remains constant for increasing snr with probability , the achievable rate in ( [ eq : data_rate_single_user_bound3 ] ) can be further bounded by } \nonumber \\ & \ge \mathcal{p } \ ! \cdot \ !\left [ \log_2\left ( \mathsf{snr}\right)\!\ ! + \!\ ! \log_2 \!\! \left ( \!\ ! \frac{1}{\mathsf{snr } } \ ! + \ !\frac { \gamma^{[i , j]}/\left(s\left\| \mathbf{v}^{(\max)}_{i}\right\|^2\right ) } { 1/\left\| \mathbf{v}^{(\max)}_{i}\right\|^2 + \epsilon } \ !\right ) \!\ !\right ] , \label{eq : data_rate_single_user_bound5}\end{aligned}\ ] ] for any .thus , the achievable dof in ( [ eq : sum_dof ] ) can be bounded by from lemma [ lemma : cdf_scaling ] , it is immediate to show that tends to 1 , and hence dof is achievable if , which proves the theorem .the following remark discusses the uplink and downlink duality on the dof achievability within the oia framework .[ remark : up_down_duality ] [ line : duality_power : start]the same scaling condition of was achieved to obtain dof in the -cell uplink interference channel , each cell of which is composed of a bs with antennas and users each with antennas .similarly as in the proposed scheme , the uplink scheme also selects users that generate the minimal interference to the receivers ( bss ) . in the uplink scheme ,the transmitters ( users ) perform svd - based beamforming and the receivers ( bss ) employ a zf equalization , while in the proposed downlink case the transmitters ( bss ) perform zf precoding and the receivers ( users ) employ svd - based beamforming . 
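On the base-station side, once the scheduled users have fed back their effective channel rows, the user-specific precoder reduces to a zero-forcing inversion of the stacked rows followed by per-stream normalization. The sketch below fabricates random effective rows as stand-ins for the feedback and checks that the intra-cell streams decouple; the pseudo-inverse construction and the normalization are assumptions that mirror, but need not coincide with, the exact definition used in the paper.

```python
# Sketch of the user-specific zero-forcing precoder built from the fed-back
# effective channel rows of the S scheduled users in one cell.
import numpy as np

rng = np.random.default_rng(11)
S = 3                                                   # scheduled users / streams in the cell
F = rng.standard_normal((S, S)) + 1j * rng.standard_normal((S, S))   # stand-in for the feedback

V = np.linalg.pinv(F)                                   # zero-forcing directions
V = V / np.linalg.norm(V, axis=0, keepdims=True)        # unit transmit power per stream

FV = F @ V                                              # effective S x S intra-cell link matrix
print("per-stream gains      :", np.round(np.abs(np.diag(FV)), 4))
print("max intra-cell leakage:", float(np.max(np.abs(FV - np.diag(np.diag(FV))))))
```

The inter-cell terms are untouched by this step: they are suppressed statistically by scheduling users with small leakage metrics, which is precisely what the degrees-of-freedom argument above quantifies.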
in addition , each transmitter sends the information on effective channel vectors to the corresponding receiver in the uplink case , and vise versa in the downlink case .the transmit power per spatial stream is the same for both the cases .therefore , theorem [ theorem : dof ] implies that the same dof is achievable with the same user scaling law for the downlink and uplink cases .[ line : duality_power : end ] the user scaling law characterizes the trade - off between the asymptotic dof and number of users , i.e. , the more number of users , the more achievable dof .in addition , we relate the derived user scaling law to the interference decaying rate with respect to for given snr in the following theorem .[ theorem : scaling_decay ] if the user scaling condition to achieve a target dof is given by for some , then the interference decaying rate is given by } } \right\ } \ge \theta\left ( n^{1/\tau'}\right),\end{aligned}\ ] ] where if and . [ line : theorem2_proof : start]from the proof of theorem [ theorem : dof ] , the user scaling condition to achieve a target dof is given by if and only if the cdf of } ] , which are used to design the precoding matrix in ( [ eq : zf_bf ] ) .this feedback procedure corresponds to the feedforward of the effective channel vectors in the uplink oia case . note that even with this feedback procedure , a straightforward dual transceiver and user scheduling scheme inspired by the uplink oia would result in an infinitely - iterative optimization between the user scheduling and transceiver design , because the received interference changes according to the precoding matrix and receive beamforming vectorfurthermore , only with the cascaded precoding matrix , the iterative optimization is still needed , since the coupled optimization issue is still there , as shown in .it is indeed the proposed odia that can achieve the same user scaling condition of the uplink case , i.e. , , without any iterative design .in addition , the proposed odia applies to an arbitrary , , and , whereas the optimal dof is achievable only in a few special cases in the scheme proposed in .in the proposed odia scheme , the vectors ( }}^{\mathsf{h}}\mathbf{h}^{[i , j]}_{i}\mathbf{p}_i ] is fed back using codebooks . for limited feedback, we define the codebook by where is the codebook size and is a unit - norm codeword , i.e. , .hence , the number of feedback bits used is given by for }}^{\mathsf{h } } = { \mathbf{u}^{[i , j]}}^{\mathsf{h } } \mathbf{h}^{[i , j]}_{i } \mathbf{p}_i, ] , 2 ) channel gain of }\right\|^2 ] .note that the feedback of scalar information such as channel gains and scheduling metrics can be fed back relatively accurately with a few bits of uplink data , and the main challenge is on the feedback of the angle of vectors .thus , in what follows , the aim is to analyze the impact of the quantized feedback of the index of } ] from } \triangleq \left\| \mathbf{f}_{i}^{[i , j]}\right\|^2 \cdot \tilde{\mathbf{f}}_{i}^{[i , j ] } , \hspace{10pt}i=1 , \ldots , s,\end{aligned}\ ] ] and the precoding matrix from where } } , \ldots , \sqrt{\gamma^{[i , s]}}\right) ] . 
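A minimal sketch of the quantization step just defined: the unit-norm direction of an effective channel row is mapped to the codeword with the largest correlation (equivalently, the smallest chordal distance), and only the codeword index and the real-valued gain would be reported. A random codebook is used here instead of a Grassmannian one, and the dimension and bit budget are arbitrary.

```python
# Sketch of limited feedback for the effective channel direction: a random
# vector codebook of size 2^nf, max-correlation (chordal-distance) quantization.
import numpy as np

rng = np.random.default_rng(3)
S, nf = 2, 6                              # streams (vector dimension), feedback bits
codebook = rng.standard_normal((2 ** nf, S)) + 1j * rng.standard_normal((2 ** nf, S))
codebook /= np.linalg.norm(codebook, axis=1, keepdims=True)      # unit-norm codewords

def quantize(f, codebook):
    """Return (index, gain, chordal distance) for an effective channel row f."""
    gain = np.linalg.norm(f)
    direction = f / gain
    corr = np.abs(codebook.conj() @ direction)        # |<c_q, f/||f||>| for every codeword
    q = int(np.argmax(corr))
    d = np.sqrt(max(0.0, 1.0 - corr[q] ** 2))         # chordal distance to the chosen codeword
    return q, gain, d

f = rng.standard_normal(S) + 1j * rng.standard_normal(S)          # an effective channel row
q, gain, d = quantize(f, codebook)
print(f"fed-back index {q} of {2 ** nf}, gain {gain:.3f}, quantization error d = {d:.3f}")
```

Re-running the example with a larger bit budget shows the chordal error shrinking, which is the mechanism behind the feedback-bit scaling condition discussed below.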
with limited feedback ,the received signal vector after receive beamforming is written by } & = { \mathbf{f}_{i}^{[i , j]}}^{\mathsf{h}}\hat{\mathbf{v}}_i \mathbf{x}_i +\cdot \sum_{k=1 , k\neq i}^{k } { \mathbf{f}_{k}^{[i , j]}}^{\mathsf{h } } \hat{\mathbf{v}}_k \mathbf{x}_k + { \mathbf{u}^{[i , j]}}^{\mathsf{h } } \mathbf{z}^{[i , j ] } \\ & = \sqrt{\gamma^{[i , j]}}x^{[i , j ] } + \underbrace{\left ( { \mathbf{f}_{i}^{[i , j]}}^{\mathsf{h}}\hat{\mathbf{v}}_i \mathbf{x}_i- \sqrt{\gamma^{[i , j]}}x^{[i , j]}\right)}_{\textrm{residual intra - cell interference } } \nonumber \\ & + \sum_{k=1 , k\neq i}^{k } { \mathbf{f}_{k}^{[i , j]}}^{\mathsf{h } } \hat{\mathbf{v}}_k \mathbf{x}_k + { \mathbf{u}^{[i , j]}}^{\mathsf{h } } \mathbf{z}^{[i , j]},\end{aligned}\ ] ] where the residual intra - cell interference is non - zero due to the quantization error in .it is important to note that the residual intra - cell interference is a function of , which includes other users channel information , and thus each user treats this term as unpredictable noise and calculates only the inter - cell interference for the scheduling metric as in ( [ eq : eta ] ) ; that is , the scheduling metric is not changed for the odia with limited feedback .the following theorem establishes the user scaling law for the odia with limited feedback .[ th : codebook ] the odia with a grassmannianthe grassmannian codebook refers to a vector codebook having a maximized minimum chordal distance of any two codewords , which can be obtained by solving the grassmannian line packing problem . ] or random codebook achieves the same user scaling law of the odia with perfect csi described in theorem [ theorem : dof ] , if that is , dof is achievable with high probability if and ( [ eq : nf_cond0 ] ) holds true . without loss of generality , the quantized vector } ] is a unit - norm vector i.i.d . over }\right) ] such that with a slight abuse of notation } = \sqrt{1-{d^{\max}_i}^2}\cdot\mathbf{f}_{i}^{[i , j]}+ d^{\max}_i\nu_i \cdot\mathbf{t}^{[i , j]},\end{aligned}\ ] ] where } , \ldots , d^{[i , s ] } \right\ } , \nonumber \\ \nu_i & = & \max \left\ { \left\|\mathbf{f}_{i}^{[i , j]}\right\|^2 , j=1 , \ldots , s \right\}.\end{aligned}\ ] ] note that more quantization error only degrades the achievable rate , and hence the quantization via ( [ eq : f_hat3 ] ) yields a performance lower - bound . inserting ( [ eq : f_hat3 ] ) to ( [ eq : v_hat ] ) gives us where } , \ldots , \mathbf{f}_{i}^{[i , s]}\right]^{\mathsf{h}} ] .the taylor expansion of in ( [ eq : v_hat ] ) gives us where is a function of and .thus , can be written by inserting ( [ eq : v_hat_final ] ) to ( [ eq : rec_vector_after_bf_limited ] ) yields } & = \sqrt{\gamma^{[i , j]}}x^{[i , j ] } \nonumber \\ & - \underbrace{d^{\max}_i\nu_i{\mathbf{t}^{[i , j]}}^{\textsf{h}}\mathbf{f}_i^{-1}\boldsymbol{\gamma}_i \mathbf{x}_i + \sum_{k=2}^{\infty } \left(d^{\max}_i\right)^k{\mathbf{f}_{i}^{[i , j]}}^{\mathsf{h}}\mathbf{a}_k \boldsymbol{\gamma}_i \mathbf{x}_i } _ { \textrm{residual intra - cell interference } } \nonumber \\ & \hspace{0pt}+ \sum_{k=1 , k\neq i}^{k } { \mathbf{f}_{k}^{[i , j]}}^{\mathsf{h } } \hat{\mathbf{v}}_k \mathbf{x}_k + { \mathbf{u}^{[i , j]}}^{\mathsf{h } } \mathbf{z}^{[i , j]}.\end{aligned}\ ] ] consequently , the rate } ] and residual intra - cell interference } ] for all selected users for increasing snr . 
in appendix [ app :th_codebook ] , it is shown that } \le \epsilon' ] , needs to vanish for any given channel instance with respect to snr to achieve a non - zero dof per spatial stream .[ line : remark_fb2:end ] though the system and proof are different , our results of theorem [ th : codebook ] are consistent with this previous result .in this section , we propose a spectrally efficient oia ( se - odia ) scheme and show that the proposed se - odia achieves the optimal multiuser diversity gain . for the dof achievability , it was enough to design the user scheduling in the sense to minimize inter - cell interference .however , to achieve optimal multiuser diversity gain , the gain of desired channels also needs to be considered in user scheduling .the overall procedure of the se - odia follows that of the odia described in section [ sec : oia ] except the the third stage ` user scheduling ' .in addition , we assume the perfect feedback of the effective desired channels }}^{\mathsf{h}}\mathbf{h}_i^{[i , j]}\mathbf{p}_i ] , for given } , \ldots , \mathbf{b}_{s-1}^{[i ] } \right\} ] .* step 3 : for the -th user selection , a user is selected at random from the user pool that satisfies the following two conditions : } \le \eta_i , \hspace{10pt}\mathsf{c}_2 : \|\tilde{\mathbf{b}}_{s}^{[i , j]}\|^2 \ge \eta_d % \pi_s = \arg \max_{k\in\mathcal{n}_s } \|\mathbf{b}_{s}^{[i]}\|^2\\ % \mathcal{s}_0 = \mathcal{s}_0 \cup \{\pi_s\}\\ % \mathbf{f}^{[i , s ] } _ * = \mathbf{f}^{[i,\pi_s]}\\ % \mathbf{b}_{s}^{[i ] } = \mathbf{b}_{\pi_s}.\end{aligned}\ ] ] denote the index of the selected user by and define } = \tilde{\mathbf{b}}^{[i,\pi(s)]}_s.\ ] ] * step 4 : if , then find the -th user pool from : }}^{\mathsf{h } } \mathbf{b}_{s}^{[i]}\right|}{\| \mathbf{f}_{i}^{[i , j]}\| \|\mathbf{b}_{s}^{[i]}\| } < \alpha\right\ } , \nonumber \\ s & = s+1,\end{aligned}\ ] ] where is a positive constant . repeat step 2 to step 4 until . note that from the cdf of } ] , where .thus , from ( [ eq : eta_d_choice ] ) , ( [ eq : eta_i_choice ] ) , and ( [ eq : c2_final ] ) , ( [ eq : p_s2 ] ) can be bounded by the right - hand side of ( [ eq : p_s_final ] ) converges to 1 for increasing snr if and only if since the left - hand side of ( [ eq : scaling_mud ] ) can be written by , where and are positive constants independent of snr and , it tends to infinity for increasing snr , and thereby tends to 1 if and only if .k. gomadam , v. r. cadambe , and s. a. jafar , `` a distributed numerical approach to interference alignment and applications to wireless interference networks , '' _ ieee trans .inf . theory _ ,57 , no . 6 , pp . 33093322 , june 2011. b. c. jung , d. park , and w .- y .shin , `` opportunistic interference mitigation achieves optimal degrees - of - freedom in wireless multi - cell uplink networks , '' _ ieee trans ._ , vol . 60 , no . 7 , pp19351944 , july 2012 .h. j. yang , w .- y .shin , b. c. jung , and a. paulraj , `` opportunistic interference alignment for mimo interfering multiple access channels , '' _ ieee trans .wireless commun ._ , vol .12 , no . 5 , pp .21802192 , may 2013 .j. jose , s. subramanian , x. wu , and j. li , `` opportunistic interference alignment in cellular downlink , '' in _50th annual allerton conference on communication , control , and computing ( allerton ) _ , 2012 , pp .15291545 .j. h. lee and w. 
choi , `` on the achievable dof and user scaling law of opportunistic interference alignment in 3-transmitter interference channels , '' _ ieee trans .wireless commun ._ , vol .12 , no .27432753 , jun . 2013 . , `` effect of receive spatial diversity on the degrees - of - freedom region in multi - cell random beamforming , '' _ ieee trans. wireless commun ._ , submitted , preprint , [ online ] .available : http://arxiv.org/abs/1303.5947 .j. h. lee , w. choi , and b. d. rao , `` multiuser diversity in interfering broadcast channels : achievable degrees of freedom and user scaling law , '' _ ieee trans .wireless commun ._ , vol . 12 , no . 11 , pp .58375849 , nov .2013 .r. t. krishnamachari and m. k. varanasi , `` interference alignment under limited feedback for interference channels , '' in _ proc .ieee intl symp .inf . theory ( isit ) _ , austin , tx , june 2010 , pp .619623 .s. pereira , a. paulraj , and g. papanicolaou , `` opportunistic scheduling for multiantenna cellular : interference limited regime , '' in _ proc .asilomar conference on signals , systems and computers _ , pacific grove , ca ,2007 .q. shi , m. razaviyayn , z. q. luo , and c. he , `` an iteratively weighted mimo approach to distributed sum - utility maximization for a mimo interfering broadcast channel , '' _ ieee trans .signal process ._ , vol .59 , no .43314340 , sep .2011 .z. peng , w. xu , j. zhu , h. zhang , and c. zhao , `` on performance and feedback strategy of secure multiuser communications with mmse channel estimate , '' _ ieee trans .wireless commun ._ , vol . 15 , no . 2 , pp .16021616 , feb .
|
in this paper , we propose an opportunistic downlink interference alignment ( odia ) for interference - limited cellular downlink , which intelligently combines user scheduling and downlink ia techniques . the proposed odia not only efficiently reduces the effect of inter - cell interference from other - cell base stations ( bss ) but also eliminates intra - cell interference among spatial streams in the same cell . we show that the minimum number of users required to achieve a target degrees - of - freedom ( dof ) can be fundamentally reduced , i.e. , the fundamental user scaling law can be improved by using the odia , compared with the existing downlink ia schemes . in addition , we adopt a limited feedback strategy in the odia framework , and then analyze the number of feedback bits required for the system with limited feedback to achieve the same user scaling law of the odia as the system with perfect csi . we also modify the original odia in order to further improve sum - rate , which achieves the optimal multiuser diversity gain , i.e. , , per spatial stream even in the presence of downlink inter - cell interference , where denotes the number of users in a cell . simulation results show that the odia significantly outperforms existing interference management techniques in terms of sum - rate in realistic cellular environments . note that the odia operates in a non - collaborative and decoupled manner , i.e. , it requires no information exchange among bss and no iterative beamformer optimization between bss and users , thus leading to an easier implementation . inter - cell interference , interference alignment , degrees - of - freedom ( dof ) , transmit & receive beamforming , limited feedback , multiuser diversity , user scheduling .
|
in this introduction , we first provide a colloquial account to the problem of regularizability .next , we outline our contribution . consider a network of nodes and edges .the nodes represent the network actors and the edges are the connections between actors .another ingredient is the force of the connection between two connected actors .we can represent this force with a number , called the weight associated with each edge : the greater the weight , the stronger the relation between the actors linked by the edge . the _ strength _ of a node as the sum of the weights of the edges connecting the node to some neighbor .the strength of a node gives a rough estimate of how much important is the node .in real networks , the distribution of connections among actors is typically very asymmetric : a few actors have many connections and the most have few links .similarly , few actors generally collect a great part of total strength in the network .this disparity can be a harbinger of frictions and crisis between the actors of the network .consider , by way of example , figure [ fig : gasreal ] , which depicts ( a subset of ) the european natural gas pipeline network .nodes are european countries ( country codes according to iso 3166 - 1 ) and there is an undirected edge between two nations if there exists a natural gas pipeline that crosses the borders of the two countries .data has been downloaded from the website of the international energy agency ( www.iea.org ) .the original data corresponds to a directed , weighted multigraph , with edge weights corresponding to the maximum flow of the pipeline .we simplified and symmetrized the network , mapping the edge weights in a consistent way .the distribution of strength of the countries is very skewed , with germany leading the ranking with a strength value of 46.25 and luxembourg closing the ranking with a strength value of 0.44 ( two orders of magnitude less than germany ) .the _ regularization problem _ we approach in this work is the following : given a network with fixed nodes and edges , we look to an assignment of weight to each edge of the network such that all nodes have the same non - zero strength value .hence , we seek for a way to associate a level of force with the connections among actors so that the resulting network is found to be egalitarian , that is , all players have the same importance .such a network is supposed to be less prone to problematic frictions among its constituents .figure [ fig : gasreg ] shows a regularized version of the gas pipeline network : edge weights are such such each country has now the same strength .notice that regularized edge weights are generally quite different from the real ones depicted in figure [ fig : gasreal ] .the regularization problem sketched above is customarily defined on undirected graphs and constrained to nonnegative integer weights . in this paper , we build a _ regularization hierarchy _ of ( either directed and undirected ) graph classes : at the bottom of the hierarchy lies the set of graphs that are regularizable with arbitrary ( possibly negative ) weights . above this classthere is the set of graphs that are regularizable with nonnegative weights , and above the set of graphs that are regularizable with positive weights . at the top of the hierarchyis the class of naturally regular graphs , that is , graphs whose nodes have already the same degree without any weighting . 
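Before formalizing the hierarchy, it may help to see the regularization problem as a concrete computation. The following minimal sketch encodes the node-strength equations of a toy undirected graph as a linear feasibility problem in the edge weights and the common strength, with variable bounds emulating the three weighting regimes considered below; the toy graphs, the bound values, and the normalizations are illustrative assumptions (the formal linear system and its complexity are discussed later in the paper).

```python
# Sketch of the regularization problem as a linear feasibility test: given an
# undirected graph, look for edge weights making every node's strength equal.
# Toy graphs and the normalizations below are illustrative choices only.
import numpy as np
from scipy.optimize import linprog

def regularizable(edges, n, kind="positive"):
    """Feasibility of  B w = delta * 1  with w constrained according to `kind`."""
    E = len(edges)
    B = np.zeros((n, E))
    for e, (u, v) in enumerate(edges):
        B[u, e] = 1.0
        B[v, e] = 1.0
    A_eq = np.hstack([B, -np.ones((n, 1))])          # unknowns: (w_1 .. w_E, delta)
    b_eq = np.zeros(n)
    if kind == "positive":       # w >= 1 (scale-invariant stand-in for w > 0), delta free
        bounds = [(1, None)] * E + [(0, None)]
    elif kind == "nonnegative":  # w >= 0, force a nontrivial solution via delta >= 1
        bounds = [(0, None)] * E + [(1, None)]
    else:                        # "negative": arbitrary real weights, delta >= 1
        bounds = [(None, None)] * E + [(1, None)]
    res = linprog(np.zeros(E + 1), A_eq=A_eq, b_eq=b_eq, bounds=bounds)
    return res.success, (res.x if res.success else None)

graphs = {
    "triangle": (3, [(0, 1), (1, 2), (0, 2)]),
    "paw":      (4, [(0, 1), (1, 2), (0, 2), (2, 3)]),   # triangle plus a pendant edge
    "star":     (4, [(0, 1), (0, 2), (0, 3)]),
}
for name, (n, edges) in graphs.items():
    flags = {k: regularizable(edges, n, k)[0] for k in ("negative", "nonnegative", "positive")}
    print(name, flags)
```

The `w >= 1` bound in the positive case is legitimate because any strictly positive solution can be rescaled; likewise, any nontrivial nonnegative or arbitrary-weight solution can be rescaled so that the common strength is at least one.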
we show that the regularization hierarchy is strict , meaning that each class is properly included in the lower one . since a graph can be represented with an adjacency matrix , each graph class in the regularization hierarchy has a counterpart matrix class .we investigate , for the classes of the hierarchy , structural conditions on both the graph and the corresponding adjacency matrix that are necessary and sufficient for the inclusion in the class .in particular for the bottom and largest class of our hierarchy we find a structural characterization of its members that can be translated in matrix language by means of the notion of chainable matrix . for graphs regularizable with nonnegative and positive weightsthe two characterizations involve the notions of spanning cycle forest for the graph and of support and total support for its adjacency matrix .we prove that , for all classes , if a regularization solution exists , then there exists also an integral solution .we study the computational complexity of the problem of positioning a graph in the hierarchy , and model the problem of finding the regularization weights as a linear programming feasibility problem .the paper is organized as follows . in section[ hierarchy ] we present the regularization hierarchy .in particular , in sections [ prg ] and [ nnrg ] we review and extend to directed graphs some known results on regularizability with nonnegative and positive weights . in section [ nrg ]we investigate negative regularizable graphs . in section [ inclusion ]we show that the inclusion relation among the graph classes is strict .section [ complexity ] is devoted to computational issues . in section [ related ]we present the related literature .conclusions are drawn in section [ conclusion ] .let be a square matrix .we denote with the graph whose adjacency matrix is , that is , has nodes numbered from to and it has an edge from node to node with weight if and only if . any square matrix corresponds to a weighted graph and any weighted graph , once its nodes have been ordered , corresponds to a matrix .a _ permutation _matrix is a square matrix such that each row and each column of contains exactly one entry equal to and all other entries are equal to .the two graphs and are said to be isomorphic since they differ only in the way their nodes are ordered .if and are square matrices of the same size , then we write if implies , that is , the set of non - zero entries of is a subset of the set of non - zero entries of .if , then is a subgraph of .we write if both and .hence , means that the two matrices have the same zero / nonzero pattern , and the graphs and have the same topological structure ( they may differ only for the weighing of edges ) .given , a matrix with nonnegative entries is -_bistochastic _ if , where we denote with the vector of all 1 s . 
a 1-bistochastic matrix is simply called bistochastic .let be a graph with nodes and edges .we enumerate the edges of from to .let be the out - edges incidence matrix such that if corresponds to edge for some , that is edge exits node , and otherwise .similarly , let be the in - edges incidence matrix such that if corresponds to edge for some , that is edge enters node , and otherwise .consider the following incidence matrix : let be the vector of edge weight variables and be a variable for the regularization degree .the regularization linear system is as follows : if in an undirected graph ( that is , its adjacency matrix is symmetric ) , then there is no difference between in - edges and out - edges . in this case, the incidence matrix of is an matrix such that if belongs to edge and otherwise .notice that system ( [ eq : system ] ) has always the trivial solution .the set of non - trivial solutions of system ( [ eq : system ] ) induces the following _ regularization hierarchy _ of graphs : * _ negatively regularizable graphs _ : those graphs for which there exists at least one solution and of system ( [ eq : system ] ) .notice that can contain negative components but the regularization degree must be positive . * _ nonnegatively regularizable graphs _ : those for which there exists at least one solution of system ( [ eq : system ] ) such that has nonnegative entries and .* _ positively regularizable graphs _ : those for which there exists at least one solution of system ( [ eq : system ] ) such that has positive entries and .* _ regular graphs _ : those graphs for which and is a solution of system ( [ eq : system ] ) .clearly , a regular graph is a positively regularizable graph , a positively regularizable graph is a nonnegatively regularizable graph , and a nonnegatively regularizable graph is a negatively regularizable graph . in section [ inclusion ] we show that this inclusion is strict , meaning that each class is properly contained in the previous one , for both undirected and directed graphs . informally , a graph is positively regularizable if it becomes regular by weighting its edges with positive values .more precisely , if is a graph and its adjacency matrix , then graph ( or its adjacency matrix ) is _ positively regularizable _ if there exists and an -bistochastic matrix such that .a matrix has _ total support _ if and for every pair such that there is a permutation matrix with such that .notice that a permutation matrix corresponds to a graph whose strongly connected components are cycles of length greater than or equal to 1 .we call such a graph a ( directed ) _ spanning cycle forest_. hence , a matrix has total support if each edge of is contained in a spanning cycle forest of .the following result can be found in .for the sake of completeness , we give the proof in our notation .[ th : regularizability ] let be a square matrix. then is positively regularizable if and only if has total support .if is positively regularizable there exists and an -bistochastic such that .clearly is bistochastic and has the same pattern of . by birkhoff theorem ,see for example , we obtain , where every , and every is a different permutation matrix .hence , for every such that there exists some permutation matrix such that {i , j } = 1 ] and .let . notice that is nonnegative and has the same pattern than .moreover , for every it holds , that is is bistochastic .thus , where , that is , is -bistochastic .we conclude that is positively regularizable . 
from theorem [ th : regularizability ] and definition of total support, it follows that a graph is positively regularizable if and only if each edge is included in a spanning cycle forest .moreover , from the proof of theorem [ th : regularizability ] it follows that if a graph is positively regularizable then there is a solution of the regularization system ( [ eq : system ] ) with integer weights .we now switch to the undirected case , which corresponds to symmetric adjacency matrices .let , with a permutation matrix .each element is either ( if both and ) , 1 ( if either or but not both ) , or 2 ( if both and ) . notice that is symmetric and -bistochastic .moreover , corresponds to an undirected graph whose connected components are single edges or cycles ( including loops , that are cycles of length 1 ) . we call these graphs ( undirected ) _ spanning cycle forests_. for symmetric matrices , we have the following : [ th : regularizability2 ] let be a symmetric square matrix .then has total support if and only if for every pair such that there is a matrix , with a permutation matrix , such that and .if has total support then for every pair such that there is a permutation such that and .let .hence and since is symmetric .on the other hand , if for every pair such that there is a matrix such that and , then and ( and and ) .hence an undirected graph is positively regularizable if each edge is included in an undirected spanning cycle forest .informally , a nonnegatively regularizable graph is a graph that can be made regular by weighting its edges with nonnegative values .more precisely , if is a graph and its adjacency matrix , then graph is _ nonnegatively regularizable _ if there exists and an -bistochastic matrix such that .a matrix has _ support _ if there is a permutation matrix such that .the following result is well - known , see for instance . for the sake of completeness, we give the proof in our notation .[ th : quasi - regularizability ] let be a square matrix. then is nonnegatively regularizable if and only if has support .suppose is nonnegatively regularizable .then there is and an -bistochastic with . since , we have that is bistochastic and .hence , by birkhoff theorem , , where every , and every is a different permutation matrix .let be any permutation matrix in the sum that defines .then and hence .we conclude that has support .on the other hand , suppose has support .then there is a permutation matrix with .since is bistochastic , we have that is nonnegatively regularizable . from theorem[ th : quasi - regularizability ] and definition of support it follows that a graph is nonnegatively regularizable if and only if it contains a spanning cycle forest .furthermore , from the proof of theorem [ th : quasi - regularizability ] it follows that if a graph is nonnegatively regularizable then there exists a solution of the regularization system with binary weights ( 0 and 1 ) .we now consider undirected graphs , that is , symmetric matrices .we have the following : [ th : regularizability3 ] let be a symmetric square matrix .then has support if and only if there is a matrix , with a permutation matrix , with .if has support then there is a permutation with .let . since is symmetric we have . 
on the other hand , if there is a matrix , with permutation and , then ( and ) .hence an undirected graph is nonnegatively regularizable if and only if it contains an undirected spanning cycle forest and there exists a solution of the regularization system with weights 0 , 1 and 2 .informally , a negatively regularizable graph is a graph that can be made regular by weighting its edges with arbitrarily values .more precisely , if is a graph and its adjacency matrix , then graph is _ negatively regularizable _ if there exists and a matrix , whose entries are not restricted to be nonnegative , such that and such that .our goal here is to topologically characterize the class of negatively regularizable graphs .we first address the case of undirected graphs ( that is , symmetric adjacency matrices ) .the next result , see , will be useful . [lem : rank ] let be a connected undirected graph with nodes and let be the incidence matrix of .then the rank of is if is bipartite and otherwise .first of all , notice that an undirected graph is negatively regularizable ( resp ., nonnegatively , positively ) if and only if all its connected components are so .it follows that we can focus on undirected graphs that are connected .let be the set of the nodes of an undirected connected graph .if is bipartite then can be partitioned into two subsets and such that each edge connects a node in with a node in . if the bipartite graph is said to be _ balanced _ , otherwise it is called _unbalanced_. let us introduce a vector , that we call the _ separating vector _, where the entries corresponding to the nodes of are equal to and the entries corresponding to the nodes of are equal to .clearly we have that , where is the incidence matrix of the graph .we have the following result : [ th : negreg2 ] let be an undirected connected graph .then : 1 .if is not bipartite , then is negatively regularizable ; 2 .if is bipartite and balanced , then is negatively regularizable ; 3 . if is bipartite and unbalanced , then is not negatively regularizable .we prove item ( 1 ) . by virtue of lemma [ lem : rank ] the incidence matrix of rank equal to the number of its rows .by permuting the columns of without loss of generality we can assume that : where is and nonsingular and in .let be a vector of length and let be a vector of length obtained by concatenating with a vector of 0s .notice that , hence .then hence the linear system has a nontrivial solution for every so that is negatively regularizable . if then the vector has integer entries , since the entries of are integers .we prove item ( 2 ) . if is bipartite and balanced , then is even and . by lemma [ lem : rank ]the rank of is and by permuting the rows and columns of , without loss of generality , we can assume that : where is and nonsingular , is , is , and is .let and be a vector of length , and be a vector of length obtained by concatenating with a vector of 0s .notice that , hence . then if is the separating vector , then so that since half entries of are equal to and the remaining half are equal to , it must be .hence so that and is a solution of system ( [ eq : system ] ) .hence is negatively regularizable .again , if the vector has integer entries .we prove item ( 3 ) . if is bipartite and unbalanced , then .suppose is negatively regularizable .then where and .let be the separating vector .since we have that , that is . since is unbalanced we have and hence it must be . hence is not negatively regularizable . 
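The characterization just proved translates into a simple linear-time test. Below is a small pure-Python sketch, with an ad-hoc edge-list encoding and illustrative examples: it 2-colours every connected component by breadth-first search and accepts a component if it is non-bipartite, or bipartite with the two sides of equal size.

```python
# Sketch of the test implied by the theorem above: an undirected connected
# component is negatively regularizable iff it is non-bipartite, or bipartite
# with equally many nodes on the two sides.
from collections import deque

def negatively_regularizable(n, edges):
    adj = [[] for _ in range(n)]
    for u, v in edges:
        adj[u].append(v)
        adj[v].append(u)
    colour = [None] * n
    for start in range(n):
        if colour[start] is not None:
            continue
        # BFS 2-colouring of the component containing `start`
        colour[start] = 0
        comp, bipartite = [start], True
        queue = deque([start])
        while queue:
            u = queue.popleft()
            for v in adj[u]:
                if colour[v] is None:
                    colour[v] = 1 - colour[u]
                    comp.append(v)
                    queue.append(v)
                elif colour[v] == colour[u]:
                    bipartite = False        # odd cycle found: this component is fine
        if bipartite:
            whites = sum(1 for v in comp if colour[v] == 0)
            if 2 * whites != len(comp):      # unbalanced bipartite component
                return False
    return True

print(negatively_regularizable(4, [(0, 1), (0, 2), (0, 3)]))                           # star: False
print(negatively_regularizable(6, [(0, 1), (1, 2), (2, 3), (3, 4), (4, 5), (5, 0)]))   # 6-cycle: True
print(negatively_regularizable(3, [(0, 1), (1, 2), (2, 0)]))                           # triangle: True
```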
from the proof of theorem [ th : negreg2 ]it follows that if a graph is negatively regularizable then there exists regularization solution with integer weights .we recall that the same holds for nonnegatively and positively regularizable graphs .a _ tree _ is an undirected connected acyclic graph . since acyclic , a tree is bipartite . as a corollary , a tree is negatively regularizable if and only if it is balanced as a bipartite graph .the next theorem points out that acyclic and cyclic graphs behave differently when they are not negatively regularizable .[ th : negreg1 ] let be an undirected connected graph that is not negatively regularizable and let be its incidence matrix. then 1 .if is acyclic then the system has only the trivial solution and ; 2 .if is cyclic then the system has infinite many solutions such that and .let be the number of nodes and be the number of edges of .consider the homogeneous linear system , where is and .notice that implies .since is not negatively regularizable then either there is only one trivial solution or the system has infinite many solutions different from the null one with and .since is connected , then .we have that : 1 . if is acyclic then and by virtue of lemma [ lem : rank ] .hence the columns of are linearly independent so that the system can not have solutions with and .when has cycles , we have indeed that . but , since has rows .hence so that the system has infinite many solutions .hence graphs that are not negatively regularizable can be partitioned in two classes : unbalanced trees , for which the only solution of system is the trivial null one , and cyclic bipartite unbalanced graphs , for which there are infinite many solutions with and .for instance , consider the chair graph with undirected edges , , , , .it is cyclic , bipartite and unbalanced .if and are labelled with , and are labelled with , and is labelled with , then all nodes have degree 0 .we now switch to the case of directed graphs .consider the following mapping from directed graphs to undirected graphs .if is a directed graph , let be its undirected counterpart such that each node of corresponds to two nodes ( with color white ) and ( with color black ) of , and each directed edge in corresponds to the undirected edge .notice that is a bipartite graph with nodes ( white nodes and black nodes ) and edges that tie white and black nodes together .moreover , the degree of the white node ( resp . ,black node ) of is the out - degree ( resp ., the in - degree ) of the node in . despite is weakly or strongly connected , can have many connected components. however , we have the following : [ th : negreg3 ] let be a directed graph .then is negatively regularizable ( resp . ,nonnegatively regularizable , positively regularizable ) if and only if is negatively regularizable ( resp . ,nonnegatively regularizable , positively regularizable ) .the crucial observation is the following : if we order in the white nodes before the black nodes , then the incidence matrix of the directed graph ( as defined in this section ) is precisely the incidence matrix of .it follows that is a solution of system ( [ eq : system ] ) for if and only if is a solution of system ( [ eq : system ] ) for .hence the thesis .notice that the connected componentes of are bipartite graphs . using theorem [ th : negreg2 ], we have the next result .[ th : negreg4 ] let be a directed graph . 
then is negatively regularizable if and only if all connected components of are balanced ( they have the same number of white and black nodes ) .theorem [ th : negreg4 ] can be stated directly on without the use of its undirected mate .we need , however , the following auxiliary definitions .[ def : path ] let be a directed graph . a directed path of length is a sequence of directed edges of the form : an alternating path of type 1 of length is a sequence of directed edges of the form : if we have an alternating cycle .an alternating path of type 2 of length is a sequence of directed edges of the form : if we have an alternating cycle . observe that if we reverse the edges in even ( resp ., odd ) positions of an alternating path of type 1 ( resp . ,type 2 ) we get a directed path .moreover , in simple graphs , an alternating cycle is either a self - loop or an alternating path of even length greater than or equal to 4 .let be a directed graph .we define an _ alternating path relation _ on the set of edges of such that for , we have if there is an alternating path that starts with and ends with .notice that is reflexive , symmetric and transitive , hence it is an equivalence relation .thus induces a partition of the set of edges where are nonempty pairwise disjoint sets of edges .it is easy to realize that for each the edges of corresponds to the edges of some connected components of the undirected counterpart of .each induces a subgraph of .we say that a node in is a _ white _ node if it has positive outdegree , it is a _ black _ node if it has positive indegree , it is a _ source _ node if it has null indegree , and it is a _ sink _node if it has null outdegree .notice that a node can be both white and black , or neither white nor black ; also , it can be both source and sink , or neither source nor sink .a corollary of theorem [ th : negreg4 ] is the next .[ th : negreg5 ] let be a directed graph and be the partition of edges induced by the alternating path binary relation . then is negatively regularizable if and only if 1 . contains neither source nor sink nodes ; 2 .all subgraphs induced by edge sets have the same number of white and black nodes .suppose is negatively regularizable .if contains a source or a sink node , then the regularization degree of must be , hence can not be negatively regularizable .hence we assume that contains neither source nor sink nodes . by theorem[ th : negreg4 ] we have that all connected components of the undirected counterpart of are balanced ( have the same number of white and black nodes ) . since for each the edges of corresponds to the edges of some connected components of , and a white node ( resp . , black node ) in corresponds to a white node ( resp . , black node ) in , we have that all subgraphs induced by edge sets have the same number of white and black nodes . on the other hand ,if contains neither source nor sink nodes , then each connected component of contains at least one edge .hence , each connected component in corresponds to some edge set of . since all subgraphs induced by edge sets have the same number of white and black nodes , and a white node ( resp . 
, black node ) in corresponds to a white node ( resp ., black node ) in , we have that all connected components of are balanced , hence by theorem [ th : negreg4 ] we have that is negatively regularizable .it is possible to obtain a formulation of theorem [ th : negreg5 ] starting from the adjacency matrix of the graph .first of all , we observe that , given a matrix , if and are permutation matrices then the two graphs and are not necessarily isomorphic . however , about , the following simple result holds .[ lm : iso ] let be square matrix an let and two permutation matrices .then and are isomorphic .if we set then is equal to .moreover so that and are isomorphic . now , we recall the definition of chainable matrix . an matrix is chainable if 1 . has no rows or columns composed entirely of zeros , and 2 . for each pair of non - zero entries and of there is a sequence of non - zero entries for such that , and for either or . as noted in , the property of being chainable can be described by saying that one may move from a nonzero entry of to another by a sequence of rook moves on the nonzero entries of the matrix .notice that if is the adjacency matrix of a graph then is chainable if and only if it holds that . actually an alternating path between the edges of corresponds to rook moves between the nonzero entries of .it is interesting to observe that if is chainable then is chainable .in addition , if is chainable and and are permutation matrices then is chainable , since if two entries of the matrix belong to the same row or column then this property is not lost after a permutation of rows and columns .[ ex : chains ] the matrix is the adjacency matrix of the top left graph in figure [ fig : directed ] .the matrix is not chainable .the equivalence has the two equivalence classes , , , and , .the matrix is the adjacency matrix of the top right graph in figure [ fig : directed ] .the matrix is chainable and this means that all the six edges belong to the same equivalence class .theorem 1.2 in can be restated in our notation as follows .[ lem : conchain ] the graph is connected if and only if is chainable .we borrow from another useful result .[ lem : diagblock ] if is and has no rows or columns of zeros , then there are permutations matrices and so that where the diagonal blocks , for , are chainable .now we are ready to rewrite theorem [ th : negreg5 ] .[ th : negregchar ] let be a square matrix , and let be the graph whose adjacency matrix is . then 1 . is negatively regularizable if and only if there exist two permutation matrices and such that is a block diagonal matrix with square and chainable diagonal blocks ; 2 . 
is not negatively regularizable if and only if there exist two permutation matrices and such that is a block diagonal matrix with chainable diagonal blocks some of which are not square .we start by proving item ( 1 ) .let us assume first that there exist two permutation matrices and such that is block diagonal with square and chainable diagonal blocks .by lemma [ lem : conchain ] , this means that each diagonal block corresponds to a connected component of .since the diagonal blocks are square the connected components of are balanced and thus negatively regularizable .this implies that is negatively regularizable .hence is negatively regularizable , being isomorphic to in the view of lemma [ lm : iso ] .we obtain the thesis by means of theorem [ th : negreg3 ] .now let us assume that is negatively regularizable .this implies that can not contain rows or columns of zeros , so that , by means of lemma [ lem : diagblock ] we obtain that there exist two permutations and such that is block diagonal with chainable diagonal blocks .since each of the diagonal blocks corresponds to a connected component of the presence of non - square blocks would imply the presence of unbalanced connected components in .hence would be not negatively regularizable .but this is impossible since is isomorphic to .hence all the diagonal blocks must be square . now to prove item ( 2 ) we note that it is impossible to find two permutations and such that is block diagonal with square and chainable diagonal blocks and at the same time two permutations and such that is block diagonal with chainable diagonal blocks some of which are not square .indeed , by lemma [ lm : iso ] , the two graphs and would be isomorphic , since they are both isomorphic to .let us consider the adjacency matrix of the top right graph in figure [ fig : undirected ] . the matrix is not chainable .observe that the equivalence classes of the relation are , , , , and , , , , . by using two permutation and to move rows , , ( the white nodes of ) on the top of the matrix and rows , , ( the white nodes of ) on the bottom and to move columns , , ( the black nodes of ) on the left and columns , , ( the black nodes of ) on the right we obtain observe that is block diagonal with square and chainable diagonal blocks .hence is negatively regularizable . as a second example , let us consider again the adjacency matrix of the top left graph in figure [ fig : directed ] presented in example [ ex : chains ] . by permuting rows and columns of according to the black and white nodes that appear in the two equivalence classes of the relation we obtain the presence of non - square chainable diagonal blocks implies that can not be negatively regularizable .we show that the regularization hierarchy is strict , meaning that each class in properly contained in the previous one , for both undirected and directed graphs .we first address the undirected case .consider figure [ fig : undirected ] .the top - left graph is not negatively regularizable , since it is bipartite and unbalanced . 
in particular , since each leaf of the star must have the same degree , each edge must be labelled with the same weight , but this forces the degree of the center to be , unless . a graph that is negatively regularizable but not nonnegatively regularizable is the top - right one . since the graph is bipartite and balanced , it is negatively regularizable : if we label each external edge with and the central bridge with , then each node has degree . the graph is not nonnegatively regularizable since it contains no spanning cycle forest , hence it does not have support . a graph that is nonnegatively regularizable but not positively regularizable is the bottom - left one . the graph is nonnegatively regularizable since edges and form a spanning cycle forest . the solution that labels the edges and with and the other edges with is a nonnegative regularizability solution with regularization degree . the graph is not positively regularizable since edges and do not belong to any spanning cycle forest . finally , a graph that is positively regularizable but not regular is the bottom - right one . the graph is positively regularizable since each edge belongs to the spanning cycle forest formed by the triangle that contains the edge plus the opposite edge . if and we label the outer edges with and the inner edges with , we have a positive regularizability solution with regularization degree . clearly , the graph is not regular . we now address the directed case . consider figure [ fig : directed ] . the top - left graph is not negatively regularizable . the equivalence relation has the two equivalence classes , , , and , . class is unbalanced since it contains three white nodes ( 1 , 2 and 4 ) and two black nodes ( 2 and 3 ) . also , class is unbalanced since it contains one white node ( 3 ) and two black nodes ( 1 and 4 ) . a graph that is negatively regularizable but not nonnegatively regularizable is the top - right one . the graph is chainable and the equivalence relation has only one class containing all edges . all nodes are both white and black and hence the class is balanced . if we weight each edge with 1 , except edge , which we weight with -1 , then all nodes have in and out degrees equal to 1 . the graph is not nonnegatively regularizable since there is no spanning cycle forest : the only cycles are indeed the two loops , which do not cover node 2 . a graph that is nonnegatively regularizable but not positively regularizable is the bottom - left one . indeed , the loop plus the 3-cycle form a spanning cycle forest . however , the remaining edges ( those on the 2-cycle ) are not contained in any spanning cycle forest . finally , a graph that is positively regularizable but not regular is the bottom - right one . the loop plus the 3-cycle and the two 2-cycles form two distinct spanning cycle forests that together cover all edges . it is easy to see that the graph is not regular .
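the hand checks carried out on the directed examples above can be automated . the following minimal sketch ( the edge lists are hypothetical and are not those of figure [ fig : directed ] ) builds the bipartite undirected mate of a digraph , whose connected components correspond to the alternating - path classes , and tests the two conditions of theorem [ th : negreg5 ] .

```python
# a minimal sketch (not the paper's code) of the linear-time test behind
# theorem [th:negreg5]: build the undirected bipartite "mate" of a digraph,
# whose connected components correspond to the alternating-path classes,
# and check that every class is balanced and that no source or sink exists.
# the example edge lists below are hypothetical.
import networkx as nx

def negatively_regularizable(edges):
    nodes = {v for e in edges for v in e}
    outdeg = {v: 0 for v in nodes}
    indeg = {v: 0 for v in nodes}
    mate = nx.Graph()
    for u, v in edges:
        outdeg[u] += 1
        indeg[v] += 1
        mate.add_edge(("white", u), ("black", v))   # one mate edge per arc
    if any(outdeg[v] == 0 or indeg[v] == 0 for v in nodes):
        return False    # a source or a sink forces regularization degree 0
    for comp in nx.connected_components(mate):
        whites = sum(1 for kind, _ in comp if kind == "white")
        blacks = len(comp) - whites
        if whites != blacks:
            return False    # unbalanced alternating-path class
    return True

print(negatively_regularizable([(1, 2), (2, 3), (3, 1)]))   # True: a directed 3-cycle
print(negatively_regularizable([(1, 2), (1, 3)]))           # False: node 1 is a source, 2 and 3 are sinks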
in this sectionwe make some observations on the computational complexity of positioning a graph in the hierarchy we have developed .the reduction of a directed graph turns out to be useful to check whether a graph is nonnegatively as well as positively regularizable .we remind that a _ matching _ is a subset of edges with the property that different edges of can not have a common endpoint .a matching is called if every node of the graph is the endpoint of ( exactly ) one edge of .notice that a bipartite graph has a spanning cycle forest if and only if it has a perfect matching .moreover , every edge of a bipartite graph is included in a spanning cycle forest if and only if every edge of the graph is included in a perfect matching . using theorem [ th : regularizability ] , we hence have the following .[ th : reg ] let be a directed graph and its undirected counterpart . then : 1 . is nonnegatively regularizable if and only if has a perfect matching ; 2 . is positively regularizable if and only if every edge of is included in a perfect matching .the easiest problem is to decide whether a graph is negatively regularizable . for an undirected graph with nodes and edges, it involves finding the connected components of and determining if they are bipartite , and in case , if they are balanced . for a directed graph , it involves finding the connected components of the undirected graph ( which are bipartite graphs ) and determining if they are balanced .all these operations can be performed in linear time in the size of the graph .the problem can be formulated as the following linear programming feasibility problem ( any feasible solution is a regularization solution ) : where is the incidence matrix of the graph as defined at the beginning of section [ hierarchy ] . also regular graphs can be checked in linear time by computing the indegrees and outdegrees for every node in directed graphs , or simply the degrees in the undirected case .the complexity for nonnegatively and positively regularizability is higher , but still polynomial . to determine if an undirected graph is nonnegatively regularizable , we have to find a spanning cycle forest . using the construction adopted in the proof of theorem 6.1.4 in ,this boils down to solve a perfect matching problem on a bipartite graph of the same asymptotic size of ( precisely , with a double number of nodes and edges ) .this costs using hopcroft - karp algorithm for maximum cardinality matching in bipartite graphs , that is on sparse graphs .the directed case is covered by theorem [ th : reg ] with the same complexity . moreover, the problem can be encoded as the following linear programming feasibility problem : to decide if an undirected graph is positively regularizable , we have to find a spanning cycle forest for every edge of .using again theorem 6.1.4 in , this amounts to solve at most perfect matching problems on bipartite graphs of the same asymptotic size of , which costs , that is on sparse graphs .this bound can be improved by exploiting the equivalence of positive regularizability with total support , see section [ related ] .again , the directed case is addressed by theorem [ th : reg ] with the same complexity .finally , the problem is equivalent to the following linear programming feasibility problem : graphs were introduced and studied by berge , see also chapter 6 in .we summarize in the following the main results related to our work . 
a connected undirected graph is nonnegatively regularizable ( quasi - regularizable ) if and only if for every independent set of nodes of it holds that , where is the set of neighbors of . a connected undirected graph is positively regularizable ( regularizable ) if and only if is either elementary bipartite or 2-bicritical . a bipartite graph is elementary if and only if it is connected and each edge is included in a perfect matching . a graph is 2-bicritical if and only if for every nonempty independent set of nodes of it holds that . in , the vulnerability of an undirected graph is defined as , where is any independent nonempty set of nodes of . it holds that if and only if is nonnegatively regularizable . in addition , if and only if is 2-bicritical . hence , nonnegatively regularizable graphs , and in particular positively regularizable ones , tend to have low vulnerability . on the other hand , this does not hold for negatively regularizable graphs : as an example , consider the square matrix . the matrix is chainable and hence is negatively regularizable . it is not difficult to show that , hence the vulnerability of can be arbitrarily high as the graph grows . the condition of having total support for a matrix can be tested via the dulmage - mendelsohn decomposition : if a square matrix has support then there exist two permutations and such that is block upper triangular with square and fully indecomposable diagonal blocks , see . once the decomposition in blocks has been computed , testing total support amounts to checking that each of the positive entries of the matrix ( edges of the corresponding graph ) belongs to one of the square diagonal blocks determined by the decomposition . the cost of computing the decomposition is , and this is also the overall cost of checking total support . since , by theorem [ th : regularizability ] , total support is equivalent to positive regularizability , the same complexity bound extends to testing positive regularizability . the problem of regularizability can be seen as a member of a wide family of problems concerning the existence of matrices with prescribed conditions on the entries and on the sums of certain subsets of the entries , typically rows and columns . additional conditions on the matrices can be imposed , such as , for example , symmetry or skew - symmetry . brualdi in gives necessary and sufficient conditions for the existence of a nonnegative rectangular matrix with a given zero pattern and prescribed row and column sums . in , these results are generalized in various directions , in particular by considering the prescription that the entries of the matrix belong to finite or ( half)infinite intervals ( thus encompassing the case where some entries are prescribed to be zero or positive or nonnegative or are unrestricted ) . the results presented in this paper are not implied by the literature described above . moreover , our approach to regularizability is graph - theoretic . by generalizing the problem of graph regularization to directed graphs , possibly weighted with negative values , we defined a hierarchy of four nested graph classes corresponding to as many matrix classes . we found structural characterizations of the classes both in terms of graphs and of the corresponding matrices , and discussed the computational problem of deciding the positioning of a graph in the hierarchy . we argued that the regularization solution might be useful to alleviate tension in real weighted networks with great disparity of centralities ( and power ) among nodes .
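the matching - based characterizations discussed above also lend themselves to a direct computational check . the following sketch ( not an optimized implementation , and using hypothetical edge lists rather than the paper's figures ) reduces nonnegative and positive regularizability of a directed graph to perfect - matching problems on its bipartite mate , solved with the hopcroft - karp routine of networkx ; forcing an edge into a matching is done naively , by deleting its endpoints and testing whether the rest still has a perfect matching , mirroring the at - most - m matching computations mentioned in the complexity discussion .

```python
# a rough illustration (not an optimized implementation) of theorem [th:reg]:
# (non)negative/positive regularizability of a digraph reduces to perfect
# matchings on its bipartite mate; hopcroft-karp is provided by networkx.
# edge lists are hypothetical examples.
import networkx as nx
from networkx.algorithms import bipartite

def mate(edges):
    nodes = {v for e in edges for v in e}
    b = nx.Graph()
    left = {("w", v) for v in nodes}
    b.add_nodes_from(left, bipartite=0)
    b.add_nodes_from((("b", v) for v in nodes), bipartite=1)
    b.add_edges_from((("w", u), ("b", v)) for u, v in edges)
    return b, left

def has_perfect_matching(b, left):
    if b.number_of_nodes() == 0:
        return True
    m = bipartite.hopcroft_karp_matching(b, top_nodes=left & set(b))
    return len(m) == b.number_of_nodes()    # every node copy is matched

def nonnegatively_regularizable(edges):
    b, left = mate(edges)
    return has_perfect_matching(b, left)

def positively_regularizable(edges):
    b, left = mate(edges)
    if not has_perfect_matching(b, left):
        return False
    for u, v in edges:                       # force each arc into a matching
        h = b.copy()
        h.remove_nodes_from([("w", u), ("b", v)])
        if not has_perfect_matching(h, left):
            return False
    return True

cycle_with_chord = [(1, 2), (2, 3), (3, 1), (1, 3)]
print(nonnegatively_regularizable(cycle_with_chord))   # True: the 3-cycle is a spanning cycle forest
print(positively_regularizable(cycle_with_chord))      # False: arc (1, 3) lies on no spanning cycle forest
```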
in real cases , moving from an irregular real weighted network toward a regular ideal weighted network might be costly ( in terms of time and other relevant resources ) , since a central authority has to negotiate new force relations among the players of the network . moreover , we expect that the cost of this transformation is proportional to the distance between the real weight assignment and the ideal one . hence , for networks that admit a regularization solution , the optimum is to find the regularization solution closest to the real weight assignment , which amounts to solving a minimization problem similar to the following : c. berge , some common properties for regularizable graphs , edge - critical graphs and b - graphs , in : n. saito , t. nishizeki ( eds . ) , graph theory and algorithms , vol . 108 of lecture notes in computer science , springer , berlin , 1981 , pp . 108 - 123 .
|
a network is regularizable if it is possible to assign weights to its edges so that all nodes have the same degree . we define a hierarchy of four network classes in terms of their regularization properties . for the classes of the hierarchy , we investigate structural conditions on both the network and the corresponding adjacency matrix that are necessary and sufficient for the inclusion of a network in the class . moreover , we provide an algorithmic solution for the problem of positioning a network in the hierarchy . we argue that the regularization solution is useful to build an egalitarian , friction - free network in which all actors of the network has the same centrality ( and power ) . networks , regularizability , total support , centrality , power
|
the idea of implementing quantum information devices based on the use of single atoms or molecules has gone progressively growing up in the course of the last few years .the reason can be envied in the high contemporary evolution of theoretical dynamical models along with the ability reached by experimentalists in manipulating quantum objects first considered _theoretician s tools _ .the state of art is that , although it has been possible to obtain entanglement conditions between elementary systems in different physical scenarios , the temporal persistence of quantum coherence is an open problem . in this moment, it is therefore necessary to put attention on the theoretical aspects of decoherence in order to predict the experimental conditions under which it is maximally reduced .almost decoherence free substates seem to have the characteristics of eligibility needed to implement quantum computation .their generation and temporal persistence can be theoretically predicted in some high symmetrical models .the application of the early introduced formal solution of the markovian master equation ( nud theorem ) has supplied , in the few analyzed systems , the prediction of a conditional building up of entanglement .the result is obtained under symmetrical condition corresponding to specific locations of involved subsystems ( atoms , molecules ) . unfortunately , despite the positive results in isolating single quantum objects , the difficulties connected with the location of more than one atoms in fixed arbitrary points is an open task . in general , practical quantum gates implementation needs to meet stringent criteria in order to operate successfully . _first of all , the qubits has to be sufficiently isolated in order to manipulate them in a controlled environment .they need to be initialized precisely .the effective interactions among qubits should be carefully tuned and a set of quantum operations should be made possible in order to perform any other required quantum gates . 
finally , the system must be scalable to more than a few qubits _ . these considerations have encouraged us to speculate about the possibility of using hybrid entanglement conditions in order to implement quantum protocols . a single mode cavity and a two - level atom may be considered a hybrid two - qubit system . the isolation of a single atom for a sufficiently long time inside a cavity is today possible . moreover , the jaynes and cummings model is one of the few exactly soluble models in quantum optics . it predicts many interesting non - classical states that are experimentally testable in the laboratory . in realistic situations , however , one performs experiments in cavities with finite q and in the presence of atomic spontaneous emission . it therefore becomes of fundamental importance to know how the predictions of this model are affected by the unavoidable presence of loss mechanisms . this problem is currently studied very extensively , the main approximation being the assumption of two different reservoirs , one for the atom and one for the cavity mode respectively . the subsequent calculations are simpler under this approximation because the two baths do not introduce coherences between the atom and the cavity : the master equation is in lindblad form and is easier to solve than the one we derive , its solution being the total destruction of coherences because of the two distinct , non - communicating channels of dissipation . moreover , our equation contains the simpler one as a particular case . actually , here we show that we are able to find the exact solution of a dissipative j - c model assuming a common reservoir for the bipartite system , which , on the ground of the above considerations , appears to be a more realistic hypothesis . this leads to the prediction of new cooperative effects , induced by the zero - point fluctuations of the environment , between the atom and the cavity mode , such as the creation of conditional transient entanglement tending to become stationary as the strengths of the coupling with the reservoir take a well defined value . finally , in order to be maximally realistic , prof . j. m. raimond suggested that we also consider the loss of energy due to imperfect reflection at the mirrors . this correction of the microscopic model does not introduce complications in the solution of the corresponding master equation , because the new bath is independent of the first one and is , indeed , easy to treat from a theoretical point of view ( it induces no coherences ) . in the presence of the second decoherence channel , the building up of entanglement occurs during the transient period in which the atom is confined inside the cavity . the long time solution ( the order of magnitude of the time involved is given in the next sections ) shows that the introduction of the second bath destroys the coherence ( rabi oscillation ) between the two subsystems involved : the atom and the cavity mode .
despite this fact , the time involved in the decoherence process can be made much longer than the time necessary to implement a quantum protocol , as can be deduced from the theoretical analysis developed here when it is compared with the experimental measurements performed by haroche 's group . the measurements appear to be well fitted by the theoretical model proposed here . the standard models ( two different baths ) are able to reproduce only the top of the curve ( dissipation ) , whereas ours is able to reproduce also the lower part ( cooperation ) . the paper is structured as follows : in section ii we report the principal steps and approximations leading to the microscopic derivation of the markovian master equation ( mme ) and we solve it when , showing also the full equivalence between the mme and piecewise deterministic processes ( pdp ) . in section iii we apply the nud theorem to derive the solution of the dissipative j - c model . in section iv we try to justify the obtained dynamical behaviour in terms of continuous measurement theory . it is well known that under the rotating wave and born - markov approximations the master equation describing the reduced dynamical behavior of a generic quantum system linearly coupled to an environment can be put in the form $\dot{\rho}_s(t) = -i\,[ h_s , \rho_s(t) ] + d(\rho_s(t))$ , where is the hamiltonian describing the free evolution of the isolated system , and being the one - sided fourier transforms of the reservoir correlation functions . finally we recall that the operators and , which we are going to define and whose properties we are going to explore , act only in the hilbert space of the system . eq . ( [ me ] ) has been derived under the hypothesis that the interaction hamiltonian between the system and the reservoir , in the schrödinger picture , is given by , which is the most general form of the interaction .
in the above expression and operators acting respectively on the hilbert space of the system and of the reservoir .the eq.([hi ] ) can be written in a slightly different form if one decomposes the interaction hamiltonian into eigenoperators of the system and reservoir free hamiltonian .definition supposing the spectrum of and to be discrete ( generalization to the continuous case is trivial ) let us denote the eigenvalue of ( ) by ( ) and the projection operator onto the eigenspace belonging to the eigenvalue ( ) by ( ) .then we can define the operators : from the above definition we immediately deduce the following relations =-\omega a_{\alpha } ( \omega),\;\;\ ; [ h_b , b_{\alpha } ( \omega)]=-\omega b_{\alpha } ( \omega),\ ] ] =+\omega a^{\dag}_{\alpha } ( \omega)\;\;\ ; and \;\;\ ; [ h_b , b^{\dag}_{\alpha } ( \omega)]=+\omega b^{\dag}_{\alpha } ( \omega).\ ] ] an immediate consequence is that the operators e raise and lower the energy of the system by the amount respectively and that the corresponding interaction picture operators take the form finally we note that summing eq .( [ aconalfadiomega ] ) over all energy differences and employing the completeness relation we get the above positions enable us to cast the interaction hamiltonian into the following form the reason for introducing the eigenoperator decomposition , by virtue of which the interaction hamiltonian in the interaction picture can now be written as is that exploiting the rotating wave approximation , whose microscopic effect is to drop the terms for which , is equivalent to the schrodinger picture interaction hamiltonian : lemma [ th1 ] the rotating wave approximation imply the conservation of the free energy of the global system , that is =0\ ] ] the necessary condition involved in the previous proposition is equivalent to the equation =0 $ ] we are going to demonstrate .&=&[h_s+h_b , h_i]=[h_s , h_i]+[h_b , h_i]\\\nonumber & = & \sum_{\alpha , \omega } [ h_s , a_{\alpha } ( \omega ) ] \otimes b_{\alpha}^{\dag}(\omega ) + \sum_{\alpha , \omega } a_{\alpha } ( \omega ) \otimes [ h_b , b_{\alpha}^{\dag}(\omega)]\\\nonumber & = & -\sum_{\alpha , \omega}\omega a_{\alpha } ( \omega ) \otimes b_{\alpha}(-\omega)+\sum_{\alpha , \omega}\omega a_{\alpha } ( \omega ) \otimes b_{\alpha}(-\omega)=0.\end{aligned}\ ] ] where we have made use of eq .( [ com1],[com2 ] ) ' '' '' lemma [ th2 ] the detailed balance condition in the thermodynamic limit imply where ' '' '' corollary [ th3 ] let us suppose the temperature of the thermal reservoir to be the absolute zero , on the ground of lemma 2 immediately we see that let us now cast eq .( [ me ] ) in a slightly different form splitting the sum over the frequency , appearing in eq.([dissme ] ) , in a sum over the positive frequencies and a sum over the negative ones so to obtain where we again make use of eq .( [ aconalfadiomega ] ) . 
in the above expression we can recognize the first term as responsible of spontaneous and stimulated emission processes , while the second one takes into account stimulated absorption , as imposed by the lowering and raising properties of .therefore if the reservoir is a thermal bath at the corollary 4 tell us that the correct dissipator of the master equation can be obtained by suppressing the stimulated absorption processes in eq.([diss ] ) .we are now able to solve the markovian master equation when the reservoir is in a thermal equilibrium state characterized by .we will solve a cauchy problem assuming the factorized initial condition to be an eigenoperator of the free energy .this hypothesis does nt condition the generality of the found solution being able to extend itself to an arbitrary initial condition because of the linearity of the markovian master equation .nud theorem [ th4 ] if eq .( [ me ] ) is the markovian master equation describing the dynamical evolution of a open quantum system s , coupled to an environment b , assumed to be in the detailed - balance thermal equilibrium state characterized by a temperature t=0 , and if the global system is initially prepared in a state so that , where is the free energy of the global system then is in the form of a piecewise deterministic process , that is a process obtained combining a deterministic time - evolution with a jump process .the proof of the theorem is contained in the paper .my aim here is to give an explanation of the found implication .a pdp is a statistical mixture of alternative generalized trajectories evolving individually in a deterministic way .this statement is mathematically given by the equation where the quantum trajectories are obtained by the deterministic non - unitary equation where , in particular , and , , being with hermitian .finally , these last are _ generalized _ respect to f.petruccione and h.j.carmochael approach , which leads to , but nobody is able to do it , with exception of few highly symmetrical systems .the found solution ( nud theorem ) ensures that the dynamical processes , whose statistical mixture gives the open system stochastic evolution , are deterministic .this demonstrates that the evolution is representable as a piecewise deterministic process ( pdp ) .the found solution generalizes the pdps introduced by h.j.carmichael and formalized by f.petruccione and h.p.breuer . 
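for master equations that are of lindblad form , the piecewise deterministic picture coincides with the standard monte carlo wave - function unravelling , and the statement that the trajectory average reproduces the density - matrix evolution can be checked numerically . the following sketch uses qutip on a damped harmonic oscillator with illustrative parameters ; it is only meant to visualize the pdp idea and does not implement the generalized trajectories of the nud theorem .

```python
# a minimal illustration, for a master equation that *is* of lindblad form,
# of the piecewise deterministic picture: averaging quantum trajectories
# (deterministic non-unitary evolution interrupted by jumps) reproduces the
# master-equation solution. parameters are illustrative, not the paper's.
import numpy as np
from qutip import basis, destroy, mesolve, mcsolve, num

N = 5                       # truncated fock space of a damped oscillator
a = destroy(N)
H = a.dag() * a             # free hamiltonian (units of the mode frequency)
c_ops = [np.sqrt(0.2) * a]  # single decay channel, rate 0.2
psi0 = basis(N, 3)          # start with three excitations
tlist = np.linspace(0.0, 20.0, 200)

exact = mesolve(H, psi0, tlist, c_ops, e_ops=[num(N)])
trajs = mcsolve(H, psi0, tlist, c_ops, e_ops=[num(N)], ntraj=500)

# the trajectory average converges to the master-equation expectation value
print(np.max(np.abs(exact.expect[0] - trajs.expect[0])))
```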
the nud theorem is , in fact , applicable also when the markovian master equation is not in lindblad form . the lindblad form , as already highlighted , in general simplifies the subsequent calculations , but because of the difficulty of recasting the equation in this form the results obtained are in general merely formal . though eq . ( [ brutta ] ) seems complicated to use , it is a powerful predictive tool . we have tested it by deriving the photocounting formula , by reproducing the environment - induced entanglement between two two - level non - directly - interacting atoms placed at fixed arbitrary points in free space , and by recovering carmichael 's unravelling of the master equation . moreover , we have tested the nud theorem 's predictive capability by solving the dynamics of : two two - level dipole - dipole interacting atoms placed at fixed arbitrary points inside a single mode cavity in the presence of atomic spontaneous emission and cavity losses ; two two - level non - directly - interacting atoms placed at fixed arbitrary points inside a single mode cavity in the presence of atomic spontaneous emission and cavity losses ; a bipartite hybrid model , known as the jaynes - cummings model , constituted by an atom and a single mode cavity linearly coupled and spontaneously emitting into the same environment ( next subsection ) ; and two harmonic oscillators linearly coupled and spontaneously emitting into the same environment ( work in progress ) . two of these results are already published , others will be the object of future papers . all of them contain the same ( predictable ? ) result : multipartite systems , regardless of the physical nature of the parts and of the environment , can exhibit entangled stationary states towards which the system can be guided by a probabilistic scheme of measurement . the jaynes - cummings model describes , under the rotating wave approximation , the resonant interaction between a single two - level atom and a single mode of the electromagnetic field protected by a perfect cavity ( no loss of energy ) . the model was introduced in 1963 in order to analyze the classical aspects of spontaneous emission and to understand the effects of quantization on the atomic evolution . despite its apparent simplicity , this model has revealed interesting non - classical properties characterizing the matter - radiation interaction . moreover , thanks to the recent experimental implementation of high q cavities , it is today possible to verify most of the theoretical predictions of the model . the major experimental limitation is related to the coupling with a chaotic environment able to destroy the quantum coherences . a theoretical approach including the loss of energy due to the interaction of the atom and the cavity mode with the free electromagnetic field is more complete and , as we will show , it is able to reproduce the experimentally measured decay of the atomic population . in particular , assuming a common bath of interaction between the cavity mode and the atom , the theoretical probability of finding the atom in the excited state performs exponentially decaying rabi oscillations . this fact is consistent with the open dynamics but it is not the only effect . actually , the common bath induces cooperation between the two involved parts ( mode and atom ) . this behavior competes with the exponential decay . in the long time limit the exponential decay wins over the cooperation if we work under the experimental conditions realized by haroche 's group .
in the paper reported the experimental graph relative to the probability to find the atom in its excited state as a function of the time .we can interpret the upper part of the figure as the exponential decay and the lower part of it ( increasing of probability ) as the cooperation induced by the common reservoir .the new theoretical approach , here presented , is better than the usual one ( two independent baths ) because it is able to reproduce the experimental curve in a complete way .in fact the two bath approximation keeps account only for the dissipation meaning that the rabi oscillation of the atomic population goes to zero every period characterizing the free dynamics of the bipartite system . in this casethe cooperative part of the dynamics disappears : the two parts do not speak trough the common bath , the main behavior being the dissipation of energy in the reservoirs .moreover it is possible to demonstrate that single bath approach is more general than the other including it as particular case .this fact is very well understood if the parts are , for example , two or more atoms , in which case the cooperation is the maximum one if the distance among atoms is small enough and it reaches its minimum value when the distance goes to infinity . in the last case the out diagonal terms of the spectral correlation tensorgo to zero meaning that the parts see independent reservoirs .the lack of a microscopical derivation of the coupling constant of the mode with the electromagnetic field makes difficult the analytical derivation of an analogue relation in the case here analyzed .despite this fact it will be shown that the single bath case is more general than the other one because the independent baths case does not reproduce the out diagonal terms giving a simplified master equation unable to reproduce part of the experimental measurements .the hamiltonian describing the open system is (s_+ + s_-)+\sum_{\vec{k},\lambda}[s^\ast_{\vec{k},\lambda } b_{\lambda}(\vec{k})+s_{\vec{k},\lambda } b^{\dag}_{\lambda}(\vec{k})](\alpha^\dag + \alpha).\end{aligned}\ ] ] if we make the position where ,\end{aligned}\ ] ] \end{aligned}\ ] ] we can describe the reduced dynamics of the bipartite system at by a master equation of the standard form +d(\rho_s ( t)).\end{aligned}\ ] ] in this expression we have neglected the lamb - shift .this approximation is made possible because we have considered , ab initio , a direct linear static interaction among the parts respect to which the lamb - shift is negligible . 
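the qualitative difference between the common - reservoir model derived here and the usual two - bath treatment can be mimicked numerically . the sketch below is a schematic stand - in , not the microscopically derived master equation of this section : the independent - bath case uses separate collapse operators for atom and mode , while the common zero - temperature bath is imitated by a single collective collapse operator built from a linear combination of the atomic and cavity lowering operators ; the rates and the coupling are illustrative values only .

```python
# a schematic comparison, not the paper's derived equation: independent baths
# give two separate collapse operators, while a common zero-temperature bath
# is mimicked here by one collective collapse operator. with equal rates the
# antisymmetric dressed state is dark under the collective channel, so the
# atomic population reaches a nonzero plateau instead of decaying to zero.
import numpy as np
from qutip import basis, destroy, qeye, sigmam, sigmaz, tensor, mesolve

N = 2                                    # at most one photon is relevant here
a = tensor(destroy(N), qeye(2))          # cavity mode
sm = tensor(qeye(N), sigmam())           # atomic lowering operator
sz = tensor(qeye(N), sigmaz())

g, wc, wa = 0.05, 1.0, 1.0               # resonant jaynes-cummings parameters
H = wc * a.dag() * a + 0.5 * wa * sz + g * (a.dag() * sm + a * sm.dag())

gamma = 0.01                             # equal atomic and cavity damping rates
independent = [np.sqrt(gamma) * sm, np.sqrt(gamma) * a]
common = [np.sqrt(gamma) * (sm + a)]     # single collective decay channel

psi0 = tensor(basis(N, 0), basis(2, 0))  # atom excited, cavity empty
tlist = np.linspace(0.0, 400.0, 800)
proj_e = sm.dag() * sm                   # excited-state atomic population

p_ind = mesolve(H, psi0, tlist, independent, e_ops=[proj_e]).expect[0]
p_com = mesolve(H, psi0, tlist, common, e_ops=[proj_e]).expect[0]
print(p_ind[-1], p_com[-1])              # ~0 versus a plateau near 0.25
```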
in the above equation , where . the master equation for can be solved by applying the nud theorem to this case : where is the number of excitations initially given to the system ( ) and is the index giving the number of excitations characterizing every quantum trajectory . the trajectories evolve in time in accordance with , where is given by eq . ( [ brutta ] ) and is a nonunitary temporal evolution operator , being , in general , non - hermitian , as appears from the following equation : where . let us suppose the system to be in the initial state characterized by excitations in the cavity mode and the atom in its excited state : . then every quantum trajectory belonging to the statistical mixture characterizing the dynamical evolution of the system will have the form . the highest energy subspace ( ) is easily solved and the block vector relative to this subspace has the form : where . let us note that if we had started from the initial condition , we would have obtained the same dynamical behavior . this fact ensures that an arbitrary linear combination of the two different initial conditions leads to the same dynamics . this fact is really important because it is not simple to prepare one or the other of the initial states . actually , when we inject an excitation into the system we can only know that the system is in a statistical mixture of the two states . but we have seen that the dynamics is not case sensitive and therefore a statistical mixture of the two states leads to the described dynamical evolution . the circumstance that we succeed in finding the explicit time dependence of the solution of the master equation ( [ jcme ] ) provides an occasion to analyze in detail at least some aspects of how entanglement gets established in our bipartite system . as a particular case we can choose , so obtaining in a simple way the complete dynamics of the open system in the form , where and . on the basis of the block diagonal form exhibited by eq . ( [ statmix ] ) , at a generic time instant the system is in a statistical mixture of the vacuum state of the system and of an appropriate one - excitation density matrix describing with certainty the storage of the initial energy . in order to analyze the time evolution of the degree of entanglement that gets established between the two initially uncorrelated parties , we exploit the concept of concurrence , first introduced by wootters . if , at an assigned time , no photon has been emitted , the conditional concurrence assumes the form : . in the analyzed case ( ) , as clearly shown in fig . 1 , obtained using the experimental values set by haroche 's group , the degree of entanglement starts from zero , increases during the transient and collapses to the initial value when the time is long enough . this fact depends on the choice of the atom , whose spontaneous emission time is much longer than the cavity damping time .
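in the one - excitation sector the atom and the { 0 , 1 } - photon subspace of the mode behave as a pair of qubits , so the degree of entanglement can be evaluated with the standard wootters formula . the routine below is generic ( it is not tied to the explicit amplitudes of the conditional state derived above ) and the test state is purely illustrative .

```python
# a generic routine (standard wootters formula, not tied to the paper's
# explicit amplitudes) for the concurrence of a two-qubit density matrix;
# here the atom and the {0,1}-photon sector of the mode play the two qubits.
import numpy as np

def concurrence(rho):
    sy = np.array([[0, -1j], [1j, 0]])
    yy = np.kron(sy, sy)
    rho_tilde = yy @ rho.conj() @ yy            # spin-flipped density matrix
    evals = np.linalg.eigvals(rho @ rho_tilde)
    lam = np.sort(np.sqrt(np.abs(evals.real)))[::-1]
    return max(0.0, lam[0] - lam[1] - lam[2] - lam[3])

# illustrative pure state alpha|e,0> + beta|g,1>, whose concurrence is 2|alpha*beta|;
# basis order used below: |g,0>, |g,1>, |e,0>, |e,1>
alpha, beta = np.sqrt(0.3), np.sqrt(0.7)
psi = np.array([0, beta, alpha, 0], dtype=complex)
rho = np.outer(psi, psi.conj())
print(concurrence(rho), 2 * abs(alpha * beta))   # the two numbers coincide
```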
consistently with this , the probability of finding the atom in the excited state starting from goes to zero when , as clearly shown in fig . 2 . this figure reproduces very well the experimental measurements performed by haroche 's group . the standard theoretical models assume two different channels of dissipation ( one for the atom and one for the cavity mode ) . the corresponding master equation is simpler to solve because of the absence of _ cooperative terms _ , and the corresponding dynamics fits only the upper part of the measurements of haroche 's group ( _ dissipative behavior _ ) . the lower part of the graph represents the cooperation induced by the common reservoir between the cavity mode and the atom . such cooperation becomes maximal when and , as clearly shown in fig . 3 and fig . 4 , which depict the concurrence and the probability of finding the atom in its excited state , respectively . under this condition ( decoherence free regime ) , eq . ( [ statmix ] ) suggests that , for , the corresponding asymptotic form assumed by is time independent and such that the probability of finding energy in the bipartite system is : , where is the ground state of the bipartite system and is the maximally antisymmetric entangled state of the system . this fact suggests that _ stationary _ entangled states of the jc system can be generated by employing a single photon detector able to capture in a continuous way all the excitations lost by the system into the reservoir . reading out the detector state at , if no photons have been emitted then , as a consequence of the measurement outcome , our system is projected onto the maximally antisymmetric entangled state . this is the main result of the paper : a successful measurement , performed at large enough time instants , generates an uncorrelated state of the two subsystems , bipartite system and reservoir , leaving atom and cavity in their maximally antisymmetric entangled state . this ideal result has to be corrected by introducing into the microscopic model a second bath of interaction , able to take account of the cavity leakage of energy due to the imperfect mirrors . in terms of the hamiltonian operator this means : (s_+ + s_-)\\\nonumber&+&\sum_{\vec{k},\lambda}[s^\ast_{\vec{k},\lambda } b_{\lambda}(\vec{k})+s_{\vec{k},\lambda } b^{\dag}_{\lambda}(\vec{k})](\alpha^\dag + \alpha)+\sum_{\vec{k},\lambda}[r^\ast_{\vec{k},\lambda } c_{\lambda}(\vec{k})+r_{\vec{k},\lambda } c^{\dag}_{\lambda}(\vec{k})](\alpha^\dag + \alpha).\end{aligned}\ ] ] if we make the position where ,\end{aligned}\ ] ] +\sum_{\vec{k},\lambda}[r^\ast_{\vec{k},\lambda } c_{\lambda}(\vec{k})+r_{\vec{k},\lambda } c{^\dag}_{\lambda}(\vec{k})]\end{aligned}\ ] ] we can describe the reduced dynamics of the bipartite system at by a standard master equation and we can solve it in the same way as in the previous case . the changes in the microscopic model do not introduce any variation in the formal solution . instead , the presence of two different channels of dissipation modifies the dynamical behavior . actually , the system now has the possibility of losing energy into environments that do not communicate with each other . this means that , when the time is much longer than the sum of the single emission times , the coherence induced by the common bath during the transient goes to zero in the long time domain . despite this fact , the decoherence time can be made as long as we need in order to implement the required quantum protocol .
actually , named the cavity decay rate , if is much greater than , then the storage of energy can be maximized for a time sufficient to realize the quantum protocol .in this paper we have considered the interaction of a jaynes and cummings system with the electromagnetic field ( and with another phenomenological zero temperature bath ) in its vacuum state and , solving the dynamical problem , we have analyzed the amount of entanglement induced in the bipartite system by the common electromagnetic reservoir .this has allowed us to quantitatively characterize the regime under which field - induced cooperative effects are not vanished by dissipation .once the decoherence free regime is reached , transient entanglement tends to become stationary and , therefore , usable for quantum gate implementation .the asymptotic solution of the dynamical problem appears to be a statistical mixture of a maximally entangled state and the ground state of the open system , the probability to obtain one or the others being the same . in the whole temporal domainthe found solution tell us that the state of the system is a statistical mixture of the free energy system eigenoperators .this fact is general enough and it is consistent with the existence of a photon detector device because the act of measurement introduces a stochastic variable respect to which we can only predict the probability to have one or another of the possible alternative measures .these probabilities can be regarded as the weight of the possible alternative generalized trajectories . with this approachthe dynamics has to be depicted as a statistical mixture of this alternative generalized trajectories .moreover the found trajectories evolve in time in a deterministic way : for example the trajectory relative to the initially excited system state is a shifted free evolution characterized by complex frequencies that means an exponential decay free evolution. this statement may give the sensation that every system has to decay in its ground state because of the observed dynamics .it is in general not true . actually , if the system is multipartite as ours , it is possible that it admits excited and entangled equilibrium decoherence free subspace ( dfs ) (so as it happens in some highly symmetric models ) , constituted by states on which the action of is identically zero .if the system , during evolution , passes through one of these states , the successive dynamics will be decoupled from the environment evolution .an equilibrium condition is reached in which entanglement is embedded in the system .
|
our aim is to give an account , through an analysis of a number of papers by f. petruccione , h. j. carmichael , j. m. raimond and their contemporaries , of the specific answer that we gave to the problem of the dynamical evolution of open quantum systems , and of how this idea evolves and develops in physical research and in the scientific debate of the following decades . permanent solutions should not be expected from physical research , but the analysis of the real work of scientists , of the difficulties they face and of the ever changing solutions they offer is , we believe , part of our understanding of science and an indispensable basis for further methodological inquiries . in this paper we have chosen to analyze a dissipative jaynes - cummings model assuming a common free electrodynamic field for the bipartite system and another independent bath for the cavity , thus taking into account the loss of energy due to the imperfect mirrors . the application of the nud theorem leads to the prediction of new cooperative effects between the atom and the cavity mode , such as the creation of conditional transient entanglement , tending to become stationary as the coupling constants take a well defined value .
|
let be a random sample from the probability density function .the kernel density estimator of is computed as where is a smoothing parameter that is usually called the bandwidth , and the kernel is assumed to be of the second order , which means that , , and .most frequent choices of include the gaussian , epanechnikov , and quartic kernels ( see ) .the two most commonly used measures of performance of are the integrated squared error ( ise ) and the mean integrated squared error ( mise ) defined as let and denote the minimizers of the and functions , respectively . both bandwidths and are unavailable for practical use since their computation requires knowing .there exist many data - driven bandwidth selection techniques ( see the survey of ) .some of the most popular bandwidth selectors are the plug - in rule of and the least squares cross - validation ( lscv ) method proposed independently by and .the cross - validation method is quite popular among practitioners because of its simplicity .moreover , it requires fewer assumptions on compared to the plug - in method .nevertheless , the method is criticized because of producing too variable bandwidths and selecting the trivial bandwidths for the data sets that contain substantial amount of tied observations ( see and ) .a well known lscv paradox consists in the method s improved performance on the harder estimation problems ( see ) .a couple of successful modifications of the lscv method that take advantage of this paradox are the one - sided cross - validation ( oscv ) method proposed by and the indirect cross - validation ( icv ) method of .both methods are supported by the corresponding r packages ( see and ) . the oscv method is originally introduced in the regression context ( see ) . extended the method to the case of smooth densities .a density function is referred to as _smooth _ if it is twice continuously differentiable , whereas it is called _ nonsmooth _ if it is continuous but has finitely many simple discontinuity points in its first derivative . the oscv method in the smooth case is shown to greatly stabilize the bandwidth distribution ( see ) .this article extends the oscv method to the case of nonsmooth density functions . introduced the left - sided and right - sided oscv versions based on the so - called left - sided and right - sided kernels , respectively .both one - sided kernels are obtained by multiplying a benchmark two - sided kernel by a linear function and restricting the support of the one - sided kernel to either $ ] ( left - sided case ) or ( right - sided case ) . in this article we restrict our attention on the right - sided oscv version .the right - sided kernel based on the benchmark two - sided kernel is computed as it follows that is of the second order .generally , the benchmark kernel is different from the kernel used to compute . implemented oscv based on , where denotes the epanechnikov kernel .the oscv function based on a one - sided kernel is defined as where the definition of is given in the appendix . 
in the above expression the density estimator based on the kernel and the bandwidth , whereas is its leave - one - out modification that is computed from all data points except .the above version of the oscv function mimics the traditional definition of the cv function of and and slightly differs from the function used in and the follow - up articles ( see and ) .indeed , in the oscv function of , the estimator under the sum is replaced by .this is justified by assuming .we find this assumption rather restrictive since the one - sided versions of the most frequently used kernels do not possess this property . since the case does not substantially complicate the oscv implementation , we proceed by using .let denote the minimizer of . defined the oscv bandwidth in the smooth case as , where the functionals and are defined in the appendix .the oscv bandwidth is consistent for the mise optimal bandwidth , that is .in this section we extend the oscv algorithm to the case of a nonsmooth density that has simple discontinuities in its first derivative at the points , .the extension is based on the asymptotic expansion of mise of the kernel density estimator in the nonsmooth case ( see and ) that has the following form : where where is defined in the appendix .the above expression yields the following asymptotically optimal bandwidth : the asymptotic expansion of , the mise function for , has the same form as that of with and being replaced by and .let denote the asymptotically optimal bandwidth for .it then follows that this motivates defining the oscv bandwidth in the nonsmooth case as . in both smooth and nonsmooth cases the oscv bandwidthis defined by multiplying , the minimizer of the oscv function , by a rescaling constant .the constant is used in the smooth case , whereas the constant is used in the nonsmooth case .it is remarkable that the expressions for and are identical to those in the oscv implementation for regression functions ( see and ) .this follows from similarity of the corresponding asymptotic expansions of mise of the kernel density estimator and the mean average squared error ( mase ) of the local linear estimator .the values of and in the case for the most frequently used kernels and their one - sided counterparts are given in table [ tab : constants ] . .rescaling constants for the most frequently used kernels.[tab : constants ] [ cols="<,^,^,^",options="header " , ] the robust kernel used in the fully robust oscv implementation in the regression context ( see ) is also robust in the density estimation setting .this kernel is a member of the following family : where is a two - sided counterpart of , and the kernel has and is plotted in figure [ fig : robust_kernels ] * ( a)*. it is worth to mention that the one - sided gaussian kernel is obtained from by either setting or . despite the fact that the kernel with works well in the regression context ( see ) , its performance in the density estimation framework is unsatisfactory .our inspection of the -based oscv curves for a sequence of random samples generated from revealed that the considerable part of them has two local minima with the largest one being inappropriately large .however , the more serious problem with is that it frequently produces the oscv curves that tend to as . 
and argued that the lscv method experiences this type of problem even for such frequently used kernels as the epanechnikov , quartic , and gaussian . the inappropriate performance of the kernel with on stimulated a further search for robust kernels . first of all , we inspected the performances of two other robust kernels mentioned in . both of these kernels are members of the family with and with the values of equal to 0.4275 and 0.9821 . one of the kernels ( with ) is plotted in figure [ fig : robust_kernels ] * ( b ) * . the other kernel ( with ) has a similar shape and performance . unfortunately , both kernels are found to produce highly variable and upwards biased bandwidth distributions , at least in the case of . in our next attempt , we considered another member of with that is plotted in figure [ fig : robust_kernels ] * ( c ) * . this kernel is almost robust with . it has a quite different shape compared to the one - sided kernels considered so far , but , unfortunately , performs even worse than them . indeed , for all inspected realizations from and from the density it produced oscv curves that tended to as . it is remarkable that the kernel with is equal to zero at the origin . this implies that , the two - sided counterpart of this kernel , is nonnegative and bimodal ( see ) . according to and , such a kernel has potential for oscv implementation in the regression context . further experimenting with the kernels from the family is possible by using the r function ` oscv_li_dens ` from the r package ` oscv ` . indeed , many other robust kernels can be found in . it is entirely possible that there exists one that performs better than the kernels discussed above . three other almost robust kernels considered below are not members of but originate from the dissertation of . these kernels , denoted by , and , are defined below . figure [ fig : robust_kernels ] * ( d ) * shows . the graphs of the other two kernels are not included since they are fairly similar in shape to . the kernels are given by , $l_2(u)=30u^2(1-u)^2(8 - 14u)\,i_{[0,1]}(u)$ , and $l_3(u)=140u^3(1-u)^3(10 - 18u)\,i_{[0,1]}(u)$ . all of the above kernels have . it appears that each of the kernels , , and produces quite wiggly oscv curves for random samples generated from and from the density . figure [ fig : robust_kernels ] shows a variety of robust and almost robust one - sided kernels of different shapes , but none of them performs satisfactorily in the case of . thus , finding a kernel that improves the practical performance of the method in the nonsmooth case appears to be a challenging open problem .
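to make the preceding discussion concrete , the following rough sketch ( it is not the code of the ` oscv ` r package ) implements one standard way of building a right - sided second - order kernel from the gaussian benchmark , the leave - one - out oscv criterion , and the smooth - case rescaling constant computed from the kernel functionals ; the bandwidth grid and the simulated sample are illustrative , and in the nonsmooth case the minimizer would instead be rescaled by the constant derived earlier in the paper .

```python
# a rough sketch (not the `oscv` r package) of a right-sided gaussian kernel,
# the leave-one-out oscv criterion, and the smooth-case rescaling constant
# computed from kernel functionals; grid and data are illustrative only.
import numpy as np
from scipy import integrate, stats

phi = stats.norm.pdf
m0, m1, m2 = (integrate.quad(lambda t, j=j: t**j * phi(t), 0, np.inf)[0] for j in range(3))

def l_gauss(u):
    """right-sided second-order kernel built from the gaussian benchmark."""
    u = np.asarray(u, dtype=float)
    return np.where(u >= 0, (m2 - m1 * u) * phi(u) / (m0 * m2 - m1**2), 0.0)

def oscv_score(b, x, xgrid):
    """leave-one-out cross-validation score of the one-sided estimator."""
    n = len(x)
    fhat = l_gauss((xgrid[:, None] - x[None, :]) / b).sum(axis=1) / (n * b)
    term1 = np.trapz(fhat**2, xgrid)             # integral of fhat squared
    d = l_gauss((x[:, None] - x[None, :]) / b)
    np.fill_diagonal(d, 0.0)
    term2 = 2.0 * d.sum() / (n * (n - 1) * b)    # 2/n times the leave-one-out sum
    return term1 - term2

def rescaling_constant():
    """smooth-case constant (r(k) mu2(l)^2 / (r(l) mu2(k)^2))^(1/5), k gaussian."""
    r_k = 1.0 / (2.0 * np.sqrt(np.pi))           # r of the gaussian kernel
    r_l = integrate.quad(lambda u: l_gauss(u)**2, 0, np.inf)[0]
    mu2_l = integrate.quad(lambda u: u**2 * l_gauss(u), 0, np.inf)[0]
    return (r_k * mu2_l**2 / r_l) ** 0.2         # mu2 of the gaussian equals 1

rng = np.random.default_rng(1)
x = rng.normal(size=200)
xgrid = np.linspace(-5, 5, 400)
bgrid = np.linspace(0.15, 1.5, 40)
b_hat = bgrid[np.argmin([oscv_score(b, x, xgrid) for b in bgrid])]
print("bandwidth for the gaussian density estimator:", rescaling_constant() * b_hat)
```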
the nonsmooth density plotted in figure [ fig : f_nonsmooth ] * ( a ) *is used for discrimination of the proposed oscv extensions .we found and investigated many robust and almost robust kernel candidates for the fully robust oscv implementation .some of them are shown in figure [ fig : robust_kernels ] .none of these kernels performs satisfactory in the case of .moreover , the nonsmooth version of oscv based on the one - sided gaussian kernel performs worse that the ordinary lscv method in the case of .thus , practical implementations of the proposed theoretical extensions remain open to further research efforts .the main problems experienced by the majority of the considered robust and almost robust one - sided kernels were selecting too variable bandwidths and/or producing multiple local minima in the oscv curves .similar difficulties were faced when implementing the icv method ( see , , and ) .this indicates that some nontraditional negative - valued cross - validation kernels may substantially improve the asymptotic properties of the cross - validation method while introducing challenging problems with their practical use .the current implementation of the oscv method in the smooth case of is based on the one - sided epanechnikov kernel .it appears that produces inappropriately wiggly oscv curves in the case of ( see figure [ fig : oscv_realization_epanechnikov ] * ( b ) * ) .moreover , frequently yields insufficiently smooth oscv curves even in the case of the standard normal density ( see figure [ fig : oscv_realization_norm ] * ( a ) * ) .the problem of occasionally producing rough criterion curves persists for real data sets ( see figure [ fig : oscv_geyser ] * ( a ) * ) . on the other hand , we empirically found that for variety of smooth and nonsmooth densities and different sample sizes , the one - sided gaussian kernel usually produces smooth oscv curves with one local minimum ( see figures [ fig : oscv_realization_epanechnikov ] * ( a ) * , [ fig : oscv_realization_norm ] * ( b ) * and [ fig : oscv_geyser ] * ( b ) * for illustration ) .this indicates that might be , potentially , superior than for practical implementation of the oscv method in the smooth case .this matter , however , requires further investigation that is out of scope of this article that is mainly devoted to extending oscv to the case of nonsmooth densities .the almost robust one - sided kernel shown in figure [ fig : robust_kernels ] * ( c ) * with has a nonnegative two - sided counterpart .it then follows from the conclusions of and that this kennel might , potentially , lead to successful implementation of the fully robust oscv method in the regression context .this article and the recent publication of are supported by the r package ` oscv ` that can be used for reproducing the presented results and allows for further experimenting in attempts of improving the oscv method s practical implementation for smooth and nonsmooth density and regression functions .23 [ 1]#1 [ 1]`#1 ` urlstyle [ 1]doi : # 1 a. w. bowman. an alternative method of cross - validation for the smoothing of density estimates ._ biometrika _ , 710 ( 2):0 353360 , 1984 .the effect of discretization error on bandwidth selection for kernel density estimation ._ biometrika _ , 780 ( 2):0 436441 , 1991 .d. b. h. cline and j. d. hart .kernel estimation of densities with discontinuities or discontinuous derivatives ._ statistics _ , 220 ( 1):0 6984 , 1991 .w. hrdle ._ smoothing techniques_. 
springer series in statistics .springer - verlag , new york , 1991 .with implementation in s. j. d. hart and s. yi .one - sided cross - validation ._ journal of the american statistical association _ , 930 ( 442):0 620631 , 1998. m. c. jones , j. s. marron , and s. j. sheather . a brief survey of bandwidth selection for density estimation . _ journal of the american statistical association _ , 910 ( 433):0 401407 , 1996 .c. r. loader .bandwidth selection : classical or plug - in ? _ annals of statistics _ , 270 ( 2):0 415438 , 1999 .e. mammen , m. d. martnez miranda , j. p. nielsen , and s. sperlich .do - validation for kernel density estimation . _ j. amer ._ , 1060 ( 494):0 651660 , 2011 .issn 0162 - 1459 .doi : 10.1198/jasa.2011.tm08687 .url http://dx.doi.org/10.1198/jasa.2011.tm08687 .e. mammen , m. d. martnez miranda , j. p. nielsen , and s. sperlich .further theoretical and practical insight to the do - validated bandwidth selector . _ j. korean statist ._ , 430 ( 3):0 355365 , 2014 . m. d. martnez - miranda , j. p. nielsen , and s. sperlich .one sided cross validation for density estimation . in g.n. gregoriou , editor , _ operational risk towards basel iii : best practices and issues in modeling , management and regulation _ , pages 177196 .john wiley & sons , hoboken , new jersey , 2009 .m. rudemo .empirical choice of histograms and kernel density estimators ._ scandanavian journal of statistics _ , 90 ( 2):0 6578 , 1982 .o. savchuk ._ icv : indirect cross - validation _ , 2017 .r cran package version 1.0 .o. savchuk ._ oscv : one - sided cross - validation _ , 2017 .r cran package version 1.0 .o. savchuk , j. hart , and s. sheather .an empirical study of indirect cross - validation . in _nonparametric statistics and mixture models _ , pages 288308 .world sci .hackensack , nj , 2011 .o. y. savchuk and j. d. hart .fully robust one - sided cross - validation for regression functions . _ computational statistics _ , 2017 .doi : 10.1007/s00180 - 017 - 0713 - 7 .o. y. savchuk , j. d. hart , and s. j. sheather .indirect cross - validation for density estimation . _ journal of the american statistical association _ , 1050 ( 489):0 415423 , 2010 .o. y. savchuk , j. d. hart , and s. j. sheather .one - sided cross - validation for nonsmooth regression functions ._ j. nonparametr ._ , 250 ( 4):0 889904 , 2013 . o. y. savchuk , j. d. hart , and s. j. sheather .corrigendum to `` one - sided cross - validation for nonsmooth regression functions '' [ j. nonparametr .stat . , 25(4 ) : 889 - 904 , 2013 ] . _ journal of nonparametric statistics _ , 280 ( 4):0 875877 , 2016 .s. j. sheather and m. c. jones . a reliable data - based bandwidth selection method for kernel density estimation ._ journal of the royal statistical society , series b _ , 530 ( 3):0 683690 , 1991 .b. w. silverman ._ density estimation for statistics and data analysis_. monographs on statistics and applied probability .chapman & hall , london , 1986 .b. van es .asymptotics for least squares cross - validation bandwidths in nonsmooth cases . _annals of statistics _ , 200 ( 3):0 16471657 , 1992 . m. p. wand and m. c. jones ._ kernel smoothing _ , volume 60 of _ monographs on statistics and applied probability_. chapman and hall ltd ., london , 1995 . s. yi ._ on one - sided cross - validation in nonparametric regression_. ph.d dissertation , texas a&m university , 1996 .* notation . 
* For an arbitrary function $g$, define the following functionals:
$$R(g)=\int_{-\infty}^{\infty} g^{2}(u)\,du,\qquad D_g(z)=\int_{-\infty}^{z} g(u)\,du,\qquad G_g(z)=\int_{-\infty}^{z} u\,g(u)\,du,\qquad z\in\mathbb{R}.$$
Based on and we define
|
one - sided cross - validation ( oscv ) is a bandwidth selection method initially introduced by in the context of smooth regression functions . developed a version of oscv for smooth density functions . this article extends the method for nonsmooth densities . it also introduces the fully robust oscv modification that produces consistent oscv bandwidths for both smooth and nonsmooth cases . practical implementations of the oscv method for smooth and nonsmooth densities are discussed . one of the considered cross - validation kernels has potential for improving the oscv method s implementation in the regression context .
|
storage schemes for data storage have been studied for a long period since they can keep data reliable over unreliable nodes .all kinds of strategies are proposed for data storage , such as replication , erasure cods , and regenerating codes , etc . among them , regenerating codes recently proposed by dimakis _ are more effective in terms of repair bandwidth . motivating by network coding , they used an information flow graph to express regenerating codes , and identified a tradeoff curve between the storage capacity per node and the repair bandwidth for a failed node .this tradeoff curve has two extremal points which correspond to the minimum storage regenerating ( msr ) codes , , and the minimum bandwidth regenerating ( mbr ) codes , respectively .however , lots of distributed storage schemes do not consider network structures among storage nodes .actually , in many applications , storage nodes have certain topological relationships , such as the hierarchical network structure , the multi - hop network structure and so on .li _ et al . _ studied repair - time in tree - structure networks which have links with different capacities . in ,gerami _ et al ._ studied repair - cost in multi - hop networks and formulated the minimum - cost as a linear programming problem for linear costs .inspired by these works , in this paper , we focus on distributed storage problems over a class of simple but important networks , i. e. , unidirectional ring networks , which usually exist as parts of some complex networks . in these unidirectional ring networks ,each user connects one and only one storage node to download data . for each user ,its reconstructing bandwidth is the total number of all transmitted symbols to recover original data . by cut - set bound analysis of information flow graph for each user requiring total original data in this system, we obtain a lower bound on the reconstructing bandwidth and further indicate its tightness for arbitrary parameters .thus , we define optimal reconstructing distributed storage schemes ( ordsses ) , if the reconstructing bandwidth for every user achieves the lower bound with equality .furthermore , we study the repair problem in ordsses and also deduce a tight lower bound on the repair bandwidth for repairing each failed storage node , which is the total number of all transmitted symbols to repair this failed storage node . in particular, we show that every ordss can satisfy the lower bound with equality . at last , we present two constructions of ordsses , called mds construction and ed construction , respectively . both of them can be applied for arbitrary parameters , but ed construction has many advantages than mds construction , such as finite field size and computational complexity .the remainder of the paper is organized as follows . in section[ m - example ] , a motivating example is given to illustrate our research problems .the basic mathematical model of research problems and tight lower bounds on the reconstructing and repair bandwidths are deduced in section [ bm - sbound ] . in section [ cons ] , we design an efficient constructing approach of ordsses for arbitrary parameters . finally , section [ conc ] concludes this paper . because the limit of the space , some proofs of our conclusions are ignored in this paper . 
for more details ,please refer to the extended reversion .in this section , we first discuss an example to show our research problems .this shows that it is meaningful and interesting to study the distributed storage problems over unidirectional ring networks . , , .,width=181,height=117 ] fig .[ fig_cn ] depicts a unidirectional ring network with storage nodes , denoted by , the data exchanges between the storage nodes have to accord to the direction of the ring network .each storage node has storage capacity .let the row vector of original data be \in\mathbb{f}^5_5 ] be the row vector of original data with size , each coordinate represents an information symbol taking values in a finite field with elements , where is a power of some prime .the data is distributed to all storage nodes in order to store . here, we just consider linear storage , that is , every stored symbol and every transmitted symbol are linear combinations of the information symbols , still being an element in . for any storage node , define an node generator matrix . then all the coordinates of the product are stored in , each of which is called a node symbol . and each node symbol corresponds to a column vector of , called node vector .further , each transmitted symbol is a linear combination of some node symbols , and clearly , it also corresponds to a vector , called a transmitted vector , which is the linear combination of those corresponding node vectors , too .concatenating all node generator matrices according to the order of storage nodes , we obtain an matrix ] mds code , and we regard it as the generator matrix for our storage scheme .then we partition all column vectors into parts , each of which contains column vectors constituting node generator matrix for a storage node .it is easy to check that this storage scheme satisfies the two conditions in lemma [ lemma - iff ] .thus , all users can reconstruct the original data with bandwidth , which implies that the proposed lower bound is achievable for all users .this completes the proof .if a distributed storage scheme achieves the lower bound in theorem [ thm - rec - bound ] with equality for all users , we say it an optimal reconstructing distributed storage scheme ( ordss ) .actually , any ] be the row vector of original data .we construct an ed - matrix of size as follows : .\ ] ] subsequently , we calculate .\ ] ] then , we assign to the node , to the node , to the node and , to the node . clearly , any user can reconstruct the original data with reconstructing bandwidth 9 . andif any storage node fails , it can be repaired with repair bandwidth 5 .for instance , if the node fails , transmits to , then transmits to , can solve and transmits them to the new substituted node .so is repaired exactly with repair bandwidth 5 .[ ed - construction ] depicts the repair process in details for this storage scheme .we call the construction of ordsses in the proof of theorem [ thm - rec - bound ] the mds construction , and the above construction using an ed - matrix the ed construction .now , we compare the two constructions . 
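Before comparing the two constructions, it may help to see the MDS-construction idea in runnable form. The sketch below is illustrative only and is not the paper's ED construction: it uses a Vandermonde matrix over a prime field as a stand-in for an MDS generator matrix, and the parameters p, B, n and alpha are hypothetical choices made for the example rather than values taken from the paper.

```python
# Illustrative sketch of the MDS-construction idea: build a Vandermonde
# generator matrix over F_p, split its columns into per-node generator
# matrices, and verify that any B node vectors are linearly independent.
# All parameters below are hypothetical and chosen only for illustration.
from itertools import combinations

p, B = 11, 3          # prime field F_p, length of the original data vector
n, alpha = 4, 2       # storage nodes on the ring, symbols stored per node
N = n * alpha         # total number of node vectors (columns of G)
assert N <= p, "need distinct evaluation points in F_p"

# B x N Vandermonde matrix: column j is (1, a_j, a_j^2, ..., a_j^(B-1))
points = list(range(1, N + 1))
G = [[pow(a, r, p) for a in points] for r in range(B)]

def rank_mod_p(rows):
    """Rank of a matrix over F_p via Gaussian elimination."""
    M = [row[:] for row in rows]
    rank = 0
    for c in range(len(M[0])):
        piv = next((r for r in range(rank, len(M)) if M[r][c] % p), None)
        if piv is None:
            continue
        M[rank], M[piv] = M[piv], M[rank]
        inv = pow(M[rank][c], p - 2, p)
        M[rank] = [(x * inv) % p for x in M[rank]]
        for r in range(len(M)):
            if r != rank and M[r][c]:
                M[r] = [(x - M[r][c] * y) % p for x, y in zip(M[r], M[rank])]
        rank += 1
    return rank

def columns(idx):
    return [[G[r][j] for j in idx] for r in range(B)]

# MDS-type check: every choice of B node vectors already spans F_p^B, so a
# user that collects B independent symbols can solve for the original data.
assert all(rank_mod_p(columns(c)) == B for c in combinations(range(N), B))

# Partition the columns into node generator matrices G_1, ..., G_n.
node_matrices = [columns(range(i * alpha, (i + 1) * alpha)) for i in range(n)]
print("every choice of", B, "node vectors is linearly independent;",
      "each node generator matrix has", alpha, "columns")
```

The brute-force check confirms the property that lets any user reconstruct once it has collected enough independent symbols; the bandwidth accounting behind the paper's lower bounds, and the conditions of the lemma cited above, are not reproduced here.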
mds construction applies a generator matrix of an ] mds code with .for mds construction , when the values of parameters are large , the field size will become too large to be applied in practice , while ed construction always uses the smallest finite field .this is because that the weekly mds property is sufficient for constructing ordsses .for example , when , the field size for a $ ] mds construction is at least , which is much larger than the field size 2 for ed construction .it is well - known that the cost of arithmetic in a small field is smaller than that in a bigger one .thus , the smaller field size will reduce the computational complexity of the storage scheme and save much time evidently .therefore , ed construction is much better than mds construction .in this paper , we discuss distributed storage problems over unidirectional ring networks and propose two tight lower bounds on the reconstructing and repair bandwidths .in addition , we present two constructions for ordsses , and both can be used for arbitrary parameters . in practical applications , the networks of bidirectional ring topology , in which adjacent nodes can exchange information each other , are more useful .the same research problems in that case are also meaningful and still keep open .this research is supported by national key basic research problem of china ( 973 program grant no .2013cb834204 ) , the national natural science foundation of china ( nos . 61301137 , 61171082 ) and fundamental research funds for central universities of china ( no . 65121007 ) .k. v. rashmi , n. b. shah , and p. v. kumar , `` optimal exact regenerating codes for distributed storage at the msr and mbr points via a product - matrix construction , '' _ ieee trans .inf . theory _ ,8 , pp . 5227 - 5239 , aug .2011 .z. kong , s. a. aly , and e. soljanin , `` decentralized coding algorithms for distributed storage in wireless sensor networks , '' _ ieee j. on selected areas in communications _ ,vol , 28 , pp .261 - 268 , feb . 2010
|
In this paper, we study distributed storage problems over unidirectional ring networks. A lower bound on the reconstructing bandwidth needed by each user to recover the total original data is proposed, and it is achievable for arbitrary parameters. If a distributed storage scheme achieves this lower bound with equality for every user, we call it an optimal reconstructing distributed storage scheme (ORDSS). Furthermore, the repair problem for a failed storage node in ORDSSes is considered, and a tight lower bound on the repair bandwidth for each storage node is obtained. In particular, we show that in any ORDSS every storage node can be repaired with repair bandwidth achieving this lower bound with equality. In addition, we present an efficient approach to construct ORDSSes for arbitrary parameters by using the concept of Euclidean division. Finally, we give an example to illustrate this approach.
|
currently , the only absolutely secure way through which two parties ( alice and bob ) can , at least theoretically , secretly share a random sequence of bits ( key ) is given by quantum cryptography , whose security is guaranteed by the validity of the laws of quantum mechanics .this secret key is the most important ingredient in the implementation of classical cryptography protocols , such as the one - time pad , which are provably secure if the key is only known to alice and bob .the original qkd protocols are based on single photons ( `` discrete '' states ) , requiring photon - counting techniques to their implementation . however , single photon detectors are not as efficient and fast ( short response time ) as standard telecommunication pin photodiodes used to detect bright light ( many photons ) . in quantum mechanics these bright quantum statesare described by the quadratures of a mode of the quantized electromagnetic field and are also known as cv states due to the continuum spectrum of the quadratures . in order to explore the efficient and fast measurement schemes for such states ( homodyne or heterodyne detection ) , qkd protocols based on several types of cv states and strategieswere proposed .they are all called cvqkd protocols and are considered theoretically secure .the quantum resources of the first cvqkd protocols , whose security was equivalent to discrete qkd protocols , were either single - mode squeezed states , sent from alice to bob , or two - mode entangle squeezed states shared between them . in these early schemesthe secret key was encoded either in binary alphabets composed of two different coherent states ( discrete modulation ) or in coherent states with real and imaginary quadratures chosen from gaussian distributions ( continuous modulation ) .an important development of cvqkd appeared in , where it was shown that coherent states are equally secure to generate a secret key between alice and bob if one uses a gaussian continuous modulation and if the transmission losses from alice to bob does not exceed .subsequently , in it was shown that if bob accepts only certain measurement outcomes ( postselection ) to generate the key , or if alice and bob employ reverse reconciliation techniques , they can surpass the loss threshold . also , by employing at the same time reverse reconciliation and postselection one gets the greatest secure key rates .a reconciliation technique is an error correction scheme implemented at the end of the protocol by alice and bob , in which they execute a set of tasks in order to agree on common sequence of bits .this process is called direct if alice , who sends the quantum states , communicates classically with bob , who then processes his data using a predetermined algorithm to agree with alice s random sequence of bits .reverse reconciliation is the opposite scenario , where bob communicates with alice , who now manipulates her data in order to share a common key with bob .so far , there is no cvqkd protocol that is secure for any value of loss that uses only direct reconciliation and no postselection . 
in this articlewe show a different way to do cvqkd that is secure for losses up to 100 without resorting to either reverse reconciliation or postselection .since this protocol works deterministically ( no postselection ) and uses a discrete modulation for the key , it achieves fairly high key rates over long distances , even assuming the usual conservative reconciliation efficiencies for binary channels .apart from its possible practical significance , this protocol also adds to our fundamental understanding of cvqkd since it is based on the active use of cv teleportation ( cvt ) protocols , opening up alternative ways to understand the security of cvqkd as well as different routes for future unconditional security proofs .following , the main idea behind this teleportation - based cvqkd is the active use of the finite resources ( finite squeezing ) inherently associated to the cvt , combined with the knowledge of the pool of coherent states with alice to be teleported to bob .it is by properly making use of these two pieces of information that we can build a protocol furnishing high key rates even with loss , turning the finiteness of squeezing into an advantage .indeed , the cvt protocol is not simply employed as an alternative to the direct sending of the states with alice to bob , as required by the aforementioned standard cvqkd protocols , where the greater the entanglement of the channel the more a flawless teleportation is achieved with subsequent higher key rates . in the present protocol, however , less entanglement means more efficiency , since we will show that for a lossy transmission the amount of entanglement ( squeezing ) maximizing the key rate is finite and dependent on the level of loss and on the coherent states chosen for encoding the key .let start describing the protocol ( see fig .[ fig_esquema ] ) .first we note that the main ingredient of the present cvqkd scheme is the modified cvt protocol presented in , where the knowledge of the set of input states to be teleported by alice allows bob to get an output state at the end of the process nearly identical to the input state , even for low squeezing . to achieve that alice has to modify the beam splitter ( bs ) transmittance and bob has to modify the displacement on his mode from those given by the original cvt protocol , according to the pool of input states with alice . here ( ) is the annihilation ( creation ) operator of mode with quadratures and and commutation relation =i/2 ] measures the similarity of two quantum states and in our case can be written as , where for orthogonal states and for identical ones . 
in fig .[ figfid ] we show the optimal values for these quantities ( see appendix ) .( color online ) optimized parameters giving the greatest ( least ) fidelity for a teleported real ( imaginary ) coherent state .the optimal settings for the greatest ( least ) fidelity for an imaginary ( real ) input are obtained from the ones above by interchanging with and changing to .the squeezing remains unchanged .the dashed curves give the standard settings for the original cvt .,width=302 ] the next step of the protocol consists in bob once again displacing his state .he applies to his mode if he previously implemented the real displacement or otherwise .the goal of this last displacement is to transform the states or to vacuum states or to move farther from the vacuum the states or .these states nearly describes if alice chose the real ( imaginary ) basis and bob the real ( imaginary ) displacement in a given run of the protocol .after the last displacement bob measures the intensity of his mode and associates the bit if he sees no light ( vacuum state ) or the bit if he sees any light ( see fig .[ figprob ] ) .note that the previous step can be modified to any strategy aimed to discriminate between two coherent states , such as the measurement of the quadratures of using homodyne detection .( color online ) probabilities for bob detecting the vacuum state at the end of a run of the protocol if alice and bob uses the optimal settings given in fig .[ figfid ] .whenever bob assumes incorrectly the basis employed by alice , he can not discern between the two possible inputs ( star / blue curves ) ., width=302 ] alice and bob repeat steps ( 1 ) to ( 7 ) until they have enough data to check for an eavesdropper and still get a secure key long enough for their purposes . after alice finishing all teleportations and after bob making all measurements, they use an authenticated classical channel to disclose the following information .alice reveals to bob the basis used at each run of the protocol but not the state .bob reveals to alice the instances where he used the optimal values of and matching the basis chosen by alice .they discard the data where no matches occurred and use a sample of the remaining data to determine the parameters of the quantum channel ( loss and noise ) and to check for security .then they error correct the non disclosed data ( reconciliation stage ) and generate a secret key via privacy amplification techniques .let us move to the security analysis , where we deal with individual ( incoherent ) attacks only . the intercept - resend attack , with an eavesdropper eve blocking bob s share of the entangled state ( mode 3 in fig .[ fig_esquema ] ) and sending him a fake mode , is not a serious threat .this is true because eve can not know alice s input with certainty _ before _ sending bob the fake mode . indeed, eve can only hope to know alice s input by knowing which basis she used and this only happens _after _ bob measures his mode .the most serious incoherent attack to the present and all cvqkd schemes is the bs attack , in which eve inserts a bs of transmittance in the optical line connecting alice and bob and operates on the signal reaching her in the same way as bob does .note that the bs attack is equivalent to a lossy transmission where of the signal is lost to the environment or eve . for direct reconciliation ,the secure key rate generated between alice and bob in the bs attack is where and are the mutual information between alice and bob and alice and eve , respectively . 
is the reconciliation efficiency and depends on the reconciliation software employed . for binary encodings that we use hereit has a conservative value of . since the present protocol and the bs attack are symmetric with respect the real and imaginary states , in the following security analysis we consider only the case where alice used the real basis and bob the real displacement .a direct calculation gives ( see appendix ) /2 , \end{aligned}\ ] ] where or , if , and is the unconditional ( no postselection ) probability of to assign the bit to the key if alice teleported the corresponding state that encodes the bit . in the present case means the probability of to detect the vacuum state at the end of a run of the protocol if alice teleports while is the probability of to detect any light if she teleports .note that depends on and that .( color online ) all plots : solid lines mean and dashed lines .main plot : for large values of we have from top to bottom increasing ( decreasing ) loss ( ) .inset : from the greatest to the lowest peak we have .note that for low loss for high values of only . for losses superior to , only for small ., width=302 ] in fig .[ figkey ] we plot for several values of loss employing the parameters shown in fig . [ figfid ] .the inset shows that it is possible to choose a value of such that for and loss we get .this value should be contrasted with those without excess noise in , where by setting a perfect direct reconciliation ( ) and postselection one gets at loss , and with the ones in , where above loss it is not possible to extract a secret key via direct reconciliation . in other words , we improve the key rate at about one order of magnitude even assuming the worst case scenario . to get such enormous gain in the key ratewe need a squeezing of about db . fortunately the present scheme still works well for very low squeezing ( see fig .[ figloss ] ) . using for and their optimalpreviously obtained expressions when the matching condition occurs , becomes a function of only , , and .assuming the worst case scenario of loss and , we obtain the curves in fig .[ figloss ] fixing and maximizing as a function of ( see appendix ) .it is clear that for squeezing below db it is still possible to get a secure key and in tab .[ tab1 ] we show the maximum key rates attainable in fig . [ figloss ] for the different values of squeezing . optimal key rates with loss as a function of . ,width=302 ] .[tab1 ] maximal key rate for a fixed squeezing and the corresponding optimal parameters ( middle columns ) .here , ( loss ) , and we assume real inputs . [cols="^,^,^,^,^,^",options="header " , ]in summary , we proposed an efficient cvqkd scheme with a binary encoding for the key ( discrete modulation ) based on the cvt of coherent states , where the cvt protocol is not just a substitute to the direct sending of coherent states from alice to bob for the usual cvqkd protocols .rather , the resources needed to implement the cvt protocol play a direct role in the generation of the secret key since alice s bs transmittance , the squeezing of the entangled channel , and bob s displacement are all tuned in order to generate a secret key. 
we showed that the present teleportation - based cvqkd protocol is secure against individual attacks and in particular that it works with direct reconciliation and no postselection for any value of loss in the optical channel connecting alice and bob .moreover , we showed that it is possible to achieve fairly high key rates with mild squeezing ( db ) even at the loss regime .this fact combined with the high repetition rates of cv technology may lead to efficient long distance qkd protocols .indeed , once a mildly squeezed two - mode entangled state channel is established between alice and bob , directly or via entanglement swapping techniques , they can generate a secret key using the present cvqkd scheme .finally , the present cvqkd protocol naturally leads to many interesting open questions .first , how robust is the present scheme to added excess noise at the transmission line ?second , can reverse reconciliation and/or postselection increase the key rates of this scheme and decrease even more the level of squeezing to generate a secure key ?third , how can one extend the individual security proof here for collective and coherent attacks ?fsl and gr thank cnpq ( brazilian national council for scientific and technological development ) for funding and gr thanks cnpq / fapesp ( state of so paulo research foundation ) for financial support through the national institute of science and technology for quantum information .here we show a schematic view of a single run of the protocol in a step by step description ( see fig .[ fig_esquema ] ) . in the next sectionswe will develop all the mathematical machinery needed to understand the protocol .a step by step description of a successful run of the protocol , generating a common random bit between alice and bob , is as follows : ( 1 ) alice randomly chooses between the real or imaginary coherent state `` basis '' and then randomly prepares or , respectively , to teleport to bob .the picture describes the case where alice chooses ( mode given by the solid / blue line ) .( 2 ) alice generates a two - mode squeezed entangled state ( modes and ) , whose squeezing parameter is chosen according to the value of , and sends mode to bob . (3 ) alice adjusts the beam splitter ( bs ) transmittance according to her choosing the real or imaginary basis and then sends mode to interact with her share of the two - mode squeezed state ( mode ) .( 4 ) she measures the position and momentum quadratures of the modes and , respectively , that emerge after the bs and classically informs bob of those results ( and ) . ( 5 ) bob randomly chooses from two possible pairs of values and implements a displacement operation on his mode given by , where . and are such that the fidelity of bob s output state with alice s input is greatest if she chooses a real ( imaginary ) state and he assumes a real ( imaginary ) state and , at the same time , least if she chooses an imaginary ( real ) state and he assumes a real ( imaginary ) state . 
the optimal pair ( , ) depends on the input being a real or imaginary coherent state but not on its sign .( 6 ) bob implements another displacement on his mode , or , depending on the choice he made for the pair .the picture shows the case in which bob assumes alice chooses the real basis ( solid lines ) .had he assumed the wrong basis , which alice and bob will discover classically communicating after finishing the whole protocol , they would discard this run of the protocol .( 7 ) bob measures the intensity of his mode and assigns the bit value if he sees no light ( vacuum mode ) and the bit otherwise .alice and bob repeat steps ( 1 ) to ( 7 ) until they have enough data to check for an eavesdropper and still get a secure key .after alice finishes all teleportations and bob makes all measurements , they communicate using an authenticated classical channel to disclose the following information . alice tells bob the basis ( real or imaginary ) employed at each run of the protocol .bob tells alice the cases where he has employed the optimal values of and matching the basis chosen by alice .they discard the data where no matches occurred and use a part of the matched cases to determine the parameters of the quantum channel ( loss and noise ) and to check for security .the remaining data is error corrected ( reconciliation stage ) and used to generate a secret key via privacy amplification techniques . a key ingredient to the present scheme is the cv teleportation protocol adapted to the case where alice and bob has a complete knowledge of the pool of possible states to be teleported .with such a knowledge , alice and bob can greatly improve the fidelity between the teleported state with bob and alice s input by changing certain parameters of the original proposal .our goal in this section is to review in a self contained way this modified cv teleportation protocol , following closely the presentation given in .let and be the position and momentum quadratures of mode , respectively , where and are the annihilation and creation operators with commutation relation =1 ] , with and denoting the complex conjugation . since and commute with their commutator glauber s formula applies , giving [\lambda]}e^{-2ire[\lambda]\hat{p}}e^{2iim[\lambda]\hat{x}}$ ] and finally in order to estimate after a single run of the protocol the closeness of bob s state , , with the original one at alice s , , we use the fidelity in general depends on the input state , the measurement outcomes of alice ( and ) , the squeezing of the entangled two - mode squeezed state , , , and .also , f achieves its highest value ( ) if we have a flawless teleportation ( ) and its minimal one ( ) if the output is orthogonal to the input .we will be dealing with input states giving by coherent states , , with and reals , and to entangled two - mode squeezed states shared between alice and bob , where are fock number states with alice ( bob ) and is the squeezing parameter .when we have , the vacuum state , and for the unphysical maximally entangled einstein - podolsky - rosen ( epr ) state .the present cvqkd protocol is based on a binary encoding for the key such that , with a real number .these states are to be teleported from alice to bob randomly .let us explicitly analyze the case where alice chooses the real basis , namely , she teleports either or to bob .the calculations for the imaginary basis are similar and only the final results for this case will be given . 
where \cos ( 2 \theta ) \tanh r + 4 g_u \tilde{x}_u^2 \sin\theta + 4 g_v \tilde{p}_v^2 \cos \theta \right\}\tanh r\\ & & + \tilde{x}_u^2 \left[-g_u^2+\frac{2}{\cosh ^2r- \cos ( 2 \theta )\sinh ^2r}-2\right]+\tilde{p}_v^2 \left[-g_v^2+\frac{2}{\cosh^2r+\cos ( 2 \theta ) \sinh^2r } -2\right ] , \\f_2(g_u , r,\theta ) & = & g_u -\{g_u \cos ( 2 \theta ) \tanhr+2 [ 1+g_u \cos \theta]\sin \theta\}\tanh r + 2 \left(\frac{1 } { \cos ( 2 \theta ) \sinh^2r-\cosh ^2r}+1\right)\cos \theta , \\f_3(r,\theta)&= & -\frac{\{\cosh r-[ \tanh r \cos ( 2 \theta ) + \sin ( 2 \theta ) ] \sinh r\}^2}{\cosh ^2r- \cos ( 2 \theta ) \sinh ^2r}. \end{aligned}\ ] ] moreover , since only appears in the exponent and we want the maximum of , we maximize the exponent as a function of . differentiating the exponent with respect to and equating to zero we get inserting and back into we finally obtain \}^2}{\cosh^2r-\cos(2 \theta ) \sinh^2r } } \hspace{-.1 cm } , \label{fidre}\end{aligned}\ ] ] where we use the superscript `` re '' to remind us that this is the optimal for real inputs . also , it is important to note that the optimal expression for , as well as for and , do not depend on the measurement outcomes and obtained by alice .this is one of the reasons making the present cvqkd scheme yield high key rates without postselecting a subset of all possible measurement outcomes of alice . for an imaginary input ,namely , either or , the roles of and are reversed . in order to have a solution for independent of the sign of the imaginary coherent state we fix .then , we maximize the exponent of as a function of .the final result is that we obtain the same expressions for and as given before for the real case and the following expression for the fidelity : \}^2}{\cosh^2r+ \cos(2 \theta ) \sinh^2r}}\hspace{-.1cm}. \label{fidim}\end{aligned}\ ] ] the final calculations needed to determine the optimal and are as follows .we want and such that if alice chooses the real basis and bob assumes alice chose the real basis , is maximal and is minimal .this is achieved maximizing the following function : .\ ] ] it is not possible to get simple closed analytical expressions for the optimal and and the whole maximization process is carried out numerically once the value of is specified .this is what was done to get the optimal data shown in fig .2 of the main text .the optimal parameters if alice chooses the imaginary basis and bob assumes alice chose the imaginary basis is obtained imposing that be minimal and be maximal .this is obtained maximizing the following function : =\pi^{\text{re}}(r,\pi/2-\theta).\ ] ] it is clear by the last equality that the optimal for the imaginary input is obtained from the optimal one for the real input by subtracting it from .the relations between the optimal settings for the real and imaginary inputs are as follows : the state with bob after finishing the teleportation protocol is given by eq .( [ bobfinal ] ) , where he has already implemented either the real or imaginary displacement on his mode . by real and imaginary displacementswe mean that bob applied the displacement , with , using either the real ( and ) or imaginary ( and ) optimal parameters . in the next step of the teleportation - based cvqkd protocol, he implements another displacement , which depends on whether he chose the real or imaginary displacement . 
for a previously real displaced mode he now applies the displacement and for a previously imaginary displaced mode he applies .the goal of these last displacements is to transform states nearly described by or to vacuum states and to push further away from the vacuum the states or .note that bob s state will be very close to one of those four states only if the `` matching condition '' occurred , i.e. , if alice teleported a real ( imaginary ) state and bob used the optimal settings presuming a real ( imaginary ) input by alice . mathematically , the state after the last displacement is where or .the probability to detect the vacuum state is where we used that and . in eq .( [ q0 ] ) is the complex conjugate of ( [ in ] ) , with the subscript as a reminder to which coherent state the kernel refers to , and is given by eq .( [ bobfinal ] ) .figure 3 in the main text is a plot of for all possible combinations of input state by alice and displacement by bob when a matching condition occurs ( the first four curves from top to bottom ) .the fifth and sixth curves are averaged over all possible measurement outcomes and for alice , weighted by alice s probability to get and ( cf .( [ prob ] ) ) , this averaging is needed whenever the matching condition does not occur since depends on and in this case .[ fig3main ] for a reproduction of fig . 3 of the main text but this time with a different caption , where we employ the notation just developed to describe each one of the plotted curves . the first curve is computed with the following parameters , .the second curve is for .the third curve is for .the fourth curve is for .the fifth curve is the averaged for .the sixth curve is the averaged for ., width=302 ] we have also tested the robustness of the optimal settings by randomly and independently changing the optimal parameters about their correct values . as can be seen in fig .[ fignoise ] , the optimal settings are very robust , supporting fluctuations of about the optimal values for small and large . for small fluctuations of still tolerable .for each value of we have implemented realizations of random fluctuations about the input state , about the optimal values , and about .we worked with alice s sending a real state and bob assuming a real state .similar results are obtained for the imaginary matching condition .the red / square curves connects the maximal and minimal values for due to the random fluctuations assuming alice sent a negative real state .the gray dots between the red / square curves represent the value of at each realization .the black / circle curves has the same meaning of the red / square curves but assuming alice sent a positive real state ., width=302 ] we want to study how the teleportation - based cvqkd protocol responds to a lossy channel , or equivalently , to the bs attack .this will allow us to determine the level of loss in which a secure key can be extracted via direct reconciliation and no postselection .we want to investigate the security of the present scheme to the bs attack . in the bs attack an eavesdropper ( eve )inserts a bs of transmittance , , during the transmission to bob of his share of the entangled two - mode squeezed state ( mode 3 in fig .[ fig_esquema ] ) .bob will receive a signal with intensity and eve the rest . 
with her share of the signal , , eve proceeds as bob in order to extract information of the key .the bs is inserted before bob receives his mode and therefore before he applies the displacements and , with or .bob s state before the insertion of the bs is as given in eq .( [ initial3 ] ) .hence , the joint state of bob and eve before the bs is with given by eq .( [ in ] ) with .but since we have after the bs , the last equality was obtained making the following change of variables , and .bob s state after the bs is given by the partial trace of the state with respect to eve s mode , . in the position basiswe have where note that eve s state is , which is simply obtained from eq .( [ rhoboblosskernel ] ) by changing . using the state ( )bob ( eve ) proceeds as explained before to finish all the steps of a single run of the teleportation - based cvqkd protocol .bob displaces his mode by , which depends on whether he assumed alice teleported a real or imaginary state , finishing the teleportation stage of the protocol .his state at this stage is .then he implements the last displacement , which depends on his first displacement as explained before , and measures the intensity of his mode .hence , bob s probability to detect the vacuum state ( no - light ) is \nonumber \\ & = & \langle -\lambda -\gamma| \rho'_b|-\lambda -\gamma\rangle , \end{aligned}\ ] ] where have made explicit that depends on the measurement outcomes of alice when , i.e. , when we have a lossy channel . in the position representationwe have as before , we define the unconditional ( no postselection ) probability as and in fig .[ figbs ] we show its value for several values of loss .probability to detect the vacuum state for several values of loss ( ) , which increases ( decreases ) from top to bottom .the other parameters used to compute , namely , , and , were the optimal ones when the matching condition occurs .the remaining parameter , alice s input , was set to ( solid lines ) and ( dashed lines ) ., width=302 ] for direct reconciliation the secure key rate between alice and bob is where is the reconciliation efficiency , the mutual information between alice and bob , and the mutual information between alice and eve . in what followswe will prepare the ground for defining and computing those mutual informations for our problem . also ,since the present teleportation - based cvqkd protocol is symmetric to both matching conditions , we will work with the one where alice teleported a real state and bob implemented the real displacement .let and be two binary discrete variables , whose possible values for are and for .if we associate variable to alice and adopt the convention we have where is the probability distribution associated to .this means that alice randomly chooses between the negative or positive coherent states at each run of the protocol .if we associate variable to bob we can define the conditional probability of bob assigning the value to his variable if alice assigned the value as . for the present protocol , and according to the encoding that alice and bob mutually agreed on for the key , the four conditional probabilities are where , the probability to detect the vacuum state , is given by eq .( [ q0 ] ) .if we define where is the probability to detect light , we have note that we have explicitly written the dependence of , , on alice s teleported state to remind us that we should compute it using the appropriate sign for .we can understand the previous conditional probabilities as follows . 
if alice teleports the state ( bit ) and bob displaces his mode by , for a faithful teleportation he will likely detect the vacuum state after that final displacement and assign correctly the bit .the chance for that happening is quantified by .he will obviously make a mistake , assigning erroneously the bit , if he does not detect the vacuum state .for that reason we have . in the same fashion ,if alice teleports the state ( bit ) and bob displaces his mode by , for a faithful teleportation he will very likely not detect the vacuum state and will correctly assign the bit .this event occurs with probability , which implies .he makes a mistake if he gets the vacuum state and therefore .since the conditional probability is related to the joint probability distribution by the rule we have /2 , \\p_{xy}(0,1 ) & = & [ 1-q_1^b(\alpha)]/2 , \\p_{xy}(1,1 ) & = & q_1^b(\alpha)/2.\label{pab4}\end{aligned}\ ] ] if we now use that we have /2 , \\p_{y}(1 ) & = & [ 1+q_1^b(\alpha)-q_0^b(-\alpha)]/2 . \label{pb2}\end{aligned}\ ] ] the mutual information between alice and bob is defined as \ ] ] and a direct computation using eqs .( [ pa ] ) and ( [ pab1])-([pb2 ] ) gives /2 .\nonumber \\ \label{iab}\end{aligned}\ ] ] here we have dropped the dependence since is always computed with and with .note that also depends on , and . in order to obtain simply replace for in the expression for since if . using eq .( [ iab ] ) and the equivalent one for we can compute the secret key rate ( eq .( [ k ] ) ) .figure 4 in the main text was obtained this way , where we employed for each curve a different value for and for all of them the optimal values of , and assuming the real matching condition as given in fig . 2 of the main text .figure 5 of the main text , on the other hand , was obtained computing for several values of fixing ( loss ) and using eqs .( [ gur ] ) and ( [ gvr ] ) for and . was determined in such a way that it maximized for . as always , we assumed the real matching condition to fix the remaining parameters needed to evaluate , namely , alice s input was either or and bob s final displacement was . to get a feeling of how the optimal parameters change as a function of , we show in fig .[ figr0p3 ] their values assuming and optimizing the key rate for several values of loss ( bs attack ) .note that the same features occurs for other values of .the upper panels show the key rate with optimized for a fixed value of squeezing ( or ) .the lower panels give the values of ( in radians ) , related to alice s bs transmittance ( ) , leading to the optimal key rate . the optimal and given by using these values of and to evaluate eqs .( [ gur ] ) and ( [ gvr ] ) .we assume the real matching condition . left upper panel : decreases from top to bottom .right upper panel : for solid lines decreases from top to bottom while for dashed lines we have the opposite behavior . left lower panel : increases from top to bottom .right lower panel : for solid and dashed lines decreases from top to bottom.,width=302 ] we can see from fig .[ figr0p3 ] that for low loss ( high ) there exist two ranges for the values of where a secure key can be extracted .the greatest key rates occurs for but one can also get secure key for . 
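The information-theoretic bookkeeping behind these key-rate curves can be reproduced with a short numerical sketch. The detection probabilities used below are placeholders only: in the protocol they would come from eq. (q0) evaluated at the optimal teleportation settings, for Bob with transmittance eta and for Eve with the complementary transmittance.

```python
# Minimal sketch of the key-rate bookkeeping K = beta*I_AB - I_AE for the
# beam-splitter attack.  The vacuum-detection probabilities are placeholders,
# not values derived from the protocol's optimal settings.
from math import log2

def h(probs):
    """Shannon entropy of a discrete distribution."""
    return -sum(q * log2(q) for q in probs if q > 0.0)

def mutual_information(q0_minus, q0_plus):
    """I(X;Y) for equiprobable inputs x in {0,1}.  One convention: x=0 (state
    -alpha) should give a vacuum click, x=1 (state +alpha) should give light.
    q0_minus / q0_plus = P(vacuum detected | -alpha / +alpha teleported)."""
    # joint p(x, y): y=0 <-> vacuum detected, y=1 <-> light detected
    pxy = {(0, 0): 0.5 * q0_minus, (0, 1): 0.5 * (1 - q0_minus),
           (1, 0): 0.5 * q0_plus,  (1, 1): 0.5 * (1 - q0_plus)}
    px = [0.5, 0.5]
    py = [pxy[0, 0] + pxy[1, 0], pxy[0, 1] + pxy[1, 1]]
    return h(px) + h(py) - h(list(pxy.values()))

beta = 0.8                        # reconciliation efficiency for binary keys
# placeholder detection probabilities for Bob (sees the signal) and Eve
I_ab = mutual_information(q0_minus=0.95, q0_plus=0.10)
I_ae = mutual_information(q0_minus=0.60, q0_plus=0.45)
K = beta * I_ab - I_ae            # secret key rate, direct reconciliation
print(f"I_AB={I_ab:.3f} bits, I_AE={I_ae:.3f} bits, K={K:.3f} bits/run")
```

With these placeholder numbers the rate comes out positive; the actual curves discussed here are obtained by feeding the optimized protocol probabilities into the same expressions.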
for high losses( small ) , on the other hand , only when a secure key can be extracted .also , for losses up to ( from to ) the greater the loss the lower the key rate .interestingly , the behavior for losses greater than is the opposite .once you cross the border of loss , more loss means a better key rate .when the exact value of loss is used , no key rate can be achieved since bob s and eve s state are the same and , hence , they share the same level of information with alice . assuming squeezing is a cheap resource, we can let , together with , be a free parameter in the maximization of the key rate . in this scenario , we get the results in fig .[ figopt ] for the optimal key rate .the optimal parameters leading to such key rates are given in fig .[ figpar ] .it is interesting to note that whenever we have loss ( ) the optimal squeezing is not the greatest value possible and that with more than loss , the greater the loss the better the key rates .also , in most of the cases the optimal squeezing is not greater than . hereboth and are adjusted to get the optimal key rates with . for the curves with circles, decreases from top to bottom .the curve is over the x - axis since in this case the key rate is always zero . for the lines without circles, increases from top to bottom .we assume the real matching condition ., width=302 ] optimal parameters leading to the key rates in fig .[ figopt ] . in the maximization processwe have restricted from to while could assume any value .the optimal and are obtained using these values of and to evaluate eqs .( [ gur ] ) and ( [ gvr ] ) .right panels and for high values of : decreases from to from top to bottom ., width=302 ] c. h. bennett and g. brassard , _ proceedings of the ieee international conference on computers , systems and signal processing , bangalore , india , 1984 _ , p. 175 ; a. k. ekert , phys . rev. lett . * 67 * , 661 ( 1991 ) ; c. h. bennett , phys . rev . lett . * 68 * , 3121 ( 1992 ) ; c. h. bennett , g. brassard , and n. d. mermin , phys . rev . lett . * 68 * , 557 ( 1992 ) .extensive reviews of the discrete variable protocols in and their descendants can be found in : n. gisin , g. ribordy , w. tittle , and h. zbinden , rev .phys . * 74 * , 145 ( 2002 ) ; v. scarani , h. bechmann - pasquinucci , n. j. cerf , m. duek , n. ltkenhaus , and m. peev , rev . mod . phys . * 81 * , 1301 ( 2009 ) ; f. grosshans , g. van assche , j. wenger , r. brouri , n. j. cerf , and ph .grangier , nature ( london ) * 421 * , 238 ( 2003 ) ; m. legr , h. zbinden , n. gisin , quantum inf . comput . * 6 * , 326 ( 2006 ) ; j. lodewyck , t. debuisschert , r. tualle - brouri , and ph .grangier , phys .a * 72 * , 050303(r ) ( 2005 ) ; j. lodewyck , m. bloch , r. garca - patrn , s. fossier , e. karpov , e. diamanti , t. debuisschert , n. j. cerf , r. tualle - brouri , s. w. mclaughlin , and ph .grangier , phys .a * 76 * , 042305 ( 2007 ) ; p. jouguet , s. kunz - jacques , a. leverrier , ph .grangier , and e. diamanti , nature photonics * 7 * , 378 ( 2013 ) .t. hirano , h. yamanaka , m. ashikaga , t. konishi , and r. namiki , phys .a * 68 * , 042331 ( 2003 ) ; r. namiki and t. hirano , phys .a * 67 * , 022308 ( 2003 ) ; r. namiki and t. hirano , phys .lett . * 92 * , 117901 ( 2004 ) ; r. namiki and t. hirano , phys .a * 74 * , 032302 ( 2006 ) .weedbrook , a. m. lance , w. p. bowen , th .symul , t. c. ralph , and p. k. lam , phys .lett . * 93 * , 170504 ( 2004 ) ; a. m. lance , th .symul , v. sharma , ch .weedbrook , t. c. ralph , and p. k. lam , phys .lett . 
* 95 * , 180503 ( 2005 ) .s. pirandola , c. ottaviani , g. spedalieri .weedbrook , s. l. braunstein , s. lloyd , t. gehring , ch .s. jacobsen , and u. l. andersen , arxiv:1312.4104v2 [ quant - ph ] ; z. li , y .- c .zhang , f. xu , x. peng , and h. guo , phys .rew . a * 89 * , 052301 ( 2014 ) .see and in particular the following references for reviews on cvqkd protocols : s. l. braunstein and p. van loock , rev .phys . * 77 * , 513 ( 2005 ) ; ch .weedbrook , s. pirandola , raul garca - patrn , n. j. cerf , t. c. ralph , j. h. shapiro , s. lloyd , rev .phys . * 84 * , 621 ( 2012 ) .d. gottesman and j. preskill , phys .a * 63 * , 022309 ( 2001 ) ; f. grosshans and n. j. cerf , phys .lett . * 92 * , 047905 ( 2004 ) ; s. iblisdir , g. van assche , and n. j. cerf , phys . rev . lett . * 93 * , 170502 ( 2004 ) ; f. grosshans , physlett . * 94 * , 020504 ( 2005 ) ; m. navascus and a. acn , phys .lett . * 94 * , 020505 ( 2005 ) ; m. navascus , f. grosshans , and a. acn , phys .* 97 * , 190502 ( 2006 ) ; r. garca - patrn and n. j. cerf , phys . rev . lett . * 97 * , 190503 ( 2006 ) ; r. renner and j. i. cirac , phys . rev . lett . *102 * , 110504 ( 2009 ) ; y .- b .zhao , m. heid , j. rigas , and n. ltkenhaus , phys .a * 79 * , 012307 ( 2009 ) ; ch .weedbrook , s. pirandola , s. lloyd , and t. c. ralph , phys .lett . * 105 * , 110501 ( 2010 ) ; a. leverrier , r. garca - patrn , r. renner , and n. j. cerf , phys . rev . lett . * 110 * , 030502 ( 2013 ) ; p. jouguet , s. kunz - jacques , and e. diamanti , phys . rev .a * 87 * , 062313 ( 2013 ) ; j .- z .huang , s. kunz - jacques , p. jouguet , ch .weedbrook , z .- q .yin , sh . wang , w. chen , g .- c .guo , and z .- f .han , phys .rev . a * 89 * , 032304 ( 2014 ) .l. vaidman , phys .a * 49 * , 1473 ( 1994 ) ; s. l. braunstein and h. j. kimble , phys .lett . * 80 * , 869 ( 1998 ) ; a. furusawa , j. l. srensen , s. l. braunstein , c. a. fuchs , h. j. kimble , and e. s. polzik , science * 282 * , 706 ( 1998 ) .
|
We show a continuous variable (CV) quantum key distribution (QKD) scheme, based on the CV quantum teleportation of coherent states, that yields a raw secret key made up of discrete variables for both Alice and Bob. This protocol preserves the efficient detection schemes of current CV technology (no single-photon detection techniques) and, at the same time, has efficient error correction and privacy amplification schemes due to its binary discrete key. In particular, it is secure for any value of the transmission efficiency of the optical line used by Alice to share entangled two-mode squeezed states with Bob (no 3 dB loss limitation characteristic of beam-splitting attacks). The present CVQKD protocol works deterministically (no postselection needed) with efficient direct reconciliation techniques (no reverse reconciliation) in order to generate a secure key, even in the surprisingly high-loss regime.
|
what do document clustering , recommender systems , and audio signal processing have in common ?all of them are problems that involve finding patterns buried in noisy data . as a result , these three problems are common applications of algorithms that solve non - negative matrix factorization , or nmf .non - negative matrix factorization involves factoring some matrix , usually large and sparse , into two factors and , usually of low rank because all of the entries in , , and must be non - negative , and because of the imposition of low rank on and , an exact factorization rarely exists .thus nmf algorithms often seek an approximate factorization , where is close to . despite the imprecision , however , the low rank of and forces the solution to describe using fewer parameters , which tends to find underlying patterns in .these underlying patterns are what make nmf of interest to a wide range of applications . in the decades since nmf was introduced by seung and lee , a variety of algorithms have been published that compute nmf .however , the non - deterministic nature of these nmf algorithms make them difficult to test .first , nmf asks for approximations rather than exact solutions , so whether or not an output is correct is somewhat subjective .although cost functions can quantitatively indicate how close a given solution is to being optimal , most algorithms do not claim to find the globally optimal solution , so whether or not an algorithm gives useful solutions can be ambiguous .secondly , all of the algorithms produced so far are stochastic algorithms , so running the algorithm on the same input multiple times can give different outputs if they use different random number sequences .thirdly , the algorithms themselves , though often simple to implement , can have very complex behavior that is difficult to understand . as a result, it can be hard to determine whether a proposed algorithm really `` solves '' nmf .this paper proposes some test cases that nmf algorithms should solve verifiably .the approach uses very simple input , such as matrices that have exact non - negative factorizations , that reduce the space of possible solutions and ensure that the algorithm finds correct patterns with little noise .in addition , small perturbations of these simple matrices are also used , to ensure that small variations in the matrix do not drastically change the generated solution .suppose nmf is applied to a non - negative matrix to get non - negative matrices and such that . if is chosen to have an exact non - negative factorization , then the optimal solution satisfies .furthermore , if is simple enough , most `` good '' nmf algorithms will find the exact solution .for example , suppose is a non - negative square diagonal matrix , and the output and is also specified to be square .let the diagonal matrix be denoted , where is an -dimensional vector , so that the diagonal entries are .it is easy to show that and must be monomial matrices ( diagonal matrices under a permutation ) . 
ignoring the permutation and similarly denoting and , then for applicable .such diagonal matrices were given as input to the known nmf algorithms described in the next section , and all of the algorithms successfully found exact solutions in the form of monomial matrices for and .one way to analyze the properties of an algorithm is to perturb the input by a small amount and see how the output changes .formally , if the input gives output , then the output generated from can be approximated as .it is assumed that is sufficiently small that terms are negligible . for the test case ,the nonzero entries of were chosen to be the on the superdiagonal ( the first diagonal directly above the main diagonal ) .this matrix is denoted as , where is an -dimensional vector such that .the resulting matrix has entries on its main diagonal , entries on the superdiagonal , and zeroes elsewhere .it is assumed that all the vector entries and are of comparable magnitude .three published nmf algorithms were implemented and run with input of the form as described above .algorithm 1 was the multiplicative update algorithm described by seung and lee in their groundbreaking paper , which was run for iterations in each test .algorithm 2 was the als algorithm described in , and which was run for iterations as well .algorithm 3 was a gradient descent method as described by guan and tao , and was run for iterations .these three algorithms were chosen because they were representative and easy - to - implement algorithms of three distinct types .many published nmf algorithms are variations of these three algorithms .the experiments began with the simplest nontrivial case , in which is a matrix with only three nonzero entries , with fixed ] , while was varied over several different values .each of the algorithms used randomness in the form of initial seed values for and .the random seeds were held constant as varied . as a result ,the outputs from the algorithms with different values of were comparable within each test case . for the case, it is possible to enumerate all of the non - negative exact factorizations of . given that the factors and are also matrices , they can be written as shown below . \left [ \begin{array}{cc } r & s \\ t & u \end{array } \right ] = \left [ \begin{array}{cc } 1 & \epsilon \\ & 1 \end{array } \right]\ ] ] multiplying the matrices directly produces the the following four equations : recall that all entries must be non - negative , so from equation ( 5 ) , either or must be 0 , and either or must be 0 .furthermore , it can not be that because that would contradict equation ( 6 ) , and it can not be that because that would contradict equation ( 3 ) .thus two cases remain : and .substituting into equations ( 3 ) , ( 4 ) , and ( 6 ) and solving for , , and gives likewise , substituting into ( 3 ) , ( 4 ) , and ( 6 ) and solving for , , and to gives observe that these two solutions look similar .in fact , they differ merely by a permutation . 
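As a concrete check of this enumeration, the following sketch runs the standard Lee-Seung multiplicative updates (algorithm 1) on the 2x2 perturbed test matrix. The iteration count, random seed, negligibility cutoff and value of epsilon are illustrative choices and are not the settings used in the experiments reported here.

```python
# Sketch of the 2x2 test using the standard Lee-Seung multiplicative updates
# for the Frobenius cost.  All numerical settings are illustrative.
import numpy as np

def nmf_multiplicative(V, rank, iters=2000, seed=0, tiny=1e-12):
    rng = np.random.default_rng(seed)
    m, n = V.shape
    W = rng.random((m, rank))
    H = rng.random((rank, n))
    for _ in range(iters):
        H *= (W.T @ V) / (W.T @ W @ H + tiny)   # update H
        W *= (V @ H.T) / (W @ H @ H.T + tiny)   # update W
    return W, H

eps = 0.1
V = np.array([[1.0, eps],
              [0.0, 1.0]])        # diagonal matrix perturbed on the superdiagonal
W, H = nmf_multiplicative(V, rank=2)

print("reconstruction error:", np.linalg.norm(V - W @ H))
# entries below a small cutoff are treated as zero (the cutoff is illustrative)
print("W =\n", np.where(W < 1e-7, 0.0, W))
print("H =\n", np.where(H < 1e-7, 0.0, H))
```

Inspecting the thresholded factors shows whether the computed solution matches one of the two exact factorizations enumerated above, up to the common permutation.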
in the first case , and have the same main diagonal and superdiagonal format as , and can be written in matrix notation as \left [ \begin{array}{cc } \frac{1}{\mathbf{w}_0(1 ) } & \frac{1}{\mathbf{w}_0(1)}(\epsilon - \frac{\mathbf{w}_1(1)}{\mathbf{w}_0(2 ) } ) \\ & \frac{1}{\mathbf{w}_0(2 ) } \end{array } \right]\ ] ] the second case can be written as , where is the permutation matrix ] , $ ] , and .the solutions were categorized by the solution type in figure 2 .the distributions of the solutions by algorithm type are given in figure 2 .note that some solutions did not have two negligible entries among , , , and , in which case the smaller entry was ignored for the sake of sorting - this accounted for about 20% of the three algorithms , the majority occurring in algorithm 1 .being a matrix by where the non - negligible entries in the solution were .this chart shows how often each algorithm generated a solution of each type out of 100 cases .type ii ( in which is diagonal ) was the most common among all the algorithms , but by differing amounts . ]it is significant to note that even the solutions that did nt fall cleanly into a `` type '' still satisfied the pattern given in ( 12 ) .it seems that an nmf algorithm should satisfy this pattern , but little more is required .next , entries in and , were changed as in the case .as long as the entries were ( as opposed to or ) , the behavior of the algorithms was similar .finally , larger than were examined .several different sizes of matrices were tested , ranging from to , always keeping , , and square , with positive entries only on the main diagonal and the superdiagonal .the experiments followed the same general pattern ; nonzero entries in and appeared only on the main diagonal and superdiagonal . using similar logic to the and cases , it can be shown that these are the only exact solutions . however , in practice , as the matrices get larger , exceptions to this pattern become more common , particularly in algorithm 3 .the general rule seems to mostly hold ( over half the time ) until becomes around .note , however , that because the run - time of the algorithms are cubic in the size of the matrix , at best , the sample size for large matrices is small .since all three algorithms , which cover a variety of approaches to nmf , had a lot in common in their solutions , it is propose that these inputs could be used as a test case of an nmf algorithm implementation . in this section , it is proposed how such test cases could be executed .the test begins with input of the form is square , and preferably somewhere between and in size , although bigger inputs may be useful as well .the entries should vary between tests .each test should start by using so that is diagonal .the results of this test should have and monomial - only one nonzero element in each row and column .ignore entries that are below , for the entirety of testing , as any such entries are negligible .if or is not monomial , or if the product is not equal to to within a negligible margin of error , the algorithm fails the test . otherwise , the generated solution can be used to find the permutation matrix that makes and diagonal by replacing the nonzero entries of with 1 s .since is diagonal , is also diagonal , and since is diagonal , so is . 
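A minimal helper for this step of the test might look as follows; the cutoff used to decide which entries are negligible is again an illustrative choice, and the factor names are those of the diagonal (epsilon = 0) test round.

```python
# Check that a factor from the diagonal test round is monomial and recover
# the permutation matrix P by replacing its non-negligible entries with ones.
import numpy as np

def support(M, tol=1e-7):
    return (np.abs(M) > tol).astype(float)

def is_monomial(M, tol=1e-7):
    S = support(M, tol)
    return bool(np.all(S.sum(axis=0) == 1) and np.all(S.sum(axis=1) == 1))

def permutation_from(W, tol=1e-7):
    if not is_monomial(W, tol):
        raise ValueError("factor is not monomial: the algorithm fails the test")
    return support(W, tol)   # P has ones exactly where W is non-negligible

# usage with the factors returned from the diagonal test case:
# P = permutation_from(W)
# W @ P.T and P @ H should then be diagonal if the factorization is exact
```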
knowing will make the rest of the testing much simpler since it is easier to identify whether a solution is of the form given above when it is not permuted .next , run the test again using a positive value for ; seems to work well , although using a variety of is also recommended .make sure to use the same random seeds that were used in the test to produce corresponding output . then check that the and given by the algorithm are such that and nonzero entries only on the two diagonals that they are supposed to .if this does nt hold , changing might have changed which permutation returns and to the proper form , so check again ; this happens more commonly among larger matrices than smaller ones .however , if and really do break the form , or , the algorithm fails the test on this input .otherwise , it passes .note that even widely accepted algorithms do fail these tests occasionally , especially with matrices larger than , so it s advisable to perform the test many times to get a more accurate idea of an algorithm s performance .this paper proposes an approach to the problem of testing nmf algorithms by running the algorithms on simple input that can produce an exact non - negative factorization , and perturbations of such input .in particular , square matrices with entries on the main diagonal and entries on the superdiagonal are proposed , because they have exact solutions that can enumerated mathematically , or because they are perturbations of matrices with exact solutions .the test cases have been used as input on three known nmf algorithms that represent a variety of algorithms , and all of them behaved similarly , which suggests testable , quantifiable behaviors that many nmf algorithms share .these test cases offer one approach for testing candidate nmf implentations to help determine whether it behaves as it should .the authors would like to thank dr .alan edelman for providing and overseeing this research opportunity , and dr .vijay gadepally for his advice and expertise .rainer gemulla , erik nijkamp , peter j. haas , and yannis sismanis , _ large - scale matrix factorization with distributed stochastic gradient descent _ , proceedings of the 17th acm sigkdd international conference on knowledge discovery and data mining ( new york , ny , usa ) , kdd 11 , acm , 2011 , pp .6977 .daniel d. lee and h. sebastian seung , _ algorithms for non - negative matrix factorization _ , advances in neural information processing systems 13 ( t. k. leen , t. g. dietterich , and v. tresp , eds . ) , mit press , 2001 , pp. 556562 .suvrit sra and inderjit s. dhillon , _ generalized nonnegative matrix approximations with bregman divergences _ , advances in neural information processing systems 18 ( y. weiss , b. schlkopf , and j. c. platt , eds . ) , mit press , 2006 , pp . 283290 .wenwu wang , _ instantaneous vs. convolutive non - negative matrix factorization : models , algorithms and applications _, machine audition : principles , algorithms and systems : principles , algorithms and systems ( 2010 ) , 353 .
|
non - negative matrix factorization ( nmf ) is a problem with many applications , ranging from facial recognition to document clustering . however , due to the variety of algorithms that solve nmf , the randomness involved in these algorithms , and the somewhat subjective nature of the problem , there is no clear `` correct answer '' to any particular nmf problem , and as a result , it can be hard to test new algorithms . this paper suggests some test cases for nmf algorithms derived from matrices with enumerable exact non - negative factorizations and perturbations of these matrices . three algorithms using widely divergent approaches to nmf all give similar solutions over these test cases , suggesting that these test cases could be used as test cases for implementations of these existing nmf algorithms as well as potentially new nmf algorithms . this paper also describes how the proposed test cases could be used in practice .
|
direct numerical approaches based on molecular interactions have become standard computational , as well as modeling , tools nowadays for modeling molecular structures . for dynamics problems ,the trajectory of each atom can be described by the newton s equations of motion , this approach is the essences of the molecular dynamics ( md ) modeling .the interatomic potential embodies the interactions between particles ( atoms ) through the changes of bond lengths , bond angles , dihedral angles , electrostatics , van der waals etc .direct md simulations capture all the physics in a biological system , but they particularly suited for studying small scale transitions due to the computational complexity . meanwhile , most biological processes are intrinsically multiscale : the overall dynamics consists of large number of atoms associated with many different types of motions , spanning a wide range of time scales .in fact , typical biological functions begin at the time scale , which is far beyond the reach of direct md simulations . to overcome this significant modeling difficulty , much effort has been devoted to developing coarse - grained ( cg ) molecular models to access processes on a longer time scale .problems of this type have been identified as one of the most important and challenging problems in molecule modeling .one of the key components in a cg model is to find out the direct interaction of the cg variables , represented , , by the many - body potential of mean force ( pmf ) . in a cg approach , this interaction , in terms of forces , can in principle be obtained by integrating out the remaining degrees of freedom .however in practice , approximation schemes have to be introduced , and the main issue for pmf is to ensure the consistency with the original full molecular interaction as well as to control the accuracy .we refer to the reviews for the recent progress and existing issues .the calculation of pmf is often formulated based on a thermodynamic consideration . in particular , one considers a system where the remaining degrees of freedom are at a conditional equilibrium .another remarkable approach is through the generalized langevin equations ( gle ) , which can be derived directly from the equations of motion using the mori - zwanzig ( mz ) projection formalism .the mode has been considered by many researchers over the years .the mz projection procedure , when the conditional expectation is used as projector , yields an averaged force , which is consistent with that in the pmf approach .in addition , the formalism gives rise to a _ history - dependent term _ , which with reasonable approximations , simplifies to a linear convolutional term with a memory function , and a _ random noise _term , which is consistent with the memory function via the second fluctuation - dissipation theorem ( fdt ) . the main practical difficulty in implementing the gle is the computation of the memory function . in some cases ,markovian approximations can be made to reduce the gle to langevin equation , or one may simply use exponential functions , assuming a rapid decay .however , it is difficult to quantify and control the modeling error in such an ad hoc approximation . 
a more systematic approach is to related the memory function to correlation functions , , the velocity auto - correlation function ( vacf ) , which is computed from equilibrium md simulations .for instance , berkowitz et al considered a gle where the mean force term is linear , and then derived an integral equation of volterra type for the memory function . as input of the integral equation ,the correlation function of the velocity and position are obtained from md experiments .this has been the approach followed by many other groups . in general, the calculation of vacf tends to be expensive due to the large size of the system .more importantly , the sampling of the random noise is still a challenge . in this paper, we propose a more efficient approach to obtain the memory functions without performing direct md simulations .the method for computing the kernel functions is based on the krylov - subspace method , motivated by the numerical methods for evaluating matrix functions .we will present the algorithm , and detailed implementation procedure . as will be shown, this approach offers the added advantage that the random force term can be approximated in the same subspace , and it automatically satisfy the second fdt .it is important to point out that the memory functions will depend on how the cg variables are selected , and what reduction procedure is used .the point will be illustrated and clarified using two reduction methods , and three different selection schemes for the cg variables .the rest of the paper is organized as follows : we first discuss the reduction method of mori - zwanzig , from which we derive the exact expression of the memory functions .then , we present an efficient numerical algorithm to compute these functions .examples are given in the following section to demonstrate the effectiveness of the methods .the generalized langevin ( gle ) models can be derived from many different coarse graining procedures , , by using appropriate linearization procedure .a more systematic procedure is the mori - zwanzig projection formalism . herewe will consider two different projection operators , and derive two types of gles models .in particular , we derive an explicit expression for the memory function .we start with the full molecular dynamics ( md ) model , here denotes the position of _ all _ the atoms .further , we let be the velocity .let us introduce a scaling , this reduces the equation to which is expressed in a vector form .the coarse - graining procedure will be applied to these rescaled equations . in particular , the position will be mass weighted .the collective motions are often represented in terms of the dynamics of a number of coarse - grained variables .we will define such variables through a projection to a subspace . toward this end , we let be the entire configuration space , and be a subspace with dimension ; .specific examples of such subspaces will be discussed later . to derive explicit formulas ,let us choose a set of orthonormal basis vectors of , denoted here by by grouping these vectors , we form a matrix .further , we let be an orthonormal basis for the orthogonal complement of the subspace , denoted by .they form a matrix . in practice , it is often difficult , if not impossible , to construct the matrix . nevertheless , we will use this set of basis to express certain functions , and then we will discuss how to approximate these functions without actually computing . 
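for a small toy system , the subspace matrices introduced here can be constructed explicitly , which is handy for checking the approximations that follow . the sketch below is illustrative only : rigid - block translations stand in for a full rtb - type basis , and the complement basis is obtained by brute force , which is feasible only for small systems .

```python
import numpy as np

def block_translation_basis(n_particles, dim, blocks):
    """Orthonormal basis spanning the rigid translations of each block
    (a toy stand-in for an RTB-type construction; rotations are omitted)."""
    n_dof = n_particles * dim
    cols = []
    for block in blocks:                       # block = list of particle indices
        for d in range(dim):
            v = np.zeros(n_dof)
            for i in block:
                v[i * dim + d] = 1.0
            cols.append(v / np.linalg.norm(v))
    return np.column_stack(cols)               # disjoint blocks -> columns already orthonormal

def orthogonal_complement(Phi):
    """Explicit orthonormal basis Psi of the orthogonal complement (small systems only)."""
    proj = np.eye(Phi.shape[0]) - Phi @ Phi.T  # projector onto the complement
    U, s, _ = np.linalg.svd(proj)
    return U[:, s > 0.5]                       # columns with singular value ~1 span the range

# toy example: 6 particles in 3D, two blocks of three particles each
Phi = block_translation_basis(6, 3, blocks=[[0, 1, 2], [3, 4, 5]])
Psi = orthogonal_complement(Phi)
assert np.allclose(Phi.T @ Phi, np.eye(Phi.shape[1]))   # orthonormal columns
assert np.allclose(Phi.T @ Psi, 0.0)                     # complementary subspaces
assert Phi.shape[1] + Psi.shape[1] == Phi.shape[0]

# coarse-grained variables (defined in the next paragraph): q = Phi.T @ (x - x_eq),
# p = Phi.T @ v, with x, v the mass-weighted positions and velocities.
```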
to proceed , we define the cg variables through the projection to the subspace : where is the displacement to the equilibrium state .the displacement is often easier to work with , and we further switch the notation to since all the columns in are unit vectors , can be regarded as average position and average velocity , respectively .similarly , one can define and ; .they represent the additional degrees of freedom , referred to as _ under - resolved variables _ , and they will not appear explicitly in the cg models .it is clear now that for any or , we have a unique decomposition in the form of , the first step of the mz reduction procedure is to express the time evolution of the cg variables .this is best represented by a semi - group operator , _i.e. , _ for any dynamical variable , we have , where the operator is given by , as is customary in statistical mechanics theory , we use to denote the initial value , _ , and these differential operators are defined with respect to the initial coordinate and momentum . more specifically , the solution ( and ) of the md model at time depends on the initial condition and .such dependence defines a symplectic mapping . as a result , any dynamic variable , as a function of and ,are also functions of and .the partial derivatives in should be calculated with respect to the initial condition . in order to distinguish thermodynamic forces of different nature, one defines a projection operator , with its complementary operator given by .it can either be defined as a projection to a subspace or a conditional average .this will be discussed separately in the next section . once the dynamic variables and the projection are defined , the mori - zwanzig procedure yields the effective model , where , and , the first term on the right hand side of is typically considered as the reversible thermodynamic force . the second term represents the history dependence and provides a more general form of frictional forces .it dictates the strong coupling with the under - resolved variables .the last term , , takes into account the influence of the under - resolved variables , in the form of a random force .next , we discuss the specific forms of the memory function and the random noise for different choices of the projection operator . herewe choose the following projection : for any function , or we define , the operator is a projection since this is motivated by the galerkin method for coarse - graining md models .if the mz equation is reduced to , no memory term arises from this equation .next , we let . we will derive the cg model in several steps .first we start with the random noise . at , we find from ..\ ] ] in order to simplify this term , we introduce the approximation , in principle , one can choose .but here we let , i.e. , the hessian matrix of the potential energy at a local minimum , which has the same second order accuracy of approximation near the reference position . with this approximation, we find that , applying the operator , we get , we proceed to compute . a direct calculation yields , which by a similar approximation , can be written as , similarly , repeating such calculations , we find that the random noise can be approximated by ,\ ] ] where with . 
this can be verified by examining the taylor expansion of the trigonometric functions .we now turn to the function with the approximation of , we obtain , to further simplify this , we make another approximation that in this expression , which leads to , this simplifies the integral to a convolutional form , where the matrix function is given by , collecting terms , we obtain the gle , the first term in the gle is related to the inter - molecular force as follows : another choice of the projection operator is the conditional expectative , which for the canonical ensemble , is given by , \\ & \stackrel{\text{def}}{=}\frac{\dsp\int_{\mathbb{r}^{6n } } g(\bm x,\bm v)e^{-\beta[v(\bm x)+\half\bm v^2]}\delta(\bm q-\phi^t\bm x)\delta(\bm p-\phi^t \bm v)d\bm xd\bm v}{\dsp\int_{\mathbb{r}^{6n}}e^{-\beta[v(\bm x)+\half \bm v^2]}\delta(\bm q-\phi^t \bm x)\delta(\bm p-\phi^t\bm v ) d\bm xd\bm v } \end{split}\ ] ] here is the inverse temperature , and the delta functions are introduced to enforce the given conditions. again we start with the construction of the random noise in the mz equation . herewe introduce two approximations .first , we let be an approximate hessian of the potential energy , and we approximate the projection by , } \delta(\bm q - \phi^t \bm x ) \delta(\bm p - \phi^t \bm v ) d\bm x d\bm v } { \dsp\int_{\mathbb{r}^{6n } } e^{-\beta[\half \bm x^t a \bm x + \half \bm v^2 ] } \delta(\bm q - \phi^t \bm x ) \delta(\bm p - \phi^t \bm v ) d\bm x d\bm v}. \end{split}\ ] ] as a result , the expectation is with respect to a multi - variant gaussian distribution .the second approximation also involves the same linearization used in the previous section , to facilitate the following calculations , we define projection matrices , in particular , we have , and with the approximation , we have , therefore , the projection operator has been turned into a matrix - vector multiplication .the following identities can be easily verified , we proceed to compute the random noise . at , by invoking the two approximations , we find that , in addition , we have , repeating these steps , we have , again we defined .these calculations suggest that the random noise may be approximated by , , \end{split}\ ] ] which can be validated by checking each term in the taylor series . with the approximation of , we can approximate by , here we have used the first and third identities in . similar to the previous section , we neglect the second term using . as a result, we obtain a memory function , further , the memory term is reduced to a convolutional integral , notice that the memory function involves the coarse - grained momentum instead of the coarse - grained coordinate . using the matrix identity , we can simplify the memory function to , to get some insight , we let the eigenvalues of be , and let be the associated eigenvectors .then , we can express the kernel function as follows , further , let .this can be interpreted as the residual error , when is viewed as the approximate eigenvalue of obtained by a projection to the orthogonal complement .a direct substitution yields , intuitively , when the eigenvalues are well approximated within the initial subspace , they make less contribution to the memory function . 
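both memory kernels above are matrix functions built from trigonometric functions of the projected hessian ( the matrix $\Psi^{T} A \Psi$ in the notation above ) , pre - and post - multiplied by coupling factors that depend on the projection . for a small system they can be evaluated by brute force through an eigendecomposition , which is the reference computation used later . the sketch below does exactly that , with the coupling factor left as a placeholder $G$ , since the precise factor is given by the formulas above .

```python
import numpy as np

def trig_matrix_functions(M, t):
    """cos(sqrt(M) t) and sqrt(M)^{-1} sin(sqrt(M) t) for a symmetric PSD matrix M,
    evaluated by brute-force eigendecomposition (the reference computation)."""
    mu, V = np.linalg.eigh(M)
    mu = np.clip(mu, 0.0, None)                # guard against tiny negative round-off
    w = np.sqrt(mu)
    safe_w = np.where(w > 1e-12, w, 1.0)
    cos_t = V @ np.diag(np.cos(w * t)) @ V.T
    sinc_t = V @ np.diag(np.where(w > 1e-12, np.sin(w * t) / safe_w, t)) @ V.T
    return cos_t, sinc_t

def memory_kernel(G, M, t):
    """Generic kernel of the form theta(t) = G cos(sqrt(M) t) G^T.
    G is a placeholder for the projection-dependent coupling factor."""
    cos_t, _ = trig_matrix_functions(M, t)
    return G @ cos_t @ G.T

# hypothetical usage with Phi, Psi as in the earlier sketch and A the Hessian at the minimum:
#   theta_t = memory_kernel(Phi.T @ A @ Psi, Psi.T @ A @ Psi, t=0.05)
# (both arguments are placeholders; substitute the exact expressions derived above.)
```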
with the condition expectation chosen as the projection operator ,the first term in the mz equation has a natural interpretation .to explain this , we define the free energy by integrating out the under - resolved variables , then the first term in coincides with the mean force .now we can collect all the terms and the gle is expressed as , with the approximation of the probability density , we see that and follow the conditional distribution , },\ ] ] where . in addition, we have therefore , the random process in is a gaussian process .furthermore , with direct calculation , we can verify that it is stationary with zero mean and it satisfies the second fluctuation - dissipation theorem ( fdt ) , based on the theory of gaussian processes , is uniquely determined by the correlation function .thus , the gle is closed .the fdt a critical property of the generalized langevin model .it is a necessary condition to ensure that the system will approach to a thermodynamic equilibrium .therefore , it is also important to preserve this condition at the level of numerical approximations .this will be discussed in the next section .in contrast , the random noise derived from the previous section is not stationary .however , notice that using integration by parts , one can show that the memory functions and random noises in the gles and can be related to one another . for the rest of the paper , we will focus on the gle and the memory function .the function can be computed using a similar procedure .in most of previous works , the memory functions are computed from molecular dynamics simulations . in this paper , we present another approach , based on the analytical expression of the kernel . due to the matrix function form , we will use the krylov subspace approximation , a popular method for computing matrix functions .next , we explain the general idea , and address some implementation issues .we first consider the approximation of to illustrate the idea .recall that , and so consider the vector for some , , and we define the krylov subspace with _ order _ , with the standard lanczos algorithm , we can construct orthogonal basis vectors ] .we then have similarly , we have , we now turn to the random noise , which can also be sampled within the krylov subspace .more precisely , we state that , let be given by , where and are independent normal random variables with zero mean and variance and , respectively , then is stationary random noise with zero mean and the correlation is given by , as a result , the sampling of the random force is reduced to the sampling of low - dimensional quantities and .more importantly , the approximate random force and memory function still satisfy the fluctuation - dissipation theorem .in this section , we present some numerical results . as an example , we choose a hiv-1 protease whose pdb i d is 1dif .the protein contains 198 residues and 3128 atoms .the cartoon picture of the structure is shown in fig .[ pdbfig ] .the kernel functions depend on the choice of the coarse - grained variables .in particular , it depends on the initial subspace . here ,three different subspaces are considered : * * subspace - i : * the subspace spanned by the rtb basis corresponding to the translations and rotations of rigid blocks .the partition of the blocks is obtained from the partition scheme first .the implementation was done by using the software proflex .the dimension of the subspace is 380 . 
* * subspace - ii : * the subspace generated by the rtb basis functions with each residue as a rigid block .there are 1188 basis functions in total . * * subspace - iii : * the subspace spanned by 540 low frequency modes , obtained from the principle component analysis ( pca ) . to obtain the basis functions ,trajectories are generated from direct molecular dynamics simulations .these basis functions may not be localized .nonetheless , we still choose this subspace due to its importance in dimension reduction . for each subspace, we use the krylov subspace methods and compute the approximate memory functions in ( [ approx3 ] ) . for comparison, we also computed the exact memory function using brutal force .the kernel functions have the unit of . in fig .[ fig1 ] - [ fig5 ] , we show the profiles of the entries , and , of the kernel function within a time period of 0.1ps obtained from different computational methods and different coarse grained subspaces . based on these figures, we can see that the krylov space method produces good approximations of the kernel functions , especially at the beginning period .another observation is that these memory functions do not exhibit fast decay at this scale .instead , they exhibit many oscillations , which indicate that a markovian or exponential approximation is premature .currently the order of the krylov subspace in these examples are 4 .if we increase the order of the krylov subspace , the approximations will further improve , see fig .[ fig6 ] . and of the exact kernel function ( lines without markers ) produced by brutal - force computation according to ( [ kernel ] ) and approximated kernel ( [ approx3 ] ) using the krylov space method ( lines with markers ) with order 4 .these two entries are corresponding to the correlations of the noises in the first two translational modes of the first rigid block.[fig1],width=340,height=188 ] ) and the approximated kernel ( [ approx3 ] ) using the krylov space method and subspace - ii ( lines with markers ) .these two entries are corresponding to the correlations of the noises in the first two translational modes of the first rigid block .the order of the krylov space is 4.,width=340,height=188 ] ) using krylov space method ( lines with markers ) with order 4 .these two entries are corresponding to the correlations of the noises in the first two rotational modes of the first rigid block.,width=340,height=188 ] fig .[ fig2 ] also indicates that the memory functions for the residue - based subspaces look smoother .this is because the residue - based subspaces admit more low frequency modes than those of rigid bodies from the partitions of proflex .next , we consider the same type of partitions ( subspace -ii based on residues ) , but with different block sizes . in particular , we first start with a fine partition , in which each residue is a block .we then form a coarser partition , where there are 3 residues in each block ( it is clear that this partition is not based on the flexibility of the molecule ) .one observes from fig .[ fig7 ] that the memory functions become smaller for the coarser partition .to further confirm this observation , we divide the entire system equally into 22 blocks with 9 residues in each block . 
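all of the approximate kernels compared here are produced by the lanczos / krylov - subspace step described in the previous section ; a minimal sketch of that step is given below , assuming a symmetric positive semi - definite matrix $M$ , a starting vector $v$ , and a small order $m$ ( no breakdown handling , illustration only ) .

```python
import numpy as np

def lanczos(M, v, m):
    """m-step Lanczos: orthonormal V_m and tridiagonal T_m with M V_m ~ V_m T_m
    on the Krylov subspace span{v, M v, ..., M^(m-1) v}.  No breakdown handling."""
    V = np.zeros((v.size, m))
    alpha, beta = np.zeros(m), np.zeros(max(m - 1, 0))
    V[:, 0] = v / np.linalg.norm(v)
    for j in range(m):
        w = M @ V[:, j]
        alpha[j] = V[:, j] @ w
        w = w - alpha[j] * V[:, j]
        if j > 0:
            w = w - beta[j - 1] * V[:, j - 1]
        if j < m - 1:
            beta[j] = np.linalg.norm(w)
            V[:, j + 1] = w / beta[j]
    T = np.diag(alpha) + np.diag(beta, 1) + np.diag(beta, -1)
    return V, T

def cos_sqrt_apply(M, v, t, m=4):
    """Krylov approximation of cos(sqrt(M) t) v with a subspace of order m."""
    V, T = lanczos(M, v, m)
    mu, U = np.linalg.eigh(T)                  # small m x m eigenproblem
    mu = np.clip(mu, 0.0, None)
    fT = U @ np.diag(np.cos(np.sqrt(mu) * t)) @ U.T
    return np.linalg.norm(v) * (V @ fT[:, 0])  # ||v|| * V_m f(T_m) e_1
```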
we also form a 6-block partition , each of which contains 33 residues . the results , shown in fig . [ fig8 ] , exhibit the same trend : as we coarse - grain more and more , the memory functions become smaller and smaller .
in this paper , we have presented a methodology to compute memory functions , which are important parameters in the generalized langevin model . computing such memory functions directly from molecular dynamics simulations would require extensive effort . in contrast , the method proposed here relies on a technique from numerical linear algebra , and it can be implemented without performing molecular simulations . we have also demonstrated that , under the current framework , the random noise term in the generalized langevin equation can be consistently approximated . to our knowledge , none of the existing methods offers such an advantage . together with the average force , the generalized langevin equation can be solved to describe the collective motion of the system . this is work in progress .
the work has been partially supported by nsf grants dms-1109107 , dms-1216938 , and dms-1159937 . this work was initiated during chen 's visit to the department of mathematics at the pennsylvania state university ; he would like to thank the department for its hospitality . chen was supported by the china nsf ( nsfc11301368 ) and the nsf of jiangsu province ( bk20130278 ) .
|
we present a numerical method to compute the approximation of the memory functions in the generalized langevin models for collective dynamics of macromolecules . we first derive the exact expressions of the memory functions , obtained from projection to subspaces that correspond to the selection of coarse - grain variables . in particular , the memory functions are expressed in the forms of matrix functions , which will then be approximated by krylov - subspace methods . it will also be demonstrated that the random noise can be approximated under the same framework , and the fluctuation - dissipation theorem is automatically satisfied . the accuracy of the method is examined through several numerical examples .
|
the excellent error correction performance of codes , alongside with the availability of low - complexity and highly parallel decoding algorithms and hardware architectures makes them an attractive choice for many high throughput communication systems .codes are traditionally decoded using iterative algorithms like the algorithm and variants thereof , most notably the algorithm .those conventional algorithms rely on the exchange of continuous messages , which are usually quantized with resolutions of to bits in most in hardware implementations .lower resolutions are possible but entail severe performance penalties , especially in the error - floor region . previous work on quantized algorithms for decoding has shown that decoders which are designed to operate directly on message alphabets of finite size can lead to improved performance .there are numerous different approaches towards the design of such decoders .for example , the authors of , and consider based update rules that are designed such that the resulting decoders can correct most of the error events contributing to the error floor .however , their design is restricted to codes with column weight and to binary output channels . in a quasi - uniform quantization was proposed which extends the dynamic range of the messages at later iterations and improves the error floor performance .however , the design of still relies on the conventional message update rules and therefore does not reduce the required message bit - width .finally , the authors of consider message updates based on an information theoretic fidelity criterion .while , , and analyze the performance of their decoding schemes by means of simulations , only provides density evolution results and focuses solely on the algorithm for designing the message update rules . to the best of our knowledge ,none of the above schemes have been assessed in terms of their impact on hardware implementations . in this paper , we derive a low - complexity decoding algorithm that is designed to directly operate with a finite message alphabet and that manages to achieve better error - rate performance than conventional algorithms with message resolutions as low as bits . based on this algorithm , we synthesize a fully unrolled decoder and compare our results with our implementation of the only existing fully unrolled decoder .our approach for the design of the update rule is similar to , but we use a more sophisticated tree structure as well as a different update rule .a -regular code is the set of codewords where all operations are performed modulo .the parity check matrix contains ones per column and ones per row and is sparse in the sense that .the parity - check matrix forms an incidence matrix for a tanner graph which contains and .variable node is connected to check node if and only if . codes are traditionally decoded using algorithms , where information is exchanged between the and the over the course of several decoding iterations .let the message alphabet be denoted by . for simplicity , in this workwe assume that does not change over the iterations . at each iterationthe messages from to are computed using the mapping , which is defined as where denotes the neighbours of node in the tanner graph , is a vector that contains the incoming messages from all neighboring except , and denotes the channel corresponding to . 
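as a concrete illustration of this notation , the short sketch below extracts the neighbour sets of the tanner graph from a small toy parity - check matrix ( not the code used in the later experiments ) .

```python
import numpy as np

# toy parity-check matrix of a (2,4)-regular code: 8 variable nodes, 4 check nodes
H = np.array([[1, 1, 1, 1, 0, 0, 0, 0],
              [0, 0, 0, 0, 1, 1, 1, 1],
              [1, 0, 1, 0, 1, 0, 1, 0],
              [0, 1, 0, 1, 0, 1, 0, 1]])

# Tanner-graph neighbour sets: N(c) = {v : H[c, v] = 1} and N(v) = {c : H[c, v] = 1}
cn_neighbours = [np.flatnonzero(row) for row in H]
vn_neighbours = [np.flatnonzero(col) for col in H.T]

assert all(len(nc) == 4 for nc in cn_neighbours)   # row weight (check-node degree)
assert all(len(nv) == 2 for nv in vn_neighbours)   # column weight (variable-node degree)
```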
similarly , the -to- messages are computed using the mapping , which is defined as illustrates the message updates in the tanner graph .in addition to and , a third mapping is needed to provide an estimate of the transmitted codeword bit based on the incoming check node messages and the channel + for the widely used algorithm , the mappings read where denotes the minimum of the absolute values of the vector components , and .the decision mapping is defined as ms algorithm assumes that the message set and the set are real numbers .however , it is impractical to use floating - point arithmetic in hardware implementations of such decoders and the message alphabets are usually discretized using a relatively low number of uniformly spaced quantization levels .this uniform quantization , together with the well - established two s complement and sign - magnitude binary encoding , leads to efficient arithmetic circuits , but it is not necessarily the best choice in terms of error - rate performance .recently , efforts have been made to devise decoders that are designed to work directly with finite message and alphabets .instead of arithmetic computations such as [ eqn : vnupdatems ] and [ eqn : cnupdatems ] , the update rules for these decoders are implemented as look - up tables ( luts ) .there are numerous approaches to the design of such luts . in the following, we provide an algorithm that is a mixture between the conventional algorithm and purely -based decoders .more specifically , we only replace the update rules with , which are designed using an information theoretic metric . for the design of the, we exploit the fact that the outputs of the -based , although not representing real numbers , can be ordered and for symmetric channels , the message sign can be directly inferred from the labels , cf .[ subsec : quantdiscussion ] .this allows us to use the standard update rule , thereby avoiding the high hardware complexity that a -based design would cause for codes with high degree .our hybrid algorithm provides excellent performance even with very few message levels and leads to an efficient hardware architecture , which is described in detail in section [ sec : architecture ] .the key idea behind the lut design method that we employ is that , given the cn - to - vn message distributions of the previous iterations , one can design the vn luts for each iteration in a way that maximizes the mutual information between the vn output messages and the codeword bit corresponding to the vn in question .we first describe how the distribution of the cn - to - vn messages can be computed based on the distribution of the incoming cn - to - vn messages. if the tanner graph is cycle - free , then the individual input messages of a at iteration are iid conditioned on the transmitted bit , and their distribution is denoted by .then , the joint distribution of the incident messages conditioned on the transmitted bit value corresponding to the recipient ( cf . fig . [ fig : message_updates ] ) reads where denotes the modulo- sum of the components of . using the update rule [ eqn : cnupdatems ] , the distribution of the outgoing -to- message is then given by where .the output message values are given by conventional decoding algorithms need a high dynamic range in order to represent the growing message magnitudes , as they are using the same message representation for every iteration . 
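a plain floating - point reference of the min - sum updates and the hard decision above might look as follows ( a simplified sketch , not the hardware description ; the sign convention assumes that positive llrs favour the bit value 0 ) .

```python
import numpy as np

def cn_update_minsum(incoming):
    """Min-sum check-node update: for each edge, the sign product and the minimum
    magnitude over all *other* incoming variable-to-check messages."""
    incoming = np.asarray(incoming, dtype=float)
    out = np.empty_like(incoming)
    for i in range(len(incoming)):
        others = np.delete(incoming, i)
        out[i] = np.prod(np.sign(others)) * np.min(np.abs(others))
    return out

def vn_update(incoming, llr_ch):
    """Variable-node update: channel LLR plus all *other* incoming check-to-variable
    messages, i.e. a leave-one-out sum for every outgoing edge."""
    incoming = np.asarray(incoming, dtype=float)
    return (llr_ch + incoming.sum()) - incoming

def hard_decision(incoming, llr_ch):
    """Bit estimate from the channel LLR and all incoming check-to-variable messages."""
    return 0 if llr_ch + float(np.sum(incoming)) >= 0.0 else 1
```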
in our lut - based decoder ,the message representation changes from one iteration to the next and the message values grow implicitly as the distributions become more and more concentrated over the course of the iterations , thus providing an explanation for the good performance we can achieve with very low resolutions .let denote the incident -to- messages that are involved in the update of a certain ( one of which is always the channel llr ) and let be the transmit bit corresponding to this .then , the joint distribution of the vn input messages is given by given this distribution , we can construct an update rule where is the set of all deterministic mappings in the form of [ eqn : vnupdategeneric ] and denotes the mutual information between and .hence , the resulting update rule locally maximizes the information flow between the and the .an algorithm that solves [ eqn : miquantizer ] with complexity was provided in . using the update rule [ eqn : miquantizer ], we can compute the message conditional distribution of the next iteration given an initial message distribution and a distribution of the channel , the repeated alternating application of [ eqn : cnprodchannel , eqn : cnmsgdist , eqn : vnprodchannel , eqn : miquantizer , eqn : vnmsgdist ] produces a sequence of locally optimal update mappings , where denotes a pre - determined maximum number of performed iterations . as the mappings take inputs , a direct application of the algorithm described in section [ subsec : it ] is restricted to low weight codes .however , we can construct a hierarchy of mappings where each partial mapping only processes a subset of the inputs and the intermediate outputs of preceding stages .the quantizer design for such a hierarchy follows directly by considering only the messages incident to the respective mapping in [ eqn : vnprodchannel ] and , for the intermediate nodes , replacing the distributions [ eqn : cnmsgdist ] of the incident messages by the distributions [ eqn : vnmsgdist ] of the previous stage .so far we considered the initial distributions and as given .when designing practical decoders for communication applications , the initial distributions follow from the transmission channel and the quantization of the preceding signal processing . throughout the rest of the paper ,we consider a channel followed by maximum mutual information quantization of the . in this case , the initial distributions depend on the snr , which renders the lut design snr - specific .nevertheless , we observe in our simulations that the decoder generally performs well also for snrs other than the design snr .consider the practically relevant case where and are even and the distributions and are symmetric in the sense that or equivalently , expressed in terms of the values for that case , computing the update [ eqn : cnupdatems ] is simplified as the sign follows immediately from the message labels .thus , for the update the message values do not need to be stored and the entire decoder can be implemented based on the message labels . 
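the sketch below only shows how a single candidate mapping is scored under this criterion ; the joint distribution over the enumerated incoming label vectors is assumed to come from the density - evolution recursion above , and the optimal mapping itself would be found with the dynamic - programming quantizer referenced in the text , not by this scoring routine .

```python
import numpy as np

def mutual_information(joint):
    """I(X; T) in bits for a joint pmf of shape (|X|, |T|)."""
    joint = joint / joint.sum()
    px = joint.sum(axis=1, keepdims=True)
    pt = joint.sum(axis=0, keepdims=True)
    mask = joint > 0
    return float(np.sum(joint[mask] * np.log2(joint[mask] / (px @ pt)[mask])))

def score_mapping(p_x_m, mapping, n_out):
    """Score one deterministic mapping: p_x_m has shape (2, n_inputs) over the
    enumerated incoming label vectors; mapping[i] is the output label of input i."""
    joint = np.zeros((2, n_out))
    for i, t in enumerate(mapping):
        joint[:, t] += p_x_m[:, i]
    return mutual_information(joint)

# toy usage: 8 enumerated input label combinations mapped onto 4 output labels
p_x_m = np.random.default_rng(1).dirichlet(np.ones(16)).reshape(2, 8)
candidate = [0, 0, 1, 1, 2, 2, 3, 3]
print("I(X; Phi(m)) =", score_mapping(p_x_m, candidate, n_out=4), "bits")
```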
since the discrete messages of our decoder do not represent real numbers but are labels , a simple arithmetic decision mapping such as [ eqn : minsumdec ] is not possible .instead , has to be implemented as a generic mapping as well .the construction of is similar to the construction of , with the difference that all input messages and the channel have to be processed and that the output is binary .in the previous section , we have described an algorithm that can construct locally optimal variable node update rules in the form of for a given quantization bit - width for each iteration for any given -regular ldpc code .most conventional ldpc decoder architectures are either partially parallel , meaning that fewer than vns and cns are instantiated , or fully parallel , meaning that vns and cns are instantiated .using a lut - based decoder with a carefully designed quantization scheme can significantly reduce the memory required to store the messages exchanged by the vns and cns due to the reduced message bit - width required to achieve the same fer performance . however , both for partially parallel and for fully parallel decoders , separate luts would be required within each vn for each one of the performed decoding iterations , significantly increasing the size of each vn , and thus possibly outweighing the gain in the memory area .an additional degree of parallelism was recently explored in , where a _ fully unrolled _ and fully parallel ldpc decoder was presented .this decoder instantiates vns and cns for each iteration of the decoding algorithm , leading to a total of vns and cns .while such a fully unrolled decoder requires significant hardware resources , it also has a very high throughput since one decoded codeword can be output in each clock cycle .thus , the hardware efficiency ( i.e. , throughput per unit area ) of the fully unrolled decoder presented in turns out to be significantly better than the hardware efficiency of partially parallel and fully parallel ( non - unrolled ) approaches .since in a fully unrolled ldpc decoder architecture vns and cns are instantiated for each iteration , it is a very suitable candidate for the application of our lut - based decoding algorithm . in this section ,we describe the hardware architecture of our fully unrolled lut - based ldpc decoder .our hardware architecture is similar to the architecture used in , while the most important differences are the optimized lut - based variable node and the significantly reduced bit - width of all quantities involved in the decoding process .an overview of our decoder architecture is shown in fig . 
[fig : toplevel ] .each decoding iteration is mapped to a distinct set of variable nodes and check nodes which then form a processing pipeline .in essence , a fully unrolled and fully parallel ldpc decoder is a systolic array in which data flows from left to right .a new set of channel llrs can be read in each clock cycle , and a new decoded codeword is output in each clock cycle .the decoding latency as well as the maximum frequency depend on the number of performed iterations as well as the number of pipeline registers present in the decoder .our decoder consist of three types of stages , namely the cn stage , the vn stage , and the dn stage , which are described in detail in the sequel .as long as a steady flow of input channel llrs can be provided to the decoder , there is no control logic required apart from the clock and reset signals .each cn stage contains check node units , as well as -bit registers which store the check node output messages , where denotes the number of bits used to represent the internal decoder messages .moreover , each cn stage contains -bit channel llr registers which are used to forward the channel llrs required by the following variable node stages , where denotes the number of bits used to represent the channel llrs . due to, we can use a check node architecture which is practically identical to the check node architecture used in . more specifically , each check node consists of a sorting unit that identifies the two smallest messages among all input messages and an output unit which selects the first or the second minimum for each output , along with the appropriate sign .the sorting unit contains 4-input compare - and - select ( cs ) units in a tree structure , which identify and output the two smallest values out of the four input values .we use sign - magnitude ( sm ) to represent all message labels . the sm2tc unit used in the check node of not required in our architecture since the variable node does not perform any arithmetic operations where the two s complement representation could be favorable .each stage contains variable node units , as well as -bit registers that store the variable node output messages .moreover , each vn stage contains -bit channel llr registers which are used to forward the channel llrs required by the following vn stages . in the variable node architecture used in the adder - based decoder of ,all input messages are added and then the input message corresponding to each output is subtracted from the sum in order to form the output message , thus implementing the conventional ms update rule given in . in order to avoid overflows , in our implementation of the bit - width of the internal signals is increased by one bit for each addition . for our lut - based decoderthe adder tree is replaced by lut trees , each of which computes one of the outputs of the variable node .one possible lut - tree structure is shown in fig .[ fig : vnodelut ] , where denotes an internal message from a check node and denotes the channel llr .lut sharing between the lut trees can be achieved by identifying the nodes that appear in more than one tree and instantiating them only once , thus significantly reducing the required hardware resources . 
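to make the lut - tree idea concrete , the following behavioural sketch ( software , not rtl ) evaluates one reduced tree with two incoming messages plus the channel label ; a real variable node for this code has more inputs and hence more levels , and the table contents here are random placeholders rather than the information - maximizing tables of the previous section .

```python
import numpy as np

class LutNode:
    """A table-based node: maps a tuple of small input labels to one output label."""
    def __init__(self, table, in_bits):
        self.table = table                  # flat array of output labels
        self.in_bits = in_bits              # bit-width of each input label

    def __call__(self, *labels):
        idx = 0
        for lab in labels:                  # concatenate the input labels into an index
            idx = (idx << self.in_bits) | int(lab)
        return int(self.table[idx])

rng = np.random.default_rng(0)
q = 3                                       # 3-bit message labels (placeholder width)
leaf = LutNode(rng.integers(0, 2 ** q, size=2 ** (2 * q)), in_bits=q)  # two messages -> one label
root = LutNode(rng.integers(0, 2 ** q, size=2 ** (2 * q)), in_bits=q)  # (leaf output, channel) -> edge label

def vn_lut_tree(msg_a, msg_b, llr_ch_label):
    """One variable-node output message computed by a two-level LUT tree."""
    return root(leaf(msg_a, msg_b), llr_ch_label)

print(vn_lut_tree(5, 2, 7))
```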
moreover , keeping the number of inputs of each lut as low as possible ensures that the size of the luts , which grows exponentially with the number of inputs , is manageable for the automated logic synthesis process .the variable node that corresponds to the final decoding iteration is called a _ decision node _the dn stage contains decision nodes , as well single - bit registers that store the decoded codeword bits .the dn stage does not contain channel llr registers , as there are no subsequent decoding stages where the channel llrs would be used .the architecture of a decision node is generally simpler than that of a variable node , as a single output value ( i.e. , the decoded bit ) is calculated instead of distinct outputs .more specifically , in the architecture of , the decision metric of is already calculated as part of the variable node update rule .however , for the decision node , there is no need to subtract each input message from the sum in order to generate distinct output messages .it suffices to check whether the sum is positive or negative , and output the corresponding decoded codeword bit . in our lut - based decoder , as discussed in section [ sec : decstage ] , a lut tree is designed whose tree node has an output bit - width of a single bit , which is the corresponding decoded codeword bit .an example of a decision lut tree for a decision node that corresponds to a code with is shown in fig .[ fig : decnodelut ] .each decision node contains a single lut tree , in contrast with the variable nodes which contain lut trees .our lut - based architecture contains pipeline registers at the output of each stage ( vn , cn , and dn ) .thus , for a given number of decoding iterations , the decoding latency is clock cycles .since one decoded codeword is output in each clock cycle , the decoding throughput of the decoder , measured in gbits / s , is given by , where denotes the operating frequency measured in ghz .each pipeline stage except the dn stage requires an channel llr register .moreover , each vn and cn stage requires ( equivalently , ) registers to store the output messages .finally , the dn stage requires registers to store the decoded codeword bits .thus , the total number of register bits required by our lut - based decoder can be calculated as naturally , can also be used to calculate the register bits required by an adder - based ms architecture with the same pipeline register structure .in this section , we present synthesis results for a fully unrolled lut - based ldpc decoder and we compare it with synthesis results of our implementation of a fully unrolled adder - based ms ldpc decoder .we have used the parity - check matrix of the ldpc code defined in the ieee 802.3an standard ( gbit / s ethernet ) , which is a -regular ldpc code of rate and blocklength . for the fixed point decoder and the lut - based decoder , a total of decoding iterations are performed , since from fig .[ fig : fer ] we observe that increasing the number of iterations to , e.g. 
, , does not lead to a significant improvement in performance for this ldpc code . all synthesis results are obtained by using a tsmc nm cmos library under typical operating conditions .
( fig . [ fig : fer ] : fer comparison of the lut - based decoder , the 4-bit and 5-bit fixed - point min - sum decoders , and the floating - point min - sum decoder with 5 and 10 iterations . )
for the lut - based decoder , we have used bits for the representation of the channel llrs and bits for the representation of the internal messages , as this leads to an error correction performance that is very close to that of the floating - point ms decoder ( cf . fig . [ fig : fer ] ) . for the variable nodes , we use the lut tree structure of fig . [ fig : vnodelut ] and for the decision nodes we use the lut tree structure of fig . [ fig : decnodelut ] . the design snr is set to db . for the adder - based ms decoder , which serves as a reference , we use bits for the representation of the channel llrs and bits for the representation of the internal messages , as this leads to practically the same fer performance for the lut - based and the adder - based ms decoder , as can be seen in fig . [ fig : fer ] .
( table [ tab : results ] : synthesis results . )
we present synthesis results for the adder - based and the lut - based decoders in table [ tab : results ] . for fair comparison , we synthesized both designs for various clock constraints and selected the result with the highest hardware efficiency for each design . these results should not be regarded in absolute terms , as the placement and routing of such a large design is highly non - trivial and will increase the area and the delay of both designs significantly . however , it is safe to make relative comparisons , especially since the lut - based decoder will be easier to place and route because it requires approximately 40% fewer wires for the interconnect between the vn , cn , and dn stages . we observe that the lut - based decoder is approximately 8% smaller as well as 64% faster than the adder - based ms decoder . as a result , the area efficiency of the lut - based decoder is higher than that of the adder - based ms decoder . for both designs , the critical path goes through the cn , but in the lut - based decoder the delay is smaller due to the reduced bit - width . we show the area breakdown of the lut - based and the adder - based decoders in table [ tab : breakdown ] . we observe that the vn stage area of the lut - based decoder varies significantly over the iterations , even though the lut tree structures are identical . this is not unexpected , since the contents of the luts are different for different iterations and the resulting logic circuits can have very different complexities . moreover , we see that the cn stage of the lut - based decoder is approximately smaller than the cn stage of the adder - based decoder due to the bit - width reduction enabled by the optimized lut design .
the vn stage of the lut - based decoder , on the other hand , is larger than the vn stage of the adder - based decoder .however , the reduction in the cn stage is larger than the increase in the vn stage , leading to an overall reduction in area . from table [tab : breakdown ] we can see that this reduction stems mainly from the reduced number of required registers , as the area occupied by the logic of each decoder is similar .in this paper , we described a method that can be applied to design a discrete message - passing decoder for ldpc codes by replacing the standard vn update rules with locally optimal lut - based update rules . moreover , we presented a hardware architecture for a lut - based fully unrolled ldpc decoder which can reduce the area and increase the operating frequency compared to a conventional adder - based ms decoder by and , respectively , due to the significantly reduced bit - width required to achieve identical error correction performance. finally , the lut - based decoder requires approximately % fewer wires , simplifying the routing step , which is a known problem in fully parallel architectures .z. zhang , l. dolecek , b. nikolic , v. anantharam , and m. wainwright , `` design of ldpc decoders for improved low error rate performance : quantization and algorithm choices , '' _ communications , ieee transactions on _ , vol .57 , no . 11 , pp . 32583268 , nov 2009 .s. planjery , d. declercq , l. danjean , and b. vasic , `` finite alphabet iterative decoders part i : decoding beyond belief propagation on the binary symmetric channel , '' _ ieee trans . on communications _, oct . 2013 .d. declercq , b. vasic , s. planjery , and e. li , `` finite alphabet iterative decoders part ii : towards guaranteed error correction of ldpc codes via iterative decoder diversity , '' _ ieee transactions on communications _ , vol .61 , no . 10 , pp .40464057 , october 2013 .f. cai , x. zhang , d. declercq , s. planjery , and b. vasi , `` finite alphabet iterative decoders for ldpc codes : optimization , architecture and analysis , '' _ ieee transactions on circuits and systems i : regular papers _61 , no . 5 , pp .13661375 , may 2014 .p. schlafer , n. wehn , m. alles , and t. lehnigk - emden , `` a new dimension of parallelism in ultra high throughput ldpc decoding , '' in _ ieee workshop on signal processing systems ( sips ) _ , october 2013 , pp . 153158 .a. winkelbauer and g. matz , `` on quantization of log - likelihood ratios for maximum mutual information , '' in _ proc .16th ieee int .workshop on signal processing advances in wireless communications ( spawc 2015)_. 1em plus 0.5em minus 0.4emstockholm , sweden : accepted for publication , jun .`` ieee standard for information technology telecommunications and information exchange between systems local and metropolitan area networks specific requirements part 3 : carrier sense multiple access with collision detection ( csma / cd ) access method and physical layer specifications , '' ieee std .802.3an , sep . 2006 .
|
in this paper , we propose a finite alphabet message passing algorithm for ldpc codes that replaces the standard min - sum variable node update rule by a mapping based on generic look - up tables . this mapping is designed in a way that maximizes the mutual information between the decoder messages and the codeword bits . we show that our decoder can deliver the same error rate performance as the conventional decoder with a much smaller message bit - width . finally , we use the proposed algorithm to design a fully unrolled ldpc decoder hardware architecture .
|
we are concerned with the incompressible limit of solutions of multidimensional steady compressible euler equations .the steady compressible full euler equations take the form : while the steady homentropic euler equations have the form : \mbox{div}\,(\rho u\otimes u)+ \nabla p = 0 , \end{cases}\end{aligned}\ ] ] where with , is the flow velocity , is the flow speed , , , and represent the density , pressure , and total energy respectively , and is an matrix . for the full euler case , the total energy is with adiabatic exponent , the local sonic speed is and the mach number is for the homentropic case , the pressure - density relation is the local sonic speed is and the mach number is defined as the incompressible limit is one of the fundamental fluid dynamic limits in fluid mechanics .formally , the steady compressible full euler equations converge to the steady inhomogeneous incompressible euler equations : while the homentropic euler equations converge to the steady homogeneous incompressible euler equations : \mbox{div}\,(u\otimes u)+ \nabla p=0 . \end{cases } \end{aligned}\ ] ] however , the rigorous justification of this limit for weak solutions has been a challenging mathematical problem , since it is a singular limit for which singular phenomena usually occur in the limit process . in particular , both the uniform estimates and the convergence of the nonlinear terms in the incompressible models are usually difficult to obtain. moreover , tracing the boundary conditions of the solutions in the limit process is a tricky problem . generally speaking, there are two processes for the incompressible limit : the adiabatic exponent tending to infinity , and the mach number tending to zero .the latter is also called the low mach number limit . a general framework for the low mach number limit for local smooth solutions for compressible flowwas established in klainerman - majda . in particular , the incompressible limit of local smooth solutions of the euler equations for compressible fluids was established with well - prepared initial data _i.e. _ , the limiting velocity satisfies the incompressible condition initially , in the whole space or torus . indeed , by analyzing the rescaled linear group generated by the penalty operator ( _ cf ._ ) , the low mach number limit can also be verified for the case of general data , for which the velocity in the incompressible fluid is the limit of the leray projection of the velocity in the compressible fluids .this method also applies to global weak solutions of the isentropic navier - stokes equations with general initial data and various boundary conditions . in particular , in , the incompressible limit on the stationary navier - stokes equations with the dirichlet boundary condition was also shown , in which the gradient estimate on the velocity played the major role . for the one - dimensional euler equations ,the low mach number limit has been proved by using the space in . for the limit , it was shown in that the compressible isentropic navier - stokes flow would converge to the homogeneous incompressible navier - stokes flow .later , the similar limit from the korteweg barotropic navier - stokes model to the homogeneous incompressible navier - stokes model was also considered in . 
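for ease of later reference , the systems discussed above read , in their standard form ( with the homentropic pressure law normalized to $p=\rho^{\gamma}$ ) :
$$\text{full euler : }\ \operatorname{div}(\rho u)=0,\quad \operatorname{div}(\rho u\otimes u)+\nabla p=0,\quad \operatorname{div}(\rho u E+p u)=0,\quad E=\frac{|u|^{2}}{2}+\frac{p}{(\gamma-1)\rho},$$
$$\text{homentropic euler : }\ \operatorname{div}(\rho u)=0,\quad \operatorname{div}(\rho u\otimes u)+\nabla p=0,$$
$$\text{inhomogeneous incompressible limit : }\ \operatorname{div}\bar u=0,\quad \operatorname{div}(\bar\rho\,\bar u)=0,\quad \operatorname{div}(\bar\rho\,\bar u\otimes\bar u)+\nabla\bar p=0,$$
$$\text{homogeneous incompressible limit : }\ \operatorname{div}\bar u=0,\quad \operatorname{div}(\bar u\otimes\bar u)+\nabla\bar p=0.$$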
for the steady flow ,the uniqueness of weak solutions of the steady incompressible euler equations is still an open issue .thus , the incompressible limit of the steady euler equations becomes more fundamental mathematically ; it may serve as a selection principle of physical relevant solutions for the steady incompressible euler equations since a weak solution should not be regarded as the compressible perturbation of the steady incompressible euler flow in general . furthermore , for the general domain , it is quite challenging to obtain directly a uniform estimate for the leray projection of the velocity in the compressible fluids . in this paper, we formulate a suitable compactness framework for weak solutions with weak uniform bounds with respect to the adiabatic exponent by employing the weak convergence argument .one of our main observations is that the compactness can be achieved by using only natural weak estimates for the mass conservation and the vorticity , which was introduced in .another observation is that the incompressibility of the limit for the homentropic euler flow follows directly from the continuity equation , while the incompressibility of the limit for the full euler flow is from a combination of all the euler equations .finally , we find a suitable framework to satisfy the boundary condition without the strong gradient estimates on the velocity . as direct applications of the compactness framework, we establish two incompressible limit theorems for multidimensional steady euler flows through infinitely long nozzles . as a consequence, we can establish the new existence theorems for the corresponding problems for multidimensional steady incompressible euler equations .the rest of this paper is organized as follows . in 2 , we establish the compactness framework for the incompressible limit of approximate solutions of the steady full euler equations and the homentropic euler equations in with . in 3 , we give a direct application of the compactness framework to the full euler flow through infinitely long nozzles in . in 4 , the incompressible limit of homentropic euler flows in the three - dimensional infinitely long axisymmetric nozzle is established .in this section , we establish the compensated compactness framework for approximate solutions of the steady euler equations in with .we first consider the homentropic case , that is , the approximate solutions satisfy \mbox{div}\big(\rho^{(\gamma ) } u^{(\gamma)}\otimes u^{(\gamma)}\big)+ \nabla p^{(\gamma)}=e_2(\gamma ) , \end{cases}\end{aligned}\ ] ] where and are sequences of distributional functions depending on the parameter .the distributional functions , , here present possible error terms from different types of approximation . if with are the exact solutions of the steady euler flows , , are both equal to zero .moreover , the same remark is true for the full euler case , where , are the distributional functions as introduced in .let the sequences of functions and be defined on an open bounded subset such that the following qualities : & & e^{(\gamma ) } = \frac{|u^{(\gamma)}|^2}{2}+\frac{\big(p^{(\gamma)}\big)^{\frac{\gamma-1}{\gamma}}}{\gamma-1}. \end{aligned}\ ] ] can be well defined . moreover ,the following conditions hold : ( a.1 ) . are uniformly bounded by ; ( a.2 ) . and are uniformly bounded in ; ( a.3 ) . and are in a compact set in ; ( h ) . 
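step 3 below rests on the classical div - curl lemma of murat and tartar , which in its standard $L^{2}$ form states : if $u^{(\gamma)}\rightharpoonup u$ and $v^{(\gamma)}\rightharpoonup v$ weakly in $L^{2}(\Omega;\mathbb{R}^{n})$ , while $\{\operatorname{div}\,u^{(\gamma)}\}$ and $\{\operatorname{curl}\,v^{(\gamma)}\}$ stay in compact subsets of $H^{-1}_{loc}(\Omega)$ , then
$$u^{(\gamma)}\cdot v^{(\gamma)}\ \longrightarrow\ u\cdot v\qquad\text{in the sense of distributions on }\Omega.$$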
as , in the limit , the energy sequence may tend to zero .condition ( h ) is designed to exclude the case that exponentially decays to zero as .in fact , in the two applications in below , both of the energy sequences go to zero with polynomial rate so that condition ( h ) is satisfied automatically .it is noted that condition ( h ) could be replaced equivalently by a pressure condition : indeed , from ( a.1 ) and , we have which directly implies the equivalence of the two conditions . conditions ( a.1)(a.3 ) are naturally satisfied in the applications in . then we have [ thm2.1 ] let a sequence of functions and satisfy conditions ( a.1)(a.3 ) and ( h ) . then there exists a subsequence ( still denoted by ) such that , when , p^{(\gamma)}(x)\rightharpoonup \bar{p}\quad & \quad\mbox{in bounded measure . } \end{array}\ ] ] * proof*. we divide the proof into four steps . \1 . from condition ( a.2 ) , we can see that weakly converges to in measure as .now we show that _ a.e . _ in as . since , for given , we may assume .then we find by jensen s inequality that where is the lebesgue measure of .then , for , we have on the other hand , since is concave with respect to , we have which implies from the hlder inequality that moreover , from , we have which , together with and , gives note that both the left and right sides of the above inequality tend to as , owing to condition ( h ). then we have where .in particular , taking and respectively , we have this implies that are uniformly bounded in .then there exists a subsequence of ( still denoted by ) such that weakly converges to in . by a simple computation , we obtain from that that is, converges to _ a.e . _ in , as . \3 . by the div - curl lemma of murat and tartar , the young measure representation theorem for a uniformly bounded sequence of functions in ( _ cf ._ tartar ; also see ball ) , we use and ( a.3 ) to obtain the following commutation identity : where we have used that is the associated young measure ( a probability measure ) for the sequence .then the main point in the compensated compactness framework is to prove that is in fact a dirac measure , which in turn implies the compactness of the sequence .on the other hand , from we see that where is the delta mass concentrated at .we now show is a dirac measure . combining both sides of together, we have exchanging indices and , we obtain the following symmetric commutation identity : which immediately implies that , _ i.e. _ , concentrates on a single point .if this would not be the case , we could suppose that there are two different points and in the support of . then , , , and would be in the support of , which contradicts with .therefore , the young measure is a dirac measure , which implies the strong convergence of .this completes the proof . for the full euler case ,we assume that the approximate solutions satisfy \mbox{div}\big(\rho^{(\gamma ) } u^{(\gamma)}\otimes u^{(\gamma)}\big)+ \nabla p^{(\gamma)}=e_2(\gamma),\\[1 mm ] \mbox{div}\big(\rho^{(\gamma ) } u^{(\gamma ) } e^{(\gamma)}+ u^{(\gamma ) } p^{(\gamma)}\big)=e_3(\gamma ) , \end{cases}\end{aligned}\ ] ] where , , and are sequences of distributional functions depending on the parameter . in this case , the energy function is and the entropy function is so that condition ( h ) for the homentropic case is replaced by ( f.1 ) . as , ( f.2 ) . converges to a bounded function _ a.e . 
_ in as .conditions ( a.1)(a.3 ) and ( f.1)(f.2 ) in the framework are naturally satisfied in the applications for the full euler case in below .similar to theorems [ thm2.1 ] , we have [ thm3.1 ] let a sequence of functions , , and satisfy conditions ( a.1)(a.3 ) and ( f.1)(f.2 ) .then there exists a subsequence ( still denoted by ) such that , as , \rho^{(\gamma)}(x)\rightarrow \bar{\rho}(x)\quad & \quad \mbox{{\it a.e . }in}\,\,\ , x\in \omega , \\[1 mm ] u^{(\gamma)}(x)\rightarrow ( \bar{u}_1 , \cdots , \bar{u}_n)(x ) \qquad & \quad \mbox{{\it a.e . }in}\,\,\ , x \in \{x\,:\ , \bar{\rho}(x)>0 , x\in \omega\}.\nonumber \end{array}\ ] ] * proof*. we follow the same arguments as in the homentropic case . first , the weak convergence of is obvious . on the other hand , we observe that and still hold for the full euler case .then , for any , taking and respectively and following the same line of argument as in the homentropic case , we conclude that converges to _ a.e . _ in as .then , from condition ( f.2 ) , converges to _ a.e . _ in .the remaining proof is the same as that for the homentropic case , except the strong convergence of only stands on since the vacuum can not excluded .this completes the proof .[ rem4 ] consider any function satisfying where in the distributional sense as .the similar statement is also valid for theorem , via replacing by .then , as direct corollaries , we conclude the following propositions .[ thm2.2 ] let and be a sequence of approximate solutions satisfying conditions ( a.1)(a.3 ) and( h ) , and in the distributional sense for .then there exists a subsequence ( still denoted by ) that converges _ a.e ._ to a weak solution of the homogeneous incompressible euler equations as : * proof*. from theorem , we know that converges to as . for the approximate continuity equation, we see that , for any test function , letting , we conclude which implies in the distributional sense . with a similar argument , we can show that holds in the distributional sense .[ thm3.2 ] let , , and be a sequence of approximate solutions satisfying conditions ( a.1)(a.3 ) and ( f.1)(f.2 ) , and in the distributional sense as .then there exists a subsequence ( still denoted by ) that converges a.e . to a weak solution of the inhomogeneous incompressible euler equations as : \mbox{\rm div}(\bar{\rho}\bar{u})=0,\\[1 mm ] \mbox{\rm div}(\bar{\rho}\bar{u}\otimes\bar{u})+\nabla\bar{p}=0 .\end{cases}\ ] ] * proof*. from a direct calculation , we have then , for any test function , we find taking , we have which implies in the distributional sense .the fact that and hold in the distributional sense can be shown similarly from as , , respectively .[ rem6 ] the main difference between propositions and is that , when , the compressible homentropic euler equations converge to the homogeneous incompressible euler equations with the unknown variables , while the full euler equations converge to the inhomogeneous incompressible euler equations with the unknown variables . furthermore , the incompressibility of the limit for the homentropic case follows directly from the approximate continuity equation , while the incompressibility for the full euler case is from a combination of all the equations in .there are various ways to construct approximate solutions by either numerical methods or analytical methods such as numerical / analytical vanishing viscosity methods . 
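the compactness step in the proofs above rests on the div - curl lemma of murat and tartar; for convenience we recall a standard two - field version of it. the function - space hypotheses below follow the usual textbook formulation and are an assumption rather than a quotation of the version used in the references:

if $u^{\varepsilon}\rightharpoonup u$ and $v^{\varepsilon}\rightharpoonup v$ weakly in $L^2(\Omega;\mathbb{R}^n)$, and if $\{\mbox{div}\,u^{\varepsilon}\}$ and $\{\mbox{curl}\,v^{\varepsilon}\}$ are contained in compact subsets of $H^{-1}_{loc}(\Omega)$, then $u^{\varepsilon}\cdot v^{\varepsilon}\rightarrow u\cdot v$ in the sense of distributions.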
as direct applications of the compactness framework , we now present two examples in 34 for establishing the incompressible limit for the multidimensional steady compressible euler flows through infinitely long nozzles .in this section , as a direct application of the compactness framework established in theorem [ thm3.1 ] , we establish the incompressible limit of steady subsonic full euler flows in a two - dimensional , infinitely long nozzle .the infinitely long nozzle is defined as with the nozzle walls , where suppose that and satisfy & f_1(x_1)\rightarrow 0 , \quad f_2(x_1)\rightarrow 1 \qquad \mbox{as } ~x_1\rightarrow -\infty , \nonumber\\[1 mm ] & f_1(x_1)\rightarrow a , \quad f_2(x_1)\rightarrow b > a \qquad \mbox{as } ~x_1\rightarrow \infty,\end{aligned}\ ] ] and there exists such that for some positive constant _ c_.it follows that satisfies the uniform exterior sphere condition with some uniform radius .see fig [ fig01 ] .suppose that the nozzle has impermeable solid walls so that the flow satisfies the slip boundary condition : where is the unit outward normal to the nozzle wall .it follows from and that holds for some constant , which is the mass flux , where is any curve transversal to the , and is the normal of in the positive direction .we assume that the upstream entropy function is given , _i.e. _ , and the upstream bernoulli function is given , _i.e. _ , where and are the functions defined on ] for , , and , there exists such that , for any , there is a global solution ( _ i.e. _ a full euler flow ) of such that the following hold : subsonic state and horizontal direction of the velocity : the flow is uniformly subsonic with positive horizontal velocity in the whole nozzle , _i.e. _ , \(ii ) the flow satisfies the following asymptotic behavior in the far field : as , uniformly for , where , the constant and function can be determined by , , and uniquely .next , we take the incompressible limit of the full euler flows .[ thm5.2 ] let be the corresponding sequence of solutions to . then , as , the solution sequence possesses a subsequence ( still denoted by ) that converges strongly_ in to a vector function which is a weak solution of .furthermore , the limit solution also satisfies the boundary condition as the normal trace of the divergence - measure field on the boundary in the sense of chen - frid .* proof*. we divide the proof into four steps . \1 . from , we can obtain the following linear transport parts : \partial_{x_1}\big((p^{(\gamma)})^{\frac{1}{\gamma } } b^{(\gamma)}u_1^{(\gamma ) } \big ) + \partial_{x_2}\big((p^{(\gamma)})^{\frac{1}{\gamma } } b^{(\gamma)}u_2^{(\gamma)}\big)=0,\\[2 mm ] \partial_{x_1}\big((p^{(\gamma)})^{\frac{1}{\gamma } } s^{(\gamma ) } u_1^{(\gamma)}\big ) + \partial_{x_2}\big((p^{(\gamma)})^{\frac{1}{\gamma } } s^{(\gamma ) } u_2^{(\gamma)}\big)=0 . \end{cases}\end{aligned}\ ] ] from , we can introduce the potential function : \partial_{x_2}\psi^{(\gamma)}=(p^{(\gamma)})^{\frac{1}{\gamma } } u_1^{(\gamma)}. \end{cases } \end{aligned}\ ] ] from the far - field behavior of the euler flows , we can define since both the upstream bernoulli and entropy functions are given , and have the following expressions : where is a function from to ] . 1 .there exists such that , if , then there is . for any ,there exists a global ( _ i.e. _ a homentropic euler flow ) through the nozzle with mass flux condition and the upstream asymptotic condition .moreover , the flow is uniformly subsonic , and the axial velocity is always positive , _i.e. _ , 2 . 
the subsonic flow satisfies the following properties : as , uniformly for , where is a positive constant , and and can be determined by and uniquely .[ thm4.3 ] let , and be the corresponding solutions to . then , as , the solution sequence possesses a subsequence ( still denoted by ) that converges strongly_ in to a vector function with which is a weak solution of .furthermore , the limit solution also satisfies the boundary conditions as the normal trace of the divergence - measure field on the boundary in the sense of chen - frid . * proof*. for the approximate solutions , satisfy based on the equation : we introduce as \partial_{r}\psi^{(\gamma ) } = r u^{(\gamma ) } ( p^{(\gamma)})^{\frac{1}{\gamma}}. \end{cases}\end{aligned}\ ] ] from the far - field behavior of the euler flows , we define similar to the previous case , the flow is subsonic so that the mach number , and from , we have therefore , conditions ( a.1)(a.2 ) and ( h ) are satisfied for any bounded domain . on the other hand , the vorticity has the following expressions : \omega_{2,3}^{(\gamma)}=\partial_{x_2}u_3^{(\gamma)}-\partial_{x_3}u_2^{(\gamma)}=0,\\[2 mm ] \omega_{3,1}^{(\gamma)}=\partial_{x_3}u_1^{(\gamma)}-\partial_{x_1}u_3^{(\gamma ) } = -\frac{x_3}{r}(\partial_{x_1}v^{(\gamma)}-\partial_{r}u^{(\gamma ) } ) .\end{cases}\end{aligned}\ ] ] a direct calculation yields which implies that is uniformly bounded in the bounded measure space and ( a.3 ) is satisfied .similar to theorem [ thm5.2 ] , we conclude that there exists a subsequence ( still denoted by ) that converges to a vector function _ a.e ._ in satisfying in the distributional sense. since is uniformly bounded , the normal trace on exists and is in in the sense of chen - frid .on the other hand , for any , we have since , and then we have for any . by approximation, we conclude that the normal trace in .this completes the proof .* acknowledgments : * the research of gui - qiang g. chen was supported in part by the uk epsrc science and innovation award to the oxford centre for nonlinear pde ( ep / e035027/1 ) , the uk epsrc award to the epsrc centre for doctoral training in pdes ( ep / l015811/1 ) , and the royal society wolfson research merit award ( uk ) .the research of feimin huang was supported in part by nsfc grant no .10825102 for distinguished youth scholars , and the national basic research program of china ( 973 program ) under grant no .the research of tianyi wang was supported in part by the china scholarship council no .201204910256 as an exchange graduate student at the university of oxford , the uk epsrc science and innovation award to the oxford centre for nonlinear pde ( ep / e035027/1 ) , and the nsfc grant no .11371064 ; he would like to thank professor zhouping xin for the helpful discussions .wei xiang was supported in part by the uk epsrc science and innovation award to the oxford centre for nonlinear pde ( ep / e035027/1 ) , the cityu start - up grant for new faculty 7200429(ma ) , and the general research fund of hong kong under grf / ecs grant 9048045 ( cityu 21305215 ) .chen , c. christoforou , and y. zhang , continuous dependence of entropy solutions to the euler equations on the adiabatic exponent and mach number . _ arch .rational mech .anal . _ * 189(1 ) * ( 2008 ) , 97130 .b. desjardins , e. grenier , p .-lions , and n. masmoudi , incompressible limit for solutions of the isentropic navier - stokes equations with dirichlet boundary conditions . _ j. math .pures appl . _* 78 * ( 1999 ) , 461471 .s. klainerman and a. 
majda , singular perturbations of quasilinear hyperbolic systems with large parameters and the incompressible limit of compressible fluids . _ comm . pure appl . math . _ * 34 * ( 1981 ) , 481 - 524 .
|
a compactness framework is formulated for the incompressible limit of approximate solutions with weak uniform bounds with respect to the adiabatic exponent for the steady euler equations for compressible fluids in any dimension . one of our main observations is that the compactness can be achieved by using only natural weak estimates for the mass conservation and the vorticity . another observation is that the incompressibility of the limit for the homentropic euler flow is directly from the continuity equation , while the incompressibility of the limit for the full euler flow is from a combination of all the euler equations . as direct applications of the compactness framework , we establish two incompressible limit theorems for multidimensional steady euler flows through infinitely long nozzles , which lead to two new existence theorems for the corresponding problems for multidimensional steady incompressible euler equations .
|
the study of finance largely concerns about the trade - off between risk and expected return . a significant source of risk in financial marketis the uncertainty of the volatility of equity indices , where volatility is understood as the standard deviation of a financial instrument s return with a specific time horizon . in late 1990s ,wall street firms started trading volatility derivatives such as variance swaps .since then , these derivatives have become a preferred route for many hedge fund managers to trade on market volatility . due to the crucial role that volatility plays in making investment decisions , it is important for financial practitioners to understand the nature of the volatility variations .research on volatility derivatives has been an active pursued topic in quantitative finance .researchers working in the field concerning volatility derivatives have been focusing on developing suitable methods for evaluating variance swaps .carr and madan combined static replication using options with dynamic trading in futures to price and hedge certain volatility contracts without specifying the volatility process .the principal assumptions were continuous trading and continuous semi - martingale price processes for the future prices .demeterfi et al . worked in the same area by proving that a variance swap could be reproduced via a portfolio of standard options .the requirements were continuity of exercise prices for the options and continuous sampling time for the variance swaps .one common feature shared among these researches was the assumption of continuous sampling time which was actually an simplification of the discrete sampling reality in financial markets .in fact , options of discretely - sampled variance swaps were mis - valued when the continuous sampling was used as approximation , and large inaccuracies occurred in certain sampling periods , as discussed in .in addition to the above mentioned analytical approaches , some other authors also conducted researches using numerical approaches .little and pant explored the finite - difference method via dimension - reduction approach and obtained high efficiency and accuracy for discretely - sampled variance swaps .windcliff et al . 
investigated the effects of employing the partial - integro differential equation on constant volatility , local volatility and jump diffusion - based volatility products .an extension of the approach in was made by zhu and lian in through incorporating heston two - factor stochastic volatility for pricing discretely - sampled variance swaps .another recent study was conducted by bernard and cui on analytical and asymptotic results for discrete sampling variance swaps with three different stochastic volatility models .their cholesky decomposition technique exhibited significant simplification .however , the constant interest rate assumption by the authors did not reflect the real market phenomena .one of the contemporary developments in the financial research was the emergence of hybrid models , which described interactions between different asset classes such as stock , interest rate and volatility .the main aim of these models was to provide customized alternatives for market practitioners and financial institutions , as well as reducing the associated risks of the underlying assets .hybrid models could be generally categorized into two different types , namely hybrid models with full correlation and hybrid models with partial correlation among engaged underlyings .literatures concerning hybrid models with partial correlation among asset classes appeared to dominate the field due to less complexity involved .majority of the researchers focused on either inducing correlation between the stock and interest rate , or between the stock and the volatility .grunbichler and longstaff developed pricing model for options on variance based on the heston stochastic volatility model . and stressed that correlation between equity and interest rate was crucial to ensure that the pricing activities were precise , especially for industrial practice .according to these authors , the correlation effects between equity and interest rate were more distinct compared to the correlation effects between interest rate and volatility .the hybrid models with full correlation among underlyings started to attract attention for their improved model capability . and compared their heston - hull - white hybrid model with the szhw hybridization for pricing inflation dependent options and european options , respectively . 
in this articlewe develop the modeling framework which extends the heston stochastic volatility model by including the stochastic interest rate which follows the cir process .note that derived an semi - analytical pricing formula for partially correlated heston - cir hybrid model of discretely - sampled variance swaps .their suggestion of imposing full correlation among state variables is considered in this work .our focus is on the pricing of discrete sampling variance swaps with full correlation among equity , interest rate as well as volatility .since the heston - cir model hybridization is not affine , we approach the pricing problem via the hybrid model approximation which fits in the class of affine diffusion models .the key ingredient involves the derivation of characteristic functions for two phases of partial differential equations and we obtain a semi - closed form pricing formula for variance swaps .numerical experiments are performed to evaluate the accuracy of the pricing formula .in this section we present a hybrid model which combines the heston stochastic volatility model with the one - factor cir stochastic interest rate model .our model extends the work in by imposing full correlation among the underling asset , volatility and interest rate .recently , proposed a model which was a combination of the multi - scale stochastic volatility model and the hull - white interest rate model and showed that incorporation of the stochastic interest rate process into the stochastic volatility model gave better results compared with the constant interest rate case in any maturity . given , let be the stochastic process of some asset price with the time horizon ] .this value should be zero at , since it is defined in the class of forward contracts .the above expectation calculation involves the joint distribution of the interest rate and the future payoff , so it is complicated to evaluate . thus , it would be more convenient to use the bond price as the numeraire , since the price of a -maturity zero - coupon bond at is given by ] .under the -forward measure , the valuation of the fair delivery price for a variance swap is reduced to calculating the expectations expressed in the form of \label{mainexpectation}\ ] ] for , some fixed equal time period and different tenors .it is important to note that we have to consider two cases and separately . for the case , we have and is a known value , instead of an unknown value of for any other cases with . in the process of finding this expectation , , unless otherwise stated , is regarded as a constant . 
hence both and are regarded as known constants .based on the tower property of conditional expectations , the calculation of expectation ( [ mainexpectation ] ) can be separated into two phases in the following form = \mathbb{e}_0^t \left[\mathbb{e}_{t_{j-1}}^t \left[\left(\frac{s(t_{j})}{s(t_{j-1})}-1\right)^2\right]\right ] .\end{array}\ ] ] we denote the term ] .the contingent claim has a european - style payoff function at expiry denoted by =h_j(s ) .% u_j(s,\nu , r , t_{j}):=h_j(s)=\left(\frac{s}{s(t_{j-1})}-1\right)^2 .h_j(s)=\left(\frac{s}{s(t_{j-1})}-1\right)^2.\ ] ] applying standard techniques in the general asset valuation theory , the pde for over ] is +\left(\kappa^*\theta^ * + ( \rho_{12}\sigma\omega i-\kappa^*)\nu -\rho_{23}\sigma b(t-\tau , t)\eta\sqrt{\nu(t-\tau)}\sqrt{r(t-\tau ) } \right)\dfrac{\partial \widetilde{u}}{\partial \nu } \\[0.8em]+\left(\alpha^*\beta^*-(\alpha^*+b(t-\tau , t)\eta^2)r + \rho_{13}\eta \sqrt{\nu(t-\tau)}\sqrt{r(t-\tau ) } \omega i\right)\dfrac{\partial\widetilde{u}}{\partial r}\\[0.8em]+ \rho_{23}\sigma \eta\sqrt{\nu(t-\tau ) } \sqrt{r(t-\tau)}\dfrac{\partial^2 \widetilde{u}}{\partial \nu \partial r}\\[0.8em]+\left(-\dfrac{1}{2}(\omega i+\omega^2 ) \nu+r \omega i -\rho_{13}b(t-\tau , t)\eta \sqrt{\nu(t-\tau ) } \sqrt{r(t-\tau ) } \omega i \right)\widetilde{u},\\[0.8em ] \widetilde{u}(\omega,\nu , r,0)=\mathcal{f}[h(e^x)],\\ \end{array } \right.\ ] ] where and is the fourier transform variable . in order to solve the above pde system ,we adopt heston s assumption in that the pde solution has an affine form as follows we can then obtain three ordinary differential equations by substituting the above function form into the pde system as \dfrac{de}{d\tau}=\dfrac{1}{2 } \eta^2 e^2 -(\alpha^*+b(t-\tau , t)\eta^2 ) e + \omega i , \\[0.8em ] \dfrac{dc}{d\tau}=\kappa^*\theta^ * d+ \alpha^ * \beta^ * e -\rho_{13}\eta \sqrt{\nu(t-\tau)}\sqrt{r(t-\tau)}\omega i b(t-\tau , t)\\[0.8em]\quad \quad \quad+\rho_{13}\eta\sqrt{\nu(t-\tau)}\sqrt{r(t-\tau)}\omega i e\\[0.8em ] \quad \quad \quad -\rho_{23}\sigma\eta \sqrt{\nu(t-\tau)}\sqrt{r(t-\tau)}db(t-\tau , t)\\[0.8em]\quad \quad \quad+\rho_{23}\eta \sigma \sqrt{\nu(t-\tau)}\sqrt{r(t-\tau)}de,\\ \end{array } \right.\ ] ] with the initial conditions note that only the function has analytical form as b=\sqrt{a^2+\sigma^2(\omega^2+\omega i ) } , \quad g=\dfrac{a+b}{a - b}. \end{array } % \right.\ ] ] the approximate solutions of the functions and can be found by numerical integrations using standard mathematical software package , e.g. , matlab. the algorithm of evaluating the functions and is given in appendix b. 
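to make the numerical - integration step just described concrete, the sketch below integrates a riccati - type system of this kind with a fixed - step fourth - order runge - kutta scheme in python, starting from zero initial conditions (the standard choice for this affine ansatz). it is only a minimal sketch under stated assumptions: the equation for the coefficient of the variance state is written in its standard heston form as read off from the pde coefficients above, the non - affine correction terms involving the deterministic approximation of $\sqrt{\nu}\sqrt{r}$ are dropped, the bond coefficient $b(t-\tau,t)$ is frozen at a constant, and the parameter values are illustrative rather than calibrated.

```python
import numpy as np

# illustrative stand-in parameters (not calibrated values)
kappa, theta, sigma = 2.0, 0.05, 0.1      # volatility (heston) parameters
alpha, beta, eta    = 1.2, 0.05, 0.01     # short-rate (cir) parameters
rho12               = -0.4                # equity-volatility correlation
B                   = 0.9                 # bond coefficient b(t - tau, T), frozen (assumption)

def rhs(y, w):
    """right-hand side of the (D, E, C) system for fourier variable w.

    only the affine part is kept: the corrections involving the
    deterministic approximation of sqrt(nu)*sqrt(r) are omitted."""
    D, E, C = y
    dD = 0.5 * sigma**2 * D**2 + (rho12 * sigma * 1j * w - kappa) * D - 0.5 * (1j * w + w**2)
    dE = 0.5 * eta**2 * E**2 - (alpha + B * eta**2) * E + 1j * w
    dC = kappa * theta * D + alpha * beta * E
    return np.array([dD, dE, dC], dtype=complex)

def integrate(w, tau_max, n_steps=400):
    """fixed-step rk4 from tau = 0 with D(0) = E(0) = C(0) = 0."""
    h = tau_max / n_steps
    y = np.zeros(3, dtype=complex)
    for _ in range(n_steps):
        k1 = rhs(y, w)
        k2 = rhs(y + 0.5 * h * k1, w)
        k3 = rhs(y + 0.5 * h * k2, w)
        k4 = rhs(y + h * k3, w)
        y = y + (h / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)
    return y  # D(w, tau_max), E(w, tau_max), C(w, tau_max)

if __name__ == "__main__":
    # the squared-return payoff transform concentrates on a few imaginary
    # values of the transform variable (see the payoff transform below)
    for w in (-1j, -2j):
        D, E, C = integrate(w, tau_max=1.0 / 52.0)   # one weekly sampling interval
        print(w, D, E, C)
```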
since the fourier transform variable appears as a parameter in functions , and , the inverse fourier transform is conducted to retrieve the solution as in its initial setup \\[0.8em ] \quad \quad \quad \quad \quad = \mathcal{f}^{-1}\left[e^{c(\omega , \tau)+d(\omega , \tau)\nu + e(\omega , \tau)r}\mathcal{f}[h(e^x)]\right ] .\end{array}\ ] ] in the generalized fourier transform of a function is defined to be =\int^{\infty}_{-\infty}f(x)e^{-i\omega x}dx.\ ] ] the function can be derived from via the generalized inverse fourier transform =\frac{1}{2\pi}\int^{\infty}_{-\infty}\hat{f}(\omega)e^{i\omega x}d\omega.\ ] ] note that the fourier transformation of the function is =2\pi\delta_{\xi}(\omega),\\\ ] ] where is any complex number and is the generalized delta function satisfying for notational convenience , let .conducting the generalized fourier transform for the payoff with respect to gives = 2\pi\left(\frac{\delta_{-2i}(\omega)}{i^2}-2\frac{\delta_{-i } ( \omega)}{i}+\delta_{0}(\omega)\right).\ ] ] as a result , the solution of the pde ( [ eq1 ] ) is derived as follows \\[0.8em ] \quad\quad\quad\quad\quad\quad=\dfrac{e^{2x}}{i^2}e^{\widetilde{c}(\tau)+\widetilde{d}(\tau ) \nu + \widetilde{e}(\tau)r}-\dfrac{2e^{x}}{i}e^{\widehat{c}(\tau)+\widehat{e}(\tau)r}+1 \\[0.8em ] \quad\quad\quad\quad\quad\quad=\dfrac{s^2}{i^2}e^{\widetilde{c}(\tau)+\widetilde{d}(\tau ) \nu + \widetilde{e}(\tau)r}-\dfrac{2s}{i}e^{\widehat{c}(\tau)+\widehat{e}(\tau)r}+1 , \end{array}\ ] ] where and .we denote , and as , and respectively .in addition , and are the notations for and respectively .note that . in this subsection , we continue to carry out the second phase in finding out the expectation ] , is represented by \nonumber \\[0.8em ] = & & \mathbb{e}_0^t\left[ e^{\widetilde{c}(\delta t)+\widetilde{d}(\delta t)\nu(t_{j-1})+\widetilde{e}(\delta t)r(t_{j-1 } ) } -2 e^{\widehat{c}(\deltat)+\widehat{e}(\delta t)r(t_{j-1})}+1 \right]\\[0.8em ] % \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad = & & \mathbb{e}_0^t\left [ e^{\widetilde{c}(\delta t)+\widetilde{d}(\delta t)\nu(t_{j-1})+\widetilde{e}(\delta t)r(t_{j-1})}\right ] % \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad - 2 \mathbb{e}_0^t \left [ e^{\widehat{c}(\delta t)+\widehat{e}(\delta t ) r(t_{j-1 } ) } \right ] + 1 \nonumber \\= & & e^{\widetilde{c}(\delta t ) } \cdot \mathbb{e}_0^t\left [ e^{\widetilde{d}(\delta t)\nu(t_{j-1})+\widetilde{e}(\delta t)r(t_{j-1})}\right]-2e^{\widehat{c}(\delta t ) } \cdot \mathbb{e}_0^t\left [ e^ { \widehat{e}(\delta t ) r(t_{j-1 } ) } \right ] + 1.\nonumber \end{aligned}\ ] ] in appendix c , we show in more details how to derive approximate solutions for ] by using approximations of normally distributed random variable and its characteristic function . in the previous two subsections ,we demonstrate our solution techniques for pricing variance swaps by separating them into phases. 
however , as mentioned in section 2.3 , we have to consider two cases and separately .the case follows directly the expression in ( [ funcdoubleexpectation ] ) .for the case of , we use the method described in section 3.1 to obtain \\ & = & e^{\widetilde{c}(\delta t)+\widetilde{d}(\delta t)\nu(0)+\widetilde{e}(\delta t)r(0 ) } -2 e^{\widehat{c}(\delta t)+\widehat{e}(\delta t)r(0)}+1 .\end{aligned}\ ] ] the summation for the whole period from to gives the fair delivery price of a variance swap as = \dfrac{100 ^ 2}{t}\left(g(\nu(0),r(0))+ \sum_{j=2}^{n } g_j(\nu(0),r(0 ) ) \right).\ ] ]in order to analyze the performance of our approximation formula for evaluating prices of variance swaps as described in the previous section , we conduct some numerical simulations .comparisons are made with the monte carlo ( mc ) simulation which resembles the real market .in addition , we also investigate the impact of full correlation setting among the state variables in our model .table 1 shows the set of parameters that we use for all the numerical experiments , unless otherwise stated .|cccc|ccc c|c & & & & & & & & & & & & + 1 & -0.4 & 0.5&0.5&0.05 & 0.05 & 2 & 0.1 & 0.05 & 1.2 & 0.05 & 0.01 & 1 + the mc simulation is a widely utilized numerical tool for the basis of conducting computations involving random variables .we perform our mc simulation in this paper using the euler - maruyama scheme with sample paths .we present the comparison results between numerical implementation of the formula ( [ finalresult ] ) with the mc simulation in figure 1 and in table 2 .all values for the fair delivery prices are measured in variance points .it could be seen in figure 1 that our approximation formula matches the mc simulation very well . to gain some insight of the relative difference between our formula and the mc simulation , we compare their relative percentage error . by taking which is the weekly sampling frequency and paths, we discover that the error is , with further reduction of the error as path numbers increase to .furthermore , even for small sampling frequency such as the quarterly sampling frequency when , our formula can be executed in just seconds compared to seconds needed by the mc simulation .these findings verify the accuracy and efficiency of our formula . ) and the mc simulation.,width=432 ] .comparing variance swap prices between the results of formula ( [ finalresult ] ) and the mc simulation . [ cols="<,^,^,^,^,^,^",options="header " , ] [ comparisonresult ] next , we investigate the impact of the correlation coefficient between the interest rate and the underlying and the correlation coefficient between the interest rate and the volatility , respectively .the impact of the correlation between the interest rate and the underlying is shown in figure 2 . 
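for readers who wish to reproduce a benchmark of the monte carlo type described above, the following python sketch simulates fully correlated heston - cir dynamics with an euler - maruyama scheme (full truncation for the two square - root processes) and estimates the discretely - sampled variance swap rate. the risk - neutral dynamics written in the comments and the parameter values are illustrative assumptions consistent with the model description, not the calibrated inputs behind figure 1 and table 2; the fair strike is computed as the t - forward expectation of the realized variance via a change of numeraire.

```python
import numpy as np

# illustrative parameters (assumptions, not the calibrated values of table 1)
S0, nu0, r0 = 1.0, 0.05, 0.05
kappa, theta, sigma = 2.0, 0.05, 0.1          # d(nu) = kappa*(theta - nu)dt + sigma*sqrt(nu)dW2
alpha, beta, eta    = 1.2, 0.05, 0.01         # dr    = alpha*(beta - r)dt + eta*sqrt(r)dW3
rho12, rho13, rho23 = -0.4, 0.5, 0.5          # full correlation structure
T, N = 1.0, 52                                # maturity and weekly sampling
n_paths, n_sub = 20000, 20                    # paths and sub-steps per sampling interval

rng = np.random.default_rng(0)
corr = np.array([[1.0, rho12, rho13],
                 [rho12, 1.0, rho23],
                 [rho13, rho23, 1.0]])
L = np.linalg.cholesky(corr)
dt = T / (N * n_sub)

S = np.full(n_paths, S0)
nu = np.full(n_paths, nu0)
r = np.full(n_paths, r0)
S_prev = S.copy()
rv = np.zeros(n_paths)                        # running sum of squared returns
int_r = np.zeros(n_paths)                     # integrated short rate for discounting

for j in range(N):
    for _ in range(n_sub):
        Z = rng.standard_normal((3, n_paths))
        dW = (L @ Z) * np.sqrt(dt)
        nu_p, r_p = np.maximum(nu, 0.0), np.maximum(r, 0.0)   # full truncation
        S = S * (1.0 + r_p * dt + np.sqrt(nu_p) * dW[0])      # dS = r*S dt + sqrt(nu)*S dW1
        nu = nu + kappa * (theta - nu_p) * dt + sigma * np.sqrt(nu_p) * dW[1]
        r = r + alpha * (beta - r_p) * dt + eta * np.sqrt(r_p) * dW[2]
        int_r += r_p * dt
    rv += (S / S_prev - 1.0) ** 2
    S_prev = S.copy()

# change of numeraire: E^T[rv] = E^Q[exp(-int r) * rv] / E^Q[exp(-int r)]
disc = np.exp(-int_r)
K = 100.0**2 / T * (disc * rv).mean() / disc.mean()
print("variance swap rate (variance points):", K)
```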
in the figurewe can see that the values of variance swaps are increasing corresponding to the increase in the correlation values of .the difference of the variance swap rates goes up to 5 variance points for largely different correlation coefficient values of .this is very crucial since a relative difference of might produce considerable error .however , it is also observed that the impact of the correlation coefficient becomes less apparent as the sampling times increase .values on delivery price of variance swaps in the heston - cir hybrid model.,width=384 ] values on delivery price of variance swaps in the heston - cir hybrid model.,width=384 ] the effects of the correlation coefficient between the interest rate and the volatility are displayed in figure 3 .in contrast to the significant correlation effects of in figure 2 , smaller impact of is observed .in fact , the variance swap rates for three different values of are almost the same .for example , for which is the monthly sampling frequency , the delivery price is for , with only a slight increase to for , and a slight decrease to for .figure 3 also displays the same trend of diminishing impact of the correlation as the number of sampling periods increases .this paper studies the evaluation of discretely - sampled variance swap rates in the heston - cir hybrid model of stochastic volatility and stochastic interest rate .this work extends the model framework considered in by imposing the full correlation structure among the state variables .the proposed hybrid model is not affine , and we derive a semi - closed form approximation formula for the fair delivery price of variance swaps .we consider the numerical implementation of our pricing formula which is validated to be fast and accurate via comparisons with the monte carlo simulation .this pricing formula could be a useful tool for the purpose of model calibration to market quotation prices .our pricing model which offers the flexibility to correlate the underlying with both the volatility and the interest rate is a more realistic model with practical importance for pricing and hedging .in fact , our numerical experiments confirm that the impact of the correlation coefficient between the underlying and interest rates is very crucial , as it becomes more apparent for larger correlation values .the pricing approach in our paper can be applied to other stochastic interest rate and stochastic volatility models , such as the heston - hull - white hybrid model .in order to obtain the dynamics for the sdes in under , we need to find the volatilities for both numeraires , respectively ( refer to ) .denote the numeraire under as and the numeraire under as .differentiating yields whereas the differentiation of gives now we have obtained the volatilities for both numeraires as next , the drift term for the sdes under is found by utilizing the formula below with and in and the terms , and as defined in .this results in the transformation of under to the following system under the forward measure where approximate solutions of the functions and can be found from the following differential equations which are obtained using the deterministic approximation technique discussed in \dfrac{dc}{d\tau}=\kappa^*\theta^ * d+ \alpha^ * \beta^ * e -\rho_{13}\eta \mathbb{e}^{t}\left [ \sqrt{\nu(t-\tau)}\sqrt{r(t-\tau)}\right ] \omega i b(t-\tau , t)\\[0.8em]\quad \quad \quad+\rho_{13}\eta\mathbb{e}^{t}\left[\sqrt{\nu(t-\tau)}\sqrt{r(t-\tau)}\right]\omega i e\\[0.8em ] \quad \quad \quad -\rho_{23}\sigma\eta 
\mathbb{e}^{t}\left[\sqrt{\nu(t-\tau)}\sqrt{r(t-\tau)}\right ] db(t-\tau , t)\\[0.8em]\quad \quad \quad+\rho_{23}\eta \sigma \mathbb{e}^{t}\left[\sqrt{\nu(t-\tau)}\sqrt{r(t-\tau)}\right]de,\\ \end{array } \right.\ ] ] with the initial conditions the differential equation related to contains terms of which are non - affine .note that standard techniques to find characteristic functions as in could not be applied in this case , thus we need to find approximations for these non - affine terms .the expectation ] as follows : \approx \sqrt{q_{2}(t)(\varphi_{2}(t)-1)+q_{2}(t)l_{2}+\dfrac{q_{2}(t)l_{2}}{2(l_{2}+\varphi_{2}(t))}}=:\lambda_{2}(t),\\[0.8em]\ ] ] with \ ] ] and simplify further as \approx m_{2}+p_{2}e^{-q_{2}t}=:\widetilde{\lambda_{2}}(t),\ ] ] where \ ] ] utilizing the above expectations of both stochastic processes , we are able to obtain ] , we utilize the definition of instantaneous correlations : } { \sqrt{\mathbb{v}ar^{t}\left[\sqrt{\nu(t)}\right]\mathbb{v}ar^{t}\left[\sqrt{r(t)}\right]}}.\ ] ] substitution of the following \approx \frac{\mathbb{v}ar^{t}\left[\nu(t)\right]}{4\mathbb{e}^{t}\left[\nu(t)\right ] } \approx q_1(t)-\frac{q_1(t)l_{1}}{2(l_{1}+\varphi_{1}(t))}\ ] ] and \approx \frac{\mathbb{v}ar^{t}\left[r(t)\right ] } { 4\mathbb{e}^{t}\left[r(t)\right ] } \approx q_2(t)-\frac{q_2(t)l_{2}}{2(l_{2}+\varphi_{2}(t))}\ ] ] into gives us \\\approx & & { \rho}_{\mathsmaller { \sqrt{\nu(t)}\sqrt{r(t)}}}\left(\sqrt{\left(q_1(t)-\frac{q_1(t)l_{1}}{2(l_{1}+\varphi_{1}(t ) ) } \right)\left(q_2(t)-\frac{q_2(t)l_{2}}{2(l_{2}+\varphi_{2}(t))}\right)}\right ) .\end{aligned}\ ] ]in this appendix , we derive approximate expressions of the expectations ] .then , we can obtain an approximation for .the variables and can be approximated by normally distributed random variables as follows : and where , , are defined in and , , are defined in .since both approximations of and are normally distributed , we can find the characteristic function of their sum which is also normally distributed .let , then \approx \exp\left(\mathbb{e}_0^{t}\left[y(0,t_{j-1})\right]+\frac{1}{2 } { \mathbb var}^{t}\left[y(0,t_{j-1})\right]\right),\ ] ] where \\ & \approx & \widetilde{d}(\delta t)(q_{1}(t_{j-1})(l_{1}+\varphi_{1}(t_{j-1})))+\widetilde{e}(\delta t)(q_{2}(t_{j-1})(l_{2}+\varphi_{2}(t_{j-1 } ) ) ) , \nonumber \end{aligned}\ ] ] and \\ & \approx & 2\widetilde{d}(\delta t)\widetilde{e}(\delta t)\rho_{23}\sqrt{{q_1(t_{j-1})}^2(2l_{1}+4\varphi_1(t_{j-1}))}\sqrt{{q_2(t_{j-1})}^2(2l_{2}+4\varphi_{2}(t_{j-1}))}\\ & & + { \widetilde{d}(\delta t)^2}(q_{1}(t_{j-1})^{2}(2l_{1}+4\varphi_{1}(t_{j-1 } ) ) ) \\ & & + { \widetilde{e}(\delta t)^2}(q_{2}(t_{j-1})^{2}(2l_{2}+4\varphi_{2}(t_{j-1 } ) ) ) .\end{aligned}\ ] ] we can apply the same procedure to find the expression of $ ] , which is given as follows : \approx \exp \left(\mathbb{e}_0^{t}\left[\widehat{e}(\delta t)r(t_{j-1})\right ] + \frac{1}{2}{\mathbb var}^{t}\left[\widehat{e}(\delta t)r(t_{j-1})\right]\right ) \\[0.8em ] \quad\quad\quad\quad\quad\quad\quad\quad\approx \exp \left ( \widehat{e}(\delta t ) ( q_{2}(t_{j-1})(l_{2}+\varphi_{2}(t_{j-1 } ) ) ) \right.\\[0.8em ] \quad\quad\quad\quad\quad\quad\quad\quad\quad \left .+ \frac{\widehat{e}(\delta t)^2}{2 } ( q_{2}(t_{j-1})^{2}(2l_{2}+4\varphi_{2}(t_{j-1 } ) ) ) \right ) .\end{array}\ ] ] therefore , an approximation of is given as follows
|
this paper considers the pricing of discretely - sampled variance swaps under the class of equity - interest rate hybridization . our modeling framework consists of the equity which follows the dynamics of the heston stochastic volatility model and the stochastic interest rate driven by the cox - ingersoll - ross ( cir ) process with full correlation structure among the state variables . since one limitation of hybrid models is the unavailability of analytical pricing formula of variance swaps due to the non - affinity property , we obtain an efficient semi - closed form pricing formula of variance swaps for an approximation of the hybrid model via the derivation of characteristic functions . we implement numerical experiments to evaluate the accuracy of our formula and confirm that the impact of the correlation between the underlying and interest rate is significant . heston - cir hybrid model , realized variance , stochastic interest rate , stochastic volatility , variance swap , generalized fourier transform . 91g30 , 91g20 , 91b70
|
although electro - optic amplitude modulators and phase modulators are commonplace in modern optics laboratories , there is no single , commercially available device that produces controllable amplitude and phase modulations with complete variability . when required ,various optical configurations have been assembled , such as phase and amplitude modulators in series , and two phase modulators used in a mach - zehnder interferometric setup ( the mach - zehnder modulator ) , which are both capable of amplitude and phase modulation .the device discussed in this paper , called a _ universally tunable modulator _ ( utm ) , is a modification of a commercial electro - optic amplitude modulator .an amplitude modulator consists of two phase - modulation crystals aligned at right angles , and connected with opposite polarities to a single electrical input .a utm ( see fig .[ figsmvectors](a ) ) is identical , but with two separate inputs , one connected to each crystal .a prototype was constructed from a new focus broadband amplitude modulator ( model ) in this way .we demonstrate here that this extra degree of freedom enables the production of phase as well as amplitude modulation .we use the term `` universally tunable '' to highlight the fact that the device is capable of both amplitude modulation ( am ) and phase modulation ( pm ) , with full selectability of the amplitude of each and of the relative phase between the two . choosing a figure of merit with which to assessthe device is difficult in general , and depends on the modulation requirements of the application in question .we address this issue here by identifying two characteristics that might be expected of the modulations states produced by the device : _ variability _ and _ purity_. the variable nature of the device is brought out by experiments which require real - time tuning of the modulator across its parameter space .other experiments rely on precise selection of a small number of operating points , where the purity of the modulations produced has a direct influence on the success of the experiment .one anticipated use for the utm device is in optical feedback control of the resonance condition of a fabry - perot cavity , or the fringe condition of a michelson interferometer .the rf modulation techniques used to lock these devices are pound - drever - hall locking and schnupp modulation locking , respectively , and are both based on properties of the devices that convert ( injected ) pm to am , which is then demodulated to give an error signal readout . in both cases ,the default locking point is at a turning point in the transmitted ( or reflected ) optical field intensity ; injection into the device of additional am ( along with the default pm ) causes the device to lock with an offset relative to the turning point .moreover , the quadrature of am required for pound - drever - hall locking is orthogonal to that required for schnupp modulation locking , introducing the possibility of independently tuning two lock points at once in a coupled system . 
this _rf offset locking _ application is our case study example to demonstrate the variability property of a prototype utm ; the technique may be useful for future large - scale gravitational wave detector configurations , with the feature of facilitating real - time tuning of detector frequency responses .the utm device is capable of several `` pure '' states , which have various applications as well as providing a natural way to test the purity property of the prototype .single - sideband modulation , for example , requires an equal combination of am and pm , in quadrature .the subject has arisen a number of times with optic fibre technology where , for example , a single - sideband - modulated signal is immune to fibre dispersion penalties , and the technique has also been suggested for subcarrier - multiplexing systems .single - sideband modulation has been achieved by other means : optically filtering out one sideband ; cascaded amplitude and phase modulators ; mach - zehnder modulators ; and more complex arrangements .while these methods have merit , we submit that the utm is far simpler in design , with fewer degrees of freedom available to drift , and is at least comparable regarding suppression capabilities .the utm can produce pm or am states , with purity limited by the accuracy of polarisation optics ( including the birefringent effects of the device itself ) .while these states are obtainable using off - the - shelf amplitude or phase modulators , applications in coherent state quantum cryptography require fast - switching between am and pm , for which the utm is ideal .other applications , including some quantum communication protocols , would also benefit from having easy access to a tuned , stationary combination of am and pm .in addition , we show that the utm ( or indeed an amplitude modulator ) can produce a _ carrier suppression _ state , where the output consists only of two modulation sidebands . 
we have developed a geometric phase sphere description of modulation states , the _ modulation sphere _ formulation , in analogy with the poincar sphere description for polarisation states .we demonstrate the use of this formalism by calculating the transfer function ( electrical to optical ) of the utm device , both using optical - field - phasors and using modulation sphere parameters , and discuss the pros and cons of the two mathematical descriptions .we have found that the modulation sphere representation is particularly useful for gaining a visual insight into the dynamics of the utm .section [ 2model ] gives the mathematical description of the utm , in terms of optical - field - phasors ( section [ 2phasor2 ] ) , and the modulation sphere formulation ( section [ 2modsphere3 ] ) .the transfer functions from these two subsections are derived in appendices [ secapp1 ] and [ secapp2 ] respectively .section [ 3expt ] describes a characterisation experiment conducted on a prototype utm , with the experimental layout described in section [ 3expt2 ] , and the results presented in section [ 3results3 ] .the main results of the study are summarised in section [ 4conc ] .before we proceed , a simple analogy may help to clarify the inner workings of a utm device , and lend physical insight .a mach - zehnder modulator involves splitting a beam into two parts , separately phase - modulating each part , then recombining the two parts on a beamsplitter .the utm is physically equivalent to this , where the two interferometer paths are collinear but different in polarisation .each utm crystal modulates one and only one polarisation component .the phase between the electric fields in the two polarisation states we identify as the interferometric recombination phase .the birefringent waveplates used to create the initial polarisation take the place of the input beamsplitter , and a polarising beamsplitter facilitates the output recombination .the recombination phase is adjustable ( by modifying the polarisation state ) , and one can control the amplitudes and phases of the single - beam phase - modulations generated by each of the two crystals . through manipulation of these input parameters ,the user has complete control over the interference condition of the two carrier beams , and separately over the interference of the lower and upper sidebands generated by the modulating crystals .this amounts to having complete freedom to choose any modulation state by appropriate choice of the input parameters . as an example , consider two identically phase - modulated beams , where the optical phase of one beam is advanced by , and the modulation phase of the other beam is advanced by .the result , upon interfering the two beams , is that one resultant sideband is exactly cancelled out and the other is additively reinforced , producing a single sideband state .one can intuitively arrive at all of the modulation states discussed in this paper by reasoning along these lines . in the next section ,we quantify such reasoning into a mathematical theory of the utm . as stated earlier , a utm ( see fig . 
[ figsmvectors](a ) )consists of two modulating crystals in series , with separate voltage sources , and with modulating axes at right angles to each other .the utm is used by sending a laser beam of elliptical polarisation through the system , and then through a linearly polarising filter angled at to the crystals modulating axes .the choice of polarisation state of the incident beam has a crucial effect on the transfer characteristics of the system .we will restrict ourselves to a subset of polarisation states : those where there is equal power in the polarisation components aligned with each modulating crystal axis .equivalently , we require that the polarisation is elliptical with major and minor axes aligned at to both crystal axes .for concreteness we will choose the ellipse axes to be horizontal and vertical , and the crystal axes to be left and right diagonal ( as in fig .[ figsmvectors](a ) ) .we define the angle as being the phase difference between left - diagonal and right - diagonal electric field components of the light exiting the crystals ( see fig . [ figsmvectors](b ) ) .the identity follows , where are the powers in the horizontal and vertical polarisation components of the input laser beam .we introduce this degree of freedom with the application of gravitational wave detector locking schemes in mind , where laser power is at a premium : choosing such that the beam is almost vertically polarised allows us to retain the majority of the input carrier power , while retaining full access to all modulation states , albeit with reduced amplitude .note that , in the ideal situation where both crystals have identical refractive indices and lengths , the phase angle ( and hence the polarisation state ) is the same before and after the modulator .however , a real device almost certainly does not have identical crystals so that the relative phase between left- and right - diagonal components may well be changed by the crystals , in which case should describe the polarisation state of the light _ exiting _ the modulator , just before it reaches the linearly polarising filter .the inputs to the two crystal electrodes are sinusoidal voltages of frequency ; we describe these as complex phasors ( , and ) so that the single - crystal phase modulations are given by , where and respectively return the real and imaginary components of a complex variable . we can similarly write the output phase and amplitude modulations as complex phasors and where the optical field amplitude exiting the utm can be shown to be : \nonumber \\ = e_{\mathrm{in } } \ , \left [ \ , \mathrm{cos}(\sigma/2 ) + i p \mathrm{cos}(\omega_mt + \phi_p ) + a \mathrm{cos}(\omega_m t + \phi_a ) \ , \right]\end{aligned}\ ] ] where is the input optical field amplitude . in eq .[ eqnelecfield ] , we have assumed small single - crystal modulation depths ( ) , and we have factored away the net optical phase dependence . the relationship between input and output parameters for the utm device can be shown to be : ( a proof of this transfer function is outlined in appendix [ secapp1 ] . 
) an important observation from eq .[ eqntf ] is that the phases of the am and pm track the phases of the sum and difference of the electrical signals , respectively .thus , an electronic oscillator for use in demodulation schemes is readily available , and can be calibrated once for all operating points .this phase - tracking property holds for any choice of ; the effect of a change of polarisation ellipticity is to change the overall quantity of pm or am available ( as well as changing the overall beam power in eq .[ eqnelecfield ] ) .returning briefly to the topic of conserving as much carrier power as possible for gravitational wave detector applications , we derive a useful rule - of - thumb for selecting an appropriate amount of power to tap off .consider eq .[ eqntf ] when we set such that and have the same magnitude ( we do this in order to focus on the _ coefficients _ of the complex phasors ) .the ratio of modulation power between the am and pm components is and , comparing this with eq .[ eqnident ] , we have : , the fraction of power tapped off is directly proportional to the fraction of modulation power available to be expressed as am ( rather than pm ) . typically then, if of the output power is tapped off , and the user wishes to alternately produce a pure pm state and then a pure am state ( each of the same modulation power ) , then the am state will require 9 times the input electrical power in compensation , compared to the pm state .it is apparent from eqs .[ eqnelecfield ] and [ eqntf ] that for the carrier and the pm contribution vanish , leaving only the am component .this is the _ carrier suppression _ operating point , where only two sidebands remain .the modulation ceases to be of the phase or amplitude type due to the absence of a carrier as a phase reference , though the modulation retains its beat phase through the second harmonic in optical power .a rearrangement of eq .[ eqnelecfield ] serves to clarify the nature of single sideband modulation : \end{aligned}\ ] ] where the terms represent optical sidebands at relative to the optical carrier frequency , and an asterisk denotes a complex conjugate .it is clear then that choosing eliminates one or other of the sidebands .in other words , a single sideband state corresponds to having equal amounts of pm and am where the two modulations are in quadrature .we now define a geometric phase representation of the modulation parameters of the system to further clarify the transfer characteristics of the utm device .this is analogous to using stokes parameters and the poincar sphere representation for optical polarisation states . in this case , we transform from the complex quantities and to a set of ( real ) coordinates in _m - space _ via the transformation : a single point in m - space represents a distinct modulation state . in particular, the m - space representation suppresses the common ( beat ) phase of the modulation state ; the equations are defined in terms of the relative phase difference between am and pm contributions .this is a useful simplification in that now every physically distinct modulation corresponds to one point and one only . 
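to make the geometry concrete, the short python sketch below evaluates stokes - like modulation - sphere coordinates directly from the complex phasors. the normalisations used, $m_0=|\tilde{p}|^2+|\tilde{a}|^2$, $m_1=|\tilde{p}|^2-|\tilde{a}|^2$, $m_2=2\,\mbox{re}(\tilde{p}\tilde{a}^{\ast})$ and $m_3=2\,\mbox{im}(\tilde{p}\tilde{a}^{\ast})$, are an assumed convention that is consistent with the interpretations given here (and with $m_0^2=m_1^2+m_2^2+m_3^2$), not a verbatim quotation of the transformation; the sideband powers are computed from the $\tilde{a}\pm i\tilde{p}$ combinations appearing in the sideband decomposition above.

```python
import numpy as np

def m_coords(P, A):
    """stokes-like modulation-sphere coordinates from complex pm/am phasors.

    the overall factor of 2 and the sign of m3 are an assumed convention."""
    M0 = abs(P)**2 + abs(A)**2
    M1 = abs(P)**2 - abs(A)**2
    M2 = 2.0 * np.real(P * np.conj(A))
    M3 = 2.0 * np.imag(P * np.conj(A))
    return M0, M1, M2, M3

def sideband_powers(P, A):
    """relative powers of the two modulation sidebands of
    E ~ cos(sigma/2) + i*P*cos(w t + phi_p) + A*cos(w t + phi_a);
    which of the two is the upper sideband depends on sign conventions."""
    sb1 = abs(A + 1j * P)**2 / 4.0
    sb2 = abs(A - 1j * P)**2 / 4.0
    return sb1, sb2

if __name__ == "__main__":
    cases = {
        "pure pm":          (1.0 + 0j, 0.0 + 0j),
        "pure am":          (0.0 + 0j, 1.0 + 0j),
        "in-phase pm+am":   (1.0 + 0j, 1.0 + 0j),
        "quadrature pm+am": (1.0 + 0j, 1j),      # single-sideband state
    }
    for name, (P, A) in cases.items():
        M0, M1, M2, M3 = m_coords(P, A)
        sb1, sb2 = sideband_powers(P, A)
        print(f"{name:18s} m0={M0:.2f} (m1,m2,m3)=({M1:+.2f},{M2:+.2f},{M3:+.2f}) "
              f"sidebands=({sb1:.2f},{sb2:.2f})")
```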
on the other hand, one can not use the representation for demodulation - phase - tracking calculations , when the physical significance of the modulation s overall phase is renewed by the presence of an external ( electrical ) phase reference .[ taboppoints ] the three m - space parameters can be interpreted as follows : measures the extent to which one kind of modulation ( am or pm ) dominates over the other ; measures the degree to which the present pm and am components are correlated or anti - correlated in phase ; and measures the degree to which the present pm and am components are in quadrature phase to each other , and thus also measures the extent to which one frequency sideband has more power than the other .we also define the _ modulation power _ by , from whence follows .therefore , a surface of constant is a sphere in m - space ( see fig .[ figspheres](b ) ) .we attach particular significance to modulation states whose m - space coordinates lie on one cardinal axis .the point represents pure pm , and the point represents pure am .the points represent correlated and anti - correlated pm and am , and represent upper and lower single sideband states .details of these states are summarised in table i. these states , particularly those on the and axes , involve the suppression of a particular signal , and so provide a natural way to test the precision of a real device .before we proceed , it is convenient to define an analogous _ q - space _ representation , , for the electrical input parameters , via : where is the sum input electrical power .physical interpretations for these parameters correspond with the m - space equivalents ( see fig .[ figspheres](a ) for a diagrammatic description ) .now , we can rewrite the utm transfer function , eq .[ eqntf ] , in terms of the m - space and q - space parameters : ( a proof of this transfer function is outlined in appendix [ secapp2 ] . ) since the modulation power depends partly on ( and hence depends on the _ detail _ of the input signals , not just the overall input power ) , a sphere of constant electrical input power in q - space does _ not _ , in general , map to a sphere of constant modulation power in m - space .in fact , eq . [ eqntfellipse ] describes a transfer function from a sphere in q - space to an _ ellipsoid _ in m - space .it can be shown that the ellipsoid is centered at , has radii $ ] , and has an eccentricity of .the ellipse is always prolate with long axis aligned with the -axis , and always has one focus at the m - space origin .the orthogonality of axes is preserved in the transformation , which consists only of a translation ( the ellipsoid is not origin - centered ) and a dilation ( the ellipsoid is squashed in the and directions ) .the centrepoint and proportions of the ellipsoid are parametrised by the input light s polarisation parameter , ( see fig .[ figspheres](c ) ) .a special case occurs for ( circular input polarisation ) , when eq .[ eqntfellipse ] reduces to : here , a sphere in q - space maps to a sphere in m - space .this case is particularly instructive as figs . 
[figspheres](a ) and [ figspheres](b ) take on a new significance : there is now a direct graphical correspondence between the two spheres , in accordance with eq .[ eqntfsphere ] .so , as an example of using these spheres as a visual tool , we can see `` at a glance '' that the pm operating point is obtained by running both crystals with equal in - phase electrical inputs , or that a single sideband is obtained by running both crystals with equal inputs in quadrature .the following characterisation experiment was designed to measure the amplitude and phase response of a prototype utm device with respect to both the am and pm output states . in particular , we focus here on two kinds of measurement : those that measure the _ variability _ of the utm device , and those that measure the _ purity _ of the modulations that the utm device can produce . by variability , we refer to the capability of the utm prototype to tune around the parameter space of modulations , or `` dial up '' a particular modulation , in a predictable , theory - matching fashion and with a reasonable level of precision . by purity, we speak of the utm prototype s ability to accurately attain a particular modulation ( especially those that involve suppression of a frequency line ) and to hold that modulation indefinitely without drifting .the experiment is shown in fig .[ figexpt ] .the polarisation of a laser source was prepared with a half- and quarter - wave plate in series , and the laser source was passed through the utm , which we operated at mhz .a polarising beam splitter completed the process ; the vertically polarized output carried the modulation described in the theory above .the horizontally polarised output was used to keep track of the polarisation state , and hence to measure the value of .the polarisation state used corresponded to ( near - circular polarisation with the vertical component stronger ) , for experimental convenience .regarding the modulator itself , the original new focus amplitude modulator ( model 4104 ) carried the specification : max v @ . while we did not explicitly measure for the prototype utm ( after the modification from the original am device ) , we point out that it should still be of the same order of magnitude . therefore , since the input voltages used did not exceed 10 v , the following results are in the small modulation depth limit as described in section [ 2model][2phasor2 ] .a heterodyne detection scheme formed part of the measurement apparatus , where a shunted beam was frequency - shifted by mhz with an acousto - optic modulator ( aom ) . if this heterodyne oscillator beam is represented by with and is interfered with the modulated beam from eq .[ eqnelecfield ] , then the detected power is given by : ; \nonumber \\ p_{\mathrm{dc } } & = & \mathrm{cos}^2(\frac{\sigma}{2 } ) ; \nonumber \\p_{\omega_m } & = & 2 \mathrm{cos}(\frac{\sigma}{2 } ) \re\{\tilde{a}e^{i\omega_mt}\ } ; \nonumber \\p_{\omega_h } & = & 2 \mathrm{cos}(\frac{\sigma}{2 } ) \re\{\gamma e^{i\omega_ht}\ } ; \nonumber \\ p_{\omega_{h}-\omega_{m } } & = & 2 \re \left [ \gamma ( \tilde{a } + i\tilde{p})^\ast e^{i(\omega_h - \omega_m)t}\right ] ; \nonumber \\ p_{\omega_{h}+\omega_{m } } & = & 2 \re \left [ \gamma ( \tilde{a } - i\tilde{p } ) e^{i(\omega_h + \omega_m)t } \right]\end{aligned}\ ] ] where the power components are split up according to their respective frequencies . 
represents the measurable beat due to the presence of am , is the beat between the carrier and the heterodyne oscillator , and and represent heterodyned copies of the two modulation sidebands .a spectrum analyser was used to monitor the strength of these frequency lines .the use of the spectrum analyser facilitated the assessment of the _ purity _ of states produced by the utm prototype , by observing operating points that involved suppression of one of these frequency lines .an alternative set of measurables was obtained by electronically mixing the heterodyne frequencies down to baseband via a _double - demodulation _ scheme .this gave a dc readout of the pm and am amplitudes at a selected beat phase .a complete description of the modulation strengths and quadratures was obtained in this way , and was hence useful to characterise the _ variability _ of the utm prototype .the method of signal extraction via double demodulation is best understood by reworking the last two components of eq .[ eqndetpower ] to : \end{aligned}\ ] ] this signal is mixed down to base band by demodulating at mhz and then mhz in series .the output of a demodulation is sensitive to the relative phases of the signals being mixed : if they are exactly in - phase or out - of - phase , the output will be maximally positive or negative ; if they are in quadrature , the output will be zero .[ eqndemodquad ] shows that the oscillator components of the am and pm portions are orthogonal ( one is the real component , the other is imaginary ) so that the appropriate choice of electrical oscillator phase can force the readout of pm only , or am only , or some linear combination of the two .similarly , the demodulation quadrature of the stage determines the beat phase that the output signal is projected onto .the output dc component of the _ first _ stage of demodulation ( extracted with a bias - t component ) was used as an error signal for locking the optical recombination phase of the heterodyne .this error signal varies sinusoidally with respect to the optical recombination phase , with a zero crossing where the optical heterodyne beat ( ) and the electronic demodulation oscillator are in quadrature . in other words ,the feedback loop `` locks out '' the heterodyne beat . comparing eq .[ eqndetpower ] and eq .[ eqndemodquad ] , we see that the am term has the same phase as the term ( that is , they both contain the terms ) , so that the feedback loop also locks out the am component of modulation , leaving only the pm component ( which has a term ) .this is an important part of the process as , without a locking loop , the demodulation phase of the circuit would be uncharacterised , and would also be free to drift . in the experiment ,two separate double demodulation schemes were used in order to measure pm and am simultaneously ; the signal was split with a electronic splitter to ensure that the two double demodulation circuits scanned orthogonal modulations . 
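The double-demodulation chain described above is easy to emulate numerically. The sketch below builds a toy detected signal containing a carrier-oscillator beat at the heterodyne frequency plus modulation beats at the two offset frequencies, mixes it down first at the heterodyne frequency and then at the modulation frequency, and averages to DC; choosing the phase of the second demodulation selects the AM-like or the PM-like part. All frequencies, amplitudes and the crude low-pass (a plain mean) are illustrative assumptions, not the values used in the experiment.

```python
import numpy as np

fs = 1.0e9                                   # sample rate (Hz), illustrative
t = np.arange(0.0, 2.0e-4, 1.0 / fs)
f_h, f_m = 80.0e6, 15.0e6                    # heterodyne and modulation frequencies (assumed)
a_am, a_pm = 0.3, 1.0                        # AM-like and PM-like beat amplitudes (assumed)

# Toy detected photocurrent: carrier-oscillator beat plus the two modulation beats,
# with the AM part symmetric (cosine) and the PM part antisymmetric (sine) in the sidebands.
sig = (np.cos(2 * np.pi * f_h * t)
       + a_am * (np.cos(2 * np.pi * (f_h - f_m) * t) + np.cos(2 * np.pi * (f_h + f_m) * t))
       + a_pm * (np.sin(2 * np.pi * (f_h - f_m) * t) - np.sin(2 * np.pi * (f_h + f_m) * t)))

def double_demod(x, phase_h, phase_m):
    """Mix at f_h, then at f_m, and average to DC (a crude low-pass)."""
    stage1 = 2.0 * x * np.cos(2 * np.pi * f_h * t + phase_h)
    return np.mean(stage1 * np.cos(2 * np.pi * f_m * t + phase_m))

print("in-phase (AM-like) output  :", round(double_demod(sig, 0.0, 0.0), 3))        # ~ a_am
print("quadrature (PM-like) output:", round(double_demod(sig, 0.0, np.pi / 2), 3))  # ~ a_pm
```

Changing the second-stage phase by 90 degrees swaps which modulation component survives the averaging, which is the selectivity exploited by the two demodulation circuits in the experiment.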
hence, when the heterodyne locking loop was connected using a signal from one double - demodulation circuit , this ensured that that particular circuit was sensitive to pm and that the other was sensitive to am .regarding the second stage of demodulation , electronically phase locking the signal generators was sufficient to time - stabilise the demodulation quadrature , and a known modulation was used for calibration .[ figoppoints ] shows spectrum analyser traces for four operating points , to demonstrate the suppression capabilities of the device .the suppression factors are 34.9 db and 39.5 db for the left- and right - hand sidebands ( fig .[ figoppoints](a ) and fig .[ figoppoints](b ) ) .the figures shown suppress the frequency lines down to the electronic noise floor , so that higher suppression factors may be possible . a more detailed trace exhibiting sideband suppressionis given in fig .[ figbestssb ] , with 35.2 db relative suppression recorded .we measured around 38 db suppression of the am beat compared with a similar - input - power pure - am state ( fig .[ figoppoints](c ) ) . in this and the other diagrams in fig .[ figoppoints ] , we have shown `` max - hold '' data , demonstrating that the device is highly stable , maintaining these operating points without significant drift on the timescale of hours . in fig .[ figoppoints](d ) , the carrier is suppressed by selecting the am operating point and setting ( horizontal polarisation ) .the heterodyne measurement of the carrier is down by around 30 db , from which we can infer that the carrier power ( which goes as the square of the heterodyne measurement ) is down by 60 db .this heterodyne measurement is further supported by observing that the first and second harmonics of direct am beat are approximately equal , which is consistent with the fact that the carrier ( as measured by the heterodyne ) is approximately 6 db weaker than the sidebands .we were especially careful to validate this heterodyne measurement of the carrier , because the directly detected power only dropped by 40 db .the reason for this is that the majority of the residual carrier power was now in a higher - order , odd spatial mode ( easily verified by looking at the intensity profile of the beam ) , which did not interfere efficiently with the heterodyne oscillator beam .the higher - order modes were thought to result from a spatially non - uniform polarisation of the light exiting the utm which , when subsequently passed through a polarising beam splitter , produced modes reflecting the symmetry of the modulator . as such , for applications where spatial mode interference is important , the employment of a mode - cleaner cavity with free spectral range equal to the modulation frequency ( or one of its integer divisors ) should solve the problem . in an attempt to confirm the variability of the utm prototype , we endeavoured to map significant paths in the modulation parameter space .[ figsweep1 ] and fig .[ figsweep2 ] show the result of sweeping through m - space in two cardinal directions : varying the relative phase between the two ( constant amplitude ) electrical signals ; and varying the amplitude of one electrical signal while keeping the other signal amplitude , and the relative phase , constant . 
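As a complement to these sweeps, the modulation-sphere bookkeeping itself takes only a few lines. The definitions below are an assumption: the paper's exact normalisation of the m-space coordinates did not survive extraction, so this sketch uses the natural Stokes-like combinations of the PM and AM phasors (difference of powers, in-phase cross term, quadrature cross term), which reproduce the qualitative description in the theory section.

```python
import numpy as np

def m_space(p, a):
    """Map complex PM and AM phasors (p, a) to assumed modulation-sphere coordinates.

    m1: which modulation dominates; m2: in-phase PM/AM correlation;
    m3: quadrature PM/AM correlation (sideband imbalance); m0: modulation power,
    with m0**2 == m1**2 + m2**2 + m3**2.
    """
    m1 = abs(p) ** 2 - abs(a) ** 2
    m2 = 2.0 * (p * np.conj(a)).real
    m3 = 2.0 * (p * np.conj(a)).imag
    m0 = abs(p) ** 2 + abs(a) ** 2
    return m0, np.array([m1, m2, m3])

cardinal = {
    "pure PM":            (1.0 + 0.0j, 0.0 + 0.0j),
    "pure AM":            (0.0 + 0.0j, 1.0 + 0.0j),
    "correlated PM & AM": (1.0 + 0.0j, 1.0 + 0.0j),
    "single sideband":    (1.0 + 0.0j, 0.0 + 1.0j),
}
for label, (p, a) in cardinal.items():
    m0, m = m_space(p, a)
    print(f"{label:18s} m/m0 = {np.round(m / m0, 3)}")
```

With these (assumed) definitions the pure PM/AM, correlated/anti-correlated, and single-sideband states each land on one of the three coordinate axes, consistent with the cardinal points listed in table i.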
for matters of calibration ,the utm was initially set to the pm operating point ( this initial state was used as the reference in labelling `` in - phase '' and `` quadrature '' components ) , and the relative gains and phases of the two double demodulation circuits were measured and factored out via correspondence with spectrum analyser data . the input polarisation was set to be elliptical with a vertical component slightly larger than the horizontal component .the results show that changing the phase between the two electrical signals produces am in quadrature to the original pm , whereas changing the amplitude of one electrical signal generates am in - phase with the original pm .the data points are generally in good agreement with the theoretical predictions .some small systematic errors are apparent , and are thought to be associated with unmatched electrical and optical impedances between the two crystals , and possibly to do with slightly unmatched optical power levels probing each crystal ( ie a violation of our original assumption regarding polarisation states ) .overall , the device is shown to be highly predictable , and that one can `` dial - up '' a particular modulation state on call , having calibrated the device initially . it is instructive to notice that the cardinal operating points are not evenly spaced when sweeping across the parameter space . in particular , we see that the two single - sideband operating points are shifted toward the am operating point in fig .[ figsweep1 ] , and the correlated pm & am operating point is similarly shifted in fig .[ figsweep2 ] , relative to the perpendicular .this is a product of the non - circular polarisation of light input into the modulator device , and can be understood by reviewing the modulation ellipses in the two figures ; the and axes intercept the ellipse at points that are closer to the am operating point than the pm .in fact , this provides us with a means of calibration of the relative transfer - function - amplitudes of the pm and am double - demodulation circuits : they are set by assuring that ( in fig .[ figsweep1 ] ) the pm and am components are equal at the phase where the single sideband is known to occur ( information which is obtained via comparison with spectrum analyser data ) .also , by using the identities in table i , we can derive a value of the polarisation parameter : .[ figoffsetlock ] shows scans of the local region near the pm operating point , with deviations in four directions .these data were taken with an offset locking application in mind , as discussed earlier .the in - phase pm component ( in - phase by definition ) is by far the dominant signal , and has been scaled down by a factor of 10 to fit in the graphs .as expected , the utm device can produce am that is in - phase with the present pm , or am that is in quadrature with the present pm , by changing the relative electrical signal amplitudes or phases respectively .in addition , both of these parameters can be varied together or oppositely , to give independent control over the two quadratures of am resulting . 
we note that in the case of producing both in - phase and quadrature am , a small component of quadrature pm appears ( in other words , the phase of the pm changes , relative to that of the pure pm point used for the initial calibration ) .in actual fact , the quadrature pm data has a noticeably larger systematic error than the other three signals .this is most likely due to pollution from its in - phase counterpart , caused by the demodulation oscillator s phase drifting marginally ( this was observed to happen even in spite of the electronic phase locking between signal generators ) .a number of experimental difficulties deserve mentioning .as described in the context of the carrier suppression results , the utm produced a spatially varying polarisation state , which produced a small percentage of higher - order spatial modes upon selecting out a polarisation component ( of order of the overall power ) .this interfered with our ability to directly measure the polarisation , which we did by measuring the overall detected power while rotating a diagnostic half - wave plate placed before the pbs . in this way , we measured a value of for the results shown in figs .[ figsweep1 ] to [ figoffsetlock ] , as compared to measured by inference from spectrum analyser data .we assign a reasonably large error to the direct measurement value due to the presence of the higher - order spatial modes , and we favour measuring by inference from the data .the overall optical path lengths of the modulating crystals were found to change significantly as they warmed up after starting the laser . the dual crystal design of the device goes a long way toward minimising this problem , and we have seen that the device is stable once it has warmed up . however , long term drift of the crystal lengths is possible , having a direct effect on the polarisation state leaving the device .we propose that , in circumstances where this becomes a problem , a feedback loop be employed to lock the polarisation state .a possible scheme would see the power level out of one pbs port monitored and used as a feedback signal ( minus an offset equal to the desired power level ) to the dc optical path length of one of the crystals , thus compensating for a mechanical path length change by feeding back to the refractive index . finally ,the electrical impedances of the two crystals were not well matched for two reasons : the electronics were not identical ; and the crystals themselves had `` good '' and `` bad '' spots which generated varying levels of modulation . 
in general terms , careful alignment can largely overcome this problem and return the two crystals to an equal footing .the differing electrical impedances complicate the issue of generating a local oscillator with the phase - tracking property discussed earlier , since the voltages and modulation depths are not then related by the same factor .we submit that the device tested here is merely a prototype , and that careful management will suffice to deal with these issues as the need arises .we presented a thorough investigation , both theoretical and experimental , of a prototype universally tunable modulator ( utm ) .the electrical - to - optical transfer function of the device was derived , both in terms of electrical and optical phasor notation , and using a geometric phase `` modulation sphere '' representation .both pictures were shown to have merit , and a set of cardinal modulations were described in each notation .we reported on an experiment to characterise the prototype utm , which involved using dual double - demodulation circuits to measure both am and pm components simultaneously .data sets were obtained and analysed to illustrate the variability and purity characteristics of the device ; the device was shown to be highly predictable and capable of highly pure states .applications for the utm were discussed .the authors are grateful to russell koehne for his skill and labour in modifying the original am device .stan whitcomb thanks the gravitational wave research group at the australian national university for their support and hospitality during his stay at the anu .this material is based on work supported by the australian research council , and supported in part by the united states national science foundation under cooperative agreement no .this paper has been assigned ligo laboratory document number ligo - p030031 - 00r .this is an outline of the derivation of eq .[ eqntf ] .first , we find the ( more general ) transfer function of the utm when we allow the input beam to have any polarisation . at the end, we will simplify to the subcase described in the main text .the input beam s polarisation will be characterised by two electric field phasor components , and , as defined in a set of left - diagonal and right - diagonal spatial coordinate axes ( with unit vectors and respectively ) .the electric field exiting the utm can be written as a vector ( to include polarisation information ) as : e^{i\omega t}.\ ] ] ( from hereon in , we will suppress the term for the sake of brevity . 
) upon passing through the vertically aligned linear polariser ( equivalent to taking a dot product with the vector ) , and expanding the complex exponentials to first order ( hence assuming ) , the vertical electric field amplitude becomes : .\ ] ] these real and imaginary oscillating terms correspond to am and pm respectively , so we parameterize via : with where the net optical phase shift has been discarded .[ eqnapp5 ] and [ eqnapp6 ] constitute the general utm transfer function for arbitrary input polarisation .note that the pm component does not have the property of tracking the phase of the sum of the input electrical signals , since it generally depends on these inputs in different proportions .this is one of the reasons why we chose to restrict the polarisation to a subset .if we choose said subset , ( equal power in the two polarisation axes ) , and define as the phase between the two polarisation components , then the equations reduce to : and this is an outline of the derivation of eq .[ eqntfellipse ] . as with appendix [ secapp1 ], we derive the q - space to m - space transfer function of the utm for any input light polarisation state . at the end , we reduce the equations to the case where the allowed polarisations are restricted .the derivation consists of transforming parameters and to m - space parameters using eq .[ eqngeospherem ] , and transforming parameters and to q - space parameters using eq . [ eqngeospherep ] , and hence converting eq .[ eqnapp6 ] from one set of coordinates to another . + 2l^2r^2q_2}{2|\tilde{l}+\tilde{r}|^2 } \nonumber \\ + \frac{\frac{1}{2}[(l^4 + r^4)q_0 + ( l^4 - r^4)q_1]}{2|\tilde{l}+\tilde{r}|^2 } \nonumber\\ + \frac{\re \{\tilde{r}\tilde{l}^*\}[(l^2+r^2)q_0 + ( l^2-r^2)q_1 + ( l^2+r^2)q_2 ] } { 2|\tilde{l}+\tilde{r}|^2}\end{aligned}\ ] ] things can be simplified somewhat by using a stokes parameter representation for the polarisation from this point . here, we ( unusually ) define the stokes parameters in terms of left - diagonal and right - diagonal electric field components ( equivalent to rotating the usual x - y axes anti - clockwise by : when the previous four equations become : \nonumber \\a^2 & = & \frac{1}{8 } \frac{s_3 ^ 2}{s_0+s_2}(q_0-q_2 ) \nonumber \\\re\{\tilde{p}\tilde{a}^*\ } & = & \frac{1}{8}\left [ s_3 q_1 - \frac{s_3 s_1}{s_0 + s_2}(q_0 - q_2 ) \right ] \nonumber \\\im\{\tilde{p}\tilde{a}^*\ } & = & -\frac{1}{8 } s_3 q_3\end{aligned}\ ] ] eq .[ eqnapp15 ] is the general q - space to m - space transfer function for the utm with any input polarisation .it gives further insight into the manner in which choosing affects the properties of the system .the parameters and depend on both and , which means that a sphere in q - space will be distorted in the - plane by the transformation .the resulting surface is an ellipsoid whose major axis is no longer collinear with the -axis , but is at an angle subtended in the - plane .this distortion certainly interferes with most of the favourable properties of the system .for example , none of the poles of the ellipse coincide with a coordinate axis , making any of the six operating points discussed more complicated to find .the restriction on polarisation states discussed in the text is equivalent to setting to zero ( which in turn forces , and ) , when eq .[ eqnapp15 ] reduces to : j. mizuno , k. a. strain , p. g. nelson , j. m. chen , r. schilling , a. rudiger , w. winkler , and k. 
danzmann , `` resonant sideband extraction : a new configuration for interferometric gravitational wave detectors '' , phys .a * 175 * , 273 - 276 ( 1993 ) g. de vine , d. a. shaddock , and d. e. mcclelland , `` experimental demonstration of variable reflectivity signal recycling for interferometric gravitational wave detectors '' , opt* 27 * , 1507 - 1509 ( 2002 ) see , for example : a. loayssa , d. benito , and m. j. garde , ieee phot .* 13 * , 869 - 971 ( 2001 ) ; s. shimotsu , s. oikawa , t. saitou , n. mitsugi , k. kubodera , t. kawanishi , and m. izutsu , ieee phot . tech .* 13 * , 364 - 366 ( 2001 ) see , for example , w. p. bowen , n. treps , b. c. buchler , r. schnabel , t. c. ralph , h .- a .bachor , t. symul , and p. k. lam , phys .a * 67 * , 032302 ( 2003 ) or a. furusawa , j. l. sorensen , s. l. braunstein , c. a. fuchs , h. j. kimble , and e. s. polzik , science * 282 * , 706 ( 1998 ) .
|
we report on the analysis and prototype - characterization of a dual - electrode electro - optic modulator that can generate both amplitude and phase modulations with a selectable relative phase , termed a _ universally tunable modulator _ ( utm ) . all modulation states can be reached by tuning only the electrical inputs , facilitating real - time tuning , and the device is shown to have good suppression and stability properties . a mathematical analysis is presented , including the development of a geometric phase representation for modulation . the experimental characterization of the device shows that relative suppressions of 38 db , 39 db and 30 db for phase , single - sideband and carrier - suppressed modulations , respectively , can be obtained , as well as showing the device is well - behaved when scanning continuously through the parameter space of modulations . uses for the device are discussed , including the tuning of lock points in optical locking schemes , single sideband applications , modulation fast - switching applications , and applications requiring combined modulations .
|
there have been many researches to evaluate densities and quantiles of symmetric or general stable distributions .mcculloch ( 1998 ) considered efficient algorithms for approximating symmetric stable densities for the range , where parameter denotes the characteristic exponent .nolan ( 1997 ) gave accurate algorithms for general stable densities based on integral representations of the densities which were derived by zolotarev ( 1986 ) .nolan provides a very useful program package `` stable '' on his web page .however his program exhibits some unreliable behavior around the boundary as stated in the users guide of stable. therefore even in the case of symmetric stable distributions , reliable computations of density functions including all the boundary cases is still needed .furthermore for maximum likelihood estimation , it is desirable to directly compute the derivatives of the density function . in this paperwe present reliable computations of symmetric stable density functions and their partial derivatives .our computation of densities is accurate for all values of and . concerning the partial derivatives it is accurate in a somewhat smaller range of values . regarding maximum likelihood estimation for the range , nolan ( 2001 ) used interpolated stable densities and maximized the likelihood by approximate gradient search ( constrained quasi - newton method ) because of its efficiency . butnear the boundary of the parameter space interpolation may be inaccurate and the direct integral representation is used .note that the direct integral representation is also not very reliable and slow near the boundary . in the symmetric case ,brorsen and yang ( 1990 ) discussed maximum likelihood estimation using an integral representation of the densities given by zolotarev ( 1986 ) .but they have only considered the range to avoid the discontinuity and nondifferentiability at .furthermore they did not check the sample covariances of their maximum likelihood computation with the fisher information matrix .these previous researches on maximum likelihood estimation have not used direct evaluation of the derivatives of the log likelihood function with respect to the parameters . for reliable evaluation of the maximum likelihood estimator and its standard error , direct and reliable evaluation ofthe first and the second derivatives of the log likelihood function is desirable .the organization of this paper is as follows . in section [sed : preliminaris ] we summarize notations and preliminary results on symmetric stable density . 
in section [ sec :density ] we provide an accurate algorithm for calculations of symmetric stable distributions which modifies nolan ( 1997 ) for near or and for or using various expansions .accurate algorithms for the partial derivatives of symmetric stable distributions with respect to the parameters are given in section [ sec : derivative - location - scale ] and section [ sec : derivative - alpha ] .fisher information matrices are calculated in section [ sec : fisher - information ] , together with some simulation studies on the variance of the maximum likelihood estimator and the observed fisher information .we also discuss behavior of fisher information as .some discussions are given in section [ sec : discussion ] .in this section we prepare notations and summarize preliminary results .there are many parameterizations for stable distributions and much confusion has been caused .our parameterizations follow the useful parameterizations for statistical inference which were given in nolan ( 1998 ) .let denote the characteristic function of symmetric stable distribution with parameters where is the characteristic exponent , is a location parameter and is a scale parameter . for the standard case we simply write the characteristic function as the corresponding densityis written as and in the standard case : at is the cauchy density and at is the normal density .accordingly the density can be defined to constitute a location - scale family . in the following ,the first derivative of with respect to is denoted by and the second derivative is denoted by . then the partial derivatives with respect to and are written by subscripts , e.g. , as above , when these derivatives are evaluated at the standard case we write , , etc .note that furthermore we write the reason we consider up to the second order derivatives of the density function is that in assessing the standard error of the maximum likelihood estimator , the observed fisher information is usually preferred to the value of the fisher information matrix at ( e.g. efron and hinkley ( 1978 ) ) .note that there are other parameterizations of symmetric stable distributions than ( [ eq : characteristic - function ] ) .however different parameterizations in the literature are smooth functions of each other including the boundary and differentiations in terms of other parameterizations can be obtained from the results of this paper by the chain rule of differentiation . from equation ( 2.2.18 ) of zolotarev ( 1986 ) or theorem 1 of nolan ( 1997 ), the density for the case and is written as where note that at for all .for the case and , , the following expansion can be used . for , this series is not convergent but can be justified as an asymptotic expansion as . for , it is convergent for every .similarly for the case and , , we have for this series converges for every and for this series can be justified as an asymptotic expansion as . for asymptotic expansion is zero , which corresponds to the fact that the tail of normal distribution is exponentially small .these ( asymptotic ) expansions are stated in bergstrm ( 1953 ) , section xvii.6 of feller ( 1971 ) , section 2.4 and section 2.5 of zolotarev ( 1986 ) .as in nolan ( 1997 ) we numerically evaluate the density function using ( [ dense ] ) . 
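For readers who want to experiment with the expansions just quoted, here is a small sketch of the standard Bergström/Feller series for the standard symmetric stable density: a power series in x that converges for alpha > 1 (and is asymptotic as x -> 0 for alpha < 1), and a tail series that converges for alpha < 1 (and is asymptotic as x -> infinity for alpha < 2). The characteristic function exp(-|t|^alpha) is assumed, consistent with the Cauchy case at alpha = 1; the paper's truncation rules are not reproduced, so treat this purely as an illustration.

```python
from math import gamma, factorial, pi, sin

def f_near_zero(x, alpha, terms=25):
    """Series around x = 0: (1/(pi*alpha)) * sum_k (-1)^k Gamma((2k+1)/alpha) x^(2k) / (2k)!"""
    return sum((-1) ** k * gamma((2 * k + 1) / alpha) * x ** (2 * k) / factorial(2 * k)
               for k in range(terms)) / (pi * alpha)

def f_tail(x, alpha, terms=25):
    """Tail series: (1/pi) * sum_{k>=1} (-1)^(k+1) Gamma(k*alpha+1)/k! sin(k*pi*alpha/2) x^(-k*alpha-1)"""
    return sum((-1) ** (k + 1) * gamma(k * alpha + 1) / factorial(k)
               * sin(k * pi * alpha / 2) * x ** (-k * alpha - 1)
               for k in range(1, terms + 1)) / pi

# Sanity check against the Cauchy density (alpha = 1): f(x) = 1 / (pi * (1 + x^2)).
print(f_near_zero(0.5, 1.0), 1.0 / (pi * 1.25))   # small-x regime
print(f_tail(5.0, 1.0),      1.0 / (pi * 26.0))   # large-x regime
```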
in ( [ dense ] ) the function \to [ 0,\infty]$ ] plays an important role , because properties of this function make the numerical integration quite efficient .note that is continuous and positive , strictly increases from to for and strictly decreases from to for .therefore the integrand is unimodal and its maximum value is uniquely attained at satisfying .when the value of density is small , the integrand concentrates around its mode very narrowly .then quadrature algorithms may miss the integrand .therefore we solve for and the integral is divided into two intervals around this mode ( see nolan ( 1997 ) ) . for numerical calculations of ( [ dense ] ) we use adaptive integration with singularity ( qags ) in gnu scientific library ( 2003 ) . for most values of and integration works well .however , when is close to this algorithm has some difficulty .note that for and . therefore exceed when .there are some other numerical difficulties in ( [ dense ] ) .we list these difficulties and propose alternative practical methods for evaluating the density . 1 . is small and : + if is small , the density is very much concentrated at .for example nolan ( 1997 ) states whereas . in our calculations when is small and near the values of ( [ dense ] ) sometimes become larger than , contradicting the unimodality of the stable density . for this case we can use the asymptotic expansion ( [ eq : xto0 ] ) . : + we can not guarantee the accuracy of ( [ dense ] ) in the case of . since stable distributions have heavy tails , reliable calculation of their densities is needed for large . in our calculationswhen and is large the values of ( [ dense ] ) sometimes become much smaller than the asymptotic expansion ( [ eq : xtoinf ] ) . for this case we can use the asymptotic expansion ( [ eq : xtoinf ] ) . is near 1 : + the representation ( [ dense ] ) can not be applied at theoretically .the numerical quadrature of ( [ dense ] ) becomes unreliable because of roundoff errors , when is close to . is near 2 : + though the representation ( [ dense ] ) can be applied near theoretically , it seems to be too close to the normal distribution in the tail of the distribution .actually the values of the density in the tail obtained by the integral representation ( [ dense ] ) is much smaller than the asymptotic expansion ( [ eq : xtoinf ] ) . for the rest of this section, we discuss the cases 3 and 4 above . for , we consider taylor expansion of the density around .let denote the euler s constant throughout the rest of this paper .the taylor expansion of around is given as follows . where for convenience the explicit form of is given in appendix [ sec : alpha-3 ] .( [ eq : alpha-1 - 1 ] ) and ( [ eq : alpha-1 - 2 ] ) are proved as follows . from the equation 4.40 on page 18 of oberhettinger ( 1990 ) ,\ ] ] where differentiating ( [ eq : oberhettinger ] ) several times with respect to , setting , and combining the results in the inversion formula we obtain ( [ eq : alpha-1 - 1 ] ) and ( [ eq : alpha-1 - 2 ] ). 
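Stepping back to the quadrature itself for a moment: the device of locating the peak of the integrand and splitting the integration interval there can be illustrated generically. The integrand below is a stand-in (a narrow Gaussian bump inside (0, pi/2)), not the g-type integrand of (dense); it shows why a single adaptive pass can step straight over a narrow peak, while integrating the two sub-intervals on either side of the known mode does not.

```python
import numpy as np
from scipy.integrate import quad

def split_quad(integrand, a, b, mode):
    """Integrate a sharply peaked integrand on [a, b], splitting at its (known) mode."""
    left, _ = quad(integrand, a, mode, limit=200)
    right, _ = quad(integrand, mode, b, limit=200)
    return left + right

mode, width = 1.2345, 1.0e-3                 # stand-in peak location and width
integrand = lambda th: np.exp(-((th - mode) / width) ** 2)

naive, _ = quad(integrand, 0.0, np.pi / 2)
print("single adaptive pass :", naive)       # typically misses the narrow peak
print("split at the mode    :", split_quad(integrand, 0.0, np.pi / 2, mode))
print("reference value      :", width * np.sqrt(np.pi))
```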
the conditions for change of integral and differentiation are satisfied in these cases .although higher order derivatives of with respect to can be evaluated along the same line , we found that the three term expansion ( [ eq : alpha-1 ] ) is sufficiently accurate .now we consider the case and .it seems natural to use asymptotic expansion ( [ eq : xtoinf ] ) .however the normal density has an exponentially small tail and this expansion is meaningless for .however in view of smoothness at , an approximation around the normal density is desirable .this case is somewhat subtle , but we found that the following procedure works well numerically . note that from ( [ eq : xto0 ] ) , for each fixed , is differentiable with respect to even at , i.e. , for each fixed we have differentiating ( [ eq : xtoinf ] ) with respect to , for large , heuristically we have and we summarize our treatments of various boundary cases in table [ tdense ] .the range of and and the number of terms in the expansions ( [ eq : xto0 ] ) and ( [ eq : xtoinf ] ) are shown .for we use formula ( [ eq : alpha-1 ] ) and for and for large we use the maximum of ( [ dense ] ) and ( [ eq : alpha-2 ] ) . note that for most cases a small number of terms in the expansions ( [ eq : xto0 ] ) or ( [ eq : xtoinf ] ) is sufficient .the approximations are very effective since the expansions and the integral ( [ dense ] ) give virtually the same results for most values of and ..approximations to stable density at boundary cases [ cols="^,^,^,^,^",options="header " , ] [ tbl : information - alpha - sigma ]in this paper we proposed reliable numerical calculations of the symmetric stable densities and their partial derivatives including various boundary cases .we found that except for very small values of ( ) our method works very well .this enables us to reliably compute the maximum likelihood estimator of the symmetric stable distributions and its standard error . for the family of stable distributions ,the use of the observed fisher information in ( [ eq : observed - fi ] ) for assessing the standard deviation of the maximum likelihood estimator needs further investigation .our simulation suggests that there may be some merit in using only the first term on the right hand side of ( [ eq : observed - fi ] ) .further study is needed to theoretically establish the limiting behavior of the fisher information matrix as .finally it is of interest to extend the methods of the present paper to the general asymmetric stable densities and to the multivariate symmetric stable densities .these extensions will be studied in our subsequent works .write differentiating ( [ eq : oberhettinger ] ) three times and setting we get , \nonumber\end{aligned}\ ] ] where . then the purpose of checking some of our calculations , we can use the explicit formula of the density at the special case of . 
from ( 2.8.30 ) of zolotarev ( 1986 ) is written as ,\ ] ] where are known as fresnel integral functions .we differentiate the above representation and obtain \\ & & -\frac{x^{-\frac{7}{2}}}{4\sqrt{2\pi}}\left [ \cos\left(\frac{1}{4x}\right ) \left\{\frac{1}{2}-s\left(\frac{1}{\sqrt{2\pi x}}\right)\right\ } -\sin\left(\frac{1}{4x}\right ) \left\{\frac{1}{2}-c\left(\frac{1}{\sqrt{2\pi x}}\right)\right\}\right ] + \frac{x^{-3}}{4\pi}\end{aligned}\ ] ] and \\ & & + \frac{5}{4}\frac{x^{-\frac{9}{2}}}{\sqrt{2\pi}}\left [ \cos\left(\frac{1}{4x}\right ) \left\{\frac{1}{2}-s\left(\frac{1}{\sqrt{2\pi x}}\right)\right\ } -\sin\left(\frac{1}{4x}\right ) \left\{\frac{1}{2}-c\left(\frac{1}{\sqrt{2\pi x}}\right)\right\}\right ] -\frac{9}{8}\frac{x^{-4}}{\pi } .\end{aligned}\ ] ] we have confirmed that our formulas in section [ sec : derivative - location - scale ] numerically coincide with these explicit expressions at including the boundary cases .nolan , j. p. ( 2001 ) .maximum likelihood estimation and diagnostics for stable distributions . _lvy processes : theory and applications _ ( o. e. barndorff - nielsen et al . eds . ) , birkhauser , boston , 379400 .oberhettinger , f. ( 1990 ) ._ tables of fourier transforms and fourier transforms of distributions . _springer - verlag , berlin .zolotarev , v. m. ( 1986 ) ._ one - dimensional stable distributions ._ transl . of math .monographs , * 65 * , amer .soc . , providence , ri .( transl . of the original 1983 russian )
|
we propose improvements in numerical evaluation of symmetric stable density and its partial derivatives with respect to the parameters . they are useful for more reliable evaluation of maximum likelihood estimator and its standard error . numerical values of the fisher information matrix of symmetric stable distributions are also given . our improvements consist of modification of the method of nolan ( 1997 ) for the boundary cases , i.e. , in the tail and mode of the densities and in the neighborhood of the cauchy and the normal distributions .
|
this memo discusses the problem of detection of periodic components in variable star data sets .this type of data tends to be highly non - sinusoidal , the period and the amplitude may change secularly or periodically , there may be multiple modes present , other types of variations due to companions , eclipses , etc .could occur , and the data set usually contains gaps , which themselves may contain periodic variations ( daily , yearly , etc . ) . although we will focus on rr lyrae data , the techniques are applicable to other classes of stars and , indeed , other application areas as well .most analyses of periodic signals depend on the classical techniques of discrete fourier transforms ( ft ) .an ft analysis decomposes the time data into a sum of sine and/or cosine components whose coefficients represent the amplitude of each frequency in the expansion .these coefficients are computed as follows . the squares of these amplitudes then describe a `` power spectrum '' , i.e. power versus frequency , function for the data set . see press et al .( 1992 ; henceforth nrc92 ) , chapter 13 , for details . for long sets of equally spaced datathis approach is optimum . in real data sets ,however , the time intervals are generally not equal , and may contain large gaps due to observational constraints .this has led to the use of other techniques applicable to non - equal - spaced data .the `` periodogram '' is a classical analysis technique that employs an ft type of expression , but ignores the `` equal spacing '' requirement .the power spectrum of the pg approach is computed thus : \\\ ] ] here is the set of time and data values , is the frequency , and the number of data points .this technique has been thoroughly analyzed by scargle ( 1982 ) .a variation has been derived by vanicek ( 1971 ) and lomb ( 1976 ) with desirable statistical properties .this power spectrum is given by ^ 2 } { \sum_j { \rm cos^2}\ , \omega ( t_j-\tau ) } + \frac { \left [ \sum_{j } x_j\ , { \rm sin}\ , \omega ( t_j-\tau ) \right]^2 } { \sum_j { \rm sin^2}\ , \omega ( t_j-\tau ) } \right)\ ] ] with with the lomb normalization the statistical distribution of p for a pure gaussian noise signal is exponential .this is a useful property when estimating the `` significance '' , or chance that a detected signal is due to noise fluctuations .lomb also showed that the periodogram was equivalent to a least squares fit to the folded data at each frequency by a sine wave .this is illustrated in figure 1 , below .the `` phase dispersion minimization '' technique was originally presented in stellingwerf ( 1978 ) .this is basically a folding of the data , together with a binning analysis of the variance at each candidate frequency .it is a least squares fit , as in the periodogram , but to a mean curve that is determined by the data , rather than a sine wave . rather than a power curve, the pdm analysis computes the sum of the bin variances divided by the total variance of the data . for uncorrelated datathis ratio is close to unity . at a possible periodthe bin variances are less than the overall variance , and the statistic ( called theta ) drops to some value greater than zero .the pdm computation can be done very efficiently . for a time / data set and candidate frequency ,the phase of a data point is the fractional part of . 
assuming 10 bins , the bin number of this point is just the integer part of 10 * .once the bin number is known , the statistics for this bin can be updated .thus the computation consists of a single loop over data points for each frequency .this is about 10 times more efficient than the original pdm code , which contained nested loops over data and bins for maximum generality .also note that no trigonometric function evaluations are required . for data sets with large gaps in time ,the pdm technique is usually applied to each cluster of points ( called a * `` segment '' * ) separately , greatly reducing the computation required , since the spectral line width ( which determines the number of frequencies needed ) will be the time length of the longest segment , rather than the total time span for the data set .similarly , with appropriate adjustments in scaling and zero point , different color observations can be combined in the same fashion .this is illustrated in figure 3 .additional improvements to the original pdm technique have been made .the updated version , called * pdm2 * , is available at the website www.stellingwerf.com .the other improvements and changes include : \1 .experience has shown that * 10 bins * are adequate ( and usually optimum ) for variable star data sets . for data sets with more than about 100 points , these bins are non - overlapping .for data sets with less than about 100 points , better results are obtained if the bins are double - wide , and overlap .the switch point is adjustable , and one or the other approach can be specified if desired .one approach is to use overlapping bins to identify possible candidate frequencies , and then use the non - overlapping option to improve the period estimate and obtain a mean light curve .. the statistical significance of each frequency is now computed either with a * beta distribution * ( schwarzenberg - czerny , 1997 ) or a * monte carlo * computation ( nemec & nemec 1985 ) rather than the original f test .this will be discussed below .the sum of the bin variance technique is equivalent to a least squares fit to a step function through each bin mean . for clean datathe results can be improved by fitting to a * linear variation * through the bin means , or by a * b - spline curve * through the bin means .these are now options .they are only used at frequencies where a significant signal is present .* subharmonic averaging : * unlike the fourier techniques , pdm detects a signal at 1/2 , 1/3 , etc the actual frequency , since these multiple period folding frequencies also produce periodic variations .this option looks for a significant minimum in theta at both the main and 1/2 the main frequency . for actual variations , both will be present . for a noise result, the 1/2 frequency signal will not be present .the theta variation for a sine wave with frequency 1.0 ( overlapping bins ) is shown in figure 4 , illustrating this effect .period variation : * pdm2 has an option to include a slow increase or decrease in the period across the data set .the rate of variation can be varied , and the most likely value for the period change can be computed .here we compare the lomb periodogram and the pdm techniques for various test cases . the first data set consists of 10 cycles of a sine wave plus gaussian noise .figure 5 shows the folded data , the lomb power spectrum , and the pdm theta function for this case . 
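The single-loop theta computation described at the start of this section is short enough to sketch directly. The version below uses 10 non-overlapping bins and the pooled-bin-variance over total-variance ratio; none of the pdm2 refinements (overlapping bins for small data sets, spline mean curves, subharmonic averaging, bias corrections) are included, so it is an illustration of the statistic rather than a reimplementation of pdm2.

```python
import numpy as np

def pdm_theta(t, y, freqs, nbins=10):
    """Theta(f) = pooled within-bin variance of the folded data / total variance."""
    t, y = np.asarray(t, float), np.asarray(y, float)
    total = y.var(ddof=1)
    theta = np.empty(len(freqs))
    for k, f in enumerate(freqs):
        phase = (t * f) % 1.0                          # fractional part of t * f
        bins = np.minimum((phase * nbins).astype(int), nbins - 1)
        num, den = 0.0, 0
        for b in range(nbins):
            yb = y[bins == b]
            if yb.size > 1:
                num += (yb.size - 1) * yb.var(ddof=1)
                den += yb.size - 1
        theta[k] = (num / den) / total if den else 1.0
    return theta

# Synthetic test: a noisy sinusoid of frequency 1.0, unevenly sampled.
rng = np.random.default_rng(2)
t = np.sort(rng.uniform(0.0, 30.0, 300))
y = np.sin(2 * np.pi * t) + 0.3 * rng.standard_normal(t.size)
freqs = np.linspace(0.05, 3.0, 2000)
theta = pdm_theta(t, y, freqs)
print("deepest minimum at f =", round(freqs[int(np.argmin(theta))], 3))   # expect ~1.0
```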
aside from the flip and the pdm subharmonics ,the variation is almost identical in the two cases .we conclude that either approach will work for a simple signal plus noise case .the effect of a large gap in the data is shown in figure 6 . herethe data consists of 10 cycles of a frequency = 1 variation , followed by a gap of about 50 , then 10 additional cycles . no noise was added .again , the lomb and the pdm responses are very similar .subharmonic averaging was used in this pdm analysis , which emphasizes the correct minimum over the adjacent frequencies .figure 7 shows a pdm analysis of the same data set , but with the data treated as three segments .this is the default mode of analysis , and the same result is obtained much more easily since the fine structure is eliminated .normally this analysis would be followed by a pass without segmentation , but covering only the central dip seen in figure 6 to improve the period estimate .the next test consists of a series of narrow pulses ( short duration eclipses would be similar ) .figure 8 shows that the two techniques each get the correct answer , but the curves do look quite different for this case . herethe pdm result is much more definitive , with narrower spectral lines and smaller side lobes .a final test case shows that for complex wave forms , divergent results can be obtained .this case ( figure 9 ) has alternating peaks , so that the periodic signal is at period = 2 .the lomb technique identifies most of the power at frequency 1.0 , whereas pdm correctly identifies the periodic variation at frequency 1/2 .it is important to understand the likelihood that a possible periodic variation represents an actual measured variation , rather than just a low ( or high ) value in a noise signal containing no actual variation . to assist in this evaluation ,these techniques provide a `` confidence level '' or a `` significance '' representing the probability that a pure noise signal would produce the given result . to do this , a series of analyses of a pure gaussian noise signalcan be made , and distribution function for various values of the power ( lomb ) , or of theta ( pdm ) can be derived ( see stellingwerf , 1978 , lomb , 1976 , scargle , 1982 ) , and nrc92 section 13.8 ) .note that these estimates will be accurate only if the noise observed is pure gaussian , as assumed in the derivation .in many cases , such as variation due to other modes , blazhko effect , light contamination , etc . the noise will not be gaussian , and the analytical estimate is not valid . in many cases ,however , a monte carlo technique will still be applicable ( see below ) .as an example , consider the case of 51 data points .the question is whether a signal has ( or can ) be detected in the presence of noise .so we consider first the case of pure noise .figure 10 ( top ) shows the power spectrum from a lomb analysis of this data set .note that most of the spectral peaks are value 2 or below , with one peak up to about 4.3 .the lower part of figure 10 shows the pdm analysis of the same data set . 
herethe theta variation is above about 0.90 .the distribution function ( probability of a given value ) corresponding to the lomb case is the left curve in figure 11 .this can be obtained either theoretically ( exponential ) or directly from the data in figure 10 , and the results agree .this is not the final result , however , since a correction must be made to account for the number of samples ( frequencies ) , called the `` bandwidth correction '' ( see nrc92 for details ) .this moves the line to the right - hand curve in figure 11 .this correction is not small , and there is often some uncertainty in its application .note that the corrected curve is obviously not a simple exponential .we will show below the corrected distribution is an `` extreme value '' distribution , ( e.g. weibull distribution ) .significant results must fall to the right of the tail of the distribution ( value greater than about 7.5 ) .note that this value is significantly larger than the computed maximum of about 4.5 .the pdm distribution is the beta distribution shown in figure 12 ( right curve ) .the left - hand curve is the distribution after the bandwidth correction is applied .`` significant '' results will now fall to the left of the tail , values of less than about 0.65 .again , this is much smaller than the computed minimum of about 0.90 .note that this distribution falls off more rapidly than the exponential in the periodogram case .this should result in a cleaner distinction between real and random results . as mentioned above, we can imagine a variety of cases in which the exponential or beta function analyses are not expected to be accurate .in these cases the monte carlo approach developed by nemec & nemec ( 1985 ) can be used . in this method ,multiple analyses of the data are done , with phase redistribution applied to each run to obtain independent random samples . for each case the minimum value of theta is noted .the distribution of these minima is then used to compute the probability that an observed minimum is real .this is an `` extreme value '' distribution .the result of applying this approach to a pdm analysis of the 51 point gaussian noise test case ( 500 passes ) is shown in figure 13 ( left hand curve .note that this curve corresponds closely to the analytic beta distribution result shown in figure 12 _ after the bandwidth correction ._ for comparison , a distribution curve was computed for a single pdm run , and this is the right hand curve in figure 13 .it corresponds closely with the beta result in figure 12 _ before correcting ._ this provides another interpretation of the bandwidth correction it is the extreme value distribution corresponding to the exponential or beta distribution of the analysis .this explanation is less ambiguous and clearer than the usual interpretation .for a quick estimate , the analytic distributions usually suffice , but for important borderline cases , the monte carlo approach is favored . 
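A sketch of this randomization approach, reusing the pdm_theta function and the synthetic t, y from the earlier sketch (that dependency, and the choice to shuffle the magnitudes over the fixed observation times, are assumptions of this illustration): each pass scrambles the data, the deepest theta minimum of the scrambled set is recorded, and the fraction of passes reaching a minimum at least as deep as the observed one estimates the false-alarm probability.

```python
import numpy as np

def pdm_false_alarm(t, y, freqs, theta_min_obs, n_passes=250, seed=0):
    """Estimate P(min theta <= observed) for data with no coherent signal."""
    rng = np.random.default_rng(seed)
    minima = np.empty(n_passes)
    for i in range(n_passes):
        # Shuffling y destroys phase coherence but keeps the times and magnitudes.
        minima[i] = pdm_theta(t, rng.permutation(y), freqs).min()   # pdm_theta from the sketch above
    return float(np.mean(minima <= theta_min_obs)), minima

coarse = np.linspace(0.05, 3.0, 400)            # coarser grid keeps the cost down
p_fa, minima = pdm_false_alarm(t, y, coarse, pdm_theta(t, y, coarse).min(), n_passes=100)
print("false-alarm probability ~", p_fa)                        # ~ 0 for the synthetic signal
print("noise minima span:", round(minima.min(), 3), "to", round(minima.max(), 3))
```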
in pdm2 250mc passes is the default , with an additional check with 500 passes recommended to validate important cases .several test cases on data kindly supplied by the authors of kunder , et.al .( 2011 ) were analyzed and will be briefly shown here .three variables were considered from the ic4499 data set : v04 , v48 , and v83 .v04 is a type a rr lyrae with a well determined period and period change .the other two are blazhko variables with known periods , but no period change estimates .figure 14 show the data and pdm analysis for v04 .the data set is shown in the top panel , colored on the sigma ( uncertainty ) for each point .the sigmas are small for this data set , but were taken into account in the pdm analysis .the middle panel shows the result of a broad - range frequency scan with nine data segments to broaden the features .the bottom panel shows the result of a fine scan near the candidate frequency , resulting in a final estimate of 1.60355 / day .the published value is 1.60354 . in the 2011 paperan o - c analysis is used to estimate the period change .if the period is given by t , the value for was found to be 0.18 + /-.05 d / my .pdm2 has a new option to vary and see if the variance of the fit is reduced .the result of this scan is shown in figure 15 .a clearly defined minimum is seen around beta = 1.5 or so with an uncertainty of about + /-this is less accurate than the o - c analysis result , but is consistent with it .figure 16 shows the resulting light curve with the period change taken into account .the data are very tight , and the scatter is reduced from the constant period case .variable v48 is a blazhko variable and the o - c analysis could not produce an estimate for the period change rate .the pdm analysis is similar to that of v04 , but the mean light curve , as shown in figure 17 shows a typical blazhko variation in amplitude .the point colors are time values , so the red low amplitude peak represents late points , so a decreasing period may be suspected .a period change scan was run on this data , with the results shown in figure 18 . by looking at the scatter at all phases simultaneously , pdm is able to determine the period change to be beta = -0.13+/-0.05 with a well - defined minimum . with this period change in effect ,the scatter on the rising branch is eliminated .variable 83 is also classified as blazhko . the light curve resulting from a pdm analysis is shown in figure 19 . herethe scatter is mainly at minimum light , but the red points are again found to the left of the other data , suggesting a decreasing period .this is confirmed by the period scan shown in figure 20 , producing a result of beta = -0.20 + /-a new version of the period analysis program pdm has been developed and is available as a windows executable or c source code for a unix compile .the tests discussed here show that this approach is preferable in many cases for a straight period search , while the lomb or fourier techniques are preferable for cases with broad frequency spreads whose power distributions are desired .the new version of pdm has a number of new features and updates several known problems with the original version of the technique .we show , in particular that it is applicable to determining period changes for blazhko variables that present difficult cases for fixed - phase analyses .
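The period-change scan described above can be mimicked by folding with a time-dependent period before computing theta. For a slow linear change dP/dt = beta the accumulated phase is well approximated by the familiar quadratic O-C correction, and scanning beta for the smallest theta recovers the drift; the sign and normalisation conventions below (beta in period units per time unit, phase measured from the first observation) are assumptions of this sketch, not necessarily those used by pdm2.

```python
import numpy as np

def theta_pdot(t, y, p0, beta, nbins=10):
    """Theta after folding with a linearly changing period P(t) = p0 + beta * (t - t[0])."""
    t, y = np.asarray(t, float), np.asarray(y, float)
    dt = t - t[0]
    cycles = dt / p0 - 0.5 * beta * dt ** 2 / p0 ** 2     # quadratic O-C approximation
    bins = np.minimum(((cycles % 1.0) * nbins).astype(int), nbins - 1)
    num, den = 0.0, 0
    for b in range(nbins):
        yb = y[bins == b]
        if yb.size > 1:
            num += (yb.size - 1) * yb.var(ddof=1)
            den += yb.size - 1
    return (num / den) / y.var(ddof=1)

# Synthetic light curve whose period drifts upward from 1.0 at beta_true.
rng = np.random.default_rng(3)
t = np.sort(rng.uniform(0.0, 200.0, 800))
beta_true, p0 = 5.0e-4, 1.0
y = (np.sin(2 * np.pi * (t / p0 - 0.5 * beta_true * t ** 2 / p0 ** 2))
     + 0.2 * rng.standard_normal(t.size))

betas = np.linspace(-1.0e-3, 1.0e-3, 81)
scan = [theta_pdot(t, y, p0, b) for b in betas]
print("best beta ~", round(betas[int(np.argmin(scan))], 6))    # expect ~ 5e-4
```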
|
the classic problem of detection of periodic signals in the presence of noise becomes much more challenging if the observation times are themselves periodic , contain large gaps , or consist of data from several different instruments . for rr lyrae light curves the additional attribute of highly non - sinusoidal variation adds another dimension . this memo discusses and contrasts discrete fourier , periodogram , and phase dispersion minimization ( pdm ) analysis techniques . a new version of the pdm technique is described with tests and applications .
|
during the last twenty - five years the lattice - boltzmann methods ( _ lbm _ ) have been greatly developed in many aspects .today they can be used , to treat multiple problems involving both compressible and incompressible flows on simple and complex geometrical settings .it is of crucial importance , in many applications that involve moving bodies surrounded by a fluid flow , to have a good method or algorithm to compute the flow force and torque acting on the bodies . by good we mean a method that is simple to apply , that is accurate and fast , so as not to spoil the efficiency of the flow computing method .the classical way to compute forces , and so torque , on submerged bodies is via the computation and integration of the stress tensor on the surface of the body . in lbmthe stress tensor is a local variable , its computation and extrapolation from the lattice to the surface is computationally expensive , which ruins the efficiency of the lbm .however , this method is widely used in lbm . in 1994ladd introduced a new method , the _ momentum exchange _ ( _ me _ ) , to compute the flow force on a submerged body .ladd s idea was rather heuristic and very successful , where the force is obtained by accounting the exchange of momentum between the surface of the body and the fluid , the latter being represented by `` fluid particles '' whose momentum is easily written in terms of the lbm variables that describe the fluid at the mesoscopic scale .al . some improvements to ladd proposal , obtaining a robust method to analyze suspended solid particles , and excluding the simulation of the interior fluid with a modified midway bounce - back boundary condition .then , using boundary condition method to arbitrary geometries , mei et . proposed a method to evaluate the fluid forces from the idea of me .the me algorithm is specifically designed and adapted to lbm ; it is therefore more efficient than stress integration from the computational point of view .the me algorithm has been tested and applied successfully to a variety of problems . forthe mentioned me methods , except the presented in , some accuracy problems have been detected though , when applied to moving bodies .some approaches to improve the methods in problems with moving bodies were made . , based in the proposal of gives corrections terms to the forces given from .others alternative improved me methods , based in the evaluation of force respect to a moving frame of reference , were proposed in .the main goal of this paper is to provide a formal derivation of the momentum exchange algorithm .this new derivation provides in turn , some corrections to the mei et . formula and also to some newer , improved versions of momentum exchange algorithm that have been proposed .the rest of the paper is organized as follows . in section [ sec_lbm ]we briefly discuss the lattice - boltzmann method with the main purpose of introducing notation ; the method used to treat boundary conditions is also explained in this section . in section [ sec : momentum_exchange ] , the core of the paper , we present a derivation of the momentum exchange method to determine both , the flow force and torque on static or moving bodies . in section [ sec : numerical_tests ] we present two numerical tests where we implement the methods derived in section [ sec : momentum_exchange ] . 
in section [ sec : comments ] we make some comments .in this section we present the basic equations of the lattice boltzmann methods with the main purpose of introducing the notation used along the paper . for a thorough description of the boltzmann equationwe refer to . for a more complete presentation of lbmwe refer to .the boltzmann equation ( _ be _ ) governs the time evolution of the single - particle distribution function where and are the position and velocity in phase space .the lattice boltzmann equation ( _ lbe _ ) is a discretized version of the boltzmann equation , where takes values on a uniform grid ( the lattice ) , and is not only discretized , but also restricted small number of values . by farthe models used most frequently are the ones with collision integral simplified according to the bhatnagar , gross , and krook ( _ bgk _ ) approximation with relaxation time . in an isothermal situation and in the absence of external forces , like gravity , the lbe of this models read here is the -th component of the discretized distribution function at the lattice site time and corresponding to the discrete velocity . is the -th quadrature weight ( explained below ) , and the number of discrete velocities in the model . in compressible - flow modelsthe lattice constant that separate two nearest neighbor nodes , and the time step are related with the speed of sound by .and holds , but the constant is no longer related to the speed of sound . ] the coordinates of a lattice node are , where the integer multi index ( or , in the two - dimensional case ) denotes a particular site in the lattice .the equilibrium distribution function is a truncated taylor expansion of the maxwell - boltzmann distribution .it is this approximation one of the reasons that makes lbm accurate only at low mach numbers .the macroscopic quantities such as the fluid mass density and velocity , are obtained , in boltzmann theory , as marginal distributions of and when integrating over . in lbmthis integrals are approximated by proper quadratures .specific values of s and s , are made so that these quadratures give exact results for the -moments of order 0 , 1 and 2 .we have and in the simulations we present in this paper , we are interested in incompressible flow problems , where we modify eq .[ eq : lbmmacroscopics_1 ] according to the quasi - incompressible approximation presented in . in this approximation replaced by , a constant fluid mass density .a single time step of the discrete evolution equation is frequently written as a two - stage process and the computation of on the whole lattice , eq ., is called the _ collision step _ , while the computation of at , eq . , on the whole lattice is called _ streaming step . 
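As a concrete illustration of the two-stage update, here is a minimal D2Q9 BGK sketch on a fully periodic lattice (no immersed body, no boundary conditions). Lattice units with unit grid spacing and time step and c_s^2 = 1/3 are assumed, together with the standard second-order equilibrium, so this is a generic textbook fragment rather than the code used for the tests in this paper.

```python
import numpy as np

# D2Q9 velocity set and weights (lattice units, c_s^2 = 1/3).
c = np.array([[0, 0], [1, 0], [0, 1], [-1, 0], [0, -1],
              [1, 1], [-1, 1], [-1, -1], [1, -1]])
w = np.array([4 / 9] + [1 / 9] * 4 + [1 / 36] * 4)

def equilibrium(rho, ux, uy):
    cu = c[:, 0, None, None] * ux + c[:, 1, None, None] * uy
    return w[:, None, None] * rho * (1 + 3 * cu + 4.5 * cu ** 2 - 1.5 * (ux ** 2 + uy ** 2))

def step(f, tau):
    """One BGK step: compute moments, relax toward equilibrium, stream each population."""
    rho = f.sum(axis=0)
    ux = (f * c[:, 0, None, None]).sum(axis=0) / rho
    uy = (f * c[:, 1, None, None]).sum(axis=0) / rho
    f = f - (f - equilibrium(rho, ux, uy)) / tau                 # collision
    for i in range(9):                                           # periodic streaming
        f[i] = np.roll(f[i], shift=(c[i, 0], c[i, 1]), axis=(0, 1))
    return f

nx, ny, tau = 64, 64, 0.8
ux0 = 0.05 * np.sin(2 * np.pi * np.arange(ny) / ny) * np.ones((nx, ny))   # decaying shear wave
f = equilibrium(np.ones((nx, ny)), ux0, np.zeros((nx, ny)))
for _ in range(200):
    f = step(f, tau)
print("mass conserved:", np.isclose(f.sum(), float(nx * ny)))
```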
_many methods have been proposed in the literature to implement boundary conditions on moving boundaries with complex geometries in lbm .the method introduced in , later improved in , has been extensively tested and is the one we use in the simulations presented in this paper .we explain this method briefly in what follows .we emphasize that our derivation of momentum exchange is completely independent of the boundary condition method selected to perform the numerical tests .we consider a body that fills a region with closed boundary immersed in a fluid flow , and concentrate in a small portion of the boundary and its surrounding fluid as shown in figure [ fig_01 ] .the lattice nodes and links are also shown in the figure .empty circles represent nodes lying inside the body region ( solid nodes ) , while filled circles and squares represent nodes lying in the fluid region at the time shown . at time a piece of boundary lie , in general , between lattice nodes .consider a node on the fluid with a neighbour node inside the body . to determine the values of ,the streaming step needs `` non - existent '' information coming from node .it is the lbm implementation of the boundary conditions what provides this information with the desired accuracy .the implementation of boundary conditions in lbm can be thought , at mesoscopic scale , as the introduction of a fluid flow inside it is this artifficial flow what provides the needed information to evolve the outer flow so that it satisfies the right macroscopic boundary conditions at even when the boundary is a physical boundary for the fluid , the mesoscopic lbm description of the fluid allow the fluid `` particles '' to stream across the surface , both from inside out and viceversa .we present here some particular proposals that will be used in section [ sec : numerical_tests ] . from now on we refer as `` boundary nodes '' those lattice nodes on the fluid side , like that are involved in the imposition of boundary conditions .the method presented in proposes to determine so that the linearly interpolated velocity at the boundary point is the correct boundary velocity at that point .this is where denotes the index for the opposite direction to ( i.e. , ) , and is a fictitious equilibrium distribution function at the fluid node are the weight factors of the lbm method . and are the boundary and fluid velocities respectively , with the intersection point between the boundary and the link joining with different choices of , a velocity between and , give alternative values of the parameter , the weighting factor that controls the interpolation ( or extrapolation ) . to improve numerical stability propose and where is the fractional distance when the body moves with respect to the lattice, there may be nodes in the body region at time that become fluid nodes at time .it is then necessary to assign initial values to the variables at the new fluid nodes to evolve them . 
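The boundary rule just described prescribes the missing population through an interpolation controlled by the fractional link distance. As an illustration of this family of schemes — not the specific scheme of the references used in this paper — the sketch below implements the widely used Bouzidi-type linear interpolated bounce-back for a single boundary link and a resting wall; for a moving wall an extra term proportional to w_i rho (c_i . u_w)/c_s^2 must be added, with a prefactor that depends on the particular scheme. The initialisation of freshly uncovered nodes raised just above is taken up in the next paragraph.

```python
def bouzidi_linear(q, f_post_i_xf, f_post_i_xff, f_post_ibar_xf):
    """Linear interpolated bounce-back for one boundary link and a resting wall.

    q               : fractional distance from the fluid node to the wall along the link (0 < q <= 1)
    f_post_i_xf     : post-collision f_i at the boundary fluid node x_f (i points toward the wall)
    f_post_i_xff    : post-collision f_i at the next fluid node x_f - c_i
    f_post_ibar_xf  : post-collision f_ibar at x_f (ibar is the direction opposite to i)
    Returns the unknown incoming population f_ibar(x_f, t + dt).
    """
    if q < 0.5:
        return 2.0 * q * f_post_i_xf + (1.0 - 2.0 * q) * f_post_i_xff
    return f_post_i_xf / (2.0 * q) + (1.0 - 1.0 / (2.0 * q)) * f_post_ibar_xf

# q = 1/2 reduces to the plain midway bounce-back, f_ibar = f_i.
print(bouzidi_linear(0.5, 1.0, 0.7, 0.4))   # -> 1.0
```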
a practical way to dothis is to evolve the nodes in the body region ( solid nodes ) so that they have values assigned when they become fluid nodes .there are more precise initializations for the variables at these nodes that change domain , like the one proposed in .it is of great interest to have a robust and accurate method to compute flow forces in fluid mechanics .several algorithms have been proposed to carry out this in the context of lbm .many of these procedures fall in one of the categories : stress integration ( _ si _ ) or momentum exchange ( _ me _ ) .stress integration is based on the classical hydrodynamic approach ( see e.g. , ) . in the context of lbm ,the computational performance of me is higher than that of si . in sione needs to compute the stress tensor in all lattice nodes which are near neighbors of the body surface .one then needs to extrapolate the stress tensor to the surface , and finally obtain the total flow forces on the body as an integral over the whole body surface . in methe procedure is simpler .the total force on the body is the sum of all contributions due to momentum change , in the directions pointing towards the body surface , over all boundary nodes . in this sectionwe write forces in general when we mean either force or torque .the idea of forces evaluation via momentum exchange was introduced by ladd as a heuristic algorithm by thinking the flow as composed by `` fluid particles '' and using particle dynamics to describe their interaction with the boundaries . in this method, a particle suspension model is proposed where the same boundary condition procedure is applied for both interior and exterior fluid , using in all cases a midway bounce - back boundary condition .the forces evaluations are carried out considering the interior and exterior fluid . based in the works of ladd , aidun et .al . some improvements to ladd proposal , obtaining a robust method to analyze suspended solid particles with any solid - to - fluid density ratio .they also proposed a modified midway bounce - back as boundary condition , and exclude the simulation of the interior fluid .the forces are evaluated considering the exterior fluid plus an impulsive contribution due to the nodes that are covered or uncovered when the body of interest move inside the fluid . then , from the idea of momentum exchange , mei et . proposed a method to evaluate the fluid forces acting on a submerged body using a boundary condition method applied to arbitrary geometries .they exclude the simulation of the interior fluid as done in .the direct application of this method to problems with moving bodies fails to obtain accurate forces evaluation as was shown in .some proposals to improve the method presented in for problems with moving bodies were made . presented one of this proposals .their correction is based in the introduction of terms representing impulsive forces .al . an improved an accurate method in moving geometry problems .the impulsive force terms introduced in and , come from the nodes that are covered or uncovered when the body moves with respect to the lattice .this correction provoked some controversies , the main discussion being about some `` noise '' that appear in the evaluation of forces .based on the work of mei et . 
, other approaches to evaluate forces in moving geometries , without the introduction of impulsive terms , were made .no rigorous proof was presented for these methods .both and present a similar methods that are based in computing the momentum exchange in a reference frame comoving with the wall .all the me based methods cited here were specifically designed for lbm and have been implemented and tested in many fluid - mechanical problems . to the knowledge of the authors there is no formal derivation of them in the literature .the work of caiazzo and junk present an analysis of me that uses an asymptotic expansion . in this workwe give a demonstration of me , from a fluid mechanics perspective , in which some terms previously introduced as ad - hoc corrections appear naturally .in particular , we find that the corrections proposed in and are adequate when evaluating the force in a reference frame fixed to the lattice . in the spirit of our deduction of me , we also deduce the alternative description presented in , which is based on a reference frame comoving with the body .we want to simulate a fluid flow around a submerged body , within a region of space that we denote by we consider to be a fixed region of space as seen on an inertial reference frame .we have covered with a uniform constant lattice to solve the fluid motion by applying the lattice - boltzmann method as described in section [ sec_lbm ] .the submerged body occupies a sub - region that we consider , along the whole simulation , strictly contained in .as the time dependence indicates , does nt need to be fixed . can move and could even change shape . in this sectionwe derive the force and torque that the flow applies on the body .the movement of the body is assumed to be prescribed along this derivation , i.e. , is a given function of . during an actual computation the body movementis determined by integrating the equations of motion of the body simultaneously with the flow equations .the equations of motion of the body take into account the fluid force on the body , the bulk forces like weight , etc . for future referencewe briefly remind here the reynolds transport theorem .we consider first the case of a fluid system .let denote a region that encloses a fluid system , that is a fixed material portion of the flow . in this casethe velocity of the surface at any point is precisely the fluid velocity at that point .let denote a ( volume ) density describing some property of the fluid ( like mass density , momentum density , angular momentum density , etc . 
) .the corresponding extensive property for the system is then the transport theorem states that here denotes the fluid velocity , and is the outward directed normal to the boundary now , let be a control volume ( a region of fluid defined for convenience that does not necesarily move with the flow ) with arbitrary movement , and let denote the velocity of a point at the surface in this case we have now , at a particular time of interest we choose a control volume which is concident with a system volume , but not in general at future times .that is , but if then we can eliminate the first term on the right hand side in by using which gives , notice that measures the fluid velocity at a boundary point with respect to that boundary point .we are interested in two particular cases .one of them is when is the total momentum contained in so that .the second case is when is the total angular momentum , with respect to a reference point , so that with the evaluation of equation or for the momentum and angular momentum cases give us the total force and torque applied over the fluid system contained in .the first term on the right hand side in represents the total variation of contained in the control volume , while the second term in the right hand side is a surface integral that amounts the flowing out of the volume . as explained in section [ subsec : boundaryconditions ] , the boundary conditions can be thought as an artificial flow inside this artificial flow is in turn decomposed into artificial flows , one for each fundamental velocity in the method .to explain the effect of these flows we refer back to the figure [ fig_01 ] .consider the boundary node and the direction pointing from to . at every time step ,the rol of the boundary condition is to replace the value of that would otherwise be provided by a collision step , by a new value .altogether , these replacements carried out by the boundary condition are a way of introducing a certain amount of momentum in the direction , at every time step . we derive me by computing the amount of momentum that the boundary condition introduces per unit time . in this waywe compute the force that each of these artifficial flows apply to the external flow .the addition over all elementary directions accounts for the total force the submerged body applies over the surrounding flow . by action - reaction principle , the force that the surrounding flow applies overthe the submerged body is exactly the opposite .we consider the system of particles associated to a lattice velocity that at time is exactly inside . 
at system moves by an amount we call the set of nodes associated to this system of particles at time and denotes its momentum at time .finally we denote the set of lattice nodes inside in the following subsections we derive the force and torque that the flow applies to the body through its surface .the cases of static and moving bodies are treated .the amount of momentum the boundary conditions add per unit time to the -th system of particles is where neglecting terms we have we assume first the case of a static body , so that and the set are constant in time .the first term in can be rewritten in terms of the sets of _ gained _ and of _ lost _ nodes as a consequence of the displacement of the system of particles from to .this displacement is exemplified in figure [ fig_02 ] for the model and the directions and to simplify notation we define .the first term in becomes and for .the figure shows shaded areas proportional to the size of the sets _ gained _ and _ lost _ nodes when is displaced one lattice site in the ( left ) and ( right ) directions in the model . ] inserting this into and adding over the systems we get the lbm approximation to the force introduced by the boundary conditions . we want to compare this expression with the reynolds transport theorem applied to the artificial flow inside .the force introduced by the boundary conditions is the constraint force acting on the body to keep it at a fixed position .the first term in the right hand side of is an lbm approximation of the volume term in .the second term in is composed of sums on sets of nodes which are near neighbours of the boundary .this second term is precisely the lbm approximation to the surface integral term in . as the interaction between the body andthe surrounding fluid occurs only through the body s surface , this second term in is the term we are interested in . by action - reaction principlethe flow force on the body is , notice that if and only if there is a node such that therefore now , a sum over all sets can be written as a sum over all sets , we obtain we notice that the first identity is the streaming step from the outer nodes in a direction that points into ( across the boundary ) .this values of are provided by the collision step .the second identity is a streaming step from inner nodes in a direction pointing outwards ( across the boundary ) ; these value of are provided by the boundary condition . the flow force on the sumberged body can then be written as to compare the equation with the equivalent ones in the literature , care has to be taken as regards different definitions of the sets equation is precisely the expression that appears extensively in the literature as the momentum exchange method to evaluate forces in static bodies .for the case of a moving body we show two alternative derivations of the flow force . in this waywe recover the two main proposals that appeare in the literature . when the submerged body is moving , the region and the set of lattice nodes are no longer constant . 
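before developing the moving - body case , the static - body result just obtained can be written as a short routine . the sketch below is ours ( lattice units ; the list of boundary links is an assumed data structure ) : for each link pointing into the body it adds the population streamed toward the body and the population returned by the boundary condition , both counted along the incoming direction , exactly as in the momentum - exchange expression above .

```python
import numpy as np

def me_force_static(boundary_links, c, dt=1.0):
    """Momentum-exchange force on a static body (hedged sketch, lattice units).

    boundary_links : iterable of (f_out, f_in, i) where, for a boundary fluid node
        and a direction i pointing into the body,
        f_out = post-collision population streamed toward the body along c_i,
        f_in  = population returned by the boundary condition along the reversed
                direction during the same time step.
    c : (Q, d) array of discrete velocities.
    Because c_ibar = -c_i, the momentum handed to the body over one link and one
    time step is (f_out + f_in) * c_i; the force is that amount per dt.
    """
    F = np.zeros(c.shape[1])
    for f_out, f_in, i in boundary_links:
        F += (f_out + f_in) * c[i]
    return F / dt

# toy call with two D2Q9 links pointing into the body
c9 = np.array([[0, 0], [1, 0], [0, 1], [-1, 0], [0, -1],
               [1, 1], [-1, 1], [-1, -1], [1, -1]])
print(me_force_static([(0.12, 0.10, 1), (0.03, 0.02, 5)], c9))
```

we now continue with the moving - body case .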
for some time steps, one can even expect the set of nodes to be the same as the set of nodes in any case it is useful define the sets of nodes and as figure [ fig_03 ] shows a scheme of a typical situation when the body moves .the expression is still valid in this case .however , at time we want to make reference to the body s new position , so we rewrite the term that sums over as and .the figure shows shaded areas proportional to the size of the lattice nodes and ( left ) , and and ( right ) as defined in the text . ] we insert into and use the result into .then we add over to get an approximation of the flow force acting on the body where again , as we are looking for the surface contributions to the force , we dropped the volume contribution .equation shows a main term , which is the same as in the case of a static body , representing the particle s exchange of momentum across the boundary , but now this term is corrected by the last two terms which accounts for the momentum associated to the nodes that enter or leave as a consequence of the body movement . in this way we obtain terms similar to that proposed by aidun et . to evaluate the force on a moving body .we show that these terms are correct and necessary to obtain the complete superficial contribution to the force when the body moves .equation is then similar to that introduced in and by wen et . , extensively used in the literature to evaluate the fluid force on moving bodies .there is a minor difference between the expression and those introduced in and . in their cases ,the force at time considers the lattice nodes that enter and leave between and ( i.e. , backward in time ) . in our case , requires to know the sets and , that is the sets of nodes that enter and leave between and ( i.e. , forward in time ) .the determination of the sets and is direct if the movement of the body is given ( predetermined ) at all times , in this case is an explicit expression .if , however , the motion of the body is to be computed simultaneously with the flow , the equation becomes implicit . in this last case it is convenient to use an approximation to determine and so that the equation becomes explicit . in the numerical tests in section [ sec : numerical_tests ] , we implement two different approximations to find the sets and .both approximations work well , giving no appreciable difference in the outcomes of the benchmark tests .the first approximation is the procedure proposed in .the second approximation is more complicated .it computes the sets and by approximating the region as if it was moving with the speed computed at the previous time step . with this informationthe flow force can be computed at time and then the correct displacement of from to recomputed .though computationally more expenssive , as two displacements of are computed at each time step , this second approximation is more precise than the first one and may be worth using it in some situations .notice that the variables associated to the lattice nodes belonging to do not have values assigned at time since these nodes enter the fluid region between and .these values are needed in order to compute the time step from to .as mentioned previously , various rules to `` initialize '' these variables are proposed in the literature . 
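collecting the terms above , a hedged sketch of the moving - body force evaluation is given below . the data layout and sign conventions are ours ; the two extra loops are the impulsive contributions of the lattice nodes covered or uncovered between t and t + dt , evaluated forward in time as in the text .

```python
import numpy as np

def me_force_moving(boundary_links, covered_f, uncovered_f, c, dt=1.0):
    """Flow force on a moving body (hedged sketch, lattice units, our conventions).

    boundary_links : (f_out, f_in, i) triples as in the static-body routine.
    covered_f      : population vectors f at nodes entering the body region.
    uncovered_f    : population vectors f at nodes leaving the body region.
    """
    F = np.zeros(c.shape[1])
    for f_out, f_in, i in boundary_links:      # surface term, as in the static case
        F += (f_out + f_in) * c[i]
    for f_node in covered_f:                   # momentum absorbed with covered nodes
        F += np.asarray(f_node) @ c
    for f_node in uncovered_f:                 # momentum released with uncovered nodes
        F -= np.asarray(f_node) @ c
    return F / dt
```

with the force in hand , we return to how the variables at freshly uncovered nodes are initialized .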
in our simulationswe implement the proposals given in and .also we implement a method that sets the mentioned variables by using the equilibrium distribution function , where the macroscopic variables are set as an average of the values at the nearest neighbor fluid nodes .the evaluation of the force by we present in section [ sec : numerical_tests ] show a short time scale noise .the use of the first two methods mentioned before to initialize the nodes that enter the fluid region present lower noise level .the main sources of `` noise '' in the force evaluation using are the impulsive nature of the additional terms related to and .this noise have been observed before . in authors show some alternative methods to avoid this undesirable effect . as the time derivative of the momentum is independent of the inertial reference frame , we can recover these methods by repeating the derivation we did before by choosing , for each lattice node and direction a convenient reference frame . for those nodes which are close to the boundary and for each direction pointing to the boundarywe express the momentum in the reference frame in which the velocity of the intersection point of the boundary with the lattice link joining with is zero .the interior nodes that are far from the boundary contribute only to a volume term in the force , this volume term is dropped and therefore the reference frame is unimportant . the result obtained in this way is an lbm discretization of the surface term in the right hand side of the last two terms in the right hand side of are negligible since both and represent close approximations to at the boundary points .either expressions and are correct expressions ; they constitute different approximations of the flow force .the later has some advantages though .first , it is computationally more efficient , since it is not necessary to determine the sets and . as a resultthe method is always explicit and it presents a notorius noise decrease in force evaluation as shown in . the derivation of the torque acting on the submerged body is analogous to that of the force .the angular momentum per unit time introduced by the -th artificial flow is where with is the angular momentum of the particle system at time with respect to a fixed point neglecting terms in equation we have as we have done in section [ sssec : force ] , we treat the case of a static body first and then extend the proposal to the case of a moving body .using the lattice nodes sets and ( shown in figure [ fig_02 ] ) to rewrite the first term in , and denoting for simplicity , we have inserting this into and adding over the systems we get an approximation to the constraint torque acting on as in the force case , we can compare this expression with the reynolds transport theorem , then keeping just the approximation of the surface term in , we get an expression for the torque that the flow applies on the body , recalling the relation between and ( ) , and using this equation is the expression that appears in the literature extensively as the momentum exchange method to evaluate torque on static bodies . 
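the static - body torque admits the same kind of sketch ; the two - dimensional version below is ours , with the choice of moment arm ( boundary node or link - surface intersection point ) left to the caller as part of the approximation .

```python
import numpy as np

def me_torque_static_2d(boundary_links, arm_points, c, x_ref, dt=1.0):
    """2D momentum-exchange torque about x_ref (hedged sketch, lattice units).

    boundary_links : (f_out, f_in, i) triples as in the force routine.
    arm_points     : one point per link used as the moment arm, e.g. the
                     intersection of the link with the body surface.
    """
    T = 0.0
    for (f_out, f_in, i), x_b in zip(boundary_links, arm_points):
        dF = (f_out + f_in) * c[i]                     # momentum handed to the body
        r = np.asarray(x_b, dtype=float) - np.asarray(x_ref, dtype=float)
        T += r[0] * dF[1] - r[1] * dF[0]               # z-component of r x dF
    return T / dt
```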
for a moving bodywe follow a procedure and reasoning analogous to that of section [ sssec : forces in moving body ] .we rewrite the first term on the right hand side of to get the correct surface contribution when the surface moves , we replace in , then from equation and adding over the systems we obtain an approximation of the constraint torque acting on the body at time thus the flow torque on a moving body turns out to be where we have used the relation of sets and , and the equalities .the equation has two distinct contribution to the flow torque on .the first one , is the contribution due to the exchange of momentum across the boundary as a consequence of the displacement of the particle system from to .the second one , is the contribution to the torque by the lattice nodes that enter and leave as a consequence of its displacement to .these are impulsive terms as we have noted in section [ sssec : forces in moving body ] .expression is similar to the one presented in the literature to evaluate the flow torque on moving bodies .this expression naturally introduces the ad - hoc correction terms first presented in and used in . as with the force ,a difference between our proposal and those in the literature is the time at which the sets of lattice nodes and are evaluated . to avoid implicit expressionswhen the body movement is not predefined , we use some approximation methods , presented in section [ sssec : forces in moving body ] , to approach and .as one could expect , some short time scale noise in the torque computation appears as a consequence of the lattice nodes that enter and leave the fluid domain as the body moves . as with the force derivation, we also obtain an alternative derivation for the torque by considering the time derivatives of the angular momentum in different reference frames for each particle .the reference frames to compute the torque on the boundary nodes are chosen as in the derivation of .the expression we get for the flow torque on the body is as with the force , the last two terms are negligible .dropping these terms , the expression becomes explicit and present lower noise level in the torque evaluation .in this section we compare the results obtained with the expressions derived in section [ sec : momentum_exchange ] to compute the force and torque acting on a submerged body . to this endwe perform two benchmark tests on well known problems that have been tested and benchmarked widely with others computational fluid dynamics methods , such as finite element method and finite difference methods .we are interested in analyzing the dynamics of single bodies sedimenting along a vertical channel filled with a newtonian fluid .the bodies are either circular or elliptic discs .the accuracy in the determination of the force and torque acting on the falling body directly affects the body s movement .if the force and torque are computed correctly , the displacement and rotation of the bodies along the domain should be in agreement with data presented in the literature . to solve the flow we use a _d2q9 _ lattice scheme and srt with . 
the fluid density and the kinematic viscosity are set to / m and /s respectively . the fluid is initially at rest and has zero velocity at the horizontal and vertical boundaries at all times . we implement these boundary conditions with the method presented in . the acceleration of gravity acting on the body is / s downwards . the motion of each body is determined by integrating newton s equation of motion , where the force is given by the fluid flow force , weight and buoyancy , and the torque is given by the flow torque . to integrate in time we use the forward euler scheme , which is first order accurate , as is the lbm itself . we have also implemented two - step ( adams - bashforth ) integration in time and noticed no appreciable difference in the results . in this benchmark test we analyze the dynamics of a single two - dimensional disc sedimenting along a vertical channel , shown schematically in figure [ fig_04 ] . we test the dynamics of the disc for two density relations , with and the densities of the body ( disc ) and the fluid respectively . the dimensions of the vertical channel are and ; the disc diameter is . the disc center is initially placed at , with the coordinate origin at from the bottom of the channel , as shown in figure [ fig_04 ] . we discretized the computational domain with lattice points . we test the performance of the method for two density ratios and . in figures [ fig_05 ] and [ fig_06 ] we show the horizontal and vertical velocities , the trajectory of the center of the disc and the rotation angle of the disc as functions of time , for and . when the disc is released from the initial position at , it starts moving and rotating along the channel . as one can see in figures [ fig_05 ] and [ fig_06 ] , the movement of the disc can be divided into two regimes : a transient and a stationary regime . we compare results we obtained using a classical me , and the corrected methods given by , and , . these results , particularly those obtained with the corrected methods , are in good agreement with tests presented in ( obtained using lbm with si ) , ( obtained using lbm with an expression similar to , ) and ( obtained using fem ) . we observe visible discrepancies between the classical and the corrected methods for the horizontal velocity and position . the major discrepancy shows in the transient regime ; no significant discrepancies can be seen in the stationary regime . similar observations have been made by wen et al. and li et al. [ figures [ fig_05 ] and [ fig_06 ] : velocities , trajectory and rotation angle of the disc for the two density ratios ; all magnitudes are expressed in the international system of units . ] in this section we present a benchmark test , similar to the previous one , where the circular disc is replaced with an elliptical disc , also sedimenting in a vertical channel filled with newtonian fluid . this test is also widely analyzed in the literature . we study a problem like the one presented by xia et al.
, where the authors use lbm and si to obtain the forces on the body .we show in figure [ fig_07 ] a schematic diagram of the problem .we define three dimensionless parameters that characterize the problem .these parameters are the aspect ratio , with and the major and minor axes of the ellipse respectively , the blockage ratio , with the width of the vertical channel , and the density ratio as defined in section [ subsec : sedimentationdisc ] . an exhaustive analysis of this sedimentation problem was carried out by xia et . . they studied the influence on the dynamics of the density ratio , the aspect ratio , and the channel blockage ratio .for simplicity we analyze this problem with a fix blockage ratio , chosen so that we do nt need to consider the wall - particle interaction .our interest is to test the method proposed in the present work , not to give a complete description of the sedimentation problem .we carry out simulations with a fixed geometrical configuration . in our testswe use major axis m , aspect ratio and blockage ratio .the properties of the fluid are the same used in section [ subsec : sedimentationdisc ] .initially , the fluid is at rest , the center of the ellipse is placed at m . the coordinate origin at m from the bottom of the vertical channel . to break the symmetry of the problem , we choose an initial angular position set , following , a height and a width .the domain is discretized in a lattice with points and density ratio is in the figure [ fig_08 ] we show the dynamical variables given as a function of time and the complete trajectory of the ellipse computed using a classical me , and the corrected methods given by , and , .our results using the corrected methods are in good agreement with the results of xia et .it is clear from figure [ fig_08 ] , that there exists an important difference , in the transient regime , and a minor difference in the final horizontal position between the corrected and uncorrected methods .in this work we have presented a new derivation of the momentum exchange method to compute the flow force and torque acting on a submerged body .the expressions we obtain , for the case of static bodies , are coincident with those presented in . from our derivationwe see that the expressions derived for the flow force and torque on static bodies are not appropriate to treat moving bodies .moreover , we derive two of the proposals apeearing in the literature to compute flow force and torque on moving bodies as particular cases . these last two alternatives to compute the force and torque are correct but different approximations to the same problem .the one consisting in and results in less noisy force and torque computations and is also more efficient from the computational point of view .our method of deriving momentum exchange does not use a particular treatment of the boundary conditions on the body surface and can be applied with several of the various methods proposed in the literature . 
in the last part of the paper we have tested the corrected momentum exchange expressions we obtained by simulating two problems which are well known in the literature , a sedimenting circular disc and a sedimenting elliptical disc . our results clearly show the difference , for the case of moving bodies , between the results of the corrected momentum exchange methods as compared to those given by equations and . these results are in good agreement with those obtained by other authors using similar and different computational fluid dynamics methods such as finite element methods . we want to thank carlos sacco and ezequiel malamud for useful discussions . j. p. giovacchini is a fellowship holder of conicet ( argentina ) . this work was supported in part by grants 05-b454 of secyt , unc and piddef 35 - 12 ( ministry of defense , argentina ) . we also thank the referees of the first manuscript for corrections and suggestions that helped us improve our work . [ figure [ fig_08 ] : dynamical variables and trajectory of the sedimenting ellipse ; all magnitudes are expressed in the international system of units . ]
|
we present a new derivation of the momentum exchange method to compute the flow force and torque on a submerged body in lattice boltzmann methods . our derivation does not depend on a particular implementation of the boundary conditions at the body surface and relies on general principles . we recover some well known expressions , in some cases with slight corrections , to treat the cases of static and moving bodies . we also present some numerical tests that support the correctness of the formulas derived .
|
crotons are pregeometric objects that emerge both as labels encoded on the boundary of a `` volume '' and as complementary aspects of geometric fluctuations within that volume . to express their multifacetedness , the name _ croton _ was chosen , after crotos , son of pan and eupheme , who , once a mortal being , was put in sky by muses as the celestial fixture sagittarius .the term volume is normally linked to the categories space or space - time . in a pre - geometric setting ,more basic categories are needed mersenne fluctuations and qphyla .both require various stages of _ pregeometric refinement _ which , expressed through the complementary aspects croton amplitude and phase , combine to eventually form or dissolve real geometric objects .advancing from mark to thus , in what follows , means a time - like refinement ( roughly the analog of an exponential increase of temporal resolution in tenacious time ) , and an increase from to mark a space - like refinement ( analoguous to increase of spatial resolution in sustained space ) . on the boundary , these increases find expression in additionally encoded labels . in a previous work , basic croton components have been identfied , though at the time the name croton was not yet used .the starting point was the equivalence between a mersennian identity , destilled from the special case that two parafermi algebras are of neighboring orders , ( order marked by parenthesized superscript ) : and the identities, , : + , , leaving the details to [ sec : crotons - as - boundary ] , the way croton base numbers are derived and how they are subdivided into bases pop out naturally when the matrix elements of and are constructed .crotons , conceived of as linear combinations , use the following croton base numbers as scalars ( underlining explained later ) : for , , ; for , , ; for , , to name only the first few ( singletons and bases ) .they are instructive enough to show how label encoding works on the boundary .we first concentrate on order , dropping the parenthesized superscript and just asking the reader to bear in mind that the crotons examined belong to .our boundary is then defined by the outer nodes of a -cube complex , being the number of croton base numbers to handle : for , and for .let the -th node out of the of the first boundary bear the label , and , correspondingly , the -th node out of the of the second boundary the label (summation convention , and denoting all non - null -tuples out of possible from ) .it s easy to see that the total of labels form a croton field in either case : and .the fact aside that nodes can be grouped into pairs bearing values of opposite sign , field values may occur multiply , for instance . with each fielddefined on its own boundary , it s far from obvious they should have anything in common . yet , as we assume either one deals with a distinct crotonic aspect , we have to find a way of considering them side by side .we may , for instance , ask how many distinct labels there can be expressed _ potentially _ , neglecting mere sign reversals . counting from 1 on and taking as the highest conceivable value the sum of croton base numbers in absolute terms , we arrive at the number of potential labels from . out of these , 170 are realized as node labels . 
those not realizable are 20 in number : . the converse holds true for the case . of 212 potentially attainable labels , 40 are realized by ( sign - reversals included , that is the stock of nodes ) , leaving 172 labels in potential status : . a comparable situation arises when we bunch together croton base numbers that are rooted in neighboring mersenne orders , a process we have previously termed _ interordinal _ to express this kind of hybridization . we now have for , and for . neglecting sign reversals and counting again from 1 on , we get 191 potential labels from the enlarged and 216 from the enlarged . all of the 191s bunch are realized as on the expanded boundary s nodes ; but a singularity also springs up , . by contrast , 202 out of the 216s bunch are realized as , on another expanded boundary s nodes and with no singularity popping up , leaving 14 in potential status : . the conclusion is that the fields are dual to each other with respect to realizability of labels on the boundary , the reason being that they encode complementary aspects of crotonicity croton amplitude and phase in the volume . the duality is controlled by two quantities , catalan number and the number : _ intraordinal case _ : _ interordinal case _ : ( singularity assignment included . ) the key role in that duality is taken by the quantity around which the croton base numbers belonging to a basis of order are built ( hence the underlining ) : _ intraordinal case _ : _ interordinal case _ : `` volume '' as the term is used here , a multitude of mersenne fluctuations are constitutive . they assume a descriptive shape when amplitude is plotted versus `` time '' . nodes on legs of a ` ' each bear a croton amplitude that emerges with a specific time - like and space - like refinement on the left leg , on the right and the peak amplitude is reached at . the left - leg structure is given by , the right - leg structure by , under the constraint of a maximal croton - amplitude shift between and of 1 : . a typical mersenne fluctuation is shown in fig . [ fig : a - prototype - geometric ] : [ figure : amplitude - versus - `` time '' profile of a prototype mersenne fluctuation ] we can stay in the ( `` time '' , amplitude ) coordinate system and observe how fluctuations which share amplitudes that differ maximally by at each node but peak at different heights organize into what we have previously termed a _ qphylum _ . one such qphylum would for instance house ( peaks in boldface ) the mersenne fluctuations ( ,35,72,145,291,584,,585,292,145,72,35,17, ) , ( ,36,72,145,291,584,1169,,1170,585,292,145,72,36,18, ) , ( ,35,72,146,292,584,1169,2340,,2340,1169,584,292,145,72,36,17, ) etc . however , the mersenne fluctuation ( ,1168,2337,4676,,4675,2336,1168,583, ) definitely belongs to a different qphylum , one worth mentioning also because it is one of the rare instances where overt inversion of the term occurs : increase from 4676 to 9351 implies ( see eq . ( [ eq : left - leg ] ) ) . seen top - down , a qphylum is a left - complete binary tree , that is : a rooted tree whose root node and left child nodes have left and right child nodes , while right child nodes have only right child nodes , as shown in fig . [ fig : a - prototype - qphylum ] .
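before going further into the qphyletic organization , the label bookkeeping used earlier in this section can be made mechanical . the actual croton base numbers are not legible in this copy , so the basis in the snippet below is a made - up placeholder ; only the counting procedure — signed combinations carried by the boundary nodes versus the range of potentially attainable labels — is being illustrated .

```python
from itertools import product

basis = [3, 7, 12, 31, 63]      # hypothetical placeholder, NOT the croton base numbers

# labels carried by the boundary nodes: all non-null signed combinations
labels = {sum(e * b for e, b in zip(eps, basis))
          for eps in product((-1, 0, 1), repeat=len(basis)) if any(eps)}

# potential labels: 1 up to the sum of the base numbers in absolute terms
potential = set(range(1, sum(abs(b) for b in basis) + 1))
realized = {abs(v) for v in labels if v != 0}
print(len(potential), len(realized & potential), sorted(potential - realized)[:10])
```

with that bookkeeping in place , we return to the qphyla .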
typically , qphyleticly related amplitudes are rooted in different time - like and space - like refinements ; nodes of a qphylumthus are associated with a set of frozen - in pregeometric `` time '' and `` space '' signatures .`` volume '' then becomes the assembly of all distinct qphyla .but let us go back one step and ask what it means when an amplitude in a given fluctuation reaches a certain level .if that level coincides with or , where denotes the kissing number of -dimensional euclidean space , it could mean that a chunk of space containing an -sphere packing _ with _ or _ without _ centerpiece was created in that fluctuation or dissolved if the amplitude did not peak : crotons which wax and wane .as an example , fig .[ fig : a - prototype - geometric ] shows a fluctuation that peaks at 918 , a quantity considered to be the proper kissing number of 13-dimensional euclidean space .a chunk of 13-space containing 918 12-spheres certainly is hard to visualize , so a 2 version may suffice to give a first impression ( see fig .[ fig:1-sphere - packing - with ] ) : 1-sphere packing with our without centerpiece ] we may ask if and how the peak amplitude 918 is related to a label on the boundary .certainly it is a realizable label , and one realizable _ _ intra__ordinally : all kissing numbers lying on the basis of in the range of potentially attainable labels can be seen to be realizable intraordinally , and this holds true too for the current basis , the 18-tuple ; its origin and the origin of the tuple are elucidated in [ sec : crotons - as - boundary ] ; [ sec : crotons - in - the ] tabularizes various kissing - number related croton amplitudes , among them also amplitude 918 ] , where 918 belongs : .the crucial question is , do we require _ all _ croton amplitudes from a given mersenne fluctuation with one of them `` geometrizing '' to have counterparts in intraordinally realizable labels on the boundary , in a narrow interpretation of the holographic principle ?amplitudes `` on the way / from there '' may at least in principle be amenable to an answer . and, what does this mean for mersenne fluctuations `` making detours '' which presumably are by far in the majority ? if one of the croton amplitudes , call it _ pivotal _ , comes only close and does not `` geometrize '' , it is because some residual mersenne fluctuations co - evolve in _ different _ qphyla . yet , with sufficiently tight space - like and time - like refinement constraints , fluctuations that are inter - qphyleticly linked to the pivotal fluctuation can be identified and examined .see the example below where one of the residual partial amplitudes is as , and the pivotal amplitude , together with a second residual partial amplitude , is closing in on as : [ cols="^,^,^,^,^,^ " , ]in both sects . [ sub : croton - field - duality ] and [ sec : application - to - subatomic ] , the part of duality control was taken by catalan numbers in conjunction with the numbers . understood as an expansion factor , the is the result of rescaling by a constant factor ; cf .( [ eq : rows ] ) .the ` mersennians ' of 5 , , in turn are the numbers . 
to distinguish the latter from regular mersenne numbers , they will be denoted , motivated by the fact differs from only by the small amount .resuming the discussion of labels on the boundary , we recall that out of , in the interordinal case , 192 distinct crotons ensuing from the ( enlarged ) basis including a null - singularity and neglecting sign reversals , _ all _ are realized on the enlarged boundary , an effect one would expect to carry over to the next level of hybridization , and , thus providing a useful testbed for the specifiability of characteristic physical quantities . in was shown that the catalan number is central to croton base numbers keeping time with the successors a fundamental connection which allows to define a span parameter quark constituents are often deemed too artifical to be true elements of nature , but , as we have seen in sect .[ sec : application - to - subatomic ] , if we concentrate on their role as carriers of fractional electric charge , provides a natural frame for them .it therefore comes as no surprise that ( [ eq : span ] ) supplies up - type _ and _ down - type interordinal bounds for the electromagnetic coupling constant , the dimensionless quantity .when normalized with the factor , the down - type parameter provides a tight upper bound to the current measured value . andthe up - type parameter in combination with the plus - sign normalizing factor yields an even better lower bound : one that encompasses the residual force known as weak force , with coupling strength the location where interpolates the interval ] . as we shall see , a croton qualifying as a pivot for has the special property that the gap between it and the target can be bridged by integer addition within a collection of local co - amplitudes a procedure that is mirrored on a refinement - dependent , _ local _ boundary .for the envisioned relationship we establish the following rules : + ( 1 ) when , qualifies as a pivot with target if does not exceed a prespecified range , say , as a heuristic , and the local co - amplitudes to include in the collection are located to the pivot s right ; conversely , for is required , and the inclusions of take place to the left of the pivot ; + ( 2 ) the prime factorization of determines how many co - amplitudes are included ; if it contains at least one factor ( ) , then the number of co - amplitudes , in a success - dependent way , + ( 2a ) is directly equal to this factor or + ( 2b ) the factor is interpreted as , and is assigned for the number of inclusions .+ ( 1 ) and ( 2 ) are only necessary conditions .a obeying them has an entourage of co - amplitudes that still have duplicates . that are distinct enter the tuple from which the nodes of a -cube complex as a local boundary get encoded . 
either under ( 2a ) or ( 2b )we get a label that is equal to ; if ( 2b ) applies , then , additionally , ` catalan - tie and properties ' have to be deployed to ensure a canonical form for : + ( 2b ) if co - amplitudes are multiples of catalan - tie numbers 6,7 or 11,12 , they induce a sign divide : multiples of 6 ( 7 ) and co - amplitudes of the form have their signs preserved while all others incur sign inversion ; if they instead are upper - tie type multiples , namely of 11 ( 12 ) , then only they and co - amplitudes of the form escape sign inversion .let us look into the creation of the eighteenth dimension , one of the roots of the branching process .for , we reported a match with via mersenne fluctuations of type 1 , .as far as can be told , no matches with any do exist for this kissing number , only close pivots .one that obeys constraint ( 1 ) is a less - than - target situation .the factor 7 in directly gives the number of inclusions for on the pivot s left : .which , in the sequel , yields a quadruple of distinct co - amplitudes , leading to a label on encoded by to mirror the equation .for the nineteenth dimension , the same online cfr calculator offers a candidate pivot with target .it satisfies constraint ( 1 ) , and , with factoring as , we encounter a situation , hence constraint ( 2b ) applies and we are led to a string of co - amplitudes to the pivot s ( boldface ) right , + are distinct , so we recognize that three of them are lower - tie type multiples , namely of 6 , and one is equal to then , under constraint ( 2b ) , is a node label on the local boundary that mirrors the equation . similarly works the creation of the twentieth dimension with a candidate pivot for , . again ,constraint ( 1 ) is obeyed .the factorization of being , it turns out that nineteen co - amplitudes to the pivot s right do nt fill the gap ; instead , factor 19 is interpreted as so that co - amplitudes to the pivot s ( boldface ) right are included , _i.e. _ of them distinct , and a tuple develops .again , we recognize co - amplitudes for whom there is sign preservation under constraint ( 2b ) : three that are lower - tie type multiples , namely of 7 , and one of the form , so that is a node label on a new local boundary , mirroring . obviously , the lower - tie situations bear the imprint of duality controls the part being expressed by sign preserving in , and the catalan number part by the sign preserving in multiples of tie numbers 6 and 7 , respectively .more instances of dimensional branching have to be examined before one can definitely say a new invariant is looming here namely that the number of distinct co - amplitudes that have their signs preserved in gap filling be equal to 4 as suggested by eqs .( [ eq : l_18],[eq : l_19],[eq : l_20 ] ) .
|
the article introduces crotons , multifaceted pre - geometric objects that occur both as labels encoded on the boundary of a `` volume '' and as complementary aspects of geometric fluctuations within that volume . if you think of crotons as linear combinations , then the scalars used are croton base numbers . croton base numbers can be combined to form the amplitudes and phases of mersenne fluctuations which , in turn , form qphyla . volume normally requires space or space - time as a prerequisite ; in a pregeometric setting , however , `` volume '' is represented by a qphyletic assembly . various stages of pre - geometric refinement , expressed through the aspects crotonic amplitude or phase , combine to eventually form and/or dissolve sphere - packed chunks of euclidean space . a time - like crotonic refinement is a rough analog of temporal resolution in tenacious time ; a space - like crotonic refinement corresponds to spatial resolution in sustained space . the analogy suggests the existence of a conceptual link between the ever - expanding scope of mersenne fluctuations and the creation and lifetime patterns of massive elementary particles , an idea that is exploited to substantiate our previously proposed preon model of subnuclear structure . pre - geometry , crotons , mersenne fluctuations , qphyla , continued fractions , kissing numbers , magnus equation , preons , up - type and down - type interordinal bounds 06b15 , 11a55 , 11h99 12.50.ch , 12.60.rc
|
in this talk we discuss the use of information geometry , in particular the _ thermodynamic geometry _ , also known as ruppeiner geometry , of various black hole ( bh ) families .this has been studied over the past few years in , , , , , and recently in among several dozens of papers devoted to the use of this method to study bhs .our results so far have been physically suggestive , particularly in the myers - perry ( mp ) kerr bh case where the curvature singularities signal the initial onset of thermodynamic instability of such bh .the geometrical patterns are given by the curvature of the ruppeiner metric where is thermodynamic temperature of the system of interest .] defined as the hessian of the entropy on the state space of the thermodynamic system where denotes mass ( internal energy ) and are other parameters such as charge and spin .the minus sign in the definition is due to concavity of the entropy function .interpretations of the geometries associated with the metric are discussed in plus references therein .it has been argued that the curvature scalar of the ruppeiner metric measures the complexity of the underlying interactions of the system , the metric is flat for the ideal gas whereas it has curvature singularities for the van der waals gas .we take note that most interesting ruppeiner metrics that we encounter have curvature singularities which are physically suggestive , but there are some known flat ruppeiner metrics that can be understood from a mathematical point of view we proved in man a flatness theorem which states that riemann curvature tensor constructed out of the negative of the hessian of the entropy of the form will vanish , where is an arbitrary analytic function and .the latter condition is necessary in order for the metric to be nondegenerate .this theorem has proven useful in our work on the dilaton bhs as it allows us to see the local geometry already by glancing at the entropy function .we also note that the signature of the ruppeiner metric corresponds to the sign of the system s specific heat , the signature is lorentzian for system with negative specific heat and euclidean when all specific heats are positive .we have observed that the ruppeiner curvature scalar diverges in the extremal limits of the kerr , mp and reissner - nordstrm ( rn ) ads bh whilst it is vanishing for rn , btz and dilaton bhs . despite the flat thermodynamic geometry one can extract useful information on the bh solutions by plotting the state space of such the flat geometry .normally it is done by first transforming the metric into a manifestly flat form and then bringing it into a recognizably minskowskian form .[ fig1 ] incidentally there is a so - called _ poincar _ s linear series method for analyzing stability in non - extensive systems .the simplicity of this method is owing to the fact that it utilizes only a few thermodynamic functions such as the fundamental relations in order to study / analyze ( in)stabilities .this method can thus be applied to bhs although they are non - extensive systems .non - extensitivity in bhs is due to self - gravitation and furthermore the bhs can not be subdivided and the bh entropy scales with area instead of the volume . for the analysis we use the recipes given in without further elaboration . in the poincar method oneplots a conjugacy diagram an inverse temperature versus mass and we can infer some information about the existence of instability if the turning point is present ( see fig . 
[ fig1 ] ) .in higher dimensions bhs are more interesting objects due to richer rotation dynamics , and the appearance of extended black objects such as black strings . for more detailed and complete information. the higher - dimensional generalization of the 4d bhs is the myers - perry bhs . in 5dthere is a black ring solution which is a bh with a horizon topology in asymptotically flat spacetime .black strings and black branes suffer from instabilities known as the gregory - laflamme instability .the mp black holes with only one angular momentum turned on when spinning ultrafast also suffer from instabilities .the ruppeiner curvature scalar for the mp bh with one spin is singular at which coincides with that found by emparan and myers in but in different coordinates and the bh temperature reaches its minimum at this point .this is where the mp bh starts behaving like black membrane ( which is an unstable object ) but the instability is believed to set in somewhat later when the mass of the system decreases plus that there are unstable dynamical modes due to metric perturbations which the poincar method does not see such as those recently investigated ( in various dimensions ) by dias in .attempts to analyze such the instabilities by means of thermodynamic geometry are being made in .the weinhold metric for the mp bh is flat and can be brought into a manifestly flat form by coordinate transformations .the state space of the mp bhs appears as a wedge embedded in a minkowski space and we call this diagram a _ thermodynamic cone_. as mentioned above one can obtain some information from the thermodynamic cone the bh entropy is vanishing on the light cone whereas the edge of the wedge is where which is the bh s extremal limit .the opening angle is unique for each bh and it increases as the number of dimensions increases . as we increase the number of dimensions the opening angle tends to the right angle ( see fig .[ fig12 ] ) . in authors show that the method of thermodynamic geometry is consistent with the poincar stability analysis , and they are able to prove that one of the black ring branches is always locally unstable , showing that there is a change of stability at the point where the two black ring branches meet .their results using two different methods ( ruppeiner and poincar ) are consistent with each other .information geometry is a new approach for studying bh thermodynamics and possibly bh instabilities .this approach opens up new perspectives and sheds light on critical phenomena in bh systems .the signature of the thermodynamic metric and the curvature singularities correspond to the sign of the specific heat and the extremal limit of the bhs respectively .the curvature singularity is physically suggestive in that it signals the initial onset of instability in higher dimensional bhs .this method has so far been consistent with the poincar analysis .geometrical patterns of bh thermodynamics uncovered may play an important role in the context of quantum gravity .np would like to acknowledge the organizers of the ere2009 in bilbao for their kind support , and he would also like to thank roberto emparan for useful discussions .np and j thank ingemar bengtsson on various comments .finally np warmly thanks the kof group and department of physics , stockholm university for the kind hospitality .
|
we discuss the use of information geometry in black hole physics and present the outcomes . the type of information geometry we utilize in this approach is the thermodynamic ( ruppeiner ) geometry defined on the state space of a given thermodynamic system in equilibrium . the ruppeiner geometry can be used to analyze stability and critical phenomena in black hole physics with results consistent with those from the poincar stability analysis for black holes and black rings . furthermore other physical phenomena are well encoded in the ruppeiner metric such as the sign of specific heat and the extremality of the solutions . the black hole families we discuss in particular in this manuscript are the myers - perry black holes .
|
-density parity - check ( ldpc ) codes have been one of dominant error - correcting codes for high - speed communication systems or data storage systems because they asymptotically have capacity - approaching performance under iterative decoding with moderate complexity . among them , quasi - cyclic ( qc ) ldpc codes are well suited for hardware implementation using simple shift registers due to the regularity in their parity - check matrices so that they have been adopted in many practical applications . for a high - rate case, it is difficult to randomly construct good ldpc codes of short and moderate lengths because their parity - check matrices are so dense compared to a low - rate case for the given code length and degree distribution that they are prone to making short cycles . among well - known structured ldpc codes , finite geometry ldpc codes - and ldpc codes constructed from combinatorial designs - are adequate for high - rate ldpc codes .the error correcting performance of these ldpc codes is verified under proper decoding algorithms but they have severe restrictions on flexibly choosing the code rate and length . also , since finite geometry ldpc codes usually have much redundancy and large weights in their parity - check matrices , they are not suitable for a strictly power - constrained system with iterative message - passing decoding. it is known that the parity - check matrix structure consisting of a single row of circulants - , , is adequate for generating high - rate qc ldpc codes of short and moderate lengths .the class - i circulant eg - ldpc codes in and the duals of one - generator qc codes in are constructed from the euclidean geometry and the affine geometry , respectively , and they have very restricted rates and lengths and much redundancy . in - , qc ldpc codes constructed from cyclic difference families ( cdfs ) are proposed , which also have restricted lengths .computer search algorithms are proposed in and for various - length qc ldpc codes of this parity - check matrix structure , but they can not generate qc ldpc codes as short as the qc ldpc codes constructed from cdfs for most of code rates . in this paper , new high - rate regular qc ldpc codes with parity - check matrices consisting of a single row of circulants with the column - weight 3 or 4 are proposed based on special classes of cdfs . in designing the proposed qc ldpc codes , we can flexibly choose the code rate and length including the minimum achievable code length for a given design rate .the parity - check matrices of the proposed qc ldpc codes have full rank when the column - weight is 3 and have almost full rank when the column - weight is 4 because there is just one redundant row .numerical analysis shows that the error correcting performance of the proposed qc ldpc codes of short and moderate lengths is almost the same as that of the existing high - rate qc ldpc codes .the remainder of the paper is organized as follows .section [ sec : cdf ] introduces the definition and the existence theorems of cdfs , and provides a construction method of a class of cdfs . in section [ sec : proposed ] , high - rate regular qc ldpc codes are proposed and analyzed . 
in section [ sec : simulation ] , the error correcting performance of the proposed qc ldpc codes is compared to that of the existing high - rate qc ldpc codes via numerical analysis .finally , the conclusions are provided in section [ sec : conclusions ] .a cyclic difference family is defined as follows .consider the additive group .then -element subsets of , , , , form a _ cyclic difference family _ ( cdf ) if every nonzero element of occurs times among the differences , , , .according to - , cdfs are adequate for constructing parity - check matrices of qc ldpc codes with girth at least 6 .theorem [ thm : cdf_existence ] shows the existence of such cdfs .the existence of cdfs is given as : * there exists a cdf for all .* a cdf exists for all .* a cdf exists for and .[ thm : cdf_existence ] in this paper , a special class of cdfs , called perfect difference family ( pdf ) , will be used to construct high - rate regular qc ldpc codes with parity - check matrices consisting of a single row of circulants so that more various code parameters can be achieved . before introducing its definition ,we will define two terms as follows .consider -element subsets of , , , . among the differences , , , , we will call differences , , , as the _ forward differences over _ of the subsets and the remaining differences as the _ backward differences over _ of the subsets . consider a cdf , , , , with .then the cdf is called a _ perfect difference family _ ( pdf ) if the backward differences cover the set .the condition on the existence of pdfs is stricter than that of cdfs .some recent results on the existence of pdfs are summarized in the following theorem .the existence of pdfs is given as : * a pdf exists if and only if .* a pdf exists for , . * pdfs are known for but for no other small value of .* there is no pdf for .[ thm : pdf_existence ] since there are no pdfs for and no sufficiently many pdfs for from theorem [ thm : pdf_existence ] , we focus on the case of .the construction of pdfs for and is provided in and , respectively .although pdfs do not exist for , a class of cdfs constructed from hooked skolem sequences for can be used to construct parity - check matrices of qc ldpc codes with various code parameters , which consist of a single row of circulants .a _ skolem sequence _ of order is a sequence of integers satisfying the following two conditions : * for every , there exist exactly two elements and in such that . *if with , then .skolem sequences are also written as collections of ordered pairs with .a _ hooked skolem sequence _ of order is a sequence satisfying the conditions i ) and ii ) .[ def : skolem ] a hooked skolem sequence of order exists if and only if , and it can be constructed by the method in as follows .note that the ordered pairs are used to represent a hooked skolem sequence as in definition [ def : skolem ] .* ; * ; * , ; * * , ; * from hooked skolem sequences , cdfs can be constructed for . 
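the defining conditions of a ( hooked ) skolem sequence can be checked, and small instances found, by exhaustive search. the sketch below (python) is based only on definition [ def : skolem ]; the convention that the single empty, "hooked" position sits at index 2n of a length-(2n+1) sequence is an assumption of this sketch, since that part of the definition did not survive in the text above.

```python
def hooked_skolem(n):
    """Backtracking search for a hooked Skolem sequence of order n.

    Returns a list s of length 2n+1 with a 0 ('hook') at position 2n (1-based)
    and, for every k = 1..n, exactly two positions i < j with s_i = s_j = k
    and j - i = k; returns None if no such sequence exists for this n.
    """
    L = 2 * n + 1
    seq = [0] * (L + 1)            # 1-based; empty slots hold 0
    hook = 2 * n                   # the single empty ('hooked') position

    def place(k):
        if k == 0:
            return True
        for i in range(1, L - k + 1):
            j = i + k
            if i != hook and j != hook and seq[i] == 0 and seq[j] == 0:
                seq[i] = seq[j] = k
                if place(k - 1):
                    return True
                seq[i] = seq[j] = 0
        return False

    return seq[1:] if place(n) else None


def is_hooked_skolem(s):
    """Check the two defining conditions of definition [def:skolem]."""
    n = (len(s) - 1) // 2
    if s[2 * n - 1] != 0:                       # hook at position 2n (1-based)
        return False
    for k in range(1, n + 1):
        pos = [i for i, v in enumerate(s, start=1) if v == k]
        if len(pos) != 2 or pos[1] - pos[0] != k:
            return False
    return True


if __name__ == "__main__":
    for n in (2, 3, 6, 7):                      # orders congruent to 2 or 3 mod 4
        s = hooked_skolem(n)
        print(n, s, s is not None and is_hooked_skolem(s))
```

for orders congruent to 2 or 3 modulo 4 the search returns a valid sequence quickly, which is consistent with the existence statement quoted above.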
after constructing a hooked skolem sequence of order , , a cdf is obtained by letting .we can easily check that all differences in s cover the set .consider a binary regular ldpc code whose parity - check matrix is a array of circulants given as where a _ circulant _ is defined as a matrix whose each row is a cyclic shift of the row above it .this ldpc code is quasi - cyclic because applying circular shifts within each length- subblock of a codeword gives another codeword .a circulant is entirely described by the positions of nonzero elements in the first column .let , , be the index of the -st element in the first column .then , the _ shift value_(s ) of a circulant is ( are ) defined as the index ( indices ) of the nonzero element(s ) in the first column .note that a shift value takes the value from to .let ( ) denote the column - weight ( row - weight ) of .then we have and the design rate of this ldpc code is . let , and , denote the -th smallest shift value of , that is , , which correspond to the indices of 1 s in the first column of .we propose a new class of high - rate qc ldpc codes which have the parity - check matrix form in ( [ eq : h ] ) constructed from pdfs or cdfs given in subsection [ subsec : skolem ] . under some proper constraints , a parity - check matrix of qc ldpc code with the column - weight 3 or 4can be obtained by taking shift values of from in the cdf or the pdf .more concretely , qc ldpc codes can be constructed by using , , of cdf or pdf as follows : * choose the code parameters , , and such that * * for and for * * , where for and . * if and , construct a cdf , , as in subsection [ subsec : skolem ] . otherwise , construct a pdf , , as in and .* let , and . for ,qc ldpc codes with parity - check matrices consisting of a single row of circulants can also be constructed by the proposed procedure , but for other than 6 , 8 , and 10 , pdfs are still unknown as in theorem [ thm : pdf_existence ] .it is well known that parity - check matrices including a circulant of column - weight larger than or equal to 3 have a girth at most 6 . in this subsection, we will show that the proposed qc ldpc codes have the girth 6 by proving that there is no cycle of length 4 in the parity - check matrices .let denote the difference of shift values and in .consider a pdf , .the proposed qc ldpc codes constructed from this pdf do not have any cycle of length 4 for every .[ thm : main1 ] the necessary and sufficient condition for avoiding cycles of length 4 is that all s , and , are distinct .the backward differences over of the shift values cover to due to the property of the pdf , and thus the forward differences over of the shift values cover to . since , all s are distinct . for ,consider a cdf , , constructed from a hooked skolem sequence of order .the proposed qc ldpc codes constructed from this cdf do not have any cycle of length 4 for every and .[ thm : main2 ] we only need to show that all s , and , are distinct for .we can see from definition [ def : skolem ] that the maximum value of in a hooked skolem sequence of order is . 
since , the minimum and the maximum of the backward differences over of the shift values are and , respectively . since all backward differences over are distinct and take values through , every forward difference over has a value from to . therefore , all s are distinct for . note that for , , and , a forward difference and a backward difference over of the shift values in the proposed qc ldpc codes have in common . thus it seems that another construction method is needed for . however , in fact , there does not exist any construction method which avoids cycles of length 4 , as shown in the following theorem . for , , and , which is not the case covered in theorem [ thm : main2 ] , qc ldpc codes whose parity - check matrices have the form in ( [ eq : h ] ) cannot avoid cycles of length 4 for any shift value assignment . [ thm : main3 ] assume that there exists a shift value assignment such that the parity - check matrix avoids cycles of length 4 . if for some , , , the difference is also . therefore , all differences , and , have to cover to except . let denote the sum of backward differences over of the shift values . note that the addition is calculated over the integers . then , is odd because . on the other hand , since can be expressed as , this contradicts ( [ eq : diff_sum ] ) in that the parities of are different . therefore , there is no shift value assignment such that the parity - check matrix in ( [ eq : h ] ) avoids cycles of length 4 for , , and . the proposed qc ldpc codes have advantages mainly in being able to have various code parameters while guaranteeing the girth 6 . for , only has to be larger than or equal to 2 and for , can be any integer from 4 to 1000 , which are the same as the conventional qc ldpc codes constructed from cdfs , except for and . moreover , for a fixed , can have any value as long as , except for the case of , , and . on the other hand , the conventional qc ldpc codes constructed from cdfs generally do not guarantee the girth 6 for for a given . in fact , it can be easily seen that can have an arbitrary value from and for the conventional qc ldpc codes to have the girth 6 , because for the given cdf , , the conventional qc ldpc codes can be constructed by only using the sets for . however , the conventional qc ldpc codes still cannot have sufficiently various code lengths . it is obvious that the proposed qc ldpc codes can achieve the minimum code length , which corresponds to the theoretical lower bound among all possible regular ldpc codes for a given and design rate when the girth 6 is guaranteed , because the parity - check matrix of the proposed qc ldpc code with is actually an incidence matrix of a steiner system , . that is , the minimum achievable code length of the proposed qc ldpc codes is . for comparison , consider qc ldpc codes whose parity - check matrices have the form of a array of circulant permutation matrices , which corresponds to the most common form of the existing regular qc ldpc codes . obviously , these qc ldpc codes have the same design rate and the same row - and column - weights of the parity - check matrices as the proposed qc ldpc codes . the necessary condition on for these qc ldpc codes to have the girth larger than or equal to 6 is , and the array ldpc codes are known to achieve the equality . thus , the minimum achievable code length of these qc ldpc codes for guaranteeing the girth 6 is .
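the single-row-of-circulants structure and the girth-6 criterion in the theorems above are easy to exercise numerically. the following sketch builds a parity - check matrix of the form in ( [ eq : h ] ) from shift-value sets (the indices of the 1s in the first column of each circulant) and tests for 4-cycles by checking whether any two columns share two or more rows. the example blocks are illustrative and are not taken from the paper, although they do happen to form a ( 13 , 3 , 1 ) cyclic difference family, so the check should report no 4-cycles; the design rate (L-1)/L assumes the single row of v check equations.

```python
import numpy as np

def circulant(v, shifts):
    """v x v binary circulant whose first column has 1s at the given shift values."""
    first_col = np.zeros(v, dtype=np.uint8)
    first_col[list(shifts)] = 1
    # each successive column is the first column cyclically shifted down by one
    return np.stack([np.roll(first_col, c) for c in range(v)], axis=1)

def single_row_of_circulants(v, blocks):
    """Parity-check matrix [A_1 A_2 ... A_L] built from L shift-value sets."""
    return np.concatenate([circulant(v, b) for b in blocks], axis=1)

def has_4cycle(H):
    """True if some pair of columns shares two or more rows (i.e. girth <= 4)."""
    overlap = H.astype(np.int32).T @ H.astype(np.int32)
    np.fill_diagonal(overlap, 0)
    return bool((overlap >= 2).any())

if __name__ == "__main__":
    # illustrative example: two column-weight-3 circulants of size v = 13
    v, blocks = 13, [(0, 1, 4), (0, 2, 8)]
    H = single_row_of_circulants(v, blocks)
    L = len(blocks)
    print("H shape:", H.shape, " design rate (L-1)/L =", (L - 1) / L)
    print("4-cycle present:", has_4cycle(H))
```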
fig .[ fig : min_length ] illustrates such minimum achievable code lengths and by varying for and and we can see that the proposed qc ldpc codes can flexibly have a very short length up to the theoretical lower bound unlike the array ldpc codes . note that qc ldpc codes in and can not achieve the minimum code length .the error correcting performance of the proposed qc ldpc codes may not be good for much larger than because , regardless of the code length , the girth of the proposed qc ldpc codes is fixed to 6 and the minimum distance of these qc ldpc codes has a value between and , .however , these restrictions on the girth and the minimum distance are not problematic for the proposed high - rate short qc ldpc codes and for large and small , the error correcting performance of the proposed qc ldpc codes will be compared with that of other qc ldpc codes in section [ sec : simulation ] .therefore , the proposed construction is adequate for high - rate qc ldpc codes of short and moderate lengths .it is difficult to analyze the rank of the parity - check matrices of the proposed qc ldpc codes because they do not have an algebraic structure like the codes in - . instead, the rank of the proposed parity - check matrices for various parameters can be numerically computed .it is observed that the parity - check matrices of the proposed qc ldpc codes have full rank for the parameters , , and such that the code length is less than or equal to 3,000 , and have almost full rank , i.e. , just one redundant row , for the parameters , , and such that the code length is less than or equal to 3,000 .moreover , every parity - check matrix has at least one full - rank circulant for , which enables a simple encoding of the proposed qc ldpc codes , and has at least one almost full - rank circulant for .assume that in ( [ eq : h ] ) is invertible .then , a generator matrix of systematic form is simply obtained as where represents the identity matrix .this full - rank property of the parity - check matrices differentiates the proposed qc ldpc codes from the qc ldpc codes in and , whose parity - check matrices consist of a single row of circulants and have many redundant rows .in this section , the error correcting performance of the proposed qc ldpc codes is verified via numerical analysis and compared with that of some algebraic qc ldpc codes and progressive edge - growth ( peg ) ldpc codes with girth 6 . 
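before turning to the performance comparison, the rank statements above can be spot-checked for any particular parameter choice with a small gf(2) elimination routine: full rank of the parity - check matrix means a binary rank equal to the number of check rows, and the systematic encoding mentioned above only requires the selected circulant to be invertible, i.e. to have full rank on its own. the matrix in the usage example is the illustrative circulant from the previous sketch, not one of the code matrices reported in the paper.

```python
import numpy as np

def gf2_rank(M):
    """Rank of a binary matrix over GF(2) by Gaussian elimination."""
    A = (np.array(M, dtype=np.uint8) & 1).copy()
    rank, rows, cols = 0, A.shape[0], A.shape[1]
    for col in range(cols):
        pivot = next((r for r in range(rank, rows) if A[r, col]), None)
        if pivot is None:
            continue
        A[[rank, pivot]] = A[[pivot, rank]]            # move the pivot row up
        others = [r for r in range(rows) if r != rank and A[r, col]]
        A[others] ^= A[rank]                           # clear the column elsewhere
        rank += 1
        if rank == rows:
            break
    return rank

if __name__ == "__main__":
    v, shifts = 13, (0, 1, 4)
    first = np.zeros(v, dtype=np.uint8); first[list(shifts)] = 1
    A1 = np.stack([np.roll(first, c) for c in range(v)], axis=1)
    print("GF(2) rank of one circulant:", gf2_rank(A1), "out of", v)
```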
as algebraic qc ldpc codes , affine geometry qc ldpc codes and array ldpc codes are used .note that the peg ldpc codes are not quasi - cyclic but random - like , and they are known to have the error correcting performance as good as random ldpc codes .the parameters of the algebraic qc ldpc codes are set to have as equal values with those of the proposed qc ldpc codes as possible and the parameters of the peg ldpc codes are exactly the same as those of the proposed qc ldpc codes .all results are obtained based on pc simulation using the sum - product decoding under the additive white gaussian noise ( awgn ) channel .the maximum number of iterations is set to 100 .first , the rate-0.9167 ( 1020 , 935 ) proposed qc ldpc code with , , and is compared with the rate-0.9131 ( 1024 , 935 ) affine geometry qc ldpc code and the rate-0.9167 ( 1020 , 935 ) peg ldpc code .the bit error rate ( ber ) performance of these ldpc codes is shown in fig .[ fig : performance ] and we can see that the proposed qc ldpc code and the peg ldpc code show a better ber performance than the affine geometry qc ldpc code in the high signal - to - noise ratio ( snr ) region .second , the rate-0.9333 ( 2115 , 1974 ) proposed qc ldpc code with , , and is compared with the rate-0.9348 ( 2115 , 1977 ) array ldpc code and the rate-0.9333 ( 2115 , 1974 ) peg ldpc code . it is shown in fig . [fig : performance ] that these ldpc codes have almost the same ber performance . finally , the rate-0.9006 ( 1640 , 1477 ) proposed qc ldpc code with , , and is compared with the rate-0.9024 ( 1640 , 1480 ) array ldpc code and the rate-0.9006 ( 1640 , 1477 ) peg ldpc code .it is shown in fig .[ fig : performance ] that these ldpc codes also have almost the same ber performance .in this paper , a new class of high - rate qc ldpc codes with or is proposed , which have parity - check matrices consisting of a single row of circulants and having the girth 6 .the construction of these qc ldpc codes exploits the cdfs constructed from hooked skolem sequences in the case of and , and the pdfs in other cases . in designing the proposed qc ldpc codes , we can flexibly choose the values of and including the minimum achievable code length for a given design rate .the parity - check matrices of the proposed qc ldpc codes have full rank when and have almost full rank , i.e. , just one redundant row , when .via numerical analysis , it is verified that the error correcting performance of the proposed qc ldpc codes is better than or almost equal to that of the affine geometry qc ldpc codes , the array ldpc codes , and the peg ldpc codes .y. kou , s. lin , and m. p. c. fossorier , `` low - density parity - check codes based on finite geometries : a rediscovery and new results , '' _ ieee trans .inf . theory _ ,47 , no . 7 , pp . 2711 - 2736 , nov . 2001 .i. djurdjevic , j. xu , k. abdel - ghaffar , and s. lin , `` a class of low - density parity - check codes constructed based on reed - solomon codes with two information symbols , '' _ ieee commun ._ , vol . 7 , no . 7 , pp317 - 319 , jul . 2003 .h. tang , j. xu , y. kou , s. lin , and k. abdel - ghaffar , `` on algebraic construction of gallager and circulant low - density parity - check codes , '' _ ieee trans .inf . theory _ ,1269 - 1279 , jun . 2004 .b. ammar , b. honary , y. kou , j. xu , and s. lin , `` construction of low - density parity - check codes based on balanced incomplete block designs , '' _ ieee trans .inf . theory _ ,50 , no . 6 , pp .1257 - 1268 , jun . 2004 .m. fujisawa and s. 
sakata , `` a construction of high rate quasi - cyclic regular ldpc codes from cyclic difference families with girth 8 , '' _ ieice trans .fundamentals _ , vol .e90-a , no .1055 - 1061 , may 2007 .r. julian , r. abel , s. costa , and n. j. finizio , `` directed - ordered whist tournaments and difference families : existence results and some new classes of -cyclic solutions , '' _ discrete appl .43 - 53 , 2004 .
|
for a high - rate case , it is difficult to randomly construct good low - density parity - check ( ldpc ) codes of short and moderate lengths because their tanner graphs are prone to making short cycles . also , the existing high - rate quasi - cyclic ( qc ) ldpc codes can be constructed only for very restricted code parameters . in this paper , a new construction method of high - rate regular qc ldpc codes with parity - check matrices consisting of a single row of circulants with the column - weight 3 or 4 is proposed based on special classes of cyclic difference families . the proposed qc ldpc codes can be constructed for various code rates and lengths , including the minimum achievable length for a given design rate , which cannot be achieved by the existing high - rate qc ldpc codes . it is observed that the parity - check matrices of the proposed qc ldpc codes have full rank . it is shown through numerical analysis that the error correcting performance of the proposed qc ldpc codes of short and moderate lengths is almost the same as that of the existing ones . index terms : code length , code rate , cyclic difference families ( cdfs ) , girth , quasi - cyclic ( qc ) low - density parity - check ( ldpc ) codes .
|
etcs play an important role in modern instrument use as they allow observers to determine how to carry out specific investigations and , especially , to predict the amount of time these will require .since the time needed for the various programmes is a very sensitive issue in the allocation process for most modern high visibility ground and space - based facilities , the accuracy of these simulators must be well understood both by the observers and the time allocation committees that must rely on their results for a fair and scientifically effective distribution of the available time . in this context , unfortunately , besides the documentation accompanying the software tools , there is practically no published information on the reliability of existing etcs of imaging cameras .the wfpc2 has been so far the principal instrument on board the hst and it is expected to be of extreme utility to image parallel fields even now that the advanced camera for surveys ( acs ) is installed on the hst . etc software utilities are available on the internet site of the stsci which simulate analytically the photometry for a given target for each hst instrument .the accuracy of these programmes plays a fundamental role in the planning of observations , in particular when extremely deep imaging is required and whenever the performances of two different instruments have to be compared . while performing simulations for an hst proposal for the wfpc2 and the acs , in which high accuracy was needed in order to evaluate the limiting magnitudes for deep observations of a globular cluster , we found substantial differences between the wfpc2 etc results and real photometry obtained on archival images .we found similar differences also in archival non crowded fields , so that we decided to analyse the problem by directly comparing the etc predictions with our photometry in various circumstances and here we show the results and the way in which they depend on field crowding .we also compare our photometry with the result of the recently published `` etc++ '' software , whose calculations are based on statistical analysis tools and take into account the real effects of the pixel size and charge diffusion .the wfpc2 etc computes the expected snr of a point source from its input parameters , namely : the magnitude of the star in a given spectral band , the spectral type , the filter to use , the channel of the detector ( pc1 or wf2 , wf3 , wf4 ) , the analogue to digital gain , the position of the star on the pixel ( centre or corner ) , the exposure time of the whole observation ( i.e. the sum of all the exposure frames ) and the sky coordinates of the target . 
as of late ,the option to manually select a specific value of the sky brightness has been added .first , the programme computes the source count rate , assuming a blackbody spectrum , if the user has not specified it , and multiplies it by the response curves of the detector and filter .then , the programme takes into account the various noise sources , including photon noise , read noise , dark noise and sky noise .the latter depends on the target position on the sky , with the sky brightening by about one magnitude from the ecliptic pole to the ecliptic plane .the programme uses the values from table 6.4 in the wfpc2 instrument handbook to compute the sky count rate per pixel and hence its photon noise .the contribution of the total noise to the photometry of a star depends upon the number of pixels in the point spread function ( psf ) and how these pixels are weighted during data reduction .the wfpc2 etc assumes that the data reduction employs psf fitting photometry , so that it weights the pixels in proportion to their intensity , which maximises the snr .the multiple read errors for a `` cosmic ray splitted '' ( cr - split ) image , i.e. an image composed by many shorter frames , is then computed for a set of default splitting values and the corresponding snr is also given in the etc result page . in order to quantify the possible wfpc2 etc deviations from real photometry, we performed accurate aperture photometry ( using the daophot package ) on both crowded and non crowded archival fields .the average image used in our analysis was computed after aligning the individual frames in the dithering pattern and removing cosmic ray hits .a custom programme was used , which computes the offsets of the frames by measuring the mean displacement of the centroid of some reference stars .the task then registers all the images to the first one , creates a mask of the cr - contaminated pixels , by means of an iterative sigma clipping routine with respect to the median value of the corresponding pixels in all frames , and finally computes the mean image by averaging the corresponding pixel on all the images if not included in the cr - mask .the cr - corrupted pixel in the original un - shifted frames are also replaced by the value of the corresponding pixel in the mean image in order to allow us to perform photometry on both the combined and on the individual frames .our instrumental magnitudes were then transformed to the johnson / cousin ubvri system by following the prescription of specifically , their equation8 , which also takes account of the colour correction by means of the coefficients in table 7 therein and by making an optimal choice of the aperture radius for each star , so as to minimise the associated photometric error . for the crowded field ,we have used the images of the galactic globular cluster ( hst proposal 5461 ) , obtained in the f555w and f814w filters .these are deep images centered in a region at one core radius from the centre of a dense globular cluster and should be representative of the cases in which the field observed is filled with a multitude of very bright and saturated stars ( ) , whose haloes overlap each other and cover a significant fraction of the frame ( figure [ fig1 ] ) . 
images of the field of arp2 taken from hst proposal 6701 , also obtained through f555w and f814w filters , were used as representative of a sparsely filled region in which the field is populated with faint stars with no appreciable overlapping haloes ( figure [ fig2 ] ) .aperture photometry was performed on both series of images ( i.e. crowded and non crowded ) , with the following parameters .the flux of the object was sampled within an aperture of radius , which is varied in steps of pixel .the background is sampled within an annulus drawn from an inner radius pixel to an outer radius , with an annulus width which is varied from pixel up to pixel . as is discussed later in this section ,an adjustable aperture radius and annulus size allow us to maximise the snr , by limiting the noise generated by the contamination of the neighbouring objects .moreover , the background is always estimated by taking the mode , rather than the mean or median , of the pixel distribution within the annulus .appropriate aperture corrections were applied , which were directly measured from the most isolated non saturated stars in the field .a direct comparison with the encircled energy curve for the wfpc2 psf shows a perfect match , thus proving that the growth curves that we measured are reliable .the daophot task , used with the optimal aperture radius and the radii and for the sky annulus , gives the best estimate of both the magnitude and the associated error , from which we compute the snr by using the equation : that comes from inverting pogson s relation , where the numerical constant is equal to .hereafter , the acronym indicates the snr estimated on the basis of the photometric error given by daophot . as an independent check ,we have computed the snr as indicated in equation 6.7 of the wfpc2 handbook which , in the practical case of observed quantities , becomes : where is the optimal aperture radius used by daophot , is the number of frames combined together , the read - out noise ( in units of electrons ) of each specific ccd , the average background per pixel inside the annulus from to , in units of dn , is the flux within the aperture of radius after subtraction of the background contribution , in units of dn , and is the effective gain factor , i.e. the ccd gain times the number of frames averaged togheter .finally , is a small ( although non negligible ) contribution to the error affecting the estimate of the background level which takes on the form : the computation of makes no use of the error estimate on the magnitude or flux provided by daophot , so it is reassuring to find that is in excellent agreement with .this , however , only happens if we use an adaptive choice for aperture radius and for the background annulus , as explained above .in fact , if we select a fixed radius and annulus size in a crowded environment , the contamination due to neighbouring stars alters the statistics of the sky within the annulus and we always find .this is precisely the reason that made conclude that core aperture photometry , i.e. source and sky measurement conducted as close to the source as possible , as well as the use of the mode for the background are most advisable in crowded environments . in light of the consistency between and andsince the latter stems directly from equation 6.7 of the wfpc2 handbook , on which the wfpc2 etc is also based , we can now proceed and compare our measured with the etc predictions . 
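both directions of the comparison can be written compactly. in the python sketch below, snr_from_mag_err inverts pogson's relation (the numerical constant is 2.5 / ln 10, approximately 1.0857), while ccd_aperture_snr is a generic aperture signal-to-noise budget — source shot noise plus sky, dark and read noise over the aperture pixels — written in the spirit of the handbook formula; it is not a transcription of equation 6.7, and the gain, read-noise and aperture values in the usage example are placeholders, not the ones adopted in the paper.

```python
import numpy as np

POGSON_K = 2.5 / np.log(10.0)            # ~1.0857 mag per unit of ln(flux)

def snr_from_mag_err(sigma_mag):
    """SNR implied by a magnitude error, from inverting Pogson's relation."""
    return POGSON_K / sigma_mag

def ccd_aperture_snr(src_counts, sky_per_pix, n_pix, read_noise,
                     n_frames=1, gain=7.0, dark_per_pix=0.0):
    """Schematic aperture-photometry SNR budget (all noise terms in electrons).

    src_counts, sky_per_pix and dark_per_pix are in detector counts (DN);
    gain converts DN to electrons; read_noise is in electrons per read.
    """
    s = src_counts * gain
    b = (sky_per_pix + dark_per_pix) * gain
    noise2 = s + n_pix * (b + n_frames * read_noise ** 2)
    return s / np.sqrt(noise2)

if __name__ == "__main__":
    # e.g. a star measured with a 0.02 mag uncertainty ...
    print("measured SNR ~", round(snr_from_mag_err(0.02), 1))
    # ... versus a schematic prediction for 1500 DN in a ~3-pixel-radius aperture
    print("predicted SNR ~", round(ccd_aperture_snr(1500.0, 4.0, 29, 5.2, n_frames=4), 1))
```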
before doing so , however , we must make sure that the way in which we measure the snr ( i.e. ) is consistent with the way in which the etc software expects users to carry out the photometry .in fact , the latter assumes that the data reduction process employ psf fitting photometry , i.e. that optimal weighting be assigned to each pixel in proportion to its intensity in the psf . as discussed above , however , we have used aperture photometry to determine .the wfpc2 etc instructions would indeed offer a correction to apply to the ideal psf fitting case ( we call it `` etc optimal snr '' ) in order to convert it to the equivalent snr that would be obtained with canonical aperture photometry ( `` etc aperture snr '' ) .following the wfpc2 etc instructions in , we have : where for the pc camera and for the wf chips , particularly valid when the aperture radius is pixel for the pc and pixel in the case of the wf .since we determined by using aperture photometry , it would seem that we need to take into account the correction given by equation4 .we show , however , that this correction is not necessary thanks to the adaptive method that we used for photometry . in figure[fig3 ] we plot , for the pc chip , the measured against the prediction of the etc for the aperture photometry case , i.e. .we should like to clarify here how figure[fig3 ] and others of the same type in the following were built . after having measured the calibrated magnitude of a star in the images , we folded the latter value through the wfpc2 etc so as to calculate the estimated snr for an object of that brightness and for the exposure time and cr - split pattern corresponding to those of the actual combined image . for this and all the other figures in this paper , unless otherwise specified , we used the `` average sky '' option for the sky brightness setting as allowed by the new wfpc2 etc version 3.0 .we can see from figure [ fig3 ] that the prediction of the etc for aperture photometry ( ) are over - estimated for faint stars and under - estimated for bright objects with respect to the measured values for both the sparse and the crowded field .figure [ fig4 ] is the analogue of figure [ fig3 ] but here the reference is the etc optimal snr , , i.e. without any correction for aperture photometry. as one can easily see , the etc in this case always overestimates the value of the snr with respect to the measured ones by up to for the fainter stars .as the right hand side axis shows , such a mismatch of the snr corresponds to a time estimation error of the same amount ( see equation7 ahead ) , i.e. the etc appears to underestimate the exposure time actually needed to achieve a given snr. a closer look at figure [ fig4 ] , however , reveals that the scatter of the representative points on the plot is smaller when our measurements are compared with snr than for snr and that the overall behaviour is closer to the etc prediction at any magnitude .this is a consequence of our optimised aperture and annulus photometry closely approaching psf fitting . in light of these results , in the following we ignore the correction for aperture photometry given by equation4 and compare our measurements directly with the etc optimal snr , i.e. snr .figures [ fig3 ] and [ fig4 ] clearly witness the dependence of the actual snr upon the level of field crowding and , at the same time , its independence of the filter used . in principle , one could question the validity of our latest assumption , i.e. 
that of ignoring the correction to be applied to the snr measured with aperture photometry . in fact , in a crowded field, psf fitting photometry is expected to give better results .we have , therefore , attempted a direct comparison between the predictions of the etc and the results of psf fitting photometry . rather than carrying out the reduction ourselves , we have utilised one of the finest examples of photometric work carried out on these very m4 data by , who employed very accurate allframe photometry as described in detail in .in their paper , these authors measure the magnitude of each star from the individual frames in the dithering stack and compute the combined magnitudes as the weighted average of the corresponding fluxes , the error on them , , being related to the flux scatter amongst the frames . in order to make a reliable comparison with our results ,we have performed , in a similar way , optimised aperture photometry on the individual frames ( i.e. the original , not yet aligned images , in which cr - hits had been removed as described above ) .the measured fluxes were averaged with a weight inversely proportional to the daophot estimated uncertainty after rescaling for the flux ratio .our final magnitude errors , , are thus derived from the standard deviation of the fluxes , divided by the square root of the number of images combined .figure [ fig5 ] displays the comparison between , and the etc prediction , showing that the two photometric uncertainties overlap each other , while the etc largely overestimates the precision that can be attained with psf fitting photometry , even by one of the most experienced teams .thus , in this crowded case , it is also apparent that the etc deviations are independent of the photometric technique adopted . in sparse fields , where aperture photometry and psf fitting are equally effective and reliable , figures [ fig3 ] and [ fig4 ] already prove that the etc predictions depart from the measured data , although by a smaller amount than that applicable in the crowded case .finally , in figure [ fig6 ] we compare the predictions of the etc with the actual measurements for both the pc and wf , to show that the behaviour of the etc applies regardless of the channel .the relevance of the above considerations becomes clear when one uses an etc to simulate very deep observations , especially when a comparison between instruments , e.g. acs / wfc and wfpc2 , is required to compare the limiting magnitude in given exposure times . as experience shows , a star finding programme is able to detect a faint point source only when its brightest pixel is at least or above the sky background ( where is the standard deviation of the background ) , with a value of or more being the typical prerequisite in most faint photometry precision applications .if we plot the so called object _ detectability _ , defined as : as a function of the magnitude error , we obtain the graph in figure [ fig7 ] . herewe notice that the detectability ( which is practically independent of the filter and crowding ) drops to the value of just when the magnitude error approaches , which is usually considered the maximum allowed error in canonical photometric work . 
by relating the detectability with the etc optimal snr , , as done in figure [ fig8 ] , we see that corresponds to an etc optimal snr of for the non crowded case and to for the crowded case .this literally means that if we need to know the magnitude of the faintest detectable star in an observation of a stellar field with the wfpc2 we should query the etc , setting `` average sky '' , for a snr of and , respectively in a crowded and in a sparse environment .it is normally assumed that a detection requires a snr of 3 , but in the case of the snr provided by the wfpc2 etc , this is only true for an isolated object .the direct consequence of what we have illustrated so far is that , if the etc were used to plan observations of faint stars in a globular cluster like m4 with the wfpc2 , the predicted exposure time could be considerably underestimated .conversely , the same predictions would be almost correct for a star of equal brightness in a sparse field . in the followingwe try and provide an empirical correction formula that can be applied to the snr given by the wfpc2 etc to compensate for the effects of crowding . in order to understand the discrepancy between the expected and measured snr and to clarify how to exactly account for the effects of crowding in the simulations, we artificially modified the background level and photon noise in the sparse field so as to reproduce the sky level and sky variance measured on the crowded field . in practice , we added to the sparse field a gaussian noise with a mean equal to the difference in the sky level between the two fields and a variance equal to the quadratic difference of the sky variances between them .the snr diagramme for the modified image ( figure [ fig9 ] ) reveals that the locus of the modified sparse data points shifts towards and perfectly overlaps the crowded field locus .this tells us , as expected , that the increased background level resulting from crowding is responsible for the differences shown in figures [ fig3 ] and [ fig4 ] between sparse and crowded fields .it is , however , true that the etc gives the snr under the best possible sky conditions , which are rarely encountered , if ever , in real observations .moreover , it is generally not expected of the etc to take account of the position and brightness of all the stars in the field as would be necessary to simulate how crowding increases the background level .we have , therefore , manually set the etc sky brightness to match the levels directly measured with the daophot sky task on the crowded image ( i.e .. 
the mode of the levels distribution ) , hoping in this way to force the snr simulated by the etc to agree with our measurements .in fact , the results change only marginally , as shown in figure[fig10 ] , where and are plotted against the observed magnitude ( johnson in this case ) .the etc simulation gets closer to the real data , but it does not still match them .moreover , it seems as if a suitable value for the background can not be found at all as shown in figure [ fig11 ] , where one sees that the sky value that would force the etc prediction to match , changes significantly as a function of star brightness .we must , thus , conclude that the treatment of the background is a major issue for the wfpc2 etc , although that alone can not explain the whole discrepancy .it goes without saying that we have verified and confirmed that the predictions of the etc as concerns the count rates per pixel in the source and background are precise to within an accuracy of 10% , as one would expect of a professional tool .we have also repeated all our tests on the individual frames , compared in turn with the predictions of the etc for a case of cr - split=1 .the result being the same , we can exclude an error in either the way in which we combined the data or in the way in which the etc accounts for cr - split .the rest of the discrepancy , then , must be attributed to the way in which the noise is estimated , the signal being correct .a delicate issue could be , for instance , the value and operational definition of .we notice here that large variations in the value of are possible , in the crowded environment , depending as to whether we measure it with the iraf sky task , which fits a gaussian around the mode , or as the standard deviation that one obtains by manual analysis over the darkest regions of the background in the image .in fact the latter can be up to 3 times smaller than the former , and also 2 times smaller than the mean sigma as measured inside the photometric sky annulus around each star .all these numbers turn out to be quite similar for the sparse field image . to try and account for the possible sources of the residual error, we considered recent results published by , who uses fourier analysis and fisher information matrices to show to which extent the snr of a point source depends on factors which normally are not considered in etc programmes , such as pixel size , intra - pixel response function , extra - pixel charge diffusion and cosmic ray hits . according to this work, a programme that does not take all these parameters into account may overestimate the snr by up to a factor of2 . more precisely , whenever background limited point source photometry is involved , the key factor for the snr calculation , namely the `` effective area '' ( see equation12 in bernstein 2001 ) , strongly depends on the detector geometry , such as pixel size , under - sampling factor , intra - pixel response function and charge diffusion .the finite pixel size plays an important role , as even a nyquist sampled pixel ( i.e. one in size ) causes a 13% degradation in the snr of a faint star and the same applies to extra - pixel charge diffusion . 
in order to check whether these problems also affect the wfpc2 , we configured bernstein s`` etc++ '' software to simulate wfpc2 point source photometry for the sparse field .the result is shown in figure [ fig12 ] where the measured snr ( ) , the etc optimal snr ( ) and the etc++ snr for aperture photometry are plotted against the stellar magnitude .the etc++ gives a confidence level for its results as the value of the cumulative function of the stars distribution above the computed snr .the etc++ line in figure [ fig12 ] means that 50% of the stars of any given magnitude should be above this line .the wfpc2 etc does not give confidence levels , but we can assume that its snr is computed as the mean of the snr distribution at any given magnitude , i.e. at 50% confidence level .if this is the case , figure [ fig12 ] indicates that the actual snr is located in between the wfpc2 etc and the etc++ predictions , thus confirming the difficulty of any analytical etc in reliably estimating the snr .thus , a correction for the currently on - line wfpc2 etc can only be empirical in nature .the following formula can be used to obtain a realistic estimate of the snr : where is the snr estimated by the etc without correction for aperture photometry and is a measure of the crowding , defined as the logarithm of the ratio between the total area of the chip and the number of pixel with value lower than the modal sky value plus one standard deviation . for example is equal to for our sparse field , whereas it grows to in the crowded case of m4 . for faint stars , e.g. for , this equation can be roughly approximated by the rule of thumb that the actual snr is about , or , of the , respectively for a crowded and non crowded environment .it should be noted that not even in an ideal case of zero crowding ( ) would the measured snr match the prediction of the etc , since there would still be a discrepancy of the same order of that found in the sparse case .the advantage of this formula is that would now always imply a detection , regardless of the level of crowding in the image .the correction that we propose would allow an observer to accurately plan his observations and make the best use of the hst time .for the low snr regime ( e.g. 
) , equation[eq4 ] can actually be rewritten to more explicitly show the effects of crowding on the exposure time : where is the exposure time predicted by the etc to reach a certain snr and is its actual value .an example of how serious the underestimate of the exposure time can be when the etc is not used with the above caveat in mind is given in figures [ fig13]a and [ fig13]b for a crowded environment .there we show a simulation of the detectability of the white dwarf cooling sequence with the wfpc2 in ngc6397 , the nearest globular cluster , through the filters f606w and f814w .we have adopted the theoretical wd cooling sequence of which provide a perfectly thin isochrone and have applied to it the colour and magnitude uncertainty that one obtains from the estimated snr by inverting equation[eq1 ] .two cases are shown : one ( a ) as predicted by the wfpc2 etc and one ( b ) for our corrected estimate of equation[eq4 ] .the difference is outstanding , as the etc predictions , taken at face value and ignoring the effects of crowding , would suggest that the sequence is not spread very much by photometric errors and its quasi - horizontal tail between and is clearly noticeable , whereas in our realistic simulation the sequence is widely spread and its lower part lies well below the detection limit .the delicacy of the issue is immediately apparent when one considers that , based on the etc estimates , one would deem that 120 orbits are sufficient to reliably secure the white dwarf cooling sequence in the colour magnitude diagramme of ngc6397 down to and , whereas , in fact , the correction shows that as many as 255 orbits would be needed to comfortably reach those limits with the wfpc2 .all of the above considerations are valid not only for the wfpc2 , but also for any analytical etc in general , especially when used to estimate the snr of stars embedded in crowded environment or when the detector considerably under - samples the psf , as suggested in .we should underline here , however , that this does not mean at all that the etcs are unreliable nor that they are useless .one of the most important and practical reasons for having a standardised etc is to allow the telescope time allocation committees to compare all the proposals on equal footing . in this respect ,the etc does not necessarily need to be accurate .clearly , the better the detector s cosmetics , intra - pixel response , charge diffusion and readout noise , the closer will the real photometry be to the etc prediction .thus , we expect , for example , a better behaviour of the acs / wfc on - line simulator with respect to the wfpc2 . a non - analytical snr calculator , which would simulate the whole observing session , including the dithering pattern , by numerically reproducing the real field ( i.e. with the correct stellar positions and brightness , as imaged by a realistic model of the detector ) and which uses the same photometric tools that will be adopted by the user ( such as daophot , allstar , and the like ) , would be , in our opinion , the best method to accurately predict the expected performances of any planned observing programme providing reliable results .alternatively , at least for imaging etcs which have very few configuration parameters and are fairly stable such as those in space telescopes , one should consider empirical modeling .one can take real results , such as we did in this paper , to calibrate an etc which in turn interpolates between calibrations . 
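two of the quantities used above can be computed directly from an image array: the gaussian background brightening applied to the sparse field, and the crowding measure entering the empirical correction, defined as the logarithm of the ratio between the chip area and the number of pixels below the modal sky value plus one standard deviation. the sketch below assumes a base-10 logarithm and estimates the background sigma from the pixels below the mode; both choices are assumptions of this sketch, since the corresponding numerical values are not reproduced in the text.

```python
import numpy as np

def match_background(sparse_img, sky_sparse, sky_crowded, sig_sparse, sig_crowded, rng=None):
    """Add Gaussian noise so the sparse field reproduces the crowded-field sky
    level and sky variance: mean = difference of the sky levels, variance =
    quadratic difference of the sky variances, as described in the text."""
    rng = rng or np.random.default_rng(0)
    extra_var = max(sig_crowded ** 2 - sig_sparse ** 2, 0.0)
    return sparse_img + rng.normal(sky_crowded - sky_sparse,
                                   np.sqrt(extra_var), sparse_img.shape)

def crowding_index(img, nbins=2000):
    """Crowding measure: log10( chip area / number of 'quiet' background pixels )."""
    counts, edges = np.histogram(img.ravel(), bins=nbins)
    k = int(np.argmax(counts))
    mode = 0.5 * (edges[k] + edges[k + 1])            # modal sky value
    below = img[img <= mode]
    sigma = np.sqrt(np.mean((below - mode) ** 2))     # background sigma (assumption)
    n_quiet = np.count_nonzero(img < mode + sigma)
    return np.log10(img.size / max(n_quiet, 1))
```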
in this way , the use of an empirical correction formula such as the one proposed here would guarantee a closer matching between simulations and real observations . the results of the wfpc2 exposure time calculator for point sources have been analysed by direct comparison with aperture and psf photometry on real archival images . significant deviations have been found between the etc predictions and the actual photometry on the real data . specifically , the analysis shows that the etc deviations are _ i ) _ independent of the filter , _ ii ) _ independent of the choice of optimised aperture photometry or psf fitting photometry , _ iii ) _ independent of the pc or wf channel used , _ iv ) _ strongly dependent upon the level of crowding in the field and that _ v ) _ the etc systematically overestimates the snr , slightly for the bright sources and more seriously for faint sources close to the detection limit . moreover , when data reduction follows the optimised aperture photometry method , the measured snr will be as good as that obtained with psf fitting and there is no need to apply the aperture photometry conversion suggested in the etc documentation . an empirical correction formula is given to compute realistic snr estimates , so as to assist observation planning when extremely faint sources have to be imaged , an example of which is presented . manually increasing the value of the sky brightness in the simulator , so as to mimic the effects of crowding , shows that , although important , the background level is not the key parameter to explain the discrepancy , which is present even for data collected in rather sparse environments . thus , it is not possible to correct the wfpc2 etc predictions by just modifying the sky level . a comparison with a software tool developed by , whose predictions slightly underestimate the snr at variance with the wfpc2 etc , suggests that the effects of pixel size , charge diffusion and cosmic ray hits could be more important than previously thought . it is our pleasure to thank h. ferguson , m. stiavelli , s. casertano , f. massi , l. pulone and r. buonanno for helpful discussions . we are indebted to f. valdes , the referee of this paper , for his useful comments and suggestions . g. li causi is particularly grateful to the eso director general 's discretionary fund for supporting his work . we also wish to thank gary bernstein for making his etc++ software available to us . figure 3 : the ratio between the snr measured in crowded and sparse fields ( snr in the text ) and the wfpc2 etc prediction for aperture photometry ( snr in the text ) is shown for the f555w and f814w filters . figure 4 : the ratio between the snr measured in crowded and sparse fields ( snr in the text ) and the wfpc2 etc prediction for psf - fitting photometry ( snr in the text ) is shown for the f555w and f814w filters . the right hand side axis applies to the low snr regime ( ) and indicates the amount of the time estimation error , i.e.
the ratio between the actual exposure time ( equation7 in the text ) and that estimated by the etc , for a given snr in the abscissa .figure 5 : measured magnitude error from psf - fitting photometry of ( ) and from our optimized aperture photometry ( ) , as a function of the magnitude , compared with the wfpc2 etc prediction , for the crowded field and f555w filter .figure 9 : the ratio between the measured snr ( snr ) and the etc optimal snr ( snr ) is shown for the crowded and sparse fields , before and after the artificial brightening of the sparse field background . figure 10 : comparison between the measured snr ( snr ) and the etc predictions for i ) default low background , ii ) default average background , iii ) default high background and iv ) actually measured background .figure 12 : comparison between the prediction of the wfpc2 etc v.3.0 , the prediction of the etc++ software and the measured snr ( snr ) , for the crowded and non crowded cases in the two filters ( both etcs were used here after setting the sky magnitude to the value measured in the real images ) .figure 13 : comparison between ( a ) the wfpc2 etc predictions ( snr ) and ( b ) our correction of equation[eq4 ] ( snr ) , in a simulation of a 120 hst orbits observation of the white dwarfs cooling sequence in ngc6397 , in a colour magnitude diagramme made through the filters f606w and f814w .
|
we have studied the accuracy and reliability of the exposure time calculator ( etc ) of the wide field planetary camera 2 ( wfpc2 ) on board the hubble space telescope ( hst ) with the objective of determining how well it represents actual observations and , therefore , how much confidence can be invested in it and in similar software tools . we have found , for example , that the etc gives , in certain circumstances , very optimistic values for the signal - to - noise ratio ( snr ) of point sources . these values overestimate the hst performance by up to a factor of 2 when simulations are needed to plan deep imaging observations , with serious implications for observing time allocation . for this particular case , we calculate the corrective factors to compute the appropriate snr and detection limits , and we show how these corrections vary with field crowding and sky background . we also compare the etc of the wfpc2 with a more general etc tool , which takes into account the real effects of pixel size and charge diffusion . our analysis indicates that similar problems may afflict other etcs in general , showing the limits to which they are bound and the caution with which their results must be taken .
|
the aim of this paper is to develop efficient and easy to implement importance sampling estimators of expectations of functionals of lvy processes , corresponding to option prices in exponential lvy models .lvy processes are stochastic processes with stationary independent increments .they are used as models for asset prices when jump risk is important , either directly ( as in the variance gamma model , normal inverse gaussian process , cgmy model ) or as building blocks for other models ( affine processes , stochastic volatility models with jumps , local lvy models etc . ) . to model a financial market with a lvy process , we assume that the market consists of a risk - free asset and risky assets such that where is a lvy process under the risk - neutral probability .we consider a derivative written on with pay - off which depends of the entire trajectory of the stocks .we are interested in computing the price of this derivative , given by the risk - neutral expectation ] is defined as the empirical mean where are i.i.d .samples with the same law as .note that simulation methods exist for all parametric lvy models , including multidimensional lvy processes ( see chapter 6 of for a review ) .the precision of standard monte carlo is often too low for real - time applications , particularly when ] . for lvy processes , a natural choice of probability measure for importance sampling is given by the esscher transform },\ ] ] which is well defined for all such that ] . under , the process has independent increments and is thus easy to simulate .the optimal choice of should minimize the variance of the estimator under , - \mathbb e\left[p\right]^2.\ ] ] importance sampling is most effective in the context of _ rare event simulation _, e.g. , when for most of the trajectories of under the original measure . since the theory of large deviations is concerned with the study of probabilities of rare events , it is natural to use measure changes appearing in or inspired by the large deviations theory for importance sampling .we refer , e.g. , to and references therein for a review of this approach and to the above quoted references for specific applications to financial models .the main contribution of this paper , inspired by the work of guasoni and robertson in the setting of the black - scholes model , is to use the large deviations theory to construct an easily computable approximation to the optimal importance sampling measure .namely , we use varadhan s lemma and the pathwise large deviation principle for lvy processes due to leonard to derive a proxy for the variance of the importance sampling estimator which is much easier to compute than the true variance .we propose then to use the parameter , obtained by minimizing this proxy , in the importance sampling estimator .numerical illustrations in section [ num.sec ] show that the variance obtained by using instead of is very close to the optimal one , and that a considerable variance reduction is obtained in all examples with very little computational overhead . 
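as a concrete illustration of the estimator just described, the sketch below prices a deep out-of-the-money european put under an exponential variance gamma model, first by plain monte carlo and then by esscher-transform importance sampling for a few tilt parameters. everything in it is illustrative rather than taken from the paper: the model parameters and payoff are placeholders, the risk-neutral dynamics s_t = s_0 exp( ( r + omega ) t + x_t ) with omega = -g(1) is the usual convention and is assumed here, and the fact that an esscher tilt maps a variance gamma law to another variance gamma law with rescaled parameters is a standard computation from the cumulant (sketched in the comments), not a result of this paper. the small grid over the tilt parameter stands in for the asymptotically optimal choice developed below.

```python
import numpy as np

# illustrative exponential variance gamma (VG) market; parameters are placeholders
S0, K, r, T = 100.0, 70.0, 0.03, 1.0
sigma, nu, theta = 0.2, 0.2, -0.15

def G(u):
    """VG cumulant per unit time: -(1/nu) * log(1 - u*theta*nu - u^2*sigma^2*nu/2)."""
    return -np.log(1.0 - u * theta * nu - 0.5 * u * u * sigma**2 * nu) / nu

omega = -G(1.0)            # drift correction so that exp(-r t) S_t has constant mean

def sample_XT(n, h, rng):
    """Terminal VG value under the Esscher-h measure (h = 0 gives the original law).

    Completing the square in G(u + h) - G(h) shows the tilted law is again VG with
    sigma_h = sigma / sqrt(c), theta_h = (theta + h*sigma^2) / c and the same nu,
    where c = 1 - h*theta*nu - h^2*sigma^2*nu/2 must stay positive.
    """
    c = 1.0 - h * theta * nu - 0.5 * h * h * sigma**2 * nu
    sig_h, th_h = sigma / np.sqrt(c), (theta + h * sigma**2) / c
    g = rng.gamma(shape=T / nu, scale=nu, size=n)          # gamma subordinator at time T
    return th_h * g + sig_h * np.sqrt(g) * rng.standard_normal(n)

def is_put_price(n, h, seed=0):
    """Importance-sampling estimate of the put price and its standard error."""
    rng = np.random.default_rng(seed)
    X = sample_XT(n, h, rng)
    payoff = np.maximum(K - S0 * np.exp((r + omega) * T + X), 0.0)
    weight = np.exp(-h * X + T * G(h))                     # Radon-Nikodym weight dP/dP^h
    vals = np.exp(-r * T) * payoff * weight
    return vals.mean(), vals.std(ddof=1) / np.sqrt(n)

if __name__ == "__main__":
    for h in (0.0, -2.0, -4.0, -6.0):                      # h = 0 is plain Monte Carlo
        price, se = is_put_price(200_000, h)
        print(f"h = {h:5.1f}   price = {price:.5f}   std. err. = {se:.6f}")
```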
when the logarithm of the pay - off is concave , which is the case in many applications , the proxy for the variance may be further simplified using convex duality .the computation of the asymptotically optimal parameter then reduces to one finite - dimensional optimization problem for european options and to the solution of one ode system ( euler - lagrange equations ) for the path - dependent ones .in other words , additional complexity is the same as in the case of the black - scholes model studied in , even though our model is much more general and complex .the rest of this paper is structured as follows . in section [ ld.sec ]we recall the notation and results from the theory of large deviations which are used in the paper .section [ main.sec ] provides a representation for the proxy of the variance , a simplified representation in the case of concave log - payoffs and an easy to verify criterion for concavity .section [ ex.sec ] presents explicit computations for european basket and asian options .numerical illustrations of these examples , in the context of the variance gamma model , are provided in section [ num.sec ] .lastly , the appendix contains a technical lemma .in this section we recall the known results on large deviations for lvy processes , which will be used in the sequel , and introduce all the necessary notation .we first formulate the large deviations principle ( ldp ) on abstract spaces .let be a haussdorf topological space endowed with its borel -field .a rate function is a ] .the subspace of containing all functions on ] .note that there is a one - to - one correspondence between elements of and those of : in particular , for every , the function ) ] with lvy measure and without diffusion part .we introduce a family of lvy processes defined by .the following theorem can be found in .[ leonard.thm ] suppose that assumption ( a1 ) holds true .then the family satisfies the ldp in for the -topology with the good rate function where } x_t \mu(dt ) - \int_0^t g(\mu([t , t ] ) ) dt\right\ } & \text{if } x \in v_r \\ & + \infty & \text{otherwise . }\end{aligned } \right.\ ] ] note that de acosta proves an ldp for the uniform topology under the assumption that all exponential moments are finite . however , this assumption is too strong in practice , since most financial models are based on lvy processes with exponential tail decay .the rate function appearing in theorem [ leonard.thm ] admits a more explicit expression ( see section 6 in ) .define the fenchel conjugate of : and its recession function then , } l_a(\frac{d\dot x_a}{dt}(t))dt + \int_{[0,t ] } l_s(\frac{d\dot x_s}{d\mu}(t))d\mu & \text{if } x \in v_r \\ & + \infty & \text{otherwise , } \end{aligned } \right.\ ] ] where is the decomposition of the measure in absolutely continuous and singular pars with respect to and in any nonnegative measure on ] . 
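for a concrete model the objects recalled in this section are straightforward to tabulate. the sketch below evaluates the cumulant g of a variance gamma process on its bounded effective domain — the exactly exponential tail-decay situation discussed later in the paper — and computes the fenchel conjugate l_a(x) = sup_u { u x - g(u) } by a brute-force grid search; the parameters are the same illustrative ones as in the earlier pricing sketch, and the grid resolution is arbitrary.

```python
import numpy as np

sigma, nu, theta = 0.2, 0.2, -0.15          # illustrative VG parameters

def g(u):
    """VG cumulant; returns +inf outside the effective domain."""
    arg = 1.0 - u * theta * nu - 0.5 * u * u * sigma**2 * nu
    return -np.log(arg) / nu if arg > 0 else np.inf

def domain_endpoints():
    """Roots of 1 - u*theta*nu - u^2*sigma^2*nu/2 = 0 bound the domain of g."""
    roots = np.roots([-0.5 * sigma**2 * nu, -theta * nu, 1.0])
    return float(min(roots)), float(max(roots))

def fenchel_conjugate(x, n_grid=20001):
    """L_a(x) = sup_u { u*x - g(u) }, evaluated by brute force for illustration."""
    lo, hi = domain_endpoints()
    us = np.linspace(lo + 1e-6, hi - 1e-6, n_grid)
    vals = us * x - np.array([g(u) for u in us])
    return vals.max()

if __name__ == "__main__":
    lo, hi = domain_endpoints()
    print("effective domain of g: (%.3f, %.3f)" % (lo, hi))
    for x in (-0.5, -0.1, 0.0, 0.1, 0.5):
        print("L_a(%+.2f) = %.4f" % (x, fenchel_conjugate(x)))
```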
the optimal choice of should minimize the variance of the estimator under , - \mathbb e\left[p\right]^2\ ] ] denote .then , using lemma [ gfunc.lm ] , the minimization problem writes } x_t \cdot \theta(dt ) + \int_0^t g(\theta([t , t]))dt \right\}\right],\ ] ] where given the possibly complex form of the log - payoff , the above expression for the variance is difficult to minimize .our approach is instead to minimize a proxy of the variance , which has a more tractable form .our first main result provides an expression for such a proxy , which we aim to minimize to obtain an asymptotically optimal variance reduction .[ varadhanapplies ] let assumption ( a1 ) hold true , and suppose that the set is open and that is continuous on this set for the -topology and satisfies } \sum_{i=1}^n |x^i_s|\ ] ] with . then , for every such that )| < \lambda_0 - 4nb,\ ] ] it holds that } x^\varepsilon_t \cdot \theta(dt ) } { \varepsilon } } \right ] = \sup_{x \in d } \left\{2h(x ) - \int_{[0,t ] } x_t \cdot \theta(dt ) - \bar j(x ) \right\}. % \\ % & = \sup_{x\in v_r } \left\{2h(x ) - \int_{[0,t ] } x_t \cdot \theta(dt ) - \int_{[0,t ] } l_a(\frac{d\dot x_a}{dt}(t))dt - \int_{[0,t ] } l_s(\frac{d\dot x_s}{d\mu}(t))d\mu \right\}.\end{aligned}\ ] ] since the pay - off is assumed to be continuous , the continuity of the mapping } x_t \cdot \theta(dt)\ ] ] for the -topology follows from the definition of this topology .it remains to check the integrability condition of varadhan s lemma . by assumptions of the proposition, we may choose and with , as well as , such that )| < \lambda_0 \quad \text{and}\quad 4 pn b < \lambda_0 .\end{aligned}\ ] ] moreover , there exists with <0\quad \text{and}\quad \mathbb e\left [ ( -x^i_{t}-bt)e^{4b\gamma pn ( -x^i_{t}-bt)}\right]<0\ ] ] for all .then , by the assumption on , the cauchy - schwarz inequality , and lemma [ wienerhopf.lm ] , the following estimates hold true : } x^\varepsilon_t \cdot \theta(dt ) ) } { \varepsilon}}\right ] \\ & \leq 2a\gamma + \lim\sup_{\varepsilon\to 0 } \varepsilon \log \mathbb e \left[e^{\frac{\gamma(2b\sup_{s\in [ 0,t ] } \sum_{i=1}^n|x^{\varepsilon , i}_s| - \int_{[0,t ] } x^\varepsilon_t \cdot \theta(dt ) ) } { \varepsilon}}\right]\\ & = 2a\gamma + \lim\sup_{\varepsilon\to 0 } \varepsilon \log \mathbb e \left[e^{{\gamma(2b\sup_{s\in [ 0,t]}\sum_{i=1}^n |x^i_{s/\varepsilon}| - \int_{[0,t ] } x_{t/\varepsilon } \cdot \theta(dt ) ) } } \right]\\ & \leq 2a\gamma + 2bnt+ \sum_{i=1}^n \lim\sup_{\varepsilon\to 0 } \varepsilon \log \mathbb e\left[e^{4b\gamma pn\sup_{s\in [ 0,t ] } ( x^i_{s/\varepsilon}-bs/\varepsilon)}\right ] \\&+\sum_{i=1}^n\lim\sup_{\varepsilon\to 0 } \varepsilon \log \mathbbe\left[e^{4b\gamma pn\sup_{s\in [ 0,t ] } ( -x^i_{s/\varepsilon}-bs/\varepsilon)}\right]\\ & + \lim\sup_{\varepsilon\to 0 } \varepsilon \log \mathbb e \left[e^{- q\gamma\int_{[0,t ] } x_{t/\varepsilon } \cdot \theta(dt ) } \right]\\ & \leq 2a\gamma + 2bnt+ \sum_{i=1}^n \lim\sup_{\varepsilon\to 0 } \varepsilon \log \mathbb e\left[e^{4b\gamma pn\sup_{s\geq 0 } ( x^i_{s}-bs)}\right ] \\ & + \sum_{i=1}^n\lim\sup_{\varepsilon\to 0 } \varepsilon \log \mathbb e\left[e^{4b\gamma pn\sup_{s\geq 0 } ( -x^i_{s}-bs)}\right ] + \int_{[0,t ] } g(-q\gamma\theta([t , t ] ) ) < \infty . 
\end{aligned}\ ] ] the result of proposition [ varadhanapplies ] leads us to introduce the following definition .[ optvar.def ] we say that the variance reduction parameter is asymptotically optimal if it minimizes } x_t \cdot \theta(dt ) + \int_{[0,t ] } g(\theta([t , t]))dt- \bar j(x ) \right\}\ ] ] over .the optimization functional in the definition [ optvar.def ] is difficult to compute in practice , since the rate function is usually not known explicitly .the following theorem shows that for concave log - payoffs , the computation of the optimal parameter is greatly simplified .european basket put options and many paht - dependent put - like payoffs encountered in practice are indeed concave .[ maindual.thm ] let be concave and upper semicontinuous on and assume that for every there is a sequence converging to in the -topology and such that .let assumption ( a2 ) be satisfied .then , } x_t \cdot \theta(dt ) + \int_{[0,t ] } g(\theta([t , t]))dt - \bar j(x ) \right\ } \\= 2\inf_{\theta \in m}\{\widehat h(\theta ) + \int_{[0,t ] } g(\theta([t , t]))dt\}\label{dualtheta}\end{gathered}\ ] ] where } x_t \theta ( dt)\}.\ ] ] moreover , if the infimum in the left - hand side of is attained by then the same value attains the infimum in the right - hand side of .the assumption that the effective domain of is bounded is the most restrictive .it is satisfied by models where the tail decay is exactly exponential , such as variance gamma , normal inverse gaussian , cgmy and their multidimensional versions .however , it rules out models with faster than exponential tail decay such as the celebrated merton s model .we expect that for such models a similar result may still be shown , but one would need to use different , and slightly more complex methods ( orlicz spaces instead of ) . to keep the length of the proof reasonable , we have chosen to present the argument in the case of a bounded domain .step 1 . by assumption of the proposition, } x_t\cdot \mu(dt)+ \int_{[0,t ] } g(\mu([t , t ] ) ) dt\ } \notag\\&\qquad= { \sup_{x\in v^{ac}_r}\inf_{\mu\in m}}\{2h(x ) - \int_{[0,t ] } x_t\cdot \mu(dt)+ \int_{[0,t ] } g(\mu([t , t ] ) ) dt\}\notag\\ & \qquad= { \sup_{x\in v^{ac}_r}\inf_{\mu\in m}}\{2h(x ) - \int_{[0,t ] } \dot x_t\cdot \mu([t , t])dt + \int_{[0,t ] } g(\mu([t , t ] ) ) dt\}\notag\\ & \qquad= { \sup_{x\in v^{ac}_r}\inf_{y\in l^\infty([0,t])}}\{2h(x ) - \int_{[0,t ] } \dot x_t\cdot y_t dt + \int_{[0,t ] } g(y_t ) dt\},\label{step1minimax}\end{aligned}\ ] ] where the last equality follows by approximating an function with a sequence of continuous functions with bounded variations , and using the dominated convergence theorem .our aim in this step is to show that and in may be exchanged .we adapt the classical argument , which may be found , e.g. , in .fix and define , for and ) ] , since the family is bounded in , by banach - alaoglu theorem there exists a sequence and a point such that in the weak topology of and hence in the weak topology of for all . 
since }g(y_t ) dt\ ] ] is convex and lower semicontinuous in the strong topology of ( see ) , it is also weakly lower semicontinuous and from it follows that for , ) } l_{\varepsilon_j}(x , y ) \leq\sup_{x\in v^{ac}_r } \inf_{y\in l^\infty([0,t ] ) } l(x , y),\end{aligned}\ ] ] which , together with the standard minimax inequality , proves that ) } l(x , y ) = \inf_{y\in l^\infty([0,t ] ) } \sup_{x\in v^{ac}_r } l(x , y)\\ & = \inf_{y\in l^\infty([0,t ] ) } \sup_{x\in v^{ac}_r } \{2h(x ) - \int_{[0,t ] } \dot x_t\cdot y_t dt + \int_{[0,t ] } g(y_t ) dt \}.\end{aligned}\ ] ] step 3 .given ) ] ( use lusin s theorem plus an approximation of continuous functions with functions of bounded variation ) . then , by the dominated convergence theorem , as , } \dot x_t \cdot y^n_t dt \to \int_{[0,t ] } \dot x_t \cdot y_t dt.\ ] ] on the other hand , letting , we have that , for each , } g(y^n_t\mathbf 1_{y^n_t \in a_m } ) dt \to \int_{[0,t ] } g(y_t\mathbf 1_{y^n_t \in a_m } ) dt\ ] ] by dominated convergence and , for each , } g(y^n_t\mathbf 1_{y^n_t \in a_m } ) dt \to \int_{[0,t ] } g(y^n_t ) dt\ ] ] by monotone convergence .these two observations together with an integration by parts for the second term , imply that ) } \sup_{x\in v^{ac}_r } l(x , y ) = \inf_{\mu \in m } \sup_{x\in v^{ac}_r } \{2h(x ) - \int_{[0,t]}x_t\cdot d\mu + \int_{[0,t ] } g(\mu([t , t ] ) ) dt \}.\ ] ] in addition , the assumption of the proposition implies that the inner supremum may also be computed over .finally , the following computation allows to finish the proof . } x_t \cdot \theta(dt ) + \int_{[0,t ] } g(\theta([t , t]))dt - \bar j(x)\}\\ & \qquad= \inf_{\theta\in m } { \sup_{x\in v_r}\inf_{\mu\in m}}\{2h(x ) - \int_{[0,t ] } x_t ( \theta(dt)+ \mu(dt ) ) \\ & \qquad\qquad \qquad + \int_{[0,t ] } g(\theta([t , t]))dt + \int_{[0,t ] } g(\mu([t , t ] ) ) dt\}\\ & \qquad= \inf_{\theta\in m } { \inf_{\mu\in m}\sup_{x\in v_r}}\{2h(x ) - \int_{[0,t ] } x_t ( \theta(dt)+ \mu(dt ) ) \\ &\qquad\qquad \qquad + \int_{[0,t ] } g(\theta([t , t]))dt + \int_{[0,t ] } g(\mu([t , t ] ) ) dt\ } \\ &\qquad = \inf_{\theta\in m } \inf_{\mu\in m}\{2\widehat h\left(\frac{\theta+\mu}{2}\right ) + \int_{[0,t ] } g(\theta([t , t]))dt + \int_{[0,t ] } g(\mu([t , t ] ) ) dt\ } \\ & \qquad=2\inf_{\theta \in m}\{\widehat h(\theta ) + \int_{[0,t ] } g(\theta([t , t]))dt\},\end{aligned}\ ] ] where the last equality follows by convexity of , taking .[ [ concavity - of - the - log - payoff ] ] concavity of the log - payoff + + + + + + + + + + + + + + + + + + + + + + + + + + + the concavity of the log - payoff function may be tested using the following simple lemma .we recall that for .[ conc.lm ] let and assume that is concave on the set and that the set is convex .then the log - payoff is concave in .let and choose .then , which shows that is concave on .since for and the set is convex , is also concave on the whole space .in this section , we specialize the results of the previous section to several option pay - offs encountered in practice . throughout this sectionwe assume that the lvy process satisfies the assumptions ( a1 ) and ( a2 ) . 
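before going pay - off by pay - off , the following hedged sketch shows the kind of finite - dimensional problem the duality above produces for a single - asset european put : with the measure concentrated at the maturity , one minimizes over a scalar an objective of the form hhat(theta ) + t g(theta ) , where hhat is a convex conjugate of the log - payoff . the exact constants and signs in the paper's displays are not recoverable here , so this is a structural illustration only ; the variance gamma cumulant and all parameter values are ours .
....
# hedged structural sketch of the finite-dimensional problem for a single-asset
# european put with payoff (k - s0*exp(x_t))_+ .  we assume an objective of the
# form  hhat(theta) + t*g(theta)  with  hhat(theta) = sup_x { log(k - s0*e^x) - theta*x };
# the paper's exact normalization may differ.  g is the illustrative variance
# gamma cumulant; all parameter values are invented for the example.
import numpy as np
from scipy.optimize import minimize_scalar

sigma, nu, drift = 0.2, 0.2, -0.14          # illustrative vg parameters
s0, k, t = 100.0, 90.0, 1.0                 # spot, strike, maturity

def g(u):
    arg = 1.0 - drift * nu * u - 0.5 * sigma ** 2 * nu * u ** 2
    return -np.log(arg) / nu if arg > 0 else np.inf

def g_domain():
    r = np.sort(np.roots([-0.5 * sigma ** 2 * nu, -drift * nu, 1.0]).real)
    return r[0], r[1]

x_max = np.log(k / s0)                      # log-payoff is finite only for x < x_max

def hhat(theta):                            # finite for theta < 0 (put-like payoff)
    res = minimize_scalar(lambda x: theta * x - np.log(k - s0 * np.exp(x)),
                          bounds=(x_max - 20.0, x_max - 1e-8), method="bounded")
    return -res.fun

lo, _ = g_domain()
res = minimize_scalar(lambda th: hhat(th) + t * g(th),
                      bounds=(lo + 1e-6, -1e-6), method="bounded")
print("asymptotically optimal theta (sketch):", res.x)
....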
for each considered pay - off, we need to check the assumptions of proposition [ varadhanapplies ] to ensure that the asymptotically optimal variance reduction measure may indeed be defined as in definition [ optvar.def ] , and the assumptions of theorem [ maindual.thm ] , to ensure that one can use the simplified formula to compute .[ [ general - european - pay - off ] ] general european pay - off + + + + + + + + + + + + + + + + + + + + + + + + we first check the assumptions of theorem [ maindual.thm ] and show that the problem of finding the optimal parameter becomes finite - dimensional .the assumptions of proposition [ varadhanapplies ] can be checked on a case - by - case basis as will be illustrated below .[ euro.prop ] assume that with concave and upper semicontinuous .then , assumptions of theorem [ maindual.thm ] are satisfied and , where is the dirac measure at , and where .the log - payoff clearly satisfies the assumptions of theorem [ maindual.thm ] . if , then since one can choose with arbitrary .this means that one can restrict the optimization in to measures of the form where , and the rest of the proof follows easily .we observe that the function is known explicitly in most models .in addition , under the measure , is still a lvy process which often falls into the same parametric class ( see e.g. , the variance gamma example in the following section ) . thus , the only overhead of using the importance sampling estimator proposed in this paper for european options is due to the additional time needed to solve an explicit convex optimization problem in dimension , which is usually negligible .[ [ european - basket - put - option ] ] european basket put option + + + + + + + + + + + + + + + + + + + + + + + + + + now consider a specific european pay - off of the form .then , using the notation of the previous paragraph , since this functions is bounded from above and continuous on the set where it is not equal to , assumptions of proposition [ varadhanapplies ] are satisfied and one can define the asymptotically optimal variance reduction measure . on the other hand ,the function is concave on by convexity of the exponential and the set is convex .therefore , by lemma [ conc.lm ] , the function is concave . since it is also clearly upper semicontinuous , the optimal measure given in proposition [ euro.prop ] , where the convex conjugate of is easily shown to be explicit and given by numerical examples for the european basket put option are given in the next section .[ [ asian - put - option ] ] asian put option + + + + + + + + + + + + + + + + in this example we consider the asian option with log - payoff first note that may not be continuous in the -topology even on the set where it is finite .we shall nevertheless use the definition of the asymptotically optimal variance reduction parameter .this may be justified by the fact the discretely sampled asian option is -continuous , and the variance of the discretely sampled asian pay - off converges to that of the continuously sampled pay - off as the discretization step tends to zero . let us now check the assumptions of theorem [ maindual.thm ] .remark that is concave by convexity of the exponential , and for such that and , which implies that the set is convex .therefore , by lemma [ conc.lm ] , is concave .moreover , assume that in . 
then , by fatou s lemma which shows that is upper semicontinuous .finally , may be approximated by a uniformly bounded sequence of , so that by the dominated convergence theorem .therefore , all assumptions of theorem [ maindual.thm ] are satisfied by the asian put option .the convex conjugate of and the asymptotically optimal parameter are described by the following proposition . * if is absolutely continuous , with density ( also denoted by ) satisfying for all ] such that .then , letting for and for , and making tend to , we see that .therefore , from now on we may assume that is a negative measure .assume that it is not absolutely continuous .then , there exists such that for all , there exists a finite sequence of pairwise disjoint sub - intervals of ] , the optimal is the minimizer of where is the `` adjoint state '' satisfying numerical examples for the asian put option are given in the next section .in this section , we illustrate the results of this paper with numerical computations in the multivariate variance gamma model .let , be a positive definite matrix , and define where is a standard brownian motion in dimension , is a gamma process with = t ] for all and .then , the cumulant generating function under the original measure is given by with under the measure , the cumulant generating function of can be written as therefore , under the measure , the process is also a variance gamma process with parameters , , and . [ [ vanilla - put - in - the - variance - gamma - model ] ] vanilla put in the variance gamma model + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + in the first example , we let and price a european put option with pay - off .the model parameters are , and , which corresponds to annualized volatility of , skewness of and excess kurtosis of .table [ vred1d.tab ] shows the variance reduction ratios and the values of the asymptotically optimal parameter as function of strike and time to maturity .we see that the highest ratios are attained for out - of - the - money options , whose exercise is a rare event , but that even for at - the - money options , the variance reduction ratios remain quite significant .it is also important to understand , how close are these ratios to the optimal ones which would have been obtained by minimizing the actual variance of the estimator rather than its asymptotic proxy .this is illustrated in figure [ vred1d.fig ] , which plots the variance of the importance sampling estimator ( evaluated by monte carlo ) as function of the parameter .we see that for the chosen parameter values is very close to optimality ..single - asset european put option .top : variance reduction ratios as function of time to maturity , for .bottom : variance reduction ratios as function of strike , for . [ cols="<,^,^,^,^,^,^",options="header " , ] , for and .,scaledwidth=60.0% ][ wienerhopf.lm ] let be a lvy process .we denote .let be such that <\infty ] . then , < \infty\ ] ] for all . 
by the wiener - hopf factorization ( theorem 6.16 in ,see also equation ( 47.9 ) in ) , dt = \exp\left(\int_0^\infty \frac{e^{-qt}}{t } dt \int_0^\infty ( e^{\beta x } - 1 ) \mathbb p[x_t\in dx ] \right).\ ] ] letting and using the cauchy - schwarz inequality , we get & = \exp\left(\int_0^\infty \frac{1}{t } dt \int_0^\infty ( e^{\beta x } - 1 ) \mathbb p[x_t\in dx ] \right)\\ & = \exp\left(\int_0^\infty \frac{1}{t } dt\ , \mathbb e[(e^{\beta x_t}-1)\mathbf 1_{x_t \geq 0 } ] \right)\\ & \leq \exp\left(\int_0^\infty \frac{\beta}{t } dt\ ,\mathbb e[x_t\,e^{\beta x_t}\mathbf 1_{x_t \geq 0 } ] \right)\\ & \leq \exp\left(\int_0^\infty \frac{\beta}{t } dt\ , \mathbb e[|x_t|^{\frac{\beta'}{\beta'-\beta}}\mathbf 1_{x_t\geq 0}]^{1-\frac{\beta}{\beta'}}\ , \mathbb e[e^{\beta ' x_t}\mathbf 1_{x_t \geq 0}]^{\frac{\beta}{\beta ' } } \right)\end{aligned}\ ] ] let .then , all moments of are finite and \leq \mathbb e[|\widehat x_t|^{\frac{\beta'}{\beta'-\beta}}].\ ] ] let . then , \leq \mathbb e[|\widehat x_t|^{n^*}]^{^{\frac{\beta'}{n^*(\beta'-\beta ) } } } \leq c t^{^{\frac{\beta'}{n^*(\beta'-\beta)}}}\ ] ] for some .thus , \leq \exp\left(\beta c\int_0^\infty dt\ , t^{\frac{1}{n^*}-1}\ , \mathbb e[e^{\beta ' x_t}\mathbf 1_{x_t \geq 0}]^{\frac{\beta}{\beta ' } } \right)\label{int}.\end{aligned}\ ] ] further , = e^{t\psi(\beta ) } \widehat{\mathbb p}[x_t\geq 0],\ ] ] where }. $ ] by cramer s theorem , = -\inf_{x\geq 0}\lambda^*(x ) = - \lambda^*(0),\ ] ] where \ } = \sup_{\lambda } \{\lambda x - \psi(\lambda + \beta ) + \psi(\beta)\}.\ ] ] therefore , \right ) = \inf_{\lambda } \psi(\lambda + \beta ) < 0,\ ] ] which shows that the integral in converges .
|
we develop generic and efficient importance sampling estimators for monte carlo evaluation of prices of single- and multi - asset european and path - dependent options in asset price models driven by lvy processes , extending earlier works which focused on the black - scholes and continuous stochastic volatility models . using recent results from the theory of large deviations on the path space for processes with independent increments , we compute an explicit asymptotic approximation for the variance of the pay - off under an esscher - style change of measure . minimizing this asymptotic variance using convex duality , we then obtain an easy to compute asymptotically efficient importance sampling estimator of the option price . numerical tests for european baskets and for asian options in the variance gamma model show consistent variance reduction with a very small computational overhead . * key words : * lvy processes , option pricing , variance reduction , importance sampling , large deviations * msc2010 : * 91g60 , 60g51
|
the process of extracting or reconstructing neuronal shapes from an em dataset is laborious . as a result , the largest connectomes produced involve only hundreds to thousands of neurons .nanometer resolution imaging is necessary to resolve important features of neurons , but as a consequence , a neuron that spans just 50 microns might intersect thousands of image planes .also , neurons often exhibit complicated branching patterns .figure [ fig : neuronex]a shows a small part of an em dataset and an intersecting neuron .* partial neurons extracted from em data .* a ) neurons can have intricate branching and span thousands of images .b ) single false merge or false split errors can greatly impact the neuron shape and its corresponding connectivity.,scaledwidth=80.0% ] image segmentation attempts to reconstruct these shapes automatically , by first identifying voxels belonging to neuronal boundaries and then partitioning the data using these boundaries , such that each partition yields a different neuron . despite continual advances in machine learning ,the state - of - the - art segmentation approaches still require manual `` proofreading '' .proofreading involves merging falsely split segments and splitting falsely merged ones .algorithms are generally biased in favor of false split over false merge errors .this is because manual correction of a falsely split segment is trivial , but manually splitting a falsely merged segment is much more labor intensive , requiring definition of the split boundary .recent efforts aim to segment and evaluate large - scale image segmentation .the general approach involves segmenting small subvolumes and concatenating these subvolumes to form a segmentation for the entire image volume .such approaches still do not take the global context of the segmentation problem into account .thus , even though state - of - the - art strategies may perform well when compared with small , manually reconstructed ground truth volumes , the quality of their results still suffers on large datasets .for example , if the segmentation strategy employs a classifier which fails to generalize to all parts of the dataset , false merges may be concentrated in one region of the global dataset ( figure [ fig : badseg ] ) , propagating errors throughout the volume .this scenario is especially likely in regions where the original data contains image artifacts . even in regions with good classifier performance , sparse errors may not corrupt the local segmentation quality , but the effect of these sparse errors within the larger volume can be catastrophic .since neurons typically span vast lengths within a large volume , even relatively sparse errors within a local context can have significant nonlocal consequences .these can severely affect the accuracy of the extracted connectome , as shown in figure [ fig : neuronex]b . * em examples that are hard to segment . *a ) this example contains parts of the neuron not included in the training set .poor classifier generalizability contributes to a bad segmentation result .b ) image artifacts such as membrane holes can result in false merging.,scaledwidth=80.0% ] in addition to these challenges , performing image segmentation on teravoxel-(or higher ) scale datasets involves many practical considerations : 1 . processing a large dataset requires significant compute resources .unavoidable crashes ( e.g. , those due to network outages ) or bugs that kill a long running cluster job require costly re - runs .2 . 
segmentation algorithms are continually advancing . it is important to be able to quickly swap and evaluate algorithm components and to partially refine an existing segmentation . large software frameworks tend to be difficult to understand and deploy by non - experts in diverse compute environments . in this paper , we introduce an open - source segmentation framework that runs efficiently on large image datasets . our framework is implemented in apache spark , a fast distributed engine for processing data in parallel . spark is widely adopted on multiple platforms , making it easy to deploy in many different compute environments . we also layer this framework over a volume data - service for accessing and versioning volume data , dvid , which allows us to abstract the storage / data layer from the core algorithms . other key features of our framework include : 1 . a flexible plugin system to enable easy swapping of segmentation components 2 . check - pointing mechanisms that save partial computations to enable fast rollback and recovery 3 . in - memory representations of segmentation to reduce disk i / o 4 . a novel segmentation stitching strategy to prevent propagation of local segmentation errors . we evaluated the pipeline on several em datasets . we demonstrate the scalability and the efficiency of rollback / recovery on a 400 gb volume . furthermore , we show the effectiveness of our stitching approach by comparing automatic image segmentation to large ground - truth datasets . the paper is organized as follows . in section [ sec : background ] , a standard framework for segmentation is described . we then introduce our large - scale framework in section [ sec : framework ] and highlight the key features of the code design in section [ sec : dvidsparkservices ] . finally , we present results and conclusions . for reference , the appendix outlines the segmentation plugins available and provides a sample configuration file . a general strategy for segmenting an em volume is shown in figure [ fig : basicseg ] . a classifier is trained to distinguish neuronal membrane boundaries from the rest of the image . applying the classifier to the image volume produces a boundary probability map . typically , some supervoxel generation algorithm such as watershed is applied to the data , yielding an oversegmentation of regions within likely boundaries . depending on the voxel resolution ( nearly isotropic or anisotropic ) , the previous steps could be applied to either 2d slices or 3d subvolumes . because over - segmentation errors are easier to correct than under - segmentation errors , the boundary prediction is often conservative , producing a segmentation with many small fragments . clustering or agglomerating this over - segmentation produces the final result . in most cases , a neuron is still split into multiple segments . * segmentation workflow for em dataset . * the workflow typically involves voxel prediction , generation of an over - segmentation ( using algorithms like watershed ) , and agglomerating these segments.,scaledwidth=100.0% ] * framework for segmenting large datasets .
*the dataset is split into several overlapping regions .each region is segmented and the overlapping bodies are stitched together.,scaledwidth=100.0% ] for large datasets , the computation must be partitioned across multiple machines .computing the boundary prediction typically requires only local context and is easily parallelized .while watershed and agglomeration would ideally be computed as global operations across the entire volume , it is reasonable to spatially partition the dataset and apply these algorithms on overlapping blocks .the shared regions between blocks are then used to merge the blockwise segmentation results together at the end of the pipeline .this pipeline is shown in figure [ fig : largeseg ] and is roughly the strategy used in previous work . in principle , agglomeration could be done with more global scope by successively partitioning the global graph that describes the inter - connections between connected components produced by watershed .we introduce our framework at the algorithm and system level in this section , modeled off of the design in figure [ fig : largeseg ] . in section [ sec : dvidsparkservices ] , we will discuss lower - level details .after first generating segmentation on small , overlapping subvolumes , their segments are stitched based on voxel overlap along a subvolume face .we will first assume that the subvolume segmentation is strictly over - segmented ( no false merges ) .even with this simplifying assumption , there will not be 100% overlap between segments from the same neuron . in one scenario , slight algorithm instability and differing contextscan cause small boundary shifts in the overlap region between subvolumes of a few pixels .for example , figure [ fig : stitch]a shows two neurons running in parallel whose exact boundary varies between subvolumes . even with slight overlap should only merge with and not . in another scenario, a neuron can branch near the subvolume border .figure [ fig : stitch]b shows that segment and should be merged together and joined with segment .stitching based on any overlap will cause the neuronal segments to be merged correctly , unlike in the first scenario . *scenarios for stitching neurons based on overlap between subvolumes . *four scenarios are given showing two subvolumes where the overlap is shown in gray .a ) slightly different context and algorithm instability can cause a small shift in the segmentation leading to a non - exact match .b ) neuronal branching can link two segments in another subvolume together .c ) false merging in subvolume causes false merging in subvolume due to segment overlap .d ) conservative matching can avoid propagating false mergers.,scaledwidth=100.0% ] a simple stitching rule properly handles both scenarios .a segment in subvolume matches a segment in subvolume ( meaning they are in the match set ) , if and only if , overlaps with the most or vice versa : applying this rule to all and means that each segment in ( and ) will be linked to at least one other segment in ( and ) .similarly , we could define the matching condition as needing a minimum overlap threshold : this would not guarantee one match for each segment . for an over - segmented volume ,these definitions handle most scenarios . 
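the two matching rules just described are easy to state in code . the sketch below ( ours , not the dvidsparkservices implementation ) counts voxel overlaps between the two label volumes restricted to a shared overlap region and keeps a pair when either segment is the other's largest - overlap partner ; a minimum - overlap threshold gives the second variant , and requiring the relation in both directions approximates the more conservative one - match constraint discussed in the next paragraphs .
....
# minimal numpy sketch of the overlap-based matching rule: segment a in
# subvolume 1 matches segment b in subvolume 2 when b is the label that a
# overlaps the most, or vice versa.  labels1/labels2 are the two segmentations
# restricted to the same overlap region; label 0 is background.  this is an
# illustration, not the dvidsparkservices code.
import numpy as np
from collections import defaultdict

def overlap_counts(labels1, labels2):
    counts = defaultdict(int)
    for a, b in zip(labels1.ravel(), labels2.ravel()):
        if a != 0 and b != 0:
            counts[(int(a), int(b))] += 1
    return counts

def matches(labels1, labels2, min_overlap=1, mutual=False):
    counts = overlap_counts(labels1, labels2)
    best1, best2 = {}, {}                       # label -> (count, best partner)
    for (a, b), n in counts.items():
        if n > best1.get(a, (0, None))[0]:
            best1[a] = (n, b)
        if n > best2.get(b, (0, None))[0]:
            best2[b] = (n, a)
    pairs = set()
    for (a, b), n in counts.items():
        forward = best1[a][1] == b
        backward = best2[b][1] == a
        hit = (forward and backward) if mutual else (forward or backward)
        if hit and n >= min_overlap:
            pairs.add((a, b))
    return pairs

if __name__ == "__main__":
    l1 = np.array([[1, 1, 2], [1, 2, 2]])       # face labels from subvolume 1
    l2 = np.array([[5, 5, 5], [6, 6, 7]])       # face labels from subvolume 2
    print(sorted(matches(l1, l2)))              # [(1, 5), (1, 6), (2, 5), (2, 7)]
    print(sorted(matches(l1, l2, mutual=True))) # [(1, 5)]
....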
there are some remaining corner cases that could be handled with a few additional considerations . for instance , segments that branch outside of the overlap region can be temporarily split into connected components and each component can separately seek a match . segments with a large overlap in absolute voxel count could be matched even if the above conditions fail to hold . complications arise in practice because of false merging . when two neuronal segments in a substack are falsely joined , these errors can propagate to the substack boundary . given the complexity of using higher - level biological context to guide segmentation and stitching , it is not easy to tell whether branching that occurs at a substack face is the result of error or biology . figure [ fig : stitch]c shows an example where two neurons are falsely merged due to an image artifact , causing the simple stitching rule to result in more errors . a single substack with bad image quality or poor segmentation for any other reason could potentially propagate errors throughout the entire volume . the effects are more dramatic for larger volumes since a single neuron spans multiple subvolumes , and there are more opportunities for false merging . future work will exploit shape priors to eliminate error propagation and hopefully identify the sources of the false merges . for now , we introduce a straightforward strategy that admits many matches but conservatively avoids ones that are the most dangerous . namely , we avoid multiplying errors by eliminating matches that bridge multiple segments along a single subvolume face . for a given subvolume face , we specify that a match between two segments implies that a segment in and a segment in each only have one match : we implement a couple of heuristics to satisfy the above constraint if or has multiple matches according to equation [ eq : match ] . in one approach , we can satisfy the condition by changing to in equation [ eq : match ] . for example , if a neuron branches , we potentially choose just one branch , where there is the largest mutual overlap . this is shown in figure [ fig : stitch]d . we also consider an even more conservative approach , by changing the to in equation [ eq : match2 ] with . image segmentation over a large dataset can be very time consuming , requiring significant cluster resources . this computational load increases the likelihood that the segmentation job will fail , or , even when successful , will be invalidated by newer , better segmentation results . first , a long - running job is more vulnerable to network outages or other events that might disrupt the computation on a shared compute resource . second , any software bugs might be uncovered only after a significant portion of the computation has already completed successfully . in both cases , a potentially costly rerun of the pipeline may be necessary , but unnecessary recomputation of satisfactory results should be avoided . figure [ fig : robust ] illustrates our strategy to make the segmentation pipeline more robust to failures . specifically , we focus on the subvolume segmentation component since each task is disjoint and the other parts of the pipeline are comparatively less time consuming . our solution is to divide the set of subvolumes spanning the large dataset across multiple iterations .
in each iteration, a disjoint subset of the subvolumes is processed .the segmentation labels for each subvolume in this procedure are compressed using lossless compression ( such as lz4 ) , and serialized to disk .if there are any unexpected errors in the middle of the job , the pipeline will automatically rollback to the previously computed result ( _ i.e. , _ the most recently completed iteration ) .we note in the results that the high compressability of these label volumes results in a minimal memory and i / o footprint for these checkpoints .* our robust segmentation framework .* disjoint subsets of subvolumes are processed and checkpointed to allow recovery from any software crash .note that the dataset and final segmentation is stored using dvid.,scaledwidth=100.0% ] as mentioned earlier , there is another risk to time consuming segmentation runs .namely , it is often desirable to run multiple algorithms and to make successive refinements to the segmentation approach .however , in our application domain , the segmentation is continually refined and examined and it is undesirable to wait several months to begin this process . to address these issues, we have the option to retain previously examined ( proofread ) results across future segmentations .specifically , we provide the option to preserve a set of segment ids .these segments are masked out of the image , so that the watershed algorithm floods around these regions .our segmentation approach requires efficient access to subsets of a large dataset , which poses many infrastructural challenges . as em datasets extend to the 100tbs and beyond , storing this data in a single file on a single machine , such as in hdf5 format ,is limiting in bandwidth .furthermore , distributing data using a customized system of sharding file blobs might not generalize across different compute environments and also fails to reuse ongoing efforts outside of connectomics in large - scale database technology .these issues are complicated by the desire to also keep track of segmentation changes due to proofreading or revisions in the segmentation algorithms . to handle these issues , we adopt the distributed , versioned , image - oriented dataservice ( dvid ) to access our large image volume data .dvid allows one to access large volumetric data through a restful api and track changes in label data using a _git_-like model of commits and branches . by satisfying the service interface, we can isolate the details of data storage from our segmentation services .in particular , we fetch subvolumes through dvid and write segmentation label data back through dvid .we implement the segmentation framework _ dvidsparkservices _ using apache spark , which allows us to manipulate these large label volumes in shared memory across several cluster nodes . 
while the subvolume segmentation part of the pipeline requires minimal inter - process communication and might not benefit much from shared memory dataset representation , we believed spark to be an ideal framework for several reasons : * spark supports several primitives for working on large distributed datasets which support more structured semantics than more traditional approaches , such as ad - hoc batch scripts , would provide .* spark is supported on many different cluster compute environments .* an in - memory representation for large label data will empower future algorithms to use high - level , global context to improve segmentation quality .the next subsection details the framework and its use of spark primitives. then we highlight a plugin system to enable outside contributors to flexibly swap algorithms for different parts of the pipeline .dvidsparkservices contains several workflows for working with the dvid dataservice .each workflow is designed as a custom spark application , written in python using pyspark .access to and from dvid is controlled through the sparkdvid module .our segmentation pipeline framework is one such workflow .the key stages of the pipeline and technical details are listed below , with a description of the spark primitive used in each stage . 1 .logically split the dataset space by defining overlapping bounding boxes ( one per subvolume ) which , collectively , provide coverage over a region of interest ( roi ) .( primitive : parallelize ) 2 .map each subvolume s bounding box into a grayscale subvolume .this requires reading subvolumes in parallel using dvid .( primitive : map ) 3 .divide the subvolumes into groups , each of which will be processed in a separate iteration . 1 .map each grayscale subvolume into a volume of probabilities using some voxel - level classifier .( primitive : map ) 2 .map each probability volume into an over - segmentation of supervoxels .( primitive : map ) 3 .map each over - segmentation into a final subvolume segmentation via agglomeration .( primitive : map ) 4 .optionally serialize the subvolume segmentation to disk for checkpointing .( primitive : saveasobject ) 4 .extract overlapping regions from each pair of neighboring subvolumes .( primitive : flatmap ) 5 .group subvolume overlap regions together .this requires data to be shuffled throughout the network .( primitive : groupbykey ) 6 .map each overlap region into a set of matches .return these matches to the driver , transitively resolve the matches across the subvolumes , and broadcast to each subvolume .( primitives : map , collect , broadcast ) 7 .apply the matches to each subvolume .( primitive : map ) 8 .write segmentation to dvid through sparkdvid .( primitive : foreach ) * main components of our spark framework . *the segmentation framework is a spark application that transforms the raw image dataset into a segmentation .sparkdvid handles communication with dvid .the subvolume segmentation can be defined by a custom plugin .the default plugin allows customization of boundary prediction , supervoxel creation , and agglomeration .subvolume stitching is the only major operation that causes data to be shuffled across the network.,scaledwidth=100.0% ] the main points are emphasized in figure [ fig : sparkarch ] .most operations require little communication from the driver .the primary exception is extracting the overlapping regions of a subvolume and matching them , which requires data to be moved throughout the spark cluster . 
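for orientation , the step list above can be collapsed into a short pyspark skeleton . everything application - specific is replaced by trivial stubs so that the skeleton executes on toy data with a local pyspark installation ; only the chain of spark primitives and the single groupbykey shuffle are meant to be faithful , and the real createsegmentation workflow of course performs real dvid i / o , plugin calls and checkpointing .
....
# structural pyspark sketch of steps 1-8 above.  the plugin and dvid calls
# are replaced by stubs (returning toy arrays / doing nothing) so the
# skeleton executes; it is not the dvidsparkservices implementation.
import numpy as np
from pyspark import SparkContext

def fetch_grayscale(box):           return np.zeros((8, 8, 8), np.uint8)     # step 2 stub
def predict_voxels(gray):           return gray.astype(np.float32)           # step 3a stub
def create_supervoxels(pred):       return (pred < 0.5).astype(np.uint32)    # step 3b stub
def agglomerate(supervoxels):       return supervoxels                       # step 3c stub
def overlap_faces(sv_id, labels):                                            # step 4 stub
    return [((sv_id, sv_id + 1), (sv_id, labels[-1])),
            ((sv_id - 1, sv_id), (sv_id, labels[0]))]
def find_matches(face_entries):     return []                                # step 6 stub
def apply_matches(labels, mapping): return labels                            # step 7 stub
def write_to_dvid(kv):              pass                                     # step 8 stub

sc = SparkContext(appName="segmentation-sketch")
boxes = sc.parallelize([(i, None) for i in range(4)])                        # step 1
seg = (boxes.map(lambda kv: (kv[0], fetch_grayscale(kv[1])))                 # step 2
            .map(lambda kv: (kv[0], predict_voxels(kv[1])))                  # step 3
            .map(lambda kv: (kv[0], create_supervoxels(kv[1])))
            .map(lambda kv: (kv[0], agglomerate(kv[1]))))
faces = seg.flatMap(lambda kv: overlap_faces(kv[0], kv[1]))                  # step 4
grouped = faces.groupByKey()                                                 # step 5 (the shuffle)
match_groups = grouped.map(lambda kv: find_matches(list(kv[1]))).collect()   # step 6
mapping = sc.broadcast({a: b for grp in match_groups for (a, b) in grp})     # step 6
final = seg.map(lambda kv: (kv[0], apply_matches(kv[1], mapping.value)))     # step 7
final.foreach(write_to_dvid)                                                 # step 8
....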
given the relatively small size of the overlap region compared to the subvolume ( close to 10% of the volume for each subvolume face in our experiments ) and the high compressibility of the labels , there is not much data that needs to be moved around .step # 3 executes the core segmentation algorithms .our framework is intended to flexibly adopt new algorithms and tailor solutions to specific application domains . to do this ,we provide several plugin interfaces to our segmentation framework , which can be specified when calling the workflow using a configuration file ( see appendix [ sec : config ] for an example configuration file ) . the plugin is defined in python and can call any code necessary to satisfy the given interface .figure [ fig : sparkarch ] highlights the main plugin options available currently in the segmentation framework . at the top - level , the entire segmentation module which takes grayscale volume data and returns labels for a given subvolume can be overwritten .the default segmentation plugin itself allows fine - grain plugin - based control over each stage of the pipeline. the voxel prediction , supervoxel creation , and agglomeration can all be customized and implemented with an alternative python function .we briefly describe available plugins in appendix [ sec : plugin ] .the package dvidsparkservices implements our segmentation workflow and is available on github ( https://github.com/janelia-flyem/dvidsparkservices ) .we evaluate this code both for its robustness at scale and the effectiveness of its stitching strategies . in all examples, we read and write data through dvid running over an embedded leveldb database on a single machine .therefore , read and write throughput will be a be limited by the i / o capacity of the database server .we are using the ilastik plugin for boundary prediction and the neuroproof plugin for agglomeration .the subvolume size used is 512x512x512 , plus an additional 20 pixel border on all faces to create overlap with adjacent subvolumes .we applied image segmentation on a portion of the fly optic lobe around 232,000 cubic microns in size imaged at approximately 8x8x8 nm resolution using fib - sem .this encompasses 453 gb of data or 3,375 subvolumes based on our partitioning .the segmentation preserves pre - existing proofread labels .that is , only as - yet unproofread voxels are replaced with automated segmentation results .figure [ fig : fib19seg ] shows an example segmented slice .* one image plane from the segmentation of a large optic lobe dataset . *the false coloring depicts the segmented neurons and the 3d neuron is a previously proofread body that was untouched during this re - segmentation.,scaledwidth=100.0% ] .[ tab : runtime ] * rough breakdown of time to run segmentation on a large portion of the optic lobe dataset*. the subvolume segmentation involves seven checkpoint iterations .the main component of runtime is the subvolume voxel prediction .we expect significant improvements to subvolume writes if we use a distributed back - end database behind dvid .[ cols="<,>",options="header " , ] * segmentation quality of aggressive stitching throughout a large dataset .* the vi metric is computed on small regions to provide a heatmap indicating how segmentation quality varies as a function of location .connected components is run in each subvolume to compare against the ground truth .( it is possible that bad false merging outside of a given subvolume is not fixable by connected components within the region . 
)a ) the larger medulla sample has poor segmentation in the lower regions which results in a lot of false merging .b ) the smaller mb sample has more uniform segmentation quality.,scaledwidth=100.0% ] table [ tab : vi ] shows the quality of segmentation using the conservative and straightforward matching strategy over the two datasets .the similarity is scored using variation of information ( vi ) .this metric allows us to decompose the similarity into a false merge ( f.merge ) and false split ( f.split ) score .a higher number indicates more errors . in both datasets, we see that both stitching strategies results in better total vi than no stitching .conservative stitching results in more false splits than aggressive stitching .however , for the medulla dataset , the conservative stitching produces significantly less false merge errors and therefore a better overall segmentation . in general , false merges are harder to fix manually .a parameter controls the aggressiveness of the stitching procedure .the setting should be chosen based on the expected quality of the subvolume segmentation results .in general , a large dataset probably increases the odds that there is some anomaly or problem that could cause false merging .( the medulla dataset is considerably larger than the mb dataset . )figure [ fig : heatmap ] shows the false merge vi in small subregions of the large volume as a heat map .notice that there are few areas toward the bottom of the medulla dataset that contains more false merge errors . despite the segmentation being good in many parts of the dataset, the overall similarity is compromised .we introduce a large - scale , open - source em segmentation infrastructure implemented in apache spark .we show that our segmentation can robustly segment large datasets by avoiding the propagation of bad false mergers and efficiently maintaining checkpoints .our implementation allows custom plugins for different parts of the segmentation pipeline .we have tested our system on both our internal compute cluster and on google compute engine .the use of spark will allow the solution to easily port to different compute environments .future work entails exploiting the more global context afforded by our system to guide new segmentation and stitching algorithms .the ability to store a large segmentation in memory across several machines will enable novel strategies . we also plan to leverage ongoing research to add different storage backends to dvid . by using a distributed database backend ,read and write bottlenecks will be greatly alleviated .* acknowledgements : * we would like to thank the flyem project team at janelia research campus .zhiyuan lu , harald hess , and shan xu prepared fly samples and created the image datasets .pat rivlin , shin - ya takemura , and the flyem proofreading team provided biological guidance and the groundtruthing effort .toufiq parag provided image segmentation insights and helped in tuning segmentation performance .bill katz implemented dvid api that used in our segmentation system .1 b. andres , _, `` 3d segmentation of sbfsem images of neuropil by a graphical model over supervoxel boundaries , '' _ med .image anal _ , 2012 , pp .796805 .s. beucher , f. meyer , `` the morphological approach to segmentation : the watershed transformation , '' _ mathematical morphology in image processing _ 1993 , pp .433481 .j. funke , b. andres , f. hamprecht , a. cardona , m. cook , `` efficient automatic 3d - reconstruction of branching neurons from em data . 
'' _ proc .ieee conference on computer vision and pattern recognition _, 2012 , pp . 10041011 .g. huang , v. jain , `` deep and wide multiscale recursive networks for robust image labeling , '' _ corr _ , 2013 .w. katz , `` distributed , versioned , image - oriented dataservice ( dvid ) , '' http://github.com/janelia-flyem/dvid v. kaynig , _ et al ._ , `` large - scale automatic reconstruction of neuronal processes from electron microscopy images , '' _ medical image analysis _ , 2015 , 22(1 ) , pp .77 - 88 .j. kim , m. greene , a. zlateski , k. lee , m. richardson , `` space time wiring specificity supports direction selectivity in the retina , '' _ nature _ , may 2014 , pp .331 - 336 .g. knott , h. marchman , d. wall , b. lich , `` serial section scanning electron microscopy of adult brain tissue using focused ion beam milling , '' _ j. neurosci _, 2008 , pp .2959 - 2964 .m. meila , `` comparing clusterings . ''_ proceedings of the sixteenth annual conference on computational learning theory _ , 2003 , springer .j. nunez - iglesias , r. kennedy , t. parag , j. shi , d. chklovskii , `` machine learning of hierarchical clustering to segment 2d and 3d images , '' _ plos one _ , august 2013 , 8(8 ) : e71715 .doi : 10.1371/journal.pone.0071715 .d. olbris , p. winston , s. plaza , m. bolstad , p. rivlin , l. scheffer , d. chklovskii , `` raveler : a proofreading tool for em reconstruction , '' _ unpublished _ , 2016 .t. parag , a. chakraborty , s. plaza , ` a context - aware delayed agglomeration framework for em segmentation`analysis paper , '' _ arxiv _ , june 2014 .s. plaza , l. scheffer , d. chklovskii , toward large - scale connectome reconstructions , " _ current opinion in neurobiology _ , april 2014 , pp .. s. plaza , l. scheffer , m. saunders , `` minimizing manual image segmentation turn - around time for neuronal reconstruction by embracing uncertainty , '' _ plos one _ , september 2012 , 7(9 ) : e44448 .doi : 10.1371/journal.pone.0044448 w. roncal , _ et al ._ , `` an automated images - to - graphs framework for high resolution connectomics , '' _ frontiers in neuroinformatics _ , 2015 , 9:20 .doi : 10.3389/fninf.2015.00020 . c. sommer , c. straehle , u. koethe , f. hamprecht , `` ilastik : interactive learning and segmentation toolkit , '' _ proc .ieee international symposium on biomedical imaging _, 2011 , pp . 230233 .s. takemura , a. bharioke , z. lu , a. nern , s. vitaladevuni , _ et al _ , `` a visual motion detection circuit suggested by drosophila connectomics , '' _ nature _ , 2013 , pp. 175 - 181 .s. takemura , _ et al ._ , `` synaptic circuits and their variations within different columns in the visual system of drosophila , '' _ pnas _ , 2015 , 112 , 44 , pp . 13711 - 13716. t. zhao , `` neutu , '' http://github.com/janelia-flyem/neutu m. zaharia , m. chowdhury , m. franklin , s. shenker , i. stoica , `` spark : cluster computing with working sets , '' _ hotcloud _ , 2010 , pp .10 - 10 . leveldb : a fast and lightweight key / value database , `` '' https://github.com/google/leveldb .the segmentation workflow in the dvidsparkservices package is defined by the createsegmentation class .this class implements the pre - segmentation logic for the following steps : 1 . dividing the region of interest into logical subvolumes 2 . grouping subvolumes into iterations 3 .mapping subvolumes to grayscale data and the post - segmentation logic for the following steps : 1 . serializing the subvolume segmentation results 2 . 
extracting the segmentation pixels for overlapping regions between substacks and finding matches between adjacent subvolume segments 3 . uploading the final stitched segmentation results to dvid the work of actually segmenting each subvolume block and stitching the blocks togetheris performed in a separate class , segmentor .the segmentor class implements two methods : segment ( ) and stitch ( ) . the segment ( )function is implemented in three steps , each of which can be customized by implementing a python function , which will be called at the appropriate time : * predict - voxels * create - supervoxels * agglomerate - supervoxels to customize one of these steps , a developer must simply implement a python function with the appropriate signature , and provide the name of that function in the configuration json file as described below .the logic for obtaining and uncompressing the input data to each function is handled in segmentor , so the custom python functions at each stage merely deal with plain numpy arrays .for example , an extremely simple method for producing voxel - wise membrane `` predictions '' might be the following : .... # mymodule.py def predict_via_threshold(grayscale_vol , mask_vol , threshold ) : # this function assumes dark pixels are definitely membranes . # all dark pixels are returned as 1.0 , and everything else 0.0 .return ( grayscale_vol < threshold).astype(numpy.float32 ) .... the segmentor class can be instructed to call this function during the `` predict - voxels '' step via the config file .parameters such as `` threshold '' should also be specified in the config file , as shown in the example below . .... { " options " : { " segmentor " : { " class " : " dvidsparkservices.reconutils.segmentor " , " configuration " : { " predict - voxels " : { " function " : " mymodule.predict_via_threshold " , " parameters " : { " threshold " : 30 } } } } , ... additional configuration settings omitted ... } } .... as demonstrated above , each custom function must accept a specific set of required arguments , and any number of optional keyword arguments .the following describes each customizable function s purpose , along with its required arguments . * * predict - voxels * given a grayscale volume and a boolean mask of background pixels for which voxel predictions are not needed , return a volume of predictions ( range : 0.0 - 1.0 , dtype : float32 ) indicating the probability of the presence of a membrane at each pixel .additional channels representing other probability classes may be optionally appended to the first channel .required arguments : grayscale_vol , mask_vol * * create - supervoxels * given the prediction volume from the `` predict - voxels '' step and a background mask volume , produce an oversegmentation of the volume into supervoxels .the supervoxels must not bleed into any background regions , as indicated by the background mask. required arguments : prediction_vol , mask_vol * * agglomerate - supervoxels * given the prediction volume from the `` predict - voxels '' step and the oversegmentation from the `` create - supervoxels '' step , produce a segmentation volume . 
required arguments : predictions , supervoxels as described above , each stage of the segmentor.segment ( ) method can be customized , but there is no need to implement your own functions for every stage from scratch .the dvidsparkservices package already includes built - in functions which can be used for each stage .each of these is defined within the dvidsparkservices.reconutils.plugins namespace .* predict - voxels step * * ilastik_predict_with_array ( ) + performs voxel prediction using a trained ilastik pixel classification project file ( .ilp ) . * * two_stage_voxel_predictions ( ) + run a two - stage voxel prediction using two trained ilastik pixel classification project files .the output of the first stage will be saved to a temporary location on disk and used as input to the second stage . * * naive_membrane_predictions ( ) + implements an extremely naive heuristic for membrane probabilities by simply inverting the grayscale data .this function is mostly intended for testing purposes , but for extremely clean data , this might be sufficient .* create - supervoxels step * * seeded_watershed ( ) + computes a seeded watershed over the membrane prediction volume .the seeds are generated by simply thresholding the prediction volume .* agglomerate - supervoxels step * * neuroproof_agglomerate ( ) + agglomerates the oversegmentation image using a trained neuroproof classifier file ..... { " dvid - info " : { " dvid - server " : " 127.0.0.1:8000 " , " uuid " : " abcde12345 " , " segmentation - name " : " my - segmentation - result " , " roi " : " my - predefined - region - of - interest " , " grayscale " : " grayscale " } , " options " : { " segmentor " : { " class " : " dvidsparkservices.reconutils.segmentor " , " configuration " : { " predict - voxels " : { " function " : " dvidsparkservices.reconutils.plugins.ilastik_predict_with_array " , " parameters " : { " ilp_path " : " /path / to / my / trained - membrane - predictor.ilp " , " lazyflow_threads " : 1 , " lazyflow_total_ram_mb " : 1024 } } , " create - supervoxels " : { " function " : " dvidsparkservices.reconutils.plugins.seeded_watershed " , " parameters " : { " boundary_channel " : 0 , " seed_threshold " : 0.01 , " minsegmentsize " : 300 , " seed_size " : 5 } } , " agglomerate - supervoxels " : { " function " : " dvidsparkservices.reconutils.plugins.neuroproof_agglomerate " , " parameters " : { " classifier " : { " path " : " /path / to / my / np - agglom.xml " } , " threshold " : 0.2 , " mitochannel " : 2 } } } } , " stitch - algorithm " : " medium " , " chunk - size " : 512 } } ....
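as a further illustration of the plugin interface , and in the same spirit as the predict_via_threshold example above , a hypothetical create - supervoxels plugin could look like the following . it is not one of the built - in dvidsparkservices plugins ; it assumes scikit - image and scipy are available , a channel - last prediction volume whose first channel is the membrane probability , and a boolean mask_vol that is true on background voxels .
....
# mymodule2.py -- hypothetical create-supervoxels plugin (illustrative only)
import numpy as np
from scipy import ndimage
from skimage.segmentation import watershed

def create_supervoxels_via_watershed(prediction_vol, mask_vol, seed_threshold=0.1):
    # use the first channel as the membrane probability map (channel-last assumed)
    boundary = prediction_vol[..., 0] if prediction_vol.ndim == 4 else prediction_vol
    # seeds: connected regions of very low membrane probability
    seeds, _ = ndimage.label(boundary < seed_threshold)
    supervoxels = watershed(boundary, markers=seeds).astype(np.uint32)
    # keep supervoxels out of masked background regions
    supervoxels[mask_vol] = 0
    return supervoxels
....
it would be referenced from the configuration file exactly like the other plugin functions , e.g. " function " : " mymodule2.create_supervoxels_via_watershed " .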
|
the emerging field of connectomics aims to unlock the mysteries of the brain by understanding the connectivity between neurons . to map this connectivity , we acquire thousands of electron microscopy ( em ) images with nanometer - scale resolution . after aligning these images , the resulting dataset has the potential to reveal the shapes of neurons and the synaptic connections between them . however , imaging the brain of even a tiny organism like the fruit fly yields terabytes of data . it can take years of manual effort to examine such image volumes and trace their neuronal connections . one solution is to apply image segmentation algorithms to help automate the tracing tasks . in this paper , we propose a novel strategy to apply such segmentation on very large datasets that exceed the capacity of a single machine . our solution is robust to potential segmentation errors which could otherwise severely compromise the quality of the overall segmentation , for example those due to poor classifier generalizability or anomalies in the image dataset . we implement our algorithms in a spark application which minimizes disk i / o , and apply them to a few large em datasets , revealing both their effectiveness and scalability . we hope this work will encourage external contributions to em segmentation by providing 1 ) a flexible plugin architecture that deploys easily on different cluster environments and 2 ) an in - memory representation of segmentation that could be conducive to new advances .
|
inspiraling black holes are among the strongest astrophysical sources of gravitational radiation .the expectation that such systems may soon be studied with gravitational wave detectors has focused attention on solving einstein s equations for predictions of gravitational wave content .although the einstein equations present several unique challenges to the numerical relativist , on several of which we do not elaborate here , black holes present one particular additional challenge : they contain physical curvature singularities .while the infinities of the gravitational fields associated with this singularity can not be represented directly on a computer , the spacetime near the black hole must be given adequately to preserve the proper physics .different strategies have thus been developed to computationally represent black holes , while removing the singularity from the grid .one method exploits the gauge freedom of general relativity by choosing a time coordinate that advances normally far from a singularity , slows down as a singularity is approached , and freezes in the immediate vicinity .coordinates with this property are `` singularity avoiding '' .while singularity avoiding coordinates have some advantages , one potential disadvantage is that the hypersurfaces of constant time may become highly distorted , leading to large gradients in the metric components .these slice - stretching ( or `` grid - stretching '' ) effects , however , can be partially avoided through an advantageous combination of lapse and shift conditions .for example , long - term evolutions of single black holes have been reported by alcubierre et al .singularity avoiding slicings may be combined with black hole excision , a second method for removing the singularities from the computational domain .currently , long - term binary black hole evolutions have only been performed using both techniques together .excision is based on the physical properties of event horizons and the expectation that singularities always form within such horizons , and thus can not be seen by distant observers , as formulated by the cosmic censor conjecture . as no future - directed causal curve connects events inside the black hole to events outside, unruh proposed that one could simply remove the black hole from the computational domain , leaving the exterior computation unaffected .thus the black hole singularity is removed by placing an inner boundary on the computational domain at or within the event horizon .excision has been extensively used in numerical relativity in the context of cauchy formulations .in particular , excision with moving boundaries , which is the primary focus of this paper , was explored in .the physical principles that form the basis of excision make the idea beautiful in its simplicity .translating them into a workable numerical recipe for black hole evolutions , on the other hand , requires some attention to detail .two general questions arise regarding the implementation of excision , ( 1 ) where and how to define the inner boundary ? and ( 2 ) how to move the boundary? the first question applies to all excision algorithms , while the last question is specific to implementations where the excision boundary moves with respect to the grid . 
in addressing these questionswe assume a symmetric ( or at least strongly ) hyperbolic formulation .this is because excision fundamentally relies on the characteristic structure of the einstein equations near event horizons , a structure which can only be completely defined and understood for strongly and symmetric hyperbolic sets of equations .the first question involves several considerations , including the location of the boundary , its geometry , and its discrete representation .the requirement that all modes at the excision boundary are leaving the computational domain can be non trivial .it may appear that this condition would be satisfied simply by choosing any boundary within the event horizon ( or , for practical purposes , the apparent horizon ) .however , the outflow property of the excision boundary depends on the characteristic speeds of the system in the normal directions to the boundary .for example , in the analytic schwarzschild solution , assuming that the system has characteristic speeds bounded by the light cone , a spherical boundary can be excised at .a cubical boundary , on the other hand , imposes an onerous restriction on the excision volume : in cartesian kerr schild coordinates the faces of a cube centered on the black hole must be less than in length .remarkably , as was first noticed by lehner , for the rotating kerr solution in kerr schild coordinates a well - defined cubical excision boundary is impossible for interesting values of the spin parameter .( see the appendix for further discussion . )whereas with a pseudospectral collocation method the implementation of a smooth spherical excision boundary is trivial , this is generally not the case for finite differencing .as may be expected , smooth boundaries , which can be adapted to the spacetime geometry , allow the excision boundary to be as far from the singularity as possible , making the most efficient use of the technique .the discrete representation of boundaries can be a delicate issue , especially in numerical relativity where many large - scale finite difference computations are done in cartesian coordinates .we focus our attention on smooth boundaries that may be defined as a constant value in the computational coordinates , e.g. , in spherical coordinates describes a simple spherical boundary .the importance of accurately representing smooth boundaries has been demonstrated for the euler equations , for example , by dadone and grossman for finite volume methods , and bassi and rebay using finite elements .bassi and rebay studied high resolution planar fluid flow around a cylinder .they report spurious entropy production near the cylinder wall , which corrupts the solution even on extremely refined grids , when the cylindrical boundary is approximated by a polygon .furthermore , in the conformal field equations approach to general relativity , a smooth boundary is required to avoid uncontrollable numerical constraint violation .the second question applies to excision boundaries that move with respect to the grid .when the inner boundary moves , points that previously were excised enter the physical part of the grid , and must be provided with physical data for all fields . in recently proposed excision algorithms ,these data are obtained by extrapolating the solution from the physical domain of the calculation .numerical experiments have indicated that the stability of the method is very sensitive to the details of the extrapolation , see e.g. , ref . . 
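the outflow requirement discussed above is easy to check numerically in the non - rotating case . the sketch below ( ours , in python / numpy ) uses the cartesian kerr - schild form of the schwarzschild metric with unit mass and evaluates the fastest characteristic speed of the wave equation along the outward normal of a candidate excision surface ; a boundary point is purely outflow when this speed is non - positive . the helper names and the coarse point sampling are ours , so this is an illustration of the geometric statement rather than production code .
....
# numerical check of the outflow condition for excision surfaces around a
# schwarzschild black hole in cartesian kerr-schild coordinates (mass m = 1).
# with lapse alpha, shift beta^i and inverse 3-metric gamma^{ij} read off from
# g_{mu nu} = eta_{mu nu} + 2 h l_mu l_nu, h = m/r, l_mu = (1, x_i/r), the
# fastest characteristic speed of the wave equation along a unit covector s is
# -beta.s + alpha; the surface point is purely outflow when this is <= 0.
import numpy as np

M = 1.0

def max_speed_along(point, s):
    x = np.asarray(point, dtype=float)
    r = np.linalg.norm(x)
    h = M / r
    l = x / r
    alpha = 1.0 / np.sqrt(1.0 + 2.0 * h)
    beta_up = (2.0 * h / (1.0 + 2.0 * h)) * l                  # shift vector beta^i
    gamma_inv = np.eye(3) - (2.0 * h / (1.0 + 2.0 * h)) * np.outer(l, l)
    s = np.asarray(s, dtype=float)
    s_hat = s / np.sqrt(s @ gamma_inv @ s)                     # unit (gamma) covector
    return float(-beta_up @ s_hat + alpha)

def sphere_is_outflow(radius, n=60):
    thetas = np.linspace(0.01, np.pi - 0.01, n)
    return all(max_speed_along([radius * np.sin(t), 0.0, radius * np.cos(t)],
                               [np.sin(t), 0.0, np.cos(t)]) <= 0.0 for t in thetas)

def cube_face_is_outflow(half_width, n=60):
    pts = np.linspace(-half_width, half_width, n)
    return all(max_speed_along([half_width, y, z], [1.0, 0.0, 0.0]) <= 0.0
               for y in pts for z in pts)

if __name__ == "__main__":
    print(sphere_is_outflow(1.9 * M))        # True: a sphere inside the horizon works
    print(cube_face_is_outflow(1.9 * M))     # False: a comparable cube does not
    print(cube_face_is_outflow(0.25 * M))    # True: the cube must be much smaller
....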
to examine the black hole excision problem with moving inner boundaries, we adopt an approach with some unique features .the heart of our method for moving excision is to use multiple coordinate patches such that each boundary is at a fixed location in one coordinate system .adapting coordinate patches to the boundary geometry allows us to excise as far from the singularity as possible and simplifies the determination of the outflow character of the excision boundary .the motion of the boundaries is incorporated through the relationships among the various coordinate systems .the grids representing the different coordinate patches overlap and communicate via interpolation .this technique is an extension of the one used in well - posedness proofs for problems in general domains ( see sec .13.4 of ) . in this paperwe demonstrate the algorithm by solving the massless klein - gordon equation on a fixed , boosted schwarzschild background .we find that the algorithm is stable for ( apparently ) all values of the boost parameter , , and present results here showing stable evolutions for several cases with .we specialize to axially symmetric spacetimes to reduce the computational requirements for our single - processor code .axially symmetric spacetimes have sometimes been avoided in numerical relativity , with notable exceptions , see e.g. , , owing to the difficulties in developing stable finite difference equations containing the axis of symmetry . in this paperwe further present finite differencing methods for the wave equation in axially symmetric spacetimes in canonical cylindrical and spherical coordinates .these differencing schemes are second order accurate and their stability for a single grid is proved using the energy method .maximally dissipative boundary conditions are applied using the projection method of olsson .we present the differencing algorithm in detail , and indicate precisely how boundary conditions are applied .this paper is organized as follows : in sec .[ sec : overview_excision ] we motivate our approach and review the overlapping grid method .we recall the concept of conserved energy for a first order symmetrizable hyperbolic system in sec .[ sec : fosh ] and provide an energy preserving discretization . in sec .[ sec : we_flat ] we analyze the axisymmetric wave equation around a minkowski background as an introduction to our numerical methods .the analysis is then repeated for the black hole background case in sec .[ sec : we_bh ] .the excision of a boosted black hole with the overlapping grid method is described in sec .[ sec : boosted ] .the numerical experiments , along with several convergence tests , are included in sec .[ sec : numexp ] .our primary goal is to obtain a numerical algorithm for excision with moving black holes that is stable and convergent ( in the limit that the mesh spacing goes to zero ) .these desired properties for the discrete system closely mirror the continuum properties of well - posed initial boundary value problems ( ibvps ) : the existence of a unique solution that depends continuously on the initial and boundary data .furthermore , we believe that we will not obtain long - term convergent discrete solutions _ unless _ the underlying continuum problem is also well - posed . 
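as an aside before the well-posedness discussion: the model background used later is a boosted schwarzschild hole, and one convenient way to write it down exploits the form-invariance of the kerr-schild ansatz under lorentz boosts. the construction below is my own sketch (boost along z, arbitrary speed and evaluation point), not code from the paper; the null-vector checks at the end are a quick sanity test.

```python
import numpy as np

eta = np.diag([-1.0, 1.0, 1.0, 1.0])          # flat metric, signature (-,+,+,+)

def boosted_kerr_schild(t, x, y, z, v=0.5, M=1.0):
    """g_{mu nu} at the lab-frame event (t,x,y,z) for a Schwarzschild hole
    moving with speed v along z: g = eta + 2 H l_mu l_nu, with H and the
    null covector l evaluated in the hole's rest frame and boosted back."""
    gam = 1.0 / np.sqrt(1.0 - v ** 2)
    zp = gam * (z - v * t)                    # rest-frame coordinates
    rp = np.sqrt(x ** 2 + y ** 2 + zp ** 2)
    H = M / rp
    lp = np.array([1.0, x / rp, y / rp, zp / rp])   # rest-frame l_mu
    l = np.array([gam * (lp[0] - v * lp[3]),        # covector transformation
                  lp[1], lp[2],
                  gam * (lp[3] - v * lp[0])])
    return eta + 2.0 * H * np.outer(l, l), l

g, l = boosted_kerr_schild(0.0, 1.0, 2.0, 3.0, v=0.5)
print("l null w.r.t. eta:", np.isclose(l @ eta @ l, 0.0))   # eta is its own inverse
print("l null w.r.t. g  :", np.isclose(l @ np.linalg.inv(g) @ l, 0.0))
```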
unfortunately there are few mathematical results concerning the well - posedness of general classes of equations .the energy method , however , can be used with symmetric hyperbolic ibvps , and gives sufficient conditions for well - posedness .when a black hole moves with respect to some coordinate system , the inner excision boundary must also move .we use multiple coordinate patches , such that every boundary is fixed with respect to at least one coordinate system . coordinate transformations relate the coordinate systems , and become time dependent when the hole moves .the movement of the inner boundary is also expressed by these time - dependent coordinate transformations .these ideas are illustrated in fig .[ fig : patches ] . surrounded by an event horizon is moving with respect to the base coordinate system . a coordinate patch ( shaded region )adapted to follows the motion of the singularity . with respect to this patch , is a purely outflow boundary and requires no boundary conditions .the base system terminates somewhere inside the shaded region and it gets boundary data from the moving patch .similarly , the data at the outer boundary of the moving patch is taken from the base system.,height=302 ] in our axially symmetric model problem of a scalar field on a boosted schwarzschild spacetime , the computational frame is covered with cylindrical coordinates , while a second patch of spherical coordinates is co - moving with the black hole .( in these coordinates the event horizon is always located at while the time coordinate is taken from the cylindrical patch so that data on all grids are simultaneous . )the inner boundary of the spherical grid , located at or within the event horizon , is a simple outflow boundary , and requires no boundary condition .the cylindrical domain has an inner boundary somewhere near the black hole , whether inside or outside of the horizon is immaterial , as long as it is covered by the spherical coordinate patch .an exchange of information between the two coordinate patches is required to provide boundary conditions at the inner cylindrical boundary and the outer spherical boundary . on each gridthe discrete system is constructed using the energy method .we define an energy for the semi - discrete system and , using difference operators that satisfy summation by parts , we obtain a discrete energy estimate .well - posed boundary conditions can then be identified by controlling the boundary terms of the discrete energy estimate .the conditions are discretized using olsson s projection method . 
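to make the summation-by-parts mechanism just invoked concrete, here is a minimal check with the simplest diagonal-norm sbp first-derivative operator (second-order centred differences in the interior, one-sided first-order stencils at the two endpoints). this particular operator is only an illustration and is not necessarily the one used in the paper; the grid size and spacing are arbitrary.

```python
import numpy as np

# SBP means  Sigma D + (Sigma D)^T = B  with  B = diag(-1, 0, ..., 0, 1),
# the discrete analogue of integration by parts.  It is this identity that
# turns the time derivative of the discrete energy into pure boundary terms.

def sbp_operators(N, h):
    D = np.zeros((N, N))
    for i in range(1, N - 1):
        D[i, i - 1], D[i, i + 1] = -0.5 / h, 0.5 / h
    D[0, 0], D[0, 1] = -1.0 / h, 1.0 / h
    D[-1, -2], D[-1, -1] = -1.0 / h, 1.0 / h
    Sigma = h * np.diag([0.5] + [1.0] * (N - 2) + [0.5])   # diagonal norm
    return D, Sigma

N, h = 21, 0.05
D, Sigma = sbp_operators(N, h)
B = np.zeros((N, N)); B[0, 0], B[-1, -1] = -1.0, 1.0
print("SBP identity satisfied:", np.allclose(Sigma @ D + (Sigma @ D).T, B))

# Consequence: for u_t = u_x the semi-discrete energy E = u^T Sigma u obeys
# dE/dt = u^T (Sigma D + D^T Sigma) u = u_N^2 - u_1^2, i.e. any growth is
# controlled entirely by boundary data, exactly as in the continuum estimate.
```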
in particular ,the symmetry axis ( in canonical cylindrical coordinates ) is included in the discrete energy estimate , allowing us to naturally obtain a stable discretization for axisymmetric systems .we implement our excision algorithm using overlapping grids , also known as composite mesh difference method .the two grids are coupled by interpolation , which is done for all the components of the fields being evolved .if the system is hyperbolic this means that one is actually over specifying the problem .however , as it is pointed out in sec .13.4 of and as it is confirmed by our experiments , this does not lead to a numerical instability .the fully discretized system is completed by integrating the semi - discrete equations with an appropriate method for odes ; we choose third and fourth order runge - kutta , which does not spoil the energy estimate of the semi - discrete system .kreiss - oliger dissipation is added to the scheme , as some explicit dissipation is generally necessary for stability with overlapping grids . whereas the stability theory for overlapping grids for elliptic problems is well developed, there are very few results concerning hyperbolic systems .starius presents a stability proof for overlapping grids in one dimension .finally , we note that thornburg has also explored multiple grids in the context of numerical relativity with black hole excision .the structure of the overlapping grids used in this work is illustrated in fig .[ fig : overlapping ] .the additional complication of the axis of symmetry is discussed below . for simplicitywe choose the outer boundary to be of rectangular shape .the introduction of a smooth spherical outer boundary , along with another grid overlapping with the base cylindrical grid , is certainly possible and , we believe , likely to improve the absorbing character of the outer boundary when the incoming fields are set to zero .to demonstrate our excision algorithm , we choose the evolution of a massless klein - gordon scalar field on an axisymmetric , boosted schwarzschild background as a model problem . in this sectionwe summarize basic definitions for linear , first order hyperbolic initial - boundary value problems .we employ the energy method to identify well - posed boundary conditions . the discrete version of this method , based on difference operators satisfying the summation by parts rule , is then used to discretize the right hand side of the system and the boundary conditions on a single rectangular grid . ( for an introduction to these methods in the context of numerical relativity see refs . .) we then introduce the axisymmetric scalar field equations on a curved background , along with their semi - discrete approximation . 
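before turning to the equations, a toy version of the overlapping-grid exchange described above may help fix ideas: two 1-d grids for the advection equation u_t + u_x = 0, with the overlapping grid receiving its inflow boundary value by interpolation from the base grid every step. the grid extents, resolutions and pulse below are arbitrary choices of mine; for a system carrying both characteristic families (such as the wave equation) the exchange would go in both directions, as it does for the cylindrical/spherical pair in the paper.

```python
import numpy as np

# Two overlapping 1-D grids for u_t + u_x = 0 (one right-going characteristic).
# The left (base) grid owns the physical inflow boundary at x = 0; its right
# end is outflow and needs no condition.  The right grid's left end lies in
# the overlap and gets its value by interpolation from the left grid, playing
# the role the patch boundaries play in the black-hole setup.

xl = np.linspace(0.0, 1.2, 121)          # base grid
xr = np.linspace(0.8, 2.0, 121)          # overlapping grid
h = xl[1] - xl[0]
dt = 0.5 * h                              # CFL = 0.5 for first-order upwind

pulse = lambda x: np.exp(-((x - 0.4) / 0.15) ** 2)
ul, ur = pulse(xl), pulse(xr)

def upwind_step(u):
    un = u.copy()
    un[1:] -= dt / h * (u[1:] - u[:-1])
    return un

for _ in range(int(round(1.0 / dt))):     # evolve to t = 1
    ul, ur = upwind_step(ul), upwind_step(ur)
    ul[0] = 0.0                            # physical inflow data
    ur[0] = np.interp(xr[0], xl, ul)       # overlap boundary by interpolation

# The pulse has crossed from the base grid onto the overlapping grid.  The
# difference from the exact translated profile is the first-order scheme's
# numerical diffusion, not an interface artifact, and shrinks with resolution.
print("max |numerical - exact| on right grid:",
      np.max(np.abs(ur - pulse(xr - 1.0))))
```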
in this paperwe adopt the einstein summation convention and geometrized units ( ) .latin indices range over the spatial dimensions , and greek indices label spacetime components .consider a linear , first order , hyperbolic ibvp in two spatial dimensions , consisting of a system of partial differential evolution equations , and initial and boundary data , of the form \times \omega \label{eq : fosh}\\ & & u(0,\vec{x } ) = f(\vec{x})\qquad \vec{x } \in \omega \label{eq : initialdata}\\ & & lu(t,\vec{x } ) = g(t,\vec{x } ) \qquad ( t,\vec{x } ) \in [ 0,t]\times \partial\omega , \label{eq : boundarydata}\end{aligned}\ ] ] where , and are vector valued functions with components , and are matrices that depend on the spacetime coordinates but not on the solution , and stands for .the boundary of is assumed to be a simple smooth curve .the operator and the data that appear in the boundary condition ( [ eq : boundarydata ] ) will be defined below in eqs .( [ eq : maxdissip][eq : l ] ) . system ( [ eq : fosh])([eq : boundarydata ] ) is said to be _ strongly _ hyperbolic in \times\omega ] are the eigenvalues of . in sec .[ sec : numexp ] we will show how the maximum value of the characteristic speeds in the region \times\omega ] denotes the matrix element of .the notation indicates that the preceding term is repeated with the exchange of the vectors and .in particular , if and vanish , then also vanishes .however , in general , the absence of coupling on the two adjacent sides is not consistent with a vanishing at the corner , eq .( [ eq : scorner ] ) .we now turn to the massless scalar field propagating on a curved background .the equation of motion is the second order wave equation where denotes the covariant derivative associated with the lorentz metric . in terms of the tensor density , the wave equation can be written we introduce the auxiliary variables and , and rewrite eq .( [ eq : wave2 ] ) in first order form , the component of a sufficiently smooth solution of the first order system satisfies the second order wave equation provided that the constraints are satisfied .an attractive feature of this particular first order formulation is that the constraint variables propagate trivially , namely .in particular , this ensures that any solution of ( [ eq : wegen1][eq : wegen3 ] ) which satisfies the constraints initially , will satisfy them at later times , even in the presence of boundaries . since does not appear in eqs .( [ eq : wegen2][eq : wegen3 ] ) , we will drop eq .( [ eq : wegen1 ] ) from the system .the constraints are replaced by } = 0 ] , which is usually the case , then the time derivative of the discrete constraint variable } ] .smoothness at the axis requires that the odd -derivatives of the scalar field vanish on the axis , namely for .this implies that the following conditions for the auxiliary variables , , and , have to hold during evolution if the initial data satisfies ( [ eq : oddp ] ) and ( [ eq : eventz ] ) , and the prescription is used as a boundary condition at , then the above conditions will hold at later times . since in this coordinate system is a killing field , the energy ( [ eq : simplee ] ) is conserved .the time derivative of gives only boundary terms which can be controlled by giving appropriate boundary data {z = z_{\rm min}}^{z = z_{\rm max } } \rho d\rho\;. \nonumber\end{aligned}\ ] ] the discretization of the right hand side of eq .( [ eq : weqcyl1_1 ] ) at the axis deserves special attention . 
as a consequence of the regularity conditions we have that andtherefore no infinities appear on the right hand side .this suggests considering the semi - discrete approximation where and , with and .the difference operators and are second order accurate centered difference operators where their computation does not involve points which do not belong to the grid , and are first order accurate one sided difference operators otherwise .the regularity condition , for , is enforced for all , and eq .( [ eq : oddp ] ) ensures that is , in fact , a second order approximation . a solution of ( [ eq : sdwe_flat_cyl1 ] , [ eq : sdwe_flat_cyl2 ] , [ eq : sdwe_flat_cyl3 ] ) conserves the discrete energy \sigma_j \delta z\,,\nonumber\end{aligned}\ ] ] which is consistent with the continuum expression ( [ eq : energy_axi ] ) .more precisely , using the fact that and the basic properties of the finite difference operators , one can see that the following estimate holds , consistently with the continuum limit ( [ eq : edotcontcyl ] ) . as it is pointed out in section 12.7 of , one order less accuracy at the boundary is allowed , in the sense that it does not affect the overall accuracy of the scheme , provided that the physical boundary conditions are approximated to the same order as the differential operators at the inner points .by inspecting the boundary terms of the discrete energy estimate ( [ eq : cse ] ) , we can readily see how the boundary data should be given at each boundary grid point ( , , and ) . in the case of a uniform grid ( ) , in order to control the energy growth boundary data should be given in maximally dissipative form in the directions shown in fig .[ fig : cyl_grid ] .the presence of lower order terms in ( [ eq : discecyl ] ) , in addition to ensuring that the discrete energy is positive definite on the axis , indicates how to specify boundary data at the corner grid points that lie on the axis .in this section we discretize the wave equation in minkowski space with spherical coordinates . the second order axisymmetric wave equation on a flat background in spherical coordinates is written in first order form as where , and are functions of \times [ r_{\rm min},r_{\rm max } ] \times [ 0,\pi] ] . in order to excise this region from the computational domain, we must ensure that its boundary is purely outflow , i.e. , that no information can enter the computational domain . to determine the allowed values of , we calculate the characteristic speeds on each face of the cube and check that the inequality , where is the outward unit normal to the boundary , is satisfied .the schwarzschild solution is obtained by setting , and the calculation gives .the calculations for kerr ( ) are more involved , and we present our numerically generated results in fig . [fig : kerr_cubical_excision ] .we find that because of the ring singularity ( , ) , in addition to a maximum size for the excision cube , there is also a _ minimum _ size .in addition , we notice that no cubical excision is possible for .this is a severe constraint on the spin parameter , and precludes cubical excision for interesting values of spin .we note , however , that this limitation is coordinate dependent and that it might be possible to choose coordinates in which cubical excision may be done for higher values of .m. 
scheel , talk at miniprogram on colliding black holes : mathematical issues in numerical relativity , institute for theoretical physics , university of california at santa barbara , january 1028 , 2000 .available at http://online.kitp.ucsb.edu/online/numrel00 some recent examples of numerical relativity studies in axisymmetry include : m.w .choptuik , e.w .hirschmann , s.l .liebling , and f. pretorius , gr - qc/0305003 ( 2003 ) ; f. siebel , j.a . font , e. mller , and p. papadopoulos , gr - qc/0301127 ( 2003 ) ; m. shibata , phys . rev .d * 67 * , 024033 ( 2003 ) ; m.w .choptuik , e.w .hirschmann , s.l .liebling , and f. pretorius , class .20 * , 1857 ( 2003 ) ; j. frauendiener , phys . rev .d * 66 * , 104027 ( 2002 ) ; h. dimmelmeier , j.a . font , and e. mller , astron .astrophys . * 393 * , 523 ( 2002 ) ; h. dimmelmeier , j.a . font , and e. mller , astron . astrophys . * 388 * , 917 ( 2002 ) ; f. siebel , j.a . font , e. mller , and p. papadopoulos , phys .d * 65 * , 064038 ( 2002 ) ; h. dimmelmeier , j.a . font , and e. mller , astrophys .j. * 560 * , l163 ( 2001 ) ; j.a .font , h. dimmelmeier , a. gupta , and n. stergioulas , mon . not .astron . soc . *325 * , 1463 ( 2001 ) ; m. alcubierre , b. brgmann , d. holz , r. takahashi , s. brandt , e. seidel , and j. thornburg , int . j. modd * 10 * 273 ( 2001 ) ; m. shibata , prog .. phys . * 104 * , 325 ( 2000 ) ; d. garfinkle and g.c .duncan , phys .d * 63 * , 044011 ( 2001 ) ; s. brandt , j.a . font , j.m .iba~ nez , j. mass , and e. seidel comput .. commun . * 124 * , 169 ( 2000 ) ; p. papadopoulos and j.a . font , phys .d * 58 * , 024005 ( 1998 ) ; p. anninos , s.r .brandt , and p. walker , phys .d * 57 * , 6158 ( 1998 ) ; s. bonazzola , j. frieben , and e. gourgoulhon , ap. j. * 460 * , 379 ( 1996 ) ; m. bocquet , s. bonazzola , e. gourgoulhon , and j. novak , astron . astrophys . * 301 * , 757 ( 1995 ) ; p.anninos , d. hobill , e. seidel , l. smarr , and w .- m .suen , phys .d * 52 * , 2044 ( 1995 ) ; r. gomez , p. papadopoulos , and j. winicour , j.math.phys . *35 * , 4184 ( 1994 ) ; p.anninos , d. hobill , e. seidel , l. smarr , and w .- m .suen , phys .* 71 * , 2851 ( 1993 ) .kerr and a. schild in _ comitato nazionale per le manifestazioni celebrative del iv centenario della nascita di galileo galilei , atti del convegno sulla relativit generale : problemi dellenergia e onde gravitazionali , _ 112 , edited by g. barb ' era , florence , ( 1965 ) ; r.p .kerr and a. schild , in _ proceedings of symposia in applied mathematics _* 17 * , 199 , american math .
|
it is expected that the realization of a convergent and long - term stable numerical code for the simulation of a black hole inspiral collision will depend greatly upon the construction of stable algorithms capable of handling smooth and , most likely , time dependent boundaries . after deriving single grid , energy conserving discretizations for axisymmetric systems containing the axis of symmetry , we present a new excision method for moving black holes using multiple overlapping coordinate patches , such that each boundary is fixed with respect to at least one coordinate system . this multiple coordinate structure eliminates all need for extrapolation , a commonly used procedure for moving boundaries in numerical relativity . we demonstrate this excision method by evolving a massless klein - gordon scalar field around a boosted schwarzschild black hole in axisymmetry . the excision boundary is defined by a spherical coordinate system co - moving with the black hole . our numerical experiments indicate that arbitrarily high boost velocities can be used without observing any sign of instability .
|
the proliferation of online services and the thriving electronic commerce overwhelms us with alternatives in our daily lives . to handle this information overload and to help users in efficient decision making , recommender systems ( rs ) have been designed .the goal of rss is to recommend personalized items for online users when they need to choose among several items .typical problems include recommendations for which movie to watch , which jokes / books / news to read , which hotel to stay at , or which songs to listen to .one of the most popular approaches in the field of recommender systems is _ collaborative filtering _the underlying idea of cf is very simple : users generally express their tastes in an explicit way by rating the items .cf tries to estimate the users preferences based on the ratings they have already made on items and based on the ratings of other , similar users .for a recent review on recommender systems and collaborative filtering , see e.g. , .novel advances on cf show that _ dictionary learning _based approaches can be efficient for making predictions about users preferences .the dictionary learning based approach assumes that ( i ) there is a latent , unstructured feature space ( hidden representation ) behind the users ratings , and ( ii ) a rating of an item is equal to the product of the item and the user s feature . to increase the generalization capability , usually regularization is introduced both for the dictionary and for the users representation .there are several problems that belong to the task of dictionary learning , a.k.a .matrix factorization .this set of problems includes , for example , ( sparse ) principal component analysis , independent component analysis , independent subspace analysis , non - negative matrix factorization , and _ structured dictionary _ learning , which will be the target of our paper .one predecessor of the structured dictionary learning problem is the _ sparse coding _task , which is a considerably simpler problem . herethe dictionary is already given , and we assume that the observations can be approximated well enough using only a few dictionary elements . although finding the solution that uses the minimal number of dictionary elements is np hard in general , there exist efficient approximations .one prominent example is the lasso approach , which applies convex relaxation to the code words .lasso does not enforce any _ group _ structure on the components of the representation ( covariates ) . however , using _ structured sparsity _, that is , forcing different kind of structures ( e.g. , disjunct groups , trees ) on the sparse codes can lead to increased performances in several applications . indeed , as it has been theoretically proved recently structured sparsity can ease feature selection , and makes possible robust compressed sensing with substantially decreased observation number .many other real life applications also confirm the benefits of structured sparsity , for example ( i ) automatic image annotation , ( ii ) group - structured feature selection for micro array data processing , ( iii ) multi - task learning problems ( a.k.a .transfer learning ) , ( iv ) multiple kernel learning , ( v ) face recognition , and ( vi ) structure learning in graphical models . 
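to fix ideas about the sparse-coding step mentioned above, here is a minimal lasso solver (ista, i.e. proximal gradient) applied to a made-up collaborative-filtering-style problem: a user's observed ratings are fit by a sparse combination of dictionary columns and the resulting code is used to predict the unobserved ones. the dictionary, problem sizes and regularization weight are all invented for the illustration and are unrelated to the experiments reported later.

```python
import numpy as np

rng = np.random.default_rng(0)
n_items, n_features = 50, 10
D = rng.normal(size=(n_items, n_features))               # item dictionary (rows = items)
alpha_true = np.zeros(n_features); alpha_true[[1, 4]] = [2.0, -1.5]
observed = rng.choice(n_items, size=20, replace=False)   # items this user has rated
r_obs = D[observed] @ alpha_true + 0.1 * rng.normal(size=observed.size)

def ista(A, y, lam, n_iter=500):
    """minimize 0.5*||A x - y||^2 + lam*||x||_1 by proximal gradient."""
    L = np.linalg.norm(A, 2) ** 2                        # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        z = x - A.T @ (A @ x - y) / L
        x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)   # soft threshold
    return x

alpha_hat = ista(D[observed], r_obs, lam=0.5)
r_pred = D @ alpha_hat                                   # predictions for every item
unrated = np.setdiff1d(np.arange(n_items), observed)
rmse = np.sqrt(np.mean((r_pred[unrated] - (D @ alpha_true)[unrated]) ** 2))
print("support of the estimated code:", np.flatnonzero(alpha_hat))
print("rmse of the predictions on unrated items: %.3f" % rmse)
```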
for an excellent review on structured sparsity , see .all the above mentioned examples only consider the structured sparse coding problem , where we assume that the dictionary is already given and available to us .a more interesting ( and challenging ) problem is the combination of these two tasks , i.e. , learning the best structured dictionary and structured representation .this is the _ structured dictionary learning _( sdl ) problem .sdl is more difficult ; one can find only few solutions in the literature . this novel field is appealing for ( i ) transformation invariant feature extraction , ( ii ) image denoising / inpainting , ( iii ) background subtraction , ( iv ) analysis of text corpora , and ( v ) face recognition .* our goal * is to extend the application domain of sdl in the direction of collaborative filtering .with respect to cf , further constraints appear for sdl since ( i ) online learning is desired and ( ii ) missing information is typical .there are good reasons for them : novel items / users may appear and user preferences may change over time . adaptation to users also motivate online methods .online methods have the additional advantage with respect to offline ones that they can process more instances in the same amount of time , and in many cases this can lead to increased performance . for a theoretical proof of this claim , see .furthermore , users can evaluate only a small portion of the available items , which leads to incomplete observations , missing rating values . in order to cope with these constraints of the collaborative filtering problem, we will use a novel extension of the structured dictionary learning problem , the so - called online group - structured dictionary learning ( osdl ) .osdl allows ( i ) overlapping group structures with ( ii ) non - convex sparsity inducing regularization , ( iii ) partial observation ( iv ) in an online framework .our paper is structured as follows : we briefly review the osdl problem , its cost function , and optimization method in section [ sec : osdl problem ] .we cast the cf problem as an osdl task in section [ sec : osdl via cf ] .numerical results are presented in section [ sec : numerical results ] .conclusions are drawn in section [ sec : conclusions ]. * notations .* vectors ( ) and matrices ( ) are denoted by bold letters . represents the diagonal matrix with coordinates of vector in its diagonal .the coordinate of vector is .notation means the number of elements of a set and the absolute value for a real number . for set , denotes the coordinates of vector in . for matrix , stands for the restriction of matrix to the rows . and denote the identity and the null matrices , respectively . is the transposed form of . for a vector ,the operator acts coordinate - wise .the ( quasi-)norm of vector is ( ) . denotes the unit sphere in .the point - wise and scalar products of are denoted by ] range .the worst and best possible gradings are and , respectively .a fixed element subset of the jokes is called gauge set and it was evaluated by all users .two third of the users have rated at least jokes , and the remaining ones have rated between and jokes .the average number of user ratings per joke is . 
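one of the group structures used in the experiments below is a torus of overlapping neighborhoods; since the construction is easy to get wrong, here is one way to build it and to evaluate a group-norm penalty on it. the torus size, the neighborhood radius and the choice of an l2 norm inside each group are illustrative stand-ins, not the paper's actual settings.

```python
import numpy as np
from itertools import product

def toroid_groups(r, r0):
    """One group per node of an r x r torus, containing the node and all
    nodes within Chebyshev distance r0 (with wrap-around).  r0 = 0 gives
    singleton groups, i.e. ordinary unstructured sparsity."""
    groups = []
    for i, j in product(range(r), repeat=2):
        g = {((i + di) % r) * r + ((j + dj) % r)
             for di in range(-r0, r0 + 1) for dj in range(-r0, r0 + 1)}
        groups.append(sorted(g))
    return groups

def group_penalty(alpha, groups):
    # Omega(alpha) = sum over groups of the l2 norm of the restricted code
    return sum(np.linalg.norm(alpha[g]) for g in groups)

r, r0 = 11, 2                                     # 11 x 11 torus, 121-dim code
groups = toroid_groups(r, r0)
print("groups:", len(groups), " size of each:", len(groups[0]))

clustered = np.zeros(r * r); clustered[[0, 1, 11]] = 1.0    # neighbouring nodes
scattered = np.zeros(r * r); scattered[[0, 55, 60]] = 1.0   # mutually distant nodes
print("penalty clustered vs scattered: %.2f vs %.2f"
      % (group_penalty(clustered, groups), group_penalty(scattered, groups)))
```

the clustered code comes out with the smaller penalty, which is the mechanism by which this kind of regularizer favours codes that respect the imposed topology.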
in the neighbor correction step orwe need the values representing the similarities of the and items .we define this value as the similarity of the and rows ( and ) of the optimized osdl dictionary : where is the parameter of the similarity measure .quantities are non - negative ; if the value of is close to zero ( large ) then the and items are very different ( very similar ) . in our numerical experiments we used the rmse ( root mean square error ) and the mae ( mean absolute error ) measure for the evaluation of the quality of the estimation , since these are the most popular measures in the cf literature .the rmse and mae measure is the average squared / absolute difference of the true and the estimated rating values , respectively : where denotes either the validation or the test set .here we illustrate the efficiency of the osdl - based cf estimation on the jester dataset ( section [ sec : jester ] ) using the rmse and mae performance measures ( section [ sec : performance measure ] ) .we start our discussion with the rmse results .the mae performance measure led to similar results ; for the sake of completeness we report these results at the end of this section . to the best of our knowledge ,the top results on this database are rmse = and rmse = .both works are from the same authors .the method in the first paper is called item neighbor and it makes use of only neighbor information . in ,the authors used a bridge regression based unstructured dictionary learning model with a neighbor correction scheme , they optimized the dictionary by gradient descent and set to 100 .these are our performance baselines . to study the capability of the osdl approach in cf , we focused on the following issues : * is structured dictionary beneficial for prediction purposes , and how does it compare to the dictionary of classical ( unstructured ) sparse dictionary ?* how does the osdl parameters and the similarity / neighbor correction applied affect the efficiency of the prediction ?* how do different group structures fit to the cf task ? in our numerical studies we chose the euclidean unit sphere for ( ) , and , and no additional weighting was applied ( , , where is the indicator function ) .we set of the group - structured regularizer to .group structure of vector was realized on * a toroid ( ) with applying neighbors to define . for ( ) the classical sparse representation based dictionaryis recovered . * a hierarchy with a complete binary tree structure . in this case : * * , and group of contains the node and its descendants on the tree , and * * the size of the tree is determined by the number of levels .the dimension of the hidden representation is then .the size of mini - batches was set either to , or to and the forgetting factor was chosen from set .the weight of structure inducing regularizer was chosen from the set .we studied similarities , [ see - ] with both neighbor correction schemes [ - ] . inwhat follows , corrections based on and will be called , and , , respectively .similarity parameter was chosen from the set . in the bcd step of the optimization of , iterations were applied . 
in the optimization step ,we used iterations , whereas smoothing parameter was .we used a random split for the observable ratings in our experiments , similarly to : * training set ( ) was further divided into 2 parts : * * we chose the observation set randomly , and optimized according to the corresponding observations , * * we used the remaining for validation , that is for choosing the optimal osdl parameters ( or , , ) , bcd optimization parameter ( ) , neighbor correction ( , , , ) , similarity parameter ( ) , and correction weights ( in or ) . *we used the remaining of the data for testing .the optimal parameters were estimated on the validation set , and then used on the test set .the resulting rmse / mae score was the performance of the estimation . in this sectionwe provide results using toroid group structure .we set .the size of the toroid was , and thus the dimension of the representation was . in the * first experiment * we study how the size of neighborhood ( ) affects the results .this parameter corresponds to the `` smoothness '' imposed on the group structure : when , then there is no relation between the columns in ( no structure ) .as we increase , the feature vectors will be more and more aligned in a smooth way . to this end , we set the neighborhood size to ( no structure ) , and then increased it to , , , , and .for each , we calculated the rmse of our estimation , and then for each fixed ( ) pair , we minimized these rmse values in .the resulting validation and test surfaces are shown in fig .[ fig : torus : validation surfaces ] . for the best ( ) pair, we also present the rmse values as a function of ( fig .[ fig : torus : validation vs test curve ] ) . in this illustrationwe used neighbor correction and mini - batch size .we note that we got similar results using too .our results can be summarized as follows .* for a fixed neighborhood parameter , we have that : * * the validation and test surfaces are very similar ( see fig . [fig : torus : validation surfaces](e)-(f ) ) .it implies that the validation surfaces are good indicators for the test errors . for the best , and parameters, we can observe that the validation and test curves ( as functions of ) are very similar .this is demonstrated in fig .[ fig : torus : validation vs test curve ] , where we used neighborhood size and neighbor correction .we can also notice that ( i ) both curves have only one local minimum , and ( ii ) these minimum points are close to each other .* * the quality of the estimation depends mostly on the regularization parameter .as we increase , the best value is decreasing . * * the estimation is robust to the different choices of forgetting factors ( see fig .[ fig : torus : validation surfaces](a)-(e ) ) .in other words , this parameter can help in fine - tuning the results .* structured dictionaries ( ) are advantageous over those methods that do not impose structure on the dictionary elements ( ) . for and neighbor corrections ,we summarize the rmse results in table [ tab : torus : perf : r ] .based on this table we can conclude that in the studied parameter domain * * the estimation is robust to the selection of the mini - batch size ( ) .we got the best results using . similarly to the role of parameter , adjusting can be used for fine - tuning .* * the neighbor correction lead to the smallest rmse value . * * when we increase up to , the results improve .however , for , the rmse values do not improve anymore ; they are about the same that we have using . 
* * the smallest rmse we could achieve was , and the best known result so far was rmse = .this proves the efficiency of our osdl based collaborative filtering algorithm . * * we note that our rmse result seems to be significantly better than the that of the competitors : we repeated this experiment more times with different randomly selected training , test , and validation sets , and our rmse results have never been worse than . + , regularization weight , forgetting factor , mini - batch size , and similarity parameter .the applied neighbor correction was .,width=234 ] in the * second experiment * we studied how the different neighbor corrections ( , , , ) affect the performance of the proposed algorithm .to this end , we set the neighborhood parameter to because it proved to be optimal in the previous experiment .our results are summarized in table [ tab : torus : perf : s ] . from these resultswe can observe that * our method is robust to the selection of correction methods .similarly to the and parameters , the neighbor correction scheme can help in fine - tuning the results . *the introduction of in with the application of and instead of and proved to be advantageous in the neighbor correction phase .* for the studied cf problem , the neighbor correction method ( with ) lead to the smallest rmse value , .* the setting yielded us similarly good results .even with , the rmse value was ..performance ( rmse ) of the osdl prediction using toroid group structure ( ) with different neighbor sizes ( : unstructured case ) .first - second row : mini - batch size , third - fourth row : .odd rows : , even rows : neighbor correction . for fixed ,the best performance is highlighted with boldface typesetting . [ cols="^,^,^,^,^,^,^",options="header " , ] [ tab : mae , torus : perf : s ]we have dealt with collaborative filtering ( cf ) based recommender systems and extended the application domain of structured dictionaries to cf .we used online group - structured dictionary learning ( osdl ) to solve the cf problem ; we casted the cf estimation task as an osdl problem .we demonstrated the applicability of our novel approach on joke recommendations .our extensive numerical experiments show that structured dictionaries have several advantages over the state - of - the - art cf methods : more precise estimation can be obtained , and smaller dimensional feature representation can be sufficient by applying group structured dictionaries .moreover , the estimation behaves robustly as a function of the osdl parameters and the applied group structure .the project is supported by the european union and co - financed by the european social fund ( grant agreements no .tmop 4.2.1/b-09/1/kmr-2010 - 0003 and kmop-1.1.2 - 08/1 - 2008 - 0002 ) .the research was partly supported by the department of energy ( grant number desc0002607 ) .d. m. witten , r. tibshirani , and t. hastie , `` a penalized matrix decomposition , with applications to sparse principal components and canonical correlation analysis , '' _ biostatistics _ , vol .10 , no . 3 , pp .515534 , 2009 .j. a. tropp and s. j. wright , `` computational methods for sparse solution of linear inverse problems , '' _ proc . of the ieee special issue on applications of sparse representation and compressivesensing _ ,98 , no . 6 , pp . 948958 , 2010 .
|
structured sparse coding and the related structured dictionary learning problems are novel research areas in machine learning . in this paper we present a new application of structured dictionary learning for collaborative filtering based recommender systems . our extensive numerical experiments demonstrate that the presented technique outperforms its state - of - the - art competitors and has several advantages over approaches that do not put structured constraints on the dictionary elements . keywords : collaborative filtering , structured dictionary learning
|
if it is the case that orthodox quantum mechanics ( qm ) is an incomplete description of nature , then several puzzling aspects of the theory become less so : the role of conscious observers in the theory and collapse of the wavefunction being the preeminent examples .it is well appreciated that the bell and kochen - specker theorems significantly constrain _ any _ attempts at a more complete description . the ks theorem in particularis often taken as implying that any ` complete ' description of reality must be so strange that such attempts should automatically be abandoned .it seems to me , however , that the ks theorem tells us only that the outcomes of experiments must depend on the physical properties of both the measurement apparatus and the system under investigation - it provides evidence against the most dramatic form of reductionism .this raises the question : without such reductionism at hand , must any complete description be so complicated that occams razor necessitates its abandonment ?one purpose of this note is to show how a theory can be contextual without becoming particularly complicated , and without requiring a deep understanding of the dynamics of measurement processes in the theory .i will be concerned here with the possible existence of what will briefly be termed _ ontological models _ ( om s ) - models which ideally reproduce the quantum predictions and have the following features : firstly , all the physical properties of an individual system are presumed to be determined by ( or determinable from ) some mathematical object ( the `` ontic state '' or `` hidden variable '' of the system ) which is an element of a space of such objects .the quantum mechanical wavefunction an element of a dimensional hilbert space is presumed an incomplete description of this underlying reality , and thus corresponds in the om to a probability density over note that unlike the commonly utilized representations of quantum systems in terms of quasi - probability distributions , which can become negative , this probability density is presumed positive and normalized with respect to some appropriate measure on .crucially , in om s the probabilistic nature of quantum mechanical predictions arises _ only _ from such classical uncertainty .secondly , measurement outcomes in om s are deterministic . as such ,if an outcome measurement is performed which in qm is described by a ( complete ) set of orthogonal projection operators , then for a system in ontological state there is a unique one of these outcomes which will always be obtained .that is , if the value of was known then the outcome which would be obtained is determinable .one can therefore divide the space up into subspaces according to which of the measurement outcomes each is associated with .mathematically we define different _ characteristic functions _ over which take the value 1 over the values of which yield the particular outcome , and which are 0 otherwise as which would more accurately reflect the possibility of contextuality , i.e. that the full set of measurement operators must be known in order to determine whether the system will give outcome or not . 
] .thus the probability of any particular measurement outcome is given by the correspondence: finally , certain other assumptions necessarily lurk in the background of om descriptions of qm .the first is quite reasonable : what corresponds to the schrdinger evolution of the system is some sort of transformation of the ontological state space into itself .the second is less simple : measurements must disturb the ontological state of the system .a non - disturbing measurement of some outcome would result in the observer assigning the posterior distribution to the system : a sequence of such non - disturbing measurements would generally allow the observer to narrow down their description of the system in such a way that they obtain more predictability than we know to be the case for qm .thus it is presumed that upon measurement an imprecisely understood disturbance occurs which forces an observer to `` smear '' the post - measurement distribution they assign .note that this smearing necessarily happens only over states which lie in supp( ) , the support of the characteristic function ( i.e. the region over which the function is non - zero ) , because we know empirically that ( in the absence of intermediate evolution ) a second measurement containing the same projector must yield this same outcome with certainty . it is not known whether or not there exist om s satisfying all of the above for an arbitrary dimensional quantum system .kochen and specker found such a model for a spin-1/2 ( qubit ) system - their model is reviewed in the next section .subsequently i go on to examine similar models for a three dimensional ( qutrit ) system , models which unfortunately do _ not _ exactly reproduce the quantum predictions .they do , however , come damn close .since the ks theorem applies in three and higher dimensions it is interesting to see the manner in which these models manifest contextuality , as one suspects they must if they are to come as close to qm as they seem to do .we begin with a simple and concrete model which reproduces the operational predictions of the quantum mechanics of a qubit .we then formalize this model a little , which enables us to see how it can be generalized to higher dimensional systems .our initial simple model , that we call `` marble world '' , consists of a marble constrained to move on the surface of a sphere . is then the set of points of a unit sphere , and a generic element can be parameterised as a vector ] , thus the qm predctions for outcomes 0 and 1 are .the blue crosses correspond to the probability of the outcome which in qm is exactly 0.,title="fig:",width=302 ] it is not simple to analytically examine the extent to which the desired correspondence of eq . ( [ probcorrespondence ] ) is satisfied , because of the complexity of parameterizing the integration regions . as suchi simply do the integrations numerically by a monte - carlo method .it turns out that this om does _ not _ reproduce the quantum statistics perfectly - i.e. the correspondence of eq .( [ probcorrespondence ] ) is not exactly satisfied .empirically i find that the maximum deviation from the quantum prediction occurs when the quantum state lies in a 2-dimensional subspace spanned by two of the three considered measurement outcomes .that is , taking and then examining the probabilities for obtaining outcomes gives by far the largest deviations from the quantum predictions .these deviations can be succinctly plotted as in fig .[ su3cosqd ] . 
in the figurethe black lines correspond to the standard quantum mechanical and predictions for outcomes and .the red and green crosses are the corresponding predictions of the om .note that while the probability of the outcome is strictly 0 in qm , in the om it is finite ( the blue crosses ) .the figure shows the curves for two different values of .there is nothing particularly special about - i have not had the patience ( the monte - carlo integrations take quite a while ) to find the actual optimal choice , but trying a few values and comparing `` by eye '' it seems close to the optimal .note that the reason the probability of obtaining the outcome is non - zero in the om is that whenever there exist which are closer to than but which are still close enough to ( for ) to lie within the support of .why then not choose ?doing so seems to cause the other curves to deviate more from the quantum predictions .there are a large variety of `` tweaks '' of this model that one can imagine , and i do not have the computing power ( or patience ) to find optimal parameter choices for each such tweak . before i go on to describe a different om that i personally find more appealing than this first qutrit version , i am going to digress a little and talk about the kochen - specker theorem and this particular om .this is because although this model does not exactly reproduce qm , i believe the sort of manner in which it manifests contextuality is quite natural and much more benign than the dramatic statements one often sees about the implications of the kochen - specker theorem for realist interpretations of quantum mechanics might lead one believe .one of the most profound restrictions on interpretations of qm is provided by the kochen - specker theorem , which holds for three or more dimensional quantum systems .the om of the previous section exhibits the contextuality required by the ks theorem in the following way .consider two different complete ( projection - valued ) measurements in qm : and recall that our rule assignment to an arbitrary element is that a system in state gives the outcome corresponding to the central element to which it is closest .now , there exist some ontological states that are closer to than to but which are closer to one of than to this can be seen simply by means of the 3 dimensional real space analogue in figure 1 , which loosely corresponds to the version of the qutrit om described above . in the figurethe three orthogonal unit vectors represent , and , and the quadrant of the sphere depicted is part of the space .the red lines depict the boundaries of the supports of the characteristic functions - that is , elements within the red boundary are closest to the appropriate vector .clearly if the space is rotated around to obtain a new set of vectors then those points which lie in the region marked `` unfaithful '' will change the outcome they are associated with from to either or . we will refer to elements which always give outcome as `` faithful '' to thus the measurement outcomes in the om are contextual :sometimes ( interestingly , however , not all the time ) the system is in an ontic state for which knowledge of all three measurement outcomes being measured simultaneously is required in order to determine which outcome will be obtained .that is , the probability of a specific outcome depends on its context . 
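the faithful/unfaithful distinction just described is easy to see numerically in the real-space analogue, where the outcome is simply the central element with the largest overlap with the ontic state; the two sample states and the family of rotations below are my own choices for illustration.

```python
import numpy as np

def outcome(lam, triad):
    """index of the central element closest to the ontic state lam"""
    return int(np.argmax([lam @ e for e in triad]))

def triad(angle):
    """e1 held fixed, the other two central elements rotated about it"""
    c, s = np.cos(angle), np.sin(angle)
    return [np.array([1.0, 0.0, 0.0]),
            np.array([0.0, c, s]),
            np.array([0.0, -s, c])]

unfaithful = np.array([1.0, 0.9, 0.9]) / np.linalg.norm([1.0, 0.9, 0.9])
faithful = np.array([1.0, 0.1, 0.1]) / np.linalg.norm([1.0, 0.1, 0.1])

for name, lam in (("unfaithful", unfaithful), ("faithful", faithful)):
    outs = sorted({outcome(lam, triad(a)) for a in np.linspace(0, np.pi / 2, 91)})
    print(f"{name:10s}: outcomes seen as the co-measured pair is rotated: {outs}")
```

the first state changes its outcome as the context is varied, the second never does, which is exactly the behaviour of the regions marked in the figure.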
a real space analogue depicting a quadrant of the space and three orthogonal `` central elements '' and .the regions of ontic states closest to each such element are bounded by the red curves .the ontic states which always give the outcome regardless of their context are indicated as `` faithful '' .the set of faithful states also forms for the particular choice of .,title="fig:",width=226 ] i now proffer the opinion that the contextuality of this om does not , however , encumber us with the conceptual problems which are often associated with understanding the implications of the kochen - specker theorem . roughly speaking ,these puzzling implications are usually taken to be that the theorem presents an obstruction to our making sense of the system `` having values '' of certain physical quantities which are subsequently revealed by measurements .the om sidesteps these conceptual issues because it is , in some sense , _relational_. more precisely , we should expect that a more complete description of measurements would involve consideration of the ontic states associated with the measurement apparatus , and it is not unreasonable that the response of the system to a measurement depends on the _ relationship _ between the ontic state of the apparatus and the ontic state of the system . because the measurements and involve different physical arrangements , we should expect that the corresponding ontic states of the measurement apparatus itself are different . without the more complete descriptionwe do not ( yet ) have a proper description of the apparatus ontic states .however , since in our om the measurement outcome is determined solely by the relationship between the _ central elements _ and the actual state of the system , it is quite uncontroversial to view the central elements or as in a natural correspondence with these ( as yet undetermined ) apparatus ontic states .now , physics would be an extremely difficult endeavor if we needed to know everything about the measurement apparatuses _ and _ the systems under investigation to form a coherent understanding of the world - in effect we need the simplifications afforded by the implicit reductionism which allows us to talk about systems at all .the ks theorem denies us the most extreme form of reductionism .however , in the sort of om discussed here we obtain the next best thing : we do not need to know everything about the highly complex physical reality of the macroscopic measurement devices - we need only know that the important features of the apparatus for the purposes of understanding its role as a measuring device can be encapsulated in the simple mathematical objects , which form the central elements of the measurement .one upshot of this is that the quantum projector is always associated with a _unique _ central element . in spekkensshowed how contextuality could be defined operationally ( and thus for any physical theory , not just quantum mechanics ) in terms of the redundancies in the mathematical description of one theory by another .such mathematical redundancy is certainly present at the level of characteristic functions in this om : there are a continuous infinity of characteristic functions corresponding to , one for each of the continuous infinity of possibly co - measured projectors ( corresponding to the different rotations about the axis in fig . [ omfigure ] ) .if we quantify contextuality by simply counting this redundancy , then this om is highly contextual . 
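before discussing the cosine distribution, a quick numerical cross-check that the qubit model of section ii really does reproduce the born rule, in contrast with the qutrit models above. the density and assignment rule used below are the standard kochen-specker construction (a distribution proportional to the cosine of the angle to the bloch vector, supported on that hemisphere, with the outcome fixed by the hemisphere of the measurement axis the ontic state falls in); the sample size is arbitrary.

```python
import numpy as np

rng = np.random.default_rng(42)
N = 200_000

# sample lambda from rho_psi with the Bloch vector m along z:
# density (1/pi) cos(theta) on the upper hemisphere <=> cos^2(theta) uniform
ct = np.sqrt(rng.uniform(size=N))
st = np.sqrt(1.0 - ct ** 2)
ph = rng.uniform(0.0, 2.0 * np.pi, size=N)
lam = np.stack([st * np.cos(ph), st * np.sin(ph), ct], axis=1)

for beta in (0.0, np.pi / 4, np.pi / 2, 3 * np.pi / 4):
    n = np.array([np.sin(beta), 0.0, np.cos(beta)])      # measurement axis
    p_model = np.mean(lam @ n > 0.0)                     # outcome +1 iff lam.n > 0
    print(f"angle {beta:4.2f}: model {p_model:.4f}   Born cos^2(beta/2) {np.cos(beta / 2) ** 2:.4f}")
```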
in my opinion , however , the contextuality of the model is ameliorated by the fact that there is no need to have a large mathematical redundancy in the ontic states , the relations to which represent `` actual physical quantities '' .looking at this another way , there is a simplicity in the _ mathematical rules _ from which the contextual measurement outcomes are determinable .it is not as if we have to compile a huge table of different outcomes for each of the different possible measurements and all possible input ontic states .it seems likely to me that at the level of just counting ` representation redundancies ' all om s will need to be infinitely ( preparation and measurement ) contextual .some models will manifest that contextuality in a more simple and natural way than others . a final word about relationalism .there is a large body of work advocating identification of relational `` elements of reality '' in classical physical theories - general relativity in particular . even in terms of non - relativistic qm, however , it should be realized that the quantum mechanical wavefunction is itself devoid of operational meaning unless the experimentalist understands which external devices the relative phases in a particular superposition are defined with respect to .it is perhaps not too much of a stretch to see contextuality in general as simply a reflection of this essential feature of physics .the puzzle then becomes why it is that classical physics is non - contextual .upon first exposure to the kochen - specker qubit model of section ii my initial reaction was to feel slightly cheated with regard to the manner in which the born rule is obtained .that is , the specific choice of a cosine distribution peaked at the corresponding quantum mechanical bloch vector is clearly what leads directly to the quantum mechanical `` '' type of probability distribution .if some sort of om really does underpin qm , then we should expect that the probability density would take some `` natural '' form .in fact there are very few probability densities which arise naturally in physics - and the ones which do ( such as the gibbs distribution ) tend to be understandable in terms of an application of a maximum entropy principle . in looking for om s there is a further intuition at play - namely that many of the strange features of quantum mechanics are derivable from us as observers being subject to an encompassing _ epistemic restriction _( see e.g. for much more on this sort of thing ) .it seems intuitive that such restrictions would generally lead to assignment of _ uniform _ probability distributions over the ` hidden ' variables .( again , see for this type of thing at play ) .there is another point in favor of looking for uniform distributions : as has been mentioned several times , ` collapse ' - viewed as an updating of knowledge in an om - is still necessarily accompanied by physical disturbance . 
collapses which induce a `` disturbing dynamics '' that then leads to a new assignment of some strange probability density function would presumably have to be quite specialized .given the variety of systems to which qm applies this seems somewhat unlikely .thus i have spent some time looking for om s which , like the above models , preserve much of the hilbert space structure of qm , but which do nt require assigning some sort of contrived probability density in order to conform closely to qm .here is one example of an om which only involves uniform distributions .we let the quantum system which has dimension be described in an ontic space consisting of all rank 1 projectors in where .( in fact for the examples presented here i ll take . ) corresponding to the quantum state there is an element defined via : \ ] ] ( i.e. the bottom right block is dimensional ) .this is simply choosing exactly equal to on a fixed d - dimensional subspace of .elements corresponding to the qm measurement projectors can similarly be defined .the probabilities of obtaining for a qubit in state .the black curves are the qm predictions , the crosses are the probabilities for an om based on uniform probability distributions in a 3-dimensional ontic state space . , title="fig:",width=302 ] an equivalent plot to fig .[ omfigure ] , but now for a qutrit om based on uniform probability densities in a 4-dimensional ontic state space.,title="fig:",width=302 ] the probability density corresponding to quantum state is then note that i am leaving in the parameter which determines the size of the support of .the rule for determining measurement outcomes ( and therefore the characteristic functions ) are unchanged from that of the qutrit model discussed previously . in fig .( [ su2_in_d3_uniform ] ) the the quantum probabilities and those obtained for the version of this om are plotted .unlike the kochen - specker om for a qubit this one does not reproduce the quantum probabilities exactly .( [ su3_in_d4_uniform ] ) is a plot for the qutrit version of this om - it essentially does no worse than the original qutrit om despite it utilizing only uniform distributions .the interesting conclusion is that something close to the desired sin / cos distributions can be `` inherited '' in what i like to think of as a kind of ` concentration of measure ' effect ( though i m not sure this is exactly what is going on , it is simply where the intuition came from ) .the kochen - specker theorem is essentially the only nontrivial theorem known which affects any attempt at constructing an ontological model ( taking the viewpoint that locality is simply a way of enforcing non - contextuality in a natural manner , and bell s theorem demonstrates the need for a violation of this ) . for each quantum mechanical projector system prepared in gives this outcome with probability 1 , and in the om s presented here this is reflected in the fact that clearly , however , must have support only over states faithful to if not , then a system prepared in would sometimes give an outcome different to , were an observer to vary over the multiple sets of measurements containing this projector / central element .we see therefore that in the om s considered here the strict inclusion holds : ( in the analogy of figure 1 the boundary of the support of for is denoted by the blue line of latitude - i.e the probability distributions have support over the complete set of states faithful to their associated measurement . 
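for completeness, here is the kind of monte-carlo estimate that produces curves like the ones just described, written for the qubit-in-D=3 version of the uniform model. the embedding of states as vectors padded with a zero, the fidelity-ball support, the value of the support parameter and the sample size are all my guesses at details not spelled out above, so the output should be read as a rough reconstruction rather than a reproduction of the figures.

```python
import numpy as np

rng = np.random.default_rng(7)

def haar_states(D, n):
    z = rng.normal(size=(n, D)) + 1j * rng.normal(size=(n, D))
    return z / np.linalg.norm(z, axis=1, keepdims=True)

def model_probability(theta, eps=0.8, n=200_000, D=3):
    """P(outcome 0) when cos(theta/2)|0> + sin(theta/2)|1>, embedded in C^D,
    is measured in the {|0>,|1>} basis of the toy model."""
    psi = np.array([np.cos(theta / 2), np.sin(theta / 2), 0.0])
    e0, e1 = np.eye(D)[0], np.eye(D)[1]                  # embedded central elements
    lam = haar_states(D, n)
    lam = lam[np.abs(lam @ psi) ** 2 >= 1.0 - eps]       # uniform on a fidelity ball
    return np.mean(np.abs(lam @ e0) ** 2 > np.abs(lam @ e1) ** 2)   # nearest element wins

for theta in np.linspace(0.0, np.pi, 5):
    print(f"theta {theta:4.2f}: model {model_probability(theta):.3f}"
          f"   Born {np.cos(theta / 2) ** 2:.3f}")
```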
)one may well ask whether this strict inclusion is a feature specific to these ontological models . in some related work show that in fact _ every _ reasonable ontological model of three or more dimensional quantum systems _ must _ have this feature for an infinite number ( though perhaps not all ) of the probability distributions corresponding to quantum states .interestingly , this is proven by applying a kochen - specker style construction to quantum _ preparation procedures _ instead of measurements .it would be nice to find an om which is so close to the quantum predictions that current experiments can not rule it out .( i am certainly interested in hearing about present experimental bounds on how precisely the born rule is known to be satisfied in higher dimensional systems - this can then give me a target to ` shoot for ' . ) on prime numbered days i am not even sure current experiments rule out the om s discussed above .i have played around with many variations on the sorts of om i have discussed here that use different spaces , but have not yet found an om reproducing the quantum statistics exactly for any dimension .i know of no proof , however , that such an om can not be lurking out there .i certainly hope that one is at present it seems the only way qm can really make sense to me .this paper would not have arisen without my many stimulating and conceptually important discussions with nicholas harrigan , who declined an offer to share authorship of it . over the yearsmy thoughts about an ontology for quantum mechanics have benefited greatly through discussions with rob spekkens .99 j. s. bell , rev . mod . phys . * 38 * , 447 ( 1966 ) .s. kochen and e. p. specker , j. math . mech .* 17 * , 59 ( 1967 ) .spekkens , eprint : quant - ph/0401052 .fuchs , quant - ph/0105039 .t. tilma and e.c.g .sudarshan , j. phys .a : math . gen . * 35 * 10467 , ( 2002 ) . also at eprint : math - ph/0205016v5 .spekkens , phys . rev .a * 71 * , 052108 ( 2005 ) . also at eprint : quant - ph/0406166 .n. harrigan , t. rudolph and r.w .spekkens ( in preparation ) .
|
certain concrete `` ontological models '' for quantum mechanics ( models in which measurement outcomes are deterministic and quantum states are equivalent to classical probability distributions over some space of ` hidden variables ' ) are examined . the models are generalizations of kochen and specker s model for a single 2-dimensional system - in particular a model for a three dimensional quantum system is considered in detail . unfortunately , it appears the models do not quite reproduce the quantum mechanical statistics . they do , however , come close to doing so , and inasmuch as they simply involve probability distributions over the complex projective space they do reproduce pretty much everything else in quantum mechanics . the kochen - specker theorem is examined in the light of these models , and the rather mild nature of the manifested contextuality is discussed .
|
in qnd measures are exploited to detect and/or produce highly non - classical states of light trapped in a super - conducting cavity ( see for a description of such qed systems and for detailed physical models with qnd measures of light using atoms ) . for such experimental setups ,we detail and analyze here a feedback scheme that stabilize the cavity field towards any photon - number states ( fock states ) .such states are strongly non - classical since their photon numbers are perfectly defined. the control corresponds to a coherent light - pulse injected inside the cavity between atom passages .the overall structure of the proposed feedback scheme is inspired of using a quantum adaptation of the observer / controller structure widely used for classical systems ( see , e.g. , ) .the observer part of the proposed feedback scheme consists in a discrete - time quantum filter , based on the observed detector clicks , to estimate the quantum - state of the cavity field .this estimated state is then used in a state - feedback based on lyapunov design , the controller part . in theorems [ thm : main ] and[ thm : initial ] we prove the convergence almost surely of the closed - loop system towards the goal fock - state in absence of modeling imperfections and measurement errors .simulations illustrate this results and show that performance of the closed - loop system are not dramatically changed by false detections for of the detector clicks . in similar feedback schemes are also addressed with modified quantum filters in order to take into account additional physical effects and experimental imperfections . focuses on physics and includes extensive closed - loop simulations whereas here we are interested by mathematical aspects and convergence proofs .in section [ sec : ideal ] , we describe very briefly the physical system and its quantum monte - carlo model . in section [ sec : feedback ] the feedback is designed using lyapunov techniques .its convergence is proved in theorem [ thm : main ] .section [ sec : filtering ] introduces a quantum filter to estimate the cavity state necessary for the feedback : convergence of the closed - loop system ( quantum filter and feedback based on the estimate cavity state ) is proved in theorem [ thm : initial ] assuming perfect model and detection.this section ends with theorem [ thm : contract ] proving a contraction property of the quantum filter dynamics .section [ sec : simul ] is devoted to closed - loop simulations where measurement imperfections are introduced for testing robustness .the authors thank michel brune , serge haroche and jean - pierre raimond for useful discussions and advices .as illustrated by figure [ fig : expscheme ] , the system consists in a high - q microwave cavity , in a box producing rydberg atoms , in and two low - q ramsey cavities , in an atom detector and in a microwave source . the dynamics model is discrete in time and relies on quantum monte - carlo trajectories ( see ) .it takes into account the back - action of the measure .it is adapted from where we have just added the control effect .each time - step indexed by the integer corresponds to atom number coming from , submitted then to a first ramsey -pulse in , crossing the cavity and being entangled with it , submitted to a second -pulse in and finally being measured in .the state of the cavity is described by the density operator . 
herethe passage from the time step to corresponds to the projective measurement of the cavity state , by detecting the state of the rydberg atom number after leaving . during this same step , an appropriate coherent pulse ( the control )is injected into . denoting , as usual , by the photon annihilation operator and by the photon number operator , the density matrix is related to through the following jump - relationships : where * the measurement operator ( resp . ) , when the atom is detected in the state ( resp . ) with such measurement process corresponds to an off - resonant interaction between atom and cavity where is the direction of the second ramsey -pulse ( in figure [ fig : expscheme ] ) and is the de - phasing - angle per photon . * the probability ( resp . ) of detecting the atom in ( resp . ) is given by ( resp . .* is the displacement operator describing the coherent pulse injection , , and the scalar control is a real parameter that can be adjusted at each time step .the time evolution of the step to , in fact , consists of two types of evolutions : a projective measurement and a coherent injection . for simplicity sakes, we will use the notation of , to illustrate this intermediate step .therefore , in the sequel , we consider the finite dimensional approximation defined by a maximum of photon number , . in the truncated fock basis , corresponds to the diagonal matrix , is a symmetric positive matrix with unit trace , and the annihilation operator is an upper - triagular matrix with as upper diagonal , the remaining elements being .we restrict to real quantities since the phase of any fock state is arbitrary .we set it here to .we aim to stabilize the fock state with photons characterized by the density operator . to this endwe choose the coherent feedback such that the value of the lyapunov function decreases when passing from to .note that , for small enough , the baker - campbell - hausdorff formula yields the following approximation + \frac{\alpha^2}{2}[[\rho , a^\dag - a],a^\dag - a]\end{aligned}\ ] ] up to third order terms .therefore , for small enough , we have \bar \rho\right ) } \\+ \frac{\alpha_k^2}{2}{\text{tr}\left([[\rho_{k+{\text{\scriptsize }}},a^\dag - a],a^\dag - a]\bar\rho\right)}.\end{gathered}\ ] ] thus the feedback \rho_{k+{\text{\scriptsize }}}\right)}\ ] ] with a gain small enough ensures that \rho_{k+{\text{\scriptsize }}}\right)}\big|^2,\ ] ] since \bar \rho\right)}=-{\text{tr}\left([\bar \rho , a^\dag - a]\rho_{k+{\text{\scriptsize }}}\right)} ] and .thus and consequently , the expectation value of decreases at each sampling time : considering the markov process , we have therefore shown that is a super - martingale bounded from below by 0 .when reaches its minimum , the markov process has converged to .however , one can easily see that this super - martingale has also the possibility to converge towards other attractors , for instance other fock states which are all the stationary points of the closed - loop markov process but with instead of .following , we suggest the following modification of the feedback scheme : \rho_{k+{\text{\scriptsize }}}\right ) } & \mbox { if } v(\rho_{k})\le 1-\varepsilon\\ \underset{\alpha\in[-\bar\alpha,\bar\alpha]}{\text{argmax}}~{\text{tr}\left(\bar\rho d(\alpha)\rho_{k+{\text{\scriptsize }}}d(-\alpha)\right ) } & \mbox { if } v(\rho_{k})>1-\varepsilon \\\end{array } \right.\]]with constants .[ thm : main ] consider and assume that for all we have and that take the switching feedback scheme with .for small enough 
and , the trajectories of converge almost surely towards the target fock state .[ rem : support ] the second part of the feedback , dealing with states near the bad attractors , is not explicit and may seem hard to compute .note that , this form has been particularly chosen to simplify the proof of the theorem [ thm : main ] and in practice , one can take it to be any constant control field exciting the system around these bad attractors and ensuring a fast return to the inner set .[ rem : gain ] the controller gain can be tuned in order to maximize at each sampling time , for near .up to third order term in , yields to\rho_{k+{\text{\scriptsize }}}\right)}\right)^2 \left ( c_1 - \frac{c_1 ^ 2}{2 } { \text{tr}\left([\bar\rho , a^\dag - a][\bar\rho , a^\dag - a]\right)}\right).\end{gathered}\]]thus [\bar\rho , a^\dag - a]\right ) } \approx 1/(4\bar n+2) ] . _ proof of lemma [ lem : step1 ] : _ define the matrices and being diagonal , we trivially have .let us fix and assume that for all \frac{1}{2} \left(\frac{\big|\cos\left(\frac{\phi_r+\phi}{2}+\bar n\phi\right)|^4}{{\text{tr}\left(m_g\rho_k m_g^\dag\right)}}+\frac{\big|\sin\left(\frac{\phi_r+\phi}{2}+\bar n\phi\right)|^4}{{\text{tr}\left(m_e\rho_k m_e^\dag\right)}}-1\right)$}{\text{tr}\left(\rho_k\bar\rho\right)}^2.\end{gathered}\]]noting that , , and by cauchy - schwartz inequality , we have with equality if and only if we apply , now , the kushner s invariance theorem ( cf .appendix , theorem [ thm : kushner ] ) to the markov process with the lyapunov function .the process converges in probability to the largest invariant set included in \rho m_s^\dag\right)}=0,~s = g , e\right\}.\end{gathered}\ ] ] in particular , by invariance , belonging to this limit set implies for .taking , and noting that , this leads to however , by cauchy - schwartz inequality , and applying the fact that is a positive matrix , we have with equality if and only if and are co - linear .since has a non degenerate spectrum , is necessarily a projector over one of the eigen - state of , i.e. , a fock state , for some . finally , as we have restricted ourselves to the paths never leaving the set , the only possibility for the invariant set is the isolated point . [ lem : step3 - 2 ] converges to for almost all paths remaining in the set ._ proof of lemma [ lem : step3 - 2 ] : _ define the event . through lemma [ lem : step3 - 1 ] , we have shown that by continuity of , this also implies that as , we have thus and so by theorem [ thm : doob ] , we know that converges almost surely and therefore , as is bounded , by dominated convergence , we obtain now , we have all the elements to finish the proof of the theorem [ thm : main ] . from steps 1 and 2 and the markov property ,one deduces that for almost all paths , there exists a such that for never leaves the set .this together with the step 3 finishes the proof of the theorem. feedback law requires the knowledge of .when the measurement process is fully efficient and the jump model admits no error , it actually represents a natural choice for quantum filter to estimate the value of by satisfying where or , depending on measure outcome and on the control . before passing to the parametric robustness of the feedback scheme ,let us discuss the robustness with respect to the choice of the initial state for the filter equation when we replace by in the feedback . 
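before doing so, it is worth seeing how the ingredients assembled so far fit together numerically. the sketch below runs a single closed-loop quantum monte-carlo trajectory in a truncated fock basis. the measurement operators are taken as m_g = cos(phi_r/2 + phi(n + 1/2)) and m_e = sin(phi_r/2 + phi(n + 1/2)), a form consistent with the eigenvalues quoted in the proof of the first lemma but otherwise an assumption; the switching test assumes v(rho) = 1 - tr(rho rho_target); and every numerical value (truncation, dephasing per photon phi, gain c1, threshold eps, kick alpha_bar) is an illustrative choice, not an experimental parameter of the paper.

```python
import math
import numpy as np
from scipy.linalg import expm

# truncated Fock basis (truncation and all numbers below are illustrative assumptions)
nmax = 9
n = np.arange(nmax + 1)
a = np.diag(np.sqrt(n[1:].astype(float)), k=1)      # annihilation operator
nbar = 3                                            # target Fock state (assumed)
rho_target = np.zeros((nmax + 1, nmax + 1))
rho_target[nbar, nbar] = 1.0

phi = 0.4                                           # dephasing per photon (assumed)
phi_r = np.pi / 2 - (2 * nbar + 1) * phi            # mid-fringe Ramsey phase at nbar
theta = phi_r / 2 + (n + 0.5) * phi
Mg, Me = np.diag(np.cos(theta)), np.diag(np.sin(theta))   # assumed QND operators

def displace(alpha):
    """D(alpha) = exp(alpha (a^dag - a)) for a real control amplitude alpha."""
    return expm(alpha * (a.T - a))

c1, eps, alpha_bar = 0.1, 0.1, 0.2                  # gain, threshold, constant kick (assumed)
K = a.T - a
rng = np.random.default_rng(0)

# initial state: coherent state with mean photon number nbar
amp = np.sqrt(nbar)
psi = np.array([np.exp(-nbar / 2) * amp**k / math.sqrt(math.factorial(k)) for k in n])
rho = np.outer(psi, psi)
rho /= np.trace(rho)

for step in range(200):
    # projective QND measurement of the atom (back-action on the cavity state)
    pg = np.trace(Mg @ rho @ Mg).real
    rho = (Mg @ rho @ Mg / pg) if rng.random() < pg else (Me @ rho @ Me / (1.0 - pg))
    # Lyapunov feedback alpha = c1 tr([rho_target, a^dag - a] rho_{k+1/2}) near the target,
    # a constant kick far from it (the remark after the theorem allows this simplification);
    # the switching test assumes V(rho) = 1 - tr(rho rho_target)
    fidelity = np.trace(rho @ rho_target).real
    if fidelity >= eps:
        alpha = c1 * np.trace((rho_target @ K - K @ rho_target) @ rho).real
    else:
        alpha = alpha_bar
    D = displace(alpha)
    rho = D @ rho @ D.T

print("final fidelity tr(rho rho_target) =", round(np.trace(rho @ rho_target).real, 3))
```

the constant kick used far from the target is the simplification allowed by the remark following the theorem; replacing it by the argmax rule, adding detection errors, or initializing the filter at a state different from the physical one are straightforward extensions. with this sketch in mind, we return to the initialization of the filter.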
note that , theorem [ thm : main ] shows that whenever the filter equation is initialized at the same state as the one which the physical system is prepared initially , the feedback law ensures the stabilization of the target state .the next theorem shows that as soon as the quantum filter is initialized at any arbitrary fully mixed initial state ( not necessarily the same as the initial state of the physical system ) and whenever the feedback scheme is applied on the system , the state of the physical system will converge almost surely to the desired fock state .[ thm : initial ] assume that the quantum filter is initialized at a full - rank matrix and that the feedback scheme is applied to the physical system .the trajectories of the system , will then converge almost surely to the target fock state ._ proof of theorem [ thm : initial ] : _ the initial state being full - rank , there exists a such that where is the initial state of at which the physical system is initially prepared and is a well - defined density matrix .indeed , being positive and full - rank , for a small enough , remains non - negative , hermitian and of unit trace .assume that , we prepare the initial state of another identical physical system as follows : we generate a random number in the interval ; if we prepare the system in the state and otherwise we prepare it at .applying our quantum filter ( initialized at ) and the associated feedback scheme , almost all trajectories of this physical system converge to the fock state .in particular , almost all trajectories that were initialized at the state converge to . this finishes the proof of the theorem . quantum filter admits also some contraction properties confirming its robustness to experimental errors as shown by simulations of figures [ fig : trajerr ] and [ fig : mean ] where detection errors are introduced .we just provide here a first interesting inequality that will be used in future developments .[ thm : contract ] consider the process and the associated filter for any arbitrary control input .we have _ proof _ before anything , note that the coherent part of the evolution leaves the value of unchanged : concerning the projective part of the dynamics , we have applying a cauchy - schwarz inequality as well as the identity , we have applying and , we only need to show that noting , once again , that , we can write : and therefore is equivalent to note that as and are positive hermitian matrices , their square roots , and , are well - defined . once again by cauchy - schwarz inequality , we have summing over , we obtain the inequality and therefore we finish the proof of the theorem [ thm : contract ] . corresponds to a closed - loop simulation with a goal fock state and a hilbert space limited to photons . and are initialized at the same state , the coherent state of mean photon number .the number of iteration steps is fixed to .the dephasing per photon is .the ramsey phase is fixed to the mid - fringe setting , i.e. . the feedback parameter ( with instead of ) are as follows : , and . ) ., scaledwidth=50.0% ] any real experimental setup includes imperfection and error . to test the robustness of the feedback scheme ,a false detection probability is introduced . in case of false detection at step ,the atom is detected in ( resp . ) whereas it collapses effectively in ( resp . ) .this means that in , ( resp . ) , whereas in , it is the converse ( resp . 
) .simulations of figure [ fig : trajerr ] differ from those of figure [ fig : trajideal ] by only : we observe for this sample trajectory a longer convergence time . ., scaledwidth=50.0% ] a much more significative impact of is given by ensemble average .figure [ fig : mean ] presents ensemble averages corresponding to the third sub - plot of figures [ fig : trajideal ] and [ fig : trajerr ] . for (left plot ) , we observe an average fidelity converging to : it exceeds after steps . for ,the asymptotic fidelity remains under and reaches after iteration .the performance are reduced but not changed dramatically .the proposed feedback scheme appears to be robust to such experimental errors .closed - loop quantum trajectories similar to the one of figure [ fig : trajideal ] ( left , ) and [ fig : trajerr ] ( right , ) ., title="fig:",scaledwidth=25.0% ] closed - loop quantum trajectories similar to the one of figure [ fig : trajideal ] ( left , ) and [ fig : trajerr ] ( right , ) ., title="fig:",scaledwidth=25.0% ]in more realistic simulations are reported .they include nonlinear shift per photon ( replaced by a non linear function in ) and additional experimental errors such as detector efficiency and delays .these simulations confirm the robustness of the feedback scheme , robustness that needs to be understood in a more theoretical way .in particular , it seems that the quantum filter forgets its initial condition almost surely and thus admits some strong contraction properties as indicated by theorem [ thm : contract ] . with the truncation to photons, convergence is proved only in the finite dimensional case . but feedback and quantum filter are still valid for .we conjecture that theorems [ thm : main ] and [ thm : initial ] remain valid in this case . in the experimental results reported in the time - interval corresponding to a sampling step is around .thus it is possible to implement , on a digital computer and in real - time , the lyapunov feedback - law where is given by the quantum filter .we recall here doob s inequality and kushner s invariance theorem . for detailed discussions and proofs we refer to ( sections 8.4 and 8.5 ) .[ thm : doob ] let be a markov chain on state space .suppose that there is a non - negative function satisfying where on the set . then furthermore , there is some random , so that for paths never leaving , almost surely . for the statement of the second theorem , we need to use the language of probability measures rather than the random process .therefore , we deal with the space of probability measures on the state space .let be the initial probability distribution ( everywhere through this paper we have dealt with the case where is a dirac on a state of the state space of density matrices ) .then , the probability distribution of , given initial distribution , is to be denoted by .note that for , the markov property implies : [ thm : kushner ] consider the same assumptions as that of the theorem [ thm : doob ] .let be concentrated on a state ( being defined as in theorem [ thm : doob ] ) , i.e. .assume that in implies that .under the conditions of theorem [ thm : doob ] , for trajectories never leaving , converges to almost surely .also , the associated conditioned probability measures tend to the largest invariant set of measures whose support set is in .finally , for the trajectories never leaving , converges , in probability , to the support set of .m. brune , s. haroche , j .-raimond , l. davidovich , and n. 
zagury .manipulation of photons in a cavity by dispersive atom - field coupling : quantum - nondemolition measurements and gnration of `` schrdinger cat '' states ., 45(7):51935214 , 1992 .i. dotsenko , m. mirrahimi , m. brune , s. haroche , j .-raimond , and p. rouchon .quantum feedback by discrete quantum non - demolition measurements : towards on - demand generation of photon - number states . , 2009 .submitted .s. gleyzes , s. kuhr , c. guerlin , j. bernu , s. delglise , u. busk hoff , m. brune , j .- m .raimond , and s. haroche .quantum jumps of light recording the birth and death of a photon in a cavity . , 446:297300 , 2007 . c. guerlin , j. bernu , s. delglise , c. sayrin , s. gleyzes , s. kuhr , m. brune , j .-raimond , and s. haroche .progressive field - state collapse and quantum non - demolition photon counting ., 448:889893 , 2007 .
|
a feedback scheme for preparation of photon number states in a microwave cavity is proposed . quantum non - demolition ( qnd ) measurement of the cavity field provides information on its actual state . the control consists in injecting into the cavity mode a microwave pulse adjusted to increase the population of the desired target photon number . in the ideal case ( perfect cavity and measurements ) , we present the feedback scheme and its detailed convergence proof through stochastic lyapunov techniques based on super - martingales and other probabilistic arguments . quantum monte - carlo simulations performed with experimental parameters illustrate the convergence and robustness of such a feedback scheme .
|
the role of the geomagnetic field in life processes remains unclear .even the fact that some animals can navigate using the geomagnetic field is not yet explained . the nature of biological effects caused by such weak magnetic fields is a physical problem . there are both epidemiological and laboratory studies showing some association between the level of ac electromagnetic fields and human health .however , relatively little is known about the effects of weak static magnetic field , on the order of the geomagnetic field ( gmf ) , in humans .there were only a few laboratory studies focused on the cognitive effects of weak static magnetic fields , particularly the hypomagnetic field .in , 24 subjects , two people at a time , were continuously exposed to 50 nt hypomagnetic field for up to two weeks .a range of psychological tests were performed before and after the magnetic exposure : the space perception test , visual spatial memory , the hand - eye coordination , the reproduction of time intervals , the subject s equilibrium . in all these testsno significant difference was found between the data collected in the geomagnetic and in the hypomagnetic environments .however in , averaged over 55 subjects , the sensitivity of the human eye to a visual light stimulus in the hypomagnetic field was less than that in the gmf by ( 6)% .thus , data available on the effect of hmf on human cognitive processes were insufficient and inconsistent . in our earlier work , we reported that deprivation of the gmf to the level lower than 400 nt affected human cognitive processes .forty people , who all gave their informed concent , were tested in a series of four cognitive tests . under hmf , both the number of errors and task processing times increased by about ( 1.5.5)% , on average .these results were obtained by using several multivariate statistical methods : manova , the discriminant , the factor , and the cluster analyses .the total magnetic effect , calculated as the average over about 120000 trials , was ( 1.7.2)% . this value was rather steady : when the array of data was limited to the measurements of the task processing times only , the average effect was 1.64% ; if the results of six subjects who showed maximal effects were removed from the array , the average effect , then 1.49% , retained its statistical significance at .so , within the limits of this study , the global mean magnetic effect in humans was formed by the bulk of the measured data and by all the subjects .the observed magnetic effect was the consequence neither of particular efficiency of any test used , nor of the presence of particularly sensitive subjects .temperature and atmospheric pressure were studied among possible essential factors , but they did not affect the results . 
it should be noted that all eight measurable characteristics were subjective psychological reactions .it was interesting to understand as well , whether the hypomagnetic field can influence human reactions that are mostly independent of the will of a subject .the pupil size is a characteristic clearly involved in the execution of the aforementioned psychological tests .although psychologically induced pupil constriction / dilatation is known , a physiological reaction to light , the pupillary light reflex , is well expressed .for this reason , the pupil size has been chosen for tracking simultaneously with the above testing of the subjects under hmf / gmf exposure .the aim of conducting the present study was to investigate whether the hypomagnetic field can cause the eye pupil to change in size .no special selection of subjects was made but for equal numbers of men and women and of people aged less than and more than 40 years . there were 20 subjects in each gender - specific or age - specific group , and each subject was tested both in gmf and hmf conditions .gmf deprivation has been reached by the compensation of gmf in a special wooden box of the size m .the box included a wire mesh that shielded a test subject from the outer randomly variable electrostatic field .the magnetic field inside the box was measured by fluxgate sensors fixed near the head of the subject , approximately at the center of the box . a digital feedback system compensated ( along the main axis ) the outer magnetic field and its variations caused by the city electric vehicles and industrial pulses .four circular coils 1 m in diameter were spaced at 0.5 m while having 40 windings in the side coils and 26.5 in the middle ones .the total active electrical resistance was 1.23 ohm .the mf inhomogeneity inside the workspace of the system did not exceed 2% .the main axis of the system was oriented parallel to the gmf ( 44 t ) vector at a precision of 0.5 degree .the bandwidth of the feedback system was about 10 hz , at the mf measuring rate 1000 hz .the residual value of mf inside the box during experiments did not exceed 0.4 t along the main axis and 0.6 t in perpendicular directions .each subject has been tested twice ; the second session was conducted usually in 30 days after the first one . in one of these two sessions, hmf was used , and in the other , for comparison , there were the same conditions but without gmf deprivation . 
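the quoted coil geometry can be cross-checked against the on-axis field of coaxial circular loops. the sketch below is only an order-of-magnitude verification: the axial coil positions (0.25 m and 0.75 m on either side of the centre), a common series current and the probed 0.2 m long region around the head are assumptions inferred from the description, not values taken from the paper.

```python
import numpy as np

mu0 = 4e-7 * np.pi                       # vacuum permeability, T m / A
R = 0.5                                  # coil radius, m (1 m diameter)
z_coils = np.array([-0.75, -0.25, 0.25, 0.75])   # assumed axial positions, spaced 0.5 m
turns = np.array([40.0, 26.5, 26.5, 40.0])       # side / middle / middle / side windings

def B_axis(z, current):
    """on-axis field of the four coaxial loops carrying the same current, in tesla."""
    return np.sum(mu0 * current * turns * R**2 /
                  (2.0 * (R**2 + (z - z_coils) ** 2) ** 1.5))

B_target = 44e-6                         # geomagnetic field along the coil axis, T
I = B_target / B_axis(0.0, 1.0)          # current cancelling the gmf at the centre
print(f"required current : {1e3 * I:.0f} mA")
print(f"dissipated power : {1e3 * I**2 * 1.23:.0f} mW in the 1.23 ohm winding")

# axial homogeneity over a 0.2 m long region around the head position
z = np.linspace(-0.1, 0.1, 5)
dev = np.array([B_axis(zi, I) / B_axis(0.0, I) - 1.0 for zi in z])
print("relative deviation from the centre value, %:", np.round(100 * dev, 2))
```

with these assumptions the compensation current comes out near 0.7 a and the dissipated power near 0.6 w, comfortable values for a feedback-controlled supply driving the quoted 1.23 ohm winding. we now return to the testing protocol.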
to exclude the possible contribution from the order of hmf and gmf sessions , the order of those for a half of the subjects was opposite to those for the other half .measurable were the task processing times and the number of errors in the following tests : ( i ) the rate of a simple motor reflex , ( ii ) recognition of colored words , ( iii ) short - term color memory , and ( iv ) recognition of rotated letters .two of these were modifications of the well - known j.r .stroop and r.n .shepard tests .a total of eight parameters were measured in this study .the protocol of this experiment is described in detail in .what is essential is the following : each of 80 experiments consisted of three time periods : the 10 min of accommodation to the environment at gmf conditions and preparing to be tested ; 10 + 10 min of testing to collect reference data , also at gmf conditions ; and 10 + 10 + 10 + 10 min of testing under gmf conditions ( in 40 `` sham '' experiments ) or under hmf conditions ( in other 40 `` real '' experiments ) .one - minute relax intervals were placed between all these 10-min periods , so that the total duration of an experiment was 76 min .test subjects were not aware of which magnetic field , gmf or hmf , they were subjected to during 40 min of `` exposure . ''a special device has been made for recording eye movements .the plastic frame that was fixed on the subject s head carried an analog video camera ace - s560h ( 0.05 lux , 600 lines ) .a filter was mounted in front of the camera inside the camera cylinder to cut off light with wavelength less than 810 nm .the sensitivity of the camera was enough to work in the ir range .ir leds that were placed around the camera aperture illuminated right eye area , which made it possible to significantly contrast the eye pupil .the pupil movements were recorded in a digital format mpeg-4 converted to 8-bit gray by means of a video capture device .the rate was 25 fps ; the duration of each of 80 records was 76 min. it can be easily derived that a total of about 9 million frames have been collected in this study , half for gmf- , and half for hmf - type experiments .an original computer program has been developed that could treat the footages , frame by frame .after 80 records were made , it turned out that one record failed due to a technical fault .so the results of the corresponding subject were removed from the data set , and the program processed only 78 video files of 39 subjects .a preliminary treatment was as following .first , the program cut the fragments of the footages that corresponded to the accommodation / relax intervals and to the short intervals of eye blinking .what was left were a little less than 20 min of the reference interval ( control gmf conditions ) , and 40 min of the sham ( gmf ) or real ( hmf ) exposure .so we had 39 one - hour footages of gmf / gmf type , i.e. , those of `` sham '' or `` simulation '' experiments , and 39 movies of gmf / hmf type , or movies with `` real '' exposure to hmf . 
for each frame ( 680 pixel ) of the movies ,the program found the image of eye pupil , approximated the pupil by an ellipse , and determined its parameters : short and long axes , rotation angle , horizontal and vertical positions of the pupil , fig .these values were saved in a file also containing a timestamp , frame sequence number , and the mean luminance of the frame ( the mean density of the gray within 8-bit range 0255 ) .the mean luminance was calculated for the entire frame area except the area of the eye pupil .the results of each of 78 experiments were presented as a table / file , each line of which corresponded to a single frame and included the data of its treatment .each column of the file represented an array : sizes of the ellipse axes , frame luminance , etc .the second step of treatment was in examining the written arrays for outliers . due to the contribution of many uncontrollable factors ,the arrays contained not only regular changes , but also a noise .some values in the arrays can deviate from the means so far that their artifact origin is very probable .such data are often removed from samples .the program removed an entire frame s line from the file , if one of the values in the line was spaced from the corresponding sample mean by more than three standard deviations .the reduced arrays were used for further calculations .while a subject is tested , his or her eye rotates in different directions , so the eye pupil is seen from the camera aperture under different angles , as an ellipse .the actual size of the pupil is closer to the ellipse s major axis , because the minor axis varies as a cosine of the angle of view .we used the major axis as a main observable that was determined for each frame . inwhat follows , the arrays of measured pupil sizes , corresponding to the control , or reference , 20-min interval and to `` exposure '' 40-min intervals , are denoted as and for `` real '' experiments , and and for `` sham '' experiments , respectively , or , in a different order , and stand for controls , and and stand for exposure intervals .let , , , and be sample means of those arrays , , , and be their standard deviations , and stand for the index that numbers the subjects . mathematical operations like imply that multiplication by is applied to each element of the array .then we could determine the result of a subject exposure in `` real '' experiment as the mean of the array , or a normalized effect that is centered on unity .however , that would not be a magnetic effect , because the change from to could be due to natural physiological rhythms , to learning in the course of testing , etc .right determination of the magnetic effect of the real hmf exposure would be only in its comparison to the result of `` sham '' experiment , where the mean of the array , or , is calculated .thus , we determine mean magnetic effect as . as can be seen , with such determination , the mean magnetic effect can be considered as the mean of the array .this is the array of `` elementary magnetic effects '' defined for each separate frame of the array .it is convenient , because it makes possible to build different distribution functions and to compare their statistics .first , the magnetic effect has been calculated in average all over the subjects .all the normalized arrays of each subject were combined in single arrays , and , separately for `` real '' and `` sham '' experiments : the distributions of the and elements , i.e. 
, pupil sizes normalized to their means in controls , are shown on fig .[ hmfvsgmf ] .it is the distributions over the relative pupil size values , which are built as histograms , or relative frequencies of corresponding values .the distributions are shown as normalized to unit area under the curves .the arrays lengths were 1692192 for and 1671263 for . , and , for hmf and gmf correspondingly . ]as can be seen , the distributions are different in their means .the distributions are close to the normal one .two - sample -test shows that the difference is statistically significant with an infinitesimal probability of error ( -statistics equals 152 ) . as to the magnitude of the magnetic effect, the ordinary definition gives % .the illumination of the eye in our experiments could possibly vary due to many reasons .it is both the natural and artificial room daylight variations , light from the moving objects on the lcd monitor in front of a test subject , and its individual position in the magnetic exposure box . despite the effective spectrum of measuring radiationwas shifted to the ir range , optical radiation variations could contribute to the outcome because of the pupillary light reflex .therefore , we paid particular attention to this fact .the mean illuminance of the area around the eye pupil was calculated along with the size of the eye pupil , for each frame .it appeared that there is a direct correlation between these two values , and not an inverse one as could be expected , fig . [ correlationlumsize ] .the reason for this is that the position of the camera was not fixed relative to the face of a subject .a subject could adjust the position in the course of a session so that the distance between the camera and the eye often varied .the smaller the distance , the greater was the illuminance due to the ir leds and the greater was the size of the pupil apparent to the camera ; it is a geometric constraint . at the same time , the average luminance over all the frames in hmf experiments has occasionally been greater than in gmf experiments . for this reason, we had to suggest that the observed increase in pupil size under hmf was at least partly caused by this geometric effect .therefore , a correction was necessary to allow for the correlation and exclude the geometric effect of luminance .the correction procedure was to determine the coefficients of simple linear regression and to correct pupil sizes by subtracting corresponding contributions of the regression .the slope of the regression line in fig .[ correlationlumsize ] is , so that corrected values for the sizes have been calculated as , where is an element of arrays , , etc ., is the corresponding luminance , and is the mean luminance averaged over the entire data set .of course , no correlation was found between the values of the frame luminance and corrected pupil sizes .nonetheless , the magnetic effect has stood 100% significant ( -statistics equals 77 , and that figure differs from 100% by a number less than ... ) , at a reduced value though .the mean magnitude of the effect from magnetic exposure can only be computed rather than directly measured , so it depends on the definition .a definition gives for the corrected set of data % .the pupil size distributions corresponding to the `` real '' and `` sham '' experiments , i.e. 
those built on the arrays and , are practically the same as in fig .[ hmfvsgmf ] , with a smaller gap ( not shown ) .it is essential that if the _ area _ rather than the size of pupil was used to calculate the magnetic effect , it would be twice as large , % , with also twice the standard deviation of the distributions . with any definition ,the magnetic effect is 100% statistically significant .statistical significance between `` gmf '' and `` hmf '' persists at even if four people , showing maximal positive magnetic effects ( 13 , 12 , 10 , and 7% ) are removed from the sample . ) and in the work from the parameters of cognitive tests ( dash line , ) ; mean values are shown .points in the inlet plot : a correlation diagram for the individual magnetic effects . ]individual magnetic effects were studied after the regression correction was made for pupil size in each frame .the individual effects reflect the individual sensitivities of subjects to 40-min hmf exposure .the individual magnetic effects in its ordinary definition have been calculated for each test subject . for this , arrays were separated , and for each one the mean value was calculated .the number of available quantities , 39 , was enough to compose an array .the distribution of its elements is shown in fig .[ pupsize - cogntests - distribs ] . also shown is the same distribution calculated from the parameters of the cognitive tests ( see below ) .the distributions are given as the density estimation functions with a gaussian kernel of width equal to 0.25 standard deviations , which corresponds to a histogram of about eight bins in its main interval from to .it is essential that the individual mean magnetic effects , taken separately , were statistically significant .all the subjects except two , who showed the smallest means 0.13% and 0.04% , had their mean magnetic effects significant at the level , at least .the `` real '' and `` sham '' distributions for each one of the test subjects are similar to those in fig .[ hmfvsgmf ] , however with greater gaps and noise .the present study demonstrates that the 40-min exposure to hmf has a statistically significant effect on the subjects : their eye pupils experience a weak dilatation . despite the total mean effectis small , the distributions of the measured values say something essential about the nature of magnetic effects in human .a distribution built on the joint array mixes two different distributions of magnetic effects that can be isolated .of interest are the shapes of these distributions .the first one is the _ general shape _ of individual distributions of the `` elementary magnetic effects , '' i.e. , something that is common to individual distributions apart from their mean values .the individual distributions differ by their means , but have something in common their shape , that can be seen after the means are subtracted from the arrays . 
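the processing chain described above, three-sigma screening of the per-frame records, a single global regression of pupil size on luminance, normalisation to the control means and per-frame elementary magnetic effects, can be condensed into a short script. the exact formulas are partly lost in this extraction, so the combination r_t / mean(r_c) - mean(s_e) / mean(s_c) used below is an assumed reading of them, and the records are synthetic stand-ins generated at 25 fps rather than the actual footage.

```python
import numpy as np

rng = np.random.default_rng(1)

def clean(frames):
    """drop a frame (row) if any of its values lies more than three standard
    deviations away from the corresponding column mean."""
    z = np.abs(frames - frames.mean(axis=0)) / frames.std(axis=0)
    return frames[(z < 3.0).all(axis=1)]

def fake_record(minutes, mean_size):
    """synthetic stand-in record at 25 fps: columns = [pupil major axis, frame luminance]."""
    m = minutes * 60 * 25
    lum = rng.normal(120.0, 10.0, m)
    size = rng.normal(mean_size, 3.0, m) + 0.05 * (lum - 120.0)   # geometric coupling
    return clean(np.column_stack([size, lum]))

Rc, Re = fake_record(20, 60.0), fake_record(40, 60.6)   # "real" experiment: control, hmf
Sc, Se = fake_record(20, 60.0), fake_record(40, 60.0)   # "sham" experiment: control, gmf

# one global regression of pupil size on frame luminance, as in the paper
rows = np.vstack([Rc, Re, Sc, Se])
k = np.polyfit(rows[:, 1], rows[:, 0], 1)[0]
lum_mean = rows[:, 1].mean()

def corrected(t):
    """pupil sizes with the luminance (geometric) contribution subtracted."""
    return t[:, 0] - k * (t[:, 1] - lum_mean)

r_c, r_e, s_c, s_e = (corrected(t) for t in (Rc, Re, Sc, Se))

# per-frame elementary magnetic effects (assumed reading of the elided formula)
elementary = r_e / r_c.mean() - s_e.mean() / s_c.mean()
print("individual mean magnetic effect: %.2f %%" % (100.0 * elementary.mean()))
```

the two kinds of distributions discussed next are built from exactly these quantities: the per-frame elementary effects with their means removed, and the 39 per-subject mean effects.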
in other words , it is the shape of distribution in the joined array , fig .[ distrib - elementaryvsindividual]-a .the other one is the shape of distribution of 39 individual magnetic effects in the array , fig .[ distrib - elementaryvsindividual]-b .the distributions are distinct because the nature of their variances is different .the variance of the first distribution is conditioned by many random factors of brain functioning and physical environment , while the variability of the individual magnetic sensitivity , taken as a biological characteristic , is determined mostly by the phenotypic variation .the distributions fig .[ distrib - elementaryvsindividual ] show that the mean magnetic effect is not due to the presence of a small hypersensitive group of subjects .practically all the persons have demonstrated sensitivity to hmf .however , nearly equal parts of the subjects gave opposite responses to hmf that resulted in a small average effect .at the same time , individual magnetic effects varied significantly within the range % .thus , the standard deviation is essentially greater than the mean of the individual means .for this reason , the total mean is of little value .it resembles a mean fingerprint pattern , which is actually no pattern at all .as one can see , a random error in these experiments is very small due to the huge volume of the data set ; standard errors of the means are about , which certifies the second significant decimal digit in mean magnetic effect magnitudes .possible systematic _ a posteriori _ bias has only been related to the enhanced level of eye luminance in the `` real '' set of experiments .however it has appeared to be inessential , because the pupillary reflex is appreciable only at visible light and not at ir radiation .apart from the geometric effect , no other possible effects on pupil size have been found .neither leds highlight , nor the artificial indoor lighting , nor the outdoor daylight variations influenced the pupil size , which has been established in a special session of testing .as was said above , there were measurands in the cognitive tests .for each of them , the array of the individual mean magnetic effects was separated , and all the arrays were sorted according to the subject s sequence number .let denote these ordered arrays , where index , is the sequence number of a `` psychological '' measurand used ; standing for the measurand of the `` eye pupil size '' . then , one could estimate the correlation between these arrays .a large correlation would signify that one and the same subject possesses higher or lower magnetic sensitivity in different tests , i.e. 
, independently of the measurand used to determine his or her sensitivity .it has turned out that all these arrays _ do not correlate _ ; the mean level of the matrix of correlation coefficients was unambiguously confirms that there were no particularly sensitive subjects among 39 tested , although in every separate test there were people showing rather clear response to the hypomagnetic exposure .constrictions and dilatations of the eye pupil occur independently of a human will .it is an objective physiological reaction rather than a reaction based on a subjective will .it is interesting therefore that there is a similarity between the distributions built on both the reactions , fig.[pupsize - cogntests - distribs ] .essential conclusions follow the fact that this similarity exists together with no correlation between individual means of different measurands .\(1 ) `` wings '' are seen in the shape of distributions , at greater absolute values of the magnetic effect magnitudes .the wings , a few percent in area , are not as clear as the main peaks ; however they can be seen in the shape of individual distributions both for pupil sizes and for the parameters of psychological reactions .this makes it possible to question the statement that there exist in human population a group of people who are particularly sensitive to electromagnetic fields .it is the so - called `` electromagnetic hypersensitivity syndrome '' repeatedly reported in literature .it states that a few percent of people can markedly react even to relatively weak electromagnetic fields that are incapable of appreciable tissue heating . on the face of it , the wing - shaped distribution of individual meansdoes not contradict the hypothesis of hypersensitivity .however , the fact that there is no correlation between the magnetic effects as measured by eye pupil tracking and by psychological reactions , fig .[ pupsize - cogntests - distribs ] inlet , indicates that people demonstrating a very clear magnetic effect can be different . according to our results ,some people tested for a particular biological parameter will clearly react to emf exposure .however , if a different parameter was chosen to be measured , another minor group would react to the same emf .we suggest that emf hypersensitivity exists only as a casual reaction .\(2 ) as was said above , individual magnetic effects have been determined for the same subjects , but from their different characteristics , from the eye pupil size in the present study , on the one hand , and from the number of errors and the test processing time in , on the other hand .these magnetic effects have appeared to be uncorrelated . at the same time , the distributions of these effects are rather similar : both have two major peaks and two minor peaks , or wings , in fig.[pupsize - cogntests - distribs ] .this fact indicates that the reaction of a human to mf exposure is not a systemic reaction .an external factor , like acoustic noise or light , can cause only a systemic reaction that is conditioned by the human perception , by the functioning of the central nervous system . in this case ,different organism s reaction to the external factor should be correlated .apparently , the same is valid with regard to an internal , but _ab initio _ already systemic factor like a biological rhythm . 
unlike such factors of a systemic action, mf is an agent that bypasses human signaling systems , acts directly on tissues , and consequently acts without system , at random .it is exactly this that is observed as the absence of correlation , see ( [ correlation ] ) , between different biological measurands when a subject is exposed to a hmf .a subject during testing can be magnetically sensitive as measured by one parameter and simultaneously insensitive as measured by another one .\(3 ) the data are in accordance with our results that changes between gmf and hmf cause a measurable biological reaction in humans .the authors of the former study have concluded that their data are in agreement with the so called `` radical - pair mechanism '' , see for example . according to this concept , some animal species have a magnetic sense , because the gmf affects spin - correlated pairs in cryptochrome photoreceptors in the eye retina . the findings of the present study are at variance with this hypothesis .the absence of the correlation between different measurands in ( [ correlation ] ) proves that human reaction to magnetic field is not a systemic reaction .consequently , it is not a reaction caused by the visual analyzer , and in particular , by the changes in its retinal cryptochromes .our data are in a better agreement with the idea that the targets of mf are more or less evenly spread over human organism. it might be magnetic nanoparticles found in human brain tissues .magnetic nanoparticles are small magnets that behave like a compass needle ; they can rotate in an external mf .magnetic nanoparticles produce their own relatively large mt - level mf . in turn, this mf can affect magnetosensitive radical - pair biochemical reactions , so that external mfs as weak as 200 nt can cause biological effects .the hypomagnetic field of about 400 nt widens the area of the human eye pupil by about % on total average , with a high statistical confidence .this result is based on human eye video recording at cognitive testing of 39 people in usual geomagnetic environment and under exposure to the hypomagnetic field .the hypomagnetic effect observed in 39 test subjects as measured by eye pupil size and by eight cognitive parameters is likely a general magnetic effect in the human population . due to the fact that magnetic reactions observed simultaneously with respect to different measurandsdo not correlate , these reactions to magnetic fields are mostly casual reactions .it takes a large volume of observations in order to register a very weak total magnetic effect .* kirschvink , j. l. , kobayashi - kirschvink , a. , diaz - ricci , j. c. and kirschvink , s. j. * ( 1992 ) .magnetite in human tissues : a mechanism for the biological effects of weak elf magnetic fields ._ bioelectromagnetics _ * suppl 1 * , 101113 .
|
previously , we reported that the hypomagnetic field obtained by the 100-fold deprivation of the geomagnetic field affected human cognitive processes as estimated in several computer tests . the exposure to the hypomagnetic field caused a statistically significant increase both in the task processing time and in the number of errors . the magnitude of this magnetic effect , averaged over 40 healthy subjects and more than separate trials , was about 1.7% . in the present work , the results of a simultaneous study are described , in which the right eye of each subject was video recorded , while the subject performed the tasks . it has appeared that the pupil size grows in the hypomagnetic field . this effect has been calculated based on the treatment of a large data set of about 6 video frames . averaged all over the frames , the magnetic effect on the pupil area was about 1.6% , with high statistical confidence . this is the first laboratory study in which the number of separate trials has been large enough to obtain rather smooth distribution functions . thus , the small effect of the hypomagnetic field on humans has become evident and statistically valid . + [ 2 mm ] * key words * : biological effects of magnetic fields , magnetoreception in humans , zero magnetic field a.m. prokhorov general physics institute of the russian academy of sciences ; vavilov st . , 38 , moscow , 119991 ; email : binhi.gpi.ru
|
in the 1990s it was discovered that multifractal stochastic processes are useful as models for financial time series .the advantage of these processes is that they naturally combine uncorrelated returns with long - range volatility dependence , and therefore provide accurate models without over - parametrization .several comparison tests have shown that multifractals out - perform generalized autoregressive conditional heteroskedasticity ( garch ) type models as descriptions of asset prices , currency exchange rates and short - term interest rates . a stochastic process is denoted multifractal if it exhibts stationary increments and its structure functions ] , where is the wavelet transform that is the first derivative of a gaussian . ] of . from these momentsthe scaling function is defined by the relation : \sim \tau^{\zeta(q)+q/2}.\ ] ] equation ( [ waveletscaling ] ) is only meaningful if the wavelet - based structure functions $ ] are well - approximated by power - laws which , as mentioned in the introduction , is typically difficult to verify in financial time series .however , by employing deseasonalised high - frequency data , we can demonstrate existence of clear scaling regimes for at least four decades in time and for moments at least as high as .this is shown in figure [ fig2](d ) .the corresponding scaling function is plotted ( as circles ) in figure [ fig3](a ) . in this figurewe have also included the scaling function estimated from standard difference - based structure function analysis ( crosses ) .the two methods yield very similar results , but he wavelet - based structure functions are better approximated by power - laws , and will therefore be preferred in this study .we have also included the scaling function estimated from a realization of a monofractal random walk ( squares ) in order to highlight the concave shape of the scaling function for the rec data .the results presented in this section demonstrates the relevance of multifractal methods to these data and serve as verification of the intermittent nature of stock - price fluctuations .obtained from the wavelet - based structure functions which are plotted in figure [ fig2](d ) .the crosses represent the scaling function estimated from the standard difference - based structure functions .the squares are obtained by applying the wavelet - based structure function analysis to a random walk .the solid curve is the graph of the expression in equation ( [ zeta ] ) with .( b ) : for each month of 2008 we have plotted the ml estimate of the intermittency parameter from the rec price data .( c ) : the dotted curve with crosses is the same as ( b ) , and the full curves with small dots are computed from realisations of the mrw model with .( d ) : for each curve like those plotted in ( c ) we compute the range ( the difference between the maximum value and the minimum value ) .the estimated distribution function of this quantity over a large ensemble of realisations is plotted as the solid curve .the shaded region represents the 95% percentile of this distribution and the solid vertical line represents the range of the curve in ( b).,width=566 ] in the rec time series ( this panel is identical to figure [ fig3](b ) ) .( b ) : monthly average of the investment grade spread for the year 2008 .( c ) the ensemble mean of the year - by - year estimates of that are presented in table [ tab2 ] .the error bars correspond to the sample standard deviation for each year .( d ) annual averages of the investment grade spread for the 
years 2003 - 2009 . , width=529 ] . ml estimates of and computed from the rec data for each month of 2008 . the estimates of are plotted in figure [ fig3](b ) . we have shown that the range of variability of estimated for the rec stock returns can not be attributed to uncertainty in the estimator . a prominent feature of this variability is the abrupt drop of in the months 8 - 10 of 2008 , at the peak of the 2008 financial crisis ( lehman brothers filed for bankruptcy on september 15th 2008 ) . this coincidence suggests a connection between market uncertainty and the intermittency parameter of asset prices . in figures [ fig4](a ) and [ fig4](b ) we compare the -curve for the rec stock with the investment grade spread . the latter is defined as the difference between the interest rate of aaa bonds and the interest rates on 3-year u.s . federal bonds , and is used as a proxy for the credit spread , and hence as a measure of the perceived market risk . from the figures it appears that the credit spread is roughly in anti - phase with the -curve , at least for the last nine months of 2008 , supporting the notion of an anti - correlation between intermittency and market risk . as a further exploration of this hypothesised connection , it would be useful to compute month - by - month estimates of for a larger ensemble of stocks , and for a longer period of time . unfortunately , reliable month - by - month estimates require high liquidity , and are not available for longer time periods for a larger number of stocks . we therefore have to resort to year - by - year estimates , making use of time series of deseasonalised hourly returns from thirteen of the most liquid stocks in the ose . we present year - by - year estimates of for each of these for the time period 2003 - 2009 in table [ tab2 ] . unfortunately the estimates vary significantly between the different stocks , precluding a precise assessment of the evolution of the ensemble averaged during this period . nevertheless , the ensemble mean yields a -evolution over the period ( figure [ fig4](c ) ) which , taking the large error bars into account , is consistent with an anti - correlation with the investment grade spread shown in figure [ fig4](d ) . in particular , we observe that the investment grade spread increases from the middle towards the end of the decade , while the ensemble - averaged decreases during this time . the main result of this paper is that the combination of high - frequency data and deseasonalising provides sufficient information to estimate the intermittency parameter in stock prices and to confirm the multifractal nature of these signals . we also show that if we supplement the structure - function analysis with the mrw model and its corresponding ml estimator , we can estimate the intermittency from short time records ( a month ) .
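as an illustration of the scaling analysis underlying these estimates, the sketch below computes difference-based structure functions for an ordinary random walk and extracts the log-log slopes; for such a monofractal signal the slopes should be close to q/2, the linear behaviour played by the monofractal reference curve (squares) in figure [ fig3](a). the extra q/2 offset appearing in the wavelet-based definition used in the paper is not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(2)
x = np.cumsum(rng.standard_normal(2**16))        # ordinary random walk, H = 1/2

qs = np.arange(1, 6)                             # moments q = 1, ..., 5
taus = np.unique(np.logspace(0, 3, 20).astype(int))

def structure_functions(x, qs, taus):
    """difference-based structure functions S_q(tau) = mean |x(t+tau) - x(t)|**q."""
    S = np.empty((len(qs), len(taus)))
    for j, tau in enumerate(taus):
        dx = np.abs(x[tau:] - x[:-tau])
        S[:, j] = [np.mean(dx**q) for q in qs]
    return S

S = structure_functions(x, qs, taus)
for i, q in enumerate(qs):
    slope = np.polyfit(np.log(taus), np.log(S[i]), 1)[0]
    print(f"q = {q}: fitted log-log slope = {slope:.2f}  (q/2 = {q/2:.1f} for a random walk)")
```

the ml estimates of the intermittency parameter discussed above are obtained from the mrw model directly and not from such log-log fits.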
for the data considered in this paper it is observed that the estimated intermittency varies in time , and we have concluded that this variation is statistically significant . one implication is that it is not possible to describe the observed data using the mrw model with constant parameters , and it therefore lends meaning to the notion of a time - varying degree of intermittency . based on the examples presented in this paper , we suggest that there is a negative correlation between uncertainty in the financial market and the intermittency of stock price fluctuations . this hypothesis will be investigated further in future work , where the techniques presented in this paper will be applied in a broader study of time - variations of estimated scaling exponents . s. c. chapman , b. hnat , g. rowlands , n. w. watkins , scaling collapse and structure functions : identifying self - affinity in finite length time series , nonlinear processes in geophysics 12 ( 2005 ) 767 - 774 . r. morales , t. di matteo , r. gramatica , t. aste , dynamical generalized hurst exponent as a tool to monitor unstable periods in financial time series , physica a : statistical mechanics and its applications ( 2012 ) doi : 10.1016/j.physa.2012.01.004 .
|
maximum likelihood estimation applied to high - frequency data allows us to quantify intermittency in the fluctuations of asset prices . from time records as short as one month these methods permit extraction of a meaningful intermittency parameter characterising the degree of volatility clustering of asset prices . we can therefore study the time evolution of volatility clustering and test the statistical significance of this variability . by analysing data from the oslo stock exchange , and comparing the results with the investment grade spread , we find that the estimates of are lower at times of high market uncertainty . multifractal , high - frequency data , intraday , maximum likelihood , credit spread
|
there exists a wide range of technologies that would immensely benefit from robust , strong , lightweight and energy efficient compliant structures that can change their shape between any set of one - dimensional continuous functions .for example , currently used aircraft slats are relatively heavy and the gaps between wings and slats increase noise levels particulary during take off and landing .a comparison of existing actuation principles shows that pressure based actuators have the greatest potential to create such structures ( figure [ pic : figure_1 ] ) .hence it is not surprising that the pressure driven nastic movement of plants attracted a lot of attention during the last decade .the results of a considerable research effort in this field that was backed up by the defense advanced research agency , national science foundation and the united states army can be found in the book edited by wereley and sater .a comprehensive understanding of the nastic movement of plants requires various disciplines that range from biology and chemistry to material science and structural engineering .the focus of this article is on the structural engineering side .therefore , no attention is given to the functionality of sub - cellular hydration motors or plant cell materials .instead it is assumed that cellular structures are made from common engineering materials and that cell pressures are provided by an external source such as a compressor .furthermore , only prismatic cells are considered .hence , the problem reduces to the understanding of two - dimensional cell geometries and their interactions .an overview of publications that investigate prismatic pressure actuated cellular structures is subsequently given .a concept that is based on plane symmetry groups was patented by dittrich .dittrich combined convex and concave cells to create actuators that can replace double acting cylinders . a similar approach that uses pressurized and void cells was investigated by luo and tong . 
although not directly related to adaptive structures , khire et al studied inflatable structures that are made from a large number of uniformly pressurized hexagonal cells .vos and barret subsequently patented a similar concept .further work that investigates pressurized honeycombs can be found in .numerical tools for the simulation and optimization of two - dimensional cellular structures were , among others , published by lv et al .a novel concept for pressure actuated cellular structures that are made from separately pressurized rows of individually tailored prismatic cells ( figure [ pic : figure_2 ] ) was patented by pagitz et al .it was shown that these structures can be made from arbitrary materials that range from elastomers to steel .furthermore it was shown that cytoskeletons can be used within each cell to reduce , increase the structural weight , stiffness .however , the underlying numerical framework is limited to cellular structures with two cell rows and central cell corner hinges ( figure [ pic : figure_3a ] ) .the aim of this article is to extend previous work by taking into account an arbitrary number of cell rows , hinge eccentricities and rotational as well as axial springs ( figure [ pic : figure_3b ] ) .this allows the design of compliant pressure actuated cellular structures that can change their shape between any given set of one - dimensional continuous functions .furthermore we introduce a potential based optimization approach that reduces the required number of iterations by an order of magnitude .finally it is shown how the proposed framework can be tightly coupled to the geometry of a cellular structure .+ continuous functions.,scaledwidth=85.0% ] [ pic : figure_2 ] the outline of this article is as follows : section 2 introduces state and optimization variables as well as geometric and energy terms of a single pentagonal cell with central cell corner hinges .similar expressions for a triangular cell are derived in section 3 .cell side properties due to non - zero hinge eccentricities and rotational springs are given in section 4 .state and optimization variables of a cellular structures with eccentric cell corner hinges and rotational springs are presented in section 5 .it is shown that the proposed framework reduces to the numerical model published in if hinge eccentricities and rotational springs are small .the computation of equilibrium and optimal reference configurations as well as the optimization of cell side lengths for given target shapes is outlined .previously published optimization approach is enhanced with a potential based approach that reduces the number of iterations by an order of magnitude .section 6 introduces additional state variables for the consideration of axial strains and outlines the computation of corresponding equilibrium configurations and optimal cell side lengths .section 7 presents several examples that demonstrate the performance of the numerical method .section 8 shows how the proposed framework can be tightly coupled to the geometry of a cellular structure .section 9 concludes the article .pressure actuated cellular structures consist of pentagonal and hexagonal cells where the latter can be split into a pentagonal and triangular part .cell side- , internal lengths and state- , internal angles of a single pentagonal cell are shown in figure [ pic : figure_4 ] .it can be seen that hinge eccentricities as well as rotational and axial springs are not considered .this is possible since the corresponding effects can be later 
added without loss of accuracy .state variables and cell sides are ^\top \hspace{5mm}\textrm{and}\hspace{5 mm } \mathbf{v}^\textrm{p } = \left [ \begin{array}{cccc } b_1^\textrm{p } & b_2^\textrm{p } & c_1^\textrm{p } & c_2^\textrm{p } \end{array } \right]^\top .\end{aligned}\ ] ] note that a superscript p " is used for state and optimization variables that define the geometry of a pentagonal cell .furthermore , it is used for internal variables that could be confused with subsequently derived variables .the length of line that divides the pentagon into a triangular and quadrilateral part is and its first - order derivatives with respect to state variables and cell sides are \hspace{4mm}\textrm{and}\hspace{4 mm } \frac{\partial y}{\partial \mathbf{v}^\textrm{p } } = \frac{1}{y } \left [ \begin{array}{c } -\sin\left(\alpha_1^\textrm{p}\right)a^\textrm{p } - \cos\left(\alpha_1^\textrm{p}-\alpha_2^\textrm{p}\right)b_2^\textrm{p } + b_1^\textrm{p}\\ \ \ \ \sin\left(\alpha_2^\textrm{p}\right)a^\textrm{p } - \cos\left(\alpha_1^\textrm{p}-\alpha_2^\textrm{p}\right)b_1^\textrm{p } + b_2^\textrm{p}\\ 0\\ 0 \end{array } \right ] .\end{aligned}\ ] ] the altitude of a pentagonal cell is so that its first - order derivatives with respect to base side and cell sides ^\top ] are ^\top \end{aligned}\ ] ] where hence , first - order derivatives with respect to state variables and cell sides are the pressure potential of a triangular cell without hinge eccentricities is and its first - order derivatives with respect to state variables and cell sides are ^\top \right ) .\end{aligned}\ ] ]the previously published numerical framework assumes rigid cell sides that are connected at cell corners via frictionless hinges .this assumption is valid as long as cell sides are relatively thin .it is subsequently shown how the influence of rotational springs and hinge eccentricities at cell corners can be taken into account .we assume that undeformed cell sides are straight and that , although rigid body motions might be large , rotations at hinges are small .this assumption is valid since cell side rotations in compliant structures are limited by material properties .hence it is possible to assume that cell side lengths are invariant to cell corner rotations .nevertheless , a similar approach could be used for arbitrarily large rotations . 
however, this would increase the number of coupling terms and thus complicate the following derivations .therefore , in order to increase readability , the consideration of axial strains is separately outlined in section 6 .the required state and optimization variables as well as hinge eccentricities and rotational springs of a single cell side ( figure [ pic : figure_6 ] ) are ^\top , \hspace{5 mm } v^\textrm{s } = \left[l^\textrm{s}\right ] , \hspace{5 mm } \mathbf{w}^\textrm{s } = \left [ \begin{array}{cc } d_-^\textrm{s } & d_+^\textrm{s } \end{array } \right]^\top \hspace{5 mm } \textrm{and } \hspace{5 mm } \mathbf{x}^\textrm{s}_{\boldsymbol{\kappa } } = \left [ \begin{array}{cc } e_-^\textrm{s } & e_+^\textrm{s } \end{array } \right]^\top .\end{aligned}\ ] ] the additional global state variables for the hinge eccentricities at cell corners are expressed with respect to cell sides " .hence it is possible to directly write down the state variables of cell sides " as ^\top = \left [ \begin{array}{cc } \kappa_{1-}^\textrm{s } & \kappa_{1+}^\textrm{s } \end{array } \right]^\top \hspace{5mm}\textrm{and}\hspace{5 mm } \left [ \begin{array}{cc } \kappa_{b2-}^\textrm{s } & \kappa_{b2+}^\textrm{s } \end{array } \right]^\top = \left [ \begin{array}{cc } \kappa_{2-}^\textrm{s } & \kappa_{2+}^\textrm{s } \end{array } \right]^\top .\end{aligned}\ ] ] in contrast , state variables of cell sides " and " are not only a function of global state variables but also of local , global pentagonal state variables as shown in figure [ pic : figure_7 ] so that = \left [ \begin{array}{c } \kappa_{1-}^\textrm{s } + \delta\alpha_1^\textrm{p}\\ \kappa_{2-}^\textrm{s } + \delta\alpha_2^\textrm{p } \end{array } \right],\hspace{3 mm } \left [ \begin{array}{c } \kappa_{c1-}^\textrm{s}\\ \kappa_{c1+}^\textrm{s } \end{array } \right ] = \left [ \begin{array}{c } \kappa_{1+}^\textrm{s } + \delta\theta_1^\textrm{p}\\ \kappa_{3-}^\textrm{s } - \delta\alpha_1^\textrm{p } + \delta\theta_1^\textrm{p } + \delta\beta^\textrm{p } \end{array}\right]\hspace{3mm}\textrm{and}\hspace{3 mm } \left [ \begin{array}{c } \kappa_{c2-}^\textrm{s}\\ \kappa_{c2+}^\textrm{s } \end{array } \right ] = \left [ \begin{array}{c } \kappa_{2+}^\textrm{s } - \delta\theta_2^\textrm{p}\\ \kappa_{3-}^\textrm{s } - \delta\alpha_2^\textrm{p } - \delta\theta_2^\textrm{p } + \delta\beta^\textrm{p } \end{array}\right ] .\end{aligned}\ ] ] note that , for example , is the difference of between the current and reference configuration . 
in other words , current state variables of cell sidesdepend on the reference configuration .furthermore it should be noted that for the outer cell row .bending angles of a single cell side are and the corresponding first - order derivatives with respect to state variables , cell side length and hinge eccentricities are ^\top & \hspace{10 mm } \frac{\partial \varphi_+^\textrm{s}}{\partial \boldsymbol{\kappa}^\textrm{s } } & = \frac{1}{l^\textrm{s } - d_-^\textrm{s } - d_+^\textrm{s } } \left [ \begin{array}{cc } d_-^\textrm{s } & l^\textrm{s } - d_-^\textrm{s } \end{array } \right]^\top\\\nonumber \frac{\partial \varphi_-^\textrm{s}}{\partial v^\textrm{s } } & = \frac{1}{l^\textrm{s } - d_-^\textrm{s } - d_+^\textrm{s } } \left(\kappa_-^\textrm{s } - \varphi_-^\textrm{s}\right ) & \hspace{10 mm } \frac{\partial \varphi_+^\textrm{s}}{\partial v^\textrm{s } } & = \frac{1}{l^\textrm{s } - d_-^\textrm{s } - d_+^\textrm{s } } \left(\kappa_+^\textrm{s } - \varphi_+^\textrm{s}\right)\\\nonumber \frac{\partial \varphi_-^\textrm{s}}{\partial \mathbf{w}^\textrm{s } } & = \frac{1}{l^\textrm{s } - d_-^\textrm{s } - d_+^\textrm{s } } \left [ \begin{array}{cc } \varphi_-^\textrm{s } & \kappa_+^\textrm{s } - \kappa_-^\textrm{s } + \varphi_-^\textrm{s } \end{array } \right]^\top & \hspace{10 mm } \frac{\partial \varphi_-^\textrm{s}}{\partial \mathbf{w}^\textrm{s } } & = \frac{1}{l^\textrm{s } - d_-^\textrm{s } - d_+^\textrm{s } } \left [ \begin{array}{cc } \kappa_-^\textrm{s } - \kappa_+^\textrm{s } + \varphi_+^\textrm{s } & \varphi_+^\textrm{s } \end{array } \right]^\top .\end{aligned}\ ] ] the linearized pressure potential of a cell side due to the differential pressure between adjacent cells is \left [ \begin{array}{c } \kappa_-^\textrm{s}\\ \kappa_+^\textrm{s } \end{array } \right ] \end{aligned}\ ] ] so that first - order derivatives with respect to state variables , cell side length and hinge eccentricities are ^\top\\\nonumber \boldsymbol{\pi}^{\textrm{s},v}_p & = -\frac{\delta p^\textrm{s}}{2 } \left ( d_+^\textrm{s } \kappa_+^\textrm{s } - d_-^\textrm{s } \kappa_-^\textrm{s } \right)\\\nonumber \boldsymbol{\pi}^{\textrm{s},\mathbf{w}}_p & = -\frac{\delta p^\textrm{s}}{2 } \left [ \begin{array}{cc } -\left(l^\textrm{s } - d_+^\textrm{s}\right)\kappa_-^\textrm{s } - d_+^\textrm{s } \kappa_+^\textrm{s } & \left(l^\textrm{s } - d_-^\textrm{s}\right)\kappa_+^\textrm{s } + d_-^\textrm{s } \kappa_-^\textrm{s } \end{array } \right]^\top .\end{aligned}\ ] ] the potential of rotational cell side springs is so that its first - order derivatives with respect to state variables , cell side length , hinge eccentricities and springs are ^\top .\end{aligned}\ ] ] the total energy of a cell side is the sum of its pressure and bending energy is and its first - order derivative with respect to local , global pentagonal state variables results in variables that are used for the description of a cellular structure are summarized in figure [ pic : figure_8 ] .state variables describe the state of a cellular structure with central cell corner hinges and state variables augment the latter to describe hinge eccentricities ^\top\in\mathbb{r}^{n^\alpha}\\\nonumber \mathbf{u}_{\boldsymbol{\kappa}}&=\left [ \begin{array}{cccccccc } \kappa_{1,1 - } & \kappa_{1,1 + } & \ldots & \kappa_{n^l , n^p - n^l+2 - } & \kappa_{n^l , n^p - n^l+2 + } & \ldots & \kappa_{n^l+1,n^p - n^l - } & \kappa_{n^l+1,n^p - n^l+1 - } \end{array}\right]^\top\in\mathbb{r}^{n^\kappa}. 
\end{aligned}\ ] ] the total state vector of a cellular structure is defined as ^\top\in\mathbb{r}^{n^\alpha + n^\kappa } \end{aligned}\ ] ] where the number of state variables and is note that is the total number of cells , the number of cell levels and the number of base pentagons .the ratio between state variables of a cellular structure with and without hinge eccentricities is therefore , consideration of hinge eccentricities can , depending on the number of base pentagons and cell rows , triple the number of state variables .side lengths , hinge eccentricities and rotational springs are ^\top\in\mathbb{r}^{n^v}\\\nonumber \mathbf{w}&=\left [ \begin{array}{cccccccccccc } d_{b,1 - } & d_{b,1 + } & \ldots & d_{b , n^p+1 - } & d_{b , n^p+1 + } & \mathbf{d}_{c,1 \pm } & \ldots & \mathbf{d}_{b , n^l \pm } & \mathbf{d}_{c , n^l \pm } & \mathbf{d}_{a,1 \pm } & \ldots & \mathbf{d}_{a , n^p \pm } \end{array}\right]^\top\in\mathbb{r}^{n^w}\\\nonumber \mathbf{x}_{\boldsymbol{\kappa}}&=\left [ \begin{array}{cccccccccccc } e_{b,1 - } & e_{b,1 + } & \ldots & e_{b , n^p+1 - } & e_{b , n^p+1 + } & \mathbf{e}_{c,1 \pm } & \ldots & \mathbf{e}_{b , n^l \pm } & \mathbf{e}_{c , n^l \pm } & \mathbf{e}_{a,1 \pm } & \ldots & \mathbf{e}_{a , n^p \pm } \end{array}\right]^\top\in\mathbb{r}^{n^{x\boldsymbol{\kappa } } } \end{aligned}\ ] ] so that note that cell sides are the reference for state variables .furthermore , since reference sides are assumed to be straight .the total number of pentagonal and hexagonal cells and triangular cells of a cellular structure are the numerical model can be simplified if hinge eccentricities vanish so that cell corner hinges coincide as shown in figure [ pic : figure_9 ] .contributions to the pressure potential from cell sides are zero for and bending angles reduce to = \left [ \begin{array}{rr } 0 & 0\\ 0 & 1\\ -1 & 0\end{array } \right ] \left [ \begin{array}{c } \delta\alpha_1\\ \delta\alpha_2 \end{array } \right ] - \left [ \begin{array}{c } 1\\ 1\\ 1 \end{array } \right ] \kappa \end{aligned}\ ] ] where .the total bending energy \left [ \begin{array}{c } \delta\alpha_1\\ \delta\alpha_2 \end{array } \right ] \end{aligned}\ ] ] so that angles are = \frac{1}{e_1+e_2+e_3 } \left [ \begin{array}{cc } e_3 & -e_2\\ e_3 & e_1+e_3\\ -e_1-e_2 & -e_2 \end{array } \right ] \left [ \begin{array}{c } \delta\alpha_1\\ \delta\alpha_2 \end{array } \right ] .\end{aligned}\ ] ] the bending energy becomes where .hence it is possible to derive a set of cell corner springs that are energetically conjugate to cell corner angles so that ^\top = \frac{1}{e_1+e_2+e_3 } \left [ \begin{array}{ccc } e_3e_1 & e_1e_2 & e_2e_3 \end{array } \right]^\top .\end{aligned}\ ] ] therefore it is possible to eliminate state variables if and thus to significantly simplify the numerical model .furthermore , the presented framework reduces to the numerical model introduced in if hinge eccentricities and rotational springs are zero .state variables of the -th pentagonal cell in the -th cell row can be expressed in terms of local , global state and optimization variables of the -th triangular cell in the -th cell row as shown in figure [ pic : figure_8 ] so that ^\top = \left [ \begin{array}{ccc } \beta_{1,i , j}^\textrm{t}-\alpha_{2,i , j}^\textrm{t}-\theta_{i , j}^\textrm{t } & \beta_{2,i , j}^\textrm{t}-\alpha_{3,i , j}^\textrm{t}-\theta_{i , j}^\textrm{t } & a_{i , j}^\textrm{t } \end{array } \right]^\top .\end{aligned}\ ] ] transformation matrix relates state variables of a pentagonal cell to global state 
variables of a triangular cell \end{aligned}\ ] ] and relates pentagonal to triangular state variables so that ^\top .\end{aligned}\ ] ] similarly , links state variables of a pentagonal cell to triangular cell side lengths ^\top .\end{aligned}\ ] ] transformation matrices for reference state variables are derived in a similar manner and denoted as , for example , .corresponding higher order transformation matrices are the potential energy of a cellular structure is where is a kronecker delta . required derivatives for the simulation and optimization of pressure actuated cellular structures are summarized in figure [ pic : figure_10 ] .the first - order derivative is subsequently used to outline the assembly process .the derivative of the -th cell row is where are the state variables of a cellular structure that consists only of cell rows ] are the cartesian coordinates of the center of the obstacle and ^\top$ ] are the vector components of cell side .the radius of a cell side is minimal for the change of state variables and lagrange multipliers at an equilibrium configuration with respect to cell side lengths is = - \left [ \begin{array}{ccc } \displaystyle\boldsymbol{\pi}^\mathbf{uu } + \sum_{i=1}^{n^{\lambda_r}}\frac{\partial^2 r_{\textrm{min},i}}{\partial\boldsymbol{\alpha}^2 } \lambda_r^i & \displaystyle\frac{\partial \mathbf{r}_{\textrm{min}}}{\partial\boldsymbol{\alpha}}^\top & \displaystyle{\frac{\partial\mathbf{c}}{\partial\boldsymbol{\alpha}}}^\top\\ \displaystyle\frac{\partial \mathbf{r}_{\textrm{min}}}{\partial\boldsymbol{\alpha } } & \mathbf{0 } & \mathbf{0}\\ \displaystyle\frac{\partial\mathbf{c}}{\partial\boldsymbol{\alpha } } & \mathbf{0 } & \mathbf{0 } \end{array } \right]^{-1 } \left [ \begin{array}{c } \displaystyle\boldsymbol{\pi}^\mathbf{uv}\\ \mathbf{0}\\ \mathbf{0 } \end{array } \right ] \end{aligned}\ ] ] where and are linear constraints for the boundary conditions .the part of the gradient matrix that corresponds to the second pressure set is build from . like in previous examples ,the contact force optimization is performed by successively using a gradient and potential based approach .note that convergence details are not provided since they do not differ substantially from previous examples .the numerical framework approximates cellular structures by using eccentric cell corner hinges and rotational , axial springs .this is a necessary simplification for an efficient optimization .however , eccentricities and springs need to be iteratively correlated to the geometry of a cellular structure before it can be build with the desired shape changing properties . the numerical and geometric model of a single cell corneris shown in figure [ pic : figure_15 ] .it is subsequently outlined how both models can be closely coupled .the vector that consists of axial forces , cell corner rotations and cell side lengths is known from the numerical model for each pressure set .axial forces , cell corner rotations are used since they have to be carried , sustained by cell corners .their dependence on cell side springs is small .in contrast , axial strains and bending moments depend heavily on cell side springs . 
the parameters of the geometric model can be optimized for such that the cross sectional area of a cell corner is minimized for a given stress constraint so that where is augmented with a lagrange multiplier so that note that this constraint considers all pressure sets .the constraint can be replaced at an optimized configuration with to obtain the gradient matrix .\end{aligned}\ ] ] the geometric parameters at the interface of neighbouring cell corners are coupled so that continuity is preserved at any point .hinge eccentricities , rotational and axial springs can be obtained from the geometric model .for example , the first - order derivative of with respect to is derivatives of vector with respect to state variables and cell side lengths are known from the numerical model .hence it is possible to obtain derivatives of , for example , hinge eccentricities with respect to state variables so that these terms can then be used to fully couple the geometric and numerical model .this article presented a computational framework for the simulation and optimization of compliant pressure actuated cellular structures .it complements previous work by taking into account an arbitrary number of cell rows , rotational , axial springs and hinge eccentricities at cell corners .it was shown that the proposed method reduces to the previously published work if hinge eccentricities and rotational , axial springs are negligible .the convergence rate of the optimization was significantly enhanced by combining a gradient and potential based approach .several examples were presented that demonstrate the quadratic convergence rate for both , the computation of equilibrium configurations and optimal cell side lengths .finally , it was shown that the parameters of the proposed framework can be tightly coupled to the geometry of a cellular structure . _( an early version of this article was uploaded on arxiv in march 2014 . ) _
|
previous work introduced a novel concept for pressure actuated cellular structures that can change their shape between any two given one - dimensional continuous functions . however , the underlying computational framework is limited to two cell rows and central cell corner hinges . this article extends these results by taking into account an arbitrary number of cell rows , hinge eccentricities and rotational as well as axial cell side springs . this allows the design of compliant pressure actuated cellular structures that can change their shape between any given set of one - dimensional continuous functions . furthermore , we introduce a potential based optimization approach that reduces the previously required number of iterations by an order of magnitude . finally , it is shown how the optimization can be tightly coupled to the geometry of a cellular structure . several examples are presented that demonstrate the performance of the proposed framework . + + * keywords * _ adaptive - biomimetic - cellular - compliant - morphing - structure _ * notation *
|
during the last two decades , blind source separation ( bss ) has attracted an important interest . the main idea of bssconsists of finding the transmitted signals without using pilot sequences or a priori knowledge on the propagation channel .using bss in communication systems has the main advantage of eliminating training sequences , which can be expensive or impossible in some practical situations , leading to an increased spectral efficiency .several bss criteria have been proposed in the literature e.g. .the cm criterion is probably the best known and most studied higher order statistics based criterion in blind equalization and signal separation areas .it exploits the fact that certain communication signals have the constant modulus property , as for example phase modulated signals . the constant modulus algorithm ( cma )was developed independently by and was initially designed for psk signals .the cma principle consists of preventing the deviation of the squared modulus of the outputs at the receiver from a constant .the main advantages of cma , among others , are its simplicity , robustness , and the fact that it can be applied even for non - constant modulus communication signals .many solutions to the minimization of the cm criterion have been proposed ( see and references therein ) .the cm criterion was first minimized via adaptive stochastic gradient algorithm ( sga ) and later on many variants have been devised .it is known , in adaptive filtering , that the convergence rate of the sga is slow . to improve the latter , the authors in proposed an implementation of the cm criterion via the recursive least squares ( rls ) algorithm .the author in proposed to rewrite the cm criterion as a least squares problem , which is solved using an iterative algorithm named least squares cma ( ls - cma ) . in , the authors proposed an algebraic solution for the minimization of the cm criterion .the proposed algorithm is named analytical cma ( acma ) and consists of computing all the separators , at one time , through solving a generalized eigenvalue problem .the main advantage of acma is that , in the noise free case , it provides the exact solution , using only few samples ( the number of samples must be greater than or equal to , where is the number of transmitting antennas ) .moreover , the performance study of acma showed that it converges asymptotically to the wiener receiver .however , the main drawback of acma is its numerical complexity especially for a large number of transmitting antennas . an adaptive version of acma was also developed in . more generally , an abundant literature on the cm - like criteria and the different algorithms used to minimize them exists including references .in this paper , we propose two algorithms to minimize the cm criterion .the first one , referred to as givens cma ( g - cma ) , performs prewhitening in order to make the channel matrix unitary then , it applies successive givens rotations to find the resulting matrix through minimization of the cm criterion . 
for large number of samples , prewhitening is effective and the transformed channel matrix is very close to unitary , however , for small sample sizes , it is not , and hence results in significant performance loss .in order to compensate the effect of the ineffective prewhitening stage , we propose to use shear rotations .shear rotations are non - unitary hyperbolic transformations which allow to reduce departure from normality .we note that the authors in used givens and shear rotations in the context of joint diagonalization of matrices .we thus propose a second algorithm , referred to as hyperbolic g - cma ( hg - cma ) , that uses unitary givens rotations in conjunction with non - unitary shear rotations .the optimal parameters of both complex shear and givens rotations are computed via minimization of the cm criterion .the proposed algorithms have a lower computational complexity as compared to the acma .moreover , unlike the acma which requires a number of samples greater than the square of the number of transmitting antennas , g - cma and hg - cma do not impose such a condition .finally , we propose an adaptive implementation of the hg - cma using sliding window which has the advantages of fast convergence and good separation quality for a moderate computational cost comparable to that of the methods in .the remainder of the paper is organized as follows .section [ sec : formulation ] introduces the problem formulation and assumptions . in sections [ sec : gcma ] and[ sec : hgcma ] , we introduce the g - cma and hg - cma , respectively .section [ sec : ahgcma ] is dedicated to the adaptive implementation of the hg - cma . some numerical results and discussionare provided in section [ sec : results ] , and conclusions are drawn in section [ sec : conclusion ] .consider the following multiple - input multiple - output ( mimo ) memoryless system model with transmit and receive antennas : where ^{t} ] is the additive noise vector , represents the mimo channel matrix , and ^{t} ] is the vector of the estimated source signals , is the global system matrix and is the filtered noise at the receiver output .ideally , in bss , matrix separates the source signals except for a possible permutation and up to scalar factors , i.e. where is a permutation matrix and is a non - singular diagonal matrix . in the sequel, we propose to use the well known cma to achieve the desired bss .in other words , we propose to estimate the separation matrix by minimizing the cm criterion : where is the entry of , with ] include all possible combinations of source vectors , then the criterion ( where is such that is non singular ) is minimized if and only if satisfies : or , in the absence of noise : where is an permutation matrix and is an diagonal non - singular matrix .the proof can easily be derived from that of theorem in .in this section , we propose a new algorithm , referred to as g - cma , based on givens rotations , for the minimization of the cm criterion .it is made up of two stages : 1 ._ prewhitening : _ the prewhitening stage allows to convert the arbitrary channel matrix into a unitary one . hence , this reduces finding an arbitrary separation matrix to finding a unitary one . 
moreover , prewhitening has the advantage of reducing vector size ( data compression ) in the case where and avoiding trivial undesired solutions .givens rotations : _ after prewhitening , the new channel matrix is unitary and can therefore be computed via successive givens rotations .here , we propose to compute the optimal parameters of these rotations through minimizing the cm criterion .the prewhitening matrix can be computed by using the classical eigendecomposition of the covariance matrix of the received signal ( often , it is computed as the inverse square root of the data covariance matrix , ) .the whitened signal can then be written as : therefore , assuming the noise free case and that the prewhitening matrix is computed using the exact covariance matrix , we have : where is an unitary matrix . from ( [ eq08 ] ) ,it is clear that , in order to find the source signals , it is sufficient to find the unitary matrix and hence the separator can simply be expressed as : , which , in the absence of noise , results in .now , to minimize the cm criterion in ( [ eq04 ] ) w.r.t . to matrix , we propose an iterative algorithm where is rewritten using givens rotations . indeed , in jacobi - like algorithms , the unitary matrix can be decomposed into product of elementary complex givens rotations such that : where refers to the number of sweeps ( iterations ) and the givens rotation matrix is a unitary matrix where all diagonal elements are one except for two elements and . likewise , all off - diagonal elements of are zero except for two elements and .elements , and are given by : & = & \left[\begin{array}{cc } \cos ( \theta ) & e^{\jmath \alpha } \sin(\theta ) \\-e^{-\jmath \alpha } \sin(\theta ) & \cos(\theta)\end{array}\right]\end{aligned}\ ] ] to compute , we need to find only the rotation angles .the idea here is to choose the rotation angles such that the cm criterion is minimized . for this purpose ,let us consider the unitary transformation even though the latter matrix is transformed at each iteration of the proposed algorithm . ]given the structure of , this unitary transformation changes only the elements in rows and of according to : where refers to the entry of .the algorithm consists of minimizing iteratively the criterion in ( [ eq04 ] ) by applying successive givens rotations , with initialization of . are computed such that is minimized at each iteration . in order to minimize , we propose to express it as a function of .since the application of givens rotation matrix to modifies only the two rows and , the terms that depend on are those corresponding to or in ( [ eq04 ] ) . considering ( [ eq10 ] ) and ( [ eq11 ] ) , we have : + \sum_{j=1}^{k}\sum_{i=1 , i\neq p , q}^{m } \big(|\bar{y}_{ij}|^{2}-1\big)^{2 } \end{array}\end{aligned}\ ] ] on the other hand , by considering ( [ eq11 ] ) and the following equalities : and after some manipulations , we obtain : with : ^{t}\label{eq16}\\ & & \mathbf{t}_{j}=\big[\frac{1}{2}\big(|\bar{y}_{pj}|^{2}-|\bar{y}_{qj}|^{2}\big),~ \re(\bar{y}_{pj}\bar{y}_{qj}^{*}),~\im(\bar{y}_{pj}\bar{y}_{qj}^{*})\big]^{t}\label{eq17}\end{aligned}\ ] ] where and denote real and imaginary parts of , respectively . using ( [ eq14 ] ) , we get : then , plugging ( [ eq18 ] ) into ( [ eq12 ] ) yields : given that the second and third summations in ( [ eq19 ] ) do not depend on , the minimization problem is equivalent to the minimization of : where and . 
finally , the solution that minimizes ( [ eq20 ] ) is given by the unit norm eigenvector of corresponding to the smallest eigenvalue eigenvalue problem that can be solved explicitly . ] .given ^t ] is the noise covariance matrix .the source signals are assumed to be of unit variance .we use the data model in ( [ eq01 ] ) ; the system inputs are independent , uniformly distributed and drawn from 8-psk , or 16-qam constellations .the channel matrices are generated randomly at each run but with controlled conditioning ( their entries are generated as i.i.d .gaussian variables ) .unless otherwise specified , we consider transmit and receive antennas .the noise variance is determined according to the desired signal to noise ratio ( snr ) . in all figuresthe results are averaged over 1000 independent realizations ( monte carlo runs ) .[ fig_1 ] depicts the sinr of hg - cma vs. the snr .we compare the three solutions , i.e. , linear approximation to zero , semi - exact and exact solutions for shear rotations in hg - cma for 8-psk and 16-qam constellations .the sample size is and the number of sweeps is set equal to 10 .we observe that the three solutions have almost the same performance for both 8-psk and 16-qam constellations .therefore , in the following simulations , in hg - cma , we will consider the linear approximation to zero solution . in fig .[ fig_2 ] , we investigate the effect of the number of sweeps on the performance of g - cma and hg - cma . the figure shows the sinr vs. the snr for different numbers of sweeps . in this simulation, we assumed 8-psk constellation and samples .we observe that , as expected , the performance is improved by increasing the number of sweeps and from 5 sweeps upwards , the performance remains unchangeable .in the rest of this section we consider sweeps in g - cma and hg - cma .moreover , we can see that for small number of iterations hg - cma is much better than g - cma and the gap between them decreases as the number of iterations increases .[ fig_3 ] compares the proposed hg - cma and g - cma algorithms with acma in terms of sinr vs. snr for 8-psk constellation and various numbers of samples .we observe that , as expected , the larger the number of samples , the better the performance for all algorithms . for small number of samples , i.e. , we observe that hg - cma significantly outperforms acma and g - cma .we also observe that g - cma performs better than acma for low to moderate snr while for , acma becomes better .the reason that acma performs worse than hg - cma is that the number of samples is less than the number of transmit antennas squared , i.e. , and as we stated above for acma to achieve good performance in the case of psk constellations the number of samples must be at least greater than . for , hg - cma still provides the best performance while the performance of acma becomes very close to that of hg - cma and better than that of g - cma .we can say that for small or moderate number of samples the proposed algorithms are more suitable as compared to acma even for psk constellations . 
in fig .[ fig_4 ] , we consider the case of 16-qam constellation .we notice that the proposed hg - cma and g - cma algorithms provide better performance as compared to acma .we also observe that , unlike the 8-psk case in fig .[ fig_3 ] , the performance of hg - cma and g - cma are close in the case of 16-qam .moreover , we can see that the gap between the performance of the proposed algorithms and acma gets smaller as the number of samples increases .we can say that the proposed hg - cma and g - cma algorithms are more suitable as compared to acma for non - constant modulus constellations , since they provide better performance for a lower computational cost . in figs .[ fig_5 ] and [ fig_6 ] , we plot the sinr of hg - cma , g - cma and acma vs. the number of samples for 8-psk and 16-qam constellations , respectively .we compare the performance of the proposed algorithms hg - cma and g - cma with acma for different antenna configurations and snr=30 db . in both figureswe observe that , the larger the number of samples , the better the performance . in fig .[ fig_5 ] , in the case of 8-psk constellation , we observe that hg - cma provides the best performance . for small number of samples , g - cma outperforms acma .however , for large number of samples acma performs better . in fig .[ fig_6 ] for 16-qam , hg - cma and g - cma outperform acma and the gap is larger for small number of samples and decreases as the number of samples increases . in figs .[ fig_7 ] and [ fig_8 ] we plot the symbol error rate ( ser ) of hg - cma , g - cma and acma vs. snr for different number of samples for 8-psk and 16-qam constellations , respectively .we considered and . in fig .[ fig_7 ] , for 8-psk case , we notice that the proposed hg - cma provides the best performance . we also observe that g - cma outperforms acma for small number of samples , here .however , for large number of samples acma performs better than g - cma for all snrs .note that for very large snr and it is expected that acma outperforms hg - cma since acma in this case provides the optimal ( exact in the noiseless case ) solution . in the case of 16-qam in fig .[ fig_8 ] , we observe that the proposed hg - cma and g - cma algorithms always outperform acma , even for large number of samples .therefore , we can conclude that the proposed hg - cma and g - cma are preferable to acma in the case of non - constant modulus constellations , i.e. 16-qam , for any number of samples . in the case of constant modulus constellations , e.g. psk, hg - cma and g - cma are better than acma for small number of samples . however ,for large number of samples and the range of interest of snr from db , hg - cma and acma have close performance and acma is better than g - cma . to assess the performance of the adaptive hg - cma , we consider here , unless stated otherwise , a mimo system ( i.e. ) , an i.i.d .8-psk modulated sequences as input sources , and the processing window size is set equal to . in fig .[ fig_time ] , we compare the convergence rates and separation quality of adaptive hg - cma ( with different number of rotations per time instant ) , ls - cma and adaptive acma. one can observe that adaptive hg - cma outperforms the two other algorithms in this simulation context .even with only two rotations per time instant , our algorithm leads to high separation quality with fast convergence rate ( typically , few tens of iterations are sufficient to reach the steady state level ) . 
in fig .[ fig_snr ] , the plots represent the steady state sinr ( obtained after 1000 iterations ) versus the snr .one can see that the adaptive hg - cma has no floor effect ( as for the ls - cma and adaptive acma ) and its sinr increases almost linearly with the snr in db . in fig .[ fig_m ] , the snr is set equal to and the plots represent again the steady state sinr versus the number of sources .severe performance degradation is observed ( when the number of sources increases ) for the ls - cma and adaptive acma while the adaptive hg - cma performance seems to be unaffected . in fig .[ fig_k ] , the plots illustrate the algorithms performance versus the chosen processing window size .] .surprisingly , hg - cma algorithm reaches its optimal performance with relatively short window sizes ( can be chosen of the same order as ) . in the last experiment ( fig .[ fig_qam ] ) , we consider 16-qam sources ( with non cm property ) . in that case, all algorithms performance are degraded but adaptive hg - cma still outperforms the two other algorithms . to improve the performance in the case of non constant modulus signals , one needs to increase the processing window size as illustrated by this simulation result but more importantly , one needs to use more elaborated cost functions which combines the cm criterion with alphabet matching criteria e.g. . , , , 8-psk , 16-qam , and the number of sweeps is 10.,width=566,height=453 ] , , , and 8-psk.,width=566,height=453 ] .8-psk case , , , and 10 sweeps.,width=566,height=453 ] .16-qam case , , , and 10 sweeps.,width=566,height=453 ] for different antenna configurations .8-psk case , snr=30 db , and 10 sweeps.,width=566,height=453 ] for different antenna configurations .16-qam case , snr=30 db , and 10 sweeps.,width=566,height=453 ] .8-psk case , , , and 10 sweeps.,width=566,height=453 ] .16-qam case , , , and 10 sweeps.,width=566,height=453 ] , , , 8-psk.,width=566,height=453 ] , , 8-psk.,width=566,height=453 ] , , 8-psk.,width=566,height=453 ] , 8-psk.,width=566,height=453 ] , 16-qam.,width=566,height=453 ]we proposed two algorithms , g - cma and hg - cma , for bss in the context of mimo communication systems based on the cm criterion . in g - cma we combined prewhitening and givens rotations and in hg - cma we combined shear rotations and givens rotations .g - cma is appropriate for large number of samples since in this case prewhitening is accurate . however ,in the case of small number of samples hg - cma is preferred since shear rotations allow to compensate for the prewhitening stage , i.e. , reduce the departure from normality . for psk constellations and small number of samples, we showed that the proposed hg - cma and g - cma algorithms are better than the conventional acma .however for large number of samples hg - cma and acma have close performance and acma outperforms g - cma . 
in the case of 16-qam constellation ,hg - cma and g - cma outperform largely the conventional acma for small number of samples .also , for the hg - cma , a moderate complexity adaptive implementation is considered with the advantages of fast convergence rate and high separation quality .the simulation results illustrate its effectiveness as compared to the adaptive implementations of acma and ls - cma .they show that the sliding window size can be chosen as small as twice the number of sources without significant performance loss .also , they illustrate the trade off between the convergence rate and the algorithm s numerical cost as a function of the number of used rotations per iteration . as a perspective, the proposed technique can be adapted for the optimization of more elaborated cost functions which combine the cm criteria with alphabet matching criteria .it has been shown in subsection [ sub : exactsol ] that the optimal solution in the sense of minimizing the cm criterion in ( [ eq27 ] ) is given by ( see equation ( [ eq38 ] ) ) : where is the solution of : in the following , we will show that ( [ a2 ] ) is a -th order polynomial equation .let the matrices and ] and ^t ] , ( [ a5 ] ) is rewritten as : which is equivalent to : which is a -th order polynomial equation of the form with : using the same reasoning , we can find the coefficients of the -th order polynomial equation in ( [ eq51 ] ) ; . with ] . where the matrices and $ ] represent the generalized eigendecomposition of the matrix pair ( , ) .lin he , m. g. amin , c. reed , jr ., r. c. malkemes , a hybrid adaptive blind equalization algorithm for qam signals in wireless communications , _ ieee trans .52 , no . 7 , pp . 2058 - 2069 , july 2004 .a. labed abdenour , t. chonavel , a. aissa - el - bey , a. belouchrani , min - norm based alphabet - matching algorithm for adaptive blind equalization of high - order qam signals , _european trans.on telecom ._ , feb . 2013 .r. iferroudjene , k. abed - meraim , a. belouchrani , `` a new jacobi - like method for joint diagonalization of arbitrary non - defective matrices '' , _ applied mathematics and computation _211(2 ) : 363 - 373 ( 2009 ) .l. sheng , r. c. de lamare , blind reduced - rank adaptive receivers for ds - uwb systems based on joint iterative optimization and the constrained constant modulus criterion " , _ ieee tr .
|
we propose two new algorithms to minimize the constant modulus ( cm ) criterion in the context of blind source separation . the first algorithm , referred to as givens cma ( g - cma ) uses unitary givens rotations and proceeds in two stages : prewhitening step , which reduces the channel matrix to a unitary one followed by a separation step where the resulting unitary matrix is computed using givens rotations by minimizing the cm criterion . however , for small sample sizes , the prewhitening does not make the channel matrix close enough to unitary and hence applying givens rotations alone does not provide satisfactory performance . to remediate to this problem , we propose to use non - unitary shear ( hyperbolic ) rotations in conjunction with givens rotations . this second algorithm referred to as hyperbolic g - cma ( hg - cma ) is shown to outperform the g - cma as well as the analytical cma ( acma ) in terms of separation quality . the last part of this paper is dedicated to an efficient adaptive implementation of the hg - cma and to performance assessment through numerical experiments . blind source separation , constant modulus algorithm , adaptive cma , sliding window , hyperbolic rotations , givens rotations .
|
research evaluation systems such as the uk s _ research assessment exercise ( rae ) _ form bases on which governments and funding councils formulate policies on where to focus investment . due to some expectations that higher quality research is generated by larger teams , there have been campaigns to concentrate funding on institutions which already have a wealth of resources , in terms of finances and staff numbers .however , the most recent uk exercise , the results of which were announced in 2009 , has identified pockets of research excellence in smaller universities as well .this has enhanced counter - arguments by supporters of competition , who advocate a more even dispersion of resources .the notion of _ critical mass _ in research has existed for a long time without precise definition .it is loosely described as the minimum size a research team must attain for it to be viable in the longer term .arguments extending this notion to larger teams have been used to support the viewpoint that bigger is better in research . using the uk research base as a test bed ,if a continual policy of concentration of funding were indeed to lead to better quality research , one would expect that the research quality of the _ russell group _ of larger universities with larger research teams would be superior to that of the _ 1994 group _ , which contains smaller universities with smaller teams .one would expect this to be reflected , for example , in a significant difference in the average citations counts associated with researchers from each group .however , a recent report found that this was not to be the case in the main .the first aim of this paper is to explain this recent finding .the explanation comes from the existence of not one , but _ two _ critical masses , which are discipline dependent .the lower of these matches the heretofore loose notion of critical mass described above , and the research quality of teams up to about twice this size is strongly mass dependent . however , once the quantity of researchers in a team exceeds an _ upper critical mass _ , a crossover occurs and the dependency of quality on quantity reduces significantly .here it is shown that the existence and properties of this second critical mass are the reasons why the research quality of the russell group and the 1994 group are of comparable levels .the consequences of this is that a continual policy of concentration of resources into the largest universities is ineffective . instead, medium sized research groups should be strengthened to achieve upper critical mass , resulting in a greater collective benefit for a given discipline . while the first aim of this paper is thus to address policy issues at the level of government and funding bodies , the second aim concerns policy at the level of universities and teams .in particular , it is shown that two - way communication links are the main drivers of research quality , and therefore , maximisation of the former optimises the latter . in this paperwe describe a mathematical model introduced in ref . 
and which relates the quality of research to the quantity of individuals in the team .then , we present a brief description of the rae and of how its results can be used to compactly reflect the quality of a research team .the various university representation groupings in the uk are also described , and we demonstrate how our model is capable of capturing the research quality of each such grouping simultaneously .our main results concern the reduced dependency of average research quality on team size beyond the upper critical mass which explains the recent findings reported in ref .we are interested in the relationship between the quality of research and the resources available , specifically in the form of the quantity of individuals in a research team .there are two competing viewpoints in the current debate on the nature of this relationship .the first is that bigger is always better , and support should be concentrated in a few institutes which already have abundant resources .the second viewpoint is that it is the quality of individuals that drives research .this viewpoint is supportive of a policy of spreading of resources to wherever excellent individuals are found and to support competition more evenly . on the other hand , according to a theory recently advanced in ref . research quality is strongly quantity dependent only up to a point , beyond which this relationship reduces .we now summarise the basic reasoning behind these viewpoints .naively , one may expect that the _ strength _ of a research team is approximately proportional to the number of individuals it contains : a research group of ten individuals , say , may be expected , on average , to produce twice as many papers , train twice as many phd students and generate twice as much income as a group of five . representing the research strength of the individual in a research team by ,the combined strength of a team of size according to this view is .defining research _ quality _ as the average strength per team member , one has , where is the mean individual calibre . according to this naive expectation ,different teams are thus supposed to have qualities distributed around an average , the mean calibre for all teams in the discipline .all particular influences , such as the prestige of the institution , the impact of international collaborations , the presence of outstanding scientific personalities in the team , etc , enter the model as _ noise _ , so that .the expected research quality in a given discipline , averaged over all institutions , may then be written which is independent of .borrowing terminology from physics , this naive expectation is that research quality is _ intensive_. this viewpoint leads to the conclusion that the quality of research produced by a given team is a direct measure of the strength of the individuals constituting that team and that individual calibre is the dominant force which drives research quality . on this basis , the best policy to maximise the quality of a team is to recruit members of high individual calibre to maximise the value for the team .here it is demonstrated that this viewpoint is too naive and a more sophisticated one is advocated . in ref . , an alternative , hierarchical theory for team strength was advanced .this has its origins in the statistical physics of _ complex systems _ , the properties of which are not simply the sums of the properties of their individual components . instead_ interactions _ between these components must be taken into account . 
denoting the strength of the interaction between the and individuals in a research team as , the overall team strength for a sufficiently small teamis now modelled by where , provided the team is not large ( see below ) , the number of two - way communication links is .however , there is a limit to the number of two - way communication links an individual can sustain in a large team .if the average such limit is denoted , then a large team of size may fragment into smaller subteams , which themselves may interact . in this case . with an average intra - subteam interaction strength , and an average inter - subteam interaction strength , say , the average strength of a team of size is now where is the number of inter - subteam interactions .therefore the expected , relationship between research quality and team quantity may be modelled by where , , and are related to the various mean interaction strengths between hierarchies .we refer to as the _ upper critical mass_. in fact , and is small for large . for this model ,research quality is , in fact , _ extensive _ - it depends on the quantity of individuals involved in the activity. however , this dependency reduces beyond the upper critical mass , becoming more intensive for sufficiently large .the model ( [ extensive ] ) allows for a definition of another critical mass in research , which we refer to as the _ lower critical mass_. this captures the traditional notion of critical mass described in the introduction . by considering the predicted effects of adding new members of staff to a research team , or of transferring staff between teams of various sizes , a scaling relation between the lower and upper critical masses , and ,was found .this relationship is we define a research team of size to be in ref . , it was also shown that , in order to maximise the overall strength of a research discipline , it is best to prioritise support for medium teams , while small teams must strive to surpass the lower critical mass to survive . of course , while team strength also increases with increasing calibre of individuals , this is not the dominant mechanism .in fact , it is an order of magnitude smaller than the collaborative effect . in ref . , critical masses were determined for a multitude of research areas on the basis of the quality measurements coming from the uk s most recent rae . using hypothesis testing , model ( [ intensive ] )was rejected in favour of model ( [ extensive ] ) .the resulting critical masses are listed in table 1 , alongside the estimates for the parameters for the disciplines analysed . while most data sets are normal , some fail either the kolmogorov - smirnov and/or the anderson - darling tests and these are flagged in the table .confidence intervals associated with fits to these data must be treated carefully as approximate only .if the breakpoint were absent , the linear relationship between research quality and team quantity in the first part of ( [ extensive ] ) would be expected to extend indefinitely . in this circumstance ,maximisation of research quality would indeed be achieved by an unlimited policy of concentration . however , as evidenced in ref . 
, and as we shall see in the next section , evidence for the existence of the upper critical point is overwhelming .the uk s 2008 rae is considered to be the most precise evaluation of its kind to date .this exercise was not based on citation counts .instead research areas were scrutinized by experts in various fields to determine the proportion which fell into five quality levels .these are defined as 4 * ( world - leading research ) , 3 * ( internationally excellent ) , 2 * ( recognised internationally ) , 1 * ( recognised nationally ) and unclassified . in 2009 , a formula based on the resulting quality profiles was used to determine how research funding is distributed to each university . the formula used by the funding council for england associates each rank with a weight in such a way that 4 * and 3 * research respectively receive seven and three times the amount of funding allocated to 2 * research .research ranked at or below 1 * attract no funding .this funding formula may therefore be considered as a measure of the quality of a research team . denoting the percentage of a team s research which was evaluated as by , we define the quality of that team by . in this way , the theoretical maximum quality is .in fact no team achieved such a score , with the best teams achieving about half this .the uk s academic research base is organised into a number of representation groups ( see e.g. , ref . for an overview ) .these are ( i ) the _ russell group _ of research intensive universities , mostly with medical schools , ( ii ) the _ 1994 group _ of research intensive universities mostly without medical schools , ( iii ) the _ million+ group _ of modern universities which were formed after 1992 , ( iv ) _ university alliance _ of business - like universities ( v ) the _ guildhe _ education - focused group and the remaining ( vi ) unaffiliated universities . as mentioned in the introduction , the result of ref . ( which is perhaps surprising to proponents of a policy of concentration ) is that , based on citation counts , there is little difference between the research quality of the russell group and the 1994 group .we begin the explanation of why this is the case by the sequence of plots in fig.1 , for physics . in fig.1(a ) we normalise the quality measurements to the mean coming from eq.([intensive ] ) by plotting against the names of the various institutions , listed alphabetically . for physics ,the mean measured quality of research teams in the uk is . from fig.1(a ) , the research teams in the russell and 1994 groups mostly have quality values lying above this mean while those of the remaining universities mostly lie below .the nature of the situation is better revealed , however , in fig.1(b ) , where the same data are plotted against the size of the research teams .the solid line is a piecewise linear regression fit to the model ( [ extensive ] ) and the dashed curves represent the resulting confidence intervals .the correlation between quality and quantity to the left of the breakpoint is evident , but this dependency reduces on the right . a statistical analysis of this and other fits and the resulting values for the modelare detailed in ref . where the coefficients of determination are also given .the dotted line in fig.1(b ) is the extrapolation of the left fit into the supercritical region . 
in the absence of a breakpoint ,if the interactions which govern research quality for the small and medium universities ( described by the first part of eq.([extensive ] ) ) applied also to the large ones , then the research quality for the latter may also be expected to follow this line . in this case , a policy of concentration of resources could be justified . clearly this is not the case .the reason for the comparable qualities of the russell group and the 1994 group is now clear from fig.1(b ) .the large research teams in both representation groups have a different interaction pattern than those for small and medium groups . with a large value of , research quality is saturated to the right of the breakpoint , the concentration of more staff into these teams only leads , on average , to a linear increase in research strength and therefore does not significantly increase overall average research quality .the rae quality results for the other representation groups are also elucidated in fig.1(b ) : they are scattered about a line of positive slope for . in these cases ,the addition of more mass , in the form of new staff , to these teams is expected to drive up quality as the number of two - way communication links within the team increases quadratically .a policy of supporting medium sized groups is expected to enhance the quality of research in the sector overall .the effectiveness of the model is reinforced in fig.1(c ) , which is on the same scale as fig.1(a ) to facilitate comparison . in fig.1(c )the quality scores have been renormalised by plotting against the alphabetically arranged research teams , where is the expected quality value coming from the model ( [ extensive ] ) and is .the standard deviations for fig.1(a ) and fig.1(c ) are and , respectively , the tighter distribution about model ( [ extensive ] ) compared to ( [ intensive ] ) , illustrating its superiority . moreover , in contrast to fig.1(a ) , the data for all representation groups and for the teams belonging to unaffiliated universities straddle the line in fig.1(c ) .the model successfully captures the quality of _ all _ groupings and may form the basis of a renormalised ranking system , which takes size into account .similar analyses may be performed for other research areas and those for biology , geography , earth and environmental sciences , archaeology , law , education , applied mathematics and sociology are given in figs.2 - 8 .( in the cases where two or more institutions put forward a joint rae submission , that submission is associated with the first group in the list , 1994 , million+ , alliance , guildhe , unaffiliated to which at least one of them belongs . 
) in each case the comparable levels of research quality associated with the large russell and 1994 groups may be explained by the existence of the upper critical point and the levelling of the dependency of quality on quantity in the supercritical zone where .the fitting procedure resulted in three possible values for the critical mass in the computer sciences , and these are labelled in the table with indices 1 , 2 and 3 .the coefficients of determination for these fits were , , and , respectively .the competing nature of these fits may be explained if computer science is not one but several subject areas , each with their individual work patterns .similarly , for archaeology , the coefficient of determination for the first listed fit is , and the second has .also , in the second panels of each figure the line of best fit for small and medium groups is extrapolated into the supercritical zone .clearly the large groups are not described by this extrapolated line , and this is overwhelming evidence for the existence of the upper critical mass . however , in the case of biology ( fig.2(b ) ) for example , it is interesting to note that a few of the best performing research teams , which appear as outliers to the overall fit , are well described by this overshoot .these teams have sizes only marginally above . a similar ,if less pronounced , phenomenon occurs with many of the other disciplines . while one must be careful not to attempt to explain too much on the basis of a simple model , and there are undoubtedly many more complex factors at work , it is tempting to speculate that this `` overshoot phenomenon '' may be caused by a greater than usual degree of cohesiveness in these highly successful research teams , in which two - way communication links are sustained despite their group sizes exceeding the upper critical mass . in the third part of each figure , renormalised plots of against are presented for the different subject areasin each of these cases ( and for the other subject areas listed in table 1 ) the standard deviations reduce significantly in going from the normalised plots of against to their renormalised counterparts .this tighter bundling of the data about the renormalised , local , quality expectation values indicates that the overall research base is even better than hitherto realised as the performances of small and medium teams are closer to those of the large ones ( mostly from the russell and 1994 groups ) , once size is taken into account .a current debate within academia and between policy makers concerns the relative merits of concentration and dispersion of research resources , and is discussed qualitatively in ref . . herequantitative input into this debate has been given , which clearly supports the viewpoint that ever increasing concentration of resources into a small number of large institutes is not the best way to increase overall research quality .this is because of the existence of an upper critical mass in research , which has been clearly established . below this value, the overall strength of research teams tends to rise quadratically with increasing size , in proportion to the number of two - way communication links . beyond the upper critical mass, however , this rise reduces and approaches linearity . 
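The comparison between the first and third panels of each figure can be reproduced in a few lines: normalise each team's quality by the global mean and, separately, by the size-dependent expectation coming from the fitted model, then compare the spreads. The sketch below assumes arrays N and s of team sizes and measured qualities and reuses the hypothetical piecewise_quality model and parameter tuple from the previous sketch; it is illustrative only.

```python
import numpy as np

def normalised_spreads(N, s, model, params):
    """Spread of quality normalised by the global mean (as in the first
    panels) versus quality normalised by the size-dependent expectation
    s_hat(N) from the fitted model (as in the third panels)."""
    s = np.asarray(s, dtype=float)
    s_hat = model(N, *params)          # expected quality given team size
    spread_flat = float(np.std(s / s.mean()))
    spread_model = float(np.std(s / s_hat))
    return spread_flat, spread_model
```

A clearly smaller second number reproduces the tighter bundling seen in the renormalised plots, which is the sense in which the size-aware model describes all representation groups at once.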
defining quality as the average strength per team member ,this means that research quality levels out for supercritical team size .this is the explanation behind recent findings based on citation counts , which show that the research quality from teams associated with the 1994 group of uk universities is on a par with that of the russell group elite .the analysis presented herein also shows that simple rankings of research teams drawn up in the wake of rae may give a misleading impression , as they do not take size into account , and therefore may not properly compare like with like . taking size into account , as in the third parts of each figure presented here ,is necessary for a better indication of team performance .we have established that the strength of a community of interacting researchers is greater than the sum of its parts .having established the correlation between group size and success , and ascribing this correlation as primarily due to two - way communication links , it is clear that facilitation of such communication should form an important management policy in academia .for example , while modern managerial experiments such as distance working or `` hotdesking '' may be reasonably employed in certain industries , these would have a negative effect for researchers , for whom proximate location of individual office space to facilitate multiple , spontaneous , two - way interactions is important .indeed , we have seen that the best - performing research groups frequently have sizes about , or slightly above , the upper critical mass and we have identified a possible mechanism as to why these groups outperform others . in advance of the uk s future research excellence framework , and similar exercises in other countries, it is hoped that this article will help inform debate on policy matters in the broad academic research community .99 m. harrison , does high quality research require critical mass ?in _ the question of r&d specialisation : perspectives and policy implications _ , pp 57 - 59 . edited by d. pontikakis , d. kriakou and r. van baval , european commission : jrc technical and scientific reports ( 2009 ) .j. adams and k. gurney , funding selectivity , concentration and excellence - how good is the uk s research ? published by the higher education policy institute ( 25 march 2010 ) .see also z. corbyn , data disproves case for distribution of research funds on historical basis , times higher no.1,940 ( 25 - 31 march , 2010 ) p.17 ..the results of the fit ( [ extensive ] ) for a variety of academic disciplines .the symbol indicates failure of the kolmogorov - smirnov normality test and indicates that the anderson - darling test fails as well .the symbol indicates that pure mathematics is best fitted by a single line ( see ref .caveats for the computer sciences and archaeology are discussed in the text . [ cols= " < , > , > , > , > , > " , ]
|
using the results of the uk s research assessment exercise , we show that the size or mass of research groups , rather than individual calibre or prestige of the institution , is the dominant factor which drives the quality of research teams . there are two critical masses in research : a lower one , below which teams are vulnerable and an upper one , above which average dependency of research quality on team size reduces . this levelling off refutes arguments which advocate ever increasing concentration of research support into a few large institutions . we also show that to increase research quality , policies which nourish two - way communication links between researchers are paramount .
|
while mathematics and juggling have existed independently for thousands of years , it has only been in the last thirty years that the mathematics of juggling has become a subject in its own right ( for a general introduction see polster ) .several different approaches for describing juggling patterns have been used .the best - known method is siteswap which gives information what to do with the ball that is in your hand at the given moment , in particular how `` high '' you should throw the ball ( see ) . for theoretical purposesa more useful method is to describe patterns by the use of cards .this was first introduced in the work of ehrenborg and readdy , and modified by butler , chung , cummings and graham .these cards work by focusing on looking at the _ relative order _ of when the balls will land should we stop juggling at a given moment .every throw then has the effect of changing the relative ordering of the balls .but we can only effect the order of a ball that is thrown ; the remaining balls will still have the remaining relative order to each other . as a consequence if there are balls there are different things which can happen .namely , we do nt throw a ball ( the `` '' ) or we throw a ball so that it will land in some order relative to the other balls ( which can be done in ways ) .the four different cards for the case are shown in figure [ fig : basiccards ] ( in all drawings of cards the circle at the bottom indicates the hand which either does not catch the ball at that `` beat '' or catches and throws effecting the relative ordering of the ball(s ) ; we will always think of time moving from left to right ) .the advantage of working with cards is that the cards can work independently of each other , that is the choice of card to use at a given time is not dependent on the preceding choice of cards . in siteswapthe opposite is true in that you must know all preceding throws to determine which throws are possible . given a set of these cards for a given we can repeat these periodically to form a pattern .moreover , every possible siteswap with period and at most balls will occur as a unique combination of these cards ( see ) .therefore the number of different siteswap sequences of period for exactly balls is given by if we want to find all of the juggling patterns of _ minimal _ period and using exactly balls we can then use mbius inversion and divide out by the period to get where is the mbius function ( see ) .( we will revisit this with more detail in section [ sec : counts ] . ) for as long as there has been interest in the mathematics of juggling there has been interest in extending results to multiplex juggling ( where more than one ball is allowed to be caught at a time ) . in ehronborg and readdythey produced possible cards for multiplex juggling which were a natural generalization .namely multiple balls could come down at a given time and would then be redistributed appropriately . 
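As a brief aside, the single-catch counts described above are easy to compute directly. Assuming the standard card counts, (b+1)^n sequences of period n use at most b balls, so (b+1)^n - b^n use exactly b, the sketch below applies the Möbius inversion and the division by the period; the function names are ours.

```python
def mobius(n):
    """Mobius function via trial-division factorisation."""
    result, p = 1, 2
    while p * p <= n:
        if n % p == 0:
            n //= p
            if n % p == 0:      # squared prime factor => mu = 0
                return 0
            result = -result
        p += 1
    return -result if n > 1 else result

def divisors(n):
    return [d for d in range(1, n + 1) if n % d == 0]

def siteswaps_exactly(b, n):
    """Siteswap sequences of period n with exactly b balls:
    (b+1)**n use at most b balls, b**n use at most b-1."""
    return (b + 1) ** n - b ** n

def patterns_minimal_period(b, n):
    """Juggling patterns of minimal period n and exactly b balls:
    Mobius inversion over the divisors of n, then division by n to
    identify the n cyclic rotations of the same pattern."""
    total = sum(mobius(n // d) * siteswaps_exactly(b, d) for d in divisors(n))
    return total // n
```

We now return to the multiplex cards of ehrenborg and readdy.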
while these cards can describe every juggling pattern there is the problem that uniqueness is lost ( see figure [ fig : nonunique ] for an example of two consecutive cards describing the same pattern but using different cards ) .so using these cards to count multiplex juggling patterns is not straight - forward .one approach is to distinguish the balls which come down .this is what was done in butler , chung , cummings and graham , an example of such a card is shown in figure [ fig : separate ] .this avoids ambiguity that might arise but does not accurately reflect multiplex juggling in practice , but rather reflects passing patterns with multiple jugglers involved each juggler catching one ball ( in particular the different points that come down correspond to the different jugglers ) . in this paperwe will propose a new type of card which can be used for multiplex juggling .it solves the ambiguity problem of ehronbrog and readdy and also solves the modeling problem of butler , chung , cummings and graham .however it does come at the mild cost of having a card being dependent on the _ previous _ card which came before . in section [ sec : cards ] we will introduce these cards , and in section [ sec : graph ] we show how to use matrices associated with a weighted graph to count the number of periodic patterns of length .we then count the number of siteswap sequences and the number of juggling patterns in section [ sec : properties ] and section [ sec : counts ] . in section [ sec : hand ]we will consider what happens when we limit the number of balls which can be thrown .we will give some concluding remarks in section [ sec : conclusion ] , including a discussion of counting crossing numbers .most of the enumeration techniques here are fairly standard , it is their application to counting juggling patterns that is new .we will also see that the objects generated in the process of deriving our count seem to have independent combinatorial interest .moreover , while our main goal has been to enumerate juggling patterns , the cards themselves might be useful for the exploration of other combinatorial aspects of juggling . finally , we note that while there has some been interest in counting multiplex juggling patterns , prior to this paper there has been little success .butler and graham made the most progress but their focus was on counting closed walks in a state graph and were not able to efficiently enumerate all juggling patterns .the way that cards describe juggling patterns is through understanding the relative order of their landing times .the ambiguity that appeared in figure [ fig : nonunique ] comes from the fact that two balls are landing _ together _ but still being kept _ separate _ in the ordering . since they are separate we could order them in two ways but that does not effect the pattern .this suggests the following simple fix : tracks no longer represent individual balls , but rather groups of balls which will land together .so now either the `` lowest '' group does nt land , or the lowest group lands and the balls get thrown so that they are placed in new track(s ) or added to the existing tracks . before each throwwe will have an ordered partition of the number of balls on the left , i.e. 
, which corresponds to the statement that were we to stop juggling we would first have balls land at some point ; then balls land some time later ; and so on until finally balls land at the end .( note that we do not claim that they will land one right after the other ; cards are keeping track of relative ordering of when things land and not the absolute times that they will land . ) similarly after each throw we will have another ordered partition of on the right , i.e. , .( the number of our parts in our two partitions need not be the same but we must have . ) if anything lands then the card in the middle indicates how the balls get redistributed .examples of these cards are shown in figure [ fig : multicards ] where the first card corresponds to going from back to and the second corresponds to going from to .c & + ( 0,0 ) rectangle ( 1,1 ) ( 0,1 ) rectangle ( 1,2 ) ; ( 0,2 ) rectangle ( 1,3 ) ( 0,1 ) rectangle ( 1,2 ) ( 0,0 ) rectangle ( 1,1 ) ( 1,0 ) rectangle ( 2,1 ) ; & ( 0,1 ) rectangle ( 1,2 ) ( 1,1 ) rectangle ( 2,2 ) ( 0,2 ) rectangle ( 1,3 ) ; ( 0,0 ) rectangle ( 1,1 ) ( 0,1 ) rectangle ( 1,2 ) ( 1,1 ) rectangle ( 2,2 ) ( 0,2 ) rectangle ( 1,3 ) ( 1,2 ) rectangle ( 2,3 ) ; an ordered partition can be _ nontrivially embedded _ into an ordered partition if there exists indices so that for .note that given two ordered partitions several nontrivial embeddings are possible .an ordered partition can be _ trivially embedded _ into an ordered partition if and only if . for every nontrivial embedding of , a partition of , into , another partition of , we have a card for multiplex juggling where a throw occurred . as an example in figure [ fig : multicards ]we have also marked underneath how embeds into by drawing the partition of arranged from on the bottom to on the top and shading where sits inside the partition .trivial embeddings , i.e. , , correspond to no throws .all possible cards ( and corresponding embeddings of partition ) for multiplex juggling when are shown in figure [ fig:3ballcards ] . [ cols="^,^,^,^,^,^ " , ] we note that we can ask similar questions about the sum of the entries in as well as the row and column sums as we did with ( i.e. , lemma [ lem : right ] , lemma [ lem : left ] and theorem [ thm : cardrecurse ] ) .however the counts are less clear , and have not appeared in the oeis . as an example if we count the total number of cards when we get the following numbers , starting with : these numbers do appear to satisfy a relatively simple relationship . in particular through numbers agree with the following conjecture .let be the number of cards for multiplex juggling with balls and where each track has capacity at most . 
then modifying the cards used for juggling , namely allowing groups of balls to be grouped together , we have found a method that works for enumerating multiplex juggling patterns .there are still a few questions that remain , particularly in understanding what happens when we limit the number of balls that can be caught at any given time .ehronborg and readdy introduced cards for juggling and used them to study a -analog of juggling by counting crossings .it is easy to count crossings on each card and then one simply adds up the crossings over all cards used to count the crossings of the pattern .we note that the matrices used here can be easily adapted to this situation .namely for each card we count crossings ( making sure to count multiplicity when balls move in groups ) , and then weight the card ( and hence edge in the graph ) by where is the number of crossings .finally we can form matrices where we add up the weights of cards connecting ordered partitions . as an example we have we note that theorem [ thm : siteswapcount ] and theorem [ thm : capacity ] can be easily modified to count the number of juggling patterns with minimal period based on the number of crossings .an applied mathematical juggler might also want to add the constraint that whenever multiple balls are thrown that no two balls get thrown to the same height .our method can be readily adopted to this situation by simply removing any card which has two balls moved to the same track , which leads to modified graphs , and also modified matrices , .for example we have the matrices might also have independent interest .for example , it is easy to see that for that is a principal submatrix of .this seems to continue for at least the first few cases .does this containment continue ?note that this also seems to indicate a preferred ordering of the ordered partitions if we want to have ( 1 ) containment of the previous matrix in the upper left block and ( 2 ) an upper triangular matrix in the lower left block .what is this ordering ?things get even more interesting when we consider the characteristic polynomial of .if we let , then we have the following . where the polynomials are given by the following . *the sequence of the exponents of in the seem to follow this appears to be the sequence a178841 in the oeis which counts the number of pure inverting compositions of . *the degree of the polynomials follow this appears to be the sequence a000041 in the oeis which counts the number of unordered partitions of . *the second coefficients of the polynomials follow this appears to be the sequence a000712 in the oeis which counts the number of unordered partitions of into parts of kinds .we have no explanations for any of these phenomenon , but given the nature of how the matrix is formed believe this is more than coincedence .we look forward to more research being done into these cards and matrices .
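To make the card-counting machinery above concrete, the sketch below enumerates the ordered partitions (compositions) of n, counts embeddings between pairs, and assembles the matrix whose powers count periodic card sequences (the number of sequences of length t being the trace of the t-th power). The embedding test, each part of the smaller composition dominated in order by a distinct part of the larger one, and the convention that a throw-card corresponds to an embedding of the composition left after the lowest group lands, are our reading of the definitions above and should be checked against them; all names are illustrative.

```python
import numpy as np
from itertools import combinations

def compositions(n):
    """All ordered partitions (compositions) of n."""
    if n == 0:
        return [()]
    return [(first,) + rest
            for first in range(1, n + 1)
            for rest in compositions(n - first)]

def count_embeddings(p, q):
    """Count order-preserving embeddings of composition p into q, assuming
    the condition p_j <= q_{i_j} for some indices i_1 < ... < i_k
    (a hypothetical reading of the definition in the text)."""
    k, l = len(p), len(q)
    return sum(all(p[j] <= q[idx[j]] for j in range(k))
               for idx in combinations(range(l), k))

def card_matrix(n):
    """Matrix whose (a, b) entry counts cards taking composition a of n to
    composition b: one card per embedding of the part of a that survives
    after the lowest group lands, plus the trivial no-throw card a == b."""
    comps = compositions(n)
    M = np.zeros((len(comps), len(comps)), dtype=int)
    for i, a in enumerate(comps):
        for j, b in enumerate(comps):
            M[i, j] = count_embeddings(a[1:], b) + (1 if a == b else 0)
    return comps, M

# periodic card sequences of length t:  np.trace(np.linalg.matrix_power(M, t))
```

For n = 1 this reproduces the two cards of ordinary one-ball juggling, which is a minimal sanity check of the conventions assumed here.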
|
mathematics has been used in the exploration and enumeration of juggling patterns . in the case when we catch and throw one ball at a time the number of possible juggling patterns is well - known . when we are allowed to catch and throw any number of balls at a given time ( known as multiplex juggling ) the enumeration is more difficult and has only been established in a few special cases . we give a method of using cards related to `` embeddings '' of ordered partitions to enumerate the count of multiplex juggling sequences , determine these counts for small cases , and establish some combinatorial properties of the set of cards needed .
|
the notion of what is fair in the allocation of one or more infinitely divisible goods to a finite number of agents with their own preferences has long been debated .predictably , no agreement has been reached on the subject .the situation is often exemplified with children ( players ) at a birthday party who are around a table waiting for their slice of the cake to be served , with the help of some parent ( an impartial referee ) .if we think about a special class of resolute children who are able to specify their preferences in terms of utility set functions , the parent in charge of the division could ease his task by using a social welfare function to summarize the children s utility values . among the many proposals , the maxmin or rawlsian division was extensively studied in the seminal work of dubins and spanier , who showed the existence of maxmin optimal partitions of the cake for any completely divisible cake and their main properties .they also showed that when a condition of mutual appreciation holds ( assumption _ ( mac ) _ below ) any optimal partition is also _ equitable _ , i.e. , it assigns the same level of utility for each child . the study of the maxmin optimal partitions and their properties has continued in more recent years . in particular ,its relationship with other important notions such as efficiency ( or pareto optimality ) and , above all , envy - freeness has been investigated with alternating success : each maxmin partition is efficient , but while for the two children case brams and taylor showed that it is also envy - free , the same may not hold when three or more children are to be served , as shown in dallaglio and hill .it is worth pointing out the relationship with player bargaining solutions .if we think about the division as deriving from a bargaining procedure among children , it is straightforward to show that the bargaining solution proposed by kalai , in the case where all the players utilities are normalized to 1 , coincides with the equitable maxmin division . therefore , if the conditions proposed by dubins and spanier hold , the two solutions actually coincide .little attention has been devoted , however , to finding optimal maxmin partitions with one notable exceptions : the case of two players with additive and linear utility over several goods has been considered by brams and taylor , with the adjusted winner procedure . for the case of general preferences ( expressed as probability measures , i.e. nonnegative and countably additive set functions normalized to 1 ) and arbitrary number of players , legut and wilczinski gave a characterization of the optimal maxmin allocation in terms of weighted density functions .moreover , elton et al . and legut provided lower bounds on the maxmin value .the optimization problem was later analysed by dallaglio .the general problem was reformulated as the minimization of a convex function with a particular attention to the case where the maxmin allocation is not equitable and the allocation of the cake occurs in stages to subsets of players .no detail , however , was given on how to proceed with the minimization .in most of the fair division literature , little is assumed about the strategic behaviour of the children .brams and taylor discuss the issue of the manipulability of the preferences : in most cases children may benefit from declaring false preferences .a different approach takes into account the possibility for the children to form coalitions after ( legut and legut et al . 
) or before ( dallaglio et al . ) the division of the cake . in both cases coalitional gamesare defined and analysed . in the case of early cooperation among children ,the game is based on a maxmin allocation problem among coalitions , each one having a joint utility function and a weight .the first properties of the game are studied in .it turns out that the analysis of the game is made harder by the difficulties in computing the characteristic function , i.e. , the value associated to each coalition .the tools we introduce , therefore , become essential in computing such values , as well as any synthetic value , such as the shapley value , associated to the game .the coalitional maxmin problem is indeed a generalization of the classical maxmin problem introduced by dubins and spanier .therefore , we consider a common approach to set up an algorithm which , at each step , will compute an approximating allocation , together with lower and upper bounds for the maxmin value .the algorithm is based on a subgradient method proposed by shor and it yields an approximation of the optimal allocation with any fixed degree of precision . in section 2we describe the maxmin fair division problem with coalitions through the strategic model of interaction among players in and the geometrical setting employed in , and . in section 3we present the upper and the lower bounds for the objective value . in section 4we fit the subgradient method to our problem and we derive a procedure where the optimal value and the optimal partition are computed up to a desired precision and we provide a numerical example where we describe two fair division games and we compute the corresponding shapley values . some final considerations are given in section 5 .we represent our completely divisible good as the set a borel subset of , and we denote as the borel of subsets of let be the set of players , whose preferences on the good are where each is a probability measures on by the radon - nikodym theorem , if is a non - negative finite - valued measure with respect to which each is absolutely continuous ( for instance we may consider ) , then , for each where is the radon - nikodym derivative of with respect to we will consider the following assumptions : \a ) _ complete divisibility of the good ( cd ) : _ for each and each such that , there exists a measurable set such that and \b ) _ mutual absolutely continuity ( mac ) : _ if there exists and such that then for each .\c ) _ relative disagreement ( rd ) : _ for each pair and each such that and , there exists a measurable set such that throughout the rest of the work we will assume that ( _ cd _ ) always holds , while ( _ mac _ ) and ( _ rd _ ) are useful , though restrictive , assumptions that we will employ only when strictly needed . for any , let be an -partition , i.e. , a partition of the good into measurable sets .let be the class of all -partitions .how do players behave in the division procedure ? in the simplest case , each player competes with the others to get a part of the cake with no strategic interaction with other players .each determines a division of the good in which player , gets the share with value . here , individual players seek an allocation with values as high as possible . a fair compromise between the conflicting interestsis given by maxmin allocation that achieves here denotes the maxmin value in the classical fair division problem . witha completely divisible good , the allocation is fair ( or proportional ) , i.e. 
for all moreover , if ( _ mac _ ) holds , it is also egalitarian , i.e. for all ( see ) . therefore , under this assumption , an optimal allocation is also the bargaining solution proposed by kalai and smorodinsky ( see also kalai ) .dallaglio et al . proposed a strategic model of interaction , where players , before the division takes place , gather into mutually disjoint coalitions . within each coalition , players pursue an efficient allocation of their collective share of the cake .let be the family of all partitions of and , for each , let , and let be the coalitions indexes set .thus , players cluster into coalitions specified by the partition for each and each coalition , players state their joint preferences as follows with , and the utility of coalition will be divided among its members in a way that prevents any of them to break the coalition in search of a better deal . once the global coalition structure is known , a fair allocation of the cake among the competing coalitions is sought . in this context , assigning the same value to all coalitions could yield an unfair outcome .fairness here must consider the different importance that coalitions may assume and this is taken into account by a weight function in this framework , each coalition takes the role of a single player in equation .following kalai , when coalitions in are formed and the weight function is considered , players should agree on a division of the cake which achieves the following value each coalition can evaluate its performance in the division by considering the following coalitional game where . the value can be interpreted as the minimal utility that coalition is going to receive in the division when the system of weight is enforced , independently of the behaviour of the other players .a crucial question lies in the definition of the weight system .we consider two proposals : * , .this is certainly the most intuitive setting .although very natural , this proposal suffers from a serious drawback , since players participating in the game may be better off waiting to seek for cooperation well after the cake has been divided ( see ) ; * , , where is the partition maximizing . by seeking early agreements among them , players will be better off than postponing such agreements until the cake is cut .the above mentioned problem is overcome at the cost of a less intuitive ( and more computationally challenging ) formulation ( see ) .it is interesting to note that to find these weights we need to solve .it is easy to verify that , for each , with equality if or where .the optimization problem can be seen as an infinite dimensional assignment problem . in principlewe could attribute any point of the cake to any of the participating players ( provided certain measurability assumptions are met ) .for very special instances this becomes a linear program : when the preferences have piecewise constant densities , or when the cake is made of a finite number of completely divisible and homogeneous parts .the fully competitive value is a special instance of the cooperative case , since , with and for each .therefore , we focus on the cooperative case alone .we now describe a geometrical setting already employed in , , and to explore fair division problems . 
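When the cake is pre-cut into finitely many homogeneous pieces (piecewise constant densities), the maxmin problem indeed reduces to a linear program, and a minimal sketch is easy to write down. Below, U[i, j] is the (already weighted) value of piece j to player or coalition i and x[i, j] the fraction of piece j assigned to i; the variable and function names are ours and this is only one of several equivalent formulations.

```python
import numpy as np
from scipy.optimize import linprog

def maxmin_division(U):
    """Maxmin division of a cake made of finitely many homogeneous pieces.
    U[i, j] is the (weighted) value of piece j to player/coalition i and
    x[i, j] the fraction of piece j awarded to i.  Returns the maxmin
    value and the allocation matrix."""
    m, J = U.shape
    n_var = m * J + 1                       # allocation fractions + the value t
    c = np.zeros(n_var)
    c[-1] = -1.0                            # maximise t  <=>  minimise -t

    # t - sum_j U[i, j] * x[i, j] <= 0   for every i
    A_ub = np.zeros((m, n_var))
    for i in range(m):
        A_ub[i, i * J:(i + 1) * J] = -U[i]
        A_ub[i, -1] = 1.0
    b_ub = np.zeros(m)

    # each piece is fully allocated: sum_i x[i, j] = 1
    A_eq = np.zeros((J, n_var))
    for j in range(J):
        A_eq[j, j:m * J:J] = 1.0
    b_eq = np.ones(J)

    bounds = [(0.0, 1.0)] * (m * J) + [(None, None)]
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
    x = res.x[:-1].reshape(m, J)
    return res.x[-1], x
```

For instance, maxmin_division(np.array([[3., 1.], [1., 3.]])) assigns each of two opposed players the piece they prefer and returns the maxmin value 3.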
in what followswe consider the weighted preferences and densities , and , given respectively by the partition range , also known as individual pieces set ( ips ) ( see ) is defined as let us consider some of its features .each point is the image , under , of an -partition of .moreover , is compact and , if ( cd ) holds , is also convex ( see ) .therefore , so , the point is the intersection between the pareto frontier of and the egalitarian line turn our attention to a simpler optimization problem that may have an unfair solution , but it provides easy - to - compute upper and lower bounds for the original problem .these bounds depend on a weighted maxsum partition , which we can derive through a straightforward extension of a result by dubins and spanier .let denote the unit .( see ( * ? ? ?* theorem 2 ) , ( * ? ? ?* proposition 4.3 ) ) let and let be an of . if then the value of this maxsum problem is itself an upper bound for problem . for each choice of , we have a maxsum partition corresponding to the _ partition value vector ( pvv ) _ is defined by the pvv is a point where the hyperplane touches the partition range so lies on the pareto border of moreover , for any there exists at least one pvv ( see ) .we are ready to state the first approximation result .[ propupper ] let be as follows : then , following ( * ? ? ?* proposition 4.3 ) we know that the hyperplane that touches at the point is defined by the equation since , this hyperplane intersects the egalitarian line defined in at the point . since the hyperplane is located above , this point lies above the maxmin point with coordinates .therefore finally , since is a weighted average of the values it follows that the function was already considered in , where it was shown that is convex , and we now turn our attention to a lower bound for although we will see later only one pvv is enough to assure such a bound , we give a general result for the case where several pvvs have already been computed .we derive the second approximation result through a convex combination of these easily computable points in which lie close to the following result generalizes theorem 3 in and theorem 1.1 in .[ proplower ] let a partition value vector such that then , let us consider the following vectors where is the weighted joint utility of the whole cake by coalition now , consider the convex hull of the pvv and the points , , the lower bound we are looking for is the intersection point between and the egalitarian line from ( [ egalitarianline ] ) ( see figure [ fig1 ] ) .let us denote this point as . 
without loss of generality , let us suppose then , we obtain as follows : we are dealing with a linear system with unknown quantities , thus , by cramer s rule , we get as } = \frac{u_1}{1 + \sum_{q \neq 1}{\frac{(u_1 -u_{q})}{\mu_{q}^{w}(c)}}},\end{aligned}\ ] ] where the second equality derives by suitable exchanges of rows and columns in the denominator matrix .in fact , we get the second one after an even number of exchanges on the first : successive exchanges of the last row until it reach the first position , and successive exchanges of the last column until it reach the first position .so the two matrices in the denominator have the same determinant .it is easy to verify that , for every .finally , since the lower bound belongs to the convex hull of the pvv and the vectors , it is not less than the minimum component of each vector , in particular an illustration of the position of the bounds with respect to the partition range in the case of two coalitions is shown in figure [ fig1 ] .in the previous section we have seen that for each choice of the coefficients vector we can derive upper and lower bounds for we describe a way of improving the coefficients so that eventually the bounds will shrink to the desired value . since in general a non - differentiable convex function , we can rely on a simple minimizing algorithm developed by shor , the subgradient method .in particular , since the domain of is constrained , we must consider an extension , the projected subgradient method , which solves constrained convex optimization problems .let us start by describing the method through some basic definitions and the essential convergence result .let be a closed convex set and let be the euclidean norm .the projection of on is denoted by and it is defined as let be a convex function with domain and let an interior point of .a vector is called a subgradient or a generalized gradient of at if it satisfies moreover , is a bounded subgradient of if there exists such that for all we denote as the set of subgradients of a convex function at any interior point of the domain .a sequence of positive numbers is called _ diminishing step size rule _ if it satisfies the conditions : the subgradient method minimizes a non - differentiable convex function which has a bounded set of minimum points and at least one bounded subgradient .this procedure returns a minimum value for the function moving a point in the domain in the opposite direction of a bounded subgradient by a step belonging to a diminishing step size rule ( see ) .if the domain of the function is constrained , then the point is projected in the domain ( see ) .we recall the general result [ propsgconv ] ( see , ) let be a convex function defined on which has a bounded set of minimum points and let be a bounded subgradient .moreover , let be a diminishing step size rule .then for any the sequence generated according to the formula \ ] ] has the following property : either an index exists such that or where let us check that can be minimized through the projected subgradient method .first of all , is convex with ( see ) and we can easily show that with bounded .in fact , for each point the vector satisfies : we now adapt the general updating rule to our situation . 
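As a generic illustration of the method just recalled, before the update is specialised to our objective below, the following sketch runs a projected subgradient iteration on the probability simplex with the diminishing step size rule s_t = 1/(t+1); the euclidean projection is computed by the usual sorting algorithm and plays the same role as the clipped-and-renormalised update written below. The names and the particular step sequence are illustrative choices, not the tuning used for the reported experiments.

```python
import numpy as np

def project_simplex(v):
    """Euclidean projection of v onto {x : x >= 0, sum(x) = 1},
    computed by the standard sort-based algorithm."""
    v = np.asarray(v, dtype=float)
    u = np.sort(v)[::-1]
    css = np.cumsum(u)
    rho = np.nonzero(u + (1.0 - css) / np.arange(1, v.size + 1) > 0)[0][-1]
    lam = (1.0 - css[rho]) / (rho + 1.0)
    return np.maximum(v + lam, 0.0)

def projected_subgradient(subgrad, alpha0, n_iter=500):
    """Projected subgradient iteration on the simplex with the diminishing
    step size rule s_t = 1/(t+1); subgrad(alpha) must return a subgradient
    of the convex objective at alpha.  The method is not monotone, so in
    practice one also keeps the best iterate seen so far."""
    alpha = project_simplex(np.asarray(alpha0, dtype=float))
    for t in range(n_iter):
        alpha = project_simplex(alpha - subgrad(alpha) / (t + 1.0))
    return alpha
```

The step sequence 1/(t+1) has divergent sum and convergent sum of squares, so it satisfies the diminishing step size conditions stated above.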
for any diminishing step size rule and any vector of coefficients ,the update rule becomes = ( \alpha^{t } - s_t u^{t } + \lambda \boldsymbol{1})_+\ ] ] where is the normalizing constant such that suppose now that and that the step size is sufficiently small to guarantee here has to be chosen so that i.e. , hence where is the average of the subgradient vector components . in what follows we will make sure to choose a diminshing step size rule small enough so that is verified , or , equivalently , we are now able to state the first convergence result .[ propsprime ] suppose ( cd ) and ( mac ) holds .let be the bounded set of minimum points for and let be a diminishing step size rule .then , there exists another diminishing step size rule which satisfies .consequently , given and the recursive sequence either for some , or first of all , notice that constraint involves only those indexes for which let be the set of those indexes in the step for each of them we would get now , fix an arbitrary integer and define hence , let us define thus , satisfies and , since to show , let us suppose this implies for some and some sequence since taking a further subsequence we have by continuity , lies on the upper surface of so moreover , is supported by the hyperplane by ( [ sum ] ) and ( [ uoptimal ] ) we have that since now , the coexistence of and clashes with the hypothesis ( mac ) . in fact , and there is no for which since there exists such that by ( mac ) we can derive a partition from of subsets with and if such that for all if we consider the partition defined as we get which is a contradiction . the final statement is a direct consequence of propositions [ propupper ] , [ propsgconv ] and of the fact that if then the sequence must converge to some . moreover , by the dominated convergence theorem , ,last equality being again a consequence of proposition [ propupper ] . to prove the convergence of the pvvs and of the lower bound, we assume _ relative disagreement _ ( rd ) .[ convlb ] suppose ( cd ) , ( mac ) and ( rd ) hold . then ,for any and the recursive sequence , one of the following two conditions hold : * either for some and , * or and .by ( rd ) we have that for any point on pareto border of there exists one and only one hyperplane touching ( see ) . by the conclusions of proposition [ propsprime ] either or in the first case , there exists only one pvv corresponding to since the hyperplane with coefficients vector touches the partition range in the point corresponding to the maxmin allocation , then must coincide with also in fact , all the coordinates of are equal and , therefore , maximal . without loss of generalitywe choose as maximal , and the last equality holding by . 
in the second occurrence ,suppose on the contrary that while .since the sequence is in a compact set , there must be a converging subsequence .the vector is a second pvv associated to , but this is ruled out by ( rd ) .thus , and , by continuity , .we now present two versions of an algorithm for the maxmin division problem .the common initializing elements for both versions are listed in table [ algelement ] .the first version computes upper and lower bounds for and updates the coefficient vector through the subgradient rule .both bounds are updated by means of a simple comparison with the old ones .the generic step is described in table [ algstep ] .a simpler but slower version , described in table [ alg2step ] , computes the approximating optimal partition as well as the value .the finiteness of both algorithms is guaranteed by propositions [ propsgconv ] , [ propsprime ] and [ convlb ] .particular care is needed in choosing the step sequence .a sequence converging too fast to 0 may lead to an increase in the number of steps needed , since the step may soon become too small to reach the optimum .similarly a sequence converging too slowly may result in values of ( and of the corresponding pvv s ) jumping from one extreme to the other of the unit simplex ( and of the partition range for the pvv s ) .this , again , will slow the convergence process ..description and initialization algorithms elements . [ cols="^,<,<",options="header " , ] in figure [ densitiespartition ] , we represent the initial densities ( a ) and then the maxmin partition for the fully competitive context ( b ) , where and for all + for any , we run our algorithm enforcing the two weight systems and , with a tolerance of and we compute the corresponding game values ( table [ game_value ] ) .consequently , in table [ shapley_value ] , we compute the shapley value for each game . the two games share the same ranking for the shapley values which therefore seems to be robust enough to the choice of system weights .also , the weight system amplifies the difference in the shapley values obtained with , yielding a higher variance for the values distributions .in the previous section we described a couple of algorithms that return maxmin values and partitions in both competitive and cooperative settings . it is important to note that we could think of the same procedures as interactively implemented between ( coalitions of ) players and an impartial referee . at firstthe referee proposes a division of the cake based on the maxsum division of the cake with equal weights for all players .the players now report their utilities and the referee corrects the inequalities in the division by proposing a new maxsum division with modified weights : players who were better off will be given a smaller weight and those who were worst off will see their weight increase . of course , one can not hope to achieve the same degree of precision , since the algorithm performs that step dozens of times , but the bounds described in section 3 give a precise idea on how far the proposed division is from the desired one . * in the numerical example it would be interesting to link the shapley value rankings to the original system of preferences . 
what makes players 5 and 3 the most powerful players in the cooperative division process ?apparently the two utility functions have different features : player 5 s distribution is uniform over the unit interval and his density is maximal only at the very ends of the interval .on the other hand , player 3 s preferences are concentrated at the second half of the interval where he has no competitors , except player 5 ( who , however , has a smaller density ) .no simple explanation could be provided so far .* beyond the convergence of the algorithms , which end in a finite number of steps , returning the approximate solution up to a specified degree of precision , it would be interesting to investigate about the computational efficiency of the same algorithms
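Since the numerical example reports shapley values for the two games, it may be useful to record how they are obtained once the characteristic function has been tabulated with the algorithm above. The sketch below is the textbook formula; v must be defined on every coalition, including the empty one, and its values would come from running the maxmin algorithm coalition by coalition, which is the expensive step.

```python
from itertools import combinations
from math import factorial

def shapley_values(players, v):
    """Shapley value of a TU game.  `players` is a list of labels and `v`
    maps frozensets of players to the coalition's worth (here, the maxmin
    value the coalition is guaranteed in the division)."""
    n = len(players)
    phi = {i: 0.0 for i in players}
    for i in players:
        others = [p for p in players if p != i]
        for r in range(n):
            for S in combinations(others, r):
                S = frozenset(S)
                weight = factorial(len(S)) * factorial(n - len(S) - 1) / factorial(n)
                phi[i] += weight * (v[S | {i}] - v[S])
    return phi
```

For the five-player example one would tabulate v on all 32 coalitions with the division algorithm, once for each weight system, and then compare the two resulting rankings as in the tables above.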
|
we consider upper and lower bounds for maxmin allocations of a completely divisible good in both competitive and cooperative strategic contexts . these bounds are based on the convexity properties of the range of utility vectors associated to all possible divisions of the good . we then derive a subgradient algorithm to compute the exact value up to any fixed degree of precision .
|
recently a significant amount of research is going on to ensure secure communication in wireless networks . due to broadcast nature of wireless transmission ,the transmitted messages can be intercepted by eavesdroppers . though cryptographic security can be used to counteract eavesdropping , but secrecy measure of such scheme relies on the computational complexity of cryptographic functions rather than information theoretic principles .also , distributing secret key across the network has its own overhead . on contrary, physical layer security schemes exploit the inherent randomness present in the wireless channel and provide information theoretically provable secure communication irrespective of the computational capability of the eavesdropper(s ) .physical layer security came to existence when wyner in a seminal paper showed that a non - zero secrecy rate is possible for a discrete memoryless channel if the eavesdropper s channel is degraded .following his work researchers have evaluated secrecy capacity and equivocation region of single antenna and multi - antenna systems .however , resource constrained multi - hop networks have not got enough attention though they are practically significant . in a multihop wireless network ,intermediate relay nodes have to follow certain relaying strategy for forwarding packets to the next relay or destination .amplify and forward relaying scheme is simplest among them where each node transmits the message it has received after amplification ( scaling ) . though simplest in nature but the significance of this scheme lies in its low cost implementation and effectiveness against fading . nevertheless , from theoretical point of view study of such a relaying schemecan help us to estimate lower bounds of the channel capacity of other communication scenarios ( e.g. analog network coding ) .very recently , researchers have started investigating the significance of amplify and forward relaying for attaining physical layer security .in our paper we consider a scenario where relay nodes uses amplify and forward relaying to convey the source message to the destination .however , due to the presence of one or more eavesdropper secrecy of communication is in jeopardy .for such a scenario secrecy rate of the network provide a good measure of performance of the system . unlike some previous works where only total relay power constraints is assumed , we consider the individual relay power constraint also . in practicethe relay nodes are generally powered by their individual power source without any means to share their power sources ( e.g. battery ) .therefore , individual relay constraint is more relevant in practical situations and general . assuming the availability of global channel state information ( csi ), we consider a two hop network consists of a single source , a single destination and multiple relay nodes .as each of the relay node connects the source and destination separately , they form a diamond like structure and hence named accordingly .we begin our analysis with a symmetric diamond network and provide the analytical solution for optimal scaling factor of relay nodes .we then relax the symmetric network assumption and analyze the scenario where eavesdropper s channel vector is scaled version of receiver s channel . for general casewhere multiple eavesdroppers are involved we have multiple secrecy rate corresponding to each eavesdropper and the objective would be to maximize the minimum of them over the same constraint set . 
in our paper , we provide a sub - optimal solution for individual relay constraints , whereas we propose an iterative algorithm for secrecy rate in case of sum constraint and individual constraint .we summarize our contribution as follows : * for symmetric diamond network we provide analytical solution for secrecy rate .* we discuss and analyze a step - by - step procedure for calculating optimal secrecy rate when eavesdropper s channel vector is scaled version of receiver s channel . * for general casewe discuss the sub - optimal zero - forcing " solution for individual relay constraint .there we reformulate the optimization problem as a quadratic program which can be solved efficiently .* we propose an iterative algorithm for obtaining optimal secrecy rate for sum relay and individual constraint scenarios .our paper is organized as follows : in section [ sec : rel ] , we survey the related work . the system model and notationsare introduced in section [ sec : model ] . in section [ sec :secrate ] , we analyze the secrecy rate for several channel conditions of receiver and eavesdropper . we illustrate numerical results of the formulations in section [ sec : res ] .we conclude our paper in section [ sec : concl ] with a brief summary and possible future work .amplify and forward scheme was introduced by schein and gallager and was considered as a mean of cooperative communication by , , .later several researchers have reported that cooperative scheme like amplify and forward not only provides robustness against channel variations but also ensures non - zero secrecy rate in certain scenarios where otherwise it is zero .for example , if the source to destination channel is poor as compared to source to eavesdropper channel , then by using appropriate scaling factor in relay nodes we can cancel out the received signal at eavesdropper and thereby improve the secrecy rate .as we have assumed gaussian channel model it is worth mentioning that the secrecy capacity for gaussian wiretap channel was evaluated by cheong and hellman .later the effect of fading on gaussian wiretap channel model was analysed by and . the wire - tap model in context of multi - antenna system was considered and analysed by , , and .but both the single and multi - antenna system were limited to single - hop network .as the multi - hop wireless networks are equally significant , so recently research in this area has got a good pace .lai and gamal evaluated the secrecy capacity of relay - eavesdropper channel for different cooperative schemes and also evaluated corresponding equivocation region .authors in reported the improvement in physical layer security with the help of cooperating relays .same authors later elaborated the significance of amplify and forward scheme for attaining physical layer security in .but in their work they considered total relay constraint criteria and provided bounding results for secrecy capacity . 
for multiple eavesdroppers scenario they suggested the zero - forcing solution " where by beam - forming the transmitted signalis nullified at each eavesdropper .the more practical individual relay constraint criteria was considered in , .the authors of the both the papers provided an iterative algorithm for calculating the optimal amplification vector for relay nodes for maximizing secrecy rate using semi - definite relaxation .our work discusses the special cases of the model considered in and investigate the nature of the solution for those special cases .our approach significantly differs from the techniques used in and as we have converted our problem into a convex optimization problem by using a noble transformation of variables .further we discuss the convergence of the solution to the global optimum .infact we identify that for certain special cases we can evaluate the optimal scaling factor analytically . those analytical results are motivated from and , where authors have devised schemes to find the optimal amplification vector for attaining the capacity of amplify and forward network under individual relay constraints in absence of secrecy criteria .= [ draw , shape = circle , minimum size=0.9cm , style = thick ] ( v0 ) at ( 180:2.5 ) ; ( v1 ) at ( 90:1.5 ) ; ( v2 ) at ( 90:0.5 ) ; ( v3 ) at ( 270:1 ) ; ( v4 ) at ( 25:3 ) ; ( v5 ) at ( 5:2.85 ) ; ( v6 ) at ( 340:3.05 ) ; ( v0 ) ( v1 ) ; ( v0 ) ( v2 ) ; ( v0 ) ( v3 ) ; ( v1 ) ( v4 ) ; ( v2 ) ( v4 ) ; ( v3 ) ( v4 ) ; ( v1 ) ( v5 ) ; ( v2 ) ( v5 ) ; ( v3 ) ( v5 ) ; ( v1 ) ( v6 ) ; ( v2 ) ( v6 ) ; ( v3 ) ( v6 ) ; ( 0,0.10 ) ( 0,-.6 ) ; ( 2.9,-0.15 ) ( 2.9,-.6 ) ; ( 2.4,0.65 ) rectangle ( 3.3,-1.5 ) ;the system model consists of a single source , a single destination , relay nodes and eavesdroppers as shown in figure [ fig : model ] .the channel gain from source node to the relay node is denoted by a real constant .similarly channel gain from the relay node to destination or to eavesdropper is denoted by and , respectively .now , if we consider discrete time instants and neglect the transmission delays , then signal received at each relay node due to the source can be expressed as : = h_{s , i}x_s[n ] + z_i[n]\ ] ] where ] is the noise at relay node .we assume that \ } , -\infty < n<\infty ] which is independent of the input signal at that receiver . 
in case of af schemeeach relay node scales its received signal before transmitting .the maximum scaling factor is determined by the individual power constraint of relay node and the received signal power at that relay node .we assume a power constraint over transmitted signal from each node which can be expressed as : \le p_i,\quad -\infty <n<\infty,\quad i \in \{s,1,2,\dots , m\}\ ] ] so , transmitted signal from each relay node can be written as : =\beta_iy_i[n],\quad -\beta_{i , max } \le\beta_i \le \beta_{i , max } \mbox { where } \beta_{i , max}^2= \frac{p_i}{h_{s , i}^2 p_s + \sigma^2}\end{aligned}\ ] ] now , we can express the received signal at destination and eavesdroppers in following manner : = & \sum\limits_{i=1}^{m}h_{i , d}x_i[n]+z_d[n ] \label{eq : dest}\\ y_k[n ] = & \sum\limits_{i=1}^{m}h_{i , k}x_i[n]+z_k[n ] \label{eq : eve}\end{aligned}\ ] ] here ] are mutually independent i.i.d .random variables distributed according to and also independent of ] ) and noise signals ( ] .therefore , optimal secrecy rate can be written as following optimization problem .[ eq : opt1 ] \\ & = \underset{\boldsymbol{\beta}}{\max}\underset{k \in \{1,2,\dots , k\}}{\min}\;\left [ \frac{1}{2}\log\left(1+snr_d \right)-\frac{1}{2}\log\left(1+snr_k \right ) \right]\end{aligned}\ ] ] where .if we use the following vector and matrix notations then snr at destination or eavesdropper can be represented as where ^t ] and .in simple words the objective function will be maximized when the vector composed of those variables lies in the direction of ^t ] region then the start point , i.e. indicates that ordered variables have reached their corresponding boundary and scaling factor . on the otherhand at , variable has just reached its boundary , i.e. .+ _ remark : _ if , then we can obtain the optimal scaling vector using first approach . in this subsectionwe evaluate the optimal vector for two different kind of constraints without imposing any assumption on channel gains . at firstwe consider the _ zero forcing _ solution where values are chosen such that the transmitted signal get canceled at eavesdropper .we formulate a quadratic program with individual constraint for this scenario . in the subsequent paragraphwe formulate an optimization problem with total sum constraint on vector and discuss an iterative approach for calculating the optimal value .it is easy to see that the zero forcing solution will lower bound the optimum secrecy rate for individual constraint whereas total sum constraint will upper bound the same .+ * zero forcing for individual constraints : * in this approach we equate the to zero . as a resultthe equivalent optimization problem can be written as : [ zfs ] we can also formulate following quadratic program which can be solved efficiently . the optimization problem [ zfs ] is equivalent to following quadratic program it can be shown that the optimal solution does not change if we rewrite the objective function as , because . if we consider a new variable such that , then in terms of variable vector ] .we consider two new parameters and .we then define matrix such that and . is generated by concatenating the constraint due to denominator and eavesdroppers . 
can be obtained by rearranging the following constraint : * optimal secrecy rate for total sum constraint : * from equation we can rewrite the equivalent optimization problem in following manner : _ our approach _ :this problem becomes a single dimensional optimization problem if we know the solution of inner optimization problem for a fixed .then we can search over the range of for the optimum at which the maximum objective function value is attained .we can use bi - section algorithm to find the optimal and the corresponding optimum decision variable ._ range of : _ as will result in sub - optimal zero - forcing solution , so we can start with small values of , typically in the range of . the upper bound on can be calculated by solving the following optimization problem : [ eta_range ] ^\mathbf{t} ] and we transform the above expression in terms of and rearrange it to obtain : if then is a diagonal matrix with positive entries , therefore , is a positive definite matrix .* * total constraint * : total constraint can be written in the following vector notations : ) % \mathbf{x^tx}\le\frac{\upsilon}{1+\upsilon}=:\tau \end{aligned}\ ] ] as and the constraints are quadratic in nature , so we claim that we can replace the quadratic objective function by a linear one _ i.e. _ , . though maximum value of for the given constraints can be obtained by finding the maximum and minimum value of linear objective for those same constraints , but those values will be indeed same . we can argue that using contradiction .let us assume that the solutions of maximization and minimization problem are and , respectively .now , if , then we can find a vector which will not only satisfy the constraints but also has higher objective function value than .hence , can not be optimum .similar argument can be presented for minimization problem also and thus it proves our claim . for a fixed reformulated optimization problem becomes : [ opt ] now , it is easy to see that is a m dimensional ellipsoid , also , are positive semidefinite symmetric matrix . hence , this is a convex optimization problem and therefore global solution can be obtained using numerical routines .once the optimal solution for a particular is obtained , then we can calculate corresponding secrecy rate by evaluating .now , we use any line search method to calculate optimal due to following proposition .we have used golden - section search for generating the results . is an unimodal function of in the range for lower values of , eavesdroppers constraints of optimization problem are dominating and as increases , so the volume of ellipsoids corresponding to those constraints .this results in enlargement of the feasible region , which causes increment in the objective function value of problem .therefore , for values around , increases with . for higher values of , is dominating constraint and objective function value become constant for those values of .so , as increases objective function starts decreasing .hence , there is an intermediate point where reaches maximum value . in the following subsectionwe discuss the single eavesdropper scenario with sum relay constraint and characterize the solution of secrecy rate maximization problem . 
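before moving to that special case, a minimal sketch of the outer one-dimensional search used above may be useful; the inner convex problem solved for each fixed value of the parameter is abstracted behind the callable f, whose name and interface are ours (in practice it would wrap a numerical convex solver):

    import math

    def golden_section_max(f, a, b, tol=1e-6):
        # locate the maximiser of a unimodal function f on [a, b]; the bracketing
        # interval shrinks by a factor 1/phi per iteration, so the method
        # converges linearly, as discussed further in the text.
        invphi = (math.sqrt(5.0) - 1.0) / 2.0
        x1 = b - invphi * (b - a)
        x2 = a + invphi * (b - a)
        f1, f2 = f(x1), f(x2)
        while b - a > tol:
            if f1 < f2:            # the maximum lies in [x1, b]
                a, x1, f1 = x1, x2, f2
                x2 = a + invphi * (b - a)
                f2 = f(x2)
            else:                  # the maximum lies in [a, x2]
                b, x2, f2 = x2, x1, f1
                x1 = b - invphi * (b - a)
                f1 = f(x1)
        return 0.5 * (a + b)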
* * case 1 * : if is the minimum eigen value of matrix or then as long the maximum eigen value of , the constraint will be inactive and hence we are left with maximization of quadratic objective with a quadratic equality constraint .this is indeed generalized rayleigh - quotient and the solution of this problem can be easily obtained by calculating the eigen vector corresponding to maximum eigen value of the matrix . to calculate the eigen value of the matrix we can solve the following equation : as is a scalar , then indeed lies in the direction of and infact by neglecting scale factor we can write . therefore , for this fixed the secrecy objective function becomes : * * case 2 * : when the criteria mentioned above is not satisfied then both the constraints might be active , so we use following two step approach : * * we solve the problem considering the first constraint only . if the solution obtained ( )satisfy the second constraint then this is the solution else we discard it and follow the next step . * * in this case both the constraints are active and we have to solve the following problem .+ now for this problem we can use any suitable numerical routine to find the global optimum which should satisfy the criteria mentioned in .the objective function and eavesdroppers constraints remain same as it was for total relay constraint scenario .but , unlike the previous case , instead of single relay constraint we have relay constraints corresponding to each of the relay nodes . the individual relay constraint and the transformed one is presented below : : we can upper bound the in this scenario by using the solution of optimization problem .we use in this case to evaluate the upper bound of , thereafter to be denoted as .similarly , .the inner optimization problem for a fixed can be written as : [ opt_indv ] similar argument for unimodularity of as function of within the range ] , where is the optimal solution of problem .also , it is well known that for golden - section search method after iterations the updated interval can be written as : \end{aligned}\ ] ] where . in other words and and therefore the sequence and linearly converges to . if we consider a tolerance parameter , we can easily estimate the number of the iterations required .for evaluation of secrecy rate we consider a network whose main channel gains are sampled from a rayleigh distribution with parameter 0.5 . to obtain the degraded channels for eavesdroppers , we multiply the relay to destination channel gains with the samples from _ _ uniform distribution__ $ ] .we average the results of 100 such networks while plotting the graphs . in figure [ fig_psrsbeta ]we plot the variation of optimal value and secrecy rate ( ) with respect to source power ( ) for symmetric network case . as the source power increases the bounds on value keep contracting and therefore the optimal value starts declining .this results in saturation of the secrecy rate . and secrecy rate ( ) with respect to for symmetric network case . ] in figure [ fig_obsrv_bar_plot ] we compare the secrecy rate obtained using proposed iterative approach with the optimal solution ( solving directly using numerical routines ) for randomly generated channel values .it is apparent that the outputs of the proposed iterative are equal to the corresponding optimal values . )obtained for 5 relay node diamond network with 3 eavesdroppers using iterative approach , direct and zero - forcing formulation .here we used , , . ] ) vs. 
source power ( ) for five ( 5 ) relay node diamond network with three ( 3 ) eavesdroppers using iterative approach , direct and zero - forcing formulation .here we considered and ] in figure [ fig_secrate ] we plot the secrecy rate with respect to source power ( ) for direct solutions and the solutions obtained using iterative algorithm and zero forcing approach .the reason behind the shape of these curves is already discussed in context of figure [ fig_psrsbeta ] . ) vs. no . of relay nodes ( ) for diamond network with three ( 3 ) eavesdroppers for iterative approach , direct and zero - forcing formulation .the parameters used are , , . ]figure [ fig_secvsrelays ] depicts variation of secrecy with respect to number of relay nodes deployed . as the number of relay node increases new paths from source to destinations are available and therefore , by choosing proper scaling factor we can achieve better secrecy rate .this applies for all three schemes ( zero forcing , individual constraint and sum constraint ) and is also evident from the plot .we have calculated the optimal scaling vector for two - hop amplify and forward ( af ) to obtain the optimum secrecy rate in the presence of multiple eavesdroppers .we begin with considering the special channel conditions for the diamond network and gradually moved to general scenario .analytical solution for special channel conditions and numerical solution for general scenario is proposed . in futurewe would like to investigate optimum secrecy rate of a general af network .also , as friendly jamming improves the secrecy rate in several scenarios , we would like to investigate the impact of jamming on secrecy rate in af networks .10 [ 1]#1 url [ 2]#2 [ 2]l@#1=l@#1#2 a. wyner , `` the wire - tap channel , '' _ bell systems technical journal _ , vol .54 , no . 8 , pp . 13551387 , jan 1975 . b. e. schein , `` distributed coordination in network information theory , '' ph.d .dissertation , massachusetts institute of technology , 2001 .j. laneman , d. tse , and g. w. wornell , `` cooperative diversity in wireless networks : efficient protocols and outage behavior , '' _ information theory , ieee transactions on _ , vol .50 , no . 12 , pp . 30623080 , 2004 .y. zhao , r. adve , and t. j. lim , `` improving amplify - and - forward relay networks : optimal power allocation versus selection , '' in _ information theory , 2006 ieee international symposium on _ , 2006 , pp .12341238 .s. borade , l. zheng , and r. gallager , `` amplify - and - forward in wireless relay networks : rate , diversity , and network size , '' _ information theory , ieee transactions on _ , vol .53 , no .10 , pp . 33023318 , 2007 .s. shafiee , n. liu , and s. ulukus , `` towards the secrecy capacity of the gaussian mimo wire - tap channel : the 2 - 2 - 1 channel , '' _ ieee transactions on information theory _ , vol .55 , no . 9 , pp .4033 4039 , sept .2009 .s. sarma , s. shukla , and j. kuri , `` joint scheduling & jamming for data secrecy in wireless networks , '' in _2013 11th international symposium on modeling optimization in mobile , ad hoc wireless networks , ( wiopt 13 ) _, 2013 , pp . 248255 .a. qualizza , p. belotti , and f. margot , `` , '' in _ _ , ser .the i m a volumes in mathematics and its applications , j. lee and s. leyffer , eds.1em plus 0.5em minus 0.4emspringer new york , 2012 , vol .r. a. horn and c. r. johnson , eds . , _matrix analysis_.1em plus 0.5em minus 0.4emnew york , ny , usa : cambridge university press , 1986 .d. j. 
wilde, _ optimum seeking methods _, prentice-hall, englewood cliffs, nj, 1964, vol .
|
we have evaluated the optimal secrecy rate for amplify-and-forward (af) relay networks with multiple eavesdroppers. assuming i.i.d. gaussian noise at the destination and the eavesdroppers, we have devised a technique to calculate the optimal scaling factors of the relay nodes that attain the optimal secrecy rate under both a sum power constraint and individual power constraints. initially, we considered special channel conditions for both the destination and the eavesdroppers, which led us to an analytical solution of the problem. in contrast, the general scenario is a non-convex optimization problem that not only lacks an analytical solution but is also hard to solve. we have therefore proposed an efficiently solvable _ quadratic program _ (qp) which provides a sub-optimal solution to the original problem. we have then devised an iterative scheme for calculating the optimal scaling factors efficiently in both the sum power and the individual power constraint scenarios. figures in the results section affirm the validity of the proposed solutions.
|
the inspiral of compact objects such as neutron stars and black holes are expected to be a major source of gravitational waves ( gw ) for the ground - based detectors ligo , virgo , geo600 and tama , as well as for the future planned space - based detector lisa . due to the effect of radiation reaction, the orbit of the binary system slowly decays over time . as this happens the amplitude and frequency of the waveform increases emitting a ` chirp ' waveform .there have been many efforts to create templates which will approximate a possible signal to high accuracy .on one hand we have the post - newtonian ( pn ) expansion of einstein s equations to treat the dynamics of the system .this works well in the adiabatic or slow - motion approximation for all mass ranges . on the other handwe have black hole perturbation theory which works for any velocity , but only in situations where the mass of one body is much greater than the other .while templates have been generated to 5.5 pn order for a test - mass orbiting a schwarzschild black hole , and to 3.5 pn order for non - spinning binaries of comparable mass , a number of difficulties still need to be tackled .the main problem is that both templates are a function of the orbital energy and gw flux functions . in the test - mass case , an exact expression in known for the orbital energy , but we have a pn expansion for the flux function . in the comparable - mass case , a pn expansion is known for both functions .it has been shown that the convergence of both methods is too slow to be useful in creating templates that can be confidently used in a gw search .we also know that the pn approximation begins to break down when the orbital separation of the two bodies is .this means that as we approach the last stable orbit ( lso ) the templates begin to go out of phase with a possible signal due to the increase of relativistic effects .as most search methods are based on matched filtering , any mismatch in phase between our templates and a signal will result in a loss of recovered signal - to - noise ratio ( snr ) and an increase in the error in the estimation of parameters .it was shown that templates based on resummation methods such as pad approximation have a faster convergence in modelling the gravitational waveform .the pad based templates were then used to partially construct effective one body templates which went beyond the adiabatic approximation and modelled the waveform into the merger phase .other more phenomenological templates such as the bcv templates seem to be excellent at detecting gw , but are not necessarily the best template to use in the extraction of parameters .in this paper we focus on the inspiral of imri sources .these sources encompass the inspiral of a neutron star ( ns ) into a black hole with masses of 10s to 100s of solar masses ( which should be observable in the ground based detectors ) , to the inspiral of a low mass supermassive black hole ( ) into a more massive black hole ( ) as should be observable with future space based detectors .we do nt believe that the pn approximation used here will be sufficiently accurate to model emri sources , and expect that other methods such as analytic and numerical kludge waveforms will be used to properly model emri waveforms .on the other hand , we fully believe that this method of resumming the pn series using chebyshev polynomials will be also applicable to the comparable mass case . throughout the paper we use the units . 
the problem with expansions like a taylor series is that they are based on weierstrass s theorem , which assumes that there are enough terms in the expansion to sufficiently model the function we are approximating .we know from previous studies that the 11 term expansion for the flux function for test - mass systems may not be sufficient . a more promising possibility is based on getting close to the minimax polynomial by using the family of ultraspherical ( or gegenbauer ) polynomials which are defined by where is a constant .these polynomials are orthogonal over ] . for amplitude of the oscillations remain constant throughout the interval and is conducive to trying to find an `` equal - ripple '' error curve , which is integral to the minimax polynomial .this value of corresponds to the chebyshev polynomials of the first kind , , ( hereafter chebyshev polynomials ) .these polynomials are closely related to the minimax polynomial due to the fact that there are points in [ -1,1 ] where attains a maximum absolute value with alternating signs , i.e. .it can be shown that the chebyshev polynomials exhibit the fastest convergence properties of all of the ultraspherical polynomials . for our purposes ,we need to approximate polynomials which are a function of the dimensionless velocity in the domain ] to an arbitrary interval $ ] using , s\in[-1,1].\ ] ] in this case we have .\ ] ] we can now write the shifted chebyshev polynomials in the form and the recurrence relation as such that the shifted polynomials have the initial conditions the stationary phase approximation the fourier transform for positive frequencies is given by } , \label{d4.6a}\ ] ] where is a normalization constant .the phase of the fourier transform in the stationary phase approximation , , is found by solving the set of order odes given by where is the total mass , is the instantaneous velocity , is the derivative of the orbital energy with respect to the velocity , where is the angular velocity as observed at infinity and is an invariant velocity parameter observed at infinity .finally , is the gravitational wave flux function . for a test - mass particle in circular equatorialorbit about a schwarzschild black hole , an exact expression for the orbital energy exists .its derivative with respect to the velocity is given by where we have introduced a finite - mass dependence through the reduced mass ratio , . 
from this equationthe lso is found by demanding , giving .for the flux function we only have a pn expansion of the form , \label{eq : flux}\ ] ] where is the dominant _ newtonian _ flux function given by and the coefficients in the expansion of the flux function are given by .we begin to encounter logarithmic terms at and above .it is well know that terms such as these can destroy the convergence of a power series expansion .the pad approximant to the flux is defined as p^{n}_{m}\left[\sum_{k=0}^{11}\,f_{_{k}}v^{k}\right],\ ] ] where is the pad operator , is the velocity at the photon ring and the coefficients and are related to the coefficients in the original pn expansion .after factoring out the logarithmic term and introduce a linear term into the non - logarithmic series , the next step is to expand both power series in the above equation in terms of the shifted chebyshev polynomials .this is done by writing each monomial in the power series in terms of the shifted chebyshev polynomials and substituting back into the series above .so , for example , starting with equations ( [ eqn : rec ] ) and ( [ eqn : init ] ) , we can invert each expression for the monomials in , i.e. ... \ ] ] and so on . proceeding like this for all monomials, it then allows us to write the power series in the pn expansion solely in terms of the shifted chebyshev polynomials .the first advantage the chebyshev approximation has over both the pn and pad approximations is that we can also expand the power series appearing in the logarithmic terms as a chebyshev series .substituting for the monomials , we can write as the expression for both series in terms of the coefficients , are long so we will omit them here .we should emphasise the fact here that although the values of the coefficients are zero up to , the chebyshev expansion includes terms from .this allows us to define the chebyshev approximation to the gravitational wave flux function as \left[\sum_{k=0}^{11}\,\lambda_{_{k}}t_{n}^{*}(v)\right],\ ] ] where we re - introduce the pole at the photon ring ..the top two lines give the truncation errors associated with each order of approximation for both the pn and chebyshev flux functions at the lso , where the error in the approximation is greatest .the bottom two lines give the total truncation error incurred as we reduce the order of approximation from 5.5 to 2 pn . [cols="^,^,^,^,^,^,^,^",options="header " , ]
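as a concrete, if schematic, illustration of the two steps described above (evaluating the shifted chebyshev polynomials by their three-term recurrence, and re-expanding a truncated power series in the chebyshev basis), the following python sketch may help; interval endpoints and coefficient values are placeholders rather than the ones used to build the table:

    import numpy as np
    from numpy.polynomial import Polynomial, Chebyshev

    def shifted_chebyshev(v, n, a, b):
        # values t*_0(v) .. t*_n(v): affine map sending [a, b] onto [-1, 1],
        # then the three-term recurrence t_{k+1}(s) = 2 s t_k(s) - t_{k-1}(s).
        s = (2.0 * v - a - b) / (b - a)
        t = np.empty(n + 1)
        t[0] = 1.0
        if n >= 1:
            t[1] = s
        for k in range(1, n):
            t[k + 1] = 2.0 * s * t[k] - t[k - 1]
        return t

    def chebyshev_series(power_coeffs, v_max):
        # re-expand sum_k f_k v^k, considered on [0, v_max], in the shifted
        # chebyshev basis on the same interval; the returned coefficients play
        # the role of the lambda_k multiplying t*_k(v) in the expression above.
        p = Polynomial(power_coeffs)                      # p(v) = sum_k f_k v^k
        return p.convert(domain=[0.0, v_max], kind=Chebyshev).coef

    # economization: dropping the trailing chebyshev coefficients changes the
    # approximation on [0, v_max] by at most the sum of their absolute values,
    # since |t*_k(v)| <= 1 there, which is why the truncation error grows slowly.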
|
we introduce a new method for modelling the gravitational wave flux function of a test - mass particle inspiralling into an intermediate mass schwarzschild black hole which is based on chebyshev polynomials of the first kind . it is believed that these intermediate mass ratio inspiral events ( imri ) are expected to be seen in both the ground and space based detectors . starting with the post - newtonian expansion from black hole perturbation theory , we introduce a new chebyshev approximation to the flux function , which due to a process called chebyshev economization gives a model with faster convergence than either post - newtonian or pad based methods . as well as having excellent convergence properties , these polynomials are also very closely related to the elusive minimax polynomial . we find that at the last stable orbit , the error between the chebyshev approximation and a numerically calculated flux is reduced , , at all orders of approximation . we also find that the templates constructed using the chebyshev approximation give better fitting factors , in general , and smaller errors , , in the estimation of the chirp mass when compared to a fiducial exact waveform , constructed using the numerical flux and the exact expression for the orbital energy function , again at all orders of approximation . we also show that in the intermediate test - mass case , the new chebyshev template is superior to both pn and pad approximant templates , especially at lower orders of approximation .
|
the advent of high - throughput genomics technologies has made available large quantities of data , transforming molecular biology into a remarkably data - rich science .each passing year sees an increase in the use of high - dimensional data to probe everything from gene regulation and the evolution of genomes to the individual genetic profile of complex disease development .life scientists now find themselves having to cope with huge data sets , and face challenges extracting and interpreting the wealth of information hidden in these data .representing data in a well - studied formal structure is ideally suited to follow - up analysis and to addressing many of the questions arising from the interpretation of large scale data .recently developed experimental and computational techniques yield networks of increased size and sophistication .the study of such complex cellular networks is emerging as a new challenge in biology .network science is now central to molecular biology , serving as a framework for reconstructing and analyzing relations among biological units .the characteristic combination in biology of minute observation and a large number of variables results in very dense networks , the upshot of which , from a data analysis perspective , is the so - called curse of dimensionality " problem .biological networks carry information , transfer information from one region to another and implement functions represented by the network s interactions .the visualization and analysis of such networks can pose significant challenges , which are often met by identifying the backbone of complex networks . over the last decade , determining the vital features of these huge networks has been an intriguing topic , and continues to be a challenge .dimension reduction methods offer a potentially useful strategy for tackling such problems .they aim to reduce the predictor dimension prior to any modeling efforts .the main aim of all these efforts is to extract a processing core from large noisy networks .surprisingly , the amount of information lost or conserved in so doing has remained unknown or unquantified . furthermore , there is no general framework for evaluating and comparing these methods .here we propose a novel approach for studying the complexity of biological networks and for evaluating network dimensionality reduction processes , applying information - theoretic measures to detect global and local patterns .in particular , we study the rate at which information can be lost , recovered or reconstructed in reduced complex artificial and real networks , while retaining the typical features of biological , social , and engineering networks , such as scale - free edge distribution and the small - world property . we will use a more powerful measure of information and randomness than shannon s information entropy , namely , the so - called kolmogorov complexity . has been proven to be a universal measure theoretically guaranteed to asymptotically find any computable regularity in a dataset . can be effectively approximated by using lossless compression algorithms , for example .that is , compression algorithms for which decompression fully recovers the original object , with no loss of information . a good introduction to the subjectmay be found in and . 
to approximate kolmogorov complexity, we use a technique called the _ block decomposition method _ ( or simply bdm ) based on algorithmic probability and two generally employed lossless compression algorithms , bzip2 and deflate .bzip2 is an open source data compressor that uses a stack of different algorithms superimposed one atop the other starting with run - length encoding , burrows - wheeler or the huffman coding , among others .we sometimes compare , strengthen or complement findings by also providing estimations of shannon s information entropy .while more dimension reduction techniques can be conceived of than can be thoroughly analyzed in a single paper , we provide the tools and methods with which to do so , regardless of the technique . here , however , we compare three distinct graph dimension reduction techniques ( graph spectrum , sparse graph and motif profile ) and evaluate their ability to preserve the information content of the original network .these methods have been applied to different biological networks in order to understand complex cellular behaviours .the logic behind the use of motif profiles is the basic assumption that the over - representation of a certain motif in a network indicates that it has some functional importance .thus , exploring the most frequently occurring motifs in a network may afford novel insights into the functionality of these motifs within the network .fanmod has been used to find network motif profiles .the sparse networks have been obtained by applying the effective resistances sparsification method .effective resistances sparsification has been reported to be one of the quickest sparsification methods , which keeps the backbone of a network intact .we compare what the three methods ( see appendix ) , graph spectra , graph motifs and graph sparsification which are clearly forms of lossy compression as the networks can not be fully recovered capture , and we test whether they characterize families of networks . in other words , we measure the ability of these methods to preserve key information .we show that they not only capture different properties but also preserve different amounts of information from the original objects .there were four main sources of networks to which dimensionality reduction methods and information - theoretic measures were applied one source was tailored graphs produced specifically for this paper , such as spider graphs and co - spectral graphs .real - world networks come from the landmark paper where network motifs for systems biology was introduced .finally , from the widely - known artificial gene network series century database ( mendes db ) , a sample comsisting of two small - world networks ( sw ) , two scale - free networks ( sf ) and two erds - rnyi networks ( er ) were used , all of them with 100 nodes and 200 edges .these are public data sources of well - known networks , used instead of custom - made networks in the interest of impartiality .methods and measures were thus applied to networks that are widely available and not to networks contrived to suit the particular methods or measures applied in this paper . from now on all graphs analyzed , whether natural or synthetic , are directed , but no information regarding activation or inhibition is taken into account ( since for several of them there is none ) ..*complexity approximation by bdm , deflate and bzip2 of all original graphs . 
*[ cols="^,^,^,^,^,^,^,^,^,^ " , ] complexity of all graphs approximated by the three methods : bdm , deflate and bzip2 , normalized by number of nodes and number of edges . list sorted by last column bzip2 normalized by number of edges .a figure plotting these values for comparison and normalized between 0 and 1 is provided in fig .[ figure3 ] .[ tab : table1 ] , [ figure4 ] and [ figure5].,width=453 ] the complexity of biological networks may be studied by employing information - theoretic measures to detect global and local patterns and to measure the information content of graphs and networks .[ figure1 ] shows the flowchart of the proposed testbed for assessing information loss / preservation in network dimensionality techniques .first , as a proof of concept , fig .[ figure2]a shows that the shannon entropy of the adjacency matrix diminishes in value for a growing number of disconnected nodes .[ figure2]b shows the impact of adding disconnected nodes to a graph as an estimation error of approximations to graph entropy ( ) , and of graph algorithmic complexity estimated by bdm ( ) . the block decomposition method ( bdm )is a novel technique for approximating kolmogorov complexity by means of algorithmic probability ( c.f .section [ kmotifs ] ) .both bdm and measures behave as expected : while algorithmic complexity increases marginally due to the small information content added , with diminishing impact , by the contribution of every disconnected node , entropy asymptotically moves towards 0 . since the graph entropy and complexityare measured over the adjacency matrix of the graph , adding disconnected nodes means adding rows and columns of 0s , which are highly compressible and of low entropy and block entropy ( entropy rate , i.e. taking as unit _ micro - states _ all submatrices of bits from to the length of the adjacency matrix ) .it follows then that algorithmic complexity captures important features of these graphs . in , we showed that deflate and bdm very closely approximated the complexity of dual graphs .here we performed a similar test using cospectral graphs , with a surprising positive outcome . in graph theory , the set of eigenvalues of the adjacency matrix of a graph is referred to as the _ spectrum _ of the graph .two graphs are _isospectral _ or _ cospectral _ if the adjacency matrices of the graphs have equal multisets of eigenvalues , i.e. , the same spectra .cospectral graphs may look very different ; two examples are shown in [ figure2]b. however , entropy ( fig .[ figure2]b ) and algorithmic complexity estimated by bzip2 ( fig .[ figure2]b and c ) and bdm ( fig .[ figure1]d ) provided the same information content values for almost all co - spectral graphs considered .bdm ( fig .[ figure2]d ) provided better estimations ( with higher rank correlation and less outliers ) than bzip2 and entropy ( fig .[ figure1]c ) for 180 graphs and their cospectrals , that is , the original graphs and their cospectral counterparts had values closer to each other .this is consistent with the fact that cospectral graphs share important algebraic properties and should therefore have a similar information content , but it was not necessarily theoretically expected , there being no known procedure for producing all graphs with a certain spectrum and no simple algorithm for producing a cospectral graph from a given graph . 
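a rough sketch of how the compression-based part of these estimates can be reproduced, using python's built-in bz2 and zlib codecs as stand-ins for bzip2 and deflate (the bdm estimates discussed in the text are not reproduced here):

    import bz2, zlib
    import numpy as np

    def compressed_size(adj, compressor="bzip2"):
        # information-content estimate: length of the losslessly compressed
        # bit string of the adjacency matrix, read row by row.
        bits = "".join(str(int(x)) for x in np.asarray(adj).flatten())
        data = bits.encode("ascii")
        return len(bz2.compress(data)) if compressor == "bzip2" else len(zlib.compress(data, 9))

    def normalised_complexity(adj):
        # compressed length per node and per edge, as used for the comparisons above.
        n = adj.shape[0]
        m = max(int(np.sum(adj)), 1)
        c = compressed_size(adj)
        return c / n, c / m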
in general , there is no one - to - one correspondence , and in this sense the cospectral information - content similarity is more surprising than that of dual graphs .that classic entropy , bzip2 complexity and bdm based on algorithmic probability produce very similar complexity values for cospectral graphs means that these methods are ( from worse to better ) able to capture fundamental algebraic properties shared by cospectral graphs and so can be used , as we claim , for comparing reduction methods . as part of the dataset to be considered , we assessed the amount of information ( in bits ) in six networks from an artificial gene network database : two networks with small - world ( sw ) topology , two scale - free networks ( sf ) and two erds - rnyi ( er ) . in the past ,most of the work on the complexity of graphs was focused on random networks , the so - called erds - rnyi networks .but most of the interesting features of biological networks arise from the fact that these networks are not like random graphs .connections among elements in a biological network are neither simple nor random . the small - world property of networks signified by a small diameter has been established beyond a doubt , revealing the key role of short cuts common in many real networks , from protein interactions to social networks , and from the network of hyperlinked documents to the interconnected hardware behind the internet .real networks , including biological networks , are also known to be scale - free .this suggests other possible mechanisms that could be guiding network formation . herewe explore the complexity of these three large random graph classes , i.e. , er , sf and sw and various real - world , biological and non - biological networks .the results of the estimation of the kolmogorov complexity of these artificial networks show that while there is no agreement as regards whether sw is more complex than sf or vice versa , for shannon entropy , sw networks display greater combinatorial complexity ( not shown in graphs ) .but for bdm , sf networks are more complex ( fig .[ figure3]a , b ) , and both compression algorithms are in agreement as to the slightly greater complexity of sw and er networks ( fig .[ figure3]a , b ) .and in fact we have found that both bdm and compression can separate these graph in topological groups ( see and ) . however , compression algorithms reverse the complexity order among sf , sw and er , which is once again in agreement with bdm on motifs as a network dimensionality reduction method ( fig .[ figure5 ] ) , thus showing that bdm does not harbor a bias toward motifs . 
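for a purely qualitative reproduction of this kind of comparison, the snippet below builds undirected stand-ins for the three topologies at roughly the size of the mendes networks (100 nodes, about 200 edges) and reuses the compression helper sketched above; generator parameters and seeds are arbitrary choices of ours:

    import networkx as nx
    import numpy as np

    n = 100
    graphs = {
        "erdos-renyi": nx.gnm_random_graph(n, 200, seed=1),
        "scale-free":  nx.barabasi_albert_graph(n, 2, seed=1),
        "small-world": nx.watts_strogatz_graph(n, 4, 0.05, seed=1),
    }

    for name, g in graphs.items():
        adj = nx.to_numpy_array(g, dtype=int)
        per_node, per_edge = normalised_complexity(adj)   # helper from the sketch above
        print(f"{name:12s}  c/node = {per_node:7.2f}   c/edge = {per_edge:7.2f}")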
that compression of the original graphs retrieves a different order than bdm and compression on motif profiles is counterintuitive because sw networks for small rewiring probability are very close to regular ( ring / cycle ) networks and should therefore not have large complexity values .however , compression algorithms differ from bdm in that they are entropy rate estimators and can therefore be fooled if no trivial statistical regularities are found .since we have normalized kolmogorov complexity estimations by number of edges and nodes , this result can be compared directly with other networks , and we do not need to have exactly the same number of nodes or edges for comparison .[ figure3 ] shows the complexity values and information content estimations of the 16 graphs from and the mendes db using bzip2 and deflate lossless compression algorithms ( bdm can not be applied directly to real - number values , see section [ kmotifs ] ) as approximations to kolmogorov complexity normalized by node .interestingly , we see bdm values retrieve differences between networks , meaning that local regularities better characterize them .so bdm values can be used to characterize families of networks .we report the results of our evaluation of the loss and preservation of information in network reduction techniques .to do this we first measure the information content of the adjacency matrix of a graph , then the information content of the graph resulting from the application of each dimensionality reduction method .finally we consider the difference of these values for complexity measure and reduction method . in general , , but some methods , such as spectral analysis ( c.f .section [ kspectra ] ) , can lead to the introduction of spurious information such that , especially for complexity measures of an entropic nature , such as compression ( in contrast to those of an algorithmic nature such as bdm ) .but we are mostly interested in the case in which given 2 graphs and such that for complexity measure , then , especially for cases in which this is preserved across different . the subgraph complexity ( bdm ) and lossless compression ( bzip2 ) values of the networks ( fig . [ figure3 ] ) that were classified by their network motifs have been studied before , in order to assess the preservation of relative information content . that is , whether , where is any of the complexity methods used in this project : bdm , deflate ( compress ) and bzip2 , on all reduction methods : motif profiles , graph spectra and sparsification . the results summarized in fig .[ figure3 ] encompass genetic , protein , power grid and social networks , as described in .the plot shows that compression and bdm preserve to some degree the relative information content of most types of networks but bdm produces a convex curve while all others are more concave ( fig .[ figure3]b ) .while deflate and bzip2 show different degrees of success at distinguishing families of networks , bdm was the best at distinguishing networks by their families assigning lower or higher complexity to different groups ( e.g. 
, genetic vs protein vs electric , or erds - rnyi vs scale - free vs small world ) even normalizing by edge density and thus truly capturing essential differences of their topological properties .this is consistent with the main result in , showing that local graph structures can classify network families with great precision , bdm , however , looks at local structures in the network adjacency matrices instead , which is a proper superset than counting subgraphs ( motifs ) as done before in the cited papers .there is an extensive literature on connecting graph properties to the eigenvalues of graph adjacency matrices .the so - called eigenvalue spectrum of these graphs provides information about their structural properties .eigenvalues store information about a graph. many properties of a graph can be recognized from its spectrum .we have calculated the amount of information preserved in spectra of different network families .graph spectra can characterize certain properties of graphs .for example , spider graphs with rays have redundant eigenvalues , and the spider graph spectrum characterizes the graph by its number of rays and diameter .indeed , it follows from the configuration of the adjacency matrix that the spectrum of a spider graph of rays and diameter 1 is : , with spiders of greater diameter having slightly greater complexity .this simplicity in the redundancy of the spectrum of spider graphs is consistent with their low kolmogorov complexity . unlike the process of growing a spider graph , growing a random graph with edge density 0.5 requires a larger amount of information to specify the graph spectrum .indeed , the kolmogorov complexity of the spectrum of a spider graph is bounded by the number of rays with the same eigenvalue with complexity , and the number of trailing 0s with kolmogorov complexity .all biological networks were subject to the greatest loss of information when spectral sparsification was used ( see fig . [ figure3 ] , where spectral curves are mostly flat , thus not allowing us to distinguish between different networks ) .this is because spectra analysis is lossy ( many graphs can have the same graph spectrum ) and therefore is bound to lose vital information , even if spectra capture important algebraic properties of a network .biological networks were also found to have close to nilpotent eigenvalues , but we found no theoretical explanation for this ( see fig .[ figure4 ] where biological networks have values closer to ) .we think the reason is the high number of low degree nodes in biological networks .indeed , it has been pointed out that the spectrum of these networks is quite susceptible to fluctuations of the vertex degrees , and in the case of irregular graphs the eigenvalues of the adjacency matrix just mirror the tails of the degree distribution and thus do not reflect any global graph properties . 
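a sketch of how such spectrum-based estimates can be computed (the rounding precision, the text encoding of the eigenvalues and the restriction to their real parts are simplifying choices of ours):

    import bz2, zlib
    import numpy as np

    def spectrum_information(adj, digits=4, compressor="bzip2"):
        # c(sp(g)): sort the eigenvalues of the adjacency matrix, write them out
        # as text and measure the losslessly compressed length of that string.
        eig = np.sort(np.linalg.eigvals(np.asarray(adj, dtype=float)).real)
        text = ",".join(f"{x:.{digits}f}" for x in eig).encode("ascii")
        packed = bz2.compress(text) if compressor == "bzip2" else zlib.compress(text, 9)
        return len(packed)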
while spectra analysis is known to be a lossy data reduction technique ,our results show that spectra analysis respects the information order of real - world networks , as compared to the full lossless compressed lengths of the networks .another interesting phenomenon was the perfect match of values between bdm and deflate for the synthetic networks .thus , taken together , bdm and deflate perfectly differentiate between the natural and artificial networks to be further investigated .that graph spectra are inconsistent with the common estimation trend of kolmogorov complexity , as reported in previous experiments , suggests that graph spectra analysis is the method with the greater loss of information .yet this does not make it less interesting as a measure for quantifying certain aspects of a graph , provided we take into account that this method may indeed lose the relative complexity and information content of the original graph .the graph laplacian may be claimed to more naturally represent some properties of graphs , when compared to the plain graph spectrum . from the point of view of information content, the laplacian can not contain more information than the information that can already be extracted from the adjacency matrix .indeed , the laplacian is defined as , where is a diagonal matrix where each diagonal entry is the number of links for each node and the adjacency matrix . can clearly be derived from as the sum of 1s in each row . moreover , the calculation of the laplacian is of fixed size .hence differs only by a constant value .but it remains to be ascertained whether the laplacian conserves more information than the regular graph spectrum , despite the fact that both retrieve the same number of vector entries . taking the information content from the spectra alone does not preserve the relative order or show any clustering capabilities by type of network .this means that when using bdm , graph motif and compressibility analysis , order is better preserved among networks of the same family than among different families .the idea of a local scale subgraph - based analysis was first presented in , when network motifs were discovered in the gene regulation ( transcription ) network of the bacteria e. coli and then in a large set of natural networks .a network motif is defined as a recurrent and statistically significant sub - graph occurring in a network or across various networks .more formally , if and are two graphs , , the number of appearances of graph in is referred to as the frequency of in . a graph is referred to as recurrent ( or frequent ) in when its frequency is above a predefined threshold or cut - off value ( usually compared to a random graph ) .much work has been done on the subject , resulting in the discovery of characteristic motifs among species and network types , and even superfamilies of network motifs that characterize complete classes of networks such as transcription interaction , signal transduction , even social networks .motifs have recently garnered much attention as a useful concept for uncovering the structural design principles of complex networks .there have been suggestions that motif analysis can not deliver on the promise of a deeper understanding of the networks studied ( eg ) , mainly because of a loss of information pertaining to context , i.e. 
, the broken connections between subgraphs .while it is clear that local scale information is lost , it is not clear how much a subgraph analysis can preserve of the information content of the original full - size networks .motifs have been of signal importance largely because they may reflect functional properties .we ask how much information can be recovered by looking at a network on a very local scale , as proposed by the network motif analysis approach .the concept of algorithmic probability will enable us to approximate and add small - scale complexity from the decomposition of a network into its possible subgraphs in order to determine the amount of information that is preserved in this bottom - up approach , as compared to the information content of the full - size network . in fig .[ figure2 ] we show the motifs , as calculated by the open - source software fanmod , of escherichia coli , together with information - theoretic measures associated with each motif .we see that shannon s entropy distinguishes two cases , assigning the two lowest possible entropic measures ( and ) , while bdm approximations provide a finer - grained classification , retrieving 3 different values for all 4 motifs .both shannon entropy and kolmogorov complexity approximations agree on the equal complexity of the first two motifs .results of applying compressibility ( deflate and bzip2 ) and algorithmic probability ( bdm ) to approximate the kolmogorov complexity motifs of the artificial network showing the agreement of the compressed size of network motif files , network motifs of size 4 and 5 , when compared to the complexity of the original networks ( bdm , bzip2 and deflate ) ( see fig .[ figure3 ] ) . fig .[ figure3]c shows that natural and synthetic networks that belong to the same family or have the same topology have similar complexity values , both for the original and for the motif compressed file sizes .the same compression trend is confirmed between motifs and bdm for both sets of graphs , providing further evidence of the connection established in this paper between the information content of subgraphs ( more properly , some subarrays of the adjacency matrices ) and the frequency of a subset of overrepresented graphs ( known as graph motifs ) .similarly , but to a lesser degree , bzip2 and deflate ( fig .[ figure3]a - b ) show network family clustering capabilities , assigning graphs of similar origins or topology more or less the same incompressibility values as approximations of their complexity / information content .sparsification can be viewed as a procedure for finding a set of representative edges and weighting them appropriately in order to choose a smaller but representative number of vertices and edges that preserve important features of a network , for example , its _backbone_. sparse graphs are easier to handle than denser ones and can be used for network dimensionality reduction for the study of very large networks .a sparse graph is one whose number of edges is reasonably viewed as being proportional to its number of vertices .one may consider a graph sparse if its average degree is less than 10 . 
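returning briefly to the motif profiling discussed above, a very small-scale illustration (size-3 patterns only, computed with networkx; fanmod additionally handles size-4 and size-5 motifs and significance testing against randomised networks) can be sketched as:

    import networkx as nx

    def connected_triad_profile(g):
        # counts of the 16 directed 3-node patterns; the empty and single-dyad
        # patterns (codes 003, 012, 102) are dropped so that only connected
        # patterns (candidate motifs) remain.
        census = nx.triadic_census(g)
        return {code: count for code, count in census.items()
                if code not in ("003", "012", "102")}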
while real - world networks are already sparse by most standards , because of their typically large size it is often useful to reduce their dimensionality further in order to enable inspection of the most important connections , for example , in biology , where even a new link of regulation between genes can be a breakthrough .sparsification methods have been used in biology ( eg ) .it has traditionally been shown that these algorithms preserve topological properties of the original networks after sparsification , but little is known about the information content conservation . in this sectionwe calculated the amount of information preserved in spectral sparsifiers of different types of network .we used the algorithm suggested in for the purpose , a fast algorithm to calculate sparse networks by random sampling , where the sampling probabilities are given by the effective resistances of the edges .the effective resistance of an edge is known to be equal to the probability that the edge appears in a random spanning tree of .it has been proved that for each error parameter there is such a spectral sparsifier , and that it can be calculated in time for some large constant of the sampling method by replacement from graph . will denote the graph resulting from the application of the sparsification method to .here we are interested in determining whether this other method actually preserves the information of the network , beyond topological properties .to which end we again measure the information content by way of shannon entropy and kolmogorov complexity of the networks previously studied .[ figure3 ] show that the method does indeed follow the relative information content of the lossless compressed lengths of the original networks .we have chosen the error terms for all networks so as to keep 20% , 40% , 60% and 80% of the edges , following a recent , widely accepted network sparsification algorithm , as described in .we report the findings for the rate of information loss in fig .[ figure3 ] .the information loss rates for sparsification preserving degree distribution ( see fig . [ figure5 ] ) ( differences between 20% , 40% , 60% and 80% threshold values ) are -2.44 , -0.908 and -0.611 .the relative order of information content was preserved upon application of all methods .only bzip2 reports an inconsistency in the relative information conservation for sw networks .the rest including deflate indicate good preservation of the features that characterize the information content of the original networks .we calculated the graph spectra of several real - world networks from . , consisting of a list of eigenvalues of the adjacency matrix of , denotes the spectra .[ figure3 ] shows the result of compressing both the original networks and their graph spectra .the approximation of the algorithmic information content preserved by the spectra is calculated by losslessly compressing of eigenvalues of the graph adjacency matrix of of size , and is denoted by , where is a lossless compression algorithm ( e.g. , bzip2 or deflate ) and the eigenvalues sorted from smallest to largest . 
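the random-sampling sparsifier described earlier in this section can be sketched, in a much simplified form, as follows (undirected graphs, sampling without replacement and without the re-weighting step of the original algorithm, so this is an illustration rather than the sparsifier whose guarantees are cited in the text):

    import networkx as nx
    import numpy as np

    def effective_resistance_sparsifier(g, keep_fraction=0.5, seed=0):
        nodes = list(g.nodes())
        idx = {u: i for i, u in enumerate(nodes)}
        # effective resistances from the moore-penrose pseudoinverse of the
        # laplacian: r_uv = l+_uu + l+_vv - 2 l+_uv
        lap = nx.laplacian_matrix(g, nodelist=nodes).toarray().astype(float)
        lp = np.linalg.pinv(lap)
        edges = list(g.edges())
        r = np.array([lp[idx[u], idx[u]] + lp[idx[v], idx[v]] - 2.0 * lp[idx[u], idx[v]]
                      for u, v in edges])
        # sample edges with probability proportional to their effective resistance
        p = r / r.sum()
        rng = np.random.default_rng(seed)
        k = max(1, int(keep_fraction * len(edges)))
        chosen = rng.choice(len(edges), size=k, replace=False, p=p)
        h = nx.Graph()
        h.add_nodes_from(nodes)
        h.add_edges_from(edges[i] for i in chosen)
        return h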
as seen in fig .[ figure3 ] , bdm fully characterizes network topology ( see the synthetic network values ) and assigns similar complexity to similar networks , in agreement with previously reported motif analysis results .[ figure3 ] shows the complexity values for the protein networks 1 , 2 and 3 ; social networks 1 and 2 ; electronic circuit networks 1 and 2 ; genetic networks ( yeast and ecoli ) ; and 3 types of graph with different topologies ( erds - rnyi , scale - free and small - world from the mendes db ) .the rate of information loss is clear , with the greatest loss at 80% and then diminishing at a decreasing speed the greater the sparsity , keeping relative information but deleting edges at the determined values .trends show that the algorithm preserves the absolute and relative information content of the original networks .[ figure4 ] shows a very interesting phenomenon .reaching a 40% sparsification value has the diametrically opposite effect to losing information ; the resulting network appears more random because most of the structure is lost . then at 20% the original trend resurfaces; the resulting sparse graph is truly small as compared to and comes last , with the smallest information content .combined , this strongly suggests that keeping less than 50% leads to important information being lost , and some complexity may actually be introduced ( e.g. , from graph disconnection ) .this of course depends on the topological structure of the graph it is known that scale - free networks are more robust in the face of random failure but less so in the face of targeted attacks .this is in contrast to motif analysis , as shown in subsection [ kmotifs ] , where it was demonstrated that very few elements of local structure ( subgraphs ) preserve the basic information necessary to continue characterizing the networks .sparsification is thus seen to be safe for real - world networks at a 50% value , and unsafe for lower values , where most of the information begins to be lost , as happens in spectral analysis .while a variety of dimensionality reduction techniques have been proposed in recent years , beyond network motif analysis and sparsification techniques , there has not been much done in the area of network dimensionality reduction . here, we presented a novel and systematic way to compare old and new dimensionality reduction techniques based on information theory .the suggested methodology is based on the calculation of the preservation of information content .here it was empirically demonstrated the application of these novel methods .we have measured their effectiveness on a relatively limited but representative set of data , and reported their potential and associated information loss for dimensionality reduction . while our empirical results are a useful pointer , further numerical and theoretical work is probably needed to understand better the reasons underpinning the reported results . 
as a proof of concept, we first showed that approximations of the information content of cospectral networks are similar , as is consistent with the theory .we then tested three important graph dimensionality reduction techniques , showing the various ways and the degrees to which each method is capable of preserving the information content of the original networks .we calculated for the first time the impact of applying three important network reduction techniques to the information content of the 3 most important random and complex graph models , namely erds - rnyi , barabsi - albert and watts - strogatz .the results of the experiments reveal that the sparsification method evaluated preserves relative information and that its rate of information loss is as expected , but in deleting more than half the edges it leads to significant inconsistencies and loss of information . in the case of motif analysis , we found results in agreement with the method based on algorithmic probability that approximated the algorithmic information content of a network by considering local regularities , validating ( and generalizing ) the surprising fact that local regularities ( subgraphs ) preserve information to such a degree that important profiling information from the networks is fully recoverable ( e.g. , their type across different superfamilies ) , as reported in . finally , graph spectra was the most irregular reduction technique , capturing important algebraic information in a condensed fashion but in the process losing most of the information content of the original network .the results we report indicate that a local complexity approach retrieves enough local information about networks to distinguish between families , which is not possible by averaging their information content on the global scale through applying lossless compressibility to the complete networks .the results suggest that despite its local nature , motif analysis is the method that preserves the most information , while sparsification techniques are to be used carefully and can not reduce the network edge density by more than 50% without losing information essential for characterizing the network s original complexity . and finally , while graph spectra analysis captures important algebraic features , it is to be used with the greatest care , as it is definitely the technique that loses most of the original information content , making it impossible to reconstruct properties of the original graph in the general case .the paper explains these results by identifying weaknesses among these techniques and providing instructions on what they are best at and what to avoid , thus making it possible to improve the application of these methods for different purposes and clearing a path to assess other techniques and make meaningful comparisons .it helps to evaluate and compare network dimension reduction techniques that have been proposed so far and may be introduced in the future .the authors would like to thank our karolinska institutet colleagues : gordon ball for his valuable technical help , and the other members of the unit of computational medicine .this work was supported in part by the foundational questions institute ( hz ) , the john templeton foundation ( hz ) , the vinnova ( vinnmer ) marie - curie fellowship , ( nk ) , afa insurance ( jt ) , torsten sderberg foundation ( jt ) , stategra ( jt , hz ) , the stockholm county council and the swedish research council . 
the funders played no role in the design of the study , in data collection and analysis , in the decision to publish , or in the preparation of the manuscript .aho , m.r .garey , j.d.ullman , the transitive reduction of a directed graph , _ siam journal on computing _ 1 ( 2 ) : 131137 , 1972. m. albrecht , a. kerren , k. klein , o. kohlbacher , p. mutzel , w. paul , f. schreiber , and m. wybrow , on open problems in biological network visualization , d. eppstein and e. gansner , editors , graph drawing , _ lecture notes in computer science _ , 5849:256267 , springer berlin heidelberg , 2010 .r. albert , h. jeong , a.l .barabasi , error and attack tolerance of complex networks , _ nature _ jul 27;406(6794):37882 , 2000 .r. albert and a .-barabsi , statistical mechanics of complex networks , _ rev .74 , _ 47 , 2002 .aliferis , a. statnikov , and i. tsamardinos , challenges in the analysis of mass - throughput data : a technical commentary from the statistical machine learning perspective , _ cancer inform _ , 2 : 133162 , 2006 .u. alon , collection of complex networks .uri alon homepage 2007 ( accessed on july 2013 ) .e. august and a. papachristodoulou , efficient , sparse biological network determination , _ bmc systems biology , _ 3:25 , 2009 .a. banerjee , j. jost , graph spectra as a systematic tool in computational biology , _discrete applied mathematics _, 157:10 , pp . 24252431 , 2009 .barabsi , r. albert , emergence of scaling in random networks .science 286 ( 5439 ) : 509512 , 1999 .j. batson , d.a .spielman , n. srivastava , and s .- h .teng , spectral sparsification of graphs : theory and algorithms , vol .56:8 , _ communications of the acm _s. boccaletti , v. latora , y. moreno , m. chavez , d .- u .hwang , complex networks : structure and dynamics , _ physics reports , _vol 424:45 , pp .175308 , 2006 .s. carmi , s. havlin , s. kirkpatrick , y. shavitt , e. shir , a model of internet topology using k - shell decomposition , _ proc natl acad sci usa _ 104 : 1115011154 , 2007 .calude , _ information and randomness : an algorithmic perspective _ , eatcs series , 2nd . edition , 2010 , springer .g.j . chaitin . on the length of programs for computing finite binary sequences _ journal of the acm _ , 13(4):547569 , 1966 .there are planar graphs almost as good as the complete graph , _j. comput .sci . , _ 39:205219 , 1989 . t.m . cover and j.a .thomas , _ elements of information theory _ ,2nd edition , wiley - blackwell , 2009 .e.r . van dam , and w.h .haemers , spectral characterizations of some distance - regular graphs ,_ j. algebraic combin . _ 15 , 189 - 202 , 2003 .delahaye and h. zenil , numerical evaluation of the complexity of short strings : a glance into the innermost structure of algorithmic randomness , _ applied mathematics and computation _ 219 , 6377 , 2012 .dorogovtsev and j.f.f .mendes , evolution of networks , _ adv ._ 51 , 1079 , 2002 .p. eichenberger , m. fujita , s.t .jensen , e.m .conlon , d.z .rudner , s.t .wang , c. ferguson , k. haga , t. sato , j.s .liu , r , losick , the program of gene transcription for a single differentiating cell type during sporulation in bacillus subtilis , _ plos biology _ 2 ( 10 ) : e328 , 2004 . n.j .foti , j.m .hughes , and d.n .rockmore , nonparametric sparsification of complex multiscale networks , _ plos one _ , 6(2 ) : e16431,2011 .r.l .. graham , p hell , on the history of the minimum spanning tree problem annals of the history of computing , vol 1:7 , 4357 , 1985. a. gundert and u. 
wagne , on laplacians of random complexes , _ proceedings of the acm symposium on computational geometry , socg 2012_. s. ivakhno and j.d .armstrong , non - linear dimensionality reduction of signaling networks , _ bmc systems biology _ , 127,2007. m. kitsak , l.k .gallos , s. havlin , f. liljeros , l. muchnik , h.e .stanley , h.a .makse , identification of influential spreaders in complex networks ._ nat phys _ 6 : 888893 , 2010 .three approaches to the quantitative definition of information , _ problems of information and transmission _, 1(1):17 , 1965 .knabe , _ computational genetic regulatory networks : evolvable , self - organizing systems _ , springer , 2013 .knabe , c.l .nehaniv , m.j .schilstra , do motifs reflect evolved function ? no convergent evolution of genetic regulatory network subgraph topologies , _ biosystems _ , 94 ( 1 - 2 ) : 6874 , 2008 .langton , studying artificial life with cellular automata , _ physica d : nonlinear phenomena , _ 22 ( 1 - 3 ) : 120149 , 1986 .laws of information conservation ( non - growth ) and aspects of the foundation of probability theory , _ problems of information transmission _, 10(3):206210 , 1974 . m. li and p.vitnyi , _ an introduction to kolmogorov complexity and its applications _ , 3rd ed . ,springer , 2009 .m. liu , b. liu , f. wei , graphs determined by their ( signless ) laplacian spectra , _ electronic journal of linear algebra , _ 22 , pp .112124 , 2011 .p. mendes , s. wei , and ye keying , artificial gene networks for objective comparison of analysis algorithms , _ eccb _ , pp 122129 , 2003 .r. milo , s. shen - orr , s. itzkovitz , n. kashtan , d. chklovskii , and u. alon .network motifs : simple building blocks of complex networks , _ science _ 298 , no .5594 : 824 - 827 , 2002 .r. milo , s. itzkovitz , n. kashtan , r. levitt , s. shen - orr , v. ayzenshtat , m. sheffer , u. alon , superfamilies of designed and evolved networks , _ science _ 303 , 15381542 , 2004 .newman , finding community structure in networks using the eigenvectors of matrices , _ phys .e _ 74 , 036104 , 2006 . n. przulj , d.g .corneil , and i. jurisica . modeling interactome : scale - free or geometric ? ._ bioinformatics 20 , _ 18 : 35083515 , 2004 .t. rado , on non - computable functions , _ bell system technical j. _ , 41 , 877884 , may 1962 .shannon , a mathematical theory of communication ._ bell system technical journal , _ 27 ( 3 ) : 379423 , 1948 .shen - orr , r. milo , s. mangan and u. alon , network motifs in the transcriptional regulation network of escherichia coli , _ nature genet ._ 31 , 6468 , 2002 . k. tsuda , hj .shin , and b. schlkopf , fast protein classification with multiple networks , _ bioinformatics _ , vol .21 , issue suppl .2 , pp ii59ii65 , 2005 . m.e.j .newman , the structure and function of complex networks , _ siam review _ 45 ( 2 ) : 167256 , 2003 .f. soler - toscano , h. zenil , j .-delahaye and n. gauvrit , _ correspondence and independence of numerical evaluations of algorithmic information measures _ , computability , vol .2 , pp . 125140 , 2013 . f. soler - toscano , h. zenil , j .-delahaye and n. gauvrit , _ calculating kolmogorov complexity from the frequency output distributions of small turing machines _ , plos one 9(5 ) , e96223 , 2014 .solomonoff , a formal theory of inductive inference : parts 1 and 2 ._ information and control _ , 7:122 and 224254 , 1964 .spielman , n. 
srivastava , graph sparsification by effective resistances , _ proceedings of the fortieth annual acm symposium on theory of computing ( stoc 08 ) , _ 563568 , 2008 .spielman , s .- h.teng , spectral sparsification of graphs , _ siam j. comput . , _40(4 ) , 9811025 , 2011 . s. wernicke , f. rasche , fanmod : a tool for fast network motif detection , _ bioinformatics , _ 22 ( 9 ) : 11521153 .h. zenil , f. soler - toscano , k. dingle and a. louis , graph automorphisms and topological characterization of complex networks by algorithmic information content , physica a : statistical mechanics and its applications , vol .404 , pp . 341?358 , 2014 .h. zenil , f. soler - toscano , j .- p .delahaye and n. gauvrit , _ two - dimensional kolmogorov complexity and validation of the coding theorem method by compressibility _h. zenil , n.a .kiani and j. tegnr , methods of information theory and algorithmic complexity for network biology , arxiv:1401.3604 , 2014 .network dimensionality methods have been introduced and used in biology for purposes such as analysis and profiling . in generalthe goal of network sparsification is to approximate a given graph by a sparse graph on the same set of vertices .if is close to in some appropriate metric , then can be used as a signature , preserving important properties of for faster computation after reducing the size of and without introducing too much error , thus making computation time and storage of cheaper , as the network is more sparse compared with .obvious trivial sparsification methods include edge deletion by some criteria , such as the outermost ones ( called a -shell , often used to identify the core and the periphery of the networks ) , but most of them ( such as the aforementioned ) are rather arbitrary or ad - hoc , devised for specific purposes , rather than general methods aiming at preserving important , algebraic , topological or dynamical properties of the original graph .several notions of graph sparsification have been proposed .for example , a method motivated by proximity problems in computational geometry was introduced in the form of _ graph spanners _a spanner is a sparse graph in which the shortest - path distance between every pair of vertices is approximately the same in the original graph as in the spanner .for example , a popular sparsification algorithm is the _ spanning tree _ designed to preserve node distance but clearly destroy all other local node properties such as the clustering coefficient . not many non - trivial methods for network dimensionality reduction exist today , and it is acknowledged that spectral graph sparsification is among the most efficient both in preserving important algebraic and dynamical properties of a network and in terms of fast calculation . in partthe lack of more methods is due to a lack of assessment tools using which to decide whether one method is better than another in general terms , rather than whether it preserves one or another specific graph theoretic property ( e.g. , the transitive edge deletion method destroys the clustering coefficient of the original graph ) . 
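as a concrete illustration of the remark above , the following sketch ( not part of the paper's pipeline ; it only assumes the python networkx library ) applies two deliberately naive reductions , a spanning tree and the uniform removal of half the edges , to a small - world graph and reports how the clustering coefficient collapses even though the node set is preserved .

```python
import random
import networkx as nx

def half_edge_sample(g, seed=0):
    """keep a uniformly random half of the edges (a deliberately naive sparsifier)."""
    rng = random.Random(seed)
    kept = rng.sample(list(g.edges()), g.number_of_edges() // 2)
    h = nx.Graph()
    h.add_nodes_from(g)
    h.add_edges_from(kept)
    return h

def report(g):
    """edge count and average clustering for the original graph and two reductions."""
    for name, h in [("original", g),
                    ("spanning tree", nx.minimum_spanning_tree(g)),  # keeps connectivity, drops cycles
                    ("half the edges", half_edge_sample(g))]:
        print(f"{name:15s} edges={h.number_of_edges():5d} "
              f"avg clustering={nx.average_clustering(h):.3f}")

if __name__ == "__main__":
    # a small-world graph starts out with a high clustering coefficient
    report(nx.watts_strogatz_graph(n=500, k=8, p=0.05, seed=1))
```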
among the methods considered in this paperis a high - quality cutting - edge one based on graph spectra .graph spectral sparsification is a technique that has been used in data analysis as a dimensionality reduction method in biology .however , whether most graphs are uniquely determined by their spectrum is an open problem .but because at least some graphs share the same spectrum the process is lossy , because one can not fully recover the original graph from its spectrum , at least in these cases .for example , almost all trees are cospectral ( the share of cospectral trees on vertices tends to 1 as grows ) , where _ almost _ means that the set of elements for which the property does not hold is a set of measure zero . a good introduction to spectral graph sparsificationmay be found in and we use it to illustrate the network dimension reduction assessing tool introduced in this paper .the notion upon which all these methods are based is the spectral similarity of graph laplacians .spectral sparsification requires that the laplacian quadratic form of the sparsifier approximate that of the original graph on all real vector inputs .this is equivalent to saying that the laplacian of the sparsifier is a good preconditioner for the laplacian of the original .another more recent method closer to biology works by looking at the subgraphs ( of very small size ) that make up a graph .the method , introduced in , turns out to be capable of characterizing networks by the families to which they belong ( e.g. , genetic versus social ) and is therefore also a perfect candidate for quantifying the amount and type of information that is preserved and lost when retaining only the network motifs of the original graph .it compares the frequency of small subgraphs with randomized versions of the same network ( i.e. , networks with the same size and the same degree distribution ) . over andunder - represented subgraphs are called network motifs and turn out to characterize a network type .each of these network motifs , defined by a particular pattern of interaction between vertices , may reflect a framework in which particular functions are achieved efficiently .it is generally believed that motifs are of signal importance largely because they may reflect functional properties .the calculation of network motifs may provide a deep insight into a network s function but their calculation is computationally challenging .we are therefore limited to small sizes and hence to considering only local structures .the surprising result is that these local structures contain enough information about a system to characterize it uniquely , at least in the case of graphs with similar topologies and functions .a graph in is considered frequent and therefore denoted as a motif when its frequency is above a predefined threshold or cut - off value .there is an ensemble of random graphs corresponding to the null - model associated with .we should choose random graphs uniformly from and calculate the frequency for a particular frequent sub - graph in . 
if the frequency of in is higher than its arithmetic mean frequency in random graphs , where , we consider this frequent subgraph ` significant ' and hence treat as a network motif of .the -score has been defined by the formula , where and stand for mean and standard deviation frequency in set .the larger the , the more significant is the sub - graph as a motif .the biological studies endeavor to interpret the motifs detected for biological networks .for example , the network motifs found in e. coli were discovered in the transcription networks of other bacteria such as yeast , among others .[ kmotifs ] ) and two universally employed lossless compression algorithms , bzip2 and deflate , the former set to maximum compression ( option flag set at -9 ) and the latter in the default position as implemented in _s compress function version 10 .bzip2 is an open source data compressor that uses a stack of different algorithms superimposed one atop the other , starting with run - length encoding , burrows - wheeler or the huffman coding , among others .we sometimes compare , strengthen or complement findings by also providing estimations of shannon s information entropy on the adjacency matrix .central to information theory is the concept of shannon s information entropy , which quantifies the average number of bits needed to store or communicate a message .shannon s entropy determines that one can not store ( and therefore communicate ) a symbol with different symbols in less than bits . in this sense ,shannon s entropy determines a lower limit below which no message can be further compressed , not even in principle . for an ensemble , where isthe set of possible outcomes ( the random variable ) , and is the probability of an outcome in .the shannon information content or entropy of is then given by for example , a complete graph and a completely disconnected graph would have minimal shannon entropy because the adjacency matrix entries are either all 0 or all 1 ( assuming self - loops ) . 
however , erds - rnyi ( er ) graphs with edge density 0.5 would have maximal shannon entropy because their adjacency matrices have about the same number of 1s and 0s and are therefore statistically ` typical' every bit is equally highly surprising .in other words , the bits of the adjacency matrices of complete and completely disconnected graphs are unsurprising , because getting a 1 after a long list of 1s , or a 0 after a long list of 0s does not add any shannon information .this is , however , very different in algorithmic information theory , where we are interested in whether bits are causally related .for example , the adjacency matrix of a directed complete graph the direction being that the matrix is diagonal , with either all 1s on one side of the diagonal and 0s on the other side ( or the other way around)would have maximal shannon entropy but is clearly not random , and should therefore have a low ( algorithmic ) information content .we therefore used a graph algorithmic complexity measure rather than this statistical combinatorial one .the concept of algorithmic probability ( also known as levin s semi - measure ) yields a method for approximating kolmogorov complexity related to the frequency of patterns in the adjacency matrix of a network , including therefore the number of subgraphs in a network .the algorithmic probability of a subgraph is a measure that describes the probability that a random computer program will produce when run on a 2-dimensional tape universal ( prefix - free ) .for details see . ] ) turing machine .that is , .an example of a popular 2-dimensional tape turing machine is langton s ant , commonly referred to as a _turmite_. the probability semi - measure is related to kolmogorov complexity in that is at least the maximum term in the summation of programs ( ) , given that the shortest program carries the greatest weight in the sum .the algorithmic coding theorem further establishes the connection between and as ( ) : ( eq .1 ) , where is some fixed constant , independent of .the theorem implies that one can estimate the kolmogorov complexity of a graph from its frequency of production by running random programs that simply rewrite eq .( 1 ) as : . in technique was advanced for approximating ( hence ) by means of a function that considers all turing machines of increasing size ( by number of states ) . indeed , for small values of states and colors ( usually 2 colors only ) , is computable for values of the busy beaver problem that are known , providing a means to numerically approximate the kolmogorov complexity of small graphs , such as network motifs .the coding theorem then establishes that graphs produced with lower frequency by random computer programs have higher kolmogorov complexity , and vice versa .the method is called the _ block decomposition method _ ( bdm ) because it consists of decomposing the adjacency matrix of a graph into subgraphs of sizes for which complexity values have been estimated , then reconstructing an approximation of the kolmogorov complexity of the graph by adding the complexity of the individual pieces according to rules of information theory , as follows : where represents the set with elements , obtained when decomposing the adjacency matrix of into all subgraphs contained in of size . 
in each pair , is one such submatrix of the adjacency matrix and its multiplicity ( number of occurrences ) .as is evident from the formula , repeated subgraphs only contribute to the complexity value with the subgraph bdm complexity value once plus a logarithmic term as a function of the number of occurrences .this is because the information content of subgraphs is only sub - additive , as one would expect from the growth of their description lengths ( `` times a subgraph '' ) .applications of and have been explored in , and include applications to graph theory and complex networks and where the technique was first introduced . in fig .[ figure2 ] the motif - analysis software called fanmod was used to calculate the graph motifs ( the over - represented subgraphs ) , and we took the output files with the adjacency matrices in string form .this was done for motifs of size 4 and 5 .the files considered contained the occurring subgraphs in string notation followed by their frequency of occurrence , so in a strict sense these files are already a compressed version as they only contain the different subgraphs but not their repetitions , other than as encoded in their frequencies . in the files only motifs were considered , that is , subgraphs of size 4 and 5 that were either over or under - represented as compared with randomized versions of the same network .more precisely , motifs were calculated with fanmod by using a parameter absolute score larger than 2 , a -value less than 0.05 , a frequency of at least 0.01% , and included in the output file motifs that have been found at least 5 times .the files were therefore further compressed with both bzip2 and deflate in order to capture possible statistical regularities in the type and frequencies of the motifs .then the files were compared to the compressed lengths of the original networks for both compression algorithms .the underlying rationale is that non - random graphs will show an over - representation of certain motifs and an under - representation of others , hence reducing or increasing the number of these objects in the resulting files .indeed , from the output files of fanmod one can reconstruct some of the information of the original network , with the number of subgraphs and their frequency , but some information is lost in the form of the way in which all these subgraphs ( motifs ) may have been connected .motif analysis displays both conservation of information and clustering capabilities by families , as reported in and verified again here with bdm .results are summarized in fig .[ figure2]c .it is worth noting that because bdm looks at local regularities only , it may be biasing or amplifying the results toward network motifs over other network dimensionality reduction approaches .this is not a problem but nonetheless something to be taken into consideration .another interesting phenomenon to investigate is the information loss and preservation when considering all possible induced subgraphs ( graphlets ) in a graph .
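to make the aggregation rule above concrete , the following sketch ( python , assuming numpy ) mimics the block decomposition : the adjacency matrix is cut into small non - overlapping blocks , each distinct block contributes a complexity term once , and repetitions add only a logarithmic term . since the precomputed ctm table of small - block complexities is not reproduced here , a zlib - compression stand - in is used for the per - block term , so the absolute values are purely illustrative ; a shannon - entropy helper is included for the comparison discussed above .

```python
import math
import zlib
from collections import Counter
import numpy as np

def block_term(block):
    """stand-in for the ctm value of a small block: compressed size in bits."""
    return 8.0 * len(zlib.compress(block.astype(np.uint8).tobytes(), 9))

def bdm_like(adj, d=4):
    """decompose adj into non-overlapping d x d blocks and aggregate as in the text:
    each distinct block counted once, repetitions add log2(multiplicity)."""
    n = adj.shape[0] - adj.shape[0] % d   # ignore an incomplete border, for simplicity
    blocks = Counter(adj[i:i + d, j:j + d].astype(np.uint8).tobytes()
                     for i in range(0, n, d) for j in range(0, n, d))
    return sum(block_term(np.frombuffer(b, dtype=np.uint8).reshape(d, d)) + math.log2(m)
               for b, m in blocks.items())

def adjacency_entropy(adj):
    """shannon entropy in bits per entry of the 0/1 adjacency matrix, for comparison."""
    p = float(np.mean(adj != 0))
    return 0.0 if p in (0.0, 1.0) else -p * math.log2(p) - (1 - p) * math.log2(1 - p)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    er = (rng.random((64, 64)) < 0.5).astype(np.uint8)
    er = np.triu(er, 1); er = er + er.T                       # symmetric, no self-loops
    complete = np.ones((64, 64), dtype=np.uint8) - np.eye(64, dtype=np.uint8)
    for name, a in [("erdos-renyi p=0.5", er), ("complete graph", complete)]:
        print(name, round(bdm_like(a), 1), round(adjacency_entropy(a), 3))
```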
|
to cope with the complexity of large networks , a number of dimensionality reduction techniques for graphs have been developed . however , the extent to which information is lost or preserved when these techniques are employed has remained unclear . here we develop a framework , based on algorithmic information theory , to quantify the extent to which information is preserved when network motif analysis , graph spectra and spectral sparsification methods are applied to over twenty different biological and artificial networks . we find that spectral sparsification is highly sensitive to the deletion of a large number of edges , leading to significant inconsistencies , and that graph spectral methods are the most irregular , capturing algebraic information in a condensed fashion but largely losing the information content of the original networks . however , the approach shows that network motif analysis excels at preserving the relative algorithmic information content of a network , hence validating and generalizing the remarkable fact that , despite their inherent combinatorial possibilities , local regularities preserve information to such an extent that essential properties are fully recoverable across different networks , making it possible to determine the family to which they belong ( e.g. genetic vs. social networks ) . our algorithmic information methodology thus provides a rigorous framework enabling a fundamental assessment of and comparison between different data dimensionality reduction methods , thereby facilitating the identification and evaluation of the capabilities of old and new methods . + keywords : dimensionality reduction techniques ; kolmogorov complexity ; network ; graph spectra ; graph motifs ; graph sparsification
|
with the advent of the imaging atmospheric cherenkov technique ( iact ) in late 1980 s , ground - based observations of very high - energy gamma rays came into reality . since the detection of the crab nebula using the iact in 1989 by whipple the number of high energy gamma - ray sources has rapidly grown .today the sources are more than 150 and the number is increasing year by year thanks to the new generation experiments .this first detection at tev energies was followed by the discovery of the tev emission from the first extragalactic source ( mrk 421 ) , showing that acceleration processes are taking part in agns too . with the present generation experiments like h.e.s.s . , veritas and magic new classes of sources as well as about a dozen of unknown new ones were detected at gev - tev energies both galactic and extragalactic .the recent advances in -ray astronomy have shown that the 10 gev 100 tev energy band is crucial to investigate the physics in extreme conditions .some interesting scientific topics are the galactic center , pulsar wind nebulae , pulsars and binary systems , blazars , radio - galaxies , star - forming galaxies . for the interested reader ,a comprehencive review on tev astronomy has been recently published in .ground - based experiments using cherenkov photons produced in air represent a cost - effective way to implement observations in this band . at present ,magic , h.e.s.s . andveritas are the state of the art of such ground - based experiments .they have collecting area , obtained by combining several mirror segments , of the order of 500 - 1000 .+ the cherenkov telescope array ( cta ) represents the future generation of iact , with the goal of increasing sensitivity by a factor of 10 with respect to the present best installations and a total mirror collecting area of the array of the order of .the cta observatory is a project designed by a worldwide consortium that will make use of well demonstrated technologies of present generation cherenkov telescopes as well as new developed solutions .cta will be based on telescopes with different sizes installed over a large area . at its southern sitee.g. 70 small size telescopes ( 4 m primary mirror diameter ) , 20 medium size telescopes ( 12 m ) and 4 large size telescopes ( 23 m ) are envisaged to be implemented in order to cover a broad spectral energy range from a few tens of gev up to more than 100 tev .the mirrors for cherenkov telescopes are in general composed by many reflecting segments to be assembled together in order to mimic the full size mirror .so far , just single reflection telescopes have been used with davis - cotton or parabolic layouts . in both cases the segmentsare in general designed with a spherical geometry and proper radius of curvature .these mirrors are also characterized by a reflectivity performance typically above 80% ( in the energy band ) but , at the same time , require angular resolution of typically a few arc - minutes , i.e. about two orders of magnitude the one of mirrors for optical astronomy . 
despite the quite modest requirement in angular resolution, the distribution of the concentrated light is an important parameter in the performance of such telescopes .in fact , it has a direct impact on the measured energy and flux of gamma rays from the observed sources ; and moreover in the determination of the energy threshold of the instrument .most of the current and future cherenkov telescopes make use of spherical mirrors ; each telescope has hundreds of segments or even thousands in the cta case .in addition , it is common to have different suppliers for the same telescope .production and testing of such mirrors need a full characterization through appropriate facilities with suitable set - up for the testing of the prototypes and to perform the quality control during the production phase in order to cross - calibrate mirrors from different industrial pipelines .+ optical properties , reflecting surfaces and mechanical structure are designed aiming at obtaining the best compromise between costs and performance .cost of the industrial production has to be sufficiently low but it has to guarantee the requirements for cherenkov optics . to address these issues , for instance , the cta observatory is planning to take advantage of calibration facilities .some of those are based on the direct imaging of a light source .there are already calibration facilities based on this method available in tbingen ( germany ) , saclay ( france ) and san antonio de los cobres ( argentina ) .another approach which is now widely being used for mirrors , either cherenkov or not , is based on the deflectometry method .it consists in observing the distortions of a defined pattern after its reflection by the examined surface and from them to reconstructing the surface shape .a facility based on this concept has been developed at erlangen - nrnberg university .a variant of this method has been implemented at the osservatorio astronomico di brera of the italian national institute of astrophysics ( inaf - oab ) to test and characterize the mirrors for the astri sst-2 m telescope proposed for the cta .a similar approach was previously used also for the characterization of mirrors for ring imaging cherenkov counters . in this framework , a new optical facility has been implemented by inaf - oab . it has been designed and developed to test spherical mirrors with long radius of curvature ( several tens of meters ) .the facility is a system working in open - air , so that accurate evaluation of the main parameters can be achieved under different environmental condition .moreover this facility is able to accurately investigate the scattering effect by means of an high sensitivity large format ccd camera .several light sources with different spectral emissions are also available . in principle , this facility can be used either during the prototyping phase or the production phase .however , considering the high number of segments required by cherenkov telescopes the most appropriate use of this facility is to cross - calibrate the characterization pipeline of the suppliers and to perform random checks in the production . in this paperwe present the facility and discuss its measuring capabilities .the facility measures the focused light of the mirrors using a simple optical configuration .since mirrors have a spherical surface profile , a spherical wavefront can be used to generate the focal spot from the radius of curvature .this setup is commonly referred as _2-f method _ ; it is sketched in figure [ 2f ] . 
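a minimal numerical illustration of the 2-f configuration , using the standard conjugate - point relation for a spherical mirror ( detailed in the next paragraph ) , is sketched below ; the symbols p , q and r are my notation , and the distances are only indicative of the 30 - 36 m range of radii quoted for the facility .

```python
def image_distance(p, radius_of_curvature):
    """conjugate-point relation 1/p + 1/q = 1/f, with f = R/2 for a spherical mirror."""
    f = radius_of_curvature / 2.0
    return 1.0 / (1.0 / f - 1.0 / p)

if __name__ == "__main__":
    R = 34_000.0  # mm, within the 30-36 m range of radii quoted for the facility
    for p in (R, R + 500.0, R - 500.0):
        print(f"source at {p:8.1f} mm -> image at {image_distance(p, R):8.1f} mm")
    # with the source exactly at R (= 2f) the image forms back at R itself,
    # which is why the optical baseline must be as long as the radius of curvature
```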
to retrieve the focal length of the mirror under test , the well known formula for the conjugate points can be used , 1/p + 1/q = 1/f , where p is the distance of the object ( e.g. a light source ) from the mirror , q is the distance of its image from the mirror and f is the focal length . assuming spherical mirrors ( i.e. the typical geometry of the mirror segments used by cherenkov telescopes ) , once the light source is positioned at a distance of 2f , then the image can be seen at the same distance , because the incoming rays hit the surface of the mirror perpendicularly and are reflected back along the incoming direction , this distance being the radius of curvature of the mirror under test . the above mentioned optical setup is the simplest one to check the imaging quality of the mirrors , however it requires a long baseline . the only possibility to provide a setup with a shorter length would be to produce parallel light rays which hit the surface and get focused at a distance equal to the focal length from the mirror ( the so - called _ 1-f method _ ) . the problem with the 1-f setup is that one needs a light source emitting parallel rays which illuminate the whole mirror facet ( typically larger than ) , which would be much harder to realize . + the equipment needed to perform the _ 2-f _ test is schematically based on a light source , a detector and a room which shall be large enough to host the baseline . our facility is indeed composed of two stages . the stage # 1 is a mirror support structure mounted on a long travel rail . the mirror support and the rail are motorized in order to allow the alignment of the mirror under test . figure [ renderingoutdoor ] shows a rendering of the design study performed on this part and a photo . the stage # 2 is located in a control room where a compact bench hosting a light source and a detection unit is placed . this system is motorized , thus enabling the possibility to scan the focusing plane . a control - command unit ( i.e. a desktop computer ) , an electrical cabinet and storage space complete the apparatus . + the facility is installed at the merate ( lecco , italy ) site of inaf - oab . it is based on a long baseline to fit mirrors with radii of curvature ranging from 30 meters up to 36 meters . this choice was driven by the fact that most of the current and future cherenkov telescopes ( e.g. h.e.s.s . , magic and cta ) make use of mirrors with similar characteristics . moreover , the stage # 1 is installed outdoors , thus giving the possibility to study also the mirror performance under different thermal conditions , i.e. mimicking the real operative configuration of the mirrors mounted on a real cherenkov telescope . + as previously stated , this method is widely used for the characterization of mirrors for cherenkov telescopes . however , the facility presented in this paper has a few peculiar characteristics that , combined together , make it unique and innovative . these features are : * the entire system has been designed to be user - friendly . in this regard , the manipulator hosting the mirror and the support of the detector are fully robotized . they can be easily automated to run long - time acquisitions without the on - site intervention of the operators . the automation reduces the time needed to fully characterize a mirror to no more than 15 minutes ; * the stage # 1 is installed in open air . cherenkov telescopes typically work over a wide range of ambient temperatures and at 5 - 90% relative humidity . indeed , their mirrors are not influenced by temperature variations of a few degrees such as those experienced during a typical nighttime period .
while the long time variations such as the seasonal ones ( of the order of 30 - 40c ) could in principle change the radius of curvature up to few percent ; hence the focusing performance of the mirrors .the seasonal variation effects are what the facility can investigate ; * the direct imaging on a large format ccd camera mounted on a 2-axis motorized stage .this configuration is a high sensitivity setup that allows to catch diffused photons on a large area and perform a correct evaluation of the encircled energy function of the mirror .this kind of study is of great importance for the evaluation of the large deviations from the ideal focal position due to the scattering from the micro - rough profile of the mirror . in the following subsections we report a detailed technical description of the two stages .the outdoor stage has three main subsystems : a rail , a mirror s support and an electrical cabinet .it has been conceived and designed at the inaf - oab . the engineering , realization and installation activities were performed by the officina opto - meccanica insubrica and automation one companies .+ [ [ the - rail - subsystem ] ] the rail subsystem + + + + + + + + + + + + + + + + + + it is composed of a couple of 6 m long stainless steel linear guides .the drive system uses one brushless motor with ip code 65 and can work in open environment .its rear shaft is equipped with an absolute rotary encoder with endat interface .the shaft of the motor brings the motion to both the linear guides and it is then distributed to the carriage through toothed belts .this solution is able to ensure a positioning of the carriage well below 1 mm on the full travel range of the rail , because the position loop is closed through the reading of the encoder .the nominal position of the carriage with respect to the indoor stage of the facility is recorded by an external laser distance meter , it is suitable for outdoor measurements over large distances .it guarantees a knowledge of the optical baseline within a few mm .the rail subsystem is mounted over an optical bench made of aluminum profiles .figure [ rail - cad ] details the rail subsystem .+ [ [ the - mirror - support ] ] the mirror support + + + + + + + + + + + + + + + + + + the mirror support is shown in figure [ mirrorsupport - cad ] .it is installed over the carriage and is designed to ease the mounting and dismounting operations of the mirror under test as well as to facilitate the alignment of the mirror itself over the long optical baseline of the facility .it can be divided into two parts .one part can be horizontally reclined to execute the loading and unloading of the mirror .when this part is standing vertical , it can be blocked to prevent undesired movements .the holding for mirrors is obtained by means of an adjustable system of aluminum beam profiles and soft clamps .this part can be moved in such a way the mirror tilts with respect to two axes .the drive system is based on linear actuators with re - circulating ball screws .it has wide angular ranges of 5 along the x axis and 10 along y , with a resolution better than 12 arcsec .the mirror s support can be loaded up to 45 kg , different mirror tile s shapes ( e.g. 
squares , hexagons , rounds ) up to 1.5 meters in diameter can be managed .+ [ [ the - outdoor - electrical - cabinet ] ] the outdoor electrical cabinet + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + the electrical cabinet is made of a stainless steel water tight box for external applications .it is equipped with a thermoregulation system composed by heaters , coolers and dryers to keep the electronics within its working conditions .this solution ensures the functionality of the facility within a wide range of environmental conditions .in addition to the thermoregulation system , the cabinet hosts the drivers to pilot the three motors of the motion system , an ethernet switch and a gateway to handle the input / output digital signals .the cabinet receives the power from the main grid of the observatory through a dedicated 400v line .the power is handled by the system to provide 220v and 24v lines that are distributed to all the devices of the outdoor stage whether they are resident in the cabinet or not .a safety stop red push button is available for emergency handling .the cabinet is equipped with a proper interface to connect a keypad to send motion commands to the system .the indoor stage is based on three main subsystems : a light source , a photon detection unit and an electrical cabinet .[ [ the - light - source ] ] the light source + + + + + + + + + + + + + + + + the light source is a compact device able to generate a spherical wavefront of light .five ultra - bright led sources are disposed in a pattern : an rgb led is surrounded by a red ( 626 nm ) , a green ( 525 nm ) , a blu ( 470 nm ) and a warm white leds .any combination of leds can be switched on and off , as needed for the measurement . with respect to laser sources ,the choice of leds has been made as a compromise for their cheapness and safety of use versus the quality of the wavefront generated ( quality in terms of light intensity , spatial distribution and emission angle ) . concerning the quality of wavefront , the half cone emission diagram of the leds used is shown in figure [ led](a ) .considering the typical angular size of the mirrors under test ( i.e. 3or less ) , this diagram shows that the non - uniformity of the light wavefront at the mirror pupil is never exceeding 2% .a filter wheel with logarithmic neutral filters permits to dim the light intensity and illuminate the mirror with a suitable light flux in order not to saturate the detector ( see figure [ led](b ) ) .+ the source is equipped also with a very low power laser beam for alignment purposes .+ [ [ the - detection - unit ] ] the detection unit + + + + + + + + + + + + + + + + + + this unit is composed of a ccd camera and a long - range laser distance meter for outdoor applications .the laser meter gives the absolute measurement of the distance between the detector plane and the mirror .the device is a disto d8 model , with a declared precision better than mm up to 36 m. 
the ccd camera is used to detect the light reflected back from the mirror under test .the model is pl4301e from the proline series characterized by low noise , high sensitivity , high resolution and deep cooling .the sensor mounted is a truesense kaf-4301 from on semiconductor inc .producer .it is a large format ccd with pixels 24 m side for a total diagonal of 70.7 mm .the camera is equipped with a 90 mm shutter to avoid any vignetting on the detector .a filter wheel can be mounted on top either to dim the incoming light or to select a particular wavelength , in case of need .the pl4301e has a thermoelectric cooling system capable to cool down the detector temperature to 50c below the ambient one ( see figure [ ccdcalib](d ) ) .+ the ccd camera has undergone a careful characterization in terms of gain ( named also conversion factor ) , read - out noise ( ron ) , linearity , dark current and charge transfer efficiency ( cte ) .the gain has been evaluated acquiring a series of images with increasing exposure times followed by another series of images with decreasing times . for each imagethe variance is computed and plotted against the median counts of the image itself .the gain corresponds to the angular coefficient of the best fit line , while the ron is obtained by multiplying the gain and the mean value of the bias frame .each image used is given by the mean of two subsequent acquisitions a and b , both corrected for dark and bias signals .this procedure guarantees to check both shot- and long- term variations of the camera .+ linearity and dark current are evaluated by varying the exposure time . respectively , by acquiring a number of bright and dark acquisitions at increased exposure times .then the median counts of the acquired images is computed and plotted against the exposure time .+ the cte has been derived by the cosmic rays impacts detected after a 1800 seconds dark exposure .cosmic rays impact the detector as stochastic events with casual angles and energy but they can be used to diagnose the cte of the detector as suggested by a. riess et al . . + all these parameters depend on the download speed ( i.e. the frame readout frequency ) .we report in figure [ ccdcalib ] the results for the 1 mhz high gain setup that is typically used . in table[ ccdcalibtab ] the full set of calibration results are reported . + the detection unit is completed by a 2-axis stage to move around the ccd camera along the detection plane .the scan covers an area of .the motion has a resolution of 0.1 mm .figure [ detunit ] shows the system assembled .+ + .analytical results of the ccd calibration for different download speeds . [ cols="<,^,^,^,^ " , ] [ ccdcalibtab ] [ [ the - indoor - electrical - cabinet ] ] the indoor electrical cabinet + + + + + + + + + + + + + + + + + + + + + + + + + + + + + this cabinet routes all the i / o commands and communication signals between the computer and the outdoor stage .its main components are the ethernet switch and the gateway .it has an independent 220v power line with respect to the outdoor cabinet .the main power is handled by the system to provide proper voltages to all its internal devices .a safety red push stop button is available for emergency handling .also this cabinet has the interface to connect a keypad to send motion commands to the system . 
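the gain and read - out noise estimate described above can be summarized in a short photon - transfer sketch ; the array names are placeholders rather than the facility's acquisition software , and the read - out noise is taken here from the scatter of the bias frame , which is the conventional photon - transfer choice .

```python
import numpy as np

def photon_transfer(median_counts, variances, bias_frame):
    """return (conversion factor in e-/adu, read-out noise in e-).

    median_counts and variances come from pairs of flat exposures at increasing
    exposure times, already corrected for bias and dark as described above."""
    counts = np.asarray(median_counts, float)
    var = np.asarray(variances, float)
    gain, _offset = np.polyfit(var, counts, 1)            # slope of counts vs variance = e-/adu
    ron = gain * np.std(np.asarray(bias_frame, float))    # bias-frame scatter (adu) times the gain
    return gain, ron

def linearity_residuals(exposure_times, median_counts):
    """per cent departure from the best straight line of counts vs exposure time."""
    t = np.asarray(exposure_times, float)
    c = np.asarray(median_counts, float)
    fit = np.polyval(np.polyfit(t, c, 1), t)
    return 100.0 * (c - fit) / fit
```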
to run the complete facility four independent programs are available : one for the outdoor stage , two for the detection unit and one for the light source .however , not all these computer programs are mandatory to acquire measurements , since the choice depends on the kind of measure the user is interested in .+ concerning the detection unit , one application is devoted to the motion of the axes .it can be programmed to make follow a specific path .the ccd detector has its own software that permits a variety of acquisitions and settings ( e.g. the dark and flat field frames , binning mode , gain etc . ), it commands also the filter wheel .both are commercially available software programs that come with the hardware .+ the light source can be commanded to switch on the different leds with which it is equipped . with the outdoor stage control program commands and setting instructions can be transmitted .it shows three panels , one for each axis of the motion . the user can set the maximum speed of the motion and the position to be reached , either in absolute or relative .the user can also set the motion in jog mode , this configuration being particularly useful during the alignment phase .+ each axis can be independently set with respect to the other ones ; more axes can be moved at the same time .in this section we discuss some typical measurements and calibrations that can be pursued with the facility .all the results are based on the photometric analysis of the data retrieved and on the information of the distance read by the laser distance meter . in analogy with an optical telescope ,the angular resolution quality of a mirror for cherenkov telescope is evaluated from its point spread function ( psf ) .however , the parameter in general used for the cherenkov case is the , i.e. the radius that contains the 80% of the focused light .this parameter is tipically preferred with respect to the more commonly used ( in optical astronomy ) full width half maximum ( fwhm ) .indeed the psf of the mirrors for cherenkov telescopes can hardly be reducible to a gaussian distribution since the shape s errors introduced from the low cost manufacturing processes adopted are dominant with respect to the intrinsic aberration of the theoretical design .the micro - roughness can also play an important role .moreover , since cherenkov observations often deal with very faint signals , the turns to be a better estimator to qualify the mirrors and hence the amount of concentrated light .a standard measurement carried out with the facility is the acquisition of a number of images at various distances from the mirror .after the mounting of the mirror on its support and the alignment of the optical axis with the light source and detector unit ( _ z _ axis in figure [ mirrorsupport - cad ] ) , the procedure foresees the rough localization of the best focus position .all the measurements are then taken with discrete steps around this position .the discrete steps are measured by means of the encoder mounted on the shaft of the _ z _ axis motor .the value of the origin in terms of distance from the mirror is retrieved by averaging a number of reads with the laser distance meter .particular care is taken in settling this point in order to avoid systematic errors .+ each image is then treated as an astronomical image and analyzed following standard aperture photometry procedures ( i.e. use of dark and bias frames , evaluation of the background , etc . ) and using standard software routines ( e.g. 
daophot photometry library , saoimage ds9 , etc . ) .+ + an example of the results is shown in figure [ psfbestfocus ] and figure [ focalparabola ] , respectively : the serie of psf images taken at the various distances from the origin and the results of the images analysis . in particular , from the plot in figure [ focalparabola ] it is possible to estimate two geometrical parameters of the mirror under test : the best focus position ( being also the radius of the best fitting sphere ) and the focal depth .the first parameter is evaluated from the vertex of the parabola that best fits the experimental data , while the latter is due to the sensitivity in estimating the and its relative uncertainty from the experimental psfs data .the errors associated to the and the relative distance are evaluated in two different ways . for the we use the poissonian noise associated both with the psf photometry and the background evaluation .the two values are quadratically summed , even if typically the second one is the dominant contribution .for the relative distance the sensitivity of the disto d8 is taken .+ in the example shown in figure [ focalparabola ] we obtained the best focus position at + 73 mm from the local origin with a focal depth of 100 mm ( over a radius of curvature of mm ) . for the valueswe have computed an error of .more detailed investigation on the shape s errors of the mirrors can be undertaken using the fwhm as estimator . by evaluating the contributions over the two orthogonal axes lying on the focus plane ( _ x _ and _ y _ axes in figure [ mirrorsupport - cad ] )it is possible to disentangle the astigmatism aberration of the mirror . +the procedure to acquire the measures is the same as that described in section [ r80 ] , the analysis is also based on standard aperture photometry , but fwhm is taken as reference instead of . + in figure [ astigatism ] we present , from bottom to top , the plots of the total fwhm , the fwhm along the _ x _ axis and the fwhm along the _ y _ axis as functions of the focal distance ( the radius of curvature ) .experimental data and best fit parabolas are shown .it is possible to appreciate a difference in the best focus positions independently achieved on the _ x _ and _ y _ axes of 60 mm , over a radius of curvature of mm .the diffuse scattering is , in general , due to irregularities of the surface of the mirror at microscopic level that induce coherent large angular deviations from the specular direction , thus generating a broad diffused light component surrounding the core of the psf .if those irregularities have a specific spatial pattern , the scattering can generate structured tails in the psf .the more pronounced the irregularities are , the more diffused the light is , thus covering a wide area on the focus plane and reducing the amount of light falling into the telescope camera detector .the method to detect the scattering is very important to understand the behavior of the mirror in terms of angular resolution . 
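before turning to the wider - field raster scans , the single - frame analysis described above , background - subtracted aperture photometry for the radius containing 80% of the focused light and a parabolic fit of that radius versus distance for the best focus , can be condensed into the following sketch ; the function names are illustrative and only the 24 micron pixel size is taken from the camera description given earlier .

```python
import numpy as np

def r80(image, cx, cy, pixel_size_mm=0.024, frac=0.80):
    """radius (mm) of the circle centred on (cx, cy) enclosing `frac` of the flux.

    assumes a positive, background-subtracted psf image and 24 micron pixels."""
    y, x = np.indices(image.shape)
    r = np.hypot(x - cx, y - cy).ravel()
    order = np.argsort(r)
    growth = np.cumsum(image.ravel()[order])          # encircled-energy curve
    k = np.searchsorted(growth, frac * growth[-1])
    return r[order][k] * pixel_size_mm

def best_focus(distances_mm, r80_values_mm):
    """vertex of the parabola fitted to r80 versus mirror-detector distance."""
    a, b, c = np.polyfit(np.asarray(distances_mm, float),
                         np.asarray(r80_values_mm, float), 2)
    d_best = -b / (2.0 * a)
    return d_best, np.polyval([a, b, c], d_best)      # (best distance, minimum r80)
```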
to cover a wider area around the psf we therefore raster scan the focal planefor each position an image is acquired that is later stitched together with the others to generate a wide single image of the focus plane .different approaches have been suggested to evaluate the integral value of the specular plus scattered components that require an ad hoc setup .the one proposed here makes use of the same equipment as for .moreover , aperture s photometry directly on the ccd is profitably exploited to avoid using any objective or imaging screen that would imply a transfer function .as an example of the application , we show the image of the psf acquired by a single frame at the center of the focal plane ( figure [ scattering ] left panel ) and by a raster scan ( figure [ scattering ] right panel ) . in the second case ,the contrast has been intentionally stretched in order to saturate the bulk of the psf ( in this way the tails due to surface imperfections are clearly visible ) . while these tails do not influence by a reasonable amount the estimation of the best focus position , they have some effects on the total amount of concentrated light . to give the reader a quantitative value we chose a mirror with a pronounced contribution from the scatteringand we compared the obtained from the two images . from the single frame we obtained mm while from the raster scan we got mm .the plot of the encircled energy is shown in figure [ r80mosaic ] .in - focus total reflectivity is among the most important parameters for understanding the performance of a mirror for cherenkov telescopes , indeed one of the most difficult to assess . while the local surface reflectivity is commonly measured sampling the mirror s surface with spectrophotometer devices , their detector s acceptance angle is in general wide enough to collect also an important fraction of the scattered component , mixing it to the specular reflection one .the mirror s surface shape quality is obtained through the use of facilities based on the _ 2-f method _ , as described in this paper . the capability to combine togetherthe afore - mentioned information by means of a single measurement ( now wavelength dependent ) will allow us to get a more reliable evaluation of the expected psf of the entire cherenkov telescope and to estimate the background component due to the optical surface errors .+ such a measurement is possible thanks to the facility presented in this paper as soon as the scattering evaluation method presented in section [ scat ] is coupled with a reliable way to measure the light flux of the source in use .this can be done for instance by using a calibrated photodiode and a semi - reflective folding mirror .a detailed study is ongoing and some preliminary tests have been already carried out .activities to improve the software programs integration are also ongoing. 
this will give an easier and faster measuring experience .an open - air user - friendly facility for the characterization of mirrors for cherenkov telescopes with long radius of curvature is presented .it is devoted to the precise determination of the radius of curvature and the measurement of the on - focus light distribution generated by the mirror under test .the latter in terms of focused and scattered components , normalized to the total incoming light at the detector .+ the facility has a flexible light source able to provide wavefronts at different wavelengths .this capability combined to the large field of view of the camera and the possibility to perform raster scans , makes the facility ideal to pursue calibrations of cherenkov mirrors with direct ccd imaging , with a correct evaluation of the encircled energy function .+ a detailed technical description covering its electro - mechanical , electrical , optical and software components has been presented . some typical measurements made possible through the facilityhave been discussed together with the forthcoming possibility to implement the on - focus total reflectivity evaluation .+ the radius of curvature and the on - focus light distribution measurements can be correlated to the ambient and/or mirror temperature thus opening the possibility to experimentally assess the thermal behavior of the mirror .the facility is run by the inaf personnel of the observatories of brera and padova but the access is open to the entire scientific community who may feel the need of these types of measurements , such as that of the cta observatory or others present and future projects .this work was supported in part by the astri flagship project " financed by the italian ministry of education , university , and research ( miur ) and led by the italian national institute of astrophysics ( inaf ) .we also acknowledge miur for the support through prin 2009 and teche.it 2014 special grants. the authors also thank officina opto - meccanica insubrica and automation one companies for their valuable support .
|
cherenkov telescopes are equipped with optical dishes of large diameter , in general based on segmented mirrors with typical angular resolution of a few arc - minutes . to evaluate the quality of the mirrors , specific metrological systems are required that possibly take into account the environmental conditions in which these telescopes typically operate ( in open air , without dome protection ) . for this purpose a new facility for the characterization of mirrors has been developed at the labs of the osservatorio astronomico di brera of the italian national institute of astrophysics . the facility allows the precise measurement of the radius of curvature and of the distribution of the concentrated light , in terms of focused and scattered components , and it works in open air . in this paper we describe the facility and report some examples of its measuring capabilities . cherenkov telescopes , mirrors , scattering , focusing
|
many social , biological , and technological systems possess a characteristic network structure consisting of communities or modules , which are groups of nodes distinguished by having a high density of links between nodes of the same group and a comparatively low density of links between nodes of different groups .such a network structure is expected to play an important functional role in many systems . in a social network, communities might indicate factions , interest groups , or social divisions ; in biological networks , they encompass entities having the same biological function ; in the world wide web they may correspond to groups of pages dealing with the same or related topics ; in food webs they may identify compartments ; and a community in a metabolic or genetic network might be related to a specific functional task .since community structure constitutes a fundamental feature of many networks , the development of methods and techniques for the detection of communities represents one of the most active research areas in network science . in comparison ,much less work has been done to address a fundamental question : how do communities arise in networks ? .clearly , the emergence of characteristic topological structures , including communities , from a random or featureless network requires some dynamical process that modifies the properties of the links representing the interactions between nodes .we refer to such link dynamics as a _rewiring process_. links can vary their strength , or they can appear and disappear as a consequence of a rewiring process . in our view ,two classes of rewiring processes leading to the formation of structures in networks can be distinguished : ( i ) rewirings based on local connectivity properties regardless of the values of the state variables of the nodes , which we denote as _ topological rewirings _ ; and ( ii ) rewirings that depend on the state variables of the nodes , where the link dynamics is coupled to the node state dynamics and which we call _ adaptive rewirings_. topological rewiring processes have been employed to explain the origin of small - world and scale - free networks . these rewirings can lead to the appearance of community structures in networks with weighted links or by preferential attachment driven by local clustering . on the other hand , there is currently much interest in the study of networks that exhibit a coupling between topology and states , since many systems observed in nature can be described as dynamical networks of interacting nodes where the connections and the states of the nodes affect each other and evolve simultaneously .these systems have been denoted as coevolutionary dynamical systems or adaptive networks and , according to our classification above , they are subject to adaptive rewiring processes . the collective behavior of coevolutionary systems is determined by the competition of the time scales of the node dynamics and the rewiring process .most works that employ coevolutionary dynamics have focused on the characterization of the phenomenon of network fragmentation arising from this competition .although community structures have been found in some coevolutionary systems , investigating the mechanisms for the formation of perdurable communities remains an open problem . 
in this paper we investigate the emergence and the persistence of communities in networks induced by a process of adaptive rewiring . our work is based on a recently proposed general framework for coevolutionary dynamics in networks . we characterize the topological structures forming in a coevolutionary network having a simple node dynamics . we unveil a region of parameters where the formation of a supertransient modular structure on the network occurs . we study the stability of the community configuration under small perturbations of the node dynamics , as well as for different initial conditions of the system . we recall that a rewiring process in a coevolutionary network can be described in terms of two basic actions that can be independent of each other : disconnection and connection between nodes . these actions may correspond to discrete connection - disconnection events , or to a continuous increase or decrease of the strength of the links , as in weighted networks . both actions in an adaptive rewiring process are , in general , based on some mechanism of comparison of the states of the nodes . the disconnection action can be characterized by a parameter that sets the probability of breaking a link depending on the states of the two nodes involved , while the connection action can be characterized by a second parameter that describes the probability that two nodes in identical states become connected , and such that its complement is the probability that two nodes in different states connect to each other . in a social context , these actions allow the description of diverse manifestations of phenomena such as inclusion - exclusion , homophily - heterophily , and tolerance - intolerance . to investigate the formation of topological structures through an adaptive rewiring process , we consider a random network of nodes having average degree . let be the set of neighbors of node , possessing elements . the state variable of node is denoted by . for simplicity , we assume that the node state variable is discrete , that is , it can take any of possible options . the states are initially assigned at random with a uniform distribution . therefore there are , on the average , nodes in each state in the initial random network . we assume that the network is subject to a rewiring process whose actions are characterized by parameters and . for the node dynamics , we employ an imitation rule such as a voter - like model that has been used in several contexts . this model provides a simple dynamics for the node state change without introducing any additional parameter . parameters of the node dynamics can modify the time scale of the change of state of the nodes ; however , those parameters should not produce qualitative changes in the global behavior of the system . then , the coevolution dynamics in this system is given by iterating these three steps : ( 1 ) choose at random a node such that . ( 2 ) apply the rewiring process : select at random a neighbor and a node . if the edge can be disconnected according to the rule of the disconnection action and the nodes and can be connected according to the rule of the connection action , break the edge and create the edge . ( 3 ) apply the node dynamics : choose at random a node such that and set . this rewiring conserves the total number of links in the network . we have verified that the collective behavior of this system is statistically invariant if steps ( 2 ) and ( 3 ) are reversed . the parameters , , and remain constant . we also maintain fixed the ratio .
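a compact sketch of the three - step dynamics described above is given below ( python , assuming networkx ) . the connection rule follows the text , a probability of linking two nodes that depends on whether their states coincide ; the disconnection rule used here , breaking a link with one probability when the two states differ and with the complementary probability when they coincide , is an assumption of this sketch rather than a rule quoted from the model , and is marked as such in the comments .

```python
import random
import networkx as nx

def coevolution_step(g, state, d, r, rng=random):
    """one iteration of the dynamics: a rewiring attempt followed by imitation."""
    i = rng.choice([v for v in g if g.degree(v) > 0])

    # (2) rewiring: a neighbour j as candidate for disconnection and a
    #     non-neighbour l as candidate for connection
    j = rng.choice(list(g[i]))
    candidates = [v for v in g if v != i and not g.has_edge(i, v)]
    if candidates:
        l = rng.choice(candidates)
        p_break = d if state[i] != state[j] else 1.0 - d   # assumed form of the disconnection rule
        p_join = r if state[i] == state[l] else 1.0 - r    # connection rule as stated in the text
        if rng.random() < p_break and rng.random() < p_join:
            g.remove_edge(i, j)
            g.add_edge(i, l)                               # total number of links is conserved

    # (3) node dynamics: voter-like imitation of a randomly chosen neighbour
    neighbours = list(g[i])
    if neighbours:
        state[i] = state[rng.choice(neighbours)]

def run(n=400, kbar=8, n_states=40, d=1.0, r=1.0, steps=100_000, seed=0):
    """random initial graph and uniformly random states, then iterate the dynamics."""
    rng = random.Random(seed)
    g = nx.gnm_random_graph(n, n * kbar // 2, seed=seed)
    state = {v: rng.randrange(n_states) for v in g}
    for _ in range(steps):
        coevolution_step(g, state, d, r, rng)
    return g, state
```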
to study the dynamical behavior of the network topology, we consider the time evolution of several statistical quantities in the system for different values of the parameters and . we characterize the integrity of the network by calculating the normalized (divided by ) average size of the largest component or connected subgraph in the system, regardless of the states of the nodes, at time , denoted by , where a time step consists of iterations of the algorithm. we call a domain a subset of connected nodes that share the same state, and denote by the normalized average size of the largest domain in the system at time . additionally, we calculate the fraction of links that are active in the system at a given time, which we call . a link is active if it connects two nodes in different states. lastly, as a measure of the modular structure of the network, we define the quantity as the modularity change, where is the modularity of the network at time , calculated through a community detection algorithm, and is the value of this quantity for the initial random network. figure [f2] shows the above four quantities as functions of time for a fixed value and different values of . for , fig. [f2](a) reveals that for all times, a value corresponding to a large component whose size is comparable to that of the system. this indicates that the network remains connected during the evolution of the system. the quantity initially increases in time until it reaches a stationary value during a long time interval (four orders of magnitude); there are two connected groups of nodes in different states on average. due to finite-size fluctuations, the system eventually reaches a homogeneous absorbing state, where . however, the sizes of these fluctuations decrease as the size of the system increases, until they decay to zero in the limit; in that situation the homogeneous absorbing state is not reached. on the other hand, the fraction of active links decreases as increases, until it reaches a stationary value during the same interval of time as becomes stationary. since eventually one state survives on a large connected network component, the number of active links goes to zero. this behavior agrees with that observed in refs. . the value of the quantity remains close to zero, indicating that the modularity of the initial random network does not vary in time in this region of parameters. (figure [f2] caption: the four quantities as functions of time for three parameter values, panels a), b), c); system size, average degree, and number of states as given; the gray zone indicates the interval of time for which the modularity change reaches a constant value; all numerical data points are averaged over realizations of initial conditions.)
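the quantities just defined can be computed directly from the adjacency structure and the node states. the sketch below (a simplified illustration; node labels are assumed to be hashable and comparable, e.g. integers) computes the normalized size of the largest component, the normalized size of the largest domain, and the fraction of active links; the modularity change would in addition require a community-detection routine and is not reproduced here.

from collections import deque

def largest_component_fraction(adj, state=None, same_state_only=False):
    """Normalized size of the largest connected subgraph.

    With same_state_only=True a 'component' is a domain: a maximal connected
    set of nodes sharing one state.
    """
    seen, best = set(), 0
    for start in adj:
        if start in seen:
            continue
        seen.add(start)
        size, queue = 0, deque([start])
        while queue:
            u = queue.popleft()
            size += 1
            for v in adj[u]:
                if v in seen:
                    continue
                if same_state_only and state[v] != state[start]:
                    continue
                seen.add(v)
                queue.append(v)
        best = max(best, size)
    return best / len(adj)

def active_link_fraction(adj, state):
    """Fraction of links that join two nodes in different states."""
    active = total = 0
    for u in adj:
        for v in adj[u]:
            if u < v:                      # count each undirected edge once
                total += 1
                active += state[u] != state[v]
    return active / total if total else 0.0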
figure [f2](b) shows that, for , decays rapidly to a value tending to , indicating that the network has been fragmented into various small components. this fragmentation is associated with a rapid decay to zero of the fraction of active links. the rapid drop of limits the process of state change of the nodes and, therefore, the size of the largest domain remains about the value of the average fraction of nodes in a given state that are present in the initial network, i.e. . the fragmentation of the network is also reflected in the behavior of , which grows until a stationary value of maximum modularity associated with the presence of separate domains, according to the employed algorithm. (figure [f3] caption: snapshots of the network at four successive times, panels a)-d), for the given parameter values.) the evolution of the quantity in fig. [f2](c) indicates that the initial network with (visualized in fig. [f3](a)) undergoes a fragmentation process consisting of separated domains where decreases (fig. [f3](b)), and then a recombination process takes place (figs. [f3](c), [f3](d)) until the network becomes a connected graph again, where . a minimum value of separates these two processes occurring during the time evolution of the system. the early fragmentation and recombination processes occurring in the network are also manifested in the behavior of the modularity change, which exhibits a maximum as goes to a minimum. the minimum of also coincides with the decay of to a small value that is maintained for a long interval of time (four orders of magnitude in time, indicated in gray), until eventually drops to zero when the nodes in the reconnected network reach a homogeneous state, corresponding to . the subsistence of a minimum fraction of active links in the network for a long time permits the reattachment of separated domains to form a large connected network during this time interval, characterized by and . since active links connect different domains, the majority of links must lie inside the different domains coexisting on the large connected network. therefore, there exist several domains inside which nodes are highly connected, with fewer connections between different domains. this type of network structure has been called a modular or community structure. the corresponding network is visualized in fig. [f3](d). the emergence of a modular structure in the network is reflected in the quantity , which remains at a constant positive value during this stage. the asymptotic state of the system corresponds to a large random connected network ( ), similar to the initial one ( ), but with its nodes in a homogeneous state ( ) and therefore with no active links left ( ). to investigate the effects of the size of the system on the persistence of communities in the network, we show in fig. [f5] a semilog plot of the average asymptotic time for which ( ), as a function of . we numerically find that scales exponentially with as , with . this behavior is characteristic of supertransient states in dynamical systems. for a finite-size system, the modular structure and the coexistence of various domains on a connected network should eventually give way to one large domain. however, the asymptotic random connected network in a homogeneous state cannot occur in an infinite-size system. thus, for large enough, the decay of the modular structure cannot be observed in practice.
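the reported exponential dependence of the persistence time on the system size can be checked on simulation output by a linear fit of the logarithm of the persistence time against the size; the following minimal sketch uses made-up placeholder numbers in place of measured data.

import numpy as np

# hypothetical (system size, persistence time) measurements
sizes = np.array([100, 200, 300, 400, 500])
times = np.array([2.3e3, 5.1e4, 1.2e6, 2.6e7, 6.0e8])

slope, intercept = np.polyfit(sizes, np.log(times), 1)
print(f"persistence time ~ {np.exp(intercept):.3g} * exp({slope:.3g} * N)")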
(figure [f5] caption: semilog plot of the average time of persistence of the modular structure as a function of the system size, for fixed parameter values; the continuous line is the linear fitting; error bars indicate standard deviations obtained over realizations of initial conditions for each point.) the emergence of a modular structure can be characterized by calculating the value of the modularity change at a fixed time (within the corresponding lapse of existence of communities) as a function of with a fixed value of , as shown in fig. [f4]. there is a critical value below which is zero, reflecting the subsistence of the initial random topology, and above which increases, indicating the appearance of a modular structure in the network. the onset of modularity can be described by the relation , with , typical of a continuous phase transition. figure [f4] also shows the fraction of active links at as a function of . we observe that the modularity transition at coincides with a drop of to small values below a value . since active links are associated with the contact points defining the interface between different domains, a low density of active links constrains the growth of domains, giving rise to the modular structure in the network. (figure [f4] caption: modularity change and fraction of active links (right vertical axis) as functions of one rewiring parameter, with the other fixed, at a time within the interval of subsistence of communities; the continuous thick line is the fit to the above relation; the horizontal dashed line marks the value of the active-link fraction below which modularity emerges; gray indicates the region of parameters where communities appear in the connected network; all numerical data points are averaged over realizations of initial conditions; inset: space of parameters showing in gray the region where communities appear within the boundary curves.) in fig. [f4] we also plot as a function of . there is a critical value above which a fragmentation of the network, characterized by , takes place. the employed modularity measure gives high values for , manifesting the presence of trivial communities or separated graph components. we have verified that algorithm gives a behavior for modularity similar to that shown in fig. [f4]. for , we obtain and , indicating that the network remains connected and preserves its initial random structure. the modular structure appears in the connected network for ; this state is characterized by , , and . the inset in fig. [f4] shows the region on the space of parameters where communities appear. network fragmentation in this space occurs for parameter values below the open-circles boundary line. to shed light on the nature of the transient behavior of the modular structure, we introduce a perturbation in the node dynamics as follows: at each time step (every iterations of the algorithm) there is a probability that a randomly chosen agent changes its state, assuming any of the possible states at random. thus, the parameter represents the intensity of the random noise affecting the node dynamics, with corresponding to the original algorithm. intrinsic random noise in the local states has been employed to simulate the phenomenon of cultural drift in models of social dynamics. in addition, we study the robustness of the communities for different initial conditions of the system: (i) an initial random network and a random distribution of states; (ii) an initial random network and a homogeneous state; and (iii) an initial fragmented network consisting of separated domains, each with nodes.
condition (i) corresponds to the initial condition used in the original algorithm, while initial conditions (ii) and (iii) correspond to the absorbing states in the connected and the fragmented configurations, respectively. (figure [f7] caption: dependence of the modularity change on the intensity of the noise for three different initial conditions of the network structure and states, with fixed parameter values; on each panel, the three symbols correspond to initial conditions (i), (ii) and (iii); panels (a)-(c) show the modularity change versus time for three noise intensities; panel (d) shows the modularity change as a function of the noise intensity at a fixed time.) figures [f7](a)-(c) show versus time with fixed parameters , , for three different values of the intensity of the noise and the three initial conditions described above. figure [f7](a) shows that, in the absence of noise and regardless of the initial conditions, the system reaches the same asymptotic state, with , as in fig. . no transient structures appear for the homogeneous initial condition (ii), as expected; however, a modular structure emerges as a transient state for conditions (i) and (iii). for these conditions, the transient time for the modular structure depends on the system size as in fig. . figure [f7](b) shows that a modular structure, characterized by a nonvanishing value of , can be sustained in time by the presence of a small noise for the different initial conditions, in spite of the finite size of the network. we have verified that for the three cases in both fig. [f7](a) and fig. [f7](b). a larger noise intensity leads to an increase of the value of for the different initial conditions, as shown in fig. [f7](c). for the three cases we obtained , corresponding to a fragmented network. figure [f7](d) shows as a function of at a fixed time, after transients, for the three initial conditions. note that the asymptotic behavior of is independent of the initial conditions. there is an intermediate range of the noise intensity where a modular structure can be maintained in the network. the value of in this region corresponds to the value of this quantity observed in the temporal plateau in fig. . our results show that, for an intermediate range of noise intensity, the modular structure can be sustained in time in a finite-size coevolutionary system. an appropriate level of noise keeps the diversity of states in the system and prevents the disappearance of active links.
as a consequence , the convergence to a homogeneous asymptotic state does not occur .the role of noise in the modular configuration is similar to that of the limit of infinite system size , , where a diversity of states is always present and domains can subsist indefinitely .we have employed a recent description of the process of adaptive rewiring in terms of two actions : connection and disconnection between nodes , both based on some criteria for comparison of the nodes state variables .we have found that , for some values of the parameters and characterizing these actions , a modular structure emerges previous to the settlement of a random network topology .the actions of the rewiring process modify the competition between the time scales of the rewiring and the node dynamics , and therefore they can also control the emergence of communities .the modular behavior separates two network configurations on the space of parameters : a state where the initial random topology stays stationary in time , and a fragmented configuration .we have shown that the modular structure is a supertransient state .the presence of communities has been characterized by several collective properties : the network is connected ( ) ; there are various domains coexisting on the network ( ) ; and the modularity measure increases with respect to that of the initial random network ( ) .the formation of modular structures is related to the number of active links present in the network : communities emerge when the fraction of those links drops to small values .since active links are associated with contact points that define the interphase between different domains in the network , a low density of active links means a restriction to the possibility of growth for domains . as a result ,different domains are connected by few links , leading to the appearance of communities .the appearance of a short - lived modular structure always precedes the fragmentation of the network : fig .[ f2](b ) shows that the quantities , , , and at time reach those values associated to a modular structure .we have verified , by plotting successive snapshots , that the network topology indeed passes through a modular phase before becoming fragmented .thus , communities constitute temporary configurations that are likely to emerge during the evolution of the network topology of coevolutionary systems .community structure has also been observed in the transient dynamics of models of epidemic spreading on adaptive networks .we have found that , for appropriate parameter values of the corresponding adaptive rewiring process , the community structure can become a supertransient state .we have shown that noise in the node dynamics can sustain a diversity of states and the community structure in time in a finite size coevolutionary system .the role of noise on the lifetime of the modular structure state is similar to that of the limit of infinite system size .thus , large system size and/or local noise can explain the persistence of communities and diversity in many real systems .j.c.g - a acknowledges support from cnpq , brazil .m. g. c. is grateful to the senior associates program of the abdus salam international centre for theoretical physics , trieste , italy , for the visiting opportunities . 99 girvan m. , newman m. e. j. , proc .99 * , ( 2002 ) 7821 .fortunato s. , phys .rep . * 486 * , ( 2010 ) 75 .porter m. a. , onnela j. p. , mucha p. j. , not .* 56 * , ( 2009 ) 1082 .lancichinetti a. , kivel m. , saramki j. , fortunato s. 
, plos one * 5*(8 ) , ( 2010 ) e11976 .spirin v. , mirny l. , proc .* 100 * ( 2003 ) 12123 .wilkinson d. m. , huberman b. a. , proc .101 * , ( 2004 ) 5241 .guimer r. , amaral l. a. n. , nature * 433 * , ( 2005 ) 895 .dourisboure y. , geraci f. , pellegrini m. , acm trans . web * 3 * ( 2009 ) 7 .stouffer d. b. , sales - pardo m. , sirer m. i. , bascompte j. , science * 335 * , ( 2012 ) 1489 .thiele i. , palsson b. , nat .protocols * 5 * , ( 2010 ) 93 .newman m. e. j. , proc .sci . u.s.a . * 102 * ( 2006 ) 8577 .danon l. , duch j. , daz - guilera a. , arenas a. , j. stat .* 9 * ( 2005 ) p09008 .fortunato s. , barthelemy m. , proc .104 * , ( 2007 ) 36 .blondel v. d. , guillaume j. l. , lambiotte r. , lefebvre e. , j. stat .* 10 * , ( 2008 ) p10008 .good b. h. , de montjoye y. a. , clauset a. , phys .e * 81 * , ( 2010 ) 046106 .malliaros f. d. , vazirgiannis m. , phys .rep . * 533 * , ( 2013 ) 95 .bassett d. s. , porter m. a. , wymbs n. f. , grafton s. t. , carlson j. m. , mucha p. j. , chaos * 23 * , ( 2013 ) 013142 .palla g. , barabasi a - l . ,vicsek , t. , nature * 446 * , ( 2007 ) ( 664 ) .watts d. j. , strogatz s. h. , nature * 393 * , ( 1998 ) 440 .barabsi a - l . ,albert r. , science * 286 * , ( 1999 ) 509 .kumpula j. m. , onnela j - p ., saramki j. , kaski k. , kertsz j. , phys . rev .lett . * 99 * , ( 2007 ) 228701 .bagrow j. p. , brockmann d. , phys .x * 3 * , ( 2013 ) 021016 .zimmermann m. g. , eguluz v. m. , san miguel m. , spadaro a. , adv .complex syst . * 3 * , ( 2000 ) 283 .zimmermann m. g. , eguluz v. m. , san miguel m. , phys .e * 69 * , ( 2004 ) 065102 .gross t. , blasius b. , j. r. soc .interface * 5 * , ( 2008 ) 259 .bornholdt s. , rohlf t. , phys .. lett . * 84 * , ( 2000 ) 6114 .zanette d. h. , gil s. , physica d * 224 * , ( 2006 ) 156 .gross t. , sayama h. ( editors ) _ adaptive networks : theory , models , and applications _, springer - verlag , heidelberg ( 2009 ) .herrera j. l. , cosenza m. g. , tucci k. , gonzlez - avella j. c. , epl * 95 * , ( 2011 ) 58006 .assenza s. , gutirrez r. , gmez - gardees j. , latora v. , boccaletti s. , sci .rep . * 1 * , ( 2011 ) 99 .avalos - gaytn v. , almendral j. a. , papo p. , schaeffer s. e. , bocaletti s. , phys .rev e * 86 * , ( 2012 ) 015101(r ) .iiguez g. , kertsz j. , kaski k. , barrio r. a. , phys .e * 80 * , ( 2009 ) 066119 .mandr s. , fortunato s. , castellano c. , phys .e * 80 * , ( 2009 ) 056105 .holme p. , newman m. e. j. , phys .e * 74 * , ( 2006 ) 056108 .holley r. , liggett t. m. , ann .probab . * 3 * , ( 1975 ) 643 .castellano c. , fortunato s. , loreto v. , rev .phys . * 81 * , ( 2009 ) 591 .ben - naim e. , frachebourg l. , krapivsky p. l. , phys .e * 53 * , ( 1996 ) 3078 .vazquez f. , gonzlez - avella j. c. , eguluz v. m. , san miguel m. , phys .e * 76 * ( 2007 ) 046120 .vazquez f. , eguluz v. m. , san miguel m. , phys .lett . * 100 * , ( 2008 ) 108702 .bhme g. a. , gross t. , phys .e * 85 * , ( 2012 ) 066117 .kaneko k. , phys .lett a * 149 * , ( 1990 ) 105 .axelrod r. , j. conf .resolution * 41 * , ( 1997 ) 203 .klemm k. , eguluz v. m. , toral r. , san miguel m. , phys .e * 67 * , ( 2003 ) 045101(r ) .yang , h. , tang , m. , zhang , h. , new j. phys .* 14 * , ( 2012 ) 123017 .macarthur r. h. , wilson e. o. , _ the theory of island biogeography _, princeton university press ( 1967 ) .prugh l. r. , hodges k. e. , sinclair a. r. e. , brashares j. s. , proc .* 105 * , ( 2008 ) 20770 .
|
we investigate the emergence and persistence of communities through a recently proposed mechanism of adaptive rewiring in coevolutionary networks . we characterize the topological structures arising in a coevolutionary network subject to an adaptive rewiring process and a node dynamics given by a simple voterlike rule . we find that , for some values of the parameters describing the adaptive rewiring process , a community structure emerges on a connected network . we show that the emergence of communities is associated to a decrease in the number of active links in the system , i.e. links that connect two nodes in different states . the lifetime of the community structure state scales exponentially with the size of the system . additionally , we find that a small noise in the node dynamics can sustain a diversity of states and a community structure in time in a finite size system . thus , large system size and/or local noise can explain the persistence of communities and diversity in many real systems .
|
the study of arguments as abstract entities and their interaction as introduced by dung has become one of the most active research branches within artificial intelligence and reasoning , see , e.g. , .argumentation handles possible conflicts between arguments in form of attacks .the arguments may either originate from a dialogue between several agents or from the pieces of information at the disposal of a single agent , this information may even contain contradictions .a main issue for any argumentation system is the selection of acceptable sets of arguments , where an acceptable set of arguments must be in some sense coherent and be able to defend itself against all attacking arguments .abstract argumentation provides suitable concepts and formalisms to study , represent , and process various reasoning problems most prominently in defeasible reasoning ( see , e.g. , , ) and agent interaction ( see , e.g. , ) . extending dung s concept ,bench - capon introduced _ value - based argumentation _systems that allow to compare arguments with respect to their relative strength such that an argument can not successfully attack another argument that is considered of a higher rank .the ranking is specified by the combination of an assignment of _ values _ to arguments and an ordering of the values ; the latter is called an _ audience _ . as laid out by bench - capon ,the role of arguments in this setting is to persuade rather than to prove , demonstrate or refute .whether an argument can be accepted with respect to _ all possible _ or _ at least one _ audience allows to formalize the notions of _ objective acceptance _ and _ subjective acceptance _, respectively .an important limitation for using value - based argumentation systems in real - world applications is the _ computational intractability _ of the two basic acceptance problems : deciding whether a given argument is subjectively accepted is -hard , deciding whether it is objectively accepted is -hard .therefore it is important to identify classes of value - based systems that are still useful and expressible but allow a polynomial - time tractable acceptance decision .however , no non - trivial tractable classes of value - based systems have been identified so far , except for systems with a tree structure where the degree of nodes and the number of nodes of degree exceeding 2 are bounded .in fact , as pointed out by dunne , the acceptance problems remain intractable for value - based systems whose graphical structures form trees , in strong contrast to the main computational problems for non - value - based argumentation that are linear - time tractable for tree systems , or more generally , for systems of bounded treewidth .[ [ our - contribution ] ] our contribution + + + + + + + + + + + + + + + + in this paper we introduce nontrivial classes of value - based systems for which the acceptance problems are tractable . the classes are defined in terms of the following notions : * the _ value - width _ of a value - based system is the largest number of arguments of the same value . * the _ extended graph structure _ of a value - based system has as nodes the arguments of the value - based system , two arguments are joined by an edge if either one attacks the other or both share the same value . 
* the _ value graph _ of a value-based system has as vertices the values of the system; two values and are joined by a directed edge if some argument of value attacks an argument of value . we show that the acceptance problems are tractable for the following classes of value-based systems: * value-based systems with a bipartite graph structure where at most two arguments share the same value (i.e., systems of value-width 2); * value-based systems whose extended graph structure has bounded treewidth; and * value-based systems of bounded value-width whose value graphs have bounded treewidth. in fact, we show that both acceptance problems are _ linear time tractable _ for the classes (p2) and (p3), the latter being a subclass of the former. our results suggest that the extended graph structure is a suitable structural representation of value-based argumentation systems. the positive results (p1)-(p3) hold for systems with an unbounded number of arguments, attacks and values. we contrast our positive results with negative results that rule out classes conjectured to be tractable. we show that the acceptance problems are (co-)np-hard for the following classes: * value-based systems of value-width 2; * value-based systems where the number of attacks between arguments of the same value is bounded (systems of _ bounded attack-width _); * value-based systems with bipartite value graphs. in fact, we show that both acceptance problems are intractable for value-based systems of value-width 2 and attack-width 1. classes (n1) and (n2) were conjectured to be tractable; the complexity of (n3) was stated as an open problem. the remainder of the paper is organized as follows. in section [sec:pre] we provide basic definitions and preliminaries. in section [sec:valuewidth] we define the parameters value-width and attack-width and establish the results involving systems of value-width 2; we also discuss the relationship between systems of value-width 2 and dialogues. in section [sec:tw] we consider value-based systems with an extended graph structure of bounded treewidth and show linear-time tractability. we close in section [sec:concl] with concluding remarks. some proofs of technical lemmas are given in an appendix. the main results of this paper have been presented in preliminary and shortened form at comma10. here we provide full proofs, examples, and additional discussions. further new additions are the results (p3) and (n3) involving value graphs, and the discussion of the relationship between systems of value-width 2 and dialogues. in this section we introduce the objects of our study more formally. an _ abstract argumentation system _ or _ argumentation framework _ is a pair where is a finite set of elements called _ arguments _ and is a binary relation called the _ attack relation _. if we say that _ attacks _.
an abstract argumentation system can be considered as a directed graph, and therefore it is convenient to borrow notions and notation from the theory of directed graphs. for example, we say that a system is _ acyclic _ if is a dag (a directed acyclic graph). [exa:af] an abstract argumentation system with arguments , , , , , and attacks , , , , , is displayed in figure [fig:af]. (figure [fig:af]: the abstract argumentation system of example [exa:af]; a second picture in the same figure additionally indicates, by three ellipses, the groups of equivalued arguments of the value-based system introduced in example [exa:vaf] below.) next we define commonly used semantics of abstract argumentation systems as introduced by dung. for the discussion of other semantics and variants, see, e.g., baroni and giacomin's survey. let be an abstract argumentation system and . 1 . is _ conflict-free _ in if there is no with . 2 . is _ acceptable _ in if for each and each with there is some with . 3 . is _ admissible _ in if it is conflict-free and acceptable. 4 . is a _ preferred extension _ of if is admissible in and there is no admissible set of that properly contains . for instance, the admissible sets of the abstract argumentation system of example [exa:af] are the sets and , hence is its only preferred extension. let be an abstract argumentation system and . the argument is _ credulously accepted _ in if is contained in some preferred extension of , and is _ skeptically accepted _ in if is contained in all preferred extensions of . in this paper we are especially interested in finding preferred extensions in _ acyclic _ abstract argumentation systems. it is well known that every acyclic system has a unique preferred extension , and that can be found in polynomial time ( coincides with the `` grounded extension ''). in fact, can be found via a simple labeling procedure that repeatedly applies the following two rules to the arguments in until each of them is either labeled in or out: 1 . an argument is labeled in if all arguments that attack are labeled out (in particular, if is not attacked by any argument). 2 . an argument is labeled out if it is attacked by some argument that is labeled in. the unique preferred extension is then the set of all arguments that are labeled in. a _ value-based argumentation framework _ or _ value-based system _ is a tuple where is an argumentation framework, is a set of _ values _, and is a mapping such that the abstract argumentation system is acyclic for all . we call two arguments _ equivalued _ (in ) if .
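the labeling procedure for acyclic systems described above can be implemented by a straightforward fixed-point iteration. the following python sketch (an illustration under the assumption that the framework is given as a collection of arguments and a collection of attack pairs) returns the unique preferred extension, i.e. the set of arguments that end up labeled in.

def preferred_extension_acyclic(arguments, attacks):
    """Unique preferred (grounded) extension of an acyclic framework.

    Repeatedly applies the two labeling rules: an argument is labeled 'in'
    if all its attackers are labeled 'out' (in particular if it has no
    attacker), and 'out' if some attacker is labeled 'in'.  For an acyclic
    attack relation this eventually labels every argument.
    """
    attackers = {a: set() for a in arguments}
    for (x, y) in attacks:
        attackers[y].add(x)

    label = {}
    changed = True
    while changed:
        changed = False
        for a in arguments:
            if a in label:
                continue
            if all(label.get(b) == "out" for b in attackers[a]):
                label[a] = "in"
                changed = True
            elif any(label.get(b) == "in" for b in attackers[a]):
                label[a] = "out"
                changed = True
    return {a for a in arguments if label.get(a) == "in"}

for frameworks containing directed cycles the loop may leave some arguments unlabeled; the routine is only meant for the acyclic systems considered here.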
the requirement for to be acyclic is also known as the _ multivalued cycles assumption _, as it implies that any set of arguments that form a directed cycle in will contain at least two arguments that are not equivalued. an _ audience _ for a value-based system is a partial ordering on the set of values of . an audience is _ specific _ if it is a total ordering on . for an audience we also define in the obvious way, i.e., if and only if and . given a value-based system and an audience for , we define the abstract argumentation system _ induced by from _ as with . note that if is a specific audience, then is an acyclic system and thus, as discussed above, has a unique preferred extension. [exa:vaf] consider the value-based system obtained from the abstract argumentation framework of example [exa:af] by adding the set of values and the mapping with , , . the value-based system is depicted in figure [fig:af] where the three ellipses indicate arguments that share the same value. let be a value-based system. we say that an argument is _ subjectively accepted in _ if there exists a specific audience such that is in the unique preferred extension of . similarly, we say that an argument is _ objectively accepted in _ if is contained in the unique preferred extension of for every specific audience. consider our running example, the value-based system given in example [exa:vaf]. suppose represents the interaction of arguments regarding a city development project, and assume the arguments are related to sustainability issues ( ), the arguments are related to economics ( ), and the arguments are related to traffic issues ( ). now, consider the specific audience that gives highest priority to sustainability, medium priority to economics, and lowest priority to traffic ( ). this audience gives rise to the acyclic abstract argumentation system obtained from by deleting the attack (as , cannot attack with respect to the audience) and deleting the attack (as , cannot attack with respect to the audience). figure [fig:six] exhibits the acyclic abstract argumentation systems induced by the six possible specific audiences.
the unique preferred extension for each of the six systems is indicated by shaded nodes. we conclude that all arguments of are subjectively accepted, and are the arguments that are objectively accepted. (figure [fig:six]: the six acyclic abstract argumentation systems induced by the six specific audiences of the running example; in each system the unique preferred extension is shaded.) we consider the following decision problems.

_ subjective acceptance _
_ instance: _ a value-based system and a query argument .
_ question: _ is subjectively accepted in ?

_ objective acceptance _
_ instance: _ a value-based system and a query argument .
_ question: _ is objectively accepted in ?
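a naive decision procedure for both problems follows directly from the definitions: enumerate all specific audiences, build the induced acyclic system for each, and test membership in its unique preferred extension. the sketch below reuses the labeling routine given earlier; it assumes the standard convention for value-based systems that an attack (x, y) is removed from the induced system exactly when the value of y is strictly preferred to the value of x under the audience.

from itertools import permutations

def induced_attacks(attacks, val, order):
    """Attacks surviving under a specific audience.

    order lists the values from least to most preferred; an attack (x, y) is
    dropped when val(y) is strictly preferred to val(x).
    """
    rank = {v: i for i, v in enumerate(order)}
    return [(x, y) for (x, y) in attacks if rank[val[y]] <= rank[val[x]]]

def subjectively_accepted(arguments, attacks, val, a):
    # preferred_extension_acyclic is the labeling routine sketched earlier
    values = set(val.values())
    return any(a in preferred_extension_acyclic(
                   arguments, induced_attacks(attacks, val, order))
               for order in permutations(values))

def objectively_accepted(arguments, attacks, val, a):
    values = set(val.values())
    return all(a in preferred_extension_acyclic(
                   arguments, induced_attacks(attacks, val, order))
               for order in permutations(values))

each induced system is acyclic by the multivalued cycles assumption, so the labeling routine applies; the factorially many audiences enumerated here are exactly the source of the intractability discussed next.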
as shown by dunne and bench-capon, subjective acceptance is np-complete and objective acceptance is conp-complete. indeed, there are possible specific audiences for a value-based system with values. hence, even if is moderately small, say , checking all induced abstract argumentation systems becomes impractical. dunne studied properties of value-based systems that allow one to reduce the number of audiences to consider. in view of the general intractability of subjective acceptance and objective acceptance, the main decision problems for value-based systems, it is natural to ask which restrictions on shape and structure of value-based systems allow tractability. a natural approach is to impose structural restrictions in terms of certain graphical models associated with value-based systems. we present three graphical models: the _ graph structure _ (an undirected graph on the arguments of the value-based system under consideration, whose edges represent attacks), the _ value graph _ (a directed graph on the values of the value-based system under consideration, whose edges represent attacks), and the _ extended graph structure _ (an undirected graph on the arguments of the value-based system under consideration, whose edges represent attacks and `` equivaluedness ''). the concept of value graphs was recently introduced and studied by dunne. the concept of _ extended graph structures _ is our new contribution. [def:graphs] let be a value-based system. the _ graph structure _ of is the (undirected) graph whose vertices are the arguments of and where two arguments are joined by an edge (in symbols ) if and only if contains the attack or the attack . the _ value graph _ of is the directed graph whose vertices are the values of and where two values are joined by a directed edge from to (in symbols ) if and only if there exist some argument with , some argument with , and . the _ extended graph structure _ of is the (undirected) graph whose vertices are the arguments of and where two arguments are joined by an edge if and only if or . figure [fig:graphs] shows the value-based system of example [exa:vaf] and the three associated graphical models. (figure [fig:graphs]: the value-based system of example [exa:vaf] together with its graph structure, its value graph, and its extended graph structure.)
a value-based system is called _ bipartite _ if its graph structure is a bipartite graph, i.e., if can be partitioned into two conflict-free sets. dunne suggested considering restrictions on the number of arguments that share the same value, and on the number of attacks between equivalued arguments. we state these restrictions in terms of the following notions. let be a value-based system. the _ value-width _ of is the largest number of arguments that share the same value, i.e., . the _ attack-width _ of is the cardinality of the set . for instance, the value-based system of example [exa:vaf] has value-width 2 and attack-width 2. _ value-based systems of value-width 1 _ are not very interesting: every argument in such a value-based system is subjectively accepted ( is accepted with respect to any specific audience where is largest), and objectively accepted if and only if is not attacked by any argument (if attacks then is not accepted with respect to any specific audience where is largest). thus, for value-based systems of value-width 1 the problems subjective and objective acceptance are trivial, and the expressive power of such value-based systems is very limited. on the other hand, _ value-based systems of value-width 3 _ are already too expressive to allow a tractable acceptance decision: dunne showed that the problems subjective and objective acceptance are intractable (np-complete and conp-complete, respectively) for value-based systems of value-width 3, even if their graph structure is a tree. this leaves the intermediate class of _ value-based systems of value-width 2 _ as an interesting candidate for a tractable class. in fact, dunne conjectured that both acceptance problems are polynomial-time decidable for value-based systems of value-width 2. he also conjectured that the problems are polynomial for value-based systems with an attack-width that is bounded by a constant. we disprove both conjectures and show that the problems remain intractable for value-based systems of value-width 2 and (simultaneously) of attack-width 1. on the positive side, we show that under the additional assumption that the value-based system is bipartite (which entails value-based systems whose graph structures are trees) both acceptance problems can be decided in polynomial time for value-based systems of value-width 2. [size2hard] subjective acceptance remains np-hard for value-based systems of value-width 2 and attack-width 1. objective acceptance remains conp-hard for value-based systems of value-width 2 and attack-width 1. [bipeasy] subjective acceptance can be decided in polynomial time for bipartite value-based systems of value-width 2. objective acceptance can be decided in polynomial time for bipartite value-based systems of value-width 2.
in the remainder of this section we will demonstrate the two theorems. the key to the proofs of theorems [size2hard] and [bipeasy] is the notion of a `` certifying path '', which defines a certain path-like substructure within a value-based system. we show that in value-based systems of value-width 2, the problems of subjective and objective acceptance can be expressed in terms of certifying paths. we then show that in general finding a certifying path in a value-based system of value-width 2 is np-hard (3sat can be expressed in terms of certifying paths) but is easy if the system is bipartite. [def:cp] let be a value-based system of value-width . we call an odd-length sequence , , of distinct arguments a _ certifying path for in _ if it satisfies the following conditions: 1 . for every it holds that . 2 . for every there exists a such that attacks . 3 . for every it holds that attacks but does not attack any argument in . 4 . argument attacks but it does not attack any argument in . 5 . if there exists an argument with then either attacks or does not attack any argument in . [certpathsa] let be a value-based system of value-width 2 and . then is subjectively accepted in if and only if there exists a certifying path for in . the rather technical proof of this lemma is given in the appendix. we discuss the intuition behind the concept of certifying paths by means of an example. consider the value-based system of example [exa:vaf]. we want to check whether argument is subjectively accepted, i.e., to identify a specific audience such that is in the unique preferred extension of . since is attacked by and we cannot eliminate this attack ( and are equivalued), we need to defend by attacking . the only possibility for that is to attack by . hence we need to put in our audience. however, since is attacked by the equivalued argument , we need to defend it by attacking by , hence we need to put . since is not attacked by any other argument we can stop. via this process we have produced a certifying path , and we can check that indeed satisfies definition [def:cp]. for the other subjectively accepted arguments of we have the certifying paths , , , and . in order to use the concept of certifying paths for objective acceptance, we need the following definition. let be a value-based system and a value. we denote by the value-based system obtained from by deleting all arguments with value and all attacks involving these arguments. [certpathoa] let be a value-based system of value-width 2 and . then is objectively accepted in if and only if for every argument that attacks it holds that and is _ not _ subjectively accepted in . again, the technical proof is moved to the appendix. in our example, consider the argument . we want to check whether is objectively accepted. since is only attacked by , and since , it remains to check whether is not subjectively accepted in . in fact, contains no certifying path for . hence is objectively accepted in . this subsection is devoted to proving theorem [size2hard]. we devise a polynomial reduction from 3sat. let be a 3cnf formula with clauses and for .
in the following we construct a value-based system of value-width 2 and attack-width 1 such that the query argument is subjectively accepted in if and only if is satisfiable. see figure [fig:hardness] for an example. (figure [fig:hardness]: the value-based system constructed from an example 3cnf formula; equivalued pairs of arguments are indicated by ellipses.) the set contains the following arguments: 1 . a pair of arguments for ; 2 . a pair of arguments for and ; 3 . an argument . the set contains the following attacks: 1 . ; 2 . and for and ; 3 . and for and ; 4 . for ; 5 . for and whenever and are complementary literals. the set contains one value for each pair , and one value for argument , i.e., . consequently, the mapping is defined such that , for , , and . evidently has attack-width 1 and value-width 2, and it is clear that can be constructed from in polynomial time. we establish part (a) of theorem [size2hard] by showing the following claim. [claim:subj] is satisfiable if and only if is subjectively accepted in . first we note that every certifying path for in must have the form , , , , , , , , , , , , , where for every and for every pair there is no attack . hence there exists a certifying path for in if and only if there exists a set of literals that corresponds to a satisfying truth assignment of (i.e., a set of literals that contains a literal of each clause but no pair of complementary literals).
in order to show part ( b ) of theorem [ size2hard ] ,let be the value - based system as constructed above and define to be the value - based system with 1 . , 2 . , 3 . , 4 . and for every .part ( b ) of theorem [ size2hard ] follows from the following claim which follows from claim [ claim : subj ] and lemma [ certpathoa ] .[ claim : obj ] is satisfiable if and only if is not objectively accepted in . by a slight modification of the above reduction we can also show the following , answering a research question recently posed by dunne .the detailed argument is given in the appendix .[ cor : bip ] subjective and objective acceptance remain -hard and -hard , respectively , for value - based systems whose value graphs are bipartite . bench - capon , doutre , and dunne developed a general _ dialogue framework _ that allows to describe the acceptance of arguments in a value - based system in terms of a game , played by two players , the proponent and the opponent .the proponent tries to prove that a certain argument ( or a set of arguments ) is accepted , the opponent tries to circumvent the proof .an argument is subjectively accepted if the proponent has a winning strategy , that is , she is able to prove the acceptance regardless of her opponent s moves . in the following we outline a simplified version of the dialogue framework that applies to value - based systems of value - width 2 .we will see that certifying paths correspond to winning strategies for the proponent .let be a value - based system of value - width .we have two players , the proponent and the opponent , who make moves in turn , at each move asserting a new argument .this produces a sequence of arguments and a set of audiences with .the proponent has the first move , where she asserts the query argument whose subjective acceptance is under consideration . after each move of the proponent, asserting argument , the opponent asserts a new argument which has the same value as but is not attacked by , and attacks some argument asserted by the proponent . if no such argument exists , the proponent has won the game . after each move of the opponent asserting an argument , it is again the proponent s turn to assert a new argument .this argument must attack the opponent s last argument , but must not attack any argument asserted by the proponent .if no such argument exists , the proponent has lost the game . 
because the value-width of is assumed to be , the opponent has at most one choice for each move. therefore, the proponent's winning strategy does not need to consider several possibilities for the opponent's counter move. hence, a winning strategy is not a tree but just a path and can be identified with a sequence that corresponds to a play won by the proponent. it is easy to verify that such a sequence is exactly a certifying path. [exa:game] consider again the value-based system of example [exa:vaf]. the proponent wants to prove that argument is subjectively accepted in and asserts with her first move. now, it is the opponent's turn. he has no other choice but to assert (the only argument different from with the same value as ). now, it is again the proponent's turn. she must assert an argument that attacks but does not attack . argument satisfies this property (it happens that this is the only choice). next, the opponent asserts , and the proponent asserts , and it is again the opponent's turn. the only argument with the same value as is argument , but does not attack any of the arguments in . hence, the proponent wins. the sequence of arguments produced by this play is indeed a certifying path for in . hence is subjectively accepted. in this subsection we prove theorem [bipeasy]. throughout this section, we assume that we are given a bipartite value-based system together with a query argument . furthermore, let and be the subsets of containing all arguments such that the length of a shortest directed path in from to is even and odd, respectively. [parity] let be a certifying path for in . then and . the claim follows easily via induction on by using the properties of a certifying path and the fact that is bipartite. based on the observation of lemma [parity], we construct an auxiliary directed graph as follows. the vertex set of is the set of values of . there is a directed edge from to if and only if there is an argument with and an argument with such that . note that since is bipartite. [path1] if is a certifying path for in , then is a directed path from to in . by the definition of a certifying path, we have and for every it holds that . lemma [parity] implies that for and are contained in for every , and hence for every . lemma [path1] tells us that each certifying path in gives rise to a directed path in . [exa:aux] figure [fig:aux] shows a bipartite value-based system and the associated auxiliary graph . the query argument is . hence and . the query argument is subjectively accepted in as is a certifying path for in . indeed, gives rise to the directed path (i.e.
, ) in , as promised by lemma [path1]. (figure [fig:aux]: a bipartite value-based system with its auxiliary graph and the pruned subgraphs associated with the individual values, as used in example [exa:aux].) it would be desirable if we could find certifying paths by searching for directed paths in . however, not every directed path in gives rise to a certifying path in . to overcome this obstacle, we consider for each value the subgraph of which is obtained as follows: if there is an argument that is not attacked by some equivalued argument, then for every argument that is attacked by we remove the vertex from . figure [fig:aux] shows the graphs for the value-based system of example [exa:aux]. [path2] consider an odd-length sequence of distinct arguments of a bipartite value-based system of value-width . then is a certifying path for in if and only if the following conditions hold: 1 . for . 2 . is a directed path from to in . 3 .
none of the sub - sequences is a directed path from to in for . assume is a certifying path for in . property ( 1 ) follows from condition c1 of a certifying path , property ( 2 ) follows from condition c5 and lemma [ path1 ] . property ( 3 ) follows from conditions c2 and c3 . to see the reverse , assume that satisfies properties ( 1)(3 ) . condition c1 follows from property ( 1 ) . conditions c3 , c4 and c5 follow from property ( 2 ) and the assumption that is bipartite . condition c2 follows from property ( 3 ) . hence is a certifying path for in . indeed , consider the certifying path of example [ exa : aux ] which gives rise to the sequence of values . this sequence is a directed path in , however is not a directed path in , is not a directed path in , and is not a directed path in . lemma [ path2 ] suggests a simple strategy for finding a certifying path for in , if one exists . for each value we search for a directed path from to in . if we find such a path , we check for each subsequence , , whether it is a directed path in . if the answer is no for all , then satisfies the conditions of lemma [ path2 ] . hence the sequence of arguments in whose values form is a certifying path for in . if , however , the answer is yes for some , we take the smallest for which the answer is yes . now the sequence satisfies the conditions of lemma [ path2 ] and so gives rise to a certifying path for in . on the other hand , if there is no value such that contains a directed path from to , then there is no certifying path for in . the pseudocode for this algorithm is given in figure [ fig : algo ] .
algorithm detect certifying path
input : value - based system , query argument
output : a directed path in that corresponds to a certifying path for in , or no if there is no certifying path for in
for all do
  check if contains a directed path from to
  if yes do
    find such a path , ,
    for do
      check if is a directed path in
      if yes , output and terminate
return no and terminate
[ treealgo ] the algorithm detect certifying path correctly returns a certifying path for in if one exists and returns no otherwise in time . the correctness of detect certifying path follows from lemma [ path2 ] . for , building and finding a shortest directed path from to , if one exists , takes linear time in the input size of ( which we estimate by the term ) .
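the search just described can be phrased purely in terms of directed graphs . the python sketch below is only an illustration of that pseudocode : the construction of the per - value pruned graphs , the exact endpoints of the searched path , and all identifier names are assumptions here , since the symbols of the original statement are not reproduced above ; the sketch takes the pruned graphs as precomputed adjacency dictionaries and mirrors the prefix check of lemma [ path2 ] .

```python
from collections import deque

def shortest_directed_path(adj, source, target):
    """BFS in a digraph given as {vertex: set_of_successors}; returns the
    vertex sequence of a shortest source -> target path, or None."""
    parent = {source: None}
    queue = deque([source])
    while queue:
        u = queue.popleft()
        if u == target:
            path = []
            while u is not None:
                path.append(u)
                u = parent[u]
            return path[::-1]
        for w in adj.get(u, ()):
            if w not in parent:
                parent[w] = u
                queue.append(w)
    return None

def is_directed_path(adj, seq):
    """True iff every consecutive pair of seq is an edge of adj."""
    return all(b in adj.get(a, ()) for a, b in zip(seq, seq[1:]))

def detect_certifying_path(pruned, source_value):
    """pruned maps each value v to the adjacency dict of its pruned auxiliary
    digraph; source_value plays the role of the query argument's value."""
    for v, adj in pruned.items():
        path = shortest_directed_path(adj, source_value, v)
        if path is None:
            continue
        # take the smallest proper prefix that is already a directed path in
        # the pruned graph of its own last value (cf. lemma [ path2 ])
        for j in range(1, len(path) - 1):
            prefix = path[: j + 1]
            if is_directed_path(pruned.get(prefix[-1], {}), prefix):
                return prefix
        return path
    return None
```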
as we iterate over all vertices of , and we check for at most subsequences whether it is a directed path in , the claimed running time follows .we are now ready to combine the above results to a proof of theorem [ bipeasy ] .statement ( a ) of the theorem follows from lemma [ certpathsa ] and proposition [ treealgo ] .statement ( b ) follows from statement ( a ) and lemma [ certpathoa ] .as mentioned above , it is known that both acceptance problems remain intractable for value - based systems whose graph structure is a tree .this is perhaps not surprising since two arguments can be considered as linked to each other if they share the same value .in fact , such links may form cycles in an otherwise tree - shaped value - based system .therefore we propose to consider the _ extended graph structure _ of the value - based system ( recall definition [ def : graphs ] in section [ subsection : graphs ] ) that takes such links into account .we show that the problems subjective and objective acceptance are easy for value - based systems whose extended graph structure is a tree , and more generally , the problems can be solved in _ linear - time _ for value - based systems with an extended graph structure of _ bounded treewidth_. treewidth is a popular graph parameter that indicates in a certain sense how similar a graph is to a tree .many otherwise intractable graph problems ( such as 3-colorability ) become tractable for graphs of bounded treewidth .bounded treewidth ( and related concepts like _ induced width _ and _ d - tree width _ ) have been successfully applied in many areas of ai , see , e.g. , . deciding acceptance for argumentation frameworks of bounded treewidth has been investigated by dunne and by dvork , pichler , and woltran . however , for value - based argumentation , the concept of bounded tree - width has not been applied successfully : the basic decision problems for value - based systems remain intractable for value - based systems of value width 3 whose graph structure has treewidth 1 .hardness even prevails for value - based systems whose value graph has pathwidth 2 .these negative results are contrasted by our theorem [ mso ] , which indicates that the extended graph structure seems to be a suitable and adequate graphical model for value - based systems .the treewidth of a graph is defined using the following notion of a tree decomposition ( see , e.g. , ) .a _ tree decomposition _ of an ( undirected ) graph is a pair where is a tree and is a labeling function that assigns each tree node a set of vertices of the graph such that the following conditions hold : 1 .every vertex of occurs in for some tree node .2 . for every edge of is a tree node such that .3 . for every vertex of ,the tree nodes with form a connected subtree of .the _ width _ of a tree decomposition is the size of a largest bag minus among all nodes of .a tree decomposition of smallest width is _optimal_. 
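the three conditions of a tree decomposition can be checked mechanically . the following python sketch verifies them for a graph and a candidate decomposition ; it assumes the decomposition tree is supplied as an edge list that indeed forms a tree , and all names are illustrative .

```python
def is_tree_decomposition(graph_edges, tree_edges, bags):
    """Check the three tree-decomposition conditions.
    graph_edges : iterable of (u, v) pairs of the (undirected) graph
    tree_edges  : iterable of (s, t) pairs of the decomposition tree
    bags        : dict mapping each tree node to a set of graph vertices"""
    vertices = {x for e in graph_edges for x in e}
    # (1) every vertex of the graph occurs in some bag
    if not all(any(v in bag for bag in bags.values()) for v in vertices):
        return False
    # (2) every edge of the graph is contained in some bag
    if not all(any(u in bag and v in bag for bag in bags.values())
               for u, v in graph_edges):
        return False
    # (3) for every vertex, the tree nodes whose bags contain it are connected
    adj = {n: set() for n in bags}
    for s, t in tree_edges:
        adj[s].add(t)
        adj[t].add(s)
    for v in vertices:
        nodes = {n for n, bag in bags.items() if v in bag}
        start = next(iter(nodes))
        seen, stack = {start}, [start]
        while stack:
            n = stack.pop()
            for m in adj[n]:
                if m in nodes and m not in seen:
                    seen.add(m)
                    stack.append(m)
        if seen != nodes:
            return False
    return True

def width(bags):
    """Width of a decomposition: size of a largest bag minus one."""
    return max(len(b) for b in bags.values()) - 1
```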
the _ treewidth _ of a graph is the width of an optimal tree decomposition of . figure [ fig : treedecomp ] exhibits a graph ( the extended graph structure of the value - based system of example [ exa : vaf ] ) and a tree decomposition of it . the width of the tree decomposition is , and it is not difficult to see that this is optimal . hence the treewidth of the graph in the figure is 2 . [ figure [ fig : treedecomp ] : the extended graph structure of the value - based system of example [ exa : vaf ] , on six vertices , together with an optimal tree decomposition with five bags and width 2 ; the tikz drawing code is omitted here . ] we are going to establish the following result . [ mso ] the problems subjective and objective acceptance can be decided in linear time for value - based systems whose extended graph structure has bounded treewidth . to achieve tractability we have to pay a price in generality : the mentioned hardness results of imply that if subjective acceptance is fixed - parameter tractable for any parameter , then , unless , parameter can not be bounded by a function of any of the following three parameters : the treewidth of the graph structure , the treewidth of the value graph , and the value - width . this even holds if the bounding function is exponential . indeed , the treewidth of the extended graph structure can be arbitrarily large for value - based systems where one of these three parameters is bounded by a constant . the remainder of this section is devoted to a proof of theorem [ mso ] . we shall take a logic approach and use the celebrated result of courcelle , which states that all properties that can be expressed in a certain formalism ( monadic second - order logic , mso ) can be checked in linear time for graphs ( or more generally , for finite structures ) of bounded treewidth . courcelle s theorem is constructive in the sense that it not only promises the existence of an algorithm for the particular problem under consideration , but it provides the means for actually producing such an algorithm . the algorithm produced in this general and generic way leaves much room for improvement and provides the basis for the development of problem - specific and more practical algorithms . in the following we use courcelle s result as laid out by flum and grohe . let denote a finite relational structure and a sentence in monadic second - order logic ( mso logic ) on . that is , may contain quantification over atoms ( elements of the universe ) and over sets of atoms . furthermore , we associate with the structure its _ gaifman graph _ , whose vertices are the atoms of , and where two distinct vertices are joined by an edge if and only if they occur together in some tuple of a relation of . we define the _ treewidth of structure _ as the treewidth of its gaifman graph . now courcelle s theorem states that for a fixed mso sentence and a fixed integer , one can check in linear time whether holds for a given relational structure of treewidth at most . the proof of theorem [ mso ] boils down to the following two tasks : _ task a. _ to represent a value - based system and a query argument by a relational structure ] . _ task b.
_ to construct formulas and in mso logic such that for every value - based system and every argument of it holds that is true for ] if and only if is objectively accepted in . for many problems it is rather straightforward to find an mso formulation so that courcelle s theorem can be applied . in our case , however , we have to face the difficulty that we have to express that `` a certain property holds for some total ordering '' ( subjective acceptance ) and `` a certain property holds for all total orderings '' ( objective acceptance ) , which can not be directly expressed in mso . our solution to this problem lies in the introduction of an auxiliary directed graph , the _ reference graph _ , which will allow us to quantify over total orderings of . the relational structure ] be the directed graph obtained from the reference graph by reversing all edges in , i.e. , $:= \{ ( u , v ) \mid ( u , v ) \in e_r \setminus q \} \cup \{ ( v , u ) \mid ( u , v ) \in e_r \cap q \}$ ] as the system obtained from with $:= \{ ( u , v ) \in a \mid ( \eta(u ) , \eta(v ) ) \notin e_r[q ] \}$ ] is acyclic , and conversely , every set such that ] is acyclic and is in the unique preferred extension of ] is acyclic it holds that is in the unique preferred extension of ] is acyclic '' and `` a certain property holds for all subsets of which ] that represents the value - based system together with the reference graph . the universe of ] has one unary relation and four binary relations , , , and that are defined as follows : 1 . if and only if ( used to `` mark '' the query argument ) . 2 . if and only if ( used to represent the `` tail relation '' of ) 3 . if and only if ( used to represent the `` head relation '' of ) 4 . if and only if ( used to represent the attack relation ) . 5 . if and only if ( used to represent the mapping ) . consequently , the gaifman graph of ] with and , see figure [ fig : geifmann ] for an illustration . [ figure [ fig : geifmann ] : a value - based system with three value classes ( left ) and the gaifman graph of the relational structure representing it together with its reference graph ( right ) ; the tikz drawing code is omitted here . ] [ lem : tw ] the treewidth of ] is at most twice the treewidth of the extended graph structure of plus . the easy proof is given in the appendix .
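for concreteness , the gaifman graph underlying this treewidth bound can be built directly from the relations of the structure . the sketch below is illustrative only : the encoding of the reference graph ( its edges appearing as atoms with tail and head relations ) and all relation and variable names are assumptions , since the formal definitions above lost their symbols in extraction .

```python
import itertools

def gaifman_graph(universe, relations):
    """Vertices are the atoms of the structure; two distinct atoms are joined
    iff they occur together in some tuple of some relation."""
    edges = set()
    for rel in relations:
        for tup in rel:
            for a, b in itertools.combinations(sorted(set(tup), key=str), 2):
                edges.add((a, b))
    return set(universe), edges

def structure_relations(query, ref_edges, attacks, value_of):
    """Hypothetical encoding: q marks the query argument, t and h give the tail
    and head of each reference-graph edge (edges are themselves atoms), a is the
    attack relation, and v relates every argument to its value."""
    q = {(query,)}
    t = {(e, u) for e, (u, _) in ref_edges.items()}
    h = {(e, w) for e, (_, w) in ref_edges.items()}
    a = set(attacks)
    v = set(value_of.items())
    return [q, t, h, a, v]

# toy usage with made-up names
ref_edges = {"e1": ("v1", "v2"), "e2": ("v2", "v3")}
attacks = {("x1", "x2"), ("x2", "x3")}
value_of = {"x1": "v1", "x2": "v2", "x3": "v3"}
universe = set(value_of) | set(value_of.values()) | set(ref_edges)
vertices, edges = gaifman_graph(universe,
                                structure_relations("x1", ref_edges, attacks, value_of))
```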
in order to define and we introduce the following auxiliary formulas : a formula that holds if and only if is the tail and is the head of : a formula that holds if and only if the directed edge is contained in ] is acyclic .we use the well - known fact that a directed graph contains a directed cycle if and only if there is a nonempty set of vertices each having an out - neighbor in ( see , e.g. , ) . )\ ] ] a formula that holds if and only if attacks in ] : \ ] ] now the formula can be defined as follows : \ ] ] it follows from lemma [ lem : q ] that is true for ] if and only if is objectively accepted in .we summarize the above construction in the next lemma .[ lem : formula ] there exists an mso sentence such that is true for ] if and only if is objectively accepted in . in view of lemmas [lem : tw ] and [ lem : formula ] , theorem [ mso ] now follows by courcelle s theorem . if both the treewidth of the value graph and the value - width of an value - based system are bounded , then also the extended graph structure has bounded treewidth , hence we have the following corollary . the problems subjective and objective acceptance can be decided in linear time for value - based systems for which both value - width and the treewidth of their value graphs are bounded .let and be constants .let be a tree decomposition of the value graph of a value - based system of width and assume the value - width of is .then with is a tree decomposition of the extended graph structure of .since holds for all nodes of , it follows that the width of is bounded by the constant .we conclude , in view of theorem [ mso ] , that we can decide both acceptance problems for in linear time .we have studied the computational complexity of persuasive argumentation for value - based argumentation frameworks under structural restrictions .we have established the intractability of deciding subjective or objective acceptance for value - based systems with value - width 2 and attack - width 1 , disproving conjectures stated by dunne .it might be interesting to note that our reductions show that intractability even holds if the attack relation of the value - based system under consideration forms a directed acyclic graph .on the positive side we have shown that value - based systems with value - width 2 whose graph structure is bipartite are solvable in polynomial time .these results establish a sharp boundary between tractability and intractability of persuasive argumentation for value - based systems with value - width 2 .furthermore we have introduced the notion of the _ extended graph structure _ of a value - based system and have shown that subjective and objective acceptance can be decided in linear - time if the treewidth of the extended graph structure is bounded ( that is , the problems are _ fixed - parameter tractable _ when parameterized by the treewidth of the extended graph structure ) .this is in strong contrast to the intractability of the problems for value - based systems where the treewidth of the graph structure or the treewidth of their value graph is bounded .therefore we conclude that the extended graph structure seems to be an appropriate graphical model for studying the computational complexity of persuasive argumentation. it might be interesting for future work to extend this study to other graph - theoretic properties or parameters of the extended graph structure .10 jrgen bang - jensen and gregory gutin . 
.springer monographs in mathematics .springer verlag , london , 2001 .pietro baroni and massimiliano giacomin .semantics of abstract argument systems . in iyad rahwan and guillermosimari , editors , _ argumentation in artificial intelligence _ , pages 2544 .springer verlag , 2009 .t. j. m. bench - capon and paul e. dunne .argumentation in artificial intelligence .171(10 - 15):619641 , 2007 . trevor j. m. bench - capon .persuasion in practical argument using value - based argumentation frameworks .13(3):429448 , 2003 . trevor j. m. bench - capon , sylvie doutre , and paul e. dunne .audiences in argumentation frameworks . , 171(1):4271 , 2007 .philippe besnard and anthony hunter . .the mit press , 2008 . h. l. bodlaender . a tourist guide through treewidth ., 11:121 , 1993 .a. bondarenko , p. m. dung , r. a. kowalski , and f. toni .an abstract , argumentation - theoretic approach to default reasoning ., 93(1 - 2):63101 , 1997 .bruno courcelle .recognizability and second - order definability for sets of finite graphs .technical report i-8634 , universit de bordeaux , 1987 .adnan darwiche .recursive conditioning ., 126(1 - 2):541 , 2001 .rina dechter .bucket elimination : a unifying framework for reasoning ., 113(1 - 2):4185 , 1999 . phan minh dung . on the acceptability of arguments and its fundamental role in nonmonotonic reasoning , logic programming and -person games, 77(2):321357 , 1995 .paul e. dunne .computational properties of argument systems satisfying graph - theoretic constraints . , 171(10 - 15):701729 , 2007 .paul e. dunne .tractability in value - based argumentation . in pietro baroni , federico cerutti , massimiliano giacomin , and guillermo r. simari , editors , _ computational models of argumentation , proceedings of comma 2010 _ , volume 216 of _ frontiers in artificial intelligence and applications _ , pages 195206 .ios , 2010 .paul e. dunne and trevor j. m. bench - capon .complexity in value - based argument systems . in josjlio alferes and joo alexandre leite , editors , _ logics in artificial intelligence , 9th european conference , jelia 2004 , lisbon , portugal , september 27 - 30 , 2004 , proceedings _ ,volume 3229 of _ lecture notes in computer science _ , pages 360371 .springer verlag , 2004 .wolfgang dvork , reinhard pichler , and stefan woltran . towards fixed - parameter tractable algorithms for argumentation . in fangzhenlin , ulrike sattler , and miroslaw truszczynski , editors , _ principles of knowledge representation and reasoning : proceedings of the twelfth international conference , kr 2010 , toronto , ontario , canada , may 9 - 13 , 2010_. aaai press , 2010 .jrg flum and martin grohe ., volume xiv of _ texts in theoretical computer science .an eatcs series_. springer verlag , berlin , 2006 .eugene c. freuder .a sufficient condition for backtrack - bounded search ., 32(4):755761 , 1985 .michael r. garey and david r. johnson . .w. h. freeman and company , new york , san francisco , 1979 .georg gottlob , reinhard pichler , and fang wei . bounded treewidth as a key to tractability of knowledge representation and reasoning . in _21st national conference on artificial intelligence and the 18th innovative applications of artificial intelligence conference_. aaai press , 2006 .eun jung kim , sebastian ordyniak , and stefan szeider .algorithms and complexity results for persuasive argumentation . in pietro baroni , federico cerutti , massimiliano giacomin , and guillermo r. 
simari , editors , _ computational models of argumentation , proceedings of comma 2010 _ , volume 216 of _ frontiers in artificial intelligence and applications _ , pages 311322 .ios , 2010 .simon parsons , michael wooldridge , and leila amgoud . properties and complexity of some formal inter - agent dialogues . , 13(3):347376 , 2003 .john l. pollock . how to reason defeasibly ., 57(1):142 , 1992 .iyad rahwan and guillermo r. simari , editors . .springer verlag , 2009 .let be a certifying path for in . take a specific audience such that and all other values in are smaller than .we claim that the unique preferred extension of includes and excludes , which means that is subjectively accepted in .it follows from c5 that is not attacked by any other argument in and hence ( see also section [ sec : pre ] for a description of an algorithm to find the unique preferred extension of an acyclic abstract argumentation system ) . from c4it follows that .furthermore , if there exists an argument , then either or does not attack an argument in . in the first case and does not influence the membership in for any other arguments in . in the second case but it does not attack any argument in . in both casesit follows that . using c3it follows that and since we already know that it follows that .a repeated application of the above arguments establishes the claim , and hence follows .conversely , suppose that there exists a specific audience such that is contained in the unique preferred extension of .we will now construct a certifying path for in .clearly , if there is no with and , then is a certifying path for in .hence , it remains to consider the case where such a exists .since it follows that . the sequence clearly satisfies properties c1c3 .we now show that we can always extend such a sequence until we have found a certifying path for in .hence , let be such a sequence satisfying conditions c1c3 , and in addition assume satisfies the following two conditions : clearly , the sequence satisfies s1 and s2 , hence we can include these conditions in our induction hypothesis .it remains to show how to extend to a certifying path .let .then because by condition s2 and the assumption that is a preferred extension .we choose arbitrarily .note that satisfies the condition c4 ; ( as ) and for ( as and is conflict - free ) .since we assume that is not a certifying path , must violate c5 .it follows that there exists some argument with such that and for some .we conclude that satisfies conditions c1c3 and s1s2 .hence , we are indeed able to extend and will eventually obtain a certifying path for in .assume that is objectively accepted in .suppose there is a that attacks and .if we take a specific audience where is the greatest element , then is not in the unique preferred extension of , a contradiction to the assumption that is objectively accepted .hence for all arguments that attack .next suppose there is an argument that attacks and is subjectively accepted in .let be a specific audience such that is in the unique preferred extension of .we extend to a total ordering of ensuring .clearly is not in the unique preferred extension of , again a contradiction .hence indeed for all that attack we have and is not subjectively accepted in we establish the reverse direction by proving its counter positive .assume that is not objectively accepted in .we show that there exists some that attacks and where either or is subjectively accepted in .let be a specific audience of such that is not in the unique preferred extension 
of .in view of the labeling procedure for finding as sketched in section [ sec : pre ] , it follows that there exists some that attacks with . if then we are done . on the other hand ,if , then is in the unique preferred extension of , and so is subjectively accepted in .we slightly modify the reduction from 3sat as given in section [ subsection : hard ] .let be the clauses of the 3cnf formula .it is well - known that 3sat remains -hard for formulas where each clause is either positive ( all three literals are unnegated variables ) or negative ( all three literals are negated variables ) , see .hence we may assume that for some , are positive clauses and are negative clauses .let and be the two value - based systems corresponding to as constructed in section [ subsection : hard ] .we obtain from the value - based system by adding a new pair of arguments with a new value and inserting the pair between the pairs and the pair .that is , for we replace the attacks and with the attacks and , and we add the attacks , . by the same modification we obtain from the value - based system .clearly claims [ claim : subj ] and [ claim : obj ] still hold for the modified value - based systems , i.e. , is satisfiable if and only if is subjectively accepted in , and is satisfiable if and only if is not objectively accepted in .we partition the set of arguments into two sets and . contains the values for , the value , and the values for . contains the values for , the values for , and the value . for , contains also the value .it is easy to check that there is no attack with or , hence and have bipartite value graphs .let be the graph obtained from )=(v_{s[f , x_1]},e_{s[f , x_1]}) ] .conversely one can obtain )$ ] from by subdividing all edges of the form for and with a vertex .however , subdividing edges does not change the treewidth of a graph , hence it suffices to show that the treewidth of is at most twice the treewidth of the extended graph structure of plus 1 .let be a tree decomposition of the extended graph structure of .we observe that where is a tree decomposition of where for all nodes of ; hence the width of is at most twice the width of plus 1 .
|
the study of arguments as abstract entities and their interaction as introduced by dung ( _ artificial intelligence _ 177 , 1995 ) has become one of the most active research branches within artificial intelligence and reasoning . a main issue for abstract argumentation systems is the selection of acceptable sets of arguments . value - based argumentation , as introduced by bench - capon ( _ j . logic comput . _ 13 , 2003 ) , extends dung s framework . it takes into account the relative strength of arguments with respect to some ranking representing an audience : an argument is subjectively accepted if it is accepted with respect to some audience , it is objectively accepted if it is accepted with respect to all audiences . deciding whether an argument is subjectively or objectively accepted , respectively , are computationally intractable problems . in fact , the problems remain intractable under structural restrictions that render the main computational problems for non - value - based argumentation systems tractable . in this paper we identify nontrivial classes of value - based argumentation systems for which the acceptance problems are polynomial - time tractable . the classes are defined by means of structural restrictions in terms of the underlying graphical structure of the value - based system . furthermore we show that the acceptance problems are intractable for two classes of value - based systems that where conjectured to be tractable by dunne ( _ artificial intelligence _ 171 , 2007 ) .
|
the hawkes point process was first introduced in as a model of chain - reaction - like phenomena , in which the occurrence of an event increases the likelihood of more such events happening in the future .this intrinsic self - exciting property has made hawkes processes appealing to a wide variety of researchers dealing with data exhibiting strong temporal clustering . although it was originally used to model the dynamics of aftershocks that accompany strong earthquakes , it has since found application to many other problems , including accretion disc formation , gene interactions , social dynamics , insurance risk , corporate default clustering , market impact , high - frequency financial data , micro - structure noise , crime , generic properties of high - dimensional inverse problems in statistical mechanics , and dynamics of neural networks .there is also recent theoretical work exploring generic mathematical properties of the process in its own respect .as the areas of application of hawkes models continue to grow , it becomes increasingly important to understand the probabilistic behavior of the process .unfortunately , despite its ubiquity , the mathematical properties of the hawkes process are still not fully known .in fact , the same dynamical characteristics that make it such a useful model in practice are the ones that complicate formal analysis .hawkes processes do not ( except in some special cases , see ) possess the markov property , making it impossible to study them using standard techniques .recently , quite a few methods have been devised to circumvent this problem ; there are now many well known results describing hawkes process stability , long - term behavior and large deviation properties . yet , since the early works of hawkes himself on the covariance density and bartlett spectrum of self - exciting processes , few have tried to further elucidate their statistical properties . in his work , adamopoulos , for example , attempts to derive the probability generating functional of the hawkes process , but manages only to represent it implicitly , as a solution of an intractable functional equation .errais et al . , using the elegant theory of affine jump processes , show that the moments of hawkes processes can be computed by solving a system of non - linear odes . once again , however , explicit formulas turn out to be unobtainable by analytic means .lastly , saichev and sornette , using the alternative poisson cluster representation of self - exciting processes , show that the moment generating function of the hawkes process satisfies a transcendental equation which does not admit an explicit solution. statistical behavior of , for example , hawkes process moments and cumulants is of some importance in neuroscience , where the problem of quantifying levels of synchronization of action potentials has become very pertinent .it has been shown that nerve cells can be extremely sensitive to synchronous input from large groups of neurons .more precisely , a neuron s firing rate profile depends , to a large degree , on higher order correlations amongst the presynaptic spikes .of course , which synchronous patterns are favored by the network is also determined by its connection structure . while the contribution of specific structural motifs to the emergence of pairwise correlations ( i.e. , two - spike patterns ) has already been dissected , no such result exists in the case of more complex patterns , stemming from correlations of higher order . 
in this paper, we derive analytic formulas for the order cumulant densities of a linear , self - exciting hawkes process with arbitrary interaction kernels , generalizing the result in .inspired by the approach of saichev et al ., we do this by utilizing the poisson cluster process representation , which simplifies calculations considerably .furthermore , we show that the cumulant densities admit a natural and intuitive graphical representation in terms of the branching structure of the underlying process and describe an algorithm that facilitates practical computation .finally , we generalize the result in by showing that the integrated cumulant densities can be expressed in terms of formal sums of topological motifs of a graph , induced by specifying the physical interactions between different types of point events .consider a sequence of positive , random variables , representing times of random occurrences of a certain event . alternatively , can be also thought of as as collection of random points on the positive half - line . by superposing all event times in the sequence ,we obtain the point process , formally defined by setting where denotes the dirac delta function , centered at the random point .it is easy to see that the number of events occurring before time is given by the conditional probability , given the past activity , of a new event occurring in the interval is given by the conditional rate function .more specifically , we have , up to first order where represents the history of the point process up to time .additionally , we assume that i.e. that the probability of two or more events arriving simultaneously is negligibly small .intuitively , therefore , the conditional rate function represents the probability of a new event occurring in the infinitesimally near future , given the information about all events in the past .furthermore , from our previous considerations it also follows that is ( up to first order ) a bernoulli random variable and therefore , as was pointed out in , the hawkes process can be defined in two equivalent ways : either by specifying its conditional rate function or as a _poisson cluster process _ , generated by a certain branching structure .following and , let us consider a -dimensional point process , with rate function defined by where denotes the -dimensional base rate vector with positive entries ( ) and is an matrix of non - negative , integrable functions , with support on , called the interaction kernel . in principle , therefore , the rate should always remain positive , but models for which the probability of negative values is sufficiently small may be useful approximations . rewriting equation ( [ rate ] ) in terms of the components of the conditional rate function , we find that , , from equations ( [ rate_definition ] ) and ( [ rate_comp ] ) , we can now see that i.e. that the probability of an event of type occurring at time is simply the sum of a constant base rate and a convolution of the complete history of the process with the interaction kernel , whose component describes the increase of the likelihood of type events at , caused by a type event , occurring at .note that , in the special case of no interactions ( ) , we recover the definition of a multivariate poisson process with constant rate . 
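the conditional rate in equation ( [ rate_comp ] ) can be evaluated , and the process simulated , directly from this definition . the sketch below uses ogata - style thinning , a standard simulation method that is not part of this paper's derivations ; the exponential kernels , the parameter values and all names are illustrative assumptions , and the thinning bound relies on the kernels being non - increasing .

```python
import numpy as np

def conditional_rate(t, mu, kernels, history):
    """lambda_i(t) = mu_i + sum_j sum_{s<t} g_ij(t - s) over past events (s, j).
    mu: length-d base rates; kernels[i][j]: callable g_ij(u) for u >= 0;
    history: list of (time, type) pairs with type in 0..d-1."""
    lam = np.array(mu, dtype=float)
    for s, j in history:
        if s < t:
            for i in range(len(mu)):
                lam[i] += kernels[i][j](t - s)
    return lam

def simulate_hawkes_thinning(mu, kernels, T, rng=None):
    """Ogata-style thinning sketch: with non-increasing kernels, the total rate
    just after the last event dominates the true rate until the next event."""
    rng = np.random.default_rng() if rng is None else rng
    history, t = [], 0.0
    while t < T:
        lam_bar = conditional_rate(t + 1e-12, mu, kernels, history).sum()
        t += rng.exponential(1.0 / lam_bar)
        if t >= T:
            break
        lam = conditional_rate(t, mu, kernels, history)
        if rng.uniform() * lam_bar < lam.sum():          # accept candidate point
            i = rng.choice(len(mu), p=lam / lam.sum())   # draw its type
            history.append((t, i))
    return history

# illustrative two-type example with exponential kernels a*b*exp(-b*u)
g = lambda a, b: (lambda u: a * b * np.exp(-b * u))
kernels = [[g(0.2, 1.0), g(0.3, 1.0)],
           [g(0.1, 2.0), g(0.2, 2.0)]]
events = simulate_hawkes_thinning([0.5, 0.3], kernels, T=50.0)
```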
in this case , however , the ( conditional ) rate function is independent both of time and of the history .let us consider a poisson cluster process , which evolves in the following way ( , see also figure [ pedfigure ] ) : 1 .let be a realization , on the interval ] extracts component of a given matrix , and denotes the convolution power of the interaction kernel , defined recursively by indeed , if we define to be equal to the probability that an event of type at , after generations , causes a type event at , we have {ij}\textmd{,}\\ & \frac{p^{ij}_{1}(t)}{dt } = [ \mathbf{g}(t)]_{ij}\textmd{,}\\ & \frac{p^{ij}_{2}(t)}{dt } = \sum_{k=1}^{d}\int_{-\infty}^{t}[\mathbf{g}(t - s)]_{ik}[\mathbf{g}(s)]_{kj}ds = [ \mathbf{g}^{\star 2}(t)]_{ij}\textmd{,}\end{aligned}\ ] ] and therefore , by induction , {ij}\\ & = \left[\sum_{n\geq0}\mathbf{g}^{\star n}(t)\right]_{ij}\textmd{. } \end{aligned}\ ] ] furthermore , noting that immigrant at is , by construction , equal to , we obtain the probability of an immigrant ( arriving at any point in time ) generating an event of type at time .it equals {mk}\mu^{k } = \lambda^{m}\textmd{,}\ ] ] i.e. it is the component of the stationary rate vector in ( [ stationary_rate ] ) , where the first equality in the previous equation follows from {mk}dx \\ & = \sum_{n\geq 0}[\mathbf{g}^{n}]_{mk } = [ ( \mathbf{i}-\mathbf{g})^{-1}]_{mk}\textmd{. } \end{aligned}\ ] ] computing the probability of the family tree in figure [ 2-trees ] is now straightforward ; recalling the definition of and taking into account our previous considerations , we get recovering a classical and well known result on the covariance density of the hawkes process ( see ) . a big advantage of our approach , however , is that it can be used to compute cumulant densities of orders greater than .for example , in order to compute the order density we start , as in the -dimensional case , by enumerating all possible family trees with leaves , and . in this case , however , there are in total different possibilities ( see figure [ 3-trees ] ). we can now proceed in much the same way as before , summing up the probabilities of all possible trees in order to derive the desired formula .we define , and {ij}\textmd{,}\ ] ] finally obtaining it is important to point out that equation ( [ 3-density ] ) can be derived in a different , albeit a more tedious way using martingale theory arguments , generalizing the derivation of bacry et al . in for the second order cumulant density .the newly introduced function corresponds to the probability of a type event at generating a type event at , after at least one generation .the appearance of such a term in the above equations is a consequence of the fact that , for instance , contracting the link between nodes and in tree to a point turns it into , which is already accounted for .thus , in order to avoid counting certain configurations twice , we must introduce a `` stiff '' link between the two internal nodes and in trees , and . by generalizing the above considerations , it is possible to construct a general procedure for computing the order cumulant density . 1 . for a given , generate all possible rooted trees with leaves .2 . label the leaves of with ordered pairs , in arbitrary order .label the internal nodes ( including the root ) of arbitrarily .3 . for every tree ,construct an integral term , according the the following pseudo - algorithm : 1 .set ; 2 . for every edge in t , connecting a node of type to a leaf of type : 3 . 
for every edge in t , connecting an internal node of type to another internal node of type : 4 . let be the root of . set 5 . integrate with respect to the variable , for every internal node . 6 . sum over all for every internal node . 7 . sum over all 4 . add up all integral terms for every rooted tree , generated in the first step , to obtain the order cumulant density . the principal difficulty of the above procedure lies in its first step , i.e. in the enumeration of all topologically distinct rooted trees with labeled leaves . while there are known algorithms that can tackle this problem ( see e.g. the classic text by felsenstein ) , the number of terms grows very quickly with increasing ( see figure [ number_of_terms ] ) and thus computing becomes impractical . [ table [ number_of_terms ] : number of integral terms as a function of the cumulant order ; the table entries were lost in extraction . ] let be , for a given time vector and multi - index , the order cumulant density of a -dimensional hawkes process . we define the integrated cumulant of order , denoted simply by , by setting note that can be seen as the -dimensional laplace transform , `` at zero '' , of . indeed , if we denote where and , we have , clearly , thus , if we define we can , by laplace transforming the covariance density , prove ( see appendix [ formulas_for_integrated_cumulants ] ) that the integrated covariance equals $\sum_{m=1}^{d}\lambda_{m}[\mathbf{r}]_{im}[\mathbf{r}]_{jm}$ , where we set $\mathbf{r } = \sum_{l\geq 0}\mathbf{g}^{l}$ . expanding in powers of $\mathbf{g}$ , we get $\sum_{m=1}^{d}\sum_{l , l'\geq 0}\lambda_{m}[\mathbf{g}^{l'}]_{im}[\mathbf{g}^{l}]_{jm}$ . interpreting now the matrix power in the sense of graph theory , i.e. as a matrix whose component corresponds to the sum of the weights of all paths from node to node in exactly steps , we see that the integrated covariance density can be equivalently represented as where the sum goes over the set of all rooted trees with root , containing nodes . here , denotes the weight of tree , defined as the product of weights of all edges , contained in , times the weight of the root , defined as being equal to . the graph with adjacency matrix can be thought of as follows . each node in corresponds to a type of event in the underlying hawkes process , and the existence of an edge from to indicates the possibility of generating type events from those of type . starting in node , traversing the corresponding edge to reach node is equivalent to generating type offspring of a type immigrant . therefore , each path through graph represents a specific `` bloodline '' of a type immigrant , while a tree accounts for the possibility of the bloodline splitting somewhere along the way , concluding , after a certain number of generations , in offspring of both types and . the previous formula tells us that the sum of weights of all such trees is equal to the integrated covariance . now , reasoning in much the same way as before we have , for the integrated third - order cumulant , $$\begin{aligned } & \sum_{m=1}^{d}\lambda_{m}[\mathbf{r}]_{im}[\mathbf{r}]_{jm}[\mathbf{r}]_{km}\\ & + \sum_{m , n=1}^{d}\lambda_{n}[\mathbf{r}]_{im}[\mathbf{r}]_{jm}[\bm{\psi}]_{mn}[\mathbf{r}]_{kn}\\ & + \sum_{m , n=1}^{d}\lambda_{n}[\mathbf{r}]_{jm}[\mathbf{r}]_{km}[\bm{\psi}]_{mn}[\mathbf{r}]_{in}\\ & + \sum_{m , n=1}^{d}\lambda_{n}[\mathbf{r}]_{im}[\mathbf{r}]_{km}[\bm{\psi}]_{mn}[\mathbf{r}]_{jn } , \end{aligned}$$ where $\bm{\psi } = \sum_{l\geq 1}\mathbf{g}^{l } = \mathbf{r } - \mathbf{i}$ is the integral of the `` at least one generation '' kernel introduced earlier .
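the integrated second - and third - order expressions above reduce to a few matrix operations once the integrated kernel matrix and the base rates are given . the sketch below implements that reduction ; the identification of the `` at least one generation '' matrix with $\mathbf{r } - \mathbf{i}$ follows the definition recalled above , and all names and the example numbers are illustrative .

```python
import numpy as np

def integrated_cumulants(G, mu):
    """G[i, j] = integral of the (i, j) interaction kernel over [0, inf),
    with spectral radius of G below one; mu is the base-rate vector."""
    G = np.asarray(G, dtype=float)
    mu = np.asarray(mu, dtype=float)
    d = len(mu)
    R = np.linalg.inv(np.eye(d) - G)          # sum_{l >= 0} G^l
    Lam = R @ mu                              # stationary rate vector
    Psi = R - np.eye(d)                       # sum_{l >= 1} G^l
    # integrated covariance: sum_m Lam_m R_im R_jm
    c2 = np.einsum('m,im,jm->ij', Lam, R, R)
    # integrated third-order cumulant: the four tree motifs listed above
    c3 = (np.einsum('m,im,jm,km->ijk', Lam, R, R, R)
          + np.einsum('n,im,jm,mn,kn->ijk', Lam, R, R, Psi, R)
          + np.einsum('n,jm,km,mn,in->ijk', Lam, R, R, Psi, R)
          + np.einsum('n,im,km,mn,jn->ijk', Lam, R, R, Psi, R))
    return Lam, c2, c3

# illustrative two-type example
Lam, c2, c3 = integrated_cumulants([[0.2, 0.3], [0.1, 0.2]], [0.5, 0.3])
```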
once again , expanding and in powers of yields where is the set of all rooted trees with root , containing nodes and is the already defined weight function .it is now easy to see that the general result is of the form where and is the set of all rooted trees with root , containing nodes .in this paper we described the method for computing a class of statistics of linear hawkes self - exciting point processes with arbitrary interaction kernels . by using the poisson cluster process representation, we were able to obtain a general procedure for deriving formulas for order cumulant densities .furthermore , we have shown there is a one - to - one correspondence between the integral terms , appearing in said densities , and all topologically distinct rooted trees with labeled leaves .we also considered the problem of computing time - integrated cumulants and showed this can be done by simplifying the expressions for the corresponding cumulant densities .moreover , and not surprisingly , we demonstrated that integrated cumulants likewise admit a representation in terms of a formal sum of topological motifs , generalizing previous work on the topological expansion of the integrated covariance .the problem of quantifying higher - order correlations is of some importance in theoretical neuroscience .indeed , it has long been suggested that understanding the cooperative dynamics of populations of neurons would provide fundamental insight into the nature of neuronal computation . however , while direct experimental evidence for coordinated activity on the spike train level mostly relies on the correlations between pairs of nerve cells , it is becoming increasingly clear that such pairwise correlations can not completely resolve the cooperative dynamics of neuronal populations and that higher - order cumulants need to be taken into account. one possible shortcoming of our work is the ( supra - exponentially ) increasing complexity of the closed - form expressions for the densities for higher values of .this `` explosion '' , however , is mostly due to combinatorial factors , that arise in many problems involving cumulants .as their definition naturally involves objects such as set partitions , it seems to us that these sorts of issues would be quite difficult to avoid .another limitation of the present model is that it only allows for excitatory interactions - an arrival of an event at a given time can only _ increase _ the likelihood of future event , never decrease it .we hope , in the future , to be able to extend our analysis to include models in which there also exists a possibility of mutual inhibition between points of different types .further generalizations of our results might involve computing cumulants ( and other important statistics ) of a non - linear hawkes processes ( see e.g. for the definition ) , whose conditional rate function involves a non - linear transformation of equation ( [ rate ] ) , thus allowing for , for example , multiplicative interaction between point events .however , in this case , the resulting process no longer admits an immigrant - offspring representation , meaning an alternative approach would be necessary .let be an arbitrary time vector and an arbitrary multi - index . from ( [ bernoulli ] ) , for every vector and multi - index we have that furthermore, it is clear that where denotes the complement of the set indeed , events of type either are , or are nt all in some cluster .we now proceed by induction in . 
for , we have but , as the only way that two events are not in the same cluster is if they each belong to a different one ; say , if and , because of independence of different clusters and .thus , proving that formula ( [ cumulant_probability_formula ] ) is true for .next , we assume that ( [ cumulant_probability_formula ] ) is true for and prove that it then must also be true for .consider the complementary set .if events are not all in the same cluster , how could they be distributed ?one possibility is that they are divided up between two different clusters , like in the previous case .in fact , they could potentially be distributed in different clusters , where .therefore , where the first sum goes over all possible numbers of different clusters that events could be partitioned in , while denotes the subset of that belong to the cluster ( and denotes their types ) .now , note that the previous equation is , in fact , a sum over all partitions of the set with at least two blocks ( i.e. ) .let us now fix one such partition . as , we must have , , , that .but then , by the inductive assumption , and , therefore , finally , from ( [ decomposition_of_average ] ) , ( [ different_clusters ] ) and ( [ moments_cumulant_formula ] ) , we get which completes the proof .let be the set of all rooted trees with root and leaves .next , let and let be the corresponding integral term .in order to compute the laplace transform , we first consider the leaves of .each leaf contributes a term , for some internal node . for simplicity ,let us assume that leaves all descend from a single internal node , which we denote . then , applying to the laplace transform with respect to variables , we obtain of course , in general the leaves are divided into several groups , according to which internal node they descend from . in that case ,applying to each such group the laplace transform in the already described way , yields several terms of type ( [ leaf_terms ] ) .moving one level up in tree , we are now in a situation in which several internal nodes , each with its own group of dependent leaves , all descend from a common node , residing one level above them .we denote these internal nodes by .each such internal node contributes to a term . transforming the exponential term in ( [ leaf_terms ] ) by induction, we can now see that this procedure must end after a finite number of steps ( equal to the `` depth '' of tree ) , at which point we are left with a product of various terms of types ( [ leaf_terms ] ) and ( [ internal_node_terms ] ) , integrated with respect to the position of the root ( as this is the last node we reach by `` climbing up '' ) .the exponential terms in this product can be combined to form by setting , we now see that the formulas for can be obtained from formulas for the cumulant densities by simply `` erasing '' all the integral signs and replacing all the functional terms with their integrated counterparts . 
|
we derive explicit , closed - form expressions for the cumulant densities of a multivariate , self - exciting hawkes point process , generalizing a result of hawkes in his earlier work on the covariance density and bartlett spectrum of such processes . to do this , we represent the hawkes process in terms of a poisson cluster process and show how the cumulant density formulas can be derived by enumerating all possible `` family trees '' , representing complex interactions between point events . we also consider the problem of computing the integrated cumulants , characterizing the average measure of correlated activity between events of different types , and derive the relevant equations .
|
the extremely low key rates afforded by quantum key distribution ( qkd ) compared to computational cryptographic schemes pose a significant challenge to the wide spread adoption of qkd .the main reason for the poor rate performance is that the qkd _capacity _ of a single - mode lossy bosonic channel , i.e. , the maximum key rate attainable using any direct - transmission qkd protocol , is proportional to the end - to - end transmissivity of the channel in the high - loss regime .therefore , to increase the key rate one must increase the number of modes used by the system .this can be done by increasing the optical bandwidth in modes / s that can be used by the qkd protocol as well as employing multiple spatial modes . herewe investigate the latter .formally , the qkd capacity of a single - mode bosonic channel where we employ both polarizations of light is bits / s when .. since , this corresponds to an exponential decay of key rate with distance in fiber and free - space propagation in non - turbulent atmosphere . while the extinction coefficient may be modest for the atmospheric propagation in clear weather at a well - chosen wavelength , the inverse - square decay of rate with distance is unavoidable in the _ far - field _ regime even in vacuum ( where )this is because free - space optical channel is characterized by the fresnel number product , where and are the respective areas of the transmitter and receiver apertures , and is the transmission center wavelength . in the far - field regime and only one transmitter - pupil spatial mode couples significant power into the receiver pupil over an -meter line - of - sight channel with input - output power transmissivity , employing multiple orthogonal spatial modes in the far - field regime can not yield a appreciable improvement in the achievable qkd rate .therefore , our interest in this paper is in the _ near - field _ propagation regime ( ) , which is relevant to metropolitan area qkd , as well as line - of - sight over - the - surface maritime applications of qkd . in this near - field regime, approximately mutually - orthogonal spatial modes have near - perfect power transmissivity ( ) .thus , multiplexing over multiple orthogonal spatial modes could substantially improve the total qkd rate with the gain in rate over using a single spatial mode ( such as a focused gaussian beam ) being approximately proportional to , and hence more pronounced at shorter range ( where is high ) .laguerre - gauss ( lg ) functions in the two - dimensional transverse coordinates form an infinite set of mutually - orthogonal spatial modes , which happen to carry orbital angular momentum ( oam ) .there have been several suggestions in recent years to employ lg modes for qkd , both based on laser - light and single - photon encodings , and the purported rate improvement has been attributed to the oam degree of freedom of the photon . while multiplexing over orthogonal spatial modescould undoubtedly improve qkd rate in the near - field propagation regime as explained above : 1 .can other orthogonal spatial mode sets that do _ not _ carry oam be as effective as lg modes in achieving the spatial - multiplexing rate improvement in the near field ? 
2 .does one truly need orthogonal modes to obtain this spatial - multiplexing gain or are there simpler - to - generate mode profiles that might suffice ?question ( 1 ) was answered affirmatively for classical and quantum - secure private communication ( without two - way classical communication as is done in qkd ) over the near - field vacuum propagation and turbulent atmospheric optical channels : hermite - gauss ( hg ) modes are unitarily equivalent to the lg modes and have identical power - transfer eigenvalues , . since the respective communication capacity of mode is a function of and the transmit power on mode , hg modes , which do _ not _ carry oam , can in principle achieve the same rate as lg modes , notwithstanding that the hardware complexity and efficiency of generation and separation of orthogonal lg and hg modes could be quite different .our goal is to address questions ( 1 ) and ( 2 ) above for qkd .the answer to ( 1 ) is trivially affirmative , at least for the case of vacuum propagation ( no atmospheric turbulence or extinction ) , based on an argument similar to the one used in refs .we show potential gain of between to orders of magnitude in the key rate by using multiple spatial modes over a km link , assuming cm radii transmitter and receiver apertures , and m laser - light transmission . the bulk of our analysis addresses question ( 2 ) for the optical vacuum propagation channel , which we answer negatively .we show that most of the spatial - multiplexing gain afforded by mutually - orthogonal modes ( either hg or lg ) in the near field can be obtained using a focused overlapping gaussian beam array ( ogba ) with optimized beam geometry in which beams are individually amplitude and/or phase modulated to realize the qkd protocol .these gaussian focused beams ( fbs ) are _ not _ mutually - orthogonal spatial modes , and therefore the power that leaks into fb from the neighboring fbs has the same effect on the key rate of that fb as do excess noise sources like detector dark current or electrical johnson noise .non - zero excess noise causes the rate - distance function to fall to zero at a minimum transmissivity threshold , or , equivalently , at a maximum range threshold such that for .thus , while packing the fbs closer increases the spatial - multiplexing gain , it also increases the excess noise on each fb channel , resulting in decreased . forany given range there should exist an optimal ( key - rate - maximizing ) solution for spatial geometry ( tiling ) of the fbs , power allocation across the fbs , and beam widths .for shorter range the optimal solution should involve a greater number of fbs , and the number of beams employed should be approximately proportional to . 
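the scaling arguments above can be made concrete with a few lines of arithmetic . the sketch below computes the fresnel number product and the resulting rough count of near - unity - transmissivity modes ; the aperture radii , wavelength and range used in the example are illustrative placeholders , not the paper's link parameters .

```python
import math

def fresnel_number_product(radius_tx_m, radius_rx_m, wavelength_m, range_m):
    """D_f = A_t * A_r / (lambda * L)^2 for circular pupils of the given radii."""
    A_t = math.pi * radius_tx_m ** 2
    A_r = math.pi * radius_rx_m ** 2
    return A_t * A_r / (wavelength_m * range_m) ** 2

def rough_mode_count(Df):
    """In the near field (Df >> 1) roughly Df orthogonal spatial modes have
    near-unity transmissivity; in the far field (Df << 1) a single mode with
    transmissivity of order Df dominates, so multiplexing buys little."""
    return max(1, int(Df))

# illustrative example: 10 cm radii apertures, 1.55 um light, 1 km range
Df = fresnel_number_product(0.1, 0.1, 1.55e-6, 1000.0)
print(Df, rough_mode_count(Df))
```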
here , instead of evaluating the optimal rate - maximizing solution as explained above ( which is extremely difficult ) , we find a numerical solution to a constrained optimization problem assuming a square - grid tiling of the fbs in the receiver aperture and restricting our attention to the discrete - variable ( dv ) laser - light decoy - state bb84 protocol .the rationale behind this is to obtain an _ achievable _ rate - distance envelope for the ogba transmitter to compare with the ultimate key capacity attainable by employing infinitely many lg ( or hg ) modes .since we restrict our attention to dv qkd , we assume that the ogba transmitter is paired with a single - photon detector ( spd ) array at the receiver with square - shaped pixels and unity fill factor with each fb being focused at the center of a detector pixel and there are as many detector pixels as the number of fbs ( the optimal number of which is a function of as discussed above ) .azimuthal lg modes retain their orthogonality when passed through hard - pupil circular apertures .thus , generating and separating these modes without any power leaking between them is possible in theory , and has been the subject of much experimental work .the current state of the art is the separation of 25 oam modes with average efficiency of , as was demonstrated in .we compare the qkd rate achievable with our ogba proposal to what is achievable using ideal separation of azimuthal lg modes as well as the best currently possible . in the latter case, we obtained the data for the cross - talk ( overlap ) between the separated modes ( see ( * ? ? ?* table 4a ) ) from the authors of .we evaluate performance assuming ideal photodetectors and no atmospheric extinction .we find that the achievable rate using our ogba architecture is at worst db less than the state - of - the - art azimuthal lg mode separation in and at worst db less than the theoretical maximum for entire azimuthal lg mode set , while using hard - pupil transmitter and receiver apertures of same areas and the same center wavelength .the maximum rate gap occurs because the square - grid ogba architecture does allow the use of two and three beams ; with two square pixels placed side - by - side at the receiver , the gap between the systems employing the state - of - the - art and ideal azimuthal lg mode separation reduces to db and db , respectively .current technology for optical communication using orthogonal modes use bulky and expensive components .while advances in enabling technology could reduce the device size , weight and cost of orthogonal mode generation and separation , our results show that using oam modes for qkd may not be worth the trouble : the gain in qkd key rate in the near field is modest compared to what can already be obtained by our fairly simple - to - implement ogba architecture .this paper is organized as follows : in the next section we introduce the basic mathematics of laser light propagation in vacuum using soft - pupil ( gaussian attenuation ) apertures . in section [ sec : lgmodes ] we consider the propagation of lg modes using hard - pupil circular apertures , while in section [ sec : gaussian ] we discuss the mathematical model of the ogba architecture that we propose in this paper . 
using the expressions derived in sections [ sec : lgmodes ] and [ sec : gaussian ] , we numerically evaluate the qkd rate using various beam and aperture geometries , and report the results in section [ sec : results ] .we conclude with a discussion of the implications of our results as well as future work , in section [ sec : discussion ] .consider propagation of linearly - polarized , quasimonochromatic light with center wavelength ( that is , a narrow transmission band around the center wavelength ) from alice s transmitter pupil in the transverse plane with a complex - field - unit pupil function , , through a -meter line - of - sight free - space channel , and received by bob s receiver pupil in the plane with aperture function , .alice s transmitted field s complex envelope is multiplied ( truncated ) by the complex - valued transmit - aperture function , undergoes free - space diffraction over the -meter path , and is truncated by bob s receiver - aperture function , to yield the received field .the overall input - output relationship is described by the following linear - system equation : where the channel s green s function is a spatial impulse response .we assume vacuum propagation and drop the time argument from the green s function : }{i \lambda l } \ , a_{\rm t}({\bm \rho}),\end{aligned}\ ] ] where .normal - mode decomposition of the vacuum - propagation green s function yields an infinite set of orthogonal input - output spatial - mode pairs ( a mode being a normalized spatio - temporal field function of a given polarization ) , that is , an infinite set of non - interfering parallel spatial channels . in other words , where forms a complete orthonormal ( con ) spatial basis in the transmit - aperture plane before the aperture mask , and forms a con spatial basis in the receiver - aperture plane after the aperture mask .that is , where is the kronecker delta function .therefore , the singular - value decomposition ( svd ) of yields : physically this implies that if alice excites the spatial mode , it in turn excites the corresponding spatial mode ( and no other ) within bob s receiver .this specific set of transmitter - plane receiver - plane spatial - mode pairs that form a set of non - interfering parallel channels are the eigenmodes for the channel geometry .the fraction of power alice puts in the mode that appears in bob s spatial mode is the modal transmissivity , .we assume that the modes are ordered such that if alice excites the mode in a coherent - state quantum description of an ideal laser - light pulse of intensity ( photons ) and phase , then the resulting state of bob s mode is an attenuated coherent state .the power transmissivities are strictly increasing functions of the transmission frequency , each increasing from at , to at .let us consider gaussian - attenuation ( soft - pupil ) apertures with \text{~and}\\ \label{eq : rx_gauss_ap } a_{\rm r}({\bm \rho^\prime } ) & = \exp\left[-|{\bm \rho^\prime}|^2/r_{\rm r}^2\right].\end{aligned}\ ] ] for this choice of pupil functions , there are two unitarily - equivalent sets of eigenmodes : the aforementioned laguerre - gauss ( lg ) modes , which have circular symmetry in the transverse plane and are known to carry orbital angular momentum ( oam ) , and the hermite - gauss ( hg ) modes , which have rectangular symmetry in the transverse plane and do not carry oam .the input lg modes , labeled by the radial index and the azimuthal index , are expressed using the polar coordinates as follows : 
^{|l|}\mathcal{l}_p^{|l|}\left(\frac{r^2}{a^2}\right)\exp\left(-\left[\frac{1}{2a^2}+\frac{ik}{2l}\right]r^2+il\theta\right),\end{aligned}\ ] ] where denotes the generalized laguerre polynomial indexed by and .for completeness of exposition , the input hg modes , labeled by the horizontal and vertical indices , are expressed using the cartesian coordinates as follows : [x^2+y^2]\right)\end{aligned}\ ] ] where is the hermite polynomial . in the expressions for both lg and hg modes , is a beam width parameter given by where is the product of the transmitter - pupil and receiver - pupil fresnel number products for this soft - pupil vacuum propagation configuration .alternatively , when expressed using the transmitter and receiver pupils areas and .the expressions for the output lg and hg modes are given by equations ( 28 ) and ( 24 ) in , respectively .the expression for the power - transfer eigenvalues for either mode set admits the following simple form : where for lg modes , and for hg modes .thus , there are spatial modes of transmissivity .the lg and hg modes span the same eigenspace , and hence are related by a unitary transformation ( a linear mode transformation ) . the first mode in both lg or hg mode sets , defined by , is known as the _ gaussian beam_. the input gaussian beam is expressed as follows : (x^2+y^2)\right).\end{aligned}\ ] ]soft - pupil gaussian apertures used in the preceding section are purely theoretical constructs : while they greatly simplify the mathematics , they are impossible to realize physically .let us thus consider hard - pupil circular apertures of areas and , that is , with the corresponding areas defined as and .neither lg nor hg modes form an eigenmode set for these hard - pupil apertures . instead ,their eigenmodes are prolate spheroidal functions , and the power - transfer eigenvalues , indexed by two integers , have known , yet quite complicated expressions . if the lg ( or hg ) modes are used as input into the hard - pupil system , the output modes are non - orthogonal in general , as the expressions that we derive next show . employing the vacuum propagation kernel in with the expression for the input lg mode in , substituting the expressions for the hard circular pupils in and , and re - arranging terms yields : \sqrt{p!}}{i a\lambda l\sqrt{\pi(|l|+p)!}}\int_0^{r_{\rm t}}\int_{0}^{2\pi } \left[\frac{r}{a}\right]^{|l|}\mathcal{l}_p^{|l|}\left[\frac{r^2}{a^2}\right]\exp\left[-\frac{r^2}{2a^2}+\frac{ik}{2l}(r'^2 - 2rr'\cos\theta)+il\theta\right]r\mathrm{d}\theta\mathrm{d}r,\end{aligned}\ ] ] for ] , where we first substitute , and then substitute .now , the integral representation of the bessel function of the first kind given in appendix [ app : besselint ] allows the following evaluation of the integral with respect to in : \mathrm{d}\theta=2\pi \exp\left[-\frac{il\pi}{2}\right]j_l\left[\frac{krr'}{l}\right].\end{aligned}\ ] ] substitution of into yields : \sqrt{\pi p!}}{i a\lambda l\sqrt{(|l|+p)!}}\int_0^{r_{\rm t } } \left[\frac{r}{a}\right]^{|l|}\mathcal{l}_p^{|l|}\left[\frac{r^2}{a^2}\right]\exp\left[-\frac{r^2}{2a^2}\right]j_l\left[\frac{krr'}{l}\right]r\mathrm{d}r.\end{aligned}\ ] ] while the bessel function is not an elementary function , it can be efficiently evaluated by a computer ( using , e.g. 
, matlab ) .now let s evaluate the cross - talk ( overlap ) between the output modes .we are interested in the fraction of power transmitted on the mode indexed by that is leaked to the mode indexed by : substituting , we note that evaluation of the integral with respect to yields : \mathrm{d}\theta'=2\pi\delta_{l , m} . eq . ( 8.411.1 ) in gives the following integral representation of bessel function of the first kind : where is an integer .we perform several substitutions to obtain the form of this integral that is useful to us .first , substitute : now substitute and split the resulting integral : now , since for integer , and , substitution into the first integral in only changes its limits , yielding the form we need : .now , +i\mathfrak{im}[\operatorname{erf}(u+iv)]+\mathfrak{re}[\operatorname{erf}(u - iv)]+i\mathfrak{im}[\operatorname{erf}(u - iv)]\\ \label{eq : symerf}&=\mathfrak{re}[\operatorname{erf}(u+iv)]+i\mathfrak{im}[\operatorname{erf}(u+iv)]+\mathfrak{re}[\operatorname{erf}^\ast(u+iv)]+i\mathfrak{im}[\operatorname{erf}^\ast(u+iv)]\\ \label{eq : conj}&=2\mathfrak{re}[\operatorname{erf}(u+iv)],\end{aligned}\ ] ] where in we use the fact that and follows from the definition of complex conjugation .here we review the decoy state discrete variable bb84 qkd protocol , borrowing the development of the key generation rate expression from ( * ? ? ? * section iv.b.3 ) .suppose that alice transmits pulses to bob at the rate of hz .the lower bound for the rate of secure key generation from these pulses is : where denotes the information shared between alice and bob , while and denote the information captured by eavesdropper eve from alice and bob , respectively .privacy amplification aims to destroy eve s information , sacrificing part of the information in the process ( hence subtraction in ) .we take the minimum of and in since alice and bob choose the reference set of pulses on which eve has least information .the qkd rate in bits / second is then . for lossy bosonic channels , , with qkd capacity given by : where captures all losses , which include the diffraction described in the previous sections , as well as atmospheric losses and detector inefficiency .alice transmits a sequence of polarized laser pulses with average intensity photons per pulse . following the standard bb84 protocol ,polarization is chosen by first randomly selecting one of two non - orthogonal polarization bases ( rectilinear or diagonal ) , and then encoding a random bit in the selected bases .bob randomly chooses one of two polarization bases in which to measure the received pulse .when alice and bob select the same bases , alice s pulse is directed to one of two detectors via a polarizing beam splitter and ideally only the detector corresponding to the transmitted bit can click , registering the detection event ( we discuss the non - ideal case later ) .when the bases are not the same , either detector can click with equal probability .we call bob s detector `` correct '' when it corresponds to alice s basis choice , otherwise we call the detector `` incorrect . ''
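Returning briefly to the hard-pupil output-mode expression derived above: the remaining radial integral involves only a generalized Laguerre polynomial, a Gaussian factor and a Bessel function, so the numerical evaluation mentioned there is straightforward. The sketch below does this with SciPy; the constant prefactor in front of the integral (and any phase depending on the receiver-plane radius) is omitted, and the wavelength, range, pupil radius, beam-width parameter and mode indices are assumed values chosen purely for illustration.

```python
# Numerical evaluation of the radial integral that gives the output field of
# an input LG mode truncated by a hard circular transmitter pupil.
# Prefactors and phases outside the integral are omitted; all parameter
# values are assumptions chosen only for illustration.
import numpy as np
from scipy.special import genlaguerre, jv
from scipy.integrate import quad

lam = 1.55e-6        # wavelength [m] (assumed)
L   = 1.0e3          # range [m] (assumed)
R_t = 0.02           # transmitter pupil radius [m] (assumed)
a   = 0.01           # LG beam-width parameter [m] (assumed)
k   = 2 * np.pi / lam
p, ell = 1, 2        # radial and azimuthal mode indices (assumed)

lag = genlaguerre(p, abs(ell))

def radial_integral(r_prime):
    """int_0^{R_t} (r/a)^|l| L_p^|l|(r^2/a^2) exp(-r^2/2a^2) J_l(k r r'/L) r dr."""
    def integrand(r):
        return ((r / a) ** abs(ell) * lag(r**2 / a**2)
                * np.exp(-r**2 / (2 * a**2))
                * jv(ell, k * r * r_prime / L) * r)
    val, _ = quad(integrand, 0.0, R_t, limit=200)
    return val

# Radial profile (up to the omitted prefactor) in the receiver plane.
for rp in np.linspace(0.0, 0.03, 7):
    print(f"r' = {rp:.3f} m  ->  integral = {radial_integral(rp):.3e}")
```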
the probability of a click from a signal pulse when the bases match is : in the decoy state bb84 protocol , alice changes the value of the intensity randomly from one pulse to the other ; she reveals the list of values she used at the end of the exchange of transmissions .this prevents eve from adapting her attack to alice s state , and allows alice and bob to estimate their parameters in post - processing .the probability of a click in one of the detectors from either the received pulse or a dark click is : where is the probability of a dark click .when pulse is not detected , an error can occur only because of a dark click in the incorrect detector .the probability of this event is .when the pulse is received , non - idealities of the polarizing beam splitter can result in a click in the erroneous detector .these non - idealities are captured by the visibility parameter , which is effectively the probability that the beam splitter directs the pulse according to the bases chosen by bob . since an incorrect basis choice results in a click happening with equal probability in one of the detectors ,the probability of an erroneous click with pulse received is . combining the above probabilities ,the quantum bit error rate is : the rate at which bob can extract information from the clicks at his detectors is thus : where is the binary entropy function , is the expression for the shannon capacity of the binary symmetric channel , and is the efficiency of the error correction code ( ecc ) used by alice and bob .now let s study the amount of information about the key collected by eve .she only gains information when photons are transmitted , and provided that bob detects the photon that she forwarded ( thus , when alice does not send a photon but bob detects a dark click , eve does not obtain any information about the key ) .if alice sends a single photon pulse , eve has to introduce an error if she is to obtain any information . in this case eve gains bits of information , where is the probability of an error event when alice transmits a single photon .alice transmits a single photon with probability , and a detection event occurs at one of the detectors with probability conditioned on the event that a click occurs in one of bob s detectors , the probability becomes : the probability of alice transmitting one photon and a click occurring in the incorrect detector is : conditioning on the event that alice transmits a single photon and a detection event occurs at one of the detectors yields : for multi - photon pulses , photon number splitting is an optimal attack , in which eve forwards one photon to bob and keeps the others .she gains one bit from the photons she keeps when there is a click in one of bob s detectors .the probability of a click in one of the detectors when alice transmits more than one photon is where is given by and is the probability of a click in one of the detectors when alice does not transmit a photon given that a click occurred .
since alice sends no photons with probability , the probability of a click in one of the detectors when alice does not transmit a photon is : conditioning on the event that a click occurs in one of bob s detectors , we obtain : therefore , the expression for the qkd rate is thus : \\ \label{eq : r}&=\max[0,p_{\rm r}(y_0+y_1(1-h_2(\epsilon_1))-f_{\rm leak}h_2(q ) ) ] \text{~bits / mode}.\end{aligned}\ ] ] we note that in the numerical optimization performed in section [ sec : results ] we use a version of without taking the maximum .allowing negative rate allows matlab s ` fmincon ` function to construct the gradient over the entire space of optimization variables .
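As a concrete illustration of the rate expression just derived, the sketch below evaluates a decoy-state BB84 key rate as a function of the channel transmissivity. The specific forms used here for the click probability, the QBER and the vacuum and single-photon contributions are standard textbook approximations (Poissonian source, infinite-decoy limit) assumed purely for illustration; they are not the exact expressions used in this paper.

```python
# Hedged sketch of a decoy-state BB84 key-rate evaluation (infinite-decoy,
# Poisson-source approximations assumed; not this paper's exact expressions).
import numpy as np

def h2(x):
    """Binary entropy function."""
    x = np.clip(x, 1e-15, 1 - 1e-15)
    return -x * np.log2(x) - (1 - x) * np.log2(1 - x)

def bb84_rate(eta, mu=0.5, p_dark=1e-6, visibility=0.99, f_leak=1.16):
    """Lower-bound key rate in bits per transmitted pulse (assumed model)."""
    # Probability of at least one click (signal or dark) per pulse.
    p_click = 1.0 - (1.0 - p_dark) ** 2 * np.exp(-eta * mu)
    # Overall QBER: dark clicks land in the wrong detector half the time,
    # imperfect visibility misroutes detected signal photons.
    p_signal = 1.0 - np.exp(-eta * mu)
    qber = (0.5 * p_dark + (1.0 - visibility) * p_signal) / p_click
    # Fraction of clicks caused by vacuum pulses (dark counts only) and by
    # single-photon pulses, plus the single-photon error probability.
    y0 = np.exp(-mu) * 2 * p_dark / p_click
    p1_click = mu * np.exp(-mu) * (eta + (1 - eta) * 2 * p_dark)
    y1 = p1_click / p_click
    eps1 = ((1.0 - visibility) * eta + 0.5 * (1 - eta) * 2 * p_dark) \
           / (eta + (1 - eta) * 2 * p_dark)
    rate = p_click * (y0 + y1 * (1.0 - h2(eps1)) - f_leak * h2(qber))
    return max(0.0, rate)

for loss_db in [10, 20, 30, 40]:
    eta = 10 ** (-loss_db / 10)
    print(f"{loss_db} dB loss: R = {bb84_rate(eta):.3e} bits per pulse")
```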
|
the secret key rate attained by a free - space qkd system in the _ near - field _ propagation regime ( relevant for - km range using cm radii transmit and receive apertures and m transmission center wavelength ) can benefit from the use of multiple spatial modes . a suite of theoretical research in recent years has suggested the use of orbital - angular - momentum ( oam ) bearing spatial modes of light to obtain this improvement in rate . we show that most of the aforesaid rate improvement in the near field afforded by spatial - mode multiplexing can be realized by a simple - to - build overlapping gaussian beam array ( ogba ) and a pixelated detector array . with the current state - of - the - art in oam - mode - sorting efficiencies , the key - rate performance of our ogba architecture could come very close to , if not exceed , that of a system employing oam modes , but at a fraction of the cost .
|
old and rare books : vestiges of the past which are well preserved in special reserves of main libraries , and constitute a source of pride of these institutions . among the many incunabula owned by argentina s national library , a couple of volumes stand out .the liber chronicarum cum figuris et ymaginibus , compiled by the humanist hartmann schedel and printed by anton koberger in 1493 ( the famous nuremberg chronicle ) is one of the `` must see '' of the library .it includes not only a description of pliny s marvelous hominids -headless libyans endowed with eyes and mouth in their chests , cyclopes from india and other _ mirabilia_-but also many images with clear cosmological flavor .this original latin edition includes , among its nearly 1800 engravings , and of most interest to us , a thorough description of the seven ages of the world after creation , beginning with a biblical heptameron , of which already the fourth day shows a nice geocentric ptolemy s universe ( second picture in fig.[fig1 ] ) .the library also owns a venice 1484 copy of dante s divine comedy , with comments by cristoforo landino , which beautifully illustrates beatrice guiding the pilgrim across the astronomical and metaphysical spheres of the celestial paradise .these and dozens of other equally interesting books build up an important part of both literary and pre - scientific culture .however , only a handful of persons ( mainly researchers ) have access to them .just as we all know that the feeling of seeing mars in pictures can not be compared to the experience of actually seeing it through a telescope , in the case of old books , the project of exhibiting them is much more rewarding than just looking at them through the internet . partly because of this, we are organizing an exhibition of rare books with astronomical flavor in the years 2009 - 2010 .many other old or rare books are also included in our list : iacobus valderus sphaera ( 1536 ) , egnatio danti s trattato dell astrolabio ( 1569 ) , clavius commentary of sacro bosco s sphaera ( 1585 ) , joanne voello s de horologiis ( 1608 ) , henrico hofmanno s de octantis ( 1612 ) , blaev s theatrum orbis terrarum ( 1640 ) , and a long etcetera .some of these books were presumably brought to the river plate by the catholic religious order of the jesuits during the xviii century .hence , these books allow us to reconstruct the kind of science imparted in our country at that time . among these books , german jesuit athanasius kircher s thick and numerous volumes , with their gorgeous engravings illustrating all possible areas of knowledge , naturally attract attention : his musurgia universalis ( 1654 ) depicts cosmic harmony as musical sounds emanating from an organ played by god ; his ars magna lucis et umbrae ( 1671 ) shows the analogy between micro and macro cosmos , with man placed both at the center of the universe and of the zodiacal signs ( fig.[fig4 ] ) . in kircher s mundus subterraneus ( 1678 ) the frontispiece shows a lady ( the allegory of astronomy ) inspecting a celestial globe and taking notes with a _ plume doiseau _ , while another feminine character looks through a telescope .
inside the book we find a sketch of the moon as an aqueous body , with spots , mountains and sources , as well as other rough earthy textures scattered on its visible face , and another of the sun , divided into different regions , including an equatorial torrid zone , quite similar to the one of the earth at the time , and covered with drawings of smoke and fires as sources of sun spots .finally , father gaspare schotto s iter exstaticum kircherianum , of 1671 , shows in its very frontispiece a peculiar engraving of kircher himself ( as theodidactus , the disciple of god ) when , guided by the angel cosmiel , he travels across the universe , in a clear parallel to dante s voyage following his _ donna - angelo_. the universe depicted is neither ptolemaic nor copernican , but that of tycho brahe , with the sun orbiting the earth , while the rest of the planets complete their movements around the sun ; a clear eclectic cosmology agreeing well with the author s world view .the planned exhibition will collect not only these and other books , but also historical documents , maps and drawings ( maybe also artifacts ) .hopefully , it will offer a timeline of our understanding of old and renaissance astronomy and , with it , part of the _ imago mundi _ of the time .
|
astronomical and cosmological knowledge up to the dawn of modern science was profoundly embedded in myth , religion and superstition . many of these inventions of the human mind remain today stored in different supports : medieval engravings , illuminated manuscripts , and of course also in old and rare books .
|
the no - cloning theorem states that quantum information can not be copied , i.e. there exists no quantum device whose input is an arbitrarily prepared quantum system and the output consists of two quantum systems whose individual states coincide with that of the input .this and other quantum no - go theorems play an important role in quantum information theory and there exist deep connections with problems in quantum cryptography such as that of eavesdropping . with applications in mind , it is more interesting to derive a quantitative version of the theorem which says how good an approximate cloning machine can do , by providing lower bounds for the error made by any such machine .the quality of the approximate clones can be judged either locally , by comparing the state of each individual clone with the input state , or globally by comparing the joint state of the approximate clones with that of independent perfect clones .note that , because the no - cloning theorem requires that each individual system has the same marginal state as the input , it is the local quality criterion which captures more of its flavor .however if we are interested in the joint state of the output then the global criterion is more useful as it takes into account the correlations between the systems . before stating our cloning problem we would like to mention a few important results in this area and we refer to the review for a more detailed discussion .the problem of universal cloning for finite dimensional pure states was analyzed and solved in .interestingly , when the figure of merit is the supremum over all input states of the fidelity between the ideal and the approximate clones , it was shown that the same cloning machine is optimal from both the local and the global point of view . in the case of continuos variablessystems the gaussian states have received a special attention due to their importance in quantum optics , quantum communication and cryptography .problems in quantum information such as entanglement measures and quantum channels have been partially solved by restricting to the framework of gaussian states and operations . for coherent statesthe optimal cloning problem has been investigated in under the restriction of gaussian transformation .the question whether the optimal cloning map is indeed gaussian has been answered positively in the case of global figure of merit with fidelity , and negatively for the individual figure of merit . 
as noticed in ,the area of optimal cloning for mixed states is virtually open partly due to the technical difficulties compared with the pure state case .however we should mention here the phenomenon of `` super - broadcasting '' which allows not only perfect to ( local ) cloning of mixed states but even purification , if is large enough and the input states are sufficiently mixed .this happens however at the expense of creating big correlations between the individual clones , just as in the case of classical copying .quantum cloning shows some similarities to quantum state estimation , for example the pure state case is easier than the mixed case in both contexts .recently it has been shown that local cloning for pure states is asymptotically equivalent to estimation .this paper makes another step in this direction by pointing out that global cloning has a natural statistical interpretation .the statistics literature dedicated to the classical version of this problem has been an inspiration for this paper and may prove to be useful in future quantum investigations .the problem which we investigate is that of optimal to cloning of _ mixed _ gaussian states using a _ global _ figure of merit . we show that the optimal cloner is gaussian and is similar to the optimal one for the pure state case .our figure of merit is based on the _ norm distance _ rather than fidelity , the latter being more cumbersome to calculate in the case of mixed states .however the result holds as well with other figures of merit such as total variation distance between the distributions obtained by performing quantum homodyne tomography measurements . in quantum state estimationit has been shown that the family of mixed gaussian states appears as asymptotic limit of multiple mixed qubit states .based on this result it can be proved that the problem of to global cloning of mixed qubit states is asymptotically equivalent to that of to cloning of mixed gaussian states which is addressed in this paper . in deriving our resultwe have transformed the optimal cloning problem into an optimal amplifying problem and then used covariance arguments to restrict the optimization to the set of mixed number states of the idler of a non - degenerate parametric amplifier . the argument leading to the conclusion that the optimal state of the idler is the vacuum ,is based on the notion of _ stochastic ordering _ which is also used in deriving the solution to the classical problem of optimal gaussian cloning . 
in section [ sec.ntom ]we extend the solution of the to cloning problem to the case of optimal to cloning of mixed gaussian states .the transformation involves three steps : one first concentrates the modes in one by means of a unitary fourier transform , then amplifies this mode with a phase - insensitive linear amplifier with gain , and finally the amplified state is distributed over the output modes by using another fourier transform with ancillary modes prepared in a thermal equilibrium state identical to that of the input .we consider the problem of optimal cloning for a family of gaussian states of a quantum oscillator , namely the displaced thermal equilibrium states with a given , known temperature .let be the creation and annihilation operators acting on the hilbert space and satisfying the commutation relations = \mathbf{1}$ ] , and let be a thermal equilibrium state where is related to the temperature by , and represent the fock basis vectors of -photons states .let be the displaced thermal states where and consider the quantum statistical models : in the next subsection we will give a statistical interpretation to the optimal figure of merit for cloning as a kind of gap ( deficiency ) between the less informative model and the more informative one .the aim of 1 to 2 global cloning is to transform the state into without knowing .this is however impossible , and this fact has nothing to do with the quantum no - cloning theorem which is about local cloning .in fact the same phenomenon occurs in classical statistics : given one gaussian random variable whose distribution has unknown center , it is impossible to produce two _ independent _ variables with the same distribution . in both classical and quantum set - ups, if this was possible one could determine exactly the displacement by first cloning the state to an infinite number of independent states and then estimating the displacement using statistical methods .thus we will try to perform an approximate cloning transformation which is optimal with respect to a given figure of merit .we consider a _criterion rather than a local , individual one .the classical version of this problem has been previously considered in mathematical statistics and we will adopt here the same terminology by defining the _ deficiency _ of the model with respect to the model as where the infimum is taken over all possible cloning maps with denoting the space of states ( density matrices ) on and is a completely positive and trace preserving map .the norm one of a trace class operator is defined as .we are looking for a map satisfying this figure of merit is very natural from the statistical point of view and can be related with the fidelity through the two sided inequalities although the fidelity is a popular figure of merit , it is more difficult to handle in the case of mixed states .note also that we do not take any average with respect to a prior distribution over the unknown parameter but just consider the cloner which performs best with respect to _ all _ , i.e. we are in a minimax framework as in . 
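For readers who want to experiment with this state family numerically, the sketch below builds displaced thermal states in a truncated Fock basis with QuTiP and evaluates the norm-one distance that enters the figure of merit. The truncation dimension, mean photon number and displacement amplitudes are arbitrary assumptions.

```python
# Displaced thermal states and their norm-one (trace-norm) distance,
# illustrated numerically with QuTiP in a truncated Fock space.
import qutip as qt

N_fock = 60                  # Fock-space truncation (assumed large enough)
n_bar  = 1.5                 # mean thermal photon number (assumed)

rho_th = qt.thermal_dm(N_fock, n_bar)

def displaced_thermal(alpha):
    """D(alpha) * rho_thermal * D(alpha)^dagger."""
    D = qt.displace(N_fock, alpha)
    return D * rho_th * D.dag()

rho_a = displaced_thermal(1.0 + 0.5j)
rho_b = displaced_thermal(1.3 + 0.5j)

# qt.tracedist returns (1/2)||rho_a - rho_b||_1, so the norm-one
# distance used in the text is twice this value.
print("||rho_a - rho_b||_1 =", 2 * qt.tracedist(rho_a, rho_b))
```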
in the classical casethe problem of optimal gaussian cloning is equivalent to that of `` amplifying '' the location of the center of a gaussian variable .we will show that this is also the case in the quantum setup by proving a fairly simple lemma allowing us to simplify the problem and make the connection with the theory of linear amplifiers beautifully exposed in .let us start with the classical case and suppose that we draw a real number from the normal distribution with unknown center and fixed and known variance .we would like to devise some statistical procedure ( for this purpose we may use an additional source of randomness ) whose input is and the output is a pair of independent clones , each having distribution . let us assume for the moment that this can actually be done and note that by performing the invertible transformation no statistical information is lost and moreover the two newly obtained terms are independent , has distribution while does not contain any statistical information about .conversely , starting from an `` amplified '' version of , that is a variable with distribution , one can recover the independent clones by adding an subtracting an independent variable : in fact this is nothing else than the classical analogue of a 50 - 50 beamsplitter where should be replaced by independent input modes carrying gaussian states , and can be seen as the output fields .the moral of this is that perfect cloning would be equivalent to perfect amplifying if any of them was possible , but in fact the two problems also are equivalent when we content ourselves with finding the optimal solution .we will prove this now in the quantum framework .let be a completely positive , trace preserving channel and define the figure of merit for amplification and let be the optimal figure of merit . [ lemma.cloning-amplifier ] if is an optimal 1 to 2 cloning map then the map is an optimal amplifier , where is the beamsplitter transformation which in the heisenberg picture is given by the linear transformation acting on the creation and annihilation operators of the two modes .conversely , if is an optimal amplifier , then the channel is optimal for cloning , and in particular ._ let be an optimal amplifier , i.e. , and the corresponding cloning map , then now let us suppose that there exists another clonig map with , then the corresponding amplifier satisfies butthis is in contradiction with the definition of the optimal figure of merit for amplification .a similar argument can be applied in the other direction . as in other statistical problemsthe search for an optimal solution can be simplified if we can restrict the optimization set by means of a covariance argument .if the cloning map has the property that if we first displace the input and then apply , is equivalent to first applying and then displacing the outputs by the same amount , then we say that is ( displacement ) _ covariant _ : for all and . 
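The classical beamsplitter-style construction described above is easy to verify by simulation: starting from a variable whose mean has been scaled by sqrt(2) with the variance left unchanged, adding and subtracting an independent centred Gaussian of the same variance and rescaling by 1/sqrt(2) yields two uncorrelated variables with the original law. The snippet below is only a sanity check of that algebra, with assumed values for the location and scale.

```python
# Monte Carlo sanity check: an exactly "mean-amplified" Gaussian variable
# W ~ N(sqrt(2)*theta, sigma^2) plus an independent Z ~ N(0, sigma^2)
# yields two uncorrelated clones with the original N(theta, sigma^2) law.
# theta and sigma are arbitrary assumed values.
import numpy as np

rng    = np.random.default_rng(0)
theta  = 2.0          # unknown location (assumed for the simulation)
sigma  = 1.3          # known standard deviation (assumed)
n_samp = 200_000

W = rng.normal(np.sqrt(2) * theta, sigma, n_samp)    # amplified variable
Z = rng.normal(0.0, sigma, n_samp)                   # auxiliary randomness

X_plus  = (W + Z) / np.sqrt(2)
X_minus = (W - Z) / np.sqrt(2)

for name, x in [("clone 1", X_plus), ("clone 2", X_minus)]:
    print(f"{name}: mean {x.mean():.3f} (target {theta}), "
          f"var {x.var():.3f} (target {sigma**2:.3f})")
print("correlation between clones:",
      round(float(np.corrcoef(X_plus, X_minus)[0, 1]), 4))
```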
by convexity of the distance we have where is the `` mean with respect to '' , the analogue of averaging with respect to an invariant probability measure for the case of compact spaces .thus the mean is at least as good as the initial channel .there is a technical point here concerning the fact that may be singular as it is the case for example if maps all states in a fixed one then which is not trace preserving .a more detailed analysis shows however that such cases can be excluded and one can restrict attention to proper covariant and trace preserving channels .it can be shown that the general form of a covariant cloning map is given in the heisenberg picture by the linear transformations between the input mode and the output modes and : where , are two additional modes whose joint state determines the action of the cloning map .the covariance property can be cast in the amplifier framework as well : a map is a covariant amplifier if for all and . as shown in lemma [ lemma.cloning-amplifier ] an optimal clonercan be transformed into an optimal amplifier by using a 50 - 50 beamsplitter to recombine the 2 clones and then keeping one of the outgoing modes .for covariant cloning maps as described above in the heisenberg picture , this leads to the family of covariant amplifiers ( see appendix ) with : where the mode has been eliminated and the amplifier depends only on the state of the mode . taking this into account , we will analyze the optimality problem in its formulation as optimal amplification .we will often use the fact that a particular covariant amplifier is in one to one correspondence with a state of the mode as specified by the above linear transformation in the heisenberg picture , and we emphasize this by writing . by using a further covariance argument we will show that the search for optimal amplifier can be restricted to states which are mixtures of number states , i.e. states which are diagonal in the fock vectors basis . indeed for any displacement covariant amplifier we have let be the phase transformation with the number operator of the mode , and define similar phase transformations for the modes and .the amplifier is covariant with respect to phase transformations if it is now easy to check that if then where .moreover , from we deduce that because the state is invariant under phase transformations and thus where and is the phase averaged state , i.e. a diagonal density matrix in the number operator eigenbasis . the rest of the paper deals with the problem of finding the optimal diagonal state for the mode .let us consider an arbitrary diagonal state of the mode , and denote by the coefficients of the thermal equilibrium state of the mode .the state of the mode is itself diagonal and its coefficients can be written as with fixed coefficients having a complicated combinatorial expression .the optimal amplifier state satisfies the problem has been now reduced to the following `` classical '' one : given a convex family of discrete probability distributions on and an additional probability distribution which does not belong to , find the closest point in with respect to the distance. 
in general such an optimization problem may not have an explicit solution but in our case the notion of stochastic ordering is a key tool in finding the optimum .let and be two probability distributions over .we say that is stochastically smaller than ( ) if the following lemma is a key technical result which will allow us to identify the optimal amplifier map .[ lemma.stoch.ordering ] assume that the mode is prepared in the thermal equilibrium state , and the mode in an arbitrary diagonal state . then the following stochastic ordering holds : where is the distribution of the mode defined in and is the vacuum state ._ we will prove the result in two steps .first we show that the statement can be reduced to the case where the input state is the vacuum rather than a thermal equilibrium state .then we prove the lemma for the mode in the vacuum state . in quantum opticsthe equation describes a non - degenerate parametric amplifier whose general input - output transformation has the form where represents the time and is a susceptibility constant . if both and modes are prepared in the vacuum state then each of the outputs separately will be in the thermal equilibrium state .this means that we can consider that our input mode is one of the _ outputs _ of a parametric amplifier with .thus which together with gives where , and .the right side of the last equation can be interpreted as follows : the modes and are combined using a beamsplitter with transmitivity and one of the emerging beams denoted is further used together with the mode , as inputs of a parametric amplifier characterized by the coefficient . by hypothesis we assumed that the mode is in state , and by construction the mode is in the vacuum , thus the state of is given by the well known binomial formula the only property which we need here is that is the vacuum state if and only if is the vacuum state . in conclusion , by introducing the additional modes and we have transfered the `` impurity '' of the thermal equilibrium state from the mode to the mode , and the stochastic ordering statement can be now reformulated in our original notations as follows : the mode is prepared in the vacuum , and the mode is prepared in a state which is equal to the vacuum if and only if is the vacuum .in addition , the relation should be replaced by under the assumption that is in the vacuum , we proceed with the second step of the proof . because stochastic ordering is preserved by taking convex combinations, we may assume without loss of generality that for .the following formula gives a computable expression of the output two - modes vector state of the amplifier where and . by tracing over the mode we obtain the desired state of .the relation reduces to showing that for all . with the notation we get [ lemma.distance ] we have where is the integer part of ._ because both distributions are geometric , there exists an integer such that for and for , and this proves the first equality . from the proof of lemma [ lemma.stoch.ordering ]we can compute where and thus the integer is given by the integer part of the . 
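The quantity in the last lemma is just the norm-one (l1) distance between two geometric distributions, and the crossing-point argument can be checked directly: below the crossing index one pmf dominates, above it the other does, so the full sum of |p_k - q_k| equals twice the gap between the two cumulative distributions at that index. The geometric ratios in the sketch below are arbitrary assumed values.

```python
# l1 distance between two geometric distributions p_k = (1-a) a^k and
# q_k = (1-b) b^k, checked against the crossing-point expression.
# The ratios a and b are arbitrary assumed values with 0 < a < b < 1.
import numpy as np

a, b = 0.4, 0.7                   # assumed geometric ratios
K    = 2000                       # truncation (tails are negligible here)
k    = np.arange(K)

p = (1 - a) * a**k
q = (1 - b) * b**k

l1_direct = np.abs(p - q).sum()

# Crossing point: p_k >= q_k for k <= n0 and p_k < q_k for k > n0.
n0 = int(np.floor(np.log((1 - b) / (1 - a)) / np.log(a / b)))
l1_crossing = 2 * (p[: n0 + 1].sum() - q[: n0 + 1].sum())

print("direct sum      :", round(l1_direct, 6))
print("crossing formula:", round(l1_crossing, 6), " (n0 =", n0, ")")
```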
in conclusion arrive now to the main result of the paper .we will show that amplifier whose output is closest to the desired state state , is that corresponding to .intuitively this happens because the `` target '' distribution is geometrically decreasing and the closest to it in the family is the output which is the least `` spread '' .this intuition is cast into mathematics through the concept of stochastic ordering and the result of lemma [ lemma.stoch.ordering ] .[ th.main ] the state of the mode for which the corresponding amplifier map is optimal is . in particular , the optimal amplifying and cloning maps are gaussian ._ define and for all .note that by lemma [ lemma.stoch.ordering ] we have for all , and thus .using the relation we obtain the chain of inequalities the first equality follows directly form the definition of .the following inequality restricts the supremum over all to one element . in the second inequalitywe replace the distribution by using the stochastic ordering proved in lemma [ lemma.stoch.ordering ] . in the following equality we use the fact that both distributions and are geometric ( see also lemma [ lemma.distance ] ) .as discussed in section [ sec.covariance ] , we can restrict to covariant amplifiers and the figure of merit in this case is simply , thus the optimal amplifier is .moreover , by the equivalence between optimal cloning and optimal amplification we also obtain the optimal cloning map ( see lemma [ lemma.cloning-amplifier ] ) .the optimal 1 to 2 cloning figure of merit is with as in lemma [ lemma.distance ] ._ this follows from theorem [ th.main ] and lemma [ lemma.distance ] .the derivation of our result on optimal quantum cloning is inspired by a similar one in the classical domain . in this subsectionwe comment on the optimal figures of merit in the two cases as function of the parameter .it is well known that an arbitrary state of a quantum harmonic oscillator has an alternative representation as a function called the wigner function . in the case of the family of displaced thermal equilibrium statesthe wigner function is a two dimensional gaussian with variance and .we have shown that the best quantum amplifier produces a gaussian state with or in terms of the variance which implies that for any we have the relation which indicates the least noisy amplification according to in the fundamental theorem for phase - insensitive amplifiers .let us consider now the classical problem of gaussian cloning as discussed in the beginning of subsection [ subsec.cloningvsamplifying ] : given a gaussian random variable with distribution , we want to produce a pair of independent clones of . by using the equivalence between the cloning and the amplification problems ,the task is equivalent to that of producing a variable with distribution , and the optimal solution to this problem is simply to take !we note that in the classical case the variance of the output is _ always _ equal to the double of the variance of the input , while in the quantum case the output `` noise '' is always higher due to the unitarity conditions imposed by quantum mechanics , and we recuperate the factor 2 in the high temperature limit . in the classical caseone can deduce by a simple scaling argument that the classical figure of merit does not depend on the variance of the gaussian but only on the amplifying factor , and in our case it takes the value . 
as expected , the quantum figure of merit is larger than the classical one to which it converges in the limit of high temperature , .the upper line in figure [ fig.cloning ] represents the optimal figure of merit as function of .an interesting feature of this function is that it appears to have discontinuities in the first derivative precisely at the values of for which the `` crossing point '' makes a jump ( see lemma [ lemma.distance ] ) . for comparisonwe have also plotted the norm one distance between the corresponding gaussian wigner functions which does not seem to show any roughness .the results which we obtained for optimal to cloning can be easily extended to the case of optimal cloning of to cloning of mixed gaussian states .the idea is to first `` concentrate '' the state of the input into a single mode by means of a unitary transformation followed by discarding the uninteresting modes .then , one amplifies the obtained state by a factor ( gain factor ) and distributes it using another unitary transformation applied on the amplified mode together with additional ancillary modes prepared in state .the unitary transformations are the fourier transforms : where are the input modes and are the output modes .the amplifying part is described by the covariant map where is an additional mode prepared in a diagonal state as in the to case , and after amplification the second unitary transformation is performed on and the ancillary modes prepared in the state .an obvious extension of lemma [ lemma.cloning-amplifier ] holds in this case as well , showing the equivalence of optimal cloning and optimal amplification .similarly , lemma [ lemma.stoch.ordering ] and theorem [ th.main ] hold in general for any amplifying factor and we arrive to the conclusion that the optimal amplifier is given by the transformation described in equation with the idler mode in the vacuum state .an interesting fact is that our transformations are similar to those of optimal to broadcasting with the exception that in the last step of the procedure different ancillary states are used : the optimal state for broadcasting is the vacuum while for global cloning it is same thermal equilibrium state which characterizes the family .we have constructed an optimal 1 to 2 cloning map for the family of displaced thermal equilibrium states of a fixed , known temperature .we have considered a global figure of merit based on the supremum over all displacements of the norm distance between the joint state of the approximate clones and that of the ideal ones .the optimal cloner is gaussian and is similar with the optimal cloner for coherent states with global figure of merit and consists of two operations .the amplification step uses a non - degenerate linear amplifier with idler prepared in the vacuum state .the cloning step uses a beamsplitter and another ancillary mode in thermal equilibrium state with the same temperature as the input .computations which have not been included here indicate that the optimal cloning map remains unchanged under global figures of merit using different `` distances '' between states .the local version of the optimal cloning problem would probably lead to a non - gaussian optimum as it is the case with coherent states .the equivalence between cloning and amplifying can be extended to an arbitrary number of input states and number of clones , as well as the proof of the optimal amplifier . 
in the case , the first step is the concentration into one mode by means of a unitary fourier transform , followed by amplification with gain factor , and distribution into output modes using another fourier transform .some other generalizations of the gaussian cloning problem may be considered for future investigations , such as an arbitrary number of modes with larger families of gaussian states .for example in the case of a family of thermal equilibrium states with unknown temperature , one may need to perform an additional estimation of the thermal states in the last step of the cloning which requires ancillary modes prepared in the equilibrium state . finally , the key ingredient in our proof was the notion of stochastic ordering which is worth investigating more closely in the context of quantum statistics ._ acknowledgments ._ we thank richard gill and jonas kahn for discussions and sugesstions .mdlin gu acknowledges the financial support received from the netherlands organisation for scientific research ( nwo ) .we give here a short proof of the fact that the displacement covariant amplifiers have the form .let be a covariant amplifier such that then the dual has a similar property , for all , by choosing and using the weyl relations we get for some scalar factor .now , according to the theorem 2.3 of if is trace preserving and completely positive the constant is of the form where is a state in .thus now , it can be checked that if we start from with the mode prepared in state then describes the channel transformation from the input to the output mode .
|
we construct the optimal 1 to 2 cloning transformation for the family of displaced thermal equilibrium states of a harmonic oscillator , with a fixed and known temperature . the transformation is gaussian and it is optimal with respect to the figure of merit based on the joint output state and norm distance . the proof of the result is based on the equivalence between the optimal cloning problem and that of optimal amplification of gaussian states which is then reduced to an optimization problem for diagonal states of a quantum oscillator . a key concept in finding the optimum is that of stochastic ordering which plays a similar role in the purely classical problem of gaussian cloning . the result is then extended to the case of to cloning of mixed gaussian states .
|
as a simple starting example , consider the classic random effects model , with data modeled as for an unknown constant and with and modeled at random variables having mean 0 and variances and respectively .these random variables are assumed independent across and , though assumptions will generally not be made about their distributions .the present paper advocates treating quantities such as the as random variables , using a capital letter to emphasize this , : as seen below , we will also treat regressor variables , if any , to be random . in short ,the goal of this paper is to encourage analysts to model all quantities as random , even in most cases those fixed by experimental design .advantages to this approach will turn out to include : * much richer , more probing analyses can be devised . *the derivation of estimators and their standard errors can be simplified . * for some large problems, the computation for fully random models can be parallelized , under a method known as software alchemy .let s begin with the . why model them as random ? as a motivating example of the topic here , consider recommnder systems , such those that might be applied to the movie lens data , with ratings of many movies by many users .if one views this ratings data in matrix form , as do gao and owen , with rows and columns corresponding to users and movies , respectively , then the matrix is sparse : in the notation of , for most and , where is an indicator variable for whether user has rated movie . the authors in that paper consider the users to be a random sample from the potential population of all users , and similarly for the movies , and thus use a random effects model .details on that model will be presented shortly , but for now , let s consider only the users , not the movies .then we might model the data using ( [ classic ] ) or ( [ nrand ] ) , with being a measure of ratings variability from user to user .our fr approach might be used , for instance , if we suspect that users who rate a lot of movies become jaded , thus tending to give lower ratings . in other words , there may be a statistical relation between and . if such a relation were estatblished , we may wish to discount the ratings of users having large .having a statistically significant but small relation to ] to investigate this , it is natural to model the as random variables , just as we do for the , say with a model the quantities are now assumed independent conditional on , and their variances , and , are now conditional on as well .the quantity now represents the overall rating tendency for user , after the effect of count of ratings has been removed .the modeling of the as a variance component could be useful in many different application fields .it is known , for example , that there is a negative correlation between family size and household income .if the observation units in a study are children within families , it would be thus useful to incorporate the number of children into the analysis .a study of workers at various companies may be similar to this .it is common to include linear - model terms into ( [ classic ] ) : for a vector of regressors and unknown constant vector .( our old term is now folded in by inserting a 1 element in . ) but it may be helpful to consider the regressors random also , so that our model becomes where again the use of a capital letter indicates a random variable . 
in ( [ randomx ] ) , we may even wish to reverse the usual prediction relationship , predicting one or more of the regressors from the . in the case of recommender systems , for example , the analyst may wish to infer certain information about the user .some values of the regressors may be missing , for instance , and we may wish to impute them using the other variables . this would be even more reason to treat the regressors as random .similarly , consider the * instval * data included with the r random - effects modeling package * lme4*. it s essentially another recommender system , with university students rating their instructors .presumably the higher - quality instructors are chosen by more students , and a valuable approach to studying this could be to try to predict the by the . in this manner , very elaborate , powerful models can be developed .for instance , the fr approach can be used to extend and enhance popular recommender systems methodology such as _ collaborative filtering_.the approach can also be used in models with more than one variance component . for instance , consider the model used by with the movie dats , here is the number of users and is the number of movies . applying our method to this model, we treat the as random variables , and define the analogs of the above to be the row and column observation counts : the and are then random as before .we might also bring in random regressors , for both users and movies : method of moments ( mm ) is an attractive approach here , as it will enable estimation of , for instance , in ( [ nrand ] ) without assuming a particular distribution family for the .let s take ( [ nrand ] ) as our example , using as our pivot quantity .it will be very helpful to define generic versions of the variables : let , , and have the same distributions as , , and .also , let be i.i.d . with the distribution of .then write now apply the `` pythagorean theorem for expectations , '' + var \left [ e(u |v ) \right ] \ ] ] to ( [ seqn ] ) . first , & = & e \left [ n^2 \sigma_a^2 + n \sigma_e^2 \right ] \\ & = & ( \nu_2+\nu_1 ^ 2 ) \sigma_a^2 + \nu_1 \sigma_e^2\end{aligned}\ ] ] where are the population mean and variance of . next , = \mu^2 \nu_2\ ] ] in other words , also , we have 5 unknowns to estimate , , , and and thus need 5 equations for mm .( [ vars ] ) and ( [ vary ] ) provide the right - hand sides of 2 equations , with the left - hand sides being the sample variances of the and the , respectively .the other 3 equations come quite simply : we estimate the by the sample mean and variance of , and estimate by , where .the estimation of more advanced models can be approached similarly , i.e. deriving expressions for variances and means , typically with the aid of the `` pythagorean theorem . 
'' in the regression setting ( [ xigamma ] ) , since we have we can estimate separately using standard linear model methods , and proceed as before .note , though , that with the fr approach , the resulting equations may be nonlinear .for example , consider ( [ famsize ] ) .the details will not be presented here , but the key points are as follows : the term in ( [ seqn ] ) now becomes taking the variance of this quantity then brings in the third and fourth moments of , and produces product terms such as .the former issue is no problem , as the moments are readily estimated from the , but the latter issue means we are now dealing with nonlinear equations .computation then must be done iteratively .it is convenient to not write explicit expressions for the variance of ( [ nneqn ] ) , but simply write at each iteration , we take our current estimates of the , and compute the sample variance of the quantities as our estimate of ( [ newvarn ] ) .in many applications of random effects models , quantities such as and above are fixed in the experimental design .however , one can show that typically the same estimators emerge , whether one assumes a random or fixed .the same is true for regressors . as a quick example , consider ( [ classic ] ) . instead of ( [ vars ] ) , we have also , .say we set up mm by equating the sample average of the to its expectation .the latter would be = \frac{1}{r } \sum_{i=1}^r [ n_i^2 \sigma_a^2 + n_i \sigma_e^2 + ( n_i \mu)^2]\ ] ] even without algebraic simplification , it s clear that the result will be essentially the same as that obtained for the random model .for instance , the term corresponds to the term in ( [ vars ] ) .in essence , the above derivation is implicitly treating the ( constant ) row counts as random , having a uniform distribution on .the significance of this is that one can enjoy the benefits of the fr approach ( sections [ simplified ] and [ sa ] ) even if the quantities truly are fixed .equations in random effects analysis can become quite complex .note for instance the conditions needed merely to establish consistency in .this complexity certainly includes the settings of mm estimation .for instance , even in the simplest model , ( [ vars ] ) seems rather complicated in its form here , but is even more sprawling if the are taken as fixed .we argue here that our fr method can greatly ease the derivation of the mm equations .this is especially true in light of our use of generic variables , as in ( [ seqn ] ) , which can reduce large amounts of equation clutter .consider for example the model ( [ movie ] ) .suppose we need to find the covariance between and .once again , the details will not be shown here , but a glance at ( [ seqn ] ) shows that when we will apply the covariance form of the `` pythagorean theorem , '' the key quantity will be distributed as where is the number of columns that rows and have in common . the distribution of can be estimated empirically , as we did for above .the point is that all this can be done without any explcit writing of the .the difference in complexity of expressions between the fr and fixed- approaches will be quite substantial .in random - effects modeling applications involving very large data sets , a major concern is computation time and space . 
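Returning to the simplest random-group-size model, the moment equations above can be made concrete with a short simulation: generate data, form the per-group sums, plug the sample mean and variance of the group sizes and the overall mean into the two equations, and solve the resulting linear system for the two variance components. The simulated parameter values, the distribution assumed for the group sizes, and the exact form used for the second equation (pooled variance of the individual observations equal to the sum of the two variance components) are illustrative assumptions rather than expressions taken verbatim from the text.

```python
# Hedged sketch of the method-of-moments fit for the random-group-size
# random effects model Y_ij = mu + A_i + E_ij with N_i itself random.
# Parameter values, the N_i distribution and variable names are assumptions.
import numpy as np

rng = np.random.default_rng(1)
r, mu, sig_a, sig_e = 5000, 2.0, 1.0, 0.7       # assumed truth

# Simulate: random group sizes, random group effects, noise.
N  = rng.poisson(4, size=r) + 1                  # assumed N_i distribution
A  = rng.normal(0.0, sig_a, size=r)
Y  = [mu + A[i] + rng.normal(0.0, sig_e, size=N[i]) for i in range(r)]
S  = np.array([y.sum() for y in Y])              # per-group sums
Y_all = np.concatenate(Y)

# Moment estimates of the nuisance quantities.
nu1, nu2 = N.mean(), N.var(ddof=1)               # E(N), Var(N)
mu_hat   = Y_all.mean()                          # overall mean

# Two linear moment equations in (sigma_a^2, sigma_e^2):
#   Var(S) = (nu2 + nu1^2) sigma_a^2 + nu1 sigma_e^2 + mu^2 nu2
#   Var(Y) = sigma_a^2 + sigma_e^2      (assumed form of the 2nd equation)
lhs = np.array([[nu2 + nu1**2, nu1],
                [1.0,          1.0]])
rhs = np.array([S.var(ddof=1) - mu_hat**2 * nu2,
                Y_all.var(ddof=1)])
sig_a2_hat, sig_e2_hat = np.linalg.solve(lhs, rhs)

print("sigma_a^2:", round(sig_a2_hat, 3), "(true", sig_a**2, ")")
print("sigma_e^2:", round(sig_e2_hat, 3), "(true", round(sig_e**2, 3), ")")
```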
as noted in , the reml method of estimation in a two - component model , for example, requires time and memory space , where would be either or in the movie ratings example above .indeed , reported that `` sas proc mixed ran out of memory when we attempted to fit a model with random smoking effects . '' a method that i call software alchemy can help remedy both time and memory problems in contexts of i.i.d .data , , using a very simple idea .say we are estimating a population value , typically vector - valued .one breaks the data into approximately equal - size chunks , finds on each one , and then takes the one s overall estimate to be this changes the original problem into an `` embarrassingly parallel '' computational problem , i.e. easy to compute in parallel , say on machines in a cluster or on cores in a multicore machine .this speeds up computation by a factor of , and since each requires only of the memory space requirement , the method may remedy memory limitation problems in cluster settings .in fact , the same is true even on a single - core machine , since one would still need only of the memory space requirement at each iteration .the procedure also gives us a mechanism for empirical computation of standard errors .it is shown in that if is asymptotically normal , then the same will be true for , and moreover , the latter will have the same asymptotic covariance matrix as the former .thus no statistical efficiency is lost .the point then is that this can be applied profitably to random - effects models if the i.i.d .requirement of software alchemy is satisfied . by making quantities like the random ,this can be done in many cases .consider the model ( [ nrand ] ) , for instance .a set of key quantities in the estimation procedure consists of the . by modeling the as i.i.d ., the same will be true for the , and software alchemy can be used . now consider ( [ nrand1 ] ) , a more subtle setting .let , ... denote the , arranged in the order in which the ratings are submitted , and write where the and are now drawn in an i.i.d .manner from distributions on 1, ... ,r and 1, ... ,c . assuming that submissions come into the rating site in an i.i.d .manner , this structure is reasonable . we can then divide the into chunks , estimate , and as before on each chunk , then average over chunks . note that random effects models can be viewed in terms of mixing distributions , with the advantage , for example , that the entire distribution of might be estimated , rather than just its variance .this might be used to develop prediction intervals , say for a continuous .
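The chunk-and-average recipe takes only a few lines of code: split the i.i.d. data into roughly equal chunks, compute the estimator on each chunk (in parallel if desired), average the chunk estimates, and use their spread for an empirical standard error. The estimator (a trimmed mean) and the simulated data below are assumed placeholders standing in for whatever random-effects estimator is being chunked.

```python
# "Software alchemy": chunk the i.i.d. data, estimate on each chunk,
# average the chunk estimates, and use their spread for a standard error.
# The data-generating process and estimator below are assumed placeholders.
import numpy as np
from concurrent.futures import ProcessPoolExecutor

def estimator(chunk):
    """Placeholder estimator computed on one chunk (here: a trimmed mean)."""
    lo, hi = np.percentile(chunk, [5, 95])
    return chunk[(chunk >= lo) & (chunk <= hi)].mean()

def alchemy(data, n_chunks, parallel=False):
    chunks = np.array_split(data, n_chunks)
    if parallel:
        with ProcessPoolExecutor() as pool:
            ests = np.array(list(pool.map(estimator, chunks)))
    else:
        ests = np.array([estimator(c) for c in chunks])
    theta_hat = ests.mean()
    std_err   = ests.std(ddof=1) / np.sqrt(n_chunks)
    return theta_hat, std_err

if __name__ == "__main__":
    rng  = np.random.default_rng(2)
    data = rng.standard_t(df=5, size=1_000_000)     # assumed i.i.d. sample
    print(alchemy(data, n_chunks=16))
```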
|
a different general philosophy , to be called full randomness ( fr ) , for the analysis of random effects models is presented , involving a notion of reducing or preferably eliminating fixed effects , at least formally . for example , under fr applied to a repeated measures model , even the number of repetitions would be modeled as random . it is argued that in many applications such quantities really are random , and that recognizing this enables the construction of much richer , more probing analyses . methodology for this approach will be developed here , and suggestions will be made for the broader use of the approach . it is argued that even in settings in which some factors are fixed by the experimental design , fr still `` gives the right answers . '' in addition , computational advantages to such methods will be shown .
|
wave problems in unbounded media can occur in many applications in mechanics and engineering such as in acoustics , solid mechanics , electromagnetics , etc .it is well known that analytical solutions for such problems are available only for some special cases . on the contrary ,numerical methods can be applied to many complex problems .physically , for problems in infinite domains , the energy is produced by sources in the region to be analyzed and must escape to infinity . for methods solving the problem on a bounded domain like the finite element method , it introduces the difficulty of an artificial boundary to get a bounded domain. this boundary must be such that the energy crosses it without reflection and special conditions must be specified at the artificial boundary to reproduce this phenomena . generally , these can be classified into local or global boundary conditions . with a global conditionall degrees of freedom ( dofs ) on the boundary are coupled while a local condition connects only neighboring dofs .the first global method which has been used for solving such problems was the boundary element method .this method is well adapted for infinite domains and is described in numerous classical textbooks like .it consists in solving an equation on the boundary of the domain only and the radiation conditions are taken into account analytically .it also reduces the dimension of the problem to a surface in 3d and to a curve in 2d decreasing thus the size of the linear problem to solve .however , the final problem involves full matrices which are also generally non symmetrical .it is also mainly limited to linear problems and to homogeneous domains or otherwise one has to introduce special and complex techniques to deal with non linear or non homogeneous situations .there are also singularities in the integrals which need special attention for the numerical integrations .so this method is interesting and has been extensively used but it can lead to heavy computations when the number of degrees of freedom increases .more information on such techniques can be found in the historical and review papers . in the other approaches, the computational domain is truncated at some distance and boundary conditions are imposed at this artificial boundary .these conditions at finite distance must simulate as closely as possible the exact radiation condition at infinity .an approach leading to a global boundary condition is the dirichlet to neumann ( dtn ) mapping proposed by and in an earlier version by .it consists in dividing the domain into a finite part containing the sources and an infinite domain of simple shape .the solution in the infinite domain is solved analytically , for example by series expansions , and an exact impedance relation is obtained on the boundary between the finite and infinite domains .this relation links the variable and its normal derivative on the whole boundary .the dtn mapping is thus non local and every node on the boundary is connected to all other nodes .this gives a full matrix for the nodes of the boundary which partially destroys the sparse matrix of the fem and increases substantially the computing resources needed to get the solution .the solution has to be found in the exterior domain by analytical or numerical methods .when the analytical solution can be found , it is generally under the form of a series expansion. 
the number of terms in the expansion must be sufficient for an accurate solution which can lead to heavy computations .developments of the method can be found in .an application to the case of wave scattering in plates is also found in .the other methods are local and the condition at a node involves only neighboring nodes which make them less demanding in computing resources and much easier to implement in a finite element code but also less accurate . a first possibility of such approaches is the use of infinite elements as proposed by .it consists in developing special elements with a behavior at infinity reflecting that of analytical solutions obtained for the same type of problems . for wave problems , it involves complex - valued basis functions with outwarding propagation wave - like behavior in the radial direction .the elements were further developed by to considered other coordinate systems such as expansions in prolate coordinates .this method is interesting but the inclusion of infinite elements requires the development of special elements and these elements can depend on decay parameters which have to be accurately chosen .a review of these methods has been proposed by . in the perfectly matched layer proposed by , originally for electromagnetic waves , an exterior layer of finite thicknessis introduced around the bounded domain .the absorption in this domain is increasing as we move towards the exterior such that outgoing waves are absorbed before reaching the exterior truncation boundary .the number of elements in the layer , its thickness , the variation of the absorption properties have to be carefully chosen to optimize the efficiency of the method . this efficiency is better for a layer with a large thickness but this can lead to a significant increase in the number of elements in the finite element model .various developments of the method can be found in and its optimization in .otherwise , various classes of absorbing boundary conditions were also developed by .they consist in the numerical approximation of differential operators on the boundary .for instance , examples of the application of bayliss - turkel conditions are presented by .however , the more accurate boundary conditions involve high order derivatives on the boundary which are difficult to implement in the finite element method .finally , it was proved by that , in fact , the perfectly matched layer and the absorbing boundary conditions were closely connected .the helmholtz equation was also solved with these two boundary conditions by and these conditions were compared and optimized to minimize the reflection .other boundary conditions involving only second order derivatives have also been proposed .they introduce auxiliary variables and systems of equations on the boundary which lead to high order boundary conditions , see for a review of such non - reflecting boundary ( nrbc ) methods .they were mainly developed for acoustic problems but in a local boundary condition for elastic waves has been proposed . in impedance boundary condition in new coordinates was developed for the convected helmholtz equation .for fluid dynamic problems , developed lagrange multipliers for imposing various absorbing boundary conditions for cases where the type and the number of boundary conditions can change , for instance as the flow changes from subsonic to supersonic regimes and its direction varies with time . 
a general review of the methods described in the precedent paragraphs for various dynamic, acoustic and wave propagation problems can be found in .comparisons are also made between the different methods . in the present study , another local method is proposed .this method works on discrete systems directly , in contrast with many existing absorbing boundary conditions which are written on the continuous differential equations and discretized after .the principle of the method is to compute wave propagations in groups of elements near the boundary from the dynamic stiffness matrix of these elements .then , a boundary condition is obtained for cancelling the reflected waves .this condition is finally written as an impedance boundary condition relating the force and displacement degrees of freedom on the boundary .the approach is based on the waveguide theory proposed by and is used to determine absorbing boundary conditions at the truncation boundary of 2d periodic media . only information related to one period , obtained from any standard fe software ( the discrete stiffness and mass matrices and the nodal coordinates )are required to formulate the method .the advantage is that it can be applied to media with various complex behaviors .this paper is outlined as follows . in section 2 ,the methodology for determining absorbing boundary conditions for periodic media is described .then , a discussion for the application of the method to general media is proposed . in section 3 ,a simple application is proposed to show the results of the method in a case where detailed computations can be done . in section 4 ,two examples of finite element computations in acoustics and elastodynamics are presented .they allow to check the efficiency and accuracy of the proposed method for more complex cases .finally , the paper is closed with some conclusions .we suppose that we want to solve a mechanical problem on an infinite domain exterior to the bounded domain ( see figure [ fig01 ] ) .the infinite domain is approximated by the finite domain which is exterior to and is limited by the exterior boundary .we are looking for a solution with radiating condition at infinity which means that the solution should be outgoing near the boundary . near this exterior boundarythe solution can be seen as composed of incident waves denoted and reflected waves . for a perfectly absorbing boundary , one should have .in fact this condition is very difficult to implement in the numerical solutions of such problems .indeed , only the global solution is easily computed but the decomposition into incident and reflected waves is difficult to obtain .the problem is thus to find an appropriate boundary condition to impose on the exterior boundary to finally get on the solution. to be easily included in a finite element model the searched boundary condition should be local and the condition at a node of the boundary should involve only neighboring nodes .the approach proposed in this paper consists in studying this problem by first considering the case of periodic media . for this case ,positive and negative waves and their amplitudes and can be computed by the method presented below . then an exact boundary condition can be formulated for a half - plane boundary .it is further shown how this condition can be approximated by a local condition on the boundary .as homogeneous media are special cases of periodic media , the method presented here applies also to homogeneous media . 
before considering the general case ,a simple example for the klein gordon equation will be presented .consider first the stationary klein gordon equation given by where is the solution and , are real parameters .this equation is discretized with linear two nodes elements such that where , , and are the values of the function at the two nodes of the element .the discretization of the first and second parts of relation ( [ eqc01 ] ) leads , for an element of length , to the matrices \ \ \ \ \ \mathbf{m } = \frac{l}{6}\left [ \begin{array}{cc } 2\ & 1 \\ 1\ & 2 \end{array}\right]\ ] ] and the dynamic stiffness matrix of one element is given by + \frac{l}{6}(k^2-m^2)\left [ \begin{array}{cc } 2\ & 1 \\ 1\ & 2 \end{array}\right]\ ] ] waves of propagating constant are such that leading in an element to where are the components of the matrix .taking into account the symmetries in the matrix , this yields whose solutions are with for , one gets , at first order , meaning there are thus two waves in an element , such that = \left [ \begin{array}{c } 1 \\ d_{11 } + e^{\mu_+}d_{12 } \end{array}\right ] \ \ and \ \ \left [ \begin{array}{c } u_- \\f_- \end{array}\right ] = \left [ \begin{array}{c } 1 \\ d_{11 } + e^{\mu_-}d_{12 } \end{array}\right]\ ] ] and the general solution is given by =a_+ \left [ \begin{array}{c } u_+ \\f_+ \end{array}\right ] + a_- \left [ \begin{array}{c } u_- \\ f_- \end{array}\right]\ ] ] the condition for only outgoing waves is thus on the right boundary and on the left boundary leading respectively to the conditions the condition in the first case is in the second case , one gets we recognize approximations of the classical absorbing boundary conditions which have been obtained here directly from the discretized equations .compared to the classical boundary condition on the right ( and exact in this case ) , the relative error is which depends mainly on the size of the element relatively to the wavelength .the present boundary condition has been obtained entirely from the discrete matrices without any knowledge of the analytical solution of the problem . to estimate the reflection coefficient created by such a boundary condition consider an incident wave on the boundary . a reflected wave created . the total solution and its associated forceare given by writing the boundary condition ( [ eq01a ] ) , for instance by taking the boundary at , yields so the reflection coefficient is finally given by this coefficient is low and of second order when . in this sectionwe present the general outline of the method before starting a more rigorous developement in the following section .so , to extend the precedent example to more general cases , consider a vector function and a force vector acting on a line parallel to the axis as in figure [ fig04 ] . they can be decomposed by a fourier transform as let us suppose that is solution of a linear operator in the fourier domain , this relation yields for a given value of , the precedent relation has non zero solutions for such that the determinant let us denote by the positive solutions such that or and the energy flux is directed towards positive values of x. we denote by the other solutions .we have the decomposition in the same way , for the force components where and are the force components respectively associated to and .if the boundary is such that only positive waves exists at proximity , one has where and are the matrices whose columns are respectively and .eliminating the coefficients , one gets with and means the convolution . 
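to make the klein - gordon example above more concrete, the following small scala sketch assembles the element dynamic stiffness from the standard linear - element matrices k and m quoted above (using the convention d = k - (k^2 - m^2) m, which may differ in sign from the paper's notation), solves the one - element dispersion relation d12 lambda^2 + 2 d11 lambda + d12 = 0 for lambda = e^mu, and compares the outgoing discrete propagation constant with the analytical value e^{i kappa l}. all names are illustrative and the sketch is our own derivation under these standard assumptions, not code from the paper.

....
object KleinGordonSketch {
  /** discrete outgoing propagation factor lambda = e^mu for one linear element
    * of length l and effective wavenumber kappa = sqrt(k^2 - m^2), propagating case;
    * returns (Re(lambda), Im(lambda)) */
  def outgoingLambda(kappa: Double, l: Double): (Double, Double) = {
    val d11 = 1.0 / l - kappa * kappa * l / 3.0   // K11 - kappa^2 * M11
    val d12 = -1.0 / l - kappa * kappa * l / 6.0  // K12 - kappa^2 * M12
    // one-element dispersion relation: d12*lambda^2 + 2*d11*lambda + d12 = 0
    val disc = d12 * d12 - d11 * d11              // > 0 in the propagating regime
    require(disc > 0.0, "element too large: the discrete wave is evanescent")
    // the root with positive imaginary part is the wave travelling towards +x
    (-d11 / d12, -math.sqrt(disc) / d12)
  }

  def main(args: Array[String]): Unit = {
    val (kappa, l) = (1.0, 0.1)                   // kappa*l = 0.1, about 63 elements per wavelength
    val (re, im) = outgoingLambda(kappa, l)
    val discretePhase = math.atan2(im, re)        // discrete approximation of kappa*l
    println(f"|lambda| = ${math.hypot(re, im)}%.6f (1 expected for a propagating wave)")
    println(f"discrete phase = $discretePhase%.6f vs analytical kappa*l = ${kappa * l}%.6f")
  }
}
....

the phase error of the discrete propagation constant shrinks roughly with the square of the element size, which is consistent with the remark above that the accuracy of the resulting absorbing condition is governed mainly by the size of the element relative to the wavelength.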
in the following this boundary conditionwill be computed directly from the discrete equations for general linear media .consider an infinite two dimensional periodic medium , as shown in figure [ fig02 ] .the elementary period is limited by the domain \times [ 0 , b_2] ] and .the discrete dynamic equation of a cell ( an elementary period ) obtained from a fe model at a frequency and for the time dependence is given by : where , and are the stiffness , mass and damping matrices , respectively , is the loading vector and the vector of the degrees of freedom ( dofs ) . introducing the dynamic stiffness matrix , decomposing the dofs into boundary and interior dofs , and assuming that there are no external forces on the interior nodes , result in the following equation : \left [ \begin{array}{c } \mathbf{q}_{b}\\ \mathbf{q}_{i } \end{array } \right ] = \left [ \begin{array}{c } \mathbf{f}_{b}\\ \mathbf{0 } \end{array } \right ] \label{eq04}\ ] ] the interior dofs can be eliminated using the second row of equation ( [ eq04 ] ) , which results in the first row of equation ( [ eq04 ] ) becomes which can be written as itshould be noted that only boundary dofs are considered in the following .the periodic cell is assumed to be meshed with an equal number of nodes on their opposite sides .the boundary dofs are decomposed into left , right , bottom , top dofs and associated corners , , and as shown in figure [ fig03 ] .the longitudinal dofs vector is defined as ] .this can be expressed as where the matrices and depend on the wavenumber and are given by \qquad \mathbf{w}_1 = \left [ \begin{array}{cc } \mathbf{o } & \mathbf{o}\\ \mathbf{i } & \mathbf{o}\\ \mathbf{o } & \mathbf{o}\\ \mathbf{o } & \mathbf{i}\\ \mathbf{o } & e^{ikb_2}\mathbf{i}\\ \mathbf{o } & \mathbf{o } \end{array } \right ] \label{eq16}\ ] ] the equilibrium conditions between adjacent cells are given by that can be written as where denotes the operator of complex conjugate and transpose .combining ( [ eq13 ] ) , ( [ eq15 ] ) and ( [ eq18 ] ) , lead to that can be written as where the eigenvalue and the eigenvector are thus solutions of a quadratic eigenvalue problem .it is convenient to transform the problem ( [ eq20 ] ) into another linear eigenvalue problem as \left [ \begin{array}{c } \mathbf{q}_r\\ \widetilde{\mathbf{q}}_r \end{array } \right ] = \left [ \begin{array}{cc } \mathbf{o } & \mathbf{a}_3\\ -\mathbf{a}_0 & -(\mathbf{a}_1+\mathbf{a}_2 ) \end{array } \right ] \left [ \begin{array}{c } \mathbf{q}_r\\ \widetilde{\mathbf{q}}_r \end{array } \right ] \label{eq22}\ ] ] with . from equations ( [ eq12 ] ) and ( [ eq13 ] ), one can notice that moreover , from ( [ eq16 ] ) , we have and from ( [ eq21 ] ) it can be easily shown by taking the determinant of the matrix in relation ( [ eq20 ] ) that if is an eigenvalue for the wavenumber , is also an eigenvalue for the wavenumber .these represent a pair of positive and negative - going waves , respectively .the eigensolutions of equation ( [ eq22 ] ) can be split into two sets of and eigensolutions with , which are denoted by and respectively , with the first set such that . 
in the case , the first set of positive - going waves must contain waves propagating in the positive direction such that where is the reduced set of boundary force dofs of left cells on right cells and is given by = \mathbf{w}_0^*\mathbf{f}_j^l = \mathbf{w}_0^*\mathbf{d}_l\left ( \mathbf{w}_0 + e^{i\mu_j}\mathbf{w}_1\right ) \mathbf{q}_j \label{eq26}\ ] ] in the second set of negative - going waves , the eigenvalues are associated with waves such that . with the eigenvector and the force component of relation ( [ eq26 ] ), we introduce the state vector = \left [ \begin{array}{c } \mathbf{q}_j(k ) \\( \mathbf{a}_1(k)+e^{i\mu_j(k)}\mathbf{a}_3(k))\mathbf{q}_j(k ) \end{array } \right]\ ] ] in this relation is the eigenvector associated to .one can also introduce \label{eq27}\ ] ] in this relation is the eigenvector associated to since we have seen that is also an eigenvalue of ( [ eq20 ] ) for the wavenumber . from relation ( [ eq20 ] ) written for the eigenvector , multiplying this relation by and then on the left by , one gets in the same way , writing relation ( [ eq20 ] ) for the eigenvector , taking the transpose of the relation , using relations ( [ eq25 ] ) and multiplying on the right by , leads , after a global multiplication by , to the following relation the difference between the two precedent relations yields in the case , we get now it is possible to compute the product by the result of relation ( [ eq28 ] ) has been used in the case and is a factor depending on the eigenvector .this gives orthogonality relations on the statevectors associated to the eigenvalues .figure [ fig04 ] presents the periodic medium near the exterior boundary . in this domainthe solution is described by relation ( [ eq00 ] ) , yielding , respectively for the displacement and force components , with the force components given by relation ( [ eq26 ] ) . introducing the state vector and decomposing this solution into the different waves , we get the last relation is the approximation obtained by the finite element computation of wave solutions presented before .the condition of outgoing waves means that there is no incoming wave , so the amplitudes associated with incoming waves must equal zero .this condition is obtained by in this relation are the vectors associated to the negative going waves , given by relation ( [ eq27 ] ) . using relation ( [ eq29 ] ) , one gets for for the amplitudes of the negative going waves . introducingthe matrix with lines given by leads to decomposing now into its displacement and force components , doing the same thing for ] .numerical green s functions are calculated for zero and second order boundary conditions .the excitation is at point and the analytical solution in infinite space is given by formula ( [ eq45 ] ) .the green s functions are presented in figure [ fig10 ] for a point at on the horizontal axis and in figure [ fig11 ] for a point at along the diagonal .good agreements between the two types of absorbing boundary conditions and the analytical solution can be observed .both boundary conditions fail at low frequencies because the size of the domain is too small compared to the wavelength .similarly the error for high frequencies are of same orders for both boundary conditions . 
for intermediate frequencies the error is lower for the second order boundary condition .this is more clearly seen in figure [ fig12 ] where the relative error for the point is plotted versus the frequency .the same results are presented in figure [ fig13 ] for the point and a mesh density of elements .the solution is clearly much better at high frequencies meaning that the errors seen in figures [ fig10 ] and [ fig11 ] can be explained by elements too large for these frequencies and not by the quality of the boundary conditions . in figure [ fig14 ]the domain is now with elements , so with elements of the same size as for figures [ fig10 ] and [ fig11 ] .results are plotted for the point .now the improvement is clearly seen for low frequencies while there is no difference for high frequencies . the analytical solution in direction of the two dimensional elastodynamics case when submitted to a unit force at origin in direction ,is given by \ ] ] with : \\ b & = & -2a + \left[h_0(k_tr ) + \beta^2h_0(k_lr)\right]\end{aligned}\ ] ] where and are the hankel functions of first type , of orders zero and one respectively . the wavenumbers are and for the longitudinal and transverse waves respectively .the velocities , and the ratio between them are given by , in this example , the same global meshes as for the acoustic case are used .the boundary condition is computed from the square four nodes elements .the material is steel with , and and plane strain conditions are used in the computation .the sizes of the periodic cell can be or . in figure [ fig15 ] ,numerical solutions are compared with analytical solutions for the point and for different sizes of the cell .the curves represent the real and imaginary parts of the first component of the displacement for an excitation at origin in direction 1 . the same remarks as the previous examples can be made : a denser mesh leads to lower errors over the high frequency band ] , numerical results are different from the analytical solutions due to the finite size of the domain . in figure [ fig16 ] ,the results for these two points are presented when the size of the domain is increased successively with and . in this case , when the size of the cell is fixed to , a larger domain leads to lower errors over the low frequency band $ ] .the results are the same as previously in the high frequency band because in this domain the precision depends on the size of the elements and not on the size of the global domain .some improvements are however seen for intermediate frequencies . in figure [ fig17 ] the error for boundary conditions of zero andsecond orders are plotted versus the frequency .the second order condition is much more accurate for intermediate frequencies as for the acoustic case .in this paper , a method to determine absorbing boundary conditions for two dimensional periodic media has been presented .it works directly on the discretized equations .the boundary condition is first obtained as a global impedance relation and is then localized into boundary conditions of various orders . in two examples ,good agreements were observed when compared with analytical solutions . in any case , the proposed method is efficient because it requires only the discrete dynamic matrices which can be obtained by any standard finite element software .this method could be used for media with more complex behaviors than those presented in the precedent examples .p. bettess , c. emson , t.c .chiam , numerical methods in coupled systems , edited by r.w .lewis , p. 
bettes and e. hinton , a new mapped infinite element for exterior wave problems , chap 17 , ( 1984 ) 489 - 504 .t. strouboulis , r. hidajat , i. babuska , the generalized finite element method for helmholtz equation .part ii : effect of choice of handbook functions , error due to absorbing boundary conditions and its assessment , comput .methods appl .mech . engrg . , 197 ( 2008 ) 364 - 380 .o. guasch , r. codina , an algebraic subgrid scale finite element method for the convected helmholtz equation in two dimensions with applications in aeroacoustics , comput .methods appl ., 196 ( 2007 ) 4672 - 4689 .
|
this paper proposes a new method , in the frequency domain , to define absorbing boundary conditions for general two - dimensional problems . the main feature of the method is that it can obtain boundary conditions from the discretized equations without much knowledge of the analytical behavior of the solutions and is thus very general . it is based on the computation of waves in periodic structures and needs the dynamic stiffness matrix of only one period in the medium , which can be obtained by standard finite element software . boundary conditions at various orders of accuracy can be obtained in a simple way . this is then applied to study some examples for which analytical or numerical results are available . good agreement between the present results and analytical solutions allows us to check the efficiency and the accuracy of the proposed method . number of pages : 47 . number of figures : 17 . keywords : absorbing boundary conditions , waveguide , finite element , periodic medium .
|
multiple - input multiple - out ( mimo ) wireless communications have been witnessed to offer large gains in spectral efficiency and reliability .efficient designs of signal transmission schemes include space - time ( st ) codes over mimo systems have been active areas of research over the past decade .orthogonal st block code ( ostbc ) is one of the most powerful st code designs due to its simple low - complexity maximum - likelihood ( ml ) decoding while achieving maximum diversity gain .however , it is found that ostbc has a low code rate that can not be above symbols per channel use for more than two transmit antennas . to improve the code rate of the stbc ,numerous code designs have been developed including quasi - orthogonal stbc and stbc based on algebraic number theory .two typical designs of those codes are threaded algebraic st ( tast ) codes and cyclic division algebra based st codes which have been shown to obtain full rate and full diversity .the full rate means symbols per channel use for transmit antennas .note that the ostbc for two transmit antennas , also namely alamouti code , has a rate of symbol per channel use only , i.e. , two independent information symbols are sent through a codeword occupying two symbol intervals . since most of the high - rate stbc are designed based on the rank criterion which was derived from the pairwise error probability of the st codes with ml decoding , they have to rely on the ml decoding to collect the full diversity .considering that the ml decoding complexity grows exponentially with the number of information symbols embedded in the codeword , the high - rate stbc obtain the full diversity at a price of the large decoding complexity .recently , several fast decodable stbc have been proposed to reduce the high decoding complexity while not compromising too much performance gains .mimo systems with linear receivers have also received a lot of research attention and information - theoretic analysis has been done in .efficient designs of st codes for transmission over mimo systems with linear receivers have also been studied in .linear receiver based stbc designs are attractive because they can exploit both gains of efficiency and reliability of the signal transmission over mimo systems with a low - complexity receiver such as zero - forcing ( zf ) or minimum mean square error ( mmse ) receiver . similar to the ostbc , the stbc designs in can also obtain full diversity with linear receivers .however , it is found that the rate of the linear receiver based stbc is upper bounded by one , though it is larger than that of ostbc . to address the complexity and rate tradeoff , a partial interference cancelation ( pic ) group decoding for mimo systems was proposed and the design criterion of stbc with pic group decoding was also derived in . in fact , the pic group decoding can be viewed as an intermediate decoding approach between the ml receiver and the zf receiver by trading a simple single - symbol decoding complexity for a higher code rate more than one symbol per channel use .very recently , a systematic design of stbc achieving full diversity with pic group decoding was proposed in .however , the decoding complexity of the stbc design in is still equivalent to a joint ml decoding of symbols . in order to further reduce the decoding complexity , in this paperwe propose a new design of stbc with pic group decoding which can obtain both full diversity and low - complexity decoding , i.e. 
, only half complexity of the stbc in .our proposed stbc is featured as an alamouti block matrix , i.e. , every element of the alamouti code matrix is replaced by an elementary matrix and each elementary matrix is designed from multiple diagonal layers .it should be mentioned that in the similar alamouti block matrix was used where each entry of the alamouti matrix was replaced by a toeplitz stbc .the major difference between the stbc in and our proposed stbc lie in the construction of elementary matrix , i.e. , the toeplitz matrix used in and the multiple diagonal layers used in our codes .while the stbc in achieves the full diversity with linear receivers but the code rate is not more than .it will be shown that our proposed stbc can achieve full diversity under both ml and pic group decoding and the code rate can be up to when full diversity is obtained .our simulation results demonstrate that the codes can obtain similar good performance to the codes in but a half decoding complexity is reduced .this paper is organized as follows . in sectionii , the system model is described and the pic group decoding algorithm is reviewed . in section iii ,a design of stbc achieving full diversity with a reduced - complexity pic group decoding is proposed .the full diversity is proved when pic group decoding is applied . in section iv ,a few code design examples are given .in section v , simulation results are presented . finally , we conclude the paper in section vi . _notation : _ throughout this paper we use the following notations .column vectors ( matrices ) are denoted by boldface lower ( upper ) case letters .superscripts and stand for transpose and conjugate transpose , respectively . denotes the field of complex numbers . denotes the identity matrix , and denotes the matrix whose elements are all . represents the determinant of the matrix . denotes the kronecker product . denotes the frobenius norm of matrix ( vector ) .we consider a mimo system with transmit antennas and receive antennas where data symbols , are sent to the receiver over block fading channels . before the data transmission , the information symbol vector , selected from a signal constellation such as qam ,are encoded into a space - time block codeword matrix of size , where is the block length of the codeword . for any and , the -th entry of transmitted to the receiver from the -th antenna during the -th symbol period through flat fading channels .the received space - time signal at receive antennas , denoted by the matrix , can be expressed as where is the noise matrix of size whose elements are of i.i.d . with circularly symmetric complex gaussian distribution with zero mean and unit variance denoted by , is the channel matrix whose entries are also i.i.d . with distribution , denotes the average signal - to - noise ratio ( snr ) per receive antenna and is the normalization factor such that the average energy of the coded symbols transmitting from all antennas during one symbol period is one .we suppose that channel state information ( csi ) is known at receiver only .therefore , the signal power is allocated uniformly across the transmit antennas in the absence of transmitter csi . in this paper, we consider that information symbols are coded by linear dispersion stbc as . to decode the transmitted sequence , we need to extract from . 
through some operations , we can get an equivalent signal model from ( [ eqn : y ] ) as where is a vector of length , is a noise vector , and is an equivalent channel matrix of size with column vectors for , i.e. , ] and ] , ^t=\mathbf{\theta } [ \begin{array}{cc } s_{3 } & s_{4 } \end{array } ] ^t ] , and ^t=\mathbf{\theta } [ \begin{array}{cc } s_{7 } & s_{8 } \end{array } ] ^t ] and ^t ] as long as , i.e. , for not all zero , and where is a column vector . to prove ( [ eqn : true ] ), we use the self - contradiction method as follows .suppose that for not all zero , and for any with ^t ] and not all zero , .recall , where is the row of the matrix in ( [ eqn : upsilon ] ). we can rewrite ( [ eqn : cond1 ] ) as for not all zero , where is the ( )-th entry of the matrix .note that the rotation matrix in ( [ eqn : cyclo ] ) is designed so that , for not all zero .it contradicts the result ( [ eqn : cond2 ] ) based on the assumption of ( [ eqn : assump ] ) .hence , ( [ eqn : true ] ) holds , i.e. , any non - zero linear combination of the vectors in over does not belong to the space linearly spanned by all the vectors in the vector groups ] .similarly , we can prove that any non - zero linear combination of the vectors in over does not belong to the space linearly spanned by all the vectors in the remaining vector groups , for .note that is a row permutation of for , respectively .we prove that any non - zero linear combination of the vectors in over does not belong to the space linearly spanned by all the vectors in the remaining vector groups , for . in the above, we prove that for the stbc ( [ eqn : new ] ) with pic group decoding the second condition in _ proposition [ prop1 ] _ is satisfied when there is only one receive antenna . for , the equivalent channel matrix will be a stacked matrix of ( [ eqn : hh ] ) with the number of columns unchanged .it is easy to see that when there are multiple receive antennas , the second condition of _ proposition [ prop1 ] _ is also satisfied .the proof of _ theorem [ theorem2 ] _ is completed . for the proposed stbc ( [ eqn : new ] ) with any number of layers and the pic - sic group decoding we have the following results .[ theorem3 ] consider a mimo system with transmit antennas and receive antennas over block fading channels .the stbc as described in ( [ eqn : new ] ) with diagonal layers is used at the transmitter .the equivalent channel matrix is . if the received signal is decoded using the pic - sic group decoding with the grouping scheme and with the _ sequential order _ , where for , i.e. , the size of each group is equal to , then the code achieves the full diversity. the code rate of the full - diversity stbc can be up to symbols per channel use .the proof is similar to that of _ theorem [ theorem2]_. note that for the code in _ lemma 1 _ can be written as an alternative form similar to the one in ( [ eqn : hhh ] ) except the expansion of column dimensions .it is then not hard to follow the proof for the case of in section iii - c to prove _ theorem [ theorem3 ] _ by showing that the criterion in _ proposition [ prop2 ] _ is satisfied .the detailed proof is omitted .in this section , we show a few examples of the proposed stbc given in ( [ eqn : new ] ) . 
for and .according to the code design in ( [ eqn : new ] ) , we have ,\ ] ] where , { \rm{~and~ } } \mathbf{c}^2_{2,3,2 } = \left[\begin{array}{cc } x_{3,1 } & 0 \\ x_{4,1 } & x_{3,2}\\ 0 & x_{4,2 } \end{array } \right].\ ] ] the code rate of is .the equivalent channel matrix of the code is ,\ ] ] where and with . to achieve the full diversity ,the grouping scheme for the pic group decoding of the code is , , , and . for given ,the code achieving full diversity with pic group decoding can be designed as follows , ,\ ] ] where \\ \mathbf{c}^2_{4,5,2 } & = & \left[\begin{array}{cccc } x_{3,1 } & & & \\x_{4,1 } & x_{3,2 } & & \\ & x_{4,2 } & x_{3,3 } & \\ & & x_{4,3 } & x_{3,4 } \\ & & & x_{4,4 } \\\end{array } \right ] .\end{aligned}\ ] ] the code rate of is .the equivalent channel of the code is ,\end{aligned}\ ] ] where and with being the row of the matrix ^t\otimes\mathbf { \theta}$ ] for and being a rotation matrix of -by- . to achieve the full diversity ,the grouping scheme for the pic group decoding of the code is , , , and .table i shows the decoding complexity of the new codes compared with the codes in .it can be seen that the proposed stbc in ( [ eqn : new ] ) have more groups than the stbc in , but each group has half number of symbols to be jointly coded .therefore , the pic group decoding complexity is reduced by half .this mainly attributes to the introduction of alamouti code structure into the code design in ( [ eqn : new ] ) and the group orthogonality in the code matrix can reduce the decoding complexity without sacrificing any performance benefits .[ 1 ] .comparison in pic group decoding complexity [ cols="^,^,^,^,^ " , ]in this section , we present some simulation results of the proposed stbc and compare it to the existing codes .flat mimo rayleigh fading channels are considered .1 shows the bit error rate ( ber ) simulation results of the proposed code in ( [ eqn : f4 ] ) with zf , blast detection , pic group decoding and ml detection in a system , respectively .the signal modulation is 16qam and then the bandwidth efficiency is bps / hz . it is clearly that the ml decoding achieves the best ber performance .the pic group decoding can obtain the full diversity as the ml decoding does , but it suffers a loss of coding gain around 1 db . neither zf nor blast detection method can obtain the full diversity . fig .2 illustrates the performance comparison among the proposed code , the tast code , and the perfect st code for a system .it is known that both tast and perfect st codes can obtain full diversity and full rate ( rate- ) for mimo systems , but they are designed based on the ml decoding whose complexity is high , i.e. , a joint -symbol ml decoding . when pic group decoding is applied , it is seen from fig .2 that both tast code and perfect st code lose the diversity gain . for the proposed code , the full diversity can be obtained for both ml and pic group decoding .it should be mentioned that the code with pic group decoding achieves the full diversity with a very low decoding complexity , i.e. , a double - symbol ml decoding , much lower than that of tast and perfect st codes ( a joint 16-symbol ml decoding ) .moreover , the proposed code is compared with other code designs based on pic group decoding such as codes and in , and guo - xia s code in ( * ? ? 
?3 shows that the ber performance of the codes with the pic group decoding for transmit and receive antennas over rayleigh block fading channels .it can be seen that the codes in , guo - xia s code and the code all obtain full diversity and have very similar performance .it should be mentioned that the new code only has half decoding complexity of the code in .guo - xia s code is indeed a case of the systematic design of the new code in ( [ eqn : new ] ) .this can be seen from the equivalent channel matrix of guo - xia s code shown in ( * ? ? ?* eq . ( 41 ) ) and that of the code in ( [ eqn : h4 ] ) .moreover , the code in has db loss compared to the code due to its high bandwidth efficiency . in a mimo system with antennas and receive antennas at bps / hz.,width=326 ] , tast code and perfect st code for a mimo system with transmit antennas and receive antennas at bps / hz.,width=326 ]in this paper , a design of full - diversity stbc with reduced - complexity pic group decoding was proposed .the proposed code design can obtain full diversity under both ml decoding and pic group decoding .moreover , the decoding complexity of the full diversity stbc is equivalent to a joint ml decoding of symbols for transmit antennas while the rate can be up to symbols per channel use .for example , in a mimo system with transmit antennas the full diversity can be achieved by the proposed codes with double - symbol ml decoding complexity and a code rate of .simulation results were shown to validate that the proposed codes achieve the full diversity gain with a low complexity decoding .although in this paper only alamouti code is used in our new design , the method can be generalized to a general ostbc .for miso , the received signal is given by .considering the code structure in ( [ eqn : new ] ) , we can rewrite in the matrix form as follows , = \sqrt{\frac{\rho}{\mu } } \left[\begin{array}{cc } \mathbf{c}^{1 } & \mathbf{c}^{2}\\ -\left(\mathbf{c}^{2}\right)^\ast & \left(\mathbf{c^{1}}\right)^\ast\\ \end{array}\right ] \left[\begin{array}{c } \mathbf{h}_{1}\\ \mathbf{h}_{2 } \end{array}\right ] + \left[\begin{array}{c } \mathbf{w}_{1}\\ \mathbf{w}_{2 } \end{array } \right],\ ] ] where , and .we further have using shown in ( [ eqn : rm ] ) , we then have therefore , & = & \sqrt{\frac{\rho}{\mu } } \left[\begin{array}{cc } \mathcal{h}_{1 } & \mathcal{h}_{2}\\ -(\mathcal{h}_{2})^\ast & ( \mathcal{h}_{1})^\ast \end{array}\right ] \left[\begin{array}{c } \mathbf{s}^{1}\\ \mathbf{s}^{2 } \end{array}\right ] + \left[\begin{array}{c } \mathbf{w}_{1}\\ -\mathbf{w}_{2}^ * \end{array } \right]\nonumber \\ & = & \sqrt{{\rho}/{\mu}}\mathcal{h}\mathbf{s}+\mathbf{w}^{'},\end{aligned}\ ] ] where the equivalent channel matrix is given by , \nonumber \end{aligned}\ ] ] with , i=1,2;\,\,\,p=1,2,\cdots , p . \end{aligned}\ ] ] s. m. alamouti , `` a simple transmit diversity technique for wireless communication , '' _ ieee j. sel .areas commun . _ ,14511458 , oct .v. tarokh , n. seshadri , and a. calderbank , `` space - time codes for high data rate wireless communications : performance criterion and code construction , '' _ ieee trans .inf . theory _ ,44 , pp . 744765 , mar .v. tarokh , h. jafarkhani , and a. r. calderbank , `` space - time block codes from orthogonal designs , '' _ ieee trans .inf . theory _ ,14561467 , july 1999 . also ,`` corrections to ` space - time block codes from orthogonal designs ' , '' _ ieee trans .inf . theory _ ,46 , p. 314, jan . 2000 .k. lu , s. 
fu , and x .-xia , `` closed form designs of complex orthogonal space - time block codes of rates for or transmit antennas , '' _ ieee trans .inf . theory _ ,43404347 , dec . 2005. h. wang and x .-g xia , `` upper bounds of rates of complex orthogonal space - time block codes , '' _ ieee trans .inf . theory _, vol 49 , pp . 27882796 , oct .h. jafarkhani , `` a quasi - orthogonal space - time block code , '' _ ieee trans .49 , pp . 14 , jan .o. tirkkonen , a. boariu , and a. hottinen , `` minimal non - orthogonality rate space - time block code for tx antennas , '' in _ proc .ieee 6th int .symp . on spread - spectrum tech . and appl .( isssta 2000 ) _ , sep . 2000 , pp .429432 .n. sharma and c. b. papadias , `` full - rate full - diversity linear quasi - orthogonal space - time codes for any number of transmit antennas , '' _eurasip j. applied signal processing _ , vol .9 , pp . 12461256 , aug .2004 .g. wang , h. liao , h. wang , and x .-xia , `` systematic and optimal cyclotomic lattices and diagonal space - time block code designs , '' _ ieee trans .inf . theory _ ,33483360 , dec . 2004 .g. wang and x .- g .xia , `` on optimal multilayer cyclotomoc space - time code designs , '' _ ieee trans .inf . theory _ ,51 , pp . 11021135 , mar .2005 .p. elia , k. r. kumar , s. a. pawar , p. v. kumar , and h .- f .lu , `` explicit space - time codes achieving the diversity - multiplexing gain tradeoff , '' _ ieee trans .inf . theory _ ,52 , pp . 38693884 , sep . 2006 .y. jiang , m. k. varanasi , and j. li , `` performance analysis of zf and mmse equalizers for mimo systems : an in - depth study of the high snr regime , '' _ ieee trans .inf . theory _ , to appear .http://www.sal.ufl.edu/yjiang/papers/vbitwcr6.pdf .j. liu , j .- k .zhang , and k. m. wong , `` full - diversity codes for miso systems equipped with linear or ml detectors , '' _ ieee trans .inf . theory _ ,54 , pp . 45114527 ,y. shang and x .- g .xia , `` space - time block codes achieving full diversity with linear receivers , '' _ ieee trans .inf . theory _ ,45284547 , oct .2008 .x. guo and x .- g .xia , `` on full diversity space - time block codes with partial interference cancellation group decoding , '' _ ieee trans .inf . theory _ , vol .55 , pp . 43664385 , oct .2009 . also , `` corrections to ` on full diversity space - time block codes with partial interference cancellation group decoding ' , '' http://www.ece.udel.edu/~xxia/correction_guo_xia.pdf .w. zhang , t. xu , and x .-xia , `` two designs of space - time block codes achieving full diversity with partial interference cancellation group decoding , '' _ ieee trans .inf . theory _ , submitted . also , http://arxiv.org/abs/0904.1812v3
|
partial interference cancellation ( pic ) group decoding proposed by guo and xia is an attractive low - complexity alternative to the optimal processing for multiple - input multiple - output ( mimo ) wireless communications . it can well deal with the tradeoff among rate , diversity and complexity of space - time block codes ( stbc ) . in this paper , a systematic design of full - diversity stbc with low - complexity pic group decoding is proposed . the proposed code design is featured as a group - orthogonal stbc by replacing every element of an alamouti code matrix with an elementary matrix composed of multiple diagonal layers of coded symbols . with the pic group decoding and a particular grouping scheme , the proposed stbc can achieve full diversity , a rate of and a low - complexity decoding for transmit antennas . simulation results show that the proposed codes can achieve the full diversity with pic group decoding while requiring half decoding complexity of the existing codes . diversity techniques , space - time block codes , linear receiver , partial interference cancellation .
|
this report continues our work on bespaced by introducing new operators for filtering , folding and normalization of bespaced descriptions .the new operators are inspired by operations from functional programming languages and are applied to spatio - temporal models .bespaced is our framework for spatio - temporal reasoning .bespaced is characterised by * a description language that is based on abstract datatypes .* means to reason about descriptions formulated in the description language .related work comprises process algebra based formalisms and . similar to this work , we have created a formal specification language as part of our bespaced framework . in our case , specifications are instances of abstract datatypes and follow a more functional programming style .further related is work on type systems in connection with this process algebra work that has been introduced in .analog to the work presented in , bespaced can also be used to define spatio - temporal types . a verification tool to check properties based on the process algebra inspired formalism is described in .applications include concurrency and ressource control .further related are approaches for the specification of hybrid systems .a framework for specifying hybrid programs with stochastic features is presented in .other logic approaches to spatial reasoning can be found , e.g. , in .specialised solutions for reasoning about geometric constraints have been developed for robot path planning .this area has already been studied for decades , see e.g. , .this report provides an overview on bespaced in section [ sec : bes ] .filtering is covered in section [ sec : filt ] .folding of time and space is covered in section [ sec : fold ] while normalization is described in section [ sec : norm ] .section [ sec : concl ] concludes the report .bespaced is a spatio - temporal modeling and reasoning framework . here, we describe the modeling language , bespaced - based reasoning , and provide some implementation background information .bespaced is implemented in scala .its core functionality runs in a java environment . in the past, we have successfully applied bespaced in different contexts such as decision support for factory automation , coverage analysis for mobile devices and for verification of spatio - temporal properties for industrial robots .bespaced models are created using scala case classes .thus , we provide a functional abstract datatype - like feeling .major language constructs which are currently available ( see also ) are provided below .an invariant is the basic logical entity , something that is supposed to hold for a system throughout space and time .invariants can and typically do , however , contain conditional parts , something that requires a precondition to hold , e.g. , a certain time implies a certain state of a system .constructors for basic logical operations connect invariants to form a new invariant .some of these basic constructors are provided below : .... abstract class invariant ; abstract class atom extends invariant ; case class or ( t1 : invariant , t2 : invariant ) extends invariant ; case class and ( t1 : invariant , t2 : invariant ) extends invariant ; case class not ( t : invariant ) extends invariant ; case class implies ( t1 : invariant , t2 : invariant ) extends invariant ; case class bigor ( t : list[invariant ] ) extends invariant ; case class bigand ( t : list[invariant ] ) extends invariant ; case class true ( ) extends atom ; case class false ( ) extends atom ; .... 
additional predicates are used to indicate timepoints , timeintervals , events , ownership , and related information . .... ... case class timepoint [ t ] ( timepoint : t ) extends atom ; case class timeinterval [ t](timepoint1 : t , timepoint2 : t ) extends atom ; ... case class event[e ] ( event : e ) extends atom ; case class owner[o ] ( owner : o ) extends atom ; case class prob ( probability : double ) extends atom ; case class componentstate[s ] ( state : s ) extends atom ; .... some geometric constructs are provided below : .... case class occupybox ( x1 : int , y1 : int , x2 : int , y2 : int ) extends atom ; case class occupyboxsi ( x1 : si , y1 : si , x2 : si ,y2 : si ) extends atom ; case class eroccupybox ( x1 : ( ertp = > int),y1 : ( ertp = > int ) , x2 : ( ertp = > int),y2 : ( ertp = > int ) ) extends atom ; case class ownbox[c ] ( owningcomponent : c , x1 : int , y1 : int , x2 : int , y2 : int ) extends atom ; ... case class occupy3dbox ( x1 : int , y1 : int , z1 : int , x2 : int , y2 : int , z2 : int ) extends atom ; ... case class occupypoint ( x : int , y : int ) extends atom case class ownpoint[c ] ( owningcomponent : c , x : int , y : int ) extends atom case class occupy3dpoint ( x : int , y : int , z : int ) extends atom ... case class occupycircle ( x1 : int , y1 : int , radius : int ) extends atom ; case class owncircle[c ] ( owningcomponent : c , x1 : int , y1 : int , radius : int ) extends atom ; ... .... in addition , we have topological constructs which are shown below : .... case class occupynode[n ] ( node : n ) extends atom ... case class edge[n ] ( source : n , target : n ) extends atom case class transition[n , e ] ( source : n , event : e , target : n ) extends atom .... in summary , the language constructs comprise basic logical operators ( e.g. , and , or ) and constructs for space , time , and topology . for instance , occupybox refers to a rectangular two - dimensional geometric space. it is parameterized by its left lower and its right upper corner points .an example is provided below : if we want to express that the rectangular space with the corner points and is subject to a semantic condition `` a '' between integer - defined time points and , we may use the following bespaced formula : .... ...implies(and(timeinterval(100,150 ) , owner("a " ) ) , occupybox(42,3056,1531,2605 ) ) ... .... bespaced comes with a variety of library - like functionality that involves the handling of specifications .bespaced formulas can be efficiently analyzed , i.e. , verifying spatio - temporal and other properties .a simple example involves the specification of a point in time and a predicate .bespaced can derive the spatial implications from these definitions .we have implemented algorithms and connected tools such as an external smt solvers ( e.g. , we have a connection to z3 ). these can help to resolve geometric constraints such as overlapping of different areas in time and space .different operators ( for instance , breaking geometric constraints on areas down to geometric constraints on points ) exist . in this report, we are looking at filtering in datatypes , normalization and spatio - temporal fold operations .filtering and normalization were already supported in previous versions but have seen some updates in the implementation .the folding functionality is new to this report . 
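before turning to the implementation details, the following self - contained sketch shows how the constructs listed above can be combined into a small model and queried by pattern matching. since the full bespaced library is not reproduced in this report, the sketch re - declares simplified versions of the relevant case classes (written with conventional scala capitalisation, which the extracted listings above do not preserve); the helper functions at the end are only illustrative and are not part of the framework's api.

....
object BeSpaceDSketch {
  sealed trait Invariant
  case class And(t1: Invariant, t2: Invariant)              extends Invariant
  case class BigAnd(ts: List[Invariant])                    extends Invariant
  case class Implies(t1: Invariant, t2: Invariant)          extends Invariant
  case class TimeInterval(t1: Int, t2: Int)                 extends Invariant
  case class Owner[O](owner: O)                             extends Invariant
  case class OccupyBox(x1: Int, y1: Int, x2: Int, y2: Int)  extends Invariant

  // two components claiming rectangular areas during given time intervals;
  // the first conjunct is the example formula shown above
  val model: Invariant = BigAnd(List(
    Implies(And(TimeInterval(100, 150), Owner("A")), OccupyBox(42, 3056, 1531, 2605)),
    Implies(And(TimeInterval(120, 180), Owner("B")), OccupyBox(1000, 2500, 2000, 3500))
  ))

  // collect all boxes claimed by a given owner, by structural recursion over the model
  def boxesOf(owner: String, inv: Invariant): List[OccupyBox] = inv match {
    case BigAnd(ts)   => ts.flatMap(boxesOf(owner, _))
    case And(t1, t2)  => boxesOf(owner, t1) ++ boxesOf(owner, t2)
    case Implies(And(_, Owner(o)), box: OccupyBox) if o == owner => List(box)
    case _            => Nil
  }

  // illustrative geometric predicate on two axis-aligned boxes
  def overlaps(a: OccupyBox, b: OccupyBox): Boolean =
    a.x1 <= b.x2 && b.x1 <= a.x2 && a.y1 <= b.y2 && b.y1 <= a.y2
}
....

this style of structural pattern matching over the abstract datatype is exactly what the filtering, folding and normalization operators in the remainder of the report rely on.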
in the implementation of the presented operations we are using the scala programming language .in particular scala s case classes are used to define the language and we make heavy use of pattern matching on types to implement algorithms .for unit testing we use _ scalatest _ which allows us to easily code tests with assertions and rapidly re - run them as needed .the team used a combination of the eclipse `` scala ide '' ( mostly for runtime execution ) and intellij idea ( mostly for for editing and refactoring ) .filtering allows the selection of relevant information out of larger bespaced invariants thereby returning a sub - invariant .filter functions have the following signiture scheme : .... filter(inv : invariant , filtercondition ) : invariant .... an invariant is filtered following a filtercondition . the filter condition may be a spatial area or a timeinterval .all relevant information for the filtercondition is encapsulated in the returned invariant .a variety of different filter functions are possible , each one evaluating the invariant with respect to different levels of semantic depth . as part of the implementation of a fold, a filter function is required .to illustrate one interesting example of this we will discuss the filtertime function .presented below is one of our versions of filtering time which is of particular interest because it uses an elegant rewrite then simplify algorithm that greatly reduces the code complexity compared to other versions of filtertime .however , we did not perform an analysis of algorithmic complexity so far .the key line of source code is : .... case tp : timepoint[integertime ] = > if ( withintimewindow(tp , starttime , stoptime ) ) tp else successreturn.not .... this line rewrites the time point predicate as either a true or false value .`` tp '' effects no change to the invariant and `` successreturn.not '' will rewrite it as either a true or false value .this depends on the parent context of the re - write : false for conjunctions and true for disjunctions . later in the simplification process : .... simplifyinvariant(filteredinvariant ) .... will cause whole branches of the abstract datatype tree to be truncated where the time point is rewritten as false because one of the standard simplification rules is : .... case implies(false(),t2 ) = > true ( ) .... in effect this filters out all the sub expressions that do not match the desired time point .other aspects of the source code can be found below : .... 
def filtertime2(starttime : integertime , stoptime : integertime , successreturn : bool ) ( invariant : invariant ) : invariant = { val filter = filtertime2(starttime , stoptime , successreturn ) _def withintimewindow(timepoint : timepoint[integertime ] , starttimeinclusive : integertime , stoptimeexclusive : integertime ) : boolean = { val time = timepoint.timepoint time > = starttimeinclusive & & time < stoptimeexclusive } val filteredinvariant : invariant = invariant match { case tp : timepoint[integertime ] = > if ( withintimewindow(tp , starttime , stoptime ) ) tp else successreturn.not case implies(premise , conclusion ) = > implies(filter(premise ) , filter(conclusion ) ) case and(t1 , t2 ) = > and(filter(t1 ) , filter(t2 ) ) case bigand(sublist ) = > bigand(sublist map filter ) case or(t1 , t2 ) = > or ( filter(t1 ) , filter(t2 ) ) case bigor(sublist ) = > bigor(sublist map filter ) case other = > other } simplifyinvariant(filteredinvariant ) } ....figure [ fig : foldop ] shows an illustration of our fold operation for time and space .the general principle of folding , the iteration through time and space while accumulating processed information is shown . in this section ,we discuss the folding of time and space in separate subsections and follow our implementation .a generalized signature for a folding time function is provided below .no assumptions are made on the implementation of time , other than that time has to be partially ordered . ....foldtime[a , t ] ( invariant : invariant , a : a , starttime : t , stoptime : t , step : t , f [ a - > invariant - > a ] ) .... we are exemplifying the folding of time by using an example with weather data . we are regarding a geometric space ( a matrix - like structure ) that contains values indicating cloud coverage of an area . to validate folding time we devised a complex test case that aggregates clouded areas over multiple time points .this test case uses a naive approach as it requires a strict format for the knowledge invariants .the knowledge about the space and whether its cloudy or not is encoded as follows : .... //time val t1 = new integertime(1 ) val t2 = new integertime(2 ) val t3 = new integertime(3 ) // overlapping boxes val b1 = occupybox(1 , 1 , 10 , 10 ) //area 100 val b2 = occupybox(5 , 5 , 15 , 15 ) // area 121 val b3 = occupybox(10 , 10 , 20 , 20 ) //area 121 //time occupied val to1= implies(timepoint(t1 ) , b1 ) val to2 = implies(timepoint(t2 ) , b2 ) val to3 = implies(timepoint(t3 ) , b3 ) val timeseries = list(to1 , to2 , to3 ) val conjunction = bigand(timeseries ) .... in order to calculate the area occupied for one time step we implemented a function called addareaoccupied .this function takes two parameters : 1 .total : this is called the accumulator as it will store the running total ( an integer ) as the fold is being called recursively .2 . item : this is bespaced data ( an invariant ) that represents the knowledge of the spatial cloud system ..... def calculatearea(box : occupybox ) : int = math.abs((box.x2 - box.x1 + 1 ) * ( box.y2 - box.y1 + 1 ) ) def addareaoccupied(total : int , item : invariant ) : int = { val area = item match { case implies(imp , box : occupybox ) = > calculatearea(box ) case _= > 0 } total + area } .... it is interesting to note that the result of this function is a calculation derived from the two parameters ( as seen in the last line above ) . 
in this casethe accumulator ( total ) is added to the area which is calculated from the second parameter ( item ) .this is the essence of an `` aggregation function '' that is commonly used by all fold operations : the accumulated value and a function applied to the next iteration item are combined to give the next accumulated value .the signature for an implemented function that folds time is as follows : .... def foldtime[a ] ( invariant : invariant , accumulator : a , starttime : integertime , stoptime : integertime , step : integertime.step , f : ( a , invariant ) = > a ): a .... additionally , for our example , we need to setup a few more parameters .the accumulator used in the fold needs an initial value : .... val initialvalue = 0 .... and finally we need to define a series of `` steps '' in time for the fold operation to `` fold '' . in this test casewe specified the following : .... t1 , t3 , 1 ....this series of values simply represents start with time point 1 , stop at time point 3 and step through time incrementing by 1 .here is what the call to the foldtime function with the parameters described above looks like : .... val foldedtime = foldtime[int ] ( conjunction , initialvalue , t1 , t3 , 1 , addareaoccupied ) .... to validate the test runs correctly we assert the expected result : .... assertresult(expected = 342)(actual = foldedtime ) .... the expected result of 342 is explained as follows : there are three time points : 1 , 2 and 3 each with their own boxes : b1 , b2 , b3 .the area of each are 100 , 121 , 121. the fold of these three areas will be the sum which is 342 .the actual fold over time example is also depicted in figure [ fig : foldtime ] .the regarded spatial area stays the same , but the _ cloud _ moves over time ( depicted in different colors ) . folding space in general is done by using a function with the following signature .note , that it works using generic area descriptions . ....foldspace[a ] ( invariant : invariant , z : a , startarea : invariant , stoparea : invariant , stepareaorbox : invariant , f : ( a - > invariant - > a ) ) .... a concrete signature for an implemented function that folds space is as follows and assumes that areas are boxes : .... foldspace[a ] ( invariant : invariant , accumulator : a , startarea : occupybox , stoparea : occupybox , translation : translation , f : ( a , invariant ) = > a ): a .... to validate folding space we devised a simple test case that aggregates clouded areas over multiple spatial iteration steps .this test case uses a naive approach as it requires a strict format for the knowledge invariants .the knowledge about the space and whether its cloudy or not is encoded as follows : .... //boxes val b1 = occupybox(1 , 1 , 10 , 10 ) val b2 = occupybox(5 , 5 , 15 , 15 ) val b3 = occupybox(10,10 , 20,20 ) val b4 = occupybox(21,21 , 30,30 ) //space owner - occupied val s1 = implies(mountain , b1 ) val s2 = implies(cloud , b2 ) val s3 = implies(cloud , b3 ) val s4 = implies(mountain , b4 ) val spaceseries = list(s1 , s2 , s3 , s4 ) val conjunction = bigand(spaceseries ) .... in order to apply the foldspace function we need to pass in the calculatearea function from the above example as the parameter f and a spatial model as the invariant .additionally we need to setup a few more parameters .the accumulator using in the fold needs an initial value : .... val initialvalue = 0 .... and finally we need to define a series of steps in space for the fold operation to `` fold '' . 
in this test casewe defined the following : .... val startbox = occupybox(1 , 1 , 5 , 5 ) val stopbox = occupybox(26,26,30,30 ) val step : translation = ( 5 , 5 ) ....this means we want the first box cover coordinates : ( 1,1 ) ...( 1,5 ) , ( 2,1 ) ...( 2,5 ) ...in this example there are six iteration steps none of which overlap .in addition , as future work we would like to regard overlapping steps and explore semantic implications for this .the code piece below shows a call to the foldspace function with the parameters described above : .... val foldedspace = foldspace[int ] ( normaliseddata , initialvalue , startbox , stopbox , step , addcloudyarea ) .... to validate the test runs we assert the expected result using the following code line : .... assertresult(expected = 76)(actual = foldedspace ) .... the expected result on 76 is explained as follows : the first iteration box ( 1,1 ... 5,5 ) is indicated to contain a mountain ( see above ) .however , one point also contains a cloud .this point is counted for cloud coverage .we further iterate through the boxes . the second iteration box ( 6,6 ... 10,10 )is indicated to contain a cloud .we continue iterating through the iteration path thereby examining all 6 boxes in the path .the total cloud covered area in the iteration path adds up to 1 + 25 + 25 + 25 + 0 + 0= 76 . in order to calculate a `` degree of cloudiness '' for one spatial step we implemented a function called addcloudyareathis function takes two parameters : 1 .total : this is called the accumulator as it will store the running total ( an integer ) as the fold is being called recursively .invariant : this is bespaced data that represents the model containing clouds and other spatio - temporal information ..... def addcloudyarea(total : int , invariant : invariant ) : int = { def iscloudyarea(owner : owner[any ] ) : boolean = owner = = cloud def calculatearea(list : list[invariant ] ) : int = { val areas : list[int ] = list map { inv : invariant = > inv match { case implies(owner : owner[any ] , point : occupypoint ) = > if ( iscloudyarea(owner ) ) 1 else 0 case implies(owner : owner[any ] , and(p1 : occupypoint , p2 : occupypoint ) ) = > if ( iscloudyarea(owner ) ) 2 else 0 case implies(owner : owner[any ] , bigand(points : list[occupypoint ] ) ) = > if ( iscloudyarea(owner ) ) points.length else 0 case _= > 0 } } areas.sum } val area = invariant match { case and(t1 , t2 ) = > calculatearea(t1 : : t2 : : nil ) case bigand(sublist : list[invariant ] ) = > calculatearea(sublist ) case _= > 0 } total + area } .... it is interesting to note that the result of this function is a calculation derived from the two parameters ( as seen in the last line above ) . in this casethe accumulator ( total ) is added to the area which is calculated from the second parameter ( invariant ) .the folding of space example is also illustrated in figure [ fig : foldspaceex ] .different boxes are shown for the different steps in our example .in order to make invariants comparable , we need to normalize them .normalization ensures , that the same invariants are represented in the same way . however , there are different levels of normalization .normalization can take a higher or lower degree of semantics into account . at the lower end, we may just reorder arguments for logical operators such as `` and '' and `` or '' . on the other end, we may look into the semantic meaning of geometric shapes and , e.g. 
, replace areas with sets of points in order to make them comparable .different normalization strategies may require different resources and work on different subsets of our language .choosing an appropriate form of normalization depends on the actual use - case .this is why we have not implemented a single normalization function , but rather have a family of functions . in general normalizationinvolves term rewriting steps until a fix point is reached .the invariant is then in a normal form .several properties of the term rewriting should be fulfilled such as confluence .comparison is then carried out by checking the resulting terms for equality . to implement normalization in bespaced we used functional composition in chains .the type of each element in the chain is a function from invariant to invariant : .... type processor[i , o ] =i = > o type invariantprocessor = processor[invariant , invariant ] .... we have created various invariant processors each of which perform one step in the normalisation process .the following is a general normalization function , showing the composition of four processors : .... val normalize : invariantprocessor = flatten compose order compose deduplicate compose simplify .... a more specific example of a normalization function designed for data of the form .... owner > occupypoints .... is described in the following .it composes the standard normalization with a further processor specific to the data structure ..... val normalizeowneroccupied : invariantprocessor = mergeowners compose normalize .... below is a brief description of the step each processor does in the normalization process : * flatten rewrites nested conjunctions and disjunctions into single level conjunctions and disjunctions where possible .* order orders the terms of all conjunctions and disjunctions according to a standard ordering .* deduplicate removes duplicate terms of conjunctions and disjunctions . *simplify uses term rewriting to rewrite expressions in a simpler form .e.g. implies(true , t2 ) is rewritten as t2 * mergeowners - expects a bigand of implies and conjuncts all the conclusions of implications that have identical premises .e.g. + ....bigand(implies(a , x ) , implies(b , y ) , implies(a , z ) ) .... + is rewritten as + ....bigand(implies(a , and(x , z ) ) , implies(b , y ) ) ....we presented new operators for bespaced in this paper .these operators comprise filtering , folding and normalization for spatio - temporal formulas .the operations are inspired by functional programming languages and are ported to the spatio - temporal context .in addition to theoretical considerations we presented an implementation and a testing infrastructure for the bespaced framework . the implementation work presented in this paper is open source and freely available .it has been done as part of the research and development activities in the australia - india centre for automation software engineering at rmit university in melbourne .b. bennett , a. g. cohn , f. wolter , m. zakharyaschev .multi - dimensional modal logic as a framework for spatio - temporal reasoning . applied intelligence , volume 17 , issue 3 , kluwer academic publishers , november 2002 .j. o. blech and h. schmidt .bespaced : towards a tool framework and methodology for the specification and verification of spatial behavior of distributed software component systems . in _arxiv.org_ , http://arxiv.org/abs/1404.3537 , 2014 .j. o. blech , i. peake , h. schmidt , m. kande , a. rahman , s. ramaswamy , sudarsan sd ., and v. 
narayanan .efficient incident handling in industrial automation through collaborative engineering ._ emerging technologies and factory automation ( etfa ) _ , ieee , 2015 .l. caires and h. torres vieira .slmc : a tool for model checking concurrent systems against dynamical spatial logic specifications .tools and algorithms for the construction and analysis of systems .springer , 2012 .
|
In this report, we present spatio-temporal operators for our BeSpaceD framework. We port operators known from functional programming languages, such as filtering, folding and normalization on abstract data structures, to the BeSpaceD specification language. We present the general ideas behind the operators, highlight implementation details and give some simple examples.
|
since the late 1960s , projection methods [ 13 ] have been widely used for the numerical simulations of transient incompressible navier - stokes equations in extensive scientific areas [ 4 , 5 ] and industrial fields [ 6 , 7 ] . the high popularity of these methods is due to the scheme s abilities to execute the incompressibility constrains using a set of two decoupled elliptic equations : a nonlinear advection - diffusion equation , and subsequently , a linear pressure poisson equation .although each of the cascade equations imposes computational costs to the system , poisson s equation tends to be the most time - consuming component of the flow simulation in complex geometries [ 8 ] as well as in computations with open boundary conditions [ 6 , 9 ] . to substantially lessen the computational expenses , a common approach is to design robust multigrid ( mg ) solvers , which are solely devoted to one of the existing elliptic systems ( see e.g. , refs . [ 1012 ] for mg solvers for the advection - diffusion equation , and refs .[ 13 , 14 ] for those associated with poisson s equation ) .in contrast with the methods cited above , a novel family of multigrid procedures , the so - called coarse grid projection ( cgp ) methodology by the authors [ 1518 ] , involves both the nonlinear and linear equations to efficiently accelerate the computations . in the cgp methodology , the nonlinear momentum is balanced on a fine grid , and the linear poisson s equation is performed on a corresponding coarsened grid .mapping functions carry out data transfer between the velocity and pressure fields . in this sense ,the cgp methods not only effectively relieve the stiff behavior of the primary poisson problem , but can also take advantage of any desired fast elliptic solver to achieve large speedups while maintaining excellent to reasonable accuracy .the cgp methodology is provided in detail in sect .the cgp framework was originally proposed by lentine et al . in 2010[ 15 ] for visual simulations of inviscid flows for video game applications .the uniform grid finite - volume and explicit forward euler methods were chosen , respectively , for spatial and temporal discretizations in their numerical simulations .this gridding format led to dealing with a volume - weighted poisson equation [ 19 ] in the presence of a solid object ( e.g. , a sphere ) inside the flow field . aside from the added complication, the grid format would have caused considerable reductions in visual fidelities , if the authors had not employed the complex mapping operators that they did .this makes the cgp technique less practical for curved boundaries .furthermore , the explicit time integration restricted their algorithm to low cfl ( courant - friedrichs - lewy ) numbers , and therefore long run times . in 2014jin and chen [ 18 ] implemented the same cgp scheme [ 15 ] in the fast fluid dynamics ( ffd ) models to calibrate the effect of this computational accelerator tool on simulating building airflows . a decrease in spurious fluctuations of ventilation rateswas observed , but the maximum achieved speedup was only a factor of approximately 1.50 . 
in 2013san and staples [ 16 ] expanded the cgp technique ( labeled `` cgprk3 '' ) to the vorticity - stream function formulation of the incompressible navier - stokes equations and demonstrated speedup factors ranging from approximately 2 to 42 in their numerical studies .additionally , they extended the method in order to dramatically lower computational costs associated with elliptic equations of potential vorticity in quasigeostrophic ocean models [ 17 ] . notwithstanding these successes , the cgprk3 strategy has four main shortcomings .first , the nine - term full weighting restriction [ 20 ] and bilinear prolongation [ 20 ] operators used can be exclusively utilized in equally spaced grids .consequently , one must analytically reformulate all steps according to generalized curvilinear coordinates except in uniform rectangular domains .second , cgprk3 applies the third - order runge - kutta [ 21 ] temporal integration to the non - incremental pressure correction method [ 1 ] , while the splitting error of this specific projection scheme is irreducibly first - order in time and higher - order temporal integration methods do not improve the overall accuracy [ 1 ] .third , the third - order time stepping scheme unnecessarily forces the cgprk3 scheme to run the poisson solver three times at each time step , whereas the primary goal of cgp is to reduce the computational effort that arises from the poisson equation .fourth , their suggested mapping procedure becomes more costly than that of the poisson equation solver for high coarsening levels . to obviate the aforementioned problems , a semi - implicit - time - integration finite - element version of the cgp method ( ife - cgp )is presented in the current study .the incorporation of a semi - implicit backward time integration results in a simple five - step cgp algorithm with nearly zero cost for the data restriction /prolongation operators .it typically enables flow simulations at large time steps and thus improves speedup [ 22 ] .the triangular finite element meshes improve the cgp method in the following ways .first , they enhance the scheme s capacity for the solution of fluid problems defined on complicated domains , where irregular grids and realistic boundary conditions are unavoidable .second , they facilitate the design of the required mapping modules so that the restriction / prolongation operators can be optionally equivalent to the shape functions approximating multilevel nested spaces of velocity and pressure fields .third , the generation of the laplacian and divergence operators on a coarsened grid is expedited by means of available geometric / algebraic multigrid ( gmg / amg ) tools for the finite element method ( see e.g. , [ 2326 ] ) .this feature is particularly important for obtaining a sufficiently accurate solution of poisson s equation in modeling flow over obstacles .this article s objective is to present a simple , elegant version of the cgp method for finite element discretizations of incompressible fluid flow problems . as accelerating incompressible flow computationsis the major application of cgp , speedup rates of the computations and the corresponding reduction in the accuracy of velocity and pressure fields are calculated . besides this main application, we explore mesh refinement usages of the cgp method for the first time . 
as a next concern , because the accuracy of the pressure field in projection methods suffers from the negative influence of artificial homogeneous dirichlet and neumann boundary conditions on formations of boundary layers [ 6 , 9 ] , the possibility of thickening the layers by the cgp prolongation operator is investigated .lastly , because the cgp procedure reduces the degree of freedom for the pressure component , a greater reduction in the integrity of the pressure field appears in comparison with the velocity field [ 16 ] . on the other hand , the pressure gradient ( instead of simply pressure )is applied to a velocity correction step of pressure projection schemes [ 27 ] . with these two hypotheses in mind, we examine the effect of the cgp process on variations of both the pressure and its gradient .all the above mentioned numerical challenges are investigated through several representative benchmark problems : the taylor - green vortex in a non - trivial geometry , flow over a backward - facing step , and finally flow past a circular cylinder .this article structured as follows .2.1 provides the governing equations for incompressible viscous flows and their finite element formulations .details on the proposed cgp algorithm are presented in sect .the computational implementation and a computational cost analysis are described , respectively , in sect .2.3 and 2.4 .numerical results and their interpretations are collected in sect .conclusions and notes for extensions of the work are given in sect .mass and momentum conservation of an incompressible isothermal flow of a newtonian fluid are governed by the navier - stokes and continuity equations , with boundary conditions -\mu \delta \textbf{\textit{u } } + \nabla p=\textbf{\textit{f } } \textrm { in } v,\ ] ] where and respectively denote the velocity vector and the absolute pressure in the fluid domain . the external force and stress vectorsare represented by and , respectively . 
is the fluid density and is the dynamic viscosity .the boundary of the domain consists of two non - overlapping subsets of dirichlet and neumann boundaries , where indicates the outward unit vector normal to them .the system of equations is temporally integrated by the semi - implicit first - order backward differentiation formula [ 28 ] with time increment , and takes the form : -\mu \delta \textbf{\textit{u}}^{n+1 } + \nabla p^{n+1 } \\ = \textbf{\textit{f}}^{n+1 } \textrm { in } v , \end{aligned}\ ] ] in the next stage , the non - incremental pressure - correction method [ 1 ] decouples the solution of the velocity and pressure variables .based on it , at each time step , the output of the momentum equation is a non - divergent vector field called the intermediate velocity .next , the divergence of the intermediate velocity is fed to the source term of the pressure poisson equation .finally , the intermediate velocity is corrected using the obtained pressure such that the end - of - step velocity vector satisfies the incompressibility constraint .the procedure yields two elliptic problems along with one correction equation , expressed as -\mu \delta \textbf{\textit{\~u}}^{n+1}=\textbf{\textit{f}}^{n+1 } \textrm { in } v,\ ] ] notice that the poisson equation is restricted to unrealistic homogenous neumann boundary conditions , when the velocity is subject to dirichlet types .contrarily , natural neumann conditions for the momentum equation result in boundaries with spurious homogenous dirichlet pressures .the velocity and pressure spaces are approximated by the piecewise linear basis functions ( * * p**/**p** ) of standard galerkin spectral elements [ 29 ] . in this way , the resulting finite element form of eqs .( 9)(15 ) is \textrm{\~u}^{n+1}=\textbf{m}_v\textrm{f}^{n+1},\ ] ] where * * m** and * n * indicate , respectively , the velocity mass and the nonlinear convection matrices .* * l** , * * l** , * d * , and * g * denote the matrices associated , respectively , to the velocity laplacian , the pressure laplacian , the divergence , and the gradient operators .the vectors , u , f , and p contain , respectively , the nodal values of the intermediate velocity , the end - of - step velocity , the forcing term and the pressure at time . the desired boundary conditions are implicitly enforced in the discrete operators .further details of the elemental matrices are available in references [ 29 , 30 ] .* remark 2.1 * _ ( on the discrete brezzi - babuska condition_[31 , 32 ] _ ) ._ because the projection method overcomes the well - known saddle - point issue of eqs .( 1)(2 ) , the discrete brezzi - babuska condition [ 31 , 32 ] can be ignored [ 27 ] and therefore the degree of the polynomials over the triangular mesh elements is chosen to be equal for the velocity and pressure .in addition , the identical resolutions allow the comparison of computational effectiveness between the current approach and that described in previous works [ 1518 ] .the cgp methodology provides a multiresolution framework for accelerating pressure - correction technique computations by performing part of the computations on a coarsened grid .alternatively , cgp can be thought as a guide to mesh refinement for pressure - correction schemes .figure 1 gives a schematic illustration of the ife - cgp algorithm for triangular finite element meshes .as shown in fig . 
1, at each time step , the intermediate velocity field data obtained on a fine grid is restricted to a coarsened grid .the divergence of , the intermediate velocity field data on the coarsened grid , plays the role of the source term in solving the poisson equation on the coarse grid to detrimine the pressure p on the coarse grid ; then , the resulting pressure data p is prolonged to the fine grid and becomes of p .either gmg or amg techniques are key numerical tools to conveniently formulate the grid transfer , laplacian , and divergence operators in the ife - cgp algorithm . in the present work ,gmg methods are used , because using amg methods in the derivation of the divergence operator is challenging for anisotropic triangular grids [ 33 ] . herewe generate hierarchical meshes in which each triangular element of a coarse mesh is conformingly subdivided into four new triangles . as a consequence ,if the available finest mesh ( with -element resolution ) is generated after refinement levels , a relatively coarse mesh ( with a resolution of elements ) and its basis functions are available .generally , one can obtain the corresponding coarsened grid using the space decomposition algorithm discussed for nonuniform triangular grids in the literature ( see e.g. , refs .[ 34 , 35 ] ) . in principle, there can be an arbitrary number , , of nested spaces of progressively coarsened grids such that : . in practice , however , reasonable levels of accuracy can only be obtained for a maximum of three levels of coarsening .we restrict our attention to such cases .consider a sequence of four nested spaces , , wherein if ( ) corresponds to a fine grid , characterizes the space of the next coarsest grid .recall that for any arbitrary element on , there is a piecewise linear shape function that is capable of estimating the space of the four sub - elements over .with this in mind , linear interpolation is implemented to construct the prolongation operator and its matrix representation * * p** .mathematically , the transpose of the prolongation operator is a feasible option for the definition of the restriction operator [ 26 ] . in spite of this fact ,the restriction matrix is designed so that fine data of is directly injected throughout the coarse grid only if the data belongs to a common node between the two spaces ; otherwise , it can not be accessed by the coarse grid . unlike , is able to directly connect two non - nested spaces .it is worth noting that the cgp method is compatible with other advanced data interpolation / extrapolation architectures ( see e.g. , [ 36 ] ) ; however , even the simple mapping techniques introduced here are adequate . in the gmg method used , a relatively coarse mesh , , with as a basis is directly accessible . hence , the relevant laplacian ( ) and divergence ( ) operators are derived by taking the inner products of a coarse - grid finite element shape function on .note that the spaces of velocity and pressure variables are not physically the same , although the same notation is used here for the sake of simplicity .( 19)(23 ) summarize the explanations in the preceding paragraphs and depict the ife - cgp scheme for executing one time interval of the simulation .calculate on by solving 2 .map onto and obtain via 3 .calculate p on by solving 4 .remap p onto and obtain p via 5 .calculate u via a ` + + object oriented code is developed according to the concepts addressed in ref .[ 37 ] . 
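As a concrete illustration of the five-step loop summarized in Eqs. (19)-(23), the snippet below sketches one IFE-CGP time step. It is not the authors' code: the struct and the callback names are hypothetical placeholders for the assembled finite element solvers and the grid-transfer matrices, and only the division of labour between the fine grid and the coarsened grid is shown.
....
#include <functional>
#include <vector>

using Vec = std::vector<double>;

// Placeholder callbacks for the assembled finite element solvers and the
// grid-transfer matrices of Eqs. (19)-(23); the names are illustrative only.
struct CgpOperators {
    std::function<Vec(const Vec&)> solveMomentumFine;    // step 1, Eq. (19): advection-diffusion solve on the fine grid
    std::function<Vec(const Vec&)> restrictToCoarse;     // step 2, Eq. (20): inject the intermediate velocity onto the coarse grid
    std::function<Vec(const Vec&)> solvePoissonCoarse;   // step 3, Eq. (21): pressure Poisson solve on the coarse grid
    std::function<Vec(const Vec&)> prolongToFine;        // step 4, Eq. (22): linearly interpolate the pressure back to the fine grid
    std::function<Vec(const Vec&, const Vec&)> correctVelocity; // step 5, Eq. (23): velocity correction on the fine grid
};

// One IFE-CGP time step: only the momentum solve and the velocity correction
// act on the fine grid; the Poisson solve is performed on the coarsened grid.
Vec cgpTimeStep(const CgpOperators& ops, const Vec& uOld) {
    Vec uTildeFine   = ops.solveMomentumFine(uOld);
    Vec uTildeCoarse = ops.restrictToCoarse(uTildeFine);
    Vec pCoarse      = ops.solvePoissonCoarse(uTildeCoarse);
    Vec pFine        = ops.prolongToFine(pCoarse);
    return ops.correctVelocity(uTildeFine, pFine);
}
....
In a full implementation the momentum and Poisson callbacks would wrap the preconditioned GMRES solves discussed next, while the two mapping callbacks apply the injection and linear-interpolation operators introduced above.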
to accelerate linear algebra executions and minimize memory requirements , the standard compressed sparse row ( ` sr ) format [ 38 ]is employed for sparse matrix - vector multiplication ( spmv ) except in data transfer applications .the preconditioned gmres( ) algorithm [ 39 , 40 ] is chosen as an iterative linear solver .absolute and relative tolerances are set to .the public unstructured finite element mesh generator gmsh [ 41 ] is utilized .calculations are performed on a single intel(r ) xeon(r ) processor with a 2.66 ghz clock rate and 64 gigabytes of ram .a finite element analysis traditionally consists of three major portions : preprocessing , processing , and postprocessing . in the simulations undertaken here ,the postprocessing stage mostly involves writing output files without a significant effect on the cpu time . as a result, the numerical cost , , of the pressure correction approach on a given coarse grid is estimated by where is the preprocessing cost , whereas and comprise , respectively , the cost of the poisson equation ( see , eq . (17 ) ) and the remaining algorithms in the processing portion . for transient problems with a significant number of time steps, the processing cost always overcomes the preprocessing price . by -level uniform refinements of a two dimensional domain , a high - resolution simulation takes time ,roughly expressed as with two factors and .the factor depends on the computational resources and global matrix assembling algorithms . because by quadrupling the element numbers , the global node numbers are doubled at minimum, is lower bound for the increment in the node numbers .consequently , represents a lower bound associated with the matrix size enhancement . additionally , note that is the lowest possible factor for cost scaling of eqs .( 16)(17 ) , because the cost of a matrix inversion is not linearly proportion to its size . taking the advantages of the ife - cgp method into account, the poisson solution is performed on the coarse grid and its cost is not scaled .therefore , is reduced to the ife - cgp cost , given by where is a new factor for the preprocessing and indicates the mapping expenses .the coefficient is not necessarily equal to the factor and the relation is variable .for instance , because the assembling process of rather than is more cost - effective , it might be concluded that .but if one takes the transpose of to establish , that conclusion is questionable .besides , the preparation of and involves an extra cost for ife - cgp . a numerical comparison in sect .3.2 clarifies this point furthur . is shown to be negligible in comparison with the other three terms of eq .( 26 ) in * remark 2.2*. from a mesh refinement application point of view , the cost increment factor of the computational ife - cgp tool ( ) is approximated by similarly , this factor for a regular triangulation refinement ( ) is conjectured to be based on the above discussion , is greater than .this is mainly due to the factor of that multiples in eq .these results imply that mesh refinement using the cgp idea is more cost effective than the standard technique .* remark 2.2 * _ ( on implementing the mapping operators)_. 
the implementation of in the csr format is not possible because the matrix contains null vectors .additionally , if is constructed in the csr format , the data prolongation cost of each ife - cgp loop is approximated by where is proportional to the number of non zero elements per row of ( depending on the mesh nonuniformity , it varies between 2 and 10 in the current grids . ) and denotes the number of pressure unknowns on . though csr is inexpensive , an easier - to - implement method is introduced next .consider two data structures and including three ( ) and two ( ) integer indices , respectively .an array of each data structure and , with equal to the node numbers on , is created as follows : + for find a pair of indices that satisfies and for find an index that satisfies where p is the pressure value at the node of the spanned space , while is the restricted intermediate velocity of the node of the discretized space .p , p , and are similarly defined . with respect to the prolongation function, this trick improves the performance by reducing the computational effort to , with only moderately increased memory usage .likewise , the numerical expense of the suggested injection operator is of order .to evaluate the various aspects of the ife - cgp method , three standard test cases are studied : the taylor - green decaying vortex problem , the flow over a backward - facing step , and the flow around a circular cylinder . here , the grid resolution of a numerical simulation is denoted by , where indicates the element numbers of a fine grid used for the advection - diffusion solver , and demonstrates the element numbers of a corresponding coarsened grid for the poisson solver . when is equal to , the standard , non - ife - cgp , algorithm is recovered . indicates the level of mesh coarsening used in the ife - cgp method , and is equal to zero for the standard algorithm .if function is assumed to be a finite element approximation of an exact solution , , on the domain , with elements , the difference between these two functions , , is defined as : the discrete norms are defined as : where is the discrete space of element , , of . note that when an exact solution is not available , the norms are measured with reference to the standard algorithm ( ) .the taylor - green vortex problem [ 42 ] is a widely - used benchmark problem which is an exact analytic solution of the unsteady incompressible navier - stokes equations in two - dimensions ( see e.g. , [ 6 ] ) .the velocity field is given by : and the pressure field is given by : a density value of kg / m and a viscosity of pa are used .one goal of the taylor - green test case is an examination of the ife - cgp method capability in complex geometries . for this purpose ,a square domain with a circular hole is chosen such that + + the geometry is depicted in fig . 2 and details of the meshesare given .a similar computational domain ( a rectangular hole instead of a circular one ) has been implemented by j. m. alam et al . 
[ 43 ] to perform this test case .a second goal of this section is an investigation of the effects of velocity dirichlet boundary condition ( and consequently artificial neumann boundary conditions ) on the rate of accelerating computations by the ife - cgp algorithm .therefore , the velocity domain boundaries are set to the exact solution of eqs .( 34)(35 ) .these types of boundary conditions have been previously applied to the taylor - green vortex problem in the literature [ 6 , 27 ] .san and staples [ 16 ] have also studied this problem to validate cgprk3 performance , but using periodic boundary conditions . in this way ,an opportunity for comparison is provided .as a last concern , the effects of the prolongation operator on the thickening of artificial boundary layers are investigated .the simulations are run with a constant cfl number . as a result, a time step of 0.01 s is selected for the 3384:3384 case , and this is halved for each quadrupling of the advection - diffusion solver grid .the discrete norms of the velocity field for different mesh resolutions are tabulated in table 1 .considering all the cases , for one and two levels ( , and ) of the poisson grid coarsening , the minimum and maximum of the error percentage relative to the finest mesh ( ) are , respectively , 0.30% and 3.61% . however , a considerable reduction in the velocity accuracy is found for three levels of coarsening .for instance , the infinity norm computed for the velocity field obtained on the 216576:3384 grid resolution indicates a 99% reduction in the accuracy level , but it is still two orders of magnitude more accurate in comparison with the resulting data captured from the full coarse scale simulation performed on the 3384:3384 grid resolution .the speedup factors achieved range from 1.601 to 2.337 .the velocity and the pressure field magnitudes for the result with three levels of coarsening are depicted in fig .the flow fields have reasonable levels of accuracy , however , dampened flows can be observed near the boundaries .llllll & resolution & & & cpu time/ t & speedup + 0 & 216576:216576 & 1.59938e & 8.34306e & 91644000 & 1.000 + 1 & 216576:54144 & 1.59453e & 8.38567e & 57223280 & 1.601 + 2 & 216576:13536 & 1.63584e & 8.64473e & 40771280 & 2.247 + 3 & 216576:3384 & 3.18962e & 1.15333e & 40339600 & 2.272 + 0 & 54144:54144 & 1.29418e & 6.78120e & 1546544 & 1.000 + 1 & 54144:13536 & 1.29274e & 6.85697e & 761040 & 2.032 + 2 & 54144:3384 & 1.38146e & 7.48644e & 661788 & 2.337 + 0 & 13536:13536 & 1.05619e & 5.58118e & 30944 & 1.000 + 1 & 13536:3384 & 1.08807e & 5.73166e & 13598 & 2.275 + 0 & 3384:3384 & 8.73658e & 4.69697e & 661 & 1.000 + in the non - incremental pressure correction methods , for a non - smooth domain with artificial neumann boundary conditions , the pressure error norms are not sensitive to the spatial resolution and they are mostly not improved by increasing the node numbers [ 27 ] .contrarily , the pressure gradient error norms decrease by an increase in the number of degree of freedom [ 27 ] .the data collected in table 2 demonstrates these findings . regarding the standard computations ( ) , the discrete norms of the pressureare only reduced by one order of magnitude with three levels of mesh refinement of both the pressure and the velocity spaces , whereas these quantities for the pressure gradient have a reduction of three orders of magnitude with the same grid resolution increase . 
by looking at the resulting pressure error norms using ife - cgp ,we find almost the same data obtained using the standard algorithm .but this does not mean that the ife - cgp procedure conserves the accuracy of the pressure field , because the pressure field from the full fine scale simulations itself has a high level of error .in fact , it can be only concluded that the prolongation operator does not make the results worse .in contrast with the pressure norms , we observe similar trends between the velocity and pressure gradient norms for the ife - cgp results .for instance , for one level ( ) of coarsening , the minimum and maximum of the error percentage with reference to the finest mesh ( ) are , respectively , 5.37% and 16.83% . as a note ,the discrete norms calculated for the pressure gradient are relatively small in comparison with the velocity and pressure norms , and it is because we use the discrete gradient operator * g * in order to compute the gradient of both the exact solution and the numerical result .recall that at the velocity correction step ( see eq .( 15 ) ) , the pressure gradient is used to correct the velocity field , not the pressure itself .hence , since ife - cgp conserves the pressure gradient accuracy , the fidelity of the velocity field is also conserved .llllll & resolution & & & & + 0 & 216576:216576 & 1.34241e & 6.00500e & 3.21871e & 1.81154e + 1 & 216576:54144 & 1.34241e & 6.00416e & 3.09665e & 1.90897e + 2 & 216576:13536 & 1.34241e & 6.00270e & 2.22832e & 2.43972e + 3 & 216576:3384 & 1.34241e & 6.00171e & 3.65721e & 5.78906e + 0 & 54144:54144 & 4.98048e & 2.38748e & 4.09860e & 3.48936e + 1 & 54144:13536 & 4.98048e & 2.38692e & 3.96108e & 3.81608e + 2 & 54144:3384 & 4.98048e & 2.38657e & 3.22830e & 6.43534e + 0 & 13536:13536 & 1.89334e & 9.42727e & 4.86969e & 7.21105e + 1 & 13536:3384 & 1.89334e & 9.42590e & 5.31351e & 8.42464e + 0 & 3384:3384 & 6.74979e & 3.66242e & 6.55823e & 1.66480e + the cartesian coordinate of an element s center at which the infinity error norm occurs for the velocity , pressure , and pressure gradient in the taylor - green vortex domain are tabulated in table 3 . for different levels of coarsening , the location changes ( except for the pressure error , which remains unchanged ) .however , it is always near the boundaries of either the ring or the square . in particular, the maximum errors for the velocity and pressure gradient fields occur on the edge of the square for ( ) .thus , even using the prolongation operator of the ife - cgp method , the maximum errors still occur around the boundaries with velocity dirichlet conditions .lllll & & & & **g**p + & resolution & & & + 0 & 216576:216576 & ( 0.576623 , 0.808348 ) & ( 0.482527 , 0.110053 ) & ( 0.340604 , 0.695205 ) + 1 & 216576:54144 & ( 0.580109 , 0.809028 ) & ( 0.482527 , 0.110053 ) & ( 0.659396 , 0.304795 ) + 2 & 216576:13536 & ( 0.601023 , 0.813106 ) & ( 0.482527 , 0.110053 ) & ( 0.306953 , 0.664904 ) + 3 & 216576:3384 & ( 0.770613 , 0.977284 ) & ( 0.482527 , 0.110053 ) & ( 0.004544 , 0.243629 ) + from a general point of view , the spatial discretization of the advection - diffusion domain acts as a low - pass filter on the grid , and the poisson solver also acts as a pre - filtering process [ 16 ] . 
the cgp procedure specifically uses the belief in order to increase saving in computational time without negatively affecting the properly - resolved advection - diffusion field .a visual demonstration of these effects is displayed in figs .if the difference between the exact solution and the numerical result is considered as a noise over the domain , figure 4 and figure 5 show the noise distribution , respectively , for the pressure and velocity variables .for example , by switching the grid resolution from 216576:216576 ( see fig .4a and fig .5a ) to 216576:54144 ( see fig . 4b and fig .5b ) , the maximum absolute value of the pressure noise roughly changes from to , implying a 7.14% noise increment , whereas there is no significant noise enhancement for the velocity field .similar behavior is observed for further levels of the poisson grid coarsening .for instance , as it can be seen in fig . 4d and fig .5d , the noise of the pressure and velocity domains with three levels of coarsening increases , respectively , to 85.71% and 33.33% , demonstrating that the low - pass filtering feature of the discretization of the advection - diffusion part of the algorithm is why the large errors of the pressure field do not get transmitted to the velocity field in cgp methods .* remark 3.1 * _ ( on the error order of the non incremental pressure - correction scheme)_. by calculating the slope of the discrete norms for the standard algorithm results , it is induced that the order of the overall accuracy is not fully a function of the spatial resolution .this is because the time error is dominant over the spatial step in the non - incremental pressure correction scheme with dirichlet boundary conditions [ 1 ] .numerically , in order to remove the time error , very small time intervals should be taken .for instance , this time error becomes invisible if the time step is chosen to be s for the simulation with a 3384:3384 spatial resolution .this severe condition is not odd because the fluid domain is surrounded by velocity dirichlet boundaries from both the outside and inside .however , our computational resources do not allow us to choose such a tiny time increment for this test problem .note that this discussion is not directly relevant to the topic of cgp methodology , but it can be concluded that a multigrid method ( e.g. , the cgp technique ) provides optimal results in the presence of velocity dirichlet boundary conditions if users take the incremental pressure - correction schemes introduced in refs [ 1 , 6 , 27 , 44 , and 45 ] . 
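To make Remark 3.1 slightly more quantitative, assume the usual first-order-in-time splitting estimate for the non-incremental scheme and a nominally second-order spatial error for the linear elements; the constants below are problem-dependent and only indicative. The overall velocity error then behaves like
\[ \|u(t_n)-u_h^n\|_{L^2} \;\lesssim\; C_1\,\Delta t + C_2\,h^2 . \]
Because the runs above keep the CFL number fixed, the time step is halved whenever the mesh size is halved, so the first term decays only linearly and dominates the quadratic spatial term. The spatial convergence rate becomes visible only once the time step is reduced to the order of the squared mesh size, which is precisely the impractically small time increment mentioned in the remark.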
to study the ife - cgp algorithm efficiency in the presence of open boundary conditions , the flow over a backward facing step inside a channel is analyzed .6 presents the problem geometry and imposed boundary conditions .because an inlet channel upstream significantly affects the flow simulation at low reynolds number [ 46 ] , the inflow boundary is located at the step and is described by a parabolic profile : the origin of the coordinate system is placed in the lower left corner of the step .homogeneous natural neumann conditions are enforced at the exit .the reynolds number is calculated as where is the channel height and represents the space - averaged mean entrance flow velocity .although the stress free boundary condition is less restrictive than dirichlet type boundary conditions at the outflow [ 27 ] , it leads to the increased the possibility of a loss in spatial accuracy [ 6 ] .this unfavorable situation is mainly due to imposing artificial homogenous dirichlet boundary conditions in the pressure poisson solver [ 6 ] . on the other hand , because the ife - cgp technique reduces the number of pressure unknowns at the outflow , it is one of the current research interests to check whether the ife - cgp methodology provides a valid solution . for this purpose, the prediction of the reattachment length with respect to is plotted in fig .7 . the obtained results for one ( ) and two ( ) levels of coarseningreveal good agreement with the numerical data of kim and moin [ 47 ] , and erturk [ 48 ] . at ,the reattachment length just differs from that reported by erturk [ 48 ] about 0.7% and 4.0% , respectively , for the ife - cgp ( ) and ife - cgp ( ) algorithms .however , the ife - cgp ( ) approach vastly overestimates the reattachment length after the reynolds number has reached the value 300 . as a function of reynoldsnumber.,width=84 ] to save space , detailed results are only presented for a reynolds number of .the time step is chosen to be s. based on our numerical experiments , the flow field reaches stationarity at s for the regular fine computations .the results for all other options are reported at this fluid flow time .figure 8 depicts the axial velocity contour maps of the flow simulated using both the normal and the ife - cgp processes .additionally , the efficiency and accuracy of the velocity field for the standard algorithm and the ife - cgp approach are compared in table 4 .flow velocity variables with one and two levels of coarsening agree well with the fine scale standard computations .significantly , a 30-times reduction in computational cost of the ife - cgp solution with the 102400:1600 resolution is obtained , as the corresponding velocity error norms are still in the acceptable range . as can be seen from fig .9a , interestingly , the advection - diffusion solver diverges for a simulation with the 1600:1600 resolution .this behavior will be discussed comprehensively in sect .llllll & resolution & & & cpu ( s ) & speedup + 0 & 102400:102400 & - & - & 3447941&1.000 + 1 & 102400:25600 & 6.69447e & 7.88459e&173052&19.924 + 2 & 102400:6400 & 1.43997e & 4.87393e&120500&28.613 + 3 & 102400:1600 & 4.4854e & 2.23362e&116745&29.534 + the cpu times devoted to the restriction / prolongation operators are tabulated in table 5 . 
by increasing the coarsening level ,the prolongation operator becomes more expensive , whereas the time consumed by means of the restriction function decreases .to explain this fact , let s consider , for instance , the data mapping procedure of the ife - cgp strategy on the 102400:1600 grid resolution .the restriction operator directly injects the intermediate velocity field data from a grid with 102400 elements into the corresponding coarse grid with 1600 elements . from a programming point of view, this operation needs only 905 loops , which is the pressure node numbers of the coarse grid .the prolongation operator , in contrast , has to utilize two intermediate grids , associated with and , in order to extrapolate the pressure domain data . to handle this transformation ,the prolongation operation requires 68659 loops , which is the summation of grid points belonged to the two intermediate as well as the finest meshes .as can be seen from table 5 , the ife - cgp method slightly increases the time spent on the preprocessing block .even though the and matrix assembling process is computationally cheaper in comparison with the standard algorithm , the mapping operator constructions ultimately overcome these savings .that is , the ratio is greater than 1.00 in table 4 .appling amg tools in order to establish the and matrices in the ife - cgp technique is a way to optimize the preprocessing subroutine costs .llllll & resolution & restriction ( s ) & prolongation ( s ) & preprocessing ( s ) & ( ) + 0 & 102400:102400 & - & - & 407.471&1.000 + 1 & 102400:25600 & 0.89 & 3.11 & 698.441&1.714 + 2 & 102400:6400 & 0.21 & 3.67&713.521&1.751 + 3 & 102400:1600 & 0.05 & 3.64&725.521&1.780 + the concern of this section is a study of the effect of curvature on the ife - cgp method .although this capability of the method has been already investigated for decaying vortices in curved geometries in sect .3.1 , an external unsteady flow over a cylinder is a more physically meaningful benchmark case [ 49 ] .san and staples [ 16 ] have performed this fluid mechanics problem by means of the cgprk3 solver , but for steady - state flows , at , and exclusively using one level of coarsening ( ) . here , the ife - cgp methodology with three levels of coarsening is applied to this canonical flow problem at .the computational field is considered as a rectangular domain \times[0 , 32]$ ] .a circle with diameter represents the cylinder in two dimensions , and its center lies at the point ( 8 , 16 ) .a free stream velocity parallel to the horizontal axis is imposed at the inflow boundary .the circle is treated as a rigid boundary and no - slip conditions are enforced .the velocity at the top and bottom of is perfectly slipped in the horizontal direction .the outflow velocity is specified with a natural neumann condition , eq .the reynolds number is determined as to set this dimensionless number to , the density , cylinder diameter , and free stream velocity are set to 1.00 ; and the viscosity is set to 0.01 in the international unit system .the described geometry and boundary conditions are taken from the literature [ 50 , 51 ] to satisfy far - field assumptions . a fixed time increment of sis selected and the numerical experiments are executed until time s. 
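As a quick arithmetic check on this set-up, the property values quoted above fix the Reynolds number at
\[ Re \;=\; \frac{\rho\, u_\infty\, D}{\mu} \;=\; \frac{1.00\times 1.00\times 1.00}{0.01} \;=\; 100 , \]
a regime of laminar periodic vortex shedding, which is consistent with the Strouhal numbers and force coefficients reported in Table 6 below.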
the grids utilized by the poisson solver for , , , and are like those shown in fig .10 , with 108352 nodes and 215680 elements , 27216 nodes and 53920 elements , 6868 nodes and 13480 elements and 1749 nodes and 3370 elements , respectively , with very fine grid spacing near the circle . a visual comparison between the obtained vorticity fields with and without the ife - cgp methodis made in fig .11 for several spatial resolutions at time s. for all levels of coarsening , the ife - cgp field provides more detailed data compared to that modeled with a full coarse grid resolution .a comparison between the resulting vorticity fields with the resolutions of 215680:215680 and 13480:13480 demonstrate that the phases of periodic variation of these two fields are not equal to each other .conversely , the fields computed by 215680:215680 and 215680:13480 oscillate with the same phase .however , there is a phase lag between the outcomes with 215680:215680 and 215680:53920 or 215680:3370 mesh resolutions .our numerical experiments show that the phase lag between the standard and ife - cgp results depends on the time step chosen .for instance , the velocity field phases , and consequently the vorticity ones , are the same in both the simulations performed by ife - cgp ( ) and ife - cgp ( ) tools when s. to more precisely analyze the ife - cgp algorithm s performance , velocity and vorticity distributions along the horizontal centerline , behind the cylinder and in the wake region , are shown in fig . 12 at time s. by coarsening the advection - diffusion mesh at the exit of the fluid domain , the results of the 53920:53920 resolution includes spurious fluctuations .these fluctuations become stronger in the pure coarse case with the 13480:13480 resolution .however , they are successfully removed using the ife - cgp approach .the computational times for the performed simulations are : 491068.2 s , 70809.2 s , 65492.7 s , and 56251.1 s , respectively for ( 215680:215680 ) , ( 215680:53920 ) , ( 215680:13480 ) , and ( 215680:3370 ) , leading to speedup factors between 6.935 and 8.730. table 6 lists the calculated strouhal number ( ) , drag ( ) and lift ( ) coefficients compared with the experimental and numerical studies presented in [ 5255 ] .the strouhal number is based on the time evolution of the blunt body lift , and is formulated as : where is the shedding frequency .the data presented in table 6 demonstrates that as the coarsening level ( ) increases , the drag and lift forces slightly decrease and increase , respectively .however , they still agree well with the values found in the literature . because the ife - cgp method computes the pressure variable on a coarsened mesh , there is a concern it might reduce the accuracy of the drag and lift coefficients . to check this issue ,the time evolution of the viscous ( ) , pressure ( ) , and total lift coefficients are plotted separately in fig . 13 .interestingly , the viscous and pressure lift diagrams for the full fine scale and the ife - cgp ( ) simulations become nearly identical after approximately time s ; nevertheless , the ife - cgp configuration provides the maximum magnitude of the lift force two periods of the flow cycle earlier . even though a numerical computation performed on a pure coarse grid degenerates the viscous lift coefficient , choosing one level of coarsening for the ife - cgp mechanism does not influence the viscous force integrity . 
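For completeness, the standard bluff-body definition assumed here is
\[ St \;=\; \frac{f_s\, D}{u_\infty} , \]
with the shedding frequency f_s extracted from the periodic lift history; since D and u_\infty are both set to 1.00, the Strouhal numbers listed in Table 6 are numerically equal to the shedding frequencies.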
for two further levels of the poisson grid coarsening , spurious fluctuations can be observed at the beginning of the fluid flow simulation . for the ife - cgp ( ) results ,when the simulation is using the 215680:3370 grid resolution , the lift coefficient reaches its final amplitude at approximately time s , roughly 60 s earlier than the standard full fine scale computations .surprisingly , the lift force obtained by full coarse scale ( 3370:3370 ) computations oscillates around approximately 0.2 ( instead of 0.0 ) , showing the flow field is under - resolved .contrarily , the ife - cgp ( ) lift force is more reliable since it fluctuates around 0.0 .llll study & & c & c + braza et al .[ 52 ] & 0.160 & 1.364.015 & .25 + liu et al .[ 53 ] & 0.165 & 1.350.012 & .339 + hammache and gharib [ 54 ] & 0.158 & - & - + rajani et al . [ 55 ] & 0.156 & - & - + ife - cgp ( ) & 0.156 & 1.258.006 & .217 + ife - cgp ( ) & 0.152 & 1.223.006 & .250 + ife - cgp ( ) & 0.149 & 1.168.034 & .344 + the magnitude of the centerline pressure and its gradient along the -axis in the wake region are shown in fig .14 for the full fine ( 215680:215680 ) , ife - cgp ( , and 2 ) ( 215680:53920 and 215680:13480 ) , and full coarse ( 53920:53920 and 13480:13480 ) simulations at time s. this figure reveals a key feature of the cgp methodology .the pressure magnitude estimated by the ife - cgp ( , and 2 ) method is far from that computed with the full - fine scale resolution .comparably , the pressure magnitudes obtained using the standard algorithm with the full fine and coarse resolutions are relatively close to each other . notwithstanding this ,the pressure gradient magnitude of the full fine - scale and the ife - cgp ( , and 2 ) computations are indistinguishable for most of the domain ( although with a phase lag in the case of one level coarsening ) . as discussed earlier , in contrast with the implicit pressure quantity , the pressure gradient is the relevant variable in the incompressible navier - stokes equations [ 27 ] .hence , it does not matter if the ife - cgp strategy does not retain absolute pressure values close to the full fine results .importantly , it calculates the pressure gradients with a high level of accuracy and significantly better than those that are solely computed on a coarse grid .so far we have emphasized the fact that spurious oscillations of the velocity and vorticity fields are removed by ife - cgp .now by looking at fig .14 , we see that this is also the case for the pressure fields .although both the ife - cgp ( , and 2 ) and full coarse ( 53920:53920 and 13480:13480 ) simulations solve the pressure poisson equation on the same coarse mesh , the artificial oscillations at the end of domain are removed only in the case of the ife - cgp outputs .thus , for the same number of degrees of freedom , when the pressure poisson equation is fed with a smoother intermediate velocity field , the outcome pressure filed is also smoother .lastly , there is a noticeable difference between the pressure gradient magnitude of ife - cgp and full fine scale computations near m. this difference comes from homogenous artificial dirichlet boundary condition for the pressure ( ) . 
as can be seen from fig .14a and fig .14c , the pressure of ife - cgp falls sharply near m to satisfy this boundary condition .it is conjectured that this issue would be eliminated by switching from non - incremental pressure projection methods to incremental ones .let s consider a condition that the standard numerical simulation diverged for the case due to a relative high reynolds number or too coarse a mesh . or the results obtained with a grid resolution are not sufficiently resolved and a fluid field with more detailed information is needed . the standard approach to resolving these common issues in projection methods is to refine both the advection - diffusion and the poisson grids .in contrast with this approach , the cgp strategy suggests refining the advection - diffusion grid , without changing the resolution of the poisson mesh . to be more precise from a terminology point of view, cgp does not propose a new mesh refinement method ; however , it guides users to implement available mesh refinement techniques for the grids associated with the nonlinear equations . here, we describe the concept by showing two simple examples .consider the simulation of the flow over a backward - facing step .let s assume that one is interested in the flow information at ; however , due to wall clock time or computational resource limitations , he is not able to run a simulation with the required pure fine 102400:102400 grid resolution . on the other hand , because a coarse 1600:1600 resolution is not high enough , the simulation diverges after 96 time steps , as depicted in fig .9a . the ife - cgp framework with an intermediate resolution of 102400:1600 provides a converged solution as shown in fig .the relative velocity error norms with reference to the full fine simulation are of order .furthermore , the normalized reattachment length can be estimated around 14 .although this estimation is slightly different from that obtained by the standard computations , it is captured 30 times faster .note that these results are achieved by only refining the advection - diffusion equation solver mesh . in this case , because the simulation on the coarse grid diverges , there is no real number for ; however , if a virtual considered , .let s reconsider the flow over a circular cylinder computations described in sect .3.3 . coming back to the lift coefficient graphs , depicted in fig. 13 , and with an emphasis on the ife - cgp ( ) outputs , another interpretation of these results is discussed here .let s assume an exact measurement of the lift coefficient is needed for a specific engineering purpose .using standard methods , this can be accomplished using either 215680:215680 or 53920:53920 grid resolutions .an implementation with the finer grid produces a more precise answer .it could be a user s incentive to locally / globally refine the full coarse mesh .obviously , this mesh refinement ends in an increase in cpu time for the processing part of the simulation . in this case ,our numerical experiments show that the increase is equal to 339271.6 s ( over 94 hr ) .as mentioned earlier , having a coarse mesh only degrades the level of the viscous lift accuracy .in fact , instead of refining the grids associated with both the nonlinear and linear equations , a mesh refinement of the nonlinear part is enough alone .hence , in order to increase the precision of the lift force , one can refine the advection - diffusion grid and keep the resolution of the poisson mesh unchanged . 
in this case, the ife - cgp grid refinement cost factor is , whereas this factor for the regular mesh refinement is , illustrating a considerable saving of computational resources . as a last point ,obviously the types of two dimensional flow simulations described here are not challenging computation problems today .these problems are merely used as examples to explain one of the features of the ife - cgp algorithm .practical applications of the ife - cgp mechanism as a mesh refinement tool are expected to be useful for three dimensional flow simulations on parallel machines .figures 15a15d compare the cpu times consumed by various components of the processing segment for four test cases using different boundary conditions .the taylor - green vortex , flow over a backward - facing step , and flow past a cylinder are modeled by the ife - cgp approach , while the double shear layer problem is simulated by the cgprk3 technique [ 16 ] . regarding the ife - cgp strategy , by the coarsening level increment , the poisson equation price lessens dramatically so that its portion becomes less than 5% after just one level of coarsening .for that reason , a considerable speedup can not be achieved after .a similar trend occurs in application of the cgprk3 method with the difference that the major reduction in the poisson solver cost is obtained at .the significant difference between the ife - cgp and cgprk3 algorithms is associated with the computational expense of the data transfer between the advection - diffusion and the poisson grids . according to fig .15d , the cgprk3 method allocates more cpu resources to mapping the data for each subsequent level of coarsening , and finally the mapping process costs overcome the poisson equation costs at .in contrast , this charge never exceeds 0.004% of the total algorithm computational cost using the ife - cgp strategy . concerning influences of the boundary condition ,the maximum speedup is achieved when the outflow velocity in the backward - facing step problem is treated as a stress - free condition , and the lowest acceleration of the computations is observed when velocity dirichlet boundary conditions are enforced for the taylor - green vortex simulation .the speedups when using periodic boundary conditions are in between these two extremes .this difference is expected because solving a poisson equation with spurious homogeneous dirichlet boundary conditions requires more computational effort in comparison with pressure neumann conditions [ 56 ] . in terms of the accuracy level, the cgp class of methods retains the velocity field data close to that of full fine grid resolution simulations in the presence of less restrictive velocity boundary conditions , such as open and periodic ones .however , the cgp approach with pure velocity dirichlet conditions results in a dampened velocity field , which has been also reported by lentine et al . [ 15 ] . in the case of euler s equations , where the viscous term is neglected and the flow is allowed to slip on the solid surfaces , the cgp method acquires much less damped flows . for the same reason ,the damping phenomenon disappears when the cgp solver is run at high reynolds numbers .the cgp method is a new multigrid technique applicable to pressure projection methods for solving the incompressible navier - stokes equations . 
in the cgp approach , the nonlinear momentum equation is evolved on a fine grid , and the linear pressure poisson equation is solved on a corresponding coarsened grid . mapping operators transfer the data between the grids . hence , one can save a considerable amount of cpu time by reducing the resolution of the pressure field while maintaining excellent to reasonable accuracy , depending on the level of coarsening . in this article , we proposed a semi-implicit-time-integration finite-element version of cgp ( ife-cgp ) . the newly added semi-implicit time integration feature enabled cgp to run simulations with large time steps , and thus further accelerated the computations compared to the standard / previous cgp algorithms . the new data structure introduced resulted in nearly zero computational cost for the mapping procedures . using the finite element discretization , cgp was adapted to be suitable for complex geometries and realistic boundary conditions . moreover , the mapping functions were conveniently derived from the finite element shape functions . in order to examine the efficiency of the ife-cgp method , we solved three standard test cases : the taylor-green vortex in a non-trivial geometry , flow over a backward-facing step , and flow past a circular cylinder . the speedup factors ranged from 1.601 to 29.534 . the minimum speedup belonged to the taylor-green vortex problem with velocity dirichlet boundary conditions , while the maximum speedup was found for the flow over a backward-facing step with open boundary conditions . generally , the outputs for one and two levels of the poisson grid coarsening agreed well with those computed using full fine scale computations . for three levels of coarsening , however , only a reasonable level of accuracy was achieved . the mesh refinement application of the cgp method was introduced for the first time in this work . based on it , if for a given spatial resolution the numerical simulation diverges or the velocity outputs are not accurate enough , instead of refining both the advection-diffusion and the poisson grids , the ife-cgp mesh refinement suggests refining only the advection-diffusion grid and keeping the poisson grid resolution unchanged . the application of the novel mesh refinement tool was shown in the cases of flow over a backward-facing step and flow past a cylinder . for the backward-facing step flow , a three-level partial mesh refinement made a previously diverging computation numerically stable . for the flow past a cylinder , the error of the viscous lift force was reduced from 31.501% to 7.191% ( with reference to the standard mesh refinement results ) by the one-level partial mesh refinement technique . we showed that the prolongation operator of ife-cgp did not thicken the artificial layers that arose from the artificial neumann boundary conditions . additionally , we demonstrated that although cgp reduces the accuracy level of the pressure field , it conserves the accuracy of the pressure gradient , a key to the efficacy of the cgp method .
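to make the data flow of a single cgp pressure projection step concrete , the sketch below implements the idea on a doubly periodic finite difference grid with an fft poisson solver . this is only a stand-in for the finite element machinery of ife-cgp : the grid sizes , the block-average restriction , the bilinear prolongation and the fft solver are illustrative choices rather than the authors' implementation , but the sequence of operations is the one described above : restrict the fine-grid divergence , solve the pressure poisson equation on the coarse grid , prolong the pressure back to the fine grid , and correct the velocity there .

```python
import numpy as np

def restrict(f):
    """2x block average from a fine periodic grid to the coarse grid."""
    return 0.25 * (f[::2, ::2] + f[1::2, ::2] + f[::2, 1::2] + f[1::2, 1::2])

def prolong(pc):
    """bilinear prolongation from the coarse periodic grid to the fine grid."""
    nf = 2 * pc.shape[0]
    pf = np.zeros((nf, nf))
    pf[::2, ::2] = pc
    pf[1::2, ::2] = 0.5 * (pc + np.roll(pc, -1, axis=0))
    pf[::2, 1::2] = 0.5 * (pc + np.roll(pc, -1, axis=1))
    pf[1::2, 1::2] = 0.25 * (pc + np.roll(pc, -1, 0) + np.roll(pc, -1, 1)
                             + np.roll(np.roll(pc, -1, 0), -1, 1))
    return pf

def poisson_fft(rhs, h):
    """solve laplacian(p) = rhs on a periodic grid of spacing h (zero-mean p)."""
    n = rhs.shape[0]
    k = 2.0 * np.pi * np.fft.fftfreq(n, d=h)
    kx, ky = np.meshgrid(k, k, indexing="ij")
    k2 = kx**2 + ky**2
    k2[0, 0] = 1.0                       # avoid division by zero; mean mode fixed below
    p_hat = -np.fft.fft2(rhs) / k2
    p_hat[0, 0] = 0.0                    # fix the pressure gauge (zero mean)
    return np.real(np.fft.ifft2(p_hat))

def cgp_projection_step(u_star, v_star, dt, h):
    """one cgp-style projection: fine-grid velocity, coarse-grid poisson solve."""
    # divergence of the intermediate velocity on the fine grid (central differences)
    div = ((np.roll(u_star, -1, 0) - np.roll(u_star, 1, 0))
           + (np.roll(v_star, -1, 1) - np.roll(v_star, 1, 1))) / (2.0 * h)
    # restriction: feed the poisson equation with the coarsened right-hand side
    p_coarse = poisson_fft(restrict(div) / dt, 2.0 * h)
    # prolongation: bring the pressure back to the fine grid
    p_fine = prolong(p_coarse)
    # velocity correction with the fine-grid pressure gradient
    dpdx = (np.roll(p_fine, -1, 0) - np.roll(p_fine, 1, 0)) / (2.0 * h)
    dpdy = (np.roll(p_fine, -1, 1) - np.roll(p_fine, 1, 1)) / (2.0 * h)
    return u_star - dt * dpdx, v_star - dt * dpdy, p_fine

# minimal usage on an arbitrary intermediate velocity field
rng = np.random.default_rng(0)
n, h, dt = 64, 1.0 / 64, 1e-2
u_s, v_s = rng.standard_normal((n, n)), rng.standard_normal((n, n))
u, v, p = cgp_projection_step(u_s, v_s, dt, h)
```

in ife-cgp the transfer operators are instead derived from the finite element shape functions of the corresponding meshes , and the poisson solve is performed on the coarsened unstructured grid ; only the order of the steps is preserved by the sketch .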
for future studies , the contribution of different incremental pressure projection schemes ( such as standard [44] , rotational [27] , and vectorial forms [45] ) to the cgp methodology will be analyzed . it is conjectured that because the above mentioned methods diminish the errors introduced by spurious neumann conditions , applying a cgp technique to the incremental projection methods can lead to undamped flows in comparison with applying the cgp strategy to non-incremental pressure correction computations . another objective of our future research is to perform a comparison between the cgp approach with one level of coarsening ( ) and the standard finite element algorithm with taylor-hood mixed finite elements ( p2/p1 ) ( see e.g. , refs . [1, 6] ) . from a grid resolution point of view , for an assumed number of grid points of the velocity component , the poisson solver utilizes a space with an equal number of pressure nodes , discretized using either the ife-cgp ( ) method or taylor-hood elements . in this sense , a detailed investigation of the similarities / differences between these two concepts may introduce novel mapping functions for the ife-cgp tool .

[1] guermond j, minev p, shen j (2006) an overview of projection methods for incompressible flows. computer methods in applied mechanics and engineering 195(44):6011-6045
[2] chorin aj (1968) numerical solution of the navier-stokes equations. mathematics of computation 22(104):745-762
[3] temam r (1969) sur l'approximation de la solution des équations de navier-stokes par la méthode des pas fractionnaires (ii). archive for rational mechanics and analysis 33(5):377-385
[4] shen j (1992) on error estimates of projection methods for navier-stokes equations: first-order schemes. siam journal on numerical analysis 29(1):57-77
[5] hugues s, randriamampianina a (1998) an improved projection scheme applied to pseudospectral methods for the incompressible navier-stokes equations. international journal for numerical methods in fluids 28(3):501-521
[6] jobelin m, lapuerta c, latché j-c, angot p, piar b (2006) a finite element penalty-projection method for incompressible flows. journal of computational physics 217(2):502-518
[7] fedkiw r, stam j, jensen hw (2001) visual simulation of smoke. in: proceedings of the 28th annual conference on computer graphics and interactive techniques. acm, pp 15-22
[8] korczak kz, patera at (1986) an isoparametric spectral element method for solution of the navier-stokes equations in complex geometry. journal of computational physics 62(2):361-382
[9] guermond j-l, minev p, shen j (2005) error analysis of pressure-correction schemes for the time-dependent stokes equations with open boundary conditions. siam journal on numerical analysis 43(1):239-258
[10] reusken a (1995) fourier analysis of a robust multigrid method for convection-diffusion equations. numerische mathematik 71(3):365-397
[11] filelis-papadopoulos ck, gravvanis ga, lipitakis ea (2014) on the numerical modeling of convection-diffusion problems by finite element multigrid preconditioning methods. advances in engineering software 68:56-69
[12] gupta mm, kouatchou j, zhang j (1997) a compact multigrid solver for convection-diffusion equations. journal of computational physics 132(1):123-129
[13] gupta mm, kouatchou j, zhang j (1997) comparison of second- and fourth-order discretizations for multigrid poisson solvers. journal of computational physics 132(2):226-232
[14] zhang j (1998) fast and high accuracy multigrid solution of the three dimensional poisson equation. journal of computational physics 143(2):449-461
[15] lentine m, zheng w, fedkiw r (2010) a novel algorithm for incompressible flow using only a coarse grid projection. in: acm transactions on graphics (tog). acm, p 114
[16] san o, staples ae (2013) a coarse-grid projection method for accelerating incompressible flow computations. journal of computational physics 233:480-508
[17] san o, staples ae (2013) an efficient coarse grid projection method for quasigeostrophic models of large-scale ocean circulation. international journal for multiscale computational engineering 11(5)
[18] jin m, liu w, chen q (2014) accelerating fast fluid dynamics with a coarse-grid projection scheme. hvac&r research 20(8):932-943
[19] losasso f, gibou f, fedkiw r (2004) simulating water and smoke with an octree data structure. in: acm transactions on graphics (tog). acm, pp 457-462
[20] moin p (2010) fundamentals of engineering numerical analysis. cambridge university press
[21] gottlieb s, shu c-w (1998) total variation diminishing runge-kutta schemes. mathematics of computation of the american mathematical society 67(221):73-85
[22] gropp wd, kaushik dk, keyes de, smith bf (2001) high-performance parallel implicit cfd. parallel computing 27(4):337-362
[23] heys j, manteuffel t, mccormick s, olson l (2005) algebraic multigrid for higher-order finite elements. journal of computational physics 204(2):520-532
[24] becker r, braack m (2000) multigrid techniques for finite elements on locally refined meshes. numerical linear algebra with applications 7(6):363-379
[25] reitzinger s, schöberl j (2002) an algebraic multigrid method for finite element discretizations with edge elements. numerical linear algebra with applications 9(3):223-238
[26] sampath rs, biros g (2010) a parallel geometric multigrid method for finite elements on octree meshes. siam journal on scientific computing 32(3):1361-1392
[27] timmermans l, minev p, van de vosse f (1996) an approximate projection scheme for incompressible flow using spectral elements. international journal for numerical methods in fluids 22(7):673-688
[28] wanner g, hairer e (1991) solving ordinary differential equations ii, vol 1. springer-verlag, berlin
[29] reddy jn (1993) an introduction to the finite element method, vol 2. mcgraw-hill, new york
[30] jiang cb, kawahara m (1993) a three-step finite element method for unsteady incompressible flows. computational mechanics 11(5-6):355-370
[31] brezzi f (1974) on the existence, uniqueness and approximation of saddle-point problems arising from lagrangian multipliers. revue française d'automatique, informatique, recherche opérationnelle. analyse numérique 8(2):129-151
[32] babuška i (1973) the finite element method with lagrangian multipliers. numerische mathematik 20(3):179-192
[33] xu j, chen l, nochetto rh (2009) optimal multilevel methods for h(grad), h(curl), and h(div) systems on graded and unstructured grids. in: multiscale, nonlinear and adaptive approximation. springer, pp 599-659
[34] yserentant h (1986) on the multi-level splitting of finite element spaces. numerische mathematik 49(4):379-412
[35] bank re, xu j (1996) an algorithm for coarsening unstructured meshes. numerische mathematik 73(1):1-36
[36] hu j (2015) a robust prolongation operator for non-nested finite element methods. computers & mathematics with applications 69(3):235-246
[37] besson j, foerch r (1997) large scale object-oriented finite element code design. computer methods in applied mechanics and engineering 142(1):165-187
[38] bell n, garland m (2008) efficient sparse matrix-vector multiplication on cuda. nvidia technical report nvr-2008-004, nvidia corporation
[39] saad y, schultz mh (1986) gmres: a generalized minimal residual algorithm for solving nonsymmetric linear systems. siam journal on scientific and statistical computing 7(3):856-869
[40] van der vorst ha (2003) iterative krylov methods for large linear systems, vol 13. cambridge university press
[41] geuzaine c, remacle jf (2009) gmsh: a 3d finite element mesh generator with built-in pre- and post-processing facilities. international journal for numerical methods in engineering 79(11):1309-1331
[42] taylor g, green a (1937) mechanism of the production of small eddies from large ones. proceedings of the royal society of london. series a, mathematical and physical sciences 158(895):499-521
[43] alam jm, walsh rp, alamgir hossain m, rose am (2014) a computational methodology for two-dimensional fluid flows. international journal for numerical methods in fluids 75(12):835-859
[44] goda k (1979) a multistep technique with implicit difference schemes for calculating two- or three-dimensional cavity flows. journal of computational physics 30(1):76-95
[45] caltagirone j-p, breil j (1999) sur une méthode de projection vectorielle pour la résolution des équations de navier-stokes. comptes rendus de l'académie des sciences - séries iib - mechanics-physics-astronomy 327(11):1179-1184
[46] barton i (1997) the entrance effect of laminar flow over a backward-facing step geometry. international journal for numerical methods in fluids 25(6):633-644
[47] kim j, moin p (1985) application of a fractional-step method to incompressible navier-stokes equations. journal of computational physics 59(2):308-323
[48] erturk e (2008) numerical solutions of 2-d steady incompressible flow over a backward-facing step, part i: high reynolds number solutions. computers & fluids 37(6):633-655
[49] belov aa (1997) a new implicit multigrid-driven algorithm for unsteady incompressible flow calculations on parallel computers.
[50] behr m, hastreiter d, mittal s, tezduyar t (1995) incompressible flow past a circular cylinder: dependence of the computed flow field on the location of the lateral boundaries. computer methods in applied mechanics and engineering 123(1):309-316
[51] ding h, shu c, yeo k, xu d (2004) simulation of incompressible viscous flows past a circular cylinder by hybrid fd scheme and meshless least square-based finite difference method. computer methods in applied mechanics and engineering 193(9):727-744
[52] braza m, chassaing p, minh hh (1986) numerical study and physical analysis of the pressure and velocity fields in the near wake of a circular cylinder. journal of fluid mechanics 165:79-130
[53] liu c, zheng x, sung c (1998) preconditioned multigrid methods for unsteady incompressible flows. journal of computational physics 139(1):35-57
[54] hammache m, gharib m (1989) a novel method to promote parallel vortex shedding in the wake of circular cylinders. physics of fluids a: fluid dynamics (1989-1993) 1(10):1611-1614
[55] rajani b, kandasamy a, majumdar s (2009) numerical simulation of laminar flow past a circular cylinder. applied mathematical modelling 33(3):1228-1247
[56] wang zj (1999) efficient implementation of the exact numerical far field boundary condition for poisson equation on an infinite domain. journal of computational physics 153(2):666-670
|
coarse grid projection ( cgp ) methodology is a novel multigrid method for systems involving decoupled nonlinear evolution equations and linear elliptic poisson equations . the nonlinear equations are solved on a fine grid and the linear equations are solved on a corresponding coarsened grid . mapping operators execute data transfer between the grids . the cgp framework is constructed upon spatial and temporal discretization schemes . this framework has been established for finite volume / difference discretizations as well as explicit time integration methods . in this article we present for the first time a version of cgp for finite element discretizations , which uses a semi - implicit time integration scheme . the mapping functions correspond to the finite - element shape functions . with the novel data structure introduced , the mapping computational cost becomes insignificant . we apply cgp to pressure - correction schemes used for the incompressible navier - stokes flow computations . this version is validated on standard test cases with realistic boundary conditions using unstructured triangular meshes . we also pioneer investigations of the effects of cgp on the accuracy of the pressure field . it is found that although cgp reduces the pressure field accuracy , it preserves the accuracy of the pressure gradient and thus the velocity field , while achieving speedup factors ranging from approximately 2 to 30 . exploring the influence of boundary conditions on cgp , the minimum speedup occurs for velocity dirichlet boundary conditions , while the maximum speedup occurs for open boundary conditions . we discuss the cgp method as a guide for partial mesh refinement of incompressible flow computations and show its application for simulations of flow over a backward - facing step and flow past a cylinder .
|
mixed mode oscillations ( mmos ) , composed of well - defined periods of subthreshold oscillations ( stos ) separating spikes , appear in biological rhythms , neural dynamics , chemical oscillations , and network oscillators , with many models proposed to capture this phenomenon in these and other applications .one question in applications is the significance of the stos that is , do the interspike intervals ( isis ) and stos encode important information for the system ( e.g. , refs . or ) .an element in the answer is identifying the mechanisms that drive the stos .a challenge in this identification is that noisy time series generated by structurally different models can appear hardly distinguishable from each other , even quantitatively . experimentally observed behavior can be captured with different types of models with different noise levels , so that model identification and calibration requires consideration of several classes of models over wide parameter ranges .previous studies indicate that a minimum of three degrees of freedom is necessary to produce mmos in deterministic models ( see examples in ref . ) , while stochastic van der pol and fitzhugh - nagumo - type models illustrate how noise can drive mmos in simple 2d models .similarities in mmo signals often stem from an underlying hopf bifurcation and canard structure in the system . in periods where the system is in some sense close to the hopf bifurcation ,stos can be both supported and disrupted by noise . in these modelsit is possible to choose parameters so that the mmos are driven by the deterministic dynamics , while for other parameter choices the appearance of mmos relies on stochastic fluctuations . while on the one hand noise can make it difficult to distinguish between different models with various routes for producing stochastic mmos , it is nevertheless possible to use noise as a way to identify underlying mechanisms . in this paperwe compare a suite of measures at different noise levels to extract the structure of the model , and thus the mechanism for the stos and mmos .the choice of measures is partly motivated by the presence of multiple time scales , an important feature that allows both well - defined periods of stos and more than one mechanism for mmo generation . as described in more detail in the next section , the common substructure of the models we consider corresponds to a subsystem with a slowly varying control parameter .depending on the combination of the other model parameters , that control parameter can slowly vary through a hopf bifurcation , in which case the sto - spike transition exhibits the dynamics of bifurcation delay ( ref . and references therein ) .for other parameter combinations corresponding to quiescence in the deterministic system , noise excites subthreshold coherent oscillations with a frequency near that of the hopf bifurcation .the weak damping of these oscillations is due to the presence of multiple time scales , observed for parameters near a hopf bifurcation .these noise - induced stos are just one type of oscillation appearing in the context of coherence resonance ( cr ) . 
in its broadest sense cris observed when a range of noise levels excites coherent oscillations in a system that is quiescent without noise , with a maximum coherence at an optimal noise level .this phenomenon is most commonly cited for transitions between steady equilibria with large excursions , such as relaxation oscillations , but in this paper it refers to the noise - induced stos near a hb , also exhibiting cr - type behavior .whether the stos appear via slow passage through a hopf bifurcation ( sphb ) or via a cr - type mechanism , the impact of the noise on the mmos is concentrated in the interspike interval ( isi ) where the stos arise and eventually transition to a spike .it is no accident that this is also the interval where the slow dynamics are most prominent , and it is well known that noise can have a significant impact in such intervals .the suite of measures used in this paper focuses on the stos and the isi behavior , capitalizing on previous analytical and computational studies of noise sensitivity of sphb and cr - driven oscillations .these previous analyses show how noise can enhance or suppress the stos through interactions with the fast - slow dynamics inherent in the mmos , and these characteristics are captured by the behavior of the suite of measures .focused on the noise sensitive stos in the isi , these measures can isolate differences between models and identify mechanisms for the mmos as noise levels and bifurcation and control parameters vary .an additional advantage of these measures is that they are based on features that are easy to analyze in time series data , making them amenable for use on experimental and simulated data .furthermore , recognition of the multiple time scales of the mmos suggests ways to experimentally introduce fluctuations into the system as a tunable extrinsic noise .when scaled appropriately by considering the isi dynamics , this extrinsic noise can be used to mimic the effects of other intrinsic noise sources .then this introduction of noise , combined with the suite of measures , can be applied both to identify the likely mechanism for mmos and limit the variety of models one must consider for calibration .* summary of results * we use a combination of amplitude trend , isi density , and coherence measure based on power spectral density ( psd ) to classify and differentiate mechanisms for mmos .we compare phenomenological and physiological models to explore the types of sto and mmo behaviors that can be captured by simplified or reduced stochastic models over a range of noise levels .we then analyze these results within intra - model comparisons to get a deeper understanding of those mmo characteristics related to model structure , those dictated by deterministic dynamics , and those driven by noise .we focus particularly on cases where cr and/or sphb are key mechanisms . in the conclusionwe give a detailed list of identifying characteristics of mmo mechanisms obtained from this suite of measures .understanding the dynamical features reveals the sensitivity in model calibration on bifurcation structure and on model choice , and illustrates ways that noise can be used to help to differentiate between models . for stos of the cr - typewe observe longer tails in the isi density together with stronger peaks in the coherence measure vs. 
noise , and no trend in the average sto amplitude leading up to the spike .in contrast , sphb - dominated stos have different behaviors in all three measures .simplified low dimensional integrate and fire ( if ) models can be used to capture some sto and isi behavior over a range of noise levels through adjustment of the speed of the control variable and reset .however , they can not mimic differences observed from underlying bifurcation structure , particularly in the case where a combination of cr and sphb are at play .models with voltage - dependent control variables can have a rich variety of underlying deterministic behavior , allowing possibilities for both cr- and sphb - driven mmos .these differences can drive significantly different stochastic behavior in the isi and psd behavior as compared to simple if models .related to this last observation , our analysis also illustrates how refractory dynamics , dynamics following the spike at the initial stage of the isi , can influence isi density for increasing noise , and disrupt the coherence of the stos in mmos .the influence of the reset or return mechanism , as well as the underlying deterministic behavior , observed in the measures considered here is consistent with the influence of these modeling aspects observed for noise - driven clusters of spikes , that is , repeated spikes without stos in the isi .the article is organized as follows : in section [ sec : models ] we first give a brief overview of two mathematical models in the literature one physiological , the other phenomenological that have been used to explain mmos in neural systems , and discuss different mechanisms for mmos in these stochastic systems . in the same section we present a number of different computational measures for analyzing important characteristics of the time series in order to identify the underlying model structure . in section [ sec : analysis_inter ] we generate and classify mmos with the physiological model and find matching mmos generated by the phenomenological model .we then apply the measures to these time series and provide a set of features that in combination make the time series distinguishable .[ sec : analysis_intra ] and [ sec : intra_fhn ] focus on the comparison of time series generated by the same model with varying parameters , to further highlight differences in the computational measures that appear with different model or bifurcation structure .we also highlight the dramatic effect of weak noise on different families of mmos that are close in parameter space . in sec .[ sec : other_models ] we outline how tunable extrinsic noise can be introduced to mimic the effect of intrinsic noise , based on the inherent multiple time scales and the fact that the noise impact is concentrated in the isi .we also illustrate the applicability of our analysis to another mmo - generating model .we review the setup and dynamics of two models whose sto dynamics will be analyzed in this paper : a detailed biophysical model for mmos in stellate cells ( ` sc ' model ) as well as a modified fitzhugh - nagumo model ( ` mfn ' ) .the two main mmo - generating mechanisms in these noisy models are slow passage through a hopf bifurcation ( sphb ) and coherence resonance ( cr ) .we consider a biophysical model for the mmos observed experimentally in the layer ii stellate cells of the medial entorhinal cortex introduced in ref . 
it consists of a voltage equation that includes the three usual hodgkin-huxley currents for sodium , potassium and a leak current together with a persistent sodium current and a hyperpolarisation-activated current . together with the dynamical equations for six gating variables , this model is a system of seven coupled ordinary differential equations ( ` 7dsc ' ) , given in app . [ app : sc_eq ] . since we focus on the sto dynamics , for most of this paper we analyze a reduced , three-dimensional version ( 3dsc ) of the full model , introduced in ref . , consisting of only the equations for the voltage and two gating variables and ( in the equation for , we use the term from the original work ; see footnote in ref . ) . this 3dsc model provides a good approximation of the isi dynamics of the full 7d model . throughout this article , we refrain from giving units in the equations or for parameters ( see app . [ app : sc_eq ] for the original units and parameter values ) . the three equations considered for the reduced model are the voltage equation ( eq . [ eq : sc_v ] ) together with the two gating equations

\dot{r}_f = \left[ 1/[1+\exp((v+79.2)/9.78)] - r_f \right] / \left[ 0.51/[\exp((v-1.7)/10)+\exp(-(v+340)/52)] + 1 \right] , \qquad [ eq : sc_rf ]

\dot{r}_s = \left[ 1/[1+\exp((v+71.3)/7.9)] - r_s \right] / \left[ 5.6/[\exp((v-1.7)/14)+\exp(-(v+260)/43)] + 1 \right] . \qquad [ eq : sc_rs ]

eq . [ eq : sc_v ] contains a noise term representing white noise , corresponding to fluctuations in the persistent sodium current . according to ref . , this current gives the main stochastic contribution in stellate cells , due to the low number of channels . the model ( eqs . [ eq : sc_v][ eq : sc_rs ] ) is treated as an integrate and fire ( if ) model , where spiking and refractory dynamics of the full ( 7d ) model are replaced by a reset value once the trajectory of the system crosses a threshold defined in terms of . the value of reset is chosen such that the trajectory is placed close to a post-spike value of the full deterministic system ( , as in ref . ) . the deterministic ( ) version of this reduced model has been analyzed in detail , and there have also been studies of the noisy ( ) case . both the full 7d and the 3dsc deterministic system produce attracting states of either a steady state , mmos , or tonic spiking , depending on . in the 3dsc model these different long time patterns are observed for , , and , respectively . these values are slightly shifted in the 7d version of the model , e.g. , the hopf bifurcation value separating steady state solutions from mmos lies at when using the alternative eq . [ eq : sc_rs_58 ] for the dynamics of ( see app . [ app : sc_eq ] ) . as discussed in ref . , the 3dsc system can be viewed as a 2d subsystem in - with slowly varying control parameter . in the nondimensionalized version of the 3dsc model analyzed in ref . , the ratio of time scales for and is approximately 0.02 , with the time scale for even slower than that of by a factor of . from that viewpoint , fig . [ fig : bifurcation_sc ] shows the relevant part of the bifurcation diagram ( with fixed ) of the underlying 2d subsystem with treated as the bifurcation parameter . for the parameters considered , there is a subcritical hopf bifurcation at and a canard transition at ( , for and , for obtained with xppaut ) .
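as a minimal sketch of how such a reduced model can be integrated as a stochastic integrate-and-fire system , the code below uses the gating equations [ eq : sc_rf ] and [ eq : sc_rs ] as written , an euler-maruyama step with white noise in the voltage equation , and a threshold / reset rule . the voltage right-hand side , the threshold and the reset value are placeholders ( the conductance-based form of eq . [ eq : sc_v ] and the parameter values are given in app . [ app : sc_eq ] and are not reproduced here ) , so the sketch illustrates the numerical treatment rather than the quantitative dynamics .

```python
import numpy as np

def rf_rhs(v, rf):
    """gating variable r_f, eq. [eq:sc_rf]."""
    rf_inf = 1.0 / (1.0 + np.exp((v + 79.2) / 9.78))
    tau_rf = 0.51 / (np.exp((v - 1.7) / 10.0) + np.exp(-(v + 340.0) / 52.0)) + 1.0
    return (rf_inf - rf) / tau_rf

def rs_rhs(v, rs):
    """gating variable r_s, eq. [eq:sc_rs]."""
    rs_inf = 1.0 / (1.0 + np.exp((v + 71.3) / 7.9))
    tau_rs = 5.6 / (np.exp((v - 1.7) / 14.0) + np.exp(-(v + 260.0) / 43.0)) + 1.0
    return (rs_inf - rs) / tau_rs

def v_rhs(v, rf, rs):
    """placeholder for the conductance-based voltage equation (eq. [eq:sc_v]);
    NOT the model: a leak-like stand-in so that the sketch runs as written."""
    return -(v + 65.0)

def simulate_if(sigma, t_max, dt=0.01, v_thresh=-20.0, v_reset=-60.0, seed=0):
    """euler-maruyama integration with an integrate-and-fire threshold/reset
    (threshold and reset values are illustrative only)."""
    rng = np.random.default_rng(seed)
    n = int(t_max / dt)
    v, rf, rs = -65.0, 0.05, 0.05            # illustrative initial conditions
    spike_times, trace = [], np.empty(n)
    for i in range(n):
        dw = rng.standard_normal() * np.sqrt(dt)
        v += dt * v_rhs(v, rf, rs) + sigma * dw   # white noise enters the voltage eq.
        rf += dt * rf_rhs(v, rf)
        rs += dt * rs_rhs(v, rs)
        if v >= v_thresh:                         # spike detected: apply the reset
            spike_times.append(i * dt)
            v = v_reset                           # reset near a post-spike state
        trace[i] = v
    return np.array(spike_times), trace

spikes, v_trace = simulate_if(sigma=0.5, t_max=2000.0)
```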
in the stochastic 3dsc model ( ) , mmos are observed for a broader range of values than listed above , generated by two mechanisms : a slow passage through a hopf bifurcation and coherence resonance . for intermediate values of and noise , mixtures of both of these mechanisms are observed .one of the purposes of this paper is to provide measures that characterize and identify these different mechanisms in various settings .the top panel of fig .[ fig : bifurcation_sc ] shows the stochastic dynamics of the 3dsc model when the deterministic dynamics corresponds to a sphb , or delay bifurcation .the dependent variable is attracted to a steady state for .as the variable slowly ramps through , the full system does not immediately make the transition to spiking . instead , there is a delayed transition , as stos increase gradually about the unstable steady state .it is well known from previous studies that for delay bifurcations , or more generally for certain systems with alternating slow / fast dynamics , certain characteristics of the slow dynamics are exponentially sensitive to noise .that is , for noise levels above those exponentially small in terms of the parameter of the slow time scale , the behavior of the slow dynamics is qualitatively changed by the noise , as has been discussed for examples of delay bifurcations and resonance and in other slow systems . in the example in fig .[ fig : bifurcation_sc ] the stochastic model typically has shorter intervals of stos as compared with the deterministic model , as even small noise typically drives faster transitions to spiking .the example in the bottom panel of fig .[ fig : bifurcation_sc ] corresponds to stos of the cr - type .the underlying deterministic system has a steady state corresponding to , so stos are damped . in the stochastic system coherence resonance ( cr )occurs when noise amplifies the stos in the presence of weak damping ( see refs ., , , and references therein ; note that this type of cr is different to what is described , e.g. , in refs . or ) .these stos have a frequency corresponding to that of the hopf bifurcation at of the 2d subsystem .results in ref . indicate that cr drives stos in the range of noise levels where coherence is optimal , that is , where the psd is relatively narrow with significant power .the analysis in ref .shows that the amplification factor of the stos is inversely proportional to the proximity to the hopf bifurcation , so that the stos are a prominent and prolonged feature in the isi for near .variability in the amplitude of these stos yields a nontrivial probability of spiking , leading to frequent appearance of mmos , even for if noise is strong enough . ):trajectory in the underlying 2d bifurcation diagram and time series .* top : * slow passage through a hb ( see class 1 later ; , ) ; * bottom : * stos of the cr - type ( see class 2 later ; , ) .the scale in the time series is 250 .a solid / dashed red line represents a stable / unstable fixed point .solid / open circles represent stable / unstable limit cycles .there is another stable steady state at .,title="fig : " ] ): trajectory in the underlying 2d bifurcation diagram and time series . 
*top : * slow passage through a hb ( see class 1 later ; , ) ; * bottom : * stos of the cr - type ( see class 2 later ; , ) .the scale in the time series is 250 .a solid / dashed red line represents a stable / unstable fixed point .solid / open circles represent stable / unstable limit cycles .there is another stable steady state at .,title="fig : " ] in ref ., a fitzhugh - nagumo - type ( fn ) model is given by the following pair of coupled nonlinear stochastic differential equations : we follow ref . and use , and , chosen to give a ratio of the time scales of the stos and the spikes ( relaxation oscillations ) that is close to that typically observed experimentally and in the sc model .the parameter is a control or bifurcation parameter .we refer to this model as the modified fn model ( mfn ) .taking , , adding a constant term in eq .[ eq : makarov_u ] and a -dependent term in eq .[ eq : makarov_v ] yields the general fn model . with , , and a simple rescaling eqs .[ eq : makarov_u ] and [ eq : makarov_v ] are the original van der pol ( vdp ) model .[ fig : bifurcation_ma ] shows the corresponding bifurcation diagram : a supercritical hopf bifurcation with a canard transition to large amplitude relaxation oscillations .depending on the value of the control parameter , the deterministic system ( ) takes one of three solutions : a stable steady state for where is the hopf bifurcation value , stable small amplitude oscillations ( stos ) for with the canard value ( see , e.g. , ref . ) , and large amplitude oscillations for values above the canard value . as discussed in ref ., applying noise to this system ( ) can produce mmos as noise drives the system between small and large amplitude oscillations .the influence of noise on the mmo dynamics of the vdp model has been studied recently in detail , providing scaling relationships between the proximity to the hopf value and the noise levels that lead to different types of behavior .here we use the model given by eqs .[ eq : makarov_u ] and [ eq : makarov_v ] as a basis for comparison with the biophysical model for two reasons .first , the use of gives control over the time scale and shape of the spike ( see above ) .the other important effect that the nonlinearity has on the system is a shift of the canard point , enlarging the parameter region of stable stos ( i.e. , ) by a factor of roughly 1.5 .this leads to mmos with well - defined stos over significant parameter ranges even in the presence of stronger noise values , in contrast to the vdp model studied in ref . .one other difference , compared to the vdp model , is that the choice of yields a less abrupt transition from stos to relaxation oscillations , allowing for a greater probability of observing spikes with reduced amplitude , an effect rarely observed in the sc model . 
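the explicit form of eqs . [ eq : makarov_u ] and [ eq : makarov_v ] is not reproduced in this excerpt , so the sketch below uses the classical slow-fast fitzhugh-nagumo form as a stand-in : for a control parameter a slightly beyond the hopf bifurcation the equilibrium is a weakly damped focus , and weak noise produces stos with occasional large excursions , i.e. , noise-driven mmos of the kind discussed here . the parameter values are illustrative and do not correspond to the mfn calibration used in the paper .

```python
import numpy as np

def simulate_fhn(a=1.05, eps=0.01, sigma=0.02, t_max=50.0, dt=1e-4, seed=1):
    """euler-maruyama for a classical slow-fast fitzhugh-nagumo system,
        eps * du = (u - u**3/3 - v) dt + sigma dW,    dv = (u + a) dt,
    used as a stand-in for the mfn model (illustrative parameters only).
    for a slightly above 1 the equilibrium is a weakly damped focus just past
    the hopf bifurcation, so noise excites stos and occasional spikes (mmos)."""
    rng = np.random.default_rng(seed)
    n = int(t_max / dt)
    u, v = -a, -a + a**3 / 3.0        # deterministic equilibrium as initial condition
    trace = np.empty(n)
    for i in range(n):
        du = dt * (u - u**3 / 3.0 - v) / eps \
             + (sigma / eps) * np.sqrt(dt) * rng.standard_normal()
        dv = dt * (u + a)
        u, v = u + du, v + dv
        trace[i] = u
    return trace

u_trace = simulate_fhn()
# crude spike count via an upward crossing of an illustrative threshold
spike_count = int(np.sum((u_trace[1:] >= 1.0) & (u_trace[:-1] < 1.0)))
```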
as in ref .we augment the mfn model ( eqs .[ eq : makarov_u ] and [ eq : makarov_v ] ) with a slow variation in the control parameter given by an additional dynamical equation for .the presence of a slowly varying control parameter is a feature common to a number of mmo models including the sc model , and is necessary to reproduce some of the features observed in if models .additional comments about the model of eqs .[ eq : makarov_u ] and [ eq : makarov_v ] relative to the augmented models are included in subsec .[ ssec : intra_lmfhn ] .the first augmented model we consider is an if model with a linear ramp for plus reset at a fixed threshold value for : once the trajectory surpasses , is reset to and we choose to reset and such that the system is on the nullcline ( and ) or very close to it ( see subsec .[ ssec : intra_lmfhn ] for a discussion of the dynamics when reset values are off of the nullcline ) .this system , which we reference as the linearly augmented modified fn model ( lmfn ) , generates only mmos within the parameter values considered .the stochastic lmfn model exhibits features of both a sphb and cr , depending on the value of as well as where the trajectory is reset .this allows for a comparison with simple if models , highlighting the effect of noise on the dynamics for different ramp speeds of the control parameter and reset values .the second augmentation of the mfn model considered is a nonlinear -dependent equation for with a functional form similar to that in the sc model : / \left[5\exp\left ( -\frac{u-0.8}{0.15}\right)+1\right ] .\label{eq : nlinb}\ ] ] this augmentation allows for a comparison of cases where the dependence of the control variable on results in qualitatively different stable behaviors for the deterministic system .[ fig : bifurcation_ma ] shows the three possible long time deterministic behaviors ( ) depending on the parameter , keeping the other constants in eq .[ eq : nlinb ] fixed . for there is a stable steady state , for intermediate values there are sustained stos , and for the attracting state is mmos .the number of sto periods between spikes depends on and on the other constants in eqs .[ eq : makarov_u ] , [ eq : makarov_v ] , and [ eq : nlinb ] .we refer to this augmentation of the mfn model as nonlinearly augmented modified fn ( nlmfn ) and in the noisy version of this model similar patterns as in the sc model can be observed . 
for noise can excite cr - type stos and for the trajectory passes through the underlying hb and canard transition thereby displaying features of the sphb .( see text ) : * top : * , stable steady state ; * middle : * , stable stos ; * bottom : * , stable mmos .the timescale in the time series is 2 .lines and symbols as of fig .[ fig : bifurcation_sc].,title="fig : " ] ( see text ) : * top : * , stable steady state ; * middle : * , stable stos ; * bottom : * , stable mmos .the timescale in the time series is 2 .lines and symbols as of fig .[ fig : bifurcation_sc].,title="fig : " ] ( see text ) : * top : * , stable steady state ; * middle : * , stable stos ; * bottom : * , stable mmos .the timescale in the time series is 2 .lines and symbols as of fig .[ fig : bifurcation_sc].,title="fig : " ] for the analysis of the mmos and specifically the stos in the isis , we use a number of measures : isi density , amplitude of the stos preceding a spike , power spectral distribution ( psd ) of the stos , and a coherence measure .the time series were generated by numerically integrating the coupled differential equations given in the preceeding subsections using a simple euler - forward routine with constant time step . for the output, it was ensured that the data was sampled at a high enough rate , such that the stos could clearly be identified ( ) .the density of the interspike intervals ( isi ) was obtained in the following way : in if - type models ( 3dsc and lmfn ) , isi duration is measured from the reset to the crossing of the threshold corresponding to initiation of the spike . the typically long refractory period without stos in the 3dsc modelwas removed . in spiking models ( 7dsc , nlmfn ) , it is measured as the time between two consecutive crossings of a threshold ( i.e. , the time between two spikes including a spike ) . to compare the different models ( with different time scales ), we measure the isi duration in , the approximate mean sto period . for simplicity, we use only one for each of the model versions and obtain these approximately from one time series .we use the following approximate values for : 3dsc : 107.5 , nlmfn : 0.47 , lmfn : 0.45 , but note that the actual average vary by about among the classes introduced in the next section .average isi duration is used as a coarse measurement to calibrate fn - type models to the biophysical ( sc ) model , and the isi density is used to compare models and parameter choices , including influence of the noise . a second characteristic of the stos used to calibrate fn - type models to the biophysical model is average sto amplitude trend before a spike .for this , the maxima of the low - pass filtered time series were extracted and averaged over many isis ( for details see app .[ app : measures ] ) . to obtain the power spectral distribution ( psd ) of the stos, the spikes were removed from the time series and it was ensured that the normalization of the psd is consistent within each model . from the psd , we compute a measure of coherence , namely where is the frequency of the peak in the psd corresponding to the stos , is its associated maximal power and is its full width at half maximum . since the different models use different units , a comparison of the absolute value of across models was not performed . 
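the following sketch illustrates how the isi density , the psd of the subthreshold part of a trace and a coherence measure of this type can be computed from a time series . the combination beta = h * f_p / delta_f of the peak height h , peak frequency f_p and full width at half maximum delta_f is the usual coherence factor built from exactly the three quantities named above ; since the formula itself is not reproduced in this excerpt , that combination , the threshold value and the crude spike removal are assumptions of the sketch .

```python
import numpy as np

def isi_from_trace(x, dt, thresh):
    """interspike intervals from upward threshold crossings of a time series."""
    up = np.flatnonzero((x[1:] >= thresh) & (x[:-1] < thresh))
    spike_times = (up + 1) * dt
    return np.diff(spike_times), spike_times

def psd_subthreshold(x, dt, thresh):
    """periodogram of the trace with supra-threshold (spike) samples clipped off."""
    y = np.where(x < thresh, x, thresh)        # crude spike removal for the sketch
    y = y - y.mean()
    spec = np.abs(np.fft.rfft(y))**2 * dt / len(y)
    freqs = np.fft.rfftfreq(len(y), d=dt)
    return freqs, spec

def coherence_beta(freqs, spec, f_min=0.0):
    """coherence factor beta = h * f_p / delta_f from the sto peak of the psd
    (peak height h, peak frequency f_p, full width at half maximum delta_f)."""
    sel = freqs > f_min                        # exclude dc / the low-frequency spiking peak
    f, s = freqs[sel], spec[sel]
    i_p = int(np.argmax(s))
    h, f_p = s[i_p], f[i_p]
    lo = i_p
    while lo - 1 >= 0 and s[lo - 1] >= 0.5 * h:
        lo -= 1
    hi = i_p
    while hi + 1 < len(s) and s[hi + 1] >= 0.5 * h:
        hi += 1
    delta_f = max(f[hi] - f[lo], f[1] - f[0])  # at least one frequency bin wide
    return h * f_p / delta_f

# usage, e.g. on the trace of the integrate-and-fire sketch above (illustrative threshold):
# isis, _ = isi_from_trace(v_trace, dt=0.01, thresh=-20.0)
# freqs, spec = psd_subthreshold(v_trace, dt=0.01, thresh=-20.0)
# beta = coherence_beta(freqs, spec, f_min=0.002)
```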
for details regarding the computation of the psd and , see app . [ app : measures ] . in this section we present an analysis of the characteristics of mmos with computational measures , focusing on the stos , produced by the different models as introduced in section [ sec : models ] . sto amplitude trend before a spike and average isi provide a coarse classification of the mmos in the subsections below . for each class of behavior the parameters of the 3dsc model are chosen to produce a type of sto observed in the 7dsc model , and the parameters for the fn-type models are chosen such that they reproduce the mean isi and amplitude trend similar to the 3dsc . the results illustrate features of the mmos that can be reproduced by both the biophysical and phenomenological models . the measures introduced in subsec . [ ssec : tools ] provide a finer comparison of the three broad classes . we introduce different noise levels to highlight the mathematical elements of both the underlying deterministic model and stochasticity that influence the mmo characteristics . in class 1 , we analyze mmos with short isis during which the amplitude of the stos increases . examples for this class of mmos are shown in fig . [ fig : bifurcation_sc ] ( top panel ) for the 3dsc model and fig . [ fig : mmo_timeseries_1 ] for the lmfn model . the comparison of the amplitude dynamics of the sto and isi density is shown in fig . [ fig : class1_isi ] , for 3dsc and lmfn , where the isi density and mean agree up to a shift due to different reset conditions . this class 1 behavior is not readily reproducible within the nlmfn model where the choices of needed for short isis lead to stos with a larger amplitude throughout the isi .
[ fig : mmo_timeseries_1 : lmfn time series of class 1 ( , , reset to ) ; the scale in the time series is 2 . ]
[ fig : class1_isi : top : for time series of class 1 , ( solid red line ) : 3dsc ( left ordinate ) , ( dotted black line ) : lmfn ( right ordinate ) ; bottom : densities of isi lengths for long ( 5500 spikes ) versions of the time series shown in figs . [ fig : bifurcation_sc ] ( top panel ) and [ fig : mmo_timeseries_1 ] , solid line ( red ) : 3dsc , dashed line ( blue ) : lmfn ; parameters : , ( 3dsc ) ; , , reset to ( lmfn ) . ]
as shown in sec . [ sec : models ] , the underlying dynamics for both the 3dsc and the lmfn model corresponds to a slow passage through a hopf bifurcation . in the sc model , the increasing amplitude of the stos is generated when the system spirals away from an unstable fixed point for values beyond the subcritical hb . in the lmfn model , the trajectory passes through the underlying supercritical hb at followed by an increasing sto amplitude , with escape from stos to a spike appearing for relatively large ( around 0.34 ) beyond the canard point . the significance of the speed of variation of in the lmfn model is discussed further in subsec . [ ssec : intra_lmfhn ] . the noise levels shown in figs .
[ fig : bifurcation_sc ] ( top panel ) and [ fig : mmo_timeseries_1 ] are among the lowest at which stochastic effects are observed , with the sto dynamics dominated by the deterministic trend of increasing amplitude . increasing the noise strengths has very similar effects in both models . typical for a sphb , increased noise drives an earlier escape , indicated by shorter isis ( fig . [ fig : class1_isi_noise ] ) and lower average amplitude before escape ( fig . [ fig : class1_amp_noise ] ) . differences due to the nature of the underlying hopf bifurcation ( sub-/supercritical ) are evident only in the amount of reduction of the amplitude of the stos . this reduction is more significant in the 3dsc , where the stos are unstable for the underlying 2d system with , as compared to the lmfn model , where the supercritical hopf bifurcation yields stable stos for . increased noise shifts the isi density to include peaks at shorter durations , consistent with earlier escape described above . contributions at longer isis are due to a small fraction of trajectories for which the noise disrupts the transition and lengthens the isi . class 1 behavior survives only for low to moderate noise levels , as larger noise regularly eliminates the stos . this is illustrated by the considerable spread of both the isi density and the psd ( fig . [ fig : class1_psd_noise ] ) for large noise levels .
[ figs . [ fig : class1_isi_noise ] and [ fig : class1_amp_noise ] : top : 3dsc ; bottom : lmfn ; the parameters are as of fig . [ fig : class1_isi ] . ]
[ fig : class1_psd_noise : top : 3dsc ( ) ; bottom : lmfn ( , reset to ) ; for details of how we computed and normalized the psd see app . [ app : measures ] . ]
the coherence measure is not clearly defined for mmos of class 1 , since the strong amplitude trend of the deterministic dynamics within the short isi periods yields psds with many well-defined peaks ( one of them at the sto frequency ; see fig . [ fig : class1_psd_noise ] ) . the sc model shows more peaks due to larger amplitudes of the stos across the isi . the strong peak at low frequencies comes from the spiking ( i.e. , thresholding and concatenation ) . next we analyze stos appearing in longer isis with a characteristic increasing amplitude before a spike . we consider two examples from the 3dsc model that have similar amplitude trends but different isi behavior in the presence of noise . as discussed in sec . [ sec : models ] the choice of for these two examples , and , corresponds to different underlying deterministic behavior , regular mmos and stable steady state , respectively . we refer to these as class 2a and class 2b , respectively .
[ fig : ma_class2 : top : nlmfn ( , ) ; bottom : lmfn ( , , reset to ) ; trajectory in the underlying 2d bifurcation diagram and time series ; the scale in the time series is 5 . ]
sections of the time series as well as the trajectories within the underlying bifurcation are shown in figs . [ fig : bifurcation_sc ] ( bottom panel ) and [ fig : ma_class2 ] for the 3dsc , nlmfn , and lmfn , respectively . these time series were calibrated for similar average amplitude trend and average isi durations . fig . [ fig : class2_amp ] shows that for low noise levels , it is possible to find parameter values for lmfn that compare well in amplitude behavior with 3dsc for both class 2a and class 2b . for nlmfn , the amplitude behavior is similar to 3dsc for class 2b but differs in class 2a for stos well before the spike . in the refractory period following the spike in the nlmfn the system relaxes near but not on the left nullcline with , yielding slowly damped stos in the first part of the isi . for the lmfn , the reset value for and is chosen very close to the fixed point , avoiding this part of stos with decaying amplitude . comparable average isis are easily obtained in the lmfn with chosen an order of magnitude smaller than in class 1 , and for nlmfn with near unity for weak noise . differences in the isi density are apparent from fig . [ fig : class2_isi ] and are related to differences in the deterministic dynamics as discussed in secs . [ sec : analysis_intra ] and [ sec : intra_fhn ] .
[ fig : class2_amp : ( solid red line ) : 3dsc ( spikes ) ; ( dashed blue line ) : nlmfn ( spikes ) ; ( dotted black line ) : lmfn ( spikes ) ; top : class 2a : 3dsc : , ; nlmfn : , ; lmfn : , , ; bottom : class 2b : , ( fig . [ fig : bifurcation_sc ] , bottom panel ) ; nlmfn : , ( fig . [ fig : ma_class2 ] , top panel ) ; lmfn : , , ( fig . [ fig : ma_class2 ] , bottom panel ) . ]
as we see below , by tuning parameters and noise levels , the fn-type model can capture some but not all of the amplitude , isi and coherence behavior for class 2 . results for different noise levels point to differences in the underlying structure of the models . for noise levels increased by an order of magnitude , both subclasses for 3dsc show roughly a 20 - 30% decrease in the average amplitude of the largest sto immediately preceding a spike , while the fn-type models show a slightly greater decrease ( fig . [ fig : class2_amp_noise ] ) . for class 2b , the entire average amplitude curve of 3dsc shifts to lower levels with increased noise , while both fn-type models show both shifts and changes in the average amplitude curve shape . for class 2a , both 3dsc and lmfn show an increase of the average amplitude of stos well before the spike for increased noise , indicating stos driven both by cr and by sphb .
for the nlmfn model , the return sets following an earlier spike , so that the combination of damped stos for and cr effects yields robust stos with less variation with noise level than for the lmfn model . in class 2b , the 3dsc model has longer tails in the isi density for both small and intermediate noise levels . these longer tails are not seen in the fn-type models due to differences in the underlying deterministic dynamics ( see sec . [ sec : analysis_intra ] ) . for weak noise , the multi-modal isi density for the nlmfn is due to the stronger attraction to the underlying deterministic mmo or sto dynamics , with larger stos initially in the isi . in the lmfn model the reset onto the nullcline avoids damped stos initially in the isi , allowing a unimodal isi density . for both class 2a and 2b , 3dsc and lmfn show a shift to shorter isis and a similar shape of the isi density for larger noise , with a larger variance for lmfn in class 2a . for nlmfn there is no shift in the isi density with increased noise , just an increased variance both for class 2a and 2b , reflecting the dynamic return mechanism following the spike .
[ figure : left column : 3dsc ; middle column : nlmfn ; right column : lmfn ; top row : class 2a ; bottom row : class 2b ; for parameters , see fig . [ fig : class2_amp ] . ]
[ figure : left column : 3dsc ; middle column : nlmfn ; right column : lmfn ; top row : class 2a ; bottom row : class 2b ; for parameters , see fig . [ fig : class2_amp ] ; in the top row , middle panel , values for large are cut off . ]
* left column : * 3dsc ; * middle column : * nlmfn ; * right column : * lmfn . * top row : * class 2a ; * bottom row : * class 2b . for parameters ,see fig .[ fig : class2_amp ] . in the top row , middle panel , values for large are cut off.,title="fig:",scaledwidth=32.0% ] .* left column : * 3dsc ; * middle column : * nlmfn ; * right column : * lmfn . * top row : * class 2a ; * bottom row : * class 2b . for parameters ,see fig .[ fig : class2_amp ] . in the top row , middle panel ,values for large are cut off.,title="fig:",scaledwidth=32.0% ] fig .[ fig : class2_beta ] shows the differences between the coherence measure ( subsec .[ ssec : tools ] ) for the three different models for various noise levels . for 3dsc in class 2b the stos are driven by cr and there is a peak in ( indicating cr ) , while for class 2a noise reduces in general , since the stos are driven by a sphb . for low noise levels in nlmfn , the robust stos are represented by a multi - peaked psd ( equivalent to the isi density see above ) , for which is not defined . for larger noise levels in class 2 , is defined , but noise only disrupts the underlying stos , so decreases with noise . for lmfn , a moderate noise level can enhance the stos , particularly at the beginning of an isi , yielding an optimal noise level for coherence in both types of class 2 .the influence of the underlying bifurcation structure on the coherence measure is discussed further in sec .[ sec : analysis_intra ] .( see subsec .[ ssec : tools ] ) for the 3dsc model ( * top * ) , nlmfn ( * middle * ) and the lmfn model ( * bottom * ) for various values of noise strength .class 2a is within the red solid lines , class 2b within the blue dashed lines , and class 3 within the black dotted lines ( blue dashed for nlmfn).,title="fig:",scaledwidth=32.0% ] ( see subsec . [ssec : tools ] ) for the 3dsc model ( * top * ) , nlmfn ( * middle * ) and the lmfn model ( * bottom * ) for various values of noise strength .class 2a is within the red solid lines , class 2b within the blue dashed lines , and class 3 within the black dotted lines ( blue dashed for nlmfn).,title="fig:",scaledwidth=32.0% ] ( see subsec . 
[ssec : tools ] ) for the 3dsc model ( * top * ) , nlmfn ( * middle * ) and the lmfn model ( * bottom * ) for various values of noise strength .class 2a is within the red solid lines , class 2b within the blue dashed lines , and class 3 within the black dotted lines ( blue dashed for nlmfn).,title="fig:",scaledwidth=32.0% ] here we analyze stos with an average isi similar in length to class 2 , but with a constant average amplitude preceding a spike .[ fig : mmo_timeseries_3 ] shows the trajectories from the 3dsc , nlmfn , and lmfn calibrated for similar average amplitude behavior and average isi duration .class 3 can be observed in 3dsc only at higher noise levels compared with classes 1 and 2 ., ) ; * middle : * nlmfn ( , ) ; * bottom : * lmfn ( , , ) .the scales in the time series are 250 ( sc ) and 5 ( mfn).,title="fig : " ] , ) ; * middle : * nlmfn ( , ) ; * bottom : * lmfn ( , , ) .the scales in the time series are 250 ( sc ) and 5 ( mfn).,title="fig : " ] , ) ; * middle : * nlmfn ( , ) ; * bottom : * lmfn ( , , ) .the scales in the time series are 250 ( sc ) and 5 ( mfn).,title="fig : " ] fig .[ fig : class3_amp ] shows that the average sto amplitude for class 3 is constant for at least the last ten periods of stos with a slight amplitude increase in transition to the spike .this nearly constant average amplitude results from strong variation in amplitude with a constant ensemble average .a clear single peak in the psd ( data not shown ) confirms well - defined stos with variability in phase and amplitude .the amplitude and average isi characteristics of these mmos are readily reproduced with the fn - type models at stronger noise levels with the average amplitude lower by roughly a factor of two compared to classes 1 and 2 .class 3 behavior can also be obtained for the mfn model ( eqs .[ eq : makarov_u ] and [ eq : makarov_v ] ) with constant control parameter as in ref . .the isi densities shown in fig . [ fig : class3_isi ] are similar to each other , with a slightly more significant tail in the isis for 3dsc .( red solid line ) : 3dsc ( spikes ) ; ( blue dashed line ) : nlmfn ( spikes ) ; ( black dotted line ) : lmfn ( spikes ) . *bottom : * densities of isi lengths for class 3 mmos. solid line ( red ) : 3dsc ; dashed line ( blue ) : nlmfn ; dotted line ( black ) : lmfn . for parameters , see fig .[ fig : mmo_timeseries_3].,title="fig:",scaledwidth=32.0% ] ( red solid line ) : 3dsc ( spikes ) ; ( blue dashed line ) : nlmfn ( spikes ) ; ( black dotted line ) : lmfn ( spikes ) . * bottom : * densities of isi lengths for class 3 mmos .solid line ( red ) : 3dsc ; dashed line ( blue ) : nlmfn ; dotted line ( black ) : lmfn . for parameters , see fig .[ fig : mmo_timeseries_3].,title="fig:",scaledwidth=32.0% ] [ fig : class3_isi ] figs .[ fig : class3_amp_noise ] and [ fig : class3_isi_noise ] show the trend of the sto amplitude and the density of isi when noise levels increase . for stronger noise the increase in amplitude directly before the spike in the fn - type models is eliminated , with stronger noise levels rather than deterministic dynamics dominating the transition to spikes . at higher noise levels , distinguishing stochastic fluctuations from regular stos is less dependable , so that the amplitude dynamics for the largest noise levels are not shown in fig .[ fig : class3_amp_noise ] . 
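the isi densities compared in these figures can be estimated directly from a simulated or recorded trace. the sketch below is only a schematic illustration and not the processing used for the figures: spike times are taken as upward crossings of a fixed threshold, and the isi density is a normalized histogram of the resulting interspike intervals. the threshold, the refractory guard, the bin number and the synthetic test trace are all assumptions introduced here for illustration.

```python
import numpy as np

def spike_times(v, t, threshold=0.0, min_separation=1.0):
    """Times of upward threshold crossings of v; crossings closer than
    min_separation (a crude refractory guard) are merged."""
    up = np.where((v[:-1] < threshold) & (v[1:] >= threshold))[0]
    times = t[up]
    if times.size == 0:
        return times
    kept = [times[0]]
    for s in times[1:]:
        if s - kept[-1] >= min_separation:
            kept.append(s)
    return np.array(kept)

def isi_density(v, t, threshold=0.0, bins=60):
    """Normalized histogram of interspike intervals."""
    isis = np.diff(spike_times(v, t, threshold))
    density, edges = np.histogram(isis, bins=bins, density=True)
    centers = 0.5 * (edges[:-1] + edges[1:])
    return centers, density

# toy test trace: subthreshold noise around -1 with sharp "spikes" inserted
rng = np.random.default_rng(0)
t = np.arange(0.0, 1500.0, 0.1)
v = -1.0 + 0.05 * rng.standard_normal(t.size)
spike_schedule = np.cumsum(5.0 + 5.0 * rng.integers(0, 3, size=120))
for s in spike_schedule[spike_schedule < t[-1]]:
    v[np.searchsorted(t, s)] = 1.0
centers, density = isi_density(v, t)
```

in practice the threshold and the minimum separation would be chosen from the spike shape of the particular model, as discussed for the spike-removal step in the appendix.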
in the 3dsc model, there is a slight increase in average amplitude during the isi, which is due to our algorithm (subsec. [ssec:tools]) not fully eliminating the trend of increasing in steady state with increasing for . the isi densities for 3dsc and lmfn are more concentrated at shorter isi durations, while the isi density for nlmfn remains spread over a large range of isis. this is due to the underlying stable stos for nlmfn, together with the nonlinear return mechanism that typically returns to values well below following a spike, thus allowing for a range of isi durations. with the isi density concentrated at low values, clustered spikes are more frequent in 3dsc and lmfn, with nearly tonic spiking at the higher noise level (cf. ref. ). for class 3, for the 3dsc model shows a similar behavior as for class 2b (fig. [fig:class2_beta]), with a clear maximum. the values of in class 3 are lower than in class 2 for 3dsc, mainly because of smaller amplitudes of the stos (being inversely proportional to the distance from for cr; see also subsec. [ssec:tools]). similarly, the trend for in class 2b and 3 is the same in lmfn. for the nlmfn model, the value of is the same for class 2b and class 3, and the behavior of is shown in fig. [fig:class2_beta].

[figure: top: 3dsc; middle: nlmfn; bottom: lmfn.]

[figure: top: 3dsc; middle: nlmfn; bottom: lmfn. for parameters, see fig. [fig:class3_amp_noise].]

the comparisons in the previous section point to the role of the underlying bifurcation structure on the characteristics of the stos and mmos. we summarize the results for the 3dsc model from sec. [sec:analysis_inter], comparing within that model. we also discuss alternative values of for the 7dsc model in order to compare with previous studies of the full model. within the 3dsc model (and similarly the 7dsc model), differences in the stochastic behavior can be related to the differences in the deterministic behavior for different ranges of . for in 3dsc (see subsec. [ssec:scmodel]), the stable steady state has a value of well below . for there is no steady state for the deterministic system, and the stos are driven by sphb with as the slowly varying control parameter. in the case , characteristics of both cr and sphb are observed. amplitude: for well below , is well below , so only high noise levels can drive mmos, and stos are purely of the cr-type without any trend in the average amplitude (as in class 3). in contrast, in class 1, with clearly above , the stos due to the sphb have an increasing trend in amplitude before the spike. well-defined periods of stos survive only for lower noise levels, as the slow passage is sensitive to very weak noise. for values of closer to , as in class 2, both cr and sphb can affect the dynamics, so that periods of stos with increasing amplitude trend can survive over a large range of noise levels.
isi density : significant differences are seen at lower noise levels , where longer tails in the isi density correspond to stos of the cr - type , not observed in stos driven by sphb . coherence measure : cr - driven stos in class 2b and class 3 are responsible for the more pronounced peak in the coherence measure in fig .[ fig : class2_beta ] .intermediate values of were also analyzed ( not shown ) , and as expected intermediate behavior between the three classes is observed . for ,sto and isi behavior was closer to that of class 2b than class 1 , without contributions to longer isis , and for intermediate to larger noise levels very weak coherence is observed with a mild peak in around .here we consider the effect of noise on mmo families in the full 7dsc model in the range ( i.e. , within the regime of deterministic mmos ) , as illustrated through the isi density .the 3dsc model shows a similar behavior , with a shift in the values of .recall that for any fixed , the underlying deterministic dynamics is a slow passage of through .we restrict our study to very low noise levels and consider trajectories that fit into classes 1 and 2a ( sec .[ sec : analysis_inter ] ) when stronger noise is added . for the sake of comparison with the analysis in ref ., we use an alternative equation for the dynamics of . instead of eq .[ eq : sc_rs ] , we use eq .[ eq : sc_rs_58 ] given in app .[ app : sc_eq ] , which alters the numerical values of the model but not the qualitative features . ) ) with various values of ( in the top panel corresponding to family ; in the bottom panel corresponding to family ) .the deterministic case ( ) is shown as a histogram ( solid red lines ) whereas the data for simulations with noise ( ) is shown as line plots ( dashed blue / dotted black lines ) .we adopt the notation for the mmos from ref . ..,title="fig : " ] ) ) with various values of ( in the top panel corresponding to family ; in the bottom panel corresponding to family ) .the deterministic case ( ) is shown as a histogram ( solid red lines ) whereas the data for simulations with noise ( ) is shown as line plots ( dashed blue / dotted black lines ) .we adopt the notation for the mmos from ref . ..,title="fig : " ] fig .[ fig : sc_7d_isi ] shows examples of the isi density for different families of mmos .the families ( for notation , see ref . )have mmos with stos in each isi , yielding isi densities with a single point mass .the mmos for a second , more complex family , , have an isi with stos followed by isis each with a different type of stos that increase in amplitude and period for each of the subsequent isis .the families then have point masses in the isi density .we consider only those mmo families that are stable in the deterministic model . in the range stable and families are found for alternating ranges of . for ,the stable solutions are only families densely filling the space in .the time series analyzed earlier as class 1 in this notation would be a mixture of and . 
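whether a noisy trajectory samples a single deterministic mmo family or a mixture of nearby families can be checked by counting the subthreshold maxima in each interspike interval and histogramming these counts. the fragment below is a minimal, hypothetical implementation of such a count; the spike threshold, the prominence used to accept a subthreshold maximum and the trimming of the spike flanks are illustrative choices, not values taken from the analysis above.

```python
import numpy as np
from scipy.signal import find_peaks

def sto_counts_per_isi(v, spike_threshold=0.0, sto_prominence=0.05, flank=5):
    """For each interspike interval, count the subthreshold local maxima of v.
    Spikes are the large maxima above spike_threshold; a few samples next to
    each spike (flank) are dropped so the spike shoulders are not counted."""
    spikes, _ = find_peaks(v, height=spike_threshold)
    counts = []
    for a, b in zip(spikes[:-1], spikes[1:]):
        segment = v[a + flank : b - flank]
        if segment.size < 3:
            counts.append(0)
            continue
        peaks, _ = find_peaks(segment, prominence=sto_prominence)
        counts.append(len(peaks))
    return np.array(counts)

def signature_histogram(counts):
    """Relative frequency of each number s of STOs per ISI; a single sharp
    peak suggests one family, several peaks suggest a mixture of families."""
    values, freq = np.unique(counts, return_counts=True)
    return dict(zip(values.tolist(), (freq / freq.sum()).tolist()))
```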
with ( deterministic family * top * ) and ( deterministic family * bottom*).,title="fig : " ] with ( deterministic family * top * ) and ( deterministic family * bottom*).,title="fig : " ] for values of near the center of the stability range for a particular type of mmo , well separated from other families , weaker noise broadens the isi density and stronger noise shifts the density towards shorter isis when noise drives an early escape typical in a slow passage through a hb , as shown in fig .[ fig : sc_7d_isi ] for .the bottom panel of fig .[ fig : sc_7d_isi ] shows that for values of near the edge of a stability range , very weak noise can lead to a mixture of close families of mmos . as noise drives a faster escape to spiking the peak corresponding to the longest isi and largest sto of reduced as the nearby mmos of occur , appearing as stronger nearby peaks in the isi density .there is also considerable sensitivity in the isi density when stability ranges are small and densely packed .noise can drive a sampling of a mixture of families close in terms of the bifurcation parameter , yielding clearly defined peaks that are not present in the deterministic case , as shown in fig . [fig : sc_7d_isi_2 ] . in the top panel ,the deterministic mmo is , and noise excites a second peak for a family or a mixture of families between and . at stable solution is and weak noise drives a sampling of the six nearby mmo solutions with . at stronger noisethe longer isis are lost and the isi density spreads out and shifts towards shorter isis . in conclusion , even very weak noise can alter the dynamics of a mmo - generating system significantly , particularly if the deterministic solutions are close in parameter space .weak noise drives a sampling between a few or many of these solutions. then it can be difficult to distinguish between complex deterministic solutions like the families and a noisy trajectory sampling a few different mmo families .as in the sc model , the underlying bifurcation structure and deterministic behavior influence the source and characteristics of the stos .for the fn - type models , the underlying bifurcation is a supercritical hopf , with stable stos in the 2d reduced system for - with constant . in the lmfn model with appropriate reset the slowly varying control variable sweeps through an underlying hb and canard transition , which always yields mmos in the deterministic setting .we consider two aspects of the control parameter that can be varied to change the characteristics of the stos , the rate of change in , , and its reset , following a spike / threshold crossing . in the different classes studied in sec .[ sec : analysis_inter ] , we analyzed four different combinations of and : slower variation in given by smaller , with reset ( class 3 ) or ( class 2b ) , and larger values of ( class 2a ) and ( class 1 ) both with .the main underlying feature that influences amplitude dynamics for these different parameter combinations , is time spent in the parameter range . increase is more pronounced for low noise levels and smaller values of . for low noise levels ,the system follows the underlying sto dynamics , less likely to be driven to spiking by noise . for smaller values of ,the control parameter behaves almost as a constant relative to the oscillation frequency .this allows attraction to larger stos for slowly increasing , with the possibility of reaching values of before spiking , so that larger amplitudes are typical before escape . 
for larger values of , as in class 1 or class 2a , increases through the sto regionless slowly .then the trend of increasing amplitude consistent for survives but increased variation in limits the attraction to the stable stos as approaches and exceed . the behavior is closer to a series of transients that is more susceptible to noise - induced escape to spiking . for larger noise levels ,the escape to spiking is noise - driven in or near the stable sto region .an increase in sto amplitude farther from the spike , for is observed due to the role of cr together with the stable stos .an earlier escape limits the opportunity for amplitude increase closer to the spike . the relation between and and isi density is as expected : smaller values of or lead to greater likelihood of longer isis . the coherence measure shown in fig .[ fig : class2_beta ] increases for long stretches of large amplitude oscillations , and decreases with fluctuating amplitude .trajectories with the reset close to the hb show the largest value of for moderate noise levels .slower variation in in these cases also reduces fluctuation in amplitude , thus increasing coherence . for reset there is fluctuation in amplitude due to oscillations driven by both cr and stos , so is reduced somewhat overall . for larger values of noise coherenceis destroyed in all cases .[ fig : intra_lmfhn ] shows the analysis of an additional parameter combination , a very slow sweep through the hb and reset before the hb ( , ) , whose coherence measure was included in fig .[ fig : class2_beta ] . the difference in amplitude dynamics for lower and higher noise levels is as described above . for lower noise levels and very slow dynamics of ,the system is attracted to larger amplitude stos before spiking , so that larger amplitudes are typical for more stos before escape . the isis are very long , and only for stronger noise are they comparable to class 3 ( cf . fig .[ fig : class3_isi ] ) .in contrast to class 2 , only for stronger noise levels is the isi density concentrated at shorter isis , since the trajectory takes longer to reach . the shape of the curve vs. is qualitatively similar to the classes studied in sec .[ sec : analysis_inter ] .however , at very low noise values , is larger than other cases , caused by cr - driven stos and slower variation in as described above .the last panel in fig .[ fig : intra_lmfhn ] shows the isis for an intermediate case , with larger and .the isi density then shows characteristics between class 2 and class 3 , with a narrow isi density for smaller noise levels , and longer tails for stronger noise , as oscillations of the cr - type appear for . ) and reset before the hb ( ) .isi density ( * bottom * ) for an intermediate case ( , ).,title="fig:",scaledwidth=32.0% ] ) and reset before the hb ( ) .isi density ( * bottom * ) for an intermediate case ( , ).,title="fig:",scaledwidth=32.0% ] ) and reset before the hb ( ) .isi density ( * bottom * ) for an intermediate case ( , ).,title="fig:",scaledwidth=32.0% ] we add one additional note about resetting off of the nullcline . the choice of reset used in sec .[ sec : analysis_inter ] ( reset very close to the nullcline ) , can be relaxed to a reset near the nullcline without qualitative changes in the measures used above .a choice of reset away from the nullcline modifies the amplitude dynamics of the sto at the beginning of the isi . for reset with well below , stos with considerable amplitude follow the reset and relax as approaches , similar to nlmfn . 
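the mechanism discussed in this subsection, a slow ramp of the control parameter toward the hopf/canard region, a spike once the fast variable crosses a threshold, and a reset of the slow parameter (here onto the cubic nullcline), can be mimicked with a few lines of code. the sketch below is not the lmfn model defined earlier in the paper; it is a generic fitzhugh-nagumo-type system with additive noise, a linear ramp in the bifurcation parameter and an integrate-and-fire style reset, integrated with the euler-maruyama method. all parameter values, the threshold and the reset are assumptions chosen only to make the ramp-and-reset mechanism visible.

```python
import numpy as np

def simulate_ramped_fhn(T=2000.0, dt=0.01, eps_w=0.08, eps_lam=2e-3,
                        lam_reset=-0.3, v_thresh=1.0, v_reset=-1.0,
                        noise=0.02, seed=0):
    """Euler-Maruyama integration of a generic FHN-type system
         dv   = (v - v**3/3 - w + lam) dt + sqrt(2*noise) dW
         dw   = eps_w * (v + 0.7 - 0.8*w) dt
         dlam = eps_lam dt              (slow ramp of the control parameter)
    with an integrate-and-fire style reset of (v, w, lam) after a threshold
    crossing; (v, w) is reset onto the cubic nullcline."""
    rng = np.random.default_rng(seed)
    n = int(T / dt)
    w_reset = v_reset - v_reset**3 / 3 + lam_reset   # point on the v-nullcline
    v, w, lam = v_reset, w_reset, lam_reset
    vs, spikes = np.empty(n), []
    for i in range(n):
        dW = np.sqrt(dt) * rng.standard_normal()
        v += dt * (v - v**3 / 3 - w + lam) + np.sqrt(2 * noise) * dW
        w += dt * eps_w * (v + 0.7 - 0.8 * w)
        lam += dt * eps_lam
        if v >= v_thresh:                 # register a "spike" and reset
            spikes.append(i * dt)
            v, w, lam = v_reset, w_reset, lam_reset
        vs[i] = v
    return np.arange(n) * dt, vs, np.array(spikes)
```

in this sketch, varying eps_lam and lam_reset plays the role of the combinations of ramp speed and reset discussed above: a smaller ramp rate or a reset closer to the bifurcation lengthens the stretch of subthreshold oscillations before each spike.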
for classes 1 and 2this increased amplitude supports an earlier escape to spiking and reduced isis , particularly in class 1 . for low noise levels in class 2the isis are then narrower , since the larger stos are less susceptible to small noise . for class 3the stronger noise dominates , so that variation in the reset has limited effect . as noted in subsec .[ ssec : fhn ] , the model of eqs .[ eq : makarov_u ] and [ eq : makarov_v ] has a larger distance between and , as compared with vdp , yielding a larger region in parameter space where well - defined periods of stos are observed in the presence of noise .this behavior is confirmed by comparisons of psds for mfn and for a rescaled version of vdp to effectively increase the canard value .this comparison illustrates that the distance between and plays a critical role in the robustness of stos and mmos , in addition to the reset and ramp speed . as described in subsec .[ ssec : fhn ] , the long time behavior of the nlmfn deterministic model includes different states depending on : : steady state ; : stable stos ; : mmos . in sec .[ sec : analysis_inter ] we have considered only those values of that support phenomenological behavior observed in the sc model . here we also discuss cases that differ qualitatively from those studied in sec .[ sec : analysis_inter ] , and compare the amplitude , isi density , and coherence measure within this model .values of and were used in sec .[ sec : analysis_inter ] to approximate class 2 behavior . for underlying deterministic dynamics is mmos , with regularly crossing . then the observed measures are consistent with a slow passage through a hopf bifurcation and canard point as observed in the other models : increasing amplitude before the spike over a range of noise levels . multi - peaked isi density for lower noise levels and concentrated isi density for larger noise . decreasing trend in , defined only for larger values of the noise .for values of the return value of following a spike takes values between and , thus shortening the isi significantly .as discussed above , we do not classify this behavior as class 1 , since the return value yields larger amplitude stos and limited increase for low noise levels , similar to those observed for a related model in subsec .[ ssec : self - coupled_mfhn ] .in contrast , for , the deterministic system exhibits stable stos where oscillates near . the stochastic behavior is similar to that of the case with , but differences due to the underlying deterministic dynamics are observed for lower noise levels and : increased noise drives an earlier escape to spiking .this results in a reset of , yielding both cr - type and sphb - driven oscillations , and a limited opportunity for amplitude increase just before the spike . the isi density is broader for ( compared to ) .the attracting deterministic behavior for consists of small oscillations midway between and , so characteristics include some elements from class 2 for smaller noise , and similarities to class 3 and ( see below ) for larger noise : the amplitude dynamics is similar to class 2 for lower noise , with a weaker increase . the isi behavior has characteristics closer to class 2 . the coherence measure vs. behaves similarly to , with larger values of for small noise . for larger values of ,the attracting state of the deterministic system takes values of near . for long term deterministic behavior is quiescent with the steady state corresponding to just below . 
for deterministic system has stable stos , with small oscillations in just above . to drive mmos in either of these cases, noise must be strong enough to cause excursions beyond the canard transition at , certainly stronger than in the cases for above .isi and amplitude measures can therefore only be derived for these stronger values of noise , leading only to class 3 mmos with the following characteristics ( fig .[ fig : intra_nlmfhn ] ) : primarily constant trend in average amplitude before a spike . broad isi density with long tails . a clear difference between and due to the underlying deterministic dynamics is seen in the coherence measure ( fig .[ fig : class2_beta ] ) . for noise levels disrupt the coherence of the stos produced deterministically , so that strictly decreases with noise . for , has a maximum indicating an optimal noise level typical of cr - driven oscillations .at which the deterministic system is quiescent.,title="fig:",scaledwidth=32.0% ] at which the deterministic system is quiescent.,title="fig:",scaledwidth=32.0% ]the stochastic nature of the sc system results from noise in ion channels and noisy external signals .the model we consider above follows ref . , including the primary noise source in the gating variable of the persistent sodium channel . here, we compare our results with those obtained with a different noise source in the 3dsc model , namely , in the applied current .the consideration of this additive noise is motivated by several factors .first , systems with multiple time scales often can be divided into regions of time or space where noise plays a more or less significant role .this raises questions about where or when noise plays a critical role , and whether the type of noise can make a significant difference .second , since is the only variable in the system that can be easily experimentally controlled , this raises the question of whether varying can be used as a controllable input to probe the dynamics . providing noisy input is a technique used in experimental neuroscience ( e.g. , ref . ) to investigate the structure of the complex dynamics .finally , understanding whether different types of noise sources have similar or different effects contributes both to the understanding of the mechanism for the dynamics and to model identification . to test whether fluctuations in can reproduce dynamics similar to those generated by noise in the persistent sodium current, we consider the voltage equation in the interval near the end of the isi , when the dynamics of is slow .this is the stage at which stos are generated for parameters near the hopf bifurcation of the reduced system . 
in that part of the cycle, can be approximated by a constant in eq .[ eq : sc_v ] .this approximation suggests that even though the noise is parametric , at the end of the isi when the stos are generated , the noise behaves as additive noise to leading order with a coefficient and eq .[ eq : sc_v ] can be approximated by \nonumber \\ & \quad + \frac{1}{c}\sqrt{2d'}\eta(t ) .\label{eq : sc_v_noisei}\end{aligned}\ ] ] eq .[ eq : sc_v_noisei ] can be interpreted as having a noisy applied current with and ( for ) instead of the original noisy gating variable .then one would expect that the effects of different noise levels on the stos as described in sec .[ sec : analysis_inter ] could be reproduced by varying the noise level in , as long as the results are scaled in a manner consistent with eq .[ eq : sc_v_noisei ] .indeed for the ranges of noise levels considered in this paper , the results for stochastic are essentially the same as described in the classification of sec .[ sec : analysis_inter ] . with the above scaling , the plots for the coherence measure ( as in the top panel of fig .[ fig : class2_beta ] ) overlap almost perfectly .we also reproduced similar behaviors of the isi density and amplitude dynamics as analyzed for the 3dsc model in sec .[ sec : analysis_inter ] ( data not shown ) . here, we analyze another mmo - generating model , a self - coupled modified fn model derived and used for the study of coupled hodgkin - huxley neurons .similar to the mfn model presented earlier , represents a voltage variable , is gating variable , and is a dynamic coupling between neurons .compared to refs . and , we add a noise term in the first differential equation : is the heaviside function .this system of differential equations has multiple time scales similar to the models analyzed so far : the dynamic variable plays the role of a slowly varying control parameter , decreasing and passing through a hb at for the underlying - system .also , the spikes are included in the model ( not just a threshold and reset ) but this is done using a sharp switch ( ) rather than a smooth nonlinear function for the control variable as in our nlmfn .the parameters used in ref .are as follows : coupling strength , activation rate , slow time scale of the relaxation oscillations , and regulating the speed of the dynamic coupling . ref .analyzes the deterministic ( ) dynamics of the system in detail . by varying the parameters in eqs .[ eq : de_v] [ eq : de_s ] , in particular and , we find mmos according to the classification scheme of sec . [sec : analysis_inter ] .we choose and to obtain a sharper canard transition , similar to the mfn models analyzed earlier .there are a number of differences between eqs .[ eq : de_v][eq : de_s ] and the mfn model .the slow variation of is nonlinear , slowing down significantly in the sphb , a behavior somewhat similar to in the nlmfn case with .another noticeable difference between eqs .[ eq : de_v][eq : de_s ] and the mfn models is that the dynamics of yields a return value well below the hopf point following a spike .after the return there is a significant interval during which approaches , which has implications for both the amplitude and isi behavior . )bifurcation diagram of the self - coupled fn - type model with and together with the trajectory and time series for and . 
at lower noise values, the trajectory crosses at lower values of ( around for ) .lines and symbols as in fig .[ fig : bifurcation_ma ] .the time scale in the time series is 100.,scaledwidth=32.0% ] class 1 time series are obtained with and , yielding both the amplitude dynamics and the isi density similar to those shown in fig .[ fig : class1_isi ] . one difference for the model of eqs .[ eq : de_v][eq : de_s ] is that increasing noise strength typically increases the amplitude of the stos before a spike .this is consistent both with the slowing of near and with the opportunity for cr - type stos in the interval where . as in class 1, we see the appearance of new well - defined peaks in the isi density with low noise levels . for this particular choice of parameters ,the first of these new peaks appears at longer isis , similar to the behavior in the upper panel of fig .[ fig : sc_7d_isi_2 ] for the 7dsc model . for time series with the characteristics of class 2, we find and in eqs .[ eq : de_v][eq : de_s ] yielding dynamics comparable to sec .[ sec : analysis_inter ] in terms of average isi and amplitude trend ( fig .[ fig : de_amp_isi ] ) . for the stoswell before the spike , an increase of the amplitude results from the algorithm that subtracts the average as has a decreasing trend .the amplitude before the spike increases with noise level , once again due to the combination of cr - driven stos before crosses , and a slowing of .larger values of shift the isi density towards smaller values and broadens it , yet a certain minimal isi is maintained due to the return of well beyond .the effect of such a return mechanism was also observed in the isi density in the nlmfn . ) and various noise strengths .* bottom : * isi density for class 3-like time series from the self - coupled mfn ( ) ..,title="fig : " ] ) and various noise strengths . *bottom : * isi density for class 3-like time series from the self - coupled mfn ( ) ..,title="fig : " ] ) and various noise strengths . *bottom : * isi density for class 3-like time series from the self - coupled mfn ( ) . .,title="fig : " ] reduced speed of ( ) combined with stronger noise ( ) leads to time series that fit into our class 3 .the amplitude dynamics is very similar , but the isi density is different from those shown in fig . [fig : class3_isi ] . at this noise level ,the isi density is still centered around 15 , again due to the return of well above . only for the highest noise levels considered ( )the noise drives an early escape , reflected in the isi density concentrated at shorter isi lengths . in fig .[ fig : beta_desroches ] we show the coherence measure for the two parameter sets used for generating class 2 and class 3 time series . as with the earlier analyses , obtaining for class 1 time seriesis not possible due to a multi - peaked psd . for classes 2 and 3 ,we neglect a strong peak in the psd resulting from the concatenation and compute as in previous examples .there is a weak peak in at intermediate noise levels , consistent with previous examples that exhibit mmos related to both cr and sphb . in eqs .[ eq : de_v][eq : de_s ] this combination of sto mechanisms results from the return of and its slower variation near the hb .( see subsec .[ ssec : tools ] ) for the self - coupled fn model at two values of . 
]attention towards oscillatory neural dynamics has expanded to include coherent subthreshold oscillatory activity in the voltage of some neuron classes as well as spikes .combined , the sequence of subthreshold oscillations ( stos ) followed by a spike comprises a class of mixed - mode oscillations mmos .as these mmos have been recognized more frequently in both experimental and modeling settings there is an increasing challenge to distinguish between different mmo - generating mechanisms . for stochastic systemsthere are additional routes for mmos that include noise - induced oscillations and transients that are sustained with noise . in this article , we analyzed mmos of this type both in a biophysical model for the dynamics of stellate cells and in augmented versions of phenomenological fn - type models .our analysis focuses on the sto part of the signal where the slow dynamics are prominent and noise has the greatest impact on the stos . in both model types we observe signatures of two distinct oscillation - generating mechanisms : slow passage through a hopf bifurcation ( sphb ) and noise - induced coherent oscillations with frequencies close to that of the hopf bifurcation .the latter of these appears in certain types of coherence resonance ( cr ) . for noise levels that are in the range of what is observed experimentally , features of the underlying deterministic modelscan be hidden or transformed in the stochastic time series .a further complication in identifying the mmo mechanism in stochastic models is that substantially distinct models can easily be tuned to produce very similar time series .then model calibration based on time series alone must search through a larger parameter space or range of stochastic models needed for real world data .we use a suite of measures to be able to distinguish between different mmo- and sto - generating mechanisms in the presence of noise , differentiating between the types of underlying models as well as analyzing the influence of certain general model parameters .identifying possible mmo mechanisms in this way limits the broad range of parameter or model space for finding appropriate mmo mechanisms or calibrating a biophysical model .furthermore this type of comparison can also identify appropriate classes of reduced models , with appropriate stochastic behavior over a range of parameters , to be used to approximate a full biophysical model .such reduced models are used both for simplicity and for computational speed within larger modeling frameworks , as , e.g. , in ref . .the suite of these measures was chosen to focus on the stos in the isi , where noise has the greatest impact on the character of the mmos .we show that the measures can also be used to exploit the noise to identify the route to the mmos .given the variety of routes to mmos , more than one such measure is needed , and we use three distinct measures ( interspike interval , trend of the sto amplitude and noise - dependent coherence ) . 
while the focus of these measures is on bifurcation parameters , noise levels , and slowly varying control parameters , we found that these measures also reveal information about the refractory behavior or reset in the models .furthermore , we have focused on using an approach that can easily be applied to time series , so that it can be used in both experimental and simulation settings .we summarize the main characteristics obtained from these measures for different mmo generating mechanisms .stos of the cr - type have distinct features in these measures in comparison with those dominated by sphb : \1 .mmos driven primarily by the deterministic behavior of sphb display a strong trend in increasing sto amplitude , while those driven by cr have a weaker trend or no trend at all , for average amplitude . \2 . for small noise isi densitiesare highly concentrated for families driven by sphb , in contrast to the stos of the cr - type that have isi densities with long tails . for larger noise values , these isi densities are more similar and may depend on the return mechanism ( see items 79 below ) .the coherence measure has a clear optimal noise level for cr - driven stos .psds are typically multipeaked for sphb for smaller noise , so that a coherence measure is not well defined for small noise . for larger noise values ,the coherence measure for sphb is strictly decreasing . while each model has a hb for the underlying 2d subsystem , differences in the criticality of these bifurcations can be observed depending on the underlying attracting deterministic dynamics .\4 . for stos dominated by sphb ,an increase in noise level typically drives a greater reduction in sto amplitude before the spike for a subcritical hb than a supercritical hb , since the latter has attracting stos .the underlying deterministic dynamics influence different behaviors of as a function of noise level .systems with a supercritical hb may have underlying attracting behavior of small oscillations that can support increased coherence .this type of deterministic behavior is typically not seen for subcritical hb .\6 . for stos dominated by sphb in the subcritical case, even very low noise levels can drive a sampling from a variety of mmo families , together with a shift and spread of isi values , making it difficult to distinguish from mmos driven by other mechanisms .in addition to identifying features related to noise level and bifurcation or slowly varying control parameters , we identified a number of additional behaviors that are related to the reset or refractory dynamics following a spike .reset and rate of increase of the control parameter in the simple if model can be varied to capture the amplitude and isi behavior of the different classes of time series in the physiological model .however , a simple if model may not have the flexibility to capture different behaviors of the coherence measure . \8 . for the dynamic voltage - dependent control parameters ,an early escape to spiking can translate into a lower return value . in that case the isi density does not shift with larger noise but is just less concentrated . this behavior distinguishes it from if - type models with a fixed reset or models where the return mechanisms are independent of the spike .\9 . 
for return mechanisms with limited damping of stos following the spike, the coherence measure shows only a decrease in the coherence measure or no well - defined coherence over the isi .one important element in these comparisons is that characteristics of the time series , captured by the suite of measures , can change in distinct ways when the noise level is varied .a direct way of varying noise levels in neuroscience experiments is by injecting a noisy current . recognizing that the noise has its greatest impact in the isi where the slow dynamics are prominent leads to the proposal that tunable extrinsic noise can be introduced through an applied current to mimic the effects of the intrinsic noise .we show that this is indeed the case with an appropriate scaling of the noise in the physiological model .the distinctions between model features highlighted by these measures have been observed in an additional measure related to spike clusters , repeated spikes without stos in the isi .there it was shown that the dynamic return mechanism of nlmfn , where an earlier spike can result in a reset value farther from the hb , can result in more robust stos in the isi , consistent with the isi density behavior of nlmfn .the spike cluster frequency increases much more dramatically with noise for systems with sphb - driven stos , as compared with the case where stos are cr - driven , consistent with the amplitude and isi density results observed here .also , small perturbations in the reset value can translate into a large increase in spike cluster frequency in stochastic systems , particularly where the stos are sphb - driven for an underlying subcritical hb . our analysis in this paper is focused on a specific type of mmos , namely a combination of small amplitude oscillations and spikes relevant for neural systems .we have proposed measures that are focused on characteristics of mmos that are particularly sensitive to the noise , due to the presence of multiple scales .this suggests that understanding the presence of multiple time scales and noise - sensitive characteristics of the underlying bifurcation structures would provide a solid basis for identifying measures that characterize mmos in broader or more generic settings .acker , c. d. , kopell , n. , and white , j. a. ( 2003 ) . ., 15(1):7190 .baer , s. , erneux , t. , and rinzel , j. ( 1989 ) . ., 49:5571 .berglund , n. and gentz , b. ( 2002 ) . ., 122:341388 .celet , j. c. , dangoisse , d. , glorieux , p. , g. , l. , and erneux , t. ( 1998 ) . ., 81:975978 . chaos ( 2008 ) . , volume 18 .dcds - s ( 2009 ) . , volume 2 , number 4 .desroches , m. , krauskopf , b. , and osinga , h. m. ( 2008 ) . ., 18:015107 .epsztein , j. , lee , a. k. , chorev , e. , and brecht , m. ( 2010 ) . ., 327:474477 .erchova , i. and mcgonigle , d. j. ( 2008 ) . ., 18:015115 .ermentrout , b. ( 2008 ) .bard / xpp / xpp.html .gang , h. , ditzinger , t. , ning , c. z. , and haken , h. ( 1993 ) .stochastic resonance without external periodic force ., 71(6):807810 .georgiou , m. and erneux , t. ( 1992 ) . ., 45:66366642 .izhikevich , e. m. and edelman , g. m. ( 2008 ) . . , 105:35933598 .izhikevich , e. m. and fitzhugh , r. ( 2006 ) . fitzhugh - nagumo model ., 1(9):1349 . j z su , j. r. and terman , d. ( 2004 ) .effects of noise on elliptic bursters . , 17:133157 .kanamaru , t. ( 2007 ) . van der pol oscillator ., 2(1):2202 .klosek , m. and kuske , r. ( 2005 ) . ., 3:706729 .krupa , m. and szmolyan , p. ( 2001 ) . ., 33(2):286314 .kuske , r. and borowski , p. 
( 2009 ) .survival of subthreshold oscillations : the interplay of noise , bifurcation structure , and return mechanism . , 2:873895 .kuske , r. , gordillo , l. f. , and greenwood , p. ( 2007 ) . ., 245:459469 .kuske , r. and papanicolaou , g. ( 1998 ) . ., 120:255272 .lindner , b. , garcia - ojalvo , j. , neiman , a. , and schimansky - geier , l. ( 2004 ) .effects of noise in excitable systems . , 392:321424 .longtin , a. and hinzer , k. ( 1996 ) . ., 8:215255 .lythe , g. d. and proctor , m. r. e. ( 1993 ) .noise and slow - fast dynamics in a 3-wave resonance problem ., 47:31223127 .mainen , z. f. and sejnowski , t. j. ( 1995 ) . ., 268:15031506 .makarov , v. a. , nekorkin , v. i. , and velarde , m. g. ( 2001 ) . ., 86:34313434 .muratov , c. b. and vanden - eijnden , e. ( 2008 ) . . , 18(1 )pikovsky , a. s. and kurths , j. ( 1997 ) . ., 78(5):775778 .press , w. , teukolsky , s. , vetterling , w. , and flannery , b. ( 1992 ) . .cambridge university press , cambridge , uk , 2nd edition .rotstein , h. g. , oppermann , t. , white , j. a. , and kopell , n. ( 2006 ) . ., 21:271292 .rotstein , h. g. , wechselberger , m. , and kopell , n. ( 2008 ) . ., 7(4):15821611 .tucker , a. b. , editor ( 2004 ) . .chapman & hall / crc , 2nd edition .wechselberger , m. ( 2005 ) . . ,4(1):101139 .wechselberger , m. and weckesser , w. ( 2009 ) . .white , j. a. , klink , r. , alonso , a. , and kay , a. r. ( 1998 ) . ., 80:262269 .yu , n. , kuske , r. , and li , y. x. ( 2006 ) . ., 73(5 , part 2 ) .[ cols= " < , < " , ]the full seven dimensional system for the stellate cells ( 7dsc ) as presented in ref .consists of one differential equation for the transmembrane voltage and six for six gating variables . throughout this article , we omit the units both in the equations and the parameters .voltage and reversal potentials are in mv , time in ms , all gating variables are unitless , the membrane capacitance is in , in and the conductances in . \\ \frac{{\,\text{d}}}{{\,\text{d}}t}{m } & = -0.1\frac{v+23}{\exp(-0.1(v+23))-1 } ( 1-m ) \nonumber \\ & \quad - 4 m \exp\left(-(v+48)/18\right ) \\\frac{{\,\text{d}}}{{\,\text{d}}t}{h } & = 0.07\exp\left(-(v+37)/20\right ) ( 1-h ) \nonumber \\ & \quad - h /(\exp(-0.1(v+7))+1 ) \\\frac{{\,\text{d}}}{{\,\text{d}}t}{n } & = -0.01\frac{v+27}{\exp(-0.1(v+27))-1 } ( 1-n ) \nonumber \\ & \quad - 0.125n \exp\left(-(v+37)/80\right)\\ \frac{{\,\text{d}}}{{\,\text{d}}t}{p } & = \frac{1}{0.15}\left(\frac{1}{1+\exp\left(-\frac{v+38}{6.5}\right)}-p\right ) \label{eq : sc_7d_p } \\\frac{{\,\text{d}}}{{\,\text{d}}t}{r_f } & = \left[\frac{1}{1+\exp\left(\frac{v+79.2}{9.78}\right)}-r_f\right ] \nonumber \\ & \quad / \left[\frac{0.51}{\exp\left(\frac{v-1.7}{10}\right)+\exp\left(-\frac{v+340}{52}\right)}+1\right ] \\\frac{{\,\text{d}}}{{\,\text{d}}t}{r_s } & = \left[\frac{1}{1+\exp\left(\frac{v+71.3}{7.9}\right)}-r_s\right ] \nonumber \\ & \quad / \left[\frac{5.6}{\exp\left(\frac{v-1.7}{14}\right)+\exp\left(-\frac{v+260}{43}\right)}+1\right ] .\label{eq : sc_rs_app}\end{aligned}\ ] ] the parameters used throughout this article are as follows : equivalent to ref . 
and what we did in the 3dsc model in eq .[ eq : sc_v ] , we augment eq .[ eq : sc_7d_p ] by the additive noise term .the following alternative equation for was introduced in ref .: \nonumber \\ & / \left[\frac{5.6}{\exp\left(\frac{v-1.7}{14}\right)+\exp\left(-\frac{v+260}{43}\right)}+1\right ] .\label{eq : sc_rs_58 } \end{aligned}\ ] ] the right hand sides of eqs .[ eq : sc_rs_app ] ( eq .[ eq : sc_rs ] ) and [ eq : sc_rs_58 ] are very similar within the relevant parameter ranges of and with the only noticeable deviation at very small values of ( between and ) .the qualitative features of the reduced 3dsc model therefore are not altered . here , we use eq .[ eq : sc_rs_app ] throughout the paper , except for subsec .[ ssec : intra_sc_weak_noise ] , where we replace eq .[ eq : sc_rs_app ] by eq .[ eq : sc_rs_58 ] for the sake of comparability with the analysis in ref . .we refer to the respective footnote in ref . for a discussion .first , the data ( time series ) was low - pass filtered by convolving it with a triangle function ( e.g. , ref . ) of a length that is on the order of half a sto period .a simple algorithm then finds the spikes , chooses a window before each spike , removes the average and finds the local maxima in each window going backwards in time .the amplitude at each consecutive maximum before a spike is averaged over all isis in the time series . in this analysisthe choice of the low pass filter ( ` triangle length ' ) is crucial , to avoid reduction of amplitudes or miscounting of the order of the maxima resulting from the time scale of the filter being too short or long . for the spiking models ,spikes were removed from the time series by deleting data points corresponding to a typical spike duration ( including recovery ) after the respective variable ( or ) crossed a certain value . in the if - type models ,a similar but much shorter stretch of data was removed .the remaining data points were concatenated .typically , psds computed from the full time series ( including spikes and recovery ) show significant power in broad frequency ranges that differ strongly between different models due to the different shapes of the spikes .removing the spikes makes the power contribution of the stos more clearly visible . to remove low frequency content in the psds ,the average of the time series was removed .the routine ` spctrm ` from numerical recipes was used to compute an estimate of the psd .the normalization is such that the total power in the psd is equal to the mean squared amplitude of the time series .our ` standard ' psd for the mfn models is obtained from time series sampled at with a bartlett window of size 4096 data points with overlap .the ` standard ' psd for the sc models is obtained from time series sampled at with the same window function . for timeseries sampled at a different , we scale the psd accordingly .we scale the psds such that the frequencies are expressed in ( see subsec .[ ssec : tools ] ) . to compute the coherence measure according to eq .[ eq : beta ] , we obtain the relevant values of the psd by fitting a non - normalized cauchy - lorentz distribution to the peak of the psd corresponding to stos .the fit is generally good for intermediate and strong values of noise . 
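the amplitude-trend algorithm described here can be paraphrased in a few lines. the sketch below is not the authors' code: the triangle length, the spike threshold and the length of the pre-spike window are placeholders that would have to be tuned as discussed in the text, and the spike itself is not carefully excised from the window.

```python
import numpy as np
from scipy.signal import find_peaks

def triangle_smooth(x, width):
    """Low-pass filter by convolution with a normalized triangle of the given
    width in samples (on the order of half an STO period)."""
    k = np.bartlett(width)
    return np.convolve(x, k / k.sum(), mode="same")

def average_amplitude_trend(v, spike_threshold, window, width, n_maxima=10):
    """Average amplitude of the k-th subthreshold maximum before a spike,
    k = 0 being the last maximum before the spike, averaged over all spikes."""
    x = triangle_smooth(np.asarray(v, dtype=float), width)
    spikes, _ = find_peaks(x, height=spike_threshold)
    collected = [[] for _ in range(n_maxima)]
    for s in spikes:
        seg = x[max(0, s - window): s]
        if seg.size < 3:
            continue
        seg = seg - seg.mean()                 # remove the average in the window
        peaks, _ = find_peaks(seg)
        for k, p in enumerate(reversed(peaks[-n_maxima:].tolist())):
            collected[k].append(seg[p])
    return [float(np.mean(a)) if a else np.nan for a in collected]
```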
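the psd-based coherence measure can be sketched in a similar spirit. the following fragment uses welch's method in place of the ` spctrm ` routine, with a bartlett window of 4096 samples and default overlap, fits a non-normalized lorentzian to the dominant subthreshold peak, and forms a measure as peak height times peak frequency divided by the half-width. since eq. [eq:beta] is not reproduced in this excerpt, that last formula is a stand-in assumption rather than the paper's definition; the frequency band and the toy test signal are likewise illustrative.

```python
import numpy as np
from scipy.signal import welch
from scipy.optimize import curve_fit

def lorentzian(f, h, f0, gamma):
    """Non-normalized Cauchy-Lorentz peak of height h, center f0, half-width gamma."""
    return h * gamma**2 / ((f - f0)**2 + gamma**2)

def coherence_measure(x, fs, fband=(0.5, 20.0)):
    """Fit a Lorentzian to the dominant subthreshold peak of the PSD and
    return height * peak_frequency / half-width (stand-in for eq. [eq:beta])."""
    f, pxx = welch(x - np.mean(x), fs=fs, window="bartlett", nperseg=4096)
    band = (f >= fband[0]) & (f <= fband[1])
    f, pxx = f[band], pxx[band]
    i0 = int(np.argmax(pxx))
    p0 = [pxx[i0], f[i0], 0.5]
    (h, f0, gamma), _ = curve_fit(lorentzian, f, pxx, p0=p0, maxfev=10000)
    return h * f0 / abs(gamma)

# toy test signal: a noisy subthreshold rhythm near 8 Hz
rng = np.random.default_rng(1)
fs = 200.0
t = np.arange(0.0, 200.0, 1.0 / fs)
x = np.sin(2 * np.pi * 8.0 * t) * np.exp(-0.1 * (t % 10.0)) + 0.5 * rng.standard_normal(t.size)
beta = coherence_measure(x, fs)
```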
for weaker noise valuesthe psds are often multi - peaked making it difficult to obtain a meaningful coherence measure .the coherence measure has units of the psd ( squared unit of the considered variable when using as a time scale ) and depends strongly on the sampling rate of the original time series and the windowsize of the psd - estimator .it is therefore difficult to compare absolute values of between time series with different units , scales and sampling rates .rk and pb received funding from an nserc discovery grant .pb was partially funded by the pacific institute for the mathematical sciences .pb acknowledges support by the indian institute of technology madras ( iit - m ) for a visit to the department of physics at the iit - m .we thank ozgur yilmaz for helpful discussions .
|
many neuronal systems and models display a certain class of mixed mode oscillations ( mmos ) consisting of periods of small amplitude oscillations interspersed with spikes . various models with different underlying mechanisms have been proposed to generate this type of behavior . stochastic versions of these models can produce similarly looking time series , often with noise - driven mechanisms different from those of the deterministic models . we present a suite of measures which , when applied to the time series , serves to distinguish models and classify routes to producing mmos , such as noise - induced oscillations or delay bifurcation . by focusing on the subthreshold oscillations , we analyze the interspike interval density , trends in the amplitude and a coherence measure . we develop these measures on a biophysical model for stellate cells and a phenomenological fitzhugh - nagumo - type model and apply them on related models . the analysis highlights the influence of model parameters and reset and return mechanisms in the context of a novel approach using noise level to distinguish model types and mmo mechanisms . ultimately , we indicate how the suite of measures can be applied to experimental time series to reveal the underlying dynamical structure , while exploiting either the intrinsic noise of the system or tunable extrinsic noise .
|
[ sec : introduction ] this paper is set in the framework of _ inductive inference _ , a branch of ( algorithmic ) learning theory .this branch analyzes the problem of algorithmically learning a description for a formal language ( a computably enumerable subset of the set of natural numbers ) when presented successively all and only the elements of that language . for example, a learner might be presented more and more even numbers .after each new number , outputs a description for a language as its conjecture . the learner might decide to output a program for the set of all multiples of , as long as all numbers presented are divisible by .later , when sees an even number not divisible by , it might change this guess to a program for the set of all multiples of .many criteria for deciding whether a learner is _ successful _ on a language have been proposed in the literature .gold , in his seminal paper , gave a first , simple learning criterion , __ -learning _ _ stands for learning from a _ text _ of positive examples ; stands for gold , who introduced this model , and is used to to indicate full - information learning ; stands for _ explanatory_. ] , where a learner is _ successful _iff , on every _ text _ for ( listing of all and only the elements of ) it eventually stops changing its conjectures , and its final conjecture is a correct description for the input sequence .trivially , each single , describable language has a suitable constant function as a -learner ( this learner constantly outputs a description for ) .thus , we are interested in analyzing for which _ classes of languages _ there is a _ single learner_ learning _each _ member of .this framework is also sometimes known as _ language learning in the limit _ and has been studied extensively , using a wide range of learning criteria similar to -learning ( see , for example , the textbook ) .a wealth of learning criteria can be derived from -learning by adding restrictions on the intermediate conjectures and how they should relate to each other and the data .for example , one could require that a conjecture which is consistent with the data must not be changed ; this is known as _ conservative _ learning and known to restrict what classes of languages can be learned ( , we use to denote the restriction of conservative learning ) .additionally to conservative learning , the following learning restrictions are considered in this paper ( see section [ sec : learningcriteria ] for a formal definition of learning criteria including these learning restrictions ) . in _ cautious _ learning ( , ) the learner is not allowed to ever give a conjecture for a strict subset of a previously conjectured set . in _ non - u - shaped _ learning ( , ) a learner may never _ semantically _ abandon a correct conjecture ; in _ strongly non - u - shaped _ learning ( , ) not even syntactic changes are allowed after giving a correct conjecture . 
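the introductory example of guessing sets of multiples can be made concrete with a small simulation. the toy sketch below is only an illustration of the definitions (finite data, and a conjecture represented by its generator k instead of a program for a ce set): the learner conjectures the multiples of the greatest common divisor of the data seen so far and changes its mind only when the new datum is inconsistent with the current conjecture, so it is conservative in the sense described above.

```python
from math import gcd

def conservative_gcd_learner(text):
    """Toy learner for languages of the form {k, 2k, 3k, ...}.
    The conjecture is represented by its generator k; it is changed only
    when the new datum is not in the currently conjectured set, so the
    learner never abandons a conjecture that is consistent with the data."""
    conjecture = None
    history = []
    for x in text:
        if conjecture is None:
            conjecture = x                    # first guess: the multiples of x
        elif x % conjecture != 0:             # datum outside the conjectured set
            conjecture = gcd(conjecture, x)   # mind change
        history.append(conjecture)
    return history

# on a text for the even numbers that begins with multiples of 4, the learner
# keeps conjecturing the multiples of 4 until a non-multiple of 4 appears
print(conservative_gcd_learner([4, 8, 12, 6, 2, 10]))   # [4, 4, 4, 2, 2, 2]
```

on any full text for the multiples of some k, the conjectured generator can only decrease and eventually stabilizes on k, so this toy learner identifies that class in the limit while never revising a consistent hypothesis.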
in_ decisive _ learning ( , ) , a learner may never ( semantically ) return to a _ semantically _ abandoned conjecture ; in _ strongly decisive _ learning ( , ) the learner may not even ( semantically ) return to _ syntactically _ abandoned conjectures .finally , a number of monotonicity requirements are studied ( ) : in _ strongly monotone _ learning ( ) the conjectured sets may only grow ; in _ monotone _ learning ( ) only incorrect data may be removed ; and in _ weakly monotone _ learning ( ) the conjectured set may only grow while it is consistent .the main question is now whether and how these different restrictions reduce learning power .for example , non - u - shaped learning is known not to restrict the learning power , and the same for strongly non - u - shaped learning ; on the other hand , decisive learning _ is _ restrictive .the relations of the different monotone learning restriction were given in .conservativeness is long known to restrict learning power , but also known to be equivalent to weakly monotone learning .cautious learning was shown to be a restriction but not when added to conservativeness in , similarly the relationship between decisive and conservative learning was given . in exercise 4.5.4b of is claimed ( without proof ) that cautious learners can not be made conservative ; we claim the opposite in theorem [ thm : cautvarconv ] .this list of previously known results leaves a number of relations between the learning criteria open , even when adding trivial inclusion results ( we call an inclusion trivial iff it follows straight from the definition of the restriction without considering the learning model , for example strongly decisive learning is included in decisive learning ; formally , trivial inclusion is inclusion on the level of learning restrictions as predicates , see section [ sec : learningcriteria ] ) . with this paperwe now give the complete picture of these learning restrictions .the result is shown as a map in figure [ fig : goldrelations ] .a solid black line indicates a trivial inclusion ( the lower criterion is included in the higher ) ; a dashed black line indicates inclusion ( which is not trivial ) .a gray box around criteria indicates equality of ( learning of ) these criteria .a different way of depicting the same results is given in figure [ fig : partialordergold ] ( where solid lines indicate any kind of inclusion ) .results involving monotone learning can be found in section [ sec : monotone ] , all others in section [ sec : caution ] . at ( -5,-1 ) ; ( nothing ) at ( 0,0 ) * t * ; ( dec ) at ( 0,-1.5 ) ; ( sdec ) at ( 0,-3 ) ; ( mon ) at ( 2.5,-4.5 ) ; ( wmon ) at ( -2.5,-4.5 ) ; ( smon ) at ( 0,-6 ) ; ( nothing ) ( dec ) ; ( dec ) ( sdec ) ; ( sdec ) ( wmon ) ; ( sdec ) ( mon ) ; ( mon ) ( smon ) ; ( wmon ) ( smon ) ; for the important restriction of conservative learning we give the characterization of being equivalent to cautious learning .furthermore , we show that even two weak versions of cautiousness are equivalent to conservative learning .recall that cautiousness forbids to return to a strict subset of a previously conjectured set .if we now weaken this restriction to forbid to return to _ finite _ subsets of a previously conjectured set we get a restriction still equivalent to conservative learning .if we forbid to go down to a correct conjecture , effectively forbidding to ever conjecture a superset of the target language , we also obtain a restriction equivalent to conservative learning . 
on the other hand ,if we weaken it so as to only forbid going to _ infinite _ subsets of previously conjectured sets , we obtain a restriction equivalent to no restriction .these results can be found in section [ sec : caution ] . in _set - driven _learning the learner does not get the full information about what data has been presented in what order and multiplicity ; instead , the learner only gets the set of data presented so far . for this learning modelit is known that , surprisingly , conservative learning is no restriction !we complete the picture for set driven learning by showing that set - driven learners can always be assumed conservative , strongly decisive and cautious , and by showing that the hierarchy of monotone and strongly monotone learning also holds for set - driven learning .the situation is depicted in figure [ fig : hierarchysetdriven ] .these results can be found in section [ sec : setdriven ] . at ( -5,-1 ) ; ( nothing ) at ( 0,0 ) ; ( mon ) at ( 0,-1.5 ) ; ( smon ) at ( 0,-3 ) ; ( nothing ) ( mon ) ; ( mon ) ( smon ) ; a major emphasis of this paper is on the techniques used to get our results .these techniques include specific techniques for specific problems , as well as general theorems which are applicable in many different settings .the general techniques are given in section [ sec : techniques ] , one main general result is as follows .it is well - known that any -learner learning a language has a _ locking sequence _ , a sequence of data from such that , for any further data from , the conjecture does not change and is correct .however , there might be texts such that no initial sequence of the text is a locking sequence .we call a learner such that any text for a target language contains a locking sequence _ strongly locking _ , a property which is very handy to have in many proofs .fulk showed that , without loss of generality , a -learner can be assumed strongly locking , as well as having many other useful properties ( we call this the _ fulk normal form _ , see definition [ defn : fulknormalform ] ) . for many learning criteria considered in this paper it might be too much to hope forthat all of them allow for learning by a learner in fulk normal form . however , we show in corollary [ cor : sinklocking ] that we can always assume our learners to be strongly locking , total , and what we call _ syntactically decisive _ , never _ syntactically _ returning to syntactically abandoned hypotheses .the main technique we use to show that something is decisively learnable , for example in theorem [ thm : natnumsdec ] , is what we call _ poisoning _ of conjectures . in the proof of theorem [ thm : natnumsdec ]we show that a class of languages is decisively learnable by simulating a given monotone learner , but changing conjectures as follows .given a conjecture made by , if there is no mind change in the future with data from conjecture , the new conjecture is equivalent to ; otherwise it is suitably changed , _ poisoned _ , to make sure that the resulting learner is decisive .this technique was also used in to show strongly non - u - shaped learnability . 
finally , for showing classes of languages to be not ( strongly ) decisively learnable , we adapt a technique known in computability theory as a `` priority argument '' ( note , though , that we do not deal with oracle computations ) .we use this technique to reprove that decisiveness is a restriction to -learning ( as shown in ) , and then use a variation of the proof to show that strongly decisive learning is a restriction to decisive learning .[ sec : mathprelim ] unintroduced notation follows , a textbook on computability theory . denotes the set of natural numbers , .the symbols , , , respectively denote the subset , proper subset , superset and proper superset relation between sets ; denotes set difference . and denote the empty set and the empty sequence , respectively .the quantifier means `` for all but finitely many '' . with and denote , respectively , domain and range of a given function .whenever we consider tuples of natural numbers as input to a function , it is understood that the general coding function is used to code the tuples into a single natural number .we similarly fix a coding for finite sets and sequences , so that we can use those as input as well . for finite sequences ,we suppose that for any we have that the code number of is at most the code number of .we let denote the set of all ( finite ) sequences , and the ( finite ) set of all sequences of length at most using only elements .if a function is not defined for some argument , then we denote this fact by , and we say that on _ diverges _ ; the opposite is denoted by , and we say that on _converges_. if on converges to , then we denote this fact by .we let denote the set of all partial functions and the set of all total such functions . and denote , respectively , the set of all partial computable and the set of all total computable functions ( mapping ) .we let be any fixed acceptable programming system for ( an acceptable programming system could , for example , be based on a natural programming language such as c or java , or on turing machines ) .further , we let denote the partial computable function computed by the -program with code number .a set is _ computably enumerable ( ` ce ` ) _ iff it is the domain of a computable function .let denote the set of all ` ce ` sets .we let be the mapping such that . for each , we write instead of . is , then , a mapping from _ onto _ .we say that is an index , or program , ( in ) for .we let be a blum complexity measure associated with ( for example , for each and , could denote the number of steps that program takes on input before terminating ) . for all and we let ( note that a complete description for the finite set is computable from and ) .the symbol is pronounced _ pause _ and is used to symbolize `` no new input data '' in a text .for each ( possibly infinite ) sequence with its range contained in , let ) . by using an appropriate coding ,we assume that and can be handled by computable functions . for any function and all , we use ] to denote the set of all -learnable classes ( learnable by some learner in ) .[ sec : techniques ] in this section we present technically useful results which show that learners can always be assumed to be in some normal form .we will later always assume our learners to be in the normal form established by corollary [ cor : sinklocking ] , the main result of this section .we start with the definition of _delayable_. 
intuitively , a learning criterion is delayable iff the output of a hypothesis can be arbitrarily ( but not indefinitely ) delayed .[ defn : delayable ] let be the set of all non - decreasing with infinite limit inferior , i.e. for all we have . a learning restriction is _ delayable _ iff , for all texts and with , all and all , if and ) \subseteq { \mathrm{content}}(t'[n]) ]. then we have , for all , ) = h(t[r(n)]) ] .thus , we have and , as is non - decreasing , we get as desired .next we define another useful property , which can always be assumed for delayable learning restrictions .[ defn : stronglylocking ] a _ locking sequence for a learner on a language _ is any finite sequence of elements from such that is a correct hypothesis for and , for sequences with elements from , . it is well known that every learner learning a language has a locking sequence on .we say that a learning criterion _ allows for strongly locking learning _iff , for each -learnable class of languages there is a learner such that -learns and , for each and any text for , there is an such that ]. then we have ) \subseteq { \mathrm{content}}(f(t[m])).\ ] ] as this holds for every , we get . from the construction of we know that .thus , is a text for . from the construction of get that does not -learns from as changes infinitely often its mind , a contradiction .next , we will show that converges on and is strongly locking . as is finite, there is such that , for all , ) = f(t[n_0]).\end{aligned}\ ] ] as converges to ) ] is a locking sequence of on . therefore we get that , for all , ) = f(t[n_0 ] \diamond \tau)\ ] ] and therefore ) = h'(t[n_0 ] \diamond \tau).\ ] ] thus , is strongly locking and converges on . to show that fulfills the -restriction , we let ) \diamond t ] .let be such that )| , & \text{if } n \leq n_0 ; \\r(n_0 ) + n - n_0 , & \text{otherwise . }\end{cases}\]]we now show ) = h'(t[n]).\ ] ] _ case 1 : _then we get ) & = h(t'[|f(t[n])| ] ) \\ & = h(f(t[n ] ) ) & \text{as } \\ & = h'(t[n]).\end{aligned}\ ] ] _ case 2 : _then we get ) & = h(t'[r(n_0 ) + n - n_0])\\ & = h(t'[|f(t[n_0])| + n - n_0 ] ) \\ & = h(f(t[n_0])\diamond t[n - n_0 ] ) & \text{as }\\ & = h(f(t[n_0 ] ) ) & \text{\hspace{-10mm}as is a locking sequence of } \\ & = h'(t[n]).\end{aligned}\ ] ] thus , all that remains to be shown is that . obviously , is non - decreasing .especially , we have that is strongly monotone increasing for all .thus we have , for all , .finally we show that ) \subseteq { \mathrm{content}}(t[n]) ] . from the construction of and get that , for all , .thus we get , for all , ) \subseteq { \mathrm{content}}(t[n]) ] , ] are equivalent .last , we will separate these three learning criteria from strongly decisive -learning and show that ] and .thus , there is such that ) : \phi_{h(t[n])}(x ) \leq m ] .next we show that is strongly decisive and conservative ; for that we show that , with every mind change , there is a new element of the target included in the conjecture which is currently not included but is included in all future conjectures ; it is easy to see that this property implies both caution and strong decisiveness .let and be such that ) ) = t[i] ]. then there is such that ) ) = t[j] ] ; ( labeljl ) at ( 10,-1)[] ) = h(t[j]) ] ; at ( 10,-1.5 ) ) \subseteq w_{h(t[j])} ] , there exists such that and )} ] as )} ] does not hold .then we have for all .thus as for any , . 
but does not -learns from text as for all and , ) ] in steps although is infinite ._ case 2 : _ there are and such that ,t) ] as we will show now .let be a text for ) ] . additionally , for all , we have )) ] , i.e. ) \in { \mathcal{l}} ] as we know from the predicate that ) \subset w_{h'({\mathrm{content}}(e[n+1]))} ] and \subseteq [ { \mathbf{txt}^{}}{\mathbf{g}}{\mathbf{caut}}_\mathbf{fin}{\mathbf{ex}}] ] .let be -learnable by a syntactically decisive learner ( see corollary [ cor : sinklocking ] ) .using the s - m - n theorem we get a function such that we let be the following computable predicate . for given sequences and we say if this means that , for every , the set of all such that is finite and computable .we define a learner such that where using recursion . for a given sequence let be such that . means only changes its hypothesis if ensures that made a mind change and that the previous hypothesis does not contain something of the new input data .we first show that is conservative .let and be such that and let be such that .then we have , for all with , , \text { which is equivalent to } \\ & \exists \rho \in ( w_{h(\hat{\sigma})}^t)^ * , |\hat{\sigma } \diamond \rho| \leq t : h(\hat{\sigma } \diamond \rho ) \neq h(\hat{\sigma});\end{aligned}\ ] ] as there is such that .therefore , we get , as is monotone non - decreasing in .thus , is conservative .second , we will show that converges on any text for a language .let and be a text for .thus , converges on .suppose does not converge on .let the corresponding sequence of hypotheses .then is a text for as for every , .as infinitely often changes its mind , we have that , for infinitely many , there is , for each , such that with holds . as means that , diverges on , a contradiction .third we will show that converges to a correct hypothesis .let be such that converges to on . in the following we consider two cases for this ._ case 1 : _ if is a locking sequence of on we have , for all , and especially for all with , .thus , ._ case 2 : _suppose is not a locking sequence .as and converges , we have for all and with ] .using theorem [ thm : cautvarconv ] and the equivalence of weakly monotone and conservative learning ( using ) , we get the following .[ cor : convwmoncaut ] we have = [ { \mathbf{txt}^{}}{\mathbf{g}}{\mathbf{wmon}}{\mathbf{ex } } ] = [ { \mathbf{txt}^{}}{\mathbf{g}}{\mathbf{caut}}{\mathbf{ex}}].\ ] ] using corollary [ cor : convwmoncaut ] and theorem [ thm : convinsdec ] we get that weakly monotone -learning is included in strongly decisive -learning .theorem [ thm : convwmonint ] shows that this inclusion is proper .[ cor : wmoninsdec ] we have \subset [ { \mathbf{txt}^{}}{\mathbf{g}}{\mathbf{sdec}}{\mathbf{ex}}].\ ] ] the next theorem is the last theorem of this section and shows that forbidding to go down to strict _infinite _ subsets of previously conjectures sets is no restriction .[ thm : cautinft ] we have = [ { \mathbf{txt}^{}}{\mathbf{g}}{\mathbf{ex}}].\ ] ] obviously we have \subseteq [ { \mathbf{txt}^{}}{\mathbf{g}}{\mathbf{ex}}] ] .let be a set of languages and be a learner such that -learns and is strongly locking on ( see corollary [ cor : sinklocking ] ) .we define , for all and , the set such that using the s - m - n theorem we get a function such that we define a learner as we will show now that the learner -learns . let an and a text for be given . as is strongly locking there is such that for all , \diamond \tau ) = h(t[n_0]) ] .thus we have , for all , ) = h'(t[n_0]) ] . 
to show that the learning restriction holds , we assume that there are such that ) } \subset w_{h'(t[i])} ] is infinite . w.l.o.g . is the first time that returns the hypothesis )} ] . from the definition of the function we get that ) \subseteq w_{h'(t[j ] ) } \subseteq w_{h'(t[i])} ] and therefore )} ] is infinite .[ sec : decisiveness ] in this section the goal is to show that decisive and strongly decisive learning separate ( see theorem [ thm : stronglydecisivelearning ] ) . for this proofwe adapt a technique known in computability theory as a `` priority argument '' ( note , though , that we are not dealing with oracle computations ) . in order to illustrate the proof with a simpler version , we first reprove that decisiveness is a restriction to -learning ( as shown in ) . for both proofswe need the following lemma , a variant of which is given in for the case of decisive learning ; it is easy to see that the proof from also works for the cases we consider here .[ lem : notnatnum ] let be such that and , for each finite set , there are only finitely many with .let . then , if is -learnable , it is so learnable by a learner which never outputs an index for . now we get to the theorem regarding decisiveness .its proof is an adaptation of the proof given in , rephrased as a priority argument .this rephrased version will be modified later to prove the separation of decisive and strongly decisive learning .[ thm : decisivelearning ] we have \subset [ { \mathbf{txt}^{}}{\mathbf{g}}{\mathbf{ex}}].\ ] ] for this proof we will employ a technique from computability theory known as _ priority argument_. for this technique , one has a set of _ requirements _ ( we will have one for each ) and a _ priority _ on requirements ( we will prioritize smaller over larger ) .one then tries to fulfill requirements one after the other in an iterative manner ( fulfilling the unfulfilled requirement of highest priority without violating requirements of higher priority ) so that , in the limit , the entire infinite list of requirements will be fulfilled .we apply this technique in order to construct a learner ( and a corresponding set of learned sets ) .thus , we will give requirements which will depend on the to be constructed .in particular , we will use a list of requirement , where lower have higher priority . for each , will correspond to the fact that learner is not a suitable decisive learner for .we proceed with the formal argument . for each ,let requirement be the disjunction of the following three predicates depending on the to be constructed .a. : and learns .b. and learns and some with . c. .if all hold , then every learner which never outputs an index for fails to learn decisively as follows . for each learner which never outputs an index for , either ( i ) of holds , implying that some co - singleton is learned by but not by . or ( ii ) holds , then there is a on which generalizes , but will later have to abandon this correct conjecture in order to learn some finite set ; as , after the change to a hypothesis for , the text can still be extended to a text for , the learner is not decisive .however , the price for avoiding it is to output a conjecture for . ]thus , all that remains is to construct in a way that all of are fulfilled . 
in order to coordinate the different requirements when constructing on different inputs , we will divide the set of all possible input sequences into infinitely many segments ,of which every requirement can `` claim '' up to two at any point of the algorithm defining ; the chosen segments can change over the course of the construction , and requirements of higher priority might `` take away '' segments from requirements with lower priority ( but not vice versa ) .we follow with the division of segments : for any set we let be the _ i d of _ ; for ease of notation , for each finite sequence , we let . for each ,the segment contains all with .we note that is _ monotone _ ,i.e. the first way of ensuring some requirement is via ( i ) ; as this part itself is not decidable , we will check a `` bounded '' version thereof .we define , for all , for any , if we can find an such that , for all , we have , then it suffices to make learn in order to fulfill via part ( i ) ; this requires control over segment in defining .note that , if we ever can not take control over some segment because some requirement with higher priority is already in control , then we will try out different ( only finitely many are blocked ) .if we ever find a such that , then we can work on fulfilling via ( ii ) , as we directly get a where over the content generalizes . in order to fulfill via ( ii ) we have to choose a finite set with .we will then take control over the segments corresponding to and ( for growing ) , _ but not necessarily over segment _ , and thus establish via ( ii ) . note that , again , the segments we desire might be blocked ; but only finitely many are blocked , and we require control over and , both of which are at least ( this follows from being monotone , see equation ( [ eq : idmonotone ] ) , and from ) ; thus , we can always find an for which we can either follow our strategy for ( i ) or for ( ii ) as just described .it is tempting to choose simply , this fulfills all desired properties .the main danger now comes from the possibility of being an index for : this will imply that , for growing , will also be growing indefinitely .of course , there is no problem with satisfying , it now holds via ( iii ) ; but as soon as at least two requirements will take control over segments for indefinitely growing , they might start blocking each other ( more precisely , the requirement of higher priority will block the one of lower priority ) .we now need to know something about our later analysis : we will want to make sure that every requirement either ( a ) converges in which segments to control or ( b ) for all , there is a time in the definition of after which will never have control over any segment corresponding to ids ; in fact , we will show this later by induction ( see claim [ claim : inductionproof ] ) . any requirement which takes control over segments for indefinitely growing be blocked infinitely often , and thus forced to try out different for fulfilling , including returning to that were abandoned previously because of ( back then ) being blocked by a requirement of higher priority .thus , such a requirement would fulfill neither ( a ) nor ( b ) from above .we will avoid this problem by _ not _ choosing , but instead choosing a which grows in i d along with the corresponding .the idea is to start with and then , as grows , add more elements . for thiswe make some definitions as follows . for a finite sequence we let be the least element not in which is larger than all elements of . 
for any finite sequence and let be such that for all and with we have thus , we will use the sets to satisfy ( ii ) of ( in place of ) . we now have all parts that are required to start giving the construction for . in that constructionwe will make use of a subroutine which takes as inputs a set of blocked indices , a requirement and a time bound , and which finds triples with such that .\ ] ] we call fulfilling equation ( [ eq : defntwitness ] ) for given and a _ -witness for . the subroutine is called ` findwitness ` and is given in algorithm [ alg : priorityargumentdecsubroutine ] .` error ` we now formally show termination and correctness of our subroutine .let and a finite set be given .the algorithm ` findwitness ` on terminates and returns a -witness for such that . from the condition in line [ line : pcondition ]we see that the search in line [ line : sigmasearch ] is necessarily successful , showing termination . using the monotonicity of from equation ( [ eq : idmonotone ] ) on equation ( [ eq : defdtsigma ] ) we have that the subroutine ` findwitness ` can not return ` error ` on any arguments : for , we either have or the and chosen are larger than . with the subroutine given above, we now turn to the priority construction for defining detailed in algorithm [ alg : priorityargumentdec ] .this algorithm assigns witness tuples to more and more requirements , trying to make sure that they are -witnesses , for larger and larger . for each , will be the witness tuple associated with after iterations ( defined for all ) .we say that a requirement _ blocks _ an i d iff for the witness tuple currently associated with .we say that a tuple is _-legal _ iff it is a -witness for and and are not blocked by any with .clearly , it is decidable whether a triple is -legal . in order to define the learner we will need some functions giving us indices for the languages to be learned . to that end ,let ( using the s - m - n theorem ) be such that to increase readability , we allow assignments to values of for arguments on which was already defined previously ; in this case , the new assignment has no effect. regarding algorithm [ alg : priorityargumentdec ] , note that lines 38 make sure that we have an appropriate witness tuple .we will later show that the sequence of assigned witness tuples will converge ( for learners never giving a conjecture for ) .lines 911 will try to establish the requirement via ( i ) , once this fails it will be established in lines 1216 via ( ii ) . after this construction of , we let be the target to be learned . first note that the ids blocked by different requirements are always disjoint ( at the end of an iteration of ) . as the major part of the analysis , we show the following claim by induction , showing that , for each , either the triple associated with converges or it grows arbitrarily in both its and value ( this is what we earlier had to carefully choose the for ) . 
[claim : inductionproof ] for all we have and , for all , there is such that either or as our induction hypothesis , let be given such that the claim holds for all .case 1 : there is such that .+ then , for all , is a -witness for ; in the case of , we have that , for all but finitely many with , , and index for ; this implies , which shows .otherwise we have , for all , .furthermore we get , for all but finitely many with , , and index for ; this implies .consider now all those with .if , then is already be defined on infinitely many such , namely in case of .however , we have that is a _ proper _ subset of , which shows that , on any text for , will eventually only output , which gives as desired and , thus , .case 2 : otherwise .+ for each i d there exists at most finitely many with and is used in the witness triple for ; this follows from the choice of in the subroutine ` findwitness ` as a minimum , where , for larger , all previously considered are still considered ( so that the chosen minimum might be smaller for larger , but never go up , which shows convergence ) .a triple is only abandoned if it is not legal any more ; this means it is either blocked or it is not a -witness triple for some . using the induction hypothesis , the first can only happen finitely many times for any given tuple ; the second implies the desired increase in both the and the value of the witness tuple .for this we also use our specific choice of as growing along with the i d of the associated and we use that any witness tuple with a with has and value of at least , due to the monotonicity of . to show ( we will show ( 3 ) ) , let be the maximum over all existing for the converging by the induction hypothesis and . let be the -witness triple chosen for in iteration .suppose , by way of contradiction , that is not an index for ; let .let be the maximum over all found by the induction hypothesis for all with the chosen .since the triple is -legal for all , we get a contradiction to the unbounded growth of the witness triple .this shows that is an index for , and thus we have . with the last claimwe now see that all requirement are satisfied .this implies that can not be -learned by a learner never using an index for as conjecture .we have that .furthermore , for any i d , there are only finitely many sets in with that i d ; this implies that , for every finite set , there are only finitely many elements with .thus , using lemma [ lem : notnatnum ] , is not decisively learnable at all .while the previous theorem showed that decisiveness poses a restriction on -learning , the next theorem shows that the requirement of strong decisiveness is even more restrictive .the proof follows the proof of theorem [ thm : decisivelearning ] , with some modifications .[ thm : stronglydecisivelearning ] we have \subset [ { \mathbf{txt}^{}}{\mathbf{g}}{\mathbf{dec}}{\mathbf{ex}}].\ ] ] we use the same language and definitions as in the proof of theorem [ thm : decisivelearning ] .the idea of this proof is as follows .we build a set with a priority construction just as in the proof of theorem [ thm : decisivelearning ] , the only essential change being in the definition of the hypothesis : the change from to and back to on texts for is what made not decisively learnable .thus , we will change to be a hypothesis for as well _ as soon as changed its hypothesis on an extension of _ , and otherwise it is a hypothesis for as before .this will make decisive on texts for , but will not be strongly decisive . 
furthermore , we will make sure that for sequences with i d , only conjectures for sets with i d are used , so that indecisiveness can only possibly happen within a segment. now the last source of not being decisively learnable is as follows .when different requirements take turns with being in control over the segment , they might introduce returns to abandoned conjectures . to counteract this ,we make sure that any conjecture which is ever abandoned on a segment of i d is for , which will give decisiveness .we first define an alternative for the function from that proof with the s - m - n theorem such that , for all , as we have , this is a valid application of the s - m - n theorem .we also want to replace the output of according to line [ line : folowe ] of algorithm [ alg : priorityargumentdec ] .to that end , let be as given by the s - m - n theorem such that , for all and , we construct now a learner again according to a priority construction , as given in algorithm [ alg : priorityargumentsdec ] .note that lines 1[line : elseline ] are identical with the construction from algorithm [ alg : priorityargumentdec ] and lines 38 again make sure that we have an appropriate witness tuple and lines 911 try to establish the requirement via ( i ) .the main difference lies in the way that is established once this fails in lines 1218 via ( ii ) : here we need to check for a mind change and adjust what language should learn accordingly .it is easy to check that , on any sequence , gives conjectures for languages of the same i d as that of .thus , indecisiveness of can only occur within a segment .next we will modify to avoid indecisiveness from different requirements taking turns controlling the same segment . with the s - m - n theoremwe let be such that , for all , let be such that , for all , we now let .it is easy to see that is decisive on all texts where it always makes an output , since indecisiveness can again only happen within a segment , and _ poisons _ any possible non - final conjectures within a segment .let a strongly decisive learner for be given which never makes a conjecture for ( we are reasoning with lemma [ lem : notnatnum ] again ) .let be such that .reasoning as in the proof of theorem [ thm : decisivelearning ] , we see that there is a triple such that converges to that triple in the construction of .if , for all , , then we have that ( on any sequences with i d , gives an output for , and it converges ) .assume now that there is such that , for all , we have . case 1 : there is with such that .+ let be a text for .then on converges to an index for , giving . butthis shows that was not strongly decisive on any text for starting with , a contradiction .case 2 : otherwise .+ let be a text for .then on converges to an index for , giving . but converges on any text for starting with to , a contradiction to ( so the convergence is not to a correct hypothesis ) . in both caseswe get the desired contradiction .[ sec : setdriven ] in this section we give theorems regarding set - driven learning . 
for thiswe build on the result that set - driven learning can always be done conservatively .first we show that any conservative set - driven learner can be assumed to be cautious and syntactically decisive , an important technical lemma .[ thm : sdsyntdec ] we have = [ { \mathbf{txt}^{}}{\mathbf{sd}}{\mathbf{conv}}{\mathbf{syndec}}{\mathbf{ex}}].\ ] ] in other words , every set - driven learner can be assumed syntactically decisive .let a set - driven learner be given .following we can assume to be conservative .we define a learner such that , for all finite sets , let .we will show that is syntactically decisive and -learns .let be given and let be a text for .first , we show that -learns from . as is a set driven learnerthere is such that ) ) = h({\mathrm{content}}(t[n])) ] .we will show that , for all ] , i.e. -learns .second , we will show that is conservative .whenever makes a mind change , will also make a mind change ; as , for all , ) ) } = w_{h'({\mathrm{content}}(t[n]))} ] . thus , there are and with such that and .we consider the case that ) = h(t[n-1]) ] is a locking sequence .let ) ] .thus we have , for all , ) ) ) = d' ] . to increase readability we let ) ] . as is syntactically decisive , only changes its mind if changed its mind before .thus we have as and we get from claim [ claim : syndecconcl ] ( with and ) that this shows that is conservative .we will now show that as this implies that is cautious and strongly decisive . from the construction of get that there is with such that is consistent on , i.e. using claim [ claim : syndecconcl ] again ( this time with , and ) , we see that there is which shows that .[ sec : monotone ] in this section we show the hierarchies regarding monotone and strongly monotone learning , simultaneously for the settings of and in theorems [ thm : smon ] and [ thm : wmonnotmon ] . with theorems [ thm : natnumsdec ] and[ thm : moninsdec ] we establish that monotone learnabilty implies strongly decisive learnability .this is a standard proof which we include for completeness .let and .let such that and using the s - m - n theorem such that , for all , we first show that is -learnable .we let a learner such that , for all , let and be a text for .thus , there is such that and any element in ) ] and ) } = w_{p(k ) } = l_k ] for the first time .thus we have that for all ) ) = h({\mathrm{content}}(t[n])) ] .obviously learns weakly mononote as the learner only change its mind if a greater odd element appears in the text .suppose now there is a learner such that -learns . let be a locking sequence of on and such that , for all , .let a locking sequence of on and be a text for starting with .let be a locking sequence of on .then , we have as the datum is in and in but not in , is not monotone on the text for .let be a learner in fulk normal form such that -learns with . as is strongly locking on is a locking sequence of on . using this locking sequence we get an uniformly enumerable sequence of languages such that , we will use the as hypotheses .note that any hypothesis is either semantically equivalent to or , if is not a locking sequence of for any language , is an index for a finite superset of . in the latter casewe call the hypothesis _poisoned_. let and be a text for .as is strongly locking and -learns there is such that , for all , ) = h(t[n_0 ] \diamond \sigma) ] .thus , there is such that , for all ) ] .this implies that , for all , ) = h'(t[n]) ] and ) \neq h'(t[j]) ] ._ case 1 : _) ]is poisoned or not , there is ] .( ] otherwise . 
) as ) ] we get through the construction of that )} ] , a contradiction ._ case 2.1 : _) ] is a locking sequence on for a language and ) } \in { \mathbf{txt}^{}}{\mathbf{g}}{\mathbf{ex}}(h) ] is poisoned we have ) } \notin { \mathbf{txt}^{}}{\mathbf{g}}{\mathbf{ex}}(h) ] , a contradiction .[ thm : moninsdec ] we have that any monotone -learnable class of languages is strongly decisive learnable , while the converse does not hold , i.e. \subset [ { \mathbf{txt}^{}}{\mathbf{g}}{\mathbf{sdec}}{\mathbf{ex}}].\ ] ] _ case 1 : _ is dense .we will show now that -learns the class . let and be a text for .suppose there are and with such that ) } \nsubseteq w_{h(t[j])} ] .let )}\backslash w_{h(t[j])} ] .let be a text for and be such that \diamond t' ] but )}$ ] which is a contradiction as is monotone .thus , -learns , which implies that -learns . using corollary [ cor : wmoninsdec ]we get that is -learnable .k. jantke .monotonic and non - monotonic inductive inference of functions and patterns . in j. dix ,k. jantke , and p. schmitt , editors , _ nonmonotonic and inductive logic _ ,volume 543 of _ lecture notes in computer science _ , pages 161177 .
|
we investigate how different learning restrictions reduce learning power and how the different restrictions relate to one another . we give a complete map for nine different restrictions both for the cases of complete information learning and set - driven learning . this completes the picture for these well - studied _ delayable _ learning restrictions . a further insight is gained by different characterizations of _ conservative _ learning in terms of variants of _ cautious _ learning . our analyses greatly benefit from general theorems we give , for example showing that learners with exclusively delayable restrictions can always be assumed total .
|
in the early years of the previous century , the main aim of population genetics theory was to validate the darwinian theory of evolution , using the mendelian hereditary mechanism as the vehicle for determining how the characteristics of any daughter generation depended on the corresponding characteristics of the parental generation . by the 1960s , however , that aim had been achieved , and the theory largely moved in a new , retrospective and statistical , direction . this happened because , at that time , data on the genetic constitution of a population , or at least on a sample of individuals from that population , started to become available . what could be inferred about the past history of the population leading to these data ? retrospective questions of this type include : `` how do we estimate the time at which mitochondrial eve , the woman whose mitochondrial dna is the most recent common ancestor of the mitochondrial dna currently carried in the human population , lived ? how can contemporary genetic data be used to track the ` out of africa ' migration ? how do we detect signatures of past selective events in our contemporary genomes ? '' kingman s famous coalescent theory became a central vehicle for addressing questions such as these . the very success of coalescent theory has , however , tended to obscure kingman s other contributions to population genetics theory . in this note we review his various contributions to that theory , showing how coalescent theory arose , perhaps naturally , from his earlier contributions . kingman attended lectures in genetics at cambridge in about 1960 , and his earliest contributions to population genetics date from 1961 . it was well known at that time that in a randomly mating population for which the fitness of any individual depended on his genetic make - up at a single gene locus , the mean fitness of the population increased from one generation to the next , or at least remained constant , if only two possible alleles , or gene types , often labelled and , were possible at that gene locus . however , it was well known that more than two alleles could arise at some loci ( witness the abo blood group system , admitting three possible alleles , a , b and o ) . showing that in this case the mean population fitness is non - decreasing in time under random mating is far less easy to prove . this was conjectured by mandel and hughes ( 1958 ) and proved in the ` symmetric ' case by scheuer and mandel ( 1959 ) and mulholland and smith ( 1959 ) , and more generally by atkinson _ et al . _ ( 1960 ) and ( very generally ) kingman ( 1961a , b ) .
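the multi - allele result just described is easy to check numerically . the following sketch is our own ; the number of alleles , the viability matrix and the starting frequencies are arbitrary choices . it iterates the standard one - locus viability - selection recurrence under random mating and confirms that the mean fitness never decreases :

```python
# numerical illustration (a sketch, not kingman's proof) of the result above:
# for one locus with k alleles and constant viabilities w_ij (w symmetric and
# non-negative), the random-mating selection recurrence
#     p_i' = p_i * (w p)_i / (p^T w p)
# never decreases the mean fitness p^T w p.
import numpy as np

rng = np.random.default_rng(0)
k = 6                                         # number of alleles (arbitrary)
w = rng.uniform(0.1, 1.0, size=(k, k))
w = (w + w.T) / 2                             # symmetric viability matrix
p = rng.dirichlet(np.ones(k))                 # arbitrary starting allele frequencies

mean_fitnesses = []
for _ in range(200):
    wbar = p @ w @ p                          # mean fitness of this generation
    mean_fitnesses.append(wbar)
    p = p * (w @ p) / wbar                    # next-generation allele frequencies

increments = np.diff(mean_fitnesses)
print("minimum increment in mean fitness:", increments.min())
# non-negative, up to floating-point rounding, for any symmetric non-negative w
```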
despite this success ,kingman then focused his research in areas quite different from genetics for the next fifteen years .the aim of this paper is to document some of his work following his re - emergence into the genetics field , dating from 1976 .both of us were honoured to be associated with him in this work .neither of us can remember the precise details , but the three - way interaction between the uk , the usa and australia , carried out mainly by the now out - of - date flimsy blue aerogrammes , must have started in 1976 , and continued during the time of kingman s intense involvement in population genetics .this note is a personal account , focusing on this interaction : many others were working in the field at the same time .one of kingman s research activities during the period 1961 - 1976 leads to our first ` background ' theme .in 1974 he established ( kingman , 1975 ) a surprising and beautiful result , found in the context of storage strategies .it is well known that the symmetric -dimensional dirichlet distribution where , , does not have a non - trivial limit as , for given fixed . despite this , if we let and in such a way that the product remains fixed at a constant value , then the distribution of the _ order statistics _ converges to a non - degenerate limit .( the parameter will turn out to have an important genetical interpretation , as discussed below . )kingman called this the poisson dirichlet distribution , but we suggest that its true author be honoured and that it be called the ` kingman distribution ' .we refer to it by this name in this paper .so important has the distribution become in mathematics generally that a book has been written devoted entirely to it ( feng , 2010 ) .this distribution has a rather complex form , and aspects of this form are given below .the kingman distribution appears , at first sight , to have nothing to do with population genetics theory .however , as we show below , it turns out , serendipitously , to be central to that theory . to see why this is so, we turn to our second ` background ' theme , namely the development of population theory in the 1960s and 1970s .the nature of the gene was discovered by watson and crick in 1953 . for our purposesthe most important of their results is the fact that a gene is in effect a dna sequence of , typically , some 5000 bases , each base being one of four types , a , g , c or t. thus the number of types , or alleles , of a gene consisting of 5000 bases is .given this number , we may for many practical purposes suppose that there are infinitely many different alleles possible at any gene locus . however , gene sequencing methods took some time to develop , and little genetic information at the fundamental dna level was available for several decades after watson and crick . the first attempt at assessing the degree of genetic variation from one person to another in a population at a less fundamental level depended on the technique of gel electrophoresis , developed in the 1960s . in loose terms, this method measures the electric charge on a gene , with the charge levels usually thought of as taking integer values only .genes having different electric charges are of different allelic types , but it can well happen that genes of different allelic types have the same electric charge . 
thus there is no one - to - one relation between charge level and allelic type .a simple mutation model assumes that a mutant gene has a charge differing from that of its parent gene by either .we return to this model in a moment . in 1974kingman travelled to australia , and while there met pat moran ( as it happens , the phd supervisor of both authors of this paper ) , who was working at that time on this ` charge - state ' model .the two of them discussed the properties of a stochastic model involving a population of individuals , and hence genes at any given locus .the population is assumed to evolve by random sampling : any daughter generation of genes is found by sampling , with replacement , from the genes from the parent generation .( this is the well - known ` wright fisher ' model of population genetics , introduced into the population genetics literature independently by wright ( 1931 ) and fisher ( 1922 ) . )further , each daughter generation gene is assumed to inherit the same charge as that of its parent with probability , and with probability is a charge - changing mutant , the change in charge being equally likely to be and . at first sightit might seem that , as time progresses , the charge levels on the genes in future generations become dispersed over the entire array of positive and negative integers .but this is not so .kingman recognized that there is a coherency to the locations of the charges on the genes brought about by common ancestry and the genealogy of the genes in any generation . in kingmans words ( kingman 1976 ) , amended here to our terminology , `` the probability that [ two genes in generation have a common ancestor gene [ in generation , for , ] is , which is near unity when is large compared to .thus the [ locations of the charges in any generation ] form a coherent group , , and the relative distances between the [ charges ] remain stochastically bounded '' .we do not dwell here on the elegant theory that kingman developed for this model , and note only that in the above quotation we see here the beginnings of the idea of looking backward in time to discuss properties of genetic variation observed in a contemporary generation .this viewpoint is central to kingman s concept of the coalescent , discussed in detail below .parenthetically , the question of the mean number of ` alleles ' , or occupied charge states , in a population of size ( 2 genes ) is of some mathematical interest .this depends on the mutation rate and the population size .it was originally conjectured by kimura and ohta ( 1978 ) that this mean remains bounded as .however , kesten ( 1980a , b ) showed that it increases indefinitely as , but at an extraordinarily slow rate .more exactly , he found the following astounding result .define , , , 2 , 3 , , and as the largest such that .suppose that .then the random number of ` alleles ' in the population divided by converges in probability to a constant whose value is approximately 2 as .some idea of the slowness of the divergence of the mean number of alleles can be found by observing that if , then . 
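a direct simulation of the one - dimensional charge - state model described above illustrates both the coherence of the charges and the very slow growth in the number of occupied states . the sketch below is ours ; the population size , mutation rate and number of generations are arbitrary illustrative values . it uses wright - fisher reproduction with the mutation scheme of the model :

```python
# simulation of the charge-state ('ladder') model: each of the 2N genes in a
# generation picks its parent uniformly at random from the previous generation,
# keeps the parental charge with probability 1 - u, and otherwise mutates by
# +1 or -1 with equal probability.  the spread of the charges stays
# stochastically bounded, and the number of occupied charge states grows only
# very slowly, in line with kesten's result.
import numpy as np

rng = np.random.default_rng(1)
two_n, u, generations = 2000, 1e-3, 5000      # illustrative parameter values
charges = np.zeros(two_n, dtype=int)          # all genes start at charge 0

for t in range(generations):
    parents = rng.integers(0, two_n, size=two_n)     # random sampling of parents
    charges = charges[parents]
    mutants = rng.random(two_n) < u
    charges[mutants] += rng.choice([-1, 1], size=mutants.sum())

print("range of charges (max - min):", charges.max() - charges.min())
print("number of occupied charge states:", np.unique(charges).size)
```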
in a later paper ( kingman 1977a ), kingman extended the theory to the multi - dimensional case , where it is assumed that data are available on a vector of measurements on each gene .much of the theory for the one - dimensional charge - state model carries through more or less immediately to the multi - dimensional case .as the number of dimensions increases , some of this theory established by kingman bears on the ` infinitely many alleles ' model discussed in the next paragraph , although as kingman himself noted , the geometrical structure inherent in the model implies that a convergence of his results to those of the infinitely - many - alleles model does not occur , since the latter model has no geometrical structure .the infinitely - many - alleles model , introduced in the 1960s , forms the second background development that we discuss .this model has two components .the first is a purely demographic , or genealogical , model of the population .there are many such models , and here we consider only the wright fisher model referred to above .( in the contemporary literature many other such models are discussed in the context of the infinitely - many - alleles model , particularly those of moran ( 1958 ) and cannings ( 1974 ) , discussed in section [ robust ] . )the second component refers to the mutation assumption , superimposed on this model . in the infinitely - many - alleles modelthis assumption is that any new mutant gene is of an allelic type never before seen in the population .( this is motivated by the very large number of alleles possible at any gene locus , referred to above . )the model also assumes that the probability that any gene is a mutant is some fixed value , independent of the allelic type of the parent and of the type of the mutant gene . from a practical point of view, the model assumes a technology ( relevant to the 1960s ) which is able to assess whether any two genes are of the same or are of different allelic types ( unlike the charge - state model , which does not fully possess this capability ) , but which is not able to distinguish any further between two genes ( as would be possible , for example , if the dna sequences of the two genes were known ) .further , since an entire generation of genes is never observed in practice , attention focuses on the allelic configuration of the genes in a sample of size , where is assumed to be small compared to , the number of genes in the entire population .given the nature of the mechanism assumed in this model for distinguishing the allelic types of the genes in the sample , the data in effect consist of a partition of the integer described by the vector , where is the number of allelic types observed in the sample exactly times each .it is necessary that , and it turns out that under this condition , and to a close approximation , the stationary probability of observing this vector is where is defined as and ( ewens ( 1972 ) , karlin and mcgregor ( 1972 ) ) .the marginal distribution of the number of distinct alleles in the sample is found from ( [ eqn : ke ] ) as where is a stirling number of the first kind .it follows from ( [ eqn : ke ] ) and ( [ eqn : kdist ] ) that is a sufficient statistic for , so that the conditional distribution of given is independent of .the relevance of this observation is as follows . 
as noted above, the extent of genetic variation in a population was , by electrophoresis and other methods , beginning to be understood in the 1960s .as a result of this knowledge , and for reasons not discussed here , kimura advanced ( kimura 1968 ) the so - called ` neutral theory ' , in which it was claimed that much of the genetic variation observed did not have a selective basis .rather , it was claimed that it was the result of purely random changes in allelic frequency inherent in the random sampling evolutionary model outlined above .this ( neutral ) theory then becomes the null hypothesis in a statistical testing procedure , with some selective mechanism being the alternative hypothesis .thus the expression in ( [ eqn : ke ] ) is the null hypothesis allelic - partition distribution of the alleles in a sample of size .the fact that the conditional distribution of given is independent of implies that an objective testing procedure for the neutral theory can be found free of unknown parameters .both authors of this paper worked on aspects of this statistical testing theory during the period 19721978 , and further reference to this is made below .the random sampling evolutionary scheme described above is no doubt a simplification of real evolutionary processes , so in order for the testing theory to be applicable to more general evolutionary models it is natural to ask : `` to what extent does the expression in ( [ eqn : ke ] ) apply for evolutionary models other than that described above ?'' one of us ( gaw ) worked on this question in the mid-1970s ( watterson , 1974a , 1974b ) .this question is also discussed below .one of us ( gaw ) read kingman s 1975 paper soon after it appeared and recognized its potential application to population genetics theory . in the 1970sthe joint density function ( [ eqn : k1 ] ) was well known to arise in that theory when some fixed finite number of alleles is possible at the gene locus of interest , with symmetric mutation between these alleles . in population genetics theoryone considers , as mentioned above , infinitely many possible alleles at any gene locus , so that the relevance of kingman s limiting ( ) procedure to the infinitely many alleles model , that is the relevance of the kingman distribution , became immediately apparent .this observation led ( watterson 1976 ) to a derivation of an explicit form for the joint density function of the first order statistics , , , in the kingman distribution .( there is an obvious printer s error in equation ( 8) of watterson s paper . )this joint density function was shown to be of the form where , is euler s constant , and is best defined through the laplace transform equation ( watterson and guess ( 1977 ) ) the expression ( [ eqn : k2 ] ) simplifies to when , and in particular , when .population geneticists are interested in the probability of ` population monomorphism ' , defined in practice as the probability that the most frequent allele arises in the population with frequency in excess of 0.99 . equation ( [ eqn : k5 ] ) implies that this probability is close to .kingman himself had placed some special emphasis on the largest of the order statistics , which in the genetics context is the allele frequency of the most frequent allele .this leads to interesting questions in genetics .for instance , crow ( 1973 ) had asked : `` what is the probability that the most frequent allele in a population at any time is also the oldest allele in the population at that time ? 
'' a nice application of reversibility arguments for suitable population models allowed watterson and guess ( 1977 ) to obtain a simple answer to this question . in models where all alleles are equally fit ,the probability that any nominated allele will survive longest into the future is ( by a simple symmetry argument ) its current frequency . for time - reversible processes ,this is also the probability that it is the oldest allele in the population .thus conditional on the current allelic frequencies , the probability that the most frequent allele is also the oldest is simply its frequency .thus the answer to crow s question is simply the mean frequency of the most frequent allele .a formula for this mean frequency , as a function of the mutation parameter , together with some numerical values , were given in watterson and guess ( 1977 ) , and a partial listing is given in the first row of table [ tmfao ] .( we discuss the entries in the second row of this table in section [ gem ] . ).mean frequency of ( a ) the most frequent allele , ( b ) the oldest allele , in a population as a function of . the probability that the most frequent allele is the oldest allele is also its mean frequency . [cols="^,^,^,^,^,^,^,^,^",options="header " , ] as will be seen from the table , the mean frequency of the most frequent allele decreases as increases .watterson and guess ( 1977 ) provided the bounds , which give an idea of the value of for small values of , and also showed that decreases asymptotically like ( log )/ , giving an idea of the value of for large .from the point of view of testing the neutral theory of kimura , watterson ( 1977 , 1978 ) subsequently used properties of these order statistics for testing the null hypothesis that there are no selective forces determining observed allelic frequencies .he considered various alternatives , particularly heterozygote advantage or the presence of some deleterious alleles .for instance , in ( watterson 1977 ) he investigated the situation when all heterozygotes had a slight selective advantage over all homozygotes .the population truncated homozygosity figures prominently in the allelic distribution corresponding to ( [ eqn : k2 ] ) and was thus studied as a test statistic for the null hypothesis of no selective advantage .similarly , when only a random sample of genes is taken from the population , the sample homozygosity can be used as a test statistic of neutrality . herewe make a digression to discuss two of the values in the first row of table [ tmfao ] .it is well known that in the case , the allelic partition formula ( [ eqn : ke ] ) describes the probabilistic structure of the lengths of the cycles in a random permutation of the numbers .each cycle corresponds to an allelic type and in the notation thus indicates the number of cycles of length .various limiting ( ) properties of random permutations have long been of interest ( see for example finch ( 2003 ) ) .finch ( page 284 ) gives the limiting mean of the normalized length of the longest cycle as in such a random permutation , and this agrees with the value listed in table [ tmfao ] for the case .( finch also in effect gives the standard deviation of this normalized length as . 
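both the table and this digression are easy to check by simulation . the sketch below is ours , with arbitrary simulation sizes : part ( a ) estimates the mean normalized length of the longest cycle of a uniform random permutation , which for large numbers of objects should be close to the entry of table [ tmfao ] for parameter value 1 , i.e. the golomb - dickman constant , approximately 0.6243 ; part ( b ) estimates the mean frequency of the most frequent allele for several parameter values by taking the largest order statistic of a symmetric dirichlet with a large but finite number of classes , the finite - dimensional approximation to the kingman distribution discussed earlier :

```python
# (a) mean normalized longest cycle of a random permutation (the theta = 1 case);
# (b) mean largest order statistic of a symmetric dirichlet(theta/k, ..., theta/k)
#     with k large, a finite-k stand-in for the kingman distribution.
# all simulation sizes below are arbitrary illustrative choices.
import numpy as np

rng = np.random.default_rng(2)

def longest_cycle_fraction(n):
    perm, seen, longest = rng.permutation(n), np.zeros(n, dtype=bool), 0
    for start in range(n):
        if not seen[start]:
            length, j = 0, start
            while not seen[j]:
                seen[j] = True
                j, length = perm[j], length + 1
            longest = max(longest, length)
    return longest / n

n, reps = 2000, 300
est = np.mean([longest_cycle_fraction(n) for _ in range(reps)])
print("mean longest-cycle fraction:", round(est, 3))   # should be about 0.624

k, reps = 2000, 1000
for theta in (0.5, 1.0, 2.0, 5.0):
    freqs = rng.dirichlet(np.full(k, theta / k), size=reps)
    print("theta =", theta, " mean largest allele frequency ~",
          round(freqs.max(axis=1).mean(), 3))
```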
)next , ( [ eqn : k5 ] ) shows that the limiting probability that the ( normalized ) length of the longest cycle exceeds is this is the limiting value of the exact probability for a random permutation of the numbers , which from ( [ eqn : ke ] ) is .finch also considers aspects of a random mapping of to .any such a mapping forms a random number of ` components ' , each component consisting of a cycle with a number ( possibly zero ) of branches attached to it .aldous ( 1985 ) provides a full description of these , with diagrams which help in understanding them .finch takes up the question of finding properties of the normalized size of the largest component of such a random mapping , giving ( page 289 ) a limiting mean of for this .this agrees with the value in table [ tmfao ] for the case .this is no coincidence : aldous ( 1985 ) shows that in a limiting sense ( [ eqn : ke ] ) provides the limiting distribution of the number and ( unnormalized ) sizes of the components of this mapping , with now indicating the number of components of size . as a further result , ( [ eqn : k5 ] ) shows that the limiting probability that the ( normalized ) size of the largest component of a random mapping exceeds is .arratia _ et al . _( 2003 ) show that ( [ eqn : ke ] ) provides , for various values of , the partition structure of a variety of other combinatorial objects for finite , and presumably the kingman distribution describes appropriate limiting results .thus the genetics - based equation ( [ eqn : ke ] ) and the kingman distribution provide a unifying theme for these objects .the allelic partition formula ( [ eqn : ke ] ) was originally derived without reference to the -allele model ( [ eqn : k1 ] ) , but was also found ( watterson , 1976 ) from that model as follows .we start with a population whose allele frequencies are given by the dirichlet distribution ( [ eqn : k1 ] ) .if a random sample of genes is taken from such a population , then given the population s allele frequencies , the sample allele frequencies have a multinomial distribution . 
averaging this distribution over the population distribution ( [ eqn : k1 ] ) , and then introducing the alternative order - statistic sample description as above , the limiting distribution is the partition formula ( [ eqn : ke ] ) , found by letting and in ( [ eqn : k1 ] ) in such a way that the product remains fixed at a constant value .as stated above , the expression ( [ eqn : ke ] ) was first found by assuming a random sampling evolutionary model .as also noted , it can also be arrived at by assuming that a random sample of genes has been taken from an infinite population whose allele frequencies have the dirichlet distribution ( [ eqn : k1 ] ) .it applies , however , to further models .moran ( 1958 ) introduced a ` birth - and - death ' model in which , at each unit time point , a gene is chosen at random from the population to die .another gene is chosen at random to reproduce .the new gene either inherits the allelic type of its parent ( probability ) , or is of a new allelic type , not so far seen in the population , with probability .trajstman ( 1974 ) showed that ( [ eqn : ke ] ) applies as the stationary allelic partition distribution exactly for moran s model , but with replaced by the finite population number of genes and with defined as .more than this , if a random sample of size is taken without replacement from the moran model population , it too has an exact description as in ( [ eqn : ke ] ) .this result is a consequence of kingman s ( 1978b ) study of the consistency of the allelic properties of sub - samples of samples .( in practice , of course , the difference between sampling with , or without , replacement is of little consequence for small samples from large populations . )kingman ( 1977a , 1977b ) followed up this result by showing that random sampling from various other population models , including significant cases of the cannings ( 1974 ) model , could also be approximated by ( [ eqn : ke ] ) .this was important because several consequences of ( [ eqn : ke ] ) could then be applied more generally than was first thought , especially for the purposes of testing of the neutral alleles postulate .he also used the concept of ` non - interference ' ( see the concluding comments in section [ partition ] ) as a further reason for the robustness of ( [ eqn : ke ] ) .it was noted in section [ pit ] that watterson ( 1976 ) was able to arrive at both the kingman distribution and the allelic partition formula ( [ eqn : ke ] ) from the same starting point ( the ` -allele ' model ) .this makes it clear that there must be a close connection between the two , and in this section we outline kingman s work ( kingman 1977b ) which made this explicit .kingman imagined a sequence of populations in which the size of population , ( , 2 , ) tends to infinity as . for any fixed and any fixed sample size of genes taken from the population , there will be some probability of the partition , where has the definition given in section [ back ] .kingman then stated that this sequence of populations would have the _ewens sampling property _ if , for each fixed , this corresponding sequence of probabilities of approached that given in ( [ eqn : ke ] ) as . 
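the passage from dirichlet population frequencies to the sampling formula ( [ eqn : ke ] ) can be mimicked numerically . in the sketch below ( ours ; the parameter value , the sample size , the number of classes and the number of replicates are arbitrary choices ) , population frequencies are drawn from a symmetric dirichlet with many classes , a small sample of genes is drawn multinomially from them , and the empirical probabilities of the resulting partitions are compared with ( [ eqn : ke ] ) ; agreement is only approximate since the number of classes is finite :

```python
# empirical check that multinomial sampling from symmetric dirichlet(theta/k)
# frequencies, with k large, approximately reproduces the partition formula
#   p(a) = n! / (theta (theta+1) ... (theta+n-1)) * prod_j (theta/j)^{a_j} / a_j!
# k, n, theta and the number of replicates are arbitrary illustrative choices.
import math
from collections import Counter
import numpy as np

def esf(a, theta):
    """probability of the partition a = {class size j: number of classes a_j}."""
    n = sum(j * aj for j, aj in a.items())
    p = math.factorial(n) / math.prod(theta + i for i in range(n))
    for j, aj in a.items():
        p *= (theta / j) ** aj / math.factorial(aj)
    return p

rng = np.random.default_rng(3)
theta, k, n, reps = 1.0, 1000, 5, 50_000
counts = Counter()
for _ in range(reps):
    freqs = rng.dirichlet(np.full(k, theta / k))   # 'population' frequencies
    sample = rng.multinomial(n, freqs)             # n genes drawn from them
    sizes = [int(c) for c in sample if c > 0]      # allele counts in the sample
    counts[tuple(sorted(Counter(sizes).items()))] += 1

for key, c in sorted(counts.items()):
    a = dict(key)
    print(a, " empirical", round(c / reps, 4),
          " formula", round(esf(a, theta), 4))
```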
in a parallel fashion , for each fixed there will also be a probability distribution for the order statistics , where denotes the frequency of the most frequent allele in the population .kingman then stated that this sequence would have the _ poisson dirichlet limit _ if this sequence of probabilities approached that given by the poisson dirichlet distribution .( we would replace ` poisson dirichlet ' in this sentence by ` kingman ' . )he then showed that this sequence of populations has the _ ewens sampling property _ if and only if it has the poisson dirichlet ( kingman distribution ) limit .the proof is quite technical and we do not discuss it here . we have noted that the kingman distribution may be thought of as the distribution of the ( ordered ) allelic frequencies in an infinitely large population evolving as the random sampling infinitely - many - allele process , so this result provides a beautiful ( and useful ) relation between population and sample properties of such a population . by 1977kingman was in full flight in his investigation of various genetics problems .one line of his work started with the probability distribution ( [ eqn : ke ] ) , and his initially innocent - seeming observation that the size of the sample of genes bears further consideration .the size of a sample is generally taken in statistics as being comparatively uninteresting , but kingman ( 1978b ) noted that a sample of genes could be regarded as having arisen from a sample of genes , one of which was accidently lost , and that this observation induces a consistency property on the probability of any partition of the number .specifically , he observed that if we write for the probability of the sample partition in a sample of size , we require fortunately , the distribution ( [ eqn : ke ] ) does satisfy this equation .but kingman went on to ask a deeper question : `` what are the most general distributions that satisfy equation ( [ pstructure ] ) ? '' these distributions he called ` partition structures ' .he showed that all such distributions that are of interest in genetics could be represented in the form where is some probability measure over the space of infinite sequences satisfying , . an intuitive understanding of this equation is the following .one way to obtain a consistent set of distributions satisfying ( [ pstructure ] ) is to imagine a hypothetically infinite population of types , with a proportion of the most frequent type , a proportion of the second most frequent type , and so on , forming a vector * x*. 
for a fixed value of , one could then imagine taking a sample of size from this population , and write for the ( effectively multinomial ) probability that the configuration of the sample is .it is clear that the resulting sampling probabilities will automatically satisfy the consistency property in ( [ pstructure ] ) .more generally one could imagine the composition of the infinite population itself being random , so that first one chooses its composition * x * from , and then conditional on * x * one takes a sample of size with probability .the right - hand side in ( [ repmeasure ] ) is then the probability of obtaining the sample configuration averaged over the composition of the population .kingman s remarkable result was that all partition structures arising in genetics must have the form ( [ repmeasure ] ) , for some .kingman called partition structures that could be expressed as in ( [ repmeasure ] ) ` representable partition structures ' and the ` representing measure ' , and later ( kingman 1978c ) found a representation generalizing ( [ repmeasure ] ) applying for any partition structure . the similarity between ( [ repmeasure ] ) and the celebrated de finetti representation theorem for exchangeable sequences might be noted .this has been explored by aldous ( 1985 ) and kingman ( 1978a ) , but we do not pursue the details of this here . in the genetics context , the results of section [ robust ] show that samples from moran s infinitely many neutral alleles model , as well as the population as a whole , have the partition structure property .so do samples of genes from other genetical models .this makes it natural to ask : `` what is the representing measure for the allelic partition distribution ( [ eqn : ke ] ) ? '' and here we come full circle , since he showed that the required representing measure is the kingman distribution , found by him in ( kingman , 1975 ) in quite a different context !the relation between the kingman distribution and the sampling distribution ( [ eqn : ke ] ) is of course connected to the convergence results discussed in the previous section . from the point of view of the geneticist , the kingman distribution is then regarded as applying for an infinitely large population , evolving essentially via the random sampling process that led to ( [ eqn : ke ] ) .this was made precise by kingman in ( 1978b ) , and it makes it unfortunate that the kingman distribution does not have a ` nice ' mathematical form .however , we see in section [ gem ] that a very pretty analogue of the kingman distribution exists when we label alleles not by their frequencies but by their ages in the population .this in turn leads to the capstone of kingman s work in genetics , namely the coalescent process . 
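A small Monte Carlo illustration of the representation ( [ repmeasure ] ): draw a composition x of the hypothetical infinite population from a mixing measure, sample n genes from it, and tabulate the resulting partition. The symmetric Dirichlet used here as the mixing measure, and the values of K, n and the Dirichlet weight, are illustrative only; letting K grow with the total weight held fixed echoes the limiting procedure described earlier for ( [ eqn : k1 ] ).

```python
import numpy as np
from collections import Counter

rng = np.random.default_rng(0)

def sample_partition(x, n):
    """Allelic partition ({class size: count}) of n genes drawn i.i.d. from frequencies x."""
    genes = rng.choice(len(x), size=n, p=x)
    class_sizes = Counter(genes).values()
    return tuple(sorted(Counter(class_sizes).items()))

# the mixing measure is illustrative only: a symmetric Dirichlet over K types
K, n, reps = 50, 5, 20000
tally = Counter()
for _ in range(reps):
    x = rng.dirichlet(np.full(K, 2.0 / K))   # one random composition of the population
    tally[sample_partition(x, n)] += 1

for part, c in tally.most_common():
    print(part, c / reps)
```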
before discussing these matters we mention another property enjoyed by the distribution ( [ eqn : ke ] ) that kingman investigated , namely that of non - interference .suppose that we take a gene at random from the sample of genes , and find that there are in all genes of the allelic type of this gene in the sample .these genes are now removed , leaving genes .the non - interference requirement is that the probability structure of these genes should be the same as that of an original sample of genes , simply replacing wherever found by .kingman showed that of all partition structures of interest in genetics , the only one also satisfying this non - interference requirement is ( [ eqn : ke ] ) .this explains in part the robustness properties of ( [ eqn : ke ] ) to various evolutionary genetic models .however , it also has a natural interpretation in terms of the coalescent process , to be discussed in section [ coal ] .we remark in conclusion that the partition structure concept has become influential not only in the genetics context , but in bayesian statistics , mathematics and various areas of science , as the papers of aldous ( 2009 ) and of gnedin , haulk and pitman ( 2009 ) in this festschrift show . that this should be so is easily understood when one considers the natural logic of the ideas leading to it .we have noted above that the kingman distribution is not user - friendly .this makes it all the more interesting that a _ size - biased _ distribution closely related to it , namely the gem distribution , named for griffiths ( 1980 ) , engen ( 1975 ) and mccloskey ( 1965 ) , who established its salient properties , is both simple and elegant , thus justifying the acronym ` gem ' .more important , it has a central interpretation with respect to the ages of the alleles in a population .we now describe this distribution .we have shown that the ordered allelic frequencies in the population follow the kingman distribution .suppose that a gene is taken at random from the population .the probability that this gene will be of an allelic type whose frequency in the population is is just .this allelic type was thus sampled by this choice in a size - biased way .it can be shown from properties of the kingman distribution that the probability density of the frequency of the allele determined by this randomly chosen gene is this result was also established by ewens ( 1972 ) .suppose now that all genes of the allelic type just chosen are removed from the population .a second gene is now drawn at random from the population and its allelic type observed .the frequency of the allelic type of this gene among the genes remaining at this stage is also given by ( [ eqsbsbyp ] ) .all genes of this second allelic type are now also removed from the population . a third gene then drawn at random from the genes remaining, its allelic type observed , and all genes of this ( third ) allelic type removed from the population .this process is continued indefinitely . at any stage, the distribution of the frequency of the allelic type of any gene just drawn among the genes left when the draw takes place is given by ( [ eqsbsbyp ] ) .this leads to the following representation .denote by the population frequency of the allelic type drawn. 
then we can write where the are independent random variables , each having the distribution ( [ eqsbsbyp ] ) .the random vector then has the gem distribution .all the alleles in the population at any time eventually leave the population , through the joint processes of mutation and random drift , and any allele with current population frequency survives the longest with probability .that is , since the gem distribution was found according to a size - biased process , it also arises when alleles are labelled according to the length of their future persistence in the population .time reversibility arguments then show that the gem distribution also applies when the alleles in the population are labelled by their age .in other words , the vector can be thought of as the vector of allelic frequencies when alleles are ordered with respect to their ages in the population ( with allele 1 being the oldest ) .the kingman coalescent , to be discussed in the following section , is concerned among other things with ` age ' properties of the alleles in the population .we thus present some of these properties here as an introduction to the coalescent : a more complete list can be found in ewens ( 2004 ) .the elegance of many age - ordered formulae derives directly from the simplicity and tractability of the gem distribution .given the focus on retrospective questions , it is natural to ask questions about the oldest allele in the population .the gem distribution shows that the mean population frequency of the oldest allele in the population is this implies that when is very small , this mean frequency is approximately .it is interesting to compare this with the mean frequency of the most frequent allele when is small , found in effect from the kingman distribution to be approximately . a more general set of comparisons of these two mean frequencies , for representative values of ,is given in table [ tmfao ] . more generally , the mean population frequency of the oldest allele in the population is for the case , finch ( 2003 ) gives the mean frequencies of the second and third most frequent alleles as and respectively , which may be compared to the mean frequencies of the second and third oldest alleles , namely and .for the mean frequency of the second most frequent allele is , while the mean frequency of the second oldest allele is .next , the probability that a gene drawn at random from the population is of the type of the oldest allele is the mean frequency of the oldest allele , namely as just shown ( see also table [ tmfao ] ) . more generally the probability that genes drawn at random from the population are all of the type of the oldest allele in the population is the gem distribution has a number of interesting mathematical properties , of which we mention here only one .it is a so - called ` residual allocation ' model ( halmos 1944 ) .halmos envisaged a king with one kilogram of gold dust , and an infinitely long line of beggars asking for gold . to the first beggar the king gives kilogram of gold , to the second kilogram of gold , and so on , as specified in ( [ eqformforgem ] ) , where the are independently and identically distributed ( i.i.d . 
) random variables , each having some probability distribution over the interval .different forms of this distribution lead to different properties of the distribution of the ` residual allocations ' , , , .one such property is that the distribution of , , , be invariant under size - biased sampling .it can be shown that the gem distribution is the only residual allocation model having this property .this fact had been exploited by hoppe ( 1986 , 1987 ) to derive various results of interest in genetics and ecology .we now turn to sampling results .the probability that genes drawn at random from the population are all of the same allelic type as the oldest allele in the population is given in ( [ eqn.freqoldest ] ) .the probability that genes drawn at random from the population are all of the same unspecified allelic type is in agreement with ( [ eqn : ke ] ) for the case , , 2 , , , .from this result and that in ( [ eqn.freqoldest ] ) , given that genes drawn at random are all of the same allelic type , the probability that they are all of the allelic type of the oldest allele is .the similarity of this expression with that deriving from a bayesian calculation is of some interest .perhaps the most important sample distribution concerns the frequencies of the alleles in the sample when ordered by age .this distribution was found by donnelly and tavar ( 1986 ) , who showed that the probability that the number of alleles in the sample takes the value and that the age - ordered numbers of these alleles in the sample are , in age order , , , , , is where is defined below ( [ eqn : ke ] ) .this formula can be found in several ways , one being as the size - biased version of ( [ eqn : ke ] ) .these are many interesting results connecting the oldest allele in the sample to the oldest allele in the population .for example , kelly ( 1976 ) showed that the probability that the oldest allele in the sample is represented times in the sample is he also showed that the probability that the oldest allele in the population is observed at all in the sample is the probability that a gene seen times in the sample is of the oldest allelic type in the population is when , so that there is only one allelic type present in the sample , this probability is .donnelly ( 1986 ) showed , more generally , that the probability that the oldest allele in the population is observed times in the sample is this is of course closely connected to kelly s result . for the case the probability ( [ unconprobo ] ) is , confirming the complementary probability found above .conditional on the event that the oldest allele in the population does appear in the sample , a straightforward calculation using ( [ unconprobo ] ) shows that this conditional probability and that in ( [ petergorold ] ) are identical . it will be expected that various exact results hold for the moran model , with defined as .the first of these is an exact representation of the gem distribution , analogous to ( [ eqformforgem ] ) .this has been provided by hoppe ( 1987 ) .denote by , , the numbers of genes of the oldest , second - oldest , alleles in the population .then , , can be defined in turn by where has a binomial distribution with index and parameter , where , , are i.i.d .continuous random variables each having the density function ( [ eqsbsbyp ] ) . 
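The size-biased density ( [ eqsbsbyp ] ) is not visible in this dump; assuming it is the Beta(1, theta) density theta * (1 - x) ** (theta - 1), its standard form, the GEM vector of age-ordered frequencies can be generated by the residual-allocation (stick-breaking) recipe described above. The sketch checks that the mean frequency of the oldest allele comes out as 1/(1 + theta), which is approximately 1 - theta for small theta, consistent with the remark made earlier.

```python
import numpy as np

rng = np.random.default_rng(1)

def gem(theta, tol=1e-10):
    """Stick-breaking sample of the age-ordered (GEM) allele frequencies.

    Assumes the size-biased density (eqsbsbyp) is Beta(1, theta):
    w1 = z1, w2 = (1 - z1) * z2, ... with the z_i independent Beta(1, theta)
    variables, truncated once the leftover mass is negligible.
    """
    w, leftover = [], 1.0
    while leftover > tol:
        z = rng.beta(1.0, theta)
        w.append(leftover * z)
        leftover *= 1.0 - z
    return np.array(w)

theta = 2.0
oldest = np.array([gem(theta)[0] for _ in range(20000)])
print(oldest.mean(), 1.0 / (1.0 + theta))   # mean frequency of the oldest allele
```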
eventually and the process stops , the final index being identical to the number of alleles in the population .if there is only one allele in the population , so that the population is strictly monomorphic , this allele must be the oldest one in the population .the above representation shows that the probability that the oldest allele arises times in the population is and this reduces to the exact monomorphism probability for the moran model .more generally , kelly ( 1977 ) has shown that the probability that the oldest allele in the population is represented by genes is , exactly , the case considered above is a particular example of ( [ kellygorold ] ) , and the mean number also follows from ( [ kellygorold ] ) .we now consider ` age ' questions .it is found that the mean time , into the past , that the oldest allele in the population entered the population ( by a mutation event ) is it can be shown ( see watterson and guess ( 1977 ) and kelly ( 1977 ) ) that not only the mean age of the oldest allele , but indeed the entire probability distribution of its age , is independent of its current frequency and indeed of the frequency of all alleles in the population .if an allele is observed in the population with frequency its mean age is this is a generalization of the expression in ( [ eq8.2asold ] ) , since if only one allele exists in the population , and it must then be the oldest allele .our final calculation concerns the mean age of the oldest allele in a sample of genes .this is except for small values of , this is close to the mean age of the oldest allele in the population , given in ( [ eq8.2asold ] ) .in other words , unless is small , it is likely that the oldest allele in the population is represented in the sample .we have listed the various results given in this section not only because of their intrinsic interest , but because they form a natural lead - in to kingman s celebrated coalescent process , to which we now turn .the concept of the coalescent is now discussed at length in many textbooks , and entire books ( for example hein , schierup and wiuf ( 2005 ) and wakeley ( 2009 ) ) and book chapters ( for example marjoram and joyce ( 2009 ) and nordborg ( 2001 ) ) have been written about it . herewe can do no more than outline the salient aspects of the process .the aim of the coalescent is to describe the common ancestry of the sample of genes at various times in the past through the concept of an equivalence class . to do this we introduce the notation , indicating a time in the past ( so that if , time is further in the past than time ) .the sample of genes is assumed taken at time .two genes in the sample of are in the same equivalence class at time if they have a common ancestor at this time .equivalence classes are denoted by parentheses : thus if and at time genes 1 and 2 have one common ancestor , genes 4 and 5 a second , and genes 6 and 7 a third , and none of the three common ancestors are identical and none is identical to the ancestor of gene 3 or of gene 8 at time , the equivalence classes at time are we call any such set of equivalence classes an equivalence relation , and denote any such equivalence relation by a greek letter . 
as two particular cases , at time equivalence relation is , and at the time of the most recent common ancestor of all eight genes , the equivalence relation is .the kingman coalescent process is a description of the details of the ancestry of the genes moving from to .for example , given the equivalence relation in ( [ equivclases ] ) , one possibility for the equivalence relation following a coalescence is .such an amalgamation is called a coalescence , and the process of successive such amalgamations is called the coalescence process . coalescences are assumed to take place according to a poisson process , but with a rate depending on the number of equivalence classes present .suppose that there are equivalence classes at time .it is assumed that no coalescence takes places between time and time with probability .( here and throughout we ignore terms of order . )the probability that the process moves from one nominated equivalence class ( at time ) to some nominated equivalence class which can be derived from it is . in other words ,a coalescence takes place in this time interval with probability , and all of the amalgamations possible at time are equally likely to occur . in order for this process to describe the ` random sampling ' evolutionary model described above , it is necessary to scale time so that unit time corresponds to generations . with this scaling, the time between the formation of an equivalence relation with equivalence classes to one with equivalence classes has an exponential distribution with mean .the ( random ) time until all genes in the sample first had just one common ancestor has mean ( the suffix ` mrcas ' stands for most recent common ancestor of the sample . )this is , of course close to 2 coalescent time units , or 4n generations , when is large .tavar ( 2004 ) has found the ( complicated ) distribution of .kingman ( 1982a , b , c ) showed that for large populations , many population models ( including the ` random sampling ' model ) are well approximated in their sampling attributes by the coalescent process .the larger the population the more accurate is this approximation .we now introduce mutation into the coalescent .suppose that the probability that any particular ancestral gene mutates in the time interval is .all mutants are assumed to be of new allelic types ( the infinitely many alleles assumption ) .if at time in the coalescent there are equivalence classes , the probability that either a mutation or a coalescent event had occurred in is we call such an occurrence a defining event , and given that a defining event did occur , the probability that it was a mutation is and that it is a coalescence is .the probability that different allelic types are seen in the sample is then the probability that of these defining events were mutations .the above reasoning shows that this probability must be proportional to , where is defined below ( [ eqn : ke ] ) , the constant of proportionality being independent of .this argument leads to ( [ eqn : kdist ] ) . 
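The rates quoted in this paragraph are easy to simulate. The sketch below assumes the standard coalescent rates, since the displayed expressions are missing: with j ancestral lines the waiting time to the next coalescence is exponential with mean 2/(j(j-1)), and a defining event is a mutation with probability theta/(theta + j - 1). It checks the mean time to the MRCA against 2(1 - 1/n), the mean number of allelic types against the sum of theta/(theta + i), and the probability of a monomorphic sample against (n-1)!/((theta+1)...(theta+n-1)).

```python
import numpy as np

rng = np.random.default_rng(2)

def coalescent_realisation(n, theta):
    """One realisation of the n-coalescent with infinitely-many-alleles mutation.

    Standard rates are assumed: with j ancestral lines the waiting time is
    exponential with mean 2/(j*(j-1)), and a defining event is a mutation with
    probability theta/(theta + j - 1).  Returns (time back to the MRCA,
    number of allelic types in the sample).
    """
    t_mrca = sum(rng.exponential(2.0 / (j * (j - 1))) for j in range(n, 1, -1))
    # defining events: each one removes a line, either because it mutates
    # (founding an allelic type) or because it coalesces with another line
    k_types = sum(rng.random() < theta / (theta + j - 1) for j in range(n, 0, -1))
    return t_mrca, k_types

n, theta, reps = 20, 1.0, 50000
sims = np.array([coalescent_realisation(n, theta) for _ in range(reps)])
print(sims[:, 0].mean(), 2.0 * (1.0 - 1.0 / n))                       # mean time to the MRCA
print(sims[:, 1].mean(), sum(theta / (theta + i) for i in range(n)))  # mean number of alleles
print((sims[:, 1] == 1).mean(),                                       # monomorphic sample
      np.prod([i / (theta + i) for i in range(1, n)]))
```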
using these results and combinatorial arguments counting all possible coalescent paths from a partition back to the original common ancestor , kingman ( 1982a ) was able to derive the more detailed sample partition probability distribution ( [ eqn : ke ] ) , and deriving this distribution from coalescent arguments is perhaps the most pleasing way of arriving at it . for further comments along these lines , see kingman ( 2000 ) . the description of the coalescent given above follows the original derivation given by kingman ( 1982a ) . the coalescent is perhaps more naturally understood as a random binary tree . these have now been investigated in great detail : see for example aldous and pitman ( 1999 ) . many genetic results can be obtained quite simply by using the coalescent ideas . for example , watterson and donnelly ( 1992 ) used kingman s coalescent to discuss the question `` do eve s alleles live on ? '' to answer this question we assume the infinitely - many - neutral - alleles model for the population and consider a random sample of genes taken at time ` now ' . looking back in time , the ancestral lines of those genes coalesce to the mrcas , which may be called the sample s ` eve ' . of course if eve s allelic type survives into the sample it would be the oldest , but it may not have survived because of intervening mutation . if we denote by the number of representative genes of the oldest allele , and by the number of genes having eve s allele , then kelly s result ( [ petergorold ] ) gives the distribution of . we denote that distribution here by , , 1 , 2 , , , and the distribution of by , , 1 , 2 , , . unlike the simple explicit expression for , the corresponding expression for is very complicated : see ( 2.14 ) and ( 2.15 ) in watterson and donnelly ( 1992 ) , derived using some of kingman s ( 1982a ) results . using the relative probabilities of a mutation or a coalescence at a defining event gives rise to a recurrence equation for , , 1 , 2 , , of the form

[\,\cdots\,]\, q_{n}(j) \;=\; n(j-1)\, q_{n-1}(j-1) \;+\; n(n-j-1)\, q_{n-1}(j) \;+\; (j+1)\,\theta\, q_{n}(j+1) \qquad ( [ griffiths ] )

for , 1 , 2 , , , ( provided that we interpret as zero outside this range ) , and for , 3 , . the boundary conditions for , for , and apply , the latter because if then all sample genes descend from a gene having the oldest allele , and ` she ' must be eve . the recurrence ( [ griffiths ] ) is a special case of one found by griffiths ( 1989 ) in his equation ( 3.7 ) . the expected number of genes of eve s allelic type was given by griffiths ( 1986 ) , ( see also beder ( 1988 ) ) , as watterson and donnelly ( 1992 ) gave some numerical examples , some asymptotic results , and some bounds for the distribution , , 1 , 2 , , . one result of interest is that , the probability of eve s allele being extinct in the sample , increases with , to say . one reason for this is that a larger sample may well have its ` eve ' further back in the past than a smaller sample . we might interpret as being the probability that an infinitely large population has lost its ` eve s ' allele . note that the bounds for , indicate that for all in this range , is neither 0 nor 1 . thus , in contrast to the situation in branching processes , there are no sub - critical or super - critical phenomena here . there are many other topics that we could mention in addition to those described above . on the mathematical side , the kingman distribution has a close connection to prime factorization of large integers .
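Returning to the `` do eve s alleles live on ? '' question above, the distribution of the number of sampled genes carrying Eve's allelic type can also be estimated directly from the coalescent, without the recurrence ( [ griffiths ] ). The construction below is only a sketch (it is not the derivation used by watterson and donnelly ( 1992 )): it tracks, for each ancestral line, how many sampled genes below it are still mutation-free, and it checks the boundary case j = n against the monomorphism probability mentioned in the text.

```python
import numpy as np
from math import prod

rng = np.random.default_rng(3)

def genes_of_eves_type(n, theta):
    """Number of sampled genes carrying the allelic type of the sample MRCA ('Eve').

    Backwards-in-time sketch: every ancestral line carries the count of sampled
    genes below it whose path is so far mutation-free.  A defining event is a
    mutation with probability theta/(theta + j - 1) (j = current number of lines)
    and zeroes the chosen line's count; otherwise two lines coalesce and their
    counts add.  The count left on the single remaining line is the answer.
    """
    clean = [1] * n
    while len(clean) > 1:
        j = len(clean)
        if rng.random() < theta / (theta + j - 1):
            clean[rng.integers(j)] = 0
        else:
            a, b = rng.choice(j, size=2, replace=False)
            merged = clean[a] + clean[b]
            clean = [c for k, c in enumerate(clean) if k not in (a, b)]
            clean.append(merged)
    return clean[0]

n, theta, reps = 10, 0.8, 50000
counts = np.array([genes_of_eves_type(n, theta) for _ in range(reps)])
print((counts == 0).mean())   # estimate of q_n(0): Eve's allele already lost from the sample
print((counts == n).mean(),   # boundary case j = n discussed above ...
      prod(range(1, n)) / prod(theta + i for i in range(1, n)))   # ... the monomorphism probability
```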
on the genetical side , we have not mentioned the ` infinitely many sites ' model , now frequently used by geneticists , in which the dna structure of the gene plays a central role . it is a tribute to kingman that his work opened up more topics than can be discussed here . our main acknowledgement is to john kingman himself . the power and beauty of his work was , and still is , an inspiration to us both . his generosity , often ascribing to us ideas of his own , was unbounded . for both of us , working with him was an experience never to be forgotten . more generally the field of population genetics owes him an immense and , fortunately , well - recognized debt . we also thank an anonymous referee for suggestions which substantially improved this paper .

aldous , d. j. 2009 . more uses of exchangeability : representations of complex random structures . in : bingham , n. h. , and goldie , c. m. ( eds ) , _ probability and mathematical genetics : papers in honour of sir john kingman _ . london math . soc . lecture note ser . cambridge : cambridge univ . press .

arratia , r. , barbour , a. d. , and tavaré , s. 2003 . _ logarithmic combinatorial structures : a probabilistic approach _ . european mathematical society monographs in mathematics . zurich : ems publishing house .

gnedin , a. v. , haulk , c. , and pitman , j. 2009 . characterizations of exchangeable random partitions by deletion properties . in : bingham , n. h. , and goldie , c. m. ( eds ) , _ probability and mathematical genetics : papers in honour of sir john kingman _ . london math . soc . lecture note ser . cambridge : cambridge univ . press .

griffiths , r. c. 1986 . family trees and dna sequences . pages 225-227 of : francis , i. s. , manly , b. f. j. , and lam , f. c. ( eds ) , _ proceedings of the pacific statistical congress _ . amsterdam : elsevier science publishers .

kingman , j. f. c. 1982b . exchangeability and the evolution of large populations . pages 97-112 of : koch , g. , and spizzichino , f. ( eds ) , _ exchangeability in probability and statistics _ . amsterdam : north - holland .

marjoram , p. , and joyce , p. 2009 . practical implications of coalescent theory . in : lenwood , s. , and ramakrishnan , n. ( eds ) , _ problem solving handbook in computational biology and bioinformatics _ . new york : springer - verlag .

tavaré , s. 2004 . ancestral inference in population genetics . pages 1-188 of : cantoni , o. , tavaré , s. , and zeitouni , o. ( eds ) , _ école d'été de probabilités de saint - flour xxxi-2001 _ . berlin : springer - verlag .
|
mathematical population genetics is only one of kingman s many research interests . nevertheless , his contribution to this field has been crucial , and moved it in several important new directions . here we outline some aspects of his work which have had a major influence on population genetics theory .
|
cooperation is ubiquitous in every form of life , ranging from genes to societies such as the human one .the emergence of cooperation is an evolutionary transition that increases the complexity of life by forming new biological organisms from other ones .it is interesting to note that some biological individuals such as the human beings and the ants have a dual nature , they are part of a more complex organism at the same time that an organism more complex than its parts .this self - similar feature shows that cooperation is a fractal property of life .besides , it is important to note that the cooperation among organisms is temporary since every form of life dies . in this sense ,the cooperative organisms must to develop reproductive capacity in order to evolve beyond its life .therefore , understanding the mechanisms that allow the formation , persistence and reproduction of cooperative systems is essential .the evolution of cooperation has been widely considered in literature . in this long and successful tradition ,lot of attention has been paid to better understand the evolution of our society .although several proposed models allow explaining the cooperation of many actual situations , we still can not solve the problem in a general way .therefore , we have to keep looking for answers to develop a theoretical model of our society .evolutionary game theory has proven to be an appropriate theoretical framework to formally address the cooperation problem . in this theory ,the most important prototypical social interactions are represented by a game where each individual adopts a strategy to play against its opponent . in order to consider darwinian theory, the system evolves favoring the replication of the successful strategies .in particular , the prisoner s dilemma has been the most widely studied game as metaphor of the cooperation problem . in this, each player adopts one of the two possible strategies , cooperation or defection .when in an interaction both individuals cooperate each receive a payoff and other one under mutual defection . if one cooperates and the other defects , the former receives and the second .the game is completely defined when . under these conditions and in a one - shot game , it is better to defect no matter the strategy adopted by the opponents . thus , in fully connected systems defection always has the highest reproduction rate .therefore , the system evolves decreasing the fraction of cooperators to extinction .however , individuals of actual cooperative systems have local information of the system instead global one . they are placed in a structured population interacting with just some other ones .taking this into account , several studies have been performed considering different properties of actual networks interactions . in this way, it has been shown that some features such as locality and degree heterogeneity could be of great importance for the evolution of cooperation . however , given the high benefit - cost ratio ( see next section ) required for cooperation to evolve respect to the one observed in nature , which is particularly evident when the system has a high average number of connections , the problem is not completely solved just considering the structure of actual systems . 
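For readers unfamiliar with the game, a minimal numerical illustration of why defection dominates the one-shot prisoner's dilemma is given below. The payoff values R, S, T, P are standard textbook choices satisfying the prisoner's dilemma ordering; they are not the values used later in this paper, which did not survive extraction.

```python
# illustrative payoffs satisfying the prisoner's dilemma ordering T > R > P > S;
# the actual values used later in the paper are different (they did not survive extraction)
R, S, T, P = 3, 0, 5, 1   # reward, sucker's payoff, temptation, punishment

def payoff(my, other):
    """Row player's payoff for one round; strategies are 'C' (cooperate) or 'D' (defect)."""
    table = {('C', 'C'): R, ('C', 'D'): S, ('D', 'C'): T, ('D', 'D'): P}
    return table[(my, other)]

for other in ('C', 'D'):
    # whatever the opponent does, defecting earns strictly more in a one-shot game
    print(other, payoff('C', other), payoff('D', other))
```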
in order to overcome this limitation , some features such as individuals who can distinguish their opponents , rewiring process introduced through coevolutionary dynamics and multiplex networks have been considered showing great importance for the evolution of cooperation .recently , it has proposed a new formulation of the cooperation problem taking into account two general features of actual cooperative systems , _ i.e. _ the growing process and the possibility of mutations . by considering individuals with imitation capacity and neglecting mutations, they have shown under very low conditions mechanisms to build highly cooperative systems of any size and topology .in particular , the minimal benefit - cost ratio required for cooperation to evolve approaches to the theoretical minimum when the average number of links of the system increases . remarkably , the growing process generates locality and degree heterogeneity that , as we previously stated , seem to be important features for the evolution of cooperation .however , from this process other features of actual systems emerge drastically improving the required conditions for cooperation to evolve .on the one hand , the system have a high level of cooperation in all stages instead half or less of the population normally considered as initial condition in already formed systems . on the other hand , defectors inhabit the less linked parts of the system which is the optimal arrangement for the cooperation . in this way , they have shown that the growing process is essential for the evolution of cooperation when the individuals have imitation capacity as the human being .nevertheless , the emerging cooperative systems require a very high benefit - cost ratio to overcome the emergence of mutant defectors in highly linked individuals of the system . in this way ,the new organism is alive just until highly connected mutants defectors appear into the system . in this sense , the longevity and the system size strongly dependent of the mutation rate of individuals .however , there are organisms such as the human society whose size and longevity can not be explained completely by the model .therefore , in order to show theoretically the very existence of this kind of systems , it is required to introduce new features to the model allowing to the system overcomes the apparition of mutants defectors . in this paper, we address this problem performing considerations over the strategy update rule by which individuals adopt their strategy . through the evolution of the system ,the individuals adopt a strategy by evaluating the information provided by the environment or by mutation . when the problem is considered over structured populations , each individual have information from its neighborhood , _i.e. _ itself and its neighbors .normally , players are considered without memory or prediction capacity .therefore , the information of each individual is restricted to payoffs and strategies from the last round of its neighborhood .generally , the strategy updates of each individual are performed by comparison of the payoff of the focal player with the one of a neighbor , who can be the most successful or a randomly selected one . in this way, the individuals with more than one link choose their strategy neglecting part of the information available .however , a strategy update where the whole neighborhood affects simultaneously the behavior of the focal player seems to be more realistic . 
taking this as motivation , a new strategy update rule has been recently proposed where the focal player evaluates its strategy by comparing the average payoff of each strategy in the neighborhood .studying the new rule over already formed regular lattices , they have shown a significant increment in the survivability of cooperators with respect to the one obtained by a pairwise comparison rule .although the average payoff of each strategy allows taking into account all the payoffs simultaneously , this strategy update rule is still neglecting part of the information available since the abundance of each strategy in the neighborhood is not taken into account .however , the last is very important in order to exploit properly the information available from individuals .fortunately , this has been recently taken into account through the learning activity of players by considering that it increases with the number of neighbors with different strategy .in particular , they have defined the wisdom of each strategy in a neighborhood as its abundance . in this way, they have shown that the neighborhood wisdom enhances the survivability of cooperation over already formed regular lattices .this occurs by a dynamically decelerated invasion process , by means of which interfaces separating different domains remain smooth and defectors therefore become unable to efficiently invade cooperators . here , we present a new strategy update rule , that we call _ democratic weighted update _, where the average payoff and the abundance of each strategy in the neighborhood of the focal players is taken into account simultaneously .it is important to state that we define the wisdom of individuals in a very different way than in .we consider the wisdom of each individual as a combination of its strategy and payoff instead just its strategy .in particular , we define the wisdom of each individual as proportional to its payoff . in this way , individuals with high payoff are wiser than those with low one .thus , the wisdom of the system is heterogeneously distributed among individuals instead of being homogeneous .we justify this consideration by the following .first , from the perspective of a focal player , the neighbors with high payoff seems to be better players than those with low ones and , therefore , they are more reliable . in this sense , the payoff of each individual can be interpreted as a measure of its reputation regardless of its strategy .second , from the statistical point of view , when the connectivity of the system is heterogeneous the payoff of an individual normally increases with its degree .thus , individuals with high payoff have , in average , more information of the system than individuals with low one and , therefore , their information is more reliable . otherwise , we consider that the behavior of individuals is socially influenced by the wisdom of every individuals of their neighborhood .when the total wisdom of the neighborhood is divided by the existence of both strategies , the focal player is influenced by these contradictory knowledge to opposite directions .hence , the effective social influence over the focal player is defined by the difference between the total wisdom of each strategy instead by the absolute one considered through the learning activity . through the work we broadly shown that the social influence allows to structured highly cooperative systems overcome the emergence of mutant defectors . 
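To make the two families of update rules discussed here concrete, the sketch below contrasts a classic pairwise Fermi imitation step with a comparison of the average payoff earned by each strategy in the neighbourhood. Both are simplified stand-ins for the rules cited in the text (the noise parameter k and the deterministic form of the averaging rule are assumptions), but the example shows how a single rich defector can win a pairwise comparison while losing the strategy-average comparison.

```python
import numpy as np

rng = np.random.default_rng(7)

def pairwise_fermi_switch(p_focal, p_neigh, k=0.1):
    """Classic pairwise rule: imitate one randomly chosen neighbour with the Fermi
    probability 1/(1+exp(-(p_neigh - p_focal)/k)); k is a noise parameter, a
    conventional choice in the literature rather than a value from this paper."""
    return rng.random() < 1.0 / (1.0 + np.exp(-(p_neigh - p_focal) / k))

def average_payoff_switch(p_same, p_other):
    """Rule discussed in this paragraph, as a deterministic sketch: compare the
    average payoff earned by each strategy in the neighbourhood."""
    return np.mean(p_other) > np.mean(p_same)

# a focal cooperator with payoff 2, one rich defector (payoff 5) and three poor ones (0)
print(pairwise_fermi_switch(2.0, 5.0))                     # very likely imitates the rich defector
print(average_payoff_switch([2.0], [5.0, 0.0, 0.0, 0.0]))  # average defector payoff 1.25 < 2
```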
in this way and considering , we conclude that in structured growing system whose individuals have imitation capacity and their behavior is affected by social influence cooperation evolves .therefore , here we present a theoretical solution of the cooperation problem among unrelated individuals with imitation capacity .each individual is represented by a node of a network , whose links determine the interacting individuals .we considered all nodes intrinsically equal and all connections undirected and with equal weight .each interaction between individuals is modeled by a round of the prisoner s dilemma game .we choose , , and that allow it to reduce the analisys to a single parameter defined by the benefit - cost ratio .after one round of the game for each link , the payoff of individual is defined as follows .if is a cooperator obtains a payoff , where is the degree of and the number of cooperative neighbors .when is a defector linked to cooperators it gets a payoff . for a sake of simplicity , we consider individuals without memory or prediction capacity and all strategies of the system are updated simultaneously ( synchronous update ) following the _ weighted democratic update _ explained in the next section . a complete update of the system is called a generation .besides , it is important to state that the conclusions reached through the work are not affected if the strategy update is performed in an asynchronous way .however , it would be of great interest perform a thorough analysis considering this general feature of actual systems .we assume that the system growth exponentially by the incorporation of new individuals . these are considered defectors in order to simulate the worst conditions that the cooperative system must to resist .unless otherwise stated , we do not take into account the genetic relatedness between players and , therefore , the origin of the new nodes is not important for conclusions . besides, for simplicity , we do not take into account the elimination of nodes but , however , this can be considered without modify conclusions . between two generations , we leave growth the system a time . considering that the system has performed an strategy updates in time ,the next generation is performed when the system reaches a size , where is the rate to which new individuals are introduced to the system . in this way , between two generationthe system grows a fraction .it is remarkable that any other growth more slowly than exponential do not modify the conclusion reached . in this situation, it is expected a lower value of than the one required for exponential growth and , as we will show , it is covered by the results .otherwise , we consider that individuals are allowed to change their strategy in two different ways , by imitation following the democratic weighted update or by mutation which do not require environmental motivation . in order to take the last into account, we define as the probability of an individual mutates between two generations . here, we explore the model under three different mechanisms of network growth to perform an exhaustive study of the model .first , we consider the barabsi - albert model ( bam ) . in this ,the system growth from fully connected nodes by the incorporation of new individuals with links , we consider .each new link is attached to an already existing node of the system by preferential attachment , _i.e. _ with a probability proportional to the degree of the existing nodes . 
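A runnable sketch of the growth and payoff bookkeeping just described. The preferential-attachment growth follows the verbal description of the bam; the number of links per newcomer m, the benefit b, the cost c, and the donation-game accounting (a cooperator pays c on every link and receives b from every cooperating neighbour) are assumptions filling in quantities that did not survive extraction.

```python
import numpy as np

rng = np.random.default_rng(5)

def grow_ba(n_final, m=2, n0=None):
    """Grow a network by preferential attachment, in the spirit of the bam description.

    Starts from a fully connected core of n0 = m + 1 nodes; each newcomer brings m
    links, each attached to an existing node with probability proportional to its
    degree, avoiding double links.  m is illustrative only.
    """
    n0 = n0 or m + 1
    adj = {i: set(j for j in range(n0) if j != i) for i in range(n0)}
    targets = [i for i in range(n0) for _ in range(n0 - 1)]   # nodes repeated by degree
    for new in range(n0, n_final):
        chosen = set()
        while len(chosen) < m:
            chosen.add(targets[rng.integers(len(targets))])
        adj[new] = set()
        for t in chosen:
            adj[new].add(t)
            adj[t].add(new)
            targets.extend([new, t])   # keep the list degree-weighted
    return adj

def payoffs(adj, strategy, b, c=1.0):
    """Donation-game accounting (an assumption consistent with the description):
    a cooperator pays c on every link, every cooperating neighbour donates b."""
    pay = {}
    for i, neigh in adj.items():
        nc = sum(strategy[j] for j in neigh)          # cooperative neighbours of i
        pay[i] = b * nc - (c * len(neigh) if strategy[i] else 0.0)
    return pay

adj = grow_ba(200, m=2)
strategy = {i: 1 for i in adj}          # 1 = cooperator, 0 = defector
strategy[199] = 0                       # the newest node enters as a defector
print(sorted(payoffs(adj, strategy, b=2.0).items())[:5])
```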
this model generates networks with a degree distribution governed by a power law with an exponent in the thermodynamic limit .second , we explore the model by considering the model a ( ma ) . starting from a fully linked networks of ,the system grows by the incorporation of new individuals . for each new node ,the system introduces new links .one of them connects the new node with an existing one chosen by preferential attachment .the remaining links are placed between a random chosen node and other one selected by preferential attachment .this procedure also generates scale - free networks where individuals with less than links are allowed in opposition with bam .third , we employ a random network model ( rnm ) . as the previous ones , the system growth from fully linked individuals by the incorporation of new ones. then , the system introduces new links for each new node in the following way .the new node is linked to a random picked node and the remaining links are placed between any two nodes randomly chosen .this mechanism produces networks with an exponential degree distributions where individuals with less than links are allowed .it is important to state that in the three network generation models , we avoid the formation of double connections .we choose these models for two reasons . on the one hand , comparing the results of bam and ma we can determine the importance of the existence of nodes with less than links , which has shown to be very important in the formation of cooperative systems without mutations . besides , we can study the importance of the degree heterogeneity by comparing the results obtained with ma and rnm .as we stated in the introduction , we consider that the behavior of any focal player is affected by all the members of its neighborhood through the information provided by them , _i.e. _ payoff and strategy of each individual in the neighborhood . here, individuals evaluate their strategy considering this information as follows .first , for a sake of simplicity , we divide the neighborhood in two sets namely and , where are the players with different strategy than and the remaining ones .then , we say that the focal player is motivated to change its strategy if two conditions are satisfied . on the one hand , the set must not be empty .on the other hand , the average payoff of the set must to be higher than the average payoff of the set , _i.e. _ .when these conditions are satisfied the focal player notes that there is a different strategy than its own that , in average , it is doing better .when and have a similar number of elements , the average payoffs difference could be a good measure of which strategy is better . nevertheless , in the situation where ( ) have much more elements that ( ) , the average payoff of ( ) is statistically more reliable than the other one .in this sense , the abundance of each strategy becomes important information that must be considered . here, we take this into account by considering that the social influence of each individual over its neighbors is defined by its payoff . in this way, individuals with high payoff have a stronger influence over the behavior of its neighbors than players with low one .thereby , the total social influence of each strategy over a focal player is defined by the accumulated payoff for the strategy and for the strategy .thus , the effective social influence over the focal player is defined by . 
in this way , when ( ) the social influence favors the strategy of the set ( ) and when the social pressure does not have effect over .thus , we consider that a motivated player changes its strategy with a probability that increases with the effective social influence . we perform this by introducing the fermi function ( ref ) as follow : where is the intensity of social influence over the focal player .when the abundance of each strategy becomes neutral and the individual changes its strategy with a probability . otherwise , when the focal player changes its strategy with ( ) when ( ) and , therefore , the influence of the neighborhood becomes determinant in the behavior of .it is interesting to note that if or the motivated focal player preserves or changes its strategy with equal probability since .this situation can be interpreted as the individual trusts in its last choice as much as in the best average strategy of its neighborhood in the last round .therefore , the focal player valuates more its own information than the provided by a neighbor . otherwise , when and , the focal player considers equally important the information provided for each member of its neighborhood .lastly , for intermediate values of and , the focal player considers its information more important than the provided by a neighbor at the same time that the social influence is considered . besides , since for all and , any motivated player is allowed to change its strategy even when .this is important because allow to individuals overcome the social influence taking a strategy in opposition to the wisdom or pressure of the group . here, the motivation of a focal player to change its strategy has been defined in a deterministic way .however , the strategy update can be generalized by considering a motivation probability of change strategy that increases with . in particular, this can be newly performed through the fermi function defined by , where is the intensity of natural selection . in this way , when in a neighborhood there are two strategies , the probability that the focal player changes its strategy is defined by , where the previous situation is recovered for .it is interesting to note that when , the _ democratic weighted update _ is reduced to the strategy update proposed in .however , although it would be interesting to explore thoroughly the influence of in the results of the model , we just consider the case where .however , we have checked that the conclusions reached through the work are unaltered considering any .therefore , the conclusions are robust to this consideration .to perform a clear analysis of the model , we start exploring it for . to this , we look for the critical required to maintain a stable and high level of cooperation when the system grows from an initial structure of cooperators by the incorporation of new defectors . through the paper we consider in order to ensure a proper development of the topological structure of the network under consideration .nevertheless , we can justify this initial cooperative structure by extending the results of , where it has been introduced considerations that allow the formation of this initial structure of cooperators . in fig . 
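The displayed formula for the update probability is missing, so the following is one plausible reading of the democratic weighted update rather than a verbatim implementation: the neighbourhood of the focal player (itself plus its neighbours) is split into the opposite-strategy set D and the same-strategy set E, the player is motivated when D is non-empty and the mean payoff in D exceeds that in E, and a motivated player switches with a Fermi probability driven by the accumulated-payoff difference. Whether the focal player is counted in E and the sign convention of the difference are assumptions.

```python
import numpy as np
from math import exp

rng = np.random.default_rng(6)

def democratic_weighted_update(adj, strategy, pay, beta):
    """Synchronous strategy update; one plausible reading of the rule in the text.

    For each focal player i the neighbourhood (i plus its neighbours) is split into
    D (opposite strategy) and E (same strategy, i included here by assumption).
    i is 'motivated' when D is non-empty and the mean payoff in D exceeds that in E;
    a motivated player then switches with probability 1/(1+exp(-beta*dW)), where dW
    is taken to be the accumulated-payoff difference sum(D) - sum(E).  The stripped
    expressions in the paper may differ in detail (signs, normalisation).
    """
    new = dict(strategy)
    for i, neigh in adj.items():
        group = [i] + list(neigh)
        D = [j for j in group if strategy[j] != strategy[i]]
        E = [j for j in group if strategy[j] == strategy[i]]
        if not D:
            continue
        if np.mean([pay[j] for j in D]) <= np.mean([pay[j] for j in E]):
            continue                        # not motivated to change strategy
        dW = sum(pay[j] for j in D) - sum(pay[j] for j in E)
        if rng.random() < 1.0 / (1.0 + exp(-beta * dW)):
            new[i] = strategy[D[0]]         # adopt the opposite strategy
    return new

# tiny demonstration on a star graph: a defector hub with four cooperating leaves
adj = {0: {1, 2, 3, 4}, 1: {0}, 2: {0}, 3: {0}, 4: {0}}
strategy = {0: 0, 1: 1, 2: 1, 3: 1, 4: 1}   # 1 = cooperator, 0 = defector
b, c = 3.0, 1.0
pay = {i: b * sum(strategy[j] for j in adj[i]) - (c * len(adj[i]) if strategy[i] else 0.0)
       for i in adj}
print(democratic_weighted_update(adj, strategy, pay, beta=1.0))
```

In this toy example the well-paid defector hub converts its poorly earning cooperative leaves with high probability, which is precisely the kind of invasion by highly connected defectors discussed in the results below.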
we show the fraction of cooperators as a function of the benefit - cost ratio for the three growing mechanisms explored and different values of the parameters , and .these results have been obtained averaging different realizations of the model , which correspond to simulations where the system has grown to .the fraction of cooperators of each realization has been obtained averaging the level of cooperation reached by the system after each generation for . in fig .1(a ) we can see the results obtained by the three growing mechanisms explored , _i.e. _ bam , ma and rnm . in all caseswe observe a phase transition from a non - cooperative state to a highly cooperative one with some critical benefit - cost ratio .in particular , it is remarkable the very low required for cooperation to evolve irrespective of the underlying network formation mechanism .also , as it is expected , we observe by comparing the results of rnm and ma that decreases when the degree heterogeneity increases .nevertheless , the benefit for the evolution of cooperation produced by increasing the degree heterogeneity of the system is much less significant than the one of already formed systems .otherwise , we shown again that ma requires a lower value of than bam for cooperation to evolve . besides , it is very interesting to note that rnm presents a lower than the one of bam even when rnm is much less heterogeneous than bam .this remarkable result without precedent in the literature shows newly the great importance of avoid the introduction of new individuals with many connection performed simultaneously . when the systems is highly cooperative , the capacity of a defector to exploit the system is approximately proportional to its degree . therefore ,when the new individuals are introduced to the system with many connections ( ) simultaneously instead one at a time , the system requires a higher to overcome the incorporation of these strong new defectors and , therefore , the required for cooperation to evolve increases . .( c ) the dependence with the intensity of social pressure .( d ) exploring the model over different values of .the parameters not specified in each figure correspond to , , and using ma as growing mechanism . ] in fig .1(b ) we explore the influence of the average degree in the required for cooperation to evolve . as well as in ,the critical decreases approaching to the theoretical minimum when increases .when the systems is highly cooperative the payoff of individuals is approximately proportional to its degree . besides , considering that cooperators inhabit the most linked parts of the system and defectors the lowest ones , the average payoff difference between defectors and cooperators increases with and , therefore , decreases with , it is important to note that for any the system presents a very low . besides , we observe the existence of other transitions that increases slightly the fraction of cooperator into the system . in particular , these transitions are clear in and for , and also in for .as it has been shown , when a defector is connected only with a cooperator who in turn is connected with other cooperators , for the cooperator have a higher payoff than the defector and , therefore , the cooperator is favored by natural selection . 
in particular , the cases and correspond to the transitions and respectively .when the average degree of the system increases , the probability of a defector to be connected with a cooperator with few connection decreases and , therefore , these transitions become less evident . in fig .1(c ) we can see how the model behave for different values of the intensity of social influence .in particular , it shown that according as increases the critical decreases . in this way , we show that the social influence have a positive effect for the evolution of cooperation when .however , the cooperation evolves with a very low irrespective of . besides , it is interesting to note that goes to the one showed in when .this is important because shows that previous conclusions are robust when it is considered the average payoff of each strategy instead a pairwise comparison .also , since does not modify drastically we conclude that the social influence is important but not determinant in the formation of cooperative systems without mutations . otherwise , in fig .1(d ) we show results of the model for different values of .as we can see , is independent of .the only consequence of is a diminution in when increases .this is expected since determines the number of new defectors between two generations .finally , it is important to state , that the initial structure of cooperators can be justified by extending the results of since here we largely shown that the required is always improved by the social influence . in this way , we have justified the formation of highly cooperative systems of any size when . at this point, we could be interested in the consequences of the social influence in already formed structured populations . here ,for space reasons , we limit to comment that the required for cooperation to evolve is deeply worsened , in particular when increases .therefore , we are newly showing the great importance of the growing process for the evolution of cooperation in systems whose individuals have imitation capacity and the mutation probability is low enough to be neglected . under these conditions ,the cooperators inhabit the most linked parts of the system and defectors the lowest ones .since this arrangement of cooperators and defectors into the system is the optimal for cooperation , the required for cooperation to evolve is drastically reduced with respect to one of already formed systems .in particular , we have broadly shown that the system requires a very low independently of the parameters of the model shown , the growing mechanisms and the strategy update considered . besides, we have checked that equivalent conclusion can be reached for any traditional game , _ i.e. _ the snow - drift game , the stag - hunt game and public good games . therefore ,given the great robustness of conclusions , we propose the growing process as an universal mechanism for the evolution of cooperation when and the individuals of the system imitators .in the last section , we have shown that the required for cooperation to evolve when is very low independently of the remaining parameters of the model and the growing mechanism considered . 
however , in order to show that the model is a complete solution of the cooperation problem formulated in , it is required that cooperation prevails into the system even when .since the impact of mutant defectors over the system is very sensitive to the area of the network where they appear , it is very important to explore the emergence of mutants considering a lot of generations to cover a great number of possible configurations of defectors into the system . to this , we address the problem considering the apparition of mutants between generations but not the incorporation of new defectors . in this way, we study the evolution of the system when mutants appear in a system with fixed size and considering a fully cooperative system as initial condition . we choose this way to explore the model because it allows to consider efficiently many generations since otherwise the system size required to reach the same number of generations becomes very large , and therefore , computationally inefficient .besides , this procedure helps to focus the attention just over the mutants since they are the only source of defectors .however , it is important to state , as we will show ( see fig . ) , that almost equivalent results are obtained when both processes are considered simultaneously for . then , we will extend results to systems smaller than in order to show that the model allows cooperation to evolve under very low conditions for any .the results obtained considering a fixed system size have been obtained averaging different realizations of the model , where fully cooperative systems of individuals as initial condition have been considered .the fraction of cooperators of each realization has been obtained averaging the level of cooperation over generations after a transient of generations . .( b ) the dependence with the intensity of social influence .( c ) the effect of the average degree .( c ) the model over different growing mechanisms .the parameters not specified in each figure correspond to , , and using ma as growing mechanism . ] in fig . we show the fraction of cooperators into the system as a function of the benefit - cost ratio for the three growing mechanisms considered and many values of the parameters of the model , , and .in particular , fig . ( a ) shows results of the model for different values of . as we can see , the system presents a phase transition from a non - cooperative to a cooperative one for some .in particular , we observe an increment of with .however , it is remarkable that the system present a low even for the very high mutation rate . in this way, we shown that the social influence allows to overcome the apparition of mutant defectors in highly cooperative systems .it is important to state , that for some , as it is expected , the system can not reach a high and stable value of cooperation regardless of the value of given the great noise in the behavior of individuals . otherwise , in fig . ( b ) we show results of the model for different values of the social influence intensity .we observe that is not strongly dependent for but it increases fast for .in particular , we have checked that for is not possible to reach a stable value of cooperation for any .nevertheless , the cooperation evolves with a very low for all considered .therefore , the social influence is a very strong mechanism to preserve cooperation in highly cooperative systems despite the emergence of mutant defectors . , and . 
for the growing results we choose and a final system size .otherwise , the results of the static case have been obtained for . ] in fig . ( c ) we show the behavior of for different values of .surprisingly , the model presents an optimal where is minimal .this is a very interesting result that could help to understand why actual networks has evolved to the everage observed . in particular , it would be of great interest to develop a network generation model where the average degree of the system is determined by coevolutionary dynamics instead of being exogenously introduced .nevertheless , it is very important to note that the system presents a low for all the considered and , therefore , we are able to conclude that the model is very robust to . moreover , in fig . ( d ) we show results for the three growing mechanisms considered .we observed that cooperation requires a very low to evolve irrespective of the underlying growing mechanism and , therefore , the model is very robust to this consideration . besides , it is remarkable that ma and even rnm reduce with respect to the one required with bam . in this way, we are showing that the existence of individuals with less than links is also very important to overcome the emergence of mutant defectors .this feature is generally neglected in agent - based models but it could have also important implications .thus , we hope with this result motivates further researches considering ma as the skeleton of the system .otherwise , we observe , for the rnm results , a smoother phase transition than the one obtained with ma . in particular , the region where reaches intermediate values correspond to the situation where some realizations finish with a high level of cooperation and other ones near zero cooperation .this very important result shows that the life expectancy of the cooperative system increases with . in this way , by considering an greater number of generations is expected smoother transitions for any growing mechanism considered . besides, by comparing the rnm and ma results , we observe a higher life expectancy as well as a lower when the degree heterogeneity of the system increases . anyway , we have considered a great number of generations and realizations for results and , therefore , the model generates cooperative systems with a great life expentancy .otherwise , for completeness and as it is expected , we have corroborated that cooperation is extinguished after some generations in fully connected systems for any considered . in this way , the structure of the system is an essential feature for cooperation to evolve .however , it is remarkable that even for the random growth mechanism cooperation is able to evolve for a very low . finally , in fig . labeled as growing , we show results obtained considering the growing process simultaneously with the apparition of mutants . besides , labeled as static , we show the results obtained with the same parameters values but without consider the growing process .as we can see , both results are almost equivalents and , thus , we confirm that the conclusions can be extended to the case where the growing process is considered simultaneously with the emergence of mutants . , and .we used ma for the network formation . ]we have widely shown that the social influence allows to highly cooperative systems resist the apparition of mutants for all the parameters considered whenever . 
in order to extend these conclusions to systems with consider the cooperation fixation probability defined as the probability that a system of cooperators continues being highly cooperative when it grows by the incorporation of defectors and considering mutations . to obtain select a value and we perform simulations starting from a system of cooperators .then , we compute the number of systems that reach with a fraction of cooperators . finally , we compute for each as , we consider . as it can be seen in fig . for all curves , grows steeply and reaches the value for some systems size . from this structure ,called _ cooperative seed _ ( cs ) , the cooperation is stable despite the system grows by incorporating defectors and the emergence of mutants defectors . however , it is important to note that the instability of the cooperation into the system is generated by the mutant defectors but not by the new individuals given the results of the previous section .we observe that the minimal system size required for cooperation to be stable is reduced when increases . in this way, decreases quickly with when the system is small but , as soon as the degree distribution of the system is well developed , becomes approximately size independent .it is interesting to note that the existence of the _ cooperative seed _ is expected since the system starts to grow from a fully connected system , which can not support mutant defectors . in this way , the system need to develop payoff heterogeneities from degree ones in order to ensure the stability of the cooperation .however , considering fig .2 ( d ) and that from some system size , we expect the existence of a optimal payoff heterogeneity where cooperation is ensured for a low at the same time that these heterogeneities minimal .this is of great interest for the search of a more egalitarian society .nevertheless , for reasons of space , we leave these explorations for further works .otherwise , we observe that the size of the cs decreases with for a fixed . however , if and are not fixed , we have checked that increases with . in this way , we have shown that combining the growing process , social influence and individuals with imitation capacity , cooperation evolves under very low conditions when .in particular , when the system is small the cooperation becomes unstable but with a high probability to overcome the critical system size . thus , it would be of great interest to consider other features that allow reducing .in particular , if the new individuals come from the reproduction of already existing ones , it is expected high genetic relatedness between linked individuals when the system is small .then , it could be considered a that decreases with the genetic relatedness among individuals . in this way, would be low when the systems is small and the size of the cs reduced .this assumption can be justified considering that mutations between genically related individuals supposed that the survival probability of the offspring decreases .therefore , it is expected that individuals with low mutation rate between related individuals evolve .anyway , it would be of great interest performs a thorough analysis looking for the optimal that ensures the best performance of players . , , and using ma for the network generation . ] now , in order to better understand the underlying mechanism through which the social influence allow overcoming the emergence of mutant defectors , in fig . 
show the fraction of cooperation as a function of the generations of a single realization of the model .we observe that presents some irregular oscillations around the mean value .these are produced by the emergence of mutant defectors strong enough to spread their strategy into the system by imitation .after the apparition of these mutants , the fraction of cooperation decreases but , as we can see , the system is able to overcome these invasions by restoring the level of cooperation .it is interesting to note that the defection invasions are of short range since the system always preserves a high level of cooperation .however , it is important to state that increasing the defection invasions become deeper as it is expected considering fig . c. this could be particularly important to develop the previous suggested network generation model where the average degree of the system is determined by coevolutionary dynamics . otherwise , the invasions are short lived because the cooperation level is restored after few generations .in particular , we have checked that the range of the invasions depends of the degree of the mutant defector as well as the number of mutant defectors that appear into the systems between generations . in this sense ,the deepest invasion observed is produced by several highly connected mutants instead by just one .also , it is very important to note that if many highly linked individuals mutate simultaneously the system can die for any .nevertheless , this occurs with very low probability as we extensively shown in fig . the great number of generations and realizations considered for the results .even so , the death of the organism is ensured since and , as we stated in the introduction , it is essential for the new organism acquires reproduction capacity in order to avoid the extinction . , , , and using ma for the network generation] in fig . we show the fraction of defectors in function of the degree of the nodes .these results have been obtained averaging over generations after a transient of one and different networks .as we can see , the fraction of defectors decreases fast with the degree of individuals approaching to which correspond to . in this way , defectors inhabit preferably the low linked individuals of the system and cooperators the highly connected ones . considering that the result have been obtained without consider the incorporation of new individuals , we conclude that the invasions produced by the emergence of defectors spread to the low linked individuals when the social influence is considered . in this way, it is very important to the cooperative system the existence of individuals with low degree in order to reduce the explotation capacity of defectors into the system .therefore , this model behavior explain why ma and rnm present a lower than bam . otherwise , in order to understand the how the social influence allows to overcome the apparition of mutant defectors producing this distribution of defectors into the system , we analyze the mechanism by which a mutant defector newly becomes cooperator by social influence in a heterogeneous highly cooperative system . 
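Before turning to that mechanism, note that the degree-resolved measurement used here, namely the fraction of defectors within each degree class averaged over generations and over independent networks, is a small helper on top of the simulation sketch given earlier; the graph and the strategy dictionary are the same assumed objects as in that sketch.

```python
from collections import defaultdict

def defector_fraction_by_degree(G, strategy):
    """Fraction of defectors among the nodes of each degree class
    (strategy: 1 = cooperator, 0 = defector)."""
    n_def = defaultdict(int)
    n_tot = defaultdict(int)
    for i in G:
        k = G.degree(i)
        n_tot[k] += 1
        n_def[k] += 1 - strategy[i]
    return {k: n_def[k] / n_tot[k] for k in sorted(n_tot)}
```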
in particular , in order to explore the worst situation possible , we consider the case where a hub of a fully cooperative system mutates .when this occurs , the mutant increases its payoff next round since all its neighbors are cooperators .however , in the next generation its neighbors have a new possibility previously nonexistent , now they could change by imitating the strategy of the mutant .since this is a hub , it is expected in the neighborhoods of its neighbors and , therefore , all of them are motivate players . at this pointit is important to note that the accumulated payoffs and of a neighbor of the mutant can be expressed as and in the first generation after the mutation respectively . in this way, it is expected smaller in average for high connected neighbors than for low linked ones .therefore , the probability of a neighbor to change its strategy next generation decreases in average with its degree . in particular , considering and high enough , the low linked neighbors adopt defection by imitation and the highly linked ones preserve cooperation as strategy .in this situation , the mutant reduces its payoff next generation since now it have lot of new defector neighbors . besides, its average payoff is reduced since its defectors neighbors are poorly connected to the systems and , therefore , with a low payoff . however , of the mutant increases because all the neighbor cooperators are the highly linked ones .thus , the mutant becomes in a motivated player since in its neighborhood if high enough . therefore , in the next generations the mutant becomes newly cooperator . in this way , after the emergence of a mutant in a fully cooperative system , we have shown that defection spread preferably to the low linked nodes . in this way ,through this section we have widely shown that a highly cooperative system formed by a growing process is broadly able to overcome the emergence of mutants by considering the social influence .nevertheless , it is natural to ask about the outcome of the model if the abundance of strategies is considered through the learning activity instead by social influence .although it would be interesting a thorough analysis of this , we perform a brief one in appendix a in order to newly show the great importance of take into account the abundance of strategies as well as the advantages of the social influence .we believe of great interest to perform human experiments considering the growing process of the system given that it has theoretically shown to generate a favorable environment for cooperation to evolve irrespective of the underlying strategy update rule and parameters of the model considered . to this kind of experience ,the _ cooperative seed _ becomes particularly important due the great initial mutation rate reported in human experiments . besides , it would be of great interest to test if the human being follows a strategy updates similar to the _ democratic weighted update_. in this sense , a very important step towards the develop of a theoretical model of the human society is to understand how the information provided by the neighborhood affects the behavior of the human beings .taking this as motivation , it has been performed several great human experiments where this problem has begun to be addressed . although these experiments have shown several important consequences , here we just highlight those that we consider more relevant . 
it has been shown that the human mutation rate is much higher than typically assumed in theoretical models; in particular, a very high initial mutation rate that decays exponentially in time has been reported. it has also been shown that individuals do not follow the well-known imitate-the-best rule. lastly, it has been shown that degree heterogeneity does not promote cooperation when the payoff of individuals is normalized by their degree. when these conclusions are taken into account, a great number of theoretical predictions, which are very sensitive to the strategy update rule considered, are no longer sufficient to fully explain human cooperation. nevertheless, it is important to note that the experimental conditions considered in these works are very different from everyday human reality and, therefore, the results obtained are still not conclusive. in particular, the experiments have been performed with the payoff of individuals normalized by their degree instead of the accumulated one. this could have consequences as important as it does in theoretical models and, therefore, it would be of great interest to take this feature into account, since it is clearly present in actual human societies. in this paper we have addressed the cooperation problem on structured populations, considering the formulation introduced earlier and the prisoner's dilemma game as a metaphor of the social interactions. we have introduced a new strategy update rule called _ democratic weighted update _, in which individuals influence the behavior of their neighbors. in particular, we have considered that the capacity of individuals to influence others is given by their wisdom, which has been defined as proportional to their payoff. moreover, when a neighborhood contains both cooperators and defectors, the focal player is influenced contradictorily by their wisdom and, therefore, the effective social influence is defined by the difference between the total wisdom of each strategy in the neighborhood. we have extensively explored the model for a wide range of parameters and several growing mechanisms, obtaining in every case very mild conditions for cooperation to evolve. we therefore conclude that, by considering the growing process of the system, the social influence and individuals with imitation capacity, cooperation evolves, and we have thus presented a complete theoretical solution of the cooperation problem among unrelated individuals with imitation capacity. besides, we have extended the conclusions to other ways of taking into account the abundance of strategies in the neighborhood of individuals, showing its great importance for overcoming the emergence of mutant defectors in highly cooperative systems. thus, we hope to have taken a step towards a theoretical model of human society. we believe that the model has important implications for further theoretical research. in particular, it allows assuming a highly cooperative system as initial condition in agent-based models, which had never been taken into account. here we have considered that individuals are allowed to imitate cooperation or defection. however, it is possible to consider other features of human beings that spread through society by imitation, such as rumors, culture, opinions, thoughts, ideologies, etc.
in this sense, it would be of great interest understand how the success of individuals , mainly determined by the underlying cooperative system , affect the propagation dynamics of this kind of information .in particular , it could be useful to understand how the knowledge of the human society increases in time . to this, we believe a good first hypothesis to consider the existence of the truth and that the new knowledge emerges from the mutation of an individual previous one . in this way, the knowledge spreads by imitation favoring those which are closer to the truth .i acknowledge financial support from generalitat de catalunya under grant 2009-sgr-00164 , ministerio de economa y competitividad under grant fis2012 - 32334 and phd fellowship 2011fib787 .i am grateful to the statistical physics group of the autonomous university of barcelona .i would like to thank david jou and vicen mndez for their unconditional support and assistance with the manuscript .in particular , i am deeply grateful to carolina perez mora for the extensive discussions about cooperation . 99 e.o .wilson , _ sociobiology _ ( harvard univ . press , cambridge , massachusetts , 1975 ) .j. maynard - smith , e. szathmry , _ the major transitions in evolution _ ( oxford univ .press , freeman , oxford , 1995 ) .michod , _ darwinian dynamics : evolutionary transitions in fitness and individuality _ ( princeton univ . press , princeton , nj , 1999 ) .r. trivers , q. rev .biol . * 46 * , 35 ( 1971 ) .hamilton , j. theor .* 7 * , 1 ( 1964 ) .r. axelrod , w.d .hamilton , science * 211 * , 1390 ( 1981 ) .m.a . nowak and k. sigmund ,nature * 364 * , 56 ( 1993 ) .riolo , m.d .cohen , and r. axelrod , nature * 414 * , 441 ( 2001 ) .m.a . nowak and k. sigmund ,nature * 437 * , 1291 ( 2005 ) .santos , m.d .santos , and j.m .pacheco , nature * 454 * , 212 ( 2008 ) .nowak , science * 314 * , 1560 ( 2006 ) .j. hofbauer and k. sigmund , _ evolutionary games and population dynamics _ ( cambridge university press , cambridge , england , 1998 ) .h. gintis , _ game theory evolving _( princeton university , princeton , nj , 2000 ) .kollock , annu .* 24 * , 183 ( 1998 ) .g. szab , g. fth , phys . rep . *446 * , 97 ( 2007 ) . c. p. roca , j. a. cuesta , a. snchez , phys . life rev .* 6 * , 208 ( 2009 ) .m.a . nowak and r.m .may , nature * 359 * , 826 ( 1992 ) .g. abramson , m. kuperman , phys .e * 63 * , 030901 ( 2001 ) .santos and j.m .pacheco , phys .lett . * 95 * , 098104 ( 2005 ) .santos , j.m .pacheco , and t. lenaerts , proc .103 * , 3490 ( 2006 ) .y .- s chen , h. lin and , c .- x wu , physica a * 385 * , 379 ( 2006 ) .s. assenza , j. gmez - gardees , and v. latora , phys . rev .e * 78 * , 017101 ( 2008 ) .x. chen , f. fu , and l. wang , physica a * 378 * , 512 ( 2006 ) .l. luthi , e. pestelacci , and m. tomassini , physica a * 387 * , 955 ( 2008 ) .liu , z. li , x .-j . chen , and l. wang , chin .* 26 * , 048902 ( 2009 ) .wu , j. -y .guan , x. -j .xu , and y. -h .wang , physica a * 379 * , 672 ( 2007 ) .n. masuda , proc r. soc .b * 274 * , 1815 ( 2007 ) .a. szolnoki , m. perc , and z. danku , physica a * 387 * , 2075 ( 2008 ) .h. ohtsuki , c. hauert , e. lieberman , and m.a .nowak , nature * 441 * , 502 ( 2006 ) .l. wardil and j.k.l .da silva , epl * 86 * , 38001 ( 2009 ) .m. perc and a. szolnoki , biosystems * 99 * , 109 ( 2010 ) .j. gmez - gardees , i. reinares , a. arenas , and l. m. flora , sci . rep . * 2 * , 620 ( 2012 ) .m. e. j. newman , siam review * 45 * , 167 ( 2003 ) .i. gomez portillo , eur .. j. 
b * 85 * , 409 ( 2012 ) .i. gomez portillo , phys .e * 86 * , 051108 ( 2012 ) .x. chen , f. fu , and l. wang , physics letters a * 372 * , 1161 ( 2008 ) .x. wang , m perc , y. liu , x. chen , and long wang , sci .* 2 * , 740 ( 2012 ) .a. szolnoki , z. wang , and m. perc , sci . rep . * 2 * , 576 ( 2012 ) .barabsi , and r. albert , science * 286 * , 509 ( 1999 ) .a. traulsen , d. semmann , r. d. sommerfeld , h .- j .krambeck , and m. milinski , proc .* 16 * , 2962 ( 2010 ) .j. gruji , c. fosco , l. araujo , j.a .cuesta , and a. snchez , plos one * 5 * , e13749 ( 2010 ) .c gracia - lzaro , a. ferrer , r. gonzalo , a. tarancn , j.a .cuesta , a. snchez , and y. moreno , proc .* 109 * , 12922 ( 2012 ) .the results have been obtained for , , and using ma for the network generation . ]as we stated in the introduction , the abundance of strategies in the neighborhood of the focal players has been recently introduced through the learning activity of individuals . in particular , they have defined the learning activity of individual as , where determines how seriously the abundance of strategies affects the behavior of individuals and the degree of the focal player ensures a proper normalization of the transition probability . in this appendix, we address the question about what happen with the outcome of the model if the behavior of the focal players is affected by the abundance of each strategy through the learning activity instead through the effective social influence . in fig . we show the fraction of cooperation into the system as a function of for different values of .these results have been obtained following the same procedure that for the ones of fig . but considering the learning activity instead the social influence .as we can see , the system presents a phase transition from a non - cooperative state to a cooperative one with a low for the three values of considered . besides ,we observe that increases with . however , we have checked for that the system is not able to overcome the emergence of mutants for the values of considered . in particular , in this case , we have observed and evolves with strong oscillations . in this way, it is expected an optimal where is minimal .however , it is important to note that the required for cooperation to evolve is larger than the one required considering the social influence .besides , we observe that the transitions are smoother than the ones shown in the previous section . in particular , the region where reaches intermediate values correspond to the situation where some realizations finish with a high level of cooperation and other ones near zero cooperation . besides , for we observe that presents some irregularities where the average fraction of cooperation slightly decreases for some values of .these are produced by the existence of some realization where the cooperative system dies before the generations considered . in this way, we conclude that the learning activity allows to highly cooperative systems overcome the emergence of mutants defectors but with higher and lower life expectancy that the ones obtained through the social influence . in order to better understand why the social influence improves the required conditions for cooperation to evolve with respect to the ones obtained with the learning activity , in fig . 
we show in log - log scale the fraction of defectors as a functions of the degree of the nodes .besides , we show the analogous results obtained by considering the social influence to perform a clear comparison .the results have been obtained in the same way that the ones of fig . . in both cases , we observe that defectors preferably inhabit the less linked individuals . however , the fraction of defectors is higher for the case with learning activity than the one with social influence for highly connected individuals . in this way , with the learning activitythe system is more prone to have strong defectors and , therefore , the conditions required for cooperation to evolve are larger than the ones required when it is considered the social influence .nevertheless , although we have shown that social influence have a better performance than the learning activity for the evolution of cooperation , in this section we have also shown the great importance of consider that the abundance of strategies affects the behavior of individuals in highly cooperative systems .however , it would be of great interest perform a thorough theoretical analysis looking for the optimal way to consider the available information in order to ensure the best performance of the focal player .
|
in this paper we address the cooperation problem in structured populations by considering the prisoner's dilemma game as a metaphor of the social interactions between individuals with imitation capacity. we present a new strategy update rule called _ democratic weighted update _, in which the behavior of an individual is socially influenced by each of its neighbors. in particular, the capacity of an individual to socially influence others is proportional to its wisdom, which is defined by its success in the game. when a neighborhood contains both cooperators and defectors, the focal player is contradictorily influenced by them and, therefore, the effective social influence is given by the difference between the total wisdom of each strategy in its neighborhood. first, by considering the growing process of the network and neglecting mutations, we show the evolution of highly cooperative systems. then, we show extensively that the social influence allows such highly cooperative systems to overcome the emergence of mutants. in this way, we are able to conclude that, by considering the growing process of the system, individuals with imitation capacity and the social influence, cooperation evolves. therefore, we present here a theoretical solution of the cooperation problem among unrelated individuals with imitation capacity. keywords: cooperation; evolutionary game theory; prisoner's dilemma; networks; growing systems; social influence.
|
a quantum algorithm for the satisfiability problem was presented in [ 1 ] .this algorithm is based on quantum adiabatic evolution .if a state evolves according to the schrdinger equation with a slowly varying hamiltonian and is the ground state of , then will stay close to the instantaneous ground state of .the hamiltonian used in the algorithm is designed so that the ground state of is easy to construct and the ground state of encodes the solution to the instance of satisfiability . a crucial question is how large must the running time , , be to achieve an acceptable probability of success . in this paperwe simulate an -qubit continuous time quantum computer by numerically integrating the schrdinger equation in a -dimensional hilbert space .we randomly generate difficult instances of an np - complete problem and study how large must be as a function of the number of bits . for ,we find that the required grows modestly with since the data is well fit by a quadratic in .an -bit instance of satisfiability is a boolean formula with m clauses where each clause is true or false depending on the values of some subset of the bits .the task is to discover if one ( or more ) of the possible assignments of the values of the bits satisfies all of the clauses , that is , makes formula ( [ 21 ] ) true .we consider two restricted versions of satisfiability , both of which are restricted versions of a problem called exact cover " . in the first, ec3 , each clause involves only three bits .the clause always has the same form .the clause is true if and only if one of the three bits is a and the other two are , so there are three satisfying assignments out of the eight possible values of the three bits . the second , ec2 , has the restriction that each clause involves only two bits . in this casethe clause is true if and only if the two bits have the value 01 or 10 , so there are two satisfying assignments out of the four possible values for the two bits .we pick these two examples because ec2 is classically solvable in polynomial time whereas ec3 is np - complete and no polynomial time algorithm is known .it is interesting to see how the quantum algorithm treats these two classically very different problems . to understand the quantum algorithm we first recall the content of the adiabatic theorem .we are given a hamiltonian that depends smoothly on the parameter where varies from to .suppose that for each value of , has a unique lowest energy state , the ground state .that is , where is strictly less than any of the other eigenvalues of .introduce a time scale and define a time - dependent hamiltonian by where varies from to . 
as gets bigger, becomes more slowly varying .let obey the schrdinger equation with that is , at , is the ground state of .the adiabatic theorem tells us that this means that for large enough , is ( up to a phase ) very close to the ground state of .( [ 26 ] ) only holds if the gap between the ground state energy , of , and the next highest energy , of , is strictly greater than zero , that is , for .we will also discuss cases where at there are multiple ground states .in this situation the evolution ends very close to the ground state subspace .the idea behind the quantum algorithm is the following .given an instance of the satisfiability problem it is straightforward to construct a hamiltonian , , whose ground state corresponds to an assignment of the bits that violates the least number of clauses .although it is easy to construct , finding its ground state may be computationally challenging .however we can construct a hamiltonian whose ground state we explicitly know .now let and accordingly for some fixed .we start our quantum system in the known ground state of and then evolve according to eq .( [ 24 ] ) for time .suppose that the instance of satisfiability that gave rise to has a unique satisfying assignment .if is large enough , then will be very close to the ground state of . measuring will , with high probability , produce the satisfying assignment .in general this algorithm will produce an assignment that violates the minimum number of clauses in eq .( [ 21 ] ) .restricting to instances with a unique satisfying assignment appears to pick out difficult instances and simplifies the analysis of our algorithm .more explicitly , given an -bit instance of satisfiability , we work in an -qubit hilbert space that is spanned by the basis vectors where and is an eigenstate of the component of the spin , we also need the eigenstates of the component of the spin , and that obey for concreteness imagine that each clause in formula ( [ 21 ] ) involves bits .let and be the bits associated with clause .for each clause define an energy " function we immediately turn these into quantum operators and define by construction is nonnegative , that is , for all and if and only if is a superposition of states of the form satisfies all of the clauses . in this context seeing if formula ( [ 21 ] ) has a satisfying assignment is accomplished by finding the ground state of .if formula ( [ 21 ] ) has no satisfying assignment , the ground state ( or states ) of correspond to the assignment ( or assignments ) that violate the least number of clauses . given by eq .( [ 214 ] ) is the problem hamiltonian whose ground state we seek when we run the quantum algorithm . for the beginning hamiltonian ,first define for each clause , involving bits , and , let the beginning hamiltonian is given by the ground state of is which we will take to be the initial state when we run the quantum algorithm .the hamiltonian that governs the evolution of the quantum system is given by eq .( [ 28 ] ) with specified in eq .( [ 214 ] ) and specified in eq .( [ 217 ] ) .note that is a sum of terms , each of which is associated with one of the clauses in ( [ 21 ] ) , where each involves only the bits associated with clause and therefore is a sum of terms each of which involves a fixed number of bits . for a given problem such as ec3 or ec2 we must specify the running time as a function of the number of bits , . 
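For the modest numbers of bits treated here, these operators can simply be assembled as dense matrices. The sketch below is an illustrative reconstruction rather than the code used in the study: the problem Hamiltonian is diagonal and counts violated Exact Cover clauses, the beginning Hamiltonian weights the single-bit term (1 - sigma_x)/2 by the number of clauses in which each bit appears, and the two are interpolated linearly.

```python
import numpy as np

SX = np.array([[0.0, 1.0], [1.0, 0.0]])     # Pauli sigma_x
I2 = np.eye(2)

def embed(op, i, n):
    """Single-qubit operator acting on qubit i (the least-significant bit of
    the basis-state index) embedded in the 2**n-dimensional space."""
    mats = [I2] * n
    mats[n - 1 - i] = op
    out = mats[0]
    for m in mats[1:]:
        out = np.kron(out, m)
    return out

def problem_hamiltonian(n, clauses):
    """Diagonal H_P: number of violated clauses for each bit string (a clause
    is satisfied iff exactly one of its bits equals 1, covering EC3 and EC2)."""
    diag = np.zeros(2 ** n)
    for z in range(2 ** n):
        bits = [(z >> q) & 1 for q in range(n)]
        diag[z] = sum(1 for cl in clauses if sum(bits[q] for q in cl) != 1)
    return np.diag(diag)

def beginning_hamiltonian(n, clauses):
    """H_B = sum_i d_i (1 - sigma_x^(i)) / 2, with d_i the number of clauses
    containing bit i."""
    H = np.zeros((2 ** n, 2 ** n))
    for i in range(n):
        d_i = sum(1 for cl in clauses if i in cl)
        H += d_i * 0.5 * (np.eye(2 ** n) - embed(SX, i, n))
    return H

def interpolating_hamiltonian(s, HB, HP):
    """H(s) = (1 - s) H_B + s H_P, with s = t / T during the evolution."""
    return (1.0 - s) * HB + s * HP
```

The ground state of the beginning Hamiltonian built this way is the uniform superposition over all bit strings, which is the initial state of the algorithm.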
since the state is not exactly the ground state of we must also specify , the number of times we repeat the quantum evolution in order to have a desired probability of success .this paper can be viewed as an attempt to determine and by numerical methods .we now summarize the ingredients of the quantum adiabatic evolution algorithm . given a problem and an -bit instance , we assume we know the instance - independent running time and repetition number . for the instance and given , 1 .construct the time - dependent hamiltonian given by eq .( [ 28 ] ) with and given by eq .( [ 214 ] ) and eq .( [ 217 ] ) .2 . start the quantum system in the state given by eq .( [ 218 ] ) .3 . evolve according to eq .( [ 24 ] ) for a time to arrive at .measure in the state and check if the bit string satisfies all clauses .repeat times .for our numerical study we randomly generate instances of the two problems under study , ec3 and ec2 .focus first on ec3 . for a decision problem, it suffices to produce a satisfying assignment when one or more exists . fornow , we restrict our attention to instances with a unique satisfying assignment . we believe that instances with only one satisfying assignment include most of the difficult instances for our algorithm .in fact as we will see later , our algorithm runs faster on instances with more than one satisfying assignment so the restriction to a unique satisfying assignment appears to restrict us to the most difficult cases . with the number of bits fixed to be , we generate instances of ec3 as follows .we pick three bits at random , uniformly over the integers from to .( the bits must all be different . )we then have a formula with one exact cover clause .we calculate the number of satisfying assignments .we then add another clause by picking a new set of three bits .again we calculate the number of satisfying assignments .we continue adding clauses until the number of satisfying assignments is reduced to one or zero .if there is one satisfying assignment we accept the instance . if there are none we reject the instance and start again .using this procedure the number of clauses is not a fixed function of the number of bits but rather varies from instance to instance . for ec3we find that the number of clauses is typically close to the number of bits .we follow a similar procedure for ec2 . 
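The instance generation just described is easy to reproduce with a brute-force count of satisfying assignments, which is adequate for the n <= 16 used in this study. The sketch below is an assumed reconstruction of the EC3 procedure; the EC2 variant, described next, only changes the clause width and the number of accepted satisfying assignments.

```python
import itertools
import random

def count_satisfying(n, clauses):
    """Number of n-bit assignments satisfying every Exact Cover clause
    (exactly one of the three bits in each clause equals 1)."""
    total = 0
    for bits in itertools.product((0, 1), repeat=n):
        if all(bits[i] + bits[j] + bits[k] == 1 for (i, j, k) in clauses):
            total += 1
    return total

def random_ec3_instance(n):
    """Add random three-bit clauses until one or zero satisfying assignments
    remain; keep the instance only if exactly one remains."""
    while True:
        clauses = []
        while True:
            clauses.append(tuple(random.sample(range(n), 3)))
            s = count_satisfying(n, clauses)
            if s <= 1:
                break
        if s == 1:
            return clauses      # unique satisfying assignment: accept
        # s == 0: reject the instance and start again

clauses = random_ec3_instance(10)
print(len(clauses), "clauses generated for 10 bits")
```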
when we add a clause we randomly specifywhich two bits are involved in the clause .again we repeat this procedure until there are two satisfying assignments ( or none in which case we discard the instance ) .we stop at two satisfying assignments because ec2 has a bit negation symmetry .if is a satisfying assignment so is and accordingly there are no instances with a single satisfying assignment .for ec2 the number of clauses is typically close to the number of bits .we know that ec2 is classically computationally simple but of course there is no guarantee that quantum adiabatic evolution will work well on ec2 .we choose instances of ec2 with two satisfying assignments to parallel as closely as possible our study of ec3 .we are exploring the quantum adiabatic evolution algorithm by numerically simulating a perfectly functioning quantum computer .the quantum computer takes an initial state , given by eq .( [ 218 ] ) , and evolves it according to the schrdinger equation ( [ 24 ] ) , for a time , to produce .the hamiltonian is given by eq .( [ 28 ] ) with and determined by the instances of satisfiability being studied .if the number of bits is , the dimension of the hilbert space is .this exponential growth in required space is the well - known impediment to simulating a quantum computer with many bits . for our modest numerical investigation , using macs running matlab for a few hundred hours , we can only explore out to 16 bits .we integrate the schrdinger equation using a variable step size runge - kutta algorithm , checking accuracy by keeping the norm of the state equal to unity to one part in a thousand .since the number of bits is modest , we can always explicitly determine the ground state ( or ground states ) of .given and the ground state ( or states ) of we can calculate the probability that a measurement of will give a satisfying assignment by taking the sum of the squares of the inner products of with the median time to achieve probability 1/8 ------------------------------------------ our goal in this paper is to explore the running time and the repetition number that will give a successful algorithm . to this endwe first determine the typical running time needed to achieve a fixed probability of success for a randomly generated instance with bits for .in particular we determine the median time required to achieve a success probability of . since this is a numerical study we actually hunt for a time that produces a probability between 0.12 and 0.13 . for each between 7 and 15 ,for both ec3 and ec2 , we find the median of 50 randomly generated instances . in 1 the circles represent the data for ec3 and the solid curve is a quadratic fit to the data . at each number of bits the times required to reach probability range from roughly half the median to twice the median . for this range of , a quadratic , or even linear fit ,is clearly adequate .the exponential is also a good fit . in the next sectionwe show a situation where an exponential fit to the data is _ required _ for the same range of .we know of one anomalous instance ( discovered by daniel preda , outside of the data collected for this paper ) with 11 bits and a time to achieve probability 1/8 of close to 300. however , at , which is the value of the quadratic fit in 1 at , the probability of success for this instance is already 0.0606 .because this probability is not anomalously low , the algorithm proposed in section [ sec:7 ] will have no difficulty with this instance . 
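For reference, the numerical procedure described above reduces to a few lines once the Hamiltonian builders from the earlier sketch are available. The fixed-step fourth-order Runge-Kutta scheme below is a simplification (the study itself uses a variable-step integrator) and sets hbar = 1.

```python
import numpy as np

def evolve(n, clauses, T, steps=2000):
    """Integrate i d|psi>/dt = H(t/T)|psi> with a fixed-step RK4 scheme,
    starting from the uniform superposition (the ground state of H_B)."""
    HB = beginning_hamiltonian(n, clauses)
    HP = problem_hamiltonian(n, clauses)
    dim = 2 ** n
    psi = np.ones(dim, dtype=complex) / np.sqrt(dim)
    dt = T / steps

    def deriv(t, phi):
        s = t / T
        return -1j * (((1.0 - s) * HB + s * HP) @ phi)

    t = 0.0
    for _ in range(steps):
        k1 = deriv(t, psi)
        k2 = deriv(t + 0.5 * dt, psi + 0.5 * dt * k1)
        k3 = deriv(t + 0.5 * dt, psi + 0.5 * dt * k2)
        k4 = deriv(t + dt, psi + dt * k3)
        psi = psi + dt / 6.0 * (k1 + 2.0 * k2 + 2.0 * k3 + k4)
        t += dt
    psi /= np.linalg.norm(psi)        # the norm should stay close to one
    return psi, HP

def success_probability(psi, HP):
    """Sum of squared amplitudes over the satisfying assignments, i.e. over
    the zero-energy entries on the diagonal of H_P."""
    satisfying = np.isclose(np.diag(HP), 0.0)
    return float(np.sum(np.abs(psi[satisfying]) ** 2))
```

Hunting for the value of T that brings this probability into the targeted window, instance by instance, mirrors the median-time measurement described above.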
in 2 the circles represent the data for ec2 and the solid curve is a linear fit to the data . herethe maximum time required to reach probability , for each number of bits , is roughly greater than the median .the hamiltonian is a sum of terms each of which involves only the few bits mentioned in clause ; see eq .( [ 219 ] ) .each is associated with a subspace of the hilbert space corresponding to the satisfying assignments of clause , that is , the space spanned by the ground states of ; see eq .( [ 213 ] ) .quantum adiabatic evolution ( for big enough ) yields a state in the intersection of the subspaces associated with all of the clauses .our intuition is that the bit structure of these subspaces is crucial to the success of the quantum algorithm . to test this intuition we destroy the bit structure of the hamiltonian and run the algorithm again . specifically consider the classical energy function that counts the number of violated clauses where and is given in eq .( 2.12 ) . from eq . ( [ 213 ] ) and eq . ( [ 214 ] ) we have that now let where is a random permutation of the integers .note that is _ not _ a permutation of the bits but rather a random scrambling of all of the s .let and accordingly the spectrum of is identical to that of but the relationship between the eigenvalues and the values of has been scrambled .accordingly the spectrum of is not equal to the spectrum of ) except at and .if has a unique ground state so does and for large enough we expect ( again by the adiabatic theorem ) that quantum evolution by will bring us to the ground state of . once the bit structure has been destroyed , finding the minimum of is essentially the problem of minimizing an arbitrary function defined on .solving this problem requires exponential time even on a quantum computer[2 , 3 ] . to confirm this we generated 100 instances of ec2 for each of each instance we generate a random permutation and quantum evolve with for time . in 3we show the median time required to achieve a success probability of .the linear behavior of the data on the log plot indicates an exponential growth as a function of and the solid line represents .in contrast with the data in figures [ fig:1 ] and [ fig:2 ] , the data in 3 can not be well fit by a quadratic .figures [ fig:1 ] and [ fig:2 ] , at least by comparison with 3 , indicate that the median times required to achieve probability , for ec3 and ec2 with , grow modestly with .thus the fitted medians , for ec3 and ec2 , are reasonable candidates for the running times of our algorithm for these two problems . automatically , with these run times, our algorithm will achieve a success probability of at least for about half of all instances .our goal now is to explore how low the success probability can get at these run times . to this endwe generate 100 new instances of ec3 and ec2 for each .now runs from 7 to 16 for ec3 and from 7 to 15 for ec2 and is given by the fit to the data shown in figures [ fig:1 ] and [ fig:2 ] . 4 displays the results for ec3 . for each value of from 7 to 16we show the median probability of success at as well as the smallest of the 100 probabilities and the smallest .it is no surprise that the median probability is close to for all values of .the fact that the smallest probability does not decrease with was not anticipated .this means that , at least for the range of the number of bits considered , the number of repetitions can be taken to be constant with to achieve a fixed desired probability of success . 
in 5 the data for ec2 is presented . here the run time is given by the linear fit to the data in 2 .again the median probability at each value of is close to as is expected .however , even for this classically easy problem we had no guarantee that the worst case probability would not decrease with .in fact it does not appear to decrease at all . in order to show that the running time must increase with to produce a successful algorithm, we explore the success probabilities obtained when using an -independent running time . more specifically for the ec3 instances used to generate 4 we run the algorithm for for a constant value of , the one previously used for . in 6the log plot shows that the median success probability decreases exponentially , which means that the number of repetitions would have to grow exponentially to achieve a fixed probability of success . in section [ sec:6 ] we gave evidence that the bit structure is crucial to the success of the quantum adiabatic evolution algorithm .we make this point again by taking the instances of ec3 , for , and running the algorithm with given by the fit in 1 but with replaced by . in 7 ,the median probability of success is seen to decrease exponentially with .this helps confirm our intuition that the quantum adiabatic evolution algorithm takes advantage of the bit structure inherent in the problem .all of the ec3 data presented up to this point was generated from instances with unique satisfying assignments .now we explore ec3 instances with more than one satisfying assignment . as in section [ sec:3 ]clauses are added at random but now instances are accepted as soon as the number of satisfying assignments is 6 , 7 , 8 , or 9 .if adding a clause reduces the number of satisfying assignments from more than 9 to less than 6 , the instance is rejected .we do this with 100 instances for 10 , 11 , 12 , and 13 bits and run at the same times used for instances with a unique satisfying assignment to generate 4 . in 8we show the median probability , the smallest probability , and the smallest for these instances . at the running timesused , the median probability for instances with unique satisfying assignments is close to 1/8 ( for any number of bits ) . for the instances with multiple satisfying assignments the medians are about 1/3 and the worst case has a probability of about 1/8 .this substantiates our intuition that instances with unique satisfying assignments are generally the most difficult for the quantum adiabatic algorithm . at the running times explored in this paper transitions out of the instantaneous ground state are not uncommon . in the case of a unique satisfying assignment such a transition ( assuming no transition back ) leads to a final state that does not correspond to the satisfying assignment . 
in the case of multiple satisfying assignments such transitionsmay lead to states that are headed towards the subspace spanned by the satisfying assignments .this is why the success probabilities are typically higher when there is more than one satisfying assignment .we have presented numerical evidence that a quantum adiabatic evolution algorithm can solve instances of satisfiability in a time that grows slowly as a function of the number of bits .here we have worked out to 16 bits , but with more computing power instances with higher numbers of bits can be studied .this algorithm operates in continuous time .the algorithm can be written as a product of few - qubit unitary operators where the number of factors in the product is of order times a polynomial in . however , understanding the idea behind the algorithm is obscure in the conventional quantum computing paradigm .( a quantum algorithm for satisfiability that is explicitly within the ordinary paradigm is presented in . )the algorithm studied in this paper works by having the quantum system stay close to the ground state of the time - dependent hamiltonian that governs the evolution of the system .we imagine that protecting a device that remains in its ground state from decohering effects may be easier than protecting a device that requires the manipulation of excited states .this work was supported in part by the department of energy under cooperative agreement de fc0294er40818 .thanks the participants at the aspen center for physics meeting on quantum information and computation ( june 2000 ) for many helpful discussions .we thank mehran kardar , joshua lapan , seth lloyd , andrew lundgren , and daniel preda for valuable input .
|
quantum computation by adiabatic evolution , as described in quant - ph/0001106 , will solve satisfiability problems if the running time is long enough . in certain special cases ( that are classically easy ) we know that the quantum algorithm requires a running time that grows as a polynomial in the number of bits . in this paper we present numerical results on randomly generated instances of an np - complete problem and of a problem that can be solved classically in polynomial time . we simulate a quantum computer ( of up to 16 qubits ) by integrating the schrdinger equation on a conventional computer . for both problems considered , for the set of instances studied , the required running time appears to grow slowly as a function of the number of bits .
|
the aim of this paper is to investigate a reduced - order approach for four - dimensional variational data assimilation ( 4d - var ) , with an illustration in the context of ocean modelling , which is our main field of interest .4d - var is now in use in numerical weather prediction centers ( e.g. rabier _ et al ._ 2000 ) and should be a potential candidate for operational oceanography in prospect of seasonal climate prediction and possibly of high resolution global ocean mesoscale prediction . however , ocean scales make the problem even more difficult and computationally heavy to handle than for the atmosphere .several applications were conducted these last years for various oceanic studies , including for example : basin - scale ocean circulation , either with quasigeostrophic ( moore 1991 ; schrter _ et al ._ 1993 ; luong _ et al ._ 1998 ) or with primitive equation models ( greiner _ et al ._ 1998 ; wenzel and schrter 1999 ; greiner and arnault 2000 ; weaver _ et al ._ 2002 ) ; coastal modelling ( leredde _ et al ._ 1998 ; devenon _ et al ._ 2001 ) ; or biogeochemical modelling ( lawson _ et al ._ 1995 ; spitz _ et al ._ 1998 ; lellouche _ et al ._ 2000 , faugeras _et al . _ , 2003 ) . however , although considerable work and improvements have been performed , a number of difficulties remain , common to most applications ( and also to other data assimilation methods ) .the first problem is the fact that ocean models are non - linear , while 4d - var theory is established in a linear context .more precisely , variational approach can adapt in principle to non - linear models , but the cost function is no longer quadratic with regard to the initial condition ( which is the usual control parameter ) which can lead to important difficulties in the minimization process and the occurence of multiple minima .several strategies have been proposed to overcome these problems : luong _ et al . _( 1998 ) and blum _ et al . _( 1998 ) perform successive minimizations over increasing time periods ; courtier _ et al . _( 1994 ) , with the so - called incremental approach , generate a succession of quadratic problems , which solutions should converge ( but with no general theoretical proof ) towards the solution of the initial minimization problem .a second major difficulty with variational problem implementation lies in our poor knowledge of the background error , whose covariance matrix plays an important role in the cost function and in the minimization process . in the absence of statistical information , these covariances are often approximated empirically by analytical ( e.g. gaussian ) functions .for instance , the covariances , used in the `` standard '' 4d - var experiment described in section [ sect : numexp ] are 3d but univariate .moreover , as discussed in ( lermusiaux , 1999 ) , errors evolve with the dynamics of the system and thus the error space should evolve in the same way . 
in realistic systems , it proves to be difficult to catch correctly this evolution .the third major problem in the use of 4d - var in realistic oceanic applications is probably the dimension of the control space .in fact , this dimension is generally equal to the size of the model state variable ( composed , in our case , by the two horizontal components of the velocity , temperature and salinity ) , which is typically of the order of - .this makes of course the minimization difficult and expensive ( typically tens to hundreds times the cost of an integration of the model ) , even with the best current preconditioners .this last difficulty can be addressed by reducing the dimension of the minimization space .this is for example the idea of the incremental approach ( courtier _ et al ._ 1994 ) , in which an important part of the successive quadratic minimization problems previously mentioned can be solved using a coarse resolution ( e.g. veers and thpaut 1998 ) .the dimension of the minimization problem can then be decreased by one or two orders of magnitude . however , even with such an approach , the dimension of the control space remains quite large in realistic applications . another way to reducethe dimension of the control space is the representer method ( bennett , 92 ) , performing the minimization in the observation space .the number of parameters to estimate is equal to the number of observation locations .concerning sequential data assimilation , reduced - order methods were developed to allow the specification of error covariances matrix even for realistic applications .this is the case for example of the singular extended evolutive kalman ( seek ) filter ( pham _ et al ._ 1998 ; brasseur _ et al ._ 1999 ) . in this paper, we propose an alternative way for drastically decreasing the dimension of the control space , and hence the cost of the minimization process .moreover this method provides a natural choice for a multivariate background error covariance matrix , which helps improving the quality of the final solution .the method is based on a decomposition of the control variable on a well - chosen family of a few relevant vectors , and has already been successfully applied in the simple case of a quasigeostrophic box model ( blayo _ et al .the aim of the present paper is to further develop this approach and to validate it in a more realistic case , namely a primitive equation model of the equatorial pacific ocean .the method is described in section 2 .then the model , the assimilation scheme and the numerical experiments are presented in section 3 , and their results are discussed .finally some conclusions are drawn in section 4 .let a model simply written as with the state vector in ] , with an observation operator mapping onto .the classical 4d - var approach consists in minimizing a cost function using the notations of ide _ et al . _ is a background value for the control vector , and is its associated error covariance matrix . in most applications ,the control variable is the state variable at the initial time : , and the background state is typically a forecast from a previous analysis given by the data assimilation system . in this case , once the model is discretized , the size of ( i.e. the dimension of the control space ) is equal to the size of , denoted by . stands for the state variable at time . in equation ( [ eq : j ] ) , is propagated by , the fully non - linear model . 
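In the notation of Ide et al., the cost function referred to here takes the usual strong-constraint form, reproduced below for reference; x_b denotes the background, B and R_i the background and observation error covariance matrices, H_i the observation operator and M_{0->i} the nonlinear model propagating the initial state x_0 to time t_i.

```latex
J(x_0) \;=\; \tfrac{1}{2}\,(x_0 - x_b)^{\mathrm{T}} B^{-1} (x_0 - x_b)
\;+\; \tfrac{1}{2}\sum_{i=0}^{N} \bigl(H_i\,x_i - y_i\bigr)^{\mathrm{T}}
      R_i^{-1}\bigl(H_i\,x_i - y_i\bigr),
\qquad x_i = M_{0 \to i}(x_0).
```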
in the incremental formulation which is used here ,the cost function is written as a function of and the term is calculated using the linearized model * m * : where stands for the innovation vector : and is the temporal evolution performed by the model between the instants and .the basic idea then , for constructing a reduced - order approach , consists in defining a convenient mapping from into , with , and in replacing the control variable by the new control variable with .since we want to preserve a good solution while having only a rather small number of degrees of freedom on the choice of , the subspace of must be chosen in order to contain only the `` most pertinent '' admissible values for .more precisely , in the case of the control of the initial condition , we decide to define the mapping by an affine relationship of the form : in order to let span a wide range of physically possible states , represents an estimate of the state of the system , and are vectors containing the main directions of variability of the system ( the are scalars ) .such a definition relies on the fact that most of the variability of an oceanic system can be described by a low dimensional space .even if it is only rigorously proved for very simplified models ( lions _ et al ._ , 1992 ) , it is often expected that , away from the equator , ocean circulation can be seen as a dynamical system having a strange attractor .this means that the system trajectories are attracted towards a ( low dimension ) manifold . in the vicinity of this attractor ,orthogonal perturbations will be naturally damped , while tangent perturbations will not ( they can even be greatly amplified , due to the chaotic character of the system ) . to retrieve a system trajectory over of period of time ] and .the inner product is the usual one for a state vector containing several physical quantities expressed in different units : where is the empirical variance of the -th component : .this diagonalization leads to a set of orthonormal eigenvectors corresponding to eigenvalues .since trajectories are computed with the fully non - linear model , these modes represent non - linear variability around the mean state over the whole period . the first level ( m ) of the first eof is displayed on fig .1 . as can be seen , it is mostly representative of the variability of the equatorial zonal currents , of the north - south temperature oscillation and of the mean structure of the sea surface salinity .the fraction of variability ( or inertia " ) which is conserved when retaining only the first vectors is .its variation as a function of is displayed in fig .2 . we can see that a large part of the total variance can be represented by a very few eofs : 80% for the first 13 eofs , 92% for the first 30 eofs .finally , let us emphasize that a natural estimate for the covariance matrix of the first eigenvectors , i.e. in our reduced - order 4d - var , is simply the diagonal matrix . a 4d - var assimilation scheme , based on the incremental formulation of courtier _ et al . _( 1994 ) , has been developped for the opa model ( weaver _ et al ._ 2003 , vialard _ et al ._ 2003 ) . 
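Before turning to the details of that scheme, the EOF analysis just described can be condensed into a short numpy sketch. It is a simplified illustration: the normalisation uses one empirical variance per physical field, and the snapshot matrix, the grouping vector and the truncation r are placeholders.

```python
import numpy as np

def compute_eofs(snapshots, var_groups, r):
    """Multivariate EOF analysis of a set of model states.

    snapshots  : (p, K) array whose columns are state vectors (u, v, T, S)
                 sampled from a previous model run
    var_groups : (p,) integer array giving, for each state component, the
                 physical field it belongs to
    r          : number of retained EOFs
    """
    mean = snapshots.mean(axis=1, keepdims=True)
    anomalies = snapshots - mean
    scale = np.empty(snapshots.shape[0])
    for g in np.unique(var_groups):
        scale[var_groups == g] = anomalies[var_groups == g].std()
    A = anomalies / scale[:, None]              # variance-normalised anomalies
    U, s, _ = np.linalg.svd(A, full_matrices=False)
    eigval = s ** 2 / A.shape[1]                # eigenvalues of the sample covariance
    inertia = eigval[:r].sum() / eigval.sum()   # fraction of variance retained
    L = U[:, :r] * scale[:, None]               # EOFs mapped back to model units
    return mean[:, 0], L, eigval[:r], inertia

def reduced_increment(w, L):
    """Full-space initial-condition increment generated by the reduced
    control vector w: delta x0 = sum_i w_i L_i."""
    return L @ w
```

The retained eigenvalues returned here form the diagonal of the natural reduced-space background error covariance mentioned above, so the background term of the cost function reduces to a weighted sum of squares of the coefficients w_i.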
without going into details ( which can be found in references above ) ,let us recall that the nonquadratic cost function is expressed in terms of the increment , and that its minimization is replaced by a sequence of minimizations of simplified quadratic cost functions .the basic state - trajectory used in the tangent linear model is regularly updated in an outer loop of the assimilation algorithm , while the iterations of the actual minimizations are performed within an inner loop .different statistical models can be chosen for representing the correlations of background error . in the present study , we used a laplacian - based correlation model , which is implemented by numerical integration of a generalized diffusion - type equation ( weaver and courtier , 2001 ) .the horizontal correlation lengths for the gaussian functions are equal to in longitude and in latitude near the equator and in longitude / latitude outside the area situated between / s .the vertical correlation lengths depend on the depth . is thus block diagonal : covariances are spatially varying but remain monovariate .such a choice for leads to significantly better results than those given by a simple diagonal representation of this matrix .however , since remains univariate , the links between the model variables come only from the action of the model dynamics .the development of a multivariate model for is presently under way in research groups .( 2004 ) include a state - dependent temperature - salinity constraint , which works quite well in the 3d - var case but is not yet operational for the 4d - var case .the observation error covariance matrices depend of course of the assimilated data .we will consider in the present case only temperature observations , which are assumed independent with a standard error equal to .the are thus taken equal to .we have used for our experiments the classical framework of twin experiments .a one - year simulation of the model was performed , starting at the beginning of 1993 .this simulation ( further denoted ) will be the reference experiment .pseudo - observations of the temperature field were then generated , by extraction from this one - year solution at the locations of the 70 tao moorings ( fig .3 ) , with a periodicity of 6 hours , on the first 19 levels of the model ( i.e. the first 500 meters of the ocean ) . this corresponds to observing 0.17% of the model state vector every 6 hours .those temperature values have been perturbed by the addition of a gaussian noise , with a standard error set to , which is an upper bound for the standard error of the real tao temperature dataset . a 4d - var assimilation of these pseudo - observations ( i.e. with full control variable , built from the state vector ( u , v , t , s ) in the whole space )was then performed , using an independent field ( a solution of the model three months later ) as the first guess ( background field ) for the minimization process .this first assimilation experiment will be denoted , since it uses the full control space . 
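The observation-generation step of this twin experiment can be stated explicitly; the sketch below only formalises the procedure described above, with the array shapes, the index set of observed temperature points and the noise standard deviation left as placeholders.

```python
import numpy as np

def make_pseudo_observations(truth_states, obs_index, sigma_obs,
                             rng=np.random.default_rng(0)):
    """Twin-experiment observations: extract the reference temperature at the
    observed points and perturb it with independent Gaussian noise.

    truth_states : (n_times, p) array of states from the reference run
    obs_index    : indices of the observed temperature points (mooring
                   locations and levels) within the state vector
    sigma_obs    : observation error standard deviation, kept symbolic here;
                   the corresponding diagonal of R is sigma_obs**2
    """
    y = truth_states[:, obs_index]
    return y + rng.normal(0.0, sigma_obs, size=y.shape)
```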
in order to improve the validity of the tangent linear approximation , the assimilation time window was divided into successive one - month windows .then an additional simulation was performed , using the reduced - space approach described in section 2 with eofs ( which represent 92% of the total inertia - fig .this second assimilation experiment will be denoted .as detailed previously , the control variable in this case is , with the mapping and the preconditionning .as explained in section 2 , the reduced - space assimilation algorithm presents two main differences with regard to the full - space algorithm , which are the multivariate nature of the background error covariance matrix , and the small dimension of the control space .both aspects are expected to improve the efficiency of the assimilation , and we will now illustrate their respective impact .the background error covariance matrix used in the reduced - space approach is defined empirically by the eof analysis and is expressed in the full - space as .it integrates statistical information on the consistency between the different model variables , and is naturally multivariate . on the other hand ,the matrix used in the full - space 4d - var is univariate , since providing a multivariate model for this matrix remains challenging .this aspect is of course very important , and should lead to significant changes in the assimilation results .note that buehner _( 1999 ) have proposed a similar way of representing error covariances with eof analysis in the context of 3d - var .however they consider that the reduced basis is not sufficient to span the analysis increment space and blend this eof basis with the prior projected into the sub - space orthogonal to the eofs . an interesting way to illustrate these differences between the full - space and the reduced - space is to perform preliminary assimilation experiments with a single observation .for that purpose , we use a single temperature observation located within the thermocline at 160 on the equator , and specified at the end of a one - month assimilation time window .the innovation is set to 1 .the analysis increment at the initial time in such an experiment is proportional to the column of corresponding to the location of the observation . as can be seen in fig .4 , the reduced - space method performs , as expected , a rather weak correction over the whole basin , while the full - space method generates a much stronger and local increment .the structure of the increment is indeed much more elaborate in the reduced - space experiment , with scales larger than in the full - space experiment .note that the input from the first eof ( shown on fig .1 ) is quite clear in the horizontal pattern of the increment , since in this particular case .the maximum value of the increment however is only 0.06 for the reduced - space 4d - var , while it is 0.94 in the full - space 4d - var .the interest of the naturally multivariate aspect of is also clear in the results of our twin experiments .two different types of diagnostics were performed , the first one concerning only the assimilated variables ( i.e. 
The interest of the naturally multivariate aspect of is also clear in the results of our twin experiments. Two types of diagnostics were performed: the first concerns only the assimilated variables (i.e. temperature in the present case), while the second relates to all the other variables, which are not assimilated. This second type of diagnostic is of course the most significant, since it evaluates the capability of the assimilation procedure to propagate information over the whole model state vector. An example of the first type of diagnostic is given in Fig. 5a, which displays the temperature rms error, defined by the discretized formula
\[
\mathrm{rms} = \left[ \frac{1}{N_x N_y} \sum_{i=1}^{N_x}\sum_{j=1}^{N_y} \left( T_{i,j} - T^{\mathrm{ref}}_{i,j} \right)^{2} \right]^{1/2},
\]
where $N_x$ and $N_y$ are the numbers of grid points in x and y. This error is significantly weaker in than in , although the assimilation system in has far fewer degrees of freedom to adjust the model trajectory to these data. An example of the second type of diagnostic is shown in Fig. 5b,c. In our test case, these results are clearly in favour of the reduced-space approach. The errors on the salinity S and the zonal component of the velocity for the solution provided by are systematically greater than for . The interest of this approach can also be illustrated by the results in the lower levels. It is well known that the time scale for the information to penetrate from the upper ocean into the deep ocean within an assimilation process may be quite long. However, in experiment the EOFs add information on the vertical structure of the flow (see Fig. 4) and thus make the vertical adjustment easier. We have plotted, for example, in Fig. 6 the errors of the different solutions at level 20 (depth: 750 m, _i.e._ below the observations). performs a very good identification of the solution, owing to the propagation of the information in depth. These results are only part of what could be shown in terms of diagnostic analyses, but all of them clearly show that the results of vs are significantly improved for all variables, assimilated or not. Finally, it must be mentioned that we have also illustrated the fundamental role of the multivariate nature of by performing an additional reduced-order experiment (not shown) using univariate EOFs. In this case, the directions proposed for the minimization were not relevant, and the assimilation failed. The second important difference brought by the reduced-space approach with regard to the full-space approach is the dimension of the minimization space, which is decreased by several orders of magnitude. This should reduce the number of iterations necessary for the minimization, i.e. reduce the cost of the data assimilation algorithm, which is an important practical issue. The evolution of the cost functions for experiments and is displayed in Fig. 7. Since we use different covariance matrices and in these two experiments, the curves are not quantitatively comparable. However, it is clear from Fig. 7 that the number of iterations required to stabilize the cost function is reduced by nearly one order of magnitude between the full-space 4D-Var approach (which typically needs several tens of iterations) and the reduced-space approach (which needs eight to ten iterations). In the present experiments, we kept the same number of iterations (2 outer loops of ten iterations each) in the two experiments, in order to compare the results strictly. But a look at the cost function makes clear that the minimum is quickly reached by experiment . Considering the low number of degrees of freedom, the computational cost can thus be divided by a factor of 4 or 5 between the two methods.
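The effect of this dimension reduction on the minimization can be sketched numerically. The short Python example below (not from the paper) minimizes a preconditioned incremental cost function of the generic form J(w) = 0.5 w'w + 0.5 (Gw - d)' R^-1 (Gw - d), where the control vector w lives in the EOF space and G stands for the linearized observation operator composed with the model and the EOF basis; the operator G, the 30-EOF dimension and the observation error variance used here are illustrative assumptions only.

```python
import numpy as np
from scipy.optimize import minimize

def reduced_cost(w, G, d, r_inv):
    """Preconditioned incremental cost in EOF space, and its gradient.

    w     : control vector in EOF space (size r)
    G     : linearized observation operator times model times EOF basis (n_obs x r)
    d     : innovation vector (n_obs)
    r_inv : inverse observation error variances (n_obs)
    """
    misfit = G @ w - d
    cost = 0.5 * w @ w + 0.5 * np.sum(r_inv * misfit ** 2)
    grad = w + G.T @ (r_inv * misfit)
    return cost, grad

# Toy illustration: a control space of 30 EOFs against 2000 observations.
rng = np.random.default_rng(1)
G = rng.standard_normal((2000, 30))
d = rng.standard_normal(2000)
r_inv = np.full(2000, 1.0 / 0.25)          # assumed observation error variance of 0.25
res = minimize(reduced_cost, np.zeros(30), args=(G, d, r_inv),
               jac=True, method="L-BFGS-B", options={"maxiter": 20})
# The model-space increment would then be obtained as x_b + S @ res.x,
# where S maps the EOF coefficients back to the full state.
```

Because the control space has only a few tens of dimensions, a quasi-Newton method such as L-BFGS typically converges in a handful of iterations, which is consistent with the behaviour reported for the reduced-space experiment.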
This paper presents a reduced-space approach for 4D-Var data assimilation. A new control space of low dimension is defined, in which the minimization is performed. An illustration of the method is given in the case of twin experiments with a primitive equation model of the equatorial Pacific Ocean. This method presents two important features, which make the assimilation algorithm effective. First, the background error covariance matrix is built using statistical information (an EOF analysis) on a previous model run. This introduces relevant additional information into the assimilation process and makes naturally multivariate, while providing an analytical multivariate model for is still challenging. This improves the identification of the solution, both on observed and non-observed variables, and at all depths in the model. Secondly, the reduction of the dimension of the control space limits the number of iterations for the minimization, which results in a decrease of the computational cost by roughly one order of magnitude. However, the results presented in this work are only a first (but necessary) step, since they concern twin experiments. They need of course to be confirmed by additional experiments in other contexts, in particular experiments with real data and in other geographical areas. As a matter of fact, the efficiency of the method is closely related to the fact that the reduced basis does contain pertinent information on the variability of the true system. That is why, in the context of real observations (i.e. in the case of an imperfect model), the control space should probably not be limited to model-based variability. Therefore, one can imagine either computing EOFs from the results of a previous data assimilation using, for example, the full-space 4D-Var (Durbiano 2001), and/or improving the assimilation results by performing a few full-space iterations at the end of the reduced-space minimization (Hoteit _et al._ 2003). Several other ideas can be considered to extend the present methodology to a fully realistic context, and some of them are presently under investigation in our group. Concerning the definition of the reduced basis, one could think of its evolutivity and adaptivity, as in some sequential assimilation methods (Brasseur _et al._ 1999; Hoang _et al._ 2001). Moreover, a major source of difficulty (common to all data assimilation methods) is our insufficient knowledge (and therefore parameterization) of the model error. Recent works have addressed this problem in the context of variational methods and intend to model and control this error (e.g. D'Andrea and Vautard 2001; Durbiano 2001; Vidard 2001). Such a control could probably be performed in a reduced-order context and would efficiently complement the present method.
+ *Acknowledgments* The authors would like to thank Anthony Weaver and Arthur Vidard for numerous helpful discussions. A. Weaver provided the OPAVAR package and helped us using it. Laurent Parent helped in the configuration of the numerical experiments. This work has been supported by the French project MERCATOR for operational oceanography. IDOPT is a joint CNRS-INPG-INRIA-UJF research project.
+ *References*
Bennett, A. F., 1992: Inverse methods in physical oceanography. _Cambridge Monographs on Mechanics and Applied Mathematics_. Cambridge University Press.
Blayo, E., Blum, J. and Verron, J., 1998: Assimilation variationnelle de données en océanographie et réduction de la dimension de l'espace de contrôle. pp. 199-219 in _Équations aux dérivées partielles et applications_. Gauthier-Villars.
Blum, J., Luong, B. and Verron, J., 1998: Variational assimilation of altimeter data into a non-linear ocean model: temporal strategies. _ESAIM Proceedings_, *4*, 21-57.
Brasseur, P., Ballabrera-Poy, J. and Verron, J., 1999: Assimilation of altimetric data in the mid-latitude oceans using the singular evolutive extended Kalman filter with an eddy-resolving primitive equation model. , *22*, 269-294.
Buehner, M., Brunet, G. and Gauthier, P., 1999: Empirical orthogonal functions for modeling 3D-Var forecast error statistics. pp. 324-327 in Proceedings of the Third WMO International Symposium on Assimilation of Observations in Meteorology and Oceanography, 7-11 June 1999, Quebec City, Canada.
Gauthier, P., Buehner, M. and Fillion, L., 1998: Background-error statistics modelling in a 3D variational data assimilation scheme: estimation and impact on the analyses. pp. 131-145 in Proceedings of the ECMWF Workshop on the Diagnostics of Assimilation Systems, 2-4 November 1998, Reading, U.K.
Cane, M. A., Kaplan, A., Miller, R. N., Tang, B., Hackert, E. C. and Busalacchi, A. J., 1996: Mapping tropical Pacific sea level: data assimilation via a reduced state Kalman filter. _J. Geophys. Res._, *101*, 22599-22617.
Courtier, P., Thépaut, J.-N. and Hollingsworth, A., 1994: A strategy for operational implementation of 4D-Var, using an incremental approach. _Q. J. R. Meteorol. Soc._, *120*, 1367-1388.
D'Andrea, F. and Vautard, R., 2001: Reducing systematic errors by empirically correcting model errors. _Tellus_, *52*, 21-41.
Devenon, J.-L., Dekeyser, I., Leredde, Y. and Lellouche, J.-M., 2001: Data assimilation method by a variational methodology using the adjoint of a 3-D coastal circulation primitive equation model. _Oceanol. Acta_, *24*, 395-407.
Dewitte, B., Reverdin, G. and Maes, C., 1998: Vertical structures of an OGCM simulation of the equatorial Pacific Ocean in 1985-1994. _J. Phys. Oceanogr._, *29*, 1542-1570.
Durbiano, S., 2001: Vecteurs caractéristiques de modèles océaniques pour la réduction d'ordre en assimilation de données. PhD thesis, University of Grenoble.
Faugeras, B., Levy, M., Memery, L., Verron, J., Blum, J. and Charpentier, I., 2003: Can biogeochemical fluxes be recovered from nitrate and chlorophyll data? A case study assimilating data in the northwestern Mediterranean Sea at the JGOFS-DYFAMED station. _J. Mar. Syst._, *40-41*, 99-125.
Gilbert, J.-C. and Lemaréchal, C., 1989: Some numerical experiments with variable storage quasi-Newton algorithms. _Mathematical Programming_, *45*, 407-435.
Greiner, E., Arnault, S. and Morlière, A., 1998: Twelve monthly experiments of 4D-variational assimilation in the tropical Atlantic during 1987. Part 1: Method and statistical results. , *41*, 141-202.
Greiner, E. and Arnault, S., 2000: Comparing the results of a 4D-variational assimilation of satellite and in situ data with WOCE CITHER hydrographic measurements in the tropical Atlantic. , *47*, 168.
Hoang, H. S., Baraille, R. and Talagrand, O., 2001: On the design of a stable adaptive filter for state estimation in high dimensional systems. _Automatica_, *37*, 341-359.
Hoteit, I., Köhl, A., Stammer, D. and Heimbach, P., 2003: A reduced-order optimization strategy for four-dimensional variational data assimilation. In Proceedings of the EGS-AGU-EUG Joint Assembly, 6-11 April 2003, Nice, France.
Ide, K., Courtier, P., Ghil, M. and Lorenc, A. C., 1997: Unified notation for data assimilation: operational, sequential and variational. _J. Meteorol. Soc. Jpn._, *75*, 181-189.
Lawson, L. M., Spitz, Y. H., Hofman, B. E. and Long, R. B., 1995: A data assimilation technique applied to a predator-prey model. , *57*, 593-617.
Legras, B. and Vautard, R., 1995: A guide to Liapunov vectors. pp. 143-156 in Proceedings of the ECMWF Seminar on Predictability.
Lellouche, J.-M., Ouberdous, M. and Eifler, W., 2000: 4D-Var data assimilation system for a coupled physical-biological model. _Earth and Planetary Science_, *109*, 491-502.
Lermusiaux, P. J. F. and Robinson, A. R., 1999: Data assimilation via error subspace statistical estimation. Part I: Theory and schemes. , *127*, 1385-1407.
Leredde, Y., Lellouche, J.-M., Devenon, J.-L. and Dekeyser, I., 1998: On initial, boundary conditions, and viscosity coefficient control for Burgers' equation. _Int. J. Numer. Meth. Fluids_, *28*, 113-128.
Lions, J.-L., Temam, R. and Wang, S., 1992: On the equations of the large-scale ocean. _Nonlinearity_, *5*, 1007-1053.
Luong, B., Blum, J. and Verron, J., 1998: A variational method for the resolution of a data assimilation problem in oceanography. _Inverse Problems_, *14*, 979-997.
Madec, G., Delecluse, P., Imbard, M. and Levy, C., 1999: OPA release 8.1, Ocean General Circulation Model reference manual. _Internal report, LODYC/IPSL_, France.
Moore, A. M., 1991: Data assimilation in a quasigeostrophic open-ocean model of the Gulf Stream region using the adjoint model. _J. Phys. Oceanogr._, *21*, 398-427.
Mu, M., 2000: Nonlinear singular vectors and nonlinear singular values. _Science in China_, *43(D)*, 375-385.
Pham, D. T., Verron, J. and Roubaud, M.-C., 1998: A singular evolutive extended Kalman filter for data assimilation in oceanography. _J. Mar. Syst._, *16*, 323-340.
Rabier, F., Järvinen, H., Klinker, E., Mahfouf, J.-F. and Simmons, A., 2000: The ECMWF operational implementation of 4D-Var assimilation. Part I: Experimental results with simplified physics. _Q. J. R. Meteorol. Soc._, *126*, 1143-1170.
Ricci, S., Weaver, A. T., Vialard, J. and Rogel, P., 2005: Incorporating state-dependent temperature-salinity constraints in the background error covariance of variational ocean data assimilation. , *133*, 317-338.
Schröter, J., Seiler, U. and Wenzel, M., 1993: Variational assimilation of Geosat data into an eddy-resolving model of the Gulf Stream area. _J. Phys. Oceanogr._, *23*, 925-953.
Spitz, Y. H., Moisan, J. R., Abbott, M. R. and Richman, J. G., 1998: Data assimilation and a pelagic ecosystem model: parameterization using time series observations. _J. Mar. Syst._, *16*, 51-68.
Toth, Z. and Kalnay, E., 1997: Ensemble forecasting at NCEP: the breeding method. _Mon. Weather Rev._, *125*, 3297-3318.
Veersé, F. and Thépaut, J.-N., 1998: Multiple-truncation incremental approach for four-dimensional variational data assimilation. _Q. J. R. Meteorol. Soc._, *125*, 1889-1908.
Vialard, J., Menkes, C., Boulanger, J.-P., Delecluse, P., Guilyardi, E., McPhaden, M. J. and Madec, G., 2001: A model study of oceanic mechanisms affecting equatorial Pacific sea surface temperature during the 1997-98 El Niño. _J. Phys. Oceanogr._, *31*, 1649-1675.
Vialard, J., Weaver, A. T., Anderson, D. L. T. and Delecluse, P., 2003: Three- and four-dimensional variational assimilation with a general circulation model of the tropical Pacific Ocean. Part II: Physical validation. _Mon. Weather Rev._, *131*, 1379-1395.
Vidard, P., 2001: Vers une prise en compte des erreurs modèle en assimilation de données 4D variationnelle. Application à un modèle d'océan. PhD thesis, University of Grenoble.
Weaver, A. T. and Courtier, P., 2001: Correlation modelling on the sphere using a generalized diffusion equation. _Q. J. R. Meteorol. Soc._, *127*, 1815-1846.
Weaver, A. T., Vialard, J., Anderson, D. L. T. and Delecluse, P., 2003: Three- and four-dimensional variational assimilation with a general circulation model of the tropical Pacific Ocean. Part I: Formulation, internal diagnostics and consistency checks. _Mon. Weather Rev._, *131*, 1360-1378.
Wenzel, M. and Schröter, J., 1999: 4D-Var data assimilation into the LSG OGCM using integral constraints. pp. 141-144 in Proceedings of the Third WMO International Symposium on Assimilation of Observations in Meteorology and Oceanography, 7-11 June 1999, Quebec City, Canada.
This paper presents a reduced-order approach for four-dimensional variational data assimilation, based on a prior EOF analysis of a model trajectory. This method has two main advantages: a natural, model-based definition of a multivariate background error covariance matrix, and an important decrease of the computational burden of the method, due to the drastic reduction of the dimension of the control space. An illustration of the feasibility and effectiveness of this method is given in the academic framework of twin experiments for a model of the equatorial Pacific Ocean. It is shown that the multivariate aspect of brings additional information which substantially improves the identification procedure. Moreover, the computational cost can be decreased by one order of magnitude with respect to the full-space 4D-Var method.